Every caller of this function has to allocate the same buffer. None of
the callers uses a differently sized buffer, so we might as well put the
buffer in a struct and have the type checker prove its size is correct.
Also rename `ip_ntoa` to `net_ip_ntoa` to avoid clashes with ESP-IDF
system libraries which define this function as well.
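A minimal sketch of the pattern (the struct name and buffer size here
are illustrative, not necessarily toxcore's exact definitions):

    /* The buffer lives in a struct, so every caller declares `Ip_Ntoa str;`
     * and the compiler proves the size is right. */
    #define IP_NTOA_LEN 96  /* assumed: enough for any IPv6 string + NUL */

    typedef struct Ip_Ntoa {
        char buf[IP_NTOA_LEN];
    } Ip_Ntoa;

    const char *net_ip_ntoa(const IP *ip, Ip_Ntoa *ip_str);

    /* Before: char buf[96]; ip_ntoa(ip, buf, sizeof(buf)); */
    /* After:  Ip_Ntoa str;  printf("%s\n", net_ip_ntoa(ip, &str)); */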
This is the "server-side" part of the new friend finding system,
allowing DHT nodes to store small amounts of data and permit searching
for it. A forwarding (proxying) mechanism allows this to be used by TCP
clients, and deals with non-transitivity in the network.
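Conceptually, each DHT node gains a small keyed store that peers can
write to and later search; a hypothetical sketch of one stored entry
(all names and sizes here are illustrative):

    #define MAX_ANNOUNCEMENT_SIZE 512  /* assumed cap on stored data */

    /* One slot in a DHT node's announcement store. Entries expire, so a
     * node only ever holds a bounded amount of other people's data. */
    typedef struct Announce_Entry {
        uint8_t  key[32];     /* public key the data is stored under */
        uint8_t  data[MAX_ANNOUNCEMENT_SIZE];
        uint16_t length;
        uint64_t stored_at;   /* timestamp used for expiry */
    } Announce_Entry;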
The idea here is to have a `Network` object that contains functions for
network operations and an optional userdata object that can manage those
network operations. This allows e.g. a fuzzer to replace the network
functions with no-ops or fuzzer inputs, reducing the need for `#ifdef`s.
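The shape of it, roughly (the member names here are illustrative; the
real struct covers more operations):

    /* A vtable of network operations plus opaque userdata. Production code
     * installs wrappers around the real socket API; a fuzzer installs
     * functions that return fuzzer input or do nothing. */
    typedef int net_send_cb(void *obj, int sock, const uint8_t *buf, size_t len);
    typedef int net_recv_cb(void *obj, int sock, uint8_t *buf, size_t len);

    typedef struct Network {
        net_send_cb *send;
        net_recv_cb *recv;
        void *obj;  /* userdata passed to every operation */
    } Network;

    /* Call sites go through the table instead of calling sendto() directly:
     * net->send(net->obj, sock, packet, length); */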
We were leaving a self_connection_status callback attached to tox1 while
reloading tox2 and tox3, the only other toxes in the network. This led
to tox1 occasionally going offline, triggering the "Tox went offline"
assert and failing the test.
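The fix is to detach the callback before reloading the others, roughly:

    /* Registering NULL removes the callback, so tox1's status changes
     * during the reload no longer trip the assert. */
    tox_callback_self_connection_status(tox1, NULL);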
As a side effect, the DHT now always accepts LAN discovery packets, even
when LAN discovery is disabled; when it is disabled, those packets are
simply ignored.
These don't test anything that isn't covered by higher level tox tests.
These are also not unit tests and have never found any bug that wasn't
also caught by other tests. This makes them a pure maintenance burden.
These were found by the new, stronger type check in cimple. The one
bugfix is in `crypto_sha512_cmp`, which assumed `crypto_verify_32`
returns a bool, while it actually returns -1/0/1.
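The bug pattern, sketched (the exact toxcore code differs):

    /* crypto_verify_32 returns -1/0/1 like memcmp, where 0 means "equal".
     * Using the result as a bool inverts the test: "different" (nonzero)
     * is truthy and "equal" (0) is falsy. */
    bool crypto_sha512_cmp(const uint8_t *cmp1, const uint8_t *cmp2)
    {
        /* Wrong: return crypto_verify_32(cmp1, cmp2) && ...; */
        /* Right: explicitly compare against 0. */
        return crypto_verify_32(cmp1, cmp2) == 0
               && crypto_verify_32(cmp1 + 32, cmp2 + 32) == 0;
    }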
This uses mallocfail to further increase coverage using the existing
tests. Also:
* Moved the non-auto "tox_one_test" to gtest. This should be
split into smaller tests later.
* Changed `hole_punching` to `bool`.
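mallocfail interposes on malloc and forces failures at chosen call
sites, so every allocation-failure branch gets executed. The idea in
miniature (a hand-rolled sketch, not mallocfail itself):

    #include <stdlib.h>

    static int alloc_countdown = -1;  /* -1 means never fail */

    /* Fail the Nth allocation to drive the error path behind it. */
    void *test_malloc(size_t size)
    {
        if (alloc_countdown == 0) {
            return NULL;
        }
        if (alloc_countdown > 0) {
            --alloc_countdown;
        }
        return malloc(size);
    }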
- Make sender send more data per iteration.
- Make receiver iterate more often while receiving.
Before this commit, tox would send at most around 4 MiB/s. With this
patch, sustained speeds of up to 100 MiB/s were observed on a
low-latency, high-bandwidth network.
As a consequence of iterating more frequently the receiver's CPU usage
is increased for the duration of the transfer. The data structures
used to represent friends and file transfers cause the sender code to
use costly loops that do little real work. This patch makes this problem
more visible: the sender uses more CPU while sending.
Poor network conditions were simulated using the netem kernel facility:

    $ tc qdisc add dev lo root netem delay 100ms 50ms \
        loss 1% duplicate 1% corrupt 1% reorder 25% 50%

No adverse behavior was encountered. Tests were conducted with toxic
over both UDP and TCP.
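On the receiving side, the change boils down to pumping the event loop
more often while a transfer is in flight, along these lines
(illustrative; the real change lives inside toxcore's iteration logic,
and `transfer_active` is a stand-in):

    /* During an active transfer, iterate more often than the idle
     * tox_iteration_interval() would suggest. */
    while (transfer_active) {
        tox_iterate(tox, user_data);
        usleep(5 * 1000);  /* ~5 ms instead of the usual ~50 ms */
    }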
One of these was creating a single 262144-byte stack frame. We now have
a way to check and limit the allocation size of a VLA. The `Cmp_Data`
ones were also fairly large. Now, no allocation is larger than 2KiB
(though rtp.c allocates close to that much).
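One way to express such a check, as a sketch (toxcore's actual macro may
differ):

    #include <assert.h>
    #include <stddef.h>

    #define MAX_VLA_SIZE 2048  /* assumed cap, matching the 2KiB bound above */

    /* Assert the VLA's total size before declaring it, so a bad length
     * can't blow the stack. Must be used at statement position, since
     * it expands to two statements. */
    #define CHECKED_VLA(type, name, size)                              \
        assert((size_t)(size) * sizeof(type) <= MAX_VLA_SIZE);         \
        type name[size]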
Instead of synchronously handling events as they happen in
`tox_iterate`, this first collects all events in a structure and then
lets the client process them. This allows clients to process events in
parallel, since the data structure returned is mostly immutable.
This also makes toxcore compatible with languages that don't (easily)
support callbacks from C into the non-C language.
If we remove the callbacks, this allows us to add fields to the events
without breaking the API.
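Client code then looks roughly like this (assuming accessor names in the
style of the new events API):

    /* Collect everything that happened in this iteration... */
    Tox_Events *events = tox_events_iterate(tox, /* fail_hard */ true, NULL);

    /* ...then process it, possibly on other threads: the returned
     * structure is mostly immutable. */
    for (uint32_t i = 0; i < tox_events_get_friend_message_size(events); ++i) {
        const Tox_Event_Friend_Message *msg =
            tox_events_get_friend_message(events, i);
        handle_message(msg);  /* handle_message is the client's own code */
    }

    tox_events_free(events);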