- Make sender send more data per iteration.
- Make receiver iterate more often while receiving.
Before this commit, tox would send at most around 4 MiB/s. With this
patch, sustained speeds of up to 100 MiB/s were observed on a
low-latency, high-bandwidth network.
As a consequence of iterating more frequently, the receiver's CPU usage
is increased for the duration of the transfer. The data structures used
to represent friends and file transfers cause the sender code to use
costly loops that do little real work. This patch makes that problem
more visible: the sender uses more CPU while sending.
Poor network conditions were simulated using the netem kernel facility:

    $ tc qdisc add dev lo root netem delay 100ms 50ms \
          loss 1% duplicate 1% corrupt 1% reorder 25% 50%

and no adverse behavior was encountered. Tests were conducted with toxic
over both UDP and TCP.
One of these allocations created a single 262144-byte stack frame. We now have
a way to check and limit the allocation size of a VLA. The `Cmp_Data`
ones were also fairly large. Now, no allocation is larger than 2KiB
(though rtp.c allocates close to that much).
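Roughly, the size check works like this (`CHECKED_VLA` and `MAX_VLA_SIZE`
are illustrative names, not the actual macros in toxcore):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

// Illustrative cap; the limit actually chosen may differ.
#define MAX_VLA_SIZE 2048

// Declare a VLA, but assert that its total size is positive and stays
// under the cap, so a bad length can never create a huge stack frame.
#define CHECKED_VLA(type, name, count)                                     \
    assert((count) > 0 && (size_t)(count) * sizeof(type) <= MAX_VLA_SIZE); \
    type name[(count)]

static uint32_t checksum(const uint8_t *data, uint16_t length)
{
    CHECKED_VLA(uint8_t, copy, length);  // at most 2 KiB of stack
    uint32_t sum = 0;
    for (uint16_t i = 0; i < length; ++i) {
        copy[i] = data[i];
        sum += copy[i];
    }
    return sum;
}
```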
Instead of synchronously handling events as they happen in
`tox_iterate`, this first collects all events in a structure and then
lets the client process them. This allows clients to process events in
parallel, since the data structure returned is mostly immutable.
This also makes toxcore compatible with languages that don't (easily)
support callbacks from C into the non-C language.
If we remove the callbacks, we can add fields to the events without
breaking the API.
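The usage pattern looks roughly like this (the header path and the
`tox_events_*` names are assumptions and may not match the final API
exactly):

```c
#include <stdbool.h>
#include <stddef.h>

#include <tox/tox.h>
#include <tox/tox_events.h>  // header name assumed

// The client walks the (mostly immutable) event structure here, possibly
// on another thread or from a non-C language binding.
static void process_events(const Tox_Events *events)
{
    (void)events;
}

static void run_one_iteration(Tox *tox)
{
    // Collect every event that occurred during this iteration...
    Tox_Events *events = tox_events_iterate(tox, false, NULL);

    if (events != NULL) {
        // ...then hand the whole batch to the client at once.
        process_events(events);
        tox_events_free(events);
    }
}
```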
Also added a whole bunch of logging that I needed while debugging the
issue. In the end, the problem is that bootstrap needs to resolve
addresses, and getaddrinfo fails in the browser. Most of the time we
bootstrap against IP addresses anyway, so trying to parse the address as
an IP first sidesteps the failing lookup in those cases.
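A sketch of that shortcut, with illustrative names rather than the
actual toxcore functions:

```c
#include <arpa/inet.h>
#include <netdb.h>
#include <netinet/in.h>
#include <stdbool.h>
#include <string.h>
#include <sys/socket.h>

// Hypothetical helper: try to parse a literal IP before doing DNS.
static bool resolve_bootstrap_address(const char *address, struct sockaddr_storage *out)
{
    memset(out, 0, sizeof(*out));

    // Literal IPv4/IPv6 addresses need no DNS lookup, so getaddrinfo
    // (which fails in the browser) is never reached for them.
    struct sockaddr_in *in4 = (struct sockaddr_in *)out;
    if (inet_pton(AF_INET, address, &in4->sin_addr) == 1) {
        in4->sin_family = AF_INET;
        return true;
    }

    struct sockaddr_in6 *in6 = (struct sockaddr_in6 *)out;
    if (inet_pton(AF_INET6, address, &in6->sin6_addr) == 1) {
        in6->sin6_family = AF_INET6;
        return true;
    }

    // Only real hostnames fall through to a DNS lookup.
    struct addrinfo hints = {0};
    hints.ai_family = AF_UNSPEC;
    struct addrinfo *res = NULL;

    if (getaddrinfo(address, NULL, &hints, &res) != 0 || res == NULL) {
        return false;
    }

    memcpy(out, res->ai_addr, res->ai_addrlen);
    freeaddrinfo(res);
    return true;
}
```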
- Use one node list and public bootstrap function for all autotests
- Use ifdefs for testnet/mainnet nodes (sketched below)
- Replace a few broken nodes with working ones
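Roughly what the shared list looks like; the struct, the node entries and
the `USE_TEST_NETWORK` define are all illustrative, not the real test
support code:

```c
#include <stdint.h>

typedef struct BootstrapNode {
    const char *ip;
    uint16_t port;
    const char *public_key_hex;
} BootstrapNode;

// One list for all autotests; the network is picked at compile time.
static const BootstrapNode bootstrap_nodes[] = {
#ifdef USE_TEST_NETWORK
    // testnet nodes (placeholder entries)
    {"203.0.113.1", 33445, "0123...ABCD"},
#else
    // mainnet nodes (placeholder entries)
    {"198.51.100.1", 33445, "4567...EF01"},
#endif
};
```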
We have a more portable wrapper that is now also thread-safe. Also
stopped using sprintf in the one place we used it. This doesn't really
help much, but it allows us to forbid sprintf globally.
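For illustration, the bounded replacement pattern looks like this
(`format_endpoint` is a made-up example, not a real toxcore function):

```c
#include <stdint.h>
#include <stdio.h>

// snprintf never writes more than buf_size bytes and NUL-terminates the
// result (for buf_size > 0), which is what makes a global sprintf ban
// practical to enforce.
static void format_endpoint(char *buf, size_t buf_size, const char *ip, uint16_t port)
{
    snprintf(buf, buf_size, "%s:%u", ip, (unsigned int)port);
}
```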
Currently: 1) libsodium and 2) nacl.
Note that the "nacl" variant is actually libsodium. We just want to make
sure the static analysers see the `VANILLA_NACL` code paths.
Also added a valgrind build to run it on every pull request. I've had to
disable a few tests because valgrind makes them run so slowly that they
consistently time out.
Apidsl is not powerful enough to express all the things we need and
doesn't know how `#include` works. The generated headers are more complex
than they should be.
This aligns the autotools build with the cmake build, which doesn't have
a config.h file. It also removes the ambiguity between config.h and
other/bootstrap_daemon/src/config.h.
We used to have lots of these in the code, but now that all the endian
stuff is no longer dependent on host byte order, we can re-enable the
warning flag and catch any future violations.
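For illustration, host-order-independent packing looks like this (the
helper names are made up here, not the real ones):

```c
#include <stdint.h>

// Writing bytes out explicitly yields the same wire format on both
// big- and little-endian hosts, with no htonl/ntohl or pointer casts.
static void put_uint32_be(uint8_t *dest, uint32_t value)
{
    dest[0] = (uint8_t)(value >> 24);
    dest[1] = (uint8_t)(value >> 16);
    dest[2] = (uint8_t)(value >> 8);
    dest[3] = (uint8_t)value;
}

static uint32_t get_uint32_be(const uint8_t *src)
{
    return ((uint32_t)src[0] << 24) | ((uint32_t)src[1] << 16)
           | ((uint32_t)src[2] << 8) | (uint32_t)src[3];
}
```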
The test explicitly wanted a UDP connection when a TCP connection would suffice. This
was a remnant of back when the test was part of a multi-purpose autotest that
didn't attempt to connect to TCP relays and needed a UDP connection specifically.
* Use-after-free because we free network before dht in one case.
* Various unchecked allocs in tests (not so important).
* We used to not check whether ping arrays were actually allocated in DHT.
* `ping_kill` and `ping_array_kill` used to crash when passed NULL (the
  NULL-tolerant pattern is sketched after this list).
Also:
* Added an assert in all public API functions to ensure tox isn't NULL.
The error message you get from that is a bit nicer than "Segmentation
fault" when clients (or our tests) do things wrong.
* Decreased the sleep time in iterate_all_wait from 20ms to 5ms.
Everything seems to still work with 5ms, and this greatly decreases
the amount of time spent per test run, making oomer run much faster.
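For illustration, the NULL-tolerant kill function and the public-API
assert look roughly like this (the names are stand-ins, not the real
toxcore functions):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct Example_Array Example_Array;
typedef struct Tox Tox;

// Kill/free functions now tolerate NULL, matching the behavior of free().
static void example_array_kill(Example_Array *array)
{
    if (array == NULL) {
        return;
    }
    // ... release the array's entries here, then the array itself ...
    free(array);
}

// Public API functions assert on a NULL tox, so misuse fails with a clear
// assertion message instead of a bare "Segmentation fault".
static uint32_t example_api_iteration_interval(const Tox *tox)
{
    assert(tox != NULL);
    // ... real work would go here ...
    return 50;
}
```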