If the `recvbuf` network function always returns 0, there is never any
data available on the TCP socket. This change makes a random amount of
data available on the TCP socket instead.
This invalidates the bootstrap fuzzer corpus.
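As a sketch of what this looks like (the `Socket` stand-in, the
callback signature, and the `fuzz_random_below` helper are assumptions
here, not the actual fuzzer code):

```c
#include <stdint.h>

typedef struct Socket { int sock; } Socket;  /* stand-in for the real type */

/* Hypothetical fuzzer helper: uniform value in [0, bound). */
uint32_t fuzz_random_below(uint32_t bound);

/* Report a random amount of pending data instead of always 0, so the
 * TCP code paths that read queued bytes actually get exercised. */
static int fuzz_recvbuf(void *obj, Socket sock)
{
    (void)obj;
    (void)sock;
    return (int)fuzz_random_below(2048);
}
```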
This is a more sensible module for these functions to live in. Now,
`util` no longer depends on `crypto_core` and can thus potentially be
used in `crypto_core` in the future (functions like min/max may be
useful there).
This can't happen in practice unless the RNG is badly broken and fails
to add 2 fake DHT friends on startup. Still, this code should be
defensive and never index outside `num_friends` elements.
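A sketch of the defensive shape (the `DHT` field names and the
simplified `DHT_Friend` here are assumptions):

```c
#include <stddef.h>
#include <stdint.h>

typedef struct DHT_Friend {
    uint8_t public_key[32];  /* simplified stand-in */
} DHT_Friend;

typedef struct DHT {
    DHT_Friend *friends_list;  /* field names are assumptions */
    uint32_t num_friends;
} DHT;

/* Never index outside num_friends elements, even if a broken RNG left
 * the friend list without its usual 2 fake entries. */
static const DHT_Friend *get_friend_checked(const DHT *dht, uint32_t index)
{
    if (dht->num_friends == 0) {
        return NULL;  /* nothing to pick from */
    }
    return &dht->friends_list[index % dht->num_friends];
}
```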
Right now it only gets built from the first 2 friends in the DHT friend
list: either friend 0 and then 1 or friend 1 and then 0. The randomness
in this code doesn't make sense unless the intention was to select from
all friends, which the code will now do.
Also: select the friends using a uniform random distribution rather
than a modulus, which is only uniform for power-of-2 ranges.
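A sketch of bias-free selection via rejection sampling; `random_u32`
stands in for whatever uniform 32-bit source the RNG provides (the
actual toxcore helper may differ):

```c
#include <stdint.h>

typedef struct Random Random;
uint32_t random_u32(const Random *rng);  /* assumed: uniform 32-bit output */

/* Uniform value in [0, bound) for any bound > 0.  Plain
 * random_u32(rng) % bound is only uniform when bound is a power of 2,
 * because 2^32 % bound != 0 otherwise. */
static uint32_t random_below(const Random *rng, uint32_t bound)
{
    /* Largest multiple of bound that fits in a uint32_t; rejecting raw
     * values at or above it guarantees every residue is equally likely. */
    const uint32_t limit = UINT32_MAX - (UINT32_MAX % bound);
    uint32_t value;
    do {
        value = random_u32(rng);
    } while (value >= limit);
    return value % bound;
}
```

Selecting a friend is then `random_below(rng, num_friends)`, which is
uniform for any list length, not just powers of 2.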
Ideally this would be able to reach some of the events, so we can write
code to respond to those events, but so far only the friend request
event actually happens.
This isn't in production yet. It's in the new announce store code. The
problem was that a negative `plain_len` was converted to unsigned,
which made it a very large number.
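A self-contained illustration of the bug class (the names are made up;
this is not the actual announce store code):

```c
#include <stddef.h>
#include <stdint.h>

/* Stand-in for a decryption call that returns the plaintext length,
 * or a negative value on failure. */
static int32_t decrypt_stub(void) { return -1; }

static int handle_packet(void)
{
    const int32_t plain_len = decrypt_stub();

    /* The bug: converting first turns -1 into SIZE_MAX (that's
     * 18446744073709551615 on 64-bit), which then sails past any
     * "len <= MAX" check.  The fix: reject the error while signed. */
    if (plain_len < 0) {
        return -1;
    }

    const size_t len = (size_t)plain_len;  /* provably non-negative now */
    return (int)len;
}
```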
`system_random()` can fail and return NULL, which the toxencryptsave
functions should handle.
Also synced the function comments between the .h and .c files for
toxencryptsave.
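The pattern, roughly (the wrapper name is illustrative, and the exact
toxencryptsave error reporting is not shown):

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct Random Random;
const Random *system_random(void);  /* may return NULL, per the above */

/* Every toxencryptsave entry point that needs randomness should fail
 * cleanly when the OS RNG is unavailable. */
static bool encrypt_with_system_rng(void)
{
    const Random *rng = system_random();
    if (rng == NULL) {
        return false;  /* report failure instead of dereferencing NULL */
    }
    /* ... derive keys and encrypt using rng ... */
    return true;
}
```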
Every use of this function needs to allocate the same buffer. None of
the callers uses a differently sized buffer, so we might as well put it
in a struct and have the type checker prove the buffer size is correct.
Also rename `ip_ntoa` to `net_ip_ntoa` to avoid clashes with ESP-IDF
system libraries, which define this function as well.
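A sketch of the struct-wrapped buffer (the buffer capacity here is an
assumption):

```c
typedef struct IP IP;  /* opaque here; the real definition is elsewhere */

/* With a bare (char *buf, size_t len) pair, every caller could pass a
 * different size.  Wrapping the buffer in a struct pins the size at
 * the type level, so the compiler proves it for every call site. */
typedef struct Ip_Ntoa {
    char buf[96];  /* capacity is an assumption in this sketch */
} Ip_Ntoa;

const char *net_ip_ntoa(const IP *ip, Ip_Ntoa *ip_str);
```

Callers then declare an `Ip_Ntoa` on the stack and pass its address;
handing the function a wrongly sized buffer becomes a type error.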
This is the "server-side" part of the new friend finding system,
allowing DHT nodes to store small amounts of data and permit searching
for it. A forwarding (proxying) mechanism allows this to be used by TCP
clients, and deals with non-transitivity in the network.
Nothing in the cmake build checks whether these layers are actually
observed. The bazel build does check this, so there's no need to
duplicate the layering documentation in the cmake build, where it would
just go out of date.
The idea here is to have a `Network` object that contains functions for
network operations and an optional userdata object that can manage those
network operations. This allows e.g. a fuzzer to replace the network
functions with no-ops or fuzzer inputs, reducing the need for `#ifdef`s.
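Roughly this shape (member names and signatures are assumptions, not
the exact toxcore definitions):

```c
#include <stddef.h>
#include <stdint.h>

typedef struct Socket { int sock; } Socket;  /* stand-in for the real type */

/* Table of network operations.  Production code points these at real
 * socket calls; tests and fuzzers substitute their own. */
typedef struct Network_Funcs {
    int (*recv)(void *obj, Socket sock, uint8_t *buf, size_t len);
    int (*send)(void *obj, Socket sock, const uint8_t *buf, size_t len);
} Network_Funcs;

typedef struct Network {
    const Network_Funcs *funcs;
    void *obj;  /* optional userdata threaded through every call */
} Network;
```

A fuzzer then builds a `Network` from its own `Network_Funcs` table
(e.g. a `recv` that returns fuzzer input), and the production code
paths stay free of `#ifdef`s.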
Cimple cannot find these without also flagging false positives, but I
found them with cimple first and then removed the code that was causing
the false positives.
I don't know if this will actually work, or how many of these "fixes"
I need before msan is happy on CI. For me locally, it all works fine;
on CI, for some reason, it's not fine, even though I run in the exact
same docker image as CI.
This is for coverity, which continues to think we're overrunning
buffers even though at this point it's easy to prove we're not. Here is
the corrected coverity finding:
6. Condition packet_length <= 105 /* 1 + 32 * 2 + 24 + 16 */, taking false branch.
7. Condition packet_length > 1024, taking false branch.
Now packet_length must be > 105 and <= 1024.
12. Condition len1 == packet_length - (89 /* 1 + 32 * 2 + 24 */) - 16, taking true branch.
len1 must be > 0 (105 - 89 - 16) and <= 919 (1024 - 89 - 16).
14. decr: Decrementing len1. The value of len1 is now between 0 and 918 (inclusive).
This is where coverity goes wrong: it thinks len1 could be up to 2147483629.
15. buffer access should be OK. Coverity thinks it's not.
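For reference, the checks the trace walks through, written out as a
sketch (the macro and function names are illustrative):

```c
#include <stdint.h>

#define REQUEST_HEADER_SIZE 89  /* 1 + 32 * 2 + 24 */
#define MAC_SIZE 16
#define MAX_REQUEST_SIZE 1024

/* Returns the decremented len1, or -1 on any rejected branch. */
static int32_t checked_len1(int32_t packet_length, int32_t len1)
{
    /* Steps 6 and 7: packet_length must end up in (105, 1024]. */
    if (packet_length <= REQUEST_HEADER_SIZE + MAC_SIZE
            || packet_length > MAX_REQUEST_SIZE) {
        return -1;
    }

    /* Step 12: len1 is pinned to packet_length - 105, i.e. [1, 919]. */
    if (len1 != packet_length - REQUEST_HEADER_SIZE - MAC_SIZE) {
        return -1;
    }

    /* Step 14: after the decrement, len1 is in [0, 918], so the buffer
     * access at step 15 cannot overrun. */
    --len1;
    return len1;
}
```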
Nothing very noteworthy; I just came across this and made it slightly
more readable.
I'm not making this function `bool` right now because it's used in NGC
and changing it would break that code.