This deadlock could only be hit by pausing at a key point in a debugger until the call timed out.
Having one thread go up the call stack acquiring locks (toxcore callbacks) while another goes down taking the same locks in the opposite order (CoreAV calling toxav functions) is a classic recipe for deadlock.
The deadlock happened like this: the GUI thread calls into the CoreAV thread, which acquires CoreAV's lock and then, right before calling a toxav function, is not scheduled again until the call times out. At that point the toxcore thread fires its state callback to tell us the call is over, taking internal toxcore/toxav mutexes along the way. It reaches our callback function, which tries to switch to the CoreAV thread to clean up the call data structures, but has to wait, since the CoreAV thread still holds its own lock. If we now resume the CoreAV thread, it is busy calling into a toxav function, which tries to acquire the internal toxav locks; those locks are held by the toxcore callback, so we deadlock.
Our solution is to switch to a temporary thread as soon as we get a toxcore callback, allowing toxcore to release the locks it was holding; that temporary thread then switches to the CoreAV thread to do the work on the call data structures. Meanwhile, if the CoreAV thread needs internal toxcore locks, it can take them.
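
A minimal sketch of the idea, assuming the toxav call-state callback signature; CoreAV::stateCallback and onCallEnded are illustrative names, not the actual qTox symbols:

    #include <thread>

    // Registered with toxav_callback_call_state (static member so it
    // is usable as a plain C callback).
    void CoreAV::stateCallback(ToxAV* av, uint32_t friendNum,
                               uint32_t state, void* vSelf)
    {
        CoreAV* self = static_cast<CoreAV*>(vSelf);
        // Hop to a short-lived thread immediately so this callback
        // returns and toxcore can drop its internal mutexes.
        std::thread([self, friendNum, state]() {
            // By the time this runs, toxcore's locks are free, so a
            // CoreAV thread stuck inside a toxav function can finish;
            // we can safely block here waiting for the CoreAV lock.
            self->onCallEnded(friendNum, state);
        }).detach();
    }

The detached thread costs a little per callback, but it means we never block while inside toxcore's lock scope, which is what breaks the cycle.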
Add static icons for tray menu logout and quit
antis made us an icon for the logout action, and I reused the cancel-call
icon as the quit action in the tray menu
Refactored the icon-loading methods in widget.cpp to make them
more flexible
This should permanently fix #2491
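
A hedged sketch of the shape of that refactor; prepareIcon and the resource paths are illustrative, not the exact widget.cpp code:

    // One helper resolves an icon by name, so callers stop
    // hard-coding per-icon lookup logic.
    QIcon Widget::prepareIcon(const QString& name)
    {
        // Prefer the system icon theme when it provides the icon,
        // and fall back to the bundled static resource otherwise.
        return QIcon::fromTheme(name,
            QIcon(QStringLiteral(":/img/%1.svg").arg(name)));
    }

    // Both tray actions now go through the same helper:
    logoutAction->setIcon(prepareIcon(QStringLiteral("logout")));
    quitAction->setIcon(prepareIcon(QStringLiteral("quit")));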
A call cancel/accept race was locking up both the UI and AV threads, while the stream thread kept shoveling more and more video frames onto the AV thread's event queue.
This error condition only happens when a peer cancels its outgoing call in the middle of us answering it. We can simply ignore the error, and things should fall neatly back into place. Since this race should be pretty rare in normal usage, it's worth leaving a log message, as it might mean we're being fuzzed.
We can progressively replace more of those asserts with fallbacks and log messages, now that everything has been shown to work fine and the race conditions are harmless.
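
A sketch of that fallback pattern, assuming toxav's answer API; answerCall, the bitrate constants, and the calls map are illustrative, not the exact qTox code:

    #include <tox/toxav.h>
    #include <QDebug>

    bool CoreAV::answerCall(uint32_t friendNum)
    {
        TOXAV_ERR_ANSWER err;
        toxav_answer(toxav, friendNum, AUDIO_BITRATE, VIDEO_BITRATE, &err);
        if (err == TOXAV_ERR_ANSWER_FRIEND_NOT_CALLING) {
            // The peer cancelled while we were answering: a harmless
            // race, so log it and drop our half-built call state
            // instead of asserting. A burst of these in the logs
            // could also mean we're being fuzzed.
            qWarning() << "Answer raced with peer's cancel for friend" << friendNum;
            calls.remove(friendNum); // hypothetical per-friend call map
            return false;
        }
        return err == TOXAV_ERR_ANSWER_OK;
    }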
I feel like writing a novel today. Good thing nobody looks at these!