Starting qTox with the -p option will force it to start a new instance with the given profile instead of bringing any existing instance to the foreground
Two instances cannot run with the same profile; the profile locking code ensures that. A user who likes to live dangerously could manually delete the lock to force two instances onto the same profile, but such a hypothetical user would be asking for it.
If a qTox instance starts and becomes owner of the IPC shared memory on its first try, it considers itself the only freshly-started running instance and deletes any possibly stale lock before starting up. This should be fine in the vast majority of cases, but if an existing qTox instance freezes long enough to lose ownership of the IPC and a new instance is started without first killing the frozen one, the frozen instance's lock will be deleted as stale by the new one. If the frozen instance subsequently unfreezes, it will be running on a profile for which it does not hold a lock, which could cause trouble. This is an intentionally allowed edge case, the alternative being a stale lock staying around forever until removed manually. A potential solution, not yet implemented, would be to check that the lock is still actually present before attempting any write.
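A minimal sketch of that potential fix, assuming a hypothetical write helper and lock-file path (illustrative names, not qTox's actual code):

```cpp
#include <QFile>
#include <QSaveFile>
#include <QString>

// Hypothetical guard: refuse to save the profile if our lock file has
// disappeared, e.g. because another instance deleted it as stale while
// we were frozen. lockPath/profilePath are illustrative placeholders.
bool writeProfileIfStillLocked(const QString& lockPath, const QString& profilePath,
                               const QByteArray& data)
{
    if (!QFile::exists(lockPath))
        return false; // lock is gone: another instance may own this profile now

    QSaveFile out(profilePath); // atomic write-then-rename
    if (!out.open(QIODevice::WriteOnly))
        return false;
    out.write(data);
    return out.commit();
}
```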
If we're running on a given profile and reload qtox.ini, and it has a different value for the current profile, we no longer overwrite our current value with whatever qtox.ini says
Previously this caused current-profile confusion when multiple qTox instances were using different profiles but sharing the same qtox.ini
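A rough sketch of the new behaviour, assuming a hypothetical Settings class and ini key (illustrative, not qTox's actual code):

```cpp
#include <QSettings>
#include <QString>

// Illustrative sketch: on reload, keep the profile we are already running on
// instead of adopting whatever the shared qtox.ini currently says.
class Settings
{
public:
    void load(const QString& iniPath)
    {
        QSettings ini(iniPath, QSettings::IniFormat);
        const QString iniProfile = ini.value("General/currentProfile").toString();

        if (currentProfile.isEmpty())
            currentProfile = iniProfile; // first load: take the value from disk
        // On later reloads, currentProfile is left untouched even if another
        // instance changed qtox.ini in the meantime.
    }

private:
    QString currentProfile;
};
```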
When loading an encrypted profile, not entering the password and switching directly to a plaintext profile could have overwritten the plaintext profile with the encrypted one
It would only trigger when multiple instances were running in parallel,
with one having enough privilege to block the other from accessing the shared memory (e.g. root)
We returned a shallow copy of a salt buffer that had already been delete[]'d.
As a result, the history consistently failed to decrypt and was removed as corrupted. This is now fixed.
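A reduced illustration of that class of bug and its fix (not the actual qTox code): the buggy version hands back a pointer to memory that has already been delete[]'d, so the salt read later is garbage and decryption fails.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Buggy pattern: the returned pointer refers to memory freed before the
// caller can use it, so the "salt" read later is garbage.
const uint8_t* getSaltShallow()
{
    uint8_t* salt = new uint8_t[32];
    std::memset(salt, 0, 32);    // placeholder for reading the real salt
    const uint8_t* view = salt;  // shallow "copy": just another pointer
    delete[] salt;               // the data behind `view` is now gone
    return view;                 // dangling pointer -> decryption fails
}

// Fixed pattern: hand the caller its own deep copy of the bytes.
std::vector<uint8_t> getSaltDeep()
{
    uint8_t* salt = new uint8_t[32];
    std::memset(salt, 0, 32);                    // placeholder for the real salt
    std::vector<uint8_t> copy(salt, salt + 32);  // owns its own storage
    delete[] salt;
    return copy;
}
```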
Profile encryption should be fairly stable. History encryption has *NOT* been tested yet and as such may not work, cause profile corruption, or invoke nasal daemons.
What happened was: when a message exceeded TOX_MESSAGE_LENGTH, the whole message was inserted into the sender's chat log X times.
If the length of the message is N,
X = (N / TOX_MESSAGE_LENGTH) + 1
There is no bug on the receiving end. The receiving end gets X messages (split).
In the sample case provided, the message had trailing whitespace, so the receiver thought the message was empty.
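For reference, a minimal sketch of the intended sender-side splitting (illustrative, not the actual qTox code); the TOX_MESSAGE_LENGTH value is hard-coded here only to keep the sketch self-contained:

```cpp
#include <string>
#include <vector>

// TOX_MESSAGE_LENGTH comes from toxcore; assumed value, hard-coded here
// only so the sketch compiles on its own.
static const size_t TOX_MESSAGE_LENGTH = 1372;

// Cut a message of length N into chunks of at most TOX_MESSAGE_LENGTH bytes.
// Each chunk, not the whole message, is what should be sent and logged.
std::vector<std::string> splitMessage(const std::string& msg)
{
    std::vector<std::string> chunks;
    for (size_t pos = 0; pos < msg.size(); pos += TOX_MESSAGE_LENGTH)
        chunks.push_back(msg.substr(pos, TOX_MESSAGE_LENGTH));
    return chunks;
}
```

Real splitting also has to avoid cutting multi-byte UTF-8 characters in half, which this sketch ignores.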