Tox did something similar with this outcome, but it never took off. With Tox, each account is stored locally, much like Skype did back when it was p2p, except your account lives entirely on your device: if you lost your "key," you lost your account. To connect with others, you gave your friends your Tox ID, which was essentially your public key with some added information encoding your privacy preferences, and your messages were then relayed through a p2p DHT network. Every communication was encrypted end-to-end. Anyone could create an account with any information, but only people you had added could message you, and vice versa. The only time your ID was ever publicly disclosed was when adding contacts you didn't already have, which helped minimize botting, since bots couldn't message you without your ID. The downside of that approach was that both parties had to be online to message each other; there was no central server to manage identity and handle users, so every connection was considered trusted, since you had to manually add the person via their Tox ID.
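To make the "public key with added information" part concrete, here's a rough sketch of how a Tox-style ID can be assembled. I'm assuming a 32-byte public key, a 4-byte anti-spam value, and a 2-byte XOR checksum; the real Tox format may differ in detail, and the key here is just random bytes for illustration.

```python
import os

def make_tox_like_id(public_key: bytes) -> str:
    # "nospam": a rotating value you can change to invalidate your old ID
    # without changing your actual key pair (assumption: 4 bytes).
    nospam = os.urandom(4)
    body = public_key + nospam
    # Fold every byte into a 2-byte checksum so typos in a shared ID
    # are caught before a friend request goes out.
    checksum = bytearray(2)
    for i, b in enumerate(body):
        checksum[i % 2] ^= b
    return (body + bytes(checksum)).hex().upper()

# Stand-in key for illustration; a real client would use its crypto keypair.
tox_id = make_tox_like_id(os.urandom(32))
print(len(tox_id))  # (32 + 4 + 2) bytes -> 76 hex characters
```

The point of bundling the extra fields into the ID is that everything a peer needs to find and verify you travels in one shareable string, with no server lookup involved.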
I expect this solution /could/ be moved into a centralized system for all user accounts, since the only way to add people was manually entering their Tox ID (a public key, not a private one). But at scale, I'd expect the inability to actually stop problematic users to dissuade platforms from implementing it: account creation was as easy as clicking "create account," and no accounts were ever verified server-side. Doing that verification brings us back to the core issue: privacy vs. security.