A friend asked: “Why aren’t you working on the development of a decentralized communications platform?”
My answer was that I don’t know how to solve the unwanted communication (spam, brigading, etc) problem.
I don’t think anyone else does, and I think most people are unaware of how hard it is. That’s because it’s a problem of mechanism design in addition to being a problem of engineering. And most people are just not devious enough to design mechanisms that are resistant to adversarial usage. A while ago, I brought the issue up on the Mastodon issue tracker and did not get much traction.
Someone recently proposed an option to disable replies on Mastodon posts. This is, abstractly, not a terrible idea: I don’t have comments on my blog, and I could imagine making certain social media posts where I don’t want comments. There was even the really neat follow-up idea of “no replies except for people who are @mentioned”. But what does “no replies” mean? To understand this, you need to know a little bit about how Mastodon (or, more precisely, the underlying ActivityPub standard) works:
There are a number of servers (e.g. mastodon.social, oulipo.social, etc), and each user belongs to one. If I post an “activity” (roughly, a tweet), it goes into my public outbox, and anyone who has read access to my outbox can read it. A reply is just an activity with the inReplyTo field set. If you post a reply, it (a) goes into your public outbox, and (b) goes to my inbox on my server. My server might, when receiving a reply, forward it on to the folks who saw the original message (that is, my followers, who would otherwise not notice it in your public outbox unless they were also following you).
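To make that concrete, here is a minimal sketch of what these objects look like. The field names (`inReplyTo`, `attributedTo`, `type`) follow the ActivityStreams vocabulary that ActivityPub builds on; the servers, users, and URLs are invented for illustration:

```python
# A toy ActivityStreams-style "Note" and a reply to it.
original = {
    "type": "Note",
    "id": "https://example.social/users/alice/notes/1",
    "attributedTo": "https://example.social/users/alice",
    "content": "An original post.",
}

reply = {
    "type": "Note",
    "id": "https://other.example/users/bob/notes/7",
    "attributedTo": "https://other.example/users/bob",
    "content": "A reply.",
    # The only thing that makes this a reply is this one field:
    "inReplyTo": original["id"],
}

def is_reply(activity):
    """An activity is a reply iff its inReplyTo field is set."""
    return activity.get("inReplyTo") is not None
```

The point is that a reply is not structurally special: it is an ordinary activity in Bob’s outbox that happens to point at Alice’s post.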
So things that disabling replies could mean include:
- I configure my server to not notify me about replies (i.e. they do not appear in my inbox)
- I configure my server to not forward replies.
- I add a field to my message indicating that I do not wish for people to reply to it, and other servers will enforce this in their UI.
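The first two options can be implemented entirely on my own server. A hypothetical sketch (the function and settings names are invented, not part of ActivityPub or Mastodon):

```python
# Server-side handling of an inbound reply, with per-post settings for
# option 1 (don't show it to me) and option 2 (don't forward it to my
# followers). Option 3 can only be a flag on the post that OTHER servers
# may or may not honor; it cannot be enforced here.

def handle_inbound(activity, settings, inbox, followers_feed):
    target = activity.get("inReplyTo")
    if target not in settings["no_notify"]:
        inbox.append(activity)            # option 1: suppressed if flagged
    if target not in settings["no_forward"]:
        followers_feed.append(activity)   # option 2: suppressed if flagged

# Example: I marked one post as "don't notify me about replies".
settings = {
    "no_notify": {"https://example.social/users/alice/notes/1"},
    "no_forward": set(),
}
inbox, followers_feed = [], []
reply = {"type": "Note",
         "inReplyTo": "https://example.social/users/alice/notes/1"}
handle_inbound(reply, settings, inbox, followers_feed)
# The reply is still forwarded to followers, but never reaches my inbox.
```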
All three of these things are reasonable to want, but the third only works for “well-behaved” participants. I put “well-behaved” in quotes here because of course if someone says something mean or false about me and then sets it to no-reply, I might well wish to override the “no-reply” setting so that at least my followers can see my rebuttal. So it was quite surprising to me to see, here, someone suggesting requiring (“must”, in RFC-speak) the third, unenforceable option.
I also saw a suggestion to implement a proof-of-work system, Hashcash, to reduce the frequency of direct harassment. This seems extremely unlikely to be useful, because Hashcash is intended to stop a single sender from sending large numbers of messages, but even one message per sender is sufficient to harass people (the most common risk is one message per person from a thousand people). Also, Bitcoin has shown us that there are something like seven orders of magnitude between the hash rate of an ASIC and that of a typical CPU. This makes setting a correct Hashcash cost impossible. A memory-hard hash function might reduce this gap, but probably not by enough. Also, most “clients” are actually someone else’s server, meaning that the cost would not be felt by the message sender directly.
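For concreteness, here is roughly what a Hashcash-style proof of work looks like (a toy sketch of the idea, not the actual Hashcash stamp format): the sender searches for a nonce that makes a hash of the message start with some number of zero bits, and the receiver verifies it with a single hash.

```python
import hashlib

def mint(message: str, bits: int) -> int:
    """Search for a nonce such that sha256(message:nonce) has `bits` leading zero bits."""
    target = 1 << (256 - bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # expected work: ~2**bits hashes
        nonce += 1

def verify(message: str, nonce: int, bits: int) -> bool:
    """Checking a stamp costs one hash, regardless of `bits`."""
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))
```

The calibration problem described above lives in the choice of `bits`: a difficulty that costs a phone a few seconds of minting costs an ASIC essentially nothing, and a harasser who only needs to mint one stamp pays it once.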
I wish I knew of better solutions, but at least I am glad that I don’t falsely believe that there are good solutions out there that nobody has the will to implement. That would be depressing.