With Twitter suspending Trump’s account, and other social networks finally starting to take action, the natural debate has followed on the role of moderation, censorship, and power in social networks.
Should there be moderation, or should everything be allowed? Who should be in charge of moderating? Should companies be able to de-platform anyone? Is the power of centralizing this decision-making a problem? What are the risks of allowing anything to be posted?
This is a difficult problem, and will probably remain one. Social networks tried for many years to stay neutral in the content moderation game, jumping through hoops to justify keeping up posts from Trump on the grounds that they were newsworthy, even though they violated the sites’ own terms. But in trying to remain “neutral,” they have still been enacting a particular content moderation policy: one with, perhaps, too high a bar for removing harmful posts.
So if someone is banned, is it arbitrary? Is it due to the site rules? Or something else?
Now, there are varied viewpoints on the banning of Trump’s social media accounts and the wide, rapid de-platforming that followed. Some people say he deserved to get banned, some say he should have been banned sooner, and some say he should not have been banned, or even that no one should be banned.
My opinion is that he should have been banned far sooner: as a national leader, you don’t have an inherent right to incite war on a social media platform, and the platforms shouldn’t support it.
That it took an attempted coup incited by Trump for social media sites to take action is in many ways quite sad. For years, I wondered: where is the line? If this isn’t bad enough, what is? What action would cause a site to suspend his account? (Or, more broadly, for Congress to take action.) We finally have that answer: inciting a coup. I think it’s a pathetic answer, really, because we shouldn’t want to get to the point of an attempted coup. When the consequences of this kind of violent rhetoric, and where it leads, are obvious and massive, we shouldn’t need to wait until they have all played out to take action.
But set aside my view of what should have happened here, and consider the consequences of another viewpoint: that Trump should not have been banned, that perhaps nothing should get anyone banned, and that censorship or moderation of any kind is bad, a greater threat than the impact of the messages themselves.
I think it’s possible to hold the viewpoint both that concentrating moderation power in a few sites is a problem, and that Trump should have been banned. Internet discourse splits into mutually exclusive camps that are usually both wrong.
But say you hold the viewpoint that there should be no moderation, and that any censorship is bad. A few things are immediately worth noting. Many who hold that view are using sites where moderation policies are in effect, without realizing how much those policies improve their experience. For example, they may approve of policies that remove disturbing content, while objecting to the removal of political content.
These widely used social networks already have massive spam, bot, troll, and other content problems. If there were a site with no moderation or censorship, how would it not get overloaded with horrible content? Mechanisms could be built in to cope, but those may themselves degrade the user experience.
Sites like Parler and Gab have emerged as right-wing alternatives and gained popularity as a result of Twitter’s crackdowns. Parler was also kicked off its hosting service and dropped by many other providers.
So why does this lead me to blockchain-based social networks? If a site is run by a company like Twitter or Facebook, the company has some moderation policy. People worried about that policy, viewing it as censorship or as arbitrary, will go to other sites. But any site may end up with a moderation policy. Much will also depend on how legal liability for websites shifts: if someone hosts a message board encouraging violence, do they own some of that blame? Probably.
One attribute of blockchains is that they are essentially immutable. If you post a transaction or message to a blockchain, it is on the public ledger, visible, and cannot be modified. If there are wider groups with censorship concerns, they may look to blockchain technologies as a way to build a social network. You can already do this today: post a message in an Ethereum transaction and it is there, public, visible, and unmodifiable. Essentially, it is censorship resistant for as long as that blockchain exists. And that is more robust than the arbitrary censorship decisions of a particular site.
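As a sketch of the idea: the common pattern is to embed a message as UTF-8 bytes in a transaction’s data field. The helper names below are illustrative, not from any particular library; actually broadcasting the transaction would require a funded account and a node (for example via a client library such as web3.py), so only the encoding step is shown.

```python
# Sketch: encoding a social post as Ethereum transaction calldata.
# Once the transaction is mined, the data field sits on the public
# ledger and cannot be modified. Function names are hypothetical.

def encode_post(message: str) -> str:
    """Encode a UTF-8 message as a 0x-prefixed hex calldata string."""
    return "0x" + message.encode("utf-8").hex()

def decode_post(calldata: str) -> str:
    """Recover the original message from transaction calldata."""
    return bytes.fromhex(calldata.removeprefix("0x")).decode("utf-8")

tx_data = encode_post("hello, chain")
print(tx_data)               # 0x68656c6c6f2c20636861696e
print(decode_post(tx_data))  # hello, chain
```

Anyone reading the chain can decode the bytes back into the message, which is exactly what makes the post both censorship resistant and permanent.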
There are certainly benefits to being censorship resistant. There may be real reasons that people need a way to speak out, and they aren’t able to in certain forums.
But I don’t think people have reckoned with the risks of blockchain-based social networks. Say something is posted and can’t be removed. There is no undo or delete button. If you post something you didn’t mean to say, it’s there. You could build an interface that hides or filters some things, but if it’s in the database, it’s there. This also leads to potential problems around doxxing, which becomes both easy and permanent.
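The distinction above, between what an interface shows and what the underlying store keeps, can be sketched in a few lines. All names here are hypothetical; the ledger class is a minimal stand-in for a blockchain-style append-only store, not a real one.

```python
# Sketch: a front end can hide posts, but an append-only ledger
# never forgets them. Class and method names are illustrative.

class AppendOnlyLedger:
    """Minimal stand-in for a blockchain-style store: append only."""
    def __init__(self):
        self._posts = []

    def append(self, post: str) -> None:
        self._posts.append(post)

    def all_posts(self) -> list:
        # There is deliberately no delete or update method:
        # everything ever appended persists.
        return list(self._posts)


class ModeratedView:
    """Interface layer that hides flagged posts without removing them."""
    def __init__(self, ledger: AppendOnlyLedger, blocklist: list):
        self.ledger = ledger
        self.blocklist = blocklist

    def visible_posts(self) -> list:
        return [p for p in self.ledger.all_posts()
                if not any(term in p for term in self.blocklist)]


ledger = AppendOnlyLedger()
ledger.append("nice day today")
ledger.append("here is someone's home address ...")
view = ModeratedView(ledger, blocklist=["home address"])
print(view.visible_posts())     # only the first post is shown
print(len(ledger.all_posts()))  # 2: the hidden post still exists
```

The view can be as strict as you like, but any other client reading the same ledger sees everything, which is the crux of the permanence problem.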
As sites have developed, almost two types of internet have emerged. There is the ephemeral internet, Snapchat-style, where you post something and it disappears; you can make mistakes, and things aren’t out there to live forever. Many sites keep things for a long time, but you still have a way to delete them. And now there is the extra-permanent internet developing with the blockchain. I’m excited about the blockchain, and I think it has great applications in finance, where there are real reasons to want permanence in transactions.
But if blockchains become the database and protocol for open-source social networks (which I think is the natural end state of these discussions), then you have extra-permanent social networks and posting. And the communities created there are not automatically healthy. They can be very negative, violent, and full of misinformation.
I think people will develop these, and some already exist in various forms, but once you reach the point of censorship-resistant social networks, there is no real turning back. I think you should have free speech. But it should certainly have limits; the law on free speech already has many. Some people want censorship-free zones, and I’m concerned about where that leads.