Push moderation to the end of the networks
If you have not read Mike Masnick’s article on “Protocols, Not Platforms” and you are interested in the possibilities of returning to a more decentralized web, please do so.1 In his article, he argues that we should move back towards a protocol-based web, rather than one dominated by proprietary platforms. Doing so, he argues, “would push the power and decision making out to the ends of the network, rather than keeping it centralized among a small group of very powerful companies.”
This suggestion, to push to the end of the network, seems like a perilous but much-needed move to address the problems that the modern web faces. Platforms, particularly social networks, have seen the rapid growth of trolls, spam, illegal content, and bots that interfere with most users. Many of the prevailing problems of the web stem from the massive growth of users and the inability of proprietary web platforms to tailor their products and digital public spaces to a global audience. In short, there is simply too much noise and complexity for a single company to manage.
No matter how many human moderators or intelligent content bots a company employs, it doesn’t change the fact that content moderation is difficult, potentially controversial, and ultimately rooted in human assumptions about what “ought” to be allowed on a platform. The spectrum of acceptability differs from company to company, and moderation rules are often applied unevenly. Enlisting users in the moderation of content can help, but asking users to report or block content is not a cure-all.2
Masnick suggests that we offload the duty of moderation onto individual users through the use of community-created content filters. Users, rather than platforms, can decide what type of messages they want to allow and tailor their social network around their own taste rather than a single standard. Under this paradigm, users that want an unfiltered experience can opt for minimal or no filtering. Users could also adopt a patchwork of filters (much like how ad-blocking tools pull from multiple lists of banned sources, with some lists being stricter than others).
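To make the patchwork idea concrete, here is a minimal sketch of stackable, user-selected filter lists. All names and the keyword-based rule format are invented for illustration; real filter lists (like ad-blocker lists) use far richer rule syntax.

```python
# Hypothetical sketch: community-maintained filter lists that users
# can subscribe to in any combination. All names are invented.

def load_filter_list(rules):
    """A filter list here is just a set of banned keywords -- a
    deliberately simple stand-in for a real rule syntax."""
    return {rule.lower() for rule in rules}

def make_filter(*filter_lists):
    """Combine any number of subscribed lists into one predicate
    that decides whether a message is shown."""
    banned = set().union(*filter_lists)
    def allows(message):
        text = message.lower()
        return not any(term in text for term in banned)
    return allows

# Two community lists of differing strictness.
strict_list = load_filter_list(["spam", "scam", "crypto giveaway"])
lenient_list = load_filter_list(["crypto giveaway"])

# Each user assembles their own patchwork of lists.
cautious_user = make_filter(strict_list, lenient_list)
relaxed_user = make_filter(lenient_list)

messages = ["Free crypto giveaway!!", "Lunch anyone?", "Hot spam deals"]
print([m for m in messages if cautious_user(m)])
print([m for m in messages if relaxed_user(m)])
```

The point of the sketch is that filtering is a pure function applied at the user’s end of the network: the same message stream yields different timelines depending on which lists a user opts into, and opting into none of them yields the unfiltered experience.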
This vision of a new method of moderation is only possible if platforms start to adopt open protocols that allow for the rich manipulation of data that this type of filtering requires. So long as platforms store, organize, and present information through a single portal, they will still be responsible for the moderation of blatantly illegal content or other violations of laws. Masnick presents an idea of protocols that would shift the ownership and transmission of data away from single platforms and towards federated networks. This would be a massive upheaval of the current economic order, risks creating a more fractured platform landscape, and is unlikely to occur quickly. Despite this, I believe the core of Masnick’s article offers us a foundation on which to build more robust platforms.
It is risky to move moderation to the end of a network, but it may be the only reasonable choice to address the ills of contemporary platforms. Opening networks is difficult because it has the potential to create a less cohesive web, with each individual being presented a different version of the web. We should be concerned with the risk of new filter bubbles3, but we shouldn’t pretend that our current bubbles are working perfectly. Platforms are already attempting to filter messages for us, often to tailor an advertisement or to shelter us from more distasteful content. Putting users in control of filters wouldn’t fix the filter bubble issue, but it would make us responsible for shaping the type of web that we want, ultimately making content moderation something we opt into rather than something we live under by default.