It is barely an oversimplification to characterise the current debate on internet regulation as a fight over what people see and what they do not. The systems of curation and moderation that dictate what is and is not permitted are the machinery most responsible for the health of online spaces, and it follows that how they work is the subject of intense government scrutiny.
This paper argues that the principles and practices underpinning most major platforms have failed to create healthy online spaces, and that current attempts by states to regulate these spaces will in all likelihood fall short of addressing the root causes of this failure.
The author identifies three failures in current approaches to content moderation:
- There is a democratic deficit in the way the majority of online platforms are moderated, both in principle and in practice.
- The architecture of the majority of online platforms undermines the ability of communities to moderate themselves, both in principle and in practice.
- The majority of online platforms lack the cultures and norms that, in the offline world, act as a bulwark against supposed harms.
The paper then presents solutions, in principle and in practice, to the challenges posed by current content moderation systems and puts forward a number of recommendations.