Is unmoderated social media a good idea?

Posted by Crisp


In the early 2000s, social media was seen as a way of connecting with friends, chatting with people across the globe and sharing thoughts without the constraint of putting your name to them. But almost 20 years on, this anonymity has contributed to the rise of cyberbullying, fake news, terrorist grooming, child abuse and other toxic activities.

This unwanted content is appearing despite social media currently being monitored to some extent by moderation companies, social platforms and users. This raises the question: if social media were completely unmoderated, would it be safe enough for most of us to use?


In another 20 years will most of us have closed our Facebook accounts and logged off Twitter for the very last time? Or will we have resigned ourselves to a social media cesspit which we simply navigate with caution?

The push for regulation

In recent years, calls for social media oversight have grown louder, and now the European Commission, governments, police, charities and individuals are pushing for legislation to counter the surge of toxic content flooding social media. US tech companies and social platforms have responded by publicizing their experiments with AI as a moderation tool. However, as social media spans governance borders and has around 2.3 billion active users, policing it is a phenomenal undertaking for any single entity.

The right to free expression

As the question of who is responsible for social media content remains in limbo, the trump card of our fundamental right to free speech is often played. Platforms, governments and companies cannot ‘clean up’ social media without encroaching on our right to speak our minds without fear of censorship, judgement or criticism.

In 2016, Facebook removed the iconic image of a girl fleeing a napalm attack during the Vietnam War because it was a photograph of a nude child, but critics pushed back, arguing that social platforms do not have the right to censor history or create global rules for what can and cannot be published.

An in-depth report by the renowned Pew Research Center finds that experts predict social media will split into two camps: one heavily patrolled by AI and regulation, the other an unsanitary free-for-all zone.


Whatever the future holds, we believe that in the here and now, users and brands have the power and responsibility to protect people on their social media pages. For brands, this opens up the opportunity to moderate their owned social media pages in line with the brand’s values. They can control content based on their target audience’s tolerance for profanity and can nurture communities which shun bullies, racists and haters.

Get top tips on how to effectively moderate your brand's social media channels with our free Social Media Moderation Toolkit.




Written by Crisp

Crisp’s mission is to provide the fastest detection of critical issues and crises to protect global brands and platforms. From supporting PR teams in reputation management and helping pharma brands remain compliant, to protecting vulnerable individuals from the exploitation of bad actors... wherever social media has the potential to trigger a crisis, you can be sure we have expertise to share.

Read more posts from Crisp »