With the global wave of fake news, hate speech and adverts appearing alongside terrorist videos, the demand from both the public and governments for social platforms to police themselves has reached fever pitch.
But it’s a thorny issue.
When the likes of YouTube set guidelines on what videos can be uploaded to their channels, or Twitter dictates what users can say, they are accused of interfering with our right to free speech. On the other hand, it can be argued that platforms have a responsibility to protect their users from toxic content.
Social channels are rightly nervous about wading into the fiery topic of brand safety, but headlines have forced their hands. Google has now introduced safeguards to give brands more control over where their adverts are placed, Facebook has launched ‘disputed by fact-checkers’ pop-ups that alert users to potential fake news stories, and Twitter has rolled out tools to block hateful content.
Have we been here before?
Platforms are finding themselves in a similar position to that of British broadcasters in the 1960s. Amid fears about the type of content children were being exposed to, the Television Act 1964 forced commercial broadcasters to take responsibility for the content of their programmes by introducing the ‘watershed’.
Nowadays, the German government is proposing fines if social media channels do not remove ‘criminal incitement and slander’ quickly, and the UK Children’s Commissioner is calling for social media channels to do more to protect children online.
What is the answer?
With around 300 hours of video uploaded to YouTube every minute, the sheer volume of content is forcing social media platforms to rely on AI that is still in training. Despite being highly advanced, this technology is still learning on the job and has been shown to make mistakes. Zuckerberg has said that major advances are needed before AI can accurately judge the content of text, photos and videos, and we feel that AI will only ever be part of the moderation solution.
Whilst AI has the potential to handle a huge amount of analysis, we believe that a combination of machine learning and human reasoning is – and will always be – the key to fast and accurate moderation. AI assists our expert Social Media Risk Analysts in reviewing three billion pieces of content every month for over 1,000 global brands, but alone it cannot understand the context, subtleties or cultural cues within a statement or image, and therefore cannot make the shrewd judgement that a human can. At the same time, a team of moderators is difficult and costly to scale during social media storms.
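This hybrid approach can be sketched in a few lines of Python – a purely hypothetical illustration, not our actual pipeline. The idea: an AI model scores each piece of content, clear-cut cases are handled automatically, and anything ambiguous is escalated to a human analyst. The `ai_risk_score` function, keyword list and thresholds below are all invented for the sake of the example; a real system would use a trained classifier and carefully tuned cut-offs.

```python
# Hypothetical sketch of human-in-the-loop moderation triage.
# The scoring function and thresholds are invented for illustration;
# a real system would use a trained classifier and tuned cut-offs.

AUTO_REMOVE = 0.95   # score above this: AI removes automatically
AUTO_ALLOW = 0.05    # score below this: AI allows automatically

def ai_risk_score(item: str) -> float:
    """Stand-in for a trained model: a crude keyword heuristic."""
    flagged_terms = {"hate", "fake", "extremist"}
    words = item.lower().split()
    hits = sum(1 for w in words if w in flagged_terms)
    return min(1.0, hits / max(len(words), 1) * 2)

def triage(item: str) -> str:
    """Route content: automate clear cases, escalate ambiguity to humans."""
    score = ai_risk_score(item)
    if score >= AUTO_REMOVE:
        return "removed"
    if score <= AUTO_ALLOW:
        return "allowed"
    return "escalated to human analyst"
```

The escalation band in the middle is where the human judgement the article describes comes in: the model only automates decisions it is confident about, and everything else goes to an analyst who can weigh context and cultural cues.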
Just like commercial television and radio broadcasters, companies want to protect their audiences and reflect their brand values on social media – which means deciding what is and is not appropriate for their customers to see.
Whilst change has already begun, with social media platforms trialling AI to weed out top-level risks on their pages, the keen eyes of skilled moderators will always be needed to spot the nuances of hate speech, extremist videos and fake news.