Modern-day moderators: the unsung heroes of the internet

Image: JDHancock

Right now, across the globe, eagle-eyed people are reviewing some of the worst images, insults and inhumane acts you can imagine. They’re getting paid for it. And they choose to do it. They make that choice because they don’t want you to have to see it, so that your children don’t stumble across it, and so that people don’t get hurt.

Who are these unsung heroes of the internet? Content moderators.

It used to be that the people who reviewed comments in online forums and removed offensive remarks were out-of-work students or fair-weather workers, and many articles online would have you believe that’s still true today. The truth is that moderators today have to be resilient, emotionally intelligent and incredibly diligent and perceptive. They are the people who look at the worst content online, protecting all of us from the darker side of the internet.

Emma Monks, Head of Moderation, Trust and Safety here at Crisp, has experienced this industry change first-hand. An Internet Relay Chat Community Manager back in the 90s, she felt at home in online chatrooms but also realized there was a darker side. She noticed patterns in chats where people arranged for children to be abused, and rings sharing horrific photos of youngsters. She wasn’t going to let these horrendous crimes happen in her online community, so she reported them to the authorities.

Emma soon became a full-time Community Manager, building friendly online communities, managing a team of Moderators and writing moderation policy to protect users. Whilst the things Emma saw used to affect her, twenty years on she doesn’t give them a second thought. Like many seasoned Community Managers and Moderators, she has the remarkable ability to see the most horrendous things and remain objective about them. That resilience is what protects the rest of us.

The web has evolved

The amount of user-generated content posted every day is staggering. With such massive volumes, even a small percentage of explicit and inappropriate content equates to millions of posts that need moderation.

As a result, guidelines are needed to define what is acceptable. YouTube, for example, was founded in 2005, yet its first clear user rules were only published in 2007. Because the nature of risk never stops evolving, guidelines are still developed in response to new events that push the boundaries of what mainstream society finds acceptable.

With levels of online abuse and violence continuing to rise, companies have had to take serious steps in recent years to moderate their social media channels. Despite the increased volume, one thing they’ve found is that identifying risks early is essential to preventing them from becoming major issues.

By continually monitoring social media, we alert our clients to high-impact risks within 15 minutes of them happening. These can be anything from a cybersecurity breach to a trending product issue. 

The role of AI and technology in moderation

With social media posts hitting the billions every day, technology is an essential tool in moderating the internet. Increasingly sophisticated artificial intelligence (AI) is needed to trawl through every post and pinpoint those that pose a threat.

These 'risky' posts can then be prioritised so that human moderators are addressing the most serious content first. It's a case of technology and human intelligence working closely together. 
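
To make this concrete, here is a minimal, purely illustrative sketch of how such an AI-assisted triage queue might work. The score_risk function is a toy stand-in for a real classification model, and all names and thresholds here are assumptions made for the sake of the example, not a description of any actual system:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class QueuedPost:
    priority: float                   # negated risk score, so the riskiest post pops first
    text: str = field(compare=False)  # post content, excluded from ordering

def score_risk(text: str) -> float:
    """Toy stand-in for an AI risk model returning a score between 0 and 1."""
    flagged_terms = ("threat", "abuse")  # illustrative keyword heuristic, not a real model
    return 1.0 if any(term in text.lower() for term in flagged_terms) else 0.1

def build_queue(posts: list[str]) -> list[QueuedPost]:
    """Score each post and push it onto a min-heap keyed by negated risk."""
    queue: list[QueuedPost] = []
    for post in posts:
        heapq.heappush(queue, QueuedPost(-score_risk(post), post))
    return queue

posts = [
    "Lovely product, thanks!",
    "This is a threat to your staff",
    "Where do I return this?",
]
queue = build_queue(posts)
while queue:
    item = heapq.heappop(queue)
    # A human moderator would review posts in this order, riskiest first.
    print(f"risk={-item.priority:.1f}: {item.text}")
```

In a real deployment the scoring would come from a trained model rather than a keyword list, but the division of labour is the same: the machine ranks, and the human judges.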

Who are the human moderators?

Given the increased levels of seriously harmful user-generated content online, those who moderate these channels must possess the emotional strength and resilience to deal with what they see. The moderators we work with are referred to as Risk Analysts, a title that reflects the increased importance of the role they play in identifying and categorizing online content and protecting us from the worst of it.

Risk Analysts need to have a high level of emotional intelligence to pick up on subtle or prolonged risks that develop over time, such as domestic violence, grooming and cyberbullying.

We also find that, contrary to the out-of-work student stereotype, Risk Analysts tend to have a strong academic background and some speak four or more languages. Working from home across the world, they are highly skilled authors, teachers, filmmakers and entrepreneurs.

One story that Emma Monks recalls is of a Risk Analyst who spotted, through a pattern of small comments and jokes, that an employee was being bullied at work. The Risk Analyst alerted the client, who quickly put an end to the bullying. This ability to spot the subtle risks that computer algorithms simply skim over is why AI or technology alone isn’t enough.

How do you become a Risk Analyst?

The first step is an anonymous online assessment that involves identifying risks in a series of social media posts. Because some risks are not instantly obvious, we’re not only testing a candidate’s ability to follow detailed instructions; we’re also assessing their natural emotional intelligence, both to recognize the immediate risk and to judge the more damaging effect a comment might have. Only those with high scores make it through to round two.

New Risk Analysts must then demonstrate a high level of skill via in-depth quality tests before they can moderate any of our clients’ content. 

Work is paid per item rather than at an hourly rate, as we think this is the fairest way to reward dedicated, hard-working Risk Analysts, and we also pay bonuses for high-quality work. The system does away with scheduled shifts, so Risk Analysts can work as and when it suits them.

To apply to become a Risk Analyst, click here.