If the Devil wears Prada, what do fashion activists wear? Certainly not fur…
You wouldn’t have expected an 18th-century author to feature at the heart of a 21st-century court case about online abuse, but a couple of years ago that’s exactly what happened.
Across the globe, right now, eagle-eyed people are reviewing some of the worst images, insults and inhumane acts you can imagine. They’re getting paid for it, and they choose to do it. They make that decision so you don’t have to see it, so your children don’t stumble across it, and so people don’t get hurt.
In the fall-out from the horrific terrorist attack on Manchester Arena on Monday, the UK terror threat has been raised to its highest level of ‘critical’ as authorities fear that more attacks are imminent. Whilst MI5 works round the clock to keep us safe, terrorist propaganda and extremist views are spreading quickly, especially on social media.
In the wake of Facebook’s moderator training manuals being seen by The Guardian newspaper on Sunday, the world has been questioning whether Facebook should decide what is and isn’t acceptable online; why videos of violent deaths are not always deleted; and why photos of child abuse are only marked as ‘disturbing’.
It all started in the small hours of Monday morning. Several videos were shared on social media of a man being violently dragged down the aisle of a United Airlines plane. Within hours, the videos had gone viral and undoubtedly the press were on the phone to Oscar Munoz, CEO of United Airlines, and his PR team.
Social media moderation teams and managers can find themselves faced with content that poses a real risk to their online community. Whether it is a threat from a vulnerable user, a graphic or sexualized image, cyberbullying or a sexually inappropriate comment, the team will expect the social channel to intervene: to ban the user, remove the content or help a vulnerable user.
We read every piece of content from our clients' social media channels looking for complex word combinations that trigger any of over 100 risks, such as a bomb threat, hate speech or illness after taking prescribed medication. When a risk is found, the comment and its context are reviewed by one of our skilled Risk Analysts. It’s their job to understand the real intent of the comment and to tag it appropriately so the right action can be taken.
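The idea of triggering on complex word combinations, rather than single keywords, can be sketched roughly as follows. This is a minimal illustration, not our actual system: the rule set, category names and matching logic here are invented for the example, and a real deployment would use far richer rules before anything reaches a human Risk Analyst.

```python
import re

# Hypothetical rules: a risk category fires only when ALL the words in
# one of its combinations appear in the comment. This approximates
# "complex word combinations" rather than single-keyword matching.
RISK_RULES = {
    "bomb_threat": [{"bomb", "plant"}, {"blow", "up"}],
    "medication_illness": [{"sick", "tablets"}, {"ill", "prescription"}],
}

def tokenize(text):
    """Lower-case the comment and split it into a set of words."""
    return set(re.findall(r"[a-z']+", text.lower()))

def detect_risks(comment):
    """Return the risk categories triggered by a comment."""
    words = tokenize(comment)
    hits = []
    for category, combinations in RISK_RULES.items():
        if any(combo <= words for combo in combinations):
            hits.append(category)
    # Anything non-empty would be queued, with its surrounding context,
    # for review by a human Risk Analyst who judges the real intent.
    return hits

print(detect_risks("I'm going to plant a bomb at the show"))
```

The point of the two-stage design is that the automated pass only narrows millions of comments down to candidates; it is the analyst, seeing the comment in context, who decides whether a match is a genuine threat or, say, a song lyric.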
We recently blogged about Crisp developers rigorously testing Facebook updates to protect our clients from bugs that could compromise the security of their Facebook accounts.
We’ve all logged in to our computers and seen automatic updates launch. Whilst it’s not always obvious what has been updated, we take a leap of faith that the programme is still safe to use and won’t put us, or our customers, at risk.