In a world where fake news is continuously in the headlines (if not the actual headlines), it’s no wonder that we’re seeing a resurgence in purpose-driven marketing amongst brands.
Right now, across the globe, groups of eagle-eyed people are reviewing some of the worst images, insults and inhumane acts you can imagine. They’re getting paid for it. And they choose to do it. They make that decision because they don’t want you to have to see it, so your children don’t stumble across it, so people don’t get hurt.
In the fallout from the horrific terrorist attack on Manchester Arena on Monday, the UK terror threat level has been raised to its highest level, ‘critical’, as authorities fear that more attacks are imminent. Whilst MI5 works round the clock to keep us safe, terrorist propaganda and extremist views are spreading quickly, especially on social media.
In the wake of Facebook’s moderator training manuals being seen by The Guardian newspaper on Sunday, the world has been questioning whether Facebook should decide what is and isn’t acceptable online; why videos of violent deaths are not always deleted; and why photos of child abuse are only marked as ‘disturbing’.
It all started in the small hours of Monday morning, when several videos of a man being violently dragged down the aisle of a United Airlines plane were shared on social media. Within hours the videos had gone viral, and undoubtedly the press were on the phone to Oscar Munoz, CEO of United Airlines, and his PR team.
Social media moderation teams and managers can find themselves faced with content that poses a real risk to their online community. Whether it is a threat from a vulnerable user, a graphic or sexualised image, cyberbullying or a sexually inappropriate comment, the team will expect the social channel to intervene: to ban the offender, remove the content or help a vulnerable user.