In the fallout from the horrific ISIS attack on Manchester Arena on Monday, the UK terror threat level has been raised to its highest rating, ‘critical’, as authorities fear that further attacks are imminent. While MI5 works round the clock to keep us safe, terrorist supporters are celebrating and spreading extremist views in any way they can, especially on social media.
In the wake of Facebook’s moderator training manuals being seen by The Guardian newspaper on Sunday, the world has been questioning whether Facebook should decide what is and isn’t acceptable online; why videos of violent deaths are not always deleted; and why photos of child abuse are merely marked as ‘disturbing’.
It all started in the small hours of Monday morning. Several videos were shared on social media showing a man being violently dragged down the aisle of a United Airlines plane. Within hours, the videos had gone viral, and the press were undoubtedly on the phone to Oscar Munoz, CEO of United Airlines, and his PR team.
Social media moderation teams and managers can find themselves faced with content that poses a real risk to their online community. Whether it is a threat from a vulnerable user, a graphic or sexualized image, cyberbullying or a sexually inappropriate comment, the team will expect the social channel to intervene: ban the offending user, remove the content or help a vulnerable user.
We recently blogged about Crisp developers rigorously testing Facebook updates to protect our clients from bugs that could compromise the security of their Facebook accounts.