In the fallout from the horrific ISIS attack on Manchester Arena on Monday, the UK terror threat level has been raised to its highest rating, ‘critical’, as authorities fear that further attacks are imminent. Whilst MI5 works round the clock to keep the public safe, terrorist sympathisers are celebrating and spreading extremist views in any way they can, especially on social media.
In the wake of Facebook’s moderator training manuals being seen by The Guardian newspaper on Sunday, the world has been questioning whether Facebook should decide what is and isn’t acceptable online; why videos of violent deaths are not always deleted; and why photos of child abuse are only marked as ‘disturbing’.
In the early 2000s, social media was seen as a way of connecting with friends, chatting with people across the globe and sharing thoughts without the constraint of putting your name to them. But almost 20 years on, this anonymity has contributed to the rise of cyberbullying, fake news, terrorist grooming, child abuse and other toxic activities.
The content you and your customers post on social media can reach millions of people in seconds around the world. Whilst this can offer unrivalled opportunities to engage with your consumers, it also means they could be exposed to a myriad of social media risks at any time of the day or night. Below is a short extract from Crisp Labs’ ‘Marketing In A Social Age’ report which explores just some of the business-critical risks facing global brands in 2017.
Read time: 5 mins
The problem of graphic images and videos appearing on social media is not a new one, but unacceptable, toxic and illegal footage left up for all to see is certainly hitting the headlines more frequently. This week, a self-filmed video of a Thai man murdering his baby daughter remained on Facebook Live for almost 24 hours. Only 10 days earlier, Steve Stephens posted a video on Facebook of himself shooting Robert Godwin Sr. in Cleveland; that video was live for three hours before being removed.