Social media moderation teams and their managers can find themselves faced with content that poses a real risk to their online community. Whether it is a threat from a vulnerable user, a graphic or sexualized image, cyberbullying, or a sexually inappropriate comment, the team will expect the social channel to intervene: banning the user, removing the content, or helping a vulnerable user.
Crisp Labs, our research division, has revealed new statistics that confirm the growing problem of fraudulent content on social media. There are a staggering 14 million scams on Facebook a day (that's roughly 160 per second), and written posts aren't the only source of fraud. Instagram, which sees 80 million photos uploaded a day, is also home to a huge amount of content with malicious intent.
With an ever-growing number of spammers, scammers, and all-round haters and trolls running riot on social networks, it's essential that brands large and small have a comprehensive policy in place to ensure they're ready for an attack.
On social media today, good or bad content can go viral at the drop of a hat. A post that seems irrelevant at first can quickly spread, reaching tens, if not hundreds, of millions of people in a matter of hours. For you and your team, protecting your brand and reducing your online risk is not always easy to manage in such instances.
Social media risk rocketed up the agenda at financial services brands in 2015, according to the 'Banking Banana Skins 2015' report from the Centre for the Study of Financial Innovation (CSFI) and PwC. Banks are in the process of rebuilding public trust, and risk managers are increasingly aware of how comments spreading on social media can damage reputations.