After five years of development and real-world roll-outs with some of the largest digital kids’ platforms, Crisp has now officially launched its fully managed ‘Kids and Teens’ safety and moderation service. The service is set to transform the way social platforms deliver safe experiences for their younger users.
We’ve all logged in to our computers and seen automatic updates launch. Whilst it’s not always obvious what has been updated, we take a leap of faith that the programme is still safe to use and won’t put us, or our customers, at risk.
In July 2016, a 15-year-old girl received a seemingly innocent Facebook message from a 28-year-old man she didn’t know. That simple “Hey, how are you?” led to her spending the night at the man’s house only 13 days later. She was held against her will and after trying to escape, was killed.
Over the last couple of years there have been some huge PR crises for a number of major kids’ entertainment providers and social networks. Some of these disasters were so damaging that they led to the downfall of businesses which, up until that point, had been highly successful. The main issue behind their untimely demise was inadequate protection of their customers (or users, whichever you prefer) from cyber-bullying, trolling and sexual predators. Even something as relatively harmless as spam played its part, damaging essential advertising revenue through a lack of effective content moderation.
With all the recent press around Facebook’s moderation performance, deciding to open up your company’s Facebook wall for all to comment on can be a challenging decision. It is estimated that 25% of global companies have a closed wall on Facebook. But what is the effect of a closed wall on the perceptions of your social community and, importantly, what can be done to mitigate the risks of an open wall?
As the Daily Mail rightly reports the ‘fury as child abuse picture goes viral on Facebook with 16,000 “shares” and 4,000 “likes”’, we still see too little focus on strong moderation as a brand protection solution. Whilst it’s the most extreme forms of pornography which continue to make headlines, unregulated spam content of many kinds can be damaging to brands.
Both Burger King’s and Jeep’s Twitter accounts were hacked this week, showing just how easy it is for an external threat to pose a serious risk to a brand’s social media reputation. In both instances the hackers posted similar content, with negative references to company staff and claims that the companies in question had been sold to their closest competitors. This emphasises the importance for companies of having a proper brand protection strategy when it comes to managing their social media reputation.
Facebook’s latest feature release, ranked comments, is in beta on brand pages. The change alters the ordering of comments in response to a brand’s posts: comments are now ranked by popularity, with the most engaging at the top of the list – a move that directly affects brands and agencies moderating Facebook pages. Previously, comments simply appeared in the order they were posted, which gave moderators some structure (although not ideal) for scanning the page: you would scroll back through the comments to the point they were last checked. This was tedious on busy posts, but at least it offered a structure of sorts.
‘Predator detection’ software is police’s high-tech weapon against paedophiles
LONDON, 2 May 2012 – Predator detection software saved the police hundreds of man hours by sifting through a dangerous paedophile’s 5,000 web chats and identifying his vulnerable young targets.
Web safety experts Crisp used its Kids & Teens programme, which usually monitors live web chats, to quickly process six years’ worth of online grooming history police found on a 46-year-old sex offender’s computer and flag up the girls he had targeted.