Crisp announces fully managed safety service to transform how Social Platforms keep kids safe

After five years of development and real-world roll-outs with some of the largest digital platforms for kids, Crisp has officially launched its fully managed ‘Kids and Teens’ safety and moderation service. The service is set to transform the way Social Platforms deliver safe experiences for their younger users.


How do you detect and remove offensive videos from social media in real time?


The problem of graphic images and videos appearing on social media is not a new one, but unacceptable, toxic and illegal videos, left visible for all to see, are hitting the headlines more frequently. A video of a Thai man murdering his baby daughter, streamed on Facebook Live, remained online for almost 24 hours this week. Only 10 days earlier, Steve Stephens posted a video on Facebook of himself shooting Robert Godwin Sr. in Cleveland; that video stayed live for three hours before being removed.


How to effectively report inappropriate content to a social channel

Social media moderation teams and managers can find themselves faced with content that poses a real risk to their online community. Whether it is a threat from a vulnerable user, a graphic or sexualized image, cyberbullying or a sexually inappropriate comment, the team will expect the social channel to intervene: banning the offending user, removing the content or helping a vulnerable user.
