How do you detect and remove offensive videos from social media in real time?

Posted by Crisp on


Read time: 5 mins

The problem of graphic images and videos appearing on social media is not a new one, but unacceptable, toxic and illegal videos left up for all to see are certainly hitting the headlines more frequently. The self-recorded video of a Thai man murdering his baby daughter on Facebook Live was up for almost 24 hours this week. Only 10 days previously, Steve Stephens posted a video on Facebook of himself shooting Robert Godwin Sr. in Cleveland. That video was live for three hours before being removed.

YouTube and Facebook aren’t ignoring the problem; far from it. Facebook have talked extensively about their efforts to speed up their review process and the new ways they are constantly developing to keep their environment safe for all to use. The fact is, social platforms face a huge and difficult issue that doesn’t have an overnight solution.

At the heart of the problem is the sheer volume of content posted every day (running into the billions of items), coupled with a dependence on reactive moderation, which relies on users to report offensive, toxic and illegal images, videos and comments. Combined, these factors lead to removal times that people believe are simply too long.

Issues with user reporting

There are three key problems with solely placing the onus of reporting inappropriate content on users:

1. The majority of users do not report inappropriate content. In fact, many exacerbate the problem by sharing the content rather than reporting it to the platform. The graphic videos mentioned attracted over 2,351 views before they were reported.

2. Users that do report content are slow to do so. It often takes hours, and sometimes days, for users to make a report. It took over 24 hours for the content above to be reported by a user, even with all those views.

3. There are a huge number of false or low-priority user reports. Users tend to report things they simply don’t like rather than content that is genuinely inappropriate or illegal. This means that, despite platforms employing thousands of human moderators, the vast majority of their time is not spent reviewing the real, high-priority issues that need attention.

When we are talking about managing billions of pieces of content each and every day, the challenge of finding a toxic video or image in a haystack of content isn’t an easy one to solve. We are not yet at a stage where we can rely solely on AI or ‘robots’ to detect and remove toxic, inappropriate and illegal user generated content. AI is a huge asset, but it is still in its infancy when it comes to making 100% of decisions – the result being that vast quantities of good content are incorrectly removed (sparking freedom-of-speech debates) while plenty of negative content remains for all to see.

The fact is, today AI is not ready to take on the job of content moderation on its own, but it still provides most of the horsepower.

At Crisp, we moderate billions of pieces of user generated content – comments, posts, videos and images – every month, originating from the most popular social platforms, custom marketing campaigns, forums and review sites. This means we share the same issues as the big four social platforms, and we find that user reporting is an important tool in the arsenal for fast, accurate moderation; but we don’t rely on it alone.

We believe in bringing multiple factors together, including user reports, in real time to manage content moderation and identify the real problems – the needles in the haystack:

Reputation and behavior analysis

We have built technology that analyses the reputation and behavior of users. It looks at both the reputation of the user reporting the content and the behavior of the person who posted it. Together, these signals help us cut through the noise and provide a good input for the rest of our stack.
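As a rough illustration of combining the two signals, here is a minimal sketch. All function names, weights and thresholds are hypothetical – they are not Crisp's actual model, just one way such signals could be blended into a single score:

```python
# Hypothetical sketch: weight a user report by the reporter's track record,
# and raise the risk score when the poster's own history looks suspicious.
# Weights and thresholds below are illustrative assumptions only.

def report_signal(reporter_accuracy: float, report_count: int) -> float:
    """Credibility of the incoming reports (0..1).

    reporter_accuracy: fraction of this user's past reports that were upheld.
    report_count: number of independent reports on this piece of content.
    """
    # Each extra independent report adds diminishing credibility.
    base = min(1.0, report_count * 0.2)
    return base * reporter_accuracy

def poster_signal(account_age_days: int, prior_violations: int) -> float:
    """How suspicious the poster's own history looks (0..1)."""
    freshness = 1.0 if account_age_days < 7 else 0.2  # brand-new accounts are riskier
    violations = min(1.0, prior_violations * 0.3)
    return max(freshness * 0.5, violations)

def reputation_score(reporter_accuracy: float, report_count: int,
                     account_age_days: int, prior_violations: int) -> float:
    """Blend both signals into one number for the rest of the stack."""
    return (0.6 * report_signal(reporter_accuracy, report_count)
            + 0.4 * poster_signal(account_age_days, prior_violations))
```

For example, three reports from a historically accurate reporter about a two-day-old account would already produce a meaningful score, even before any pixels are analysed.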

Image and video AI recognition

Image and video recognition isn’t easy but there are some great players out there, such as Google, Microsoft and Amazon, who offer off-the-shelf services. Whilst working with these providers, we have also spent the last five years developing and training our own CNNs (Convolutional Neural Networks) solely on risky content such as gore, pornography, hate and illegal content, as well as weapons and trademarks. We are proud to have achieved the industry’s highest accuracy when it comes to true and false positives.

Again, real-time image recognition on its own doesn’t solve the problem, but it is a valuable signal in the stack.
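One practical detail when turning an image classifier into a video signal is how to go from per-frame scores to a single video-level score. The sketch below is an assumption about one common approach (sample frames, classify each, aggregate); `video_risk` deliberately uses max rather than mean, since a single graphic frame should flag the whole video:

```python
# Illustrative sketch: turning per-frame classifier scores into one
# video-level signal. The frame classifier itself is out of scope here --
# it could be an off-the-shelf vision API or an in-house CNN.

def sample_frames(n_frames: int, every: int = 30) -> range:
    """Indices of frames to classify (e.g. one per second at 30 fps)."""
    return range(0, n_frames, every)

def video_risk(frame_scores: list) -> float:
    """Aggregate per-frame risk scores (0..1) into one video score.

    Max, not mean: one graphic frame should flag the whole video,
    even if most frames are benign.
    """
    return max(frame_scores, default=0.0)
```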

Context profiling

We also assess numerous signals surrounding the video or image, such as the title, description and comments, together with the number of shares or references to it. This provides even more context on whether the piece of content is or isn’t appropriate.
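A toy version of this idea is sketched below, assuming two hypothetical inputs: a keyword match over the surrounding text, amplified by how fast the content is spreading. The term list and the virality formula are illustrative placeholders, not Crisp's actual profiling logic:

```python
# Hypothetical context-profiling sketch: score the metadata around a video
# (title, description, comments, shares) rather than the pixels themselves.

RISK_TERMS = {"shooting", "gore", "killed", "graphic"}  # illustrative only

def text_signal(texts: list) -> float:
    """Fraction of surrounding text fields containing a risk term."""
    if not texts:
        return 0.0
    hits = sum(any(term in t.lower() for term in RISK_TERMS) for t in texts)
    return hits / len(texts)

def context_score(title: str, description: str,
                  comments: list, shares: int) -> float:
    """Blend the text signal with a simple share-velocity amplifier."""
    texts = [title, description, *comments]
    virality = min(1.0, shares / 1000)       # rapid sharing amplifies the signal
    return min(1.0, text_signal(texts) * (1 + virality))
```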

Overall risk AI

Every signal we collect, from reputation analysis to context profiling, is processed in real time by our AI engines to determine the probability of a piece of content being a risk. Depending on this confidence rating, the content is either immediately removed or prioritized (based on the severity of the risk) for one of our human moderators to review.
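In skeleton form, this step is a blend-then-route decision. The weights and thresholds below are made-up values for illustration; the structure (one combined confidence, three possible outcomes) is the point:

```python
# Sketch of the routing step. Weights and thresholds are illustrative
# assumptions, not Crisp's production values.

def overall_risk(reputation: float, image: float, context: float) -> float:
    """Weighted blend of the upstream signals (each 0..1)."""
    return 0.25 * reputation + 0.5 * image + 0.25 * context

def route(risk: float) -> str:
    """Decide what happens to the content based on overall confidence."""
    if risk >= 0.9:
        return "remove"        # high confidence: take down immediately
    if risk >= 0.4:
        return "human_review"  # ambiguous: prioritize for a moderator
    return "allow"             # low risk: publish, keep monitoring
```

The middle band is what keeps humans in the loop: only content the AI is genuinely unsure about consumes moderator time.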

Fast human intervention

AI alone can flag up many false positives, so risky or ambiguous content is assessed by skilled human moderators within minutes.

To do this at scale whilst retaining granular decision making isn’t easy. We have spent years experimenting by building different tools and user interfaces. Today our goal is for human Risk Analysts (moderators) to review and action potentially risky content within minutes of it being posted and to achieve 99.9% accuracy.
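The ordering problem underneath this – always show moderators the riskiest item first – maps naturally onto a priority queue. A minimal sketch using Python's standard `heapq` (a min-heap, hence the negated risk), with a counter as a tie-breaker to keep insertion order stable; this is our illustration, not Crisp's actual tooling:

```python
# Sketch: a severity-ordered review queue, so the highest-risk content
# always reaches a moderator first.

import heapq

class ReviewQueue:
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker: equal risks come out in arrival order

    def push(self, risk: float, content_id: str) -> None:
        # heapq is a min-heap, so store negative risk to pop the largest first.
        heapq.heappush(self._heap, (-risk, self._counter, content_id))
        self._counter += 1

    def pop(self) -> str:
        """Return the id of the highest-risk item awaiting review."""
        return heapq.heappop(self._heap)[2]
```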

Feedback loop

Everything we learn goes back into our AI so we can continually improve our recognition capabilities. Through trial and error, we’ve found this is a fast and effective method for image and video moderation, and as this topic continues to hit the headlines, we will embrace new ways to refine our process.
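One simple, concrete piece of such a loop is tracking how often the AI's flags are actually upheld by moderators, which tells you whether thresholds need tuning. The sketch below is a generic illustration of that idea, not a description of Crisp's internal metrics:

```python
# Illustrative feedback-loop fragment: measure the precision of the AI's
# flags against moderator verdicts, so thresholds can be tuned over time.

class FeedbackTracker:
    def __init__(self):
        self.flagged = 0  # items the AI flagged for review/removal
        self.upheld = 0   # of those, how many a moderator confirmed

    def record(self, ai_flagged: bool, moderator_upheld: bool) -> None:
        if ai_flagged:
            self.flagged += 1
            if moderator_upheld:
                self.upheld += 1

    @property
    def precision(self) -> float:
        """Share of AI flags that moderators confirmed."""
        return self.upheld / self.flagged if self.flagged else 0.0
```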


If you are building your own tools or systems for real-time image and video moderation, we hope that the above gives a good sense of the approach we have taken at Crisp. It’s one solution to delivering services to thousands of brands in over 50 languages across billions of pieces of user generated content, all in real time.


Written by Crisp

Crisp’s mission is to provide the fastest detection of critical issues and crises to protect global brands and platforms. From supporting PRs in reputational management and helping pharma brands to remain compliant, to protecting vulnerable individuals from the exploitation of bad actors... wherever social media has the potential to trigger a crisis, you can be sure we have expertise to share.
