Google and Microsoft have made large leaps forward in online child safety this week. Despite originally claiming it was impossible to block search results, the internet giants have done a U-turn and introduced new algorithms to prevent illegal material appearing in searches.
About 100,000 search terms will now return zero results in a bid to stop people accessing illegal child abuse images. It's hoped that these developments will serve to protect children, who seem to be becoming increasingly vulnerable to new and emerging digital threats. From now on, not only will search terms be blocked, but a message will also appear to warn the user that such images are illegal – in theory making it more difficult to find images of child abuse and giving potential offenders the chance to modify their behaviour.
This is a fantastic step forward in child protection, but experts warn that although progress has been made, it's still not enough to stop people viewing and posting images of child abuse. Former head of the Child Exploitation and Online Protection Centre (CEOP) Jim Gamble told the BBC that he doesn't believe the new changes will be effective because predators "don't go on to Google to search for images. They go on to the dark corners of the internet on peer-to-peer websites."
This is a good point, as sex offenders are becoming increasingly technologically advanced. It's therefore important to stay one step (if not several) ahead. Blocking content on search engines is a great start, but how can we make sure children are protected both on and offline?
Thankfully Crisp Thinking has the answer. Our world-leading image moderation helps in the fight against child abuse by detecting and removing inappropriate and illegal images. These may pop up as spam on social media pages, or as picture comments on websites, in forums and in other online communities. Crisp's image moderation stops these images being seen on all social media platforms – providing a more comprehensive, adaptable and scalable form of protection than can be achieved by blocking internet search results.
Google communications director Peter Barron told the BBC: “We’re agreed that child sexual imagery is a case apart, it’s illegal everywhere in the world, there’s a consensus on that. It’s absolutely right that we identify this stuff, we remove it and we report it to the authorities.”
I couldn't have put it better myself. It is important – in fact, it's so important that it prompted Peter Wanless, chief executive officer of the NSPCC, to say: "This is a key child protection issue of a generation – we cannot fail."
But Crisp doesn't just tackle child abuse by making images less accessible; we also protect children from paedophiles who may try to contact them online. Our kids' moderation service means that young internet users are given the very highest level of protection without ruining their online experience. Most moderation providers just detect certain words that they deem inappropriate, but this is easy for internet-savvy predators to work around. Crisp detects not only words but also context, so that any suspicious behaviour is immediately flagged and users who may pose a threat to a child don't slip through the net.
I think we can learn a valuable lesson from Microsoft and Google. These fierce competitors have put cut-throat business and one-upmanship to one side in a bid to make the internet a safer place. Such a global issue calls for companies to unite to protect the world's children. That's why I urge anyone who provides a public platform to step up and implement methods to make sure their creation provides a safe environment.
A combination of blocked search results and high-quality online moderation will undoubtedly make it much more difficult to view child abuse images on the internet – and the more websites that use a moderation company like Crisp Thinking, the safer both online and offline communities will become.
When it comes to tackling child abuse we must stand up and fight… so please lend a hand – please moderate your site.