Profiling chat rooms, not chat: How can kids’ spaces stay 100% safe?

Posted by Sarah Arthur

As kids’ digital spaces, such as online games, apps and chat rooms, come under growing pressure from governments and parent groups to sharpen up their online safety, what can platform and game creators do to manage this business-critical issue?

Safety by design is not a topic that’s taken lightly in platform development. Creators design features that are rigorously assessed by Safety Teams, and many digital spaces now have monitoring technology and human moderation processes in place around the clock. Likewise, while players will try to game the game, few set out to offend or abuse other players. But when edgy environments and boundary-pushing players come together, we get headline-hitting online safety concerns.

A platform found itself in the headlines recently after a mum spotted her 6-year-old daughter in a virtual sex room level. The girl had been invited there by a stranger whom she had accidentally added as a friend in the game.

In cases such as these, where online platforms let players build virtual worlds and chat in rooms and personal messages, pre-publication review only goes so far. Even if the platform’s moderation team assesses all content before it is uploaded into the game, once a game is live, platforms usually find it difficult to review everything that is published afterwards.

How can you moderate every game, photo, and line of chat, 24/7, and in every language?

Like every game creator, we know that only a small percentage of players are actually responsible for creating the harmful or offensive content that could damage your platform’s reputation, but finding that truly harmful content is like finding a needle in a haystack.

So we have found it more effective to take a wider view of what is happening across different areas of the game, alongside using a combination of humans and AI to moderate every line of commentary.

For brands, Crisp has developed AI to profile social media users, mapping their traits and predicting what type of content they will post or what kind of conversations they will become involved in. In the same way as we profile people, we can also profile online environments.
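
To make the idea concrete, here is a minimal sketch of what profiling a person from their message history might look like. Everything in it, the keyword lists, classify_message() and profile_user(), is a hypothetical stand-in rather than Crisp’s actual system; a production profiler would use trained models, not keyword matching.

```python
from collections import Counter

# Toy keyword lists standing in for a trained classifier; a real
# profiler would use ML models, not keyword matching.
RISK_KEYWORDS = {
    "sexual": {"sexy", "nudes"},
    "illegal": {"drugs", "stolen"},
}

def classify_message(text: str) -> str:
    """Return the first risk category whose keywords appear, else 'ok'."""
    words = set(text.lower().split())
    for category, keywords in RISK_KEYWORDS.items():
        if words & keywords:
            return category
    return "ok"

def profile_user(messages: list[str]) -> dict[str, float]:
    """Map a user's message history to the share of messages per category,
    a simple trait profile hinting at what they are likely to post next."""
    labels = Counter(classify_message(m) for m in messages)
    total = sum(labels.values())
    return {category: count / total for category, count in labels.items()}

# Example: yields {"ok": 0.67, "sexual": 0.33} for this history.
print(profile_user(["hi there", "send nudes", "want to trade pets?"]))
```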

For children’s games, we profile the commentary in and around rooms, forums, in-game chat, messaging and on social media, as well as moderating each of these spaces. Using this process, we can quickly highlight which areas of the game attract a lot of sexual commentary, for instance, or which areas generate a lot of coded chat around illegal activity or grooming flags. In the recent news story, this method could have profiled the sex room and alerted human Risk Analysts to that toxic space.
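
A minimal sketch of how that environment profiling might work, under the same assumptions as above: aggregate per-message labels by room and alert human Risk Analysts when the share of risky chat in a room crosses a threshold. profile_rooms() and the threshold value are illustrative, and the labels reuse the toy classify_message() from the earlier sketch.

```python
from collections import Counter, defaultdict

def profile_rooms(chat_log, alert_threshold=0.15):
    """Aggregate per-message risk labels by room and flag rooms where
    risky chat exceeds the threshold, so analysts review environments
    rather than one message at a time.

    chat_log: iterable of (room_id, label) pairs, where each label
    comes from a per-message classifier such as classify_message().
    """
    by_room = defaultdict(Counter)
    for room_id, label in chat_log:
        by_room[room_id][label] += 1

    alerts = []
    for room_id, labels in by_room.items():
        total = sum(labels.values())
        risky_rate = (total - labels["ok"]) / total
        if risky_rate >= alert_threshold:
            alerts.append((room_id, risky_rate, dict(labels)))
    return alerts

# Example: room "r42" is flagged because a third of its chat is risky,
# even though the busy lobby produces far more messages overall.
log = [("lobby", "ok")] * 50 + [("r42", "ok")] * 8 + [("r42", "sexual")] * 4
for room, rate, breakdown in profile_rooms(log):
    print(f"ALERT {room}: {rate:.0%} risky chat {breakdown}")
```

The aggregation is the point: a room can be toxic overall even when no single message is unambiguous enough to act on, which is exactly the needle-in-a-haystack gap that per-message moderation leaves.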

With this wider view of the game environment and powerful profiling data, platforms are better equipped to zone in on the areas of the game or platform that are likely to attract the minority of ‘bad actors’ and make informed decisions about which rooms and forums need removing from the game. This profiling information can also be used to develop new, safe platform features and to build games that are more likely to nurture communities for young players.

You can read more about how Crisp protects children in digital spaces here.

Written by Sarah Arthur

Sarah is a PR and Content Copywriter at Crisp. She loves bringing complicated concepts to life, but hates a misplaced apostrophe.