G7: Making the internet hostile to terrorism

Terrorist-related content online has become a major issue on a global scale. At today’s G7 meeting in Italy, it was agreed that internet companies will continue to take a proactive role and act decisively to make their platforms more hostile to terrorism. They will also support civil society partners in developing alternative narratives online.

This agreement is the culmination of months of work by social platforms, government bodies, tech companies, media investigators and public campaigns:

8 months ago (9 February 2017)

The Times discovered that some of the world’s biggest brands were inadvertently funding terrorism because their online adverts were being placed against extremist videos.

4 months ago (26 June 2017)

Facebook, Google and YouTube, Microsoft, and Twitter formed the Global Internet Forum to Counter Terrorism, which seeks to stop the spread of terrorism and extremism online. Since then, most major platforms have announced new rules to remove terrorist content from their sites.

1 month ago (20 September 2017)

UK Prime Minister Theresa May told the United Nations that she was challenging social networks and search engines to find ways to take down terrorist material within two hours.

22 days ago (28 September 2017)

The European Commission presented guidelines and principles to help online platforms quickly and proactively prevent, detect and remove illegal content that incites hatred, violence and terrorism.

3 days ago (17 October 2017)

Twitter and YouTube announced new rules to combat hate speech and the glorification of violence.

Today (20 October 2017)

The G7 Ministers of the Interior meeting focused on two key topics: preventing terrorist use of the internet, in partnership with the private sector, and collaborating in the fight against so-called “foreign fighters” through information exchange and activities to halt extremists.

The participants (G7 governments and Google, Microsoft, Facebook and Twitter) agreed to increase joint efforts in four main areas:

  • the use of automated technology for the rapid detection and removal of terrorist content and the prevention of its further dissemination
  • the sharing of best practices and technology to enhance the resilience of smaller companies
  • improving our knowledge base through research and development
  • empowering civil society partners to develop alternative narratives.

These are major challenges, but today’s agreement demonstrates that there is a global commitment to making significant progress.

This month Crisp unveiled new technology called Capture, which helps social platforms identify new terrorist material on their platforms within minutes.

This technology is already being used to detect hundreds of items of new terrorist content online each day. Capture monitors the online places where illegal content is most likely to be shared. When it finds a suspected terrorist video, image, piece of text or chatter, Capture reports the risky content to the social platform. The platform’s team then decides whether the content breaks its guidelines, whether it should be removed, and how quickly.
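As an illustration of that flag-and-review workflow, here is a minimal sketch in Python. The item fields, the confidence threshold and the hand-off function are assumptions made purely for the example; they do not describe how Crisp’s Capture system is actually implemented.

```python
from dataclasses import dataclass


@dataclass
class SuspectedItem:
    url: str
    content_type: str   # e.g. "video", "image", "text" or "chatter"
    confidence: float   # detector's score that the item is terrorist content


def report_to_platform(item: SuspectedItem) -> None:
    # Hypothetical hand-off: in practice this would be an API call or a
    # ticket in the platform's own moderation queue.
    print(f"Flagged {item.content_type} at {item.url} "
          f"(confidence {item.confidence:.2f}) for human review")


def triage(items, report_threshold: float = 0.8):
    """Forward high-confidence candidates to the platform's team, which
    decides whether the content breaks its guidelines, whether it should
    be removed, and how quickly."""
    flagged = [item for item in items if item.confidence >= report_threshold]
    for item in flagged:
        report_to_platform(item)
    return flagged


if __name__ == "__main__":
    candidates = [
        SuspectedItem("https://example.com/v/123", "video", 0.93),
        SuspectedItem("https://example.com/p/456", "text", 0.41),
    ]
    triage(candidates)  # only the high-confidence video is reported for review
```

The key point the sketch mirrors is that the detector only flags and reports; the decision to remove content stays with the platform’s own team.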

Crisp’s Capture technology is currently discovering illegal content which falls into the following categories:

  • 73% - non-distressing terrorist propaganda: images and videos which glorify or promote the terrorist cause, but without the use of extremely distressing material
  • 15% - violent terrorist propaganda: beheadings and bloody gore resulting from terrorist activity, shown in the context of glorifying or promoting the terrorist cause
  • 8% - extremist Islamic preaching: videos and written pieces that preach an extreme version of Islam and promote hatred or violence against others
  • 2% - instructional videos on terrorist devices: detailed instructions on how to create explosive or chemical devices, or weaponry, for use in terrorist acts
  • 2% - incitements to commit terrorist acts: content that directly incites the audience to carry out terrorist acts that would result in the harm or death of members of the public.

Looking at the nature of the content being identified, and remembering that each piece of content can be shared thousands of times, it’s clear why Crisp, tech companies and governments across the globe are taking such a hard line against extremist content online.
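To make that scale concrete, here is a rough back-of-envelope estimate. The specific figures are assumptions chosen only to sit within the ranges mentioned above (“hundreds of items a day”, “shared thousands of times”); they are not reported numbers.

```python
# Illustrative orders of magnitude only; assumed figures, not measured data.
new_items_per_day = 300      # "hundreds of items of new terrorist content each day"
shares_per_item = 2_000      # "each piece of content can be shared thousands of times"

potential_exposures_per_day = new_items_per_day * shares_per_item
print(f"~{potential_exposures_per_day:,} potential exposures per day")  # ~600,000
```

Even with conservative assumptions, a few hundred new items a day translates into hundreds of thousands of potential exposures.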

You can read more about Crisp’s Capture technology here. Or social platforms can arrange a technical briefing here.