Is hate speech the price of free speech?

It has been nearly 70 years since the United Nations (UN) adopted the Universal Declaration of Human Rights, giving us the right to freedom of speech without fear of government retaliation, censorship, or societal sanction.

In recent years, this freedom has been wholeheartedly embraced on social media – where the technology to self-publish provides the perfect platform for people to say what they think. But is it right that any opinion can be published? With millions of people using social media every minute of every day to voice their opinions, the dividing line between what counts as free speech and what is seen as hate speech has become increasingly blurred.

Last summer, following the Brexit vote in the UK, hate crimes soared by 41%, according to the Home Office. It’s not unusual to see a rise in heated opinion around political issues, but we’re now seeing hate speech spill over into racial violence and an increase in hate crimes against minority groups.

Other, more recent events, such as the violence in Charlottesville, have brought the issue of hate speech into even sharper relief.

As a result, pressure to combat hate speech online is growing. Yvette Cooper, a UK Labour MP and founder of Reclaim the Internet, recently attacked Twitter, saying: "Twitter claims to stop hate speech but they just don’t do it in practice. Abusive content needs to be removed far more quickly."

As Charlottesville proved, hate speech can lead to hate crime, argues the UK’s Director of Public Prosecutions, Alison Saunders: "…it is only right that we do everything possible to ensure that people are protected from abuse that can now follow them everywhere via the screen of their smartphone or tablet. Whether shouted in their face on the street, daubed on a wall or tweeted into their living room, hateful abuse can have a devastating impact on victims."


"Hate is hate. Online abusers must be dealt with harshly" 
Alison Saunders


Even the ACLU (American Civil Liberties Union), a nonpartisan nonprofit organization whose stated mission is "to defend and preserve the individual rights and liberties guaranteed to every person in this country by the Constitution and laws of the United States", is looking to adjust its stance on free speech following Charlottesville.

But is hate speech legal?

Yes…and no. It all depends on where you are.

In 1966 the UN adopted a multilateral treaty – the International Covenant on Civil and Political Rights (ICCPR) – which came into force in March 1976. The treaty states that:

"..any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law".

The United States Senate ratified the ICCPR in 1992, with five reservations, five understandings, and four declarations.

So that’s all good, yes? Well, no. In the US, for instance, the ICCPR is difficult to apply because it runs up against the First Amendment, which states:

"Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances."

In the UK there’s no First Amendment to worry about, but there is still no specific law against hate speech. Protection is available, however, under the Public Order Act 1986, which forbids stirring up racial hatred against individuals or groups, including through written material designed to cause harassment and distress. In 2006 the Act was amended to cover religious hatred, and in 2008 to forbid the incitement of hatred on the grounds of sexual orientation.

Hate speech and social media

Social media is one place in particular where there is a growing need (and desire) to combat hate speech. In May last year the major social media platforms (Facebook, Twitter and YouTube) agreed to a new European Code of Conduct that requires them to review "the majority of" hateful online content within 24 hours of notification – and to remove it if necessary.

This went a stage further in June this year, when Germany passed a law (the Network Enforcement Act, or NetzDG) under which social media companies operating in the country face fines of up to €50 million if they fail to delete illegal, racist or slanderous comments and posts within 24 hours.

Other platforms are stepping up too. Instagram’s CEO Kevin Systrom recently described the internet as "a cesspool" that he had to clean up, and Instagram has since launched new filters to start to address the issue.

So do I need to do anything?

If you want to ensure that your online communities are places where everyone feels safe, and where people can speak freely without being abused or bullied, then yes, you do need to do something.

The global brands we work with recognize that they are the guardians of their own pages. They want to create a healthy place online to engage with their followers. We work closely with these brands to address the line between free speech and hate speech by drawing up bespoke social page rules that deal with hurtful comments in a way that reflects the brand’s own personality. This isn’t the time for a ‘cookie-cutter’ solution. Each brand is different, reflecting the rich variety of people they interact with in the online world.

It is also a constantly evolving task. The terms used to convey ‘hate’ are always shifting, and context is incredibly important too. A simple watermelon emoji, for instance, can mean multiple things – from a lovely refreshing fruit to a sexual act – depending on how it is used. We are continually developing our technology to seek out new terms and images used for expressing hate, across multiple languages.
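To make that concrete, here is a minimal, purely illustrative sketch of how a rule-based moderator might combine brand-specific blocklists with context checks. This is not Crisp’s actual technology: the brand name, terms and rules below are all hypothetical, and a production system would add machine learning, image analysis and human review on top.

```python
import re

# Hypothetical, simplified rule set for illustration only – real moderation
# systems use far richer models, image analysis and human review.
BRAND_RULES = {
    "acme_drinks": {                       # hypothetical brand
        "blocked_terms": {"placeholder_slur1", "placeholder_slur2"},
        "contextual": [
            # The watermelon emoji is only escalated when it appears near
            # language aimed at people, since on its own it is innocent.
            (re.compile("🍉"), re.compile(r"\byou\b|\bthey\b|\bpeople\b", re.I)),
        ],
    },
}

def review_comment(brand: str, comment: str) -> str:
    """Return 'remove', 'escalate' or 'allow' for a single comment."""
    rules = BRAND_RULES[brand]
    words = set(re.findall(r"\w+", comment.lower()))

    # Outright blocked terms are removed immediately.
    if words & rules["blocked_terms"]:
        return "remove"

    # Context-dependent symbols go to a human moderator for review.
    for symbol, context in rules["contextual"]:
        if symbol.search(comment) and context.search(comment):
            return "escalate"

    return "allow"

print(review_comment("acme_drinks", "Nothing beats watermelon 🍉 in summer!"))  # allow
print(review_comment("acme_drinks", "🍉 is all you people are good for"))       # escalate
```

Even in this toy form, the design point is visible: the same emoji passes in one comment and is escalated in another, because the rule looks at the words around it rather than the symbol alone.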

We’re proud of the work we do for our customers because, for us, hate speech comes at too high a cost in terms of the damage it can cause others.

To find out more about how to address hate speech, and other offensive content, on your social media channels, download our free guide to social media moderation or contact the Crisp team.