In the wake of Facebook’s moderator training manuals being obtained by The Guardian newspaper on Sunday, the world has been questioning whether Facebook should decide what is and isn’t acceptable online, why videos of violent deaths are not always deleted, and why photos of child abuse are merely marked as ‘disturbing’.
At Crisp, moderation is not bound by the same rules as social platforms such as Facebook: ultimately, our clients are the arbiters of what is and isn’t acceptable for their followers to see. We draw on our decade of experience to guide them in their decisions, so that they are not caught out by grey areas.
Yesterday afternoon, our CEO, Adam Hildreth, was interviewed by the BBC World Service about how Crisp’s objective approach to moderation removes the grey areas that are troubling Facebook. Speaking to Susannah Streeter on World Business Report, Adam said that the leaked training guides revealed just how hard a job it is to moderate a site like Facebook. He also commented on where Facebook draws the line on inappropriate content, and how that highlights the stark differences in where we as a society draw it.
There are some cases where Facebook and Crisp take different views on social acceptability; Adam cited self-harm as an example. Facebook appears to accept that the world may see some depictions of self-harm, whereas at Crisp we consider self-harm inappropriate on a brand’s social pages. In fact, we advise our clients to hide suicide attempts on their Facebook pages so that only friends of the user can see the post, and the person’s cry for help doesn’t go unheard.
On the topic of freedom of speech, Adam feels that Facebook are not consistent in their moderation policies: they appear to remove content from certain political parties but not others.
Adam went on to explain how moderation works at Crisp. Unlike Facebook, we don’t let individual humans decide what should and shouldn’t be removed, because those decisions can be biased by the moderator’s own point of view. Instead, we remove subjectivity by combining algorithms with humans: when a questionable piece of content is flagged by our AI, our Risk Analysts (moderators) answer a series of questions about the item to determine the severity of its content. It is those black-and-white answers that determine the item’s ultimate fate.
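To make the idea concrete, the questionnaire-style flow described above can be sketched in a few lines of Python. This is purely an illustration of the pattern (yes/no answers mapped deterministically to an action): the questions, weights, thresholds, and action names below are invented for the example and are not Crisp’s actual rule book.

```python
# Illustrative sketch of a questionnaire-based moderation decision.
# All questions, weights, and thresholds here are hypothetical.
from dataclasses import dataclass, field

# Each yes/no question carries a weight reflecting how serious a "yes" is.
QUESTIONS = {
    "depicts_self_harm": 3,
    "contains_threat": 3,
    "targets_individual": 2,
    "is_graphic": 2,
}

@dataclass
class FlaggedItem:
    text: str
    answers: dict = field(default_factory=dict)  # analyst's yes/no answers

def severity(item: FlaggedItem) -> int:
    """Sum the weights of every question the analyst answered 'yes'."""
    return sum(w for q, w in QUESTIONS.items() if item.answers.get(q))

def decide(item: FlaggedItem, remove_at: int = 4, hide_at: int = 2) -> str:
    """Map the black-and-white answers to a single deterministic action."""
    s = severity(item)
    if s >= remove_at:
        return "remove"
    if s >= hide_at:
        return "hide"  # e.g. visible only to the poster's friends
    return "allow"

item = FlaggedItem("…", {"depicts_self_harm": True, "is_graphic": True})
print(decide(item))  # severity 3 + 2 = 5, so the item is removed
```

Because the analyst supplies only factual yes/no answers and the mapping to an action is fixed, two analysts answering the same questions the same way always produce the same outcome, which is the point of the approach.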
Adam added that, of course, many training hours and rule books help our moderation team answer these questions, and some items are reviewed twice to ensure we reach the right decision every time.
You can hear Adam Hildreth’s BBC interview for the next 30 days by downloading the World Business Report podcast from May 22nd, entitled ‘Ford Appoints New Chief Executive’: http://www.bbc.co.uk/programmes/p02tb8vq/episodes/downloads