We read every piece of content from our clients' social media channels, looking for complex word combinations that trigger any of over 100 risks, such as bomb threats, hate speech, or illness after taking prescribed medication. When a risk is found, the comment and its context are reviewed by one of our skilled Risk Analysts. It's their job to understand the real intent of the comment and to tag it appropriately so the right action can be taken.
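As a rough illustration, this kind of phrase-combination flagging can be sketched as a lookup against a risk lexicon. The categories, phrases, and function below are invented for illustration only, not Crisp's actual detection system:

```python
import re

# Illustrative sample only: a real system would cover 100+ risk categories,
# far richer phrase combinations, and multiple languages.
RISK_PATTERNS = {
    "physical_threat": re.compile(r"\b(bomb|blow up)\b", re.IGNORECASE),
    "hate_speech": re.compile(r"\bhate\s+(all|every)\b", re.IGNORECASE),
    # Word *combinations*: a symptom term followed by a medication term.
    "adverse_reaction": re.compile(
        r"\b(sick|ill|rash)\b.*\b(taking|took)\b.*\b(medication|pills?)\b",
        re.IGNORECASE,
    ),
}

def flag_risks(comment: str) -> list[str]:
    """Return the risk categories a comment triggers, if any."""
    return [name for name, pattern in RISK_PATTERNS.items()
            if pattern.search(comment)]
```

Anything flagged this way would then be queued for a human Risk Analyst, who judges the real intent behind the words.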
Crisp's Risk Analyst team uses its experience to predict the risks a business could encounter and to advise whether best practice is to approve a comment or remove it. This is often a delicate balance between the consequences of allowing a post to live online and the damage that removing it could do.
Take, for instance, a suicide threat posted on a client's Facebook wall. Most people would be inclined to remove it: it could be distressing for other users, it could leave the sender open to bullying, or it could damage the brand if the response is handled badly.
The option to hide
Facebook gives brands the option to hide a post from everyone apart from the sender and their friends. If a suicide threat does appear on a client's wall, we recommend hiding the comment from the wider community but leaving it visible to friends. This makes it easier to pass the comment on to Facebook, who then send the user helpful resources, and it allows the people closest to the sender to see the distress call and take action in their own way.
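For teams managing pages programmatically, hiding rather than deleting maps onto the Graph API's `is_hidden` field on a comment. A minimal sketch, assuming a valid page access token; the API version, wrapper function, and `opener` parameter (injected here so the call can be exercised without a live request) are our own illustration:

```python
import json
from urllib import parse, request

# Version pinned here as an example; use whichever Graph API version you target.
GRAPH_API = "https://graph.facebook.com/v19.0"

def hide_comment(comment_id: str, page_token: str, opener=request.urlopen) -> bool:
    """Hide (not delete) a Facebook comment by setting is_hidden=true.

    A hidden comment remains visible to its author and their friends,
    so the record is preserved if authorities later need it.
    """
    data = parse.urlencode(
        {"is_hidden": "true", "access_token": page_token}
    ).encode()
    with opener(f"{GRAPH_API}/{comment_id}", data=data) as resp:
        return json.load(resp).get("success", False)
```

Because the comment still exists, it can later be un-hidden or handed over in full if legal or safety follow-up is needed.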
We also recommend hiding, rather than deleting, any comments that demand action. The information may need to be retrieved later for any number of reasons: legal action following a product defect, a bomb threat made by a minor whose parents were unaware, or a physical security threat requiring police intervention.
Facebook is the only social media channel to offer a hide option. But in sensitive situations, it is vital that appropriate care and action are taken. If a post is removed without trace, the opportunity for the proper authorities to intervene with support or legal action is removed with it.
With social media seen as a communication tool for brands, it's good to know that flagging and intervening in off-brand content is an important part of the moderation process. Beyond that, early risk detection is a key way to protect your employees and the individuals in your online community.