‘Facts do not cease to exist because they are ignored.’ So said 20th-century English philosopher and writer Aldous Huxley. Mr Huxley’s statement deserves mention this week in light of Instagram’s announcement that it will roll out a filter tool in an effort to ‘keep comments safe’ for all users.
On Monday, CEO and co-founder Kevin Systrom announced that the online photo-sharing and social networking app would enable users to filter content. In his blog post, Mr Systrom said the new keyword moderation tool would allow users to list words they believe to be offensive or inappropriate. Those words would then be hidden from their feeds.
In addition to hiding comments based on a default list of terms reported to Instagram moderators as inappropriate, users would also have the option to add custom keywords they feel might be used to target them specifically on the app.
The feature is timely.
Escalating misogynist, racist and xenophobic attacks online have caused a heightened sense of anxiety amongst us all. Adding to our collective fears has been the experience of high-profile victims of social media attacks, such as Ghostbusters actress Leslie Jones on Twitter recently.
However, despite these efforts, Instagram's filter tool falls short of the mark.
Why Filtering Comments Does Not Protect Users
Yes, users can filter comments containing specific keywords or phrases from appearing on their profile feed.
However, crucially, those comments are not being removed by Instagram entirely. So while you won’t be able to see inappropriate comments on your personal or corporate Instagram account, be aware that others might.
For example, if one of your chosen ‘moderated’ words is used by another user on one of your Instagram posts, you and your followers may not see that comment. However, anyone who knows that user potentially will.
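To make the point concrete, here is a minimal sketch of per-viewer keyword filtering. This is purely illustrative and in no way Instagram's actual implementation: the key behaviour it demonstrates is that a ‘moderated’ comment is only hidden from viewers whose own block list matches it, while it still exists for everyone else.

```python
# Hypothetical simulation of per-viewer comment filtering (not
# Instagram's real code). A comment is hidden only for a viewer
# whose block list contains one of its words.

def visible_comments(comments, viewer_blocklist):
    """Return the comments a given viewer sees, hiding any comment
    that contains one of that viewer's blocked keywords."""
    blocked = {word.lower() for word in viewer_blocklist}
    return [
        comment for comment in comments
        if not any(word in blocked for word in comment.lower().split())
    ]

comments = ["Great photo!", "This brand is garbage"]

# The account owner blocks "garbage" and never sees the second comment...
owner_view = visible_comments(comments, ["garbage"])

# ...but a viewer with no block list still sees everything.
other_view = visible_comments(comments, [])
```

The offending comment disappears from `owner_view` but remains intact in `other_view`, which is exactly the ‘head in the sand’ problem described above.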
This might create a nicer online experience for you initially, but really it’s a virtual case of ‘sticking your head in the sand’.
Importantly, this is problematic for all users, including companies wanting to maintain a strong brand presence and reputation online. It could even cause more serious problems by allowing inappropriate user-generated content to remain on a feed for longer than normal without the signed-in user's knowledge.
Take, for example, a high-profile celebrity who berates a brand on one of its Instagram posts while the team managing that account can't see the comment. Things could escalate very quickly without the team knowing.
At Crisp, we believe filtering alone will never be enough. What about multilingual comments, or words deliberately altered to evade filters? Or colloquialisms, slang, or simply poor spelling?
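A tiny illustration shows how easily a naive keyword blocklist is defeated. The code below is a hypothetical sketch, not any real moderation system, and `"garbage"` stands in for a genuinely offensive term: an exact-match filter catches the plain word but misses trivial misspellings and spacing tricks.

```python
# Hypothetical illustration of why keyword matching alone falls short:
# trivial misspellings and spacing tricks slip past a naive blocklist.

def naive_filter(comment, blocklist):
    """Return True if exact word matching would hide this comment."""
    return any(word in blocklist for word in comment.lower().split())

blocklist = {"garbage"}

caught  = naive_filter("this brand is garbage", blocklist)        # exact word: hidden
missed1 = naive_filter("this brand is g@rbage", blocklist)        # misspelling: slips through
missed2 = naive_filter("this brand is g a r b a g e", blocklist)  # spacing: slips through
```

Each evasion requires no sophistication from the attacker, which is why human moderation and context-aware analysis matter alongside keyword tools.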
So How Can We Protect Online Users & Brands?
Many in the media would argue that social networks have been falling behind for some time now when it comes to protecting users and brands online.
Even in his blog post, Mr Systrom concluded by saying that he knew ‘tools aren’t the only solution for this complex problem’.
So what is?
From a business perspective, if you work as a Social Media Manager, PR Manager or Brand Manager, protecting your company’s brand and reputation is at the forefront of what you do.
In instances where inappropriate content, such as profanity or hate speech, is being shared on your Instagram account or any other social media account, it is crucial to have a team of expert moderators in place 24/7 who can quickly analyse the offending content and remove it within minutes.
If you’re still not sure but would like to learn more about social media risks, why not get in touch with one of our risk experts today for a free social media risk profile?