Throwing out the baby with the bathwater when it comes to online hate and terrorism

An interesting thing happened today. Google, that behemoth that gives us so much of our information these days, has decided not to run political advertising in the lead-up to this year’s Canadian federal election because it does not want to develop a registry of ads and advertisers (although it apparently did so for the US midterms and the EU, so it is technically feasible). I imagine that Google is afraid – or at least aware – of accusations that its platform is – and has been – used for fake accounts and disinformation campaigns, as we have seen in other elections worldwide.

Google’s decision fits into a larger problem: the use of social media to spread not only disinformation but also hate and violent messaging. We know, for instance, that jihadi groups and others seized on the arrival of the Internet and messaging apps to get their material to a vast audience, such that wannabe terrorists appear able to learn as much as they need to make the leap to becoming violent extremists themselves (and even learn to make bombs and related weapons).

The reaction to this phenomenon has been mixed. It took Facebook, Twitter, Google and other providers a long time to realise just what their platforms were being exploited for, and as a result they have put in place algorithms to identify and remove objectionable content (or, in some cases, human moderators, although the experience of reading and eliminating this garbage has had its cost, as this article in The Verge illustrates). The algorithms may be working a little too well: I think my podcasts (An intelligent look at terrorism) on YouTube may be filtered out because I use the words ‘terrorism’, ‘Al Qaeda’, ‘Islamic State’ and the like, and I am AGAINST terrorism!
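For the technically curious, here is a toy sketch of why blunt keyword matching misfires. This is purely my own illustration (I have no insight into YouTube’s actual systems, which are far more sophisticated): a naive filter that flags any mention of a blacklisted term cannot tell propaganda from the people arguing against it.

```python
# A toy keyword filter, purely illustrative: this is NOT how YouTube
# or any real platform actually works. It flags content if any
# blacklisted term appears, with no sense of context or intent.
FLAGGED_TERMS = {"terrorism", "al qaeda", "islamic state"}

def would_be_blocked(text: str) -> bool:
    """Return True if naive keyword matching would flag this text."""
    lowered = text.lower()
    return any(term in lowered for term in FLAGGED_TERMS)

# A counter-terrorism podcast description trips the filter just as
# surely as real propaganda would:
print(would_be_blocked("An intelligent look at terrorism: why Al Qaeda is failing"))  # True
```

The point: without context, a filter like this cannot distinguish those who study and oppose terrorism from those who promote it.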

Then there is the background debate on what exactly constitutes terrorism or hate online, as this article from The Economist explains. An EU plan to impose heavy penalties on companies that allow this material to be posted may not work either, as this piece points out. The UK is considering a law that would call for up to a 15-year prison sentence for clicking on a piece of terrorist propaganda – ONE TIME!

In some countries more draconian ideas are being considered. When I was in Central Asia in January I learned that some regional governments had decided simply to ban platforms like Facebook in their entirety, under the belief, I suppose, that no access means no violent or terrorist propaganda whatsoever. India is trying to force WhatsApp “to allow authorities access to any messages they request, as well as make those messages traceable to their original sender”, a big problem for a company that prides itself on its end-to-end encryption and privacy for its users.

Wow! I think it is time to step back and take a deep breath. It may very well be, in the words of The Economist, that “social media have made it easier than ever to propagate prejudice and target scapegoats. Ideas and insinuations that would find no place in the respectable media or political discourse can cascade all too easily from phone to phone” (referring to anti-Semitism), but are total bans and increased government snooping the answer? Is the problem that big, that dangerous and that irresolvable that these drastic measures are required? We need to figure this out before going there.

In many ways this line of reasoning is flawed and could be applied in increasingly ridiculous ways. If we take down social media because terrorists, who represent an infinitesimally small proportion of humans, use it, why not go further:

  • some terrorists have used cars and vans to run people over: ban cars and vans!
  • some terrorists have used knives to stab people: ban knives!
  • some terrorists have used golf clubs (see Rehab Dughmosh): ban golf!

See where this can end up?

I do not have all the answers to these challenges. I do think companies can do better at policing their platforms, both through better algorithms and by putting human eyes on violent material (although the latter needs to be managed better). I think we need more knowledge on how this material affects people and how to mitigate the worst effects. And I think we need to keep all this in perspective.

We cannot go back to a pre-Internet or pre-social media world, or rather we should not (if we did, I would be out of a job as a post-intelligence-career blogger!). Humans are smart – we can figure out how to deny the jihadis and other terrorists and hatemongers room online without throwing out the digital baby with the online bathwater.

By Phil Gurski

Phil Gurski is the President and CEO of Borealis Threat and Risk Consulting Ltd. Phil is a 32-year veteran of CSE and CSIS and the author of six books on terrorism.
