
What to do with violent extremist content online?

One thing that terrorists have been doing very well is getting their messages out to audiences, both willing and unwilling.  Groups such as Islamic State (IS), Al Qaeda (AQ) and many others use a panoply of social media apps to draft, disseminate and promote material that extols their achievements and threatens more carnage, all with a view to reminding the world that they remain a force to be reckoned with and to instilling fear and panic.  On far too many occasions the merest warning that a terrorist group was planning an attack has led to travel advisories, public warnings and even the decision to move warships out of a port (in the early 2000s the US ordered its fleet to sail out of Bahrain because of an AQ threat, a move then-leader Usama bin Laden mocked with much mirth).

Terrorist groups also use these media to find and encourage new members.  The material they post leads to conversations, both virtual and face-to-face, which in turn can help provide new recruits to movements which need replenishment as some foot soldiers are killed in terrorist attacks or in military actions.  Postings range from detailed religious material to conspiracy theory-like screeds and all have their effects.  

One of the challenges in deciding what to do with this material is its sheer volume.  Some groups have media arms dedicated to churning out videos and texts at a torrid pace.  It is hard to keep up with the amount of material made available, as the messaging produced by terrorist groups often finds its way onto multiple platforms thanks to sharing arrangements by followers and middlemen.  Not every post leads to new sign-ups, but the massive amounts do inevitably lead some to embrace extremist causes, either as new members or free agents (those we call 'inspired' – e.g. IS-inspired terrorists).

In the face of all this we struggle with what to do about it.  Clearly there is a critical need and interest in removing this material so that it cannot poison the minds of many and contribute to the goals and plans of terrorist groups.  Admonitions to 'just take it down' are understandable but less easy, or desirable, to actually carry out than many realise.  Here are several considerations in this regard:

  • Material removed from platform A appears on platform B soon after.  In this sense the decision to excise objectionable content quickly becomes a game of 'Whack-a-Mole' and hence never-ending.
  • The perceived need to act quickly has led to two solutions, neither of which is optimal.  Some companies, Facebook among them, have hired armies of moderators to identify and remove material; others have developed algorithms to carry out a similar function.  Human moderators work under severe time pressure, forced to make decisions in seconds before moving on to the next instance, a practice that leads to errors: while some things may be obviously violent (e.g. a beheading video), others (a long religious text) are less so.  Algorithms, for their part, are only as good as the humans who create them, and a lack of appreciation for what actually constitutes violent extremist content hampers these efforts.
  • In countries like the US, where freedom of speech considerations are paramount, it is not clear on what grounds material can be removed.
  • Finally, the continued presence of violent extremist messaging online is an important tool for security intelligence and law enforcement agencies to identify who is writing, who is listening or reading, and how those relationships develop.  This data can become intelligence to help stop attacks from succeeding or evidence to help convict terrorists and put them away so they can do no more harm.  Allowing the violent material to stay can make the difference in protecting lives.

There is also the overarching truth that the vast majority of people who consume this messaging never act on it.  We know that there are, and always will be, more radicals than violent radical actors.  Do we want to lower the risk of an attack to zero?  If that is indeed our goal, it is alas an impossible one.  Many (most?) planned attacks will be stopped thanks to security intelligence and law enforcement agencies, and a few will go ahead.  Do we want to institute draconian censorship that, while it may help limit the scope and influence of violent material, will also cast a pall on legitimate protest and dissent?  Some countries would have no problem issuing blanket bans to quell any sign of opposition, terrorist or not.

In the end, this is a hard problem with no obvious solution.  We need to think carefully about what we are really trying to achieve.  Stopping terrorism is rightfully a noble cause, but not at the cost of dampening other freedoms.  We must above all remind ourselves that yes, terrorism is real and serious, but it does not constitute an existential threat.  Our responses must not create existential threats either.

By Phil Gurski

Phil Gurski is the President and CEO of Borealis Threat and Risk Consulting Ltd. Phil is a 32-year veteran of CSE and CSIS and the author of six books on terrorism.
