The challenge of threat assessment tools

We humans are constantly trying – one would hope – to make our jobs easier, more efficient and able to produce better results.  This is, usually, a good thing since the opposite – harder, less efficient and less productive – would strike most of us as kinda stupid.  So we get more automation, more streamlined processes, and fewer administrative and legal obstacles, though none of these developments is entirely without its own problems.

So it is with threat and risk assessment.  I obviously have a dog in this fight since I am the President and CEO of my own threat and risk assessment firm, Borealis.  And yet I have long been a healthy skeptic whenever somebody announces they have created a threat/risk assessment tool or program which, if used properly, can accurately determine whether a given individual poses a greater menace than someone else.  These kinds of initiatives have also been deployed to identify who is more likely to respond to rehabilitation or, in the case of terrorism, deradicalisation programs.

The challenges with these tools – an idea, by the way, that I support, albeit with some caveats – are manifold.  The approach has to be data driven, i.e. not just theoretical.  It has to be testable and frequently tested, both as a proof of concept and to tweak it to better reflect reality.  And most importantly, it must be deployed modestly and users must receive frequent and ongoing training on how to use the tool.

A recent Canadian Supreme Court decision put the use of these assessments under scrutiny. On June 13, the country’s top judges ruled (in a 7-2 decision) that the five tools used by the Correctional Service of Canada (CSC) to gauge the risk of reoffending and the potential for future violence must be reviewed to ascertain whether they are free of cultural bias (this is in relation to First Nations prisoners, who are disproportionately over-represented in Canadian jails). If CSC cannot do this, it must cease to use them.

It is important that I point out that I have no idea what these five tools are, who created them, how they were used and what training was provided so I do not want to malign the designers.  It could very well be that everything was done in the best possible way and yet serious shortcomings were identified.  If so, what could have caused this?

Put simply, it could have been human nature itself.  When we are told that a complicated process like recidivism (or rehabilitation or deradicalisation) can be made much more understandable through the use of a questionnaire or assessment survey, we tend to say “let’s do that!”  Rather than jump through hoops and spend inordinate amounts of time and energy on a more in-depth analysis, we go for the ready-made solution.  That’s just how we are as humans: most of us don’t put in the work if we don’t have to.  This characteristic leads to mistakes, some of them tragic.

I will assume that those responsible for the CSC methodology took the time to explain what they were trying to achieve, how they built their tool, how to use it, what limitations it had, and how it fit in with other assessment instruments.  I hope my assumption is correct; then again, I have seen some people attempt to sell their product in ways I find overzealous.  Any tool is only as good as those using it: they have to be trained to use it properly and to recognise their own biases and shortcomings.

As with algorithms (as I have already written), so with threat and risk assessment tools.  These are all good ideas as long as we realise what they can and cannot do.  Alas, as with much in the ill-named ‘war on terror’, products are put on the market without the proper warnings.  There is a lot of money to be made in stopping crime/terrorism, and many boldly claim to have all the answers.  We need to separate the wheat from the chaff.  As we have all been taught since we were kids: caveat emptor – let the buyer beware.

I encourage those developing these aids to continue to do so provided they are honest and abide by a few rules:

  • use real data where possible
  • assume your tool is flawed
  • test, test and test it again and often to account for new data
  • don’t claim that your effort is a panacea.

With hard work we will get better at this.  Anything that can help us prevent terrorism, or re-integrate people back into society, is a good development.  Kudos to those who try.

By Phil Gurski

Phil Gurski is the President and CEO of Borealis Threat and Risk Consulting Ltd. Phil is a 32-year veteran of CSE and CSIS and the author of six books on terrorism.
