The perils of predictive analysis

I know that I have written on this topic before, so I am sorry for the repetition. You probably don’t need to read yet another blog about that terrible Tom Cruise film. But in the wake of CBC reports that alleged gay serial killer Bruce McArthur had been assessed, way back in 2003, as a “very minimal” risk for violence, and as showing “absolutely no signs of psychopathy,” after assaulting a man with a metal pipe in 2001, I feel the need to bring this topic up again.

(PS the film in question is of course Minority Report)

Within that original psychiatric report, ordered by the court along with a pre-sentence report, Mr. McArthur was assessed as exhibiting the following:

  • he had “no sign of mental health problems” that could have contributed to the ‘incident’
  • he was “characteristically passive and indecisive”
  • he sought “to maintain an image of being a proper and co-operative person, prone to behaving correctly and modestly”
  • he displayed “overt co-operativeness (which) may hide strong rebellious feelings that may occasionally break through his front of propriety and restraint” and
  • “there was a low risk that Mr. McArthur would reoffend.”

I am not a psychiatrist and hence will not weigh in on any of this language.  What is important, however, to my mind is how wrong it appears to have been (especially that last bit, assuming he is guilty of the eight first-degree murder charges in crimes allegedly committed in Toronto’s Gay Village from 2010 to 2017).  It speaks volumes about the inherent, and probably inevitable, limitations of analysis and prediction.  Although I have to concede that perhaps those responsible for the 2003 assessment did a lousy job, I nevertheless want to suggest that believing in a fool-proof methodology for predicting future behaviour is indeed a fool’s errand.

Of course we want our authorities – police forces, security intelligence agencies – to stop crimes and terrorist attacks from happening.  We prefer arrests and disruptions BEFORE these acts happen, not AFTER.  In order to achieve this goal, not only do our protectors need adequate human resources and funding, but they also need to know where to look.  We live in a big world with a lot of potential bad actors, and it is not always straightforward to tease out the bad actors from ordinary people (as the McArthur case seems to suggest).  Traditionally this task has fallen to the ‘cop on the beat’ or his intelligence equivalent.  In other words, lots of leg work: dealing with human sources and agents, talking to citizens to see whether they have noticed anything unusual, warrants and SIGINT, and other tools.

Lately we have been told that there are predictive algorithms out there that can make all this a lot easier.  Just feed data into a program, let Artificial Intelligence stew over it and voila! You now know where to focus your resources.  This development promises to streamline work, get rid of onerous and labour-intensive tasks, and save time and money.  If it works, it should identify bad actors more quickly and lead to more crimes prevented.

I am sure that the panoply of predictive algorithms and risk assessment tools all have some good in them.  If constructed properly, based on verified data, they probably can indeed help nip some acts in the bud.  I am familiar with some such tools but not enough to say whether this one is better than that one.  What I have seen, however, is that (not surprisingly) those who develop a particular tool make great claims for its abilities.  All I can offer here to those considering using these methods is: caveat emptor.
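To make concrete what many such risk instruments reduce to, here is a deliberately simplified sketch of the weighted-checklist (actuarial) style of scoring. Every factor name, weight, and cut-off below is invented for illustration; real instruments are built and validated on large samples, but the basic shape – count the factors, sum the weights, bucket the total – is the same.

```python
# A toy "actuarial" risk score: weighted yes/no factors summed into a
# total, then collapsed into a coarse band. All factors, weights, and
# thresholds here are hypothetical, chosen only to show the mechanism.

RISK_FACTORS = {
    "prior_violent_offence": 3,
    "substance_abuse": 2,
    "unstable_employment": 1,
    "young_at_first_offence": 2,
}

def risk_score(case: dict) -> int:
    """Sum the weights of the factors flagged as present in this case."""
    return sum(w for f, w in RISK_FACTORS.items() if case.get(f))

def risk_band(score: int) -> str:
    """Collapse the numeric score into a label, as such tools do."""
    if score <= 1:
        return "low"
    if score <= 4:
        return "moderate"
    return "high"

# A case with no flagged factors comes out "low" -- and that is exactly
# where the method can fail: the score reflects only the factors someone
# chose to count, not anything the subject may later choose to do.
case = {"prior_violent_offence": False, "substance_abuse": False}
print(risk_band(risk_score(case)))  # -> low
```

The caveat emptor point falls straight out of the sketch: the output looks precise, but it is entirely determined by which factors the designer decided to include and how they were weighted.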

There is, of course, a much bigger elephant in the room. That pachyderm is the myth that human decision-making can be reduced to an algorithm.  I am quite sure that we have mapped some of the neural mechanisms behind how humans make decisions, and we can probably see those mechanisms with expensive imaging equipment.  What we cannot determine with any accuracy, I would wager, is the fuzzy notion of choice.  Even if you have all the possible relevant data, you cannot account for the simple act of decision: why I may or may not choose to act on a given day. The variables are too complicated, and the decision to make a decision is too idiosyncratic and subject to whim.  Bruce McArthur may have been diagnosed as unlikely to reoffend: still, that is exactly what he appears to have chosen to do.

Getting back to Minority Report and Tom Cruise: no, we are not on the verge of predicting crime (or terrorism) using ‘pre-cogs’ floating in pools.  To reinforce the point I am trying to make, I want to refer to another bad actor: William Shatner.  In the original Star Trek series Shatner, playing Captain Kirk, goes to a planet where the messiness of war has been eliminated by having computers calculate attacks and casualties, and ‘dead’ citizens amble like cattle into ‘disintegration chambers’ (Kirk wrecks the chambers, of course).  When challenged by the leader of the planet as to why he can’t see that this method of war is ‘neater’ and recognise that humans are ‘instinctive’ killers, Kirk, ever the Philosopher-Captain, says: “All right. It’s instinctive. But the instinct can be fought. We’re human beings with the blood of a million savage years on our hands! But we can stop it. We can admit that we’re killers . . . but we’re not going to kill today. That’s all it takes! Knowing that we’re not going to kill — today!” (from A Taste of Armageddon, which aired on February 23, 1967)

In other words, we have the power to choose to kill or not to kill (or to bomb or not to bomb).  We might want to bear all this in mind when we think about our ability to predict things.

By Phil Gurski

Phil Gurski is the President and CEO of Borealis Threat and Risk Consulting Ltd. Phil is a 32-year veteran of CSE and CSIS and the author of six books on terrorism.
