The 2002 movie Minority Report, based on a short story by sci-fi writer Philip K. Dick, takes place in 2054 Washington D.C. where crime has been all but eliminated. This happy result is due to the work of the ‘Pre-Crime’ unit which relies on the foreshadowing of ‘Pre-cogs’, people floating in shallow pools who see future crimes and can help the police prevent them before they occur.
This of course is a work of fiction and nothing of this sort is possible. Or is it?
Police forces in the US and elsewhere are increasing their use of data and predictive modeling to help them identify criminal ‘hot spots’ where they can better focus resources and technology (cameras, licence plate readers, and cellphone trackers among other devices). These tools allow law enforcement to prioritise particular people and places for increased monitoring and contact.
If these approaches worked they would be a boon to safety and security: limited human and technical assets could be deployed more efficiently and more crime nipped in the bud. There are, however, many outstanding questions about these new techniques. Some centre on privacy (all that data collected on people not tied to any criminal activity), while others note that there is little empirical evidence that these methods actually work, or that they outperform human judgment. Then there is the perennial elephant in the room: racial profiling. In other words, the jury is still out (bad pun!) on whether machines can stop crime. Even algorithms that purportedly predict recidivism (i.e. a return to crime after a prison sentence) have raised doubts.
As this is a terrorism (and counter-terrorism) blog, I want to discuss whether this technology, despite the important question marks that remain, is applicable to the task of identifying and neutralising terrorist threats. Here I am less sanguine about the possibilities.
I suppose that in a country like France, which has placed 18,000 people on a watchlist for ‘radicalisation’, there are useful patterns that can help narrow the field. There are undoubtedly links between certain cities or neighbourhoods and terrorist cells, for example, and those links should be exploited. We know that like attracts like when it comes to radicalisation to violence, so detecting a critical mass of people all on the same path should raise flags. Any aid has to be helpful when you are talking about 18,000 people on a watchlist (as a former Canadian intelligence analyst I cannot conceive of that number, or of the heavy responsibility of worrying about so many potential terrorists).
On the other hand, there is so much variability in who radicalises to violence, and why, that algorithms will always be of limited advantage. As many have said – and I have been thumping this tub since 2005! – there is no profile, no template, that can predict who becomes a terrorist. In addition, there are so many false positives, people who talk the talk but never walk the walk, that the system would quickly become overrun. Having said that, I do see some promise in work done by my former colleagues at CSIS on indicators of mobilisation to violence (although the Service does caution that the indicators are not 100% reliable).
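The false-positive worry can be made concrete with a back-of-the-envelope calculation. A minimal sketch follows; apart from the 18,000 watchlist figure cited above, every number (the count of genuine future attackers, the indicator's hit rate and false-positive rate) is an assumption chosen purely for illustration, not a real statistic:

```python
# Base-rate arithmetic: even a fairly accurate screening indicator,
# applied to a large watchlist where true threats are rare, flags
# mostly innocent people. All rates below are illustrative assumptions.

watchlist = 18_000          # size of the French watchlist cited above
true_threats = 50           # ASSUMED number of actual future attackers
sensitivity = 0.90          # ASSUMED: indicator flags 90% of true threats
false_positive_rate = 0.05  # ASSUMED: indicator wrongly flags 5% of the rest

true_flags = true_threats * sensitivity                      # correctly flagged
false_flags = (watchlist - true_threats) * false_positive_rate  # wrongly flagged
precision = true_flags / (true_flags + false_flags)          # share of flags that are real

print(f"People flagged: {true_flags + false_flags:.0f}")
print(f"Share who are genuine threats: {precision:.1%}")
```

Under these assumed numbers, roughly 940 people get flagged but fewer than one in twenty of them is a genuine threat – which is what "the system would quickly become overrun" means in practice.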
I may not be the most tech-savvy guy on the planet, but I am not a Luddite either. I welcome technological advances in our ability to do lots of things and am not in favour of a return to a pre-IT world. Still, everyone has to be judicious in claims about what these tools can do for us. No device, no paradigm, no program can remove uncertainty and work flawlessly. I worry that proponents (and tool developers) oversell what they have created: of course they do, as they want to make money in the end. What we need a lot more of is evaluation and assessment of these approaches – we cannot take their effectiveness at face value. We also need to recognise that the best solution will always be a hybrid one: human judgment (as flawed as it is) married with technology. Perhaps it is time to reassert that humans are, and always will be, the best actors to assess the behaviours and intents of other humans. If machines can help security intelligence and law enforcement agencies in their tasks then great. We should embrace, albeit cautiously, such collaborative relationships.