
Can facial recognition predict terrorists?

Years ago, when I taught an introductory course in linguistics, I included a small section on the brain and how scientists have determined – to an extent – how language is both processed and produced in all that grey matter. Beginning in the 19th century, really smart researchers learned that specific parts of our cortex are responsible for different aspects of language: for instance, the left frontal lobe has a region now known as Broca’s area (named after the scientist who “discovered” it) which is critical in production – making sounds and words. Serious damage to that region results in severely impaired output.

In some ways, the findings of Broca and others seemed to support a theory of brain specialisation popular at the time – the “science” of phrenology. Practitioners of this pseudo-study claimed that by analysing head bumps (i.e. protrusions on the skull) one could determine underlying brain structure, and that the size of these bumps correlated with certain skills, talents and behaviours. A person with one pattern would be seen as a genius, while a second person with a different pattern would be labelled an idiot – or a criminal. In an early version of Minority Report, trained experts could even predict future outcomes: in fact, the wealthy would bring their children to these quacks to get the inside scoop on where their progeny would end up in society.

Phrenology is now seen as quaint at best and dangerous at worst.  It has been completely debunked and I don’t know of anyone who believes in this garbage today, although I did see what looked like a recent phrenology manual in a used bookstore in Kingston about 20 years ago (damn!  I should have picked it up – for scientific research purposes of course).  Nevertheless, a recent study on face recognition systems in China seems to be a throwback to skull measurements of yesteryear.

An article in New Scientist reports that two researchers claim to have proven that such a system can tell whether someone is a criminal or not (one of the scientists is from McMaster University in Hamilton). Essentially, these scholars gave a face recognition software program a series of pictures and asked it to guess whether each person was a criminal; they then fed the machine the right answers and were gobsmacked to see that the system could subsequently distinguish between felons and law-abiding citizens with 90 percent accuracy.
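For readers who have not seen this kind of work up close, the recipe described is ordinary supervised machine learning: show the machine labelled examples, then test it on examples it has not seen. The short Python sketch below is my own illustration, not the researchers’ code – it uses scikit-learn and random numbers as stand-in “face features”, so it only shows the shape of the procedure, not their results.

# A minimal sketch (not the study's actual method) of the supervised set-up
# described: train a binary classifier on labelled face data, then score its
# accuracy on held-out examples. Random vectors stand in for real face features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in "face features": 1,000 people, 128-dimensional vectors,
# each labelled 1 ("criminal") or 0 ("non-criminal") by the researchers.
X = rng.normal(size=(1000, 128))
y = rng.integers(0, 2, size=1000)

# Show the machine the pictures plus the "right answers" (training),
# then ask it to guess on faces it has never seen (testing).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
# With random labels this hovers around 50 percent: the headline 90 percent
# figure depends entirely on what the training photos and labels actually encode.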

I know that this sounds fantastic. Imagine if it were true that faces betray criminal behaviour! Police forces would have a powerful tool at their disposal. Alas, it appears too good to be true. Others have criticised the methodology and warned that the work is both unethical and risks ascribing “impartial legitimacy” to an inaccurate tool. Some have even predicted that this kind of method could be sold to police departments that have little understanding of its shortcomings and then put to all kinds of troubling uses (discrimination, bias, etc.).

This story reminds me of much of the work carried out in terrorism studies. Everyone wants to create the next best tool to identify terrorists before they strike. Such an invention would be a miracle for overstretched security intelligence and law enforcement agencies under tremendous pressure to stop terrorists and determine which investigations should get priority attention. Hey, if it works for identifying criminals, how hard can it be to modify the algorithm to locate terrorists?

All of this really worries me. We are so desperate to stop terrorism that we are willing to use tools and models that have little reliability or corroborative study, and this use of facial recognition seems to me to fall squarely in that category. If some agency decided to prototype this approach, what would the consequences be of false positives (i.e. people who “look like” terrorists but aren’t) and false negatives (i.e. people who “look ordinary” but blow shit up)? I sure hope that those in charge of anti-terrorism programmes get the good advice they need from experts with no vested interest in whatever approach is being offered.
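To make that worry concrete, here is a rough back-of-the-envelope calculation in Python. Every number is made up for illustration – one million people screened, one hundred genuine plotters, a tool that is right 90 percent of the time on both groups – and none of them come from any real programme.

# Illustrative arithmetic only: assumed figures, not data from any agency.
population = 1_000_000
actual_threats = 100
sensitivity = 0.90   # assumed: share of real threats the tool flags
specificity = 0.90   # assumed: share of ordinary people it correctly clears

true_positives = actual_threats * sensitivity        # 90 plotters flagged
false_negatives = actual_threats - true_positives    # 10 plotters missed
innocents = population - actual_threats
false_positives = innocents * (1 - specificity)      # ~100,000 ordinary people flagged

print(f"flagged in error: {false_positives:,.0f}")
print(f"real threats missed: {false_negatives:,.0f}")
print(f"share of flags that are genuine: "
      f"{true_positives / (true_positives + false_positives):.2%}")
# Roughly 0.09 percent of flags are genuine: a "90 percent accurate" tool aimed
# at a rare behaviour buries investigators in false positives while still
# letting some real threats slip through.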

The whole thing sounds like hubris. As if we can somehow determine who is a terrorist and who isn’t based on irrelevant data (we might as well go back to using calipers on heads!). I’d like to end this blog with a quote from a recent book on, of all things, baseball (Brian Kenny’s Ahead of the Curve):

  • “The world is billions of times more complicated than any of us understand, and because we are desperate to understand the world, we buy into these explanations that give us the illusion of understanding.”

Sage words – on the diamond or in the world of counter-terrorism.

By Phil Gurski

Phil Gurski is the President and CEO of Borealis Threat and Risk Consulting Ltd. Phil is a 32-year veteran of CSE and CSIS and the author of six books on terrorism.
