What if CVE effectiveness cannot be measured?

Knowing that what you are doing is the right thing is important.  There are all kinds of ideas in all kinds of fields of study and practice, but they are not all equal.  Some are clearly better than others.  One way of telling which is which is to measure what they purport to do.  Take a standard drug trial.  A given substance is administered to (at least) two groups: a target group and one or more control groups.  The results are compared in an attempt to determine what effect, if any, the drug had on the illness in question.  Even here it is not always easy to tell whether the observed effect is real.  In most cases there are a lot of factors at play, and a good scientist will try to control for the extraneous elements, narrowing the field down to isolate the precise outcome of the drug being tested.

Ascertaining cause and effect in biology, chemistry or physics – the so-called hard sciences – is one thing.  For the ‘soft sciences’ – psychology, sociology, etc. – it is quite another.  When you throw human behaviour into the mix, things get a lot more complicated.  The scandal over reproducibility in psychological experiments a few years ago was a good example of just how difficult it is to isolate a particular factor.

CVE, unfortunately, falls into the latter category.

There has been a tonne of material published on the need for effectiveness measurements in CVE programming around the world, and most people have concluded that thus far we don’t really know what works and why.  Hence the priority placed on coming up with metrics to apply to any CVE proposal.  Especially when government funds are being doled out – in the case of the long-anticipated Canadian Office of the Coordinator for Counter Radicalisation and Community Engagement that could amount to $35 million over five years, and that is not chump change – there is an obvious desire to be assured that the money is being spent wisely.

But what if these efforts are largely unmeasurable?  What if there are no solid methods for determining the impact of the programming?  Should we pause all these efforts until a solution is found?

The answer to the last question is clearly no.  We cannot allow our only approach to counter radicalisation and counter terrorism to be the hard intelligence/law enforcement one, as necessary as that approach is.  We would be remiss to use the lack of evaluation tools as an excuse to sit back and watch our citizens head down the road to violent radicalisation, thus ensuring that these cases become the purview of national security.

I for one support CVE, and I am skeptical that anyone will devise a sure-fire way of measuring success.  In this light I propose the following as a substitute for such metrics:

a) CVE is so much cheaper than investigation – orders of magnitude cheaper – that it is worth doing regardless of whether we can scientifically categorise the results.

b) it is hard to imagine a sincere, well thought-out CVE strategy that would make matters worse (i.e. cause more radicalisation).  The same cannot be said for doing nothing.

c) CVE is an easy sell for communities who sometimes see government involvement as limited to spying on and arresting their members.

d) most importantly, those putting up their hands for funding MUST be vetted.  Giving tax dollars to the wrong people will be the death knell of any CVE programme.  As long as the candidates explain their approach and provide regular updates – assuming, of course, that their approach is feasible – a proposal should be considered.

e) solutions must be local in nature.  Any attempt at a national standard is doomed to fail.

In the end, our security intelligence and law enforcement services can provide metrics: for this amount of money we carried out this many investigations, made this many arrests and stopped this many terrorist plots.  No CVE programme can make an analogous claim: we stopped this many people from becoming violent radicals.  In essence, we are asking organisers to prove what did not occur.

I am not saying that we should abandon the search for a scorecard.  By all means keep trying.  But we cannot sit idly by in the absence of one, and we should not ignore solid proposals just because their proponents have not solved what may be unsolvable.



By Phil Gurski

Phil Gurski is the President and CEO of Borealis Threat and Risk Consulting Ltd. Phil is a 32-year veteran of CSE and CSIS and the author of six books on terrorism.
