This piece appeared in The Hill Times on December 17, 2018.
There has been an awful lot of news lately on privacy in the digital space. Facebook in particular has been in the hot seat over what it does with our data – data that we willingly provide every time we post a photo or a ‘like’. As my mother always said, “nothing is ever really free in this life.” If it looks free, that is because we are not looking hard enough. In effect, we are the product on sites like Facebook, Twitter, WhatsApp and others.
Nevertheless, people are angry at having their personal information used to create targeted ad campaigns and more nefarious pitches that may not be consistent with what we think we want. As a result, many are demanding greater guarantees that their private data will not be exploited and that their expectations of security will not be breached. Revelations about the moves made by Cambridge Analytica during the 2016 US presidential campaign have rattled a lot of online users, and allegations that a state (i.e. Russia) was manipulating voters have cast a chill on what exactly is happening online. The fear of state involvement in what we do on the Internet is a legitimate concern. Some have moved to what they perceive as more secure platforms to evade this kind of snooping.
Are there, however, legitimate reasons for the state to have a peek at online activity, and do those reasons override general expectations of privacy? One could argue yes – when it comes to terrorism. Australia has just passed a law, the Assistance and Access Bill, that forces online providers to help law enforcement and security intelligence agencies get at encrypted messages used by criminals and terrorists. Australia’s partners in the so-called ‘Five Eyes’ intelligence-sharing club – Canada, New Zealand, the UK and the US – are keenly interested in similar powers. In a communique issued in September, the five governments noted that vendors have a ‘mutual responsibility’ to help law enforcement: “Should governments continue to encounter impediments to lawful access to information necessary to aid the protection of the citizens of our countries, we may pursue technological, enforcement, legislative, or other measures to achieve lawful access solutions.”
Cue the outrage.
Are there legitimate grounds for these powers, or has Australia gone too far? Experts warn of the dangers of building ‘back doors’ into devices: they weaken security and allow our encrypted data to be accessed by all kinds of actors, good and bad. Should we protest in the streets and on Parliament Hill over this?
What is missed in all of this clamour for privacy and data protection is the equally important expectation that our governments and their constituent agencies will keep us safe from bad guys like criminals and terrorists. When our spies and cops fail to stop violent acts from occurring, few are willing to cut them some slack and try to understand the challenges inherent in their work. One of the hardest is unbreakable encryption: it is very difficult to determine violent intent when you cannot read the communications sent between nefarious parties. True, our protectors can always deploy human sources and agents, but this is not always possible.
As I have noted before, what if we had a similar law here in Canada, one that granted CSIS and the RCMP the access they need to foil serious acts of violence, but in which that access was heavily constrained? We have such a mechanism already in the system of court-authorised communications intercept warrants. Could this problem not be handled in a similar way? If CSIS, for example, already has a Section 21 warrant and finds that some of its intercept is unreadable, could a judge not order the provider of the platform carrying the information to decrypt it for our spies? That way, the company giving us the privilege of sending secure information still controls the encryption algorithm, not CSIS or the RCMP.
I am sure there are technical issues surrounding this problem that are beyond my understanding, and I know there are those who believe any breach of security undermines the whole system. Nevertheless, it is also certain that criminals and terrorists are increasingly turning to technologies our state agencies cannot read. What you cannot read you cannot stop.
In the end, we have to decide as a society which is more important: universal privacy or universal safety. The two do not have to be mutually exclusive.
Phil Gurski is the President and CEO of Borealis Threat and Risk Consulting and a former strategic analyst at CSIS