Instead of inventing encryption that only government can break, we should just breed a special unicorn that magically blocks terrorist acts. - @segphault
Back doors into security systems weaken security. For everyone. This remains true despite wishful thinking on the part of those who may advocate back doors. The claim that back doors could be added to systems for law enforcement purposes without compromising the security of those systems was heatedly discussed in the 1990s. I had hoped that we had driven a stake through its heart back then, but it has been revived in the wake of Apple’s announcement last autumn that they have no method to unlock iOS devices without the user’s consent, and so don’t have anything that can be given to law enforcement agencies. The current version of Apple’s statement reads:
On devices running iOS 8.0 and later versions, your personal data such as photos, messages (including attachments), email, contacts, call history, iTunes content, notes, and reminders is placed under the protection of your passcode. For all devices running iOS 8.0 and later versions, Apple will not perform iOS data extractions in response to government search warrants because the files to be extracted are protected by an encryption key that is tied to the user’s passcode, which Apple does not possess.
Ever since then there has been official and unofficial hand-wringing about the threat that this poses to public safety and national security. This is often accompanied by “suggestions” of building systems that don’t compromise the security of a system, give (the right) governments the access they want, and are called something other than “back doors”. But in addition to whatever risks government access poses, there is a subtle but crucial point that is often overlooked: The kinds of security architectures in which it is easy to insert a back door are typically less secure than the security architectures in which it is hard to insert a back door. I will come back to that in more detail below, but first let me review a few events and concepts.
Wishful thinking
Over the past half year, we’ve been told that through some technological wizardry there must be a way to provide governments with what they want without compromising user security. Each time suggestions of that sort come up, they are met with ridicule from cryptographers and information security specialists. An early example comes from a Washington Post editorial in October 2014:
A police “back door” for all smartphones is undesirable — a back door can and will be exploited by bad guys, too. However, with all their wizardry, perhaps Apple and Google could invent a kind of secure golden key they would retain and use only when a court has approved a search warrant.
The phrase “secure golden key” has become a running joke among security specialists since then. More recently (in January 2015) British Prime Minister David Cameron called for government-readable encryption. Cameron declared that there should be no “means of communication” that his government “cannot read.” Yet he also stated that this would not involve a “back door.” Without a very specific proposal in hand, it is hard to analyze the suggestions in detail: all we can do is poke fun at what we imagine they might mean. At least we now have a slightly more specific idea of what it might mean in the US from Michael S. Rogers, the head of the National Security Agency (NSA). He appears to be advocating key escrow with threshold secret sharing for the escrowed key. As described in the Washington Post on April 10, 2015:
Why not, suggested Rogers, require technology companies to create a digital key that could open any smartphone or other locked device to obtain text messages or photos, but divide the key into pieces so that no one person or agency alone could decide to use it?
I would love to talk about how keys can be divided into pieces so that no one person can decide to use them, but I will save that for another article. (It’s really cool, and the essential mathematical concept is not actually that hard to grasp.) But that slightly more specific proposal still doesn’t address the fact that key escrow can’t really be built into securely designed systems. This should become clearer below. Each of those proposals, in its own way, fails to recognize that, entirely separate from the privacy concerns, inserting some government access mechanism into cryptographic systems requires a weakening of those systems.
What’s a back door?
A back door is simply a second way of gaining access to some resource. Imagine a bank vault with a very visible and secure vault door. Now imagine that there is a hidden back door into the vault that few people are aware of. Typically a back door is created deliberately and its existence is kept secret. It isn’t too far from the truth to consider a back door a deliberate security vulnerability.

I am using the term “back door” broadly here because, from the user’s point of view and from the point of view of the implications for security architecture, the narrower definition isn’t useful. Under a narrow definition, a back door can only be added to systems that have (front) doors. Tools like 1Password and Knox for Mac don’t have any doors to begin with, as they operate solely through encryption and not authentication.

Not everything that looks like a back door is secret or malicious. For example, when my bank needs to deposit or withdraw funds from my account, it doesn’t go in through the same door that I do. The bank has legitimate access through its own doors. Indeed, one of the major reasons I use a bank is so that it can perform such transactions on my behalf. So in this case the apparent back door is essential to the purpose of the system in the first place. I will not be including such things in my discussion of “back doors.” Those are just other front doors.

My usage is similar to what appears in Matt Blaze’s prepared testimony (PDF) before Congress for April 29, 2015:
These law enforcement access features have been variously referred to as “lawful access”, “back doors”, “front doors”, and “golden keys”, among other things. While it may be possible to draw distinctions between them, it is sufficient for the purposes of the analysis in this testimony that all these proposals share the essential property of incorporating a special access feature of some kind that is intended solely to facilitate law enforcement interception under certain circumstances.
Key escrow
It appears that Admiral Rogers is advocating a key escrow system. Under my broad definition, key escrow is one kind of back door. The notion is that a copy of a cryptographic key is deposited with a safe pair of hands (an escrow service) that will store that copy securely and release it only under certain circumstances.
Additionally, he is suggesting that no single entity or agency hold the key; instead, the key would be “split” in such a way that multiple parties must work together to retrieve or reconstruct it. Typically this is done through an algorithm called Shamir secret sharing, which allows one to, for example, give a separate share to each of five people such that any three of them can recover the master secret (“three of five”). I really, really want to write about how Shamir secret sharing works, but a full explanation must wait for another day.
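In the meantime, here is a minimal sketch in Python to give a flavour of the “three of five” idea. It is purely illustrative and is not how any real escrow system would be built: the field size, the encoding of the secret, and the way shares are handed out are simplifications I have chosen for brevity, and real systems would use well-vetted libraries.

```python
import secrets

# Work in a prime field so that polynomial interpolation behaves nicely.
PRIME = 2**127 - 1  # comfortably larger than our demo secret

def split_secret(secret, shares=5, threshold=3):
    """Hide `secret` as the constant term of a random degree-(threshold - 1)
    polynomial, and hand out one point on that polynomial per share."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]

    def evaluate(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

    return [(x, evaluate(x)) for x in range(1, shares + 1)]

def recover_secret(points):
    """Lagrange interpolation at x = 0 recovers the constant term,
    provided at least `threshold` distinct points are supplied."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        numerator, denominator = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                numerator = (numerator * -xj) % PRIME
                denominator = (denominator * (xi - xj)) % PRIME
        secret = (secret + yi * numerator * pow(denominator, -1, PRIME)) % PRIME
    return secret

shares = split_secret(31337)
print(recover_secret(shares[:3]) == 31337)  # any three shares suffice
print(recover_secret(shares[2:]) == 31337)  # a different three work just as well
```

Splitting a key this way genuinely does make it harder for any single party to misuse it. But note what the sketch takes for granted: the whole secret had to exist somewhere in order to be split in the first place, which is exactly the architectural problem discussed below.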
Although this kind of key splitting is a good way to protect the escrowed key from theft or abuse, it does nothing to address the implications for the security design of any application that must comply with the escrow requirement. So let me repeat: these sorts of proposals have consequences for the security design of the systems that comply.
Vital Technicalities
There are a number of technical facts that policy makers should understand:
- Software and hardware cannot distinguish between good guys and bad guys.
- Back doors pose a direct risk to all users.
- Designs that enable back doors (whether or not a back door is present) are weaker than designs that preclude back doors.
- There is no useful and coherent way to distinguish between cryptographic tools for communication and those not for communication.
I am mostly going to talk about number 3 on that list. This is my point that security designs that make it hard to insert a back door are more secure than designs in which it is easy. But let me briefly address the other ones.
Good guys and bad guys
One of the interesting phrases in the Washington Post editorial back in October was the notion that the golden key could only be used when a court has produced a warrant. This isn’t actually as ridiculous as it first seems if we consider that the relevant court might hold part of a split key. But a cryptographic system only knows whether it has been given keys that work or not; it cannot decide whether the person who is using that key is using it properly or came upon it through legitimate means.
1Password, for example, only knows if you have provided the correct Master Password. It doesn’t know if you are a good guy or a bad guy. It doesn’t know if you obtained the Master Password through torture. It doesn’t know if you are a photogenic hero who needs to decrypt the data to save the world from destruction by Dr No. These are simply not the kinds of things that software can know. As clever as we may be, we cannot build software that will “let the good guy in.” Instead we build systems that let the holder of the correct Master Password in and nobody else.
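As a small illustration, here is a sketch of that idea in Python, assuming the third-party cryptography package. It is not 1Password’s actual key derivation or data format; the point is only that everything the software “knows” about you reduces to whether the key derived from the password you typed successfully decrypts the data.

```python
import base64
import os

from cryptography.fernet import Fernet, InvalidToken
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_password(password: str, salt: bytes) -> bytes:
    """Stretch a password into a symmetric encryption key."""
    kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(password.encode()))

salt = os.urandom(16)
vault = Fernet(key_from_password("correct Master Password", salt)).encrypt(b"my secrets")

def unlock(candidate_password: str):
    """Return the plaintext if the password is right, None otherwise.
    Nothing here can tell a photogenic hero from Dr No."""
    try:
        return Fernet(key_from_password(candidate_password, salt)).decrypt(vault)
    except InvalidToken:
        return None

print(unlock("correct Master Password"))   # b'my secrets'
print(unlock("a torturer's wrong guess"))  # None
```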
Inherent risks
The most obvious risk of a back door is that the keys to the back door will be captured by “the wrong people.” The holders of the key to the back door need to protect it well, not only from outsiders but also from misuse by themselves. This is an enormous topic that I will largely skip since it is widely discussed elsewhere. But I will point out that in the US, the court oversight of secret programs has not lived up to what lawmakers wished, and that if one government is allowed a back door, many other governments will insist on similar access.
Systems for Communication
As mentioned above, Prime Minister Cameron expressed interest in “communication” and so, perhaps, is envisioning rules that would apply only to systems used for communication. Perhaps text messaging systems would be subject to his rule that they must be readable by the British government, but Full Disk Encryption (FDE) systems like BitLocker or FileVault would not be. The difficulty with such an approach is that even FDE systems can be used for secret communication. Patty may encrypt a disk and send the physical disk to Molly. Sure, Patty and Molly may have preferred to use tools better suited for communication, but if no such secure tools are available, they will make do with others.
Indeed this reflects the fact that cryptographers don’t typically distinguish between the case where Alice encrypts a message for Bob and the case where Alice encrypts a message for herself to decrypt at some later time. Communicating securely with a separate person is a lot like communicating securely with yourself in the future, and so tools that help with the latter can be co-opted to do the former.
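Here is a tiny sketch of that equivalence, again assuming the third-party cryptography package: the encryption that protects data for your own future use is exactly the encryption that protects it when you hand the ciphertext to someone else who holds the key.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # Patty keeps this
ciphertext = Fernet(key).encrypt(b"meet at the warren at dawn")

# Whether Patty decrypts this herself next month, or mails the encrypted
# disk to Molly (who already holds the key), the cryptography is identical.
print(Fernet(key).decrypt(ciphertext))
```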
Doors and architectures
I would now like to return to the central point I am trying to make. The kinds of security architectures in which it is easy to insert a back door are typically less secure than the security architectures in which it is hard to insert a back door.
This is a fundamental part of security engineering. By using strong encryption with keys that only the end user has access to, a huge number of potential attacks are suddenly off the table. As Matthew Green, a cryptographer at Johns Hopkins University, wrote in an article on Slate discussing the reaction to Apple’s statement:
Apple is not designing systems to prevent law enforcement from executing legitimate warrants. It’s building systems that prevent everyone who might want your data – including hackers, malicious insiders, and even hostile foreign governments — from accessing your phone. This is absolutely in the public interest. Moreover, in the process of doing so, Apple is setting a precedent that users, and not companies, should hold the keys to their own devices.
Apple isn’t designing iOS security with the aim of thumbing their noses at law enforcement. They are following good design principles that protect your data. Likewise, when we design our products so that only you can decrypt your data, we are doing so to protect you from those who would read your data without your consent. As described above, no software can determine the intent of the people using it.
Doors must lead somewhere
A back door can pretty much only be placed into a system at a point where that system has a secret, such as an encryption key, in memory. Otherwise it is a door to nowhere. The parts of a system that require the most protection are the ones that handle the secrets, and a principle of security design is to keep those parts as small as possible.
Let’s consider software bugs. Continuing with our metaphor of doors, we can imagine a software bug not so much as another door but as a weakness that allows an attacker to break a hole in a wall. The attacker manages to go around the doors to get to the secrets.
The fewer places that secrets are held, the fewer the number of places where a dangerous vulnerability can occur. If the rooms with the secrets are small, there is less wall area to attack. So good security design means reducing the number of places and times where secrets are held. Great security design places all of those secret-holding components under the user’s control. Naturally, we strive for great design in our own products.
The technical jargon for this is the “attack surface.” Good security design seeks to limit the attack surface, and in doing so it inherently limits the ways in which a back door could be inserted into a system. By building systems that preclude back doors in most places, we are also preventing a large class of accidental vulnerabilities.
Secrets under your control
One of the most important ways to achieve good security design is to make sure that your decrypted secrets never leave the system without your consent. In the case of 1Password, you may export your data, you may copy a password out of an item, you may use the 1Password extension to fill Login credentials into a web browser. But each of those is an action that you choose to take.
This is a slightly more general notion of what is meant by “end-to-end” encryption. Your encryption keys (the secrets derived from your Master Password) are created on your own devices, never leave those devices unencrypted, and are only used when you want them to be used.
That sort of end-to-end encryption is essential to your security. It means that the only attacks that can be launched from outside your own system involve guessing your Master Password. As a consequence, a back door could only be placed in the software running on a device under your control. By using end-to-end encryption we have dramatically narrowed the attack surface, and as a side effect we have also limited the places into which a back door could be inserted.
Where it would have to go
As noted above, Admiral Rogers appears to be advocating a key escrow system: cryptographic tools would use strong encryption with strong keys, but the government would hold a copy of the keys. His proposal of requiring multiple entities to unlock the escrowed key does make it harder to steal those keys from the government, but it does not stop this from being a key escrow system.
Even if we were fully confident that those keys would be stored safely and would only be used appropriately, the question of security architecture remains. Let’s look at 1Password for an example:
When you create a new vault (or even a new item) in 1Password, 1Password running on your machine generates random cryptographic keys. We at AgileBits never have the opportunity to see those keys. Nor does anyone else. This is an example of what I meant when I said above that great security design places all of the secret-holding components under the user’s control. The creation and handling of those keys happens only on your machine.
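To make the shape of that design concrete, here is a simplified sketch in Python of the general pattern of local key generation and key wrapping. It illustrates the pattern, not 1Password’s actual data format or key derivation, and it again assumes the third-party cryptography package.

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_password(master_password: str, salt: bytes) -> bytes:
    """Derive a wrapping key from the Master Password, on the device."""
    kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(master_password.encode()))

# 1. A fresh random vault key is generated locally; no one else ever sees it.
vault_key = Fernet.generate_key()

# 2. Items are encrypted with the vault key.
encrypted_item = Fernet(vault_key).encrypt(b"login: molly / hunter2")

# 3. The vault key itself is kept only in wrapped (encrypted) form, under a
#    key derived from the Master Password, so everything at rest is ciphertext.
salt = os.urandom(16)
wrapped_vault_key = Fernet(key_from_password("my Master Password", salt)).encrypt(vault_key)

# Unlocking reverses the wrapping, again entirely on the user's machine.
recovered_vault_key = Fernet(key_from_password("my Master Password", salt)).decrypt(wrapped_vault_key)
print(Fernet(recovered_vault_key).decrypt(encrypted_item))
```

Every key in that sketch is created and used on the user’s machine; the only things that could ever be stored or synced elsewhere are the encrypted item and the wrapped vault key, and neither is usable without the Master Password.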
Under 1Password’s design, the only way to comply with key escrow would be to send a copy of the key to some government-controlled entity when the key is created, or after you have entered your Master Password (when those keys are decrypted on your machine). Roughly speaking, 1Password would have to send your Master Password (or keys derived from it) to some government entity. But because these exist only on your system (and not ours), it would have to be your system that sends that information.
You can control what is transmitted from your computer. Sure, it may take technical skill to do so, but this is something that neither we nor a government can prevent you from doing. Indeed, in the unlikely event that we are ever required to produce a version of 1Password or Knox that would transmit your data to another system, we would display a huge notice to you about what is happening.
There might be more reliable ways in which we could (be forced to) comply with a key escrow scheme, but each of them involves weakening the overall security architecture of 1Password. It would mean that our software would only work if someone other than you had access to your keys. That is not how we build things.
This example should illustrate that the strongest security architectures cannot reliably participate in key escrow. This means that it is often a mistake to frame the discussion as a “clash between privacy and security.” We weaken many kinds of security when we weaken privacy protections.
Law enforcement is right to want a back door
The October 2014 Washington Post editorial that I keep referencing is absolutely correct when it says:
Law enforcement officials deserve to be heard in their recent warnings about the impact of next-generation encryption technology on smartphones, such as Apple’s new iPhone.
Those voices do need to be heard. So let’s start with them.
Law enforcement officials quite rightly want to be able to actually get at data that they have the legal right to acquire.
Suppose that Molly, one of my dogs, is suspected of kidnapping, torturing, and even eating rabbits. (Molly, I’m sorry if some of my social media posts have implicated you in an FBI investigation, but your behavior was suspicious.) Also suppose that the FBI has good reason to suspect that Molly may even be taking pictures of her victims. The FBI should have little difficulty obtaining a warrant to confiscate and search Molly’s iPhone. If Molly has set a decent passcode for the device and has not leaked those photos off of her phone, then the FBI will have no means whatsoever (other than compelling Molly to reveal her passcode, which is a whole different set of very confused legal issues in the US) to get the evidence they need to lock Molly up in a crate. More bunnies will suffer and die as a consequence of the security design of iOS and the iPhone.
This isn’t as funny when we switch our example away from Molly and rabbits to the sorts of things that the FBI does investigate. Giving people access to encryption that law enforcement can’t break will mean that some investigations are harder, some never get solved, and some prosecutions will fail. There will be times when some very bad dogs get away with their crimes because of this.
It is no surprise that those given the task of fighting crime do not want to encounter encryption that they can’t break. Indeed, if they didn’t seek back doors into such systems they might not be doing their jobs. But this isn’t a question for law enforcement to decide on their own. It is a question for the public and for policy makers.
You can’t always get what you want
Just because something would be useful to law enforcement doesn’t mean that they should have it. There is no doubt that law enforcement would be able to catch more criminals if they weren’t bound by various rules. If they could search any place or any person at any time they wished (instead of being bound by rules about when they can), they would clearly be able to solve and prevent more crimes. That is just one of many examples where we deny law enforcement tools that would obviously be useful to them.
Quite simply, non-tyrannical societies don’t grant law enforcement every power that it would find useful. Instead we make choices based on a whole complex array of factors. Obviously the value of some power is one factor in such a decision, and so it is important to hear from law enforcement about what they would find useful. But that isn’t where the conversation ends; it is where it begins.
Whenever that conversation does take place, it is essential that all the participants understand the nature of the technology: There are some things that we simply can’t do without deeply undermining the security of the systems that we all rely on to keep us safe.