Cryptography in Society
This chapter marks the end of our abstract, purely technical coverage and the start of our look at how cryptography interacts with the real world. First, we’ll discuss cryptography’s impact on society.
Like most technologies, cryptography can be used to benefit society, and to harm society. The availability of strong cryptography to the general public has been controversial since the beginning, and continues to be today.
This debate is one part of a much broader debate about data privacy and data protection: cryptography is an indispensable tool in achieving those goals. Here, we’ll concentrate on the debate surrounding cryptography itself.
The Debate
The central tension in society’s use of cryptography is between people who want to use it to keep information secure, and governments that want access to people’s information for their own purposes.
People and organizations want to use cryptography to keep information secure for a wide variety of reasons. Some information could cause financial damage if it weren’t secure, such as bank account details or trade secrets. Some could be embarrassing, like messages with an intimate partner. Some could endanger a person’s safety, like physical location data. Some information, like passwords, could allow an adversary to impersonate the true owner of the passwords, which could cause a lot of different problems. And some information, like plans to commit a crime, would put a person in legal jeopardy.
Governments want access to the private information of people and organizations, generally for intelligence and law enforcement purposes. Their argument is that private information is a valuable resource for thwarting crime, or for investigating crimes that have already happened. The underlying point is that private information can help governments protect the societies they govern, by giving them insight into the activities of criminals.
In short, governments want to protect society from criminals. People and organizations want to protect themselves from criminals, and possibly from governments.
This latter point is very important: we have deliberately not defined what constitutes a crime, so it is not clear at all that “protecting society from criminals” is an unalloyed moral good. Some governments criminalize things that don’t harm anyone, or that most of their citizens believe should be legal.
Cryptography can allow terrorists to plan attacks, unseen by intelligence agencies. It can also allow people to learn about historical events, unseen by an authoritarian government that would rather wipe out all knowledge of them. Both of those acts could be considered crimes in certain contexts, but they have different moral value.
This means that it’s not possible to assign a moral value to cryptography in the abstract, or even to a general concept like “people using cryptography to avoid legal trouble”. Context matters and specifics matter.
However, the existence and availability of cryptography is universal, and independent of context and specifics. It’s impossible to make cryptography available for morally good purposes only. The debate is about balancing the beneficial uses of cryptography against the harmful ones.
The Crypto Wars
In the aftermath of World War II, cryptography on computers was used exclusively by governments and militaries. The United States government classified it as a munition, treating it as military technology with no commercial application. As such, it was shared with allied governments, but no one else. The concern of the US, and specifically the NSA, was that if strong cryptography fell into enemy hands (which, at the time, meant the Soviet Union and its allies), the US would lose a crucial advantage in information security.
The release of DES changed everything. Recall that the design of DES had been vetted by the NSA, and approved for government use. It’s noteworthy that the publication of DES likely wasn’t what the NSA intended. (In the quote below, NBS is the National Bureau of Standards, the predecessor of NIST; NBS ran the DES standardization process.) From Bruce Schneier’s Applied Cryptography[Sch96b]:
Never before had an NSA-evaluated algorithm been made public. This was probably the result of a misunderstanding between NSA and NBS. The NSA thought DES was hardware-only. The standard mandated a hardware implementation, but NBS published enough details so that people could write DES software. Off the record, NSA has characterized DES as one of their biggest mistakes. If they knew the details would be released so that people could write software, they would never have agreed to it.
The NSA’s actions in the following decades back up this story. But with the publication of DES, the cat was out of the bag: the world was set on a relatively straight path to where we are today, with strong cryptography everywhere.
Before we could get there, though, we went through a strange period which came to be called the Crypto Wars. The NSA and its allied intelligence agencies stopped caring as much about keeping cryptography out of the hands of rival governments (the Soviet Union had developed its own strong cryptography anyway), and instead became concerned with keeping it out of the hands of individuals.
Export Controls
Until 1996, the US government continued to classify cryptographic hardware and software products as munitions, restricting their export. This even included descriptions of algorithms, not just software usable by end users.
In the 1990s, if you went to download any software that was developed in the US and included cryptography (notably including web browsers, once SSL was introduced), you had to choose between the “US edition” and the “international edition”. If you were not located in the US, you were only allowed to download the “international edition”, which could only use 40-bit keys for encryption. (Even by 1990s standards, 40 bits was a joke: recovering a 40-bit key by brute force was within the reach of individuals, and a well-funded entity like the NSA could have brute-forced one in a few seconds.) In fact, there wasn’t a difference in the actual encryption used by “US” and “international” web browsers: both would encrypt with a 128-bit key, but the 40-bit-restricted international editions would then reveal 88 bits of it.
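To get a feel for just how weak 40 bits is (and how hopeless a full 128-bit key is by comparison), here’s a back-of-the-envelope sketch in Python. The guesses-per-second figures are illustrative assumptions, not measurements of any real hardware.

```python
# Back-of-the-envelope: how long does exhausting a keyspace take?
# The guess rates below are illustrative assumptions, not measurements.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def brute_force_years(key_bits: int, guesses_per_second: float) -> float:
    """Worst-case time to try every possible key, in years."""
    return 2**key_bits / guesses_per_second / SECONDS_PER_YEAR

# A 1990s hobbyist with a few machines (assumed: 10^6 guesses/second).
print(brute_force_years(40, 1e6) * 365, "days for 40 bits")                    # ~12.7 days

# A large, well-funded agency (assumed: 10^11 guesses/second).
print(brute_force_years(40, 1e11) * SECONDS_PER_YEAR, "seconds for 40 bits")   # ~11 seconds

# The same agency against a full 128-bit key.
print(brute_force_years(128, 1e11), "years for 128 bits")                      # ~10^20 years
```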
As that situation makes clear, the concept of “exporting” software or algorithm descriptions is, on some level, absurd — more so, the more widespread Internet access became. Once someone outside the US possessed the description of an algorithm, they could copy and redistribute it all over the world for essentially no cost, as well as write and distribute code for it. There is a huge, salient difference between transporting physical objects across an international border, and making electrical pulses on a metal wire that spans an international border.
The inherent absurdity meant that this state of affairs was unsustainable. In 1995, Daniel Bernstein (yes, him again: the designer of ChaCha20, Poly1305, EdDSA, and Curve25519, though all of that was still in his future when he brought this case) brought a court case, Bernstein v. USDOJ (United States Department of Justice), challenging the export restrictions that prevented him from publishing a paper and source code for an encryption system he had developed. A court found that the restrictions violated the First Amendment to the United States Constitution, which protects the right to free speech[cou99]. The US government relaxed the restrictions in response.
Key Escrow
In 1993, the NSA introduced the Clipper chip. It was a hardware encryption device on a chip, meant to be installed in cell phones to encrypt their voice and data traffic. It used a classified, NSA-designed symmetric cipher called Skipjack, with 80-bit keys, along with Diffie-Hellman for key agreement.
The twist was that Clipper implemented a scheme called key escrow. After agreeing on a key (a session key) with the counterparty, a Clipper chip would encrypt the session key with a unit key, unique to that chip and burned into its memory. It would transmit the encrypted session key plus a hash of it (collectively called the LEAF, for “law enforcement access field”), along with session-key-encrypted traffic. Meanwhile, the US federal government had a database with the unit keys of every Clipper chip manufactured, which gave them the capability to decrypt the traffic of any Clipper device.
There wasn’t enough political support to mandate Clipper adoption, which meant that the initiative was doomed from the start. The academic cryptography community was strongly opposed[A+97]. There was no compelling reason for cell phone manufacturers to use Clipper: customers didn’t want it, and foreign manufacturers couldn’t use it and so would have been preferred by anyone who didn’t want a Clipper phone. The program was shut down in 1996, and Skipjack was declassified two years later.
As a side note, it turned out that the scheme used in Clipper chips had a somewhat laughable vulnerability. In 1994, Matt Blaze discovered[Bla94] that the hash of the LEAF in Clipper transmissions was only 16 bits long (i.e. only 65,536 possible values). A very quick brute-force collision attack could generate a different LEAF with a matching hash, thus rendering the LEAF useless for law enforcement access. This meant that a device could easily use Clipper to achieve non-escrowed encryption.
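Here’s a toy sketch of the idea behind Blaze’s attack. It does not use the real LEAF format or checksum (Skipjack and the actual checksum construction were classified at the time); a truncated HMAC stands in for the check that the receiving chip performs. The point is simply that a 16-bit check can be defeated by blind guessing in about 65,536 attempts.

```python
# Toy sketch of the idea behind Blaze's 1994 attack. The LEAF's integrity
# check was only 16 bits, so an attacker who can't compute the check
# directly can still get a garbage LEAF accepted by blind guessing:
# each random candidate passes with probability 1/65,536.
# (Truncated HMAC-SHA-256 stands in for the real, then-classified checksum.)
import hmac
import os
from hashlib import sha256

FAMILY_KEY = os.urandom(16)  # secret inside every Clipper chip; unknown to the attacker

def leaf_checksum(payload: bytes) -> bytes:
    """The 16-bit integrity tag the receiving chip recomputes and checks."""
    return hmac.new(FAMILY_KEY, payload, sha256).digest()[:2]

def chip_accepts(leaf: bytes) -> bool:
    payload, tag = leaf[:-2], leaf[-2:]
    return hmac.compare_digest(leaf_checksum(payload), tag)

# Forge: random garbage payload + random tag, retried until the chip accepts.
attempts = 0
while True:
    attempts += 1
    bogus_leaf = os.urandom(22) + os.urandom(2)   # no escrowed key in here at all
    if chip_accepts(bogus_leaf):
        break

print(f"bogus LEAF accepted after {attempts} attempts")  # ~65,536 expected: trivial
```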
This attempt at widely introducing key escrow was a shift in strategy by the NSA. Recognizing the usefulness of strong cryptography for general use — especially for e-commerce, which was getting started around that time — the NSA seemed to give up on keeping it away from the public altogether. Rather, they thought they could maintain their ability to surveil communications if they could spread cryptography that they could decrypt.
The Present Day
There’s no universally-agreed end point of the Crypto Wars. One good candidate, though, is the AES competition, which started in 1997. Here was the US government, including the NSA, working in the open and in good faith to get a secure block cipher developed and released to the world, free of export restrictions and other encumbrances. It was an acknowledgement that strong cryptography was here to stay, that it would become more and more accessible to the general public, and that trying to hold back the tide would do more harm than good.
However, it was by no means the end of governments’ contentious relationship with cryptography. As more and more communications took place electronically, people’s desire to secure them, and governments’ desire to surveil them, both grew stronger than ever. The conflict continues, but it has moved to new arenas.
Making Cryptography Irrelevant
With strong cryptography becoming more widespread, governments shifted their focus away from restricting or breaking cryptography (the NSA’s promotion of the probably-backdoored DUAL_EC_DRBG being a notable exception), towards circumventing it instead. They have been able to take advantage of the centralization of electronic communications.
Most electronic communications are not directly from person to person. Rather, they are intermediated by service providers, which are usually corporations. Service providers can include email services, messaging services (like cell phone carriers and messaging apps), voice and video chat services (like Zoom or Skype), file storage services (like Dropbox or Apple’s iCloud Drive), and others.
If you send an email to your friend, it does not go directly from your computer to your friend’s. Instead, it goes to your email provider first, which is probably either Google or Microsoft. (Your Northeastern email is hosted by Microsoft.) The email may be encrypted as it traverses the Internet from your computer to your email provider’s server, but the email provider then decrypts it and has full access to the plaintext contents.
This means that if a government agency, such as a police department, wants access to your emails, they don’t need to break any encryption. Instead, they use the legal system to compel your email provider to give them access, with the threat of legal penalties if they don’t comply. Governments get this kind of access from a few large service providers, and then they can surveil a huge amount of the world’s communications, without having to worry about cryptography at all.
This puts those service providers in a difficult position. They are forced to take actions that they don’t necessarily agree with, and to make decisions they are not well-equipped to make. For the most part, they are amenable to helping law enforcement agencies with legitimate investigations and harm-prevention efforts, but some demands for access don’t fit that description.
Some demands come from authoritarian governments whose policies the service providers don’t agree with. (For example, many governments persecute LGBTQ+ people, and some try to surveil private communications to find them.) Some come from attackers impersonating law enforcement[Kre22] — a type of social engineering attack. Some aren’t justified by the circumstances: there isn’t good evidence of a crime or imminent harm.
Furthermore, because this scheme sometimes requires rapid response (e.g. to prevent imminent harm), service providers may have to decide whether or not to comply within a very short time and under high pressure; mistakes are inevitable. In addition, the employees who handle demands for access are high-value targets for attackers, because of their expansive permissions. Attackers can attempt to compromise those employees’ credentials by using malware, social engineering, or simply stealing their laptops.
The extent of service providers’ cooperation with governments was brought to widespread public attention by the 2013 publication of NSA-internal documents that former NSA contractor Edward Snowden had given to journalists. The resulting backlash caused some companies to take action to reduce their exposure.
End-to-End Encryption
Service providers countered these demands for access by making it impossible for themselves to access their users’ data in some cases. They did this with end-to-end encryption, sometimes abbreviated to E2EE. Let’s return to the email example above. If you encrypt your message to your friend using a key that only you and your friend know, and put the ciphertext in an email, the email provider will see only that ciphertext, and they won’t be able to decrypt it. (You could even run Diffie-Hellman over email to agree on that key.)
When you chat with someone using an E2EE messaging app, the app on your phone and the app on their phone run a key agreement protocol with each other, and then encrypt the chat with the resulting key. The key does not leave your phones; in particular, the app provider’s servers never see it. The encrypted messages (and the messages of the key agreement protocol) still go through the provider’s servers, but the provider can’t decrypt them. Thus, the provider is unable to comply with law enforcement demands for access.
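Here is a minimal sketch of that core idea, using the pyca/cryptography package: the two endpoints exchange X25519 public keys (which the provider can relay and inspect without learning anything useful), derive the same symmetric key, and encrypt with it. Real protocols like the Signal Protocol add much more, including authentication of the public keys and ratcheting for forward secrecy; treat this as an illustration only.

```python
# Minimal sketch of E2EE key agreement, using the pyca/cryptography package.
# Real messaging protocols (e.g. the Signal Protocol) authenticate the public
# keys and ratchet the keys over time; this only shows the core idea.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each phone generates its own key pair; only the public halves travel
# through the provider's servers.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()
alice_pub, bob_pub = alice_priv.public_key(), bob_priv.public_key()

def derive_key(my_priv, their_pub) -> bytes:
    shared = my_priv.exchange(their_pub)          # Diffie-Hellman shared secret
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"toy-e2ee-chat").derive(shared)

alice_key = derive_key(alice_priv, bob_pub)
bob_key = derive_key(bob_priv, alice_pub)
assert alice_key == bob_key                       # same key on both phones

# Alice encrypts; the provider relays only ciphertext; Bob decrypts.
nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(alice_key).encrypt(nonce, b"see you at 6", None)
print(ChaCha20Poly1305(bob_key).decrypt(nonce, ciphertext, None))
```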
Some major service providers have implemented E2EE in their products. The most common E2EE protocol is the Signal Protocol, which was first used in the messaging app Signal. (Signal is a favorite among security researchers, because it collects very minimal data about its users.) The popular messaging products WhatsApp, Facebook Messenger, Apple iMessage, and Google Messages all support E2EE; all of them except iMessage use the Signal Protocol. (Note that this does not apply to SMS, only to messages that go via iMessage, or between two Android users in the case of Google Messages. The security of SMS is absolutely terrible, and I advise against using it for anything remotely sensitive, including multi-factor authentication.) Zoom Communications spent years falsely claiming that Zoom supported E2EE, but now it actually does[ftc20].
The concept of E2EE doesn’t just apply to messaging applications, but also to data storage. For example, if you put an unencrypted file in Dropbox or iCloud, the respective companies can read its contents. If you encrypt a file yourself before putting it in Dropbox or iCloud, the companies can’t read its contents — all they will see is ciphertext. The “ends” of “end-to-end encryption”, in this case, are the same person.
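As a sketch of what “encrypt it yourself first” might look like, here is a minimal example using AES-GCM from the pyca/cryptography package. The data is in-memory stand-in bytes, and the genuinely hard part in practice, managing and backing up the key, is waved away.

```python
# Sketch of encrypting data yourself before handing it to a storage provider:
# all the provider ever stores is ciphertext. Key management (where the key
# lives, how it's backed up) is the hard problem and is not addressed here.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)    # stays on your own devices only

plaintext = b"contents of some sensitive file (tax returns, say)"
nonce = os.urandom(12)
blob = nonce + AESGCM(key).encrypt(nonce, plaintext, None)
# `blob` is what gets uploaded to Dropbox/iCloud; it's opaque to the provider.

# Later, on any device that holds the key:
nonce, ciphertext = blob[:12], blob[12:]
assert AESGCM(key).decrypt(nonce, ciphertext, None) == plaintext
```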
E2EE can similarly apply to data storage on a single device. In the wake of the Snowden leaks, Apple updated iOS so that iOS devices’ contents would be encrypted in such a way that no one, including Apple, could decrypt them without the device’s passcode. This feature gave rise to the biggest cryptography-related news story of recent years.
Apple and the FBI
The single-device form of E2EE made headlines after the December 2015 mass shooting in San Bernardino, California. After the attack was over, the Federal Bureau of Investigation (FBI) found an iPhone belonging to one of the shooters, and suspected that there was information on it that would be useful to the investigation.
The phone’s contents were encrypted, as a feature of iOS. The decryption key was encrypted using another key, derived from the user’s four-digit passcode plus an identifier unique to the device. Therefore, the phone’s contents were unrecoverable without the correct passcode. To hinder brute-force attacks, the phone’s firmware enforced a delay of a few seconds to a few minutes between passcode entry attempts, and would erase the decryption key completely if an incorrect passcode was entered ten times in a row.
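Some rough arithmetic shows why those countermeasures, rather than the encryption itself, were the real obstacle. The per-guess timing below is an assumption for illustration, and the real inter-attempt delay escalates with consecutive failures rather than staying fixed.

```python
# Rough arithmetic on the iPhone passcode protections described above.
# The per-guess timings are assumptions for illustration, not Apple's specs.
PASSCODES = 10_000                      # four decimal digits

# Without countermeasures (what the FBI wanted): guesses limited only by the
# hardware key-derivation time, assumed here to be ~80 ms per attempt.
no_countermeasures_hours = PASSCODES * 0.080 / 3600
print(f"{no_countermeasures_hours:.2f} hours to try every passcode")    # ~0.22 hours

# With an enforced delay of, say, 5 minutes per attempt (the real delay
# escalates with consecutive failures), exhausting the space takes:
with_delay_days = PASSCODES * 5 * 60 / 86_400
print(f"{with_delay_days:.0f} days with a 5-minute delay per attempt")  # ~35 days

# And with "erase after 10 wrong attempts", blind guessing succeeds with
# probability only 10 in 10,000 before the key is destroyed:
print(f"{10 / PASSCODES:.1%} chance of guessing before the data is gone")
```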
The FBI demanded that Apple write an alternative version of the firmware that didn’t have those features, which would allow the FBI to brute-force the four-digit passcode quickly and without risking losing the data forever. They needed Apple to do this because iPhone firmware updates must have a cryptographic signature, and only Apple has the signing key. Apple refused, and the matter went to court for a months-long back-and-forth[Sch16].
Note that there was no urgency here. The shooter who owned the phone had been killed during the attack, and no sign of any related attacks was seen during the ensuing months of legal wrangling. There was no suggestion that the phone’s contents were needed to prevent imminent harm. The FBI insisted that their concern was just this one phone, but if Apple had done what the FBI wanted, it would have been equally applicable to all iPhones. The FBI likely was aiming to set this precedent, and Apple wanted to avoid it[Sch16].
In the end, the FBI dropped the case because they found a security firm that had a non-cryptographic exploit that could unlock the phone[NA21]. The matter went undecided in court, so no precedent was created. It’s likely, though, that a situation like this will arise again. (By the way, it turned out that there was no useful information on the phone.)
Backdoors
Governments, by and large, do not like end-to-end encryption. The most extreme stance we’ve seen came in the aftermath of the 2015 mass shooting in Paris, France, when then-UK Prime Minister David Cameron made a speech[Kra15] suggesting that he wanted to ban messaging apps that could not offer access to law enforcement: that is, to ban E2EE. This didn’t result in any real laws or regulations, but the idea is out there. Some writers call this “Crypto Wars II” or similar[Doc16].
In general, there are two major problems with the debate surrounding E2EE.
First, government officials often make fear-mongering, misleading, or outright false arguments in support of their position. Here are some examples.
Fear-mongering: At a September 2014 news conference[SC14], then-FBI Director James Comey criticized Apple’s recent announcement of stronger security for iPhones. Emphasis added:
[Comey] cited kidnapping cases, in which exploiting the contents of a seized phone could lead to finding a victim, and predicted there would be moments when parents would come to him “with tears in their eyes, look at me and say, ‘What do you mean you can’t’” decode the contents of a phone.
Note that he is not citing real examples; he is speaking hypothetically.
Regarding the same Apple announcement, the Chicago Police Department’s chief of detectives said[TM14]: “Apple will become the phone of choice for the pedophile.”
Anti-encryption arguments commonly invoke images of heinous criminals — terrorists, kidnappers, pedophiles, drug dealers — to create an emotional response in audiences. Because of their emotional power, these kinds of arguments can shut down legitimate counterarguments. They can distract from a lack of concrete evidence behind the argument, and from consideration of the upsides of strong security for individuals.
This is not to dismiss or minimize the harm that such criminals cause to society, which can be immense. But, in evaluating these arguments, try to avoid letting the seriousness of those harms substitute for the soundness of the argument.
Misleading: Here is part of a 2017 speech by then-US Deputy Attorney General Rod Rosenstein[Ros17]:
Encrypted communications that cannot be intercepted and locked devices that cannot be opened are law-free zones that permit criminals and terrorists to operate without detection by police and without accountability by judges and juries.
[…]
If companies are permitted to create law-free zones for their customers, citizens should understand the consequences. When police cannot access evidence, crime cannot be solved. Criminals cannot be stopped and punished.
The implication is that end-to-end encryption inevitably leads to criminals running rampant, doing crimes with impunity. This does not follow; it is not true that surveilling communications is the only way to solve crimes and catch criminals.
Outright false: In October 2014, FBI Director Comey delivered a speech[Com14] entirely about this topic. In it, he described four cases in which data from cell phones was supposedly instrumental in solving the crime or convicting the criminal. His descriptions are emotionally weighted in the classic fear-mongering tradition, so I won’t quote them directly.
After the speech, news outlet The Intercept looked into each of the examples Comey gave, and found that cell phone data was actually irrelevant in all of them[FVC14]. As one example, Comey implied that GPS data from a phone was instrumental in convicting a driver responsible for a fatal hit-and-run; however, the driver had readily confessed to the hit-and-run before his phone came into play.
Furthermore, from the Intercept article: “In a question-and-answer session after his speech, Comey both denied trying to use scare stories to make his point – and admitted that he had launched a nationwide search for better ones, to no avail.”
The second problem with the debate is that it is often not grounded in technological reality. Among non-experts, there is a persistent attitude that it must be possible to create strong cryptography that also allows law enforcement access, but researchers just aren’t trying hard enough, or are being deliberately uncooperative.
In the aftermath of the Paris mass shooting, then-US President Barack Obama said so[Yad15]:
The president on Friday argued there must be a technical way to keep information private, but ensure that police and spies can listen in when a court approves.
Here is another part of Rod Rosenstein’s 2017 speech[Ros17]. He called the idea “responsible encryption”:
Responsible encryption is achievable. Responsible encryption can involve effective, secure encryption that allows access only with judicial authorization. Such encryption already exists. Examples include the central management of security keys and operating system updates; the scanning of content, like your e-mails, for advertising purposes; the simulcast of messages to multiple destinations at once; and key recovery when a user forgets the password to decrypt a laptop.
This, however, is not what advocates mean by “secure encryption”. He is simply describing the pre-E2EE status quo.
Here is another part of James Comey’s 2014 speech[Com14]:
There is a misconception that building a lawful intercept solution into a system requires a so-called “back door,” one that foreign adversaries and hackers may try to exploit.
But that isn’t true. We aren’t seeking a back-door approach. We want to use the front door, with clarity and transparency, and with clear guidance provided by law. We are completely comfortable with court orders and legal process — front doors that provide the evidence and information we need to investigate crime and prevent terrorist attacks.
Here he is responding to an argument different from the one advocates are making. He seems to be defining a “back door” as a cryptanalytic weakness. However, the debate stopped being about that essentially as soon as AES was introduced in 2001. What Comey refers to as a “front door” — something that responds to court orders and legal process — would, contra what he says, be exploitable by foreign adversaries and hackers. If someone can access something in response to a court order from the US, then they can access it in response to a court order from an authoritarian government, or a fake court order, or no court order at all.
A Washington Post editorial from October 2014 encapsulates this kind of technological ignorance perfectly[wap14]:
How to resolve this? A police “back door” for all smartphones is undesirable – a back door can and will be exploited by bad guys, too. However, with all their wizardry, perhaps Apple and Google could invent a kind of secure golden key they would retain and use only when a court has approved a search warrant.
By this point in the course, your understanding of the underlying technology should make it clear to you that there is no way around encryption that can only be used by the “good guys”[Doc16]. It’s not a matter of insufficient wizardry; it is mathematically impossible.
A cipher has no concept of who is using it. If it has a cryptanalytic weakness, anyone can find and exploit it. If it has a small key size, anyone with enough time and computing power can brute-force it. If an external party has copies of encryption keys, attackers can target those. If there are employees of a service provider who have access to users’ plaintext data, attackers can target them.
The resolution of the Apple-FBI dispute is an example. The security firm that found the exploit sold it to the FBI, but they could have easily sold it to a more malicious party, or done something malicious with it themselves. That door wasn’t open exclusively for the FBI; it just so happens that the people who found the door first (as far as the world knows) told the FBI about it first.
Apple’s Client-Side Scanning
An especially interesting front in the E2EE war was Apple’s attempt to placate the anti-E2EE arguments before introducing E2EE for iCloud. They did so by making a relatively credible attempt at building a backdoor using a novel cryptographic technique, although it was still fraught with problems.
In August 2021, Apple announced several features to be released in an upcoming version of iOS, aimed at protecting child safety online. The one we’re looking at here was that users’ photo libraries would be scanned on-device (meaning that the scanning was done entirely on the iOS device; it did not entail sending anything over a network) for child sexual abuse material, or CSAM for short.
First, we have to introduce a general set of techniques called image fingerprinting, which takes an image as input and produces a small chunk of data, the fingerprint, as output. Like cryptographic hashing, you can’t reconstruct an image given only its fingerprint, and images that show different things should have different fingerprints. But unlike cryptographic hashing, fingerprinting is designed to ignore small modifications to an image: resizing, slight cropping, color alterations, etc. If you put an image and a 5% shrunken version of the same image through SHA-256, you’ll get different hashes. If you do the same thing with an image fingerprinting algorithm, you should get the same fingerprint. For this reason, image fingerprinting is also sometimes called perceptual hashing.
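Here is a toy illustration of that difference, using a simple “average hash” on a synthetic 8×8 grayscale image. Real perceptual hashes like PhotoDNA and NeuralHash are far more sophisticated; this just shows the robustness-to-small-changes property that cryptographic hashes deliberately lack.

```python
# Toy "average hash" fingerprint vs. a cryptographic hash, on a synthetic
# 8x8 grayscale image. Real perceptual hashes (PhotoDNA, NeuralHash) are far
# more sophisticated; this only illustrates the basic idea.
from hashlib import sha256

def average_hash(pixels):
    """64-bit fingerprint: one bit per pixel, set if the pixel is above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

# A synthetic 8x8 image: bright left half, dark right half.
image = [[200] * 4 + [30] * 4 for _ in range(8)]
# The "same" image, slightly brightened (as if re-encoded or color-adjusted).
brighter = [[min(p + 10, 255) for p in row] for row in image]

print(average_hash(image) == average_hash(brighter))         # True: same fingerprint
raw = bytes(p for row in image for p in row)
raw2 = bytes(p for row in brighter for p in row)
print(sha256(raw).hexdigest() == sha256(raw2).hexdigest())   # False: hashes differ
```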
In the US, it’s illegal to possess CSAM. The National Center for Missing and Exploited Children (NCMEC) maintains a database of known CSAM images — NCMEC is given a special exception to the law. They distribute fingerprints of this database to companies that host large platforms with user-generated content (Google, Meta, etc.), so that those companies can monitor uploaded content for the presence of known CSAM images without actually possessing the images. Apple was a notable exception: they did not proactively monitor users’ data in iCloud for known CSAM, although they did comply with law enforcement requests for access to specific users’ data.
This new iOS functionality was going to work as follows[app21d]:
New versions of iOS would ship with fingerprints of the NCMEC database.
Every photo that would be synced to iCloud would be fingerprinted and matched against the fingerprint database, on the device. For each photo, the result of this matching, plus a reduced version of the photo, would be encrypted in a specific way to produce a “safety voucher”. (None of this would happen for photos not synced to iCloud.)
The safety vouchers would be uploaded to iCloud along with the photos. (The fact that they could have just done the matching server-side, but went to all this trouble instead, is why security researchers believe that this work was laying the foundation for E2EE for iCloud. That’s the only way it makes any sense.)
If more than a certain number of safety vouchers contained positive matches, Apple’s servers would be able to combine the safety vouchers into a key that could decrypt the vouchers that contained positive matches. Those would be manually reviewed and possibly reported to NCMEC/law enforcement. Vouchers that didn’t contain positive matches would be impossible to decrypt.
If fewer than that number of vouchers contained positive matches, Apple’s servers would be cryptographically unable to do anything with them. They wouldn’t be able to decrypt them, or even learn how many matches there were.
The actual details are a little more complicated, but that’s the general idea. It rests on a relatively new cryptographic technique called private set intersection (PSI), in which two parties each have some set of items, and they want to compute which items they have in common without revealing the entire sets to each other[BBM+21]. It also involves a technique called threshold secret sharing to require a combination of a specific number of positive-match safety vouchers in order to get a usable decryption key; we’ll see more about secret sharing in the next chapter. Apple’s summary document is worth a read; it’s written very accessibly but without glossing over important details. The cryptography involved in this scheme is credible, and was designed by established cryptographers; in that respect, it’s better than most attempts at technological backdoors.
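To make the threshold part concrete, here is a toy sketch of Shamir secret sharing, the flavor of threshold secret sharing we’ll see in the next chapter. This is a pedagogical stand-in, not Apple’s actual construction; the parameters (a threshold of 30, for instance) are illustrative. The point is only that any t shares recover the key, while t−1 shares reveal essentially nothing.

```python
# Toy Shamir secret sharing over a prime field, illustrating the "threshold"
# idea: a key is split so that any t shares reconstruct it, but t-1 shares
# give no useful information. Pedagogical sketch only (note: `random` is not
# a cryptographically secure RNG), and not Apple's actual construction.
import random

P = 2**127 - 1          # a Mersenne prime, used as the field modulus

def make_shares(secret: int, t: int, n: int):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):  # evaluate the random degree-(t-1) polynomial at x
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = random.randrange(P)                 # stands in for the voucher-decryption key
shares = make_shares(key, t=30, n=1000)   # e.g. a threshold of 30 matches

print(reconstruct(random.sample(shares, 30)) == key)   # True: threshold reached
print(reconstruct(random.sample(shares, 29)) == key)   # almost surely False
```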
When this was announced, however, there was a considerable backlash from security and privacy advocates[app21a]. Their major concern was that Apple would eventually be pressured to expand this on-device scanning from CSAM to other types of content[GS21]. There is no technical reason why the same kind of fingerprinting and matching could not be done with, say, terrorist imagery, or LGBTQ+ imagery, or imagery that criticizes an oppressive government.
Apple’s response[app21c] was that they simply wouldn’t do that:
Could governments force Apple to add non-CSAM images to the hash list?
No. Apple would refuse such demands and our system has been designed to prevent that from happening. Apple’s CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that have been identified by experts at NCMEC and other child safety groups. The set of image hashes used for matching are from known, existing images of CSAM and only contains entries that were independently submitted by two or more child safety organizations operating in separate sovereign jurisdictions.
[…]
We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it.
Critics — even a government body, the German parliament — did not find this reassuring[Owe21]. The problem is that all of Apple’s proposed safeguards against overreach are policies, not technologies, and any policy can be overridden with enough pressure. Think specifically about mainland China, where the government engages in what people in the West consider heavy-handed censorship, but where Apple makes a lot of its revenue and, even more importantly, does most of its manufacturing. There was no evidence that Apple was actively under pressure to expand the scope of the scanning, but the potential was extremely obvious.
Furthermore, there is no technical reason why the scan couldn’t be done even on photos not synced to iCloud. There was concern that Apple would face pressure to do so, since turning off syncing would be an easy and obvious way for people with CSAM on their devices to avoid detection[PPK21]. We can’t know for sure, but presumably Apple’s justification for the restriction was that scanning data that wasn’t destined for Apple servers was a significant line to cross, and too much of a privacy violation.
There was also concern about the fingerprinting algorithm. Whereas companies like Google and Meta use an older, well-analyzed system called PhotoDNA, Apple was going to use a novel system that they called NeuralHash. NeuralHash is proprietary, but security researchers reverse-engineered it from code in iOS, and generated collisions of unrelated images[Cor21]. This opens up the possibility of false positive matches.
After the backlash, Apple announced that they would “delay” the launch of this functionality, and they removed most mentions of it from their website. Finally, in December 2022, Apple released iOS 16.2, which offers E2EE for most types of iCloud data, including photos, under the name “Advanced Data Protection”. There is no on-device scanning.
What Now?
This debate will continue for years. Governments continue to warn of dire consequences and call for solutions[fiv20]. If a new version of the Apple-FBI dispute arises, it may end up being decided in court, with wide-ranging consequences for society.
One factor that has likely kept the temperature of the debate down is that there are still plenty of electronic communication channels that are not end-to-end encrypted, such as email, phone, SMS, and cloud file storage like Dropbox. Law enforcement can still easily use these for investigations. If that ever changes significantly, expect the debate to flare up again.
Unfortunately, the quality of the debate may not improve. Fear-mongering and technically unfounded arguments are likely to keep appearing, and it can be hard to tell whether a person making them is doing so in bad faith or out of ignorance. The best way to deal with that is to have a solid understanding of the technology, and to be skeptical of hyperbolic claims by default.
Throughout all this, it’s important to keep in mind the many ways in which cryptography benefits society, and the many ways in which people are put at risk by weak cryptography and security[A+15a]. Amid all the discussion of terrorists and criminals, this tends to get lost: without cryptography, the modern Internet would not be possible.
Optional Further Reading
“Body of Secrets” and “The Shadow Factory”, two books by James Bamford about the history of the NSA. The former is about the NSA before 9/11; the latter is after. [link 1], [link 2]
An essay (with plenty of links to further reading) by Bruce Schneier, about the backlash to iPhone encryption. [link]
Another Bruce Schneier post with lots of links to writing about the Apple-FBI case. [link]
“The Moral Character of Cryptographic Work”, by Phillip Rogaway. It’s written primarily for an audience of academic cryptographers, but presents a case for cryptography as a moral good. [link]