How much technological assistance should the government be able to compel in an investigation, and at what risk to privacy? On March 1, both Apple executives and the director of the Federal Bureau of Investigation testified about this question before the House Judiciary Committee. The hearing continued the unfolding drama of Apple’s objection to a court order in the investigation of the San Bernardino shootings that commanded the company to produce a custom version of the iPhone operating system with its security protections disabled, so that the F.B.I. could hack into the shooter’s phone.
The underlying dilemma—how to balance the government’s search and surveillance powers against the limits imposed by modern encryption technology—has been with us for a while, and this case will not be the last to raise it. But because it involves a mass shooting on American soil in which the shooters proclaimed allegiance to the Islamic State, this case presents the starkest contrast yet between privacy and national security.
Hard cases make bad law, however, and outrage combined with fear for safety makes even worse law. We should step back from the exigencies of a terrorist threat to consider the best policy going forward. Effectively unbreakable encryption is a reality not because of political, business, or even technological decisions, but because the underlying mathematics makes it possible and a networked world makes it necessary. There will be a case in the future in which no one, not even the phone’s maker, can hack in at all. We should not establish the bad precedent of compelling the production of broken software in order to achieve the very temporary security it might deliver in the present.