When Apple refused to help the F.B.I. hack into the iPhone of a shooter in last December's attack in San Bernardino, both sides claimed their actions defended the common good. Apple warned that intentionally breaking its own software would put the safety and privacy of millions of its customers at risk. The government said that failing to access the phone and gather more intelligence would put the public at risk. The government also claimed it could be trusted with secret backdoors into software and devices: special access that makes systems, including those of private companies, vulnerable. The tech world was not so optimistic.
In August, Apple was proved right: a highly sensitive toolkit of exploits held by the National Security Agency, tools that take advantage of previously unknown vulnerabilities, was leaked.
To be sure, intelligence gathering and data collection are important tools for law enforcement, but they also require strong checks and balances. The leaked N.S.A. exploits have compromised the security of widely used network equipment found in our offices and schools. The privacy and safety of United States citizens are no longer just in the hands of the N.S.A.; malicious hackers and foreign governments can join the surveillance party. There are few binding guidelines for the use and reporting of newly discovered vulnerabilities and hacks. When the N.S.A., whose mission includes not only intelligence and surveillance but also information security, discovers a new vulnerability, it is not obligated to inform the hardware or software maker. Without strong rules for reporting technological vulnerabilities to the companies that build our hardware and software, the government is failing to truly protect the common good.