IT Security and the Normalization of Deviance

by Bruce Schneier

Professional pilot Ron Rapp has written a fascinating article on a 2014 Gulfstream plane that crashed on takeoff. The accident was 100% human error and entirely preventable — the pilots ignored procedures and checklists and warning signs again and again. Rapp uses it as an example of what systems theorists call the “normalization of deviance,” a term coined by sociologist Diane Vaughan:

Social normalization of deviance means that people within the organization become so much accustomed to a deviant behaviour that they don’t consider it as deviant, despite the fact that they far exceed their own rules for the elementary safety. But it is a complex process with some kind of organizational acceptance. The people outside see the situation as deviant whereas the people inside get accustomed to it and do not. The more they do it, the more they get accustomed. For instance in the Challenger case there were design flaws in the famous “O-rings,” although they considered that by design the O-rings would not be damaged. In fact it happened that they suffered some recurrent damage. The first time the O-rings were damaged the engineers found a solution and decided the space transportation system to be flying with “acceptable risk.” The second time damage occurred, they thought the trouble came from something else. Because in their mind they believed they fixed the newest trouble, they again defined it as an acceptable risk and just kept monitoring the problem. And as they recurrently observed the problem with no consequence they got to the point that flying with the flaw was normal and acceptable. Of course, after the accident, they were shocked and horrified as they saw what they had done.

The point is that normalization of deviance is a gradual process that leads to a situation where unacceptable practices or standards become acceptable, and flagrant violations of procedure become normal — despite the fact that everyone involved knows better.

I think this is a useful term for IT security professionals. I have long said that the fundamental problems in computer security are not about technology; instead, they’re about how we use technology. We have lots of technical tools at our disposal, and if technology alone could secure networks we’d all be in great shape. But, of course, it can’t. Security is fundamentally a human problem, and there are people involved in security every step of the way. We know that people are regularly the weakest link. We have trouble getting people to follow good security practices and not undermine them as soon as they’re inconvenient. Rules are ignored.

As long as the organizational culture turns a blind eye to these practices, the predictable result is insecurity.

None of this is unique to IT. Looking at the healthcare field, John Banja identifies seven factors that contribute to the normalization of deviance:

- The rules are stupid and inefficient!
- Knowledge is imperfect and uneven.
- The work itself, along with new technology, can disrupt work behaviors and rule compliance.
- I’m breaking the rule for the good of my patient!
- The rules don’t apply to me/you can trust me.
- Workers are afraid to speak up.
- Leadership withholding or diluting findings on system problems.

Dan Luu has written about this, too.

I see these same factors again and again in IT, especially in large organizations. We constantly battle this culture, and we’re regularly cleaning up the aftermath of people getting things wrong. The culture of IT relies on single expert individuals, with all the problems that come along with that. And false positives can wear down a team’s diligence, bringing about complacency.

I don’t have any magic solutions here. Banja’s suggestions are good, but general:

- Pay attention to weak signals.
- Resist the urge to be unreasonably optimistic.
- Teach employees how to conduct emotionally uncomfortable conversations.
- System operators need to feel safe in speaking up.
- Realize that oversight and monitoring are never-ending.

The normalization of deviance is something we have to face, especially in areas like incident response where we can’t get people out of the loop. People believe they know better and deliberately ignore procedure, and invariably forget things. Recognizing the problem is the first step toward solving it.

Integrity and Availability Threats

by Bruce Schneier

Cyberthreats are changing. We’re worried about hackers crashing airplanes by hacking into computer networks. We’re worried about hackers remotely disabling cars. We’re worried about manipulated vote counts from electronic voting machines, remote murder through hacked medical devices, and someone hacking an Internet thermostat to turn off the heat and freeze the pipes.

The traditional academic way of thinking about information security is as a triad: confidentiality, integrity, and availability. For years, the security industry has been trying to prevent data theft. Stolen data is used for identity theft and other frauds. It can be embarrassing, as in the Ashley Madison breach. It can be damaging, as in the Sony data theft. It can even be a national security threat, as in the case of the Office of Personnel Management data breach. These are all breaches of privacy and confidentiality.

As bad as these threats are, they seem abstract. It’s been hard to craft public policy around them. But this is all changing. Threats to integrity and availability are much more visceral and much more devastating. And they will spur legislative action in a way that privacy risks never have.

Take one example: driverless cars and smart roads.

We’re heading toward a world where driverless cars will automatically communicate with each other and the roads, automatically taking us where we need to go safely and efficiently. The confidentiality threats are real: Someone who can eavesdrop on those communications can learn where the cars are going and maybe who is inside them. But the integrity threats are much worse.

Someone who can feed the cars false information can potentially cause them to crash into each other or nearby walls. Someone could also disable your car so it can’t start. Or worse, disable the entire system so that no one’s car can start.
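
As a rough illustration of what integrity protection looks like in code, here is a minimal sketch in which every vehicle-to-vehicle message carries an authentication tag that the receiver checks before acting on it. The shared key, the message format, and the “braking ahead” payload are invented for illustration only; real automotive systems would rely on per-device credentials and signed certificates rather than a single hard-coded secret.

```python
import hmac
import hashlib

# Hypothetical shared key, used only to keep the sketch self-contained.
SHARED_KEY = b"example-key-for-illustration-only"

def sign_message(payload: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so receivers can detect tampering."""
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify_message(message: bytes) -> bytes:
    """Return the payload if the tag checks out; reject altered or forged messages."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed: message was altered or forged")
    return payload

genuine = sign_message(b"vehicle 42: braking ahead")
assert verify_message(genuine) == b"vehicle 42: braking ahead"

# An attacker injecting a false report without the key cannot produce a valid tag.
forged = b"vehicle 42: road clear, accelerate" + b"\x00" * 32
try:
    verify_message(forged)
except ValueError as err:
    print(err)  # the forged report is rejected rather than acted on
```

Without a check of this kind, a receiving car has no way to tell a genuine braking alert from an attacker’s fabrication, which is exactly the gap an integrity attack exploits.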

This new rise in integrity and availability threats is a result of the Internet of Things. The objects we own and interact with will all become computerized and connected to the Internet. But it’s actually more complicated than that.

What I’m calling the “World-Sized Web” is a combination of these Internet-enabled things, cloud computing, mobile computing and the pervasiveness that comes from these systems being always on. Together, this means that computers and networks will be much more embedded in our daily lives. Yes, there will be more need for confidentiality, but there is a newfound need to ensure that these systems can’t be subverted to do real damage.

It’s one thing if your smart door lock can be eavesdropped on to learn who is home. It’s another thing entirely if it can be hacked to prevent you from opening your door, or to allow a burglar to open it.

In separate testimonies before different House and Senate committees last year, both the Director of National Intelligence James Clapper and NSA Director Mike Rogers warned of these threats. They both consider them far larger and more important than the confidentiality threat and believe that we are vulnerable to attack.

And once the attacks start doing real damage — once someone dies from a hacked car or medical device, or an entire city’s 911 services go down for a day — there will be a real outcry to do something.

Congress will be forced to act. They might authorize more surveillance. They might authorize more government involvement in private-sector cybersecurity. They might try to ban certain technologies or certain uses. The results won’t be well-thought-out, and they probably won’t mitigate the actual risks. If we’re lucky, they won’t cause even more problems.

I worry that we’re rushing headlong into the World-Sized Web, and not paying enough attention to the new threats that it brings with it. Again and again, we’ve tried to retrofit security in after the fact.

It would be nice if we could do it right from the beginning this time. That’s going to take foresight and planning. The Obama administration just proposed spending $4 billion to advance the engineering of driverless cars.

How about focusing some of that money on the integrity and availability threats from that and similar technologies?

Security vs. Surveillance

by Bruce Schneier

Both the “going dark” metaphor of FBI Director James Comey and the contrasting “golden age of surveillance” metaphor of privacy law professor Peter Swire focus on the value of data to law enforcement. As framed in the media, encryption debates are about whether law enforcement should have surreptitious access to data, or whether companies should be allowed to provide strong encryption to their customers.

It’s a myopic framing that focuses only on one threat — criminals, including domestic terrorists — and the demands of law enforcement and national intelligence. This obscures the most important aspects of the encryption issue: the security it provides against a much wider variety of threats.

Encryption secures our data and communications against eavesdroppers like criminals, foreign governments, and terrorists. We use it every day to hide our cell phone conversations from eavesdroppers, and to hide our Internet purchasing from credit card thieves. Dissidents in China and many other countries use it to avoid arrest. It’s a vital tool for journalists to communicate with their sources, for NGOs to protect their work in repressive countries, and for attorneys to communicate with their clients.

Many technological security failures of today can be traced to failures of encryption. In 2014 and 2015, unnamed hackers — probably the Chinese government — stole 21.5 million personal files of U.S. government employees and others. They wouldn’t have obtained this data if it had been encrypted. Many large-scale criminal data thefts were made either easier or more damaging because data wasn’t encrypted: Target, TJ Maxx, Heartland Payment Systems, and so on. Many countries are eavesdropping on the unencrypted communications of their own citizens, looking for dissidents and other voices they want to silence.
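
The defense at issue here is conceptually simple: if records are encrypted at rest, what a thief exfiltrates is ciphertext rather than usable personal files. The sketch below makes that concrete, assuming the third-party Python cryptography package and an invented record format; it deliberately ignores the hard operational problem of where the key lives.

```python
# A minimal sketch of encrypting records at rest, assuming the third-party
# "cryptography" package (pip install cryptography). The record contents
# below are invented; no real system or schema is being described.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice this would live in a key-management system
cipher = Fernet(key)

record = b"name=Jane Doe;id=000-00-0000;clearance=secret"
stored_blob = cipher.encrypt(record)      # what actually sits on disk

# An attacker who exfiltrates a copy of the database gets only ciphertext.
print(stored_blob[:24], b"...")

# Only someone holding the key can recover the original record.
assert cipher.decrypt(stored_blob) == record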

Adding backdoors will only exacerbate the risks. As technologists, we can’t build an access system that only works for people of a certain citizenship, or with a particular morality, or only in the presence of a specified legal document. If the FBI can eavesdrop on your text messages or get at your computer’s hard drive, so can other governments. So can criminals. So can terrorists. This is not theoretical; again and again, backdoor accesses built for one purpose have been surreptitiously used for another. Vodafone built backdoor access into Greece’s cell phone network for the Greek government; it was used against the Greek government in 2004-2005. Google kept a database of backdoor accesses provided to the U.S. government under CALEA; the Chinese breached that database in 2009.

We’re not being asked to choose between security and privacy. We’re being asked to choose between less security and more security.

This trade-off isn’t new. In the mid-1990s, cryptographers argued that escrowing encryption keys with central authorities would weaken security. In 2011, cybersecurity researcher Susan Landau published her excellent book “Surveillance or Security?” which deftly parsed the details of this trade-off and concluded that security is far more important.

Ubiquitous encryption protects us much more from bulk surveillance than from targeted surveillance. For a variety of technical reasons, computer security is extraordinarily weak. If a sufficiently skilled, funded, and motivated attacker wants into your computer, they’re in. If they’re not, it’s because you’re not high enough on their priority list to bother with. Widespread encryption forces the listener — whether a foreign government, criminal, or terrorist — to target. And this hurts repressive governments much more than it hurts terrorists and criminals.

Of course, criminals and terrorists have used, are using, and will use encryption to hide their planning from the authorities, just as they will use many aspects of society’s capabilities and infrastructure: cars, restaurants, telecommunications. In general, we recognize that such things can be used by both honest and dishonest people. Society thrives nonetheless because the honest so outnumber the dishonest. Compare this with the tactic of secretly poisoning all the food at a restaurant. Yes, we might get lucky and poison a terrorist before he strikes, but we’ll harm all the innocent customers in the process. Weakening encryption for everyone is harmful in exactly the same way.

The Importance of Strong Encryption to Security

by Bruce Schneier

Encryption keeps you safe. Encryption protects your financial details and passwords when you bank online. It protects your cell phone conversations from eavesdroppers. If you encrypt your laptop — and I hope you do — it protects your data if your computer is stolen. It protects our money and our privacy.

Encryption protects the identity of dissidents all over the world. It’s a vital tool to allow journalists to communicate securely with their sources, NGOs to protect their work in repressive countries, and lawyers to communicate privately with their clients. It protects our vital infrastructure: our communications network, the power grid and everything else. And as we move to the Internet of Things with its cars and thermostats and medical devices, all of which can destroy life and property if hacked and misused, encryption will become even more critical to our security.

Security is more than encryption, of course. But encryption is a critical component of security. You use strong encryption every day, and our Internet-laced world would be a far riskier place if you didn’t.

Strong encryption means unbreakable encryption. Any weakness in encryption will be exploited — by hackers, by criminals and by foreign governments. Many of the hacks that make the news can be attributed to weak or — even worse — nonexistent encryption.

The FBI wants the ability to bypass encryption in the course of criminal investigations. This is known as a “backdoor,” because it’s a way to get at the encrypted information that bypasses the normal encryption mechanisms. I am sympathetic to such claims, but as a technologist I can tell you that there is no way to give the FBI that capability without weakening the encryption against all adversaries. This is crucial to understand. I can’t build an access technology that only works with proper legal authorization, or only for people with a particular citizenship or the proper morality. The technology just doesn’t work that way.

If a backdoor exists, then anyone can exploit it. All it takes is knowledge of the backdoor and the capability to exploit it. And while it might temporarily be a secret, it’s a fragile secret. Backdoors are how everyone attacks computer systems.
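
One way to see why that secret is so fragile: cryptographically, an escrowed or backdoor key is not a special “lawful access” object; it is simply another copy of the key, and it decrypts for whoever holds it. Here is a minimal sketch, with the escrow arrangement invented purely for illustration.

```python
# A minimal sketch of why an escrowed key is just another key, again using
# the illustrative "cryptography" package; the escrow arrangement is invented.
from cryptography.fernet import Fernet

user_key = Fernet.generate_key()
escrowed_copy = user_key            # the "backdoor": a second holder of the same key

message = Fernet(user_key).encrypt(b"meet at the usual place at noon")

# The escrow holder can read the message exactly as the owner can...
print(Fernet(escrowed_copy).decrypt(message))

# ...and so can anyone who steals or leaks that copy. The decryption step
# never checks a badge, a citizenship, or a court order; possession of the
# key is the only credential the mathematics recognizes.
stolen_copy = escrowed_copy
print(Fernet(stolen_copy).decrypt(message))
```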

This means that if the FBI can eavesdrop on your conversations or get into your computers without your consent, so can cybercriminals. So can the Chinese. So can terrorists. You might not care if the Chinese government is inside your computer, but lots of dissidents do. As do the many Americans who use computers to administer our critical infrastructure. Backdoors weaken us against all sorts of threats.

Either we build encryption systems to keep everyone secure, or we build them to leave everybody vulnerable.

Even a highly sophisticated backdoor that could only be exploited by nations like the United States and China today will leave us vulnerable to cybercriminals tomorrow. That’s just the way technology works: things become easier, cheaper, more widely accessible. Give the FBI the ability to hack into a cell phone today, and tomorrow you’ll hear reports that a criminal group used that same ability to hack into our power grid.

The FBI paints this as a trade-off between security and privacy. It’s not. It’s a trade-off between more security and less security. Our national security needs strong encryption. I wish I could give the good guys the access they want without also giving the bad guys access, but I can’t. If the FBI gets its way and forces companies to weaken encryption, all of us — our data, our networks, our infrastructure, our society — will be at risk.
