Face-off

Do data security breach notification laws work?

Security experts Bruce Schneier & Marcus Ranum debate the impact of state data breach notification laws.


POINT by Marcus Ranum

THERE'S AN OLD SAYING, "Sometimes things have to get a lot worse before they can get better." If that's true, then breach notification laws offer the chance of eventual improvements in security, years hence.

For now? They're a huge distraction that has more to do with butt-covering and paperwork than improving systems security.

Somehow, the security world has managed to ignore the effect voluntary (?) notification and notification laws have had in other fields-namely, none. We regularly get bank disclosure statements, stock plan announcements, HIPAA disclosures, etc.-and they all go immediately in the wastebasket, unread. When I got my personal information breach notification from the Department of Veterans Affairs, it went in the trash too.

"Your personal information has been disclosed...yadda, yadda, yadda"- annoying stuff that's my responsibility to deal with because someone, someplace else, didn't handle data about me responsibly.We are deluged with fineprinted disclosures and warnings, and eventually they're all as empty of meaning as the Department of Homeland Security's color-coded terrorism threat warning level.

Aside from causing numbness in customers' minds, breach notification laws don't actually do anything to encourage good behavior; they just make bad behavior more obvious and expensive. The theory, I suppose, is that businesses will improve their security out of fear of losing customers due to a breach. There are three problems with this theory:

  • Most customers seem to assume that if one bank/brokerage/hospital/whatever can't keep its data secure, it's likely that none of them can, and there's zero incentive to switch.
  • It's already too late. You might be able to motivate a customer to switch providers before there is a problem, but after there's a problem, they're going to be more likely to spend their time calling in fraud alerts and looking at their bank statements than complicating things further by switching providers.
  • It assumes there is actually a free market. My Social Security number was leaked by the U.S. government. As much as I'd like to fire them, I can't.

All I see breach notification laws doing is informing customers that they need to pay attention to their horses after they've left the barn via an unlocked door in someone else's barn. Not to over-stretch an analogy, but if you let my horse out of your barn, it's your problem to catch him safely, and if anything bad happens to him while he's gone walkabout, it's your responsibility. What these data breach laws are really saying to the consumer is "our mistake is your problem and we're bending over backwards to make sure you know that...it's your problem."

We know that's silly.

But breach notification laws encourage businesses and government agencies to worry about entirely the wrong thing-they should be worrying about the barn door. Most importantly, it shouldn't be the customer's problem.

A lot of personal information is at risk because it is stored in systems that are not well designed to separate information within the organization. Some of us were warning about this back in the late 1980s; it's a bad idea to have your database configured so every secretary and contractor can access any record it contains.

As long as systems are built that way, there will be news stories such as "Bored contractors examine presidential candidates' medical records" or "Customer database sold by ex-employee." This is not rocket science; it's just common sense. I'd rather have my government agencies and commercial providers worrying about how to fix their poorly designed systems than having their lawyers wordsmithing breach notices.

Marcus Ranum is the CSO of Tenable Network Security and is a well-known security technology innovator, teacher and speaker. For more information, visit Ranum.com.

COUNTERPOINT by Bruce Schneier

THERE ARE THREE REASONS for breach notification laws. One, it's common politeness that when you lose something of someone else's, you tell him. The prevailing corporate attitude before the law-"They won't notice, and if they do notice they won't know it's us, so we are better off keeping quiet about the whole thing"-is just wrong. Two, it provides statistics to security researchers as to how pervasive the problem really is. And three, it forces companies to improve their security.

That last point needs a bit of explanation. The problem with companies protecting your data is that it isn't in their financial best interest to do so. That is, the companies are responsible for protecting your data, but bear none of the costs if your data is compromised. You suffer the harm, but you have no control-or even knowledge-of the company's security practices. The idea behind such laws, and how they were sold to legislators, is that they would increase the cost-both in bad publicity and the actual notification-of security breaches, motivating companies to spend more to prevent them. In economic terms, the law reduces the externalities and forces companies to deal with the true costs of these data breaches.

So how has it worked?

Earlier this year, three researchers at the Heinz School of Public Policy and Management at Carnegie Mellon University-Sasha Romanosky, Rahul Telang and Alessandro Acquisti-tried to answer that question. They looked at reported data breaches and rates of identity theft from 2002 to 2007, comparing states with a law to states without one. If these laws had their desired effect, people in states with notification laws should experience fewer incidents of identity theft. The result: not so much. The researchers found data breach notification laws reduced identity theft by just 2 percent on average.

I think there's a combination of things going on. Identity theft is being reported far more today than five years ago, so it's difficult to compare identity theft rates before and after the state laws were enacted. Most identity theft occurs when someone's home or work computer is compromised, not from theft of large corporate databases, so the effect of these laws is small. And most of the security improvements companies made didn't make much of a difference, further reducing the effect of these laws.

The laws rely on public shaming. It's embarrassing to have to admit to a data breach, and companies should be willing to spend to avoid this PR expense. The problem is, in order for this to work well, public shaming needs the cooperation of the press. And there's an attenuation effect going on. The first major breach disclosed under California's law-the first state disclosure law-came in February 2005, when ChoicePoint sold personal data on 145,000 people to criminals. The event was big news, ChoicePoint's stock tanked, and it was shamed into improving its security.

Next, LexisNexis exposed personal data on 300,000 individuals, and then Citigroup lost data on 3.9 million. The law worked; the only reason we knew about these security breaches was because of the law. But the breaches came in increasing numbers, and in larger quantities. Data breach stories felt more like "crying wolf," and soon data breaches were no longer news.

Today, the remaining cost is that of the direct mail campaign to notify customers, which often turns into a marketing opportunity.

I'm still a fan of these laws, if only for the first two reasons I listed. Disclosure is important, but it's not going to solve identity theft. As I've written previously, the reason theft of personal information is common is that the data is valuable once stolen. The way to mitigate the risk of fraud due to impersonation is not to make personal information difficult to steal, it's to make it difficult to use.

Disclosure laws only deal with the economic externality of data owners protecting your personal information. What we really need are laws prohibiting financial institutions from granting credit to someone using your name with only a minimum of authentication.

Bruce Schneier is chief security technology officer of BT Global Services and the author of Schneier on Security. For more information, visit Schneier.com.

This was first published in January 2009.