The recent huge security breach at Sony caps a bad year for big companies, with breaches at Target, Apple, Home Depot, P.F. Chang's, Neiman Marcus, and no doubt other companies that haven't admitted it yet. Is this the new normal? Is there any hope for our private data? I'm not sure, but here are three observations.

Systems are so complex that nobody understands them

This week Brian Krebs reported on several thousand Hypercom credit card terminals that all stopped working last Sunday. Had they all been hacked? No, they were doing exactly what they’d been programmed to do.

When these terminals send customer transaction data to the bank, the session between the terminal and the bank is protected by a cryptographic certificate similar to the ones used in secure Web sessions (the ones with the lock or the green address bar). Each certificate includes an expiration date. Those terminals are pretty old, and their certificate was issued in 2004 with an expiration date ten years in the future, a date that arrived last Sunday. Expired certificates are no longer valid, whether for web sessions or for credit card terminals. Oops. Setting an expiration date is a reasonable thing to do for a variety of technical reasons, but someone does have to remember to renew the certificate before it runs out.
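As a sketch of the failure mode, here is what such an expiry check looks like in Python. The dates and the helper function are hypothetical, but `ssl.cert_time_to_seconds` is the standard-library parser for the `notAfter` date format that certificates carry:

```python
import ssl

def days_until_expiry(not_after, now):
    """Days remaining before a certificate's notAfter date (hypothetical helper)."""
    expiry = ssl.cert_time_to_seconds(not_after)  # parses e.g. "Dec  7 00:00:00 2014 GMT"
    return (expiry - now) / 86400.0

# A certificate issued in 2004 with a ten-year lifetime, checked a few days
# after the expiration date arrived:
checked = ssl.cert_time_to_seconds("Dec 10 00:00:00 2014 GMT")
remaining = days_until_expiry("Dec  7 00:00:00 2014 GMT", checked)
print(remaining < 0)  # True: the certificate has already expired
```

The check itself is trivial; the hard part, as the next paragraphs show, is that somebody has to be around ten years later to run it.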

Back in 2004, the people who designed the terminals assumed either that the terminals would be replaced by now, or that the vendor would update them with new certificates well in advance of the expiration date. If they made the first assumption, they came pretty close, since those terminals will have to be replaced by October 2015 to handle the long-delayed switch to cards with chips.

The second assumption no doubt seemed reasonable, but during the past decade Hypercom was sold, merged, spun off, and re-merged, so I'd be surprised if there were many (any?) people around now at the current company, Equinox, who remembered that the certificates would expire.

Even if they had remembered, it's not clear how easy it would have been to find all the terminals that needed reprogramming. Hypercom probably sold the terminals to specialist "acquirers" such as Elavon, Vantiv, and First Data, who then sold or leased them to the merchants. There's been plenty of M&A in that business, too, and their records about exactly which merchant has exactly which kind of terminal are unlikely to be in perfect order.

It wouldn’t be technically hard to have the server software with which the terminals communicate look for signatures using certificates that would expire soon, but the acquirers are often just sales agents for banks, so the servers belong to someone else, and the signature-checking software is security-critical, so it’s not something they will change quickly. Again, in principle this could all be made to work, but it’s a lot of moving parts at a lot of different organizations to deal with a problem that sounds extremely obscure and hypothetical until it happens.
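The server-side check described above really is simple in principle. A sketch, assuming a fleet inventory that maps terminal IDs to the `notAfter` dates seen in their handshakes (the IDs and the 90-day warning window are my invention, not anything Hypercom or the acquirers actually run):

```python
import ssl

EXPIRY_WARNING_DAYS = 90  # hypothetical warning threshold

def expiring_soon(terminals, now, window_days=EXPIRY_WARNING_DAYS):
    """Return the IDs of terminals whose certificate expires within
    window_days of `now`.  `terminals` maps a terminal ID to the
    notAfter string observed in that terminal's TLS handshake."""
    cutoff = now + window_days * 86400
    return [tid for tid, not_after in terminals.items()
            if ssl.cert_time_to_seconds(not_after) <= cutoff]

now = ssl.cert_time_to_seconds("Oct  1 00:00:00 2014 GMT")
fleet = {
    "POS-0001": "Dec  7 00:00:00 2014 GMT",  # expires within the window
    "POS-0002": "Jun 30 00:00:00 2018 GMT",  # fine for years
}
print(expiring_soon(fleet, now))  # ['POS-0001']
```

Twenty lines of code; the organizational problem of who owns the server, who owns the inventory, and who is allowed to change security-critical software is the part that doesn't fit in a code block.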

Multiply this kind of chaos by several thousand, and you get the state of corporate computer security. The priority is always to make things work NOW, not to keep systems simple for long-term stability. With all of the different parts, companies frequently make mistakes that are, in retrospect, obviously foolish, such as encrypting data in some but not all of their network traffic.

The incentives are wrong

The Sony breach is different from the others because Sony itself (and its employees) is suffering the pain. For all the others, what leaked was primarily customer data, so the burden falls on the customers to notice bogus transactions, on the customers’ banks to deal with the bogus transactions and issue new cards, and possibly on the merchants whose transactions were charged back. Target, Home Depot, and so forth certainly got bad publicity, and in some cases they may be the target of lawsuits, but no money went out the door of Target et al. to the crooks.

Lacking direct financial losses, the incentives for internal security people to find breaches like these are not compelling. More than once (as discussed in the next section) I’ve given a company direct evidence that they have a security breach, and they’ve simply denied that there could be a problem. Investigating my report involves work, and if it turns out I was right, they look bad for having failed to prevent it. I’m not sure how to realign the incentives, but it’s got to happen.

Banks are just as bad, since their goal is generally more to avoid being sued or being sanctioned by regulators than to minimize fraud. If a bank can show that it was doing the same thing as every other bank (an approach usually called “best practices”), it’s generally off the hook even if those practices are obviously ineffective. This leads to nonsense like those questions asking you about your pet’s favorite color, which have somehow been blessed as a substitute for actual two-factor authentication, which would cost money. (There are banks that do two-factor authentication, but not many in the U.S.)

Again, this good-enough herd mentality is hard to fix. We clearly don’t want a bank to do worse than its peers, but we also don’t want banks all to use security measures that don’t work.

Some companies are a lot better than others

I don’t have any direct data about credit card security, but I have a lot about e-mail address security. Whenever I do business with a company online or sign up for their mailings, I use a unique e-mail address. If I start getting mail to that address from someone else, I know it’s leaked.
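The technique is easy to automate. A sketch using “+” tagging, which many mail providers support (the names and domain here are made up for illustration; in practice I use genuinely unique addresses, since spammers sometimes strip the tag):

```python
def tagged_address(user, domain, vendor):
    """Build a hypothetical per-vendor address using '+' tagging."""
    tag = vendor.lower().replace(" ", "-")
    return "{}+{}@{}".format(user, tag, domain)

def leak_source(recipient):
    """Recover which vendor an address was given to, from its tag."""
    local = recipient.split("@", 1)[0]
    return local.split("+", 1)[1] if "+" in local else None

addr = tagged_address("jane", "example.com", "Wall Street Journal")
print(addr)               # jane+wall-street-journal@example.com
print(leak_source(addr))  # wall-street-journal
```

When spam arrives at one of these addresses, `leak_source` tells you exactly which company leaked it; no guesswork required.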

What I have found is that some companies leak and some don’t. The Wall Street Journal has repeatedly leaked my address to spammers, but the New York Times hasn’t. TD Ameritrade blew off repeated reports (including several from me) that e-mail addresses were leaking, until they found malware running on one of their internal servers. Vanguard hasn’t leaked any of my information. The Economist and Forbes (more than once) have leaked, the Atlantic Monthly hasn’t. There are a lot more, but you get the idea.

Maybe the non-leakers are just lucky, but I think it’s more likely a better security culture. Companies are very skittish about talking about their security practices, usually for good reason, since the questions tend to come when they’ve screwed up. Perhaps highlighting the companies that are succeeding would work better.

(Republished from CAUCE President John Levine’s blog.)