Twenty years ago, when I was writing Accidental Empires, my book about the PC industry, I included near the beginning a little rant about how good engineers were incapable of lying, because their work relied on Terminal A being positive and not negative, and if they lied about such things then nothing would ever work. That was before I learned much about data security, where apparently lying is part of the game. Well, based on recent events at RSA, Lockheed Martin, and other places, I think lying should not be part of the game.

Go read the rest of the article: you will learn something useful and come away with a crystal ball into the future.
Was there a break-in? Was data stolen? Was there an unencrypted database of SecurID seeds and serial numbers? The best we can say is that we don't really know. In some quarters that is supposed to make us feel more secure, because it means the bad guys are equally clueless. Except they aren't: they broke in, they stole data, and they knew what the data was good for, while we (SecurID customers included, it seems) are still mainly in the dark.
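RSA has never published the SecurID token algorithm, but the open HOTP standard (RFC 4226) shows why a stolen seed database would be so damaging: a token is nothing more than a secret seed plus a counter, so whoever holds the seeds can clone every token. A minimal sketch in Python, using HOTP as a public stand-in for the proprietary SecurID scheme:

    import hashlib
    import hmac
    import struct

    def hotp(seed: bytes, counter: int, digits: int = 6) -> str:
        """Derive a one-time code from a shared seed (RFC 4226)."""
        msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
        digest = hmac.new(seed, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F  # dynamic truncation offset
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10**digits).zfill(digits)

    # Anyone holding the seed can generate every code the token will ever show:
    seed = bytes.fromhex("3132333435363738393031323334353637383930")
    print([hotp(seed, c) for c in range(3)])
    # -> ['755224', '287082', '359152'] (the RFC 4226 test vector)

The punchline is that the seed is the only secret in the whole scheme, which is why "was there an unencrypted database of seeds?" is exactly the right question to ask.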
A lot of this is marketing: a combination of "we are invincible" and "be afraid, be very afraid." But a lot of it is also intended to keep us locked in to certain technologies. To this point most data security systems have been proprietary and secret. If an algorithm appears in public, it escaped, was stolen, or was reverse-engineered. Why should such architectural secrecy even be required if those 1024- or 2048-bit keys really would take a thousand years to crack? Isn't the encryption, combined with a hard limit on login attempts, good enough?
Good question.
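The premise, at least, checks out: brute force is hopeless against modern key sizes. A back-of-the-envelope estimate in Python (the guessing rate is a deliberately generous assumption, not a benchmark):

    # Assume a billion machines each testing a billion keys per second,
    # far beyond any real attacker's capability.
    keys_per_second = 1e9 * 1e9              # 10**18 guesses per second
    seconds_per_year = 60 * 60 * 24 * 365

    for bits in (128, 256):
        # On average an attacker searches half the keyspace before hitting the key.
        years = (2**bits / 2) / keys_per_second / seconds_per_year
        print(f"{bits}-bit symmetric key: ~{years:.1e} years on average")

(The 1024- and 2048-bit figures refer to RSA moduli, which are attacked by factoring rather than exhaustive search, but the conclusion is the same: when a system falls, the key length is almost never what failed.)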
Alas, the answer is "no." There are several reasons for this, but the largest by far is that the U.S. government does not want us to have really secure networks. The government is more interested in snooping on the rest of the world's insecure networks. The U.S. consumer can take the occasional security hit, our spy chiefs rationalize, if it means the government can snoop on global traffic.
This is National Security, remember, which means ethical and common sense rules are suspended without question.
RSA, Cisco, Microsoft, and many other companies have allowed the U.S. government to breach their designs. Don't blame the companies, though: if they didn't play along in the U.S., they would go to jail. Build a really good encryption service around 4096-bit RSA keys and watch the Justice Department introduce themselves to you, too.
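The technology, note, is the easy part. Generating such a key takes a few lines with open tools; a minimal sketch, assuming the pyca/cryptography package (pip install cryptography):

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Generate a 4096-bit RSA private key.
    key = rsa.generate_private_key(public_exponent=65537, key_size=4096)

    # Serialize to PEM. NoEncryption() keeps the demo simple; a real
    # service would encrypt the key at rest.
    pem = key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.NoEncryption(),
    )
    print(pem.decode().splitlines()[0])  # -----BEGIN PRIVATE KEY-----

Strong math is free and always has been; what Cringely is describing is permission, not capability.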
The feds are so comfortable in this ethically challenged landscape in large part because they are also the largest single employer… on both sides. One in four U.S. hackers is an FBI informer, according to The Guardian. The FBI and Secret Service have used the threat of prison to create an army of informers among online criminals.
While security dudes tend to speak in terms of black or white hats, it seems to me that nearly all hats are in varying shades of gray.
Here's Cringely's bottom line:
We've created a culture of self-perpetuating paranoia in military-industrial data security by building systems that are deliberately compromised, then arguing that draconian measures are required to defend the holes we've made ourselves. This helps the unquestioned three-letter agencies maintain political power while doing little or nothing to increase national security, and it compromises personal security for all of us.

I say "three cheers for Cringely for calling a spade a spade!"
There is no excuse for bad engineering.
The joke is that national governments have the deep pockets to keep their secrets secret. As Cringely points out, "leaky" security benefits only a government that wants to spy on individuals (at least those not technologically sophisticated enough to protect themselves). Be assured, organized crime and terrorists can afford the necessary level of security (even if Osama bin Laden appears to have ignored the need for it in his Pakistani-protected hideout).