Security

Decrypting the Secret to Strong Security

farrellj writes "Cnet has an excellent article in which Whitfield Diffie, who has probably has forgotten more about crypto than 99.9% of us will ever know, explains why secrecy does not equal security. The article also addresses the whole 'open source vs. proprietary software' security issue. A definite *must read* for anyone concerned about security... and that should be everyone!"
This discussion has been archived. No new comments can be posted.

  • Accuracy (Score:2, Funny)

    by Anonymous Coward
    who has probably has forgotten more about crypto than 99.9% of us will ever know

    What's the margin of error on that figure?
    • >>who has probably has forgotten more about crypto than 99.9% of us will ever know
      >What's the margin of error on that figure?

      How about (-0.0,+0.1)?

    • Re:Accuracy (Score:4, Funny)

      by Anonymous Coward on Thursday January 16, 2003 @11:48AM (#5095132)
      It is known that 84.2% of people make up percentages on the spot... I would bet that the rest use outdated data (e.g. older than 1 second).
    • Re:Accuracy (Score:5, Insightful)

      by monkeydo ( 173558 ) on Thursday January 16, 2003 @11:52AM (#5095176) Homepage
      That may be an excellent article for someone who has never been told that secrecy != security, but he didn't really say anything new. He didn't even really support any of his points. It isn't even really an article, more like a blurb. It's like someone at CNET said, "Give us 1,000 words on why OSS is good."
      • Re:Accuracy (Score:5, Insightful)

        by HawkinsD ( 267367 ) on Thursday January 16, 2003 @12:14PM (#5095362)
        Dude, CNet is a general-audience wide-circulation publication. Yes, the geeks that hang out in here all know this stuff already, but my clients, with whom my company must exchange data securely, may not know anything about why open source is good.

        Anything that helps convince my crypto-less clients to use GnuPG [gnupg.org] is very, very helpful.

    • >> who has probably has forgotten more about crypto than 99.9% of us will ever know
      > What's the margin of error on that figure?

      I'm more intrigued by the obviously new tense introduced in that sentence. Its expressive possibilities are quite are staggering.
  • FP! ...anyway... (Score:4, Informative)

    by MmmmAqua ( 613624 ) on Thursday January 16, 2003 @11:43AM (#5095095)
    Whitfield Diffie, who has probably has forgotten more about crypto than 99.9% of us will ever know, explains why secrecy does not equal security.

    For an excellent treatment of this important point, that secrecy != security, read Bruce Schneier's "Secrets and Lies: Digital Security in a Networked World".
    It's the best book on the topic available.
    • Re:FP! ...anyway... (Score:3, Interesting)

      by Anonymous Coward
      Also check out the "cryptogram" newsletters that Bruce Schneier writes at counterpane.com. He devotes some of the newsletter to discussing current events/topics and the security involved therein. Very interesting stuff.
    • Whitfield Diffie, who has probably has forgotten more about crypto than 99.9% of us will ever know, explains why secrecy does not equal security.

      And he would tell us all about it if he had a mouth [com.com]

    • Wrong. S&L is the best book to give to your boss to get hir to understand why you want to devote a bit of time to securing your new product instead of releasing it as soon as it's semi-functional. S&L is not a very technical book (not like Applied at all), and parts are chapter-long advertisements for Bruce's new-at-the-time-of-publishing business, but it can be appreciated even by marketdroids and pointyhairs.
      • by MmmmAqua ( 613624 )
        I have to disagree. Secrets and Lies is a great book because it is not technical. It presents clearly the problems and challenges associated with securing a system, and then discusses means to solve the problems and overcome the challenges. It makes you realize that security must be an integral part of a system, not a bolted-on afterthought.

        In discussing these things in a non-technical manner, Schneier gets you (as a developer) to stop thinking about which trendy algorithm or PKI you're going to tack on to your product to call it secure, and start thinking about the security of the system itself. So you use cryptography; so what? What's the point in encrypting your data if you don't also ensure its authenticity and origin? You're using PKI to secure communications; so what? Are you also ensuring the security and integrity of the keys' local storage? Security is a process, not a product, and the biggest problem with purely technical books on cryptography or security (they're not the same thing) is that they give the impression that you can sprinkle their code samples throughout your project and have it be magically secure.

        It's a bit like me reading a book on security and declaring myself an expert because I read a book on security. Knowledge != understanding.
    • Re:FP! ...anyway... (Score:4, Informative)

      by ssimpson ( 133662 ) <slashdot@samsim[ ]n.com ['pso' in gap]> on Thursday January 16, 2003 @12:48PM (#5095710) Homepage

      It's the best book on the topic available.

      Actually, I beg to differ. Security Engineering by Dr Ross Anderson is IMHO a far more rigorous treatment of this subject. Details are here [cam.ac.uk]. It's even just as easy to read as Schneier's book... Of course, Bruce is far better at self-marketing.

      I am looking forward to getting Schneier's new Practical Cryptography book though (here [wiley.com]).

  • by Anonymous Coward on Thursday January 16, 2003 @11:44AM (#5095104)
    I just double ROT-13 everything for maximum protection. It seems to work so far. -- Note this message has been encrypted with double ROT-13; any attempts to understand it will be in violation of the DMCA and will be duly noted.
    • by KDan ( 90353 ) on Thursday January 16, 2003 @11:51AM (#5095163) Homepage
      You fool! As is well known to anyone who follows Microsoft security bulletins (and who knows more about security than Microsoft) you need to use octuple-ROT-13 at least to guarantee good security!

      Daniel
        You could always rot26 it, since that would be twice as secure as rot13.

        OR!

        I always use primes... everyone in cryptology knows you need to use primes. So, you have to use two primes, like rot13 it 5 times, then 3 times. How do you think it's going to work without using primes?

        OR!

        Another way to secure your data is to use rot(prime). I also found that you can rot3 and then rot23 it, or even rot7 and rot19.

        Luckily I didn't do that to this post or else it might have been impossible to ever read.
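
        For anyone who wants to check the joke: a minimal Python sketch (standard library only, names mine) showing why stacking ROT-13s buys you nothing, since applying it twice is the identity.

        ```python
        import codecs

        msg = "Attack at dawn"
        once = codecs.encode(msg, "rot13")    # "Nggnpx ng qnja"
        twice = codecs.encode(once, "rot13")  # back to the plaintext

        # ROT-13 is its own inverse: rot13(rot13(x)) == x,
        # so "double ROT-13" is no encryption at all.
        assert twice == msg
        print(once, "->", twice)
        ```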
    • Can anyone from a non-DMCA country crack his ROT-13 and translate? I'd love to know what this guy said.
  • Security (Score:3, Insightful)

    by Alcohol Fueled ( 603402 ) on Thursday January 16, 2003 @11:45AM (#5095106) Homepage
    "In fact, auditing the programs on which an enterprise depends for its own security is a natural function of the enterprise's own information-security organization."

    To me, that says that making sure the programs used for a company's network security actually work and protect the network is a job for the company's own security people. Too bad it seems that a lot of companies lack the protection that is supposed to be a "natural function" of the company's network/data security personnel.

  • by Chocolate Teapot ( 639869 ) on Thursday January 16, 2003 @11:46AM (#5095115) Homepage Journal
    The secret to strong security: less reliance on secrets
    I have a couple of rottweilers and make no secret of it. Wanna try some social engineering on them?
  • ... unless a woman enters the loop!
  • random eyes (Score:5, Insightful)

    by oliverthered ( 187439 ) <{moc.liamtoh} {ta} {derehtrevilo}> on Thursday January 16, 2003 @11:47AM (#5095125) Journal
    Whilst this isn't quite the "random eyes" sense of the article:

    OSS does need proper audit and change tracking.
    I've looked through quite a bit of OSS, and I've fixed a few bugs,
    but apart from a patch there's no real way to track what code I thought needed attention, what was good and what was a mess.

    Patches are good for tracking maturity/stability if used well: a section of the code that hasn't been patched for a while is either very stable or needs looking at.
    • Wouldn't bugzilla and other similar bug tracking systems fit into this? If you read some code and think that it needs attention, you raise a bug, and this will either track it until it's fixed, or record a reason why it doesn't need fixing.
  • Then again... (Score:4, Interesting)

    by KDan ( 90353 ) on Thursday January 16, 2003 @11:48AM (#5095131) Homepage
    One of his statements begs a question. Diffie says: "A secret that cannot be readily changed should be regarded as a vulnerability."

    Yet asymmetric crypto (which I believe was publicised by Diffie and Helman (sp?) first) relies on one secret (the private key) being kept very very securely. Not only that, but if asymmetric crypto is to be any use, the secret should be kept for a fairly long time, as long as a signature needs to be valid. If you're going to use asymmetric crypto for legal purposes, to sign stuff, for instance, then the secret cannot be easily changed (unless there's some sort of central repository of keys that actually authenticates you properly when you ask to change your key, but even that is a bit dodgy).

    Is it just me or does Diffie's statement, in a generalised form, kind of nullify the usefulness of asymmetric crypto? Or maybe I've missed the point...

    Daniel
    • Re:Then again... (Score:3, Informative)

      by Anonymous Coward
      You missed the point...

      Everybody can know the RSA algorithm; it's no secret. If everybody knows the code then the "good guys" and the "bad guys" can look at it. So, if in all these years nobody from the "good guys" has found a flaw in it, it is almost surely safe.

      Now imagine a crypto algorithm that is kept secret. There are fewer eyes looking at it. The "good guys" don't waste much time reverse-engineering it, but the "bad guys" do. So the probability of a "bad guy" finding a flaw before the "good guys" is much bigger.

      The secret is in the key, not the algorithm. Keys are easily changed; algorithms are not.
    • Re:Then again... (Score:3, Interesting)

      by schon ( 31600 )
      One of his statements begs a question. Diffie says: "A secret that cannot be readily changed should be regarded as a vulnerability."

      Yes, and this is true.

      asymmetric crypto (which I believe was publicised by Diffie and Helman (sp?) first) relies on one secret (the private key) being kept very very securely.

      And this has what to do with the above statement?

      You said it yourself: the private key being secret.

      In any properly designed system, the key will be easy to change.

      If you design (or use) a system, and can't change the key easily, then yes, it's a vulnerability.

      Solution? Make the keys easy to change.

      does Diffie's statement, in a generalised form, kind of nullify the usefulness of asymmetric crypto?

      No, not unless you use asymmetric crypto improperly.
    • Re:Then again... (Score:5, Informative)

      by R.Caley ( 126968 ) on Thursday January 16, 2003 @12:15PM (#5095365)
      If you're going to use asymmetric crypto for legal purposes, to sign stuff, for instance, then the secret cannot be easily changed (unless there's some sort of central repository of keys that actually authenticates you properly when you ask to change your key, but even that is a bit dodgy).

      I don't think it's quite that bad. Imagine you are maintaining a repository of signed documents (e.g. security patches for an OS). You sign these with a private key and make sure the public key is widely advertised, so people can check that your documents have not been compromised.

      Now, assume your private key is compromised. This is bad but not the end of civilisation as we know it. You can make sure the world knows not to trust that key, at which point it is as if your repository had never existed, and you are starting from scratch. You would need to get your documents back from a trusted archive (you did take backups, didn't you? :-)) and sign them with a new key pair. You are back in business as soon as the new public key has been received and verified by enough trustworthy people.

      So, loss of the secret is a big pain in the arse, but not disastrous. Just how painful it is depends on how well you have planned, e.g. having that trusted archive, having channels to quickly disavow your compromised key, and having the network of widely trusted people who know how to check that your new key really came from you.

      In a legally signed document scenario, you might arrange for an electronic notary to annotate your document with the date you signed it and then sign the annotated document. Then people could tell whether the document was signed before your key was compromised, and a fraudster would need to get at both your secret and that of the notary.

    • The nub of the key/algorithm business is not necessarily the speed with which it can be changed but the issue of control. As a cryptography user I have no control over the secrecy or otherwise of the algorithm, but when I generate a key pair I am in control of the secrecy of the key.

      Dunstan
    • If your secret (private key) becomes known, sure, you now have the cost of creating a new key, revoking your old key, and making sure the trusted depository has both. Nor does this eliminate the risk that, in the interim, others trusted documents signed with your old key by your nemesis (or failed to verify that the key had been revoked). But this cost is nowhere near the cost of designing a replacement algorithm, were we using one that depended on secrecy to avoid being compromised. Not only would there be the cost of redesigning, but also the cost of having no reliable system in the interim. Instead, just one entity (you, if your secret key gets out) incurs the costs. This is also why keys should come with a set expiration date, so that those who trust them won't extend their trust too far.

      It's 2003. Have you changed all your passwords, yet? Have you created new SSH keys and removed all trust of the old ones?

    • Let's keep reading, shall we?
      If you depend on a secret for your security, what do you do when the secret is discovered? If it is easy to change, like a cryptographic key, you do so.
      People aren't as committed to their keys as they are to their algorithms or their operating systems. So secret keys are better than secret algorithms or secret OSes. That's all he's saying.

      And, if the situations you listed are accurate, and those secrets really are hard to change, then yes, I think Mr. Diffie would agree with you that those systems are vulnerable.

    • Shared Secret (Score:3, Informative)

      by emarkp ( 67813 )
      If you qualify the statement as shared secret then it's pretty much correct. A private key (in a public/private pair) is never held by anyone other than the owner, nor is it necessary to transmit it in any way. And he can keep better track of it than anyone else can (or at least, he should).
    • Re:Then again... (Score:3, Informative)

      by rsdio ( 156261 )
      Actually, Diffie's greatest invention in the field of public-key cryptography -- the Diffie-Hellman key exchange -- does not require secrets to be kept for long periods of time, which is one of the coolest things about the algorithm.

      Diffie-Hellman key exchange relies on two secrets between the two people who are communicating (or three for three people, and so on), and these secrets are nothing but large, random integers. Since these integers don't have to have any specific properties (unlike the key pairs in RSA) they can be thrown away at the end of the session, changed every hour, and so on. In the context of cryptographic algorithms, Diffie's statement is backed up by his inventions.

      See: http://www.apocalypse.org/pub/u/seven/diffie.html
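
      To make that concrete, here is a toy sketch of the exchange in Python; the tiny numbers are purely illustrative (real deployments use vetted primes of thousands of bits).

      ```python
      import random

      p = 23  # public prime modulus (toy-sized)
      g = 5   # public generator

      a = random.randrange(1, p - 1)  # Alice's ephemeral secret
      b = random.randrange(1, p - 1)  # Bob's ephemeral secret

      A = pow(g, a, p)  # Alice sends g^a mod p in the clear
      B = pow(g, b, p)  # Bob sends g^b mod p in the clear

      # Each side combines the other's public value with its own secret;
      # both arrive at g^(a*b) mod p without ever transmitting a or b.
      assert pow(B, a, p) == pow(A, b, p)

      # a and b can now be thrown away, which is exactly the point above:
      # the secrets are short-lived and trivially replaced.
      ```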

  • by MarvinMouse ( 323641 ) on Thursday January 16, 2003 @11:50AM (#5095152) Homepage Journal
    Diffie is definitely the guy to be talking about this, considering that a main form of key exchange is called Diffie-Hellman.

    But, nonetheless, it's silly that people don't know this inherently. A secure system is only as secure as its weakest point. If that point is compromised and cannot be easily fixed or repaired, the system is useless.

    Depending on the secrecy of the code, or "security through obscurity", is useless. Anyone who tells you otherwise is a quack or is trying to sell you something and doesn't want to do all the work necessary to do the proper job.

    If you want a secure system, you have to instantly assume that the system, code, and key will eventually be completely compromised, and then you can begin to think about it: now, if any of these were compromised, how can I fix the problem? The current solution is to reset the keys, and using modern mathematics (most of which was developed by Diffie) you can do this securely.

    Now, the only problem that remains with modern cryptography is if the factoring problem is solved _and_ the elliptic curve problem is solved efficiently; then modern crypto becomes useless, and we are back to square one.

    That said, Quantum Cryptography has some potential as it provides a mathematically verifiable form of perfect cryptography, since it is one time pads. It just currently cannot be done over long enough distances to be completely effective. When the technical/engineering details are solved for QC, then crypto is guaranteed secure, assuming no one compromises your system directly (human error).

    Dependence on Security through Obscurity is bad, incredibly bad, and I hope anyone programming security software out there will realize that, and begin to use proper cryptographic techniques.

    ** I am going to write a couple of journal articles soon reviewing the various techniques for those who are interested. **
    • Quantum Cryptography has some potential as it provides a mathematically verifiable form of perfect cryptography, since it is one time pads.

      Quantum cryptography solves one specific problem: to share (or, strictly speaking, expand) a secret over a distance. This secret can be a one-time pad.

      However, sharing a secret over a distance is just one building block of a cryptosystem. There are many others it doesn't help with, e.g. sharing an initial key, or digital signatures.
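
      For a concrete picture of what that shared secret gets spent on, here is a minimal one-time-pad sketch in Python (standard library only); the perfect-secrecy guarantee holds only if the pad is truly random, as long as the message, and never reused.

      ```python
      import secrets

      msg = b"meet at the usual place"
      pad = secrets.token_bytes(len(msg))          # random pad, same length as the message

      ct = bytes(m ^ k for m, k in zip(msg, pad))  # encrypt: XOR with the pad
      pt = bytes(c ^ k for c, k in zip(ct, pad))   # decrypt: XOR with the same pad

      assert pt == msg  # XOR is self-inverse, so the same pad decrypts
      ```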

    • is if the factoring problem is solved

      I thought it was the discrete logarithm problem?

      • The RSA problem is reducible to the factoring problem.

        The discrete logarithm problem is related to the Diffie-Hellman key exchange.

        All of these problems, though, are closely related hard problems in NP; if one of them ever becomes efficiently solvable, the others will likely fall with it.
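
        A toy Python sketch of that reduction, with tiny textbook numbers (pow(e, -1, phi) needs Python 3.8+): once n is factored, the private exponent falls right out.

        ```python
        p, q = 61, 53             # the secret factors (toy-sized)
        n = p * q                 # public modulus
        e = 17                    # public exponent

        phi = (p - 1) * (q - 1)   # knowing p and q makes phi(n) trivial...
        d = pow(e, -1, phi)       # ...and the private exponent with it

        m = 42
        c = pow(m, e, n)          # encrypt with the public key
        assert pow(c, d, n) == m  # decrypt with the recovered private key
        ```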
    • If you want a secure system, you have to instantly assume that the system, code, and key will eventually be completely compromised, and then you can begin to think about it.

      Yes and no.

      Kerckhoffs' Law is certainly the starting point, and extending that to consider the system's reaction to key compromise is an essential step, but in the real world things are... messier.

      In some cases, for example, it is impossible (or at least not cost-effective) to correct the security defects in a deployed system, and in these cases obscurity is a good choice.

      For example: Consider a smart card system used for reloadable electronic cash transactions. There may be many millions of cards in circulation, and the security of the system as a whole relies to some extent on the ability of each card to keep its keys secret and to perform its operations correctly. Now suppose that the software on this smart card chip contains a defect which will permit an attacker to violate these security assumptions.

      Is it better, for security, to publish the source code or to keep it secret? I maintain that it's better, under real-world assumptions, to keep it secret. Why?

      First, recognize that publishing the code makes it *more* likely that the defect will be discovered. An attacker has a steep uphill climb to discover a defect in this particular code, since he first has to peel apart layers of metal cladding and silicon to get to the ROM to read the object code out of the transistors (and it's designed to make this as difficult as possible) before he can even begin to analyze it. Black box bug-hunting is extremely difficult as well, since the software is paranoid and a few failed transactions will cause it to refuse to operate any more. Keeping the code secret prevents all but the most determined attackers from even looking for holes, much less finding them.

      Second, keep in mind that if a defect is discovered, "fixing" the hole is a very, very expensive proposition. All of those millions of cards must be replaced. If the source is open, the fact that "white hats" have discovered the defect means that it must be assumed that "black hats" have as well. If the source is carefully protected, the fact that finding defects is so much easier for the good guys makes it reasonable in many cases to assume that the bad guys probably have not.

      Third, the fact is that any secure system design worth its salt does *not* under any circumstances place 100% of its faith in the technology. Monitoring the operation of the system and looking for indicators of potential breaks is essential. In the real world, a broken system can often continue to function just fine as long as those who successfully break it can be tracked down and thrown in prison. In fact, *most* of our real-world "security" relies on this notion of detection and deterrence rather than prevention.

      Combine these facts together, and you can see that in this situation it makes more sense to: keep the code and any discovered defects secret (from the world, not from the system operator!); replace the defective devices in a slow, cost-effective trickle; monitor the level of abuse; track down the abusers; and, of course, be ready to shut the whole thing down if the level of abuse becomes intolerable.

      In addition to this, the value of a layer of obscurity on *top* of good security should not be disregarded. This is why, for example, the NSA does not publish the details of the ciphers used to secure US military communications.

      The common error, of course, is to believe that obscurity is security. It absolutely is not. But, when you understand that no real-world system will ever be perfectly secure, you quickly see that the job of any secure system designer is simply to place enough obstacles in the path of an attacker to convince him that he should go find an easier target. With that mind-set, it's very clear that obscurity can often be a useful source of additional obstacles, as long as one is careful not to overestimate the difficulty of penetrating them.

      The current solution is to reset the keys, and using modern mathematics (most of which was developed by Diffie) you can do this securely.

      Only if you have a place to securely store the private keys. Ya still gotta have secrets at some point. (No, don't go off about how classic Diffie-Hellman has no private keys; you still need secrets for authentication, otherwise you're vulnerable to a MITM attack).

  • You may be able to keep the exact workings of the program out of general circulation, but can you prevent the code from being reverse-engineered by serious opponents? Probably not.



    There are just too many ways to reverse engineer something these days... debuggers, decompilers, etc.

    This is why everything should be open source! If everything is put in plain view (and not protected by ridiculous laws and copyrights), then people that use crypto programs are more likely to ensure that they are truly secure... and they are able to help contribute to the program by repairing flaws in the original code.

    If everyone's eyes can see the program, then security can better be kept without an excessive need for an abundance of secrets.

    Just my $0.02...

  • Hum (Score:2, Interesting)

    by RyoSaeba ( 627522 )
    The secret to strong security: less reliance on secrets.
    Now that sounds really like an argument for Open Source, even if he points out that
    A secret that cannot be readily changed should be regarded as a vulnerability
    .
    On the whole, though, apart from those two arguments, the article seems quite hollow IMO, just your usual arguments on both sides... (NOT trying to start a flame war here, just expressing my opinion, with which of course you can disagree ^_-)
  • by Ed Avis ( 5917 ) <ed@membled.com> on Thursday January 16, 2003 @11:56AM (#5095198) Homepage
    I haven't seen anyone (save a few Slashdot trolls) seriously argue that binary-only software is inherently more secure, either in theory or in practice. So at first it sounds like Mr Diffie is setting up a straw man at the beginning of his article. But then you realize that the 'opponents' are not serious arguments but, on the whole, vague FUD wafting about that may be swallowed by less-technical people. So his article is an attempt to explain to the rest of the world what the security industry already knew.
    • by schon ( 31600 ) on Thursday January 16, 2003 @12:54PM (#5095767)
      I haven't seen anyone (save a few Slashdot trolls) seriously argue that binary-only software is inherently more secure, either in theory or in practice.

      Then you must not get out much.

      The Alexis de Tocqueville Institution [businesswire.com] published a white paper (funded by Microsoft) that argues this very point. Do you consider them "slashdot trolls"?

      How about Steve Lipner [landfield.com], manager of Microsoft's security response center? Is he a troll too?

      Hmm, ZDNet has [zdnet.co.uk] another (unnamed this time) source from MS, who claims that too. You're saying that MS's spokespeople troll /.?

      I've also seen company websites (SoftArc comes immediately to mind) that stated (in effect) "we don't release source code because it's more secure that way" - sorry, no link for this one, as they've changed their site... but there is a choice quote on their security page [softarc.com], where they explain that their products are more secure because "connections employ entirely proprietary protocols".

      The thing is that this FUD is spewed about by people who don't know what they're talking about, and believed by others who haven't thought about it too much. "Security through obscurity" makes an intuitive kind of common sense, unless you think about it for a while, or are exposed to the flaws (which aren't as intuitive). It's the same kind of sense that got the DMCA passed.

      Mr. Diffie isn't writing for the security community, but for the people outside the security community, who might be led to believe that obscurity does provide security.
  • by Anonymous Coward on Thursday January 16, 2003 @11:56AM (#5095201)
    "If you depend on a secret for your security, what do you do when the secret is discovered?"

    Doh! That's obvious - Use the DMCA to sue their butts.

  • by HealYourChurchWebSit ( 615198 ) on Thursday January 16, 2003 @11:56AM (#5095211) Homepage


    Perhaps it's just me, but I'm reading between the lines that the issue really may not be Open Source vs. Commercial -- but who has the most to lose, in both intellectual property and in physical harm, due to decryption by ne'er-do-wells.

    I'm also seeing the same message over and over again, with this article, the book review previous [slashdot.org] to this article, and a few other articles [slashdot.org] that indicate that again, it comes down to human factors.

    Again, the question becomes, how do we best secure the nut holding the keyboard?

  • Point taken; yet "trade secrets" still have value in the commercial context. One valid business model is still based on selling stuff made with secret recipes (Coca-Cola, KFC chicken, etc).
  • by The Evil Couch ( 621105 ) on Thursday January 16, 2003 @12:00PM (#5095236) Homepage
    He references codes used by the military in our various wars and notes how we were able to keep them secret from the majority of people, but enemies that were doing their best to break the codes did so on a regular basis.

    While that's true, I'd like to bring up the Navajo code-talkers of WWII. That code was never broken. It was a secret even from most Navajo; knowing Navajo only enabled you to learn the code.

    So basically, we had a secret code, based in a difficult language, that was 100% secure. So, really, it's not that relying on a secret is a weak point in cryptography; it's just that it cannot be the sole means of preventing someone from breaking it. Combining techniques is usually far better than just using one by itself.
    • by DG ( 989 ) on Thursday January 16, 2003 @12:19PM (#5095408) Homepage Journal
      Just because the Navajo code-talker code was never broken, does not mean that it was 100% secure. Not by a long shot.

      The whole enterprise depended on keeping a secret - that the radio traffic was being relayed in Navajo. Had that secret ever been revealed, the code would suddenly have become vulnerable.

      All it would have taken is one Japanese Navajo speaker - not even a fluent one, just one who could recognise that the language being spoken was Navajo - for the system to have been compromised. For example, an effective battlefield counter-measure would be to seek out and kill anybody with a radio that looked like he might be a Navajo. The code-talkers would have been fiendishly difficult to replace....

      Make no mistake about it, that whole programme was a huge gamble, and had the war persisted long enough, it would eventually have been discovered and rendered useless.

      Now for another WWII analogy, consider Enigma. One of the keystones behind its use was that it WAS considered secret and unbreakable - and that proved to be gravely mistaken. The problem with secrets is that the attacker is under no compulsion to reveal to you that he has discovered your secret, and if you continue using a system that hinges on a secret being kept secret when in fact it is no longer secret (and that it is no longer secret is a secret from you!) then you do yourself ENORMOUS harm.

      So don't do that. Assume that secrets will be discovered, and quickly, and so base no security system around any fact that must be kept secret for a long time.

      DG
      • Just because the Navajo code-talker code was never broken, does not mean that it was 100% secure. Not by a long shot.

        I was going to make the same comment, but since you already did, I'll just comment on your comments. :-)

        The whole enterprise depended on keeping a secret - that the radio traffic was being relayed in Navajo. Had that secret ever been revealed, the code would suddenly have become vulnerable.

        And vulnerable does not necessarily mean breakable. It actually relied on more than one secret - that they spoke Navajo, and that you had to be able to interpret an unknown language.

        All it would have taken is one Japanese Navajo speaker - not even a fluent one, just one who could recognise that the language being spoken was Navajo - for the system to have been compromised.

        Again, all they would have been able to do was make a little dent in it. Part of the security was that there were no other known Navajo speakers outside the continental US. That was a big plus. Even if there were, I think they would only have been able to break the code after they could make sense of the messages. It was possible, but not very likely. I think you are right that perhaps eventually it could have been broken. But it was a novel idea that paid off.

        For example, an effective battlefield counter-measure would be to seek out and kill anybody with a radio that looked like he might be a Navajo. The code-talkers would have been fiendishly difficult to replace....

        Well, duh. Another battlefield counter-measure would have been to seek out and kill all of your enemies. :-)

      • The problem with secrets is that the attacker is under no compulsion to reveal to you that he has discovered your secret, and if you continue using a system that hinges on a secret being kept secret when in fact it is no longer secret (and that it is no longer secret is a secret from you!) then you do yourself ENORMOUS harm.

        Poor security is worse than no security at all.
    • It was hardly 100% secure. Compromising it would be as simple as capturing one and applying a blowtorch until the secret (and anything else you want) was revealed.


      You're making a common, dangerous error in believing that because a system was not compromised it was secure. It might have been secure as long as the enemy didn't get their hands on a codetalker, but not otherwise. It was only secure if getting one was impossible.


      It was clever and effective, but not 100% secure. A good lesson to take from that is that your goal is effective, not 100% secure.

  • by zapfie ( 560589 ) on Thursday January 16, 2003 @12:06PM (#5095290)
    My IP is 127.0.0.1. Do your worst.
  • by sporadek ( 639621 ) on Thursday January 16, 2003 @12:07PM (#5095301) Homepage
    A few years ago I worked on a military messaging system and used some of the source code from Schneier's Applied Cryptography to implement the key exchange, among other things. Everything worked great for us, but not long after it got into the field, we kept having sites come up with errors establishing connections.

    The code included a function specifically for a_times_b_mod_c using arbitrarily large numbers, and we used this function in the interest of speed. Unfortunately, there was a bug which caused the function to return a 0 result a little more often than expected (with C being "almost certainly" prime, it should almost never return a 0).

    Fortunately, though, a 0 caused an error, rather than an insecure connection. When we got rid of the special function and instead used the overloaded * and % operators, everything worked fine.

    I know there must have been more than a few eyeballs looking at the code in that function -- including mine -- but a potentially devastating bug snuck through. Heck, I didn't have a clue how that code was supposed to work. It was too mathematically complex for me.

    The moral of the story? I suppose it's just this: the "many eyeballs" theory quickly breaks down in the face of esoteric algorithms.
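
    For what it's worth, here is the safe-but-slower route sketched in Python rather than the original C++ (the point survives translation): lean on the language's built-in arbitrary-precision arithmetic instead of a hand-rolled special case.

    ```python
    def a_times_b_mod_c(a: int, b: int, c: int) -> int:
        # The "overloaded operators" version: correct by construction,
        # since Python integers are arbitrary-precision.
        return (a * b) % c

    a = 2**521 - 1
    b = 2**607 - 1
    c = 2**127 - 1  # a Mersenne prime

    r = a_times_b_mod_c(a, b, c)
    # With c "almost certainly" prime and a, b not multiples of c,
    # the result should never be 0 -- the symptom described above.
    assert 0 < r < c
    ```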

    • The moral of the story? I suppose it's just this: 'the "many eyeballs" theory quickly breaks down in the face of esoteric algorithms'.

      But.. but...

      You found the bug, and now the world at large knows about it. You are a living example of the "many eyeballs" theory in action. You don't *have* to spot the bug merely by eyeballing the code; witnessing it in the wild counts too.
    • The moral of the story? I suppose it's just this:
      the "many eyeballs" theory quickly breaks down in the face of esoteric algorithms.
      The follow-on to this story is that Schneier developed Blowfish for just this reason, as he talks about here [counterpane.com]:
      Use a design that is simple to understand. This will facilitate analysis and increase the confidence in the algorithm. In practice, this means that the algorithm will be a Feistel iterated block cipher.
      I am writing a simple app at home using Blowfish to brush up on my C++ skills, and I am just a lowly mechanical-engineer-turned-programmer.
    • The moral of the story? I suppose it's just this: the "many eyeballs" theory quickly breaks down in the face of esoteric algorithms.


      No, the moral of the story is that you found the error and corrected it, thus solving the problem. Could you have fixed it if you didn't have the source?


      Programmers and designers make mistakes. Programmers and designers will probably always make mistakes. The real issue is how do you find and fix the errors, whether they are based in the code or the algorithm or in the application logic. If you can't see the source, that's just one more obstacle in the way, one more source of noise to work through.

  • by YahoKa ( 577942 ) on Thursday January 16, 2003 @12:08PM (#5095308)
    Haha ... cute :)
    For those of you who don't know, he's the co-inventor of public-key cryptography. Bow to him, because we're not worthy!
  • by airrage ( 514164 ) on Thursday January 16, 2003 @12:12PM (#5095342) Homepage Journal
    While you may or may not agree with the "secrets" part of the article, I have to take some umbrage at the author's argument on closed vs. open source as to its securability.
    "There is probably some truth to the notion that giving programmers access to a piece of software doesn't guarantee they will study it carefully. But there is a group of programmers who can be expected to care deeply: Those who either use the software personally or work for an enterprise that depends on it."
    But that's the problem with the argument, because study does not equal security. To use the automobile analogy further: many people bought and drove Ford Explorers with Firestone tires, and many of them were probably automobile experts, safety experts, physicists; but the "vulnerability" of a tire blowout causing a fatal crash was never revealed by the consumer. In what organization does anyone look at the code and understand it, but furthermore find the vulnerabilities? That argument seems to crop up as the first few paragraphs in security / technical articles and just never seems to pass muster.
    • Yeah, and due to the "open source" nature of the tires, consumers could FIX that "vulnerability" without waiting for Ford to issue "new tires". Think of what would have happened if the tires+wheels were bolted on with secret locked nuts unlockable only by Ford. All the Explorers would have had to be garaged until Ford modified the design of the tires and sent them to service centers where the Explorers would go to have new tires put on. This is essentially the deal with closed source security software.

      What happened instead was, as soon as consumers heard about the vulnerability, they had the option of patching it themselves, namely, going to Tires-R-Us and getting a new set of tires. The argument was not "study => security" but "secrecy != security" and "easy fix => security"

      EnkiduEOT

  • So many bugs (Score:3, Insightful)

    by t_allardyce ( 48447 ) on Thursday January 16, 2003 @12:14PM (#5095360) Journal
    If Microsoft opened up all their software tonight, tomorrow morning every Windows server would be down, every internet-connected desktop would be down; in fact, anything that could be down would be down. Open source software such as Linux is probably on a higher level than closed source, so the majority of bugs that could be found in Linux already have been. For example, if you open fire randomly at a crowd with a machine gun, you'll hit more people in the first few seconds than in the next minute, because after you've taken out the bulk, what you're left with is a few scattered people who are harder to pick off - anyone who plays FPSes will know what I mean.
  • by gentlewizard ( 300741 ) on Thursday January 16, 2003 @12:16PM (#5095377)
    I forget who said this, but there's a real paradox with security: the more you THINK you have, the more risks you will take, and therefore the less safe you are. When you know you are vulnerable, it heightens the senses, focuses your awareness. You're sharper, because you have to be.

    I'm not saying throw the security away, but think about this: relying on a secret can make you complacent, just as Diffie writes. Knowing your code is Open Source and everyone can look at it should help you focus on the real problem, which is that security is a moving target and needs constant evaluation.
  • I agree with WD's theme, but his defense of Open Source has a weak/irrelevant point.

    But all this does not mean that there is no group responsible for the car. At a level different from the mechanic, the manufacturer follows the repair history of each car model, then issues repair advisories and occasionally recalls a model for maintenance if a serious fault is found.

    I think auto-manufacturer responsibility is anchored in legal liability. If the wheels come off, the builder is sued, no matter whether the engineering diagrams are freely available to the car's owner.

    Moreover, just because a program is open-source software does not mean that no one is responsible for it.

    Yes, but it doesn't mean someone is. He's arguing in favour of a (legally liable) vendor.

    As noted by other posters, the basic arguments have been written in more detail by people like Bruce Schneier -- see his Cryptogram [counterpane.com] newsletters for some well-thought-out writing.

    A nice little article, suitable for sharing with less-technical coworkers.

  • by Boss, Pointy Haired ( 537010 ) on Thursday January 16, 2003 @12:17PM (#5095395)
    .."Security through obscurity is no security"..

    Can you explain what a password is if it isn't security through obscurity?

    Consider a website that has on the front page a login box with the prompt "Admin Password:".

    How is that any more secure than a "security through obscurity" approach, whereby the developer has made himself the following admin URL:

    http://www.example.com/3458976394534/admin.html

    Both the password, and the hidden URL are equally hard to guess. Yet people go on about how security through obscurity is no security.

    Is anybody with me on this?
    • Nope. (Score:5, Interesting)

      by DG ( 989 ) on Thursday January 16, 2003 @12:29PM (#5095526) Homepage Journal
      Passwords can be changed, and can be changed quickly. If you discover a password has been compromised, locking down the system is a password change away.

      If you want to be really secure, change your password daily. Or hourly. Or after each transaction.

      But once your obfuscated URL is discovered - and discovering it is trivial - then the secret is out, and what little protection it did provide is lost until you can change the obfuscation.

      For the best example, see the CSS system used on DVD players. That security system hinged on keeping something secret. Once it was discovered, there was no way to put the cat back in the bag without changing the key on everything that needed to be able to read DVDs - and obviously, the MPAA couldn't do that without rendering all the DVD players out there nonfunctional.

      Secrets, as part of a security system, are BAD. They only become acceptable when they can be quickly changed once compromised. If they cannot be changed quickly, they render you more vulnerable than if they were out in the open to begin with.

      DG
    • http://www.example.com/3458976394534/admin.html

      Yeah - and just wait until that gets into Google :) Google might spider a site with public proxy logs, and it gets in that way.

      Wait, that's given me an idea.... :)
    • by schon ( 31600 ) on Thursday January 16, 2003 @01:23PM (#5096004)
      Can you explain what a password is if it isn't security through obscurity?

      *sigh* I hear this all the time, and it's fundamentally flawed logic.

      Obscurity is keeping something a secret that could be found out by some other means.

      A password is a method of authentication - you prove you are authorized to do something because of something you know.

      A properly administered password is not obscurity because the only way to get it is for someone who is authorized to tell you explicitly.

      A password is *not* obscurity - unless you store your passwords in a publicly accessible place, and think that "nobody will think to look there."

      How is that any more secure than a "security through obscurity" approach, whereby the developer has made himself the following admin URL:

      http://www.example.com/3458976394534/admin.html

      Both the password, and the hidden URL are equally hard to guess.


      And this is the perfect example of what I'm talking about.

      They are equally hard to guess, but there is a _huge_ difference between the URL and the password in your example, because the URL can show up in other places (like, say, referrer logs!). If you link to _anything_ in that page that you don't have 100% control over, your URL will leak to the outside world, and your server is compromised.

      Or what about a browser cache? Or URL history? Both methods will make your URL "security" method useless.

      And what if someone looks over your shoulder at the screen? The URL is printed in plain text right in the browser's address bar.
    • It is important not to confuse the scope of the whole idea (keeping X from being accessed) with method 'security' or method 'obscurity'.

      You can achieve security by using a password that is obscure, if your password is part of a system where it is changed frequently and is complicated enough to serve as a barrier. Just because the adjective 'obscure' is used does not mean that keeping X private is done through obscurity.

      The password is obscure in that, with only a handful of characters in a user/pass PAIR, the difficulty of brute-forcing it can be made enormous. That in turn is a secure method of preventing X from being accessed.

      Within the scope the article is talking about, your URL example is obscurity, in that there is no barrier to accessing X once you know in general where the holes are and that the URL exists. Remember, any dorko with unsecured IIS could thwart some of the worm scripts just by not using the same path names the worm assumed were in place. (Don't put your idq files in the scripts directory! Duh!) Renaming folders without closing the hole made the obscurity of the IIS install unique - only that one server had that pathname - so a worm was ineffective against it. That is obscurity, but it still sure as heck is not secure, as anybody who knows the pathname can get right in.

      Though, I think the article missed something by not adding that security can be enhanced by obscurity, and obscurity can be enhanced by security. Using BOTH is the best way to keep the baddies from getting X. That's what the government does: only a few people know (obscure or secret), and those that do have to use security (password, fingerprint or whatever) to access information X.

      The problem is that, to the uninitiated, obscurity looks like and seems just as good as security. After all, who is going to guess that I wrote my PIN on the other card! Rather than noting that the best way to keep a PIN secure is not to use birthdates (be obscure) but also to memorize it and not write it down (secure).
  • quite insightful.. (Score:2, Interesting)

    by mmThe1 ( 213136 )
    "If you depend on a secret for your security, what do you do when the secret is discovered? If it is easy to change, like a cryptographic key, you do so. If it's hard to change, like a cryptographic system or an operating system, you're stuck. You will be vulnerable until you invest the time and money to design another system."

    The author has rightly pinpointed the pivotal dilemma of quite a few software designers. The problem is more about defining boundaries for modules handling security of the system. Do you integrate it strongly with the rest of the system? That creates a problem if a vulnerability is discovered and you have to invest more time and money into taking care of all those 'integration points'. Do you design it as a truly pluggable module and let the system interact with it using a few interfaces? That makes your whole system more transparent (some closed-source companies may whine here) and there may be possibilities of someone spoofing this external interface altogether. A balance is definitely required, but surprisingly most software designs seem to miss this point completely.
  • OSS can be viewed by many eyes.

    But is it?

    Can you be sure that each and every code change is reviewed by competent individuals trained and experienced in security and with a comprehensive knowledge of the architectural issues with the work product? By each and every we include device drivers from every source under the sun that are in the kernel and thus have the ability to do good things or ill.

    Who maintains the security model, the design documents, the overall architecture? Who argues that this code, while it speeds things up wonderfully, violates architectural principles that are important to the security of the entire system? And who can make the decision stick...that security is more important than functionality or speed.

    Yes, OSS could be more secure than most proprietary products by virtue of the quantity of eyes.

    But perhaps it is possible to make a product even more secure by following great developmental practices, ones that are only enforceable in a proprietary world, and by submitting it to peer review by acknowledged experts.

    Compare the assurance requirements contained within the Common Criteria to the practices followed in most OSS product development and maintenance. OSS has some real problems.

    Not that it isn't wonderful ... but security in the OSS world has yet to be proven.
    • OSS can be viewed by many eyes. But is it?

      Depends very much on the popularity of an application. From my own experience, probably only the Top 500 (maybe the Top 1000) get enough feedback to maintain some level of quality.

      On the other hand, every day design flaws and bugs are found in some proprietary applications. The fact that in the 'proprietary world' you could enforce 'great developmental practices', apparently does not mean that it is really done.

      Furthermore, customers look for features, and certainly do not easily perceive the value of good design in code they never see. Therefore, in the 'proprietary world' there is little (or no) incentive to follow 'great developmental practices', whatever these may be. If anywhere at all, I would expect good design in OSS, where return on investment is much less of an issue.

  • VERY WEAK ARTICLE (Score:3, Insightful)

    by huckda ( 398277 ) on Thursday January 16, 2003 @12:25PM (#5095479) Journal
    For someone who is supposed to be an utmost authority in crypto... his article was very lacking in anything that remotely addressed the question at the heading, 'Is open-source software better for security than proprietary software?'

    It addressed secrecy as a form of security... proprietary software is NOT secret software.

    I just feel that someone with his credentials should have been able to come up with some argument or form of support. All in all I wouldn't recommend the article be read at all, for it lacks any insight on the topic it was supposed to address.

  • Passwords (Score:5, Insightful)

    by Virtex ( 2914 ) on Thursday January 16, 2003 @12:26PM (#5095502)
    Passwords can be seen as a secret used for security. The author also mentions cryptographic keys in the same context. He justifies them by saying that because they can be easily changed, they aren't a great detriment to security. I'm not sure I agree. In the past, the most common way to gain unauthorized access to a machine was through weak passwords. And even if you have a strong password, it may be difficult to know if it becomes compromised.

    I've always wished for a system like RSA's SecurID cards. They give you a password that changes every 60 seconds, and you carry around a token that shows the latest password for you. Unfortunately, such technology is priced out of the range of individuals like me.
    • I've always wished for a system like RSA's SecurID cards. They give you a password that changes every 60 seconds, and you carry around a token that shows the latest password for you. Unfortunately, such technology is priced out of the range of individuals like me.

      Sorry, I can't buy the argument that this is "...priced out of the range of individuals...". There are free and low cost systems available that provide this type of security.

      A SecurID card is basically a one-way hash of the date and time along with the serial number of the card.

      The back end is a Kerberos-style system that validates the authenticity of the card, verifies that it belongs to you, and lets the system you are logging into know what rights you have at this time.

      This can be done with PAM and plugins that provide the appropriate features, as well as an authentication module that works off a similar function.

      Because two clocks rarely maintain synchronization over long periods of time, the back end authentication system generates hashes for one or two minutes around the current minute. (Hashing down to the second would be useless, as most of these systems require manual entry of the hash value.) If matches are regularly found to be offset by one or two minutes over some period of time, the back end starts adjusting the time tossed into the hash to reflect the drift relative to the mobile card.

      Building a software-based system for this would be fairly simple, and inexpensive. Building hardware-based versions would not be particularly difficult either, and if you are going to build them in bulk they would be very inexpensive as well.

      Of course, even with this level of security you will want to use passwords and/or passphrases in some combination to deal with the prospect of someone walking off with your MobilKey.

      Good luck.

      -Rusty
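
      For the curious, a minimal Python sketch of the software version described above: an HMAC of the current minute under a shared per-card secret, with a small window to absorb clock drift. The names here are made up for illustration; real SecurID uses its own proprietary algorithm.

      ```python
      import hashlib
      import hmac
      import struct
      import time

      CARD_SECRET = b"per-card seed issued at enrollment"  # hypothetical shared seed

      def code_for(minute: int) -> str:
          # Hash the minute counter under the card's secret, truncate to 6 digits.
          mac = hmac.new(CARD_SECRET, struct.pack(">Q", minute), hashlib.sha256)
          return f"{int.from_bytes(mac.digest()[:4], 'big') % 10**6:06d}"

      def verify(candidate: str, drift: int = 1) -> bool:
          # Accept a window of +/- drift minutes to tolerate clock skew,
          # as described above.
          now = int(time.time()) // 60
          return any(hmac.compare_digest(candidate, code_for(now + d))
                     for d in range(-drift, drift + 1))

      print(verify(code_for(int(time.time()) // 60)))  # True
      ```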
  • The secret to strong security: less reliance on secrets

    The "funny"/scary thing is that the majority of credit card usage/processing falls under this statement.

    Think about it. People shred their CC receipts, they demand secure links to e-commerce sites, they shroud their credit cards and SSNs from prying eyes. Yet you hand your CC to Joe Sixpack at the gas station or the waitress at the restaurant. I remember back in the day when I worked at a service station, I realized even then that I could simply collect people's CC numbers and use them surreptitiously. Yes, there are more safeguards now, but it still is simple to do. Anyone who thinks that their CC is truly secure is fooling themselves.

    So what you do is to make reasonable efforts to keep things secure, but ALWAYS check your statements, and be ready to act if something happens.
    • You're right - it was only a few weeks ago a friend of mine had a similar thing happen to him, although in his case it was a debit card. Apparently the gas station had a little gizmo between the card swiper and the actual debit card unit, so that when the victim swiped their card it kept a local copy of the info. By the time he got home they'd taken more than two thousand from his account.

    • Okay, I'll bite.

      I shred all personal documents (and junk mail and crap) to make it more difficult if someone wanted my personal information. It isn't foolproof, but it does make me a harder target, which is all I want.
      (Two men are in the woods, they run into an enraged bear, one of them takes off running, the other says "what are you doing, you can't outrun the bear". The first replies "The bear? I only have to outrun you")

      Secondly, anyone can copy my CC number; you just need to look at the card, perhaps take a picture. I rely on the fraud protection to protect me.

      It isn't perfect, or even secure, but I think if I put enough barriers up, it might just be troublesome enough for people to avoid.
      • I wasn't saying that you shouldn't do those things, only that people get a false sense of security doing those things. It's the "I have a $500 lock on my front door and a $.50 lock on my windows" thing. I shred too (I have two shredders: one strip-cut for most stuff, one cross-cut for "important" stuff), but I religiously check my statements because I know that there are many other points of weakness.
    • Credit Card fraud, even Credit card fraud online, is dropping, so there is good news (check both visa.com and mastercard.com and see their security/fraud info for proof of this.)

      Having the numbers embossed on the front of the card is simultaneously a relic and a backup for when the Hypercom goes down and the merchant has to run the credit card on the little swipe machine thinger.

      Perhaps in the future Visa/MC will just decide to remove the embossed numbers, or part of the numbers (or better yet, you could have one credit card with no embossed numbers to use with merchants, and another at home with embossed numbers for when you are ordering stuff online and need to know the number).

      As for the form of fraud when they swipe the card through another reader to capture the numbers, other than an encryption system, which I don't have much faith in at this scale, that's what fraud protection is for. :-)

  • I wrote up my view of the article and posted it earlier [xwell.org]. I think that (for obvious reasons [sun.com]) he tends to view things from a cryptography perspective and tends to miss what really happens "on the ground", but hopefully his voice will be influential in such matters among his colleagues.
  • by core plexus ( 599119 ) on Thursday January 16, 2003 @01:01PM (#5095818) Homepage
    Stimulating article, if a bit short. However, and not to try and sharpshoot him, I feel there is one important part missing. He writes: "Look at it this way: A mechanic who checks your brakes is acting to ensure the correct functioning of a system essential to your security."

    To which I would add: I regularly check my brake fluid (and other stuff). However, most people I have seen who are not pilots don't even do a walk-around of their vehicles; they just jump in and go. Certainly I am not proposing each user become a "mechanic", but some basic training would go a long way.

    This little penguin doesn't forget favors [xnewswire.com]

  • Considering the guy co-wrote one of the most important cryptographic algorithms used in most key exchanges.
  • Schneier's Secrets & Lies is indeed excellent.

    For another excellent but more technical book on security I would recommend Building Secure Software [amazon.co.uk].

    Building Secure Software has a foreword by Schneier in which he writes: "Read it; learn from it. And then put its lessons into practice."

    Chapter 4 in the book is "On Open Source and Closed Source".

    A must-own book if you are interested in software security.

"Trust me. I know what I'm doing." -- Sledge Hammer

Working...