
Is Finding Security Holes a Good Idea? 433

ekr writes "A lot of effort goes into finding vulnerabilities in software, but there's no real evidence that it actually improves security. I've been trying to study this problem and the results (pdf) aren't very encouraging. It doesn't look like we're making much of a dent in the overall number of vulnerabilities in the software we use. The paper was presented at the Workshop on Economics and Information Security 2004 and the slides can be found here (pdf)."
This discussion has been archived. No new comments can be posted.

  • Google is teh friend (Score:5, Informative)

    by Mz6 ( 741941 ) * on Friday June 11, 2004 @11:36AM (#9398913) Journal
    Posting a PDF on /. is almost certain server death. Here are Google's HTML versions:

    Is finding security holes a good idea? [64.233.167.104]

    Writing Security Considerations Sections [64.233.167.104]

  • by zoobaby ( 583075 ) on Friday June 11, 2004 @11:38AM (#9398938)
    In order to fix vulnerabilities, you have to find them. However, as soon as they are found and publicized, some script kiddie exploits them. So yes, finding them is a good idea; patches just need to be released and INSTALLED before script kiddies exploit them.
    • by jwthompson2 ( 749521 ) * on Friday June 11, 2004 @11:46AM (#9399059) Homepage
      This is one of the best points the author makes, though. He argues that if automated installation of patches were widely deployed, then the benefits of discovery would increase. The problem lies in the number of systems that remain unpatched and thus exposed. The real problem is not that discovery is not worth the time and money spent, but that it becomes worthless if the patches created are not applied.
      • by aussie_a ( 778472 ) on Friday June 11, 2004 @11:54AM (#9399201) Journal
        In principle, I agree that automatically installing patches is a good thing. But Microsoft has a habit of changing its licenses and installing DRM when people "upgrade" and/or "install patches."

        Also, imagine I have 2 programs. Both automatically install patches. Unpatched they both work fine. But when program #1 is patched, program #2 cannot run at all. Now this will probably be fixed eventually, but in the meantime, I cannot run program #2 at all. If I need both programs, I'm fucked with the system of auto-patches.

        However, when I have a choice, how likely am I to install a patch? Not as likely (due to laziness). So the effectiveness decreases significantly.
        • by mangu ( 126918 ) on Friday June 11, 2004 @12:13PM (#9399474)
          In theory, you are right. In practice, I've been using apt-get for several years and never got in the situation you mention when patching with "stable" releases. Can't say anything about Microsoft patches, though. Never touch that stuff.
          • by Umrick ( 151871 ) on Friday June 11, 2004 @12:36PM (#9399825) Homepage
            Actually I did have this happen once in all my years with a Debian stable production box. It helped move data from a VAX box to be injected into an Oracle DB running under Windows for ERP.

            There was an update to the NFS code to solve a potential exploit, which unfortunately also broke the NFS shares on the VAX side.

            Was easy to revert to the previously "broken" NFS server though.

            That was one time in 5 years of running, though. The number of times an update has borked Windows is much more of a concern.

            Don't even get me started on the lobotomy done on a machine by Mandrake's autoupdate function though.
            • Actually I did have this happen once in all my years with a Debian stable production box

              I once saw a vendor approved patch bring down a financial market running on some very big iron. The amount of money lost in compensation to the traders was many, many times the cost of the hardware involved. Automatic patching was the root cause - except rather than a shell script being to blame, the idiot was there in person, downloading patches and whacking them straight on.

              They failed to do any testing whatsoever b
        • by pmjordan ( 745016 ) on Friday June 11, 2004 @12:20PM (#9399577)
          Enter a patching service, run by, say, a Linux distributor. SuSE's YaST Online Update (YOU) does this very well. Patches are often available the same day, and SuSE guarantees that they won't cause trouble with other installed software and that dependent software is also patched. I'm sure other commercial distributors have similar services, and Debian's stable branch has worked well for me as well.

          Yes, it involves a certain amount of trust, but if you didn't trust anyone, you'd have to write everything yourself. Also, the company's business model depends on the reliability of said patching service, so they do their best to run it well.

          Of course, license changes are evil, but they're unlikely to happen with FOSS. Yet another reason to move away from Microsoft.
        • If Microsoft automatically installs a patch, can they change the license? I mean, no one clicked on 'I agree'.
      • by Ra5pu7in ( 603513 ) <ra5pu7in@@@gmail...com> on Friday June 11, 2004 @11:54AM (#9399213) Journal
        The problem with automated patching is that some of the patches interfere with previously working software. When you manage several hundred computers with specially designed software and a blasted patch to fix a security problem can take the computers down when the software is run, you sure as anything will never let the patch process remain automated. I'd rather test it on a few computers before broadly applying it.
        • by jwthompson2 ( 749521 ) * on Friday June 11, 2004 @12:00PM (#9399311) Homepage
          But you should be able to override the default behavior of auto-installing patches; my thinking would be that systems should patch themselves automatically unless the user specifies that they shouldn't.

          The issue still remains, though, that an unpatched system is still vulnerable. If the patch breaks an application and the machine goes unpatched, there is a loss in security because of potential intrusion. If the patch is applied, there is a potential loss of productivity. This is the kind of call a sysadmin has to make for their network, but a sysadmin should know enough to make the decision in an informed way. The average computer user is not equipped in the same way and probably should receive the patch in order to mitigate the risk that their compromised system may pose to the greater group of users they connect to via the internet.
          • Not necessarily (Score:5, Informative)

            by aussie_a ( 778472 ) on Friday June 11, 2004 @12:03PM (#9399361) Journal
            if the patch breaks an application and the machine goes unpatched there is a loss in security because of potential intrusion. If the patch is applied there is a potential loss of productivity.

            Not all patches are security patches. Many patches fix functional problems, such as a spell check or some other feature that doesn't work correctly. These won't compromise security, but they may interfere with other programs.
        • by Anonymous Coward on Friday June 11, 2004 @12:24PM (#9399633)
          Refer back to last week's story [slashdot.org] regarding the Royal Bank's problems with an upgrade in Canada. Auto-patching might very well lead to something like this on a larger scale, or affecting numerous small businesses.
      • by Jim_Maryland ( 718224 ) on Friday June 11, 2004 @11:58AM (#9399286)
        Part of the problem is that automatic installation of patches isn't the best solution for every system, especially on critical systems. In general, the automated patching will work for most people. As a UNIX administrator though, I like to read the patch details before applying on any system I manage (including my MS Win32 boxes).

        The one point about discovery that I don't recall seeing addressed is: where would our software be today if people didn't take the time to discover vulnerabilities? If you figure only "Black Hat" people discover these, they would likely be better at exploiting than those trying to protect systems without understanding how to discover an exploit. In general though, I believe you need a good balance of internal discovery along with a process to rapidly develop/deploy patches.

        In true /. fashion, I'll complain a bit about the MS update process here (at least the web update). Does anyone else find it especially annoying that MS doesn't consolidate their patches a bit more? If you build a system from CD, you spend a good deal of time applying patches only to find that after you install them, you need to install another set on top of those. I realize that different sites may want to patch to a particular level, but the default really should be to obsolete patches as they themselves are patched.
      • by kent_eh ( 543303 ) on Friday June 11, 2004 @12:01PM (#9399334)
        He describes that if automated installation of patches were widely deployed then the benefits to discovery would increase.

        Assuming the patches don't break something else by mistake.

        The last time I did an update on my laptop (via MS update) and rebooted, I landed in a BSOD. I had to disable my wireless card, get new drivers, and re-install it before I could get the machine to boot normally again.

        If the update had happened automatically, and I was not in a position to get the new device drivers (like on the road, or at a customer's site), I would have been SOL.

        While automatic updates may sometimes make sense for security, they aren't the best solution.
      • by c0dedude ( 587568 ) on Friday June 11, 2004 @12:05PM (#9399387)
        AGGG!!!! My Brain! My Brain! You've burned all logic from it.

        That's like saying that we shouldn't produce safer cars because not everyone buys one. And hell, why train drivers, because, you know, crappy drivers are everywhere. Or like saying we shouldn't make furniture fireproof, because, you know, something else will burn. Of course, it completely disregards those of us who routinely patch our managed systems and keep them dead secure, compatibility and testing be damned. This is DMCA logic. If we criminalize software holes, only criminals will know of exploits. See the problem?
        • by MillionthMonkey ( 240664 ) on Friday June 11, 2004 @12:49PM (#9400006)
          That's like saying that we shouldn't produce safer cars because not everyone buys one.

          No. Your analogy is flawed.

          If cars worked like exploits and patches, then every time a safer car came out, your car would suddenly become less safe than it had been yesterday, and it would become incumbent upon you to get it fixed. Cars, being physical objects, do not behave this way.

          And hell, why train drivers, because, you know, crappy drivers are everywhere. Or like saying we shouldn't make furniture fireproof, because, you know, something else will burn.

          All these analogies are flawed because they miss the point. When safer drivers are trained, existing drivers don't suddenly become more liable to be in accidents. When safer furniture comes out, the furniture in your living room does not suddenly develop an odor of gasoline.

          Of course, it completely disregards those of us who routinely patch our managed systems and keep them dead secure, compatibility and testing be damned.

          I think it acknowledges us, but for the minority that we are. The existence on the Internet of a large number of systems remaining unpatched against published vulnerabilities is exactly the nightmare scenario everyone wants to avoid, and suggests that the publish-and-patch system is broken. People don't patch.

          This is DMCA logic. If we criminalize software holes, only criminals will know of exploits. See the problem?

          There's a big difference between "criminalizing software holes" and voluntarily agreeing not to publish exploit code. And the way that sentence is worded is extremely misleading. It suggests that if the exploits aren't published then all criminals will still have unfettered access and that isn't true. While it is true that some of the people left who know of the exploits will be criminals, most criminals will no longer know of the exploits because they aren't published and require hard work to discover. Criminals are free even now to ignore the published vulnerabilities and look for unknown ones to exploit. Few choose to do so because it's a lot of work and most of them are lazy and stupid. Not publishing the exploits would force them to always develop this way.

          It comes down to this: you can either have 100% of machines unpatched against N unknown vulnerabilities, or you can have 100% unpatched against N-m unknown vulnerabilities and 50% patched against m published vulnerabilities. Even if you do publish and patch, there are still apparently an unlimited number of unknown vulnerabilities in software. They become much more dangerous and easy to exploit once they're published, and not everyone patches. Even if you do patch, unpatched machines on the network still affect you.

    • by Anonymous Coward
      As always, this assumes that the only exploits are by script kiddies who can only make use of publicized vulnerabilities. And that is decidedly NOT true!

      In fact, script kiddies serve the purpose of forcing vulnerabilities to be patched more quickly; their exploits are generally so badly written that they don't do much damage beyond crippling the attacked machines.

      In contrast, the true black hats that use exploits to quietly and competently install keyloggers, spam relays and mine creditcard/banking data d
  • More important than finding them is engineering the product so that they do not occur. Yes, there will still be security holes. However, a well-designed product built with security in mind will have fewer of them to find, even if that is less sexy than seat-of-your-pants coding.
  • It's naive to think that you could just tell everyone not to look for those vulnerabilities.

      Somebody is going to look for them; if the good guys don't look for them and publicise them, then only the outlaws will know what holes need fixing.

    • Oracle: I'd ask you to sit down, but, you're not going to anyway. And don't worry about the exploit.
      Neo: What exploit?
      [Neo turns on the Oracle's computer and instantly pop-up ads start appearing on the Oracle's desktop]
      Oracle: That exploit.
      Neo: I'm sorry--
      Oracle: I said don't worry about it. I'll get one of my kids to write a patch for it.
      Neo: How did you know?
      Oracle: Ohh, what's really going to bake your noodle later on is, would anyone have created that virus if I hadn't told them about the exploit?
  • by u-235-sentinel ( 594077 ) on Friday June 11, 2004 @11:38AM (#9398944) Homepage Journal
    While we still have a long way to go regarding security, I believe we're still learning how to design security into systems. People are creative. Computers are not. I believe that we're infants at this stage of computer development. Look at how far we've come in 30 years. Where will we be after 30 more?

    It's still a brand new world to explore. We have a lot of work ahead of us.
  • Bill (Score:3, Interesting)

    by somethinghollow ( 530478 ) on Friday June 11, 2004 @11:39AM (#9398954) Homepage Journal
    Sounds a lot like something Microsoft has been saying...
  • Don't buy it (Score:4, Insightful)

    by Omega1045 ( 584264 ) on Friday June 11, 2004 @11:39AM (#9398956)
    I cannot believe that sticking your head in the sand is any better. I would think that there are many examples of security holes being found and patched before they could be exploited.

    If anything, the data seems to point to the fact that software companies and users need to act on security holes and patches more quickly. This may require better education of the user, and it also would help to have better patching mechanisms.
    • I agree (Score:3, Interesting)

      by Moth7 ( 699815 )
      Of course there are many found and patched before the damage is done - they just don't get the kind of press that exploited ones do.
    • Re:Don't buy it (Score:3, Insightful)

      My interpretation of this claim is that perhaps instead of trying to find and fix holes, we should focus on using more secure tools and frameworks, so that they automatically eliminate a whole class of holes. Look at how much pain was caused industry-wide by using C/C++ with all the buffer overflow vulnerabilities, which are trivially avoided in different languages (e.g. Java).
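      A minimal C sketch of the bug class being discussed (the function names and buffer size are invented for illustration): the unchecked copy is the classic buffer overflow, and the bounds check is roughly what a memory-safe language such as Java enforces for you automatically.

        #include <stdio.h>
        #include <string.h>

        /* Classic overflow: a fixed-size stack buffer filled from caller-controlled
         * input with no length check. Input longer than 63 bytes overwrites
         * adjacent stack memory. */
        void copy_unsafe(const char *input) {
            char buf[64];
            strcpy(buf, input);
            printf("%s\n", buf);
        }

        /* The same operation with an explicit bound; a bounds-checked language
         * performs the equivalent check on every array access. */
        void copy_safe(const char *input) {
            char buf[64];
            strncpy(buf, input, sizeof(buf) - 1);
            buf[sizeof(buf) - 1] = '\0';   /* strncpy may not NUL-terminate */
            printf("%s\n", buf);
        }

        int main(void) {
            copy_safe("hello");
            copy_unsafe("hello");          /* fine here, dangerous with long input */
            return 0;
        }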
    • Re: (Score:3, Insightful)

      Comment removed based on user account deletion
  • by Defiler ( 1693 ) *
    I like sticking my head into the sand, but the grit keeps scratching my sunglasses. Any suggestions?
  • by GreyyGuy ( 91753 ) on Friday June 11, 2004 @11:40AM (#9398971)
    'Cause trusting the manufacturer to make their product secure has proven to be such a good solution in the past.

    The alternative is to not look and leave that to the people who will fix it or the people that will exploit it. Are you really comfortable with that?
  • by Hayzeus ( 596826 ) on Friday June 11, 2004 @11:41AM (#9398982) Homepage
    Let's say we all stopped reporting security holes in software -- would the resulting software actually be any better?

    I guess I'm a little unclear on what the stated research is actually supposed to accomplish.

  • It helps admins (Score:5, Insightful)

    by digidave ( 259925 ) on Friday June 11, 2004 @11:41AM (#9398989)
    As a sysadmin, I can tell you for certain that reading bugtraq and other vulnerability lists helps me. I can study trends in software, trends in company response, and protect myself against problems. If I know a new worm or vulnerability has a prerequisite configuration, then I can make sure to configure software in a way where I won't be vulnerable until a patch is released or until I can apply it.

    Anyone who is subscribed to bugtraq can see the bad situation some software is in. Lately there have been a lot of posts about Linksys that raised my eyebrows. Do I really want to deal with a company that doesn't properly address vulnerabilities it's made aware of? Good thing bugtraq posters had a workaround for the Linksys remote administration problem.
    • by paulproteus ( 112149 ) <slashdot@ashee[ ]org ['sh.' in gap]> on Friday June 11, 2004 @12:01PM (#9399340) Homepage
      The parent is exactly right. Having read through this paper now, I realize what it misses: the economic impact of the information.

      Much work has been done in economics regarding the effect that inadequate information flow has on a market; a Nobel Prize was won for it recently. The paper assumes that there is a constant number of vulnerable machines for any given vulnerability, as you can see on page 2. First of all, it ignores the fact that someone has to choose to use these vulnerable products. Second, it ignores the choice that comes to sysadmins when they learn that a particular company's products are more likely to have bugs, as the parent describes.

      The moral of the story is, the paper tries to be broader than it can be: by assuming that software acquisition decisions never happen, it fails to see the effect of vulnerability disclosure on those decisions. And these decisions, made well, do in fact make us more secure. The "software defect rate" in total might not decrease, but the defect rate in *used* software may well decrease.
  • The real problem, (Score:5, Insightful)

    by Cow007 ( 735705 ) on Friday June 11, 2004 @11:41AM (#9398994) Journal
    The real problem in software security lies in the design of the software itself. No amount of patches and service packs can secure insecure software. Instead, to be secure it has to be built that way from the ground up. These findings seem to make sense in this context because patching software doesn't change the fundamental way it works.
    • by Analogy Man ( 601298 ) on Friday June 11, 2004 @11:55AM (#9399232)
      Mod this parent up!

      More important, I think, than fixing vulnerabilities and posting patches that may or may not be adopted by users is good design. To extend the parent's thought: if development teams learn from the flaws in their current and past designs and use those considerations to identify "good" practice and "bad" practice, it is likely the end product will be better.

      If posting a patch is a "hack me! hack me!" alert and there is not a means of pushing a patch out to everyone, would there be a way that security patches could be obfuscated with "enhancements" and more anonymously rolled into scheduled releases?

  • It's an arms race (Score:5, Insightful)

    by ajs ( 35943 ) <{moc.sja} {ta} {sja}> on Friday June 11, 2004 @11:42AM (#9398996) Homepage Journal
    The goal of searching out vulnerabilities is to find them before the people with black-hats do. This is why most clearinghouses of such reports don't release the information until there is a fix (or until such time passes that vendors have demonstrated a lack of interest in producing a fix): the people who would exploit the bugs need to mount their OWN efforts to discover them.

    Ignoring actual bugs, there are many other kinds of security vulnerability. We know that software will always have side effects that we don't intend. In fact, we desire this (e.g. providing a user with the flexibility to use the product in ways not envisioned by the creator). Sometimes those side effects have security implications (e.g. giving someone an environment variable to control a program's behavior lets a good user do something you might not have thought of, but it turns out a malicious user can abuse this in order to raise their security status).

    This means that, as long as software is not static, security bugs will continue to be introduced. Discovering them as fast as possible is the only correct course of action... you KNOW the black-hats will be.
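    To make the environment-variable example concrete, here is a rough C sketch (the variable name and the setuid scenario are assumptions for illustration, not taken from any real program): a convenience feature for ordinary users becomes an escalation path the moment the binary runs with more privilege than its caller.

      #include <stdio.h>
      #include <stdlib.h>

      int main(void) {
          /* Flexibility the author intended: let the user pick a helper tool. */
          const char *helper = getenv("REPORT_TOOL");   /* hypothetical variable */
          if (helper == NULL)
              helper = "/bin/true";
          /* Side effect the author did not intend: if this binary is ever run
           * setuid or from a privileged service, the caller's environment now
           * decides what gets executed with elevated privileges. */
          return system(helper);
      }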
  • It helps (Score:3, Insightful)

    by insanely_mad ( 636449 ) on Friday June 11, 2004 @11:42AM (#9399008)
    one of the reasons I now use Firefox as my primary browser is because so many exploits were found in IE. So even if Microsoft doesn't respond when exploits are found, these exploits do cause some people to look for more secure alternatives.
  • Yes. (Score:5, Insightful)

    by nacturation ( 646836 ) <nacturation@gmai l . c om> on Friday June 11, 2004 @11:43AM (#9399013) Journal
    To answer the question, yes. Finding security holes is a good idea.

    To the unasked question, "Is finding individual security holes the best possible use of a security researcher's time?", the answer is No. The best use of security research is to classify different types of security holes and use that information to create a development framework where those security holes are extremely difficult to recreate. For example, you're not going to find buffer overruns in Java code, since the memory is dynamically handled for you. Eventually, having security levels, encrypted buffers, etc. will all be part of a standard developer's library.
  • what is needed is for security to be part of the design from the beginning. We can remove many or all of the overflow problems by just changing the language used to write the software. No password should be saved or sent unencrypted (with good, hard-to-break encryption), for example. This needs to be designed into the program or OS, not added later. By dealing with these kinds of problems after the fact we are chasing the bugs and pushing the bad design around the code, instead of just having a good design fro
    • Wow, no software developer ever thought of that before. I know I have been putting bugs in my code on purpose because I thought we were supposed to. Thanks for the heads up; I'll start writing perfect code from now on.
  • A proper experiment would be an application where the developers made no attempt to find security problems. Any volunteers? Does anyone want to install such an application? (Never mind all the joking about MSoft having already volunteered and how widely it's installed.)
  • by Rahga ( 13479 ) on Friday June 11, 2004 @11:45AM (#9399043) Journal
    Evidence wouldn't show us that searching for security holes improves security... Rather, such a judgement requires reasoning and evaluation of the evidence. Common sense stuff, here.

    Does smashing cars head-on into brick walls improve car safety? No, of course not. Evaluating the results of the crash and using those findings to build better cars is what improves car safety, and the situation is entirely analogous in the security world. The assumption is that there is always a weakest link in security, that link is the most likely one to be exploited, and the trick is finding that link and strengthening it against attacks in the future, hopefully to the point where it is more likely that other links are weaker.
  • by m.h.2 ( 617891 )
    "but there's no real evidence that it actually improves security"

    OK, didn't RTFA, but is there 'real evidence' to the contrary?
    You can't fix what you don't know is broken. Is ignorance a better security solution?
  • by Vexler ( 127353 ) on Friday June 11, 2004 @11:46AM (#9399057) Journal
    ...that hunting down thugs and thieves and terrorists is not necessarily helping the nation's security, so let's not do it. Asinine suggestion.
    • by AviLazar ( 741826 ) on Friday June 11, 2004 @11:51AM (#9399143) Journal
      I'm thinking it's more along the lines of: if we do not help find security holes, then we are giving less ammunition to hackers. The only problem with the hypothesis is that it assumes hackers only gain this "ammunition" through legitimate coders who are trying to find vulnerabilities. In fact, as all of us know, hackers do find security holes on their own, without help from other people.
  • by AviLazar ( 741826 ) on Friday June 11, 2004 @11:46AM (#9399068) Journal
    then someone who wants to do the real hacking will find them. If a malicious hacker finds the security hole, then he/she might utilize it, and they won't be nice enough to give us a patch to protect against it. So since the holes are there, let's find them and patch them BEFORE some malicious programmer does. Finding security holes is a good choice, making patches for security holes is a better choice, and actually UTILIZING these patches is the BEST choice... unless you want to be on Citibank identity theft commercials :)
  • by Linus Sixpack ( 709619 ) on Friday June 11, 2004 @11:48AM (#9399095) Journal
    The people trying to abuse security likely have the same sort of skills as the flaw hunters, though hopefully less skill. It's not just that some bugs are found, but that perhaps the most obvious ones are found.

    Embarrassment encourages vigilance: software firms are always looking to reduce costs (who isn't?); outside bug hunters encourage them to test more completely.

    I think really bad software abuse usually has a motive connected with bad treatment, or a reputation for bad treatment. Even if there is only a small lag time between the discovery and fixing of a hole, it doesn't let the problem lie around where people who develop a grudge can use it.

    Finally (and most importantly), fairness dictates that if I'm using a product you know a problem with, you should tell me about it. Consumers deserve the chance to disable systems, switch products, etc. if they feel vulnerable.

    Especially if software is closed source: how do you know this bug isn't the tip of the iceberg? Companies have convinced consumers they don't get to look inside software -- they can't stop others from hearing about its flaws.

    ls
  • by Ed Avis ( 5917 ) <ed@membled.com> on Friday June 11, 2004 @11:48AM (#9399097) Homepage
    If you find a security hole then the mistake has already been made. Fix the hole, but also make sure the same bug doesn't exist in any other program. Finding the same exploits again and again (buffer overruns, format string vulnerabilities, tempfile symlink vulnerabilities) reflects very badly on free software and programmers' ability to learn from bugs found in other software. (Not that proprietary software is necessarily any better - I am just not discussing it here.)

    The OpenBSD people have built up a good track record on security by finding holes and fixing them everywhere possible. I am sure they would disagree with your assertion that finding holes does not help to improve security. Finding the bugs is an important first step towards not putting them back in next time you write code.
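    For anyone who hasn't met these recurring bug classes, here are two of them in miniature (a hedged sketch; the file names are made up): the format string bug and the predictable tempfile that enables a symlink attack.

      #include <stdio.h>
      #include <stdlib.h>

      /* Format string: untrusted input used as the format string lets specifiers
       * like %x or %n read or even write memory. */
      void log_bad(const char *msg)  { printf(msg); }          /* the bug */
      void log_good(const char *msg) { printf("%s", msg); }    /* the fix */

      /* Tempfile symlink race: a predictable name in a world-writable directory
       * lets an attacker pre-create a symlink so the program overwrites a file
       * of the attacker's choosing. */
      FILE *tmp_bad(void)  { return fopen("/tmp/report.tmp", "w"); }
      FILE *tmp_good(void) {
          char name[] = "/tmp/report-XXXXXX";
          int fd = mkstemp(name);            /* unpredictable name, exclusive open */
          return (fd < 0) ? NULL : fdopen(fd, "w");
      }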
  • The idea is to find the security holes before the bad guys do, so you can fix the problem before it's in the wild and exploiting people without your knowledge.

    Every program has bugs. There is no way around it. What makes the difference though is how you respond to bugs when they are found.

    You have a choice - either be like Microsoft, try to deny that the bugs exist, or downplay the bugs, and try to stifle the person who found it - or be like a real programmer, fix the bug, and get people to fix the probl
  • Working to discover what security flaws exist in any given program, series of programs, operating systems, hardware, etc. is not the real issue in my opinion: it is the idea of working to design a system that is as stable, flexible/adaptable, transparent, and clear as possible while at the same time providing a foundation that allows room for future growth. To really execute all of these concepts well can be a truly daunting task, IMO, given the often limited salaries/wages, time and other constraints (e.g.
  • Is it just me or does it seem a stretch that within the first couple of paragraphs the assumption is made that there is somehow a direct relation between the number of intrusions and the cost of those intrusions:

    "If a vulnerability isfound by good guys and a fix is made available, then the number of intrusions--and hence the cost of intru-sions--resulting from that vulnerability is less than if it were discovered by bad guys"

    While I'm not certain that there is NO relationship between the two, I'm certainl
  • by jwthompson2 ( 749521 ) * on Friday June 11, 2004 @11:50AM (#9399134) Homepage
    Many posters have already jumped to bad conclusions, having not latched on to one of the report author's best points. If patches are not applied, then the time and money spent on discovery are worthless. The only way to make discovery worthwhile is if the patches are applied; otherwise discovery does not resolve the vulnerabilities.

    Automatic/forced patching is the only way to make discovery worthwhile, because otherwise the number of vulnerable systems is unpredictable over time and constitutes a large risk. Security issues must be resolved as quickly as possible in order to mitigate risks, and unless patch application is automated and enforced, discovery becomes meaningless.
  • My Take... (Score:4, Interesting)

    by abscondment ( 672321 ) on Friday June 11, 2004 @11:51AM (#9399140) Homepage

    The value of finding security holes is totally dependent upon the company whose product has the hole.

    I work at Barnes & Noble as a bookseller; within a few months of working there, I found huge security holes in their in-store search system and network. I reported these to my manager; as soon as she grasped the scope of the hole, she called tech support in New York. I spent 30 minutes explaining all the different problems to them, and that was that.

    I didn't think they'd do anything about it, and that's the problem--since it costs time and money, most companies can't or won't fix holes. To my surprise, however, my store began beta testing the next version of the in-store software. What was even more surprising was that every hole I had reported was gone (so I went and found more, of course).

    There's never a silver bullet; a new vulnerability will always surface. It's really hard to stay ahead of the curve, but it's something that must be attempted.

  • Security isn't an add-on, it's an integral part of every component of every system--whether functional or flawed--and needs to be a design consideration ranking right up there with the user interface.

    One of the obvious benefits of posting security holes is that it gives developers the insight and the opportunity to not duplicate that same security flaw in another system. How consistently, or not, we are learning these lessons is a different issue.
  • by ajs ( 35943 ) <{moc.sja} {ta} {sja}> on Friday June 11, 2004 @11:52AM (#9399167) Homepage Journal
    I'm confused about this guy. He claims to be a security consultant, but to quote his blog [rtfm.com],
    "I replied to the mail and didn't check the recipients lines and my mailer helpfully sent a copy of my credit card # to everyone who had gotten the original message. Outstanding."

    Really. I didn't make that up, check the link! Who is this guy, and why is he giving me software security advice?!
    • Agreed; the first rule of security (let alone *computer* security) is that you can't stop human stupidity.

    • Re:Security guy? (Score:4, Insightful)

      by ekr ( 315362 ) on Friday June 11, 2004 @12:19PM (#9399562)
      Actually, I think that reinforces my point. I spend most of my day working with security systems (see here [rtfm.com]) and so I absolutely know better than to send mail without checking the response line and yet I made that sort of boneheaded mistake anyway. This is exactly why software is riddled with bugs and why it seems unlikely we'll be able to patch them out of existence--people make mistakes.
      • Re:Security guy? (Score:3, Insightful)

        by ajs ( 35943 )
        Actually, I think that reinforces my point.

        I get what you're saying, and you're correct as far as that goes, but I was not concerned that you failed to wipe the CC list.

        I was referring to sending your credit card number to someone via electronic mail. Even if I was sure that TLS would occur between my MTA and theirs (ignoring the chance that a third-party secondary MX would get involved) and that they and I were using SSL-enabled IMAP to fetch our mail... I would still hesitate long enough to make it wort
    • Re:Security guy? (Score:4, Informative)

      by randombit ( 87792 ) on Friday June 11, 2004 @12:52PM (#9400029) Homepage
      Really. I didn't make that up, check the link! Who is this guy, and why is he giving me software security advice?!

      Member of the IAB. Co-chair of the TLS working group.
  • ... then someone else will. Hard to say if finding and fixing is helping, because no one knows how bad it would be if we didn't do it.

    Then again, MS doesn't seem to be trying to find vulnerabilities in their own code; often they're found by others. Sometimes it's the bad guys.

    Point being, it's hard to say what effect something is having when you can't contrast it against "not doing it."
  • by GreyyGuy ( 91753 ) on Friday June 11, 2004 @11:54AM (#9399204)
    Just read the article and have to point out a number of flaws in the methodology. First, it assumes that if the vulnerability is only known to a few, then the number of intrusions will be low. Given the number of zombie computers out there, I do not think that is a safe assumption. Look at how the last few big viruses went around. I know those were exploiting known and patched vulnerabilities, but there is nothing to say that the same thing couldn't be done with a "day-zero" exploit.

    Second, it doesn't address the severity of the vulnerability. If it is an exploit that lets someone take over a machine, or format a drive, the cost of even those first, possibly limited, intrusions will be astronomical.
  • by happyfrogcow ( 708359 ) on Friday June 11, 2004 @11:56AM (#9399250)
    Morally, finding security holes has to be done. It doesn't matter what the perceived quality improvement is.

    But instead of trying to plug the holes, it's better to understand why the holes pop up and what we can do to alter the behavior that leads to holes.

    [insert plug for your favorite high level language here]

    But even better development tools only get you so far. The burden has to be laid squarely on the shoulders of the project leader and their managerial counterparts. You cannot continue to do the business side "favors" by including some technically unnecessary component after the specs and requirements are done, and expect it to get integrated seamlessly and without affecting everything else. Once you say "yes" to something, it will be harder to say "no" the next time. Maybe you need to understand the business problem you are trying to solve better before you finish the design. Maybe the business folks need to better understand the development process, so they know they can't add features late in the game.

    This is just my "2 years in the system" view of things, after time and again getting an email saying "so-and-so wants such-and-such done to this thing" after the thing's design has already been settled upon. When I ask why, it always comes down to someone not being able (yay office politics) to say no to someone for some reason or another.

    Want fewer security holes? Start with better communication between different groups. End with a written-in-stone spec. Leave out all feature creep until the next design phase.

    Good luck with that! Ha!
  • The assumptions... (Score:3, Interesting)

    by sterno ( 16320 ) on Friday June 11, 2004 @11:56AM (#9399252) Homepage
    The problem I can see in this paper is that it makes certain assumptions about the behaviour of black hat hackers which aren't necessarily true. The majority of vulnerabilities discovered by black hat hackers are eventually leaked from the hacker community to a white hat, who will seek a solution to the problem. But there's no reason to conclude that this is true of all vulnerabilities.

    I forget the terminology for it, but there's the concept of worms that are launched on a large number of vulnerable machines simultaneously. I'm not aware of an attack like this in the wild but it's theoretically possible and would be terribly destructive. If a black hat hacker plays his cards right, he can probably sneak his exploit onto a few thousand computers without anybody noticing. Then he can launch a massive attack before anybody even knows the vulnerability exists.

    Having said that, I think that, in the real world, the amount of effort put into finding vulnerabilities by white hats has a minimal cost. There are essentially three areas where security vulnerabilities are discovered by the friendlies:

    1) QA of the product developers
    2) Hobbyist white hats
    3) Network security auditing

    The cost of #1 is an assumed cost of any product and is part of the basics of releasing it to the public. You check for typos in the documentation and you check for security bugs.

    The cost of #2 is zero because it's people doing these things on their own time for their own amusement.

    The cost of #3 is substantial but it's critically important to some businesses to have zero security vulnerabilities. A security breach not only has an immediate cost in time to fix the problem, but it also has a long term cost by damaging the integrity of the company. If your bank got hacked and you lost all your savings, even if it was insured, would you keep your money in that bank?
  • by Sloppy ( 14984 ) * on Friday June 11, 2004 @11:57AM (#9399261) Homepage Journal
    Of course it helps! But perhaps not the way you might expect it to.

    Someone finds a buffer overflow problem. Someone finds another one. Someone finds another one. Someone finds another one.

    Someone realizes: "what if I centralized all my buffer routines and just got one little section of code working perfectly?" Then you get projects like qmail or vsftp, which simply are more secure. Then people start using these programs, and their systems really are less vulnerable.

    This paper keeps using the phrase "given piece of software." It's talking about (lack of) improvements at a micro scale, but ignores improvements in the big picture that can happen due to people getting fed up or scared.

    If vulnerabilities were not discovered, would anyone bother to develop secure software?

    I think this paper has as an underlying assumption, the popular view that it just isn't possible to write secure software, and that every large software system is just a big leaky sieve that requires perpetual patching. I don't accept that. I understand why the belief is widely held: most popular software was not originally written with security issues in mind. But this is something that will be overcome with time, as legacies die off.
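    As a rough illustration of the "centralize your buffer routines" idea described above (loosely in the spirit of qmail's string handling; the type and function names here are invented), every string in the program grows through one small, auditable routine instead of ad hoc strcpy calls scattered across the codebase:

      #include <stdlib.h>
      #include <string.h>

      typedef struct { char *s; size_t len, cap; } buf_t;   /* hypothetical type */

      /* The one place where string memory grows. Audit this function carefully
       * and every caller inherits the safety. Returns 0 on success, -1 on error. */
      int buf_append(buf_t *b, const char *data, size_t n) {
          size_t need = b->len + n + 1;          /* +1 for the trailing NUL */
          if (need < n) return -1;               /* size arithmetic wrapped around */
          if (need > b->cap) {
              char *p = realloc(b->s, need);
              if (p == NULL) return -1;
              b->s = p;
              b->cap = need;
          }
          memcpy(b->s + b->len, data, n);
          b->len += n;
          b->s[b->len] = '\0';
          return 0;
      }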

  • by BigBir3d ( 454486 ) on Friday June 11, 2004 @11:57AM (#9399268) Journal
    you can plug all the individual holes you want; it is still a crappily designed dam.

    If it is designed differently, the number of cracks is smaller...

    I wish reporters understood that. Flame MS for not bringing Longhorn out sooner. XP is not good enough. Everyone knows this; nobody in the popular press is saying it in the right way.

    *sigh* /me goes back to condemning the world to doom

  • by a_hofmann ( 253827 ) on Friday June 11, 2004 @11:58AM (#9399281) Homepage
    The study sheds light on a little-studied phenomenon, and therefore shows interesting facts. It shows that the difference between black-hat discovery and white-hat discovery basically reduces to the number of exploits between discovery and public disclosure of a bug, which is negligible compared to the total number of exploits during the lifecycle of a bug.

    That may hold true and make sense if you study the total number of exploitable systems on the net, but totally ignores the fact that there are a very large number of systems with little priority for security while only a few depend on 100% system security.

    Those few high-security sites have the need, and pay for the resources, to fix known flaws or patch their software ASAP. They are the ones who gain from knowledge of a security flaw before the black-hat guys do. They cannot live with even the shortest vulnerability timeframes and usually patch exploits as soon as they get published on public security channels.

    It may hold true that the work put into security auditing does not pay off on the whole, taking all software users into account, but for those few who really care about security it surely does...
  • by CajunArson ( 465943 ) on Friday June 11, 2004 @11:59AM (#9399289) Journal
    I think the story raises a good point. The best analogy I could point to would be a dam where new leaks keep popping up and you quickly rush to patch them. You spend so much time patching over the leaks that the fundamental design problems in the dam are never fixed.
    There are multiple strategies that will actually improve security far more than just trying to ferret out a new vulnerability. I personally recommend using Java or another type-safe language for programming if at all possible, since the most common memory management errors are eliminated. However, the best way to stop major security breaches is to have a security layer that assumes software programs will be compromised somehow. Then, the security layer is more interested in enforcing the access to the system that the program ought to have, instead of just trusting the effective user ID of the program to hopefully do the right thing.
    A bit of karma-whoring here for my thesis project [cmu.edu] which is based on earlier work in Mandatory Access Controls in Linux, as well as the much more well-known SELinux [nsa.gov]
    kernel modules.
    I personally did my thesis in Domain & Type Enforcement, which simply puts running processes into various different domains that have certain access rights to types. A type is just a name tag assigned to files, and in my case you can also type system calls, network sockets, and eventually even Linux capabilities. It is similar to part of SELinux but also designed to be much simpler to understand & implement as well.
    Anyway, these systems all are designed with the assumption that vital processes will be compromised and the onus is on the policy writers to enforce least-privilege on the processes. This may sound difficult to do, but it is actually trivial compared to the approach we are using now which is to try and figure out every possible attack and write perfect software (the point of the article). It is much easier to define what a program is supposed to do than every nasty malicious thing someone on the Internet can dream up that it should not do.
    I've ranted long enough, but I think that there are good solutions to stopping about 90% of the crap that we see going on today, and that the other 10% will be fun to keep us security professionals employed :p
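    A toy sketch of the Domain & Type Enforcement idea (the domains, types, and policy table below are invented for illustration; they are not taken from the thesis or from SELinux): access is allowed only when the (domain, type, operation) triple is explicitly listed, so even a hijacked process cannot step outside its policy.

      #include <stdio.h>
      #include <string.h>

      enum op { OP_READ, OP_WRITE, OP_EXEC };

      struct rule { const char *domain; const char *type; enum op op; };

      /* Hypothetical policy: what each domain may do to each type. */
      static const struct rule policy[] = {
          { "webserver_d", "web_content_t", OP_READ  },
          { "webserver_d", "web_log_t",     OP_WRITE },
          { "mail_d",      "mail_spool_t",  OP_WRITE },
      };

      static int allowed(const char *domain, const char *type, enum op op) {
          for (size_t i = 0; i < sizeof policy / sizeof policy[0]; i++)
              if (strcmp(policy[i].domain, domain) == 0 &&
                  strcmp(policy[i].type, type) == 0 && policy[i].op == op)
                  return 1;
          return 0;                              /* default deny: least privilege */
      }

      int main(void) {
          /* A compromised web server still cannot write to the mail spool. */
          printf("%d\n", allowed("webserver_d", "mail_spool_t", OP_WRITE));   /* 0 */
          printf("%d\n", allowed("webserver_d", "web_content_t", OP_READ));   /* 1 */
          return 0;
      }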
    • mainframes... (Score:5, Insightful)

      by gillbates ( 106458 ) on Friday June 11, 2004 @12:35PM (#9399815) Homepage Journal

      One of the key factors that has kept mainframes secure for so long is the fact that they were designed as a secure environment in the first place:

      • Mainframes don't have a hardware stack, in the sense that UNIX and PC machines do. So buffer-overflow vulnerabilities are ruled out from the start.
      • A program must enumerate all of the system resources it uses before it begins execution. While this is certainly a PITA from a developer's perspective, it means that a running process can't be hijacked into installing a root kit. A process can't read, write, or create files unless they are specified in the JCL (And how many hackers know JCL?)
      • Of course, mainframes were the first to have memory protection.
  • by LaissezFaire ( 582924 ) on Friday June 11, 2004 @12:04PM (#9399380) Journal
    A lot of effort goes into finding vulnerabilities in software, but there's no real evidence that it actually improves security . . . It doesn't look like we're making much of a dent in the overall number of vulnerabilities in the software we use.
    The poster is saying that because we are not lowering the absolute number of vulnerabilities, we therefore have no evidence that removing/finding vulnerabilities improves security. The conclusion doesn't follow from the premise.

    Take a sinking boat. If you are bailing water out, and the boat isn't sinking any more, it does not follow that bailing water isn't a good idea. If you stop bailing water, you're sunk. If good guys stop finding and fixing vulnerabilities, you're sunk, too.

  • I know I'd feel much better if only the blackhats were looking for and exploiting security vulnerabilities. If the whitehats don't look for them, publish them and give the vendors/developers incentive to do something about it, then any response to an attack is purely reactive. Welcome to the world where every system is a zombie. In fact, it would soon be duelling zombies. Wouldn't that be great!
  • He has a point... (Score:5, Insightful)

    by bigHairyDog ( 686475 ) * on Friday June 11, 2004 @12:12PM (#9399459)

    Anticipating the shitstorm of people lining up to say 'how stupid' without reading the FA, here's a nice little summary.

    The paper is not quite as stupid as the description made it sound, but it misses/ignores a critical flaw in its argument.

    His basic premise is that patching is expensive and people don't do it anyway - probably true for the majority of systems. Therefore, he argues, black-hats are alerted to the security holes by the disclosure. He shows that it doesn't really matter whether holes are discovered by black-hats and fixes are released after the exploit, or discovered by white-hats and exploited after the fix has been released but not applied.

    However, his arguments are based on averages. Where he's wrong is that if you have some systems that are simply so valuable that they cannot be compromised, proactive bug fixing [openbsd.com] coupled with a manic obsession for patching your system the moment a patch is released is still the best way to stay safe.

  • by Illissius ( 694708 ) on Friday June 11, 2004 @12:22PM (#9399612)
    exceedingly so, in fact. It boils down to a single sentence:
    It's better to find the security hole yourself and fix it than for someone with malicious intentions to do so and exploit it.
    (And good luck convincing /them/ that it's not worth looking for it.)
  • by gurps_npc ( 621217 ) on Friday June 11, 2004 @12:29PM (#9399716) Homepage
    His report is thorough, but like many such things, he made several key logical errors in his conclusions.

    His basic theory is that, given the current rates of discovery, poor patching rates, and the large number of bugs:

    1) Due to massive information exchange and slow patching/fixing, post-announcement exploitations are not significantly fewer than exploitations caused by unannounced bugs, so announcing does not help.

    2) There are so many bugs out there that finding a bug and announcing it does not produce a significant reduction in the number of "bugs unknown to white hats," so it does not increase security significantly.

    His major errors are ignoring the following: 1) Exploitation post-announcement is almost entirely done against the "lesser" computers, i.e. either machines with unimportant data/programs (home PCs) or important machines with incompetent sysadmins. As such, the real cost of these exploitations is either A) practically nil, or B) high, but likely to get the incompetent sysadmin fired, resulting in i) better employment prospects for good sysadmins and ii) an overall improvement in the quality of security for that company. So while the number of exploits may be higher for post-announcement bugs, society has a REAL social and economic gain that is very significant.

    2) A) The announcement of bugs allows people to judge how well written a program is and therefore make an informed decision NOT to buy inferior products (say, Windows).

    B) That perhaps our problem is not that we are announcing the bugs but that we are not doing sufficient bug hunting. He seems to be implying that because bug hunts don't find enough bugs, the solution is not to bug hunt. But anyone with a brain should be able to see that if we are not depleting the unknown bugs fast enough, then one possible solution is to tremendously increase our bug hunting, not to stop the bug disclosures.

  • good idea (Score:4, Informative)

    by dtfinch ( 661405 ) * on Friday June 11, 2004 @12:35PM (#9399814) Journal
    Crackers will dissect your patches to create exploits, but you'll at least have protection available when the exploits go wild. If they don't find vulnerabilities from the patches, they'll just spend more time trying to find them manually, and the more you leave unpatched, the better the odds they have of finding one. Your customers who care about security the most will install the patches on time, and get pissed if a cracker exploits something before you've patched it.

    But it's even better to find them before the product ships, and design early on to avoid the common ones. I believe the author of qmail is still offering thousands of dollars to the first person who finds even a single vulnerability.
  • by Darth Daver ( 193621 ) on Friday June 11, 2004 @12:39PM (#9399859)
    new ones just keep coming along. What is the point? We cured polio and smallpox, but now we have HIV. We should have left well enough alone.

    Maybe it is just me, but I think linking bubonic plague to flea infested rats was a beneficial advancement in progressing the human situation, for multiple reasons.
  • by stevey ( 64018 ) on Friday June 11, 2004 @12:44PM (#9399918) Homepage

    Finding problems which can be disclosed at the same time as a patch is very good.

    All the major Linux distributors will release updates in a timely manner, and enable people to install them with minimum effort - much like Microsoft does. The only difference is that Microsoft's patches can, on rare occasions, break things; I've never seen this happen with a Linux update.

    Personally, I've never heard anybody say anything bad about the pro-active way in which the OpenBSD team audits their codebase, and this is one of the reasons why I started the Debian Security Audit [debian.org].

    Having a dedicated team of people auditing code, combined with the ability to release updates in a timely manner, is definitely a good thing.

    (The results of my work [debian.org] show that even with only a small amount of effort security can be increased)

    Did I mention that I'm available for hiring? [steve.org.uk] ;)

  • by Tumbleweed ( 3706 ) * on Friday June 11, 2004 @12:47PM (#9399971)
    "A lot of effort goes into funding law enforcement in society, but there's no real evidence that it actually reduces crime. I've been trying to study this problem and the results aren't very encouraging. It doesn't look like we're making much of a dent in the overall number of crimes in our society."

    ---

    If you think security is bad now, just stop fixing security vulnerabilities and see how much worse things get. It's like a sump pump - it may not fix the leak, but it'll keep you from drowning.
  • by PetoskeyGuy ( 648788 ) on Friday June 11, 2004 @12:54PM (#9400056)
    Most roads have some holes in them. Some would say it is a natural part of the road-building process. Others argue roads can be made hole-free. Generally roads are thought to be without holes when initially developed, but over time holes are found. While identifying and patching holes in roads is thought to be a good thing, numerical analysis shows otherwise.

    Patching roads requires people to stop the flow of traffic, and puts workers at risk of being injured or killed by users of the roads. A road that is fully patched encourages users to drive faster, burning fossil fuels at a lower efficiency compared to the slow speed drivers use when they see holes. Driving slower will also save lives in the event of an accident and cause drivers to pay more attention to the road since they never know when a hole could be in their local path.

    Many times the problem is not with the road, but the surface that it is built on. Patching the road can only be assumed to be a stop gap measure at best and will likely have to be patched again. Holes in the supporting structure will almost always show up in the things built on it.

    Fixing pot holes slows innovation. Once it becomes accepted that roads have holes in them, consumers will demand vehicles able to deal with them. If hole patching was stopped right now, studies show we would all be flying to work in our personal jetson mobiles within 8 years. Space previously set aside for roads will be converted to trails for bikes, bladers and walkers. Butterflies will land on your outstretched hand while you stop to observe the wild flowers.

    Fixing holes only maintains the status quo and dominance of local government and the corrupt DOT branches of the states. If you reduce their budget by even 1% they will go on strike and roads will quickly deteriorate. Some communities out there are leading the charge in not fixing pot holes to bring you a world of glass houses on stilts and talking dogs with jet packs.

    In conclusion, our findings indicate the DOT should be abolished. It has served its purpose but has no place in today's innovative world.
    • I lived for a while in Gif sur Yvette, France. The mayor actually made a point of building road obstructions and posting that this is a domain for pedestrians. Against my assumption that this was nuts, it actually seemed to work. Cars were fewer and slower and walking was more pleasant. The downtown was much more vital than the virtual ghost towns of many city centers in 'Walmart America'. The local drivers said that it didn't really slow them down much, since there was less traffic to fight.

      Intuition

  • by __aadkms7016 ( 29860 ) on Friday June 11, 2004 @12:56PM (#9400084)
    Black Hats are dynamic actors -- if the world changes so that Figure 2 in Eric's paper is the norm rather than Figure 1, the Black Hat community will evolve to live in the new world. Their new goal will be to maximize area under the "Private Exploitation" part of Figure 2. We may be better off with the current state of affairs.
  • Liability (Score:3, Interesting)

    by YouHaveSnail ( 202852 ) on Friday June 11, 2004 @01:02PM (#9400146)
    As far as I can see, the paper fails to consider liability issues resulting from failing to patch security-related flaws. If an ISP, for example, fails to actively work to protect its systems from intrusion, it would seem likely that they'd be found negligent in cases where harm comes to its customers as a result of such intrusion. If, on the other hand, the same ISP endeavors to keep abreast of security warnings and to do as much as it is able to lock out intruders, one would think they'd be protected to at least some degree from claims of negligence.

    On a microscopic level, individual system administrators have a strong personal interest in avoiding having to tell a CIO something like: "I've heard rumors that attacks like the one that just devastated our system might be possible, but nobody ever discovered a particular hole, so I ignored the issue. But look, here's a paper which says that searching for security flaws is probably just a waste of time and money. See? Even though this attack will cost us millions, think of the money we saved in not having to look for holes!"
  • by Effugas ( 2378 ) on Friday June 11, 2004 @01:16PM (#9400326) Homepage
    It's quite hard to compare a status quo to an invisible alternative state -- this is a huge problem in business, politics, and especially economics. But at least I've determined that simply using vulnerability metrics -- i.e. "finding bugs does not lead to fewer bugs being found" -- is ultimately not a representative measure of the actual risk mitigated.

    To use a straightforward analogy -- possessing an immune system does not by any significant means reduce the pathogenic population, yet lacking one is death. The case is quite similar with vulnerabilities and viruses -- it would be very simple for us to completely lack the infrastructure to manage an Internet-wide vulnerability. The low-grade malware floating around -- though infuriating -- forces us to create a management infrastructure, on pain of losing connectivity. What the consistent stream of discovered vulnerabilities creates is not fewer vulnerabilities -- software simply isn't mature enough, nor would we really want it to be -- but more manageable failures. Put simply, it doesn't matter what this way comes, because we've already been forced to deploy backups, create procedures, and implement recovery strategies.

    The alternative state is far more terrifying: Bugs are not talked about, and the strategy is not to fix them but to silence their open researchers. A black market opens up -- it will always be in the benefit of some to have the means to exploit others. These means always work, because nobody defends. Are there fewer with these means? Yes, but one person can write an attack, and the motive to blackmail the entire Internet population (pay me, and I'll "protect" you from the next wave) is quite strong.

    Bottom line -- and it's something that took me some time to realize myself, being an active member of the security community who doesn't deal in exploits heavily -- is that whatever the headaches are of full disclosure, the alternative is much worse.

    --Dan
  • by VishalMishra ( 100004 ) on Friday June 11, 2004 @04:47PM (#9402556) Homepage
    The author is naive enough not to have considered the Grey Hat Discovery (GHD) of vulnerabilities, where the hackers (not being black hats) do not go on a private exploitation spree, because most people don't want to violate laws and end up in prison, but definitely want to make a big name for themselves, so they publish the vulnerability without giving the vendor the chance to fix the problem before public disclosure. The percentage of grey hats is enormous compared to both white and black hats, so most of the arguments in this paper are based on a WEAK foundation of practical reality. This changes many mathematical equations in the article, and leads to significantly different final conclusions.
