Is Finding Security Holes a Good Idea?

ekr writes "A lot of effort goes into finding vulnerabilities in software, but there's no real evidence that it actually improves security. I've been trying to study this problem and the results (pdf) aren't very encouraging. It doesn't look like we're making much of a dent in the overall number of vulnerabilities in the software we use. The paper was presented at the Workshop on Economics and Information Security 2004 and the slides can be found here (pdf)."
  • Bill (Score:3, Interesting)

    by somethinghollow ( 530478 ) on Friday June 11, 2004 @12:39PM (#9398954) Homepage Journal
    Sounds a lot like something Microsoft has been saying...
  • by Hayzeus ( 596826 ) on Friday June 11, 2004 @12:41PM (#9398982) Homepage
    Let's say we all stopped reporting security holes in software -- would the resulting software actually be any better?

    I guess I'm a little unclear on what the stated research is actually supposed to accomplish.

  • I agree (Score:3, Interesting)

    by Moth7 ( 699815 ) <<mike.brownbill> <at> <gmail.com>> on Friday June 11, 2004 @12:44PM (#9399021) Journal
    Of course there are many found and patched before the damage is done - they just don't get the kind of press that exploited ones do.
  • by thechuckbenz ( 526254 ) on Friday June 11, 2004 @12:44PM (#9399022)
    A proper experiment would be an application whose developers made no attempt to find security problems. Any volunteers? Anyone want to install such an application? (Never mind all the joking about MSoft having already volunteered and how widely it's installed.)
  • by Ed Avis ( 5917 ) <ed@membled.com> on Friday June 11, 2004 @12:48PM (#9399097) Homepage
    If you find a security hole then the mistake has already been made. Fix the hole, but also make sure the same bug doesn't exist in any other program. Finding the same exploits again and again (buffer overruns, format string vulnerabilities, tempfile symlink vulnerabilities) reflects very badly on free software and on programmers' ability to learn from bugs found in other software. (Not that proprietary software is necessarily any better - I am just not discussing it here.) A minimal sketch of one of these recurring bug classes appears below.

    The OpenBSD people have built up a good track record on security by finding holes and fixing them everywhere possible. I am sure they would disagree with your assertion that finding holes does not help to improve security. Finding the bugs is an important first step towards not putting them back in next time you write code.
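
    For illustration, here is a minimal sketch of the format-string class mentioned above (the function names are hypothetical, not from any particular project):

        #include <stdio.h>

        /* The recurring bug: untrusted input used as the format argument,
           which lets an attacker supply %x (read the stack) or %n (write
           memory) conversion specifiers. */
        void log_unsafe(const char *user_input) {
            printf(user_input);        /* BUG: input interpreted as a format */
        }

        /* The fix is mechanical once the pattern is known: a constant
           format string, with the untrusted data passed as an argument. */
        void log_safe(const char *user_input) {
            printf("%s", user_input);  /* input is now plain data */
        }

        int main(void) {
            log_safe("%x %x %n");      /* printed literally, harmless */
            log_unsafe("plain text");  /* works, but only by luck */
            return 0;
        }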
  • by KarmaOverDogma ( 681451 ) on Friday June 11, 2004 @12:48PM (#9399101) Homepage Journal
    Working to discover what security flaws exist in any given program, series of programs, operating system, hardware, etc. is not the real issue, in my opinion. The real issue is designing a system that is as stable, flexible/adaptable, transparent, and clear as possible, while at the same time providing a foundation that leaves room for future growth. Executing all of these concepts well can be a truly daunting task, IMO, given the often limited salaries/wages, time, and other constraints (e.g. management) that programmers in particular face. This is just one of the reasons programs/kernels/systems, etc. go through so many revisions.

    I know the article doesn't imply this at all, but the solution to security and stability problems does not lie in simply sticking our collective heads in the sand. We have to answer the who/what/when/where/why of design. Building a better mousetrap involves many elements, as I alluded to above.

  • by beejhuff ( 186291 ) on Friday June 11, 2004 @12:49PM (#9399105) Homepage
    Is it just me, or is it a stretch that within the first couple of paragraphs the assumption is made that there is somehow a direct relation between the number of intrusions and the cost of those intrusions:

    "If a vulnerability isfound by good guys and a fix is made available, then the number of intrusions--and hence the cost of intru-sions--resulting from that vulnerability is less than if it were discovered by bad guys"

    While I'm not certain that there is NO relationship between the two, I'm certainly NOT comfortable positing such a blanket assessment.

    Perhaps there is a relationship between the net economic cost and the number of intrusions, but it seems equally likely that full disclosure could reduce the marginal cost of each intrusion; a possibility treated lightly at best in this essay.

  • My Take... (Score:4, Interesting)

    by abscondment ( 672321 ) on Friday June 11, 2004 @12:51PM (#9399140) Homepage

    The value of finding security holes depends entirely on the company whose product has the hole.

    I work at Barnes & Noble as a bookseller; within a few months of working there, I found huge security holes in their in-store search system and network. I reported these to my manager; as soon as she grasped the scope of the hole, she called tech support in New York. I spent 30 minutes explaining all the different problems to them, and that was that.

    I didn't think they'd do anything about it, and that's the problem--since it costs time and money, most companies can't or won't fix holes. To my surprise, however, my store began beta testing the next version of the in-store software. What was even more surprising was that every hole I had reported was gone (so I went and found more, of course).

    There's never a silver bullet; a new vulnerability will always surface. It's really hard to stay ahead of the curve, but it's something that must be attempted.

  • by aussie_a ( 778472 ) on Friday June 11, 2004 @12:54PM (#9399201) Journal
    I agree that automatically installing patches is, in principle, a good thing. But Microsoft has a habit of changing their licenses and installing DRM when people "upgrade" and/or "install patches."

    Also, imagine I have two programs, both of which automatically install patches. Unpatched, they both work fine. But when program #1 is patched, program #2 cannot run at all. This will probably be fixed eventually, but in the meantime I cannot run program #2 at all. If I need both programs, I'm fucked under a system of auto-patches.

    However, when I have a choice, how likely am I to install a patch? Not as likely (due to laziness). So the effectiveness decreases significantly.
  • by EvilCowzGoMoo ( 781227 ) on Friday June 11, 2004 @12:55PM (#9399223) Journal
    In order to fix vulnerabilities, you have to find them. However, as soon as they are found and publicized, some script kiddie exploits them.

    I wonder if this model can be reversed. Instead of software companies spending millions to find the vulnerabilities, there is a huge body of free labor out there that will do it for them. This would eliminate script kiddies.

    Now when a new exploit comes out, it's a matter of containing it ASAP and plugging the newly found hole. There would be damage, granted, but would it be more than the cost of finding the vulnerabilities?

  • The assumptions... (Score:3, Interesting)

    by sterno ( 16320 ) on Friday June 11, 2004 @12:56PM (#9399252) Homepage
    The problem I see in this paper is that it makes certain assumptions about the behaviour of black-hat hackers which aren't necessarily true. The majority of vulnerabilities discovered by black hats eventually leak from the hacker community to a white hat, who will seek a solution to the problem. But there's no reason to conclude that this is true of all vulnerabilities.

    I forget the terminology for it, but there's the concept of a worm that is launched against a large number of vulnerable machines simultaneously. I'm not aware of such an attack in the wild, but it's theoretically possible and would be terribly destructive. If a black hat plays his cards right, he can probably sneak his exploit onto a few thousand computers without anybody noticing, then launch a massive attack before anybody even knows the vulnerability exists.

    Having said that, I think that, in the real world, the effort white hats put into finding vulnerabilities has minimal cost. There are essentially three areas where security vulnerabilities are discovered by the friendlies:

    1) QA of the product developers
    2) Hobbyist white hats
    3) Network security auditing

    The cost of #1 is an assumed cost of any product and is part of the basics of releasing it to the public. You check for typos in the documentation and you check for security bugs.

    The cost of #2 is zero because it's people doing these things on their own time for their own amusement.

    The cost of #3 is substantial, but it's critically important for some businesses to have zero security vulnerabilities. A security breach has not only an immediate cost in time to fix the problem, but also a long-term cost in damage to the company's reputation. If your bank got hacked and you lost all your savings, even if it was insured, would you keep your money in that bank?
  • Actually (Score:1, Interesting)

    by aussie_a ( 778472 ) on Friday June 11, 2004 @12:57PM (#9399257) Journal
    I'm less likely to read a PDF than an HTML file.

    I hate PDF.
  • by Sloppy ( 14984 ) * on Friday June 11, 2004 @12:57PM (#9399261) Homepage Journal
    Of course it helps! But perhaps not the way you might expect it to.

    Someone finds a buffer overflow problem. Someone finds another one. Someone finds another one. Someone finds another one.

    Someone realizes: "What if I centralized all my buffer routines and just got one little section of code working perfectly?" Then you get projects like qmail or vsftpd, which simply are more secure (a sketch of the idea follows below). Then people start using these programs, and their systems really are less vulnerable.

    This paper keeps using the phrase "given piece of software." It's talking about the (lack of) improvement at the micro scale, but it ignores big-picture improvements that happen when people get fed up or scared.

    If vulnerabilities were not discovered, would anyone bother to develop secure software?

    I think this paper has, as an underlying assumption, the popular view that it just isn't possible to write secure software, and that every large software system is a big leaky sieve that requires perpetual patching. I don't accept that. I understand why the belief is widely held: most popular software was not originally written with security in mind. But this is something that will be overcome with time, as legacies die off.
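
    The centralization described above can be made concrete. Here is a minimal sketch of a bounds-checked, growable buffer, loosely in the spirit of qmail's stralloc (the names and layout are illustrative, not qmail's actual code):

        #include <stdlib.h>
        #include <string.h>

        /* One audited growth path for all string handling: if there is
           an off-by-one, it lives here and nowhere else. */
        typedef struct {
            char  *s;
            size_t len;   /* bytes in use */
            size_t cap;   /* bytes allocated */
        } buf;

        int buf_append(buf *b, const char *data, size_t n) {
            if (n > (size_t)-1 - b->len)       /* reject size_t overflow */
                return -1;
            if (b->len + n > b->cap) {
                size_t newcap = b->len + n;    /* grow geometrically to  */
                if (newcap < 2 * b->cap)       /* amortize reallocations */
                    newcap = 2 * b->cap;
                char *p = realloc(b->s, newcap);
                if (!p)
                    return -1;
                b->s = p;
                b->cap = newcap;
            }
            memcpy(b->s + b->len, data, n);
            b->len += n;
            return 0;
        }

    Callers never touch lengths or memcpy directly, so a fixed-size stack buffer never enters the picture.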

  • by BigBir3d ( 454486 ) on Friday June 11, 2004 @12:57PM (#9399268) Journal
    You can plug all the individual holes you want; it's still a badly designed dam.

    If it were designed differently, the number of cracks would be smaller.

    I wish reporters understood that. Flame MS for not bringing Longhorn out sooner. XP is not good enough. Everyone knows this; nobody in the popular press is saying it the right way.

    *sigh* /me goes back to condemning the world to doom

  • by a_hofmann ( 253827 ) on Friday June 11, 2004 @12:58PM (#9399281) Homepage
    The study sheds light on a little-studied phenomenon and therefore turns up interesting facts. It shows that the difference between black-hat discovery and white-hat discovery basically reduces to the number of exploits occurring between discovery and public disclosure of a bug, which is negligible compared to the total number of exploits over the bug's lifecycle (a rough formalization follows below).

    That may hold true and make sense if you study the total number of exploitable systems on the net, but it totally ignores the fact that a very large number of systems place little priority on security, while only a few depend on 100% system security.

    Those few high-security sites have the need, and pay for the resources, to fix known flaws or patch their software ASAP. They are the ones who gain from knowing about a security flaw before the black-hat guys do. They cannot live with even the shortest vulnerability timeframes, and usually patch exploits as soon as they are published on public security channels.

    It may hold true that the work put into security auditing does not pay off on the whole, taking all software users into account, but for the few who really care about security it surely does...
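
    Roughly formalized (notation mine, not the paper's): if I(t) is the rate of intrusions through a given vulnerability, the study's claim amounts to

        % Intrusions between discovery (t_disc) and public disclosure (t_pub)
        % are negligible next to intrusions over the bug's whole life (t_dead).
        \[
          \int_{t_{\mathrm{disc}}}^{t_{\mathrm{pub}}} I(t)\,dt
          \;\ll\;
          \int_{t_{\mathrm{disc}}}^{t_{\mathrm{dead}}} I(t)\,dt
        \]

    The objection above is that, for the few sites that really care about security, the left-hand window is the only part that matters.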
  • by jwthompson2 ( 749521 ) * on Friday June 11, 2004 @01:00PM (#9399311) Homepage
    But you should be able to override the default behavior of auto-installing patches; my thinking is that systems should patch themselves automatically unless the user specifies that they shouldn't.

    The issue remains, though, that an unpatched system is still vulnerable. If the patch breaks an application and the machine goes unpatched, there is a loss of security from potential intrusion; if the patch is applied, there is a potential loss of productivity. This is the kind of call a sysadmin has to make for their network, and a sysadmin should know enough to make that decision in an informed way. The average computer user is not equipped the same way and probably should receive the patch, to mitigate the risk their compromised system would pose to the greater group of users they connect to via the internet.
  • by paulproteus ( 112149 ) <slashdot AT asheesh DOT org> on Friday June 11, 2004 @01:01PM (#9399340) Homepage
    The parent is exactly right. Having read through this paper now, I realize what it misses: the economic impact of the information.

    Much work has been done in economics on the effect that inadequate information flow has on a market; a Nobel Prize was won for it recently. The paper assumes that there is a constant number of vulnerable machines for any given vulnerability, as you can see on page 2. First, that ignores the fact that someone has to choose to use these vulnerable products. Second, it ignores the choice facing sysadmins when they learn that a particular company's products are more likely to have bugs, as the parent describes.

    The moral of the story is that the paper tries to be broader than it can be - by assuming that software acquisition decisions never happen, it fails to see the effect of vulnerability disclosure on those decisions. And those decisions, made well, do in fact make us more secure. The total "software defect rate" might not decrease, but the defect rate in *used* software may well decrease.
  • by Fo0eY ( 546716 ) on Friday June 11, 2004 @01:12PM (#9399468)
    I've thought for a while that we're going to come full circle and go back to running apps off a server instead of client-side.

    Just look at the direction IBM is going with its web-based office suite.

    Just one patch and everyone's updated on the fly.
  • by Anonymous Coward on Friday June 11, 2004 @01:19PM (#9399567)
    These two things aren't the same. Hell, patching security holes doesn't always mean improved security. Of course it's important to find the issues -- Windows users only patch when a worm is released for a six-month-old vulnerability. Take the adodb.stream issue for IE (or whatever it is) that has been a gaping hole for 10 months or so. No worm means no pressure on MSFT to fix it.

    There needs to be some pressure (monetary, legal, etc.) on developers and companies to quit writing crappy code. Microsoft has succeeded in creating a complex OS-integrated web browser that has proved to be a pain in the ass to secure. Large companies need to file lawsuits against MSFT to recover the damages related to fixing and securing Windows-based operating systems. This is the only language MSFT speaks. Of course, even if such a suit made it very far in the legal system, MSFT would just settle by giving the company reduced pricing on Office and Windows licenses.

    Damn it all! Let's go back to paper. /rant off
  • If Microsoft automatically installs a patch, can they change the license? I mean, no one clicked on 'I agree'.
  • by Anonymous Coward on Friday June 11, 2004 @01:24PM (#9399633)
    Refer back to last week's story [slashdot.org] about the Royal Bank's problems with an upgrade in Canada. Auto-patching might very well lead to something like this on a larger scale, or affecting numerous small businesses.
  • by Anonymous Coward on Friday June 11, 2004 @01:25PM (#9399643)
    I built a Win98 box a couple of weekends ago, and the Windows Update process was grueling. It stands to reason that if A requires B, then when I select A, just f'n give me B too. Maybe the separate downloads are there for the poor bastards dialing up.

    That said, I do appreciate the XP "Download and bug me" option.
  • by mangu ( 126918 ) on Friday June 11, 2004 @01:29PM (#9399714)
    The "reporting" only provides script kiddies with a list of ways to be dicks.


    You mean script kiddies read university reports? Somehow, I can't imagine someone doing all the work needed to be a highly respectable university professor in order to become a script kiddy. I think it's more on the line of "Hey, let's throw a 4 kbyte buffer at this fucker and see what happens. Nothing? Try 8 kbytes, then!"


    I might be wrong, but I wish that, for every "security through obscurity" argument I see, they also published some hard data showing exactly which exploits have been developed based on published reports. Or, considering how difficult it is to keep track on the script kiddies' development methods, at least show which vulnerabilities have been published before the exploit came out.


    From my own research, using the data at netcraft and the late alldas.de, exploits on Microsoft IIS were sixteen times more likely to happen than on open-source Apache, considering the installed base of each.
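
    The normalization works like this (the raw counts below are placeholders chosen to reproduce the quoted sixteen-fold ratio, not actual figures from Netcraft or alldas.de):

        #include <stdio.h>

        /* Defacement counts divided by installed base give a per-site
           exploitation rate, which is what makes the comparison fair. */
        int main(void) {
            double iis_defaced    = 800.0,  iis_sites    = 10.0e6;
            double apache_defaced = 300.0,  apache_sites = 60.0e6;

            double iis_rate    = iis_defaced / iis_sites;
            double apache_rate = apache_defaced / apache_sites;

            printf("IIS defaced %.0fx more often per installed site\n",
                   iis_rate / apache_rate);    /* 16x with these inputs */
            return 0;
        }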

  • by Anonymous Coward on Friday June 11, 2004 @01:34PM (#9399790)
    A PDF is not inherently worse than a static HTML page. In fact, a PDF is more server-friendly than a static webpage because it inlines its images (i.e. fewer HTTP requests).

    Yes, I can imagine a scenario where a user ran ps2pdf and got a bloated PDF because all of his text was turned into bitmaps of the letters. A clueless user could also stick tons of graphics into his PDF, or try to cram 100 documents' worth of data into one PDF. But it's possible to make the same mistakes in HTML, so any argument that PDFs are inherently worse on the server is just flamebait.
  • by Umrick ( 151871 ) on Friday June 11, 2004 @01:36PM (#9399825) Homepage
    Actually, I did have this happen once in all my years with a Debian stable production box. It helped move data from a VAX box to be injected into an Oracle DB running under Windows for ERP.

    There was an update to the NFS code to fix a potential exploit, which unfortunately also broke the NFS shares on the VAX side.

    It was easy to revert to the previously "broken" NFS server, though.

    That was one incident in five years of running, though. The number of times an update has borked Windows is much more of a concern.

    Don't even get me started on the lobotomy Mandrake's autoupdate function performed on one of my machines.
  • Re:Not necessarily (Score:2, Interesting)

    by nkh ( 750837 ) on Friday June 11, 2004 @01:40PM (#9399878) Journal
    OT, but the swastika is an Indian religious symbol, and it is also used on Japanese maps to mark the locations of Buddhist temples.
  • by Tumbleweed ( 3706 ) * on Friday June 11, 2004 @01:47PM (#9399971)
    "A lot of effort goes into funding law enforcement in society, but there's no real evidence that it actually reduces crime. I've been trying to study this problem and the results aren't very encouraging. It doesn't look like we're making much of a dent in the overall number of crimes in our society."

    ---

    If you think security is bad now, just stop fixing security vulnerabilities and see how much worse things get. It's like a sump pump - it may not fix the leak, but it'll keep you from drowning.
  • Having actually read the Windows XP EULA at one point, IIRC there is a clause that addresses this: basically, when you agree to one EULA you agree to any changes they decide to make to it down the road.
  • Complexity (Score:2, Interesting)

    by Jane_Dozey ( 759010 ) on Friday June 11, 2004 @01:51PM (#9400023)
    Software security is, in a way, getting better. More holes are being patched, and more ways of exploiting software are being found and publicized (which is good, provided programmers and sysadmins take note).
    The real problem is that software doesn't stick around very long. New versions are released with new bugs and holes. Programs get bigger and more complex, which makes it harder and harder to achieve a good degree of security in any product or model. That would suggest security is getting worse. I know I've contradicted what I said earlier, but I think both are true: security in software gets better with every hole fixed and worse every time code is changed or added.
    Sticking your head in the sand and pretending these problems don't exist because they haven't been published yet makes no sense. It doesn't make any system more secure.
    If a door on a building is unlocked but no one knows it yet, is that door secure? No.
  • by __aadkms7016 ( 29860 ) on Friday June 11, 2004 @01:56PM (#9400084)
    Black Hats are dynamic actors -- if the world changes so that Figure 2 in Eric's paper is the norm rather than Figure 1, the Black Hat community will evolve to live in the new world. Their new goal will be to maximize area under the "Private Exploitation" part of Figure 2. We may be better off with the current state of affairs.
  • Liability (Score:3, Interesting)

    by YouHaveSnail ( 202852 ) on Friday June 11, 2004 @02:02PM (#9400146)
    As far as I can see, the paper fails to consider the liability issues that come from failing to patch security-related flaws. If an ISP, for example, fails to actively protect its systems from intrusion, it would likely be found negligent in cases where harm comes to its customers as a result of such an intrusion. If, on the other hand, the same ISP keeps abreast of security warnings and does as much as it can to lock out intruders, one would think it would be protected, at least to some degree, from claims of negligence.

    On a microscopic level, individual system administrators have a strong personal interest in avoiding having to tell a CIO something like: "I've heard rumors that attacks like the one that just devastated our system might be possible, but nobody ever discovered a particular hole, so I ignored the issue. But look, here's a paper which says that searching for security flaws is probably just a waste of time and money. See? Even though this attack will cost us millions, think of the money we saved in not having to look for holes!"
  • by thayner ( 130464 ) <thaynerNO@SPAMrcn.com> on Friday June 11, 2004 @02:03PM (#9400153) Homepage
    Sounds good, but I for one have never seen a sysadmin fired for incompetence. Their managers usually like the visibility a security vulnerability brings, because they can lobby for more money to fight future vulnerabilities. Few companies judge their IT employees critically, which is why MCSEs are able to get jobs and why shifting everything to India sounds like such a good idea.
  • bingo (Score:2, Interesting)

    by blunte ( 183182 ) on Friday June 11, 2004 @02:07PM (#9400196)
    It will take many software generations to start seeing appreciable results in security.

    One of the biggest difficulties is increasing security while simultaneously adding features. Once products and operating systems reach a certain level of feature maturity, the earnest improvement of security can begin.

    At the same time, the building blocks are getting safer. Simply eliminating buffer overflow vulnerabilities greatly strengthens security.

    Give it time, and keep working toward smarter development practices. Software is still young.
  • by The_Wilschon ( 782534 ) on Friday June 11, 2004 @02:52PM (#9400798) Homepage
    You go too far by saying that Automatic/Forced patching is the _only_ way to make discovery worthwhile.

    It might be the only practical way, or the only realistic way, but the fact is, there are other conceivable ways. For example, every user might spontaneously decide to install the patch.

    And here's something to chew on: what happens when the automated patch server is compromised, and a deliberate security hole is auto-installed on every machine? That seems to me a pretty grave danger, especially if the compromise is not caught quickly and systems repatched to remove the hole. And given the rate at which worms spread, "quickly" means you have just a few minutes to discover and correct the problem, because the attacker undoubtedly has an exploit for his custom hole sitting and waiting to be used.
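
    The standard mitigation for the compromised-server scenario is to have the vendor sign patches with a key the patch server never holds, so a hijacked server can serve stale or broken files but cannot mint malicious ones. A minimal sketch, where verify_signature is a hypothetical stand-in for a real crypto-library call:

        #include <stdbool.h>
        #include <stdio.h>

        /* Hypothetical stand-in for a real signature check against an
           offline vendor key; failing closed keeps the sketch runnable. */
        static bool verify_signature(const char *patch, const char *vendor_key) {
            (void)patch; (void)vendor_key;
            return false;   /* untrusted until proven otherwise */
        }

        static bool apply_patch(const char *patch) {
            printf("applying %s\n", patch);
            return true;
        }

        /* The server only distributes bytes; trust comes from the
           signature, which a compromised server cannot forge. */
        static int try_auto_patch(const char *patch, const char *vendor_key) {
            if (!verify_signature(patch, vendor_key)) {
                fprintf(stderr, "rejecting unsigned or tampered patch\n");
                return -1;
            }
            return apply_patch(patch) ? 0 : -1;
        }

        int main(void) {
            return try_auto_patch("update.bin", "vendor-pubkey.pem");
        }

    This narrows the exposure to theft of the signing key itself, a much smaller target than an internet-facing download server.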
  • by Anonymous Coward on Friday June 11, 2004 @03:03PM (#9400921)
    Anyone who works at a bank or somewhere else with a 24x7 production server does not patch unless absolutely necessary.

    It's not good business practice to risk your business-critical systems on unknown patches more than once every year or two.

    End-user desktops are different.
  • Re:Ummm... (Score:3, Interesting)

    by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Friday June 11, 2004 @03:19PM (#9401106) Homepage Journal
    The biggest safety issue for the occupants of an SUV is that some drivers overestimate the abilities of their vehicles and do stupid things in them that they wouldn't attempt in a smaller vehicle ("The water barely covers the bridge - we can make it!" or "I have 4 wheel drive, so I should be able to accelerate through this corner.").

    So, it sounds to me like a selfishness and cowardice issue on the part of the SUV driver: I would rather two other people die in a car-to-car collision than die myself.

    Frankly, I'd rather 10 other people die than me, and I'll bet 99.9% of the population feels the same way when it comes down to it. My wife isn't married to the people in the other car. My children don't call them "daddy". My mom doesn't worry about them, and my sisters didn't grow up with them. That's not to say I would never sacrifice my life in any situation, but boy, there had better be one heck of a payoff for the people I'd be saving for me to consider it (i.e., taking a bullet for the President, protecting my family, etc.).

  • by jombee ( 111566 ) on Friday June 11, 2004 @03:20PM (#9401112)
    With each disclosure:
    - V(found) approaches V(all).
    - the time (t) in the vulnerability lifecycle between disclosure and fix release becomes a concrete value, t(fix).
    - the cost C(pub) becomes a quantifiable value.

    As a security professional, I am better able to evaluate, assess, and manage risk given each V(found), t(fix), and C(pub) above. Furthermore, for every initial lack of public disclosure (or BHD) and large t(fix) value on critical/costly systems or information, I am able to make more meaningful vendor/product recommendations.

    While the paper is well written, contains valid analysis, and provides insight into the disclosure issue, I find section 3.3 lacking. The author's conclusions, and the security industry itself, would be strengthened by further work modeling the range of disclosure costs for various commercial industries, educational institutions, and government establishments.

    In my professional experience, the sum of knowledge I gain from disclosure details provides defensive strength.

    =jombee
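
    One hedged way to put the list above in symbols (notation mine, not jombee's or the paper's): once disclosure pins down t(fix) and quantifies C(pub), the risk of each found vulnerability v can be scored as an expected cost,

        % Expected-cost reading of the list above: probability of exploitation
        % during the exposure window, times the quantified cost of the event.
        \[
          R(v) \;\approx\; \Pr\bigl[\text{exploited before } t_{\mathrm{fix}}(v)\bigr]
                \cdot C_{\mathrm{pub}}(v)
        \]

    which is the kind of quantity vendor/product recommendations can be ranked on.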
  • by shis-ka-bob ( 595298 ) on Friday June 11, 2004 @03:47PM (#9401407)
    I lived for a while in Gif-sur-Yvette, France. The mayor made a point of building road obstructions and posting that this is a domain for pedestrians. Against my assumption that this was nuts, it actually seemed to work. Cars were fewer and slower, and walking was more pleasant. The downtown was much more vital than the virtual ghost towns at the centers of many cities in 'Walmart America'. The local drivers said it didn't really slow them down much, since there was less traffic to fight.

    Intuition can be misleading; it's better to have observations and attempts to model the data. Even though this paper is completely counterintuitive to many of us, we should gather better data and build better models. Making analogies is a useful first step, but only to the extent that you can use the analogy to build a better model. Then you can prove that the insights you gained by analogy are valid.

  • by randomencounter ( 653994 ) on Friday June 11, 2004 @03:50PM (#9401431)
    If I have a system with a security vulnerability, it is unsafe. If everyone knows about it, it is very unsafe.

    If only Dmitri Hackforprofit and his buddies know about it, I'm toast.

    Just because the bad guys exploit holes found by the good guys doesn't mean they don't know how to find them on their own.

  • Is that even legal in the US? I mean, in Brazil some telecom companies tried to do something similar (with executive support) and were forced to back down by the judiciary, because such terms are illegal here.
  • by VishalMishra ( 100004 ) on Friday June 11, 2004 @05:47PM (#9402556) Homepage
    The author is naive not to have considered grey-hat discovery (GHD) of vulnerabilities. Grey hats (not being black hats) do not go on a private exploitation spree, because most people don't want to violate laws and end up in prison, but they definitely want to make a big name for themselves, so they publish the vulnerability without giving the vendor a chance to fix the problem before public disclosure. The percentage of grey hats is enormous compared to both white and black hats, so most of the arguments in this paper rest on a WEAK foundation of practical reality. This changes many of the mathematical equations in the article and leads to significantly different final conclusions. Contact me if you need more specific details.
  • by spinlocked ( 462072 ) on Friday June 11, 2004 @09:44PM (#9404110)
    Actually, I did have this happen once in all my years with a Debian stable production box

    I once saw a vendor-approved patch bring down a financial market running on some very big iron. The money lost in compensation to the traders was many, many times the cost of the hardware involved. Automatic patching was the root cause - except rather than a shell script being to blame, the idiot was there in person, downloading patches and whacking them straight on.

    They failed to do any testing whatsoever before applying the patches to production machines. Testing is an often overlooked part of availability-engineering best practice.
  • by MillionthMonkey ( 240664 ) on Saturday June 12, 2004 @01:41AM (#9405101)
    When a patch comes out for software, it doesn't make the software "suddenly" less secure than it was the day before.

    Of course not. That happens when the vulnerability is published, not when the patch is released. They're two separate events.

    If you buy a car that explodes when it is hit from the side, and the next year the manufacturer releases a new model minus this magic exploding "exploit", your car is similarly not any LESS safe than it was before. It's just relatively less safe compared to the car that doesn't explode so easily.

    Of course it's less safe than it was before! Now all your enemies know they can easily kill you by hitting your car on the side!

"Experience has proved that some people indeed know everything." -- Russell Baker

Working...