Is Finding Security Holes a Good Idea?
ekr writes "A lot of effort goes into finding vulnerabilities in
software, but there's no real evidence that it actually improves security. I've been trying to study this problem and the results (pdf) aren't very encouraging. It doesn't look like we're making much of a dent in the overall number of vulnerabilities in the software we use. The paper was presented at the Workshop on Economics and Information Security 2004 and the slides can be found here (pdf)."
Bill (Score:3, Interesting)
But what about the converse? (Score:5, Interesting)
I guess I'm a little unclear on what the research stated is supposed to actually accomplish.
I agree (Score:3, Interesting)
Control for the experiment (Score:2, Interesting)
Finding the holes is only half the battle (Score:5, Interesting)
The OpenBSD people have built up a good track record on security by finding holes and fixing them everywhere possible. I am sure they would disagree with your assertion that finding holes does not help to improve security. Finding the bugs is an important first step towards not putting them back in next time you write code.
"Finding flaws" is not the problem (Score:2, Interesting)
I know the article doesn't imply this at all, but the solution to security and stability problems does not lie in simply sticking our collective heads in the sand. We have to answer the who/what/when/where/why elements of design. Building a better mousetrap involves many elements, as I alluded to above.
fundamental misconception perhaps (Score:2, Interesting)
"If a vulnerability is found by good guys and a fix is made available, then the number of intrusions--and hence the cost of intrusions--resulting from that vulnerability is less than if it were discovered by bad guys"
While I'm not certain that there is NO relationship between the two, I'm certainly NOT comfortable positing such a blanket assessment.
Perhaps there is a relationship between the net economic cost and the number of intrusions, but it seems equally likely that full disclosure could reduce the marginal cost of each intrusion; a possibility treated lightly at best in this essay.
My Take... (Score:4, Interesting)
The value of finding security holes is totally dependent upon the company whose product has the hole.
I work at Barnes & Noble as a bookseller; within a few months of working there, I found huge security holes in their in-store search system and network. I reported these to my manager; as soon as she grasped the scope of the hole, she called tech support in New York. I spent 30 minutes explaining all the different problems to them, and that was that.
I didn't think they'd do anything about it, and that's the problem--since it costs time and money, most companies can't or won't fix holes. To my surprise, however, my store began beta testing the next version of the in-store software. What was even more surprising was that every hole I had reported was gone (so I went and found more, of course).
There's never a silver bullet; a new vulnerability will always surface. It's really hard to stay ahead of the curve, but it's something that must be attempted.
Uhuh. Is this good if Microsoft does this? (Score:5, Interesting)
Also, imagine I have two programs. Both automatically install patches. Unpatched, they both work fine. But when program #1 is patched, program #2 cannot run at all. This will probably be fixed eventually, but in the meantime I cannot run program #2 at all. If I need both programs, I'm stuck with the system of auto-patches.
However, when I have a choice, how likely am I to install a patch? Not as likely (due to laziness). So the effectiveness decreases significantly.
Re:Fixing vulnerabilities is GOOD! (Score:2, Interesting)
I wonder if this model can be reversed. Instead of software companies spending millions to find the vulnerabilities there is a huge body of free labor out there who will do it for you. This would eliminate script kiddies.
Now when a new exploit comes out, it's a matter of containing it ASAP and plugging the newly found hole. There would be damage, granted, but would it be more than the cost of finding the vulnerabilities?
The assumptions... (Score:3, Interesting)
I forget the terminology for it, but there's the concept of worms that are launched on a large number of vulnerable machines simultaneously. I'm not aware of an attack like this in the wild but it's theoretically possible and would be terribly destructive. If a black hat hacker plays his cards right, he can probably sneak his exploit onto a few thousand computers without anybody noticing. Then he can launch a massive attack before anybody even knows the vulnerability exists.
Having said that, I think that, in the real world, the amount of effort put into finding vulnerabilities by white hats has a minimal cost. There are essentially three areas where security vulnerabilities are discovered by the friendlies:
1) QA of the product developers
2) Hobbyist white hats
3) Network security auditing
The cost of #1 is an assumed cost of any product and is part of the basics of releasing it to the public. You check for typos in the documentation and you check for security bugs.
The cost of #2 is zero because it's people doing these things on their own time for their own amusement.
The cost of #3 is substantial but it's critically important to some businesses to have zero security vulnerabilities. A security breach not only has an immediate cost in time to fix the problem, but it also has a long term cost by damaging the integrity of the company. If your bank got hacked and you lost all your savings, even if it was insured, would you keep your money in that bank?
Actually (Score:1, Interesting)
I hate PDF.
Yes, it's a good idea (Score:5, Interesting)
Someone finds a buffer overflow problem. Someone finds another one. Someone finds another one. Someone finds another one.
Someone realizes: "what if I centralized all my buffer routines and just got one little section of code working perfectly?" Then you get projects like qmail or vsftp, which simply are more secure. Then people start using these programs, and their systems really are less vulnerable.
This paper keeps using the phrase "given piece of software." It's talking about (lack of) improvements at a micro scale, but it ignores improvements in the big picture that can happen due to people getting fed up or scared.
If vulnerabilities were not discovered, would anyone bother to develop secure software?
I think this paper has as an underlying assumption the popular view that it just isn't possible to write secure software, and that every large software system is just a big leaky sieve that requires perpetual patching. I don't accept that. I understand why the belief is widely held: most popular software was not originally written with security issues in mind. But this is something that will be overcome with time, as legacies die off.
think of a cracking dam (Score:3, Interesting)
if it's designed differently, the number of cracks is smaller...
i wish reporters understood that. flame MS for not bringing Longhorn out sooner. XP is not good enough. everyone knows this, but nobody in the popular press is saying it in the right way.
*sigh*
Conclusion basically flawed (Score:3, Interesting)
That may hold true and make sense if you study the total number of exploitable systems on the net, but it totally ignores the fact that there are a very large number of systems with little priority for security, while only a few depend on 100% system security.
Those few high-security sites have the need, and pay for the resources, to fix known flaws or patch their software ASAP. They are the ones who gain from knowledge of a security flaw before the black-hat guys do. They cannot live with even the shortest vulnerability timeframes and usually patch exploits as soon as they get published on public security channels.
It may hold true that the work put into security auditing does not pay off on the whole, taking all software users into account, but for those few who really care about security it surely does...
Re:Fixing vulnerabilities is GOOD! (Score:5, Interesting)
The issue still remains that an unpatched system is vulnerable. If the patch breaks an application and the machine goes unpatched, there is a loss in security because of potential intrusion; if the patch is applied, there is a potential loss of productivity. This is the kind of call a sysadmin has to make for their network, and a sysadmin should know enough to make the decision in an informed way. The average computer user is not equipped in the same way and probably should receive the patch, in order to mitigate the risk that the user's compromised system may pose to the greater group of users they connect to via the Internet.
The problem with this paper (Score:5, Interesting)
Much work has been done in economics regarding the effect that inadequate information flow has on a market; a Nobel Prize was won for it recently. The paper assumes that there is a constant number of vulnerable machines for any given vulnerability, as you can see on page 2. First, it ignores the fact that someone has to choose to use these vulnerable products. Second, it ignores the choice that sysadmins face when they learn that a particular company's products are more likely to have bugs, as the parent describes.
The moral of the story is that the paper tries to be broader than it can be - by assuming that software acquisition decisions never happen, it fails to see the effect of vulnerability disclosure on those decisions. And those decisions, made well, do in fact make us more secure. The "software defect rate" in total might not decrease, but the defect rate in *used* software may well decrease.
going back to mainframes and dumb terminals (Score:2, Interesting)
just look at the direction IBM is going with building their web based office suite
just one patch and everyone's updated on the fly
Finding security holes != improved security (Score:1, Interesting)
There needs to be some pressure (monetary, legal, etc.) put on developers and companies to quit writing crappy code. Microsoft has succeeded in creating a complex, OS-integrated web browser that has proved to be a pain in the ass to secure. Large companies need to file lawsuits against MSFT to recover damages related to fixing and securing Windows-based operating systems. This is the only language that MSFT speaks. Of course, even if it made it very far in the legal system, MSFT would just settle by giving the company reduced pricing for Office and Windows licenses.
Damn it all! Let's go back to paper.
Re:Uhuh. Is this good if Microsoft does this? (Score:5, Interesting)
Re:Fixing vulnerabilities is GOOD! (Score:5, Interesting)
Re:Fixing vulnerabilities is GOOD! (Score:1, Interesting)
That said, I do appreciate the XP "Download and bug me" option.
Re:But what about the converse? (Score:3, Interesting)
You mean script kiddies read university reports? Somehow, I can't imagine someone doing all the work needed to be a highly respectable university professor in order to become a script kiddy. I think it's more on the line of "Hey, let's throw a 4 kbyte buffer at this fucker and see what happens. Nothing? Try 8 kbytes, then!"
I might be wrong, but I wish that, for every "security through obscurity" argument I see, they also published some hard data showing exactly which exploits have been developed based on published reports. Or, considering how difficult it is to keep track of the script kiddies' development methods, at least show which vulnerabilities had been published before the exploit came out.
From my own research, using the data at netcraft and the late alldas.de, exploits on Microsoft IIS were sixteen times more likely to happen than on open-source Apache, considering the installed base of each.
Re:don't feed the trolls (or the karma whores) (Score:1, Interesting)
Yes, I can imagine a scenario where the user tried using ps2pdf and got a bloated pdf because all of his text was transformed into bitmaps of the letters. A clueless user could also stick tons of graphics into his pdf or try to stick 100 documents worth of data into one pdf. However, it's possible to make those same mistakes in HTML, so any argument that PDFs are inherently worse on the server are just flamebait.
Re:Uhuh. Is this good if Microsoft does this? (Score:5, Interesting)
There was an update to the NFS code to solve a potential exploit, which unfortunately also broke the NFS shares on the VAX side.
It was easy to revert to the previously "broken" NFS server, though.
That was one time in five years of running. The number of times an update has borked Windows is much more of a concern.
Don't even get me started on the lobotomy done on a machine by Mandrake's autoupdate function.
Re:Not necessarily (Score:2, Interesting)
Is Funding Law Enforcement a Good Idea? (Score:5, Interesting)
---
If you think security is bad now, just stop fixing security vulnerabilities and see how much worse things get. It's like a sump pump - it may not fix the leak, but it'll keep you from drowning.
Re:Uhuh. Is this good if Microsoft does this? (Score:5, Interesting)
Complexity (Score:2, Interesting)
The real problem is that software doesn't stick around long. New versions are released with new bugs and holes. They get bigger and more complex, which makes it harder and harder to get a good degree of security in any product or model. This would show that security is getting worse. I know I've contradicted what I said earlier, but I think both are true: security in software gets better with every hole fixed, but worse every time code gets changed or added.
Sticking your head in the sand and pretending that these problems don't exist because they are not published yet makes no sense. It doesn't make any system any more secure.
If a door on a building is unlocked but no one knows it yet, does that make the door secure? No.
Be careful what you wish for ... (Score:5, Interesting)
Liability (Score:3, Interesting)
On a microscopic level, individual system administrators have a strong personal interest in avoiding having to tell a CIO something like: "I've heard rumors that attacks like the one that just devastated our system might be possible, but nobody ever discovered a particular hole, so I ignored the issue. But look, here's a paper which says that searching for security flaws is probably just a waste of time and money. See? Even though this attack will cost us millions, think of the money we saved in not having to look for holes!"
Re:Key logical errors. (Score:2, Interesting)
bingo (Score:2, Interesting)
one of the biggest difficulties is to increase security while simultaneously adding features. once products and operating systems reach a certain level of feature maturity, the earnest improvement of security can happen.
at the same time, the building blocks are getting safer. simply eliminating buffer overflow vulnerabilities greatly strengthens security.
give it time, and keep working toward smarter development practices. software is still young.
Re:Missing a big part of the conclusion (Score:2, Interesting)
It might be the only practical way, or the only realistic way, but the fact is, there are other conceivable ways. For example, every user might spontaneously decide to install the patch.
And here's something to chew on: what happens when the automated patch server is compromised, and a deliberate security hole is auto/force-installed on every machine? That would seem to me to be a pretty grave danger, especially if the compromise is not caught quickly and systems are not repatched to remove the bug. And given the rate of worm spread, "quickly" means you have just a few minutes to discover and correct the problem, because the hacker undoubtedly has an exploit for his custom hole just sitting and waiting to be used.
24x7 production machines not patched (Score:1, Interesting)
It's not good business practice to risk your business critical systems to unknown patches except once every year or two.
end user desktops are different.
Re:Ummm... (Score:3, Interesting)
So, it sounds to me like a selfishness and cowardice issue on the part of the SUV driver - I would rather two other people die in a car to car collision than I die.
Frankly, I'd rather 10 other people die than me, and I'll bet 99.9% of the population feels the same way if it comes down to it. My wife isn't married to the people in the other car. My children don't call them "daddy". My mom doesn't worry about them, and my sisters didn't grow up with them. That's not to say that I would never sacrifice my life in any situation, but boy, there better be one heck of a payoff for the people I'd be saving for me to consider it (ie taking a bullet for the President, protecting my family, etc.).
In defense knowledge provides strength (Score:2, Interesting)
- V(found) approaches V(all).
- the time (t) in the vulnerability lifecycle between disclosure and fix release becomes a concrete value = t(fix).
- the cost C(pub) can become a quantifiable value.
As a security professional I am more accurately able to evaluate/assess and manage risk for each V(found), t(fix), and C(pub) given above. Furthermore, for every initial public lack of disclosure (or BHD) and large t(fix) value on critical/costly systems or information, I am able to make more meaningful vendor/product recommendations.
While the paper is well written, contains valid analysis, and provides insight into the disclosure issue, I find section 3.3 to be lacking. The author's conclusions and the security industry itself would be strengthened by further work in modeling the range of cost issues due to disclosure for various commercial industries, educational institutions, and government establishments.
In my professional experience, the sum of knowledge I gain from disclosure details provides defensive strength.
=jombee
Re:Is Fixing Pot-holes a good idea? (Score:3, Interesting)
Intuition can be misleading; it's better to have observations and attempts to model the data. Even though this paper is completely counter-intuitive to many of us, we should gather better data and build better models. Making analogies is a useful first step, but only to the extent that you can use the analogy to build a better model. Then you can prove that the insights you gain by analogy are valid.
Re:Fixing vulnerabilities is GOOD! (Score:3, Interesting)
If only Dmitri Hackforprofit and his buddies know about it, I'm toast.
Just because the bad guys exploit holes found by the good guys doesn't mean they don't know how to find them on their own.
Re:Uhuh. Is this good if Microsoft does this? (Score:2, Interesting)
GHD: Grey Hat Discovery (Score:3, Interesting)
Re:Uhuh. Is this good if Microsoft does this? (Score:3, Interesting)
I once saw a vendor approved patch bring down a financial market running on some very big iron. The amount of money lost in compensation to the traders was many, many times the cost of the hardware involved. Automatic patching was the root cause - except rather than a shell script being to blame, the idiot was there in person, downloading patches and whacking them straight on.
They failed to do any testing whatsoever before applying them to production machines. This is an often overlooked part of availability engineering best practices.
Re:Fixing vulnerabilities is GOOD! (Score:3, Interesting)
Of course not. That happens when the vulnerability is published, not when the patch is released. They're two separate events.
If you buy a car that explodes when it is hit from the side and the next year the manufacturer releases a new model minus this magic exploding "exploit", your car is similarly not any LESS safe than it was before. It's just relatively less safe when compared to the car that doesn't explode so easily.
Of course it's less safe than it was before! Now all your enemies know they can easily kill you by hitting your car on the side!