Is Finding Security Holes a Good Idea?
ekr writes "A lot of effort goes into finding vulnerabilities in
software, but there's no real evidence that it actually improves security. I've been trying to study this problem and the results (pdf) aren't very encouraging. It doesn't look like we're making much of a dent in the overall number of vulnerabilities in the software we use. The paper was presented at the Workshop on Economics and Information Security 2004 and the slides can be found here (pdf)."
Google is teh friend (Score:5, Informative)
Is finding security holes a good idea? [64.233.167.104]
Writing Security Considerations Sections [64.233.167.104]
Fixing vulnerabilities is GOOD! (Score:3, Insightful)
Re:Fixing vulnerabilities is GOOD! (Score:5, Insightful)
Uhuh. Is this good if Microsoft does this? (Score:5, Interesting)
Also, imagine I have two programs. Both automatically install patches. Unpatched, they both work fine. But when program #1 is patched, program #2 cannot run at all. Now this will probably be fixed eventually, but in the meantime, I cannot run program #2 at all. If I need both programs, I'm fucked with the system of auto-patches.
However when I have a choice, how likely am I to install a patch? Not as likely (due to laziness). So the effectiveness decreases significantly.
Re:Uhuh. Is this good if Microsoft does this? (Score:5, Informative)
Re:Uhuh. Is this good if Microsoft does this? (Score:5, Interesting)
There was an update to the NFS code to solve a potential exploit, which unfortunately also broke the NFS shares on the VAX side.
It was easy to revert to the previously "broken" NFS server, though.
That was one time in five years of running. The number of times an update has borked Windows is much more of a concern.
Don't even get me started on the lobotomy Mandrake's autoupdate function did on a machine.
Re:Uhuh. Is this good if Microsoft does this? (Score:3, Interesting)
I once saw a vendor approved patch bring down a financial market running on some very big iron. The amount of money lost in compensation to the traders was many, many times the cost of the hardware involved. Automatic patching was the root cause - except rather than a shell script being to blame, the idiot was there in person, downloading patches and whacking them straight on.
They failed to do any testing whatsoever…
Re:Uhuh. Is this good if Microsoft does this? (Score:5, Insightful)
Yes, it involves a certain amount of trust, but if you didn't trust anyone, you'd have to write everything yourself. Also, the company's business model depends on the reliability of said patching service, so they do their best to run it well.
Of course, license changes are evil, but they're unlikely to happen with FOSS. Yet another reason to move away from Microsoft.
Re:Uhuh. Is this good if Microsoft does this? (Score:5, Interesting)
Re:Uhuh. Is this good if Microsoft does this? (Score:5, Interesting)
Re:Fixing vulnerabilities is GOOD! (Score:5, Insightful)
Re:Fixing vulnerabilities is GOOD! (Score:5, Interesting)
The issue still remains, though, that an unpatched system is still vulnerable. If the patch breaks an application and the machine goes unpatched, there is a loss of security from potential intrusion; if the patch is applied, there is a potential loss of productivity. This is the kind of call a sysadmin has to make for their network, and a sysadmin should know enough to make the decision in an informed way. The average computer user is not equipped the same way, and should probably receive the patch in order to mitigate the risk their compromised system would pose to the greater group of users they connect to via the Internet.
Not necessarily (Score:5, Informative)
Not all patches are security patches. Many fix functional problems, such as a spell-check feature that doesn't work correctly. Those won't compromise security, but they may interfere with other programs.
Re:Not necessarily (Score:4, Funny)
IIRC, the hotfix for the offensive characters (some font had a swastika or something like that) was listed with the "critical" updates on Windows Update. Maybe I'm remembering wrong, though.
Re:Fixing vulnerabilities is GOOD! (Score:5, Interesting)
Re:Fixing vulnerabilities is GOOD! (Score:5, Insightful)
The one point about discovery that I don't recall seeing made is: where would our software be today if people didn't take the time to discover vulnerabilities? If only "black hat" people discovered them, they would likely be better at exploiting systems than those trying to protect systems without understanding how to discover an exploit. In general, though, I believe you need a good balance of internal discovery along with a process to rapidly develop and deploy patches.
In true…
Re:Fixing vulnerabilities is GOOD! (Score:5, Informative)
Assuming the patches don't break something else by mistake.
The last time I did an update on my laptop (via MS update) and rebooted, I landed in a BSOD. I had to disable my wireless card, get new drivers, and re-install it before I could get the machine to boot normally again.
If the update had happened automatically, and I was not in a position to get the new device drivers (like on the road, or at a customer's site), I would have been SOL.
While automatic updates may sometimes make sense for security, they aren't the best solution.
Re:Fixing vulnerabilities is GOOD! (Score:5, Insightful)
That's like saying that we shouldn't produce safer cars because not everyone buys one. And hell, why train drivers, because, you know, crappy drivers are everywhere. Or like saying we shouldn't make furniture fireproof, because, you know, something else will burn. Of course, it completely disregards those of us who routinely patch our managed systems and keep them dead secure, compatibility and testing be damned. This is DMCA logic. If we criminalize software holes, only criminals will know of exploits. See the problem?
Re:Fixing vulnerabilities is GOOD! (Score:5, Insightful)
No. Your analogy is flawed.
If cars worked like exploits and patches, then every time a safer car came out, your car would suddenly become less safe than it had been yesterday, and it would become incumbent upon you to get it fixed. Cars, being physical objects, do not behave this way.
And hell, why train drivers, because, you know, crappy drivers are everywhere. Or like saying we shouldn't make furniture fireproof, because, you know, something else will burn.
All these analogies are flawed because they miss the point. When safer drivers are trained, existing drivers don't suddenly become more liable to be in accidents. When safer furniture comes out, the furniture in your living room does not suddenly develop an odor of gasoline.
Of course, it completely disregards those of us who routinely patch our managed systems and keep them dead secure, compatibility and testing be damned.
I think it acknowledges us, but as the minority that we are. The existence on the Internet of a large number of systems that remain unpatched against published vulnerabilities is exactly the nightmare scenario everyone wants to avoid, and it suggests that the publish-and-patch system is broken. People don't patch.
This is DMCA logic. If we criminalize software holes, only criminals will know of exploits. See the problem?
There's a big difference between "criminalizing software holes" and voluntarily agreeing not to publish exploit code. And the way that sentence is worded is extremely misleading. It suggests that if the exploits aren't published then all criminals will still have unfettered access and that isn't true. While it is true that some of the people left who know of the exploits will be criminals, most criminals will no longer know of the exploits because they aren't published and require hard work to discover. Criminals are free even now to ignore the published vulnerabilities and look for unknown ones to exploit. Few choose to do so because it's a lot of work and most of them are lazy and stupid. Not publishing the exploits would force them to always develop this way.
It comes down to this: you can either have 100% of machines unpatched against N unknown vulnerabilities, or you can have 100% unpatched against N-m unknown vulnerabilities and 50% patched against m published vulnerabilities. Even if you do publish and patch, there are still apparently an unlimited number of unknown vulnerabilities in software. They become much more dangerous and easy to exploit once they're published, and not everyone patches. Even if you do patch, unpatched machines on the network still affect you.
Re:Fixing vulnerabilities is GOOD! (Score:4, Insightful)
Your language is obscuring your logic. A car is "safe" or "unsafe". But a software vulnerability is unsafe when nobody knows about it and extremely unsafe when everyone does. Not like cars.
People put down security by obscurity all the time, with anecdotes of how it can't be relied upon, etc., without realizing what a security catastrophe it would be if all the obscurity in the world were to vanish. While it isn't a good idea to be relying on security by obscurity, the fact is that much of the world in fact does rely on obscurity, and going out of our way to destroy obscurity isn't necessarily a good idea either.
Re:Fixing vulnerabilities is GOOD! (Score:3, Interesting)
If only Dmitri Hackforprofit and his buddies know about it, I'm toast.
Just because the bad guys exploit holes found by the good guys doesn't mean they don't know how to find them on their own.
Re:Fixing vulnerabilities is GOOD! (Score:3, Interesting)
Of course not. That happens when the vulnerability is published, not when the patch is released. They're two separate events.
If you buy a car that explodes when it is hit from the side, and the next year the manufacturer releases a new model minus this magic exploding "exploit", your car is similarly not any LESS safe than it was before. It's just relatively less safe when compared to the…
Re:Ummm... (Score:3, Interesting)
So, it sounds to me like a selfishness and cowardice issue on the part of the SUV driver: I would rather two other people die in a car-to-car collision than die myself.
Frankly…
Re:Fixing vulnerabilities is GOOD! (Score:2, Insightful)
In fact, script kiddies serve the purpose of forcing vulnerabilities to be patched more quickly, because their exploits are generally so badly written that they don't do much damage beyond crippling the attacked machines.
In contrast, the true black hats that use exploits to quietly and competently install keyloggers, spam relays, and mine credit-card/banking data…
Re:Fixing vulnerabilities is GOOD! (Score:3, Insightful)
Re:Fixing vulnerabilities is GOOD! (Score:3, Insightful)
Somebody is going to look for them; if the good guys don't look for them and publicise them, then only the outlaws will know which holes need fixing.
Re:Fixing vulnerabilities is GOOD! (Score:3, Funny)
Neo: What exploit?
[Neo turns on the Oracle's computer and instantly pop-up ads start appearing on the Oracle's desktop]
Oracle: That exploit.
Neo: I'm sorry--
Oracle: I said don't worry about it. I'll get one of my kids to write a patch for it.
Neo: How did you know?
Oracle: Ohh, what's really going to bake your noodle later on is, would anyone have created that virus if I hadn't told them about the exploit?
I've read the paper and disagree (Score:3, Insightful)
It's still a brand new world to explore. We have a lot of work ahead of us.
Re:I've read the paper and disagree (Score:3, Insightful)
When writing code for the Space Shuttle, the coders understand that not only are up to seven lives at stake, but so is a four-billion-dollar, irreplaceable piece of hardware.
Microsoft doesn't have that motivation. Neither do IBM, Sun, RedHat, SCO or Linus. Until you make them feel the pain of their neglect, their ignorance and arrogance, nothing about insecure software will change.
Bill (Score:3, Interesting)
Don't buy it (Score:4, Insightful)
If anything, the data seems to point to the fact that software companies and users need to act on security holes and patches more quickly. This may require better education of the user, and it also would help to have better patching mechanisms.
I agree (Score:3, Interesting)
Re:Don't buy it (Score:3, Insightful)
Re:Don't buy it (Score:2)
e.g. python
Re: (Score:3, Insightful)
Re:Don't buy it (Score:3, Informative)
By the very definition of the term, script kiddies do not find holes or exploit them, they simply run the exploit scripts.
High-larious (Score:2, Funny)
Re:High-larious (Score:2, Funny)
Maybe one of the olde-tymers can help us here.....
The alternative is... ? (Score:5, Insightful)
The alternative is not to look, and to leave that to the people who will fix it or the people who will exploit it. Are you really comfortable with that?
Re:The alternative is... ? (Score:2)
Yep, those manufacturers have been on [slashdot.org] the [slashdot.org] ball [microsoft.com] haven't they?
But what about the converse? (Score:5, Interesting)
I guess I'm a little unclear on what the research stated is supposed to actually accomplish.
Re:But what about the converse? (Score:3, Insightful)
Secondly, vulnerabilities will get reported anyway -- perhaps just not so openly. Script kiddies will likely still have access, as will other nasty types -- such as organized spam gangs and other groups with an interest in using comp…
Re:But what about the converse? (Score:3, Interesting)
You mean script kiddies read university reports? Somehow, I can't imagine someone doing all the work needed to become a highly respectable university professor in order to become a script kiddy. I think it's more along the lines of "Hey, let's throw a 4 kbyte buffer at this fucker and see what happens. Nothing? Try 8 kbytes, then!"
I might be wrong, but I wish that, for every "security through obscurity" argument I see, they also publish…
Re:What about people... (Score:3, Insightful)
Report it to the developer, not the whole world.
If reports aren't made but patches are, some (the smart) people will not install patches without knowing exactly what they are installing (especially important for Windows users).
You're still installing binary code that you know little about. Whether you have a code sample for the exploit, or you just know that there was an exploit…
It helps admins (Score:5, Insightful)
Anyone who is subscribed to bugtraq can see the bad situation some software is in. Lately there have been a lot of posts about Linksys that raised my eyebrows. Do I really want to deal with a company that doesn't properly address vulnerabilities it's made aware of? Good thing bugtraq posters had a workaround for the Linksys remote-administration problem.
The problem with this paper (Score:5, Interesting)
Much work has been done in economics regarding the effect that inadequate information flow has on a market; a Nobel Prize was won for it recently. The paper assumes, as you can see on page 2, that there is a constant number of vulnerable machines for any given vulnerability. First of all, it ignores the fact that someone has to choose to use these vulnerable products. Second, it ignores the choice that comes to sysadmins when they learn that a particular company's products are more likely to have bugs, as the parent describes.
The moral of the story is that the paper tries to be broader than it can be: by assuming that software acquisition decisions never happen, it fails to see the effect of vulnerability disclosure on those decisions. And those decisions, made well, do in fact make us more secure. The total "software defect rate" might not decrease, but the defect rate in *used* software may well decrease.
The real problem, (Score:5, Insightful)
Re:The real problem, (Score:5, Insightful)
More important, I think, than fixing vulnerabilities and posting patches that may or may not be adopted by users is good design. To extend the parent's thought: if development teams learn from the flaws in their current and past designs, and use those lessons to identify "good" practice and "bad" practice, it is likely the end product will be better.
If posting a patch is a "hack me! hack me!" alert and there is no means of pushing a patch out to everyone, could security patches be obscured among "enhancements" and rolled more anonymously into scheduled releases?
It's an arms race (Score:5, Insightful)
Ignoring actual bugs, there are many other kinds of security vulnerability. We know that software will always have side effects we don't intend. In fact, we desire this (e.g. providing a user with the flexibility to use the product in ways not envisioned by the creator). Sometimes those side effects have security implications (e.g. giving someone an environment variable to control a program's behavior lets a good user do something you might not have thought of, but it turns out a malicious user can abuse it to raise their privileges).
This means that, as long as software is not static, security bugs will continue to be introduced. Discovering them as fast as possible is the only correct course of action... you KNOW the black-hats will be.
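A minimal sketch of that environment-variable case, assuming a hypothetical setuid helper (the program and paths are invented for illustration):

    /* Hypothetical setuid-root helper: PATH was left under user control
       as a flexibility feature; a malicious user abuses it to escalate
       privileges. */
    #include <stdlib.h>

    int main(void)
    {
        /* BUG: system() runs "sh -c ...", and the shell resolves "ls"
           through the caller-controlled PATH.  An attacker can do
               export PATH=/tmp/evil:$PATH
           with /tmp/evil/ls as their own script, which then runs with
           this program's elevated privileges.  The fix is to execve()
           an absolute path with a sanitized environment. */
        return system("ls /var/log/myapp");
    }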
It helps (Score:3, Insightful)
Re:If that happens (Score:3, Funny)
Security through obscurity doesn't work unless the (secure) thing is obscure?
Yes. (Score:5, Insightful)
To the unasked question, "Is finding individual security holes the best possible use of a security researcher's time?", the answer is No. The best use of security research is to classify different types of security holes and use that information to create a development framework where those security holes are extremely difficult to recreate. For example, you're not going to find buffer overruns in Java code, since the memory is dynamically handled for you. Eventually, having security levels, encrypted buffers, etc. will all be part of a standard developer's library.
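As a hedged C sketch of the class of bug being ruled out (illustrative only, not from any real codebase):

    #include <string.h>

    void copy_name(const char *input)
    {
        char buf[16];
        /* BUG: no bounds check.  Input longer than 15 bytes overruns
           buf and can overwrite the stack, including the return
           address.  snprintf(buf, sizeof(buf), "%s", input) would
           bound the copy. */
        strcpy(buf, input);
    }

    /* The equivalent out-of-bounds write in Java throws an
       ArrayIndexOutOfBoundsException instead of corrupting memory. */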
we need no bugs from the start (Score:2)
Re:we need no bugs from the start (Score:3, Funny)
Control for the experiment (Score:2, Interesting)
I would mark this one as a troll... (Score:5, Insightful)
Does smashing cars head-on into brick walls improve car safety? No, of course not. Evaluating the results of the crash, and using those findings to build better cars, is what improves car safety, and the situation is entirely analogous in the security world. The assumption is that there is always a weakest link in security, that this link is the most likely one to be exploited, and that the trick is finding that link and strengthening it against future attacks, hopefully to the point where other links become the weaker ones.
improving security... (Score:2, Insightful)
OK, didn't RTFA, but is there 'real evidence' to the contrary?
You can't fix what you don't know is broken. Is ignorance a better security solution?
This is like saying... (Score:3, Informative)
Re:This is like saying... (Score:5, Insightful)
If we don't look for security holes... (Score:3, Insightful)
It's a good idea (Score:3)
Embarrassment encourages vigilance: software firms are always looking to reduce costs (who isn't?), and outside bug hunters encourage them to test more completely.
I think really serious software abuse usually has a motive connected with bad treatment, or a reputation for bad treatment. Even if there is a small lag between the discovery and the fixing of a hole, disclosure doesn't let the problem lie around where people who develop a grudge can use it.
Finally (and most importantly), fairness dictates that if I'm using a product you know has a problem, you should tell me about it. Consumers deserve the chance to disable systems, switch products, etc., if they feel vulnerable.
Especially if software is closed source: how do you know this bug isn't the tip of the iceberg? Companies have convinced consumers they don't get to look inside software; they can't stop others from hearing about its flaws.
Finding the holes is only half the battle (Score:5, Interesting)
The OpenBSD people have built up a good track record on security by finding holes and fixing them everywhere possible. I am sure they would disagree with your assertion that finding holes does not help to improve security. Finding the bugs is an important first step towards not putting them back in next time you write code.
Finding holes is good (Score:2)
Every program has bugs. There is no way around it. What makes the difference though is how you respond to bugs when they are found.
You have a choice: either be like Microsoft, try to deny that the bugs exist, downplay them, and try to stifle the person who found them, or be like a real programmer: fix the bug, and get people to fix the problem…
"Finding flaws" is not the problem (Score:2, Interesting)
fundamental misconception perhaps (Score:2, Interesting)
"If a vulnerability isfound by good guys and a fix is made available, then the number of intrusions--and hence the cost of intru-sions--resulting from that vulnerability is less than if it were discovered by bad guys"
While I'm not certain that there is NO relationship between the two, I'm certainly…
Missing a big part of the conclusion (Score:5, Insightful)
Automatic/forced patching is the only way to make discovery worthwhile, because otherwise the number of vulnerable systems is unpredictable over time and constitutes a large risk. Security issues must be resolved as quickly as possible in order to mitigate risks, and unless patch application is automated and enforced, discovery becomes meaningless.
Re:Missing a big part of the conclusion (Score:3, Insightful)
Precisely. How often have we seen reports of compromises on GNU source code servers on Slashdot? (And I'll bet Microsoft is targeted by 30 times as many black hats; we just never get the incident reports.)
An automatic patch system is the subsystem most vulnerable to serious exploits, because it runs with the highest privileges on the target machine and requires only that the exploiter compromise one other machine: the patch server. Exploit the patch server, and you're The Man Who Owned the World.
My Take... (Score:4, Interesting)
The value of finding security holes is totally dependent upon the company whose product has the hole.
I work at Barnes & Noble as a bookseller; within a few months of working there, I found huge security holes in their in-store search system and network. I reported these to my manager; as soon as she grasped the scope of the hole, she called tech support in New York. I spent 30 minutes explaining all the different problems to them, and that was that.
I didn't think they'd do anything about it, and that's the problem--since it costs time and money, most companies can't or won't fix holes. To my surprise, however, my store began beta testing the next version of the in-store software. What was even more surprising was that every hole I had reported was gone (so I went and found more, of course).
There's never a silver bullet; a new vulnerability will always surface. It's really hard to stay ahead of the curve, but it's something that must be attempted.
change of perspective (Score:2)
One of the obvious benefits of posting security holes is that it gives developers the insight and the opportunity to avoid duplicating the same security flaw in another system. How consistently we are learning these lessons, or not, is a different issue.
Security guy? (Score:5, Funny)
Really. I didn't make that up, check the link! Who is this guy, and why is he giving me software security advice?!
Re:Security guy? (Score:3, Insightful)
Agreed; the first rule of security (let alone *computer* security) is that you can't stop human stupidity.
Re:Security guy? (Score:4, Insightful)
Re:Security guy? (Score:3, Insightful)
I get what you're saying, and you're correct as far as that goes, but I was not concerned that you failed to wipe the CC list.
I was referring to sending your credit card number to someone via electronic mail. Even if I were sure that TLS would occur between my MTA and theirs (ignoring the chance that a third-party secondary MX would get involved), and that they and I were using SSL-enabled IMAP to fetch our mail... I would still hesitate long enough to make it worth…
Re:Security guy? (Score:4, Informative)
Member of the IAB. Co-chair of the TLS working group.
If the "good guys" don't find them ... (Score:2, Insightful)
Then again, MS doesn't seem to be trying to find vulnerabilities in their own code; often it's found by others. Sometimes it's the bad guys.
Point being, it's hard to say what effect something is having when you can't contrast it against "not doing it."
Woah... pretty pictures, but bad research (Score:5, Insightful)
Second, it doesn't address the severity of the vulnerability. If it is an exploit that lets someone take over a machine or format a drive, the cost of even those first, possibly limited, intrusions will be astronomical.
development process (Score:3)
But instead of trying to plug the holes, it's better to understand why the holes pop up and what we can do to alter the behavior that leads to holes.
[insert plug for your favorite high level language here]
But even better development tools only get you so far. The burden has to be laid squarely on the shoulders of the project leader and their managerial counterparts. You cannot continue to do the business side "favors" by including some technically unnecessary component after the specs and requirements are done, and expect it to be integrated seamlessly and without affecting everything else. Once you say "yes" to something, it will be harder to say "no" the next time. Maybe you need to understand the business problem you are trying to solve better before you finish the design. Maybe the business folks need to better understand the development process, so they know they can't add features late in the game.
This is just my "2 years in the system" view of things, after time and again getting an email saying "so-and-so wants such-and-such done to this thing" after the thing's design has already been settled upon. When I ask why, it always comes down to someone not being able (yay office politics) to say no to someone for some reason or another.
Want fewer security holes? Start with better communication between different groups. End with a written-in-stone spec. Leave out all feature creep until the next design phase.
Good luck with that! Ha!
The assumptions... (Score:3, Interesting)
I forget the terminology for it, but there's the concept of worms that are launched on a large number of vulnerable machines simultaneously. I'm not aware of an attack like this in the wild but it's theoretically possible and would be terribly destructive. If a black hat hacker plays his cards right, he can probably sneak his exploit onto a few thousand computers without anybody noticing. Then he can launch a massive attack before anybody even knows the vulnerability exists.
Having said that, I think that, in the real world, the amount of effort put into finding vulnerabilities by white hats has a minimal cost. There are essentially three areas where security vulnerabilities are discovered by the friendlies:
1) QA of the product developers
2) Hobbyist white hats
3) Network security auditing
The cost of #1 is an assumed cost of any product and is part of the basics of releasing it to the public. You check for typos in the documentation and you check for security bugs.
The cost of #2 is zero because it's people doing these things on their own time for their own amusement.
The cost of #3 is substantial but it's critically important to some businesses to have zero security vulnerabilities. A security breach not only has an immediate cost in time to fix the problem, but it also has a long term cost by damaging the integrity of the company. If your bank got hacked and you lost all your savings, even if it was insured, would you keep your money in that bank?
Yes, it's a good idea (Score:5, Interesting)
Someone finds a buffer overflow problem. Someone finds another one. Someone finds another one. Someone finds another one.
Someone realizes: "what if I centralized all my buffer routines and just got one little section of code working perfectly?" Then you get projects like qmail or vsftp, which simply are more secure. Then people start using these programs, and their systems really are less vulnerable.
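One way to read "centralize all my buffer routines", as a hedged C sketch (the names and growth policy are invented, loosely in the spirit of qmail's stralloc):

    #include <stdlib.h>
    #include <string.h>

    struct buf { char *s; size_t len, cap; };

    /* The single, audited append routine every caller uses, replacing
       scattered strcpy/strcat calls. */
    int buf_append(struct buf *b, const char *data, size_t n)
    {
        if (n > (size_t)-1 - b->len)        /* reject length overflow */
            return -1;
        if (b->len + n > b->cap) {
            size_t need = b->len + n;
            size_t cap = b->cap ? b->cap : 64;
            while (cap < need && cap <= (size_t)-1 / 2)
                cap *= 2;                   /* grow geometrically */
            if (cap < need)
                cap = need;
            char *s = realloc(b->s, cap);
            if (s == NULL)
                return -1;
            b->s = s;
            b->cap = cap;
        }
        memcpy(b->s + b->len, data, n);
        b->len += n;
        return 0;
    }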
This paper keeps using the phrase "given piece of software." It's talking about the lack of improvements at a micro scale, but it ignores improvements in the big picture that can happen when people get fed up or scared.
If vulnerabilities were not discovered, would anyone bother to develop secure software?
I think this paper has as an underlying assumption, the popular view that it just isn't possible to write secure software, and that every large software system is just a big leaky sieve that requires perpetual patching. I don't accept that. I understand why the belief is widely held: most popular software was not originally written with security issues in mind. But this is something that will be overcome with time, as legacies die off.
think of a cracking dam (Score:3, Interesting)
If it is designed differently, the number of cracks is smaller...
I wish reporters understood that. Flame MS for not bringing Longhorn out sooner. XP is not good enough. Everyone knows this; nobody in the popular press is saying it in the right way.
*sigh*
Conclusion basically flawed (Score:3, Interesting)
That may hold true and make sense if you study the total number of exploitable systems on the net, but it totally ignores the fact that there are a very large number of systems with little priority for security, while only a few depend on 100% system security.
Those few high-security sites have the need, and pay for the resources, to fix known flaws or patch their software ASAP. They are the ones who gain from knowledge of a security flaw before the black-hat guys have it. They cannot live with even the shortest vulnerability timeframes, and they usually patch flaws as soon as they get published on public security channels.
It may hold true that the work put into security auditing does not pay off on the whole, taking all software users into account, but for those few who really care about security it surely does...
Look below the vulnerability (Score:5, Insightful)
There are multiple strategies that will actually improve security far more than just trying to ferret out a new vulnerability. I personally recommend using Java or another type-safe language for programming if at all possible, since the most common memory-management errors are eliminated. However, the best way to stop major security breaches is to have a security layer that assumes software programs will be compromised somehow. The security layer is then interested in enforcing the access to the system that the program ought to have, instead of just trusting the effective user ID of the program to hopefully do the right thing.
A bit of karma-whoring here for my thesis project [cmu.edu], which is based on earlier work on Mandatory Access Controls in Linux, as well as the much better-known SELinux [nsa.gov] kernel modules.
I personally did my thesis in Domain & Type Enforcement, which simply puts running processes into various domains that have certain access rights to types. A type is just a name tag assigned to files, and in my case you can also type system calls, network sockets, and eventually even Linux capabilities. It is similar to part of SELinux, but also designed to be much simpler to understand and implement.
Anyway, these systems are all designed with the assumption that vital processes will be compromised, and the onus is on the policy writers to enforce least privilege on those processes. This may sound difficult to do, but it is actually trivial compared to the approach we are using now, which is to try to figure out every possible attack and write perfect software (the point of the article). It is much easier to define what a program is supposed to do than every nasty, malicious thing someone on the Internet can dream up that it should not do.
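To make "define what a program is supposed to do" concrete, here is a hedged, illustrative policy fragment modeled on SELinux type-enforcement rules (the type names are made up for the example):

    # Everything not explicitly allowed is denied, even if httpd itself
    # is compromised and running as root.
    allow httpd_t httpd_config_t:file { read getattr };
    allow httpd_t httpd_log_t:file { create append };
    allow httpd_t http_port_t:tcp_socket name_bind;
    # No rule grants httpd_t access to shadow_t, so a subverted web
    # server still cannot touch /etc/shadow, whatever its UID says.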
I've ranted long enough, but I think there are good solutions for stopping about 90% of the crap that we see going on today, and the other 10% will be fun, to keep us security professionals employed.
mainframes... (Score:5, Insightful)
One of the key factors that has kept Mainframes secure for so long is the fact they were designed as a secure environment in the first place:
Bad logic train in post (Score:5, Insightful)
Take a sinking boat. If you are bailing water out, and the boat isn't sinking any more, it does not follow that bailing water isn't a good idea. If you stop bailing water, you're sunk. If good guys stop finding and fixing vulnerabilities, you're sunk, too.
Only blackhats should look for them then? (Score:3, Insightful)
He has a point... (Score:5, Insightful)
Anticipating the shitstorm of people lining up to say 'how stupid' without reading the FA, here's a nice little summary.
The paper is not quite as stupid as the description made it sound, but it misses/ignores a critical flaw in its own argument.
His basic premise is that patching is expensive and people don't do it anyway -- probably true for the majority of systems. Therefore, he argues, black hats are alerted to the security holes by the disclosure. He shows that it doesn't really matter whether holes are discovered by black hats, with fixes released after the exploit, or discovered by white hats and exploited after the fix has been released but not applied.
However, his arguments are based on averages. Where he's wrong is that if you have some systems that are simply so valuable that they cannot be compromised, proactive bug fixing [openbsd.com] coupled with a manic obsession for patching your system the moment a patch is released is still the best way to stay safe.
The philosophy behind this is very simple, (Score:5, Insightful)
It's better to find the security hole yourself and fix it than for someone with malicious intentions to do so and exploit it.
(And good luck convincing…)
Key logical errors. (Score:5, Insightful)
His basic theory is that, given the current rates of discovery, poor patching rates, and the large number of bugs:
1) Due to massive information exchange and slow patching/fixing, post-announcement exploitations are not significantly fewer than exploitations caused by unannounced bugs, so announcing does not help.
2) There are so many bugs out there that finding a bug and announcing it does not produce a significant reduction in the number of "bugs unknown to white hats", so it does not increase security significantly.
His major errors are ignoring the following:
1) Exploitation post-announcement is almost entirely done against the "lesser" computers, i.e. either machines with unimportant data/programs (home PCs) or important machines with incompetent sysadmins. As such, the real cost of these exploitations is either a) practically nil, or b) high, but likely to get the incompetent sysadmin fired, resulting in i) better employment prospects for good sysadmins and ii) an overall improvement in the quality of security for that company. So while the number of exploits may be higher for post-announcement bugs, society sees a REAL social and economic gain that is very significant.
2) a) The announcement of bugs allows people to judge how well written a program is, and therefore to make an informed decision NOT to buy inferior products (say, Windows).
b) Perhaps our problem is not that we are announcing the bugs, but that we are not doing sufficient bug hunting. He seems to be implying that because bug hunts don't find enough bugs, the solution is not to bug hunt. But anyone with a brain should be able to see that if we are not depleting the unknown bugs fast enough, then one possible solution is to tremendously increase our bug hunting, not to stop the bug disclosures.
good idea (Score:4, Informative)
But it's even better to find them before the product ships, and design early on to avoid the common ones. I believe the author of qmail is still offering thousands of dollars to the first person who finds even a single vulnerability.
Whenever we identify and cure diseases... (Score:3, Insightful)
Maybe it is just me, but I think linking bubonic plague to flea infested rats was a beneficial advancement in progressing the human situation, for multiple reasons.
Good if combined with sensible disclosure (Score:3, Informative)
Finding problems which can be disclosed at the same time as a patch is very good.
All the major Linux distributors will release updates in a timely manner and enable people to install them with minimum effort -- much like Microsoft does. The only difference is that Microsoft's patches can, rarely, break things; I've never seen that happen with a Linux update.
Personally I've never heard anybody say anything bad about the pro-active way which the OpenBSD team audit their codebase and this is one of the reasons why I started the Debian Security Audit [debian.org].
Having a dedicated team of people auditing code, combined with the ability to release updates in a timely manner, is definitely a good thing.
(The results of my work [debian.org] show that even with only a small amount of effort security can be increased)
Did I mention that I'm available for hiring? [steve.org.uk] ;)
Is Funding Law Enforcement a Good Idea? (Score:5, Interesting)
---
If you think security is bad now, just stop fixing security vulnerabilities and see how much worse things get. It's like a sump pump - it may not fix the leak, but it'll keep you from drowning.
Is Fixing Pot-holes a good idea? (Score:5, Insightful)
Patching roads requires people to stop the flow of traffic, and puts workers at risk of being injured or killed by users of the roads. A road that is fully patched encourages users to drive faster, burning fossil fuels at a lower efficiency compared to the slow speed drivers use when they see holes. Driving slower will also save lives in the event of an accident and cause drivers to pay more attention to the road since they never know when a hole could be in their local path.
Many times the problem is not with the road, but the surface that it is built on. Patching the road can only be assumed to be a stop gap measure at best and will likely have to be patched again. Holes in the supporting structure will almost always show up in the things built on it.
Fixing pot holes slows innovation. Once it becomes accepted that roads have holes in them, consumers will demand vehicles able to deal with them. If hole patching was stopped right now, studies show we would all be flying to work in our personal jetson mobiles within 8 years. Space previously set aside for roads will be converted to trails for bikes, bladers and walkers. Butterflies will land on your outstretched hand while you stop to observe the wild flowers.
Fixing holes only maintains the status quo and dominance of local government and the corrupt DOT branches of the states. If you reduce their budget by even 1% they will go on strike and roads will quickly deteriorate. Some communities out there are leading the charge in not fixing pot holes to bring you a world of glass houses on stilts and talking dogs with jet packs.
In conclusion, our findings indicate the DOT should be abolished. They have served their purpose but have no place in today's innovative world.
Re:Is Fixing Pot-holes a good idea? (Score:3, Interesting)
Intuition
Be careful what you wish for ... (Score:5, Interesting)
Liability (Score:3, Interesting)
On a microscopic level, individual system administrators have a strong personal interest in avoiding having to tell a CIO something like: "I've heard rumors that attacks like the one that just devastated our system might be possible, but nobody ever discovered a particular hole, so I ignored the issue. But look, here's a paper which says that searching for security flaws is probably just a waste of time and money. See? Even though this attack will cost us millions, think of the money we saved in not having to look for holes!"
The Invisible Alternative (Score:3, Insightful)
To use a straightforward analogy: possessing an immune system does not significantly reduce the pathogenic population, yet lacking one is death. The case is quite similar with vulnerabilities and viruses -- it would be very easy for us to completely lack the infrastructure to manage an Internet-wide vulnerability. The low-grade malware floating around -- though infuriating -- forces us to create a management infrastructure, on pain of losing connectivity. What the consistent stream of discovered vulnerabilities creates is not fewer vulnerabilities -- software simply isn't mature enough, nor would we really want it to be -- but more manageable failures. Put simply, it doesn't matter what this way comes, because we've already been forced to deploy backups, create procedures, and implement recovery strategies.
The alternative state is far more terrifying: bugs are not talked about, and the strategy is not to fix them but to silence their open researchers. A black market opens up -- it will always be to the benefit of some to have the means to exploit others. These means always work, because nobody defends. Are there fewer people with these means? Yes, but one person can write an attack, and the motive to blackmail the entire Internet population (pay me, and I'll "protect" you from the next wave) is quite strong.
Bottom line -- and it's something that took me some time to realize myself, being an active member of the security community who doesn't deal in exploits heavily -- is that whatever the headaches are of full disclosure, the alternative is much worse.
--Dan
GHD: Grey Hat Discovery (Score:3, Interesting)
Re:This is why we need open source (Score:5, Insightful)
This is a proven, incontrovertible fact.
It makes holes easier to find, but that doesn't mean open source is automatically more secure: you still have to have the right people actually looking, and a defect has to be what they're looking for. I'll explain.
If no one qualified to spot the hole bothers to look, open source doesn't buy you anything (this is why bugs in things like OpenSSL can linger quite a while before being discovered: not enough of the right people bother to look, even though *anyone* can, and many do).
A bigger problem is the disconnect between design limitations and end-user expectations. The recent shining example of this is the latest set of CVS vulnerabilities: the CVS team does not claim CVS is secure enough to be publicly accessible over the Internet, yet it frequently IS placed in that position, and that makes it an ongoing security disaster waiting to happen. (Linkage: "We have always said that CVS is not secure" [com.com])
Bug? No; design limitation. But if the end users aren't aware of that (or, worse, choose to disregard the danger), it's still a vulnerability waiting to be exploited, and open source does NOTHING to prevent that.
So, "easier to find holes"? I'll go with the stock CompSci answer, "It depends". It's certainly not a simple or complete answer.
Xentax
Re:Bad Study. (Score:5, Insightful)
You really think the problem is C and C++?? I know that these arn't the holy grail of programming languages but to put some of that blame on them is very nieve. You can write buggy, unsecured code with asm! It's got very little to do with the language involved (the compiler and libraries used may have an effect) since it all gets put into machine code anyway. Blame the design and implementation, not the tool.