Is Finding Security Holes a Good Idea?
ekr writes "A lot of effort goes into finding vulnerabilities in
software, but there's no real evidence that it actually improves security. I've been trying to study this problem and the results (pdf) aren't very encouraging. It doesn't look like we're making much of a dent in the overall number of vulnerabilities in the software we use. The paper was presented at the Workshop on Economics and Information Security 2004 and the slides can be found here (pdf)."
Fixing vulnerabilities is GOOD! (Score:3, Insightful)
I've read the paper and disagree (Score:3, Insightful)
It's still a brand new world to explore. We have a lot of work ahead of us.
Don't buy it (Score:4, Insightful)
If anything, the data seems to point to the fact that software companies and users need to act on security holes and patches more quickly. This may require better education of the user, and it also would help to have better patching mechanisms.
The alternative is... ? (Score:5, Insightful)
The alternative is to not look and leave that to the people who will fix it or the people that will exploit it. Are you really comfortable with that?
It helps admins (Score:5, Insightful)
Anyone who is subscribed to Bugtraq can see the bad situation some software is in. Lately there have been a lot of posts about Linksys that raised my eyebrows. Do I really want to deal with a company that doesn't properly address vulnerabilities it's made aware of? Good thing Bugtraq posters had a workaround for the Linksys remote administration problem.
The real problem, (Score:5, Insightful)
It's an arms race (Score:5, Insightful)
Ignoring actual bugs, there are many other kinds of security vulnerability. We know that software will always have side effects that we don't intend. In fact, we desire this (e.g., giving a user the flexibility to use the product in ways not envisioned by the creator). Sometimes those side effects have security implications (e.g., giving someone an environment variable to control a program's behavior lets a good user do something you might not have thought of, but it turns out a malicious user can abuse it to escalate their privileges).
This means that, as long as software is not static, security bugs will continue to be introduced. Discovering them as fast as possible is the only correct course of action... you KNOW the black hats will be looking.
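The environment-variable scenario above can be sketched in a few lines. This is a hypothetical illustration: the PAGER convention, the allowlist, and both function names are invented for the example, and real privilege escalation would involve a setuid binary rather than a script.

```python
import os
import shlex

def pager_command_unsafe():
    # Trusts the environment completely: whatever command PAGER names
    # gets executed -- with the program's privileges.
    return shlex.split(os.environ.get("PAGER", "less"))

# A defensive version only accepts a known-good pager.
ALLOWED_PAGERS = {"less", "more", "cat"}

def pager_command_safe():
    # Falls back to a safe default instead of executing
    # attacker-controlled input.
    cmd = shlex.split(os.environ.get("PAGER", "less"))
    if not cmd or os.path.basename(cmd[0]) not in ALLOWED_PAGERS:
        return ["less"]
    return cmd

os.environ["PAGER"] = "rm -rf /tmp/victim"  # the abuse case
assert pager_command_unsafe() == ["rm", "-rf", "/tmp/victim"]
assert pager_command_safe() == ["less"]
```

The flexibility the good user wanted (pick your own pager) is exactly the hook the malicious user abuses; the fix is to constrain, not remove, the flexibility.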
It helps (Score:3, Insightful)
Yes. (Score:5, Insightful)
To the unasked question, "Is finding individual security holes the best possible use of a security researcher's time?", the answer is No. The best use of security research is to classify different types of security holes and use that information to create a development framework where those security holes are extremely difficult to recreate. For example, you're not going to find buffer overruns in Java code, since the memory is dynamically handled for you. Eventually, having security levels, encrypted buffers, etc. will all be part of a standard developer's library.
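As a tiny illustration of why memory-safe runtimes rule out this whole bug class (Python standing in here for Java or any bounds-checked language; the buffer and index are made up):

```python
# In C, writing past the end of a buffer silently corrupts adjacent
# memory -- the classic buffer overrun. A memory-safe runtime checks
# every access and raises an error instead, turning a potential
# exploit into a clean failure.
buf = [0] * 8

try:
    buf[12] = 0x41  # out-of-bounds write
    overflowed = True
except IndexError:
    overflowed = False  # the runtime refused the write

assert overflowed is False
assert buf == [0] * 8  # the buffer is untouched
```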
I would mark this one as a troll... (Score:5, Insightful)
Does smashing cars head-on into brick walls improve car safety? No, of course not. Evaluating the results of the crash, and using those findings to build better cars, is what improves car safety, and the situation is entirely analogous in the security world. The assumption is that there is always a weakest link in security, that link is the most likely one to be exploited, and the trick is finding that link and strengthening it against future attacks, hopefully to the point where other links become the weaker ones.
improving security... (Score:2, Insightful)
OK, didn't RTFA, but is there 'real evidence' to the contrary?
You can't fix what you don't know is broken. Is ignorance a better security solution?
Re:Fixing vulnerabilities is GOOD! (Score:5, Insightful)
If we don't look for security holes... (Score:3, Insightful)
Re:Fixing vulnerabilities is GOOD! (Score:2, Insightful)
In fact, script kiddies serve the purpose of forcing vulnerabilities to be patched more quickly, because their exploits are generally so badly written that they don't do much damage beyond crippling the attacked machines.
In contrast, the true black hats that use exploits to quietly and competently install keyloggers, spam relays and mine creditcard/banking data do more economic damage over longer periods of time.
Re:Don't buy it (Score:3, Insightful)
Missing a big part of the conclusion (Score:5, Insightful)
Automatic/forced patching is the only way to make discovery worthwhile, because otherwise the number of vulnerable systems is unpredictable over time and constitutes a large risk. Security issues must be resolved as quickly as possible in order to mitigate risks, and unless patch application is automated and enforced, discovery becomes meaningless.
Gimme a dollar (Score:1, Insightful)
By hunting for flaws in software and making them public, these flaws can be fixed... Not making a vulnerability public doesn't help anything. It just gives Joe hacker his own personal backdoor that he's free to use indefinitely.
-yeah
Re:This is like saying... (Score:5, Insightful)
If the "good guys" don't find them ... (Score:2, Insightful)
Then again, MS doesn't seem to be trying to find vulnerabilities in their own code; often it's found by others. Sometimes it's the bad guys.
Point being, it's hard to say what effect something is having when you can't contrast it against "not doing it."
Woah... pretty pictures, but bad research (Score:5, Insightful)
Second, it doesn't address the severity of the vulnerability. If it is an exploit that lets someone take over a machine or format a drive, the cost of even those first, possibly limited, intrusions will be astronomical.
Re:Fixing vulnerabilities is GOOD! (Score:5, Insightful)
Re:I've read the paper and disagree (Score:1, Insightful)
Re:The real problem, (Score:5, Insightful)
More important, I think, than fixing vulnerabilities and posting patches that may or may not be adopted by users is good design. To extend the parent's thought: if development teams learn from the flaws in their current and past designs and use those lessons to identify "good" practice and "bad" practice, it is likely the end product will be better.
If posting a patch is a "hack me! hack me!" alert and there is no means of pushing a patch out to everyone, could security patches be obfuscated with "enhancements" and rolled more anonymously into scheduled releases?
The premise is flawed, so the logic is irrelevant (Score:2, Insightful)
In the long term, one might hope that the vulnerability finding would feed back into software engineering, and eventually decrease the rate of introduction, but we're clearly not there today, and I'm not holding my breath for tomorrow.
So we've got 18 pages of math measuring an irrelevancy.
Re:Fixing vulnerabilities is GOOD! (Score:5, Insightful)
One point about discovery that I don't recall seeing: where would our software be today if people didn't take the time to discover vulnerabilities? If only "black hat" people discovered them, they would likely be better at exploiting systems than those trying to protect them without understanding how an exploit is discovered. In general, though, I believe you need a good balance of internal discovery along with a process to rapidly develop and deploy patches.
In true
Look below the vulnerability (Score:5, Insightful)
There are multiple strategies that will actually improve security far more than just trying to ferret out a new vulnerability. I personally recommend using Java or another type-safe language for programming if at all possible, since the most common memory-management errors are eliminated. However, the best way to stop major security breaches is to have a security layer that assumes software programs will be compromised somehow. The security layer is then more interested in enforcing the access to the system that the program ought to have, instead of just trusting the effective user ID of the program to hopefully do the right thing.
A bit of karma-whoring here for my thesis project [cmu.edu] which is based on earlier work in Mandatory Access Controls in Linux, as well as the much more well-known SELinux [nsa.gov]
kernel modules.
I personally did my thesis in Domain & Type Enforcement, which simply puts running processes into various domains that have certain access rights to types. A type is just a name tag assigned to files, and in my case you can also type system calls, network sockets, and eventually even Linux capabilities. It is similar to part of SELinux, but also designed to be much simpler to understand and implement.
Anyway, these systems all are designed with the assumption that vital processes will be compromised and the onus is on the policy writers to enforce least-privilege on the processes. This may sound difficult to do, but it is actually trivial compared to the approach we are using now which is to try and figure out every possible attack and write perfect software (the point of the article). It is much easier to define what a program is supposed to do than every nasty malicious thing someone on the Internet can dream up that it should not do.
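A toy sketch of what a Domain & Type Enforcement check boils down to (the domain and type names, and the policy itself, are invented for illustration; real enforcement happens inside the kernel, not in userland code like this):

```python
# (domain, type) -> set of permitted operations. Default-deny:
# anything not listed is refused, which is least-privilege in action.
POLICY = {
    ("httpd_d", "web_content_t"): {"read"},
    ("httpd_d", "httpd_log_t"): {"read", "append"},
}

def allowed(domain, file_type, op):
    """Deny unless the policy explicitly permits this operation."""
    return op in POLICY.get((domain, file_type), set())

# Even a fully compromised web server confined to httpd_d cannot
# deface content or read files of a type its domain never mentions.
assert allowed("httpd_d", "web_content_t", "read")
assert not allowed("httpd_d", "web_content_t", "write")
assert not allowed("httpd_d", "shadow_t", "read")
```

Note that the policy describes what the program is supposed to do; nothing about attacks appears anywhere, which is exactly why writing it is tractable.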
I've ranted long enough, but I think there are good solutions for stopping about 90% of the crap we see going on today, and the other 10% will be fun enough to keep us security professionals employed.
Re:Fixing vulnerabilities is GOOD! (Score:3, Insightful)
I think Ben Franklin had it right... (Score:1, Insightful)
But what we need to keep in mind is that no matter how hard we try our code is never going to be completely perfect. It's in our nature. I think finding security holes in software is necessary to build on our understanding of security flaws.
Re:Security guy? (Score:3, Insightful)
Agreed; the first rule of security (let alone *computer* security) is that you can't stop human stupidity.
Bad logic train in post (Score:5, Insightful)
Take a sinking boat. If you are bailing water out, and the boat isn't sinking any more, it does not follow that bailing water isn't a good idea. If you stop bailing water, you're sunk. If good guys stop finding and fixing vulnerabilities, you're sunk, too.
Re:Fixing vulnerabilities is GOOD! (Score:5, Insightful)
That's like saying that we shouldn't produce safer cars because not everyone buys one. And hell, why train drivers, because, you know, crappy drivers are everywhere. Or like saying we shouldn't make furniture fireproof, because, you know, something else will burn. Of course, it completely disregards those of us who routinely patch our managed systems and keep them dead secure, compatibility and testing be damned. This is DMCA logic. If we criminalize software holes, only criminals will know of exploits. See the problem?
Only blackhats should look for them then? (Score:3, Insightful)
Comment removed (Score:3, Insightful)
Re:The real problem, (Score:2, Insightful)
Time to market, constrained budgets, fewer resources. These facts of modern software development chaos do not equate to "more secure programs".
And no, there is no magic bullet for that either.
Re:Fixing vulnerabilities is GOOD! (Score:3, Insightful)
Somebody is going to look for them; if the good guys don't look for them and publicise them, then only the outlaws will know what holes need fixing.
Re:This is why we need open source (Score:5, Insightful)
This is a proven, incontrovertible fact.
It makes holes easier to find, but that doesn't mean open source is automatically more secure - you still have to have the right people actually looking, and the defect has to be what they're looking for. I'll explain.
If no one qualified to spot the hole bothers to look, open source doesn't buy you anything (this is why bugs in things like OpenSSL can linger quite a while before being discovered - not enough of the right people bother to look, even though *anyone* can and many do).
A bigger problem is the disconnect between design limitations and end-user expectations. The recent shining example of this is the latest set of CVS vulnerabilities: the CVS team does not claim CVS is secure enough to be publicly accessible over the Internet, yet it frequently IS placed in that position, and that makes it an ongoing security disaster waiting to happen. (Linkage: "We have always said that CVS is not secure" [com.com])
Bug? No; design limitation. But if the end users aren't aware of that (or, worse, choose to disregard the danger), it's still a vulnerability waiting to be exploited, and open source does NOTHING to prevent that.
So, "easier to find holes"? I'll go with the stock CompSci answer, "It depends". It's certainly not a simple or complete answer.
Xentax
He has a point... (Score:5, Insightful)
Anticipating the shitstorm of people lining up to say 'how stupid' without reading the FA, here's a nice little summary.
The paper is not quite as stupid as the description made it sound, but it misses/ignores a critical flaw in its own argument.
His basic premise is that patching is expensive and people don't do it anyway - probably true for the majority of systems. Therefore, he argues, black hats are alerted to security holes by the disclosure. He shows that it doesn't really matter whether holes are discovered by black hats with fixes released after the exploit, or discovered by white hats and exploited after the fix has been released but not applied.
However, his arguments are based on averages. Where he's wrong is that if you have systems that are simply so valuable that they cannot be compromised, proactive bug fixing [openbsd.com] coupled with a manic obsession for patching your system the moment a patch is released is still the best way to stay safe.
Re:Fixing vulnerabilities is GOOD! (Score:1, Insightful)
I'm all for keeping systems patched up to date to maintain security. But I'm not in a rush to beta test the latest patch from MS every time IE has a new hole discovered. Nor do I want a vendor (MS or otherwise) determining when patches will be installed.
Of course, this only applies to environments that are actively administered, where the patches will get installed (if not immediately, at least in a timely manner). For the majority of home PCs, some form of auto-update makes more sense.
Re:What about people... (Score:3, Insightful)
Report it to the developer, not the whole world.
If reports aren't made but patches are, some (the smart) people will not install patches without knowing exactly what they are installing (especially important for Windows users).
You're still installing binary code that you know little about. Whether you have a code sample for the exploit, or you just know that there was an exploit in XXX service through which an attacker could get administrative access, I don't see any difference to the admin.
Those who decide to look after their security shouldn't be hindered.
Well, then the rest of us will just continue to suffer with a constant flow of new Code Reds and other such drivel because you security conscious people think it is better to have a public list of ways to exploit a system.
Re:Security guy? (Score:4, Insightful)
Re:Uhuh. Is this good if Microsoft does this? (Score:5, Insightful)
Yes, it involves a certain amount of trust, but if you didn't trust anyone, you'd have to write everything yourself. Also, the company's business model depends on the reliability of said patching service, so they do their best to run it well.
Of course, license changes are evil, but they're unlikely to happen with FOSS. Yet another reason to move away from Microsoft.
Re:But what about the converse? (Score:3, Insightful)
Secondly, vulnerabilities will get reported anyway -- perhaps just not so openly. Script kiddies will likely still have access, as will other nasty types -- such as organized spam gangs and other groups with an interest in using compromised machines. At least with the current system there's parity of knowledge -- white hats have access to the same information as black hats, and get it in a timely manner. Suppressing open reporting only tips the balance of power the wrong way.
The philosophy behind this is very simple, (Score:5, Insightful)
It's better to find the security hole yourself and fix it than for someone with malicious intentions to do so and exploit it.
(And good luck convincing
Key logical errors. (Score:5, Insightful)
His basic theory is that, given the current rates of discovery, poor patching rates, and the large number of bugs:
1) Due to massive information exchange and slow patching/fixing, post-announcement exploitations are not significantly fewer than exploitations of unannounced bugs, so announcing does not help.
2) There are so many bugs out there that finding a bug and announcing it does not produce a significant reduction in the number of "bugs unknown to white hats", so it does not increase security significantly.
His major errors are ignoring the following: 1) Exploitation post-announcement is almost entirely done against the "lesser" computers, i.e. either machines with unimportant data/programs (home PCs) or important machines with incompetent sysadmins. As such, the real cost of these exploitations is either a) practically nil, or b) high, but likely to get the incompetent sysadmin fired, resulting in i) better employment prospects for good sysadmins and ii) an overall improvement in the quality of security for that company. So while the number of exploits may be higher for post-announcement bugs, society sees a REAL social and economic gain that is very significant.
2) a) The announcement of bugs allows people to judge how well written a program is and therefore make an informed decision NOT to buy inferior products (say, Windows).
b) Perhaps our problem is not that we are announcing the bugs but that we are not doing enough bug hunting. He seems to be implying that because bug hunts don't find enough bugs, the solution is not to bug hunt. But anyone with a brain should be able to see that if we are not depleting the unknown bugs fast enough, then one possible solution is to tremendously increase our bug hunting, not to stop the disclosures.
Overlooked benefit of finding vulnerabilities (Score:2, Insightful)
Even if only a small percentage of computer users apply security patches, there is still the benefit of building up a knowledge base about security vulnerabilities. The black hats are building up such knowledge anyway; we can't prevent that. But the "good side" needs to build up this knowledge too, otherwise the black hats would soon be so much more knowledgeable and skilled than the white hats that it becomes impossible to set up any machine in a reasonably secure manner.
mainframes... (Score:5, Insightful)
One of the key factors that has kept Mainframes secure for so long is the fact they were designed as a secure environment in the first place:
Whenever we identify and cure diseases... (Score:3, Insightful)
Maybe it is just me, but I think linking bubonic plague to flea infested rats was a beneficial advancement in progressing the human situation, for multiple reasons.
Re:Fixing vulnerabilities is GOOD! (Score:5, Insightful)
No. Your analogy is flawed.
If cars worked like exploits and patches, then every time a safer car came out, your car would suddenly become less safe than it had been yesterday- and it would become incumbent upon you to get it fixed. Cars, being physical objects, do not behave this way.
And hell, why train drivers, because, you know, crappy drivers are everywhere. Or like saying we shouldn't make furniture fireproof, because, you know, something else will burn.
All these analogies are flawed because they miss the point. When safer drivers are trained, existing drivers don't suddenly become more liable to be in accidents. When safer furniture comes out, the furniture in your living room does not suddenly develop an odor of gasoline.
Of course, it completely disregards those of us who routinely patch our managed systems and keep them dead secure, compatibility and testing be damned.
I think it acknowledges us, but as the minority that we are. The existence on the Internet of a large number of systems that remain unpatched against published vulnerabilities is exactly the nightmare scenario everyone wants to avoid - and it suggests that the publish-and-patch system is broken. People don't patch.
This is DMCA logic. If we criminalize software holes, only criminals will know of exploits. See the problem?
There's a big difference between "criminalizing software holes" and voluntarily agreeing not to publish exploit code. And the way that sentence is worded is extremely misleading: it suggests that if exploits aren't published then all criminals will still have unfettered access, and that isn't true. While some of the people who still know of the exploits will be criminals, most criminals will no longer know of them, because they aren't published and require hard work to discover. Criminals are free even now to ignore the published vulnerabilities and look for unknown ones to exploit. Few choose to do so, because it's a lot of work and most of them are lazy and stupid. Not publishing exploits would force them to always work this way.
It comes down to this: you can either have 100% of machines unpatched against N unknown vulnerabilities, or you can have 100% unpatched against N-m unknown vulnerabilities and 50% patched against m published vulnerabilities. Even if you do publish and patch, there is still apparently an unlimited number of unknown vulnerabilities in software. They become much more dangerous and easier to exploit once they're published, and not everyone patches. Even if you do patch, unpatched machines on the network still affect you.
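That trade-off can be put into toy numbers. Every rate below is invented for illustration (the real values are exactly what the debate is about); the point is how the comparison works, not the magnitudes.

```python
# Toy expected-exposure model of the publish-and-patch trade-off.
N = 100             # latent vulnerabilities in the software (assumed)
m = 10              # holes found, published, and patched (assumed)
p_unknown = 0.01    # chance an unknown hole is exploited per host (assumed)
p_published = 0.30  # published holes are far easier to exploit (assumed)
patched_fraction = 0.5  # only half of hosts apply the patches (assumed)

# Expected exploits per host with no disclosure at all:
no_disclosure = N * p_unknown

# With disclosure: N - m holes stay unknown; the m published ones only
# threaten the unpatched hosts, but at a much higher rate.
with_disclosure = (N - m) * p_unknown \
    + (1 - patched_fraction) * m * p_published

print(no_disclosure, with_disclosure)  # 1.0 vs 2.4 under these numbers
```

Under these particular (made-up) rates, disclosure comes out worse, which is exactly the "people don't patch" worry; push patched_fraction toward 1 or p_published down, and the comparison flips.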
Re:Finding the holes is only half the battle (Score:2, Insightful)
This is why documents such as the Secure Programming for Linux and Unix HOWTO [dwheeler.com] should be compulsory reading for developers.
Time after time we see the same flaws being found, sometimes by me [debian.org], sometimes by more focussed groups [openbsd.org].
I seriously believe half the problem is the number of young developers who read manuals/textbooks/online guides that have a paragraph in the introduction saying something like "To keep the code concise we've omitted all error checking in our examples" - with nary a mention of security throughout the rest of the piece.
Half joking - half serious.
Is Fixing Pot-holes a good idea? (Score:5, Insightful)
Patching roads requires stopping the flow of traffic, and puts workers at risk of being injured or killed by users of the roads. A fully patched road encourages users to drive faster, burning fossil fuels less efficiently than at the slow speeds drivers adopt when they see holes. Driving slower will also save lives in the event of an accident, and will cause drivers to pay more attention to the road, since they never know when a hole could be in their local path.
Many times the problem is not with the road, but the surface that it is built on. Patching the road can only be assumed to be a stop gap measure at best and will likely have to be patched again. Holes in the supporting structure will almost always show up in the things built on it.
Fixing pot holes slows innovation. Once it becomes accepted that roads have holes in them, consumers will demand vehicles able to deal with them. If hole patching was stopped right now, studies show we would all be flying to work in our personal jetson mobiles within 8 years. Space previously set aside for roads will be converted to trails for bikes, bladers and walkers. Butterflies will land on your outstretched hand while you stop to observe the wild flowers.
Fixing holes only maintains the status quo and dominance of local government and the corrupt DOT branches of the states. If you reduce their budget by even 1% they will go on strike and roads will quickly deteriorate. Some communities out there are leading the charge in not fixing pot holes to bring you a world of glass houses on stilts and talking dogs with jet packs.
In conclusion, our findings indicate the DOT should be abolished. It has served its purpose but has no place in today's innovative world.
Re:Bad Study. (Score:5, Insightful)
You really think the problem is C and C++? I know these aren't the holy grail of programming languages, but to put some of that blame on them is very naive. You can write buggy, insecure code in asm! It has very little to do with the language involved (though the compiler and libraries used may have an effect), since it all gets compiled to machine code anyway. Blame the design and implementation, not the tool.
Re:Fixing vulnerabilities is GOOD! (Score:1, Insightful)
Ahh, but in actuality the car was really unsafe to begin with; it's just that no one knew about it. Just because a "new" exploit just came out doesn't mean that some clever hacker in Russia who wants to get into your bank account didn't already know about it. Sure, the problem isn't as widespread, but the risk is still very real.
-- gid
The Invisible Alternative (Score:3, Insightful)
To use a straightforward analogy -- possessing an immune system does not significantly reduce the pathogenic population, yet lacking one is death. The case is quite similar with vulnerabilities and viruses -- it would be very simple for us to completely lack the infrastructure to manage an Internet-wide vulnerability. The low-grade malware floating around -- though infuriating -- forces us to create a management infrastructure, on pain of losing connectivity. What the consistent stream of discovered vulnerabilities creates is not fewer vulnerabilities -- software simply isn't mature enough, nor would we really want it to be -- but more manageable failures. Put simply, it doesn't matter what this way comes, because we've already been forced to deploy backups, create procedures, and implement recovery strategies.
The alternative state is far more terrifying: bugs are not talked about, and the strategy is not to fix them but to silence their open researchers. A black market opens up -- it will always be to the benefit of some to have the means to exploit others. Those means always work, because nobody defends. Are there fewer people with these means? Yes, but one person can write an attack, and the motive to blackmail the entire Internet population (pay me, and I'll "protect" you from the next wave) is quite strong.
Bottom line -- and it's something that took me some time to realize myself, being an active member of the security community who doesn't deal heavily in exploits -- is that whatever the headaches of full disclosure, the alternative is much worse.
--Dan
Re:Fixing vulnerabilities is GOOD! (Score:2, Insightful)
finding them is not really what is at issue...
actively looking for them, then publishing the fact that you found them, and then how to exploit them, is.
the author's point is that it is nearly useless for whitehats to comb through code looking for holes, since they will miss most of them and only catch a few, create a huge hubbub about the ones they do find, then release the news and watch people scramble to patch a problem that no one would have found otherwise.
inevitably some stuff will get missed, then you have a problem.
Finding holes is the result of someone actively looking for them. If no one bothers, we will only have the holes that the blackhats find. The script kiddies never find stuff on their own; they look to published vulnerabilities and their exploits to do their damage. Crackers, aka script kiddies, do most of the damage.
Crackers, in general, are no-talent script modifiers that simply spread damage by using the findings of security "professionals". Most are incapable of finding this stuff themselves and rely on published findings by the legitimate whitehats for material that enables them to feed their obsessive appetite to destroy things.
The author's point is that 90% of "vulnerabilities" would never be exploited if someone didn't find them and publish them for the crackers to exploit.
Hence the whitehats are actually doing more harm than good.
It's definitely a valid point that needs to be explored further. It's possible these whitehats are doing nothing but promoting their own reputation so they can sell their services. They don't really do much good.
If you look at anyone actually hacked by a talented blackhat, the patches wouldn't have done them any good, because the exploit had not yet been published and no patch yet created. The people hacked by script kiddies and worms wouldn't have been hacked had the stuff not been published. Most crackers are too stupid to do it on their own.
They cut, paste, and automate. Whoop dee doo. Cut them off from something to paste and they are dead in the water.
Any idiot can crack, it takes intelligence to hack, and no hackers (or very few) are willing to devote their talent to cracking, since they can actually make money programming and doing positive things with their talents.
The ones that are talented and evil, well, there isn't much you can do about them anyway. They will always find a way. The least you can do is not enable any idiot to do it.
If man can make it, man can break it.
l8,
AC
Re:Fixing vulnerabilities is GOOD! (Score:2, Insightful)
Yes, they do. Safer cars did come out: honking big SUVs. When these things proliferated, smaller cars became less safe, because the odds increased that you'd be facing a much more massive vehicle in a collision.
Re:Fixing vulnerabilities is GOOD! (Score:4, Insightful)
Your language is obscuring your logic. A car is "safe" or "unsafe". But a software vulnerability is unsafe when nobody knows about it and extremely unsafe when everyone does. Not like cars.
People put down security by obscurity all the time, with anecdotes of how it can't be relied upon, etc., without realizing what a security catastrophe it would be if all the obscurity in the world were to vanish. While it isn't a good idea to be relying on security by obscurity, the fact is that much of the world in fact does rely on obscurity, and going out of our way to destroy obscurity isn't necessarily a good idea either.
Re:I've read the paper and disagree (Score:3, Insightful)
When writing code for the Space Shuttle, the coders understand that not only are up to seven lives at stake, but so is a four-billion-dollar irreplaceable piece of hardware.
Microsoft doesn't have that motivation. Neither do IBM, Sun, Red Hat, SCO, or Linus. Until you make them feel the pain of their neglect, their ignorance, and their arrogance, nothing about insecure software will change.
Re:Security guy? (Score:3, Insightful)
I get what you're saying, and you're correct as far as that goes, but I was not concerned that you failed to wipe the CC list.
I was referring to sending your credit card number to someone via electronic mail. Even if I were sure that TLS would be used between my MTA and theirs (ignoring the chance that a third-party secondary MX would get involved) and that both they and I were using SSL-enabled IMAP to fetch our mail... I would still hesitate long enough to make it worth my while to just find their phone number and phone in the CC#.
That you claim to be a security expert and yet seemingly advocate against looking for exploits AND send your credit card number out via email... well, I worry is all.
Re:Fixing vulnerabilities is GOOD! (Score:2, Insightful)
Do I really need to point out the Zero-Day IE exploit that the white-hats still don't completely understand?
The [Neglected] Role of Expectations Management (Score:2, Insightful)
Hello! Reality Check (Score:2, Insightful)
Re:Missing a big part of the conclusion (Score:3, Insightful)
Precisely. How often have we seen reports of compromises on GNU source code servers on Slashdot? (And I'll bet Microsoft is targeted by 30 times as many black hats; we just never get the incident reports.)
An automatic patch system is the subsystem most vulnerable to serious exploits because it runs with the highest privileges on the target machine and only requires that the exploiter compromise a second machine. Exploit the patch server, you're The Man Who Owned the World.