Slow Down the Security Patch Cycle?
Ant writes "Computerworld has an editorial article about slowing down, not speeding up, patch releases."
Yes. (Score:5, Funny)
We all know how well that works for MS Outlook.
Re:Yes. (Score:5, Interesting)
the obvious solution is to distribute patches via an outlook virus. it seems to be the only distro method that's guaranteed to work.
Re:Yes. (Score:5, Funny)
I don't think you'll get an argument from MS (Score:5, Funny)
Re:I don't think you'll get an argument from MS (Score:5, Funny)
Re:I don't think you'll get an argument from MS (Score:5, Insightful)
This is essentially the point of the author...
"They [hackers] wait until the patch for the vulnerability is released, then they reverse-engineer the patch. This is orders of magnitude easier than finding the vulnerability directly."
I believe this idea is flawed. A general description may give a would-be "zero-day hax0r" a place to look, but patches are distributed not as diffs to individual files but as whole-file replacements.
To further reflect the sophistication of the author, he also spews this gem:
"An exploit is a method devised to take advantage of a specific software vulnerability using a software virus, Trojan horse or worm. When the exploit is done without a virus, Trojan or worm, it's using an undocumented feature."
Conclusion? This guy is a putz...
Re:I don't think you'll get an argument from MS (Score:5, Interesting)
You are aware that with a complete copy of the original directories, even with "whole file replacements," you're now just one step away from getting a diff?
Although I still think patches should be released as soon as possible because even if they do help "crackers" (or whatever we're calling them today) find exploits, there are still very intelligent black hats who will eventually find the exploit and start spreading it around. Patching it faster may mean more exploits sooner, but it also means that people can patch against the flaw without waiting for some black hat to make the entire point moot.
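The grandparent's point about whole-file replacements being one step from a diff is easy to demonstrate. As a rough sketch (the byte contents below are made up stand-ins for a shipped binary and its patched replacement), any sequence-comparison tool will localize the changed region:

```python
# Sketch: recovering a "diff" from whole-file replacements.
# The file contents here are toy stand-ins, not real binaries.
from difflib import SequenceMatcher

def changed_regions(old: bytes, new: bytes):
    """Return every opcode block where the two files disagree."""
    sm = SequenceMatcher(None, old, new, autojunk=False)
    return [
        (tag, i1, i2, j1, j2)
        for tag, i1, i2, j1, j2 in sm.get_opcodes()
        if tag != "equal"
    ]

# Hypothetical "before patch" and "after patch" file images:
old_file = b"mov eax, [ebx]\ncall parse\nret\n"
new_file = b"mov eax, [ebx]\ncall parse_checked\nret\n"

for tag, i1, i2, j1, j2 in changed_regions(old_file, new_file):
    print(tag, old_file[i1:i2], new_file[j1:j2])
```

The attacker's attention goes straight to the handful of changed bytes, which is exactly where the fixed (and therefore formerly vulnerable) code lives.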
Re:I pity the hacker... (Score:3, Informative)
Like Microsoft is doing? (Score:4, Insightful)
Re:Like Microsoft is doing? (Score:3, Insightful)
Re:Like Microsoft is doing? (Score:3, Insightful)
Actually, I'm pretty sure that most of the zealots would just be crowing about how "we won!" Microsoft distributing an operating system for which they are license-bound to also distribute the code? No hidden hooks for their own products? Bill Gates bowing down before the Altar of Linus?
The zealots would be thrilled.
I want it fixed ASAP (Score:5, Insightful)
I don't think the issue has to do with patches coming out all the time, but with having a better way to install said patches. Let's just say I am really looking forward to Novell's ZENworks Patch Management solution.
Re:I want it fixed ASAP (Score:5, Insightful)
But that's part of the problem (according to the article.) When a software company releases a patch, it's not just the customers who receive it, but all the virus/worm writers. If they can reverse-engineer the patch and come up with an exploit faster than it takes for *all* the customers to apply the patch, they win. And trust me, they will *always* beat the masses, as long as there are people out there who seldom/never patch their systems.
Perhaps all software patches should be about 1GB in size, mostly consisting of random crap, with the little patch embedded deep inside.
Re:I want it fixed ASAP (Score:2, Insightful)
So, how does waiting longer to release the patch change that situation at all?
Re:I want it fixed ASAP (Score:2)
So, how does waiting longer to release the patch change that situation at all?
Good question. I never really did see the answer to that, and now the article is
Re:I want it fixed ASAP (Score:2, Interesting)
Re:I want it fixed ASAP (Score:2)
Re:I want it fixed ASAP (Score:5, Insightful)
Though if he really thinks that a patch in this form would be significantly harder to crack than a 'normal' patch, he's stretching.
Even if it was, the key would at least occasionally get leaked privately before it was publicly sent, and thus malware writers would have a field day.
All of that is also based on the assumption that exploit writers use the patch to reverse-engineer the vulnerability and exploit it. If this slower cycle he's proposing is too slow, there'll be plenty of "ne'er-do-wells" that will find vulnerabilities the old fashioned way. It's trading the current problem for yesterday's, not what I'd call a step in the right direction.
Working harder to make consumer machines and OSes able to intelligently patch themselves is a better solution. XP SP2 will switch Windows Update to "install by default" instead of "off by default", which will help there. Making it as transparent and unobtrusive as possible for Joe AverageComputerUser is, IMHO, the way to get the attack surface down from millions of machines to a few thousand or less.
The one thing I'll agree with as far as slowing the patch cycle down is the need to make sure any released patch DOES fix the problem and DOES NOT break other things in the process. Those are the kinds of arguments that various parties throw up when they're objecting to applying patches as soon as they're available (that's what was so horribly wrong with the old NT service packs, for example -- they often broke applications, and thus people would wait months or even stay a full service pack behind the latest version).
Xentax
Re:I want it fixed ASAP (Score:2, Funny)
Re:I want it fixed ASAP (Score:2)
i mean, if we only rely on someone finding the bug after the release and reporting it, we are in big trouble...who said that all the bugs found have been reported?
additionally, security is not something that can be fixed after the product is designed -- security is just as
The patch causes the exploit?? (Score:4, Insightful)
What if the distribution of the patch is, as a matter of empirical fact, what *causes* the development of the exploit? From the article:
Now I know that this looks like a call for security through obscurity [wikipedia.org] (see also here [slashdot.org]), but it is an interesting point. It appears the argument is that but for the distribution of the patch, there wouldn't have been an exploit. I don't know how often that is true, if ever. But it does appear worth investigating.
As to your last point, the article indicates that the issue is not finding a better way to install patches, but instead finding a better way to distribute them without, if possible, also disseminating information that can be exploited by black hats. Again, from the article:
Is this possible?
Re:I want it fixed ASAP (Score:5, Insightful)
Say, Microsoft finds a bug (either internally or via a good/trusted samaritan who will keep it private). Now, they go ahead and code up a patch for the bug but when do they release it?
Because the patch for Blaster required Win2K SP2, many people were not able to protect themselves appropriately, as SP2 is an over-8-hour download on a dial-up connection (which more than half of the 'net still uses). Now, if MS can get quarterly updates on CD mailed to all of these people, then they can give everyone a better means of securing their boxen prior to letting the hackers pick apart the actual patch to find out what the hole is and how to exploit it (though Blaster isn't a good example of a patch being reverse-engineered into an exploit).
This is a HUGE dilemma for corporations, especially those that have oodles of laptops with users connecting via dial-up. I'm actually connected, as-we-speak (type?), to windowsupdate.com and have been for the past hour or so... ON BROADBAND...
What I would suggest is the best of both worlds - release patches only as exploits are found in the wild, while compiling fixes for deployment in bulk. And you'd think that Microsoft, with billions in free cash, would start putting a bounty on some of this stuff (either on reporting the holes themselves or on the hackers that exploit them). It just shows how little Microsoft and Billy care about Joe User.
And how about some freakin' color schemes for XP? I mean, really... three whole color schemes?
Re:Windows Security Update CD (Score:3, Informative)
(probably held up in customs, but still...)
One other thing I discovered is that MS automatically made a passport for me when I filled out the order form. It didn't say anything about that until I tried to check the order status and was redirected.
In response to the parent about changing WinXP themes... get a patched uxtheme.dll file. WinXP file protection will complain but you can ignore it. Then, you
Just more astroturf (Score:5, Insightful)
Speaking of astroturf (Score:2, Interesting)
As opposed to the endless [slashdot.org] list [linuxsecurity.com] of problems with free software?
Re:Speaking of astroturf (Score:4, Insightful)
Let's get the *whole* quote shall we?
Re:Speaking of astroturf (Score:3, Informative)
Yes, it allows me to turn the software off, or take the machines down running it until I can patch it. Keeping me in the dark is doing me a disservice.
That fact that a good deal of people are not vigilant about security and let their machines get exploited is no reason those of us who are vigilant should be penalized.
And I realize that 24 hours is not a lot of time to ins
Re:Speaking of astroturf (Score:4, Insightful)
I don't care if "most people won't install a quick fix for a security hole even if it was available from day 1," I will, so let me protect my network and let their networks burn.
Because the people you're talking about have shown that they will not install a patch *months* after "a major security upgrade is released," so how does this security model help at all? Hell, most of them aren't even aware of the vulnerability until their machines slow to a crawl and they hear about a new worm on the local news. So why should I wait for them to patch before I protect myself?
Re:Speaking of astroturf (Score:3, Interesting)
Ah yes. The old "Let's compare the security of programs that Microsoft makes against every hackjob program out there under the GPL or BSD license that might be exploited across a good dozen distributions."
While we're at it, let's fail to consider that there's no such thing as an exploit-free system that still does something useful, and let's not consider the other critical part of security: response and patch times.
In other news, there are a lot more apples in the world than oranges when you compare every
It never says that. (Score:5, Insightful)
Personally, I'd prefer that they didn't use valid scientific methods to prove whether this is the case, if the result is the network being saturated when the exploits finally hit, taking out a significant number of hosts.
One of the key points that he didn't mention is that there was an attack about two years ago (sorry I can't be more specific; I was working on other projects at the time, and wasn't responsible for cleaning up after it where I worked) where one of the antivirus companies had a 'preferred customer' system: they'd let certain customers know about virus outbreaks before the general public, and they put out a press release saying that if you had been one of their clients, you'd have been protected from the outbreak. [I think it was one of the more hard-hitting ones, too... like CodeRed, or at least near that time]
I would think that the issue that this article is talking about has absolutely nothing to do with speed -- it comes down to issues with the current procedures being exploitable, and needing to be fixed. He is simply giving a recommendation to fix the problem, which has a (not quite desired) side effect of longer times before systems are patched.
I would think that there's most likely some other solution out there that would have the desired end result (more difficult to reverse engineer the patches before the majority of users have patched their system), without creating some sort of intentional delay in the procedures. (and whoever comes up with it should probably patent it, to protect themselves and screw others, or should make sure to get it published, so it can be claimed as prior art before someone else patents it)
Here's Why (Score:4, Insightful)
It was probably posted so we'll be aware what PHBs are being fed.
It is NO wonder why it was written... from the bottom of the article:
He does have a point about reverse-engineering, but the solution to that isn't "don't release a patch". His article reads like a Microsoft HOWTO Cover Our Ass document.
One thing that would be interesting (but very difficult) to measure would be the relationship between exploits and fancy features. Fewer features/capabilities must mean fewer potential exploits. And if, as some estimates stated, Word 2000 users on average exercised 10-15% of the features it provided, one must wonder if the other 85-90% of the features were worth the associated exploit and bug potential.
Imagine Internet Explorer minus ActiveX, minus silently-installing "agents", and minus some of the magical integration with the OS. It might look something like Firefox (fast, clean, and comparatively exploit-free).
LMAO! (Score:3, Funny)
Undocumented feature? WTF?
It's a security hole! Not an "undocumented feature".
Hahahahahahahahaha!
Re:LMAO! (Score:2, Funny)
Do it right the first time... (Score:2, Insightful)
It would also mean forcing more programmers to do their jobs right, and more managers to learn what they're doing as well (and that code doesn't fix itself because you lit a candle for it the night before).
Re:Do it right the first time... (Score:2, Insightful)
no matter what you do, your code will have bugs.
you just do everything you can to keep them to a minimum, but if you spent a hundred years working on the same project, when all was said and done there would still be bugs.
Re:Do it right the first time... (Score:2, Interesting)
True, but there are steps you can take to minimize bugs. There are ways to check programs for out-of-bounds conditions. There are ways of fixing exploits relatively quickly. (And I mean weeks instead of months). There are ways of releasing "work arounds" instead of fixes.
True, the above may not produce the fastest code, but shit... we are talking about Windows here. You want it to run faster? Buy a bigger computer.
Re:Do it right the first time... (Score:2, Interesting)
Re:Do it right the first time... (Score:3, Interesting)
So have I. And while I agree that it's theoretically possible to write bug-free software (for a sufficiently small program), even sky-high military budgets can't afford that level of redundant effort. (By quantum physics, nothing is truly impossible. But some things are hard enough to be practically impossible.)
The V-22 [dfw.com]? Lethal software bugs. The FA-22? Software crash every 2 hours [washingtonpost.com].
Funny how those nukes don't go off by accident isn't it?
Just bec
Re:Do it right the first time... (Score:2)
Sure it is, with proper planning, scheduling, and a skilled programming and QA team familiar with the problem domain.
While I'm in fantasy-land, I'd like a unicorn too.
Re:Do it right the first time... (Score:4, Insightful)
Maybe if we were given some fucking time to do the job right......
Re:Do it right the first time... (Score:2, Insightful)
Ever fail to notice part of some instructions, only to regret it later? Ever use the wrong [software] tool for the job because you couldn't afford the right one? Ever zone out or get distracted during a class/meeting? Ever make a coding error that the compiler didn't catch? Ever shortcut a process because you had to rush home to take care of a sick kid or to meet a "drop-dead" deadline? Regular people do this and programm
Re:Do it right the first time... (Score:3, Insightful)
Once you get beyond trivial programs, there's no such thing as fault-free software.
The reason for this is that software does not exist in a vacuum; the "correctness" of the behavior of any software is always evaluated through the eyes of the person using it.
This is why software which is considered bug-free today can become bug-ridden tomorrow if an exploit is discovered which exposes some previously hidden undesirable behavior. The so
Patch release cycle (Score:5, Insightful)
It's a difficult one. On the one hand you've got the problem of lazy vendors, and on the other you've got full disclosure, where the enemy will likely develop the worm before you can test your patch properly.
I think the people that find these vulnerabilities should put an expiration date on their vulnerability, at which point full disclosure kicks in. There should be protections in law to ensure this practice is legal too.
That way.. we have motivated vendors and give the vendors enough time to fix the problem.
Simon.
Wouldn't be a bad thing (Score:5, Interesting)
Either that or like one poster suggested, we just need better tools for keeping track and managing the flow of updates... Strangely enough, MS's XP update does a really good job at this (despite their slow release process).
Re:Wouldn't be a bad thing (Score:2, Insightful)
Depending on the distro, Linux is mindlessly easy to keep up to date. Of course, you wouldn't use Slack in this sort of environment, but RH has a nice package management system, and let us not forget Debian.
Cron jobs, that's where it's at.
Re:Wouldn't be a bad thing (Score:2)
AutoCAD 2004 (regular and LT)
Voloview 3.0
Office XP (plus Visio 2003, Publisher 2000, etc - my boss rocks!)
Pro/Engineer
etc etc etc.
It would be a huge timesaver if there was 1 server application that could manage all the patches for all the applications that I have to support. Thats where I spend th
But then how can vendors be 1337? (Score:5, Insightful)
After crying wolf so many times, it's no wonder advisories concerning critical security holes can get lost in the shuffle.
Re:But then how can vendors be 1337? (Score:2)
In other words, putting the opposite spin on what you're saying, Open Source breeds more perfect software, right? Not just more secure, but little bugs like this get fixed before they can lead to big issues down the road, right?
I can see what the article's saying, but at the same time, things that are very critical should be patched right away, and the patch should be appl
Re:But then how can vendors be 1337? (Score:2, Informative)
It's not possible to slog through millions of lines of assembly. Even if you do 1 line a second, 8 hours a day, 5 days a week, you won't finish in less than a few months (of course, if you have a 10-million-line source code program, the binary will hold a LOT more than "a few mil
Quality not quantity (Score:4, Insightful)
If I may expand upon that. (Score:3, Insightful)
May I also point out that such is the case with the existing "anti-virus" market?
We see "patches" every week for the latest round of viruses. And we will continue to, until Microsoft addresses the actual vulnerabilities in their software (and the security model upon which it is based).
A virus or a worm (and, to a lesser extent, a trojan) is a FAILURE in the security of a system. How many failures of an almost identical nature does it take before people realize that the model i
fuzzy logic? (Score:3, Insightful)
Re:fuzzy logic? (Score:2)
Re:fuzzy logic? (Score:2)
I guess another plan would have been to disguise the patch inside an "update" that modified a lot of the rest of the code, making it harder for the script kiddies to latch onto the key changes.
Maybe we are heading toward an era in which patches are issued in encrypted form, and specia
I never update (Score:2, Funny)
Never had a virus worm or any of that crap.
What am I doing wrong?
Re:I never update (Score:2)
Re:I never update (Score:3, Insightful)
in related news (Score:5, Informative)
Automatic updates (Score:2)
Zinger (Score:2, Insightful)
"...or allowed critics to claim the superiority of some other system that supposedly doesn't need patches."
He's right though. Just because certain closed source vendors aren't doing so well with bugs, doesn't mean that the open source movement can sit back and laugh at them. There needs to be as much participation as possible to maintain OSS's reputation for quality.
A perspective from a gentoo user (Score:3, Insightful)
Sometimes the changes are so minor, I really wonder if it is worth it.
Re:A perspective from a gentoo user (Score:2, Informative)
Re:A perspective from a gentoo user (Score:2)
What portage needs to do is separate security fixes from enhancements. Of course, for major upgrades, security fixes and enhancements will be in the same version, but at least when going from 0.98_0.1r3 to 0.98_0.2, I should know if the upgrade was for a security fix or to add some eye-candy enhancement, which I can do without.
Re:A perspective from a gentoo user (Score:2)
Not quite perfect yet, but getting closer.
Re:A perspective from a gentoo user (Score:2)
Drop this into
Re:A perspective from a gentoo user (Score:2)
there's a new experimental feature, GLSA only updates [gentoo.org]
Basically, it's a script that only pulls in the updates that warrant a gentoo linux security announcement.
It's still worth doing an emerge -puvD world every so often though
How does the Kool-aid taste? (Score:3, Insightful)
He's arguing that they should slow down the patch cycle because all exploits come from reverse engineering patches. Slow down the patches, and you slow down the exploits.
Because, you know, nobody ever figures these things out on their own. It sure is a good thing we live in a world where exploits are never found in the wild before a nice, safe, 100% effective patch is released to counter them.
Took them long enough (Score:3, Insightful)
Patches in the near future- (Score:2, Interesting)
Greatest patch of all (Score:5, Insightful)
Pure insanity but it makes business sense. (Score:2)
The answer is going to come from the market who will decide in MS's case if they do not mind waiting for the plugs to fill the dyke, vs. OSS who
The article is definitely correct! yay! (Score:2)
This is why there are fewer exploits than ever before, and fewer cases of PCs being 0wned and trojaned.
The recent BlackIce break-in clearly demonstrates... oh, sorry, it doesn't, and attacks on PCs are escalating, not going down.
Perhaps professional attackers don't need to wait for exploits after all.
Exploits are often hard to detect... (Score:5, Insightful)
It is very difficult to establish what new exploits are being used in the wild. With the exception of viruses and worms (which have an analyzable payload), most exploits must be caught in the act to understand what they really are.
So if Company X has a vulnerability, they can:
a) hold off on a patch since there is no exploit (as the article suggests), or
b) patch right away, since there is an exploit in the wild
Option a saves Company X money....how hard will they look for an exploit?
Re:Exploits are often hard to detect... (Score:2)
In his article he also points to the fact that the exploit for ISS's software came out immediately after the patch was released. But eEye had found the vulnerability 10 days before the patch was released, so why does he assume that the only ones that had found it and kn
I completely agree with this article... (Score:2)
Okay, honestly though, I can agree with some of his arguments, they are fine, but to make backward assumptions like they did by not mentioning the fact that black-hats can actually find and exploit vulnerabilities f
Hrm (Score:3, Interesting)
What are you talking about, "enabled?" It is their fault for not properly patching the system.
Ultimately, more systems will be developed using managed code (for example, Java and C#). This will narrow the problem to the bootstrap code those systems rely on without every application developer needing to be hypervigilant about buffer overflows.
That only makes sense if you think buffer overflows are the only security risk. Using Java doesn't magically make programs secure. In fact, a lot of damage can be done even when you don't have the ability to run arbitrary code on a remote machine.
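A quick illustration of that last point: here's a toy Python sketch (the directory and filenames are invented) of a flaw that memory safety does nothing to prevent -- no buffer overflow or arbitrary code execution anywhere, yet the unsafe version hands an attacker any file on disk:

```python
# Memory-safe != secure: a classic path-traversal bug in pure Python.
import os

DOCROOT = "/var/www/files"  # hypothetical served directory

def serve_unsafe(filename: str) -> str:
    # BUG: trusts the client-supplied name; "../" walks out of DOCROOT.
    return os.path.join(DOCROOT, filename)

def serve_safe(filename: str) -> str:
    # Fix: normalize the path, then verify it stays inside DOCROOT.
    path = os.path.normpath(os.path.join(DOCROOT, filename))
    if not path.startswith(DOCROOT + os.sep):
        raise ValueError("path escapes document root")
    return path

print(serve_unsafe("../../etc/passwd"))  # escapes the docroot
print(serve_safe("report.txt"))          # stays inside, allowed
```

The bounds on the *buffer* are checked by the runtime; the bounds on the *directory* still have to be checked by the programmer.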
Lastly, and most importantly, once the patch was released, the exploit was released the very next day. This wasn't a coincidence where the exploiters just missed having a zero-day exploit. If the patch had been released a week earlier, the worm also would have come out a week earlier.
So it doesn't matter in the slightest how often you release patches, exploiters will exploit them. Nothing in the article explains how delaying a patch release will make the system more secure.
[To make the system more secure] . . . software owners would subscribe to an automated patch service. . . . Subscribers would receive a predeployed, encrypted version of the patch.
That entire statement sums the entirety of the useful information in this article. Erase the whole thing and leave that statement. (I'm mean. Sorry.)
Translation: (Score:3, Funny)
Not about slowing down the cycle (Score:5, Interesting)
He makes the point that as soon as a patch is available, it is reverse engineered and exploited. He is advocating sending out encrypted versions of a patch, get everyone who is always-connected to the internet to automatically download the encrypted version, and once the downloads per second curve decreases by a certain amount (say 95% or so), then you send out the decryption key. Everyone installs the patch simultaneously; and zero-day exploits have as targets only those systems that do not subscribe to the patch service, and use traditional methods to procure patches.
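A minimal sketch of the scheme he's describing, with a toy XOR keystream standing in for real encryption (this construction is for illustration only, not actual cryptography; all names are invented):

```python
# Toy sketch: pre-distribute an encrypted patch blob, publish the key
# only after the download curve flattens, so everyone can decrypt and
# install near-simultaneously. XOR-with-a-hash-stream is a stand-in
# for a real cipher -- do NOT use this for actual security.
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from key via counter-mode hashing."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, key: bytes) -> bytes:
    """Symmetric: applying twice with the same key round-trips."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

patch = b"binary patch payload"      # what the vendor built
key = secrets.token_bytes(32)        # withheld until phase 2

# Phase 1: subscribers download an opaque blob; nothing to reverse-engineer.
blob = xor(patch, key)

# Phase 2: vendor releases the tiny key; every subscriber installs at once.
assert xor(blob, key) == patch
```

The appeal is that the window between "patch contents are public" and "most subscribers are patched" shrinks to roughly the time it takes to push a 32-byte key.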
This is based on the assumption that zero-day exploits reverse engineer patches. I have found this not to be the case; they usually just exploit the vendor description of the vulnerability; in many cases, this description is posted to a security mailing list a few days (or weeks depending on the vendor) before a patch is available; usually this is the method by which a vendor finds a vulnerability.
This process is right and proper, as it gives the vendor a huge incentive to correct flaws quickly; many people who discover a vulnerability report it to the vendor, wait for it to be fixed, and then, when a fix is not apparent, report it to the community to give the vendor a sense of urgency. Unfortunately, it is a necessary part of the security patch cycle; without it, we would have a privileged few individuals who could write truly devastating worms and virii, for which the vendor may not even be working on a patch.
SQL Slammer was bad. But imagine it if Microsoft had no intention of correcting the vulnerability at the time it hit. How many more people would it have hit, considering that a significant portion of Microsoft's customers had already patched at that point? How long would it take Microsoft to issue a patch? How would they distribute it with so much of the internet simply unavailable? How long until our infrastructure approached something like normalcy?
That's what could happen in a world where public forums don't hold vendors accountable for fixing vulnerabilities. And that's exactly the kind of world necessary for it to make sense to slow down your patch distribution.
The goal of this author (Score:2, Troll)
My company thinks like that... (Score:3, Funny)
Uninformed or just stupid ? (Score:5, Interesting)
Just because a worm was released right after the patch was released, does that mean that they used the patch to create the exploit? That is simply being obtuse.
Real crackers (or whatever you like to call them) are not out there to make a name for themselves. They are out there to make a profit. Simple as that. Those are the guys with real motivation (and I mean money) to explore all possibilities. I do agree that the kids who make worms to become famous among their 13371 friends won't spend days working on disassembled code, but you can be very sure someone willing to compromise a specific target (a bank, or a given company) will do that. Add a little social engineering to the mix, and things get real ugly.
Usually, worms are released after the patch. True. That is usually when the so-called "zero-day" exploit becomes useless, or nearly so. Also, releasing a worm is a good way to divert attention from the other bug the cracker will be exploiting. Believe me, I have seen companies with 400+ employees come nearly to a halt due to patch deployment after a new worm shows up.
So, slowing down patch releases will slow down new worms? At first glance, yes. It will also multiply the number of active worms in the wild, and allow the bad-bad-bad guys to keep making money and cause real trouble, the kind of trouble that can take a company out of the market.
yet again. (Score:5, Insightful)
The patch had the specific information embedded in it that the exploiters needed, and the exploiters already had the expertise and tools required to rapidly make use of the information.
Slowing it down won't do anything, and they jump to that conclusion in the last line. Slowing it down will have the same effect as speeding it up. They used speeding it up as an example:
If the patch had been released a week earlier, the worm also would have come out a week earlier.
The same could be said if it were released a week later.
The quickness of release shouldn't be a factor, only the implementation of distribution. If they can find and fix a problem *right now*, why wait 2 weeks to distribute it? I just don't get why they mention time as an issue, except as flamebait.
End users are in a dilemma, however, because the current method of deploying patches doesn't allow them enough time before an exploit based on reverse-engineering of the patch can be deployed.
The only dilemma is that of the producers of software: how fast can we notify end users that a fix is available and that, if they don't install it, they will be vulnerable to some attack?
If someone understands why the article claims slowing down will help, please explain it to me. This is pissing me off. It makes no sense. The only thing that makes sense is their statement about a "patch subscription system". But that is crying out "Pay for this service". So they want to make people pay to get quick security patches, while the rest get slow patching? I don't get it. I give up trying.
It's like saying "hey, what i want to do makes no sense at all! therefore it HAS to be good, new, and innovative. So give us money!"
Slowing patches doesn't work (Score:3, Interesting)
Security "experts" (have you ever met any? oh really?) are confusing topics here. This is the same argument I've seen time and time again in the security world. Here are a few examples:
1) chroot environments
2) stack protectors
In the case of chroot environments, people wanted to protect against 99.9% of remote attacks, because "kiddies" used remote buffer overflows as the primary method of breaking into computers. What happened? Somebody figured out ways of breaking out of chroot environments. It wasn't difficult. Now, kiddies and damn near everyone can read about how to break out of chroot environments. They don't protect anything when the technique/knowledge of how to break them is so widely available.
In the case of stack protectors, people again wanted to protect against 99.9% of the attacks. In this example, it's more clear, because new attacks became available because of the protection methods. Buffer overflows were 99.9% of the attacks back in the day. When stack protectors started popping up on the scene, tons of papers and research went into heap overflows, format string holes, shared library injection, et al. Now, buffer overflows represent maybe 60-80% of the exploits out there. Since the other methods are now well-known, stack protectors are nowhere near foolproof, and becoming less so by the day.
Exploits are found in the wild. Anyone with ASM or C knowledge can find them, though some attacks require different ways of thinking and different coded implementations. There are many attacks against HTTP, for example, that require no knowledge of ASM or C. Anyone with the desire to find an exploit in almost any computer PROGRAM or line of code (and how many lines of code are there?)... will find one. Give a person a six-pack of Jolt and a box of Cap'N Crunch cereal, and that person will break code for fun or for profit.
Slowing down patches just makes the real hacker's results worth more. And software bugs (which is what security holes are) can cause mass hysteria and even human death. Why delay a patch for a flaw that could cause the kind of disasters software bugs have caused historically? I see delaying patches as Armageddon. Who's with me on this?
Re:Slowing patches doesn't work (Score:3, Insightful)
No. Where did you come up with that odd definition?
That means that if you discover a bug in Linux 2.6.x kernel, that bug has been around since the Minix days!
By that interpretation, a zero-day bug would be so rare we wouldn't even need a word to describe them! (BTW: Linux has never contained any Minix source code)
Zero-day really means that the person running the e
There are two type of patch management (Score:2, Interesting)
However, if you have just discovered a vulnerability in your software, odds are some black hat hasn't just coincidentally discovered the same thing, so releasing a patch immediately is not likely to buy you much security. Anyway, releasing a single patch when a bu
Buffer overflows. Why? (Score:2, Troll)
Partially Correct About C/C++ (Score:3, Insightful)
The problem is that, as with most bugs, the complexity of the language makes problems hard to predict. Even in memory-managed systems like Java and C# you can have crippling errors. You don't "fix" the problem by moving to these types of memory
Job Security (Score:2, Interesting)
Not all right, but not all wrong either (Score:5, Insightful)
Some exploits are reverse-engineered insanely quickly from patches. (True, with an example cited.)
Slowing down patches will reduce the total severity of exploits. (Way too vague.)
Slowing down patches will delay the existence of exploits. (False; not all exploits are reverse-engineered from patches.)
Slowing down patches in a "Tuesdays only" fashion will make it easier for admins to check for patches on a predictable schedule, and install them soon after they're released. (True as far as it goes, but the reverse-engineers can also check for patches on a predictable schedule; this also totally ignores exploits that aren't reverse-engineered from a patch.)
Slowing down patches long enough to make sure they don't cause some other severe problem is a good idea. (True, but not mentioned in the article.)
Providing patches in an encrypted-but-usable form right away, and in a decrypted form later, will help admins keep ahead of reverse-engineers. (Obvious "this is anathema to OSS" aside, how would this actually work? Windows Update patches are already distributed in binary-only form, and they still get reverse-engineered.)
Managed-code languages like Java and C# will eliminate buffer overflows, which are a common source of exploits, but they're nowhere near universal. (Basically true, probably with numerous exceptions and caveats.)
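That last point is easy to demonstrate: in a bounds-checked language, an out-of-range write raises a runtime error instead of silently corrupting adjacent memory. A minimal Python sketch (the buffer and helper function are illustrative, not from any particular codebase):

```python
def write_at(buf, index, value):
    """Attempt a write; a bounds-checked runtime catches bad indices."""
    try:
        buf[index] = value
        return "ok"
    except IndexError:
        # A C program with the same logic could overwrite the stack here.
        return "rejected"

buf = [0] * 8
print(write_at(buf, 3, 42))    # in range: "ok"
print(write_at(buf, 100, 42))  # out of bounds: "rejected", not an exploit
```

The overflow attempt becomes a handled error rather than attacker-controlled memory corruption, which is the whole reason managed languages close off this exploit class.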
An angle I haven't seen before (Score:5, Interesting)
What if the reason some of these exploits aren't happening until the patch has been released is because the blackhats are being careful not to break into systems that belong to clueful users (tm)?
The reasoning would be:
-I want to break into a computer
-I don't want to get busted
-I want to make sure whoever I break into isn't going to bust me
-I'll pick a computer that obviously isn't having much attention paid to it
-If a system isn't getting patched, it probably isn't being checked for intrusions either.
Now I'm not saying that it accounts for the majority of cases, but it is interesting to consider.
hasn't this already been tried... (Score:2, Funny)
Isn't this what Microsoft has been doing for years? (rim shot)
sorry, mark it as Obvious.
Cb
Remote buffer overflows or ???? (Score:2, Interesting)
Computer systems are more likely to get compromised in the following two ways:
1) Poor choice of passwords. This is a vendor implementation problem. Computers and programs should not allow people to choose bad passwords. There should NOT be a setting to make this optional. If passwords aren't secure, why require them in the first place?
2) Exploiting a trust relationship of some kind. This is generally a protocol design problem, that qu
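Point 1 is straightforward to enforce in code: refuse the weak password at creation time, with no setting to turn the check off. A minimal sketch (the specific rules here are illustrative assumptions, not any standard):

```python
import string

def password_ok(pw, min_len=8):
    """Mandatory strength check - no flag exists to disable it."""
    if len(pw) < min_len:
        return False
    classes = (string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation)
    # Require characters drawn from at least three of the four classes.
    hits = sum(any(c in cls for c in pw) for cls in classes)
    return hits >= 3

print(password_ok("password"))    # False: long enough, but one class
print(password_ok("Tr0ub4dor!"))  # True: length plus variety
```

An account-creation path would simply loop until `password_ok` returns True, which is exactly the "should NOT be optional" behavior the comment argues for.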
Stop Releasing Patches (Score:3, Interesting)
Not slow down, but wait and deploy more rapidly (Score:3, Insightful)
Why not have a standard piece of software that scans your system for the programs you have installed and registers them, along with your machine's IP address, with a server? There could be a centralized server system, or each vendor could run their own server to allay privacy concerns. Encrypted patches could then be auto-downloaded upon release and held until some point in the future. Then simple UDP packets containing decryption keys could be sent to all registered systems - at least once enough of them have downloaded the patch - allowing near-simultaneous installation.
An added bonus would be that if a worm/virus is reported in the wild, patching could commence immediately. This would really put a damper on the ridiculous rate of infection we usually see - currently so rapid that anyone not patched is usually hit within a day. I'm glad most of these worms don't carry destructive payloads; the recent Witty worm, which did, killed my weekend. Try recovering data after random parts of the drive have been overwritten.
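The encrypt-now, key-later flow described above can be sketched in a few lines. This is a toy: the SHA-256 XOR keystream is a stand-in purely to show the mechanics, and a real deployment would use a vetted cipher and authenticated key distribution:

```python
import hashlib
import secrets

def keystream(key, length):
    """Expand a key into a byte stream via SHA-256 in counter mode.
    Toy construction for illustration - not vetted cryptography."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_bytes(data, stream):
    return bytes(a ^ b for a, b in zip(data, stream))

# Vendor side: encrypt the patch and let everyone download it immediately.
patch = b"replacement binary for the vulnerable service"
key = secrets.token_bytes(32)
blob = xor_bytes(patch, keystream(key, len(patch)))

# Later (or right away, if a worm appears): broadcast only the small key,
# and every registered machine decrypts and installs near-simultaneously.
recovered = xor_bytes(blob, keystream(key, len(blob)))
print(recovered == patch)  # True
```

The point of the design is that the large download happens ahead of time, so the activation step is only a tiny key broadcast.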
What if it's a bad patch? (Score:3, Insightful)
Now consider what happens when *everyone* installs at the same time. No chance for the vendor to get feedback and pull the patch. Somehow this seems risky....
Bad Writing (Score:3, Insightful)
Catchy way of describing my idea (albeit misleading)
Why we need my idea
My idea.
As you can tell from how Slashdotters are reacting, they never finished the article, or didn't read the whole of the last paragraph, where the idea of encrypting patches and distributing a key days or weeks later is actually stated.
It's a good idea. It solves bandwidth issues for people with huge patches (Microsoft, in particular).
But he has so much in the first and second sections and so little in the last section that his idea gets buried. I think he needs to make his idea less mysterious. Give us a name for the actual idea ("Slow down patches WITH CRYPTOGRAPHY") or something, so that we actually READ the last paragraph.
Furthermore, there would need to be a darn easy way to do this for it to work. Microsoft's update feature could do it, as (we can pretend) every Windows box has it.
If SSH has a vulnerability, you can't release a patch this way unless every OS it runs on has an automatic system - one that fetches patches and keys and installs a patch when its key arrives. Red Carpet could be modified to do that, but what do the Plan 9 users use, or the MacOS users, or the FreeBSD people?
However, over time, such a system could get wide acceptance for configuration/vendor specific patches and then become useful for applications, as well.
It would have to be a well-defined OPEN system, and Microsoft (it seems) need not be included - they'd do their own thing and make the system non-portable.
OpenSSH tried this once (Score:3, Interesting)
It was a complete failure. It led to some of the worst criticism the project had ever experienced. And they ended up releasing the patch earlier than announced - not because of the criticism, but because exploit code was being written despite the patch not being made generally available.
Re:OpenSSH tried this once (Score:3, Informative)
At the time of the original announcement it was specified that there was a way to mitigate the problem (Privilege Separation) and at least some of the criticism was because PrivSep didn't work on all platforms.
The patch was released early because the discoverer released the announcement early. I don't know if there were exploits available at that time.
Disclosure: I'm one of the OpenSSH developers, but I wasn't a
False premise (Score:3, Insightful)
The code exploiters use the same tactics against the software vendors that the software vendors and antivirus companies use on them. They wait until the patch for the vulnerability is released, then they reverse-engineer the patch. This is orders of magnitude easier than finding the vulnerability directly.
This is wrong, and therefore the whole article based on this premise is nonsense.
Most security flaws are found randomly, or through testing and observation of running software, by various people outside the companies that produce the patches. So the possibilities are:
So the conclusion is: there is no scenario that justifies delaying patch releases. Only a lazy software vendor would think of such a lame excuse for delays.
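That said, the quoted claim isn't absurd on its face: as a reply earlier in the thread noted, even whole-file replacements are one step from a diff. Once you have the original and patched files, finding what changed is mechanical. A minimal sketch (the byte strings are hypothetical stand-ins for real binaries):

```python
def changed_offsets(original, patched):
    """Byte offsets where the patched file differs from the original -
    the first step in locating what a security fix actually touched."""
    return [i for i, (a, b) in enumerate(zip(original, patched)) if a != b]

old = b"\x55\x89\xe5\x83\xec\x10\xc3"  # hypothetical original bytes
new = b"\x55\x89\xe5\x83\xec\x20\xc3"  # hypothetical patched bytes
print(changed_offsets(old, new))       # [5] - a single changed byte
```

Real patch-diffing tools work at the function level rather than the byte level, but the narrowing-down effect is the same: the patch tells you exactly where to look.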
Re:In case the site falls over.... (Score:2, Insightful)
WHY do we mod up people who pollute the comments by copying the article into the comments?
Not informative. Damn, where are my mod points when I need them?