Security

Slow Down the Security Patch Cycle? 302

Ant writes "Computerworld has an editorial article about slowing down, not speeding up, patch releases."
This discussion has been archived. No new comments can be posted.


  • Yes. (Score:5, Funny)

    by ImTwoSlick ( 723185 ) on Tuesday April 13, 2004 @03:32PM (#8851652)
    slowing down, not speeding up, patch releases

    We all know how well that works for MS Outlook.

  • by Anonymous Coward on Tuesday April 13, 2004 @03:32PM (#8851654)
    Any slower and Microsoft would just start shipping exploits instead of patches.

    • by Jason Straight ( 58248 ) on Tuesday April 13, 2004 @03:43PM (#8851811) Homepage
      They already do!
    • by pegr ( 46683 ) * on Tuesday April 13, 2004 @04:21PM (#8852335) Homepage Journal
      Any slower and Microsoft would just start shipping exploits instead of patches.

      This is essentially the point of the author...

      "They [hackers] wait until the patch for the vulnerability is released, then they reverse-engineer the patch. This is orders of magnitude easier than finding the vulnerability directly."

      I believe this idea is flawed. A general description may give a would-be "zero-day hax0r" a place to look, but patches are distributed not as patches to individual files (e.g. diffs) but as whole file replacements.

      To further reflect the sophistication of the author, he also spews this gem:

      "An exploit is a method devised to take advantage of a specific software vulnerability using a software virus, Trojan horse or worm. When the exploit is done without a virus, Trojan or worm, it's using an undocumented feature."

      Conclusion? This guy is a putz...
      • by _xeno_ ( 155264 ) on Tuesday April 13, 2004 @04:40PM (#8852574) Homepage Journal
        but patches are distributed not as patches to individual files (e.g. diffs) but as whole file replacements.

        You are aware that with a complete copy of the original directories, even with "whole file replacements," you're now just one step away from getting a diff?

        Although I still think patches should be released as soon as possible because even if they do help "crackers" (or whatever we're calling them today) find exploits, there are still very intelligent black hats who will eventually find the exploit and start spreading it around. Patching it faster may mean more exploits sooner, but it also means that people can patch against the flaw without waiting for some black hat to make the entire point moot.
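        (A minimal sketch of that one step, assuming you kept pre-patch and post-patch copies of the tree; all paths and file names here are hypothetical.)

          # Which files did the "whole file replacement" actually touch?
          diff -rq pre-patch/ post-patch/

          # For a changed binary, disassemble both versions and diff the assembly;
          # the changed routines point straight at the fix -- and at the hole.
          objdump -d pre-patch/vuln.dll  > pre.asm
          objdump -d post-patch/vuln.dll > post.asm
          diff pre.asm post.asm | less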

  • by JPriest ( 547211 ) on Tuesday April 13, 2004 @03:32PM (#8851662) Homepage
    Everyone already decided this was a bad idea when Microsoft started doing it. We can't change our minds and copy them because we all know Microsoft does not Innovate.
    • This all just lends credence to my theory that the best way for Microsoft to kill Linux is for them to adopt it. Nothing would make the Linux Zealots jump ship faster. Within days we'd be hearing about how much Linux sucks and how BSD or some other OS is the "One True Operating System".
      • This all just lends credence to my theory that the best way for Microsoft to kill Linux is for them to adopt it. Nothing would make the Linux Zealots jump ship faster.

        Actually, I'm pretty sure that most of the zealots would just be crowing about how "we won!" Microsoft distributing an operating system for which they are license-bound to also distribute the code? No hidden hooks for their own products? Bill Gates bowing down before the Altar of Linus?

        The zealots would be thrilled.

  • by Gr33nNight ( 679837 ) on Tuesday April 13, 2004 @03:33PM (#8851675)
    I don't know about everyone else, but if a bug or security hole is found, I want a patch for it ASAP, and not in 2 months when the next 'service pack' or whatever comes out.

    I don't think the issue has to do with patches coming out all the time, but with having a better way to install said patches. Let's just say I am really looking forward to Novell's ZENworks Patch Management solution.
    • by cK-Gunslinger ( 443452 ) on Tuesday April 13, 2004 @03:49PM (#8851880) Journal

      But that's part of the problem (according to the article.) When a software company releases a patch, it's not just the customers who receive it, but all the virus/worm writers. If they can reverse-engineer the patch and come up with an exploit faster than it takes for *all* the customers to apply the patch, they win. And trust me, they will *always* beat the masses, as long as there are people out there who seldom/never patch their systems.

      Perhaps all software patches should be about 1GB in size, mostly consisting of random crap, with the little patch embedded deep inside. ;)

      • If they can reverse-engineer the patch and come up with an exploit faster than it takes for *all* the customers to apply the patch, they win.

        So, how does waiting longer to release the patch change that situation at all?

        • So, how does waiting longer to release the patch change that situation at all?

          Good question. I never really did see the answer to that, and now the article is /.'d. Perhaps because more people are likely to apply a large patch that "fixes 37 known security issues" than they are to apply 37 individual patches over a 3-6 month span.

        • Just slowing it down by itself won't help. If the patches were all listed at some very convenient central location and everyone knew that the first of the month was patch day, maybe people would remember to patch. If it were some set day, each OS could pop up a little message saying to go get all of this month's patches. Additionally, if the patch schedule is random like it is now, then the exploiters are much more likely to hear about the patch than the average computer user. Having a set day might eli
          • Even better, how about some little thing built into the OS which goes and fetches the patches automagically, and asks if you want to install them. Then the user doesn't have to do anything except click "install" when prompted. I wonder when someone will invent [microsoft.com] that [redhat.com].
        • by Xentax ( 201517 ) on Tuesday April 13, 2004 @04:12PM (#8852165)
          The article author suggests that the patch be distributed in an encrypted form, and then on a specified date, the key gets sent and everyone patches simultaneously.

          Though if he really thinks that a patch in this form would be significantly harder to crack than a 'normal' patch, he's stretching.

          Even if it was, the key would at least occasionally get leaked privately before it was publicly sent, and thus malware writers would have a field day.

          All of that is also based on the assumption that exploit writers use the patch to reverse-engineer the vulnerability and exploit it. If this slower cycle he's proposing is too slow, there'll be plenty of "ne'er-do-wells" that will find vulnerabilities the old fashioned way. It's trading the current problem for yesterday's, not what I'd call a step in the right direction.

          Working harder to make consumer machines and OSes able to intelligently patch themselves is a better solution. XPsp2 will switch Windows Update to "install by default" instead of "off by default", which will help there. Making it as transparent and yet as unobtrusive as possible for Joe AverageComputerUser is, IMHO, the way to get the attack surface down from millions of machines to a few thousand or less.

          The one thing I'll agree about as far as slowing the patch cycle down is making sure any released patch DOES fix the problem and DOES NOT break other things in the process. Those are the kinds of arguments that various parties throw up when they're objecting to applying patches as soon as they're available (that's what was horribly wrong with the old NT service packs, for example -- they often broke applications, and thus people would wait months or even stay a full service pack behind the latest version).

          Xentax
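          (For concreteness, a minimal sketch of the encrypt-now, key-later scheme using openssl; file names and the installer are hypothetical, and the key-distribution channel is the hard part the author hand-waves.)

            # Vendor, at patch time: generate a key and ship only the ciphertext.
            openssl rand -hex 32 > patch.key
            openssl enc -aes-256-cbc -salt -pass file:patch.key -in patch.bin -out patch.bin.enc
            # ...subscribers pre-download patch.bin.enc; days later the key is published...

            # Subscriber, when the key finally arrives: decrypt and install.
            openssl enc -d -aes-256-cbc -pass file:patch.key -in patch.bin.enc -out patch.bin
            sh ./install-patch.sh patch.bin   # hypothetical installer

          (Note that nothing here helps with the leaked-key problem above: one prematurely published patch.key unlocks every pre-staged copy at once.)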
      • My girlfriend's younger sister never patches her machine. I was checking it out; it was an XP box with no patches applied, and the anti-virus definitions table was from 2001! I said it was a bad idea. She said "Who cares. Oh no, they can read my school work and my e-mail! I'll just format every couple months when it gets too slow." Mind you, she gave us a ride, and afterwards there was sweet-smelling smoke pouring out of the hood. "You're burning anti-freeze," I say. Her response was "Yeah, it's leaking all over everyt
    • I think people are missing the point, and the point is: we need to write better software in the first place -- test it well BEFORE releasing it, not relying on the fact that we can release a patch later, after the bug is found by someone.

      I mean, if we only rely on someone finding the bug after the release and reporting it, we are in big trouble... who said that all the bugs found have been reported?

      Additionally, security is not something that can be fixed after the product is designed -- security is just as
    • by David Hume ( 200499 ) on Tuesday April 13, 2004 @04:11PM (#8852152) Homepage

      I don't know about everyone else, but if a bug or security hole is found, I want a patch for it ASAP, and not in 2 months when the next 'service pack' or whatever comes out.

      I don't think the issue has to do with patches coming out all the time, but with having a better way to install said patches. Let's just say I am really looking forward to Novell's ZENworks Patch Management solution.


      What if the distribution of the patch is, as a matter of empirical fact, what *causes* the development of the exploit? From the article:

      Lastly, and most importantly, once the patch was released, the exploit was released the very next day. This wasn't a coincidence where the exploiters just missed having a zero-day exploit. If the patch had been released a week earlier, the worm also would have come out a week earlier.


      The patch had the specific information embedded in it that the exploiters needed, and the exploiters already had the expertise and tools required to rapidly make use of the information.


      Now I know that this looks like a call for security through obscurity [wikipedia.org] (see also here [slashdot.org]), but it is an interesting point. It appears the argument is that but for the distribution of the patch, there wouldn't have been an exploit. I don't know how often that is true, if ever. But it does appear worth investigating.

      As to your last point, the article indicates that the issue is not finding a better way to install patches, but instead finding a better way to distribute them without, if possible, also disseminating information that can be exploited by black hats. Again, from the article:

      The main idea is that vendors need to rethink the patch distribution process, slow it down rather than speed it up and deliver security patches in a way designed to defeat the reverse-engineering process.


      Is this possible?

    • by swordboy ( 472941 ) on Tuesday April 13, 2004 @04:28PM (#8852438) Journal
      I don't think that you are seeing the big picture.

      Say, Microsoft finds a bug (either internally or via a good/trusted samaritan who will keep it private). Now, they go ahead and code up a patch for the bug but when do they release it?

      Because the patch for Blaster required Win2K SP2, many people were not able to protect themselves appropriately, as SP2 takes over 8 hours to download on a dial-up connection (more than half of the 'net). Now, if MS can get quarterly updates on CD mailed to all of these people, then they can give everyone a better means of securing their boxen prior to letting the hackers pick apart the actual patch to find out what the hole is and how to exploit it (though Blaster isn't a good example of a patch being reverse-engineered into an exploit).

      This is a HUGE dilemma for corporations, especially those that have oodles of laptops with users connecting via dial-up. I'm actually connected, as-we-speak (type?), to windowsupdate.com and have been for the past hour or so... ON BROADBAND...

      What I would suggest is the best of both worlds -- release patches only as exploits are found in the wild, while compiling fixes for deployment in bulk. And you'd think that Microsoft, with billions in free cash, would start putting a bounty on some of this stuff (either on reporting the holes themselves or on the hackers that exploit them). It just shows how little Microsoft and Billy care about Joe User.

      And how about some freakin' color schemes for XP? I mean, really... three whole color schemes?
  • by Anonymous Coward on Tuesday April 13, 2004 @03:34PM (#8851682)
    While it doesn't name any names, the gist of this article is exactly the same as when Microsoft said that exploits only come after patches are released. This is patent nonsense and we all know it; every week there's a few stories about a new MS hole that's being exploited but that they refuse to (or cannot) fix. I wonder why such a vapid article was posted?
    • every week there's a few stories about a new MS hole

      As opposed to the endless [slashdot.org] list [linuxsecurity.com] of problems with free software?

      • by Atzanteol ( 99067 ) on Tuesday April 13, 2004 @03:45PM (#8851834) Homepage
        ...that's being exploited but that they refuse to (or cannot) fix.

        Let's get the *whole* quote, shall we?
      • Ah yes. The old "Let's compare the security of programs that Microsoft makes against every hackjob program out there under the GPL or BSD license that might be exploited across a good dozen distributions."

        While we're at it, let's fail to consider that there's no such thing as an exploit-free system that still does something useful, and let's not consider the other critical part of security: response and patch times.

        In other news, there are a lot more apples in the world than oranges when you compare every

    • by oneiros27 ( 46144 ) on Tuesday April 13, 2004 @03:51PM (#8851909) Homepage
      You're just reading it in. It is, however, stating that attacks seem to come rather quickly after patches are released.

      Personally, I'd prefer that they not use valid scientific methods to prove whether this is the case, if the proof means the network being saturated when the exploits finally hit and a significant number of hosts being taken out.

      One of the key points that he didn't mention is that there was an attack about two years ago (sorry I can't be more specific; I was working on other projects at the time, and wasn't responsible for cleaning up after it where I worked) where one of the virus companies had a 'preferred customer' system: they'd let certain customers know about virus outbreaks before the general public, and they put out a press release saying that if you had been one of their clients, you'd have been protected from the virus outbreak. [I think it was one of the harder-hitting ones, too... like CodeRed, or at least near that time]

      I would think that the issue that this article is talking about has absolutely nothing to do with speed -- it comes down to issues with the current procedures being exploitable, and needing to be fixed. He is simply giving a recommendation to fix the problem, which has a (not quite desired) side effect of longer times before systems are patched.

      I would think that there's most likely some other solution out there that would have the desired end result (more difficult to reverse engineer the patches before the majority of users have patched their system), without creating some sort of intentional delay in the procedures. (and whoever comes up with it should probably patent it, to protect themselves and screw others, or should make sure to get it published, so it can be claimed as prior art before someone else patents it)
    • Here's Why (Score:4, Insightful)

      by blunte ( 183182 ) on Tuesday April 13, 2004 @04:41PM (#8852589)
      I wonder why such a vapid article was posted?


      It was probably posted so we'll be aware of what PHBs are being fed.

      It is NO wonder why it was written... from the bottom of the article:

      Bill Addington is a software entrepreneur and inventor, operating several application service provider companies. His spinoff technologies have been sold to companies including America Online Inc. and Microsoft Corp. He conducts his security business in conjunction with Dyonyx in Houston.


      He does have a point about reverse-engineering, but the solution to that isn't "don't release a patch". His article reads like a Microsoft HOWTO Cover Our Ass document.

      One thing that would be interesting (but very difficult) to measure would be the relationship between exploits and fancy features. Fewer features/capabilities must mean fewer potential exploits. And if, as some estimates stated, Word 2000 users on average exercised 10-15% of the features it provided, one must wonder if the other 85-90% of the features were worth the associated exploit and bug potential.

      Imagine Internet Explorer minus ActiveX, minus silently-installing "agents", and minus some of the magical integration with the OS. It might look something like Firefox (fast, clean, and comparatively exploit-free).
  • LMAO! (Score:3, Funny)

    by khasim ( 1285 ) <brandioch.conner@gmail.com> on Tuesday April 13, 2004 @03:36PM (#8851713)
    First, let's define zero-day exploit. An exploit is a method devised to take advantage of a specific software vulnerability using a software virus, Trojan horse or worm. When the exploit is done without a virus, Trojan or worm, it's using an undocumented feature.
    Undocumented feature? WTF?
    It's a security hole! Not an "undocumented feature".
    Hahahahahahahahaha!
    • Re:LMAO! (Score:2, Funny)

      SARCASM_ON: Yeah, right. It was a feature. So we didn't document it. Hell, we didn't even advertise it. I run linux exclusively; and I'd expect patches the same day for any system I deal with, be it Mac, Windows, or anything else. Of course, I could always take my business elsewhere; I prefer to deal with those who stand behind their products, words, and actions, *without lawyers and weasel-wording*, which is why I prefer Open systems per se. They have a better track record, IMHO.
  • And you won't have to keep patching it, will you? Of course, that would mean spending money and time up front, rather than being able to hide the costs in continuous maintenance cycles.

    It would also mean forcing more programmers to do their jobs right, and more managers to learn what they're doing as well (and that code doesn't fix itself because you lit a candle for it the night before).
    • Sorry, but you're obviously NOT a programmer.
      No matter what you do, your code will have bugs.
      You just do everything you can to keep them to a minimum, but if you spent a hundred years working on the same project, when all was said and done there would still be bugs.
      • > no matter what you do, your code will have bugs.

        True, but there are steps you can take to minimize bugs. There are ways to check programs for out-of-bounds conditions. There are ways of fixing exploits relatively quickly. (And I mean weeks instead of months). There are ways of releasing "work arounds" instead of fixes.

        True, the above may not produce the fastest code, but shit... we are talking about Windows here. You want it to run faster? Buy a bigger computer.
    • by negacao ( 522115 ) * <dfgdsfg@asdasdasd.net> on Tuesday April 13, 2004 @03:50PM (#8851890)
      It would also mean forcing more programers to do their jobs right,

      Maybe if we were given some fucking time to do the job right......
    • by Anonymous Coward
      [Do it right the first time] And you won't have to keep patching it, would you?

      Ever fail to notice part of some instructions, only to regret it later? Ever use the wrong [software] tool for the job because you couldn't afford the right one? Ever zone out or get distracted during a class/meeting? Ever make a coding error that the compiler didn't catch? Ever shortcut a process because you had to rush home to take care of a sick kid or to meet a "drop-dead" deadline? Regular people do this and programm
    • It would also mean forcing more programers to do their jobs right...

      Once you get beyond trivial programs, there's no such thing as fault-free software.

      The reason for this is that software does not exist in a vacuum; the "correctness" of the behavior of any software is always evaluated through the eyes of the person using it.

      This is why software which is considered bug-free today can become bug-ridden tomorrow if an exploit is discovered which exposes some previously hidden undesirable behavior. The so

  • by Ckwop ( 707653 ) * on Tuesday April 13, 2004 @03:38PM (#8851738) Homepage
    The problem with slowing down the patch release cycle is that software vendors get lazy. "I won't release this patch for 18 months because no one knows the vulnerability."

    It's a difficult one. On the one hand you've got the problem of lazy vendors, and on the other you've got full disclosure, where the enemy will likely develop the worm before you can test your patch properly.

    I think the people who find these vulnerabilities should put an expiry date on their vulnerability, at which point full disclosure kicks in. There should be protections in law to ensure this practice is legal, too.

    That way we keep vendors motivated and still give them enough time to fix the problem.

    Simon.
  • by The Desert Palooka ( 311888 ) on Tuesday April 13, 2004 @03:38PM (#8851740)
    Besides absolutely critical patches (for worms, exploits in the wild and the like), I think this could be a really good thing. I know when I was a network administrator it was nigh impossible to keep up with all the patches on my Linux boxen. If all patches were released like movies and music, on Tuesdays only, it would have been easier. Come into work every Tuesday, read what patches I need to install...

    Either that or like one poster suggested, we just need better tools for keeping track and managing the flow of updates... Strangely enough, MS's XP update does a really good job at this (despite their slow release process).
    • Sorry, I'm going to go with a WTF here. Windows is, far and away, the harder system to keep up to date, and it's dead easy to do now (so no excuse for those of you Windows admins who don't keep your systems patched daily).

      Depending on the distro, Linux is mindlessly easy to keep up to date. Of course, you wouldn't use slack in this sort of environment, but RH has a nice package management system, and let us not forget Debian.

      Cron jobs, that's where it's at.
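      (A minimal sketch of that cron-job approach on Debian, assuming unattended upgrades are acceptable on the box in question; Gentoo users get a similar snippet further down the thread.)

        #!/bin/sh
        # /etc/cron.daily/apt-upgrade: refresh package lists, then apply pending updates.
        apt-get -qq update
        apt-get -qq -y upgrade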
    • It's not just the OS that is the problem. I am a Windows/Novell admin, and OS updates are not a problem (SUS). The problem lies in all the applications that need to be patched. Here is a short list:

      AutoCAD 2004 (regular and LT)
      Voloview 3.0
      Office XP (plus Visio 2003, Publisher 2000, etc - my boss rocks!)
      Pro/Engineer
      etc etc etc.

      It would be a huge timesaver if there was 1 server application that could manage all the patches for all the applications that I have to support. That's where I spend th
  • by Rex Code ( 712912 ) <rexcode@gmail.com> on Tuesday April 13, 2004 @03:38PM (#8851746)
    The problem is that the standard for what constitutes a security hole has become (in some cases) ridiculously low. A few years ago, you'd have to have a demonstrable exploit to get a vendor's attention. These days, someone notices an overflow, or an off-by-one error in the source code, and makes a post to full-disclosure or BugTraq, and the next thing you know all the vendors have to patch it even if it doesn't pan out to have real security implications. On top of this, you've got the vendors themselves issuing advisories for low-level risk issues, which forces the other vendors to issue advisories themselves.

    After crying wolf so many times, it's no wonder advisories concerning critical security holes can get lost in the shuffle.
    • someone notices an overflow, or an off-by-one error in the source code, and makes a post to full-disclosure or BugTraq

      In other words, putting the opposite spin on what you're saying, doesn't this mean that Open Source breeds more perfect software? Not just more secure, but little bugs like this -- the kind that can lead to big issues down the road -- get fixed, right?

      I can see what the article's saying, but at the same time, things that are very critical should be patched right away, and the patch should be appl
      • There's one point that has been missed so far ... here's the quote from the article:

        They know there is a much easier way to determine the details of a particular vulnerability than slogging through millions of lines of assembly.

        It's not possible to slog through millions of lines of assembly. Even if you do 1 line a second, 8 hours a day, 5 days a week, you won't finish in less than a few months (of course, if you have a 10-million-line source code program, the binary will hold a LOT more than "a few mil
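        (The parent's arithmetic, spelled out in shell for concreteness.)

          echo $(( 8 * 3600 ))            # 28800 lines/day at 1 line/sec, 8 hours/day
          echo $(( 1000000 / 28800 ))     # ~34 working days for one million lines
          echo $(( 10000000 / 144000 ))   # ~69 five-day weeks for ten million lines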

  • by SkiddyRowe ( 692144 ) <bigskidrowe@hotmail.com> on Tuesday April 13, 2004 @03:39PM (#8851748)
    Personally I'd rather see patches that won't open new vulnerabilities. Too many times we've seen these quick reflex fixes from Redmond, only to see them release a patch for the patch. I'd rather patch the problem fully than have a quick punch patch that just gets exploited a week later.
    • You are completely correct.

      May I also point out that such is the case with the existing "anti-virus" market?

      We see "patches" every week for the latest round of viruses. And we will continue to, until Microsoft addresses the actual vulnerabilities in their software (and the security model upon which it is based).

      A virus or a worm (and, to a lesser extent, a trojan) is a FAILURE in the security of a system. How many failures of an almost identical nature does it take before people realize that the model i
  • fuzzy logic? (Score:3, Insightful)

    by Zoko Siman ( 585929 ) <<moc.liamg> <ta> <awaleibmit>> on Tuesday April 13, 2004 @03:39PM (#8851761) Homepage
    I don't see how this really changes anything. Either way, from what this guy is saying, once the patch is released an exploit will come out in short order. He's basically just saying to release the patch at a later date, and I don't see any real reason to justify that. How do you expect to get the problem patched at all if you just don't release the patch? To me it seems like, with this reasoning, we could just forever delay all patches.
    • Although I don't agree with the article's author, I think the inference is that if patches were released more like service packs, where you patch 30 or 40 things at a shot and only have to do it once a month, the lower administrative overhead of this model (vs. the micro-patching of today) would encourage a shorter average time from patch release to patch install across the board.
    • Nice point, what was the vendor supposed to do? Invite all their customers to a secret club meeting where the club handshake and the club password is needed to get in the door, and the patch is handed out by invitation only?

      I guess another plan would have been to disguise the patch inside an "update" that modified a lot of the rest of the code, making it harder for the script kiddies to latch onto the key changes.

      Maybe we are heading toward an era in which patches are issued in encrypted form, and specia
  • by Anonymous Coward
    I never update and I don't have any problems.

    Never had a virus worm or any of that crap.

    What am I doing wrong?
    • Well, you should probably drag your computer back out of that safe you buried under a nuclear power plant. No wonder you're not getting anything.
    • Re:I never update (Score:3, Insightful)

      by Lehk228 ( 705449 )
      Are you behind a NAT/firewall/switch? They protect against ***almost*** all remote exploits by making your computer unroutable from the outside: "you can't get there from here."
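      (On Linux, a minimal default-deny iptables policy gives roughly that "can't get there from here" property; a sketch, not a complete firewall.)

        iptables -P INPUT DROP                                             # drop unsolicited inbound packets
        iptables -A INPUT -i lo -j ACCEPT                                  # allow loopback
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT   # allow replies to outbound traffic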
  • in related news (Score:5, Informative)

    by jannesha ( 441851 ) on Tuesday April 13, 2004 @03:41PM (#8851774)
    There are new critical updates available on Windows Update (5 in all, for WinXP + IE 6 SP1).
  • Why is this an issue with automatic updates? Set Granny's computer to be automatically patched whenever updates are available. Don't instruct her to review patches herself by clicking OK, then Yes, then scrolling down to "I'm really sure I want to patch" and then hitting "update". She won't do it, and then later you'll have to fix her computer when it gets "owned". If you're paranoid, review each patch on your personal machines individually. But for grandmothers and clueless people everywhere, ignorance really is bliss.
  • Zinger (Score:2, Insightful)

    by CaptKilljoy ( 687808 )
    Nice ding against Linux and open source:

    "...or allowed critics to claim the superiority of some other system that supposedly doesn't need patches."

    He's right, though. Just because certain closed source vendors aren't doing so well with bugs doesn't mean that the open source movement can sit back and laugh at them. There needs to be as much participation as possible to maintain OSS's reputation for quality.
  • by frodo from middle ea ( 602941 ) on Tuesday April 13, 2004 @03:41PM (#8851780) Homepage
    Every time I do emerge -pUv world, I remember I haven't synced the latest tree. So I do emerge --sync, and then, bam, another 200 packages to upgrade.

    Sometimes the changes are so minor, I really wonder if it is worth it.

    • After you run emerge sync you should run emerge -u -p world; it will tell you what you can update... you don't have to do everything... just pick whatever makes sense... no need to update OpenOffice.org from 1.1 to 1.1.1... but if there is something like ssh or postfix it's a good idea to grab those... I mean, use common sense... and since you read Slashdot you should know of any BIG remote exploits that appear, and emerge fixes for them... but if it works for you, don't fix it by emerge -u world... simply not necessary.
      • Exactly my point. After a while the emerge -u world becomes unusable. Every time, I have to emerge -puv world and then select the ones that I really need to fix and neglect the others.

        What portage needs to do is separate security fixes from enhancements. Of course, for major upgrades, security fixes and enhancements will be in the same version, but at least when going from 0.98_0.1r3 to 0.98_0.2, I should know if the upgrade was for a security fix or to add some eye-candy enhancement, which I can do without.

    • You need to think a little lazier ;)

      Drop this into /etc/cron.weekly
      #!/bin/bash
      # Quietly sync the portage tree (no progress spinner)...
      emerge -q --nospinner sync
      # ...then pretend (-p) an update (-u) of world, so cron mails you what needs upgrading.
      emerge -up world
      Now if they would only stop releasing borked packages.
    • If you only want to regularly check for the important security updates, rather than minor bugfixes, feature upgrades, etc.,

      there's a new experimental feature, GLSA only updates [gentoo.org]

      Basically, it's a script that only pulls in the updates that warrant a gentoo linux security announcement.

      It's still worth doing an emerge -puvD world every so often though ;)
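      (A minimal sketch of that routine, assuming the glsa-check script from gentoolkit.)

        glsa-check -t all         # test which advisories apply to this system
        glsa-check -p affected    # pretend: show what would be emerged
        glsa-check -f affected    # apply only the security fixes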
  • by Minwee ( 522556 ) <dcr@neverwhen.org> on Tuesday April 13, 2004 @03:43PM (#8851801) Homepage
    [...] most importantly, once the patch was released, the exploit was released the very next day. This wasn't a coincidence where the exploiters just missed having a zero-day exploit. If the patch had been released a week earlier, the worm also would have come out a week earlier.

    He's arguing that they should slow down the patch cycle because all exploits come from reverse engineering patches. Slow down the patches, and you slow down the exploits.

    Because, you know, nobody ever figures these things out on their own. It sure is a good thing we live in a world where exploits are never found in the wild before a nice, safe, 100% effective patch is released to counter them.

    • You kind of missed the point there. If you have something found running out in the wild, you release a usable patch that day. He's talking about exploits found by the vendor, not ones out in the wild -- i.e., when Redhat finds an extremely obscure buffer overflow, not joe hacker. Instead of releasing a statement about the problem an hour after it's found, and then putting a patch out a day later, with admins patching a couple of days after (a weekend, you know), he's advocating that since nobody else knows ab
  • Although I believe it is important to focus more on shipping a good product out the door -- devs take note, buggy releases are more of a headache to fix than to do it right the first time -- I feel that in specific instances -- online games, optimized databases, web-related code -- it is vital that patches are current ASAP. In a game, a bug will be exploited thousands of times before it can be patched (especially in larger MMORPGs), and it is only fair that mission-critical software have the latest fixes
  • by cexshun ( 770970 ) on Tuesday April 13, 2004 @03:43PM (#8851816) Homepage
    I think the greatest "fix all" patch would be to distribute a book with every PC sold titled "The Internet: How to not be an idiot". I can't think of many email viruses out there that can exploit the ol' "Do not open unless I know what it is" bug!
  • The closest analogy I can see is the approach some companies take to fixing things or training new hires: it is more cost-effective to allow the problem or issue to continue than it is to delegate a $100-per-hour tech to fix one PC or train one person. They can save more money by having that same $100-per-hour tech fix 10 PCs or train 10 people.

    The answer is going to come from the market, which will decide in MS's case if they do not mind waiting for the plugs to fill the dike, vs. OSS who

  • Whereas security holes used to be obvious and easy to find, these days they require a lot more work to find.

    This is why there are fewer exploits than ever before, and fewer cases of PCs being 0wned and trojaned.

    The recent BlackIce break-in clearly demonstrates... oh, sorry, it doesn't, and attacks on PCs are escalating, not going down.

    Perhaps professional attackers don't need to wait for exploits after all.
  • by cheezit ( 133765 ) on Tuesday April 13, 2004 @03:46PM (#8851846) Homepage
    This article is pretty interesting, but it is built on the assumption that vulnerabilities usually don't have exploits in the wild until the patch comes out. Sometimes that is true (as his examples show), sometimes it is not. The problem is showing the difference.

    It is very difficult to establish what new exploits are being used in the wild. With the exception of viruses and worms (which have an analyzable payload), most exploits must be caught in the act to understand what they really are.

    So if Company X has a vulnerability, they can:
    a) hold off on a patch since there is no exploit (as the article suggests), or
    b) patch right away, since there is an exploit in the wild

    Option a saves Company X money....how hard will they look for an exploit?
    • This article is pretty interesting, but it is built on the assumption that vulnerabilities usually don't have exploits in the wild until the patch comes out. Sometimes that is true (as his examples show), sometimes it is not. The problem is showing the difference.

      In his article he also points to the fact that the exploit for ISS's software came out immediately after the patch was released. But eEye had found it 10 days before the patch was released, so why does he assume that the only ones that had found it and kn
  • Because we all know that only researchers and vendors who report their findings to CERT immediately are capable of finding problems in code. And we also know from his article that the only way to create an exploit for a vulnerability is to wait until the patch comes out and reverse engineer it.

    Okay, honestly though, I can agree with some of his arguments -- they are fine -- but to make backward assumptions like he did by not mentioning the fact that black-hats can actually find and exploit vulnerabilities f
  • Hrm (Score:3, Interesting)

    by ReciprocityProject ( 668218 ) on Tuesday April 13, 2004 @03:55PM (#8851955) Homepage Journal
    For example, the patch for the SQL Slammer worm was released six months before the exploit was launched. This long delay enabled blame to be placed on lax systems administrators for not properly patching system

    What are you talking about, "enabled?" It is their fault for not properly patching the system.

    Ultimately, more systems will be developed using managed code (for example, Java and C#). This will narrow the problem to the bootstrap code those systems rely on without every application developer needing to be hypervigilant about buffer overflows.

    That only makes sense if you think buffer overflows are the only security risk. Using Java doesn't magically make programs secure. In fact, a lot of damage can be done even when you don't have the ability to run arbitrary code on a remote machine.

    Lastly, and most importantly, once the patch was released, the exploit was released the very next day. This wasn't a coincidence where the exploiters just missed having a zero-day exploit. If the patch had been released a week earlier, the worm also would have come out a week earlier.

    So it doesn't matter in the slightest how often you release patches, exploiters will exploit them. Nothing in the article explains how delaying a patch release will make the system more secure.

    [To make the system more secure] . . . software owners would subscribe to an automated patch service. . . . Subscribers would receive a predeployed, encrypted version of the patch.

    That statement sums up the entirety of the useful information in this article. Erase the whole thing and leave that statement. (I'm mean. Sorry.)
  • by Gothmolly ( 148874 ) on Tuesday April 13, 2004 @03:58PM (#8851994)
    "Help, I can't keep up with the patches, because I told my boss that we could cut the budget by switching to Windows, and its turning out to be the opposite!"
  • by merlin_jim ( 302773 ) <.James.McCracken. .at. .stratapult.com.> on Tuesday April 13, 2004 @03:59PM (#8851997)
    This article is not advocating slowing down the security patch cycle; the slashdot title is misleading... the author is advocating slowing down the security patch distribution method.

    He makes the point that as soon as a patch is available, it is reverse engineered and exploited. He is advocating sending out encrypted versions of a patch, get everyone who is always-connected to the internet to automatically download the encrypted version, and once the downloads per second curve decreases by a certain amount (say 95% or so), then you send out the decryption key. Everyone installs the patch simultaneously; and zero-day exploits have as targets only those systems that do not subscribe to the patch service, and use traditional methods to procure patches.
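    (A sketch of the vendor-side trigger being described, where the key ships once downloads per second has fallen 95% from its peak; get-download-rate and publish-key are hypothetical helpers.)

      PEAK=0
      while sleep 60; do
        RATE=$(get-download-rate)              # hypothetical: current downloads/sec
        [ "$RATE" -gt "$PEAK" ] && PEAK=$RATE
        # a 95% decrease from peak means the current rate is at most 5% of peak
        if [ "$PEAK" -gt 0 ] && [ $(( RATE * 20 )) -le "$PEAK" ]; then
          publish-key patch.key                # hypothetical: release the decryption key
          break
        fi
      done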

    This is based on the assumption that zero-day exploit writers reverse-engineer patches. I have found this not to be the case; they usually just exploit the vendor's description of the vulnerability. In many cases, this description is posted to a security mailing list a few days (or weeks, depending on the vendor) before a patch is available; often this is how the vendor learns of the vulnerability in the first place.

    This process is right and proper, as it gives the vendor a huge incentive to correct flaws quickly; many people who discover a vulnerability report it to the vendor, wait for it to be fixed, and then, when a fix is not apparent, report it to the community to give the vendor a sense of urgency. It is a necessary part of the security patch cycle; without it, we would have a privileged few individuals who could write truly devastating worms and virii, for which the vendor may not even be working on a patch.

    SQL Slammer was bad. But imagine it if Microsoft had no intention of correcting the vulnerability at the time it hit. How many more people would it have hit, considering that a significant portion of Microsoft's customers had already patched at that point? How long would it take Microsoft to issue a patch? How would they distribute it with so much of the internet simply unavailable? How long until our infrastructure approached something like normalcy?

    That's what could happen in a world where public forums don't hold vendors accountable for fixing vulnerabilities. And that's exactly the kind of world necessary for it to make sense to slow down your patch distribution.
  • The point is to create questions in the CIO's head about open source and software updates, in an attempt to make open source look bad. Remember, it was just a little while ago that Bill Gates told all the crackers that the only security holes they found were from reading security patches.
  • by Beek Dog ( 610072 ) on Tuesday April 13, 2004 @03:59PM (#8852007)
    They sent out a memo on cardstock (assuming people would actually read it if it was cardstock) telling us to cut down on the number of unnecessary copies and duplicate forms. We have 4500 employees. They told us to use email instead. Then they sent out a memo (regular paper. I think they could hear us laughing already) stating that they were blocking all attachments. I hung that one on my wall. And made copies.
  • by morcego ( 260031 ) on Tuesday April 13, 2004 @04:00PM (#8852020)
    The degree of ignorance demonstrated in this article almost left me speechless. Not only the logic but also the data he uses is so flawed that I should be laughing hard right now, except for the possible consequences of the article.

    Just because a worm was released right after the patch was released, does that mean they used the patch to create the exploit? That is simply being obtuse.

    Real crackers (or whatever you like to call them) are not out there to make their name. They are out there to make a profit. Simple as that. Those are the guys with real motivation (and I mean money) to explore all possibilities. I do agree that the kids who make worms to become famous among their 1337 friends won't spend days working on disassembled code, but you can be very sure someone willing to compromise a specific target (a bank, or a given company) will do that. Add a little social engineering to the mix, and things get real ugly.

    Usually, worms are released after the patch. True. That is usually when the so-called "zero-day" exploit becomes useless, or nearly so. Also, releasing a worm is a good way to divert attention from the other bug the cracker will be exploiting. Believe me, I have seen companies with 400+ employees come nearly to a halt due to patch deployment after a new worm shows up.

    So, slowing down patch releases will slow down new worms? At first glance, yes. It will also multiply the number of active worms in the wild and allow the bad-bad-bad guys to keep making money and cause real trouble, the kind of trouble that can take a company out of the market.
  • yet again. (Score:5, Insightful)

    by happyfrogcow ( 708359 ) on Tuesday April 13, 2004 @04:06PM (#8852093)
    The article seems to be making yet another claim that security through obscurity is better. They say that the patch release contains enough information for an exploit to be created immediately if there isn't one already:

    The patch had the specific information embedded in it that the exploiters needed, and the exploiters already had the expertise and tools required to rapidly make use of the information.

    Slowing it down won't do anything, and they jump to that conclusion in the last line. Slowing it down will have the same effect as speeding it up. They used speeding it up as an example:

    If the patch had been released a week earlier, the worm also would have come out a week earlier.

    The same could be said if it were released a week later.

    The speed of release shouldn't be a factor, only the implementation of distribution. If they can find and fix a problem *right now*, why wait 2 weeks to distribute it? I just don't get why they mention time as an issue, except as flamebait.

    End users are in a dilemma, however, because the current method of deploying patches doesn't allow them enough time before an exploit based on reverse-engineering of the patch can be deployed.

    The only dilemma is that of the producers of software: how fast can we notify end users that a fix is available and that, if they don't install it, they will be vulnerable to some attack?

    If someone understands why the article claims slowing down will be a benefit, please explain it to me. This is pissing me off. It makes no sense. The only thing that makes sense is their statement about a "patch subscription system". But that is crying out "Pay for this service". So they want to make people pay to have quick security patches, and the rest get slow patching? I don't get it. I give up trying.

    It's like saying "hey, what i want to do makes no sense at all! therefore it HAS to be good, new, and innovative. So give us money!"
  • by dre23 ( 703594 ) * <slashdot@andre.operations.net> on Tuesday April 13, 2004 @04:07PM (#8852110)
    This guy doesn't even know the meaning of zero-day. Zero-day means that the programming bug has existed since the software was written. That means that if you discover a bug in Linux 2.6.x kernel, that bug has been around since the Minix days! It has nothing to do with being "elite" -- that's all the kiddie mumbo-jumbo.

    Security "experts" (have you ever met any? oh really?) are confusing topics here. This is the same argument I've seen time and time again in the security world. Here are a few examples:
    1) chroot environments
    2) stack protectors

    In the case of chroot environments, people wanted to protect against 99.9% of remote attacks, because "kiddies" used remote buffer overflows as the primary method of breaking into computers. What happened? Somebody figured out ways of breaking out of chroot environments. It wasn't difficult. Now, kiddies and damn near everyone can read about how to break out of chroot environments. They don't protect anything when the technique/knowledge of how to break them is so widely available.

    In the case of stack protectors, people again wanted to protect against 99.9% of the attacks. In this example, it's clearer, because new attacks became available because of the protection methods. Buffer overflows were 99.9% of the attacks back in the day. When stack protectors started popping up on the scene, tons of papers and research went into heap overflows, format string holes, shared library injection, et al. Now, buffer overflows represent maybe 60-80% of the exploits out there. Since the other methods are now well known, stack protectors are nowhere near foolproof, and they are becoming less so by the day.

    Exploits are found in the wild. Anyone with ASM or C knowledge can find them, though some attacks require different ways of thinking and different coded implementations. There are many attacks against HTTP, for example, that require no knowledge of ASM or C. Anyone with the desire to find an exploit in almost any computer PROGRAM or line of code (and how many lines of code are there?) will find one. Give a person a six-pack of Jolt and a box of Cap'N Crunch cereal, and that person will break code for fun or for profit.

    Slowing down patches just makes the real hacker's results worth more. And software bugs (which is what security holes are) can cause mass hysteria and even human death. Why delay the fix for a bug that could cause the kind of historical software-related disasters we've already seen? I see delaying patches as Armageddon. Who's with me on this?
    • This guy doesn't even know the meaning of zero-day. Zero-day means that the programming bug has existed since the software was written.

      No. Where did you come up with that odd definition?

      That means that if you discover a bug in Linux 2.6.x kernel, that bug has been around since the Minix days!

      By that interpretation, a zero-day bug would be so rare we wouldn't even need a word to describe them! (BTW: Linux has never contained any Minix source code)

      Zero-day really means that the person running the e
  • by Anonymous Coward
    I believe that there are two ways that patches should be managed. If an exploit is already available, the patch should be immediate -- what is there to wait for? If an exploit is not available, a patch should be released at the end of the month, say.

    However, if you have just discovered a vulnerability in your software, odds are some black hat hasn't just coincidentally discovered the same thing, so releasing a patch immediately is not likely to buy you much security. Anyway, releasing a single patch when a bu
  • From the article:

    By far, the most common type of exploit is the buffer overflow, and software vendors are spending millions of dollars to find and prevent these types of vulnerabilities. These vulnerabilities still exist -- they are getting fewer in number, however, and finding them is now much more difficult. Part of my consulting practice to software vendors and their major customers is finding and reporting these types of vulnerabilities. Where I used to be able to do the "find vulnerabilities blindfo

    • I'm always bummed when people pooh-pooh C/C++ because they fail to see the real problem. It is true that C/C++ is probably too "low level" for effective and safe application writing, but that isn't the problem, since one can clearly write bulletproof apps in C/C++.

      The problem is that, like most bugs, the complexity of the language makes it hard to predict problems. Even in memory managed systems like Java and C# you can have crippling errors. You don't "fix" the problem by moving to these types of memory
  • Job Security (Score:2, Interesting)

    by devobelisk ( 768658 )
    I don't know about you guys, but without exploits I would be out of a job. There would be no need for administration or security. So.. bravo Microsoft, keep up the good work. /me golf claps
  • by emurphy42 ( 631808 ) on Tuesday April 13, 2004 @04:14PM (#8852196) Homepage
    Here's my analysis of some claims stated or implied in the article:

    Some exploits are reverse-engineered insanely quickly from patches. (True, with an example cited.)

    Slowing down patches will reduce the total severity of exploits. (Way too vague.)

    Slowing down patches will delay the existence of exploits. (False; not all exploits are reverse-engineered from patches.)

    Slowing down patches in a "Tuesdays only" fashion will make it easier for admins to check for patches on a predictable schedule, and install them soon after they're released. (True as far as it goes, but the reverse-engineers can also check for patches on a predictable schedule; this also totally ignores exploits that aren't reverse-engineered from a patch.)

    Slowing down patches long enough to make sure they don't cause some other severe problem is a good idea. (True, but not mentioned in the article.)

    Providing patches in an encrypted-but-usable form right away, and in a decrypted form later, will help admins keep ahead of reverse-engineers. (Obvious "this is anathema to OSS" aside, how would this actually work? Windows Update patches are already distributed in binary-only form, and they still get reverse-engineered.)

    Managed-code languages like Java and C# will eliminate buffer overflows, which are a common source of exploits, but they're nowhere near universal. (Basically true, probably with numerous exceptions and caveats.)
  • by theLOUDroom ( 556455 ) on Tuesday April 13, 2004 @04:18PM (#8852266)
    While reading the responses to this article I came across an idea that hadn't struck me before:

    What if the reason some of these exploits aren't happening until the patch has been released is because the blackhats are being careful not to break into systems that belong to clueful users (tm)?

    The reasoning would be:
    - I want to break into a computer.
    - I don't want to get busted.
    - I want to make sure whoever I break into isn't going to bust me.
    - I'll pick a computer that obviously isn't having much attention paid to it.
    - If a system isn't getting patched, it probably isn't being checked for intrusions either.


    Now I'm not saying that it accounts for the majority of cases, but it is interesting to consider.
  • ...slowing down, not speeding up, patch releases.

    Isn't this what Microsoft has been doing for years? (rim shot)

    sorry, mark it as Obvious.

    Cb
  • Remote buffer overflows are not as big a problem as many people say they are.

    Computer systems are more likely to get compromised in the following two ways:

    1) Poor choice of passwords. This is a vendor implementation problem. Computers and programs should not allow people to choose bad passwords. There should NOT be a setting to make this optional. If passwords aren't secure, why require them in the first place?
    2) Exploiting a trust relationship of some kind. This is generally a protocol design problem, that qu
  • by caffeinebill ( 661471 ) on Tuesday April 13, 2004 @04:23PM (#8852353) Journal
    Clearly, if it is better to slow down the release of patches, it would be best to NEVER release a patch. That way, there is nothing to "reverse-engineer" and the "exploiters" will have nothing to work with. Better yet, don't even release the software in the first place, and no exploit will ever be found, even by the less lazy, more sophisticated exploiters who find bugs in the original code!!!
  • by EMIce ( 30092 ) on Tuesday April 13, 2004 @04:31PM (#8852458) Homepage
    The slashdot blip about the article is misleading as usual. The author's main point is that it takes time for all systems to be patched, such that someone will reverse engineer the patch and release a rapidly spreading worm/virus before a majority of systems are patched. Simply slowing down the release of patches won't really help unless all vendors pick a regular patch release interval that people actually follow. This is what Microsoft appears to be trying, but the system is too dependent on people staying on schedule. The author says a fundamental change is needed, perhaps allowing encrypted patch downloads for some time, and having the patch installer wait for a key to install those patches simultaneously at a later date. This is clever and leads me to expand on the idea.

    Why not have a standard piece of software that scans your system for the different programs you have installed, and registers those programs as well as your machine's IP address with a server? There could be a centralized server system, or each vendor could have their own server to allay privacy concerns. Encrypted patches could then be auto-downloaded upon release and held until some point in the future. Then simple UDP packets containing decryption keys could be sent to all registered systems -- at least once enough of them have downloaded the patch -- allowing near-simultaneous installation.

    An added bonus would be that if a worm/virus is reported in the wild, patching can commence immediately. This would really put a damper on the ridiculous rate of infection we usually see, currently so rapid in fact that anyone not patched is usually hit within a day. I'm glad most of these worms don't carry destructive payloads; the recent destructive Witty worm killed my weekend. Try recovering data after random parts of the drive have been overwritten.
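    (A sketch of that UDP key-push, using traditional netcat; the port, host list, and file names are invented, and a real system would need signing and retries, since raw UDP is lossy and spoofable.)

      # Vendor: push the decryption key to every registered machine.
      while read host; do
        nc -u -w1 "$host" 9999 < patch.key
      done < registered-hosts.txt

      # Subscriber: wait for the key, then decrypt the pre-downloaded patch.
      nc -u -l -p 9999 > patch.key
      openssl enc -d -aes-256-cbc -pass file:patch.key -in patch.bin.enc -out patch.bin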
  • by menscher ( 597856 ) <menscher+slashdotNO@SPAMuiuc.edu> on Tuesday April 13, 2004 @04:34PM (#8852511) Homepage Journal
    There are occasional cases of a "bad patch" -- one that crashes machines, etc. I've seen a few over the years.

    Now consider what happens when *everyone* installs at the same time. No chance for the vendor to get feedback and pull the patch. Somehow this seems risky....

  • Bad Writing (Score:3, Insightful)

    by wonkavader ( 605434 ) on Tuesday April 13, 2004 @04:48PM (#8852667)
    The substance of this article seems to be:
    Catchy way of describing my idea (albeit misleadingly)
    Why we need my idea
    My idea.

    As you can tell from how Slashdotters are reacting, they never finished the article, or didn't read the whole of the last paragraph, where the idea of encrypting patches and distributing a key days or weeks later is actually stated.

    It's a good idea. It solves bandwidth issues for people with huge patches (Microsoft, in particular).

    But he has so much in the first and second sections and so little in the last section that his idea gets buried. I think he needs to make his ideas less mysterious. Give us some sense of the actual idea ("Slow down patches WITH CRYPTOGRAPHY") or something, so that we actually READ the last paragraph.

    Furthermore, there would need to be a darn easy way to do this for it to work. Microsoft's update feature could do it, as (we can pretend) every Windows box has it.

    If SSH has a vulnerability, you can't release a patch this way unless every OS it runs on has an automatic system -- one which gets patches and keys and installs patches when a key arrives. Red Carpet could be modified to do that, but what do the Plan 9 users use, the MacOS users, the FreeBSD people?

    However, over time, such a system could get wide acceptance for configuration/vendor specific patches and then become useful for applications, as well.

    It would have to be a well-defined OPEN system, and Microsoft (it seems) need not be included -- they'd do their own thing and make the system non-portable.
  • by kasperd ( 592156 ) on Tuesday April 13, 2004 @06:05PM (#8853688) Homepage Journal
    One time (a few years ago, I don't remember exactly when) a flaw was discovered in OpenSSH. It was announced that a bug had been found and that a patch would be released one week later, such that every distributor could release at the same time and administrators would be prepared to install it. That approach was very similar to what this article describes. (Yes, I actually read it.)

    It was a complete failure. It led to some of the worst criticism the project had ever experienced. And they ended up releasing the patch earlier than announced -- not because of the criticism, but because exploit code was being written despite the patch not being made generally available.
    • by Tuck ( 41529 )
      It was June 2002, and here are the details [openssh.com] including a description of the release process.

      At the time of the original announcement it was specified that there was a way to mitigate the problem (Privilege Separation) and at least some of the criticism was because PrivSep didn't work on all platforms.

      The patch was released early because the discoverer released the announcement early. I don't know if there were exploits available at that time.

      Disclosure: I'm one of the OpenSSH developers, but I wasn't a

  • False premise (Score:3, Insightful)

    by Alex Belits ( 437 ) * on Tuesday April 13, 2004 @07:31PM (#8854748) Homepage

    The code exploiters use the same tactics against the software vendors that the software vendors and antivirus companies use on them. They wait until the patch for the vulnerability is released, then they reverse-engineer the patch. This is orders of magnitude easier than finding the vulnerability directly.

    This is wrong, and therefore the whole article based on this premise is nonsense.

    Most of the security flaws are found randomly, or through testing and observation of running software, by various people outside the companies that produce the patches. So the possibilities are:

    1. A black hat hacker/cracker found the vulnerability. Then the exploit is soon to follow, and only after the exploit is found can the fix be issued. Without any doubt, this has to happen as soon as possible, because it will reduce the harm caused by the exploit.
    2. A security researcher or regular user found a vulnerability. Then he will follow an established procedure of notifying the vendor and waiting some reasonable time for the fix to be issued, after which he may publish a detailed explanation of the nature of the vulnerability. This is by far the most common situation -- just look at Bugtraq. Unless the vulnerability is discovered independently, black hats won't start work on exploits until at least the patch is ready (and then they will just have to look at what is patched -- no real need for any complex reverse-engineering), but most likely not until the explanation of the vulnerability is published (and by then only very lazy users are supposed to still be running the unpatched versions). The timing of the patch release is absolutely irrelevant in this case; the clock starts ticking the moment the evidence of the vulnerability reaches black hats, and stops when all the users get the patch. Whether that happens sooner or later is irrelevant, but delaying the patch increases the probability of independent discovery, and then we're back to case 1 -- so the sooner the patch is released, the safer the users are.
    3. Same as above, but the company managed to delay the release of the patch beyond any reasonable time. Then the person who discovered it may publish the vulnerability description, in the hope that it will then be fixed. Whether it is out of concern for the users (who are likely to be hit by an exploit if this vulnerability is found independently), or for himself (he may rely on the software, and fear that he will be attacked in case of independent discovery by black hats), or as an attempt to shame the company into fixing things faster in the future, or out of plain frustration with an uncooperative vendor, it does not really matter -- the vendor should react as soon as possible in either case.
    4. Same as above but the description is published immediately by an uncooperative user/researcher. Needless to say, delays are even more dangerous then.

    So the conclusion is: there is no possible scenario that justifies delaying the release of patches. Only a lazy software vendor would think of such a lame excuse for delays.

"Everything should be made as simple as possible, but not simpler." -- Albert Einstein

Working...