Security

Due Diligence? 226

Posted by michael
from the postscript-always-a-nice-touch dept.
ekr writes "The OpenSSL remote buffer overflows discovered at the end of July got a lot of press here on /. But how many people actually fixed their machines? I decided to study this question, and the results are kind of depressing. Two weeks after the release of the bug, over two thirds of the servers I sampled were still vulnerable. Even two weeks after the Slapper worm was announced, a third of the total servers were vulnerable. The paper can be found here in PDF or Postscript."
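The summary doesn't say how the servers were sampled. One naive way to check a host from the outside — not necessarily ekr's actual methodology — is to read the Server banner that 2002-era Apache/mod_ssl builds advertised, since OpenSSL versions before 0.9.6e carried the overflow:

```shell
#!/bin/sh
# Naive external check, NOT the paper's methodology: many Apache/mod_ssl
# servers of the era advertised their OpenSSL version in the Server:
# header. Versions before 0.9.6e carried the overflow.

banner_version() {
    # pull "x.y.z" out of an "OpenSSL/x.y.z" token in a Server: line
    echo "$1" | sed -n 's/.*OpenSSL\/\([0-9a-z.]*\).*/\1/p'
}

check_host() {
    # HEAD request; the hostname is whatever you want to survey
    hdr=$(curl -sI "https://$1/" | grep -i '^Server:')
    ver=$(banner_version "$hdr")
    [ -n "$ver" ] && echo "$1 advertises OpenSSL $ver"
}
```

Note that banner checks over-count vulnerable hosts when vendors backport fixes without bumping the version string — exactly the Red Hat complaint that comes up further down the page.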
  • by Anonymous Coward on Friday November 15, 2002 @06:43PM (#4681318)
    Many systems administrators aren't full-time and have other responsibilities. Keeping up-to-date with every security patch is very time consuming and sometimes management doesn't understand this and doesn't allocate resources for it as long as things are "working".
    • by dfn5 (524972) on Friday November 15, 2002 @07:41PM (#4681853) Journal
      Unfortunately keeping up with patches is a very important part of any security strategy. I am all for letting companies do things their way, but if admins don't allocate more time to security and patching then I'm afraid the government will do more than just recommend actions for Security on the Internet and will start mandating stuff. I for one don't want that to happen.

      Bottom line? Improve your security while you still have the rights to do it yourself.

      • Why should the government care about Internet security?

It doesn't affect the actual life-or-death state of actual real people.

        Who cares. This is a private matter. Let it be settled without nasty government "solutions".

        • What a naive comment. You'd be amazed at the number of computers with medical, financial, academic, or other critical records on public IPs, especially since a technology like OpenSSL facilitates putting sensitive info like that online because it's "secure."

          There are tons of recent examples of computers accidentally storing critical info and records out in the open, much less secured by a barrier like OpenSSL.
          • You'd be amazed at the number of computers with medical, financial, academic, or other critical records on public IPs,
They have absolutely no business being there. But still, that's only data. And private data at that. It's not life-critical. It's not that important. Not enough to warrant government intervention. Not by a long shot.

IF it's very sensitive, don't put it on a public network. PERIOD.
        • It's not a private matter, just as airplane hijackings are not a private matter.

          By gaining access to many hosts on the net an attacker may launch distributed attacks against specific targets.

Imagine if somebody took down all the top-level DNS servers in such an attack. Today the Internet is part of the information infrastructure that government and press use to communicate with the citizens. And if that information channel is at risk, governments will act.

It doesn't affect the actual life-or-death state of actual real people.

          Read as: it doesn't negatively affect the bonuses of the company execs or the board.
    • There's no excuse for this. I'll refrain from saying that everyone should just use Debian, because that's not really true, but my system was patched within a few hours with a simple apt-get, with almost no effort on my part.

      What I will say is that more distributions and operating systems should implement systems like apt, and vendors should keep up with patches as well as the Debian security team does. It really shouldn't be a big effort on any administrator's part. (Though that doesn't absolve administrators from having to be mindful of security issues.)
  • by mr_gerbik (122036) on Friday November 15, 2002 @06:45PM (#4681335)
    This is why I run Windows 3.11. No worries about falling behind and not installing the latest fixes.
    • Re:this is why... (Score:2, Interesting)

      by Anonymous Coward
I recently did some work for a colo client with an unpatched stock FreeBSD 2.6 machine. It was in perfect condition (everything worked fine, no hidden files, no core dumps, nothing funny in the logs going back months). Not only that, but all the default old-skool services were running like rexec or whatever. My mouth was hanging open in disbelief but apparently it never got hacked (or it got hacked so well I couldn't find anything). The red hat linux machines on the same lan that are supposedly kept up-to-date by the clients get hacked regularly.

      I'm starting to wonder if installing linux 0.99 wouldn't be such a bad idea.....
  • by JimmytheGeek (180805) <jamesaffeld&yahoo,com> on Friday November 15, 2002 @06:46PM (#4681343) Journal
I noticed connection attempts from Korea just after the announcement and decided it was time to nuke the boxes from orbit. Not much point in having an O-BSD box you are only mostly sure of.

    I had some angst with RedHat boxes, though. The update mechanism didn't change the reported version number of OpenSSH. Annoying.
    • I don't understand why you felt OpenBSD was less secure than Redhat in this regard. You can patch the software on OpenBSD fairly easily. Heavens, I've updated our OpenBSD box's Apache several times now.
The redhat boxes were different - they occupy a different niche. The oBSD boxes remain, and my (attempted) point about the redhat ones is that they are harder to patch, or at least it's harder to be confident in the patch level, because the updated openssh rpm has an obsolete version designation.

        I think openbsd is wonderful and I buy their swag to support them. I've offered to mirror, but I think they might have enough.
  • Securing OpenSSL (Score:5, Interesting)

    by Exmet Paff Daxx (535601) on Friday November 15, 2002 @06:58PM (#4681455) Homepage Journal
    Some points to consider:
    • Fear. Most Linux users are probably reeling in shock from the recent trojan inserted by elite hacking group ADM into the libpcap distribution [theregister.co.uk]. The old standby argument that 'checking the MD5 signatures' will save you has become null & void; ADM replaced the MD5 signatures too. The only reason the trojan was detected was because of the Google cache! This kind of thing probably has most users afraid to move to anything recently released that hasn't been extensively peer reviewed.
    • Ignorance. Since the Slapper worm only contains offsets for a handful of platforms, many flavors of Linux are 'immune' to automated infection. While blackhat groups have offsets for nearly every implementation of Apache/SSL in existence (yes, even you x86 Solaris people), this threat isn't considered 'immediate' enough to justify the third point:
    • Sloth. Upgrading your OpenSSL isn't as easy as it could be. You actually have to recompile Apache with ./configure flags to link it to the new version of OpenSSL which you just recently downloaded (it's not trojaned... right?). Sounds easy, but for a production server that hasn't been touched in a year, this tends to make people really nervous
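For an old-style mod_ssl Apache 1.3 setup, the recompile that bullet describes went roughly like this. Version numbers, paths, and flags below are illustrative only, and a real build also needs the matching mod_ssl patch applied to the Apache tree first:

```shell
# Illustrative only -- tarball versions and paths are hypothetical,
# and a real mod_ssl build needs the matching mod_ssl patch applied.

# Rebuild OpenSSL (installs to /usr/local/ssl by default)
tar xzf openssl-0.9.6g.tar.gz
cd openssl-0.9.6g && ./config && make && make install && cd ..

# Point Apache's configure at the freshly built library and relink
tar xzf apache_1.3.27.tar.gz
cd apache_1.3.27
SSL_BASE=/usr/local/ssl ./configure --enable-module=ssl
make && make install
/usr/local/apache/bin/apachectl restart
```

On a production box that hasn't been touched in a year, every one of those steps is a chance for something unrelated to break, which is the nervousness the bullet is talking about.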

    All of this points to the fact that there is a fundamental flaw in the way that the Open Source community is securing their software. Putting MD5 signatures on the same server that the software is available from isn't even close to secure - Dave Aitel of Immunity Security keeps hammering on this point in BugTraq. And we're going to see even more of this 'Upgrade Fear' as more and more distributions get trojaned - Slash is probably next on the list.

    We need to look at existing, successful solutions to this problem (like Windows Update) and catch up. Now.
    • Re:Securing OpenSSL (Score:4, Informative)

      by Psiren (6145) on Friday November 15, 2002 @07:10PM (#4681597)
      Upgrading your OpenSSL isn't as easy as it could be.

Any decent distribution will do this work for you. That's what they're for after all. My Debian box was updated no more than 24 hours after I read about the problem, requiring nothing more than an apt-get update on my part.
      • Re:Securing OpenSSL (Score:4, Interesting)

        by Flarners (458839) on Friday November 15, 2002 @07:34PM (#4681788) Journal
My Debian box was updated no more than 24 hours after I read about the problem, requiring nothing more than an apt-get update on my part.
        That's exactly the problem the parent poster was trying to highlight! Sure, you can blindly trust that the Debian servers are secure and un-trojanned and let apt-get install it without so much as checking a key signature. Even if you do configure apt to check the signature, it fetches the public key from the same server as the packages. Thus, an attacker can easily trojan your machine with man-in-the-middle or DNS attacks, sending you to an update site with a trojanned package signed with his own public key. If someone sneaks into the real debian servers, subverting apt-get is as easy as:
        • Upload new "developers" key signature.
        • Sign trojanned package with new signature.
        • Upload trojanned package.
        and it's Game Over. Of course, Red Hat and Mandrake's solutions aren't much better. The key needs to be stored with a trusted entity like Verisign, which is how Windows Update and other commercial-grade updating systems ensure the integrity of their packages. You've never heard of Windows Update being trojanned, have you?
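The weakness being described is fetching the key over the same channel as the package. A minimal defence, assuming you obtained a keyring out of band (copied from install media, or with fingerprints checked over the phone — all paths and filenames below are hypothetical), is to verify the detached signature against that keyring only:

```shell
# Verify a downloaded tarball against ONLY a keyring obtained out of
# band (install media, printed fingerprint) -- never against a key
# served by the same mirror as the package. Filenames are hypothetical.
gpg --no-default-keyring \
    --keyring /mnt/cdrom/trusted-keyring.gpg \
    --verify openssl-0.9.6g.tar.gz.asc openssl-0.9.6g.tar.gz
```

An attacker who owns the mirror can then swap packages, signatures, and published keys all he likes; the verification still fails because the trust anchor never came from his server.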
        • Thus, an attacker can easily trojan your machine with man-in-the-middle or DNS attacks, sending you to an update site with a trojanned package signed with his own public key.
          (...)
          The key needs to be stored with a trusted entity like Verisign, which is how Windows Update and other commercial-grade updating systems ensure the integrity of their packages.


I find this a weak argument. A man-in-the-middle attack would only get slightly harder if they put keys and packages on different (trusted) servers. They're just harder to fake than one-server systems, but by monitoring the regular patterns between all the servers and a client you'd be able to assemble the right phony proxy (or whatever the cracker's using).
        • --wish I knew more about this stuff. Looking at this and trying to get a handle on it. Seems to me the biggest problem is the "whole" key is used for verification, and it doesn't matter where it's stored as it's vulnerable in transit. Well, why can't the key be split into coming from several places? And if the parts don't match and make sense, the entire transaction gets flagged as bogus.

          In other words no one place is trusted with the entire verification, one source or path might get compromised, but all of them simultaneously would take quite the effort. And perhaps if the keyholders were on a freenet sort of arrangement?
        • Re:Securing OpenSSL (Score:4, Informative)

          by rcw-home (122017) on Friday November 15, 2002 @09:10PM (#4682518)
          You've never heard of Windows Update being trojanned, have you?

          No, but they have been cracked before:
http://www.attrition.org/security/commentary/ms16.html [attrition.org]

          It doesn't take a whole lot of imagination to come up with some very scary scenarios of what could have been put there instead of "Hacked by Chinese!" After all, how many people visiting Windows Update are running versions of IE without known run-arbitrary-code security holes?

        • > ...a trusted entity like Verisign...

          "Trusted"? ROFLMAO.
        • by TheLink (130905)
          "The key needs to be stored with a trusted entity like Verisign, which is how Windows Update and other commercial-grade updating systems ensure the integrity of their packages. You've never heard of Windows Update being trojanned, have you?"

          0) How are you sure it hasn't already been trojaned?

1) Verisign just _claims_ that the entity is who it says it is, not that the entity is trustworthy.

2) Verisign screw-up: certs issued to the wrong people; see Microsoft Security Bulletin MS01-017.

3) Microsoft screw-up: there was an issue where the wrong types of certs could be used as CA certs. [Microsoft Security Bulletin MS02-050]

          4) Network Solutions is part of Verisign. NS is not known to be very security conscious. If someone screwed both the certs and the DNS most people wouldn't notice.

          5) Windows Update could become a trojan itself- make sure you read the EULA. e.g. one day you might see stuff like:
          "You acknowledge and agree that Microsoft may automatically check the version of the OS Product and/or its components that you are utilizing and may provide upgrades or fixes to the OS Product that will be automatically downloaded to your computer"

          And how sure are we that it will do it correctly?

          Also note that Microsoft has recently said that they may break some apps.

          So if windows update automatically downloads stuff which breaks some of your apps, it starts getting hard to distinguish it from a trojan.

          --
          I'm not saying Open Source is more trustworthy either. Most software isn't secure. Most Open Source software isn't secure. Most were never designed with security in mind - look at PHP - many of the features that make PHP PHP are actually bad for security. Look at ISC's range of software, see the history and the design/architecture of the software.

          Unfortunately there are only very few who can program securely, and C just makes things worse - even fewer of the few can program securely in C.
      • If only we could all use decent distributions...

I'm sitting here on my beautiful, instantly up-to-date Debian box with a terminal open to a Solaris production server. Now I'm sure there's some way to get the binary distribution of Apache to install, but I'm not sure I'll be able to actually figure it out in less time than it would take me to configure and compile the source. Of course who knows how long that will take if I have to hunt down the Solaris packages for all those "useless" tools like a C compiler that aren't installed by default.

        Yes, my 31337 h4x0r friends, this is one box that won't ever be secure until I convince the boss that SPARCs should be running Linux.

    • Re:Securing OpenSSL (Score:4, Interesting)

      by Albanach (527650) on Friday November 15, 2002 @07:13PM (#4681618) Homepage
There's also the issue of those running servers who, sensibly, either have gcc set non-executable or simply have no compiler installed. It's much more difficult to compile code when there isn't a compiler, and with no gcc available, slapper can't do an awful lot.

      Sure something else might come along that can, but as you point out, if you're running a server that's been up a year, changing things is never comfortable, and if you know slapper isn't going to infect you, there's much less motivation.

      • by mwalker (66677)
        Sure something else might come along that can, but as you point out, if you're running a server that's been up a year, changing things is never comfortable, and if you know slapper isn't going to infect you, there's much less motivation.

        That's exactly what the Blackhats of the world wanted to hear. Of course, they can use the exploit on you, log in, download their BINARY rootkits that don't need a compiler, and use your bandwidth to rape innocent sites like Slashdot with DoS attacks. After deleting your logs, they'll install a sniffer to see what other systems they can compromise using your NIC's visibility, and finally they'll deface your web site and pipe /dev/urandom onto your hard drive's raw interface.

        Have fun!

        It's really a damned shame you don't have a way of getting a securely signed OpenSSL update. While Debian has signature and key checking, it's all on a single point of failure server. You really need a trusted key that comes with the install media, but so far the only O/S which supports this is Windows. People who use Free software don't get install media and are pretty much up the creek...
    • by schulzdogg (165637) on Friday November 15, 2002 @07:47PM (#4681897) Homepage Journal
      The old standby argument that 'checking the MD5 signatures' will save you has become null & void; ADM replaced the MD5 signatures too. The only reason the trojan was detected was because of the Google cache! This kind of thing probably has most users afraid to move to anything recently released that hasn't been extensively peer reviewed.

      False. From the HLUG website [hlug.org] (the group that discovered the trojan):
      Thanks to Antioffline.com for hosting us, and Gentoo's Portage system for catching the trojaned files via checksums.

Putting MD5 signatures on the same server that the software is available from isn't even close to secure

This is true though.

      • False (Score:3, Redundant)

        by mwalker (66677)
        Thanks to Antioffline.com for hosting us, and Gentoo's Portage system for catching the trojaned files via checksums.

        Gentoo had the OLD checksums, which is the reason it was caught. Everyone who checked the new checksums got owned. The Gentoo suspicions were confirmed by checking the Google cache.

        Gentoo basically caught this because they were so far behind the curve that they still had the old distribution. While it's a great argument not to use Gentoo, this kind of security-through-being-behind accident is not a security process, nor is it repeatable, nor should it be considered a success of the checksum system.
        • Re:False (Score:3, Informative)

          by Anonymous Coward
Not sure how gentoo does things, since I tend to avoid cults, but the point here isn't that they were behind the curve, or old. It is that they had checksums of the correct (untrojaned) archives on a different server/media. Thus when somebody tried to build, the modified archive was caught. Your post is just stupid and wrong. Additionally the OpenSSH trojan was caught in the same way, only via the FreeBSD port system which includes the checksums as part of the port. If the downloaded archive doesn't match the checksum, you get a warning. It is a security process, in that by separating the checksums from the archives you can verify them without trusting the checksum from the same repository as the archive itself. With a sufficiently strong checksum this will catch *all* third party, substitution, trojan attempts.
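The ports-style separation described here fits in a few lines: record the checksum once when the port is created, keep it with the ports tree rather than on the download mirror, and refuse to build on mismatch. Filenames below are stand-ins:

```shell
#!/bin/sh
# Sketch of ports-style checksum separation: the known-good sum lives
# with the ports tree (recorded at port-creation time), not on the
# download mirror, so a swapped archive fails verification even if the
# mirror's own MD5 file was swapped too. Filenames are stand-ins.
echo "pretend tarball contents" > libpcap-0.7.1.tar.gz
md5sum libpcap-0.7.1.tar.gz > distinfo      # done once, by the porter

# later, after (re)fetching the archive from any mirror:
if md5sum -c distinfo >/dev/null 2>&1; then
    echo "checksum OK, safe to build"
else
    echo "MISMATCH: refusing to build"
fi
```

The attacker would have to compromise both the mirror and every downstream ports tree at once, which is a much taller order than editing one MD5SUMS file.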
        • Re:False (Score:2, Informative)

          by Anonymous Coward
          I'm afraid you've misunderstood. libpcap 0.7.1, for instance, was released at least four months ago (and probably much more, as it's four months since it was merged [freebsd.org] to the relatively conservative FreeBSD.) Talking about an "old distribution" and a "new distribution" only causes confusion - there is no new distribution, there was no version number increment, so the file should not have changed. When the checksums and file change with no announcement or known reason for the change, it's not a prompt to update your packaging system's checksums, it's a warning that something fishy is going on.

          No right-minded packaging system would have merged in the MD5 changes from the site on blind faith without at least giving the old and new files a cursory diff. The only people that were fooled by the weak strategy of modifying the server's MD5sum file are people who used the equally weak strategy of downloading it at the same time as the file. Everybody with a reasonable checksum-checking build system is safe.

    • Re:Securing OpenSSL (Score:2, Interesting)

      by drunken monkey (1604)
The only thing I've used the MD5 sig for is verifying download completeness. That and generating a session ID string using a secret string in combination with publicly known strings.
But that secret string is only as good as the security of the server it lives on.

I have a higher degree of trust for the gpg key that comes on the RedHat CDs from official RedHat boxes. Nothing against other distros; RedHat is just an example.

      narbey
You could fix the MD5 thing, if you are a distributor, by maintaining copies of the MD5 sigs on a secure internal server and automatically comparing them to the ones on the distribution server every few minutes. If a compromise is detected, the distro server is shut down and somebody is paged to fix it. Meanwhile, everybody who has downloaded the software since the last integrity check is informed by e-mail (anonymous ftp captures e-mail addresses).

      Haven't figured out what to do about mirrors yet.
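A sketch of that monitor might look like this. Only the comparison is real; the hostnames, pager address, and reaction commands are all hypothetical:

```shell
#!/bin/sh
# Sketch of the internal integrity monitor described above. Hostnames,
# the pager address, and the reaction commands are hypothetical.

fetch_public_sums() {
    curl -s "http://ftp.distro.example.org/MD5SUMS" > /tmp/MD5SUMS.public
}

sums_match() {
    # args: trusted_copy public_copy ; succeeds when byte-identical
    cmp -s "$1" "$2"
}

react() {
    # last resort: take the public box down and page a human
    ssh ftp.distro.example.org /etc/init.d/httpd stop
    echo "MD5 list changed on public server" | mail -s ALERT pager@example.org
}

# from cron, every few minutes:
#   fetch_public_sums
#   sums_match /secure/MD5SUMS.trusted /tmp/MD5SUMS.public || react
```

The trusted copy never leaves the internal box, so compromising the public server alone buys the attacker at most a few minutes before the pager goes off.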
  • IMHO the right way is to write a system that disables servers automatically if a vulnerability is known and the administrator did not fix it in n days. Either the functionality is integrated into the daemons, and they check (via http, mail, whatever) every day whether they are affected by a problem. Or each system should run a daemon that controls all server software.
    It should warn the admin before it turns the server off, of course, but a broken unmaintained server is always better than a hacked server.
    • What happens when the author's website becomes obsolete? Security may be up-to-date, but you now have a bad webpage. Will the server shut itself down, or let you run with a possible compromised server? Also, consider the effect if the timing is set in the source. The first of every (month|year|week) will show an enormous load on the update server. What happens when the software can't talk to the update server? Assume everything's fine? When someone hacks the update server with a new, trojaned form, suddenly everybody will have it. Check the server's logs, you now have a list of compromised hosts.

      Other than that, it's a good idea...

    • I don't really like this idea.

I didn't update the version of OpenSSH on a few of my boxes when the last advisory came out. I wasn't using a vulnerable configuration (CHAP disabled) so I didn't really see it as an immediate danger.

Stuff like auto-updates also has issues for the same reason. I can't think of an easy replacement for a responsible admin.

I don't care if responsible admins turn the option off. But the majority of servers are more or less unmaintained. The majority of people don't care about security advisories, and those people can be protected by reasonable defaults that cause the server to deactivate itself if it is a danger to the (obviously naive) owner.

And BTW, don't forget that the owner of the server is not the only one harmed when a hacker compromises a server and starts distributed DoS attacks...
    • This could be a nightmare in itself. What happens if the update server gets hacked... then all of a sudden you have systems either auto-trojaning themselves - or shutting down everywhere.

Not really a good idea... an equivalent to *gasp* "windows update" for the terminal would be nice (RH8 has one, if you pay for or try RHN), where it automatically gives you a list of available updates and allows you to pick them in a dselect-style (debian) format, or something similar.
      • I think that you can prevent this kind of problem if you make the number N large enough. If you, for example, wait 7 days before you turn the server off (and just send a mail to the admin immediately), you can still prevent most worms.
        Shutting the server down is only the last resort, when the sysadmin does not react.
The most important advantage is that the admin knows a) that there is a bug that affects a server he is responsible for, and b) gets a complete list of all affected machines.
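The grace-period logic being proposed is simple enough to sketch. How the box learns about advisories is left out here; the advisory timestamps, the service to stop, and the use of GNU date are all assumptions:

```shell
#!/bin/sh
# Sketch of the N-day grace period described above: warn the admin
# immediately, shut the service down only after GRACE_DAYS without a
# fix. The advisory feed itself is out of scope; epochs come from it.
GRACE_DAYS=7

days_since() {
    # whole days elapsed since an epoch timestamp (GNU date assumed)
    echo $(( ( $(date +%s) - $1 ) / 86400 ))
}

check_advisory() {
    # args: advisory_epoch patched(0 or 1) ; prints the action to take
    if [ "$2" -eq 1 ]; then
        return 0                            # fixed, nothing to do
    fi
    age=$(days_since "$1")
    if [ "$age" -ge "$GRACE_DAYS" ]; then
        echo "shutdown"                     # e.g. /etc/init.d/httpd stop
    else
        echo "warn admin (day $age of $GRACE_DAYS)"
    fi
}
```

With N at 7, the admin gets a week of daily nagging before the last resort kicks in, which still beats most worms that arrive after an advisory goes public.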
How sure are you that the administrators of the servers you sampled are also Slashdot readers? While certainly some laziness could explain your statistics, what of good old-fashioned lack of communication? Just because a message warning about a security hole was sent out doesn't mean it got received, or even read in a timely manner. Besides, maybe most of those administrators were taking three-week vacations just then!
  • We Fixed Ours (Score:2, Interesting)

    by Shackleford (623553)
I worked at an ISP at the time that this was happening and we were quite well aware of these vulnerabilities. We often referred to CERT [cert.org] when looking for vulnerabilities that may have affected us. It was through sites like those that we found out about the problems with OpenSSL and we made the necessary changes. I'm not sure why it was found that many others didn't do what was necessary. Perhaps there are many admins that don't understand that they need to keep themselves up to date on these matters, and of course, they are often busy with many other tasks. It's not easy being an admin. Maybe that's why there is a System Administrator Appreciation Day. [sysadminday.com]
  • Should have upgraded (Score:5, Informative)

    by Hegemony (104638) on Friday November 15, 2002 @07:04PM (#4681520)
    I didn't and paid for it. Within about 3 hours time, some bastard got in, created a superuser, DoS'd nasa.gov, spawned a few irc servers, and started scanning other IP's for the same exploit. Damn they work fast.
  • When to Patch (Score:5, Interesting)

    by Crispin Cowan (20238) <{moc.nawocnipsirc} {ta} {nipsirc}> on Friday November 15, 2002 @07:05PM (#4681527) Homepage
    Readers interested in this topic may be interested in this paper that we presented last week at USENIX LISA 2002 [usenix.org]:

    Steve Beattie, Seth Arnold, Crispin Cowan, Perry Wagle, and Chris Wright
    WireX Communications, Inc. http://wirex.com [wirex.com]
    and
    Adam Shostack
    Informed Security http://www.informedsecurity.com [informedsecurity.com]
    Security vulnerabilities are discovered, become publicly known, get exploited by attackers, and patches come out. When should one apply security patches? Patch too soon, and you may suffer from instability induced by bugs in the patches. Patch too late, and you get hacked by attackers exploiting the vulnerability. We explore the factors affecting when it is best to apply security patches, providing both mathematical models of the factors affecting when to patch, and collecting empirical data to give the model practical value. We conclude with a model that we hope will help provide a formal foundation for when the practitioner should apply security updates.
    Crispin
    ----
    Crispin Cowan, Ph.D.
    Chief Scientist, WireX Communications, Inc. [wirex.com]
    Immunix: [immunix.org] Security Hardened Linux Distribution
    Available for purchase [wirex.com]
    • Downstream Liability (Score:5, Informative)

      by sczimme (603413) on Friday November 15, 2002 @09:28PM (#4682608)

      In the same vein, readers might be interested in a presentation we did at RSA 2002; the topic was Downstream Liability for Attack Relay and Amplification [cert.org]

      (A two-page summary is available here [whitewolfsecurity.com].)

      We described a scenario in which an intruder compromised a system, used it as a jumping-off point, and subsequently caused damages to an individual. The paper focuses on the legal side of things; IANAL but the other two authors - Tim Rosenberg, J.D. and Ron Plesco, J.D. - are.

      I also state my opinion, which is that "...patches should be installed no later than ten (10) calendar days after release of the patch by the vendor". Discuss.
    • Re:When to Patch (Score:2, Interesting)

      by ekr (315362)
      Indeed. In fact, I cite your paper in mine. :)

      Interestingly, if you look at the curves of people's behavior, you can see that the algorithm they're apparently following is pretty different from the one you recommend. Namely, people who are going to upgrade do so pretty much right away (mostly within 10-14 days of the stimulus that seems to have forced them to upgrade). The second interesting observation is that a substantial number of people seem to wait for a vulnerability to be released before they upgrade. One question that would be interesting to ask in the context of best practices would be how long on average vulnerabilities circulate before they are announced.
  • MS way (Score:2, Insightful)

    That's why MS wants to make apps that upgrade themselves automagically

    It's not a bad idea after all, too bad you can't trust MS on anything (They use a good idea bundled with a bad one and a EULA that grants them too much)
And frequently the patches cause more problems than they fix. This is why my company has switched to OSS/Linux (eat that, GNU).

      If you are a MS server admin you are always double checking TechNet and the other available sources because the delay between the patch on TechNet and the WindowsUpdate (critical area) can be as long as 3 weeks sometimes. Now that's sorry as can be. Lockdown tool is a joke.
  • Damn MCSEs (Score:5, Funny)

    by davinciII (469750) on Friday November 15, 2002 @07:15PM (#4681648)
    See, this is exactly what happens when you hire a bunch of paper MCSEs to run your........

    wait, did you say Linux?
    • Re:Damn MCSEs (Score:2, Interesting)

      by LostCluster (625375)
      But I've said it before. Linux is now getting to the point that the clueless people who poorly run Windows boxes are now moving over to Linux. These admins are too lazy or clueless to properly secure ANY operating system, no matter how well it's designed.
  • Due diligence. (Score:5, Informative)

    by Black Parrot (19622) on Friday November 15, 2002 @07:16PM (#4681664)


    This is too easy, folks. Subscribe to your distro's update announcement list, read your mail daily, and apply the relevant patches promptly.

    It's really not that hard. A typical update for me is:

    1. read mail
    2. ncftpget whatever.rpm
    3. rpm -Uhv whatever
    4. read rest of mail
    By far the most time-consuming part is waiting for the RPM to download. Some say that it's even easier for source-based distros.

    • That is correct. That is how you fix it.

However, some people take vacations, or go on projects, and such. Some people even sleep, and a window of vulnerability of only a few hours can create a serious problem. Your advice is perfect for those situations which need it the least - where the system is regularly serviced by a nearly-constantly-available administrator. This by no means covers all situations.

      • On my RH server, I have it set up every 6 hours to do an up2date followed by an autoupdate. I'm out of sync with the official website on average 3 hours. Granted, it's not a high-traffic site, so I don't have to worry too much even if updates do go awry.
This is scary though. What if you get a broken package from up2date? Or if that package breaks something else?

          • Re:The window (Score:3, Informative)

            by Mandi Walls (6721)
            that's the point of up2date. Neither of those things will happen.

            up2date runs with gpg signatures on all packages

            and it checks all dependencies. And, since the packages are built by a company trying to guarantee you can run oracle on your box rather than a couple of dudes in a basement, the packages and their dependencies are correct and current.

            :)

            --mandi

            • Re:The window (Score:3, Informative)

              by jroysdon (201893)
              I've had up2date break a ton of things when it installs a newer version. ypbind and xinetd are two that bit us recently. They both installed and initially tested fine, but there were subtle changes that broke other things (securenets on ypbind, and all our ssl-based email services like spop3 and simap with xinetd). Easy enough to fix, but not something you want automated when you're on vacation.

              We have a cron job at 4am that mirrors the RH update directories (only downloads changes) and then emails us if there are changes. Then we install and test them on a test non-production server to verify first, then install on the production boxes, plus we already have the update(s) on a local box so 'rpm -Fvh ftp://localupdateserver/whatever.rpm' goes really fast (especially when you have a couple dozen boxes to maintain).
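The "mirror, then mail on changes" part of a job like that can be sketched as a small shell function. This is only a guess at its shape: the real job would wget or rsync against Red Hat's update tree and send actual mail, where this just prints.

```shell
# check_mirror DIR STATEFILE: compare DIR's current file listing against
# the listing recorded in STATEFILE.  Print "changes" (where the real job
# would mail the admins) or "no changes", then record the new listing.
check_mirror() {
    dir=$1
    state=$2
    ls "$dir" | sort > "$state.new"
    if [ -f "$state" ] && cmp -s "$state" "$state.new"; then
        echo "no changes"
    else
        echo "changes"
    fi
    mv "$state.new" "$state"
}
```

A 4am cron line would run the mirror step first and then this check, so new packages sit waiting for the test box instead of installing themselves.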
            • I don't buy it; this is just too dangerous. Red Hat may have done a very good job to date, but it still could happen. Of course, being a Windows admin as well, I have a knee-jerk fear of updates and patches, so that could be spilling over to the Linux part of my brain.

      1. read mail
      2. ncftpget whatever.rpm
      3. rpm -Uhv whatever
      4. read rest of mail

      You forgot:
      2.5. rpm --checksig whatever.rpm

      YES. DO IT.
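One way to make the habit stick is a wrapper that refuses to install unverified packages. A hedged sketch - the grep is deliberately loose, and a stricter version would insist on a gpg/pgp signature line, since an unsigned package still reports its digests as "OK":

```shell
# safe_install PKG: run rpm -Uvh only if rpm --checksig likes the package.
# Sketch only -- see the caveat about unsigned packages above.
safe_install() {
    if rpm --checksig "$1" 2>/dev/null | grep -q 'OK'; then
        rpm -Uvh "$1"
    else
        echo "bad or missing signature: $1" >&2
        return 1
    fi
}
```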

  • by AugstWest (79042) on Friday November 15, 2002 @07:19PM (#4681678)
    If you haven't yet, you should definitely check out nessus [nessus.org].

    It'll scan your machines for known vulnerabilities and give you pointers on how to go about taking care of any that it finds. It's also got built-in updating to pull in the latest exploits.

    The clients are even getting pretty spiffy these days, and the project has matured very rapidly.
  • by mao che minh (611166) on Friday November 15, 2002 @07:28PM (#4681748) Journal
    Perhaps Linux users and administrators have grown overly comfortable due to the long reign of tight security and lack of virii? Until rather recently, disclosed security advisories for FOSS could be neglected for substantial periods of time without worry. The world's hackers mostly took aim at easily exploitable IIS and Exchange servers, flimsy Win32 email clients, and major routers (like AT&T backbone routers to Asia and such). Largely ignored were the hordes of vulnerable Linux/BSD web and mail servers on campus networks and elsewhere (mostly left vulnerable due to neglect, not inherent OS issues). However, the desire to orchestrate large-scale DDoS attacks and an exponential increase in the use of Linux systems have caused many hackers to take interest in conquering new ground.

    All of these years of rock-solid security have made us complacent. We have to remember that, while Linux and OSS may be inherently secure, and Linux's modular design works as a failsafe against complete failure, we are still just as vulnerable if we don't remain vigilant.
    • by AugstWest (79042) on Friday November 15, 2002 @07:45PM (#4681879)
      Perhaps Linux users and administrators have grown overly comfortable due to the long reign of tight security and lack of virii?

      I think this is a complete fallacy. Most default Linux installations, when left alone on a cable/DSL connection, have been hackable for years now. I can remember when I installed RedHat 6.2 on my gateway machine without having time to do the updates, and before midnight that night the box had been hacked.

      I think that a lot of Linux users don't even realize when they've been hacked, either. Even the automated scan-and-exploit tools these days are becoming quite good at getting themselves installed on a system quietly. Unless you watch your logs on a daily basis, you often have no idea what is actually going on with your system.
  • From page 4 of the PDF:
    • As should be noted from Figure 4, updating on Linux was rather easier than updating on *BSD, since all of the *BSD updates required compilation, either of the base system or from the ports/packages collection.
    Huh?

    # pkg_add ftp://ftp.freebsd.org/pub/FreeBSD/ports/packages/All/openssl-0.9.6g.tgz

    Binaries installed -- no compilation required!

  • by dfn5 (524972) on Friday November 15, 2002 @07:32PM (#4681773) Journal
    I find that other admins patch by necessity, i.e., if something is broken, patch it. If not, leave it alone.

    However, I read a stat somewhere that said that a large majority of security breaches could have been prevented by merely keeping up with patches. Therefore my philosophy is to create a patch schedule. And because I'm on Solaris, things like OpenSSL are third-party to the OS, so I upgrade immediately. I rebuilt my Solaris RPMs of OpenSSL that day and had them deployed to all my machines. Other things like GnuPG, IPFilter, OpenSSH, apache, sendmail, etc. all need to be upgraded ASAP.

    So all you Slashdot readers who posted that you have nothing to do but read Slashdot in that downsizing article [slashdot.org], get off your butts and start patching. That should keep you busy full time.

    • by mao che minh (611166) on Friday November 15, 2002 @07:43PM (#4681866) Journal
      Yes. I think that services like the "Red Hat Network" will greatly benefit end users and admins alike in this respect. Having a service that organizes errata (updates) and informs you what the current security threats are, and then shows you what systems you own/administer are vulnerable is very helpful. It gives end users an almost hands-free way of keeping themselves safe (as safe as they can in terms of updates, anyways), and can point out things that admins might have missed. I really like it.
  • by TheFlu (213162) on Friday November 15, 2002 @07:46PM (#4681895) Homepage
    Run GRAB [runlevelzero.net] or one of the other automatic updaters for Linux and never worry about this problem again. GRAB has saved me countless hours of updating all of the Linux boxes I administer.
  • because this means that for any given attack, there will always be thousands or millions of vulnerable machines out there, all of which will (potentially) be made into DDOS zombies.

    I'm tempted to think that what's needed is a bunch of vigilantes who go across the net and wipe any machine that's still vulnerable to any given bug after a while - but even this is not a solution, as some exploits/rootkits, after cracking a machine, install the fix to 'close the door behind them'.
    • because this means that for any given attack, there will always be thousands or millions of vulnerable machines out there, all of which will (potentially) be made into DDOS zombies.

      So we'll innovate and develop protocols and mechanisms which are resistant to DDOS attacks, and counter-worms that attack the malicious code and clean it out.

      It isn't doomsday, it's just another bump in the upgrade cycle.
  • Two words (Score:4, Insightful)

    by npcole (251514) on Friday November 15, 2002 @08:00PM (#4682027)
    Package management.

    For the part-time server admin, who doesn't have time to read up on the latest patches, recompile all applications and their libraries from source, work out dependencies etc., the work of the debian security team is invaluable.

    apt-get update
    apt-get upgrade

    Now, I know that this relies on the hard work of others; all I can say is, "thanks, I really appreciate it."

    Disclaimer:

    This post is not meant to be debian propaganda --- no doubt other vendors are doing a good job too. The point is that this is the kind of problem that a package system solves, and solves well. (As long as there is a trusted source.)
  • It's simple really (Score:3, Interesting)

    by Anonymous Coward on Friday November 15, 2002 @08:01PM (#4682032)
    Hundreds of machines running ssl. A few administrators. Often upgrading ssl means recompiling ssh and other applications, so it is NOT a simple

    rpm -Uvh
    or ./configure
    make
    make install

    (If I see these simplistic answers one more time I will puke)

    Other things are seen as more important than security by the people that employ the sysadmins.
    You can either

    A) piss people off by not fixing their problems, argue with them about how some mysterious security shit is more important, waste valuable time, maybe get the patch done and avoid getting hacked. For this you're not rewarded, you're seen as inefficient - you didn't get some dipshit's email to work or something (and this dipshit may control your life in the organization).

    B) Fix dipshit's email, do whatever else people think is important. Don't waste time making your case. Do mention that you should be working on this mysterious security patch thing no one wants to hear about though... Get hacked - holy SHIT everyone knows that's a bad thing, no more "can I please reboot your machine", no more questions at all, you get to take that piece of shit computer and wipe it clean, upgrade everything, piece of cake. Then for the next few days no one bitches that you are patching stuff all over the building - they WANT you to so they don't get screwed like the poor schlub that got hacked. End result of not being "Duly Diligent"? You are a hero, everyone knows you work way too hard, etc etc.

    The age old problem for sysadmins - if you are truly good people think you don't do anything, stuff just works.

    I have ten years experience at this, I manage sysadmins at my organization now. I spend a lot of time trying to cheerlead for the important work they do. Still we often have these kinds of problems. I've found it does work to sometimes just put a smile on your face and say "what, me worry?" and let some things break. Raises follow.

    Fuck people, fuck them all.

    Ah, a little sysadmin recovery makes me feel so much happier.

    And no, none of us read your stupid email. Your lives are boring, face it!

    -A Grouchy Young Man
  • by mcrbids (148650) on Friday November 15, 2002 @08:24PM (#4682212) Journal
    I, too, did my own little survey.

    Many webservers reporting themselves as "Apache 1.3.22" are in fact RedHat RPM based distros using the Apache RPM with the latest patches applied. (that fix the vulnerabilities mentioned)

    Doing a simple "Head" WILL NOT pick up on this detail. To be really sure, you have to run exploit code, and that gets you into all kinds of sticky legal issues.

    A while back, I wrote a quick script that did much as described in this article, only I went one step further and shot an email to whoever whois reported as the administrative contact.

    I got a lot of angry calls that morning, and I quickly stopped. I'd guess that of those sites that I figured were vulnerable, at least 8 of 10 were not.
    • by ekr (315362) on Friday November 15, 2002 @10:09PM (#4682814)
      Doing a simple "Head" WILL NOT pick up on this detail. To be really sure, you have to run exploit code, and that gets you into all kinds of sticky legal issues.

      As described in the paper, I directly checked for the vulnerability. This allowed me to determine when implementations had been patched rather than upgraded.
    • by Mandi Walls (6721) on Friday November 15, 2002 @10:14PM (#4682844) Homepage Journal
      Exactly. I think we got into this a bit during the SSL thing. When APIs and features change between releases, you can't always upgrade to the latest and greatest without pissing off your developers (or, in some cases, users on other platforms whose clients don't support various random features of X protocol). Therefore, you patch the old stuff to a safe level.

      So, if you're running a service scanner across your network and it gets an OpenSSH_3.1 signature, that doesn't mean that particular machine is vulnerable; it may mean the vendor decided that putting a new version on there would compromise the stability of other software on the system.

      So, if you're going to look for vulnerabilities on your network, make sure your scanner looks for the vulnerability, not a version number on the software. (and therefore don't use SARA as your scanner... "blah.blah.blah.blah may be vulnerable to the bend-server-over-the-table but we're not sure, so click this link...." UGH!)

      --mandi
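The version-string problem is easy to demonstrate. A banner grab only recovers whatever string the vendor chose to ship, and a backported fix doesn't change it; a toy extractor (sample header invented for illustration) shows how little you actually learn:

```shell
# banner_version HEADER: pull the OpenSSL version string out of an Apache
# Server header.  A distro that backports fixes keeps shipping the old
# string, so two hosts printing the same version can differ in whether
# they're actually vulnerable.
banner_version() {
    echo "$1" | sed -n 's/.*OpenSSL\/\([^ ]*\).*/\1/p'
}

banner_version "Server: Apache/1.3.22 (Unix) mod_ssl/2.8.5 OpenSSL/0.9.6b"
# prints 0.9.6b -- which says nothing about whether patches were backported
```

Which is why probing for the vulnerability itself, as the paper's author did, is the only trustworthy signal.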

    • Many webservers reporting themselves as "Apache 1.3.22" are in fact RedHat RPM based distros using the Apache RPM with the latest patches applied. (that fix the vulnerabilities mentioned)

      It certainly seems to be the case that RedHat prefers to backport security fixes to the version of an application or library that they are already distributing, rather than upgrading their distribution to a new version. They are very conservative about maintaining backward compatibility within a major version.

      The only exception I can think of to this is that their updates to RedHat Linux 6.2 now install bind9 rather than a patched bind8; I suspect that this may be either because of ISC's broken patch distribution procedures, or because they've moved to bind9 on their newer major versions and can't justify the work in maintaining bind8.

  • Those of us who are security conscious were still pushing SSH (any kind) over telnet as late as 2000. Never mind signed distributions or one time passwords; most people can't even be bothered to type in a password. Their idea of one-time password is typing it into their browser a single time before using frontpage to manage their site forever.

    What's funny is that people spend so much time dealing with TCO issues comparing Windows vs Linux, when these little security snafus cost huge amounts of time. Especially for smaller operations who build their machines from scratch, rather than periodically roll up their own images and then patch sets against them, the time lost having to rebuild a set of machines wrecked by such issues could cost tens of thousands of dollars.

    Companies, remember that, when you are presented with the extra cost of a security professional. Pay up front, or pay in aggravation later. Nothing beats having a pro-active advocate for security within your organization, if you have any significant IT assets.
  • I tried (and failed) to persuade the Pine development team that the build script for the upcoming 4.50 version of pine should refuse to build against known vulnerable versions of OpenSSL.

    Lots of admins are probably thinking, "well, I don't run a HTTPS service, so I won't bother with this yet." But if many months later they decide to run such a service, they may forget that they should get an updated version.

    Pine, however, is something that lots of admins who don't run "servers" do upgrade with each release and it does make use of openssl. So this would be a place to catch such systems.
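A configure-time guard of the sort proposed could be as small as a version table. A sketch, with the vulnerable-version list drawn from the advisories as best I recall (the overflows were fixed in 0.9.6e and 0.9.7-beta3) - check it against the advisory before relying on it:

```shell
# vulnerable_openssl VERSION: succeed (exit 0) if VERSION is one of the
# releases hit by the July 2002 overflows.  The version list here is my
# reading of the advisories, not authoritative.
vulnerable_openssl() {
    case $1 in
        0.9.6|0.9.6[a-d]|0.9.7-beta[12]) return 0 ;;
        *) return 1 ;;
    esac
}

# A build script would then do something like:
#   ver=$(openssl version | awk '{print $2}')
#   vulnerable_openssl "$ver" && { echo "refusing to build against $ver"; exit 1; }
```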

  • ... that keeps probing my servers!!
  • Click 'Install' (Score:3, Interesting)

    by KFury (19522) on Friday November 15, 2002 @09:40PM (#4682656) Homepage
    On the Mac, I just set 'Software Update' to run daily, and I click 'install' when it says there's a security fix.

    By default, users only have to click one button (the default button) to keep their Mac-flavored BSD secure.

    And they don't have to subscribe to mailing lists or be security geeks. They could be your mom and still get it right.

    Not trying to rip on your mom.

    I don't even know her!

    No, seriously. That wasn't me, that time at the Quaker Steak n Lube!
  • by wuchang (524603) on Friday November 15, 2002 @10:39PM (#4682983)
    The paper looks at version numbers but does not account for back patches to old versions that fix the bug. I'm running a patched Mandrake https server which returns a version of 2.8.7/0.9.6c. Slapper requests correctly return an error message. What the paper needs to do is issue the exploit itself to determine whether or not things have been patched. Otherwise, the author overcounts the vulnerable systems out there.
  • by tz (130773) on Friday November 15, 2002 @11:06PM (#4683125)
    I also checked the browsers, mainly command-line ones, a little while ago when the IE cert chain vulnerability was found. Most (wget, links, lynx) didn't bother to check the chain. Some didn't check anything at all, so any proxy server could spoof any page.

    If you can see https://www.amazone.com, your browser is badly broken. amazone.com points to amazon.fr - but the browser should match the cert to the DNS name.

    Opera on the Zaurus was also vulnerable. Apple doesn't install any certificates in their OS X or Darwin OpenSSL directory.

    One thing that happened between SSLeay (the original project) and OpenSSL is that the certificate chains were NOT installed by default, so everyone had a library but no way of checking certificates, since you need root certificates to check a site certificate. A second thing, probably worse, is that the old default was to return an error if the certificate couldn't be validated. Now the default is TO GIVE NO ERROR IF THE CERTIFICATE CANNOT BE CHECKED. It would be better to give an error that has to be overridden, which would force developers to take a look and actively disable security.

    Curl was the only one that included any checking, but it required manually installing certs and specifying an option to turn it on. It would SILENTLY connect to SSL sites without security.

    Mozilla was fine, and Konqueror fixed any problem it had, but the open source community should be embarrassed: the rest of the browsers' security was not just flawed like IE's, but DISABLED without any notice to that effect, or NONEXISTENT.
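What "checking the chain" means is easy to see from the command line, assuming a reasonably current openssl binary: with a throwaway self-signed certificate, verification should fail until the issuer is explicitly trusted.

```shell
# Generate a throwaway self-signed certificate (names are arbitrary).
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj /CN=example.test \
    -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null

# With no trusted root for it, verification fails...
openssl verify "$tmp/cert.pem" >/dev/null 2>&1 \
    && echo "untrusted: accepted" || echo "untrusted: rejected"

# ...and succeeds only once the issuer is explicitly trusted.
openssl verify -CAfile "$tmp/cert.pem" "$tmp/cert.pem" >/dev/null 2>&1 \
    && echo "trusted: accepted" || echo "trusted: rejected"
```

A client that ships no root certificates never gets to the -CAfile step, which is exactly the silent-failure situation described above.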
  • gah (Score:3, Funny)

    by nomadic (141991) <nomadicworld&gmail,com> on Saturday November 16, 2002 @03:46AM (#4684353) Homepage
    But how many people actually fixed their machines? I decided to study this question, and the results are kind of depressing.

    If you're depressed by that, you might want to see a psychiatrist. I mean, you shouldn't have that kind of reaction to such a minor issue.
  • The ISS [iss.net] folks send out their periodic newsletter about new vulnerabilities. It's pretty depressing, not just because of the number of bugs, but because most of them are the SAME BUGS - BUFFER OVERFLOWS. How long have buffer overflows been a known security risk? Why are we still putting up with them?
    • They were certainly well-known when I was in college in the mid 70s, but the PL/C dialect of the PL/I checkout compiler corrected mistakes like that at run-time. (OK, it often fixed them incorrectly, but at least it wouldn't overrun an array.) And our professors dinged us for writing programs where that happened, and made us run the programs on input decks that were maliciously designed to check for programs that overflowed their buffers.
    • They were certainly well-known when K&R wrote their books on C which warned you to be careful about bounds checking when using pointers and arrays.
    • They were certainly well-known in the early 80s when everybody started complaining that the gets() and scanf() routines made it easy to overrun buffers on input when you weren't doing it by hand.
    • They were certainly well-known in the late 80s when the Morris Worm wandered around a lot of the machines in the internet.
    • They were certainly well-known when the C++ string-handling libraries were designed to NOT overrun buffers, and when Java was designed to not even have pointers, and had array objects that checked bounds for you.
    • There are enough software engineering CASE tools that try to find problems more complex than lint looks for, though perhaps array bounds checking isn't something they check effectively.
    I like C - I really like it. It's time to stop using it. It's time to stop shipping code that has array bounds problems, and security code that hasn't been proofread for them. And it's time to stop using programs that run as root when they don't need to. This isn't the 80s any more.

    There are other bugs out there - a popular attack is to try to abuse dotdots in path names, which there's more excuse for forgetting to check, and there are things like race conditions that are genuinely hard to check for (e.g. what happens when somebody's ripping up your temp files while your program is running), though checking return codes on system calls and doing something appropriate about failures is a good start.

    • Seriously, what would you suggest?

      I personally dislike C - programming securely in C feels like clearing a minefield inch by inch.

      So any suggestions?

      Haven't seen docs on programming securely in Lisp. Many Lisp coders like to mix data and code, that seems scary, but I'm very very new to Lisp so is that safe? There's little out there to suggest it is or isn't.

      Forth seems to have about the same problems as C - buffer overflows. Data-code mixing (but a Forth coder said a solution is to keep dictionaries separate).

      Many of the other languages are more high level, very useful and recommended for certain apps, but not suitable for the low level system programming area.

      The languages have to be able to hook to C apps very easily.
  • by wowbagger (69688) on Saturday November 16, 2002 @11:57AM (#4685522) Homepage Journal
    I know of a Linux system that logs and reports intrusion attempts by CodeRed/Nimda, Slapper, et al., and mails a report to a system admin every morning.

    The system admin wasn't pursuing these reports. I asked why.

    His response - "Well, those are attempts to exploit a Windows server, and this is a Linux box, so they don't matter."

    I made the counterpoint: "If one of your systems was infected, wouldn't you want to be told about it?"

    If every systems admin would take the time to track down the Code Red attempts on their systems, and notify the responsible parties wherever possible, then a lot of the unpatched systems would be shut down (if not by their administrators, then by the ISP supplying connectivity).

    I just don't understand an admin with an attitude like that.
    • "If one of your system was infected, wouldn't you want to be told about it?"

      Yes, I would. But how many of these reports did your sysadmin get? Probably quite a few, daily. What would his manager say if he spent most of his time helping other people administer their systems instead of doing his job? If it's an occasional occurrence, you can help, but the infections are endemic, and helping a few is unlikely to change anything, as there are thousands of new virus cultures sold every day...
