Tags: Bug, Open Source, Security, Upgrades, Virtualization, IT

Xen Cloud Fix Shows the Right Way To Patch Open-Source Flaws 81

darthcamaro writes: Amazon, Rackspace, and IBM have all patched their public clouds over the last several days due to a vulnerability in the Xen hypervisor. According to a new report, the Xen project was first advised of the issue two weeks ago, but instead of the knee-jerk reactions we've seen with Heartbleed and now Shellshock, the Xen project privately fixed the bug and waited until all the major Xen deployments were patched before any details were released. Isn't this the way that all open-source projects should fix security issues? And if it's not, what is?
  • by martyros ( 588782 ) on Thursday October 02, 2014 @09:55AM (#48046823)
    The XenProject security process gives them time to patch their systems (in this case, 2 weeks). If you don't have your stuff patched by then, they won't wait for you.
    • by nman64 ( 912054 ) on Thursday October 02, 2014 @11:20AM (#48047589) Homepage

      Actually, the flaw in bash was also embargoed for a couple of weeks. The problem is that the original patch that was given time to circulate didn't fully fix the issue, and nobody realized that until after the embargo was lifted and the problem became public knowledge. "Responsible disclosure" was exercised in both cases, it just didn't work out well with Shellshock.
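
      For reference, the bash flaw discussed above is CVE-2014-6271 ("Shellshock"): vulnerable versions of bash executed trailing commands smuggled into environment variables disguised as function definitions. Here is a minimal sketch of the widely circulated probe, wrapped in Python (the helper name is illustrative, not from any of the posts):

      ```python
      import subprocess

      def bash_is_vulnerable(bash="bash"):
          """Probe a bash binary with the classic CVE-2014-6271 check."""
          # A vulnerable bash imports x as a shell function AND executes the
          # trailing "echo vulnerable"; a patched bash does not.
          probe_env = {
              "x": "() { :;}; echo vulnerable",
              "PATH": "/usr/bin:/bin",  # so the bare "bash" name resolves
          }
          out = subprocess.run(
              [bash, "-c", "echo probe complete"],
              env=probe_env,
              capture_output=True,
              text=True,
          ).stdout
          return "vulnerable" in out

      print(bash_is_vulnerable())  # False on a patched bash
      ```

      A vulnerable bash prints "vulnerable" when the crafted variable is imported; any bash patched after the disclosure simply ignores the trailing command.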

  • by charles05663 ( 675485 ) on Thursday October 02, 2014 @09:56AM (#48046833) Homepage
    Their hysteria drives the news cycle.
  • Maybe? (Score:4, Insightful)

    by i kan reed ( 749298 ) on Thursday October 02, 2014 @09:57AM (#48046849) Homepage Journal

    I mean, some open source projects don't actually have anyone doing live support and a patch happens when someone "gets around to it".

    And some exploits are out there whether you say anything or not. Slashdot users pretty regularly complain about this with bumper sticker wisdom about "security through obscurity".

    And just because the deployments are all fixed doesn't mean no one has already exploited the hole. Heartbleed (cited in the summary) was fixable within a couple of days on every major Linux distro with a simple update. That didn't mean no one got hacked.

    All in all, sure, it's a good policy, but not the magically perfect, oh-let's-all-be-like-Xen thing the summary makes it out to be.

    • Re: (Score:3, Interesting)

      by kaiser423 ( 828989 )
      It all seems pretty reasonable to me. If known exploits are out there, or if the vulnerability is known, then the fix gets published right away and there's no two-week embargo. But if it appears that no one else knows about the vulnerability, then the two-week wait seems to be a great policy. Give the people who can keep their mouths shut two weeks to get everything patched up and tested.

      I get that a lot of people just chant the "security through obscurity" mantra, but obscurity really is a layer of
      • Re:Maybe? (Score:5, Informative)

        by i kan reed ( 749298 ) on Thursday October 02, 2014 @10:13AM (#48047013) Homepage Journal

        your salted password hash is just an obscured version of your password.

        Negatory. Salted hashes are not reversible without a huge damned rainbow table particular to the salt, and most passwords are hashed, not encrypted.

        There isn't actually a password to recover from that.

      • Hell, a password is a form of security through obscurity -- your salted password hash is just an obscured version of your password.

        Not really; hashing a password throws away information, so it is more secure than storing the password (obfuscated or not) in the case that an attacker compromises the data store.

        Likewise, salting adds information to a given password, so it is more secure than using the unsalted password (obfuscated or not) in the case that an attacker is brute-forcing against known hashes (e.g. rainbow tables).

        Passwords themselves are a pure form of obfuscation: their entire reason for existence is to not be known by others.
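
        To make the hashing-vs-mere-obscurity distinction concrete, here is a minimal stdlib sketch (the function names are illustrative, not from any post): the salt is stored in the clear, yet the digest cannot be reversed into the password, and the same password under a fresh salt produces an unrelated digest, which is what defeats precomputed rainbow tables.

        ```python
        import hashlib
        import hmac
        import os

        def hash_password(password, salt=None):
            """Return (salt, digest) using PBKDF2-HMAC-SHA256, random salt per password."""
            if salt is None:
                salt = os.urandom(16)
            digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
            return salt, digest

        def verify_password(password, salt, digest):
            """Re-derive the digest with the stored salt and compare in constant time."""
            _, candidate = hash_password(password, salt)
            return hmac.compare_digest(candidate, digest)

        salt, digest = hash_password("hunter2")
        assert verify_password("hunter2", salt, digest)
        assert not verify_password("password", salt, digest)
        # Same password, fresh salt -> completely different digest:
        assert hash_password("hunter2")[1] != digest
        ```

        The iteration count (100,000 here) is a tunable work factor meant to slow brute-forcing, not a protocol constant.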

        • Exactly. Good passwords are obscure enough that they make really, really good security. That's kind of my point: obscurity makes a good layer of security and shouldn't just be dismissed by people who like to say "security through obscurity is no security at all," which is what the OP was referring to when he said 'Slashdot users pretty regularly complain about this with bumper sticker wisdom about "security through obscurity"'.

          Of course, bad passwords, like "password" even with salts makes pretty
    • I think the real idea here is that you spot an exploitable bug. You inform those who are responsible for fixing it. You wait (while making sure they are working furiously on a patch). If they are working hard toward a fix, you continue to wait until it is fixed or out in the open anyway. If they blow you off and state that they "will get around to it," you let it fly.
    • I mean, some open source projects don't actually have anyone doing live support and a patch happens when someone "gets around to it".

      True, but delayed publication of the bug isn't really going to affect them.

      And some exploits are out there whether you say anything or not. Slashdot users pretty regularly complain about this with bumper sticker wisdom about "security through obscurity".

      I'm not sure that specific complaint is that common. Certainly, if a project sits on a security bug for months, or even years, then the security-through-obscurity criticism is valid. But the vast majority seem to feel it's all right to wait a couple of weeks to get a patch together and inform the major users; that seems to be the fastest way to protect the most people as quickly as possible.

      And just because the deployments are all fixed doesn't mean no one has already exploited the hole. Heartbleed (cited in the summary) was fixable within a couple of days on every major Linux distro with a simple update. That didn't mean no one got hacked.

      All in all, sure, it's a good policy, but not the magically perfect, oh-let's-all-be-like-Xen thing the summary makes it out to be.

      AFAIK, Heartbleed was fixed before the disclosure.

  • Apples and Oranges (Score:5, Insightful)

    by BenFranske ( 646563 ) on Thursday October 02, 2014 @10:00AM (#48046873) Homepage

    Sure, it's an ideal situation: a bug was identified, fixed quickly, and a patch was pushed out and applied by large users quickly. But Xen is a program which is much more centrally controlled than BASH or OpenSSL. BASH and OpenSSL are more key infrastructure bits than Xen is. What I mean is that they are integrated into FAR more devices and systems, making a silent patch nearly impossible.

    • by QuietLagoon ( 813062 ) on Thursday October 02, 2014 @10:25AM (#48047117)

      ... BASH and OpenSSL are more key infrastructure bits than Xen is. What I mean is that they are integrated into FAR more devices and systems making a silent patch nearly impossible.

      Quite correct.

      Just try to estimate the number of devices affected by Heartbleed and Shellshock. It's probably in the billions.

      As a case in point, a single Xen installation can host hundreds, maybe even thousands, of installations vulnerable to Shellshock and Heartbleed.

      It is truly an apples and oranges comparison.

    • Yeah, they can come up with a fix before they announce, but for as ubiquitous as BASH is, you can't really keep it secret until it's been applied. You'd have to notify various distribution points (every Linux and Unix distributor), creating such a long list of people who are notified that it amounts to a public disclosure.

      At least, I'd imagine that's the case.

  • by jones_supa ( 887896 ) on Thursday October 02, 2014 @10:02AM (#48046897)

    the Xen project privately fixed the bug and waited until all the major Xen deployments were patched before any details were released. Isn't this the way that all open-source projects should fix security issues?

    I do see value in that approach. When a vulnerability is found, it's better to report it discreetly to the authors. Shouting the details to the world in the name of "openness" just causes script kiddies to go wild and nuke a bunch of machines, which could otherwise have been avoided.

    • by Anonymous Coward

      I do see value in that approach. When a vulnerability is found, it's better to report it discreetly to the authors. Shouting the details to the world in the name of "openness" just causes script kiddies to go wild and nuke a bunch of machines, which could otherwise have been avoided.

      In every case I have seen where someone is "shouting the details to the world," the authors responsible for fixing the bug were given plenty of weeks to fix it but ignored it since no one was shouting about it.
      Then, after a few weeks when it didn't get fixed and the authors never responded, the bug reporter decided to go public with it. After the shit-storm was raised, the authors accused the bug reporter of "shouting the details to the world."

  • by mwvdlee ( 775178 ) on Thursday October 02, 2014 @10:05AM (#48046929) Homepage

    The question is whether black hat hackers are aware of the security holes.

    Since Open Source projects communicate in the open (even if just version control commits), I find it quite likely that all major security-related projects are monitored by black hat hackers. The few weeks waiting period gives them ample time to use the security hole.

    • Re:Black hat (Score:4, Interesting)

      by meustrus ( 1588597 ) <meustrus@BLUEgmail.com minus berry> on Thursday October 02, 2014 @10:08AM (#48046957)
      Many open source projects have specific protocols for security flaws that include having an insulated security team communicating in private with private code repositories. But even for those that don't, two weeks of security by obscurity is better than two weeks of no security at all.
    • Re:Black hat (Score:4, Informative)

      by martyros ( 588782 ) on Thursday October 02, 2014 @10:21AM (#48047089)

      Since Open Source projects communicate in the open (even if just version control commits), I find it quite likely that all major security-related projects are monitored by black hat hackers. The few weeks waiting period gives them ample time to use the security hole.

      That's why the Xen Project doesn't put the fix into version control until after the embargo period is over. Only people on the predisclosure list (or those able to listen in) would be able to learn about the vulnerability without doing their own audit of the code to find the bug themselves (which is very expensive).

      There's basically a balance to be struck. All users not on the predisclosure list (and thus who cannot update their systems until the embargo period is over) will continue to be privately vulnerable during the embargo period: anyone who happens to have dug deep enough and found the bug can still exploit it. But as soon as the announcement is made, everyone who hasn't yet updated is publicly vulnerable: Nobody has to search to find the bug, they just have to write an exploit for it. Being privately vulnerable is certainly bad, but being publicly vulnerable is far worse. The goal of the embargo period is to try to reduce the time that users are publicly vulnerable by extending the time they are privately vulnerable. Two weeks has been found to be a reasonable cost/benefit trade-off in our experience.

      • by koinu ( 472851 )

        What if someone who privately knows about the vulnerability gets the idea to exploit various installations of competitors (or even common users!) during the embargo period? Do you trust large enterprises not to misuse their knowledge to their own advantage?

        • What if someone who privately knows about the vulnerability gets the idea to exploit various installations of competitors (or even common users!) during the embargo period? Do you trust large enterprises not to misuse their knowledge to their own advantage?

          Of course that's a risk. But again, is it worse to have a handful of people who are trying to be secretive know about the vulnerability while vendors update and carefully test their software, or for the entire world to know about the vulnerability wh

        • What if someone who privately knows about the vulnerability gets the idea to exploit various installations of competitors (or even common users!) during the embargo period? Do you trust large enterprises not to misuse their knowledge to their own advantage?

          A patch cannot be prepared "privately" without a number of people knowing about it: developers, testers, reviewers, server admins, etc., at each of the organizations that are privy to the predisclosure.

          There is money to be made from that. Gaining access to an exploitable vulnerability before a patch can be distributed broadly is a massive opportunity.

          What if someone starts to sell off their knowledge to blackhatters?

          What if someone sets up what looks like a legitimate business (a fake antimalware) and uses i

    • Indeed. Furthermore, there are usually ways to defend yourself against attacks that don't involve patching. You might set up a firewall, or (in this case) switch to mod_perl instead of mod_cgi.....
  • by meustrus ( 1588597 ) <meustrus@BLUEgmail.com minus berry> on Thursday October 02, 2014 @10:06AM (#48046935)
    I'm sure the Xen project can keep track of all of its major players and inform them ahead of time. And nobody is a "minor player" with something so complex as Xen. It's complex enough that most users probably have support contracts with larger users. It's a lot easier to discreetly distribute a patch to that audience than to literally nearly every SSL server on the internet or every Linux user.
    • by Buzer ( 809214 )

      And nobody is a "minor player" with something so complex as Xen

      There are hundreds, probably thousands, of "minor players". Just look at something like http://lowendbox.com/ [lowendbox.com] or WebHostingTalk. Most of them use OpenVZ because it has less overhead, but Xen is still pretty common as it has fewer limitations (like you can load whichever module you want).

  • Perhaps there isn't a single universal best policy in regard to this. What works best in one situation is not necessarily what works best in all of them.
  • by nedlohs ( 1335013 ) on Thursday October 02, 2014 @10:18AM (#48047069)

    That some idiot decided to publish the prenotification is just more likely when you have something in as widespread use as bash.

  • Unless you lived under a rock, you knew there was a security bug out there (why else would the big cloud providers be forcing a restart of all my VMs?). We didn't know what it was, and because I'm not part of the preferred-client group, my servers didn't get patched prior to disclosure. So for me, no, this isn't sufficient. I prefer the more open way of doing it, versus the closed "preferred client" way they handled this.

    • Unless you lived under a rock, you knew there was a security bug out there (why else would the big cloud providers be forcing a restart of all my VMs?). We didn't know what it was, and because I'm not part of the preferred-client group, my servers didn't get patched prior to disclosure. So for me, no, this isn't sufficient. I prefer the more open way of doing it, versus the closed "preferred client" way they handled this.

      What more open way do you propose?

      With this approach the major vendors got patched first, while you and the attackers both got some forewarning that a vulnerability and patch were going to be made available and got to find out about both at the same time.

      In a more open system the only real difference seems to be the major vendors would end up in the same boat as you, having to race the attackers to apply the patch. It removes a comparative advantage they have but seems to make users less secure in general.

  • by Hizonner ( 38491 ) on Thursday October 02, 2014 @10:30AM (#48047175)

    Predisclosure is very risky. You don't really know which members of your "predisclosure list" have good control over who finds out and which don't. And even with perfect control, if you're going to patch something the size of Amazon at all, you're going to have to tell a lot of people. Are you sure you want every individual who happens to have a certain job at Amazon to have the chance to exploit other people's systems?

    You're not really trusting organizations. You're trusting collections of individuals. And with that many individuals, you are going to have some bad actors. But you'd have a problem even if you could think of organizations as units with perfect policy enforcement. Suppose the NSA comes to you and says they're running a big Xen cluster (they probably are somewhere). And it's critical to national and maybe global security (it could very possibly be). Do they get on the list? How are you going to feel when they use that preannouncement to break into somebody else's system?

    Furthermore, people inferred that there was probably a Xen vulnerability from Amazon's downtime, before the official announcement. So how, exactly, was that better than having the Xen project actually announce that fact, with or without details or a patch?

    Also, it's not so easy to really know what's a "critical deployment". The fact is that, whether you're Xen or you're bash, you don't really know who's using your stuff. You don't really know what's critical. And you definitely don't know who's trustworthy.

    And all of THAT assumes that you even control the disclosure at all. If you find a problem in your software, that problem is "new to you". That does not mean that a bunch of other people don't already know about it. Especially the sort of people who make a business of exploiting these things. So you don't even know for sure who you're depriving of the knowledge.

    There's always an exception. Maybe Xen is that exception. But the idea that predisclosure should be the normal approach for software in general, whether open source or otherwise, is a very dangerous one.

    • Furthermore, people inferred that there was probably a Xen vulnerability from Amazon's downtime, before the official announcement. So how, exactly, was that better than having the Xen project actually announce that fact, with or without details or a patch?

      There was no inferring. Amazon made an oops in their announcement and said that it was due to a bug in Xen. If they hadn't named Xen, then people may have inferred Xen but not known. There are quite a few other parts of the stack that can require system reboots.

      None of the other Xen hosts specified that it was a bug in Xen until the embargo was lifted, and Amazon has indicated that in the future they won't specify which part of the stack is making them do the reboot. AWS gives users notifications of reboots all the time for various reasons, so all that was out of the ordinary was that it was such a large reboot wave that they made an official announcement.

    • I found out about the predisclosure via Reddit a few days before the patch. So by then the cat was out of the bag...

  • by Sheik Yerbouti ( 96423 ) on Thursday October 02, 2014 @10:44AM (#48047315) Homepage

    The problem is "security researchers" want to gain notoriety so they can make more money as consultants and doing paid appearances. The way you do that is irresponsible disclosure that causes a big stir. If you tell someone "I discovered Heartbleed," they have heard of it and will take it as a credibility indicator. If you tell them "I discovered XSA-128," everyone says "never heard of it." It's all about marketing and PR and making bucks. That's how these things end up with catchy names. The people doing this are acting rationally but with questionable motives, and their dedication to actual security should be under great scrutiny.

    • The problem is "security researchers" want to gain notoriety so they can make more money as consultants and doing paid appearances. The way you do that is irresponsible disclosure that causes a big stir. If you tell someone "I discovered Heartbleed," they have heard of it and will take it as a credibility indicator. If you tell them "I discovered XSA-128," everyone says "never heard of it." It's all about marketing and PR and making bucks. That's how these things end up with catchy names. The people doing this are acting rationally but with questionable motives, and their dedication to actual security should be under great scrutiny.

      The fact that Heartbleed made more noise also means more people know they need to apply a patch. What you call "irresponsible disclosure" is to some of us the only responsible way to disclose it.

  • So this situation really was handled with aplomb. However, saying that we "should" handle things this way is about as dangerous as saying we "should" shout out the details of every vulnerability we find. Keeping things internal prevents the community from stepping up. I doubt that all the folks who have dealt with Heartbleed were involved in SSL beforehand. But they were helpful because they knew they were needed, and their ignorance would have hurt us badly. On the other hand, shouting everything out feels

  • If vendors want to withhold detailed notification until a patch is available, I don't much care.

    However, withholding patches until after a "chosen" subset of the user community has casually had the opportunity to fix their shit is great for you if you're chosen and worse if you're everyone else.

    Better to announce in advance that on such a date, at such a time, a patch of import X shall be released to all concerned; let everyone who cares plan their maintenance windows accordingly.

  • by Todd Knarr ( 15451 ) on Thursday October 02, 2014 @11:47AM (#48047881) Homepage

    This is the right way to handle things, yes. The problem is that most researchers are used to dealing with proprietary software vendors whose reaction to any bug report is at best to ignore and bury the report and deny there's any problem, at worst to attack and sue the researchers. The only sane reaction to that situation is to handle things the way Heartbleed and Shellshock were: immediately publicly disclose all the details so that there are too many independent confirmations and too much publicity for the vendor to ignore the situation or deny the problem. When 99% of the time you need to follow one course, it's easy to lose track of when you're dealing with the other 1%.

  • Defense in depth matters; many network-exploitable vulnerabilities can be mitigated before they ever get to the server gear. In the case of bash, only some cases/methods can be blocked. This means that in many cases it's more than just the authors who need to know.

  • by yacc143 ( 975862 ) on Thursday October 02, 2014 @02:49PM (#48049939) Homepage

    Pick any random Linux box, and it will have bash/openssl installed.

    Xen, on the other hand, is rather specialized software, hence you have a couple of mega-users. It's easy to coordinate with a handful of professional organisations that are critically interested.

    Without such usage clusters, it's much more difficult.

  • What it comes down to is agility and design. You don't control other people's knowledge and communications to mitigate your flaws; you ensure that you have a process for managing them effectively, if you care. If a service is that valuable to you, ensure that you have two service-delivery mechanisms. Yes, it's expensive, but you've just said that this channel is that important to you. Or live with the risk and keep the cash; in both cases the ball is in your court.
    The whole censorship approach is created by anal

  • I really see these exploits and the publishing of their existence as a double-edged-sword problem. On one hand, as an administrator of systems, I'd like to know immediately that there is an exploit so I can shut down a service, if I can, so it doesn't get exploited before I can patch it. On the other hand, if knowledge of an exploit gets into the public, then more people are going to try to exploit systems that may be vulnerable, so keeping it quiet may also be beneficial.

    Where do you draw the line, and for what

  • Pre-disclosure and a quarantine period are useful both for infrastructure security bugs like Xen's and for widely deployed security bugs like those in bash and openssl.

    In the case of an infrastructure bug the quarantine period allows the service providers a chance to patch and restart their services before their customers become vulnerable.

    In the case of a widely deployed security bug, it gives the OS/software vendors the chance to investigate the bug, create patches, test/verify their patches, and pre-stage them so when
