Xen Cloud Fix Shows the Right Way To Patch Open-Source Flaws
darthcamaro writes: Amazon, Rackspace and IBM have all patched their public clouds over the last several days due to a vulnerability in the Xen hypervisor. According to a new report, the Xen Project was first advised of the issue two weeks ago, but instead of the knee-jerk reactions we've seen with Heartbleed and now Shellshock, the Xen Project privately fixed the bug and waited until all the major Xen deployments were patched before any details were released. Isn't this the way all open-source projects should fix security issues? And if it's not, what is?
"Gave them time" not "Waited" (Score:5, Funny)
Re:"Gave them time" not "Waited" (Score:5, Interesting)
Actually, the flaw in bash was also embargoed for a couple of weeks. The problem is that the original patch that was given time to circulate didn't fully fix the issue, and nobody realized that until after the embargo was lifted and the problem became public knowledge. "Responsible disclosure" was exercised in both cases; it just didn't work out well with Shellshock.
But the media would lose... (Score:4, Insightful)
Maybe? (Score:4, Insightful)
I mean, some open source projects don't actually have anyone doing live support and a patch happens when someone "gets around to it".
And some exploits are out there whether you say anything or not. Slashdot users pretty regularly complain about this with bumper sticker wisdom about "security through obscurity".
And just because the deployments are all fixed doesn't mean no one exploited the bug first. Heartbleed (cited in the summary) was fixable within a couple of days on every major Linux distro with a simple update. That didn't mean no one got hacked.
All in all, sure, it's a good policy, but not the magical, perfect, oh-let's-all-be-like-Xen thing the summary makes it out to be.
Re: (Score:3, Interesting)
I get that a lot of people just chant the "security through obscurity" mantra, but obscurity really is a layer of
Re:Maybe? (Score:5, Informative)
your salted password hash is just an obscured version of your password.
Negatory. Salted hashes are not reversible without a huge damned rainbow table particular to the salt, and most passwords are hashed, not encrypted.
There isn't actually a password to recover from that.
Re: (Score:2)
Any security can be broken with enough time and effort, like the tens of thousands of years of computer time it takes to build that rainbow table.
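To put rough numbers on the parent's point, here is a back-of-envelope sketch. The hash rate and character sets are illustrative assumptions, not benchmarks; the commenter's "tens of thousands of years" depends entirely on password length and charset:

```python
# Back-of-envelope time to precompute a hash table for ONE salt value.
# All figures below are illustrative assumptions, not measurements:
#   - 1e9 hashes/second on a single fast GPU (assumed)
#   - two candidate keyspaces for comparison
rate = 1e9  # hashes per second (assumed)

weak = 36 ** 8      # 8 chars, lowercase letters + digits
strong = 95 ** 10   # 10 chars, full printable ASCII

print(f"weak:   {weak:.2e} candidates, {weak / rate / 60:.0f} minutes")
print(f"strong: {strong:.2e} candidates, {strong / rate / (3600 * 24 * 365):.0f} years")
```

Under these assumptions the weak keyspace falls in under an hour, while the strong one takes centuries per salt, which is why both unique salts and password strength matter.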
Re: (Score:2)
Hell, a password is a form of security through obscurity -- your salted password hash is just an obscured version of your password.
Not really; hashing a password throws away information, so it is more secure than storing the password (obfuscated or not) in the case that an attacker compromises the data store.
Likewise, salting adds information to a given password, so it is more secure than using the unsalted password (obfuscated or not) in the case that an attacker is brute-forcing against known hashes (e.g. rainbow tables).
Passwords themselves are a pure form of obfuscation: their entire reason for existence is to not be known by others.
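For readers following the salting/hashing argument above, a minimal sketch using only Python's standard library (the password strings and iteration count are arbitrary choices for illustration) shows why a salted hash is one-way rather than merely an obscured password:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest). The digest is one-way: there is no
    'decrypt' step, which is why a hash is not just an obscured password."""
    if salt is None:
        salt = os.urandom(16)  # per-password random salt defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, expected):
    """Re-derive the digest and compare in constant time.
    Note we never recover the original password from storage."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)

salt, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("hunter2", salt, digest))                       # False
```

Two users with the same password get different digests because their salts differ, which is exactly what forces an attacker to build a separate table per salt.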
Re: (Score:1)
Of course, bad passwords like "password", even with salts, make pretty
Re: (Score:3)
Re: (Score:2)
I mean, some open source projects don't actually have anyone doing live support and a patch happens when someone "gets around to it".
True, but delayed publication of the bug isn't really going to affect them.
And some exploits are out there whether you say anything or not. Slashdot users pretty regularly complain about this with bumper sticker wisdom about "security through obscurity".
I'm not sure that specific complaint is that common. Certainly if a project sits on a security bug for months, or even years, then the security-through-obscurity criticism is valid. But the vast majority seem to feel it's all right to wait a couple of weeks to get a patch together and inform the major users; that seems to be the fastest way to protect the most people.
And just because the deployments are all fixed doesn't mean no one exploited the bug first. Heartbleed (cited in the summary) was fixable within a couple of days on every major Linux distro with a simple update. That didn't mean no one got hacked.
All in all, sure, it's a good policy, but not the magical, perfect, oh-let's-all-be-like-Xen thing the summary makes it out to be.
AFAIK Heartbleed was fixed before the disclosure
Re: (Score:1)
Apples and Oranges (Score:5, Insightful)
Sure, it's an ideal situation: a bug was identified, fixed quickly, and a patch was pushed out and applied by large users promptly. But Xen is a program that is much more centrally controlled than BASH or OpenSSL. BASH and OpenSSL are more key infrastructure bits than Xen is. What I mean is that they are integrated into FAR more devices and systems, making a silent patch nearly impossible.
Re:Apples and Oranges (Score:5, Insightful)
... BASH and OpenSSL are more key infrastructure bits than Xen is. What I mean is that they are integrated into FAR more devices and systems making a silent patch nearly impossible.
Quite correct.
Just try to estimate the number of devices affected by Heartbleed and Shellshock. It's probably in the billions.
As a case in point, a single Xen installation can host hundreds, maybe even thousands, of installations vulnerable to Shellshock and Heartbleed.
It is truly an apples and oranges comparison.
Re: (Score:2)
Yeah, they can come up with a fix before they announce, but as ubiquitous as BASH is, you can't really keep it secret until it's been applied. You'd have to notify every distribution point (every Linux and Unix distributor), creating a list of notified parties so long that it amounts to a public disclosure.
At least, I'd imagine that's the case.
Re:Yes, Exactly (Score:2)
Well said.
It is a valid strategy (Score:3)
the Xen project privately fixed the bug and waited until all the major Xen deployments were patched before any details were released. Isn't this the way that all open-source projects should fix security issues?
I do see value in that approach. When a vulnerability is found, it's better to report it discreetly to the authors. Shouting the details to the world in the name of "openness" just causes script kiddies to go wild and nuke a bunch of machines, which could otherwise have been avoided.
Re: (Score:1)
I do see value in that approach. When a vulnerability is found, it's better to report it discreetly to the authors. Shouting the details to the world in the name of "openness" just causes script kiddies to go wild and nuke a bunch of machines, which could otherwise have been avoided.
In every case I have seen where someone is "shouting the details to the world," the authors responsible for fixing the bug were given plenty of weeks to fix it but ignored it, since no one was shouting about it.
Then, after a few weeks, when it didn't get fixed and the authors never responded, the bug reporter decided to go public. After the shit-storm was raised, the authors accused the bug reporter of "shouting the details to the world."
Black hat (Score:3)
The question is whether black hat hackers are aware of the security holes.
Since Open Source projects communicate in the open (even if just version control commits), I find it quite likely that all major security-related projects are monitored by black hat hackers. The few weeks waiting period gives them ample time to use the security hole.
Re:Black hat (Score:4, Interesting)
Re: (Score:2)
Re:Black hat (Score:4, Informative)
That's why the Xen Project doesn't put the fix into version control until after the embargo period is over. Only people on the predisclosure list (or those able to listen in) would be able to learn about the vulnerability without doing their own audit of the code to find the bug themselves (which is very expensive).
There's basically a balance to be struck. All users not on the predisclosure list (and thus who cannot update their systems until the embargo period is over) will continue to be privately vulnerable during the embargo period: anyone who happens to have dug deep enough and found the bug can still exploit it. But as soon as the announcement is made, everyone who hasn't yet updated is publicly vulnerable: Nobody has to search to find the bug, they just have to write an exploit for it. Being privately vulnerable is certainly bad, but being publicly vulnerable is far worse. The goal of the embargo period is to try to reduce the time that users are publicly vulnerable by extending the time they are privately vulnerable. Two weeks has been found to be a reasonable cost/benefit trade-off in our experience.
Re: (Score:1)
What if someone who privately knows about the vulnerability gets the idea to exploit various installations of competitors (or even common users!) during the embargo period? Do you trust large enterprises not to misuse their knowledge to their own advantage?
Re: (Score:2)
Of course that's a risk. But again, is it worse to have a handful of people who are trying to be secretive know about the vulnerability while vendors update and carefully test their software, or for the entire world to know about the vulnerability wh
Re: (Score:2)
What if someone who privately knows about the vulnerability gets the idea to exploit various installations of competitors (or even common users!) during the embargo period? Do you trust large enterprises not to misuse their knowledge to their own advantage?
A patch cannot be prepared "privately" without a number of people knowing about it: developers, testers, reviewers, server admins, etc., at each of the organizations that are privy to the predisclosure.
There is money to be made from that. Gaining access to an exploitable vulnerability before a patch can be distributed broadly is a massive opportunity.
What if someone starts to sell off their knowledge to blackhatters?
What if someone sets up what looks like a legitimate business (a fake antimalware) and uses i
Re: (Score:2)
Different audiences (Score:3)
Re: (Score:1)
And nobody is a "minor player" with something so complex as Xen
There are hundreds, probably thousands, of "minor players". Just look at something like http://lowendbox.com/ [lowendbox.com] or WebHostingTalk. Most of them use OpenVZ because it has less overhead, but Xen is still pretty common as it has fewer limitations (like you can load whichever module you want).
Re:Ignorance is not bliss (Score:5, Funny)
Ignorance is not bliss
I didn't want to know that!
yum update. Remember the huge DNS DOS? (Score:2)
> how does it improve the situation of the smaller companies
By the time the issue becomes public, Red Hat will already have a good update ready for you, so you can just "yum update xen" and you're good to go.
Contrast this with Shellshock, where after the issue was in the media, Red Hat started trying to figure out how to handle it, and over the next several days they put out multiple patches trying to address it in different ways, while upstream bash went in a completely different direction. The bash team
One-size-fits-all (Score:2)
That's how the bash issue was handled (Score:5, Informative)
That some idiot decides to publish the prenotification is just more likely when something is in as widespread use as bash.
closure (Score:1)
can we find and name that idiot?
Doesn't work for everyone (Score:1)
So unless you lived under a rock, you knew there was a security bug out there (why else would the big cloud providers be forcing a restart of all my VMs?). We didn't know what it was, and because I'm not part of the preferred-client group, my servers didn't get patched prior to disclosure. So for me, no, this isn't sufficient. I prefer a more open way of doing it, versus the closed, "preferred client" way they handled this.
Re: (Score:2)
So unless you lived under a rock, you knew there was a security bug out there (why else would the big cloud providers be forcing a restart of all my VMs?). We didn't know what it was, and because I'm not part of the preferred-client group, my servers didn't get patched prior to disclosure. So for me, no, this isn't sufficient. I prefer a more open way of doing it, versus the closed, "preferred client" way they handled this.
What more open way do you propose?
With this approach the major vendors got patched first, while you and the attackers both got some forewarning that a vulnerability and a patch were coming, and found out the details at the same time.
In a more open system, the only real difference seems to be that the major vendors would end up in the same boat as you, having to race the attackers to apply the patch. It removes a comparative advantage they have, but seems to make users less secure in general.
Re: (Score:2)
In what way do you think this isn't illegal anti-competitive behaviour?
This effectively creates a system where the bigger vendors have serious competitive advantage over smaller ones, as they get to patch their systems in private, while leaving smaller players out in the cold until they get large enough to join the cabal. It will only be a matter of time before one vendor advertises that their security vigilance is better than another, executed on the back of private information.
If there is a patch available, it should be announced and distributed immediately. If there's no patch and no temporary mitigation strategy, then minimizing disclosure is probably fine.
It's not illegal anti-competitive behaviour because the intent isn't to be anti-competitive; it's to reduce the number of victims.
The problem with your proposal is you're creating an additional security risk in the name of fairness.
Predisclosure should NOT be the normal practice (Score:3)
Predisclosure is very risky. You don't really know which members of your "predisclosure list" have good control over who finds out and which don't. And even with perfect control, if you're going to patch something the size of Amazon at all, you're going to have to tell a lot of people. Are you sure you want every individual who happens to have a certain job at Amazon to have the chance to exploit other people's systems?
You're not really trusting organizations. You're trusting collections of individuals. And with that many individuals, you are going to have some bad actors. But you'd have a problem even if you could think of organizations as units with perfect policy enforcement. Suppose the NSA comes to you and says they're running a big Xen cluster (they probably are somewhere). And it's critical to national and maybe global security (it could very possibly be). Do they get on the list? How are you going to feel when they use that preannouncement to break into somebody else's system?
Furthermore, people inferred that there was probably a Xen vulnerability from Amazon's downtime, before the official announcement. So how, exactly, was that better than having the Xen project actually announce that fact, with or without details or a patch?
Also, it's not so easy to really know what's a "critical deployment". The fact is that, whether you're Xen or you're bash, you don't really know who's using your stuff. You don't really know what's critical. And you definitely don't know who's trustworthy.
And all of THAT assumes that you even control the disclosure at all. If you find a problem in your software, that problem is "new to you". That does not mean that a bunch of other people don't already know about it. Especially the sort of people who make a business of exploiting these things. So you don't even know for sure who you're depriving of the knowledge.
There's always an exception. Maybe Xen is that exception. But the idea that predisclosure should be the normal approach for software in general, whether open source or otherwise, is a very dangerous one.
Re: (Score:1)
Furthermore, people inferred that there was probably a Xen vulnerability from Amazon's downtime, before the official announcement. So how, exactly, was that better than having the Xen project actually announce that fact, with or without details or a patch?
There was no inferring. Amazon made an oops in their announcement and said that it was due to a bug in Xen. If they hadn't named Xen, then people may have inferred Xen but not known. There are quite a few other parts of the stack that can require system reboots.
None of the other Xen hosts specified that it was a bug in Xen until the embargo was lifted, and Amazon has indicated that in the future they won't specify which part of the stack is making them do the reboot. AWS gives users notifications of reboots all the time for various reasons, so all that was out of the ordinary was that it was such a large reboot wave that they made an official announcement.
Re: (Score:2)
I found out about the predisclosure via Reddit a few days before the patch... so by then the cat was out of the bag.
Grandstanding (Score:3)
The problem is "security researchers" want to gain notoriety so they can make more money as consultants and at paid appearances. The way you do that is irresponsible disclosure that causes a big stir. If you tell someone "I discovered Heartbleed," they've heard of it and will take it as a credibility indicator. If you tell them "I discovered XSA-128," everyone says they've never heard of it. It's all about marketing and PR and making bucks. That's how these things end up with catchy names. The people doing this are acting rationally but with questionable motives, and their dedication to actual security should be under great scrutiny.
Re: (Score:2)
The problem is "security researchers" want to gain notoriety so they can make more money as consultants and at paid appearances. The way you do that is irresponsible disclosure that causes a big stir. If you tell someone "I discovered Heartbleed," they've heard of it and will take it as a credibility indicator. If you tell them "I discovered XSA-128," everyone says they've never heard of it. It's all about marketing and PR and making bucks. That's how these things end up with catchy names. The people doing this are acting rationally but with questionable motives, and their dedication to actual security should be under great scrutiny.
The fact that Heartbleed made more noise also means more people know they need to apply a patch. What you call "irresponsible disclosure" is, to some of us, the only responsible way to disclose it.
Ideal situation =/= codified law of alwaysness (Score:1)
So this situation really was handled with aplomb. However, saying that we "should" handle things this way is about as dangerous as saying we "should" shout out the details of every vulnerability we find. Keeping things internal prevents the community from stepping up. I doubt that all the folks who have dealt with Heartbleed were involved in SSL beforehand. But they were helpful because they knew they were needed, and their ignorance would have hurt us badly. On the other hand, shouting everything out feels
No (Score:2)
If vendors want to withhold detailed notification until a patch is available I don't much care.
However, withholding patches until after a "chosen" subset of the user community has casually had the opportunity to fix their shit is great for you if you're chosen, and worse if you're everyone else.
Better to announce in advance that on such a date, at such a time, a patch of importance X shall be released to all concerned... let everyone who cares plan their maintenance windows accordingly.
Re: (Score:2)
Have you actually read the Xen Project's policy at http://www.xenproject.org/secu... [xenproject.org]? It is really quite inclusive and doesn't really discriminate against small service providers.
This is one of those games you can't win.
Being inclusive means your opsec is shit and you have effectively blabbered to *everyone* who has a stake in knowing. A secret known by all is no secret.
The problem isn't open source projects (Score:3)
This is the right way to handle things, yes. The problem is that most researchers are used to dealing with proprietary software vendors whose reaction to any bug report is at best to ignore and bury the report and deny there's any problem, at worst to attack and sue the researchers. The only sane reaction to that situation is to handle things the way Heartbleed and Shellshock were: immediately publicly disclose all the details so that there are too many independent confirmations and too much publicity for the vendor to ignore the situation or deny the problem. When 99% of the time you need to follow one course, it's easy to lose track of when you're dealing with the other 1%.
You're assuming there is only one mitigation method (Score:2)
Defense in depth matters; many network-exploitable vulnerabilities can be mitigated before they ever reach the server gear. In the case of bash, only some cases/methods can be blocked. This means in many cases it's more than just the authors that need to know.
Xen != bash != openssl (Score:3)
Pick any random Linux box, and it will have bash/openssl installed.
Xen, on the other hand, is rather specialized software, hence you have a couple of mega-users. It's easy to coordinate with a couple of professional organisations that are critically interested.
Without such usage clusters, it's much more difficult.
Open publish perish (Score:2)
What it comes down to is agility and design. You don't control other people's knowledge and communications to mitigate your flaws; you ensure that you have a process for managing them effectively if you care. If a service is that valuable to you, ensure that you have two service-delivery mechanisms. Yes, it's expensive, but you've just said that this channel is that important to you. Or live with the risk and keep the cash; in both cases the ball is in your court.
The whole censorship approach is created by anal
Damned if you do, damned if you don't (Score:2)
I really see these exploits and the publication of their existence as a double-edged-sword problem. On one hand, as an administrator of systems, I'd like to know immediately that there is an exploit so I can shut down a service, if I can, so it doesn't get exploited before I can patch it. On the other hand, if knowledge of an exploit gets into the public, then more people are going to try to exploit systems that may be vulnerable, so keeping it quiet may also be beneficial.
Where do you draw the line, and for what
More constructive difference between xen & bash (Score:2)
Pre-disclosure and a quarantine period is useful for both infrastructure security bugs like xen and widely deployed security bugs like bash and openssl.
In the case of an infrastructure bug the quarantine period allows the service providers a chance to patch and restart their services before their customers become vulnerable.
In the case of a widely deployed security bug, it gives the OS/software vendors the chance to investigate the bug, create patches, test/verify their patches, and pre-stage them so when