Open-Source != Security; PGP Provides Cautionary Tale
Porthop points out this "interesting developer.com story regarding the security of open source software, in regards to theories that many eyes looking at the source will alleviate security problems."
It ain't necessarily so, emphasis on necessarily. Last week it was discovered that, in some (uncommon) cases, a really stupid brainfart bug makes PGP5 key generation not very random. The bug lived for a year in open-source code before being found. If you generated a key pair non-interactively with PGP5 on a Unix machine, don't panic, but read carefully; you may want to invalidate your key.
Update, next day: several people have pointed out that although PGP5's code is available (crypto requires code review), it can't be used for any product without permission. The incentive for code review is therefore less than for other projects of its importance, and I really shouldn't have called PGP "open-source." Mea culpa.
open source sees more bugs (Score:3)
Open Source != Security (Score:3)
Umm... since when does Open Source = security?? Somebody already posted this link [acm.org] on a previous story. It describes a kind of trojan that not even source code auditing can prevent.
But of course, seeing that slashdotters never bother to do their research (in spite of habitually telling newbies to RTFM), here comes my obligatory Slashdotter response poll :-P
Poll: Most typical response to this article:
Open Source contributions. (Score:4)
Not because I'm anti-open-source or anti-PGP, but because I think that open-source has led to a few bad habits:
1) It's 'good' software. By this I mean most people (including myself) assume that the software, while looking like it works, does exactly what you think it's doing. Oh, some other programmer has checked it, I'm sure. Unfortunately I don't think that's the case anymore, after releasing a few things myself and receiving one piece of feedback for about 1000 downloads.
2) Constant upgrading. I do it. You do it. Everyone does it. I'm not saying that constant upgrades are a bad thing, but it does seem that releases (aside from the more major projects) aren't tested at any deep level. This is more of a bad habit of programmers (once again I raise my hand; I suck at QA). I'd love to see some open source QA people inside a project. I've yet to see an internal release get tested before going up. I know that's what the x.x.1 version is for, but a lot of bugs shouldn't even be in there; they come from 4am coffee splurges and should be checked by friends or whatnot.
3) Ripping code that isn't tested with your setup. *cough* This part really bit me once with some network stuff. Ooh, they did it this way; I want it that way too! Not the best approach, in my experience. It's great to re-use code, but check it out first. I've seen snippets of other people's code that are both broken and misused, and they of course cause small bugs to show up in the app.
K, that's my rant. My 3 bad habits anyway.
Open Source and Security (Score:3)
I used to think that security through obscurity was a valid security model, reasoning that so long as no one knew how or why something was built, at least in source terms, then it would be better for everyone. A person can't exploit something they don't know is there. The largest problem with the obscurity model is the fact that there *are* people who just look for exploits. They get home from work/school and hack away at these utilities. By not allowing the source to be released and scrutinized, you're going to see bug-fixes arrive later than they should, and you're going to see exploits that go for months or years completely unpatched. This makes for all-around buggier programs and, by inference, more exploitable programs.
Open source is by no means the best practice in some specific situations (at least right now). There are factors other than bugginess and exploitability that software manufacturers take into account. But in *general*, the open source model is much more efficient and robust than the *alternative*.
FluX
After 16 years, MTV has finally completed its deevolution into the shiny things network
Not that surprising (Score:1)
Linux has an entropy pool based /dev/random (Score:2)
pgp5i will eat out of
On Linux, with entropy-based random (take the time between IRQ requests into a 'pool', then feed it into the randomness generator as seeds), it is just fine.
It's the other Unices that are broken, not pgp5i.
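For reference, a minimal sketch (my own illustration, not PGP's code) of pulling bytes from the kernel pool; real code must handle blocking and short reads:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    unsigned char buf[16];
    int fd = open("/dev/random", O_RDONLY);

    if (fd < 0)
        return 1;
    ssize_t n = read(fd, buf, sizeof buf);  /* may block until the pool refills */
    if (n > 0) {
        for (ssize_t i = 0; i < n; i++)     /* print only what we actually got */
            printf("%02x", buf[i]);
        putchar('\n');
    }
    close(fd);
    return 0;
}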
Second open-source security concern in a week (Score:1)
Why open source could only help (Score:1)
This is a complete joke. People who want to find a security hole in a program don't care one bit whether their copy of the source code was obtained legally; they will just get it any way they can, whether it be downloaded from an illegal site or decompiled themselves.
The friendly programmers, however, do care about the legality of their source code, and they are the ones who will gain access through open source.
So quite simply, open source means little increase in hackers finding flaws to exploit, but a huge increase in the number of programmers solving the problems.
Somebody found it didn't they? (Score:3)
It's not just the number of eyeballs, but quality (Score:1)
But... (Score:1)
I wish authors would learn to read. (Score:1)
I.e., if it wasn't open-source, the problem would *STILL* be unfound. PGP 5i isn't a bazaar-developed app, so it still has all the 'benefits' of commercial development. It was OSS that found this bug.
OSS makes it more secure. The article that claims otherwise is irresponsible journalism.
Re:open source sees more bugs (Score:1)
Closed source systems have security bugs too
Try hitting escape in a Win 9x Password dialog.
Open Source has no automatic benefits (Score:2)
The reality of the issue is, at least in the few projects I'm involved with, that just distributing software in source format doesn't mean it will be looked at. Not by the end users (this is obvious), but not that much by developers either -- even the core developers usually divide the workload by assigning module owners and such, and as a result, code in someone else's module rarely gets properly reviewed. Sure, someone might keep an eye on the commit logs, but that's hardly a decent way to review evolving code.
So, with regards to security issues, I think things boil down to this: unlike proprietary, binary-distributed software, open source as a distribution mechanism isn't explicitly designed to prevent code review.
If the opportunity for peer review has been left unutilized in a single project, others can use the example to learn. Open source isn't about automatic benefits in software quality -- it's about making work towards better software possible.
pgp5i not open source, either (Score:1)
Which cave were you living in all this time? (Score:4)
Does this mean that Microsoft now employs about 5 staff worldwide? So far I have yet to see Microsoft get it "right". Yes, opening up code to a million eyes does mean that more idiots see the code, but it also means that more veteran programmers see it. When was the last time you took a look at any Windows source code?
So a bug was discovered in Open Source software, big deal. It'll get fixed and people will move on. To fix a bug in Windows, you first have to beat Microsoft over the head severely with it; then, when they deny it exists, you have to create some program that illegally demonstrates their bugs. Only then will they admit that there was an unplanned "feature" (read: bug) and will promptly proceed to shut your program/site/self down permanently... oh, and if they get some time... maybe... they might fix the bug (in service pack 13).
Non Interactive Keygen is a Hard Problem (Score:5)
Everybody, generating keys non-interactively is ridiculously difficult, because to be honest there's a very small amount of entropy in your system. Clock differentials and specific CPU traces are pretty good, but everything else either derives from the network (and is therefore remotely attackable) or traces itself back to PRNGs (various memory assignment algorithms, etc.)
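As a toy illustration of the "clock differentials" point, here's a sketch (mine, for illustration only -- NOT a usable entropy source) that folds the low bits of clock-read jitter into a small pool. A real gatherer has to whiten the output and conservatively estimate how much entropy it actually collected:

#include <stdio.h>
#include <time.h>

int main(void)
{
    unsigned char pool[32] = {0};
    struct timespec ts;

    for (int i = 0; i < 4096; ++i) {
        clock_gettime(CLOCK_MONOTONIC, &ts);
        pool[i % sizeof pool] ^= (unsigned char)ts.tv_nsec;  /* raw, biased jitter bits */
    }
    printf("first pool bytes: %02x %02x\n", pool[0], pool[1]);
    return 0;
}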
That's not to say that this isn't a problematic bug, and that it doesn't need correcting. But non-int keygen just isn't that common(yet; I'm working on that), so the exposure is thankfully smaller than it otherwise might be.
As for Microsoft, to be honest I have very little confidence that the RNGs in any web browser are anything that would survive an audit by Counterpane Labs. MS does very good stuff; crypto generally isn't among it (though any of us would be a fool not to note that they're shipping 128 bit SSL by default.)
Yours Truly,
Dan Kaminsky
DoxPara Research
http://www.doxpara.com
Buffer overrun (Score:2)
static unsigned
pgpDevRandomAccum(int fd, unsigned count)
{
    char RandBuf;
    unsigned short i = 0;

    pgpAssert(count);
    pgpAssert(fd >= 0);

    for (i = 0; i <= count; ++i) {
        RandBuf = read(fd, &RandBuf, count);
        pgpRandomAddBytes(&pgpRandomPool, (byte *)&RandBuf, sizeof(RandBuf));
        pgpRandPoolAddEntropy(256);
    }
    return (i);
}
If count is anything over 1, that call to read() is gonna stomp on the stack.
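A tiny self-contained sketch (mine) of the same overrun pattern -- a 1-byte stack buffer handed to read() with a larger count:

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char one_byte;  /* 1 byte of stack storage */
    int fd = open("/dev/zero", O_RDONLY);

    if (fd < 0)
        return 1;
    /* Telling read() it may deposit 64 bytes at &one_byte is exactly what
     * the loop above does whenever count > 1: undefined behaviour, and on
     * most platforms a smashed stack. */
    read(fd, &one_byte, 64);
    close(fd);
    return 0;
}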
Far Reaching Conclusion (Score:1)
While no one argues that open source provides perfect security, it isn't fair to say that open source is insecure. Bottom line: what would the likelihood of discovering this bug and getting a fix out be if the source was closed?
Bill Joy on many eyes... (Score:3)
"...is that having a lot of people staring at the code does not find the really nasty bugs. The really nasty bugs are found by a couple of really smart people who just kill themselves. Most people looking at the code won't see anything [...] contributing and achieve a high standard."
Nope; Found by outsiders. (Score:2)
The bug was found by outsiders, so OSS made it more secure.
Re:open source sees more bugs (Score:3)
Slashdot is almost as insecure as Windows, and delivers only bare-minimum security.
I challenge you to find a security bug in any version of VMS past 4. This is one of the most closed, proprietary operating systems in production, and also one of the most secure (it even attained B2 -- when is an open source OS going to get a security rating?)
Open-Source is better than nothing (Score:1)
make some money [slashdot.org]
Missing the point. (Score:5)
The idea is that security through obscurity is perfect until someone finds the hole; then it's worthless. In contrast, when using an open source solution, the security is inherently flawed, because there is no obscurity, but as time goes by it gets less and less flawed, as responsible people find and patch holes, to the point where it's a safer bet than the obscure method.
The most effective real-world security may be to combine both, or only use open methods that have been analyzed long enough that they're virtually certain to be secure.
The security of obscure methods is simply harder to quantify, and you don't know when they become worthless.
Kevin Fox
Re:Linux has an entropy pool based /dev/random (Score:1)
Please read the article before you respond.
on the topic... (Score:2)
Re:Linux has an entropy pool based /dev/random (Score:5)
Here is the relevant code:
char RandBuf;
for(i = 0; i <= count; ++i) {
RandBuf = read(fd, &RandBuf, count);
From the read man page:
ssize_t read(int fd, void *buf, size_t count);
On success, the number of bytes read is returned
As you can see, RandBuf was being set to the number of bytes read, instead of the byte read.
In fact, I have my own issue with that code. The for loop should read:
for(i = 0; i < count; ++i)
But I am not very familiar with the context of this code. The original code would loop count + 1 times while my version will loop count times. This may or may not be the desired behaviour. I guess I'll go send in another bug report.
Anyone notice that Extrans doesn't seem to be working? Or is it just me.
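Putting the two fixes together, a minimal corrected sketch (my own, not an official patch) of the loop inside pgpDevRandomAccum, assuming the surrounding PGP helpers (pgpRandomAddBytes, pgpRandomPool, the byte type):

char RandBuf;
unsigned short i;

for (i = 0; i < count; ++i) {            /* loop exactly count times */
    if (read(fd, &RandBuf, 1) != 1)      /* one byte per iteration */
        break;                           /* error or EOF: stop accumulating */
    pgpRandomAddBytes(&pgpRandomPool, (byte *)&RandBuf, sizeof(RandBuf));
}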
Opensource != Security, and generalizations (Score:1)
Likewise, you can't argue that Closed Source is or isn't Secure, because closed source could be more secure if it's professionally audited, or it could have blatantly dangerous bugs pass through if it's not.
I think what I'm really trying to say, is that I'm not surprised, and I don't understand why everyone else is surprised about this. Good thing they found the bug though!
Re:Linux has an entropy pool based /dev/random (Score:1)
The count parameter is always set to the value 1 by the calling code. The byte read from the file descriptor fd into the RandBuf buffer is subsequently overwritten with the read() function's return value, which will be 1. The actual random data are not used.
Err... did I miss something?
phobos% cat
Re:Open Source and Security (Score:1)
It's obvious that security through obscurity is not the ideal; however, I'm not so sure that open source programs are more secure.
Consider a program (such as PGP) which is written with open and closed source versions. Both are just as likely to have bugs at first, but both are inspected by other programmers. The closed source program is inspected by a couple of programmers inside the company who are quite knowledgeable about security. The open source program is inspected by dozens of programmers, most of whom know very little about security. The score so far? I'd say about even.
Now the programs are released and people start using them. Hackers start trying to find exploits in them. If an exploit is found by a "white hat", it is reported and fixed. If it is found by a "black hat", it is used to attack systems for a while before being noticed. For the closed source program it is less likely an exploit will be found outside the company than for the open source program. So it's likely more exploits for the open source program will fall into the hands of the "black hats".
Of course, this assumes similar numbers of white and black hats; if there are more white hats, then bugs in an open source program will be found quickly. Apparently this has not happened for the programs discussed.
Offset against this is the fact that bug fixes are likely to be much quicker in the open source program.
I'll still use open source programs for another reason. Security flaws can also be intentionally introduced for several reasons. This is one type of bug which is extremely unlikely to occur in an open source program.
Re:Linux has an entropy pool based /dev/random (Score:1)
If you'd read the entire thing you'd see that the bug caused the program to not read ANY (real) data from /dev/random.
The page specifically lists Linux as one of the systems that this bug occurs on.
Re:Open Source contributions. (Score:1)
I think this is possibly one of the worst habits in programming, and it's why I never rip code unless I know why and how it does what it does. At least, I think I don't... =)
There's a guy at work who has no clue how to program, and yet somehow has ended up with the job of writing various stuff, usually shell scripts or Windows batch files. He is literally unable to do anything unless he rips an example from somewhere and tweaks it. Problem is, he can't find relevant examples.
For example, he needed to check whether a file existed. I suggested various methods, but instead he found some code to automatically compress the largest file in a directory. I have no idea why he did this, except maybe because both problems had the word "file" in them. He ended up adding 20 more lines which somehow made it sort of do what was needed. Now there are 30 lines of DOS batch file code, only 2 of which are needed or even relevant.
Re:Bill Joy on many eyes... (Score:2)
That's a non sequitur if I ever saw one.
A 'high standard' is set by having each part of the program do exactly what it's supposed to do - nothing less, nothing more. This is not some ethereal concept - it can (and should) be verified mathematically or with equivalent methods such as 'design by contract'.
If more people did that - if programmers working in complex and 'mission-critical' (I hate the term, but I can't come up with anything better at the moment) systems would just admit that they aren't perfect and that they should use all tools at hand to get everything just right (as opposed to just working with bare-bones tools, which unfortunately seems to be a favourite of *nixers) - then more software projects would achieve truly high standards, whether said software were Free or proprietary, whether it were designed by 2 or 2000 people.
Because otherwise, it's just the same story from both sides - just as 'proprietary' doesn't necessarily mean 'high-quality', neither does 'open source'.
Then again, I guess I can't expect much from Mr Joy, after he made an ass out of himself by playing Prophet of the Apocalypse... Ah well.
Source code is Greek to me. (Score:1)
I can't believe no one else saw this (Score:1)
Here is a direct link [cryptome.org], read the first article, although I doubt you will be surprised.
Try -reading- the article. (Score:2)
The alternative of closed-source was mentioned, and dismissed as not being any better.
This article was, in short, saying 'this is a shortcoming of open-source'; it was -not- rehashing the security-by-obscurity argument from the closed-source camp, but discussing the fact that those many eyes may not be looking as closely as we assume.
Your response makes -no- sense at all, and has -nothing- to do with this article. It's an answer to the -usual- security debate around open-source but has nothing whatsoever to do with -this- article.
--Parity
Re:Open Source and Security (Score:1)
Or we just don't notice it... Remember that UNIX backdoor perpetuated through the C compiler? The one that would perpetuate itself when the compiler was recompiled? It would be completely invisible when looking at the source.
not quite right... (Score:1)
I looked at the report, and it does not appear that your assessment is correct. The problem is in pgp5i: it does NOT read from /dev/random where it should. It doesn't matter if the entropy is there or not. Basically the problem was something like
randomBuf = read(random_fd, &randomBuf, 1)
intending to read one random byte into randomBuf. Which it does in evaluating read, but then promptly overwrites that value with the return of the call to read, which is the number of bytes read, which is always 1 (unless you get an error, but how often does that happen?)! So the buffer is always '1', even on Linux. Damn, if that doesn't suck.
A comment on the issue of open source having more eyes on the code...
It may be nice to have more eyes on the code, but what worries me is testing. It's what even the most experienced coder wouldn't think of that can come up in those really weird deviant test cases, after which you smack your forehead, say "shit!", and fix it. True, we have tons of people using and reviewing the code, but does it really get as rigorously tested as when, in commercial development, people are paid to do nothing other than put it through the wringer? Just a thought.
bug or deliberate flaw (Score:1)
I guess I'd be interested in knowing how long the flaw has been in the code, and also who wrote this particular block of code.
The scary part. (Score:2)
The scary part is things similar to this that HAVEN'T been found.
I get the feeling that the really successful crackers are probably the types of people who spot things like this and never mention them, just exploiting them for their own use.
Security Through Carefully-Chosen Incompetence (Score:4)
If you want people to carefully look over your code, make sure that you put an error in it, one that generates a really obvious error. I've been using this technique for a long time now, and it's worked wonders.
Those PGP people are too competent for their own good. If outsiders trust PGP too much to check it, everybody loses.
On a related note, my own incompetence has saved me from this bug--because I've never memorized the command-line options to PGP, I have to use it interactively.
Re:Linux has an entropy pool based /dev/random (Score:2)
pgp5i will eat out of
I have no idea where you got that from. It sounds like you don't either. Check out this alert [computerworld.com] in Computerworld:
Re:open source sees more bugs (Score:1)
The Windows password dialog is not meant as a secure log-in; it is meant to provide different user options to different users who share a computer. Windows doesn't even have file permissions....
Lame attempt at tongue-in-cheek humour aside, you highlight the point that I was trying to make beautifully.
This bug is not in the code. Login security is totally absent from the code.
Win9x security was designed to be backwards compatible with a security flaw in DOS 1.0
Also, I don't have a VAX handy; is there a port of VMS for the i386?
When is an open source OS going to get a security rating?
Good question: Does anyone know of work in progress?
Hold it here! (Score:1)
If PGP was NOT open source, this "flaw" would never have been released to the public. This is why open source works. Yes, it took a YEAR for this flaw to surface, BUT IT SURFACED! Besides, pgp is pretty sophisticated software; it makes the Linux kernel look like a "hello world" program.
Over the past few weeks, I have seen some very poor choices for headlines. You guys either need to hire an editor to process stories before publishing, or start thinking before posting.. (Gee, something all slashdotters should do!)
I would hate to compare slashdot reporters with the Holland Sentinel's or Muskegon Chronicle's reporters (Sensational first! Facts last!)
Change that headline or post an apology; PGP is proof that open source is more secure! (If Microsoft owned pgp, they'd call that flaw a feature!)
Re:Non Interactive Keygen is a Hard Problem (Score:1)
FUD? (Score:1)
Re:open source sees more bugs (Score:1)
SGI is creating B1/Orange Book Linux.
Security through Open Source Obscurity (Score:1)
I propose a new method of security programming that takes the best from both models.
Just imagine: the security and stability of Win2000 excellently obfuscated with the ease of use and proprietary extensiveness of all *nices.
I call it "Steve"
Damn it. (Score:5)
The peer-review aspect of open source is just a nice feature, and it actually works most of the time. It isn't an ultimate and guaranteed aspect of it.
People try to be smart by saying "oh, most people looking at the code aren't qualified." Wow, such a revelation. Yes, we thought there was a mystical army of highly trained CS experts poring over all open source code for bugs.
Things slip through the cracks, even in the scientific community's peer review. Humans aren't perfect. Get it through your head.
And yet, people fail to turn this accusing finger all the way around and wonder the same about commercial software. They just excuse it saying "Oh their jobs depend on it, they must check it."
The major driving force in open source is that the programmers actually *use* the software they create. If a bug is found, they *want* to fix it because they are using this software too. They are directly affected. In the case of commercial software, even expensive software, they are not directly affected. Does Microsoft really want to fix bugs? No, it costs them money. In most cases, compatibility issues require companies to buy their software anyway.
So you might say "Hey paying a lot for softare ensures getting good software because the company can pay for experts to pore over every line of code for bugs." Well yeah, but who says they will? They'll only do it as long as it's profitable. Then you'll be stuck with the bugs as fast as you can say COBOL. Oh wait, it will be worse than that because you CAN'T fix it.
No one said open-source was perfect, and just because it isn't doesn't mean the alternative is automatically better.
Maybe there should be a Frequently Used Arguments list. I bet a whole bunch of posts say about the same thing I have. That was a pretty stupid flamebait comment in that article. Oh, was it supposed to make us stop and think about something? There are better ways to do it than pasting FUD-style (yes, it was) flamebait.
Re:Which cave were you living in all this time? (Score:3)
If you read the EULA on the pirated Microsoft software that you install, IT CLEARLY STATES THAT MICROSOFT HAS ABSOLUTELY NO ACCOUNTABILITY OR FAULT IN THE FAILURE OF SAID PRODUCT.
Re:Slashdot == Censorship; Rob Provides Example (Score:1)
b) Having your postings at -1 is hardly censorship. I read at -1 all the time.
c) Arguing that having a post at -1 is censorship is like saying "hey, my letter to the editor in your newspaper isn't on the front page, I'm being censored!"
d) Did I mention it's Rob's site, and he can do whatever he wants? If you don't like it, you are free to post / go elsewhere.
Open source as a deterrent (Score:3)
Against: If you open the source code, you are making it much easier for crackers to find flaws in your system.
For: Yeah, but there will also be good guys finding flaws too, which will let us fix the bugs faster.
For: If you close the source code, it doesn't mean that crackers won't find flaws. A determined cracker will get in, eventually.
Against: Yeah, but just look around. There are a lot of good guys finding holes in closed source software as well, e.g., Bennett Haselton of Peacefire.
For: Yeah, but the many eye-balls effect is a unique advantage of open source. Closed source software doesn't have that.
Against: Well, the many eye-balls principle is just that, a principle. As this article shows, a lot of people just assume that others are doing the security audit; most are not competent to find flaws even if they are looking; nobody wants to look at a tangled mess of C code, etc. In reality, if your program is not an obviously security-related product (say it's your run-of-the-mill application), you have to admit that many eye-balls won't find any problems there. But a lot of systems are still put at risk because of these "applications".
I think what the critics of open source security are missing is the deterrent power of open source. If they are really right in their claim that more crackers than good guys will be finding flaws in my program, then that's a strong deterrent against just coding away as I wish. I have a sort of moral responsibility for the code I write (the warranty disclaimers notwithstanding), and I would be peeved if a cracker penetrated a system because of gaping security holes in my work.
The incentive for writing better code is that much less if I know that "hell, who's going to spend time disassembling this code? I've got a deadline to meet".
Sreeram.
----------------------------------
Observation is the essence of art.
Re:Hold it here! (Score:1)
Re:The scary part. (Score:2)
I have to say this topic has been quite a bit unnerving. I'm a web developer, like many of you out there, I imagine. I am responsible for designing and programming small to medium size web applications for my company. I can use M$ products if I want, or I can use open source products. It's my call, but also my ass on the line.
I am a good programmer, but I am *not* a security expert, nor do I have the time to learn how to be one on top of my other responsibilities. I don't want to use M$ products like IIS and ASP, but I know that if I do - and if a bug or security hole is found - it will pretty much be written off as M$'s fault, and not mine, although I will probably have to go back and fix the damage.
However, if I choose open source software and we get hacked, my company will *definitely* view it as my fault. Now, I'm not one to play it safe, and I've got Linux/Apache/MySQL/PHP/Perl running all over the place, but still... this topic makes me worry.
Does anyone else have any thoughts on this? Feel the same way as me?
OT: IQ (Score:1)
Huh? Isn't 100 the median IQ? Doesn't that mean that 50% of the population has an IQ under 100, not 80?
Re:Linux has an entropy pool based /dev/random (Score:1)
> for(i = 0; i <= count; ++i) {
In fact, I have my own issue with that code. The for loop should read:
for(i = 0; i < count; ++i)
Yep. The original coder probably thought that using ++i instead of the "standard" i++ would somehow magically make the increase happen before the loop and the test.
--
Not quite (Score:1)
The actual bug is assigning the return from read() back to RandBuf. DOH! Turning on compiler warnings probably would have found the bug at the first compile, because the conversion from ssize_t (the return type of read()) to char loses precision (and, I would argue, is one of the many implicit conversions in C that shouldn't exist anyway).
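For instance, a minimal sketch of the same narrowing assignment, which a compiler run with conversion warnings enabled (gcc's -Wconversion is one such flag) can point out:

#include <unistd.h>

char narrow(int fd)
{
    char buf;
    buf = read(fd, &buf, 1);  /* ssize_t silently narrowed to char: warning here */
    return buf;
}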
Re:Bill Joy on many eyes... (Score:2)
...idea that everyone's contribution is equally valid: I don't suppose he means that code submissions necessarily pollute the code. His general point is well taken: most of the running on a successful and ambitious open source project is done by a small number of people.
Exacting software engineering techniques such as design by contract are less likely to make their way into the democratic free-for-all of free software than the totalitarian discipline of in-house development.
Bill Joy is no opponent of open source. He is simply critical of the idea that many eyes are of much use in spotting really subtle bugs, especially ones to do with security. I have to agree with him on this. And yes, Bill Joy isn't infallible - csh was a pretty bad idea - but he should be regarded as one of the pioneers of open source.
Good demonstration of insecure closed source... (Score:2)
Consequence: unseen bug.
Since the code was viewable by the open source community, the bug was discovered.
Far from being a demonstration of the insecurity of open-sourced code, it's a perfect example of the contrary.
Re:Open Source and Security (Score:3)
I think the principle that people are missing is that, all things being equal, a bug/security hole is going to be found a LOT quicker by examining the source than by simply using the program.
No. Finding any type of bug by using the program is a heck of a lot easier than finding bugs by examining source. Just imagine auditing 50k lines of source. Now imagine using a program and discovering some subtle flaw in the output, like the wrong number of significant digits in some tabulated data displayed on a web page.
The value of Open Source is not the ability to find bugs, but to fix them. In fact, one of the strong motives for free releases of betas is so that the program will have lots of users, thus increasing the chances that bugs will be found before the official release.
It would be interesting to do a study. I bet that if you graph bugs/line it falls proportionately to the number of users for both closed and open source programs.
In other words... test, Test, TEST. And then test again. And when you're finished testing, you might want to consider some tests.
Open Source still has an advantage (Score:4)
It doesn't look like open-source provided an advantage in finding this bug. But because PGP is open source, there are still two advantages:
Disturbing (Score:3)
God knows whether this thing will format OK when it turns up on the page. Not too comfortable with the sizeof(unsigned char) stuff, probably better as something like sizeof(*RandBuf). Anyway, I'm sure there's plenty of errors, get stuck in.
static unsigned
pgpDevRandomAccum(int fd, unsigned count)
{
    unsigned char *RandBuf;
    unsigned i;

    pgpAssert(count > 0);
    pgpAssert(fd >= 0);

    RandBuf = malloc(sizeof(unsigned char) * count);
    pgpAssert(RandBuf);

    for (i = 0; i < count; i++) {
        if (!read(fd, RandBuf, count))
            break;
        pgpRandomAddBytes(&pgpRandomPool, RandBuf, count * sizeof(unsigned char));
        pgpRandPoolAddEntropy(256);
    }
    free(RandBuf);
    return (i);
}
Re:Disturbing (Score:2)
Re:Not quite (Score:2)
That's like coding while saying "We don't have to handle that case; it'll never happen". Bad code is bad code, whether or not the effect is immediately seen.
Re:The scary part. (Score:3)
However, if I choose open source software and we get hacked, my company will *definitely* view it as my fault. Now, I'm not one to play it safe, and I've got Linux/Apache/MySQL/PHP/Perl running all over the place, but still... this topic makes me worry.
It shouldn't matter which technology you use. If you get hacked, it's your fault or it isn't, regardless of which set of stuff you pick. Obviously, if your employer or whatever is going to assign blame because you picked something "weird", you have to cover your ass.
But the point I want to make is that it doesn't matter if you're a security expert or not. Someone, you, the OS vendor, the web server vendor, has already screwed up. There's a decent chance that someone might find said screw-up. If they come after you, you'll be defaced, and there's not a lot you can do to prevent it. In such a situation, the thing to do is to prepare a plan on how to react and recover.
This includes things like buy-in for downtime to apply patches, whether or not you'll want to do forensics and prosecution, or whether you'll just try to get back on line as quickly as possible.
The advantage of open-source is that you'll probably get a patch quicker, or you might even be able to make your own when you see a vulnerability report.
Re:Missing the point. (Score:2)
No; I'd say OSS is the far more secure approach in the long run. That being said, however, security through obscurity is a pretty wise approach for short-lived apps. For example, I have a feeling the reason we didn't see a Slash release for years is because they were still cutting their teeth on Perl and Apache security for the first few years. Releasing the source would really have screwed Slashdot - every "haxor" would have found some sort of hole and messed with the site at a pretty crucial time in its history. I know this is pure speculation, but Taco has admitted numerous times to massive code overhauls, either through posts or interviews. One can only guess why that was. Even now, when Slash is quite mature, people have found ways to exploit it, BTW.
--
Propaganda (Score:2)
Ready, Aim (at foot)... (Score:3)
Jamie, before you go stating that "OSS != Security," please consider:
PGP's license has never met the Open Source Definition (it's free to use only under certain circumstances). Despite this technicality, your headline is stupidly sensational and self-defeating. Wouldn't it have been much better to title it "Key Generation Bug Found in PGP 5"?
Re:Which cave were you living in all this time? (Score:2)
A month or so later we finally got a reply from MS which said that they received our report, but that the problem was in compliance with the spec, and not a bug in their driver. So I replied with a direct quote from the spec that showed that they were indeed doing it wrong.
Another month or so went by, and I got another reply from them. This time they conceded that they didn't match the spec, and assured me that this "feature" would be added in the next version.
Don't know if it ever got fixed... by this time we had given up on it and moved on to other things.
Re:Disturbing - You missed the bug fix :) (Score:2)
Nope. Because I allocate the correct buffer length etc, and because I don't assign the read return value into the buffer.
Re:Disturbing (Score:2)
...done in pgpAssert, haven't looked.
That's exactly what pgpAssert() does.
Re:Non Interactive Keygen is a Hard Problem (Score:3)
The best solution I found mentioned hooking an AM radio, mistuned, up to the mic port--then people mentioned FM had more entropic properties. Your big problems are: 1) you've seriously got to deal with the fact that a 60Hz bias is coming off of the nearby AC transmitter/power supply, and 2) an attacker can pretty easily broadcast patterns at you on the exact frequency you're trying to be mistuned to. Since anything that's receiving a signal is also transmitting it (thus causing major privacy issues when a parking lot scans to see what stations people are listening to by picking up their "sympathetic" (correct word?) retransmissions), you should remotely be able to determine the AM/FM band being used. Not Good.
I was thinking for a bit that deriving entropy from the differential sync between many different NTP servers might be decent, but A) this doesn't scale, and B) the differential sync, even at the minute scale, likely isn't more than a couple bits per resync. So you'd need to scan a few hundred servers a dozen times before you could create a 2048 bit key.
I need to create about 200 of 'em. A day. Soon to be 500. *sigh*
Interesting thought of the hour: Randomness isn't contained in the numbers themselves. Is a Royal Flush random? Depends how it was dealt.
Yours Truly,
Dan Kaminsky
DoxPara Research
http://www.doxpara.com
Re:Non Interactive Keygen is a Hard Problem (Score:2)
I'd probably be more secure if the camera was mounted such that the entire image was a near-microscopic scale view of the melting wax--but even then I'd be curious literally how many different possibilities of wax melting, unmelting, and wax separation there might be. It's not minuscule, but I do have to wonder how high it might be.
The real thing that comes to mind isn't that you need 100% accuracy... it's that there's probably a good amount of work you can do by eliminating 90% of impossible occurrences (like the wax flying out from the lava lamp!)
Yours Truly,
Dan Kaminsky
DoxPara Research
http://www.doxpara.com
P.S. That's not to say that the Lavarand system isn't the coolest damn RNG ever invented.
Re:open source sees more bugs (Score:2)
Care to fork over the funds? I am kinda broke.
My question for you. What in the hell does the security of Slashdot have to do with anything?
Common bug ? (Score:2)
So somebody finds a bug in an open source product, and suddenly that's proof of how insecure open source is?
Gimme a break! This is proof of the contrary.
Seriously folks, how many bugs like this do you think exist in closed-source commercial products??
You know, the type of software where you will never know about bugs like this.
--
Why pay for drugs when you can get Linux for free ?
The difference here is that - (Score:2)
Here we can see that it is fixed, and we can learn from it. _We_ are assured that it was our error of omission, no one else's.
I am willing to pony up and say I screwed up for using open source without reading the source. I have no problem accepting my part of the blame.
Re:Randomness. (Score:2)
There are better sources that are more environmentally sound--dirty diodes and whatever they've built into the Pentium III look pretty decent.
--Dan
Why ?? (Score:2)
> Versions 2.* and 6.5 of PGP do NOT share this problem.
>
This is how this was fixed in pgp 6.5i:
if ((fdesc = open(devrandom, O_RDONLY|O_NONBLOCK)) > 0) {
    while ((numread = read(fdesc, buffer, BUFFSIZE)) > 0) {
        for (p = buffer; numread > 0; numread--, p++) {
            PGPGlobalRandomPoolAddKeystroke(*p);
            *p = 0;
        }
        RandBits = PGPGlobalRandomPoolGetEntropy();
        StillNeeded = TotalNeeded - RandBits;
    }
}
<conspiracy mode>
This bug was introduced in PGP 5.0 and fixed in PGP 6.5. Why wasn't this reported on Bugtraq a long time ago? Although the code is substantially rewritten, I would be very surprised if the author of this code in 6.5 didn't see this bug (after all, he fixed it).
</conspiracy mode>
Re:Non Interactive Keygen is a Hard Problem (Score:2)
Yup, but some bits are more random than others. With a static camera, there will be bits that are entirely determined from variations in light and sensitivity.
There are likely to be enough bits to seed an RNG, but the extensive work I've heard of being done by eliminating impossible combinations (31-round Skipjack was defeated faster than brute force, while official Skipjack is 32 rounds!) leaves me wondering.
Yours Truly,
Dan Kaminsky
DoxPara Research
http://www.doxpara.com
PGP is not Open Source (Score:2)
bye
schani
Re:Bill Joy on many eyes... (Score:2)
Most of the exceptions are rigidly controlled by a single person, and have wildly-varying parts. Cf. Linux, FreeBSD.
Are there exceptions? Of course. But look closely before you call something an exception; a lot of it started with a single-person core, and has added cruft from there.
However, all of this misses a vital point; it doesn't matter if 99.9% of the eyes looking at a given program are incompetent eyes, if that remaining 0.1% knows what to look for.
There are people at RSA who use PGP. Bruce Schneier uses PGP. Lots of folks who are good at writing crypto use PGP.
They see the bugs. They don't see them all; if they did, they'd have been fixed long ago.
Meanwhile, PGP is still better than most of the alternatives. That's *BECAUSE* it's open, not in spite of it being open.
Open Source benefits me even if I never look at the code, because if PGP had been written by, say, RSA, people like Bruce Schneier would never have been able to look at the code either.
The advantage of Open Source is that those few really good people can look at it, even if they work for different companies.
Unless, of course, the lawyers screw it all up by demanding employees not look at outside code.
--
It was found, and lesse the fix.. (Score:2)
Now, lesse how long it takes for there to be a source patch to correct the problem. I bet it doesn't linger for months and/or years as bugs in other OSes do.
Heck, look how long it took MS to fix the darned smurf attacks...
Re:Ah the hipocrisy... (Score:2)
However, the source is available, so the bug has *been* found *and* located, and most importantly a world-verifiable patch has been produced. Beat that, you closed-source fanatic you...
If M$loth makes a mistake they try to close it up, which is utterly stupid. If an open-source project has bugs, they get fixed.
~Tim
--
Re:Ready, Aim (at foot)... (Score:3)
Comments like yours are our editorial accountability :-)
Many crypto bugs are hard to find. This bug should not have been. Passing in a pointer to a buffer and then assigning the function result to that same buffer? I bet there exists an automated tool which understands the parameters to read() and would find that error.
It's not like read() is an obscure system call. Using it improperly like this is practically criminal.
And I never said "OSS != Security," in fact, I explicitly said the two were not necessarily equal, "emphasis on necessarily."
OK, you got me there - Dan Kaminsky also wrote in to mention that its license prohibits commercial use, adding "many of the eyes that would have otherwise been directed at the PGP codebase wouldn't touch the product."
I'm not entirely sure that's true. PGP should naturally attract a lot of eyes by virtue of being high-profile. Many of the people who would be or should be looking for bugs like this one are up-and-coming cryptographers, for whom finding a bug in PGP would garner street cred. They wouldn't care whether they could use the code commercially.
Still, point taken. Let me talk to a friend who knows PGP better than I do, and I'll look into revising the headline and/or updating the story in the next few hours.
When we get two submissions that are both important, and related, it makes for a more interesting discussion to link them together. Unfortunately I think many readers are only reading the PGP story, and skipping John Viega's excellent article [earthweb.com] - or at least there hasn't been much discussion of it, which is a shame.
Jamie McCarthy
Your rights are none anyway (Score:2)
Good for you!
Given that if you are using stolen Microsoft code your rights are nil, what rights and accountability did you GAIN by buying that Microsoft code?
The EULA applies, and the quote of "MICROSOFT HAS ABSOLUTELY NO ACCOUNTABILITY OR FAULT IN THE FAILURE OF SAID PRODUCT." is valid.
So, let's connect the dots, as you won't answer the accountability question, and instead launch into a comment on stolen software.
Let's look at Open Source:
1) You claim no accountability
(Reality: Most Open Source has a contract that says the author is not responsible and is to be held harmless)
2) Code is available to fix bugs if you find one
3) Bug fix time can be under a week
4) Abandoned programs are supportable by users who find the program still useful
Now let's look at closed source:
1) Contract claims no accountability
2) No code is available to fix bugs if you find one
3) Bug fix can be never
4) Abandoned programs are abandoned, never to run on newer platforms
Looking at this list, Open Source puts a burden of responsibility on the user to support themselves, whereas because you can't support yourself with closed source, the burden is (supposed) to be elsewhere. Yet no closed source company has a LEGAL RESPONSIBILITY to take on that burden. If you are not into personal empowerment or the rights of individuals to control the tools they use, I can see where the 'promise' of some closed source behemoth holding your hand and guiding you along is a seductive siren song.
As you are an opinionated AC, the question is this: What accountability does Microsoft have to their code?
*SNORT* (Score:2)
The idea of message signing is to PREVENT such things. If you don't sign your message, why are you expecting PGP to protect you against attacks which only message-signing would prevent?
There are two rules with PGP:
1. If you don't want anyone without the correct key reading the message, encrypt it.
2. If you don't want anyone to undetectably alter the message, SIGN IT.
The two are orthogonal considerations.
Another thing to consider (Score:2)
Would it have ever been found?
Would the company have advised its users that they might have weak keys that should be revoked? Or would they fix the flaw silently and keep their customers in the dark?
Open source has other advantages for the consumer.
Re:Missing the point. (Score:2)
--
Use a diode (Score:2)
You can also use an FM radio that does not have muting: tune it off frequency and the FM discriminator will spit out band limited white noise. However, the same caveats apply, and the nice thing about the diode approach is that nobody can screw up your random number source with a simple carrier of reasonable amplitude.
Of course, you always could use a Lava lamp [sgi.com]
A random function should be built into CPU's (Score:2)
Re:A random function should be built into CPU's (Score:2)
Yours Truly,
Dan Kaminsky
DoxPara Research
http://www.doxpara.com
Re:*SNORT* (Score:2)
Consider the CDMA model of Encryption: Know the key, get the data. Don't know the key, you get noise.
Period.
If I can not know the key but modify the data stream--and it still decrypts *without complaint*--then something's wrong. Truncation attacks are different--they're essentially selective DoS where the cipherstream suddenly stops being valid--but if PGP doesn't *complain* that suddenly something broke and this was all that could be read--i.e. "there was more that was part of this message, but I can't read it"--then this is a cryptographic failure.
PGP doesn't protect you against an email server silently deleting your mail--there is no conceptual way it could or should. But silently passing truncated messages means that somebody can reconstruct a message without being able to read it. The fact that avoiding this weakness is as simple as encrypting the one-way hash of the message as a whole with each independent truncatable block (such that the hash of the decrypted document would then fail to match the original hash derived before the message was sent) means that this is a weakness that should have been addressed.
Of course, mind you, this attack hasn't particularly been verified, and GaryH is the first person I've ever heard speak positively of S/MIME. But you're completely wrong to state that message authentication is *entirely* orthogonal to encryption. Knowing *who* sent a message is orthogonal. Knowing that *this* specific message--which may contain identifying information in the untruncated blocks--was sent isn't.
It's still tied to the destination aspect to know whether *all* of a given encrypted message reached the destination. I don't particularly accept that any File-Oriented Cryptographic System should, or needs to, accept selective DoS. It's just too simple to prevent.
Yours Truly,
Dan Kaminsky
DoxPara Research
http://www.doxpara.com
Re:Non Interactive Keygen is a Hard Problem (Score:2)
Unfortunately, the widespread layering of memory caches throughout the computer infrastructure (in the OS, in the drive controller, in the drive itself, etc.) prevents this from being as slick a solution as I'd like.
--Dan
Re:oh come on! (Score:2)
Yes, I did read the article. Your annoyance is understood, however.
/dev/random, rather than the internal entropy engine, was being called in the first place *because* non-interactive entropy gathering is such a difficult problem. PGP had no similar issues when used interactively because they essentially wrote their own interactive entropy gatherer. They couldn't do the same for non-interactive content, so they wrote a (buggy) bridge to /dev/random.
Obviously, they should have verified that the content coming *out* of that bridge was something other than all ones. But the most interesting thing to me is the similarity of this accident to an airline crash or a school shooting--an intensely rare situation, made notable and newsworthy *by* its rareness. We pay little attention to the moderately common problems (invisible security issues lurking beneath most closed source crypto), but both the extremely common issues (buffer overflows) and rare ones (this PGP bug) get lots of press.
Interesting.
Yours Truly,
Dan Kaminsky
DoxPara Research
http://www.doxpara.com
Re:open source sees more bugs (Score:2)
I'm a huge fan of OpenVMS. But this is an absurd exaggeration. Making ridiculous claims makes OpenVMS advocates look ridiculous and, by association, doesn't do the perception of OpenVMS any favors.
Compaq regularly issues ECOs for OpenVMS, many of which have instructions that insist that ALL CUSTOMERS install them at once. Each of these ECOs addresses one or more bugs, or at least a lack of "perfect engineering".
A lot of OpenVMS these days uses C and internally has C buffer overrun problems. I could quote you the ECOs, if you're interested. I saw lots of these for UCX 4.x.
There was even a system service (system API) instituted a while back that used C-style buffers. OpenVMS Engineering later realized the error of their ways and now offer an equivalent system service using the safer string descriptors.
You're right, it doesn't have setuid - if you are referring to setuid scripts/programs and not setuid(); here too, OpenVMS has Persona Services, which are equivalent. Instead of setuid scripts/programs, OpenVMS has installed privileged images, which offer a rough equivalent.
I do happen to believe that installed priv'd images offer a number of advantages over setuid scripts/programs, but they offer about the same functionality.
I really do think that OpenVMS has a number of inherent advantages over the alternatives in the areas of security and reliability (not to mention scalability!), but we need to be objective. Making claims about it being "perfectly engineered" and "bug-free" is not objective.
-Jordan Henderson
Re:Non Interactive Keygen is a Hard Problem (Score:2)
Consciousness is not what it thinks it is
Thought exists only as an abstraction
A weakness (Score:2)
And the fix is obvious; you said it yourself. If you don't sign the message in some fashion, you deserve to lose. Signing the message prevents most (all?) of these attacks.
If you do a 'gpg -e', you had better know what the security issues are. Of course, this applies to using ANY encryption software, no matter what settings you use on it. If you don't know your encryption software and what the options mean, you shouldn't be using it.
BTW, just including (and encrypting) the hash of the decrypted message isn't enough to protect against these attacks.
If the message is for multiple recipients, recipient A could decrypt the message, alter it, compute the correct hash for the altered message, and then repackage and send the altered message along to the other recipients, who will accept this message as legitimate. To prevent this, the hash of the correct message should be 'signed' in some fashion that only the original sender can create. This describes GPG's 'sign' option, which we know works.
Signing and encryption are orthogonal features, needed in different situations. If I am sending details of an auction to N people, I don't care who reads them, but I do NOT want them to alter the messages sent to the other recipients. Or I may just encrypt the message but leave it unsigned; no one can read the message, but they may alter it. (Admittedly, not very useful for communication, but useful for storage. For example, storing private stuff on a hard drive/floppy.) Finally, I may encrypt and sign the message.
You could consider it a user-interface issue. Maybe GPG should (by default) sign the message whenever the user requests encryption?
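For what it's worth, the combined mode already exists on the command line -- GnuPG's documented -s (sign) and -e (encrypt) flags can be stacked:

gpg -se -r Bob message.txt    # sign and encrypt for recipient Bob in one pass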
[[BTW, if you want to take this conversation to email, I'll be happy to. Unfortunately, my email address no longer works; I can email you.]]
Re:A weakness (Score:2)
Hadn't considered the multiple recipient problem when it came to unsigned hashes.
But, just as strongly, you haven't considered the reality of PGP allowing me to receive mail from untrusted individuals with a modicum of cryptographic security. Segment out your security scheme--when you're the receiver, you can't control who the sender transmits to. When you're the sender, you can't control who the receiver retransmits to. But if you, as the receiver, can trust that the user hasn't given away their secret (the message, in this instance) to anyone else but you, truncation detection through hashes or anything else lets you recognize when the message/secret you receive is incomplete.
That's valuable--you're able to receive non-multicast messages without concern for the integrity of that message! Essentially, both the verification key and the message itself get condensed down into the content of the message. Presumably whatever it says authenticates the author, provided the message is complete.
Once the security architecture is formalized, this property just can't be suppressed. It's not ideal by any means--you can't extend the trust you've established from one message to any other, as you can with stored private signing keys--but the arbitrariness of the trust is identical, down to the fact that a truncated key offered for verification purposes had best not work either (here's my 2048 bit verification key, oops, truncated to 256 bits...)
Go ahead and email me if ya like.
Yours Truly,
Dan Kaminsky
DoxPara Research
http://www.doxpara.com
Re:Disturbing - You missed the bug fix :) (Score:2)
Re:Which cave were you living in all this time? (Score:2)
No it wasn't. DES was and remains secure for suitable key length. There are no known bugs in DES.