Mitnick on OSS
comforteagle writes "Infamous cracker Kevin Mitnick (turned security consultant) has come out to say that he'd prefer to 'hack' open source code over proprietary closed code. Mitnick says that open source software is easier to analyse for security holes, since you can see the code. Proprietary software, on the other hand, requires either reverse engineering, getting your hands on illicit copies of the source code, or using a technique called 'fuzzing.' He further says that open source is more secure, but leaves you wondering whether enough people are really interested in securing open source code."
Captain Obvious (Score:5, Insightful)
Re:Captain Obvious (Score:2, Redundant)
Re:Captain Obvious (Score:5, Interesting)
Re:Captain Obvious (Score:5, Funny)
It's easier for others to see where you are going when they have their eyes open.
Second Corollary:
It's easier for others to see where you might go when they have their eyes open.
KFG
Re:Captain Obvious (Score:2)
I simply wonder if he is trying to make a security version of the "Call for Help" TV show/infocast.
Doublespeak ? (Score:4, Insightful)
When Microsoft says "making our stuff open source will make it easier to find vulnerabilities", people say "Stop FUDing, Microsoft"
I don't see how you can believe it when Mitnick says it and how you can refute it when Allchin says the same thing.
Re:Doublespeak ? (Score:5, Insightful)
ad (1)
Mitnick did not say "it's easier to hack" (I assume TFA/you mean "crack" [catb.org] here) which would mean that it's easier to get unauthorized access.
In fact TFA quoted Mitnick as saying that finding vulnerabilities in OSS code is easier, since it's easier to analyze for holes. This is true for both black-hats and white-hats, so it gets evened out somewhat. On the other hand, finding holes in closed source is harder for black-hats, but fixing them is impossible for white-hats, so overall this might put black-hats at an advantage.
And you leave out that OSS is not just "GPL the source and put it on a server". Mature OSS projects generally are modularized well, because parallel development is greatly hampered otherwise. Closed projects tend to be much dirtier in this respect.
Incidentally, this separation also helps secure coding.
ad (2)
It should not be a surprise that among > 1,000,000
Actually, what happens is this:
Some people say "duh", because, well, duh, but you leave out the supporting argument: while Mitnick's assertion is obviously true, TFA also left out the fact that holes are easier to fix in OSS, too.
Other people say "FUD", because they forget that Allchin is somewhat right: putting Windows in the open now, necessarily with insufficient preparation and code cleanup, would make it more insecure. But that does not mean that it couldn't be more secure had it been constructed in the open from the beginning.
And I can't believe there are idiots who modded you +5 Insightful.
Re:Doublespeak ? (Score:3, Informative)
Yes: http://en.wikipedia.org/wiki/False_dichotomy [wikipedia.org]. Anytime you create an 'excluded middle' it's a false dichotomy, so treating the actions of a group of individuals as a 'collective' with a single opinion and trying to point out where they are being 'inconsistent' ignores the fact that it's possible for a large group to have two or more subsets of people who believe different points of view are correct.
It also ignores t
Re:Doublespeak ? (Score:3, Insightful)
He didn't quite say that (in fact, he didn't really say a lot). My interpretation of his comment was basically that given two pieces of software with a similar number of security holes, it's easier to crack the open stuff (well, duh).
Of course, that's ignoring the fact that FOSS software _generally_ seems to be more secure than closed software. You can make up your mind as to why that is, but some thoughts:
1. FOSS software has more peo
Obligatory South Park response ... (Score:2)
LOL
Fuzzing and Obfuscation (Score:4, Interesting)
The crux of this attack is using a buffer overflow to gain superuser privileges. This might be trivial on Windows, so I'll relay the "la/ls" story to you regarding how to gain it on Linux. Part of the trick involves figuring out how to get an executable file from your machine onto another user's machine. Let's say you know some company or institution is running a webserver on their Unix/Linux machines, and you go to visit their site. Now, their code isn't completely up to date and there's a security hole in one of their web applications. You know (after toying around with said web app on your home machine) that certain large chunks of hex in a field will result in a submission that essentially writes your binary to their $HOME directory. The name of this file will be, of course, "la."
Now hopefully their home directory is like mine and it's full of crap. So they'll never notice the "la" file, but every day they use that machine, they type "ls" to display the files. One day, their finger slips and they type "la", resulting in the execution of your binary. Instantly, another executable is written, this time called "ps", and a thread is started that simply spinlocks on the processor, chewing up cycles. The machine might slow or freeze, but an admin will notice this process and go into the user's directory (as root) and type "ps -al" to see all the existing processes. Instead, it executes your "ps" virus, and subsequently the spinlocking stops with "ps" printed to output, with the superuser killing "la" and thinking everything is fixed. In the background, however, the "ps" process is active
I'm sure there's a hundred things wrong with what I've said, I'm not a hacker--I just like to point out possible security holes.
Improbable but not impossible.
One more thing about the article: the beauty of OSS is that it makes it impossible to rely on security through obfuscation [wikipedia.org], a major pitfall in application security design.
Re:Fuzzing and Obfuscation (Score:2, Insightful)
This is all assuming that the home dir or the working dir is in the path.
Re:Fuzzing and Obfuscation (Score:4, Informative)
Re:Fuzzing and Obfuscation (Score:3, Funny)
I feel there has to be a
Re:Fuzzing and Obfuscation (Score:4, Funny)
Re:Fuzzing and Obfuscation (Score:2, Interesting)
Re:Fuzzing and Obfuscation (Score:2, Informative)
The machine might slow or freeze but an admin will notice this process and go into the users directory (as root) and type "ps -al" to see all the existing processes. Instead, it executes your "ps" virus
Do any UNIX-style systems ship with the current directory in $PATH for root? That's a stupid thing to do and as far as I'm aware, this practice died out years ago for precisely the reason you describe.
Re:Fuzzing and Obfuscation (Score:3, Informative)
but that's because in Plan 9 there is no way to escalate privileges, because there aren't any privileges to escalate to.
Re:Fuzzing and Obfuscation (Score:5, Informative)
You mean, like what you said there:
The machine might slow or freeze but an admin will notice this process and go into the users directory (as root) and type "ps -al" to see all the existing processes. Instead, it executes your "ps" virus and subsequently, the spinlocking stops with "ps" printed to output with the super user killing "la" and thinking everything is fixed
Of course, unless the superuser deliberately destroyed the security of his Linux install and added "." to his PATH, this would never happen, as it would not execute the "ps" in the user's directory.
But I see your point.
Re:Fuzzing and Obfuscation (Score:2, Interesting)
Why? A ps will run from anywhere. I prefer running top, then selecting offending processes and killing them off as required.
Alternatively, set ulimits on user accounts and have the spinlock process kill itself.
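(For the curious, a minimal sketch of the ulimit idea in Python; the limit values here are arbitrary, not a recommendation:)

    import resource

    # Cap this process's own CPU time so a runaway spinlock gets killed
    # by the kernel instead of chewing cycles until an admin notices.
    soft, hard = 5, 10  # seconds of CPU time, illustrative values only
    resource.setrlimit(resource.RLIMIT_CPU, (soft, hard))

    # Past the soft limit the process receives SIGXCPU (fatal by default);
    # past the hard limit it gets SIGKILL, no questions asked.
    while True:
        pass  # the spinlock: dies within a few seconds under the limit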
"and type "ps -al" to see all the existing processes"
Quick question - which admins are stupid enough to include '.' in their path?
I would have thought it mu
Re:Fuzzing and Obfuscation (Score:2)
I've seen plenty do it - perhaps not in their login script, but I've definitely seen people add . to their path manually, when running a lot of stuff in the current dir and tired of typing
However, that was the first thing that sprung to my mind; sure, that's all reasonable, but . isn't in root's path by default (or indeed in that of most user accounts).
Re:Fuzzing and Obfuscation (Score:2)
These are easily overwritten in an attack situation, and they could be executed as root if I did 'su' instead of 'su -'. I always do the latter, though.
Makes no sense (Score:5, Informative)
I'm sure there's a hundred things wrong with what I've said, I'm not a hacker--I just like to point out possible security holes.
Let's dive into what *is* wrong...
First of all, files in your home directory are normally not in your $PATH on any Linux system. Anyone who has their system set up like this, *let alone* having their $HOME have priority over /sbin and /usr/sbin, deserves to be shot.
Secondly, a webserver should (and does by default in any distro I know of) run as the nobody/httpd/apache/someone user, and does not have a home directory. So any exploit in the web server would not allow you to write a 'la' binary anywhere.
Third, your whole attack scheme is just a big runaround for no reason. If you can write a binary called 'la', why wouldn't you just write it as 'ls' in the first place, instead of crossing your fingers and hoping he mistypes? And if you can write a binary to disk, you can also obviously execute it, so why don't you? Why would you wait around? Is it because you hope someone is going to log in as root and run it? Because if that is the case, you will be way out of luck, because root *never* has $HOME in his path (and the webserver shouldn't be able to write to /root anyway).
This isn't how these kinds of attacks work... what *usually* happens is that the buffer overflow allows one to write and execute files as an unprivileged user. The cracker does this to gain a remote shell on the machine as that unprivileged user. They then use the shell to try to find holes in other system services that may not be remotely exploitable, for example mysql or postgresql. If mysql is running locally and not set up right, they could use it to gain full superuser privilege by SELECTing into a file. Then all bets are off.
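To make the $PATH point concrete, here's a rough Python sketch of how lookup order decides which binary runs (simplified; real shells also consult builtins and hashed paths):

    import os

    def resolve(command, path_dirs):
        # First match in $PATH order wins, just like a shell.
        for d in path_dirs:
            candidate = os.path.join(d, command)
            if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
                return candidate
        return None

    # Sane root PATH, no "." anywhere: a trojan ./ps can never win.
    print(resolve("ps", ["/sbin", "/usr/sbin", "/bin", "/usr/bin"]))
    # Only if someone foolishly prepends "." does the attack work:
    print(resolve("ps", [".", "/sbin", "/usr/sbin", "/bin", "/usr/bin"]))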
A Slashdot Orange (Score:5, Funny)
Alexander "brunes69" de Large: Oy! Lookie what we have here, droogies
Droogies: [in unison] HE FORGOT ABOUT PERMISSIONS!
Alexander "brunes69" de Large: [bending over with his cane against his cod piece] That's right. And what happens to slashdotters we viddie that make mistakes?
Droogie A: We brow beat them into a bloody pulp
*Alex and the droogs continually beat the poor slashdotter while emitting "Singing in the Rain"*
eldavojohn: Please
Re:A Slashdot Orange (Score:2, Offtopic)
Re:Makes no sense (Score:2)
Not even in
Re:Makes no sense (Score:2)
You should really be running your servers chrooted
OpenBSD runs Apache chrooted by default and has 0 directories it can write to.
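For reference, the pattern looks something like this in Python (a sketch, assuming a /var/empty jail and the conventional "nobody" UID/GID of 65534; both vary by system):

    import os

    # A daemon that has finished its root-only setup (e.g. binding a
    # low port) locks itself into an empty jail and drops privileges,
    # so a later compromise can't read or write anything useful.
    os.chroot("/var/empty")   # requires root; "/" is now the empty jail
    os.chdir("/")             # don't keep a handle to the old tree open
    os.setgid(65534)          # drop group privileges first...
    os.setuid(65534)          # ...then user privileges, in that order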
. in the PATH (Score:2)
I dunno if this kind of thing is much used except by malevolent insiders. Same for buffer overruns; I haven't seen any buffer overruns do anything but crash a Solaris or Linux server i
Re:Makes no sense (Score:2)
Re:Fuzzing and Obfuscation (Score:2, Insightful)
Careful with the word impossible.
Can you really guarantee that for every OSS project, there are enough people looking through each bit of code trying to look for any "security through obscurity"-type issues?
If there are 1,000 submitters, most of whom are working on features, can you guarantee that everyone's code is gett
Re:Fuzzing and Obfuscation (Score:2)
Wise words, mate. [acm.org]
Re:Fuzzing and Obfuscation (Score:2)
Re:Fuzzing and Obfuscation (Score:2)
What is Fuzzing? (Score:5, Informative)
I wondered the same thing when I read the article, so I did some research. Fuzzing is:
What is fuzzing?
- Sending semi-random data to an application
- Semi-random: good enough that it'll look like valid data, bad enough that it might break stuff
- When people hear "fuzzing" they immediately think HTTP. THERE IS MORE TO FUZZING THAN JUST HTTP!!!
- You can fuzz:
-- Network protocols
-- Network stacks
-- Arguments, signals, stdin, environment variables, file descriptors
-- APIs (syscalls, library calls)
-- Files
Most of the time it is a waste of effort, but if you are "lucky" you could find a vulnerability, and maybe, with a little more research, a way to exploit the code.
More information can be found at this PDF Article - http://static.23.nu/md/Pictures/FUZZING.PDF [23.nu] (Very Large 90+ Pages)
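If you want to see how little there is to it, here's a toy mutation fuzzer in Python (the target program and seed file names are made up for illustration):

    import random, subprocess

    # Take valid input, flip a few bytes, and see if the target chokes.
    seed = open("sample.png", "rb").read()

    for i in range(1000):
        data = bytearray(seed)
        for _ in range(random.randint(1, 8)):   # a handful of mutations
            data[random.randrange(len(data))] = random.randrange(256)
        open("fuzzed.png", "wb").write(bytes(data))
        result = subprocess.run(["./target-parser", "fuzzed.png"])
        if result.returncode < 0:               # killed by a signal
            print("crash on iteration %d, signal %d" % (i, -result.returncode))
            break

Real fuzzers are much smarter about which mutations to try, but even this blind approach has turned up genuine holes in file parsers.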
Re:What is Fuzzing? (Score:2)
When I was at college I spent some of my time cracking software and learning about hacking. For me, the *real* point of doing that was the challenge of reverse-engineering the code. The same applied to reverse-engineering smart card protocols (which may be considered hacking =o)).
Now, if we talk about open source applications, I won't say it is "hacking"; I would call it "code auditing", because if you find a bug in any given OSS application by looking at the listing
Re:What is Fuzzing? (Score:3, Funny)
what makes you think it's so important to let us know... We all do that, for Christ's sake
Re:What is Fuzzing? (Score:2)
Re:What is Fuzzing? (Score:2)
Re:What is Fuzzing? (Score:2)
Re:What is Fuzzing? (Score:3, Interesting)
It may be tempting to throw out the fuzzing approach because it's not very smart; unlike things like source code analysis, fuzzing appears to be very undire
In other news... (Score:5, Insightful)
Just because Mitnick has said what thousands, nay, millions have said before doesn't mean it's new and exciting. Doesn't make it news.
Re:In other news... (Score:2)
When someone high-profile says something that lots of low-profile people are already saying, it is news because people who don't typically hear that thing said now have the opportunity to hear it.
Your friendly neighborhood Linux admin can say this stuff all he wants with his other admin buddies, or here on Slashdot. But that doesn't necessarily get heard by even one decision-maker. This will be heard by lots of them, if they are even lightly following online tech news.
How high-prof
Re:In other news... (Score:2)
Re:In other news... (Score:3, Funny)
Master of the obvious! (Score:5, Funny)
Once again proving his technical prowess!
http://religiousfreaks.com/ [religiousfreaks.com]
Re:Master of the obvious! (Score:2)
Or social intelligence. Since hacking proprietary code is a felony via the DMCA, he'd probably spend quite a bit of time indoors as a repeat felon.
Re:Master of the obvious! (Score:2, Funny)
Re:Master of the obvious! (Score:5, Insightful)
Wow, I have a better job than Mitnick, make more $$$ per year than him, don't have to fret with the fame, and I still think he knows less about hacking in today's world than I do. And I've never hacked a system in my life! But you're like most lemmings today who believe that if a person roams around talk shows and writes some books on hacking, he/she must be the de facto guru of hacking. Please. That's like saying somebody who robbed banks 60 years ago is an all-knowing pro at robbing the high-tech banks of today. Times change, and with them so do people.
Re:Master of the obvious! (Score:2)
Prefers? (Score:2, Insightful)
The empirical evidence suggests that people don't have an especially hard time cracking CSS.
I guess if you have the source you can grep for reads and examine them for overflow vulnerabilities, but I wonder how much easier even that would be vs. just trying it.
Ask a hacker a question, get a hacker answer (Score:4, Insightful)
Is this really all that surprising? If you've got a mentality of "how can I break this?", it's much easier to figure out how if you can look at how it's built. Unfortunately, having a hacker able to look at a system is not the same thing as having the original designers catch the issue. If you wait until hackers get ahold of it, they'll find ways to exploit the problem before the patch is in wide distribution. That's what makes this dangerous.
Thankfully, the majority of those who are looking at the code have less selfish reasons, and are happy to share any issues they see. Thus the "many-eyes" philosophy depends heavily on the good will of the common man. Personally, I wouldn't have it any other way.
Securing Open Source Code (Score:5, Interesting)
As of now, there's really no penalty for selling code that isn't secure. It's accepted (for some reason) that computer code will have holes, and you really, really have to have a horrible program before anyone will think of ditching it. Even then, if it's mission critical (all the more reason to be secure) it seems people are loath to switch to something else.
So as a coder for a Closed Source app., my motivations would be:
1. Make the boss happy. Get code done.
2. Once program A is done, start work on next money making program.
3. Patch when boss says it's necessary to patch.
For Open Source it's not that much better. The only real motivation to write good code is so that it's accepted into the project in the first place, and then, once accepted, so that everyone doesn't poke holes in your crappy code.
The difference is that people coding OSS are doing it because they want to, so hopefully have a little more motivation to look at the other code in their project. It's interesting to them, so they're a bit more likely IMO to look at it. The person getting paid has no incentive to look at the code (at least while on work time) unless the boss tells them to. Since rehashing old code doesn't usually make money, the only time to look at old code is when a patch is a necessity.
Re:Securing Open Source Code (Score:2, Interesting)
. . . called "craftsmanship."
KFG
Re:Securing Open Source Code (Score:2)
In the process of using it, I found a small bug, and fixed it and notified the author.
While I didn't need to tell the author, I had a number of reasons for telling him:-
Also, sometimes in companies I've found bugs by accident. Like, if the configuration database is wrong, and in the process of debugging, I've noticed somet
Re:Securing Open Source Code (Score:2)
Well, not just interesting... Most of the people who work on OSS projects do so because they need the project to do something for them. Not surprisingly, most of the people who work on Apache are webmasters. The guy who started PHP did so because he needed some tools for ma
There's plenty of Milhouse to go around. (Score:5, Funny)
I'd prefer to hack open source with FEW AUTHORS (Score:5, Insightful)
I think I'd agree with Kevin if he said:
"I'd prefer to hack open source with FEW AUTHORS."
There's no doubt that lots of eyes and a security focus have helped Apache, but there's lots of open source shitware (for example, just Google up a list of PHP messageboards) that lacks basic input validation controls, requires too much access to the operating system, uses plain-text or unsalted MD5 passwords, or contains other gaping holes.
Without those extra eyes helping out...yes, many open source projects are easier to hack than similar closed source projects.
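(The unsalted-MD5 hole, at least, is cheap to fix. A rough Python sketch, with illustrative parameter values:)

    import hashlib, hmac, os

    # Per-user random salt plus a deliberately slow derivation function,
    # so a stolen password table can't be cracked with a precomputed
    # dictionary of plain MD5 hashes.
    def hash_password(password):
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
        return salt, digest

    def check_password(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
        return hmac.compare_digest(candidate, digest)  # constant-time compare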
Re:I'd prefer to hack open source with FEW AUTHORS (Score:2, Insightful)
Indeed there is, and lack of recognition of this is one of the "weaknesses" of OSS, however, let me ask you this question:
How many people run this shitware?
Not much point in spending who knows how many hours going over code that nobody uses. The Mother of all UNIX Holes was found in GNU emacs, because that was someplace worth looking for one.
Thus the code that everybody uses gets harder faster.
KFG
Re:I'd prefer to hack open source with FEW AUTHORS (Score:2)
How many people run this shitware?
Often it doesn't matter. For example, if I'm trying to deface site XXX (or inject a form information grabber) and I see that it runs message board YYY, the first thing I do is try to get the source code of message board YYY. In other words, if I know what I'm doing, I'm not using a shotgun/Nessus approach anyway. Instead, I'm first going to drop by as an anonymous web user and see what I can use against you before I fire my first shot.
Re:I'd prefer to hack open source with FEW AUTHORS (Score:4, Insightful)
Actually, the answer is "many web hosting providers". If they allow users to upload and execute their own scripts on their site (and who doesn't, these days), they typically end up with several dozen copies of God knows what, because web designers find these things on their own and crib them into their own sites. The permissions set to allow these scripts to run are often open enough, or there is a powerful enough shared backend database, to do something interesting...
I disagree with his statement... (Score:3, Informative)
Oh, really? I think so.
In this day and age, with all of the security problems (especially with MS) and OSS trying to gain market share, I'd think that every OSS coder would be really mindful of any potential holes. Especially if he knew that another developer would be looking at it. I would be really embarrassed (if I were a developer) if I got an email saying something to the effect of "Hey dumbass, nice job of preventing buffer overflow there at line xxx in abcdef.c! Don't worry, no one will EVER exploit that hole!"
Re:I disagree with his statement... (Score:2)
Did you read every line of code to look for potential security flaws, or did you just write code of your own?
Do you think every single other coder involved in the project read every line of code you wrote, and made sure there was no way it could introduce security holes?
Re:I disagree with his statement... (Score:2)
Re:I disagree with his statement... (Score:3, Informative)
Unfortunate (Score:5, Funny)
Why does race have to enter every discussion on
Re:Unfortunate (Score:4, Funny)
Comment removed (Score:4, Insightful)
Conning != "Social Engineering" (Score:2, Interesting)
Re: (Score:2)
Re:How would it have helped Mitnick? (Score:2, Interesting)
I have walked into data centers and gotten let into the server rooms by security without showing any ID, or having an appointment, or even knowing anyone in the building. I could have destroyed a couple million dollars of equipment, put a server under my arm, and waved at the security guard at the front door, and they would have just waved back.
Point being
Re:How would it have helped Mitnick? (Score:5, Interesting)
Reminds me of the neat story (from Psychology of Computer Programming, I think) where a tiger team was trying to crack an installation's security (at the installation's request). Said installation ran IBM mainframes, and received patch tapes from IBM on a semi-regular basis. So the team wrote their own patch, put it on a tape, and sent it to the target along with a typical covering letter on IBM headed paper, and then waited for them to install their backdoor for them.
Which they did :)
Re: (Score:2)
His views have been proved empirically... (Score:3, Funny)
Why not do something CONSTRUCTIVE? (Score:2, Interesting)
These people are like art critics.
They can't write great code themselves, so they pick apart other people's. A valuable niche job, to be sure, but not deserving of some sort of "star" status of their own.
Why is there not more attention on the great developers? I don't see many interviews with kernel devs...
Re:Why not do something CONSTRUCTIVE? (Score:2)
There are plenty of interviews with Linus. Good developers get publishing deals. Also, interviews tend to get your book sold, not land you opportunities to get paid to code. When you write a book you're doing it to educate. Some people are good teachers. And then there are those who lend weight to the cliche: those who can't do . . .
Re:Why not do something CONSTRUCTIVE? (Score:2)
From TFA (Score:2)
Re:From TFA (Score:2)
Re:From TFA (Score:2)
Missed the Point (Score:5, Interesting)
We all know that security through obfuscation in cryptography is stupid: peer review illuminates the crevices the architect never conceived. But is all open source code subject to this same sort of peer review? If you've ever worked on an open source project, how much time do you sit down and pore over the code looking for security flaws?
Essentially, it's the same problem as with Wikipedia: peer review requires that 1) the skill of the peers matches or exceeds the skill of the author, 2) the peers are actually reviewing, and 3) the peers are trustworthy. It's the second criterion that Mitnick was questioning.
What's more, it seems like accidental (and very subtle) bugs result in most of the security holes that don't get noticed. Wouldn't it then be trivial for someone with a great amount of skill to simply insert a hole? Either by subtle manipulation of existing code or by direct implementation in a segment which they are responsible for coding. If it's done well, the 'oops, coding error!' excuse could always be proffered in the event the tampering was detected.
If I wanted to attack a system which I knew ran on OSS (and I had mad coding skillz), I think I would try to obtain some method of working on one of their software packages. Either directly or by 'acquiring' someone else's permissions if that was easier. Then I would insert a piece of backdoor code in a little used (or often used-'hidden in plain sight') code segment. Once the next release is running on that system, exploit the code, and get out. Depending on my goals, the operation could very likely be done before a hole is found and a patch is issued. As a small bonus anyone else installing that software would have the same vulnerability. Of course, some user level app won't be able to induce this scenario, but you get the idea.
Proprietary software doesn't have this vulnerability, inasmuch as the programmers are much more tightly regulated by a company which has legal and monetary interests in controlling its code base and holding its employees accountable. (Whether this actually happens is another discussion.)
For all the self-righteousness of the open source movement, I remain convinced that the primary reason that more open-source packages are not targeted for attack is that they are not an appealing target. Specific implementations are not in popular use (globally), or they are too close to home, meaning it's preferable to attack your enemy rather than your family.
Dude... (Score:2)
There is absolutely nothing in your little hypothetical situation that couldn't be accomplished in closed source as well. In actuality, it'd be easier, as the audits wouldn't be as intense. (Witness the WMF debacle for proof of something in closed-source software that should have been caught but wasn't.)
Simply put, what you claim isn't. But I'm confusing this discussion by including facts, aren't I?
Still more effects (Score:2)
The standard counterargument for most of them is "A similar potential exists in proprietary software also, but is less easy to detect and repair."
I don't want to flog this point too vigorously, because it's clear that in practice the quality and security of any software is derivative of many factors, both in the development environment and during its installation and operation.
We know
I think... (Score:2, Insightful)
Mitnick may be a smart guy, BUT... (Score:2)
What I mean is, in theory, he feels he can crack an OSS based box because he can analyse the source code, but in reality, it's easier to crack a proprietary box. So his theory doesn't appear to hold up to simple analysis of what happens in the real world.
It's kind of like the theory that SUVs are safer than other cars, which would appear to be common sense. But it falls apart when you con
Re:Mitnick may be a smart guy, BUT... (Score:2)
I've seen no conclusive (or reliable) means of measuring vulnerability statistics that show what OS is the most secure. Vulnerability rates are hard to trust, because many vendors don't report vulnerabilities the same way, nor is it always clear what vulns affect the default install, the properly locked down install, etc. (for instance, Gentoo consistently releases more
Which is a great technical advance... (Score:3, Funny)
Never understand when people say OSS is secure (Score:2, Insightful)
How and why?
I think people are deluded into thinking that because a project like Linux is secure, and that Linux is Open Source, ergo Open Source software must be secure. This is convoluted and dangerous logic.
I think OSS is the most insecure software out there. Think of it. Anybody could take RedHat's source code, create their own distro filled with back doors and zombie daemons, and then di
Re:Never understand when people say OSS is secure (Score:2)
Actually, Linux is more secure than Windows in spite of the fact that OSS is easier to hack than CSS, for reasons that have very little to do with the difference between OSS and CSS.
I suggest that factoring, i.e., modularity is a far more significant reason that Linux is more secure than Windows. The commercial interest of integrating featurism into the OS is probably the biggest source of security flaws in Windows.
If that turns out to be true, it suggests that OSS is more secure than CSS because the de
Re:Never understand when people say OSS is secure (Score:3, Insightful)
Yes.
How and why?
Because holes are more likely to be brought to your attention. If a good guy has access to your source, they may well look through it, and if they're doing that, they may well spot holes even if they weren't looking from a security standpoint and were just looking to improve your code. Whereas the only person who's going to bother looking for holes in a closed progra
And we care because? (Score:2)
a decade later, he's an industry pundit, and people pay attention to what he says. How many thousands of people did the same things Mitnick did, but didn't get caught?
Should we worry about their opinions too?
Other uses of source code (Score:2)
Get ahold of Digital Equipment Corporation's source code and use it to blackmail DEC employees into doing what you want or else you'll distribute the code.
So, Mitnick, were you ever indicted for that one?
Why listen to Mitnick? (Score:5, Informative)
Why? (Score:3, Interesting)
He proved he was a moron when he used the same MIN/ESN pair for his OKI the entire time Shimomura was tracing him down.
Re: Fuzzing... (Score:5, Funny)
For teenagers it means to skip shaving for a few days.
Not sure how that helps crack software, though. Maybe it gives you a 1337 look that inspires more experienced crackers to share their secrets.
Re: Fuzzing... (Score:2)
Re:Fuzzing... (Score:2)
Fuzzing is feeding a program with malformed input to see whether there are any vulnerabilities such as buffer overruns which may be exploitable. Of course, you'd know that if you'd read the article...
Re:obvious but often denied (Score:2)
Err, no. (Score:5, Insightful)
The fact that Mitnick says this doesn't damage the case for open source at all. The Captain Obvious comments are just pointing out that Mitnick is just saying, "I like easier work over harder work." Or maybe, "It's really fucking tedious to analyze a binary without the source." Does that stop people from finding bizarre bugs in closed source code? [slashdot.org] Absolutely not.
Dangerous mistake. (Score:4, Insightful)
Now, for what it's worth, much that seems obvious isn't true. It seems like a good notion that open software allows people to more easily figure out how to fix holes. This is certainly true. However, it also makes it easier for hackers to find holes.
The fact is, assuming we had two nominally identical projects, one closed-source and one open-source, bugs in the open one would be easier to find by *everybody*, good and bad. The question, which Mitnick alluded to, is this: are there sufficiently more "good-guy" eyes on the code to ensure that bugs are found and fixed more quickly, to account for the fact that bad guys can find bugs faster?
The answer to that question isn't a guaranteed "Yes." In many cases it works, but I don't think in all. I realize that people around here like the notion of free software. I do too. But that doesn't mean that it works in practice the way it does in theory. We have to actually question how many people are actively maintaining the code compared to how many "bad guys" are looking to exploit it. I think for most projects this ends up working for us, but it's not guaranteed.
In other words, taking for granted that OSS is more secure because it's OSS is a dangerous mistake.
Re:Stating the obvious.. (Score:2)
Maybe WE don't. But I know of some people who might.. *cough* mcrsft *cough*.