Mitnick on OSS
comforteagle writes "Infamous cracker Kevin Mitnick (turned security consultant) has come out to say that he'd prefer to 'hack' open source code over proprietary closed code. Mitnick says that open source software is easier to analyse for security holes, since you can see the code. Proprietary software, on the other hand, requires either reverse engineering, getting your hands on illicit copies of the source code, or using a technique called 'fuzzing.' He goes on to say that open source is more secure, but leaves open the question of whether enough people are really interested in securing open source code."
Fuzzing and Obfuscation (Score:4, Interesting)
The crux of this attack is using a buffer overflow to gain superuser privileges. This might be trivial on Windows, so I'll relay the "la"/"ls" story to you regarding how to gain it on Linux. Part of this trick involves figuring out how to get an executable file from your machine onto another user's machine. Let's say you know some company or institution is running a webserver on their Unix/Linux machines, and you go to visit their site. Now, their code isn't completely up to date and there's a security hole in one of their web applications. You know (after toying around with said web app on your home machine) that certain large chunks of hex in a field will result in a submission that essentially writes your binary to their $HOME directory. The name of this file will be, of course, "la."
Now hopefully their home directory is like mine and it's full of crap. So they'll never notice the "la" file, but every day they use that machine, they type "ls" to display the files. One day, their finger slips and they type "la", resulting in the execution of your binary. Instantly, another executable is written, this time called "ps", and a thread is started that simply spinlocks on the processor--chewing up cycles. The machine might slow or freeze, but an admin will notice this and go into the user's directory (as root) and type "ps -al" to see all the existing processes. Instead, it executes your "ps" virus; the spinlocking stops, fake "ps" output is printed, and the superuser kills "la" thinking everything is fixed. In the background, however, the "ps" process remains active.
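The whole trick above hinges on the victim's shell searching a writable directory (like "." or $HOME) before the system directories. A minimal sketch of that path-shadowing, with hypothetical file names, using Python's `shutil.which` to stand in for the shell's lookup:

```python
import os
import shutil
import stat
import tempfile

# Hypothetical demo: a planted "ps" in a writable directory shadows the
# real /bin/ps whenever that directory comes first in PATH.
with tempfile.TemporaryDirectory() as planted_dir:
    fake_ps = os.path.join(planted_dir, "ps")
    with open(fake_ps, "w") as f:
        f.write("#!/bin/sh\necho 'nothing to see here'\n")
    os.chmod(fake_ps, stat.S_IRWXU)  # make the trojan executable

    # Simulate an admin whose PATH begins with the compromised directory
    bad_path = planted_dir + os.pathsep + os.environ.get("PATH", "")
    resolved = shutil.which("ps", path=bad_path)
    print(resolved)  # the planted "ps", not the system one
```

The same lookup with a sane PATH (no writable directory in front) would resolve to the real binary, which is why the admin question below about '.' in the path matters.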
I'm sure there's a hundred things wrong with what I've said, I'm not a hacker--I just like to point out possible security holes.
Improbable but not impossible.
One more thing about the article: the beauty of OSS is that it makes security through obscurity [wikipedia.org] impossible--a major pitfall in application security design.
Securing Open Source Code (Score:5, Interesting)
As of now, there's really no penalty for selling code that isn't secure. It's accepted (for some reason) that computer code will have holes, and a program really has to be horrible before anyone will think of ditching it. Even then, if it's mission critical (all the more reason to be secure), it seems people are loath to switch to something else.
So as a coder for a closed source app, my motivations would be:
1. Make the boss happy. Get code done.
2. Once program A is done, start work on next money making program.
3. Patch when boss says it's necessary to patch.
For Open Source it's not that much better. The only real motivation to write good code is that it gets accepted into the project in the first place, and that once accepted, people don't poke holes in your crappy code.
The difference is that people coding OSS are doing it because they want to, so they hopefully have a little more motivation to look at the other code in their project. It's interesting to them, so they're a bit more likely, IMO, to look at it. The person getting paid has no incentive to look at the code (at least on work time) unless the boss tells them to. Since rehashing old code doesn't usually make money, the only time to look at old code is when a patch is a necessity.
Re:Fuzzing and Obfuscation (Score:2, Interesting)
Why? A "ps" will run from anywhere. I prefer running top, then selecting the offending processes and killing them off as required.
Alternatively, set ulimits on user accounts and let the spinlocking process run into the limit and get killed automatically.
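The ulimit suggestion can be sketched from Python on a POSIX system: cap the child's CPU time with `RLIMIT_CPU`, and the kernel terminates the spinlocking process by signal, no admin intervention needed. The one-second budget here is a hypothetical value for illustration.

```python
import os
import resource

def spin_under_cpu_limit(seconds: int) -> int:
    """Fork a child that applies a CPU rlimit and then busy-loops."""
    pid = os.fork()
    if pid == 0:
        # child: soft and hard CPU-time limits, then spin like "la" does
        resource.setrlimit(resource.RLIMIT_CPU, (seconds, seconds))
        while True:
            pass
    _, status = os.waitpid(pid, 0)  # parent: wait for the kernel to act
    return status

status = spin_under_cpu_limit(1)
# the child should be terminated by a signal (SIGXCPU/SIGKILL),
# not exit normally, once it exceeds its CPU budget
print(os.WIFSIGNALED(status))
```

This is the same mechanism `ulimit -t` applies shell-wide; the rlimit is inherited by everything the user launches, including a trojan binary.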
"and type "ps -al" to see all the existing processes"
Quick question: which admins are stupid enough to include '.' in their path?
I would have thought it much easier to use a buffer/encoding overrun in a specific daemon (named/sshd) to get root privs. This assumes you are not running a UML instance for external services such as DNS--you can run a live ISO/filesystem match to detect and report "infections".
I lurve UML
Why not do something CONSTRUCTIVE? (Score:2, Interesting)
These people are like art critics.
They can't write great code themselves, so they pick apart other people's. A valuable niche job, to be sure, but not deserving of some sort of "star" status of their own.
Why is there not more attention on the great developers? I don't see many interviews of kernel devs...
Conning != "Social Engineering" (Score:2, Interesting)
Re:Securing Open Source Code (Score:2, Interesting)
. . . called "craftsmanship."
KFG
Re:How would it have helped Mitnick? (Score:2, Interesting)
I have walked into data centers and gotten let into the server rooms by security without showing any ID, having an appointment, or even knowing anyone in the building. I could have destroyed a couple million dollars of equipment, put a server under my arm, and waved at the security guard at the front door, and they would have just waved back.
Point being, if you want into a network, why waste the time going through code looking for vulnerabilities or trying to brute force your way in somewhere? Just submit a patch with a backdoor, or ask for the password. Many times you will probably get in.
As a sidenote, I was actually authorized to be in the data center I mentioned above doing work--just nobody there knew that. And I am not a cracker; I do work a good bit in computer security, though, which means testing the systems I put in place.
Missed the Point (Score:5, Interesting)
We all know that security through obscurity in cryptography is stupid: peer review illuminates the crevices the architect never conceived. But is all open source code subject to this same sort of peer review? If you've ever worked on an open source project, how much time do you sit down and pore over the code looking for security flaws?
Essentially, it's the same problem as with Wikipedia: peer review requires that 1) the skill of the peers matches or exceeds the skill of the author, 2) the peers are actually reviewing, and 3) the peers are trustworthy. It's the second criterion that Mitnick was questioning.
What's more, it seems like accidental (and very subtle) bugs account for most of the security holes that don't get noticed. Wouldn't it then be trivial for someone with a great amount of skill to simply insert a hole, either by subtle manipulation of existing code or by direct implementation in a segment they are responsible for coding? If it's done well, the 'oops, coding error!' excuse could always be proffered in the event the tampering was detected.
If I wanted to attack a system which I knew ran on OSS (and I had mad coding skillz), I think I would try to obtain some method of working on one of their software packages, either directly or by 'acquiring' someone else's permissions if that was easier. Then I would insert a piece of backdoor code in a little-used (or often-used, 'hidden in plain sight') code segment. Once the next release is running on that system, exploit the code and get out. Depending on my goals, the operation could very likely be done before the hole is found and a patch is issued. As a small bonus, anyone else installing that software would have the same vulnerability. Of course, some user-level app won't be able to induce this scenario, but you get the idea.
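A hypothetical illustration of the kind of deniable flaw described above (all names and values invented here): comparing only a *prefix* of a password hash reads like sloppy truncation, but it quietly collapses the attacker's search space from 256 bits to 32.

```python
import hashlib

# toy user table; the password "correct horse" is purely illustrative
USERS = {"alice": hashlib.sha256(b"correct horse").hexdigest()}

def verify(user: str, password: str) -> bool:
    supplied = hashlib.sha256(password.encode()).hexdigest()
    stored = USERS.get(user, "")
    # "oops, coding error!": [:8] compares 8 hex chars (32 bits)
    # instead of the full 64-character digest
    return bool(stored) and stored[:8] == supplied[:8]

print(verify("alice", "correct horse"))  # True: legit logins still work
```

Legitimate users never notice, automated tests pass, and a reviewer who spots it has no way to prove the truncation was deliberate.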
Proprietary software doesn't have this vulnerability insofar as its programmers are much more tightly regulated by a company that has legal and monetary interests in controlling its code base and holding its employees accountable (whether this actually happens is another discussion).
For all the self-righteousness of the open source movement, I remain convinced that the primary reason more open-source packages are not targeted for attack is that they are not an appealing target. Specific implementations are not in popular use (globally), or they are too close to home--meaning it's preferable to attack your enemy rather than your family.
Re:How would it have helped Mitnick? (Score:5, Interesting)
Reminds me of the neat story (from Psychology of Computer Programming, I think) where a tiger team was trying to crack an installation's security (at the installation's request). Said installation ran IBM mainframes, and received patch tapes from IBM on a semi-regular basis. So the team wrote their own patch, put it on a tape, and sent it to the target along with a typical covering letter on IBM headed paper, and then waited for them to install their backdoor for them.
Which they did :)
Re:Captain Obvious (Score:5, Interesting)
Re:How would it have helped Mitnick? (Score:1, Interesting)
I guess that is the difference between actual interaction with the users--like shipping a tape or walking into a data center--as opposed to sending out a mass email phishing for info. You can get caught either way, but tracking down a few fake email addresses with bad contact info is way harder than the security guard just walking you into a room and locking you in until they get some real police there.
Re:What is Fuzzing? (Score:3, Interesting)
It may be tempting to throw out the fuzzing approach because it's not very smart; unlike things like source code analysis, fuzzing appears to be very undirected, and a single run of a fuzzer probably won't catch anything. But the advantage of fuzzing is that it can be done without any guidance; you can set up a fuzzer, let it run on the target for a day or two, and log the things that make the target crash. Those are your buffer overflows, and you found them much more easily than with automated source code analysis.
Further, source code analysis is only good for checking for very specific types of flaws; for instance, having your source code analyzer check for use of "strcpy" is fine, but why not just use a more secure function ("strncpy")? Things that can be easily added to automated source code scanners can just as easily be phased out. Fuzzing, on the other hand, has the advantage of potentially (if done right) reaching deep into the code and, because it doesn't involve checking for some predefined blacklist of bad things to do, finding problems that nobody knew existed.
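The undirected approach described above can be sketched in a few lines. This is a toy, not a real fuzzer: the "target" is a hypothetical parser with a planted fixed-size-buffer bug, and the fuzzer just throws random byte strings at it and logs whatever blows up.

```python
import random

BUF_SIZE = 16  # the target's hypothetical fixed buffer

def fragile_parse(data: bytes) -> bytes:
    buf = bytearray(BUF_SIZE)
    for i, b in enumerate(data):  # unchecked copy: the planted overflow
        buf[i] = b                # raises IndexError past BUF_SIZE
    return bytes(buf)

def fuzz(target, runs: int = 1000, seed: int = 1) -> list:
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        # completely undirected: random length, random bytes
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(32)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, exc))  # an input worth investigating
    return crashes

crashes = fuzz(fragile_parse)
print(len(crashes) > 0)  # the dumb approach still finds the bug
```

Note the fuzzer knows nothing about the target's internals or any blacklist of bad functions; every crashing input it logs is exactly the kind of overlong input a real attacker would refine into an exploit.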
And given how quickly you did your research, I'm a tad skeptical about your expertise.
Re:Fuzzing and Obfuscation (Score:2, Interesting)
Why? (Score:3, Interesting)
He proved he was a moron when he used the same MIN/ESN pair for his OKI the entire time Shimomura was tracking him down.