
Mitnick on OSS

comforteagle writes "Infamous cracker Kevin Mitnick (turned security consultant) has come out to say that he'd prefer to 'hack' open source code vs. proprietary closed code. Mitnick says that open source software is easier to analyse for security holes, since you can see the code. Proprietary software, on the other hand, requires either reverse engineering, getting your hands on illicit copies of the source code, or using a technique called 'fuzzing.' He further says that open source is more secure, but leaves you wondering whether enough people are really interested in securing open source code."
  • by eldavojohn ( 898314 ) * <eldavojohn@noSpAM.gmail.com> on Monday January 30, 2006 @11:37AM (#14598580) Journal
    I figured I'd add a little more on how "fuzzing" works, since the article left me a little disappointed as to what it actually is. There are a few things online about it, including a decent white paper [events.ccc.de] written by Ilja van Sprundel. There's also a large amount of fuzzing going on to test the security of WAP. At heart, it means feeding a program malformed or random input until it misbehaves--the classic way of flushing out a standard buffer overflow [wikipedia.org] vulnerability.
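
    Roughly, a dumb fuzzer can be this simple. Here's a sketch in Python (illustrative only--the "./parse_image" target and "sample.png" seed file are made-up names; point it at whatever parser you want to test):

      import random
      import subprocess

      SEED_FILE = "sample.png"                   # any known-good input (hypothetical name)
      TARGET = ["./parse_image", "fuzzed.bin"]   # hypothetical program under test

      seed = open(SEED_FILE, "rb").read()

      for attempt in range(10000):
          data = bytearray(seed)
          # Corrupt a handful of random bytes in an otherwise valid input.
          for _ in range(random.randint(1, 16)):
              data[random.randrange(len(data))] = random.randrange(256)
          with open("fuzzed.bin", "wb") as f:
              f.write(data)
          result = subprocess.run(TARGET, capture_output=True)
          # On Unix a negative return code means the process died on a signal
          # (e.g. -11 is SIGSEGV), which often points at memory corruption.
          if result.returncode < 0:
              with open("crash_%d.bin" % attempt, "wb") as f:
                  f.write(data)
              print("crash on attempt", attempt, "signal", -result.returncode)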

    The crux of this attack is using a buffer overflow to gain superuser privileges. This might be trivial on Windows, so I'll relay the "la/ls" story to you regarding how to gain it in Linux. The first part of this trick involves figuring out how to get an executable file from your machine onto another user's machine. Let's say you know some company or institution is running a webserver on their unix/linux machines, and you go to visit their site. Now, their code isn't completely up to date and there's a security hole in one of their web applications. You know (after toying around with said web app on your home machine) that certain large chunks of hex in a field will result in a submission that essentially writes your binary to their $HOME directory. The name of this file will be, of course, "la."

    Now hopefully their home directory is like mine and it's full of crap. So they'll never notice the "la" file, but every day they use that machine, they type "ls" to list their files. One day, their finger slips and they type "la," resulting in the execution of your binary. Instantly, another executable is written, this time called "ps," and a thread is started that simply spinlocks on the processor--chewing up cycles. The machine might slow or freeze, but an admin will notice this, go into the user's directory (as root), and type "ps -al" to see all the existing processes. Instead, the shell executes your "ps" trojan: the spinlocking stops, plausible-looking "ps" output is printed, and the superuser kills "la" thinking everything is fixed. In the background, however, the "ps" process is active ... silently idling, waiting to carry out its malicious purpose ...

    I'm sure there's a hundred things wrong with what I've said, I'm not a hacker--I just like to point out possible security holes.

    Improbable but not impossible.

    One more thing about the article: the beauty of OSS is that it makes it impossible to rely on security through obfuscation [wikipedia.org]--a major pitfall in application security design.
  • by Alcimedes ( 398213 ) on Monday January 30, 2006 @11:45AM (#14598640)
    To be honest, when you look at the incentive for securing OSS vs Closed Source code, neither one is all that enticing.

    As of now, there's really no penalty for selling code that isn't secure. It's accepted (for some reason) that computer code will have holes, and you really, really have to have a horrible program before anyone will think of ditching it. Even then, if it's mission critical (all the more reason to be secure), it seems people are loath to switch to something else.

    So as a coder for a Closed Source app, my motivations would be:

    1. Make the boss happy. Get code done.
    2. Once program A is done, start work on next money making program.
    3. Patch when boss says it's necessary to patch.

    For Open Source it's not that much better. The only real motivations to write good code are getting it accepted into the project in the first place, and then, once it's accepted, not having everyone poke holes in your crappy code.

    The difference is that people coding OSS are doing it because they want to, so they hopefully have a little more motivation to look at the other code in their project. It's interesting to them, so they're a bit more likely IMO to look at it. The person getting paid has no incentive to look at the code (at least while on work time) unless the boss tells them to. Since rehashing old code doesn't usually make money, the only time to look at old code is when a patch is a necessity.
  • by jcaren ( 862362 ) on Monday January 30, 2006 @11:55AM (#14598736)
    "The machine might slow or freeze but an admin will notice this process and go into the users directory (as root)"

      Why? - a ps will run from anywhere. I prefer running top - then selecting
      the offending processes and killing them off as required.
      Alternatively, set ulimits on user accounts and have the spinlock process
      kill itself.
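
      For instance, the ulimit idea in Python terms (a sketch; shells do the same thing with "ulimit -t"):

        import resource

        # Cap CPU time so a runaway spin loop is killed by the kernel
        # (SIGXCPU at the soft limit) instead of chewing cycles until an
        # admin notices. Limits are in seconds of CPU time.
        resource.setrlimit(resource.RLIMIT_CPU, (2, 4))  # soft 2s, hard 4s

        x = 0
        while True:   # stand-in for the malicious spin loop
            x += 1    # dies with SIGXCPU after ~2 seconds of CPU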

    "and type "ps -al" to see all the existing processes"

      Quick question - which admins are stupid enough to include '.' in their path?

    I would have thought it much easier to use a buffer/encoding overrun in specific daemons (named/sshd) to get root privs. This assumes you are not running a UML instance for external services such as DNS, where you can run a live iso/fs match to detect and report "infections".

    I lurve UML :-)

  • by Anonymous Coward on Monday January 30, 2006 @12:02PM (#14598792)
    I fail to understand the obsession with hackers and security!

    These people are like art critics.

    They can't write great code themselves, so they pick apart other people's. A valuable niche job to be sure, but not deserving of some sort of "star" status of their own.

    Why is there not more attention on the great developers? I don't see many interviews with kernel devs...
  • by C10H14N2 ( 640033 ) on Monday January 30, 2006 @12:04PM (#14598820)
    Can we please stop calling common conning "social engineering"? The term itself is a con, meant to make a common shyster seem like a legitimate professional. Unless he was involved in, say, eugenics or public education, this term painfully overstates the actions and qualifications of its practitioners.
  • by kfg ( 145172 ) on Monday January 30, 2006 @12:07PM (#14598846)
    For Open Source it's not that much better. The only real motivation to write good code is. . .

    . . . called "craftsmanship."

    KFG
  • by cli_man ( 681444 ) on Monday January 30, 2006 @12:16PM (#14598944) Homepage
    I have often said it is easier to just ask for a password than to try to get it by brute force. The same could be said for most any computer security.

    I have walked into data centers and gotten let into the server rooms by security without showing any ID, or having an appointment, or even knowing anyone in the building. I could have destroyed a couple million dollars of equipment, put a server under my arm, and waved at the security guard at the front door, and they would have just waved back.

    Point being, if you want into a network, why waste the time going through code looking for vulnerabilities or trying to brute force your way in somewhere? Just submit a patch with a backdoor or ask for the password. Many times you will probably get in.

    As a sidenote, at the data center I mentioned above I was authorized to be there doing work--just nobody there knew that. And I am not a cracker; I do work a good bit in computer security, though, which means testing the systems I put in place.
  • Missed the Point (Score:5, Interesting)

    by geekyMD ( 812672 ) on Monday January 30, 2006 @12:17PM (#14598951)
    All of you who are commenting that this is an obvious idea may be missing the point.

    We all know that security through obfuscation in cryptography is stupid: peer review illuminates the crevices the architect never conceived. But is all open source code subject to this same sort of peer review? If you've ever worked on an open source project, how much time do you sit down and pore over the code looking for security flaws?

    Essentially, it's the same problem with Wikipedia: peer-review requires 1) the skill of the peers matches or exceeds the skill of the author, and 2) peers are actually reviewing, and 3) peers are trustworthy. It's the second criterion that Mitnick was questioning.

    What's more, it seems like accidental (and very subtle) bugs result in most security holes that don't get noticed. Wouldn't it then be trivial for someone with a great amount of skill to simply insert a hole? Either by subtle manipulation of existing code or by direct implementation in a segment which they are responsible for coding. If it's done well, the 'oops, coding error!' excuse could always be proffered in the event the tampering was detected.

    If I wanted to attack a system which I knew ran on OSS (and I had mad coding skillz), I think I would try to obtain some method of working on one of their software packages. Either directly or by 'acquiring' someone else's permissions if that was easier. Then I would insert a piece of backdoor code in a little-used (or often-used, 'hidden in plain sight') code segment. Once the next release is running on that system, exploit the code, and get out. Depending on my goals, the operation could very likely be done before a hole is found and a patch is issued. As a small bonus, anyone else installing that software would have the same vulnerability. Of course, some user-level app won't enable this scenario, but you get the idea.

    Proprietary software doesn't have this vulnerability insomuch as the programmers are much more tightly regulated by a company that has legal and monetary interests in controlling its code base and holding its employees accountable (whether this actually happens is another discussion). ;)

    For all the self-righteousness of the open source movement, I remain convinced that the primary reason more open-source packages are not targeted for attack is that they are not an appealing target. Specific implementations are not in popular use (globally), or they are too close to home--meaning it's preferable to attack your enemy rather than your family.
  • by Tim Browse ( 9263 ) on Monday January 30, 2006 @01:11PM (#14599462)
    Point being, if you want into a network why waste the time going though code looking for vunerbilities or trying to brute force your way in somewhere, just submit a patch with a backdoor or ask for the password. Many times you will probably get in.

    Reminds me of the neat story (from Psychology of Computer Programming, I think) where a tiger team was trying to crack an installation's security (at the installation's request). Said installation ran IBM mainframes, and received patch tapes from IBM on a semi-regular basis. So the team wrote their own patch, put it on a tape, and sent it to the target along with a typical covering letter on IBM headed paper, and then waited for them to install their backdoor for them.

    Which they did :)

  • Re:Captain Obvious (Score:5, Interesting)

    by brunson ( 91995 ) on Monday January 30, 2006 @01:25PM (#14599585) Homepage
    Besides, Mitnick did most of his "hacking" through social engineering, not technical exploits.
  • by cli_man ( 681444 ) on Monday January 30, 2006 @01:31PM (#14599628) Homepage
    Of course it takes a lot more guts to try some of this stuff for real. When you know that if you get caught you can get out--because you were allowed to try to get around the system--you can try much riskier stuff.

    I guess that is the difference between actual interaction with the users, like shipping a tape or walking into a data center, as opposed to sending out a mass email phishing for info. You can get caught either way, but tracking down a few fake email addresses with bad contact info, etc., is way harder than the security guard just walking you into a room and locking you in until they get some real police there.
  • Re:What is Fuzzing? (Score:3, Interesting)

    by KrispyKringle ( 672903 ) on Monday January 30, 2006 @02:02PM (#14599879)
    Why do you say it's a waste of time? The vast majority of vulnerabilities that lead to code execution are buffer overflows resulting from malformed input--things like file parsers that don't safely handle invalid files, network stacks that don't properly parse malformed packets, and so forth. These are all exactly the sorts of things that fuzzers catch.

    It may be tempting to throw out the fuzzing approach because it's not very smart; unlike source code analysis, fuzzing appears to be very undirected, and a single run of a fuzzer probably won't catch anything. But the advantage of fuzzing is that it can be done without any guidance; you can set up a fuzzer, let it run on the target for a day or two, and log the things that make the target crash. Those are your buffer overflows, and you found them much more easily than with automated source code analysis.

    Further, source code analysis is only good for checking for very specific types of flaws; for instance, having your source code analyzer check for use of "strcpy" is fine, but why not just use a more secure function ("strncpy")? Things that can be easily added to automated source code scanners can just as easily be phased out. Fuzzing, on the other hand, has the advantage of potentially (if done right) reaching deep into the code and, because it doesn't involve checking for some predefined blacklist of bad things to do, finding problems that nobody knew existed.
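
    To make that concrete, here's about all a blacklist-style scanner amounts to (a toy sketch in Python; the function list is just the usual suspects):

      import re
      import sys

      # Flag calls to C string functions that are easy to misuse. The catch,
      # as noted above: this only finds what it already knows to look for.
      RISKY = re.compile(r"\b(strcpy|strcat|sprintf|gets)\s*\(")

      for path in sys.argv[1:]:
          with open(path, errors="replace") as src:
              for lineno, line in enumerate(src, 1):
                  if RISKY.search(line):
                      print("%s:%d: %s" % (path, lineno, line.strip()))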

    And given how quickly you did your research, I'm a tad skeptical about your expertise.
  • by Schraegstrichpunkt ( 931443 ) * on Monday January 30, 2006 @02:04PM (#14599893) Homepage
    IIRC, old versions of Slackware (3.5) and Red Hat Linux (5.1) had "." in their default PATH. I remember because I didn't learn about "./" until I switched to Debian.
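
    For anyone who hasn't seen why that's dangerous, here's a small Python illustration (Unix only; the system paths are typical but may differ on your box):

      import os
      import shutil
      import stat
      import tempfile

      # Drop a trojan named "ls" into a scratch directory and make it
      # executable, then ask which file a PATH lookup would actually run.
      workdir = tempfile.mkdtemp()
      trojan = os.path.join(workdir, "ls")
      with open(trojan, "w") as f:
          f.write("#!/bin/sh\necho trojan ran instead of /bin/ls\n")
      os.chmod(trojan, os.stat(trojan).st_mode | stat.S_IXUSR)

      os.chdir(workdir)
      # With "." ahead of the system directories, the local trojan shadows
      # the real binary; without it, the real one wins.
      print(shutil.which("ls", path=".:/usr/bin:/bin"))  # -> ./ls (the trojan)
      print(shutil.which("ls", path="/usr/bin:/bin"))    # -> the real ls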
  • Why? (Score:3, Interesting)

    by GodBlessTexas ( 737029 ) on Monday January 30, 2006 @02:38PM (#14600186) Journal
    Why do people listen to Kevin Mitnick on technical issues? He never once wrote a single line of code. He never once used anything he himself had created. All he was good at was using other people's tools, making him a glorified script kiddie with connections to get the tools he needed. The only difference between him and your average script kiddie is that he had specific targets that usually had something he wanted, which motivated his attacks, instead of just randomly hitting vulnerable systems.

    He proved he was a moron when he used the same MIN/ESN pair for his OKI the entire time Shimomura was tracking him down.

"May your future be limited only by your dreams." -- Christa McAuliffe

Working...