
Mitnick on OSS

Posted by Hemos
from the hacking-it-out dept.
comforteagle writes "Infamous cracker Kevin Mitnick (turned security consultant) has come out to say that he'd prefer to 'hack' open source code vs proprietary closed code. Mitnick says that open source software is easier to analyse for security holes, since you can see the code. Proprietary software, on the other hand, requires either reverse engineering, getting your hands on illicit copies of the source code, or using a technique called 'fuzzing'. He further says that open source is more secure, but leaves you wondering whether enough people are really interested in securing open source code."
This discussion has been archived. No new comments can be posted.

  • Captain Obvious (Score:5, Insightful)

    by Fusen (841730) * on Monday January 30, 2006 @10:37AM (#14598577)
    In other news, it's easier to see where you are going when you have your eyes open.
    • by IAmTheDave (746256)
      Seriously. I know it's Slashdot, but this particular nugget of wisdom - even from beloved Kevin Mitnick - doesn't really count as news.
    • by kfg (145172) on Monday January 30, 2006 @11:03AM (#14598809)
      First Corollary:

      It's easier for others to see where you are going when they have their eyes open.

      Second Corollary:

      It's easier for others to see where you might go when they have their eyes open.

      KFG
    • Next week he will announce that it is much easier to add RAM to your computer if you remove the cover or access door, and that your computer is much more insecure if it is currently ON.

      I simply wonder if he is trying to make a security version of the "Call for Help" TV show/infocast.

    • Doublespeak ? (Score:4, Insightful)

      by bmajik (96670) <matt@mattevans.org> on Monday January 30, 2006 @12:38PM (#14599683) Homepage Journal
      So when Mitnick says it is easier to hack OSS software, people say "duh"

      When Microsoft says "making our stuff open source will make it easier to find vulnerabilities", people say "Stop FUDing, Microsoft"

      I don't see how you can believe it when Mitnick says it and how you can refute it when Allchin says the same thing.

      • Re:Doublespeak ? (Score:5, Insightful)

        by Knuckles (8964) <knucklesNO@SPAMdantian.org> on Monday January 30, 2006 @01:59PM (#14600350)
        You can't believe it because you (1) are making up an argument for the purpose of refuting it, commonly called a strawman [fallacyfiles.org], and (2) treat a collection of people as an individual. (Is there a fallacy name for this too?)

        ad (1)
        Mitnick did not say "it's easier to hack" (I assume TFA/you mean "crack" [catb.org] here) which would mean that it's easier to get unauthorized access.

        In fact TFA quoted Mitnick as saying that finding vulnerabilities in OSS code is easier, since it's easier to analyze for holes. This is true for both black-hats and white-hats, so it gets evened out somewhat. On the other hand, finding holes in closed source is harder for black-hats, but fixing them is impossible for white-hats, so overall this might put black-hats at an advantage.

        And you leave out that OSS is not just "GPL the source and put it on a server". Mature OSS projects generally are well modularized, because parallel development is greatly hampered otherwise. Closed projects tend to be much dirtier in this respect.
        Incidentally, this separation also helps secure coding.

        ad (2)
        It should not be a surprise that among > 1,000,000 /. users, you find both people who say "duh" in the one story, and others who say "Stop FUDding" in the other.

        Actually, what happens is this:
        Some people say "duh", because, well, duh, but you leave out the supporting argument: while Mitnick's assertion is obviously true, TFA left out that it is also easier to fix.
        Other people say "FUD", because they forget that Allchin is somewhat right: putting Windows in the open now, necessarily with insufficient preparation and code cleanup, would make it more insecure. But that does not mean that it couldn't be more secure had it been constructed in the open from the beginning.

        And I can't believe there are idiots who modded you +5 Insightful.
        • Re:Doublespeak ? (Score:3, Informative)

          by kesuki (321456)
          treat a collection of people as an individual (Is there a fallacy name for this too?)

          Yes, http://en.wikipedia.org/wiki/False_dichotomy [wikipedia.org] anytime you create an 'excluded middle' it's a false dichotomy, so treating the actions of a group of individuals as a 'collective' with a single opinion and trying to point out where they are being 'inconsistent' is ignoring the fact that it's possible for a large group to have two or more subsets of people who believe different points of view are correct.

          It also ignores t
      • Re:Doublespeak ? (Score:3, Insightful)

        by FireFury03 (653718)
        So when Mitnick says it is easier to hack OSS software, people say "duh"

        He didn't quite say that (in fact, he didn't really say a lot). My interpretation of his comment was basically that given 2 pieces of software with a similar number of security holes in them, it's easier to crack the open stuff (well duh).

        Of course, that's ignoring the fact that FOSS software _generally_ seems to be more secure than closed software. You can make up your mind as to why that is, but some thoughts:
        1. FOSS software has more peo
  • by eldavojohn (898314) * <eldavojohn AT gmail DOT com> on Monday January 30, 2006 @10:37AM (#14598580) Journal
    I figured I'd add a little more to how "fuzzing" works as the article left me a little disappointed as to what it actually is. There are a few things online about it, including a decent white paper [events.ccc.de] written by Ilja van Sprundel. There's also a large amount of fuzzing going on to test the security of WAP. It's basically the standard buffer overflow [wikipedia.org] attack.

    The crux of this attack is using a buffer overflow to gain superuser privileges. This might be trivial on Windows, so I'll relay the "la/ls" story to you regarding how to gain it in Linux. Part of this trick involves figuring out how to get an executable file from your machine onto another user's machine. Let's say you know some company or institution is running a webserver on their unix/linux machines and you go to visit their site. Now, their code isn't completely up to date and there's a security hole in one of their web applications. You know (after toying around with said web app on your home machine) that certain large chunks of hex in a field will result in a submission that essentially writes your binary to their $HOME directory. The name of this file will be, of course, "la."

    Now hopefully their home directory is like mine and it's full of crap. So they'll never notice the "la" file, but every day they use that machine, they type "ls" to display the files. One day, their finger slips and they type "la", resulting in the execution of your binary. Instantly, another executable is written, this time called "ps", and a thread is started that simply spin locks on the processor--chewing up cycles. The machine might slow or freeze, but an admin will notice this process and go into the user's directory (as root) and type "ps -al" to see all the existing processes. Instead, it executes your "ps" virus and subsequently the spinlocking stops with "ps" printed to output, with the super user killing "la" and thinking everything is fixed. In the background however, the "ps" process is active ... silently idling, waiting to do its malicious purpose ...

    I'm sure there's a hundred things wrong with what I've said, I'm not a hacker--I just like to point out possible security holes.

    Improbable but not impossible.

    One more thing about the article, the beauty of OSS is that it is impossible to implement security through obfuscation [wikipedia.org]--a major pitfall to security in application design.
    • Granted, you had a disclaimer about mistakes, but...
      This is all assuming that the home dir or the working dir is in the path.
    • by Anonymous Coward

      The machine might slow or freeze but an admin will notice this process and go into the users directory (as root) and type "ps -al" to see all the existing processes. Instead, it executes your "ps" virus

      Do any UNIX-style systems ship with the current directory in $PATH for root? That's a stupid thing to do and as far as I'm aware, this practice died out years ago for precisely the reason you describe.

    • by ookaze (227977) <ookaze.mail@ookaze@fr> on Monday January 30, 2006 @10:50AM (#14598688) Homepage
      I'm sure there's a hundred things wrong with what I've said, I'm not a hacker

      You mean, like what you said there :
      The machine might slow or freeze but an admin will notice this process and go into the users directory (as root) and type "ps -al" to see all the existing processes. Instead, it executes your "ps" virus and subsequently, the spinlocking stops with "ps" printed to output with the super user killing "la" and thinking everything is fixed

      Of course, unless the superuser deliberately destroyed the security of his Linux and added "." to his PATH, this would never happen, as it would not execute the "ps" in the user's directory.
      But I see your point.
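The PATH-lookup point made in this subthread can be sketched in a few lines. This is an illustrative Python mock-up of how a shell resolves a command name (the `resolve` helper is hypothetical, not a real shell internal):

```python
import os

def resolve(cmd, path_dirs):
    """Mimic shell command lookup: the first executable match in PATH order wins."""
    for d in path_dirs:
        candidate = os.path.join(d, cmd)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None

# With a sane PATH like ["/bin", "/usr/bin"], "ps" resolves to the system
# binary. Only if "." (the current directory) is prepended can a planted
# ./ps in the user's home directory shadow the real one.
```

The whole "la/ls" attack hinges on that lookup order, which is why the replies keep asking who still has '.' in their path.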
    • "The machine might slow or freeze but an admin will notice this process and go into the users directory (as root)"

      Why? - a ps will run from anywhere. I prefer running top - then selecting
      offending processes and killing off as required.
      Alternatively, set ulimits on user accounts and have the spinlock process
      kill itself.

      "and type "ps -al" to see all the existing processes"

      Quick question - which admins are stupid enough to include '.' in their path?

      I would have thought it mu
        Quick question - which admins are stupid enough to include '.' in their path?

        I've seen plenty do it - perhaps not in their login script, but I've definitely seen people add . to their path manually, when running a lot of stuff in the current dir and tired of typing ./ all the time.

        However, that was the first thing that sprung to my mind; sure, that's all reasonable, but . isn't in root's path by default (or indeed in that of most user accounts).
        • I often have ~/bin and ~/sbin in my path ... they're self-writable and executable replacements of system tools I want for myself, but not for the whole system (like my setuid copy of cdrecord).

          These are easily over-written in an attack situation, and they could be executed as root if I did 'su' instead of 'su -'. I always do the latter though.
    • Makes no sense (Score:5, Informative)

      by brunes69 (86786) <slashdot@@@keirstead...org> on Monday January 30, 2006 @10:58AM (#14598757) Homepage

      I'm sure there's a hundred things wrong with what I've said, I'm not a hacker--I just like to point out possible security holes.

      Let's dive into what *is* wrong...

      First of all, files in your home directory are normally not in your $PATH on any Linux system. Anyone who has their system set up like this, *let alone* having their $HOME have priority over /sbin and /usr/sbin, deserves to be shot.

      Secondly, a webserver should run (and does by default in any distro I know of) as the nobody/httpd/apache/someone user, and does not have a home directory. So any exploit in the web server would not allow you to write a 'la' binary anywhere.

      Third, your whole attack scheme is just a big runaround for no reason. If you can write a binary called 'la', why wouldn't you just write it as 'ls' in the first place, instead of crossing your fingers and hoping he mistypes? And if you can write a binary to disk, you can also obviously execute it, so why don't you? Why would you wait around? Is it because you hope someone is going to log in as root and run it? Because if that is the case, you will be way out of luck, because root *never* has $HOME in his path (and the webserver shouldn't be able to write to /root anyways).

      This isn't how these kinds of attacks work... what *usually* happens is, the buffer overflow allows one to write and execute files as the unprivileged user. The cracker attacks and does this to gain a remote shell on the machine, as this unprivileged user. They then use this shell to try to find holes in other system services that may not be remotely exploitable, for example say mysql or postgresql. If mysql is running locally and not set up right, they could use it to gain full superuser privilege by SELECT'ing to a file. Then, all bets are off.

      • by eldavojohn (898314) * <eldavojohn AT gmail DOT com> on Monday January 30, 2006 @11:07AM (#14598849) Journal
        Makes no sense
        *a dazed author of the GP lies under an overpass, gleefully singing about possible Linux/Unix flaws*

        Alexander "brunes69" de Large: Oy! Lookie what we have here, droogies ... someone who's trying to relay a point without including a complete manual on how to do it!
        Droogies: [in unison] HE FORGOT ABOUT PERMISSIONS!
        Alexander "brunes69" de Large: [bending over with his cane against his cod piece] That's right. And what happens to slashdotters we viddie that make mistakes?
        Droogie A: We brow beat them into a bloody pulp ...
        *Alex and the droogs continually beat the poor slashdotter while emitting "Singing in the Rain"*
        eldavojohn: Please ... oof! ... I tried to warn you that I don't write viruses for a living!
        Secondly, a webserver should run (and does by default in any distro I know of) as the nobody/httpd/apache/someone user, and does not have a home directory. So any exploit in the web server would not allow you to write a 'la' binary anywhere.

        Not even in /tmp?! (but i see your point)
      • I've been asked to put '.' in the default path at several places. This seems to be a common request in giant-bloated-java-crapware-land where you have to source in half a dozen scripts' worth of environment variables to get things to work properly. When I argued against it people acted like I had bats flying out of my nose.

        I dunno if this kind of thing is much used except by malevolent insiders. Same for buffer overruns, I haven't seen any buffer overruns do anything but crash a Solaris or Linux server i
    • >One more thing about the article, the beauty of OSS is that it is impossible to implement security through obfuscation [wikipedia.org]--a major pitfall to security in application design.

      Careful with the word impossible.

      Can you really guarantee that for every OSS project, there are enough people looking through each bit of code trying to look for any "security through obscurity"-type issues?

      If there are 1,000 submitters, most of whom are working on features, can you guarantee that everyone's code is gett
    • Whet if i dunt' maje typos evers?
    • Aside from all the other errors already pointed out, everything past the first paragraph has nothing to do with fuzzing.
  • What is Fuzzing? (Score:5, Informative)

    by PlayCleverFully (947815) on Monday January 30, 2006 @10:38AM (#14598583) Homepage
    Many of you may be unfamiliar with the term "fuzzing."

    I was when I read the article, so I did some research. Fuzzing is:

    What is fuzzing?
    - Sending semi-random data to an application
    - Semi-random: good enough so it'll look like valid data, bad enough so it might break stuff
    - When people hear "fuzzing" they immediately think http, THERE IS MORE TO FUZZING THAN JUST HTTP !!!
    - You can fuzz:
    -- Network protocols
    -- Network stacks
    -- Arguments, signals, stdin, envvar, file descriptors, ....
    -- APIs (syscalls, library calls)
    -- Files

    In general, most of the time it is a waste of time, but if you are "lucky" you could find a vulnerability and maybe with a little more research a way to exploit the code.

    More information can be found at this PDF Article - http://static.23.nu/md/Pictures/FUZZING.PDF [23.nu] (Very Large 90+ Pages)
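The bullet points above boil down to something very small. A toy mutation fuzzer, sketched in Python for illustration (this is a hedged minimal example, not one of the tools from the linked PDF; a real harness would watch for crashes, hangs, and memory corruption, not just exceptions):

```python
import random

def mutate(sample, n_flips=8):
    """Corrupt a few random bytes of otherwise-valid input (the 'semi-random' part)."""
    buf = bytearray(sample)
    for _ in range(n_flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(parse, sample, rounds=1000):
    """Feed semi-random variants of `sample` to `parse`; collect the cases that break it."""
    crashes = []
    for _ in range(rounds):
        case = mutate(sample)
        try:
            parse(case)
        except Exception as exc:  # in C targets, the interesting failure is a SIGSEGV
            crashes.append((case, exc))
    return crashes
```

Point `fuzz` at any of the targets in the list above (a file parser, a protocol decoder, an API wrapper) and the interesting results are the inputs it fails to reject cleanly.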

    • Posting without reading the article.

      When I was at College I spent some of my time cracking software and learning about hacking. For me, the *real* sense of doing that was the challenge of reverse-engineering the code. The same applied to smart card protocol R.E. (which may be considered hacking =o)).

      Now, if we talk about open source applications, I won't say it is "hacking", I would name it more as "code auditing", because, if you find a bug on any given OSS application by seeing at the listing
    • so you did some research within the 3 minute time frame between when the story was posted and your comment?
    • Why do you say it's a waste of time? The vast majority of vulnerabilities that lead to code execution are buffer overflows resulting from malformed input--things like file parsers that don't properly parse invalid files, network stacks that don't properly parse malformed packets, and so forth. These are all exactly the sorts of things that fuzzers catch.

      It may be tempting to throw out the fuzzing approach because it's not very smart; unlike things like source code analysis, fuzzing appears to be very undire
  • In other news... (Score:5, Insightful)

    by HaloZero (610207) <`moc.liamg' `ta' `akedotorp'> on Monday January 30, 2006 @10:38AM (#14598592) Homepage
    He's got the same general (valid) outlook that the rest of us have: open-source code is easier to tinker with because you can see how and why it works. That is an intrinsic element of having open-source code.

    Just because Mitnick has said what thousands - neigh - millions have said before, doesn't mean it's new and exciting. Doesn't make it news.
    • New-and-exciting != news.

      When someone high-profile says something that lots of low-profile people are already saying, it is news because people who don't typically hear that thing said now have the opportunity to hear it.

      Your friendly-neighborhood Linux admin can say this stuff all he wants with his other admin buddies, or here on Slashdot. But that doesn't necessarily get heard by even one decision-maker. This will be heard by lots of them, if they are even lightly following online tech-news.

      How high-prof
    • Plus, wasn't Mitnick's main trick social engineering access to restricted areas? Maybe next news cycle a famous cracker will explain how he/she prefers to tackle well documented source code.
    • But when a horse comments on it, it becomes insightful?
  • by gasmonso (929871) on Monday January 30, 2006 @10:39AM (#14598597) Homepage
    "Mitnick says that open source software is easier to analyse for security holes, since you can see the code."

    Once again proving his technical prowess!

    http://religiousfreaks.com/ [religiousfreaks.com]

    • Or social intelligence. Since hacking proprietary code is a felony via the DMCA, he'd probably spend quite a bit of time indoors as a repeat felon.
  • Prefers? (Score:2, Insightful)

    by Black Parrot (19622) *
    I wonder what he means by "prefers". Is it more fun to sit around reading someone's crappy code than to use the trial-and-error approach crackers use with closed-source software?

    The empirical evidence suggests that people don't have an especially hard time cracking CSS.

    I guess if you have the source you can grep for reads and examine them for overflow vulnerabilities, but I wonder how much easier even that would be vs. just trying it.
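The "grep for reads" idea in the parent can be mechanized trivially. A toy source auditor that flags calls to classically overflow-prone C functions (illustrative only; `audit` is a hypothetical helper, and real auditing obviously goes far beyond pattern matching):

```python
import re

# C library functions with no bounds checking: the classic overflow suspects.
RISKY = re.compile(r"\b(gets|strcpy|strcat|sprintf)\s*\(")

def audit(c_source):
    """Return (line number, line) pairs that call a risky function."""
    return [(n, line.strip())
            for n, line in enumerate(c_source.splitlines(), 1)
            if RISKY.search(line)]
```

Having the source makes even this crude pass possible; against a closed binary you are back to disassembly or trial-and-error.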
  • Famous hacker says it's easier to find holes when they let you look at the source! News at 11!

    Is this really all that surprising? If you've got a mentality of "how can I break this?" it's much easier to figure out how if you can look at how it's built. Unfortunately, having a hacker able to look at a system is not the same thing as having the original designers catch the issue. If you wait until hackers get ahold of it, they'll find ways to exploit the problem before the patch is in wide distribution. That's what makes this dangerous.

    Thankfully, the majority of those who are looking at the code have less selfish reasons, and are happy to share any issues they see. Thus the "many-eyes" philosophy depends heavily on the good will of the common man. Personally, I wouldn't have it any other way. :-)
  • by Alcimedes (398213) on Monday January 30, 2006 @10:45AM (#14598640)
    To be honest, when you look at the incentive for securing OSS vs Closed Source code, neither one is all that enticing.

    As of now, there's really no penalty for selling code that isn't secure. It's accepted (for some reason) that computer code will have holes, and you really, really have to have a horrible program before anyone will think of ditching it. Even then, if it's mission critical (all the more reason to be secure) it seems people are loath to switch to something else.

    So as a coder for a Closed Source app., my motivations would be:

    1. Make the boss happy. Get code done.
    2. Once program A is done, start work on next money making program.
    3. Patch when boss says it's necessary to patch.

    For Open Source it's not that much better. The only real motivation to write good code is so that it's accepted into the project in the first place, and so that, once accepted, everyone doesn't poke holes in your crappy code.

    The difference is that people coding OSS are doing it because they want to, so hopefully have a little more motivation to look at the other code in their project. It's interesting to them, so they're a bit more likely IMO to look at it. The person getting paid has no incentive to look at the code (at least while on work time) unless the boss tells them to. Since rehashing old code doesn't usually make money, the only time to look at old code is when a patch is a necessity.
    • by kfg (145172)
      For Open Source it's not that much better. The only real motivation to write good code is. . .

      . . . called "craftsmanship."

      KFG
    • I worked on a website and used a component which was open source (actually BSD licensed).

      In the process of using it, I found a small bug, and fixed it and notified the author.

      While I didn't need to tell the author, I had a number of reasons for telling him:-

      To ensure that any further revisions he made included my changes
      Out of public spiritedness.

      Also, sometimes in companies I've found bugs by accident. Like, if the configuration database is wrong, and in the process of debugging, I've noticed somet

    • The difference is that people coding OSS are doing it because they want to, so hopefully have a little more motivation to look at the other code in their project. It's interesting to them, so they're a bit more likely IMO to look at it.

      Well, not just interesting... Most of the people who work on OSS projects do so because they need the project to do something for them. Not surprisingly, most of the people who work on Apache are webmasters. The guy who started PHP did so because he needed some tools for ma
  • by digitaldc (879047) * on Monday January 30, 2006 @10:46AM (#14598647)
    Separated [google.com] at birth? [tectonic.co.za]
  • by xxxJonBoyxxx (565205) on Monday January 30, 2006 @10:46AM (#14598650)

    I think I'd agree with Kevin if he said:

    "I'd prefer to hack open source with FEW AUTHORS."

    There's no doubt that lots of eyes and a security focus have helped Apache, but there's lots of open source shitware (for example, just Google up a list of PHP messageboards) that doesn't have basic input validation controls, requires too much access to the operating system, uses plain-text or unsalted MD5 passwords, or contains other gaping holes.

    Without those extra eyes helping out...yes, many open source projects are easier to hack than similar closed source projects.
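On the unsalted-password point above, the fix is mundane and well known. A minimal sketch of salted, slowed-down password storage (shown with a modern stdlib KDF rather than the plain MD5 of the era; parameter choices here are illustrative, not a recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted hash; store (salt, digest), never the password itself."""
    if salt is None:
        salt = os.urandom(16)  # a per-user random salt defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

A messageboard storing unsalted MD5 gives an attacker who dumps the table a straight dictionary lookup; the salt plus iteration count is what makes that expensive.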

    • . . .there's lots of open source shitware. . .

      Indeed there is, and lack of recognition of this is one of the "weaknesses" of OSS, however, let me ask you this question:

      How many people run this shitware?

      Not much point in spending who knows how many hours going over code that nobody uses. The Mother of all UNIX Holes was found in GNU emacs, because that was someplace worth looking for one.

      Thus the code that everybody uses gets harder faster.

      KFG
      • How many people run this shitware?

        Often it doesn't matter. For example, if I'm trying to deface site XXX (or inject a form information grabber) and I see that it runs message board YYY, the first thing I do is try to get the source code of message board YYY. In other words, if I know what I'm doing, I'm not using a shotgun/Nessus approach anyway. Instead, I'm first going to drop by as an anonymous web user and see what I can use against you before I fire my first shot.

  • by IAAP (937607) on Monday January 30, 2006 @10:47AM (#14598659)
    "... You'd think that with OSS, with more people looking at the code, you're more apt at finding security holes. But are enough people really interested?"

    Oh, really? I think so.

    In this day and age, with all of the security problems (especially with MS) and OSS trying to gain market share, I'd think that every OSS coder would be really mindful of any potential holes. Especially if he knew that another developer would be looking at it. I would be really embarrassed (if I were a developer) if I got an email saying something to the effect of "Hey dumbass, nice job of preventing buffer overflow there at line: xxx in abcdef.c! Don't worry, no one will EVER exploit that hole!"

    • Have you ever worked on an open source project?

      Did you read every line of code to look for potential security flaws, or did you just write code of your own?

      Do you think every single other coder involved in the project read every line of code you wrote, and made sure there was no way it could introduce security holes?

      • Do you think that it'd be any different in a closed source shop? It typically isn't. Closed Source, Open Source, doesn't matter- it's just that it's more likely to happen in an Open Sourced project because there's more of an incentive to do so (Sense of craftsmanship, etc...). In Closed Source, for most contexts, it costs a LOT more money to accomplish a proper and thorough audit of code for security purposes. Typically, it's NOT done unless we're talking about the stuff the Phone Switching hardware ven
      • Having contributed to OSS projects and seen the process of contributing, I can say that yes, code is generally checked out. A common practice is getting automated emails of CVS/SVN commits and seeing what happened. There are people on projects whose primary job is monitoring those commits. Patches get reviewed before getting put into CVS. But the primary benefit is the testing. People run the software and report bugs. Lots of bugs. They find those holes and they find them quicker than in Closed Source devel
  • Unfortunate (Score:5, Funny)

    by Anonymous Coward on Monday January 30, 2006 @10:53AM (#14598714)
    Infamous cracker Kevin Mitnick (turned security consultant) has come out to say [...]

    Why does race have to enter every discussion on /.?
  • by jcr (53032) <jcrNO@SPAMmac.com> on Monday January 30, 2006 @10:56AM (#14598740) Journal
    The dude was a social engineer. I've seen no evidence that he ever wrote an exploit himself.

    -jcr
    • Can we please stop calling common conning "social engineering?" The term itself is a con to make a common shyster seem like a legitimate professional. Unless he was involved in, say, eugenics or public education, this term painfully overstates the actions and qualifications of its practitioners.
      • Can we please stop calling common conning "social engineering?"

        No, I won't. "Social engineering" is a derogatory term, which takes ironic note of the perp's inability to do any real engineering.

        -jcr
    • I have often said it is easier to just ask for a password than to try to get it by brute force. The same could be said for most any computer security.

      I have walked into data centers and gotten let into the server rooms by security without showing any ID, or having an appointment, or even knowing anyone in the building. I could have destroyed a couple million dollars of equipment, put a server under my arm, and waved at the security guard at the front door and they would have just waved back.

      Point being
      • by Tim Browse (9263) on Monday January 30, 2006 @12:11PM (#14599462)
        Point being, if you want into a network, why waste the time going through code looking for vulnerabilities or trying to brute force your way in somewhere? Just submit a patch with a backdoor or ask for the password. Many times you will probably get in.

        Reminds me of the neat story (from Psychology of Computer Programming, I think) where a tiger team was trying to crack an installation's security (at the installation's request). Said installation ran IBM mainframes, and received patch tapes from IBM on a semi-regular basis. So the team wrote their own patch, put it on a tape, and sent it to the target along with a typical covering letter on IBM headed paper, and then waited for them to install their backdoor for them.

        Which they did :)

  • by QuietLagoon (813062) on Monday January 30, 2006 @10:56AM (#14598742)
    We all have seen how difficult it is to hack Microsoft's closed-source, proprietary code.
  • by Anonymous Coward
    I fail to understand the obsession with hackers and security!

    These people are like art critics.

    They can't write great code themselves so they pick apart other people's. A valuable niche job to be sure, but not deserving of some sort of "star" status of their own.

    Why is there not more attention on the great developers? I don't see many interviews of kernel devs......
    • They can't write great code themselves so they pick apart other people's. A valuable niche job to be sure, but not deserving of some sort of "star" status of their own.

      There are plenty of interviews with Linus. Good developers get publishing deals. Also, interviews tend to get your book sold, not land you opportunities to get paid to code. When you write a book you're doing it to educate. Some people are good teachers. And then there are those that lend weight to the cliche, those who can't do . . .
    • Why do most history books focus on wars and generals and not on scientists or business leaders? Why do 90% of movies deal with danger or violence? Because destruction is sexy, baby!
  • He served five years in prison, including eight months in solitary confinement after it was alleged that he could launch nuclear missiles by whistling into a telephone.
    Subsequent to his release, when he was among the victims of a restaurant robbery, the perpetrators had no difficulty locating his wallet in their bag of loot...
  • Missed the Point (Score:5, Interesting)

    by geekyMD (812672) on Monday January 30, 2006 @11:17AM (#14598951)
    All of you who are commenting that this is an obvious idea may be missing the point.

    We all know that security through obfuscation in cryptography is stupid: peer review illuminates the crevices the architect never conceived. But is all open source code subject to this same sort of peer review? If you've ever worked on an open source project, how much time do you sit down and pore over the code looking for security flaws?

    Essentially, it's the same problem as Wikipedia: peer review requires that 1) the skill of the peers match or exceed the skill of the author, 2) the peers are actually reviewing, and 3) the peers are trustworthy. It's the second criterion that Mitnick was questioning.

    What's more, it seems that most security holes that go unnoticed result from accidental (and very subtle) bugs. Wouldn't it then be trivial for someone with a great amount of skill to simply insert a hole? Either by subtle manipulation of existing code or by direct implementation in a segment they are responsible for coding. If it's done well, the 'oops, coding error!' excuse could always be proffered in the event the tampering was detected.
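
    The "oops, coding error!" scenario isn't hypothetical: the famous 2003 attempt to slip a backdoor into the Linux kernel hinged on a single `=` where `==` belonged, buried in an otherwise plausible-looking check. Here's a minimal sketch of that trick (hypothetical function names, not from any real project):

    ```c
    #include <stdio.h>

    /* Hypothetical sketch of a one-character backdoor. The 2003 Linux
     * kernel attempt used the same trick: `current->uid = 0`
     * (assignment) where `==` (comparison) was expected. */

    static int is_admin_buggy(int uid) {
        /* Looks like a guard against root, but `=` ASSIGNS 0 to uid;
         * the expression evaluates to 0 (false), so the branch is
         * never taken -- and uid has just been clobbered to 0. */
        if (uid = 0)        /* backdoor: assigns, never compares */
            return 0;
        return uid == 0;    /* uid was overwritten: always true now */
    }

    static int is_admin_correct(int uid) {
        return uid == 0;    /* genuine comparison against root uid */
    }

    int main(void) {
        /* An ordinary user (uid 1000) passes the backdoored check. */
        printf("buggy(1000)=%d correct(1000)=%d\n",
               is_admin_buggy(1000), is_admin_correct(1000));
        return 0;
    }
    ```

    The reason this survives casual review is exactly the 'oops' defense: it reads like a typo, and most compilers only emit a warning (if that) for assignment-in-condition. In the real kernel incident it was caught not by code review but because the change appeared in the CVS mirror without a corresponding approved changeset.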

    If I wanted to attack a system which I knew ran on OSS (and I had mad coding skillz), I think I would try to obtain some way of working on one of their software packages. Either directly or by 'acquiring' someone else's permissions if that was easier. Then I would insert a piece of backdoor code in a little-used (or often-used, 'hidden in plain sight') code segment. Once the next release is running on that system, I'd exploit the code and get out. Depending on my goals, the operation could very likely be done before the hole is found and a patch is issued. As a small bonus, anyone else installing that software would have the same vulnerability. Of course, some user-level app won't enable this scenario, but you get the idea.

    Proprietary software doesn't have this vulnerability insofar as the programmers are much more tightly regulated by a company that has legal and monetary interests in controlling its code base and holding its employees accountable. (whether this actually happens is another discussion) ;)

    For all the self-righteousness of the open source movement, I remain convinced that the primary reason more open-source packages are not targeted for attack is that they are not an appealing target. Specific implementations are not in popular use (globally), or they are too close to home. Meaning it's preferable to attack your enemy rather than your family.
    • Get real... Apache's an appealing target. Which web server has more exploits for it? IIS.

      There is absolutely nothing in your little hypothetical situation that couldn't be accomplished in closed source as well. In actuality, it'd be easier, as the audits wouldn't be as intense (witness the WMF debacle for proof of something in closed source software that should have been caught but wasn't).

      Simply put, what you claim isn't. But I'm confusing this discussion by including facts, aren't I?
    • You're right to suggest that there are several ways in which some particular open source software could be compromised.

      The standard counterargument for most of them is "A similar potential exists in proprietary software also, but is less easy to detect and repair."

      I don't want to flog this point too vigorously, because it's clear that in practice the quality and security of any software is derivative of many factors, both in the development environment and during its installation and operation.

      We know

  • I think... (Score:2, Insightful)

    by mangus_angus (873781)
    Mr. Mitnick is forgetting that most people want to see proprietary software code precisely because it is closed to prying eyes, whereas OSS, being open to everyone, is less appealing. And any issues that need fixing will be fixed in a shorter time due to more people around the globe working on it, whereas with proprietary software you have a small team working on it. They also have the added task (in Microsoft's case) of it having to be tested on many different systems due to the large and various types of machi
  • And he may know a few things more than a typical /. person, but his "theory" hasn't held up under any sort of scrutiny.

    What I mean is, in theory, he feels he can crack an OSS based box because he can analyse the source code, but in reality, it's easier to crack a proprietary box. So his theory doesn't appear to hold up to simple analysis of what happens in the real world.

    It's kind of like the theory that SUVs are safer than other cars, which would appear to be common sense. But it falls apart when you con
    • Er, do you have evidence, citations, anything to back your claim? Or should we just trust you because a man named tkrotchko can't be wrong?

      I've seen no conclusive (or reliable) means of measuring vulnerability statistics that show what OS is the most secure. Vulnerability rates are hard to trust, because many vendors don't report vulnerabilities the same way, nor is it always clear what vulns affect the default install, the properly locked down install, etc. (for instance, Gentoo consistently releases more
  • by jpellino (202698) on Monday January 30, 2006 @11:39AM (#14599192)
    "Mitnick was arrested in 1995 by the FBI for hacking. He served five years in prison, including eight months in solitary confinement after it was alleged that he could launch nuclear missiles by whistling into a telephone." ...following the previous 40 years of whistling past the graveyard to deal with nuclear missiles.

  • You expose your entire source code to public scrutiny, and this is more secure than closed proprietary software?

    How and why?

    I think people are deluded into thinking that because a project like Linux is secure, and Linux is open source, ergo open source software must be secure. This is convoluted and dangerous logic.

    I think OSS is the most insecure software out there. Think of it. Anybody could take RedHat's source code, create their own distro filled with back doors and zombie daemons, and then di
    • Actually, Linux is more secure than Windows in spite of the fact that OSS is easier to hack than CSS, for reasons that have very little to do with the difference between OSS and CSS.

      I suggest that factoring, i.e., modularity is a far more significant reason that Linux is more secure than Windows. The commercial interest of integrating featurism into the OS is probably the biggest source of security flaws in Windows.

      If that turns out to be true, it suggests that OSS is more secure than CSS because the de

    • You expose your entire source code to public scrutiny, and this is more secure than closed proprietary software?

      Yes.

      How and why?

      Because holes are more likely to be brought to your attention. If a good guy has access to your source, they may well look through it, and if they're doing that, they may well spot any holes, even if they weren't looking from a security standpoint, if they were just looking to improve your code. Whereas the only person who's going to bother looking for holes in a closed progra

  • A guy gets arrested for tricking people into giving him passwords and then using them.

    A decade later, he's an industry pundit, and people pay attention to what he says. How many thousands of people did the same things Mitnick did, but didn't get caught?

    Should we worry about their opinions too?
  • Get ahold of Digital Equipment Corporation's source code and use it to blackmail DEC employees into doing what you want or else you'll distribute the code.

    So, Mitnick, were you ever indicted for that one?

  • by penguin-collective (932038) on Monday January 30, 2006 @12:13PM (#14599482)
    Why would you listen to anything Mitnick has to say? His attacks were based on social engineering, and he got caught. He's missed nearly a decade of technological development, and he wasn't a technical genius to start with either. And if it hadn't been for Shimomura's and Markoff's success in manipulating and blowing the story out of proportion for their own fame and fortune, Mitnick wouldn't have been more than a footnote.
  • Why? (Score:3, Interesting)

    by GodBlessTexas (737029) on Monday January 30, 2006 @01:38PM (#14600186) Journal
    Why do people listen to Kevin Mitnick on technical issues? He never once wrote a single line of code. He never once used anything he himself had created. All he was good at was using other people's tools, making him a glorified script kiddie with connections to get the tools he needed. The only difference between him and your average script kiddie is he had specific targets that usually had something he wanted which motivated his attacks, instead of just randomly hitting vulnerable systems.

    He proved he was a moron when he used the same MIN/ESN pair for his OKI the entire time Shimomura was tracking him down.
