The Story of a Simple and Dangerous OS X Kernel Bug

RazvanM writes "At the beginning of this month the Mac OS X 10.5.8 update closed a kernel vulnerability that lasted more than 4 years, covering all the 10.4 and (almost all) 10.5 Mac OS X releases. This article presents some twitter-size programs that trigger the bug. The mechanics are so simple that they can be easily explained to anybody possessing some minimal knowledge about how operating systems work. Besides being a good educational example, this is also a scary proof that very mature code can still be vulnerable in rather unsophisticated ways."
This discussion has been archived. No new comments can be posted.


  • by girlintraining ( 1395911 ) on Sunday August 30, 2009 @12:50AM (#29249431)

    "Beside being a good educational example this is also a scary proof that very mature code can still be vulnerable in rather unsophisticated ways."

    Since when did the age of code become a metric for evaluating its trustworthiness? Code should only be trusted after undergoing in-depth analysis by people with training and experience in information security. Code should also be written with security in mind from the beginning. The story of this kernel bug is simple and goes like this: "I was in a hurry."

    • by Idiot with a gun ( 1081749 ) on Sunday August 30, 2009 @01:01AM (#29249467)
      I believe the implied meaning of this is "in the absence of exhaustive security analysis, a code's age/maturity is one of the better indicators of its security". While I'm not particularly sold on this notion myself, it does bear a lot of semblance to the idea that code can be proven "secure" if it stands after a multitude of random attacks, which is basically one of the tenets of OSS.

      A million monkeys with typewriters....
      • by Secret Rabbit ( 914973 ) on Sunday August 30, 2009 @01:19AM (#29249553) Journal

        Well, assuming that the code is actively (and properly) maintained, then that isn't a bad metric. Essentially, it's because any security flaw is the result of a bug. It's just a bug that can be exploited. So, if the code is maintained properly, then bug fixes will be continuous and as such, reduce the number of exploitable bugs.

        Good metric, yes. Absolute metric, no.

        """... which is basically one of the tenets of OSS."""

        And where did you hear that? Because, I never have and I've been around for a while.

        • by johanatan ( 1159309 ) on Sunday August 30, 2009 @01:40AM (#29249671)

          Essentially, it's because any security flaw is the result of a bug. It's just a bug that can be exploited. So, if the code is maintained properly, then bug fixes will be continuous and as such, reduce the number of exploitable bugs.

          It depends on your scope of consideration. Design flaws are not 'bugs' in the traditional sense of the word (i.e., implementation-related). However, if you expand your scope to include design specs then your statement is true. There do exist though exploits of perfectly-implemented but imperfectly-designed code.

        • by Jurily ( 900488 )

          There are also cases where this just isn't true. See malloc [itworld.com].

        • """... which is basically one of the tenets of OSS."""

          And where did you hear that? Because, I never have and I've been around for a while.

          It's implied, in my opinion. Because OSS stuff rarely gets the benefit of expensive and time-consuming security analysis. Mostly because nobody has the money/time/skill set, or because the code moves so damned fast. Admittedly, this setup appears to work quite nicely (I'm a *nix user myself).

      • by Kjella ( 173770 ) on Sunday August 30, 2009 @01:29AM (#29249617) Homepage

        Well... I think that depends a lot on the reason why it's old code. I've met my share of code with the warning "There be dragons!".

      • by _Sprocket_ ( 42527 ) on Sunday August 30, 2009 @01:38AM (#29249661)

        While I'm not particularly sold on this notion myself, it does bear a lot of semblance to the idea that code can be proven "secure" if it stands after a multitude of random attacks, which is basically one of the tenets of OSS.

        I'm pretty sure that's not a tenet of OSS. If someone is pushing that as a tenet, then they really need to pay closer attention to history. A history of resilience is a nice metric - but it's not "proof" that code is bug-free rather just that nobody has found a given bug or made it public. People who get caught up in vulnerability counts forget that the real metric is response to a given vulnerability.

        One tenet you hear bandied about is "given enough eyeballs, all bugs are shallow." Criticism tends to revolve around whether enough eyeballs have been put to any particular piece of code. Although one could argue that it's not just the number of eyeballs - but whether said eyeballs have the training to look for particular kinds of bugs that might not show up in normal use of the given code. None of that has anything to do with the frequency of attack.

        • You're right and it's worth remembering that some bugs will cause incorrect behavior on a cycle that is so long that our Sun will go nova before it shows up.

      • Since when are age and maturity synonyms?
      • Re: (Score:3, Insightful)

        by LO0G ( 606364 )

        In my experience, a code's age/maturity is one of the better indicators of its INsecurity.

        We've learned a LOT about security (and more importantly about writing secure code) over the past 10 years.

        10 years ago, nobody knew about arithmetic overflow vulnerabilities or heap overflow vulnerabilities. Now every coder needs to worry about them. And all that old code was written not knowing about those vulnerabilities, so it's highly likely to contain issues.

    • The general assumption is that code that is old has been run a lot (meaning it has been tested) and has been inspected for bugs periodically. Of course, this doesn't apply to code that's been sitting in a cupboard for a long time, but all other things being equal a part of a program that is two years old is likely to contain fewer bugs than a brand new part, because they will have contained approximately the same number on release, but the older part will have had some fixed.
  • by ynososiduts ( 1064782 ) on Sunday August 30, 2009 @12:54AM (#29249445)
    I call fake. It's OS X! It's bullet proof! Steve Jobs would not let this happen! Macs are immune to crashes! Et cetera!
    • by davmoo ( 63521 )

      Dammit, I was going to post that!!

    • Macs have a history of having far fewer vulnerabilities than Windows.

      But now they're catching up with Microsoft in that, as well as average patch time! :D

      • by benjymouse ( 756774 ) on Sunday August 30, 2009 @04:12AM (#29250165)

        Macs have a history of having far fewer vulnerabilities than Windows.

        From IBM research: IBM Internet Security Systems X-Force® 2008 Trend & Risk Report [ibm.com]

        Look under "most vulnerable operating system". Yes, right at the top, for several years running, sits OS X. It has consistently had about 3 times as many vulnerabilities as Vista.

        You can also do some secunia digging yourself. It shows the same tendency even in the raw data.

        OS X may be less exploited but it has far more vulnerabilities. On top of that OS X lacks many of the anti-exploit mechanisms found in both common Linux distros and in Windows Vista.

        Vulnerability counts don't have much to do with exploits. A single vulnerability may lead to several independent exploits. Many vulnerabilities will pass unexploited. The difference is incentive. And if pwn2own has shown us anything, it certainly confirms this. Macs have consistently been the first to fall, literally within seconds.

        • by TheRaven64 ( 641858 ) on Sunday August 30, 2009 @07:26AM (#29250751) Journal
          While I agree that there are a lot more security holes in OS X than there ought to be (and you only need one to win pwn2own), I seem to recall reading that study when it came out and being less than impressed with their methodology. They did not weight the types of vulnerability in a sensible way, and they included a number of bugs in components of OS X that have no equivalent bundled with Windows (several of the most vulnerable ones are not even enabled by default), while excluding exploit counts for the Windows equivalents.

          On top of that OS X lacks many of the anti-exploit mechanisms found in both common Linux distros and in Windows Vista.

          Not sure about that. OS X has a very advanced sandboxing system, and has since OS X 10.5. This is why the mDNSResponder bug last year was a remote root hole on Linux, Windows, and OS X 10.4 and a DoS on OS X 10.5 and above.

        • by walt-sjc ( 145127 ) on Sunday August 30, 2009 @08:19AM (#29250989)

          All studies analyzing security vulnerability reports or released patch sets as a measure of OS security simply prove that the researcher is a fucking idiot. It's IMPOSSIBLE to measure security in this way because you are comparing lawn tractors to jet skis. The reasons are basic: everyone that releases an OS has their own way of dealing with reports and patches. The raw data is MEANINGLESS.

          It doesn't matter what anti-exploit technology is in the OS, because it has been proven time and time again that no matter WHAT the warning, users hit OK anyway. In fact, studies have shown that even when presented with a dialog that says something like "If you click OK, your computer will be infected by a virus," users STILL click OK 50% of the time. Windows is particularly bad in this regard because it is CONSTANTLY asking permission to do this, that, or the other thing. On a typical work day I get 100-1000 requests for permission. It's no wonder users click OK all the time.

          Due to "OS conditioned" user behavior, NONE of the anti-malware software out there is actually effective at preventing infection. Most can clean it up after the fact (with the drive pulled and scanned from another machine.)

          Users also continue to use stupid passwords like "password", "1234", etc. no matter how much training they're given. Forcing complex passwords just ensures that there will be a Post-it on the monitor with the password, and a 100x increase in calls to the help desk to reset passwords.

          The ONLY measure we REALLY have is subjective, and based on my experience, the reality is that Windows users are probably 1000 times more likely to have malware on their systems.

          I don't have any good solutions to this problem other than to suggest that we need security technology that actually analyzes a program's behavior, possibly simulating it by running in a mini-secure sandbox before talking to the user about it. Maybe apps could be checked against a reputation database... Known good could be passed with no prompting, thus reducing the number of warning dialogs shown to the user. The current situation has proven dire, however.

          • I'm not entirely sure I want to know the answer to this but:

            A typical work day for me I get 100-1000 requests for permission.

            Makes me wonder just what the hell you're doing all day. Feel free not to answer....

        • Re: (Score:2, Interesting)

          by Anonymous Coward

          In the report (page 40, or rather page 44 — was it really that hard to refer to a page?) it talks about the number of disclosed vulnerabilities. There are a few things wrong with that list:
          1) IBM's own OS is at the bottom. As they built the report, one should start questioning that. I'm ignoring "Others."
          2) It's the number of DISCLOSED vulnerabilities. I wouldn't be surprised if most of those fully closed-source OSes (really just 1 of them) fix a lot of stuff they don't disclose.
          3) It's the NUMBER of vulnerabilities, not their severity.

          • by stewbacca ( 1033764 ) on Sunday August 30, 2009 @01:00PM (#29253019)
            Yes, the severity of the exploits is what matters. I didn't read TFA, but a lot of people keep bringing this up in this thread. If the IBM study doesn't properly address the "magnitude of effect" (i.e. the seriousness of any differences between means or other comparative or inferential statistics), then it is ripe for biased representation. People can pick and choose what they want to say without an accurate discussion of the findings. Raw numbers don't mean crap.
        • Could that have something to do with the fact that the vulnerability reports for OS X include tons of third party stuff (including Java or things that aren't used by default), that those for Windows don't?
    • by Daniel Dvorkin ( 106857 ) * on Sunday August 30, 2009 @02:05AM (#29249741) Homepage Journal

      You know, at this point there are probably about a thousand times as many people whining about this supposed attitude on the part of Mac users than there are Mac users actually displaying it.

  • by noidentity ( 188756 ) on Sunday August 30, 2009 @12:56AM (#29249449)
    Sadly I couldn't get my Mac OS X 10.3.9 (PowerPC) machine to panic with the C code.
    • Sadly I couldn't get my Mac OS X 10.3.9 (PowerPC) machine to panic with the C code.

      Try letting it see you use an Android phone while simultaneously unwrapping a new Zune media player...

      Sorry, I should probably let my meds kick in before I post in the mornings...

  • I read (Score:4, Insightful)

    by Runaway1956 ( 1322357 ) on Sunday August 30, 2009 @01:16AM (#29249535) Homepage Journal

    Alright, I read TFA. I read the earlier slashdot article. I even googled around a little bit. What I find is, an obscure little bug, if exploited locally, enables a user to crash his machine. What I don't find is an exploit that makes use of this bug.

    Am I missing something?

    I suppose that I could accomplish something similar on my current Ubuntu installation. If I thought it made a difference, I could install a few other flavors of Linux and try doing something like that. But, why?

    The MS astroturfers' posts above are noted. And I also note that MS bugs are routinely exploited, locally and remotely. The unwarranted superiority complex looks pretty pathetic, doesn't it?

    • Re:I read (Score:5, Insightful)

      by Thantik ( 1207112 ) on Sunday August 30, 2009 @01:21AM (#29249571)
      It's not the fact that it is local exploit code, it's the fact that local and remote exploits and the line between them are being blurred every day. TFA mentioned being able to write memory in 8-bit pieces, ANYWHERE in kernel memory. That's pretty dangerous if you ask me.
      • Re: (Score:3, Informative)

        by Sir_Lewk ( 967686 )

        local and remote exploits and the line between them are being blurred every day

        Citation please? The line between local and remote seems to be pretty concrete and fine to me.

        • "The line between local and remote seems to be pretty concrete and fine to me."

          Indeed, for those having trouble spotting it, it's the line with the flashing green light next to it.
        • You're missing the point. The line between local and remote vulnerabilities is indeed being blurred these days, given the rise in network services running on workstations (instead of just servers). Add in the fact that even on servers application-level vulnerabilities can be greatly exacerbated by the potential for kernel exploits. This was neatly illustrated with the recent Linux kernel vulnerability, which essentially turned every remote exploit that allowed arbitrary code execution into a kernel exploit.
    • Am I missing something?

      Possibly. An active exploit might not be available, it may still be in the underground, or we may be dealing with a series of code flaws that resemble the old tenet of Cisco fame - "We're unexploitable, all you can do is cause a DoS". It might just be that we have to wait for someone to turn around and go "oh really" before an active exploit is derived from a crash.

    • Re:I read (Score:5, Informative)

      by emurphy42 ( 631808 ) on Sunday August 30, 2009 @01:25AM (#29249597) Homepage
      The relevant part is:

      The problem is the data in the buggy case is whatever we give as a third parameter in the fcntl code. Considering that the 8 bytes are controlled by the user it means he can write that amount of information anywhere in the kernel memory!

      followed by an example of actually doing it and proving that it worked (not a particularly malicious example, but it seems enough proof of concept to me).

    • by pathological liar ( 659969 ) on Sunday August 30, 2009 @01:32AM (#29249627)

      What are you, a Linux kernel dev? ;)

      The bug lets you write arbitrary, user-controlled bytes into kernel space. The first thing that comes to mind is that you could change the current process' priv structure in memory. Now you're root. Or why not use it to hook syscalls, or do really whatever you want? You're in ring0, go nuts.

      It's far more than just a DoS.
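      To make "change the current process' priv structure" concrete, here is a toy userspace model of why a single 8-byte arbitrary kernel write is a full compromise. The struct and field names are invented for illustration; they are not real XNU internals.

      ```c
      /* Toy model of privilege escalation via one 8-byte arbitrary write:
       * point the write at the current process's credential structure and
       * you are root. fake_cred is a made-up stand-in, not the real ucred. */
      #include <assert.h>
      #include <stdio.h>
      #include <stdint.h>
      #include <string.h>

      struct fake_cred { uint32_t uid; uint32_t gid; };  /* 8 bytes total */

      static struct fake_cred kernel_mem = { 501, 20 };  /* ordinary user */

      /* The buggy primitive: 8 caller-chosen bytes to a caller-chosen address. */
      static void arbitrary_write8(void *dst, const void *src) {
          memcpy(dst, src, 8);                           /* no validation at all */
      }

      int main(void) {
          uint64_t root_bytes = 0;                       /* uid = 0, gid = 0 */
          printf("before: uid=%u\n", kernel_mem.uid);
          arbitrary_write8(&kernel_mem, &root_bytes);    /* "kernel" cred clobbered */
          printf("after: uid=%u\n", kernel_mem.uid);
          assert(kernel_mem.uid == 0 && kernel_mem.gid == 0);
          return 0;
      }
      ```

      The same primitive could just as easily patch a syscall table entry, which is why "it's only a DoS" undersells any kernel-space write.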

    • Re: (Score:3, Insightful)

      So your argument is that even though the bug exists, it's okay because no one took the time to massively exploit it? You do realize that if OS X had anywhere near the market share of Windows, this would've been exploited years ago, right? I accept that 'security through obscurity' is perfectly valid, but you need to recognize it for what it is.
      • Re:I read (Score:4, Insightful)

        by Runaway1956 ( 1322357 ) on Sunday August 30, 2009 @01:47AM (#29249683) Homepage Journal

        Yeah, I've read this "market share" argument used as a defense for shoddy MS code time and time again. That just doesn't cut it.

        Mac has a presence in the business world. If it were as buggy as MS, crackers would be launching fishing expeditions for vulnerable Macs, so that they could gain access to company networks.

        What I asked for were examples of exploits, or reasons why this bug was really dangerous. Posts before yours are attempting to put things into perspective. Please, no more lame defenses from MS astroturfers - there are enough of those even before you arrive at my question.

        Market share, indeed. Remind me that the next time I want a cheap padlock, I should purchase a no-name lock. Since it has no market share, burglars won't try to pick it or break it.

        • Re:I read (Score:5, Insightful)

          by nmb3000 ( 741169 ) on Sunday August 30, 2009 @02:24AM (#29249825) Journal

          Mac has a relatively tiny presence in the business world.

          Fixed that for you.

          What I asked for were examples of exploits, or reasons why this bug were really dangerous.

          And a bunch of people already pointed out that this bug gives you write-access to the kernel's memory. That's bad, privilege escalation bad.

          Market share, indeed. Remind me that the next time I want a cheap padlock, I should purchase a no-name lock. Since it has no market share, burglars won't try to pick it or break it.

          That's funny, because I recall seeing all sorts of instructions on how you can open MasterLock(TM)(R) and (ALL THAT) combination locks. They were so detailed, they would even specify which serial numbers of which models were vulnerable to which cracking techniques. And yet, I never saw any instructions for opening the Wal-Mart special RandomBrand of padlock.

          Market share does matter when it comes to investing time and money into exploiting flaws in a product. To say it is the only factor in operating system security is false, but saying it doesn't matter at all is just as wrong.

          • Re:I read (Score:5, Insightful)

            by node 3 ( 115640 ) on Sunday August 30, 2009 @05:26AM (#29250367)

            Market share does matter when it comes to investing time and money into exploiting flaws in a product. To say it is the only factor in operating system security is false, but saying it doesn't matter at all is just as wrong.

            No one is saying that it's not a factor. On the other hand, there are countless people who make the reverse mistake and state that Macs don't have exploits solely due to market share.

            This is easily debunked by:

            1. IIS exploits.
            2. Linux exploits (Linux market share is to Macs as Mac market share is to Windows)
            3. Mac apps. People still write apps for the Mac, why not viruses?
            4. There are plenty of viruses for the classic Mac OS.
            5. There are tens of millions of Mac users. Even though Windows has hundreds of millions, tens of millions is still a large and lucrative group to attack.

            The key isn't that Mac OS X is flawless or has too low a market share, it's that Windows is so easy to exploit. Design decisions made decades ago are still impacting Windows today. If you look at the typical Mac OS X bug and the typical Windows bug, you'll see that the Mac bugs tend to be very Unix-like in nature, in that some part of the system can be tricked into crashing by being passed data in a specific way. Many a Windows bug is due not to getting something to crash, but to using some feature in a way that tricks it into allowing unwanted things to happen.

            • Re:I read (Score:4, Insightful)

              by LurkerXXX ( 667952 ) on Sunday August 30, 2009 @08:43AM (#29251093)

              Some problems with your arguments.

              #1 IIS is a web server. That is a juicy target. It's much juicier than a lot of home OS X machines.

              #2 Linux is used very often as a server. Once again, a much, much juicier target than some OS X desktop.

              #3 They do, just far fewer of them, as with apps. Less written, less looking, less discovered.

              #5 Modern *nixes have pretty much all implemented the anti-exploit mechanisms which OS X has neglected to implement. What else haven't they done right as the other *nixes have?

            • Re:I read (Score:4, Insightful)

              by Simetrical ( 1047518 ) <Simetrical+sd@gmail.com> on Sunday August 30, 2009 @02:13PM (#29253639) Homepage

              2. Linux exploits (Linux market share is to Macs as Mac market share is to Windows)

              What are some examples of widely-exploited Linux bugs? (Of course there are isolated exploits, but the same is true for Mac. And of course there are very severe security vulnerabilities all the time, but that doesn't mean they're exploited in practice. And of course Linux machines are compromised on a regular basis, but that might be due to weak passwords and such.)

              3. Mac apps. People still write apps for the Mac, why not viruses?

              Apps have to compete with each other. If a particular niche is filled on Windows but not Mac, then a new Windows-only app will have to compete with the existing apps. A new Mac app won't, so it will get a much larger slice of a smaller pie. On the other hand, twenty different viruses can recruit your computer into twenty different botnets with no problem, as any Windows user should be able to attest. (How often does a virus scan on an infested computer turn up only one virus?)

              Besides, the number of apps for Mac is tiny compared to Windows.

              4. There are plenty of viruses for the classic Mac OS.

              Such as? I'm not doubting you, but I've never heard that claimed before.

              5. There are tens of millions of Mac users. Even though Windows has hundreds of millions, tens of millions is still a large and lucrative group to attack.

              It's a matter of cost and benefit. If it's three times the effort to write Windows exploits and you get twenty times the victims, there's just no reason to write viruses for Macs. Hackers are usually motivated by money, pure and simple.

              Anyway, I don't have any credentials in hacking. So I'll rely on Charlie Miller, who's a professional hacker (security expert). He demonstrated that he knows something about Mac exploits by cracking a Mac in two minutes flat a while back and winning pwn2own. In an interview, he said [tomshardware.com] (emphasis added):

              Between Mac and PC, I'd say that Macs are less secure for the reasons we've discussed here (lack of anti-exploitation technologies) but are more safe because there simply isn't much malware out there. For now, I'd still recommend Macs for typical users as the odds of something targeting them are so low that they might go years without seeing any malware, even though if an attacker cared to target them it would be easier for them.

              He's far from a Microsoft shill; he works for a security consulting firm and uses a Mac himself.

        • Re:I read (Score:5, Insightful)

          by beuges ( 613130 ) on Sunday August 30, 2009 @03:05AM (#29249953)

          The bug is really dangerous because it allows userspace to write anywhere in kernelspace. Yes, it's a local-only exploit, so the attack surface isn't that large. Or is it? How many pieces of software do you have running on your system right now that may contain vulnerabilities? It would be trivial for a skilled hacker to find an exploit in some arbitrary application, with the payload being an exploit of this particular issue. So your local-only exploit has a remote entry-point from any other piece of software that's running on your system.

          Local-only exploits are only less dangerous than remote exploits if your system has no contact with other systems. When you expose your system to others, all of your local exploits become remote exploits the moment any piece of software that you run has a remote exploit. Recently there have been a number of reports of vulnerabilities in common applications like Firefox, and Adobe doesn't have a particularly great security track record either. Ideally, a vulnerability in one of these applications would only be able to run code as the user, or attack the user's home directory. Except since you can now modify any address in kernel space, you can craft code that tells the kernel your userid actually has root permissions, in which case you now have complete control over the whole system.

          Every kernel-level exploit is *really dangerous*. Marketing people will try to play it down by saying that since it's local-only, it's not that bad, so that they can carry on making dumb 'im a pc, im a mac' adverts and patting themselves on the back. But all they're doing is lulling their userbase into a false sense of security.

        • Re: (Score:3, Insightful)

          by benjymouse ( 756774 )

          Yeah, I've read this "market share" argument used as a defense for shoddy MS code time and time again. That just doesn't cut it.

          So you think that an attacker thinks he must exploit each platform proportional to the market share?

          Or do you believe that each attacker randomly chooses a platform to specialize in proportional to market share. Or do they keep a list with number of slots according to each OS's market share?

          Consider this:

          1. Imagine you were on a shooting range. You can shoot for two different targets, one labelled "OS X" and the other one "Windows"
          2. One "OS X" target is 3 times larger than the other (OS X has 3 times the
    • by Alef ( 605149 )
      From Apple's summary of the bug:

      Description: An implementation issue exists in the kernel's handling of fcntl system calls. A local user may overwrite kernel memory and execute arbitrary code with system privileges. This update addresses the issue through improved handling of fcntl system calls. Credit to Razvan Musaloiu-E. of Johns Hopkins University, HiNRG for reporting this issue. [Emphasis mine]

      If you have the ability to alter kernel memory at an arbitrary place, you can accomplish pretty much anythin
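      Apple's "improved handling of fcntl system calls" presumably means routing results destined for user memory through the kernel's validated copy routine rather than writing through a raw pointer. Here is a toy userspace model of the two patterns; the ranges, names, and toy_copyout function are invented for illustration (the real copyout() lives inside the kernel).

      ```c
      /* Toy model of the fix pattern: the kernel must hand user-supplied
       * destination pointers to copyout(), which validates the address and
       * fails with EFAULT, instead of memcpy()ing through them directly.
       * Everything here is a userspace simulation with invented ranges. */
      #include <assert.h>
      #include <errno.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      static char user_mem[64];    /* pretend user address space   */
      static char kernel_mem[64];  /* pretend kernel address space */

      /* Validated copy: only destinations inside "user space" are allowed. */
      static int toy_copyout(const void *ksrc, void *udst, size_t len) {
          uintptr_t lo = (uintptr_t)user_mem, hi = lo + sizeof user_mem;
          uintptr_t d  = (uintptr_t)udst;
          if (d < lo || d + len > hi)
              return EFAULT;           /* kernel (or wild) address: refuse */
          memcpy(udst, ksrc, len);
          return 0;
      }

      int main(void) {
          uint64_t result = 0x4141414141414141ULL;  /* the 8 bytes to deliver */
          /* The buggy path was effectively memcpy(any_address, &result, 8). */
          assert(toy_copyout(&result, user_mem, 8) == 0);        /* legit dest */
          assert(toy_copyout(&result, kernel_mem, 8) == EFAULT); /* blocked */
          printf("copyout blocked the kernel-space destination\n");
          return 0;
      }
      ```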

    • by rekoil ( 168689 )

      By itself, a local exploit, say, a privilege escalation exploit, is only dangerous if you don't trust your local users.

      The real danger of a local exploit is that it allows a remote exploit, which normally can be contained by server process permissions, chroot jails, etc. to become more dangerous - if you can get a remotely-exploitable process to run local exploit code, you can own the box no matter what privilege restrictions the server with the remote attack vector is running with.

      In other words, via a loc

    • Re: (Score:3, Informative)

      by chefmonkey ( 140671 )

      Am I missing something?

      Yes, you are.

      The exploit allows users to choose an arbitrary location in memory -- including kernel space -- and write 8 bytes to it at a time. The 8 bytes are chosen by the terminal program attached to the TTY that the system call is made on. So, if a local program attaches to a TTY in the same way that a terminal does, and then makes this system call, it can load executable code into the kernel (or anywhere else, for that matter).

      In other words, it makes it ridiculously simple to ru

    • And, I also note that MS bugs are routinely exploited, locally and remotely. The unwarranted superiority complex looks pretty pathetic, doesn't it?

      Would you like to cite how you can remotely exploit NT 6? I would be fascinated to hear that. Unless you're just saying that local exploits are distributed and then run locally by users.

      So no, it doesn't. NT 6 is the most secure desktop operating system, followed by maybe Red Hat or SuSE linux and somewhere down the line perhaps Mac OS X. That's the security situation we're looking at, like it or not.

      Microsoft exploits get huge press-- Microsoft has lots of enterprise customers and needs to be very clear wh

      • "NT 6 is the most secure desktop operating system,"

        So, what you seem to be saying is, you have faith that over the next several months, as NT6 is adopted by more and more people, we will see an end to Windows exploits.

        Ohhhh-kay. Good luck with that. I remember similar expectations when Win9.x was finally dropped in favor of NT.

        I will grant that the security model seems to be improved over NT5.x I'll readily admit that security defaults are much improved over NT5.x But, I honestly believe that NT6.x will

  • Mature code? (Score:5, Insightful)

    by Casandro ( 751346 ) on Sunday August 30, 2009 @01:27AM (#29249611)

    I'm sorry, but what does Mac OS X have to do with mature code? Code is mature when it has lasted for _decades_ and no significant bug has been found. Mac OS X is just your average kernel. OK, there are _much_ worse around, but that doesn't make OS X any better.

    What _really_ is a shame is that it took them 4 years to fix it.

    • ...no significant bug has been found, but the code has regularly been reviewed.

    • By your definition, there is hardly any mature code out in userland. Adding features means you will create bugs, and since users crave features, there won't ever be a full set of software (app, os, daemon, etc) labeled mature by your definition, and only a small number of code segments that would be unchanged over a decade, let alone multiple decades.
      • Re: (Score:3, Insightful)

        by Casandro ( 751346 )

        Yes precisely, there is very little mature code. That's why you still have buffer overruns and other security critical bugs.

        New features don't have to mean that old code will be changed or made more insecure. There are many attempts at making computer systems modular so adding one piece of code will add a lot of new features to unchanged programmes. The oldest concept incorporating it is the UNIX concept where you have lots of small single-purpose programs which you can connect via pipes to serve any more c

      • by treat ( 84622 )

        By your definition, there is hardly any mature code out in userland.

        Of course not.

        Name a nontrivial example of mature code in wide use anywhere today.

        Not a single legacy system. Not a few lines of code in a huge application or OS. An actual complete mature application in use today. Name one.

        It doesn't take quibbling over the definition of mature for this to be readily apparent. If you're finding bugs in it yourself, if bugs aren't fixed because there are higher priority bugs to fix - it isn't mature!

    • Re:Mature code? (Score:5, Interesting)

      by TheRaven64 ( 641858 ) on Sunday August 30, 2009 @07:38AM (#29250795) Journal
      Well, it has lasted for decades, although bugs have been found (which is rather the point, and how something achieves maturity; code doesn't become mature by sitting untested). Mac OS X is a linear descendant of NeXTSTEP. Development is now 25 years old, and some bits of the kernel date back to earlier BSD and CMU Mach projects. The last bits of the kernel I read had comments date-stamped 1997, and these were commenting on modifications to older code.
      • At the same time, there's been big hunks grafted on recently, not just to the kernel but all the big pieces which were heavily revised, e.g. DPS to DPDF. When you change mature code, it's not mature any more. If they actually made use of the microkernel design of OSX (and used the microkernel as anything other than a HAL, which is literally all it does there) then maybe this situation would be different... but it's not.

        • by hondo77 ( 324058 )

          When you change mature code, it's not mature any more.

          So by your really interesting way of thinking, Apple shouldn't patch the bug at all because modifying the code would make it not mature and mature (i.e. old) code is always better than immature code. Which is great if you're working on VMS every day, I suppose.

          • When you change mature code, it's not mature any more.

            So by your really interesting way of thinking, Apple shouldn't patch the bug at all

            If this is what you got from my comment, you're stupid. What I said is that you can't call it mature code once someone has been grafting new functionality onto it, especially when they ARE throwing away big chunks of code that you could actually call mature. Bug fixes make code more mature; adding features makes it less so. This should not be a complicated concept. Maturity does not, of course, automatically lend quality; that only happens in responsible hands.

  • summary (Score:5, Informative)

    by Trepidity ( 597 ) <[delirium-slashdot] [at] [hackish.org]> on Sunday August 30, 2009 @01:56AM (#29249699)

    Despite its relative obviousness, it took me a bit of reading there to figure out what the cause of the bug was, since I was rusty on my Unix system calls, so here's a short summary.

    ioctl(2) is essentially a way of specifying system calls for drivers without actually making a system call API, so drivers can register their own calls in a more decentralized way. A call to ioctl(fd, cmd, args, ...) on a special/device file 'fd' gets routed to the driver that owns 'fd', which handles the command. The arguments might be values, or might be pointers to locations in which to return data.

    fcntl(2) provides a way to perform operations on open (normal) files, like locking/unlocking them. It has the same parameters as ioctl(), except that there's always a single integer argument.

    One way of implementing fcntl is essentially like ioctl -- find who owns the fd, and pass the cmd along to the relevant driver. But, Apple's code did this even for the operations on special devices normally manipulated via ioctl, so you could basically do an ioctl via fcntl. But, this bypasses some of the arg-checking that ioctl does, since fcntl always has one integer argument. So an easy exploit arises: call an ioctl that normally takes one pointer argument to assign something to. ioctl would normally check that the pointer is valid (something the caller is allowed to write to) before writing to it in kernel mode. But you can pass in any memory location at all as an integer via fcntl's argument. Voila, you get data written to arbitrary locations in memory. As an added bonus, some calls let you manipulate what data gets written -- the example exploit uses a "get terminal size" ioctl, so you can vary what gets written by changing your terminal size.
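    The distinction is easy to see from userspace with Python's standard fcntl and termios modules. The sketch below exercises only the legitimate ioctl() path, where the kernel copies the window size into a caller-supplied buffer whose address it has validated; on a vulnerable kernel, issuing the same TIOCGWINSZ request through fcntl() instead would skip that validation and treat the integer argument as a raw destination pointer. The pty is only there to make the example self-contained on a POSIX system (it is not part of the exploit).

    ```python
    import fcntl, os, pty, struct, termios

    # Allocate a pseudo-terminal so the example runs even without a
    # controlling tty (e.g. under a test harness).
    master, slave = pty.openpty()

    # Set a known window size on the pty: struct winsize is four
    # unsigned shorts (rows, cols, xpixels, ypixels).
    fcntl.ioctl(slave, termios.TIOCSWINSZ, struct.pack('HHHH', 24, 80, 0, 0))

    # The sanctioned path: ioctl() writes the result into a buffer the
    # caller supplies, and the kernel validates that address first.
    buf = fcntl.ioctl(slave, termios.TIOCGWINSZ, b'\x00' * 8)
    rows, cols, _, _ = struct.unpack('HHHH', buf)
    print(rows, cols)  # 24 80

    os.close(master)
    os.close(slave)
    ```

    Swapping the fcntl.ioctl call for fcntl.fcntl with the same TIOCGWINSZ request is exactly the one-liner from TFA, and on an unpatched kernel the "buffer address" is whatever integer you pass.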

  • by Bromskloss ( 750445 ) <auxiliary.addres ... nOspAm.gmail.com> on Sunday August 30, 2009 @02:00AM (#29249715)

    The mechanics are so simple that can be easily explained to anybody possessing some minimal knowledge about how operating systems works.

    So then do so in the summary!

    • While the concept is simple, the example given really isn't that good at proving it.

      1. It includes uncommon (in terms of everyday use) libraries and headers.
      2. The function calls and enumerations/global variables really have horrible names.

      So unless you use these uncommon features in your work, that example isn't really that good, even if you do have a good understanding of operating systems.
      So really the post is just the guy saying "see how 7337 I am, I found a way to hack a computer in a twitter line."

  • Oh god (Score:5, Funny)

    by clarkkent09 ( 1104833 ) * on Sunday August 30, 2009 @02:02AM (#29249727)
    This article presents some twitter-size programs that trigger the bug.

    Ok, I get libraries of congress and olympic-sized swimming pools, but twitter is a new one. Is it used for measuring how long a program is or how pointless it is?
  • by ygslash ( 893445 ) on Sunday August 30, 2009 @03:09AM (#29249967) Journal

    Even after the recent security update on Tiger, I still get a kernel panic with the Python code supplied in TFA:


    import termios, fcntl
    fcntl.fcntl(0, termios.TIOCGWINSZ)

    Yeah, I'm planning to upgrade to Snow Leopard soon, after having skipped Leopard. But has Tiger already been abandoned to this extent?

  • This article presents some twitter-size programs that trigger the bug.

    Out of interest, what's the justification for linking to the article on "programs that trigger the bug" and not in the blindingly obvious place ("This article")?
    I ask because it seems to be in-line with some kind of brain-dead in-house Slashdot linking style, and I'm curious to know the reasoning behind it.

    • Out of interest, what's the justification for linking to the article on "programs that trigger the bug" and not in the blindingly obvious place ("This article")? I ask because it seems to be in-line with some kind of brain-dead in-house Slashdot linking style, and I'm curious to know the reasoning behind it.

      I gave up long ago trying to determine what, if anything, a given link in an article summary had to do with anything. Either determining where to put a link in a sentence is a lot harder than I think,

  • Love the editing (Score:5, Insightful)

    by Pedrito ( 94783 ) on Sunday August 30, 2009 @07:18AM (#29250715)
    or lack thereof:

    "The mechanics are so simple that can be easily explained to anybody possessing some minimal knowledge about how operating systems works."

    "...so simple that it can be easily..."

    The choice of "some minimal" is a bit questionable too. "some" or "minimal" alone would have been sufficient to convey the meaning. Together, it sounds almost redundant.

    "Beside being a good educational example this is also a scary proof that very mature code can still be vulnerable in rather unsophisticated ways."

    "Beside" means "next to". "Besides" means "other than".

    Not that it really matters. The mainstream news sites can't seem to compose articulate sentences either. Grammar has really gone to crap, and it really bugs me that English-based news providers can't be bothered to produce fluent English stories.
    • Re: (Score:2, Informative)

      by Megane ( 129182 )
      "Mechanics" is plural, and yet you want to use "it" as a pronoun for a plural word? Grammar really has gone to crap. (Also, how operating systems work, another case of mis-matched plurality.)
    • or lack thereof:

      "The mechanics are so simple that can be easily explained to anybody possessing some minimal knowledge about how operating systems works."

      "...so simple that it can be easily..."

      Since we're being grammar Nazis:

      "...so simple that they can be easily..."

  • Finder (Score:3, Interesting)

    by Weezul ( 52464 ) on Sunday August 30, 2009 @11:21AM (#29252135)

    You can find a major privilege escalation hole in Finder quite easily:
    http://ask.metafilter.com/131473/Does-this-create-a-local-root-exploit-for-Mac-OS-X-using-Finder
    Finder isn't setgid but may access any gid!
