Java Programming Security

NULL Pointer Exploit Excites Researchers (327 comments)

Da Massive writes "Mark Dowd's paper 'Application-Specific Attacks: Leveraging the ActionScript Virtual Machine' has alarmed researchers. It points out techniques that promise to open up a class of exploits and vulnerability research previously thought to be prohibitively difficult. The small but growing group of information-security experts who have had the chance to read and digest the paper are expressing excitement or concern, depending on how they interpret it. While the Flash vulnerability described in the paper [PDF] has been patched by Adobe, the presentation of a reliable exploit for NULL pointer dereferencing has fascinated the researchers who have read it. Thomas Ptacek has an explanation of Dowd's work, and Nathan McFeters at ZDNet is 'stunned by the technical details.'"
This discussion has been archived. No new comments can be posted.

  • Binary blobs (Score:5, Insightful)

    by should_be_linear ( 779431 ) on Friday April 18, 2008 @05:37AM (#23115228)
    Some years ago I had many proprietary binary blobs on my computer: Sun's Java browser plugin (now OSS), Adobe Acrobat (don't need it any more, the OSS alternatives are equal now), the nVidia driver (still needed, but a solution is on the way -> looking forward to switching to ATI as soon as the GPL drivers get there), and MS media codecs (don't need them any more, Flash ate MS's streaming-video pie). Now only the Flash player remains, and I don't see a replacement for it in the OSS world in the foreseeable future. Add the security concerns and the 64-bit issue, and it is clearly the major PITA for Linux desktop users. It doesn't look like that will change any time soon.
  • by Roofus ( 15591 ) on Friday April 18, 2008 @06:48AM (#23115458) Homepage
    I was wondering the same thing about the Java tag. My thought is that the editors are actually ignorant and biased.
  • Re:fubar (Score:3, Insightful)

    by Anonymous Coward on Friday April 18, 2008 @06:57AM (#23115494)
    The NX bit doesn't get rid of the problem entirely, though. To use this example, it sounds like an exploit can be written pretty much entirely in ActionScript bytecode. Also, just because the stack is non-executable, what's to stop me from overwriting the return address to point at, say, libc's system(), and placing a nasty shell command on the stack?
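
    (A hedged sketch of my own, not from TFA: the classic unchecked copy that lets an attacker clobber the saved return address. With an executable stack you'd point it at shellcode in the buffer; with NX you'd point it at something already executable, such as system() -- the return-to-libc idea above. The greet()/main() names are made up for illustration.)

        #include <cstdio>
        #include <cstring>

        // Classic vulnerable pattern: no bounds check on the copy.
        void greet(const char *name) {
            char buf[16];
            std::strcpy(buf, name);   // an over-long name overflows buf and clobbers
                                      // the saved return address higher up the stack
            std::printf("hello %s\n", buf);
        }

        int main(int argc, char **argv) {
            if (argc > 1)
                greet(argv[1]);       // a long argv[1] smashes the stack frame
            return 0;
        }
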
  • Re:hmm. (Score:2, Insightful)

    by keysersoze_sec ( 1229038 ) on Friday April 18, 2008 @07:01AM (#23115506)

    does it run Linux?
    Definitely. Just need to massage some asm code to make it fit.

    part of the security of Linux relies on the smaller audience -- it's not as attractive a target
    Dude, if you feel safe just because you're running Linux, you could be surprised some day. Plus, the "smaller audience" is not so small anymore, thanks to Ubuntu and the like. On the other hand, projects like PaX and grsecurity, constant code reviews and bug monitoring do make Linux a pretty safe place.
  • Re:boring? (Score:5, Insightful)

    by ubernostrum ( 219442 ) on Friday April 18, 2008 @07:02AM (#23115510) Homepage

    Wow, an error in a program. This seems akin to ground-breaking front-page news: a cat stuck in a tree rescued by firemen.

    Actually, it is a big deal, as you'd know if you'd read the article(s). But you're too lazy for that, so here's the short summary:

    Lots of interesting (and important) security problems revolve around figuring out a way to take an error in a program and turn it into a way to have that program execute arbitrary code of your choice. Traditionally, NULL pointer exceptions have not been fruitful ground for this because, well, a NULL pointer is NULL -- there's nothing on the other end of the pointer for the unsuspecting program to read or execute, so it simply crashes. And merely crashing the program isn't all that interesting, since at best it gets you a denial of service. But this guy (Dowd) found what would have been a run-of-the-mill NULL pointer exception in Flash and parlayed it into full-scale arbitrary code execution through a series of fairly impressive tricks. You really should go read Ptacek's summary, because it has all the gory details and will, if nothing else, make you realize what an amazing hack this is.
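
    (A minimal sketch of my own, not from the paper, of the 'boring' case described above: an unchecked allocation comes back NULL and the dereference just kills the process instead of handing the attacker anything useful.)

        #include <cstdlib>
        #include <cstring>

        int main() {
            // Ask for an absurd amount so malloc is very likely to return NULL.
            char *buf = static_cast<char *>(std::malloc(static_cast<size_t>(-1) / 2));

            // No NULL check. On a typical system the write below hits the unmapped
            // page at address zero and the process dies with a segfault: a crash,
            // not code execution. The cleverness in TFA is finding a case where the
            // offset from NULL is attacker-controlled and lands somewhere mapped.
            std::memcpy(buf, "attacker data", 14);
            return 0;
        }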

  • by ChunderDownunder ( 709234 ) on Friday April 18, 2008 @07:05AM (#23115534)
    Probably because they see JavaScript, bytecode and virtual machine all in the same sentence. Put two and two together and you end up with five.
  • by Tony Hoyle ( 11698 ) <tmh@nodomain.org> on Friday April 18, 2008 @07:24AM (#23115592) Homepage
    On a modern OS you have to work hard to make malloc fail. OSs will grant memory requests far above the amount of physical memory, and will even overcommit the virtual memory on the theory that you're not going to use all of it anyway.

    The only way I've seen to make it fail consistently is not low memory but asking for ludicrous amounts, like 4GB at once on a 32-bit system. Try it: get your system into a low-memory condition and execute a few mallocs. They don't fail; the OS merely keeps growing virtual memory and swapping more and more.
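
    (A quick test along those lines -- my own sketch, assuming a 64-bit Linux box with the default overcommit policy: the requests keep 'succeeding' long past physical RAM because nothing is committed until the pages are actually touched.)

        #include <cstdio>
        #include <cstdlib>

        int main() {
            // Request 1 GB at a time. With the default Linux overcommit policy these
            // calls keep returning non-NULL well past physical RAM, because no pages
            // are actually committed until the memory is written to.
            for (int i = 1; i <= 16; ++i) {
                void *p = std::malloc(1024UL * 1024 * 1024);
                std::printf("allocation %2d of 1 GB: %s\n", i, p ? "succeeded" : "failed");
                // deliberately never touched and never freed
            }
            return 0;
        }
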
  • by jo42 ( 227475 ) on Friday April 18, 2008 @07:29AM (#23115620) Homepage
    So you would ban knives from the kitchen because they are sharp and can cut you?

    If you think Flash sucks now, wait until it is written in 100% Java.
  • by pla ( 258480 ) on Friday April 18, 2008 @07:58AM (#23115738) Journal
    Assuming that Flash is made in C or C++, here is another very vivid example of why these languages should be banned.

    You do understand that all those nasty loosely-typed pointer-based exploits you and others disdain in C exist because C nicely mirrors how the actual hardware handles similar concepts?


    If failure of allocation threw an exception, instead of just returning null, there would be no problem.

    And if programmers would check that the allocation succeeded, we would also have no problem.

    In your hypothetical "safe" language (C#, for example), I can't count how many times I've seen system calls wrapped in a try/catch to hide the exception, then carry on pretending the call worked just fine. Guess what? SAME DAMNED PROBLEM!



    Don't blame the pipe-wrench for making a poor hammer. Blame the craftsman too lazy to find a hammer.
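
    (For the record, a trivial sketch of the check in question, nothing more: a NULL return from malloc is treated as a real error rather than marched past with a bad pointer.)

        #include <cstdio>
        #include <cstdlib>
        #include <cstring>

        int main() {
            // The check in question: a NULL return from malloc is an error,
            // not something to shrug off and dereference anyway.
            char *buf = static_cast<char *>(std::malloc(64));
            if (buf == NULL) {
                std::fprintf(stderr, "out of memory\n");
                return 1;
            }
            std::strcpy(buf, "allocation checked");
            std::puts(buf);
            std::free(buf);
            return 0;
        }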
  • Re:Big deal (Score:5, Insightful)

    by Anonymous Coward on Friday April 18, 2008 @08:06AM (#23115778)

    Because it can probably be made to work cross-version, cross-platform and cross-architecture?

    Because everyone has Flash installed?

    Because it opens up a whole class of common bugs previously thought to be unexploitable?

    Because the way he does it is nothing short of godlike?

    This is HUGE.

  • by hey! ( 33014 ) on Friday April 18, 2008 @08:07AM (#23115782) Homepage Journal
    The kitchen? No. The nursery? Might be a good idea.
  • by Anonymous Coward on Friday April 18, 2008 @08:10AM (#23115798)
    That's right, because those old languages like C++ don't support those new-fangled exceptions do they? We should all be writing in Java or C# and running our nice, safe code inside nice, safe virtual machine runtimes which do things like bytecode validation and address range checking. It's well known that virtual machine runtime implementations never have bugs in them that could be exploited.

    Wait, what was this article about again?
  • Re:fubar (Score:2, Insightful)

    by morgan_greywolf ( 835522 ) * on Friday April 18, 2008 @08:30AM (#23115904) Homepage Journal

    The NX bit doesn't get rid of the problem entirely, though. To use this example, it sounds like an exploit can be written pretty much entirely in ActionScript bytecode. Also, just because the stack is non-executable, what's to stop me from overwriting the return address to point at, say, libc's system(), and placing a nasty shell command on the stack?
    This assumes, of course, that you know the entry point of libc's system(). Since glibc is typically a dynamically-linked ELF .so these days on Linux, this means that you need to know the architecture on which your target is running, the specific version of glibc in use, etc.

    While you can determine this easily for any given architecture/Linux distro pair, determining which particular distro and architecture are present remotely is problematic at best.

    Furthermore, if I'm not mistaken, there are certain SELinux rules you can use to prevent shell scripts from doing nasty things.
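
    (One quick way to see the point, assuming a Linux/glibc system: ask the dynamic linker where system() was actually resolved in this process. The address differs across distros, glibc builds and, with address-space randomization, even across runs, which is what makes hard-coding a return-to-libc target fragile.)

        #ifndef _GNU_SOURCE
        #define _GNU_SOURCE   // for RTLD_DEFAULT in glibc's dlfcn.h
        #endif
        #include <cstdio>
        #include <dlfcn.h>    // link with -ldl on older glibc

        int main() {
            // Ask the dynamic linker where system() was resolved in this process.
            void *addr = dlsym(RTLD_DEFAULT, "system");
            std::printf("system() lives at %p here\n", addr);
            return 0;
        }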
  • Re:hmm. (Score:3, Insightful)

    by JasterBobaMereel ( 1102861 ) on Friday April 18, 2008 @08:33AM (#23115924)
    The difference is that on Linux the browser runs as you and so can only affect your own files ... (which you have backed up, right?). On Windows the browser runs as an elevated user and so can affect much more ...

    If, however, it can run arbitrary x86 code directly, all bets are off and operating-system security is basically non-existent except at the hardware level ...

  • by spir0 ( 319821 ) on Friday April 18, 2008 @08:34AM (#23115926) Homepage Journal

    Assuming that Flash is made in C or C++, here is another very vivid example of why these languages should be banned.
    look, there's really no nice way of saying this, but ... well, you're an idiot.

    what should be banned are those useless coders who think that a language should do everything for you, thus making you lazy and bloated like your code.

    remember: you are in control of the machine, not the other way around.
  • by siride ( 974284 ) on Friday April 18, 2008 @08:43AM (#23115988)
    You have to have a contiguous section of address space large enough for the allocation. If you're on Windows and you've already allocated, say, a gigabyte of heap space, plus whatever is taken up by your code, stack and loaded libraries, then even a relatively small request might end up failing, even if there is enough memory available. There is just no free chunk large enough to satisfy it. In fact, on a 32-bit system, I can say with 95% confidence that you could never allocate, say, 1GB in a single allocation and have it succeed. There are probably considerably smaller sizes for which that holds as well.
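
    (A crude probe of my own along those lines: keep halving the request until malloc finds a hole. In a busy 32-bit process the largest single contiguous allocation is usually far smaller than the total amount of free memory.)

        #include <cstdio>
        #include <cstdlib>

        int main() {
            // Keep halving the request until malloc finds a contiguous hole big
            // enough. The first success approximates the largest single allocation
            // this process's address space can still satisfy.
            for (size_t want = 2048UL * 1024 * 1024; want >= 1024UL * 1024; want /= 2) {
                void *p = std::malloc(want);
                if (p != NULL) {
                    std::printf("largest single allocation: about %zu MB\n",
                                want / (1024 * 1024));
                    std::free(p);
                    return 0;
                }
            }
            std::puts("could not even get 1 MB");
            return 0;
        }
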
  • Re:fubar (Score:1, Insightful)

    by Anonymous Coward on Friday April 18, 2008 @08:49AM (#23116018)
    If you find that it is possible that a wheel can fall off your car, do you fix the problem because:

    a) if the wheel falls off at just the right time, your car can swerve into oncoming traffic and kill an entire bus full of precious children,

    or

    b) wheels coming off cars are a generally bad thing.

    ?

    Rephrasing the question: do we fix NULL pointer problems because

    1) some clever guy with time to waste might be able to figure out how to convert the problem into the erroneous detonation of a nuclear warhead,

    or,

    2) because it is just an easy-to-fix stupid bug, and bugs are just bad in general?

    My assessment of this situation is that far too much analysis was undertaken. Rather than piecing together some careful exploit, this guy could have spent the time looking for other bugs in the code instead. NULL pointer dereferences are among the easiest bugs to find and fix. Report it, fix it, and move on. Filling the net with manifestly useless fear while you try to prove your cleverness accomplishes ... what?
  • Re:fubar (Score:3, Insightful)

    by Hal_Porter ( 817932 ) on Friday April 18, 2008 @08:55AM (#23116062)
    You're aware this site is News for Nerds, right? If this sort of thing is boring to you, maybe you're on the wrong site.
  • Re:Big deal (Score:5, Insightful)

    by n0-0p ( 325773 ) on Friday April 18, 2008 @09:19AM (#23116260)

    It's news because it's a general method for code execution from a common class of NULL pointer dereferences. He turned something that most people consider a crash bug into a code execution bug. Here's a simpler example from Dowd's blog: http://taossa.com/index.php/2007/04/15/bored-games/ [taossa.com]

    The other reason it's news is that his method for exploiting Flash in this case is technically brilliant. I can understand if you don't appreciate it, but the security people out there are just overwhelmed.

  • by Anonymous Coward on Friday April 18, 2008 @09:28AM (#23116344)
    If you think Flash sucks now, wait until it is written in 100% Java.

    A great many people consider that Flash sucks for reasons other than its graphical performance. In other words, Flash could be using the snappiest/smoothest/nicest fonts and graphics ever and still be considered by a great many to be very sucky.

    On your comment about Java... The JVM has proved that everything written in pure Java is 100% immune to buffer overflows and overruns. You simply are NOT going to see a 'null exploit' for a Java JVM. This is not just a happy accident: it is a fact that can be demonstrated. You simply are NOT breaking out of Java-written code. Oh, of course there have been a few exploits here and there... Notably one on Linux where a malicious user could escape the JVM by exploiting the C-written zlib (how fun is that? good old C exploits).

    I seriously wish many of these libs were 100% Java, and that many more userland programs (everything not related to the kernel's internals) were written in Java.

    Give me a Web browser that's 100% Java, give me Flash 100% Java, give me media players 100% Java, give me games 100% Java.

    Oh wait, is last century's "Java is slow" troll still getting modded +5, Insightful?

    We just finished moving a nightly Monte Carlo option-pricing app at a major bank from C++ to Java. Performance? Identical.

    But, yeah, Java sucks, Java is slow, etc.

    Seriously dude, wake up: Java is in your pocket, you can't make a non-cash payment without Java being involved somewhere in the process, you can't use the Internet without hitting Java-backed web servers, Java is part of the Blu-ray specs, and there are entire countries where people's ID cards are Java smart cards. Google, eBay, FedEx: they all realized the value of the JVM. I write my apps and they work on any Un*x, Windows or OS X machine. Why the heck should I bother with *anything* OS-specific unless I'm a kernel hacker? Because I've been living in a cave for 30 years?

    Oh, it's not that the language is great (it's not). But the JVM is the best thing that happened to the IT world in decades: the Real World [TM] got it and the JVM's success is 100% justified.

    Java makes the real world go round and you better adapt and accept it, for the real world won't adapt to your bogus view about Java.

  • by Abcd1234 ( 188840 ) on Friday April 18, 2008 @10:24AM (#23116988) Homepage
    In your hypothetical "safe" language (C#, for example), I can't count how many times I've seen system calls wrapped in a try/catch to hide the exception, then carry on pretending the call worked just fine. Guess what? SAME DAMNED PROBLEM!

    Yeah, which is why you *don't catch exceptions you can't recover from*. It's a basic design tenet, and it's *easier* to do than fucking it up by incorrectly handling the error. Basically, in a language like C, you have two options:

    1) Check for the error and handle it, possibly incorrectly, leading to the problem you describe.
    2) Ignore the error, and your program could misbehave.

    An exception-based language gives you these options:

    1) Catch the error and handle it, possibly incorrectly, leading to the problem you describe.
    2) Ignore the error, and the program violently terminates.

    Gee, which one do you think is better from a security standpoint?

    Nope, sorry buddy, but *any* language that uses exceptions as the model for error indication will be superior, as far as security goes, to a language like C. The real problem with the GP's argument is that this also includes C++ (which, at least with a decent compiler, throws an exception if new fails to allocate a chunk of memory).
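
    (Roughly what that difference looks like in practice -- a sketch of my own, not anyone's production code, with made-up function names: the uncaught failure terminates the program, while the catch-and-shrug version recreates exactly the problem described above.)

        #include <cstdio>
        #include <new>

        int *allocate() {
            // The "ignore it" option in an exception-based language: if new fails it
            // throws std::bad_alloc, and an uncaught throw terminates the program
            // instead of letting it limp along with a bad pointer.
            return new int[1u << 20];
        }

        int *allocate_badly() {
            // The "handle it incorrectly" option: swallow the exception and hand the
            // caller a null pointer anyway, recreating the C problem described above.
            try {
                return new int[1u << 20];
            } catch (const std::bad_alloc &) {
                return NULL;
            }
        }

        int main() {
            int *a = allocate();
            std::printf("unhidden allocation at %p\n", static_cast<void *>(a));
            delete[] a;

            int *b = allocate_badly();
            if (b != NULL)          // every caller must now remember this check
                delete[] b;
            return 0;
        }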
  • by spir0 ( 319821 ) on Friday April 18, 2008 @10:55AM (#23117398) Homepage Journal

    Besides, having the computer automatically do things like memory management behind the scenes means that you don't have to do them in your code, making it simpler, not bloated.
    I disagree. Your source may not be bloated, but the binary at the end will be.

    making programming simpler only means that simpler coders will be coding.
  • by n0-0p ( 325773 ) on Friday April 18, 2008 @11:43AM (#23118272)
    I don't think that it has much at all on automating the workflow, which makes sense to me. Tools and fuzzers are changing so fast that they aren't well served by books. I already have a few books on those topics, and they've all grown stale within a year or two.

    The books that I keep around for a long time are the ones that really cover the essentials. I put this book in that category because it explains vulnerabilities more clearly and thoroughly than anything else out there. And it lays out all the process and tricks for finding security bugs. That's the kind of stuff that will be relevant for years.
  • Re:fubar (Score:3, Insightful)

    by mr_mischief ( 456295 ) on Friday April 18, 2008 @01:49PM (#23120216) Journal
    The fact that it's a bug and needs to be fixed hasn't changed. Its priority come debugging time just jumped up quite a bit because it went from a stability issue to a security issue.

    Debugging is generally handled in a triage fashion. The first bugs to fix are easily exploitable remote holes that allow arbitrary code execution with elevated privileges. Then come those that allow easy remote exploitation and arbitrary code execution at the user's restricted level. It goes on like that all the way down to bugs equivalent to "on platform X an unnecessary extra newline sometimes gets appended to the last line of output".

    If you have the time to make sure every piece of software you write is entirely and certifiably bug free, then that's great. Somehow, though, I imagine maintenance programming for the rest of us will probably still be prioritized based on severity.
  • by SL Baur ( 19540 ) <steve@xemacs.org> on Friday April 18, 2008 @02:29PM (#23120756) Homepage Journal

    The GP is a fucking moron if he thinks "throwing an exception" is a cure-all for insecure code. One guy develops a complicated NULL-pointer exploit that's valid in ONE virtual machine, and the GP reflexively supports banning C and C++.
    Crudely put, but correct. TFA clearly outlines the logic errors in the bytecode interpreter that make this possible. There's a window between input validation and use that can be exploited, and it is. Duh. There's an ill-advised difference between how the input validator and the executor treat invalid input that can be exploited, and it is. Duh.

    I thought this was going to be something interesting like the zero-page exploit described in Bach's Unix System V internals book, where on certain kinds of hardware it was possible to read NULL and near-NULL pointers without the machine faulting, allowing access to kernel data (which worked on my M68K Unix System V box at the time). Instead it's just a sloppily written bytecode interpreter bug.
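
    (Not Dowd's actual code -- just a schematic of that class of bug, with hypothetical names: the validator notices the bad value but doesn't stop anything, and the executor trusts the data regardless, so input that 'failed' validation still drives execution.)

        #include <cstdio>
        #include <vector>

        // Schematic only: a validator that flags the bad value but doesn't stop
        // anything, and an executor that trusts the data regardless.
        struct Op {
            int jump_target;
            bool flagged_invalid;
        };

        bool validate(Op &op, size_t code_len) {
            if (op.jump_target < 0 || static_cast<size_t>(op.jump_target) >= code_len)
                op.flagged_invalid = true;   // noted the problem...
            return true;                     // ...but reports success anyway
        }

        void execute(const Op &op, const std::vector<int> &code) {
            // The executor never consults flagged_invalid, so a hostile jump_target
            // indexes outside the code array -- the validate/use mismatch.
            std::printf("executing opcode %d\n", code[op.jump_target]);
        }

        int main() {
            std::vector<int> code(4, 0);
            Op op = {40, false};             // hostile, out-of-range target
            if (validate(op, code.size()))
                execute(op, code);
            return 0;
        }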
  • by Have Brain Will Rent ( 1031664 ) on Friday April 18, 2008 @02:46PM (#23120958)

    Most of the patches I see popping up in Ubuntu's update notifier seem to involve buffer overflows. Now there is this exploit because someone didn't check a pointer (failed malloc()/new/etc?).

    When I taught CS, back when dinosaurs ruled the earth, I would have given a homework assignment containing anything like that a D or an F.

    I mean WTF, fast forward a few eons and people are still writing code like this??? What do we have to do - shoot them???

    A programmer not checking for a null pointer return or a buffer overflow is the equivalent of... geez, I don't know... a surgeon forgetting to wash his hands before operating?
