Overeager Compilers Can Open Security Holes In Your Code

jfruh writes: "Creators of compilers are in an arms race to improve performance. But according to a presentation at this week's annual USENIX conference, those performance boosts can undermine your code's security. For instance, a compiler might find a check that tests whether a memory access reaches far beyond what's allocated to the program, decide the check can never matter, and eliminate it from the compiled machine code, even though it's a necessary defense against buffer overflow attacks."
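
The kind of elimination described in the summary can be sketched in a few lines of C. This is an illustrative pattern, not code from the presentation; the names are made up:

    /* A bounds check that an optimizer may legally delete: if buf + len
     * overflows, the behavior is already undefined, so the compiler may
     * assume "buf + len < buf" is always false and remove the second test. */
    int request_fits(char *buf, char *buf_end, unsigned int len)
    {
        if (buf + len > buf_end)      /* ordinary bounds check: kept */
            return 0;
        if (buf + len < buf)          /* wrap-around check: relies on UB, may be elided */
            return 0;
        return 1;                     /* caller writes buf[0..len-1] */
    }

    /* A safer formulation avoids the overflow instead of trying to detect it
     * after the fact:
     *     if (len > (size_t)(buf_end - buf)) return 0;
     */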

  • by iggymanz ( 596061 ) on Friday June 20, 2014 @04:02PM (#47284407)

    It's been well known for decades that optimizing compilers can produce bugs, security holes, code that doesn't work at all, etc.

  • by Anonymous Coward on Friday June 20, 2014 @04:05PM (#47284427)

    Any code removal by the compiler can be prevented by correctly marking the affected data as volatile (in C) or its equivalent.

  • by NoNonAlphaCharsHere ( 2201864 ) on Friday June 20, 2014 @04:13PM (#47284475)
    That's why I always use a pessimizing compiler.
  • by Anonymous Coward on Friday June 20, 2014 @04:17PM (#47284505)

    The kinds of checks that compilers eliminate are ones which are incorrectly implemented (they depend on undefined behavior) or happen too late (after the undefined behavior has already been triggered). The actual article is reasonable: it's about a tool to help detect errors in programs that suffer from this. The compilers are not the problem.
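
    A minimal sketch of the "too late" case, loosely modeled on a well-known Linux kernel bug (the names here are illustrative):

        struct device { int flags; };

        int poll_device(struct device *dev)
        {
            int flags = dev->flags;   /* dereference happens first: if dev is NULL,
                                         undefined behavior has already occurred */
            if (!dev)                 /* so the compiler may assume dev is non-NULL
                                         here and delete this check entirely */
                return -1;
            return flags;
        }

    Moving the NULL check before the dereference keeps both the check and the defined behavior.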

  • by Smerta ( 1855348 ) on Friday June 20, 2014 @04:35PM (#47284617)

    The classic example of a compiler interfering with the programmer's intent and opening a security hole is failure to wipe memory.

    On a typical embedded system - if there is such a thing (no virtual memory, no paging, no L3 cache, no "secure memory" or vault or whatnot) - you might declare some local (stack-based) storage for plaintext, keys, etc. Then you do your business in the routine, and you return.

    The problem is that even though the stack frame has been "destroyed" upon return, the contents of the stack frame are still in memory, they're just not easily accessible. But any college freshman studying computer architecture knows how to get to this memory.

    So the routine is modified to wipe the local variables (e.g. an array of uint8_t holding a key or whatever). The problem is that the compiler is smart and sees that nothing reads back from the array after the wiping, so it decides that the observable behavior won't be affected if the wiping operation is elided.

    Making these local variables volatile keeps the compiler from optimizing away the wiping operations.

    The point is simply that there are plenty of ways code can be completely "correct" from a functional perspective, but nonetheless terribly insecure. And often the same source code, compiled with different optimization options, has different vulnerabilities.
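
    A minimal sketch of that pattern (the names are illustrative, not from any particular library):

        #include <stdint.h>
        #include <string.h>

        void use_key(void)
        {
            uint8_t key[32];
            /* ... derive the key and use it ... */

            /* Dead-store elimination: nothing reads 'key' after this point, so an
             * optimizing compiler may drop this memset and leave the key bytes on
             * the stack. */
            memset(key, 0, sizeof key);
        }

        /* One common workaround: write through a volatile-qualified pointer so
         * the stores are treated as observable and, in practice, are not elided. */
        void secure_wipe(void *p, size_t n)
        {
            volatile uint8_t *vp = (volatile uint8_t *)p;
            while (n--)
                *vp++ = 0;
        }

    Where available, C11's memset_s or platform calls such as explicit_bzero exist for exactly this reason.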

  • by KiloByte ( 825081 ) on Friday June 20, 2014 @04:37PM (#47284647)

    Or rather, that optimizing compilers can expose bugs in buggy code that weren't revealed by naive translation.

  • by vux984 ( 928602 ) on Friday June 20, 2014 @04:45PM (#47284707)

    Any code removal by the compiler can be prevented by correctly marking the affected data as volatile (in C) or its equivalent.

    Knowing that the code will be removed by the compiler is a prerequisite for using the volatile keyword.

    That requires knowing a lot more about what the compiler is going to do than one should presume. The developer should not have to have foreknowledge of what compiler optimizations someone might enable in the future, especially as those optimizations might not even be known in the present.

    The normal use case for 'volatile' is to tell the compiler that you know and expect the memory will be modified externally; and beyond that you don't really need to know exactly what the compiler does. As the developer reasonably knows that the memory is supposed to be modified externally, it is not unreasonable that he mark it as such.

    Most security-related vulnerabilities arising from compiler optimization tend to revolve around defending against memory being modified or read externally when it normally shouldn't be. But that is the DEFAULT assumption for all memory.

    If you do not expect the memory to be written to or read from, you don't mark it volatile.

    So now you are saying, well, if you REALLY don't expect the memory to be written to or read from externally, then you should mark it volatile.

    So... that has us marking everything volatile that we know will be modified or read from externally, and also everything that we know should NOT be modified or read from externally. So clearly we should mark everything volatile all the time, because pretty much all memory is either "supposed to be read and written to externally" or "not supposed to be read and written to externally". The only situation where it's safe not to use volatile, then, is when you don't care whether it's read or written to externally, because you KNOW that it can't cause a bug or expose a vulnerability.

    I can't offhand think of many situations where I could say with any degree of certainty that if something read or wrote that memory externally it wouldn't matter, and it would rarely be the best use of my time to try to establish it... so really... mark everything volatile all the time.

    Clearly THAT isn't right.
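
    For contrast, a minimal sketch of the "normal" volatile use case described above, where the memory really is modified outside the normal flow of the program (here, from a signal handler):

        #include <signal.h>

        static volatile sig_atomic_t got_signal = 0;

        static void on_sigint(int sig)
        {
            (void)sig;
            got_signal = 1;        /* modified "externally" from the main flow */
        }

        int wait_for_interrupt(void)
        {
            signal(SIGINT, on_sigint);
            while (!got_signal)    /* without volatile, the compiler could hoist this
                                      read out of the loop and spin forever */
                ;
            return 0;
        }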

  • by Marillion ( 33728 ) <ericbardes at gmail.com> on Friday June 20, 2014 @04:45PM (#47284715)
    Right. The other part of the issue is why didn't anyone write a test to verify that the buffer overflow detection code actually detects when you overflow buffers?
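
    One hypothetical way to do that, assuming a request_fits() check like the one sketched earlier, and running the test with the same optimization flags as the release build rather than only a -O0 debug build:

        #include <assert.h>

        int request_fits(char *buf, char *buf_end, unsigned int len);  /* as sketched above */

        int main(void)
        {
            char buf[16];
            assert(request_fits(buf, buf + sizeof buf, 8)  == 1);  /* in range: accepted */
            assert(request_fits(buf, buf + sizeof buf, 64) == 0);  /* too large: must be rejected */
            return 0;
        }

    The wrap-around case is the hard part to test portably, since constructing the overflowing input is itself undefined behavior, which is part of why these bugs slip through.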
  • Re:Simple. (Score:3, Insightful)

    by Desler ( 1608317 ) on Friday June 20, 2014 @04:48PM (#47284739)

    C became popular because it was vastly more portable and performant than its predecessors. It still is today. None of those "better" languages that came before it or after it can beat that. And yes, extreme portability does matter when you have hundreds of millions if not billions of devices that can't run anything but assembly or C. It's why the people saying that OpenSSL should be written in Java or C# are morons. Care to tell me how that's going to run on, for example, a Linksys WRT54G with only 8 or 16 MB of RAM, 2 to 4 MB of Flash storage and a 125 to 240 MHz MIPS CPU? Yeah, it's not.

  • by AuMatar ( 183847 ) on Friday June 20, 2014 @04:51PM (#47284749)

    Because it worked in debug mode (which generally has optimizations off)?
    Because it was tested on a compiler without this bug? The people writing the memory library are usually not the people writing the app that uses it.
    Similarly, because it was tested on the same compiler, but with different compiler flags?
    Because that optimization didn't exist in the version of the compiler it was tested on?
    Because the test app had some code that made the compiler decide not to apply the optimization?
    Life is messy. Testing doesn't catch everything.

  • Re:Simple. (Score:3, Insightful)

    by Desler ( 1608317 ) on Friday June 20, 2014 @05:02PM (#47284819)

    Well, I'd be pretty pissed as well if my pet language was relegated to the graveyard of obscurity by a language that was usable for real work. Dennis Ritchie was a pragmatist who got shit done, not some guy wanking over the greatness and purity of the language he created. People to this day are still jealous of that.
