Security

Stack-Smashing Protector

XNormal writes "It's not exactly new but for some reason it doesn't seem to be getting the attention it deserves. The stack-smashing protector developed by Hiroaki Etoh at IBM's Tokyo Research Lab is a patch for GCC that provides effective protection against buffer overflows. It protects against cases not covered by StackGuard and StackShield. It is well-supported on multiple versions of GCC and multiple platforms. Why is it not getting enough attention? Perhaps it needs a CatchyName instead of 'ssp'? I'll ponder this question while I'm recompiling all my executables that have an open port and the libraries they depend on."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    How does this compare to OpenBSD's new non-executable stack? I am confused by how this works, and also by how a non-executable stack works. I would appreciate it if someone could explain this in layman's terms.
    • by btellier ( 126120 ) <btellier@gm[ ].com ['ail' in gap]> on Monday August 05, 2002 @02:36AM (#4010634)
      Basically it's like this:

      both OpenBSD's non-exec stack and this new stackguard nonsense protect against one form of a particular class of vulnerabilities: Stack Based Buffer Overflows. The way these usually work is that by stuffing too much data into a variable you are able to cause the program to overwrite other variables which control what the program will do next. You can, in effect, tell the program to not execute whatever it was going to execute before, but instead execute your code. Typically this is accomplished by putting some low-level machine language inside that original variable you overflowed and then pointing the program to that machine code.
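The mechanism described above can be sketched in a few lines of C. This is a toy model, not a real exploit: `frame_sim` and `simulate_overflow` are illustrative names, and a real attack clobbers the actual saved return address on the stack rather than a struct field. The point is only that an unchecked copy past a fixed-size buffer overwrites whatever control data sits after it.

```c
#include <stdint.h>
#include <string.h>

/* Toy model of a stack frame: a fixed-size buffer followed by a
 * control value standing in for the saved return address. */
struct frame_sim {
    char buf[8];
    uint32_t control;   /* plays the role of the return address */
};

uint32_t simulate_overflow(const char *input, size_t len) {
    struct frame_sim f;
    f.control = 0x1234;                       /* "return address" */
    /* No bounds check against sizeof f.buf: copying more than 8
     * bytes spills into f.control. */
    memcpy((unsigned char *)&f, input, len);
    return f.control;                         /* changed iff len > 8 */
}
```

A short copy leaves `control` intact; a long one silently rewrites it, which is exactly the redirection of program flow the comment describes.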

      This will fail with obsd's method because you aren't allowed to execute machine code if it lies in the memory region that holds the special variables that control program flow.

      This is the most ridiculously trivial of all "protections" to defeat because you can execute machine code from other portions of memory. All you have to do is figure out a way to get the program to, at some point, load that machine code into memory. It might get slightly tricky if it's a remote exploit but when it's a local one you can usually just set an environment variable to some machine code. Since these vars get loaded onto the heap you can simply point the control variable to them and it'll execute anything you want.

      This will fail with this new method because between the variable you overflow and the variable that controls program flow there is a random number. Before looking to see where the control variable is pointing to the program will check to see if the random number is the same as it was before the function started. If you overwrote it in order to modify the control variable the program will stop.
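What the ssp instrumentation inserts automatically can be written out by hand, roughly like this. This is a minimal sketch, assuming nothing about the real patch's internals: `stack_guard` and `guarded_len` are made-up names, and the real guard value is randomized at program startup rather than a constant. The shape is the key part: load the guard on entry, re-check it before returning, and abort on mismatch.

```c
#include <stdlib.h>
#include <string.h>

/* Stand-in for the process-wide random guard value (the real ssp
 * randomizes this once, early at startup). */
static unsigned long stack_guard = 0x5f3c9ad1UL;

int guarded_len(const char *src) {
    unsigned long canary = stack_guard;  /* prologue: place the canary */
    char buf[16];
    /* The real patch also reorders locals so buf sits next to the
     * canary, meaning any overflow of buf tramples the canary first. */
    strncpy(buf, src, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';
    size_t n = strlen(buf);
    if (canary != stack_guard)           /* epilogue: re-check */
        abort();                         /* smashed: die before returning */
    return (int)n;
}
```

Because an overflow cannot reach the saved return address without first overwriting the canary, the epilogue check catches it before the corrupted return address is ever used.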

      Meanwhile this protection won't stop other kinds of buffer overflows, such as .data based overflows and heap overflows that smash memory trees... that's a bit more complicated, though, and difficult to describe in layman's terms.
      • On the Intel x86 arch you can set memory so that:

        It's executable,
        and/or
        read/writable.

        So if the stack/heap were set to R+W but not X, and the main application were set to X but not R+W, then buffer overflows (or self-modifying code) would never be possible.

        The problem is that most kernels are too crap to handle this properly, or to put it another way, the kernel doesn't provide sufficient architecture to allow you to write all of your code that way.

        • So, when your app allocates memory in which to load a dynamically-linked library, and this goes on the heap, and the heap is set to RW but not X, how will you make calls to the library?

          What if your code, as a fun optimization or hack, wants to store executable code in a variable and execute the code stored in the variable?

          Should we disallow an entire class of actions because people are too stupid and lazy to check the length of their data before they write it to memory?

          Justin Dubs
            Loading of dynamically-linked libraries should be handled by the kernel, not the application.

            The kernel should provide the architecture so that the application never has to do any of this stuff, all relocation etc... should be handled by the kernel and not the application.

            If the kernel does all of this then you can say that the operating system is secure against buffer overrun exploits; the apps will still crash, though, and could be compromised in other ways.

            Ring 0 (which should only be the kernel) can change the W+R+X of a memory address if it wants to.

        • So what about function pointers? These need to be on the heap, right? What if you want to call one function but the function pointer is corrupted, causing you to call something else?

          Granted, this would severely narrow the window that the Phrack article mentioned (it talked about overwriting function pointers in the GOT, which in your scheme would presumably be on a read-only page), but so long as buffers are overrunnable, there will be some code that experiences some type of exploit.

          Looking at the patch, it doesn't seem that this patch fully addresses this issue - it deals with reordering local variables to avoid some of the less desirable effects of overruns, but what about function pointers stored in classes or structs? Those can't be reordered.
          • You don't call the code in a function pointer. The function pointer is just a variable and it only contains the address of the code you want to run. The machine instruction only gets the address of the function pointer, loads the value of the address from memory and jumps to the memory location given by the function pointer's value. The only thing you'll need to do is to read the function pointer's value (no executable memory needed).

            Of course you probably want to modify the function pointer's value, so it has to be in writable memory, thus the kernel has really no way to protect it. So it doesn't have to be on the heap, but that doesn't help you much.
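The point above - that a function pointer is ordinary writable data holding the address of code - is easy to show directly. A small sketch with illustrative names: changing the pointer's value changes what runs, without the code pages themselves ever needing to be writable.

```c
/* Two ordinary functions living in (read-only, executable) code pages. */
static int add_one(int x)   { return x + 1; }
static int times_two(int x) { return x * 2; }

/* The call site only loads fp's value from (writable) memory and
 * jumps to that address; it never reads or writes the code itself. */
int call_through(int (*fp)(int), int arg) {
    return fp(arg);
}
```

This is exactly why an attacker who can overwrite a function pointer (or a GOT entry) controls execution flow even on a system with a strict W^X policy for code pages.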
              Exactly - my point was that one doesn't need write access to code that is also marked as executable in order to alter the program's execution. Of course function pointers have no need to be on executable pages; however, if they are writable at all, we have a problem. One just needs write access to something that is used in determining the path of execution. The Phrack article pointed out that the GOT (which is essentially a whole load of function pointers - one for every function called in a dynamic library) is loaded to a page that is left writable. Without adjusting things so that ld-linux.so is much more tightly tied to the kernel, I don't see how it's possible to avoid that. (I suppose there could be some kernel call that would remove the writable bits from a given page of memory so that the process could never make the pages writable again - is there such a call?)

              While function pointers are the most flexible variable used in determining execution flow, even in a scenario without a writeable GOT or any function pointers (which would require major redesign of at least libc and gcc), you may still be in trouble. As long as a variable that can be overwritten is used in a decision (i.e. if or switch) or as an array index a buffer overrun can affect the program flow. For example, it might be possible in some poorly written program to overwrite some piece of the program's configuration information through a complicated buffer overflow attack - many of even the most secure programs can be made insecure by a bad configuration.

              The only absolute solution to containing the potential damage from buffer overflows is to avoid buffer overflows altogether. Each of these steps simply minimizes the number of ways to exploit buffer overruns, which raises the bar and may even shrink the pool of potentially exploitable programs.
              • Prevent application processes from running in memory ranges they haven't requested privileges for.

                If you can't write directly into the application, and you can't redirect the application to execute outside its requested space, then it becomes very difficult to get those M$-type exploits.

                I think Microsoft is doing something like this with Palladium? (I only said like, mind you!)

                The kernel is responsible for loading executables
                You can request access mode change for a page of memory from the kernel.

                The kernel won't permit the application's EIP to be in pages that the application hasn't requested access for.

                When you malloc, you request your access mode.

                This kind of model also helps threading, because the kernel knows where self-modifying code is, where read-only pages are, etc., and can make far better use of caches and page flushes between threads.
        • To further elaborate, you can set it so only certain regions are read/write or executable/(read optional) - the only thing is, dividing your code/data/stack into "segments" like this isn't used at all, because it's easier just to use all the pages in memory in one big segment, aliased into code/data/stack. It's just convenient. The memory itself (the page tables) doesn't contain such attributes (rw, x). Therefore it's easy to write code into the stack and execute it in the "code" segment (which is really the same chunk of memory).
    • by Anonymous Coward
      This approach catches a different variation of buffer overflow attacks. Basically, the stack of a function holds information local to that function, such as its data and where to go (return) when the function completes.

      Stack attacks are possible when a function allocates a fixed amount of data on the stack for input, and more data is stored than fits in this buffer. The extra data could then overwrite all kinds of data following the buffer.

      This protection moves the buffer(s) to the last position in the stack. This protects the return address of the function. Even if the attacker manages to put (executable) code in this buffer, he is unable to reach it. He's also unable to jump to existing sensitive areas in your code.

      In comparison, the non-executable stack protection allows you to reach that code, but the moment you reach it the OS faults your program.
      It can't protect you against existing code in your program.
  • by cpeterso ( 19082 ) on Sunday August 04, 2002 @05:10PM (#4009192) Homepage
    Microsoft Visual C++ .NET (aka MSVC7) has a similar feature called Buffer Security Check [microsoft.com]. This is for "unmanaged" C++ code, not C#/.NET/CLR code. This new compiler option /GS is on by default.

    /GS (Buffer Security Check)

    The /GS option is used to detect buffer overruns, which overwrite the return address -- a common technique for exploiting code that does not enforce buffer size restrictions. This is achieved by injecting security checks into the compiled code.

    On functions subject to buffer overrun problems, the compiler will allocate space on the stack before the return address. On function entry, the allocated space is loaded with a security cookie that is computed once at module load. Then, on function exit, a compiler helper is called to make sure the cookie's value is still the same. If the value is not the same, an overwrite of the return address has potentially occurred, and so an error will be reported and the process (or at least the thread) terminated.

    • I forgot to mention that the "security cookie" pushed on the stack before the return address is called a canary. I thought that was pretty clever. :-)
    • by Anonymous Coward
      Right, and if I recall correctly this was the subject of the first .NET "exploit".

      True it was hyped as an exploit when it really isn't, but it goes to show that there is no replacement for skilled and careful coding.

      See http://www.cigital.com/news/mscompiler-tech.html [cigital.com] for more info.

      Jesse
    • On functions subject to buffer overrun problems

      How is this determination made? Is it just looking for functions that make certain calls, or what?

      Seems to me all the attacker has to do is figure out how to spoof the security cookie. What prevents this?

    • So what you mean is that /GS behaves the way StackGuard does. (calling the cookie a canary is not a practice Microsoft initiated).

      If you read the phrack article linked to in the story, they discuss situations where this manner of buffer overrun protection is insufficient. True, most exploits out there today do use straight overruns onto the return address, but that's only because they can.

      That being said, I imagine that the conditions described in the phrack article for getting a manipulable pointer are less common than the authors would like to think.
  • Heh (Score:1, Funny)

    by Anonymous Coward
    When I saw the headline, for some reason I read "Smashing Pocket Protector". I was beginning to think that this really is "News for nerds". :P
  • by funkhauser ( 537592 ) <zmmay2@u[ ]edu ['ky.' in gap]> on Sunday August 04, 2002 @05:39PM (#4009291) Homepage Journal
    I really don't think the name is the problem. I mean, gcc gets by fine with a rather non-descript name.

    It seems like it would be difficult to get a whole lot of developers to move over to this at once, so perhaps that's why it's not catching on? If one major group of developers (Red Hat, Debian, whoever) started using this patch, perhaps their influence could sway others? It's not like the world of M$ where the necessary constant upgrading forces users to switch to technologies that Microsoft thinks are important. (Although buffer overflows are a very important issue, I'm just commenting on the fact that it's Microsoft that pushes the technologies, not the needs of the developers.)

  • by maeglin ( 23145 ) on Sunday August 04, 2002 @06:19PM (#4009408)
    The reason stack protection stuff isn't being widely used isn't because it's got an obscure name or something simple like that. It's because not everyone can agree whether it's effective or just lures people into a false sense of security. There have been a couple of "discussions" of this on the Linux Kernel Mailing List and the end result is always a stalemate [zork.net].

    dan
    • Why does the Linux kernel set the exec flag for stack pages? I don't see any reason why it should. If the program needs to load code, then it can just use one of the lower level calls to allocate the memory as executable. Pages used only for storing data should not be executable. I think I'll try to find this patch...

      • It's a matter of principle. It's the kernel's job to protect programs from each other, not from themselves. To remove execute permission from the stack would be the start of a slippery slope, whereby the kernel starts to coddle buggy programs, resulting in a false sense of security.

        The problem, of course, occurs when a program trusted with security responsibilities contains a bug. In such cases, the answer is to fix the bug. This is not, and should not be, the kernel's responsibility.

        Having said that, if you want to apply the non-executable stack patch because you don't trust your "trusted" programs, go ahead. :-)

      • It was thought to be the most efficient way to implement some C++ constructs (I don't remember which).

        Some other (Lisp?) compilers use similar tricks.
      • Why does the Linux kernel set the exec flag for stack pages?

        Executing code on the stack isn't unheard of in legitimate programs--it's sometimes used for performance reasons and sometimes to simplify implementation. Usually it's done in cases where the program's control flow is somewhat complicated:

        * Linux puts signal handlers on the stack; they need to be executable.
        * Kaffe and other VMs put code on the stack for efficiency reasons.
        * Many functional programming languages write code on the stack for performance reasons.
        * Some garbage collectors write code on the stack.
        * Some user-space threading libraries put code on the stack.

        I'm sure there are others. I know Solar Designer's noexec stack patch had some workarounds for gcc's trampolines; I'm not sure if they worked with everything or not.

        Sumner

  • It is much easier to convince someone to use a specific flag than to install a third-party patch. The latter takes more time, and requires you to trust more people.
