Data Execution Protection (254 comments)

esarjeant writes "In addition to a number of other security features, anti-virus vendors are starting to push buffer overflow detection. This will be part of Microsoft's future direction with Data Execution Prevention (DEP), and it is already integrated with McAfee 8.0i. So it looks like everyone is going to upgrade all of their software again; will software vendors be able to keep up with the support calls?"
  • by King Of Chat ( 469438 ) <fecking_address@hotmail.com> on Monday February 28, 2005 @11:12AM (#11803032) Homepage Journal
    Who buys viruses?
  • virus vendors... ???
  • support calls (Score:5, Interesting)

    by millahtime ( 710421 ) on Monday February 28, 2005 @11:13AM (#11803039) Homepage Journal
    So it looks like everyone is going to upgrade all of their software again; will software vendors be able to keep up with the support calls?

    Yes, with more automation, more people on the other end (most likely in India), and more cost passed on to the customer. Where I used to work, we had a saying: "If it weren't for Microsoft, we would all be out of jobs."
    • I definitely envy my Linux/Mac IT friends, who come in at noon and leave at 2! I do remember that time back in 1994 when there was that one bug in the Linux kernel which required a patch.
  • by 2020hindsight ( 581464 ) on Monday February 28, 2005 @11:13AM (#11803047)
    Virus vendors have been pushing buffer overflows for quite some time ...
  • by kevb ( 816796 ) on Monday February 28, 2005 @11:14AM (#11803059)
    Virus vendors... hmmm. For just £39.95 a month, you too can receive the latest virii, trojans and worms directly to your inbox.
  • by Anonymous Coward on Monday February 28, 2005 @11:15AM (#11803070)
    I'm just a microcontroller guy, but can't the PC guys check their goddamn counters and pointers when using buffers? And why the hell do we still need to code buffers? Isn't there a library or a call to handle buffers in a safe way?
    • by ThosLives ( 686517 ) on Monday February 28, 2005 @11:28AM (#11803224) Journal
      I don't even think it's due to not checking pointers and NX bits or anything like that. The problem is the way in which our modern OSs map out the memory. Intel chips have the capability to map segments to be either code or data, and the chip will generate a fault if you try to execute anything in a data segment (inherent NX capability). This is part of the segment descriptors used in all programs. The problem is that, as far as I can tell, Windows maps both the code and data segments to the same logical addresses! This is kind of foolish; it should be possible to simply map these two segments to different areas in a way that is completely transparent to the application. As long as applications are well-behaved and don't have segment overrides all over the place, this should be just fine. Then, when you try to jump to an address that's in the stack, the processor will trip a general protection fault (because the stack must be in a segment defined as data -- well, stack, to be precise).

      Basically this is just laziness in the Windows architecture that overlaps the code and data segments. Separate these and the problem is solved with no new hardware, minimal application rework, and the like.

      Incidentally, my perusal of the setup routines in Linux (well, it was version 1.0, so I don't know if this is still the case) show that it also maps code and data to the same actual addresses, which makes it vulnerable as well.

      Sure, you can use "smart" languages and NX bits and stuff like that, but it's all assembly at some level, and the processor manufacturers actually built in sufficient protection decades ago when they came up with segmented memory. (The PowerPC architecture can also distinguish between code and non-code.)

      I am always amused that the memory management community hasn't nipped this one in the bud, given that the tools to fix it have existed for ages.

      • Another flaw of modern software is race conditions when working with multiple threads.

        Let's say you have a global pointer to an object. In thread A, you are deleting an instance of the object; then the OS switches threads in the middle of this operation, and thread B goes and tries to access information from the same object. This is known as a race condition.

        Basically, which one gets there first, and how much damage will it do? The more you start working with shared memory and threads, the more code prote
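        A minimal sketch of that scenario in C with POSIX threads (the mutex shown is the fix; remove the lock/unlock calls and you have exactly the race described -- names here are illustrative, not from any particular program):

        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>

        static int *shared;   /* global pointer both threads touch */
        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

        void *thread_a(void *arg) {        /* "thread A": deletes the object */
            pthread_mutex_lock(&lock);
            free(shared);
            shared = NULL;
            pthread_mutex_unlock(&lock);
            return NULL;
        }

        void *thread_b(void *arg) {        /* "thread B": tries to read it */
            pthread_mutex_lock(&lock);
            if (shared)                    /* without the lock, this check can pass... */
                printf("%d\n", *shared);   /* ...and this read then hits freed memory */
            pthread_mutex_unlock(&lock);
            return NULL;
        }

        int main(void) {
            pthread_t a, b;
            shared = malloc(sizeof *shared);
            *shared = 42;
            pthread_create(&a, NULL, thread_a, NULL);
            pthread_create(&b, NULL, thread_b, NULL);
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            return 0;
        }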
      • Stuffing a buffer past its end is poor coding no matter how you slice it.
      • It's an intel flaw (Score:3, Informative)

        by spitzak ( 4019 )
        The problem you are describing is not with Windows or Linux. What you are describing is in fact exactly the lack of an "NX" bit. The Intel processors could not make memory readable but not executable; thus, if you wanted to read the data on your stack, it was also possible to jump to it and execute it. That Windows and Linux were unable to fix this problem is not their fault.

        Possibly you are confused by 80286 segments, which could make memory readable without being executable (because you could
      • by Anonymous Coward
        You obviously don't remember how painful it was to program for 16-bit Windows. Segments and thunks and far pointers and all that other bullshit.

        Windows uses a 4GB flat address space, the same memory model used by Linux and all other modern OSes. Segmentation, though supported by the hardware, is (1) inefficient and (2) more difficult to program for. Even CPU vendors realized it was bad technology and are moving away from it. Example: the new AMD64 chips support all the segmentation crap in compatibilit
      • This description of the problem is flat-out wrong.

        Nobody uses segments any more. Win32 programming uses a flat 32-bit address space.

        The problem stems from the fact that, under the Intel architecture, procedure-local variables are allocated on the stack right next to the return address pointer. If a lazy programmer allocates a 256-byte buffer and does a strcpy() that doesn't have a null within the first 256 bytes, strcpy() will keep copying data until it hits a null character, clobbering the return addre
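        A minimal sketch of that pattern (hypothetical code; compiled without any stack protection, it shows the classic vulnerable shape):

        #include <string.h>

        void vulnerable(const char *input) {
            char buffer[256];        /* 256 bytes reserved on the stack */
            strcpy(buffer, input);   /* copies until it hits a NUL byte; input
                                        longer than 255 chars runs off the end,
                                        toward the saved return address */
        }

        int main(int argc, char **argv) {
            if (argc > 1)
                vulnerable(argv[1]);
            return 0;
        }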
    • No, the "PC guys" cannot guarantee that every single allocation and buffer access in a 100,000 line program is done correctly. Hell, the "PC guys" find it very hard just to make the program work at all.

      I'm one of said guys, so I'm well and truly familiar with this pain.

      Part of it is culture - features and fast development above correctness. A project I'm involved with now is a horrifying mass of spaghetti that barely works at all, yet they're still adding new features with little focus on cleanup. There i
    • can't the PC guys check their goddamn counters and pointers when using buffers?

      We try our best, but we're humans. We make mistakes.

      And why the hell do we still need to code buffers? Isn't there a library or a call to handle buffers in a safe way?

      Yes. In fact, most modern languages like Java and C# handle memory for us; no more deletes necessary, and buffer overflows, while not impossible, are much less likely to happen with higher-level languages.
    • by codegen ( 103601 ) on Monday February 28, 2005 @01:25PM (#11804580) Journal
      Part of the problem is the reliance on languages which are overly permissive. There was a whole class of languages developed in the '80s and '90s, such as Euclid, Turing (both from U of T), and Modula, which were much more strongly checked. Indeed, the semantics of the languages allowed many of the runtime checks to be statically eliminated. See the papers "Proof Rules for the Programming Language Euclid", R.L. London et al., Acta Informatica, and "On Legality Assertions in Euclid", D.B. Wortman, IEEE Transactions on Software Engineering.

      C and C++ put the reliance on the programmer to check the rules, under the assumption that compiler-provided checks are too expensive. They are only too expensive if you assume the everything-is-a-pointer model that underlies these languages. Java and C# gain some safety since they do not allow arbitrary pointers but, in my opinion, have still inherited too much from the parent languages.

      Part of the problem is the everything-looks-like-a-nail approach. There are some wonderful languages out there that are much more appropriate for many of these tasks, and have syntax and semantics that make many of the security problems much easier to solve. However, they are not the "mainstream" languages and as such do not get the developer attention.

  • "Will software vendors be able to keep up with the support calls?" No. Customers are going to have to wait on hold for... Oh... Nevermind.
  • by hardcoredreamer ( 551324 ) on Monday February 28, 2005 @11:15AM (#11803073) Homepage
    "So it looks like everyone is going to upgrade all of their software again, will software vendors be able to keep up with the support calls" I will be optimistic that despite the development into a new direction, and the occasional headaches, things will be better in the future. That said, why are people so negative about change? So Microsoft's SP2 broke some programs, at least they finally released it. So we have more than 640K of memory and you had to use a memory manager, at least we got past conventional memory. So at least in theory, there will be less buffer under runs in patched/upgraded systems. Would you prefer they didn't try?
  • by lecithin ( 745575 ) on Monday February 28, 2005 @11:15AM (#11803074)
    I hate to ask, but as a person that has never been into 'code', I have never understood what a buffer overflow was.

    I am asking as a person that isn't a programmer but understands the concepts that go behind the smoke and mirrors.

    • by Anonymous Coward
      Wikipedia to the rescue [wikipedia.org]
    • It's usually where you've assumed that user input or decoded data won't exceed a certain length; if the user deliberately enters too much data, they can scribble over the call stack and, e.g., change the function return pointer and take control of the program. See Wikipedia [wikipedia.org].
    • by alc6379 ( 832389 ) on Monday February 28, 2005 @11:23AM (#11803154)
      This is the way I understand it, and I'm not really a programmer. So, I know someone's going to clarify or refute:

      You have some memory allocated for some type of variable, or something. That's called a buffer, and it's usually a certain number of bytes "big". There's a function in your program that puts a value into that variable. If you can feed more data into the buffer than it can handle, you can have a buffer overflow.

      The reason why this is dangerous is because that data "spills" into another portion of the memory, which could already be occupied by anything from more data, to executable code. In the latter case, if you've overwritten executable code, you can replace that code with your own executable code, and do all kinds of nasty things that the original program wasn't intended to do.

      ...And again, this is from one layman to another-- that's how I understand it.

      • If you're not a techie that's a very concise, understandable, and accurate answer. I'd mod if I had the points
      • Yep, pretty good explanation. It's pretty hard to catch all of them if you are using a language susceptible to this problem.

        When I was subscribed to Bugtraq, I read about people who had found a security problem in a simple game written in C included in many Linux distros. The overflow? The second player's name. :-)

      • by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Monday February 28, 2005 @12:18PM (#11803711) Homepage Journal
        You got it right, except that overwriting other data can be just as bad as overwriting executable code:
        char buffer[100];                 /* 100 bytes on the stack */
        int dataHasBeenVirusChecked = 0;  /* flag sitting near the buffer */
        gets(buffer);                     /* reads a line with no bounds check */
        if (dataHasBeenVirusChecked) { sendAsEmailAttachment(buffer); }

        In this case, if "buffer" gets overfilled just so, then the program may incorrectly believe that the data it contains is safe to operate on even though it might not be. Remember, folks, there are other ways to exploit an overflowable buffer than the standard "write executable code to the stack and jump to it" method.



    • Great explanation of buffer overflows here [helpbytes.co.uk]

      • See, I appreciate this explanation, and the one below which reframes the explanation as occurring on the stack. These are the explanations I've always understood. And which, frankly, didn't fully cut it with me.

        *Many* moons ago, I took an OS writing course from Intel, on the 80286. The way I was taught, a buffer overflow is something that would not have been possible in the processor architecture. There were code segments, and data segments. If ever the twain should overlap, processor exceptions occur, whet
        • I guess, as one who doesn't try to write malware, the very idea of these overflow explanations seems so unlikely that even if I wanted to write such programs, I wouldn't consider buffer or stack overflow as an idea.

          Dude, you're making it sound like it's a matter of faith whether stack/heap overflows can be done at all. :-)
          No one said it's easy or quickly done to write a working exploit. It takes time to find the vulnerabilities, and still more time to write code exploiting them.

          Add to a

        • Look up the "Solar Designer" patch for Linux, included in some versions of secure Linux. It uses a segment to address the stack, and manages to execute-protect it, even though the processor is running in page mode and does not have an execute permission bit in its page-table implementation. IMO Microsoft could use this and does not. But Linus has objected to it as ugly and insufficient and won't accept it into his own sources.

          The basic problem is that Intel didn't include an execute protection bit in the i

    • by goombah99 ( 560566 ) on Monday February 28, 2005 @11:31AM (#11803258)
      The most common form is as follows. When a subroutine is called, the return address is placed on the stack. Then all the local variables for the subroutine are placed on the stack. The subroutine runs, and when it finishes it jumps to the return address on the stack. However, if the subroutine writes data into an array or string on the stack and tries to push more data into the string than space was allocated, it continues writing past the end of the array and eventually overwrites the return address. This gives a virus writer a way to substitute a new return address. If this return address happens to jump right back onto the string itself, then in principle the data string will now be executed as code.

      Partial remedial solutions include commands that prevent declared data from being executed, having the return address stored on a different stack from the data stack, explicitly testing the stack integrity before executing a return from a subroutine (see the sketch below), and putting up "electric fences" -- basically buffer regions around every memory allocation that are not owned by the application requesting space.
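      As a sketch of the "test the stack integrity before returning" idea, here is a toy shadow stack in C (ENTER/LEAVE are made-up macros; real implementations live in the compiler or the hardware):

      #include <assert.h>
      #include <stdio.h>

      static void *shadow[1024];   /* a second stack holding only return addresses */
      static int top = 0;

      /* __builtin_return_address(0) is a GCC/Clang extension */
      #define ENTER() (shadow[top++] = __builtin_return_address(0))
      #define LEAVE() assert(shadow[--top] == __builtin_return_address(0))

      void subroutine(void) {
          ENTER();           /* save a private copy of the return address */
          char buf[64];
          (void) buf;        /* ... work that might smash buf ... */
          LEAVE();           /* abort if the copy no longer matches */
      }

      int main(void) {
          subroutine();
          puts("return address intact");
          return 0;
      }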

      • You were at your buddy's place to watch the Super Bowl. The chips ran out in the second quarter. Since you knew there was no way the half time show was going to be as interesting this year, you volunteered to go get more.

        On your way out you made a mental note to come back to your buddy's place, rather than your own. This is the return address. You also made a mental note that you needed potato chips and another case of beer. That list is in your buffer.

        Your other "friend", a known sponge who still ow
      • Words or phrases you used that a non-programmer probably does not understand well:

        * Subroutine
        * Return Address
        * Stack
        * Local Variables
        * Jumps
        * Array
        * String
        * Push
        * Allocated

        Thank you and goodnight.
      • It seems really silly and dangerous to mix code and data stacks. Why is it so common?

        Maybe it will slow down CPUs, but I think that if a CPU knows that a stack will ONLY ever contain return addresses and another stack only contains data there can be a fair number of optimizations.

        If you want to really be paranoid, have 3 stacks. One stack for code (return addresses), one stack for data (variables), and one stack for metadata - e.g. each entry could store the end location of the data (e.g. the data stack p
    • by nudicle ( 652327 ) on Monday February 28, 2005 @12:02PM (#11803558)
      Quite a good writeup of stack buffer overflows can be found here [insecure.org].
  • by TripMaster Monkey ( 862126 ) on Monday February 28, 2005 @11:16AM (#11803084)
    This will be part of Microsoft's future direction with Data Execution Prevention (DEP)


    I feel safer already.

  • by redelm ( 54142 ) on Monday February 28, 2005 @11:18AM (#11803099) Homepage
    Malware doesn't need to bring in code, there's plenty of code in the target executable. All it needs is to be able to grab control via the return address on the stack. Then fill the stack with exploit data and set the return addr to something like an exec() syscall.

  • by wschalle ( 790478 ) on Monday February 28, 2005 @11:18AM (#11803100)
    Cisco Systems' CSA product does this and more.
  • Looks like... (Score:4, Interesting)

    by eno2001 ( 527078 ) on Monday February 28, 2005 @11:18AM (#11803103) Homepage Journal
    Microsoft and Intel are finally catching up to where DEC was back in 1992. DEC Alpha + OpenVMS = no such thing as a buffer overflow and 64 bit processing as well. Whatever happened to the future again? ;P
  • Not a silver bullet (Score:5, Informative)

    by TwistedSquare ( 650445 ) on Monday February 28, 2005 @11:18AM (#11803106) Homepage
    DEP will not prevent all buffer overflow attacks. It is intended to protect against the attack where the return address on the stack is overwritten to make the program jump into the stack. However, the program could still jump into a useful portion of existing code, or simply crash, or keep running but overflow a flag variable on the stack that will cause odd behaviour. It can also break things like JIT/HotSpot compilation. I'm not saying it's not useful at all, but it is one of many measures that each help a little.
    • No,

      Visual C++.NET 2003 has a compile switch that makes your app check the return address, so that is nothing new. DEP interacts with the NX bit on the CPU to stop data-only memory from being executed, which should prevent a lot of buffer overflows.
  • If they are a small user (grandmom), they will box the damn thing up and go back to reading the paper and playing bingo at the VFW.

    Eventually, someone will create a ROM-booted web appliance with Flash and PDF capability built in that they will feel comfortable using, that will work for 10 years without an upgrade, and that is immune to viruses because when you turn it off, everything is wiped. Their "desktop" will be on Google or somewhere external to their own system.
    • That's been done. It was called the "I-Opener" [fastolfe.net]. Worked fine. Booted from non-volatile memory. Ran QNX. Microsoft killed it by making IE incompatible and taking over the browser market.

      You could redo the I-Opener today. In fact, you could even get the latest version of QNX with the new embedded browser and load it into an I-Opener.

      Something like that should be in every hotel room, where you really want a stateless client machine.

  • by nurb432 ( 527695 ) on Monday February 28, 2005 @11:23AM (#11803157) Homepage Journal
    The basic architecture is fundamentally flawed in today's 'consumer grade' computers. Using a strict Harvard architecture, where data is *separate* from code, would eliminate a lot of today's troubles.

    Is it too late to change? Well, we have had new chips arise (like POWER, or Cell), so it's not impossible, just difficult.
    • That's exactly what's happening. When a buffer overflow is executed, the processor throws up a red flag saying: this is suspicious, you're trying to execute code sitting in the data cache instead of the instruction cache.
    • Using an executable bit for memory pages, you have no executable problem. If you use a strict Harvard architecture, then you lose the huge flexibility that comes with using one memory. What if you run out of data memory but have huge amounts of code memory left? Someone will inevitably create a swapping module to move memory from one type to the other, nullifying any benefit from a Harvard architecture. Modern processors do split their L1 cache into code and data anyway, already taking advantage of the ext
    • Sure, you can have your code and data in separate spaces if you want. Atmel will sell you a very nice computer that works that way. Of course, you need to reprogram it externally if you want to run something different on it. The notion of running different programs without something external replacing the program memory doesn't work on a Harvard architecture.

      And a Harvard architecture doesn't help, anyway, if your program contains routines that an attacker would like to run with chosen data, because the st
  • by weave ( 48069 ) * on Monday February 28, 2005 @11:23AM (#11803160) Journal
    I just got done reading an interesting article [redhat.com] about SELinux. I'm just curious as to the strengths and weaknesses of each approach.

    The SELinux approach sounds to me like a far better way to approach this, actually controlling the permissions of a process with some high degree of precision, down to what files it can use and what other processes it can invoke.

    Anyone learned in this stuff care to give a non-flamed opinion of the two approaches' strengths and weaknesses? Also, do or will the newer Linux kernels do anything similar regarding stack protection?

    • Such functions have been available for years on Windows and Unix, such as the Cisco Security Agent (formerly Okena). When properly configured, you can run a Windows system without applying a patch for a whole year and not get exploited (though it is very hard to set up).

      The problem is that administering these permissions is a real pain in the ass. It's not simple at all, and is usually more of a hassle than it's worth. Different versions of the same product may require different rule sets. Even a simple, small
    • The two approaches are orthogonal and can be combined. For example, Red Hat FC3 has both SELinux and ExecShield (which includes library address randomisation and W^X-style checks like what MS calls "data execution protection").
    • by kbielefe ( 606566 ) <karl@bielefeldt.gmail@com> on Monday February 28, 2005 @01:02PM (#11804266)
      This [gentoo.org] is a good introduction to the main solutions to software exploits in Linux and the different kinds of protection they provide and why.

      Most people recommend a combined approach including mandatory access control, chroot jails for services on the internet, stack smash protection, address space layout randomization, non-executable memory pages, firewalls, virus and spyware scanning, intrusion detection, regular vulnerability patching, and user education (did I leave anything out?). No one will tell you that you are safe after implementing just one of these solutions, but the more you do implement, the more secure your system will be.

      All of the above have been available on Linux for some time, but are not implemented by default in any popular distribution that I am aware of, which is a shame because I believe it is only a matter of time before someone writes a really nasty worm for Linux. Most Linux users I know seem to believe they are safe with only regular patching and a firewall.

      Gentoo is the best distro I have found for implementing these security measures and tries to build them in as an option wherever possible. Gentoo has great documentation on security and is all about custom configuration and compiling. Since some of the above solutions require special compiler technologies, Gentoo is a perfect fit.

      Each of those solutions takes a certain amount of effort to implement and will break certain existing applications in different ways. Basically, Microsoft is taking the next step and implementing the least disruptive and easiest solution that will provide some protection for all software running on the system. They should probably also compile their own software with stack smash protection and make address space layout randomization available as a next step.

  • by the_skywise ( 189793 ) on Monday February 28, 2005 @11:24AM (#11803176)

    "Hey, my 3ghz computer is running as slow as a Pentium 1.5ghz... Why is that?"
    "Oh that's all the new virus checking that runs the executables before they run to make sure they don't have any viruses in them."

    So y'see... Viruses ARE good for the industry!
    • You're kidding, but I bet that AV software has led to more money spent/wasted on the "customer" side than all the viruses combined. Still, it does not protect you (I mean, your average PC user) from the majority of threats.
    • "Hey, my 3ghz computer is running as slow as a Pentium 1.5ghz... Why is that?"
      It's because you bought a computer from Dell, Compaq, IBM or any number of vendors who bundle vast amounts of memory-resident crap on their already-crappy bargain-basement hardware with half the RAM it should have.
  • by hkb ( 777908 ) on Monday February 28, 2005 @11:25AM (#11803184)
    It was included with Windows XP SP2. It's also in the soon-to-be-released SP1 for Windows Server 2003.

    It appears that if the hardware doesn't support DEP, it will enable some sort of software DEP, instead.

    W2K3 SP1 also includes a new, XPSP2-like firewall interface with some nice logging and an easy-to-use rules interface. There's also the new Security Configuration Wizard, which seems to do a pretty damned good job of really locking down 2003 for those that need it.
  • by Doc Ruby ( 173196 ) on Monday February 28, 2005 @11:26AM (#11803207) Homepage Journal
    Compilers should store data in separate protected memory segments, never embedded inside code, where overwrites can change adjacent instructions. JMPs to data segments should issue compiler warnings, and execution past original allocations should set a flag, at least in the VM. The compiler and existing VM can protect from most overflows, and they are a centralized link in the chain that can guard the software from every programmer, no matter how naive. If CPU vendors want to get on the bandwagon, they can offer an interrupt triggered by such boundary transgressions. Making buffer execution the exception, requiring handling, rather than the default, is a coding model better suited to the compilers that translate our directions into instructions for the CPU.
    • by bani ( 467531 )
      Compilers already do this.

      The problem is not JMPs to data segments. The problem is largely executable stacks, which is exactly what stack smashing is about.

      The other problem is that executable stacks are required for some legitimate compiler features such as trampolines.

      The real solutions are:
      _complete_ separation of code and data segments.
      code is _never_ writable, under any circumstances.
      data is _never_ executable, under any circumstances.
      no executable stack.
      no more mprotect().

      This will solve the arbi
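      To make the "data is never executable" rule concrete, here is a sketch of the kind of request such a policy denies (assumes Linux/BSD mmap; a stock kernel grants it, while a strict W^X or PaX kernel refuses it):

      #include <stdio.h>
      #include <sys/mman.h>

      int main(void) {
          /* ask the kernel for a page that is writable AND executable at once */
          void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (p == MAP_FAILED)
              perror("mmap");   /* strict W^X enforcement lands here */
          else
              printf("got a writable+executable page at %p\n", p);
          return 0;
      }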
  • by Aslan72 ( 647654 ) <psjuvin.ilstu@edu> on Monday February 28, 2005 @11:29AM (#11803234)
    The huge problem with McAfee 8.0i has been figuring out a policy that protects against buffer overruns and keeps your developers happy; I've had to loosen the restrictions for those folks, because as you put together stuff in Visual Studio and attempt to debug it, McAfee's buffer overrun protection flags it and doesn't allow it to run :(.

    --pete
  • by billstewart ( 78916 ) on Monday February 28, 2005 @11:45AM (#11803411) Journal
    C is one of the best languages out there for many things, but nobody should still be using it, because there are too many people who are careless about subtle things and shoot themselves in the foot with it. Yes, if you're writing device drivers, C is probably still the language of choice, but the number of people who do that is pretty limited, and they can run lint and double-check their code to make sure they don't get overrun errors. C++ isn't much better - you _can_ write code using constructs that don't get buffer overflows, but you don't have to (if anything, the nicest thing about C++ is being able to fall back to C when you need it), so a random C++ program is no more trustable than a random C program. It's not the 20th century any more - stop doing dangerous things!

    (And yes, I still write C/C++ when I need it, but that's laziness after 25 years of habitual use, and usually I use shell when I need to program :-)

    • ... users didn't have to be members of the Administrator group. Then the system files would be somewhat more protected in that the user wouldn't have write privileges. I'm not saying the issue goes away entirely... just that unless what you are running requires some kind of administrator/superuser privileges, you can contain damage to the process at hand.

      You are quite right, however, in that buffer overflow is a result of careless programming. Making assumptions about length of strings is fine if you're

    • by Anonymous Coward
      With respect, this is complete rubbish.

      You can't blame a programming language for sloppy coding. Sloppy code is sloppy code and it makes no difference what language you use, if you're a crap coder then you are going to have problems.

      And stating that "C is probably still the language of choice, but the number of people who do that is pretty limited" is just plain wrong. Back in the REAL world, C is used more now than it has ever been.

      And besides, what do you suggest you write an OS in? Perl? TCL? Vusua
  • A Tough Transition (Score:3, Interesting)

    by cyngus ( 753668 ) on Monday February 28, 2005 @12:08PM (#11803612)
    Having recently gotten my hands on a Windows XP box with a P4 that supported the NX bit, I thought I'd turn it on. Good idea, right? Yeah, great idea, if you don't want to use half your applications. The NX bit stayed on for about five minutes. I wonder how many of Microsoft's apps will actually work with this protection turned on.
    • I haven't used a CPU that supported NX natively. My only experience with DEP is on an IBM T21 laptop with XP SP 2. I have had no problems with IE, Office 2003 (Outlook, Excel, Word, PowerPoint), Firefox, Symantec Antivirus, GAIM, Perl scripts (mostly my own), several Java apps, and several other third party applications written in Delphi, C/C++, etc.

      I even enabled DEP for all programs and services, not the default "essential" ones.

  • by mccrew ( 62494 ) on Monday February 28, 2005 @12:09PM (#11803622)
    Sounds like folks are reinventing Avaya Labs' incredibly useful Libsafe [avayalabs.com]. This is a library that you can set up to preload before libc, either on a process-by-process or system-wide basis; it defines its own versions of functions (strcpy et al.) to override those in the standard C library. It is able to detect many stack smashing attacks. When a stack smashing attack is detected, the offending process is terminated and the administrator is sent an e-mail with copious technical detail.

    I am surprised that major distributions have not picked up and run with this great tool. One of the first things I do on any new machine is to ensure that all internet-facing services are being run with libsafe preloaded.
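    For the curious, the general shape of such a preloaded interposer looks something like this (a toy sketch, not Libsafe's actual code -- Libsafe's real check bounds the copy by the current stack frame rather than by a fixed length):

    /* guard.c -- build: gcc -shared -fPIC -o guard.so guard.c -ldl
       use:   LD_PRELOAD=./guard.so ./someprogram                   */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    char *strcpy(char *dst, const char *src) {
        static char *(*real)(char *, const char *);
        if (!real)   /* look up the libc strcpy we are overriding */
            real = (char *(*)(char *, const char *)) dlsym(RTLD_NEXT, "strcpy");
        /* toy policy: refuse absurdly long copies outright */
        if (strlen(src) > 4096) {
            fprintf(stderr, "strcpy: oversized copy blocked\n");
            abort();
        }
        return real(dst, src);
    }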

  • by ajs318 ( 655362 ) <sd_resp2NO@SPAMearthshod.co.uk> on Monday February 28, 2005 @12:10PM (#11803634)
    Sorry, but the whole "No Execute" thang is aceite de serpiente {snake oil}, as they say in Madrid. Even the much-vaunted {by people who don't understand it, anyway} Harvard architecture {i.e. using separate buses for data and instructions, thereby breaking the von Neumann principle totally} doesn't work. If the computer can make some kind of decision based on the content of memory location x, then this is tantamount to x being an executable location.

    Now, if you had a "Take no action whatsoever based on the content of this location, in fact, whenever you are asked even to read it, always return the same value" flag -- that might prevent the execution of unwanted code. Chances are your system would also be computationally incomplete.

    As it stands, NX is trivially defeated by persuading the user to install a simple piece of code -- effectively an emulator.

    Basically, NX is answering the wrong question. The question that needs to be asked is "How can we best persuade users not to run arbitrary code when they don't know what the hell it does?" My own answer would be for every processor to have its own, unique instruction set; so only code compiled for that one particular individual processor would ever run on it. {Obviously you'd have to have a compatibility mode for bootstrapping, so you could compile the compiler to compile the unique-ified software; but this would have to be accessed by some deliberate hardware action that no software could get around.} I'm sure that is not impossible; but I'm not sure that it's feasible as long as the likes of Microsoft want to do things their way.
  • DEP has nothing (Score:3, Interesting)

    by bluefoxlucid ( 723572 ) on Monday February 28, 2005 @12:13PM (#11803663) Homepage Journal

    DEP actually can be evaded [blogspot.com] because it supplies no ASLR. If the attacker can reasonably know where some data exists in memory -- particularly his exploit, and msvcrt.dll for memcpy() and VirtualAlloc() -- he can basically switch DEP off during an attack. Believe it or not, this is pretty easy if everything is in the same place on every program run.

    Fortunately, in Linux we have PaX, which supplies much better protection than W^X, Exec Shield, or DEP, with "competitive" (i.e. comparable, potentially lower; it can actually viably compete) compatibility. Red Hat has of course convinced the GCC devs to make GCC mark everything as having an executable stack if the compiler is at all unsure that it can operate without one; but PaX ignores that and still only "breaks" a few packages (and nVidia's glx).

    PaX, GrSecurity, IBM's SSP (ProPolice), and PIE executable binaries should pave the way forward on Linux; but people are trying so hard to avoid them. It's not even much work to maintain a distro using those.

    DEP is basically like vanilla Linux on AMD64.

    • From the linked blog post:

      This could only be properly protected against by incorporating Address Space Layout Randomization into the protection scheme.

      I don't believe that. Using a canary would stop the attack discussed in that post (which is an attack strategy that is already well known).

      MS Visual C++ has offered the option of canary protection for some time (even if they did not use Cowan's name for it). I would have expected that SP2 involved recompiling most/all code with the check prior to a

  • Better idea? (Score:3, Informative)

    by CODiNE ( 27417 ) on Monday February 28, 2005 @12:18PM (#11803718) Homepage
    I'm kind of a jr programmer and here's the idea I had. Could be done by the compiler and is probably already out there in some form.

    Character arrays get an extra byte stuck on the end of them. When the compiler sees one being passed to an unsafe method or some sort of strcpy, it puts a random value into that byte and rechecks it after the call. There is no way for the buffer overflow code to know what the value was, and when it has changed, the program is immediately killed. Then again, your overflows still have a 1-in-256 chance of working. ;-)

    So is this already being done somewhere or is there any reason why this just wouldn't work?

    Seems to me OSS along with GCC has the potential to fix overflow problems a LOT more easily than a commercial OS vendor could.

    -Don.
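    A sketch of the idea described above (hand-rolled, with made-up names; the ProPolice/SSP work mentioned elsewhere in this thread does a per-frame version of this inside the compiler):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    int main(int argc, char **argv) {
        char buf[16 + 1];                 /* one extra byte for the guard */
        unsigned char canary;

        srand((unsigned) time(NULL));
        canary = (unsigned char) rand();
        buf[16] = (char) canary;          /* plant the random guard byte */

        if (argc > 1)
            strcpy(buf, argv[1]);         /* the unchecked, "unsafe" copy */

        if ((unsigned char) buf[16] != canary) {   /* guard byte changed? */
            fprintf(stderr, "buffer overflow detected, aborting\n");
            abort();
        }
        return 0;
    }

    And as noted, an overflow that happens to rewrite the guard byte with the right value still has a 1-in-256 chance of slipping through.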
    • This is already done on many systems. It is usually done for all calls, not just those believed unsafe. Of course, for heap memory the compiler is unlikely to know the size of the buffer at the point of potential overflow, so you would get delayed errors if you tried to check all buffers.

      I just use IBM Rational Purify to build a version of my code, chuck random crap at the purified version and fix all the problems it finds. It's relatively expensive, but I think it's worth more than it costs.

      Ph
  • by Tom ( 822 )
    The article linked doesn't say much about this "breakthrough technology", but from what I could gather, it looks rather like a cheap (and incomplete!) knock-off of OpenBSD's W^X (write-xor-execute). Anyone know more technical details?
  • by SunFan ( 845761 ) on Monday February 28, 2005 @12:58PM (#11804195)

    I've had stack protection for quite some time with Solaris and OpenBSD. The Windows platform is a few years late to the party; doesn't Microsoft realize how much easier their life would be if they had acted earlier?

    Companies sticking with Windows are like a person who persists in wearing worn-out shoes. They're uncomfortable, they cause blisters, they don't keep water out, yet they keep them because going barefoot is worse, I guess. The software industry still has a lot of growing up to do.

  • A fitting name. In German, a Depp is another word for stupid idiot.
  • Early versions of Unix (circa the early '80s) running on PDP-11s split application memory into instruction and data space. Instruction space held the program's instructions; data space included static, stack, and heap storage, and was not executable. So a buffer overrun could result in a core dump (GPF) but not a security violation. Please keep this in mind when MS is issued a patent on their novel DEP scheme.
