
GCC Compiler Finally Supplanted by PCC?

Sunnz writes "The leaner, lighter, faster, and, most importantly, BSD-licensed compiler PCC has been imported into OpenBSD's CVS and NetBSD's pkgsrc. The compiler is based on the original Portable C Compiler written by S. C. Johnson in the late '70s. Even though much of the compiler has been rewritten, some of the basics still remain. It is currently not bug-free, but it compiles on the x86 platform, and work is being done to have it take over GCC's job."
  • by RLiegh ( 247921 ) on Monday September 17, 2007 @12:06PM (#20637571) Homepage Journal
    I notice that TFS doesn't say that anyone is actually able to compile anything (other than PCC) with it. The BSD folks would love to have a BSD-licensed drop-in replacement for GCC; but it doesn't sound like this is it. Not yet at least.

    Wake me up when you're able to use PCC instead of GCC to do a 'make world' (or ./build.sh or whatever).
  • by Sigismundo ( 192183 ) on Monday September 17, 2007 @12:13PM (#20637701)

    Indeed, the linked article says that PCC is 5-10 times faster than GCC, but currently performs only one optimization... What use is speed of compilation if the binaries produced are slower?

  • That's dumb. (Score:5, Interesting)

    by imbaczek ( 690596 ) <(mf.atzcop) (ta) (kezcabmi)> on Monday September 17, 2007 @12:18PM (#20637775) Journal
    pcc will take YEARS to get the functionality and optimizations that gcc has, even if gcc compiles slowly and sometimes generates dumb code.

    Either way, they'd be much, much better off if they imported LLVM and redirected their compiler brain power to clang [llvm.org].
  • LLVM / clang (Score:5, Interesting)

    by sabre ( 79070 ) on Monday September 17, 2007 @12:18PM (#20637787) Homepage
    PCC is interesting, but it's based on technology from the 70's, doesn't support a lot of interesting architectures, and has no optimizer to speak of.

    If you're interested in advanced compiler technology, check out LLVM [llvm.org], which is a ground-up redesign of an optimizer and retargetable code generator. LLVM supports interprocedural cross-file optimizations, can be used for JIT compilation (or not, at your choice) and has many other capabilities. The LLVM optimizer/code generator can already beat the performance of GCC-compiled code in many cases, sometimes substantially.

    For front-ends, LLVM supports two major ones for the C family of languages: 1) llvm-gcc, which uses the GCC front-end to compile C/C++/ObjC code. This gives LLVM full compatibility with a broad range of crazy GNU extensions as well as full support for C++ and ObjC. 2) clang [llvm.org], which is a ground-up rewrite of a C/ObjC front-end (C++ will come later) that provides many advantages over GCC, including dramatically faster compilation and better warning/error information.

    While LLVM is technologically ahead of both PCC and GCC, the biggest things it has going for it are the size of its community and the commercial contributors [llvm.org] that are sponsoring work on the project.

    -Chris
  • by joe_n_bloe ( 244407 ) on Monday September 17, 2007 @12:27PM (#20637929) Homepage
    Let me get this straight. A compiler that has been production-quality for over 15 years, compiles everything on every architecture, and has been continuously improved every minute of its existence needs to be replaced by ... Son of pcc? Because of a license?

    Sure, I prefer BSD-style licenses, and so do some other people, but what drives gcc development is the GNU license. I think I'll stick to the compiler that's debugged. Oh, that's right, I forgot, it comes with a debugger too. If you like that sort of thing.
  • Re:Interesting... (Score:5, Interesting)

    by Anonymous Coward on Monday September 17, 2007 @12:28PM (#20637951)
    It has less to do with the license and more to do with GCC's increasingly spotty support for some of the hardware platforms that NetBSD and OpenBSD run on. That, and GCC internals are a maintenance nightmare, and its development process is getting even less community-driven than it was before (which was never that much). Asking for a new compiler warning might take anywhere from a day to years just to get a response. The license is definitely gravy though.

    The BSD license that PCC is under, I understand, is actually a problem even to the BSD folks: PCC is actually extremely old (it was originally written for the PDP11!) and apparently it still carries the advertising clause.
  • Re:LLVM / clang (Score:3, Interesting)

    by sabre ( 79070 ) on Monday September 17, 2007 @12:35PM (#20638117) Homepage
    clang is fairly early on, but so is PCC. PCC supports almost no GCC extensions (e.g. inline asm, attributes, etc.), doesn't support C99 fully, and has many other problems. The clang parser is basically done for C, and clang supports several source analysis tools beyond "just code generation". See the slides linked from http://clang.llvm.org/ [llvm.org] for details. I'd expect clang to be fully ready for C use in the next year. (A short illustration of the kind of GCC extensions at issue follows this comment.)

    llvm-gcc is quite mature (it has built huge amounts of code, including apps like Qt and Mozilla), supports C/C++/ObjC and bits of FORTRAN/Ada if that is your thing. Using llvm-gcc you get the advantages of the LLVM optimizer and code generator with the GCC front-end.

    -Chris
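
    As an aside for readers who haven't run into them, here is a minimal sketch of the sort of GNU C extensions mentioned above (packed/inline attributes, extended inline asm, statement expressions) that a plain C compiler without GCC compatibility would typically reject. The struct, function and macro names are invented for the example, and the asm snippet is x86-specific:

        #include <stdint.h>

        /* __attribute__ syntax: structure packing and forced inlining. */
        struct __attribute__((packed)) wire_header {
            uint8_t  type;
            uint32_t length;   /* packed: no padding inserted before this field */
        };

        static inline int __attribute__((always_inline)) add_one(int x)
        {
            return x + 1;
        }

        /* GCC-style extended inline assembly (x86): read the timestamp counter. */
        static uint64_t read_tsc(void)
        {
            uint32_t lo, hi;
            __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
            return ((uint64_t)hi << 32) | lo;
        }

        /* Statement expressions and __typeof__, used heavily in kernel code. */
        #define MAX(a, b) ({ __typeof__(a) _a = (a); __typeof__(b) _b = (b); \
                             _a > _b ? _a : _b; })

        int main(void)
        {
            struct wire_header h = { 1, 42 };
            return MAX(add_one(h.type), (int)(read_tsc() & 1));
        }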
  • by TheRaven64 ( 641858 ) on Monday September 17, 2007 @12:46PM (#20638303) Journal
    This has been on Undeadly for a few days now. There was a very informative post by Marc Espie [undeadly.org] (who maintains GCC on OpenBSD) explaining this.

    This has been a long time coming. If you've ever looked at GCC code, you'll be familiar with the feeling of wanting to claw your eyes out (I had to for an article on the new Objective-C extensions *shudder*). I am somewhat surprised it's PCC not LLVM, but it makes sense. OpenBSD wants a C compiler in the base system that can compile the base system and produce correct code. Support for C++, Objective-C, Java and Fortran would all be better off in ports. PCC is faster than GCC, smaller than GCC, more portable than GCC, easier to audit than GCC, and already compiles the OpenBSD userspace. I wouldn't be surprised if it replaces GCC in the OpenBSD base system soon. If it does, GCC (or maybe LLVM) will still probably be one of the first things I install from ports, but I'd still regard it as a good idea.

  • Re:Interesting... (Score:5, Interesting)

    by sunwukong ( 412560 ) on Monday September 17, 2007 @12:51PM (#20638399)
    I believe it was de Raadt that once mentioned he'd prefer a non-optimizing compiler that produced simple, bullet-proof, bug-free code, i.e., in terms of the OS and its base tools, he prefers correct to fast.
  • by julesh ( 229690 ) on Monday September 17, 2007 @12:53PM (#20638443)
    Well that explains a lot. And here I was thinking that all modern compilers were designed correctly with a front-end and back-end. So much for academics.

    Actually, the post you're replying to is total bollocks. GCC has had a clear divide between front and back end (not to mention a source-language independent middle layer for performing optimizations) since I first looked at it in about 1996. Each layer is hideously complex, but they are all there.
  • by j-pimp ( 177072 ) <zippy1981@noSpam.gmail.com> on Monday September 17, 2007 @01:08PM (#20638775) Homepage Journal

    ...Wake me up when you're able to use PCC instead of GCC to do a 'make bzImage'

    You bring up a good point. For years I have been looking for an open source compiler that's about the same quality as GCC, but is anything but GCC. I'm not too picky about the politics, as long as there's a different set of politics from the GCC politics. I had great hope for Open Watcom [openwatcom.com], but the license was bad enough for Debian to consider it non-free, and they are not actively trying to be an alternative to GCC. It's quite a shame, but I really don't blame them. Technically Watcom is about ready for primetime on Linux; they just need to get enough people to periodically try to compile their pet open source Linux program with it and send an "I can't get this to work" mail to the list, but no one seems to care. PCC, on the other hand, has a much larger set of people that have a reason to like it for reasons other than it not being gcc.

  • Re:Interesting... (Score:5, Interesting)

    by j-pimp ( 177072 ) <zippy1981@noSpam.gmail.com> on Monday September 17, 2007 @01:16PM (#20638923) Homepage Journal

    And he continues to write code in C...why?

    Actually, if I could write C as well as him, I would do so more often. The problem is not him writing in C, it's other people writing in C who are not as good as him. Due to the scope of his work, him writing in C does not lead to more bad C being written. So I'm actually thankful he is coding in C.

    That being said, he should encourage lesser programmers (including myself) to specifically not code in C.

  • Re:Interesting... (Score:3, Interesting)

    by Selivanow ( 82869 ) <selivanow@gmail.com> on Monday September 17, 2007 @01:27PM (#20639135)
    If you read Linus' early posts about the Linux kernel you will notice that he had originally licensed it under its own license. He didn't switch to the GPL until later. It is hard to say whether or not Linux would have taken off without the GPL. It probably would have been fine using a BSD userland even if a userland wasn't quite available yet. In fact, if the kernel wasn't now so dependent on the GNU userland (binutils, glibc, etc.) it would be pretty easy to use the BSD userland. (I can't recall if there was a BSD-licensed userland in existence in 1991.)
  • by Sam ( 408 ) on Monday September 17, 2007 @01:28PM (#20639155)
    It already compiles the majority of the OpenBSD source tree, and did so before it was imported. Work is now going on to make it compile everything.
  • Re:Interesting... (Score:1, Interesting)

    by Anonymous Coward on Monday September 17, 2007 @01:38PM (#20639331)
    (not the GPP, incidentally)

    Even then, the problem is too many wetware cycles wasted on programming C "well", no matter who the programmer is -- If TdR didn't have to be so good at writing good C code (avoiding common pitfalls), he could reoptimize toward greater _creative_ productivity instead of ensured correctness. Isn't this why we use computers in the first place?*

    (* to do things which require correctness and repetitive tasks, i.e. which can be programmed to be done automatically rather than manually)
  • by MoxFulder ( 159829 ) on Monday September 17, 2007 @01:46PM (#20639495) Homepage

    - it is better than GCC. If this is the case, then it's too bad the GNU folk cannot benefit from whatever it has that GCC doesn't. Not like they'll admit its superiority, I'd presume.


    Actually... the BSD license is compatible with both GPLv2 and GPLv3, because it doesn't impose any restrictions beyond those included in GPLv2 or GPLv3. So BSD code can be incorporated into a GPL program (Theo de Raadt's recent rants notwithstanding).
  • by szo ( 7842 ) on Monday September 17, 2007 @01:50PM (#20639579)
    I don't follow politics, so care to explain what's wrong with gcc's politics? Or, what _is_ gcc's politics?
  • by j-pimp ( 177072 ) <zippy1981@noSpam.gmail.com> on Monday September 17, 2007 @01:52PM (#20639609) Homepage Journal

    The page you linked to says a Linux and a FreeBSD port are underway.

    The Linux compiler has worked at times, and they went as far as writing a binary called owcc that takes standard POSIX flags for cc and executes wcc with the equivalent args. You can get working Linux binaries, and they will compile non-trivial code if you try hard enough.

    The point is that, yes, Watcom lacks in some areas technically. However, and this is especially true on Windows where it works great, it's more a lack of interest than anything technical that keeps it from becoming a GCC alternative.

  • by evilviper ( 135110 ) on Monday September 17, 2007 @03:17PM (#20641129) Journal

    A compiler that has been production-quality for over 15 years, compiles everything on every architecture, and has been continuously improved every minute of its existence needs to be replaced by ... Son of pcc? Because of a license?

    You couldn't have gotten that statement any MORE WRONG if you had tried.

    GCC's "production quality" is an on-again, off-again thing. Through most of v3.x it had too many bugs to count, and was inherently unreliable. It couldn't even compile ITSELF with the most basic optimizations or the resulting binary would generate incorrect code. Up until v4 it also misaligned stack variable. It had, and still has, MANY bugs. That GCC successfully compiles code at all is almost entirely due to it being so popular that everyone knows it, and works around its bugs without even thinking about it.

    It has never had GOOD support for any platform other than x86. Remember the Red Hat GCC 2.96 fiasco? They forked it because they needed it to support more platforms than it did at the time. And even through v3.x the non-x86 ports of GCC had even more bugs than on x86, commonly falling apart if you attempted to use any optimizations. Now they're DROPPING support for those platforms entirely, which is a big problem for developers of operating systems for those platforms.

    "Improved" is pretty vague. HURD has probably been "improved" for every minute of it's existence as well... Meanwhile the far younger ICC (Intel's compiler) beats the pants off of GCC without even trying.

    What's more, GCC's "improvements" come at great cost. If you're a full-time developer, you want optimized code for the final release, but while developing you want to compile and test code frequently, and so as quickly as humanly possible. GCC v3+, even with all optimizations disabled, takes far, far longer to compile binaries than older versions of GCC, and, as the article says, it's something like 10x slower than PCC.

    The license issue is only incidental. These (and other) problems pushed them away from using GCC. Since they happen to be BSD developers, they'd prefer their work to be BSD licensed, and so it is.
  • Re:Interesting... (Score:2, Interesting)

    by Chandon Seldon ( 43083 ) on Monday September 17, 2007 @03:18PM (#20641151) Homepage

    Better idea, let's just get history correct.

    That's a somewhat more difficult goal, and not one that I'm really interested in attempting here. We'd have to get into references and all kinds of other stuff that really isn't worth it for a Slashdot flamewar.

    They got a huge amount of credit for the work they did. They just didn't get their name in lights ... because they refused to do the work required for that. Then they complained and wanted more recognition than anyone else who'd done the same amount of work got (like Perl or Xorg, etc.) ... this created a "slight" backlash from people who actually know what happened.

    The only credit that they didn't get is the specific credit they claim they deserved: The use of their name for the operating system they wrote. People argue three positions on this:

    • The GNU System is an OS developed by the GNU project, therefore it would be polite to refer to it by its name. (the RMS position)
    • The GNU System is an OS developed by the GNU project, but people who redistribute it can call it whatever they want. (the jerk position)
    • The idea of "the GNU system" is foolish. The GNU project just wrote some tools. Operating System is just another word for Kernel anyway. (the Linus position)

    My question to you is this: Are you arguing position #2 or position #3?

    Position #2 is 100% valid (but RMS gets to keep complaining that you're being a jerk). Position #3 is worthy of argument, but position #2 is not an argument in support of it.

  • by j-pimp ( 177072 ) <zippy1981@noSpam.gmail.com> on Monday September 17, 2007 @03:39PM (#20641505) Homepage Journal

    I don't follow politics, so care to explain what's wrong with gcc's politics? Or, what _is_ gcc's politics?

    I honestly don't know. However, it is an old project maintained by people. They have very specific ideas of how things should work, just like Linus has very specific ideas about development (no C++ code in the kernel; you could use something besides Git, but you would be an idiot; etc.)

    Now I don't know much about the inner workings of GCC or Watcom, but I do know this. Several years ago I tried making a Linux-to-Windows cross compiler and failed. I think I put a decent amount of effort into my attempts, and I definitely knew how to produce a standard Linux-hosted, Linux-targeted instance of GCC that would produce working binaries. A few years later I installed Watcom, and while it did not support Linux, I could install already-working binaries that allowed me to compile DOS, Windows, OS/2 and NetWare binaries from my Windows machine.

    Now the reasons for this are largely political. GCC works just fine as a cross compiler; I'm sure today I could get it to work now that I have written a lot more code, compiled more tarballs, and generally know more than I did then. I was able to get a FreeBSD-to-Windows cross compiler working just fine thanks to the ports collection. Watcom never got a ready-for-prime-time Linux compiler, but what they shipped to end users as "experimental" was always a Windows-hosted compiler targeting Linux.

    Now there is no technical reason that gcc or a third party can't make the cross-compiling process simpler, but other than poor college students who like to experiment, anyone who needs a cross compiler either can do it themselves, can hire someone who can, or has to do a lot of hoop jumping. Watcom, being open-sourced abandonware, creates binary releases. Since it currently only supports one CPU and a handful of binary formats, the build system happens to build a compiler for each possible target.

    It all comes down to people and their opinions, and that by definition is politics. The people that use the products and control the development have different ideas and goals, and this reflects in the finished products.

  • Re:Interesting... (Score:3, Interesting)

    by fatal wound ( 582897 ) on Monday September 17, 2007 @04:33PM (#20642443)

    I'm sorry, but C is just a poor choice for ensuring correctness.

    Far too true. However, for working with flexible or hazy requirements, or when looking to make code that is fast, C is very hard to beat. Also, just because this is true today doesn't mean it will be true forever.

    F... "hey, you know what? You're ALL right". I'm being facetious, and the C standard has done a great job in promoting C, but the C standard has really not evolved very far in terms of guaranteeing semantics.

    Once again, totally on the money. The standards group was never concerned enough to fix things like bitfields to make them useful, or to standardize the method of determining the size of an "int". I think the standard evolved more toward making the compiler writers happy than toward any real effort at fixing vague semantics that are quite prevalent and cause any number of problems.

    But, if you're trying to verify code that's already been written, either by hand or via some automated tool like a static analyzer, it is painful.

    This is so true that it is painful to hear! I worked a couple of years with a tool to analyze code for customers. The variability in the compilers, environments, implementation details, user hacks, or compiler switches that affect code is dizzying. Has anyone else enjoyed the "char myvar[0]" declaration construct? (A sketch of that trick follows this comment.)

    However, before you condemn all of the analysis tools, check out tools that perform "semantic analysis" (that's what it was called when I was there) of C code. You may be pleasantly surprised. But it is still a challenge to wholly analyze any project. C++ is hideously complex, as is Ada (even the more recent revision, Ada95 I think).

    Once again, there is no panacea of code correctness tools, but with the body of work that has been written in C, it would be foolish to just "walk away". I've worked with a number of languages, both object-oriented and procedural, and many arcane assembly languages as well. In my work, I've come to the simple conclusion that *ALL* languages have issues in many areas. Something like the old adage "all dogs have fleas"...

    cheers!
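
    For readers who haven't met it, the "char myvar[0]" construct mentioned above is the zero-length-array (or "struct hack") idiom for putting variable-length data after a fixed header. Here is a minimal sketch of it; the struct and function names are invented for the example, and C99 spells the member "char data[]" as a flexible array member:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        struct message {
            size_t len;
            char   data[0];   /* zero-length array: declares no storage of its own */
        };

        static struct message *message_new(const char *text)
        {
            size_t len = strlen(text) + 1;
            /* Allocate the header plus room for the trailing bytes in one block. */
            struct message *m = malloc(sizeof(*m) + len);
            if (m != NULL) {
                m->len = len;
                memcpy(m->data, text, len);
            }
            return m;
        }

        int main(void)
        {
            struct message *m = message_new("hello");
            if (m == NULL)
                return 1;
            printf("%zu bytes: %s\n", m->len, m->data);
            free(m);
            return 0;
        }

    Part of why static analyzers find this painful is visible in the sketch itself: the code deliberately writes past the declared (zero) size of the array, which is exactly the kind of thing such tools are built to flag.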

  • Re:Interesting... (Score:3, Interesting)

    by DragonWriter ( 970822 ) on Monday September 17, 2007 @04:52PM (#20642729)

    The GNU System is an OS developed by the GNU project, therefore it would be polite to refer to it by its name. (the RMS position)


    No, the RMS position is not that, since Linux is not the OS developed by the GNU project. It is an OS whose common feature is the Linux kernel; as usually distributed, it includes various tools from the GNU project. The RMS position is, roughly, "The GNU System is an OS developed by the GNU Project, and therefore every project that incorporates any components from that system is also morally obligated to include 'GNU' in the name of their product, even though they have express written permission to distribute the product without doing so."

    But even that's not exactly right, because this argument is only applied to Linux, not to other products that incorporate GNU tools.

    The GNU project's own OS doesn't have a general release yet, though it's been coming Real Soon Now for my entire adult life.

    The GNU System is an OS developed by the GNU project, but people who redistribute it can call it whatever they want. (the jerk position)


    Actually, that is, properly speaking, "the GPL position", since the GPL does not have any provision requiring adherence to upstream naming conventions or requests. If the GNU project had wanted to make that a condition of distribution, they ought to have incorporated it into the license under which they authorized people to redistribute their code and create derivative works.

    Complaining about people doing something you've given them express, written legal permission to do is somewhat childish, especially when you've (since you began complaining) revised the legal terms offered more than once without addressing the thing you keep complaining about.

    The idea of "the GNU system" is foolish. The GNU project just wrote some tools. Operating System is just another word for Kernel anyway. (the Linus position)


    The GNU project may or may not have written an operating system, but Linux isn't it. An operating system is more than a kernel, but a kernel is an indispensable portion of the operating system. Using some OS components GNU developed and some other OS components doesn't make the result the same OS GNU developed. It's something else. Distributors are free to call it things that highlight the kernel by including its name ("Red Hat Enterprise Linux"), things that evoke the name of the kernel without actually using it exactly ("Lindows"), or things that don't mention the kernel at all ("Knoppix"). They could do the same thing with the GNU toolchain. The fact that none of the distributors want to label the OS in a way that highlights the GNU toolchain may be disappointing to the members of the GNU project, but since they've expressly (and repeatedly, given the revisions to the GPL) given distributors permission to do exactly what they are doing, they don't really have any leg to stand on in complaining about it.
  • Re:Interesting... (Score:3, Interesting)

    by Goaway ( 82658 ) on Monday September 17, 2007 @05:16PM (#20643113) Homepage
    My point was more that you might just be better off in the long run starting from scratch, instead of taking on the maintenance nightmare that is gcc.
  • by Anonymous Coward on Monday September 17, 2007 @05:48PM (#20643627)
    I suspect for the most part it's the usual license zealotry that seems to have reached a peak (or is that a nadir) over the last few months.

    If this really was about getting a faster, more reliable, compiler, that supports more architectures, an older version of GCC would have been forked. PCC is a particularly bad idea:

    - Poor separation of the front and back-ends means it's only ever going to be a C compiler, unlike GCC.
    - Ancient code, not even at C99 level. By the time it's compliant, expect it to be a mess.
    - Speed of compilation comes at a cost: the compiler does almost no optimizations, not even the uncontroversial ones (a small example of what those look like follows this comment). Code generated is large and slow. Expect the number of supported architectures to be poor, not because it can't technically generate code for a particular target, but because the timings and size of the kernel would preclude it from running on anything useful.
    - Poor multiarchitecture support (unless you're limiting yourself to 1970s systems and ix86.) This will need to be added before it can be considered credible.

    I mean, that last one's the biggest joke. The complaint is that GCC doesn't support enough architectures, so you're switching to PCC? WTF?

    And why does GCC drop less popular architectures from time to time? Answer: only because nobody is volunteering to maintain them. So, of the two options:

    - Contribute to GCC by maintaining output options for architectures you want

    or

    - Modify an old, woefully outdated compiler that barely supports most of the architectures you want, so that it supports them

    people are seriously picking the latter?

    The proponents of PCC here are following an agenda. It's nice to see antique code given a polish and made to work from time to time, but actually switching OpenBSD to this thing, as proposed here by numerous contributors, is so completely out of left field that I can only assume this is pretty much another salvo in the unnecessary war against the FSF.

    Bizarre.
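
    To make the "uncontroversial optimizations" point above concrete, here is a small, invented C example: a non-optimizing compiler translates it more or less literally, doing the arithmetic at run time, while even a modest optimizer folds the whole function down to a single constant:

        /* With no optimization this does two multiplies, an add and a divide
         * at run time; with even basic constant folding and simplification
         * the entire function reduces to "return 86400". */
        int seconds_per_day(void)
        {
            int secs  = 24 * 60 * 60;   /* constant folding */
            int twice = secs + secs;    /* trivial arithmetic simplification */
            return twice / 2;
        }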
  • Debian GNU/kFreeBSD (Score:3, Interesting)

    by tepples ( 727027 ) <tepplesNO@SPAMgmail.com> on Tuesday September 18, 2007 @12:19AM (#20647481) Homepage Journal

    What happened to the Linux distro with the BSD userspace - haven't heard about that in a while now...
    I believe it was the other way around [debian.org].
  • by realnowhereman ( 263389 ) <andyparkins@nOsPam.gmail.com> on Tuesday September 18, 2007 @05:24AM (#20649093)

    Having a closed, BSD-licensed compiler helps restrict competition in their markets, and the BSD license allows this. GPL does not.

    Wow, that sounds great. Sign me up.

    Embedded work is far better when the toolchain is based on GCC. Every proprietary compiler I've used has been a fight just to get started. I recently tried out avr-gcc and was delighted, all the GCC experience I had just dropped straight in.

    Shame on any manufacturer who doesn't add a GCC backend for their CPU.
  • Re: Right on (Score:2, Interesting)

    by Hal_Porter ( 817932 ) on Tuesday September 18, 2007 @11:22PM (#20663445)
    Unfortunately, Microsoft made a decision not to use the boundary protection in their new operating system called "Windows". They ignored most of the work that Intel did providing support in the silicon for a decent micro operating system. The boundary protection could have been built into the programming language and runtime. Things would have been much better in the long term.

    16 bit Windows did use it, but just not for protection.

    Originally 8086s had a simple segmentation mode without protection. Each physical address was built up as (seg << 4) + offset. Since both segment and offset were 16-bit, this limited the address space to 1MB. Famously, the designers of the IBM PC reserved the upper 384K for IO, and this is where the 640K limit came from. (A sketch of this arithmetic follows this comment.)

    Later, on the 80286, protected mode was supported, where the value you loaded into a segment register was a selector, an index into a table of segments. The CPU supported different privilege levels called rings, with only the highly privileged ones allowed to create entries in this table.

    16-bit Windows did use 286 protected mode - it had to in order to get access to memory above 1MB. When it loaded, it would install a DOS extender which would switch the PC into protected mode and allow 16-bit protected mode tasks to run on top of DOS. Since the 286 didn't allow you to switch back, each call into DOS used a triple fault to reset the processor and some BIOS code to jump back into Windows.

    It was even possible to allocate buffers bigger than 64K. In that case, Windows would set up an array of selectors, one for each 64K chunk. If C code wanted to access an arbitrary location in the buffer, the compiler would work out which 64K chunk it was in, load a segment register with the correct selector (a very slow operation, since microcode in the 286 had to check permissions), calculate the offset and load it into one of the normal registers, and then do the segment read. This was an incredibly slow process.

    It's also worth pointing out that while 16-bit Windows used protected mode to get access to more memory, like DOS the OS didn't protect itself from being damaged by third-party applications. And it didn't stop third-party applications damaging each other. As Walter Oney put it, the philosophy was that it's a personal computer after all - if you're a programmer you can do what you like to it, just like you're free to run your car without oil until it seizes up.

    Once the 386 came out and allowed offsets into segments to be bigger than 64K, Windows would even set the limit on the first selector to allow you to access the whole buffer with operand size overrides. The 386 also supported Virtual 8086 mode, so protected mode could jump into DOS and segments would work like on an 8086. But loading segment registers was still a very slow operation, and Windows NT and Linux, which are both designed to stop applications corrupting each other or the system, both used page tables to do it instead. But page tables don't protect against buffer overruns.

    Mind you, segmentation only protects against buffer overruns if you malloc the buffers. Automatic variables on the stack are not protected. And allocating a stack variable is just a subtract instruction - it is orders of magnitude faster than calling into the OS, switching to Ring 0, allocating the memory, filling in the segment table and then returning to the caller, who would load the selector into a segment register. Worse, there are very few segment registers, and each time you access any buffer you need to reload one. On a 286 there are CS, DS and ES; CS and DS are needed for near code and data, so only ES is free for far pointers. The 386 has FS and GS too, but three registers with very slow loads is not a recipe for a speedy machine.

    So Microsoft tried it and it was slow. If they'd used it enough to avoid buffer overruns - i.e. malloc'd every buffer rather than allocating some on the stack - it would have been really slow. And so all modern OSs rely on the page table for protection instead of segments, partly so the same design can run on multiple processor architectures. In x64 mode, segment limits aren't even checked by hardware anymore.
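
    As a plain-C illustration of the address arithmetic described above (not code from any actual DOS or Windows source; the function names and example values are invented), here is how a real-mode segment:offset pair maps to a physical address, and how a compiler supporting buffers larger than 64K would split a position into a selector index plus a 16-bit offset:

        #include <stdint.h>
        #include <stdio.h>

        /* Real mode on the 8086: physical address = (segment << 4) + offset.
         * With 16-bit segments and offsets this covers 1MB, of which the
         * IBM PC reserved the top 384K, leaving the famous 640K. */
        static uint32_t real_mode_address(uint16_t segment, uint16_t offset)
        {
            return ((uint32_t)segment << 4) + offset;
        }

        /* Huge-buffer access as described above: one selector per 64K chunk,
         * so a 32-bit position splits into (chunk/selector index, offset). */
        static void huge_split(uint32_t position, uint32_t *chunk, uint16_t *offset)
        {
            *chunk  = position >> 16;                  /* which 64K chunk     */
            *offset = (uint16_t)(position & 0xFFFF);   /* offset within chunk */
        }

        int main(void)
        {
            /* 0xB800:0x0000 is the classic colour text-mode video segment. */
            printf("0xB800:0x0000 -> physical 0x%05X\n",
                   (unsigned)real_mode_address(0xB800, 0x0000));

            uint32_t chunk;
            uint16_t off;
            huge_split(200000, &chunk, &off);   /* byte 200000 of a large buffer */
            printf("byte 200000 -> chunk %u, offset 0x%04X\n",
                   (unsigned)chunk, (unsigned)off);
            return 0;
        }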
