GCC Turns 25

eldavojohn writes "With the release of GCC 4.7.0, the venerable and stalwart constant that is the GNU Compiler Collection turns twenty-five. More ISO standards and architectures are supported with this release, and surely more memories are to come from the compiler that seems to have always been."
  • by Anonymous Coward on Thursday March 22, 2012 @06:59PM (#39446027)

    Those were the days before the EGCS fork.

  • by fnj ( 64210 ) on Thursday March 22, 2012 @07:01PM (#39446043)

    Actually, PC duffers in 1983 were futzing with 16-bit Computer Innovations C, Lattice C, and Microsoft C 1.0 (which was pretty much just a ripoff of Lattice C), then through the 80s with Microsoft C 2.0 through 6.0, with the first hesitant entry into 32 bits in 5.0 near the end of that period (even though there was no proper 32-bit Microsoft OS available yet). Meanwhile, REAL embedded programmers were working with the 32-bit 68000, 68010, and 68020 using Green Hills C and compact, deterministic real-time multitasking kernels such as VRTX.

    Green Hills C was a significant improvement on the Portable C Compiler that came with SunOS and other BSD-based Unixes in those days.

    When gcc finally matured, it was an ENORMOUS boon. The action nowadays is moving to Clang/LLVM, though. With Clang, you don't have to build a separate compiler for every cross-compile target: every Clang executable can produce code for any of the supported targets just by passing the right command-line options. Of course, this doesn't address the point that you still need appropriate assemblers, linkers, libraries, startup code, etc. for each target. But they are trying to get a handle even on that with the Clang Universal Driver Project.
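    For example (a rough sketch; the target triples and file names here are just illustrative), one and the same clang binary can emit object code for several architectures purely through command-line options:

        // hello.cpp - trivial translation unit used to demo cross-targeting
        int add(int a, int b) { return a + b; }

        // Same clang++ binary, different targets. The integrated assembler
        // produces the .o files; linking still needs a per-target toolchain.
        //   clang++ --target=x86_64-linux-gnu -c hello.cpp -o hello-x86_64.o
        //   clang++ --target=armv7a-none-eabi -c hello.cpp -o hello-arm.o
        //   clang++ --target=mips-linux-gnu   -c hello.cpp -o hello-mips.o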

  • Re:Thanks gcc! (Score:4, Informative)

    by Anonymous Coward on Thursday March 22, 2012 @07:25PM (#39446227)

    No, it doesn't [lwn.net]. Distributions are evaluating it right now and it is failing big time. Even Xcode developers are pissed off [woss.name] that Apple dropped gcc. That shit ain't fully baked, and imbeciles like you who recite Apple propaganda without thought need to pull your heads out of your asses.

  • Pastel and LLNL (Score:5, Informative)

    by Al Kossow ( 460144 ) on Thursday March 22, 2012 @08:00PM (#39446483)

    Pastel was an extended Pascal compiler developed by LLNL for the S-1 supercomputer project
    http://www.cs.clemson.edu/~mark/s1.html [clemson.edu]

    It, and several other significant pieces of software, including the SCALD hardware design language, were made freely available by LLNL. I have one version of the compiler, which was donated to the Computer History Museum by one of its authors. I have been looking for the other pieces since the late '80s.

    If you look at the GNU Manifesto, RMS was also looking at using the MIT Trix kernel in the early days of the project.

  • beg to differ (Score:4, Informative)

    by Chirs ( 87576 ) on Thursday March 22, 2012 @08:07PM (#39446527)

    You can have the best algorithm in the world, and a good compiler will *still* make it run faster than a bad compiler would.

    Alignment, branch probabilities, inline functions, hoisting stuff out of loops, loop unrolling, removing unused code, etc.--these sorts of things really can make a difference in code that gets called frequently. (A contrived example of the hoisting case is sketched below.)

    That said, it's not exactly clear that the Intel compiler (icc) is unconditionally better than gcc. There are some benchmarks at http://www.linuxforge.net/docs/bm/bench-gcc-icc.php [linuxforge.net] of a linux-2.6.34 kernel compiled with gcc and icc. The results are close enough that it doesn't make sense for most people to use icc.
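    Here's the promised contrived sketch of the hoisting case (assumes an optimizing build; whether a given compiler actually hoists it depends on the optimization level):

        // std::strlen(s) is loop-invariant here (the loop never writes to the
        // string), so an optimizing compiler can hoist the call out of the
        // loop instead of re-evaluating it on every iteration. At -O0 you
        // typically pay for the call each time around.
        #include <cstddef>
        #include <cstring>

        int count_vowels(const char *s) {
            int n = 0;
            for (std::size_t i = 0; i < std::strlen(s); ++i)
                if (s[i] == 'a' || s[i] == 'e' || s[i] == 'i' ||
                    s[i] == 'o' || s[i] == 'u')
                    ++n;
            return n;
        }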

  • Re:Thanks gcc! (Score:5, Informative)

    by PaladinAlpha ( 645879 ) on Thursday March 22, 2012 @08:58PM (#39446855)

    Deliberate misinformation. You are free, of course, to do whatever you want to with binaries produced by GCC. GCC's license is completely irrelevant unless you're modifying or extending GCC itself.

    Nice try, though.

  • Re:Thanks gcc! (Score:2, Informative)

    by Anonymous Coward on Thursday March 22, 2012 @09:52PM (#39447157)

    clang never beat GCC. The benchmarks clang/LLVM originally published were run against an already old GCC version (by several releases). Thing is, GCC isn't a static target. It keeps getting better, and so far clang/LLVM hasn't been able to keep up.

    Same thing with error messages. GCC has vastly improved its error messages now; clang's just look flashier because they use ANSI coloring.

    The one place where clang/LLVM has actually managed to keep pace with GCC is in the complexity of the code. While GCC is a macro-filled horror show, clang/LLVM is an impenetrable mass of C++ casts, code generators (for clang, not for the LLVM engine), and other horrors. It's definitely not a showcase for how C++ can simplify development. It's somewhat easier to read, but it is aging at an incredible pace.

    And this comes from someone who uses clang all the time. I like both compilers, and I target both. Just like I make sure my code compiles cleanly on all the free BSDs as well as Linux.

  • Re:Thanks gcc! (Score:5, Informative)

    by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Thursday March 22, 2012 @11:07PM (#39447523) Homepage Journal

    GCC's license is completely irrelevant unless you're modifying or extending GCC itself.

    In fairness, Clang and LLVM are designed to be used as libraries and integrated with other projects. Imagine that your favorite text editor uses Clang's own parser to generate its syntax highlighting and check for errors as you type. With the GPL, your editor would likely have to be released under the GPL. Now, I like Emacs so that's not a deal-breaker for me. I can see why Apple and other non-GPL editor authors might not like it, though.
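    A rough sketch of what that looks like with libclang's C API (error handling and all the editor plumbing omitted; "example.c" is a made-up file name):

        // Parse a file and print its diagnostics - roughly what an editor
        // would do for as-you-type error checking.
        // Build (library/include paths vary by install): clang++ check.cpp -lclang
        #include <clang-c/Index.h>
        #include <cstdio>

        int main() {
            CXIndex index = clang_createIndex(0, 0);
            CXTranslationUnit tu = clang_parseTranslationUnit(
                index, "example.c", nullptr, 0, nullptr, 0,
                CXTranslationUnit_None);
            if (!tu) return 1;

            unsigned n = clang_getNumDiagnostics(tu);
            for (unsigned i = 0; i < n; ++i) {
                CXDiagnostic d = clang_getDiagnostic(tu, i);
                CXString msg = clang_formatDiagnostic(
                    d, clang_defaultDiagnosticDisplayOptions());
                std::printf("%s\n", clang_getCString(msg));
                clang_disposeString(msg);
                clang_disposeDiagnostic(d);
            }
            clang_disposeTranslationUnit(tu);
            clang_disposeIndex(index);
            return 0;
        }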

  • by MobyDisk ( 75490 ) on Thursday March 22, 2012 @11:24PM (#39447613) Homepage
    Microsoft Visual Studio supports compiling to: [microsoft.com]
    • ARM licensed technologies for architectures v4, v4T, Thumb, v5TE, and Intel XScale.
    • Hitachi SuperH processors SH-3, SH3-DSP, and SH-4.
    • MIPS licensed technologies developed by NEC, Toshiba, Philips Semiconductor, Integrated Device Technologies, LSI Logic, and Quantum Effect Design.
  • by Chryana ( 708485 ) on Friday March 23, 2012 @12:03AM (#39447787)

    Are you an Apple shill in your spare time?

    I'm trying to read this thread, and I have to put up with your repetitive posts about how great clang is. Why don't you read some of the replies [slashdot.org] to your crap? They do a good job of debunking your claims. I have mod points, but I just hate moderating people down, even if they waste my time repeating unfounded assertions (also known as bullshit).

  • by shutdown -p now ( 807394 ) on Friday March 23, 2012 @03:20AM (#39448395) Journal

    True, but adding // comments would take 10 minutes (assuming you know the code), and giving the ability to declare variables wouldn't take long either.

    You forget about all the associated red tape. Even a minor feature has to be tested, and some QA lead ultimately has to sign off on it. It also has to be documented by tech writers, complete with code samples, and said documentation has to be translated into all supported languages (I believe over a dozen now for VS). Any associated user-visible output (e.g. new errors/warnings) also has to be proofread and translated. It all adds up.

    But let's be honest, is there any reason a company with Microsoft's resources can't keep up with gcc?

    It could, if that were the goal. But the ultimate goal is to earn money, the same as for any other for-profit company. Hence, things aren't done because they are neat or the Right Thing or because everyone else has them; they're done because the expected profit from the feature - whether direct (from sales of the product) or indirect (from sales of other products propped up by the SDK, like, say, Windows) - exceeds the expense. Even more importantly, resources being large but still limited, for every feature the question is not whether it is profitable in and of itself, but whether it is more profitable than something else that could be implemented in its place. It's that stack ranking that really kills C99 support - there's always something more important (read: more profitable).

    Of course, people aren't cogs, and sometimes they feel strongly enough about something - purely out of the desire to do the Right Thing - that they play bureaucracy games to come up with a plausible "business case" for their favorite cause, and ultimately convince the management that calls the shots to let them put it in. But there's no one there who feels strongly about C99 (or at least more strongly than about a dozen other things that need to be done first, preferably yesterday - like, say, variadic templates...)

  • by Mr Z ( 6791 ) on Friday March 23, 2012 @07:39AM (#39449323) Homepage Journal

    What's all this about GCC being slow? It's one of the fastest (in terms of compile time) compilers I work with regularly. You need to try out some highly optimizing compilers for embedded processors sometime to reset your expectations.

    Some real numbers: I just recompiled (with all the optimization bucky bits turned on) my Intellivision emulator and SDK. That's just over 100K lines of code. It took 3.75 seconds with make -j6. "But that's a parallel make!" Fine, I'll do it again with three of my computer's CPUs tied behind its back: 13.54 seconds. (Only 8.6 CPU seconds.) At work, it can take 5 seconds just for RVCT (ARM's compiler) to contact the friggin' license server. Or maybe it's our NFS servers. Hard to say.

    Ok, now to be fair, that's nearly all C code. C++ is a whole 'nuther animal. But much of that is C++'s fault, or, more correctly, the fault of the modern C++ libraries. The template processor is a Turing-complete functional programming language, if you sneak up on it sideways and catch it off guard. The STL and Boost folks have perfected that snipe hunt and made an industry out of it. That means that C++ code can compile a bit more slowly. (Fine: "a bit" is an understatement. More like a quadword.) BTW, my comment on STL and Boost is not meant to be a flame of their work. It's incredibly useful stuff. But I consider a bunch of what they're doing something of an abuse of C++'s limited mechanisms. If C++'s metaprogramming facilities were more deliberately designed for this level of use, I think compile times would come down and we'd avoid the "thirty pages of spaghetti because you forgot a comma" error message experience.

    The book "C++ Template Metaprogramming [boostpro.com]" has a rather enlightening chapter (Appendix C) on compiler-time performance for various C++ features. (Get a glimpse here. [amazon.com] Just search for Appendix C.) Unfortunately, it's not terribly up to date--my copy says (C)2005--so it measures GCC 2.95.3 and GCC 3.3. GCC definitely was not a performance leader in that era, but most of the compilers were pretty bad. I'd love to see an updated version of it for the latest crop of compilers. I seem to recall finding a website a couple years ago with updated data that showed GCC fixed some of the quadratic algorithms in this space, but I could be dreaming it. If anyone actually has pointers to some data on this, that'd be great.

  • Re:Thanks gcc! (Score:4, Informative)

    by serviscope_minor ( 664417 ) on Friday March 23, 2012 @12:42PM (#39452835) Journal

    They still do this with GCC. Because of the poor layering, getting a new target often requires hacking up various parts of the middle. Their goal is a working compiler, so they just do the minimum required to get their target working, at the expense of breaking others. These changes won't get incorporated into mainline GCC, so you're stuck with a GCC fork that's going to be unmaintained pretty soon.

    Well, back in the past, in the bad old days, they licensed some proprietary compiler and hacked it. You have no idea how good/bad/ugly that was compared to GCC, but the end results were terrible.

    Also, most embedded vendors don't make new CPU architectures. They license existing cores (ARM, PPC, 68k, MIPS) for which GCC already works, making the amount of hackery they need (but not want) to do minimal.

    In contrast, LLVM back ends are modular and quite easy to write and, more importantly, don't need to touch any of the rest of the system. This is why ARM is now investing quite a lot in LLVM and companies like Qualcomm have seemingly permanent job adverts for anyone with LLVM experience.

    Or it's because companies love to proprietarize stuff, because they think every fart is a "market advantage" and love to keep totally unnecessary things secret. Like companies that won't even send you a basic datasheet without signing an NDA, and other crazy foolishness like that.

    Once they get hold of LLVM, they won't be able to resist the temptation to stamp their own proprietary "advantage" on it, at great personal cost to the developers unlucky enough to end up working with their products. GCC kept them sane by forcing them to admit that their insane, poor-quality hackery was worth basically nothing to anyone. That seemed to stop them doing it mindlessly.

    I hope, I really hope, I am wrong. But if you've ever suffered under embedded vendors and their desire to stamp their mark, for good or bad (usually bad), on their offerings, you will understand where I'm coming from: any method of stopping them from being idiots and just getting them to sell quality hardware is a step in the right direction.
