
Interview with Mark Mitchell, GCC's Release Engineer

ICC-Rocks writes "In light of the imminent release of the first 'stable' version of GCC version 3, OSNews features an interview with Mark Mitchell, GCC's Release Engineer. They discuss GCC 3.x, the future, and the competition."
  • Question: Blah blah blah irrelevant blah.
    Answer: Blah blah blah I don't know blah.

    Brilliant, really.

  • by Anonymous Coward
    Yes, even RedHat 7.3 still has the piece-of-crap 2.96 compiler. We as a community should stand up and say to the distro people: USE RELEASES, PLEASE.

    RedHat, please, just make it easier to play with the system and include the stuff that we all will go and download 20 seconds after install. Please. Compiling compilers like 2.95.3 and 3.0.4 myself is a waste of my time.

    Note: About my insinuations about GCC 2.96 brokenness, I work side by side with a person who used to be on the GCC/GNU team, and has found strange bugs in certain versions of glibc that were compiled by the 2.96 series. The bugs went away when using a release glibc compiled by a release GCC. I personally have seen evidence that this is not FUD concerning GCC 2.96 - so please, all you flaming Bero zealots, explain why it is better to have a kluged compiler when the GCC team has far superseded it?

    Good - they use glibc 2.2.5 - a standard GNU release, but they compiled it with the LAME 2.96 compiler. We shall SEE if they got the compiler right. It has been said that if a broken compiler compiles a library, the library can be strangely broken and very difficult to debug. This goes to show why RedHat shouldn't do this, and should properly couple glibc 2.2.5 with GCC 3.0.4 as intended by GNU. Bero seems adamant about maintaining a 2.96 fork, which is costing time and resources and annoying users. I wouldn't care so much if 1.1.2, 2.95.3, 3.0.4 and RH-BROKEN.296-special were all included, but such is not the case. Lame.

    However, why did gcc3 appear in 7.2 and not 7.3? All I ask is this: yes, they can compile how they see fit, and so can I. My only request is for them to provide the rest of the compilers, cleanly installed "their way," so that I don't have to go through the same shenanigans every time I upgrade or change a system or install a new one, etc.

    On a side note, as far as GCC 3.x not being ready for prime time: for C it surely is. I don't know about the rest, but for C it is, as far as I can see, quite usable, stable and reliable, with some interesting new optimizations. I also like ICC, from Intel, but they have very strange and frustrating licensing weirdness, and the kernel can't be compiled with it.

    The claim that a lot of GCC 3 is broken with regard to C++ is a crock. Both sides blame the other, but from what I have seen, most of the code that doesn't compile right on GCC 3.x is the writer's fault, not the compiler's. Think: what is harder, writing hello world or writing a compiler that compiles hello world? I'm more inclined to believe the compiler guy who has to work on the project.

    I see the reason to maintain binary compatibility to a point. For their manageability it makes sense, to some degree. So if it's easier for them to put stuff out, go ahead.

    I think that GNU has been a great force in the world, and to uselessly outpace them or point fingers at them is frustrating and bad for both camps.

    One more note on RedHat: I am what would be the "customer" -- I do buy the media and get RH with new systems, etc. "Customers" who use this as a server don't like things being out of whack. I wish I could make it a requirement that the EGCS 1.1.2 release, the 2.95.3 release, and a GCC 3.0.x release be included already, to make things easier. It was there in 7.2, and then yanked out. I didn't hear the pissing and whining from the usual suspects about why this was done, but I can only imagine they went off in some strange direction and have to dig themselves out quietly and slowly from this bastard fork, which NO "readme" or "install" doc *EVER, EVER* requests. Face it: 2.96 is some RedHat-only (not Mandrake, not SuSE) strange kluge. Programmers ignore it in favor of GNU releases. Debian ignores it. It's a strange wart that needs wart-removing acid, now. ;p

    • by Anonymous Coward

      gcc 2.96 is actually more standards compliant than any other version
      of gcc released at the time Red Hat made this decision (3.0 is even more compliant, but not as stable yet).
      It may not be "standards compliant" as in "what most others
      are shipping", but 2.96 is almost fully ISO C99 and ISO C++ 98
      compliant, unlike any previous version of gcc.

      gcc 2.96 has more complete support for C++. Older versions of gcc could
      handle only a very limited subset of C++.

      Earlier versions of g++ often had problems with templates and other
      valid C++ constructs.

      Most of gcc 2.96's perceived "bugs" are actually broken code
      that older gccs accepted because they were not standards compliant - or, using
      an alternative term to express the same thing, buggy.
      A C or C++ compiler that doesn't speak the standardized language has
      a bug, not a feature.
      In the initial version of gcc 2.96, there were a couple of other bugs.
      All known ones have been fixed in the version from updates - and the version
      that is in the current beta version of Red Hat Linux. The bugs in the initial
      version don't make the whole compiler broken, though. There has never been
      a 100% bug free compiler, or any other 100% bug free non-trivial program.
      The current version can be taken from Red Hat Linux 7.2. It will work
      without changes on prior 7.x releases of Red Hat Linux.
      Since a lot of people claim 2.96 is buggy because of the accusations
      found in MPlayer [mplayerhq.hu] documentation, I have
      included the facts that led them to incorrectly believe that 2.96 is buggy
      here [slashdot.org].

      gcc 2.96 generates better, more optimized code.

      gcc 2.96 supports all architectures Red Hat is currently supporting,
      including ia64. No other compiler can do this. Having to maintain different
      compilers for every different architecture is a development (find a bug, then
      fix it 4 times), QA and support nightmare.

      The binary incompatibility issues are not as bad as some people and
      companies make you believe.
      First of all, they affect dynamically linked C++ code only.
      If you don't use C++, you aren't affected. If you use C++ and link statically,
      you aren't affected.
      If you don't mind depending on a current glibc, you might also want to
      link statically to c++ libraries while linking dynamically to glibc and other
      C libraries you're using:
      g++ -o test test.cc -Wl,-Bstatic -lstdc++ -Wl,-Bdynamic
      (Thanks to Pavel Roskin [mailto] for pointing this
      out)
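      As a rough sketch of what that looks like in practice (the file name
      test.cc and its contents here are just a hypothetical example), building
      the little program below with the command above should leave the binary
      dynamically linked against glibc but with no run-time dependency on
      libstdc++.so, which you can check with ldd:

      // test.cc -- hypothetical example using the C++ standard library
      #include <string>
      #include <iostream>

      int main() {
          std::string greeting = "hello from a statically linked libstdc++";
          std::cout << greeting << std::endl;
          return 0;
      }
      // Build:  g++ -o test test.cc -Wl,-Bstatic -lstdc++ -Wl,-Bdynamic
      // Check:  ldd ./test   (libstdc++ should not appear in the output)
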
      Second, the same issues appear with every major release of gcc
      so far. gcc 2.7.x C++ is not binary compatible with gcc 2.8.x. gcc 2.8.x C++
      is not binary compatible with egcs 1.0.x. egcs 1.0.x C++ is not binary
      compatible with egcs 1.1.x. egcs 1.1.x C++ is not binary compatible with
      gcc 2.95. gcc 2.95 C++ is not binary compatible with gcc 3.0.

      Besides, it can easily be circumvented. Either link statically, or
      simply distribute libstdc++ with your program and install it if necessary.
      Since it has a different soname, it can coexist with other libstdc++ versions
      without causing any problems.
      Red Hat Linux 7 also happens to be the first Linux distribution using
      the current version of glibc, 2.2.x. This update is not binary compatible with
      older distributions either (unless you update glibc - there's nothing that
      prevents you from updating libstdc++ at the same time), so complaining about
      gcc's new C++ ABI breaking binary compatibility is pointless. If you want
      to distribute something binary-only, link it statically and it will run
      everywhere.
      Someone has to be the first to take a step like this. If nobody dared
      to make a change because nobody else is doing it, we'd all still be using
      gcc 1.0, COBOL or ALGOL. No wait, all of those were new at some point...

      gcc 3.0, the current so-called "stable" release (released quite
      some time after Red Hat released gcc 2.96-RH), fixes some problems, but
      introduces many others - for example, gcc 3.0.1 can't compile KDE 2.2
      correctly, due to bugs in gcc 3.0.x's implementation of multiple inheritance
      in C++.
      Until another set of 3.0.x updates is released, I still claim 2.96 is
      the best compiler yet.

      • yack yack yack.... All kinds of rant about why 2.96 is good

        I use GCC 3.0.4, and it's a lot more standard-compliant than 2.96, has better error messages, and seems just as stable as 2.96. The libstdc++v3 from GCC 3.x is (almost) standard compliant.

        • I use GCC 3.0.4, and it's a lot more standard-compliant than 2.96, has better error messages, and seems just as stable as 2.96.

          Then why won't any distributor use 3.0.x as its compiler? Why did RH release 7.3 instead of moving to gcc 3.0? The answer is that gcc 3.0 has some fatal bugs which prevent it from compiling KDE correctly. Fancy that-- they actually released something as big as 3.0, without checking that it could compile KDE correctly. Apparently, they've ramped up the C++ requirements for the 3.1 release, which means they're at least starting to take the issue seriously. However, this "ball dropping" exercise has kind of vindicated RH, who, unlike the gcc project, need to ship a compiler that will compile the bulk of their distribution.

          • Why did RH release 7.3 instead of moving to gcc 3.0?

            Because RH 7.0 used 2.96, since they needed a new compiler (for various reasons) and 3.0 was going to be too far away. Instead of moving to 3.0 for 8.0 a month or two before 3.1 was put out (and having to use it throughout the 8.0 series), they waited for gcc 3.1.

            In any case, from my experience with Debian and GCC, whether they released 7.3 had little to do with when 8.0 came out. They had important changes for 7.2, and released them.

            Fancy that-- they actually released something as big as 3.0, without checking that it could compile KDE correctly.

            It wasn't a good thing that they did that. However, gcc only has so many people working on it. I know, short of a huge last-minute mistake, that my important code will work with GCC 3.1. There are Linux kernel developers who use kernels compiled with GCC 3.1. It should be worth it to someone in the KDE world to compile KDE with GCC snapshots and catch stuff that doesn't work.
      • You wouldn't happen to be one of the wankers at RedHat that decided to ship this wee lump of doo, would you?
    • Note: About my insinuations about GCC 2.96 brokenness, I work side by side with a person who used to be on the GCC/GNU team, and has found strange bugs in certain versions of glibc that were compiled by the 2.96 series. The bugs went away when using a release glibc compiled by a release GCC.

      gcc 2.96 contained bugs, but so did 2.95. Improving a compiler sometimes results in bugs that weren't there earlier ("regressions"), and we can see examples of this sort of thing in gcc 3.0. There are a number of improvements in gcc 2.96. As someone who writes C++ code, I've observed some important improvements in support for ISO/ANSI C++.

      It has been said that if a broken compiler compiles a library, the library can be strangely broken and very difficult to debug.

      I've got news for you -- all this emotive rhetoric about "broken" gcc 2.96 isn't supported by facts. There are a number of things in gcc 2.95 that are also "broken", and on the balance of it, gcc 2.96 comes out as a somewhat stronger compiler.

      On a side note, as far as GCC 3.x not being ready for prime time: for C it surely is. I don't know about the rest, but for C it is, as far as I can see, quite usable, stable and reliable, with some interesting new optimizations.

      Hmmmm... I've had some very weird bugs pop up with the "interesting new optimisations". Seriously, gcc 3.0 is a tremendous improvement in a number of ways, but there are some fairly show-stopping bugs (including a substantial C++ ABI bug which means that it can't compile KDE correctly). Because of this bug, gcc 3.0 is unsuitable as a compiler for a Linux distribution, and this is why no distributor is going to ship it as the primary compiler.

      I think that GNU has been a great force in the world, and to uselessly outpace them or point fingers at them is frustrating and bad for both camps.

      It's not "useless" outpacing. gcc 3.0 was late, and the 3.1 release that dealt with the stuff that needed fixing in gcc 3.0 came almost a year later. What Redhat did was release their own derivative version of the gcc 3 CVS, customised for use as a distribution compiler. gcc 3.0, on the other hand, is not useful as a distro compiler.

      Programmers ignore it in favor of GNU releases. Debian ignores it.

      Programmers ignoring Redhat? Laughable in the extreme. Programmers don't ignore it, but even if they did, it wouldn't matter, because anything that will compile with gcc 3.0 and gcc 2.95 will compile with gcc 2.96. A number of distributors have included it.

  • GCC white tower. (Score:3, Interesting)

    by Zapman ( 2662 ) on Tuesday May 07, 2002 @02:11PM (#3478552)

    8. What did you think about the Intel Compiler v6 that came out recently? Did you have time to have a look at it?

    Mark Mitchell: I do not have enough information about that compiler to comment on it.

    I know that x86 is just 1 arch for gcc, however it is an important one, being the most common. I would think that those heavily involved with gcc (and especially the x86 backend) would be much more interested in how well the other compilers performed since they are 'the competition' as it were.

    It's kind of depressing that he 'doesn't have enough information to comment'.

    If Intel's compiler had been released last month or thereabouts, I could understand, but IIRC, it was released about 6 months ago.


    • I know that x86 is just 1 arch for gcc, however it is an important one,

      Yes, indeed.

      Not knowing the first thing about compilers or the details of how gcc looks on the back end, I'm curious:

      Does the instruction-set-agnostic nature of gcc severely limit how much optimization the compiler can do?
      Not to complain too much, though. I've always been impressed that gcc runs on such an incredibly broad range of platforms. I've used it for over 10 years and would frequently build it on new machines, either where the compiler was not bundled and licensed for general use, or where the vendor-supplied compiler was unable to compile my code.
      • Does the instruction-set-agnostic nature of gcc severely limit how much optimization the compiler can do?
        It turns out that compiler optimizations divide nicely into two categories:
        1. Optimizations on the Intermediate Representation (i.e. not specific to any platform)

        2. Optimizations on the generated (CPU-specific) code
        So the only drawback is that optimizations for a specific CPU may or may not be appropriate for other CPUs. However, even at the CPU-specific level, there's a lot of stuff (e.g. register allocation, instruction scheduling) that could use the same code base, just using CPU-specific callbacks as needed (i.e. use the Framework Pattern).
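        As a rough, hypothetical sketch of that split (not GCC's actual
        internals): a target-independent pass works purely on the intermediate
        representation, while a target-dependent pass consults per-CPU callbacks.

        #include <string>
        #include <vector>
        #include <iostream>

        // Toy IR operation: an opcode name plus one integer operand.
        struct IROp { std::string name; int operand; };

        // Per-target callbacks: one implementation per CPU backend.
        struct TargetHooks {
            virtual ~TargetHooks() {}
            virtual bool prefer_shift_for_mul(int constant) const = 0;
        };
        struct X86Hooks : TargetHooks {
            bool prefer_shift_for_mul(int c) const override {
                return c > 1 && (c & (c - 1)) == 0;   // powers of two
            }
        };

        // Target-independent pass: drop useless "multiply by 1" operations.
        void fold_trivial_mul(std::vector<IROp>& ir) {
            std::vector<IROp> kept;
            for (const IROp& op : ir)
                if (!(op.name == "mul" && op.operand == 1))
                    kept.push_back(op);
            ir = kept;
        }

        // Target-dependent pass: turn mul into a shift where the hooks allow it.
        void strength_reduce(std::vector<IROp>& ir, const TargetHooks& target) {
            for (IROp& op : ir)
                if (op.name == "mul" && target.prefer_shift_for_mul(op.operand)) {
                    int shift = 0;
                    while ((1 << shift) < op.operand) ++shift;
                    op.name = "shl";
                    op.operand = shift;
                }
        }

        int main() {
            std::vector<IROp> ir = { {"mul", 1}, {"mul", 8}, {"add", 3} };
            fold_trivial_mul(ir);              // generic, runs for every backend
            strength_reduce(ir, X86Hooks());   // CPU choices behind the callback
            for (const IROp& op : ir)
                std::cout << op.name << " " << op.operand << "\n";
        }

        The point is only that fold_trivial_mul never needs to know the target,
        while strength_reduce reaches it through a narrow callback interface.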

    • I have used that compiler, ICC, and it is impressive. I would recommend it to anyone. It increases speed by at least 15%, and this holds on the Athlon as well. It is the best x86 compiler out there, bar none, in terms of performance.

      Keep in mind this compiler has strange licensing and weird runtime restrictions that curiously favor the use of IA-64. A way around the licensing restrictions on linking is to build statically.

      It would be nice if Intel gave the GCC team the secret sauce which makes this one so much better at optimizing code. I do not flame GCC at all; I like it, and clearly it is superior at going across architectural boundaries. I just wish HP/DEC/Compaq, Sun and Intel were more willing to show them a 'thing or two' about insightful ways to optimize the compiler.

      Also, note that ICC can not compile the kernel. I know, and have been told, that there are things present in GCC that allow the successful build of the kernel that do not (CAN NOT?) exist in ICC. It would be interesting to see if someone could get the kernel to build with ICC by hacking a mixture of the two compilers.

      Another strange thing about ICC is that they are not entirely GNU-aware, and have an odd licensing file and install procedure. We had some problems with linking that were an obvious kluge and required a workaround. It would be nice to see the source for this, but that will not happen in the near future.

      Best of luck to both teams, and I suggest that Intel, if you want the world to love x86 (even more), drive the stake in, and show us the source so that we can make GCC considerably better on x86.

      I recommend that people try ICC whenever possible and take measurements.

      I appreciate his honesty in not commenting on it, but seriously, this guy is way too smart to ignore what else is going on in compiler-land, and if he isn't measuring himself against the "competition," then he has stuck his head in the sand. I recommend that wherever possible he build every piece of code with the respective compilers and look for comparative anomalies and/or performance differentials in order to learn from that.

      Best of luck to the GNU GCC team!

    • The goal of GCC is to work on a wide variety of architectures, which it does. Another goal is to comply with various language standards, which is getting there. Performance is a much lower priority -- it's "good enough" for most uses, and if you seriously need to crunch some numbers, well, then you need to spring for the vendor's compiler. My experience (several years working on software that ran on SunOS/Solaris, AIX, HP-UX, and (Open)VMS) was that GCC was a superior tool for development and debugging (much better compliance and error messages), but then we went with the vendor compilers for final testing and release to get the performance we needed.

      GCC is *not* competing with ICC, because they have different goals and different target markets. As far as Mr. Mitchell evaluating ICC, he probably can't due to fears of copyright or patent infringement.

      • > As far as Mr. Mitchell evaluating ICC, he probably can't due to fears of copyright or patent infringement.

        That seems pretty unlikely. For patents, if gcc were to step on an icc patent, it would do so whether or not he used ICC.

        For copyrights, I don't think he'd even potentially be liable unless he saw the ICC source code. And Intel keeps the source secret, right?

        IANAL of course.
    • Yes, the x86 platform is very important, and therefore I would have expected a competent compiler developer to at least take a look at the free (as in beer) Intel compiler.

      Mitchell comments that the speed of _your_ code is the only important benchmark. This is very true. Well, my code runs, on average, 2.1 times faster on my PC when compiled with the Intel compiler. This speed increase is just too large to ignore.

      A friend's application runs more than 3x faster after compilation with icc -- and that is after spending considerable effort determining the optimal optimisation parameters for gcc, and then just using "-O2 -tpp6" with icc!!!

      I like gcc... but it is currently losing the race.
      • Everyone should read the front page of http://gcc.gnu.org; there you will see that the DFA branch has been merged into the mainline (aka the future gcc 3.2). A couple of people are working on the DFA-based processor descriptions for x86: Athlon and Pentium I, II, III, and IV, and also the 486.
        • Yes, but the Intel compiler is here NOW. And my code is shipping NOW.

          When will a stable gcc 3.2 be available? I expect the Intel compiler will have been improved again by then. Compare icc 5 with icc 6.
      • I like gcc... but it is currently losing the race.

        Any given compiler will lose some races. If the rules of your race call for the most optimizing compiler for the ix86, then GCC loses. If your race calls for a compiler that's portable to the various Linux, Windows and BSD platforms and that your users can get easily (i.e. comes with their OS, preferably), or they call for an Objective C, Java, Fortran or Ada compiler, then GCC leaves icc in the dust.
        • I agree with what you say, but would like to add a few comments:

          - If speed is an issue, then you'd nearly always choose the native compiler. (Supporting multiple compilers does introduce extra issues because lazy programmers will rely on a portable compiler instead of writing portable code.) gcc couldn't be expected to compete, speed-wise, on a huge range of systems.

          - In the case where you ship code, you shouldn't rely on the end-user having a particular compiler. I really, really wish that every system did have gcc though.

          - If you need OpenMP / threaded code, then gcc/gdb aren't even in the race.
          • If speed is an issue, then you'd nearly always choose the native compiler.

            This isn't automatically true. When GCC was first ported to the Sun, it beat the native compiler by, IIRC, 30%. While I have no recent numbers, it wouldn't surprise me if this were still true in some cases; GCC has a lot of optimization work done on it, and the hardware vendor may not have the ability to produce a highly optimizing stable compiler. The ix86, a CISC chip that brings in loads of cash for its maker, is of course an exception.

            Supporting multiple compilers does introduce extra issues because lazy programmers will rely on a portable compiler instead of writing portable code.

            The set of strictly conforming C or C++ programs isn't very large, and the set of C or C++ compilers guaranteed to work correctly is empty. How much work one should take to work around the bugs and hideous malfeatures of many compilers instead of dealing with the bugs and hideous malfeatures of just one compiler is up to the programmer; many programs aren't worth the trouble to get them running for the people who insist on compiling them with WeirdC (we've been working towards ANSI C for almost twenty years now!).

            If you need OpenMP / threaded code, then gcc/gdb aren't even in the race.

            OpenMP isn't available, but threading is; you can use various forms of native threads for C or C++, or you can go straight to Java or Ada and let the compiler deal with the native threads.
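            For what it's worth, here is a minimal sketch of the "native threads
            for C or C++" route (a hypothetical example, built with something
            like g++ -O2 sum.cc -lpthread):

            #include <pthread.h>
            #include <cstdio>

            // Each worker sums one half of the array; main() joins and combines.
            struct Chunk { const int* data; int count; long sum; };

            extern "C" void* sum_chunk(void* arg) {
                Chunk* c = static_cast<Chunk*>(arg);
                c->sum = 0;
                for (int i = 0; i < c->count; ++i)
                    c->sum += c->data[i];
                return 0;
            }

            int main() {
                int values[8] = {1, 2, 3, 4, 5, 6, 7, 8};
                Chunk chunks[2] = {{values, 4, 0}, {values + 4, 4, 0}};
                pthread_t workers[2];
                for (int i = 0; i < 2; ++i)
                    pthread_create(&workers[i], 0, sum_chunk, &chunks[i]);
                for (int i = 0; i < 2; ++i)
                    pthread_join(workers[i], 0);
                std::printf("total = %ld\n", chunks[0].sum + chunks[1].sum);
                return 0;
            }

            Debugging this under gdb is exactly the pain point raised in the
            reply below, but the threads themselves are plain POSIX calls that
            gcc compiles without complaint.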

            • If speed is an issue, then you'd nearly always choose the native compiler.

              This isn't automatically true.

              What? You don't think that it isn't automatically true that you would nearly always choose the native compiler?

              OpenMP isn't available, but threading is; you can use various forms of native threads for C or C++, or you can go straight to Java or Ada and let the compiler deal with the native threads.

              So I assume that you've tried debugging multi-threaded C/C++ code with gdb then? Java isn't an option for typical CPU intensive code. I'd love to use Ada, except I'm not about to migrate half a million lines of code.
    • MM is primarily a C++ front-end guy (when not being a release manager), and the interesting aspects of ICC are mostly in the IA-32 backend, so ICC is unlikely to be that interesting to MM even once he has time.

      I believe ICC uses a version of the generic Edison C++ front-end (most good C++ compilers do), which MM is most likely already familiar with.
  • I respect the guy for saying he doesn't know the answer rather than trying to make it up, but I don't see why the original website ran the story, let alone why it should be posted on Slashdot. The percentage of sentences with useful information seems to be in the low teens, and it's not like it's a terribly long article where that can be expected...
  • Is it me, or did he sound like compiler output?
    "Not to my knowledge"
    "No, It's not."

    Never anything helpful ;)
    • Is it me, or did he sound like compiler output?

      "Not to my knowledge"
      "No, It's not."

      Never anything helpful ;)

      To the contrary - if this had been an interview about a commercial product, he'd never have said no, he would have implied that they were working on or investigating every feature mentioned, and he'd have turned bad points into positive spin.

      This was frank, refreshing and to the point. Short and factual is good.

  • 4. What are your thoughts about moving the codegen after the linker, to allow for global optimization/inlining?

    Compile the whole program at once: create one file that includes all the source files and compile it in one step. Works fine, and eliminates duplicate string constants.
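    As a minimal sketch of that "one big file" approach (the file names here
    are made up for illustration):

    // all.cc -- hypothetical single translation unit that pulls in every
    // source file of the program, then gets compiled in one step:
    //   g++ -O2 -o app all.cc
    #include "parser.cc"
    #include "codegen.cc"
    #include "main.cc"
    // With everything in one translation unit the compiler can inline across
    // what used to be file boundaries and merge duplicate string constants.

    The obvious caveats: the files must not clash on file-scope names, and a
    change to any one of them recompiles everything.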

  • Why was this posted? Totally devoid of any content.

    Think of all the innocent electrons wasted!
