Examining the User-Reported Issues With Upgrading From GCC 4.7 To 4.8

Nerval's Lobster writes "Developer and editor Jeff Cogswell writes: 'When I set out to review how different compilers generate possibly different assembly code (specifically for vectorized and multicore code), I noticed a possible anomaly when comparing two recent versions of the g++ compiler, 4.7 and 4.8. When I mentioned my concerns, at least one user commented that he also had a codebase that ran fine after compiling with 4.6 and 4.7, but not with 4.8.' So he decided to explore the difference and see if there was a problem between 4.7 and 4.8.1, and found a number of issues, most related to optimization. Does this mean 4.8 is flawed, or that you shouldn't use it? 'Not at all,' he concluded. 'You can certainly use 4.8,' provided you keep in mind the occasional bug in the system."
  • by Anonymous Coward

    If it ain't broke, don't fix it. No need to upgrade.

  • by Anonymous Coward on Monday January 20, 2014 @12:00PM (#46014921)

    Thanks for another worthless uninformative article.

  • Holy fuck, I sure won't be using this for anything mission-critical.
    • terrible news for you that will shatter your world-view: all compilers of any language and of any version have bugs

    • by NoNonAlphaCharsHere ( 2201864 ) on Monday January 20, 2014 @12:02PM (#46014933)
      99.2% of the people who use the phrase "mission critical" don't have anything "mission critical".
      • This is something I've wondered about for a long time. All software has bugs. It's impossible to write non-trivial software that is absolutely 100% perfect. And that would include compilers. Especially compilers because they are very complex programs. I wonder how many crashes/bugs in software are actually the result of bugs in the compiler?

        • by Anonymous Coward

          I wonder how many crashes/bugs in software are actually the result of bugs in the compiler?

          Far fewer than the ones due to bugs in your actual code. If you want to start blaming the compiler or the hardware for your problems, you better be damn sure.

          • by Anonymous Coward

            If you want to start blaming the compiler or the hardware for your problems, you better be damn sure.

            I've seen a few compiler errors (in 30+ years). The most memorable was back when C++ was so new that we used the precompiler to generate C code and compiled that. On one architecture it generated (correct) C code which broke the C compiler. We got a new version of the C compiler from the vendor to increase some internal limits.

            I've seen one hardware error (not counting assorted Pentium bugs). It was a numerical error...

            • Took a while but we boiled it down to a minimal program which would reproduce the bug

              I did that once to demonstrate a bug in a COBOL compiler. The vendor had trouble understanding what such a short COBOL program did.

        • by 0123456 ( 636235 )

          I wonder how many crashes/bugs in software are actually the result of bugs in the compiler?

          I think I've seen two in twenty years. So they happen, but not often, and usually only when they run into very unusual code.

          • by gl4ss ( 559668 ) on Monday January 20, 2014 @12:54PM (#46015545) Homepage Journal

            obviously you're not an ex-Symbian developer!

          • by gnasher719 ( 869701 ) on Monday January 20, 2014 @12:58PM (#46015581)

            I think I've seen two in twenty years. So they happen, but not often, and usually only when they run into very unusual code.

            That's about my rate. Including one where the compiler gave a warning which didn't match the actual C code, but did match the code the compiler generated. But add a few occasions where people swore it was a compiler bug and were proved wrong. One where converting -sizeof (int) to "long" produced a value of 65534. One (many years ago) where Sun compiler engineers actually had to explain sequence points to us :-( And one where the same header file was included with different #defines, which changed the size of a struct - for that one I could have killed someone.
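
            A minimal sketch of the -sizeof(int) anecdote above, assuming a platform with a 16-bit size_t and a 2-byte int (the comment doesn't name the original platform, so those details are assumptions):

```cpp
// sizeof yields an unsigned size_t, so unary minus wraps around
// instead of producing a negative number.
#include <cstdio>

int main() {
    // With a 16-bit size_t and sizeof(int) == 2, -sizeof(int) is
    // 65536 - 2 == 65534, which then converts to long unchanged.
    // (On a typical modern 64-bit system the same expression instead
    // converts to -4, which hides the bug.)
    long n = -sizeof(int);
    std::printf("%ld\n", n);
    return 0;
}
```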

          • I've seen two and in both cases it was VC 6.0.

          • by chaim79 ( 898507 ) on Monday January 20, 2014 @01:39PM (#46015989) Homepage

            I wonder how many crashes/bugs in software are actually the result of bugs in the compiler?

            I think I've seen two in twenty years. So they happen, but not often, and usually only when they run into very unusual code.

            You see them more often in the embedded world than on full computers. A big one I ran into recently was with the Freescale 68HC12, an ancient processor and compiler. It would randomly decide whether an increment or decrement (var++; or var--;) would be done as an integer increment/decrement (add/subtract 1) or a pointer increment/decrement (add/subtract 2). We had a lot of interesting bugs where it would randomly decide that a for loop should do pointer math instead of integer math and we'd skip half the work.

            This was very recent, and with the latest patches (for some definition of latest... they were concentrating on their new Eclipse-based IDE with its GCC compiler, so this one wasn't being worked on).
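
            A hypothetical reconstruction of the miscompile described above (not the actual project code; the function and names are invented for illustration):

```cpp
// With the buggy HC12 compiler, the i++ below was sometimes emitted
// as a pointer-style "add 2" instead of an integer "add 1", so the
// loop silently skipped every other element.
void clear_all(int* data, int n) {
    for (int i = 0; i < n; i++) {
        data[i] = 0;
    }
}
```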

            • Seconded - years ago I worked with a particularly awful PIC compiler. It would be fine until my compiled output size crossed an unknown threshold. Then it wouldn't just break - it would shatter. Terrible crap. I wasted 6 weeks massaging that POS before I demanded a better compiler. I was new back then.

              But there's a twist - my boss was able to make it work, probably because his code lacked any structure and used all global variables. And he STILL uses it for PIC work. But working on bigger projects has gotten...

            • Bad idea to use HC12 with GCC. It never had proper support (thinking way back). The commercial compilers were expensive though.
            • Oh god vendor compilers.

              The horror, the horror.

              Seriously, I don't understand it. Hardware companies make nice hardware, then ship these amazingly shoddy compilers. Not only that, they make them super-extra-mega-proprietary, as if there were some great trade secret, when they should be using such protection to hide their richly deserved shame.

              Why do they get so uptight about the software which they are clearly so bad at?

              Ah, when GCC started taking over from those it was sheer joy. And because of the GPL they seem...

            • You should have been using GCC all along if the commercial compiler is so crusty that it can't be trusted.

          • by sjames ( 1099 )

            I've seen a few more, but they generally show up when aggressive optimizations are enabled and go away when they're turned off.

            I did once find one that happened in any case. I ended up changing a couple lines of code to equivalent ones that didn't seem to trip up the compiler.

        • Known bugs vs unknown bugs.

          One you can work around, one you work through.

        • by Bert64 ( 520050 )

          I've had a few where code has to be compiled with optimization turned off, or set to a lower than usual level, in order for the program to work.

        • by AdamInParadise ( 257888 ) on Monday January 20, 2014 @05:34PM (#46018713) Homepage

          Not true. Check this out: CompCert [inria.fr], a formally verified C compiler (i.e. 100% perfect).

          And you can use it today, for free (GPL), on real programs.

      • Do what I do: replace any mention of "mission critical" with "business critical".

        But even then, miscompiles do happen with literally every compiler and are hardly "business critical". One miscompile can't bring the company down. Unlike, for example, one melted-down nuclear reactor.

        If tests haven't exposed the problem, then it is rather lack of testing which is the problem.

      • by msobkow ( 48369 )

        But virtually 100% of the people using the phrase "mission critical" are the ones who approve your paycheques and thereby determine your priorities.

      • by Mashdar ( 876825 )

        Just because the mission isn't critical does not mean that nothing is mission critical. :)

    • by tibit ( 1762298 )

      Based on a non-article? I sure hope your mission itself isn't all that critical, 'cuz you fail at reading, and Cogswell fails at articulating his thoughts, if he has any worth articulating, that is.

  • Does this mean 4.8 is flawed, or that you shouldn't use it? 'Not at all,' he concluded. 'You can certainly use 4.8,' provided you keep in mind the occasional bug in the system."

    It reminds me of the [in]famous Windows 9x BSOD whenever I wanted to print some particular Word document. If I wanted it to print without throwing the BSOD, all I had to do was remove the leading space at the beginning of the header. The same document printed fine in Windows XP.

    With this kind of logic, it just doesn't make sense!

  • by queazocotal ( 915608 ) on Monday January 20, 2014 @12:06PM (#46014989)

    Though the code behaves differently with and without optimisation, and does not work on the new compiler whereas it did on the old, this does not mean there is a bug in the compiler.

    GCC, Clang, acc, armcc, icc, msvc, open64, pathcc, suncc, ti, windriver, and xlc all perform optimisations that vary across versions and that rely on exact compliance with the C standard. If your code violates that standard, it risks breaking on upgrade.

    http://developers.slashdot.org/story/13/10/29/2150211/how-your-compiler-can-compromise-application-security [slashdot.org]
    http://pdos.csail.mit.edu/~xi/papers/stack-sosp13.pdf [mit.edu]
    Click on the PDF, and scroll to page 4 for a nice table of optimisations vs compiler and optimisation level.

    _All_ modern compilers do this as part of optimisation.

    GCC 4.2.1, for example, with -O0 (least optimisation) will eliminate if (p + 100 < p).

    This doesn't at first glance seem like insane code for checking whether a buffer will overflow if you put some data into it. However, the C standard says that overflowing a pointer is undefined, which means the compiler is free to assume it never occurs and can safely remove the test.
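
    To make the point concrete, here is a minimal sketch of the doomed check and one well-defined rewrite (the helper names are invented for illustration):

```cpp
#include <cstddef>

// Undefined: pointer arithmetic past the end of the object may not
// wrap, so the compiler may fold this test to false and delete it.
bool will_wrap_bad(char* p) {
    return p + 100 < p;
}

// Well-defined: compare the remaining space as an unsigned size
// instead of relying on pointer wraparound (assumes p <= end).
bool fits(char* p, char* end, std::size_t need) {
    return static_cast<std::size_t>(end - p) >= need;
}
```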

    • >GCC 4.2.1 for example, with -o0 (least optimisation) will eliminate if(p+100p)
      Seriously? Wait, no, I think Slashdot just ate your &lt;, and that should be if(p+100 < p)

      edit: Wait, Slashdot silently swallows malformed "HTML tags", but doesn't parse &lt; properly? How the $#@! are you supposed to include a less-than sign?

    • The start of the summary was just so bizarre to me. Of course different versions generate different code; that's what happens when you change how code is optimized. Why would someone set out to investigate this, except as a question of how it improves the code?

      Now, if there's a bug, that's a different issue, and all compilers are going to have some bugs somewhere, as these are complex pieces of code. But a change in the output alone should never be treated as evidence of a bug.

      • The GP has a valid point. Most people complaining about these optimizer "bugs" likely have undefined behavior. In C and C++, the compiler/optimizer/linker is given full freedom in what to do with it. Often the compiler will just eliminate the code; in theory, it could format your hard drive. Yes, compiler bugs do happen, but they tend to be rare. The last GCC bug I saw was on a minor revision of 4.1.2 that caused an ICE (internal compiler error) when you had an anonymous namespace at the global namespace scope...
  • by Dan East ( 318230 ) on Monday January 20, 2014 @12:08PM (#46015009) Journal

    I've only run into a few compiler bugs (like the one in this article, almost always due to the optimizers), and it was always incredibly aggravating, because it's easy to believe that compilers are always perfect. Granted, they might not produce the most efficient code, but bugs? No way! Of course I know better now, and most of the bugs I came across were back in the Pocket PC days when we had to maintain 3 builds (SH3, MIPS and ARM) for the various platforms (and of course the bugs were specific to an individual platform's compiler, which actually made it a little easier to spot a compiler bug, since a simple piece of code would work on 2 of 3 architectures).

  • Duh? (Score:5, Informative)

    by Fwipp ( 1473271 ) on Monday January 20, 2014 @12:11PM (#46015047)

    The article basically says:
    "GCC 4.8 includes new optimizations! Because of this, the generated assembly code is different! This might be BAD."

    Like, duh? Do you expect optimizations to somehow produce the same assembly as before, except magically faster?

    The linked "bug" is here: http://stackoverflow.com/questions/19350097/pre-calculating-in-gcc-4-8-c11 [stackoverflow.com] - which says, "Hey, this certain optimization isn't on by default anymore?" And to which the answer is, "Yeah, due to changes in C++11, you're supposed to explicitly flag that you want that optimization in your code."

    So, yeah. Total non-story.
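
    For reference, a minimal sketch of the C++11 point made above - an assumed shape of the Stack Overflow example, not the exact code from the question:

```cpp
// Under C++11 rules, marking both the function and the variable
// constexpr guarantees compile-time evaluation; a plain call merely
// permits the optimizer to fold it, which is the behaviour that
// changed between GCC versions.
constexpr long long factorial(int n) {
    return n <= 1 ? 1 : n * factorial(n - 1);
}

int main() {
    constexpr long long f = factorial(15);  // forced to compile time
    return static_cast<int>(f & 0xff);
}
```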

    • Re:Duh? (Score:4, Interesting)

      by CadentOrange ( 2429626 ) on Monday January 20, 2014 @12:15PM (#46015103)
      It's by Jeff Cogswell. I ignore any of the "articles" he writes as they're either misleading bullshit or chock full of Captain Obvious statements.
      • by Fwipp ( 1473271 )

        Oh, thanks for the heads-up. I'll be sure to ignore him in the future, then. :)

      • Wasn't he the one who started out with that ridiculous Java vs C# comparison? http://slashdot.org/topic/cloud/java-vs-c-which-performs-better-in-the-real-world/ [slashdot.org] I usually ignore any article with his name on it, but I am studying GCC for a course. Turns out, I should have continued ignoring.

        • I am studying GCC for a course.

          Curious, can you elaborate on the course? Is it about compiler architecture & theory? If not, what?

          • I misspoke: I am not studying GCC for a course; the course is CS 715 at IIT Bombay - Design and Implementation of the GNU Compiler Generation Framework, taught by Prof. Uday Khedker.
            The course plan includes studying the various passes (analysis, optimizations, etc.) that GCC makes, adding/modifying passes, and implementing a machine description for GCC. The languages analyzed are C and C++, with initial activity on x86 systems and then on spim, the MIPS simulator.

    • by twdorris ( 29395 )

      The linked "bug" is here: http://stackoverflow.com/questions/19350097/pre-calculating-in-gcc-4-8-c11 [stackoverflow.com] - which says, "Hey, this certain optimization isn't on by default anymore?" And to which the answer is, "Yeah, due to changes in C++11, you're supposed to explicitly flag that you want that optimization in your code."

      That linked "bug" appears to be an actual "bug" since a fix for it was posted to 4.8.2. See here.

      http://gcc.gnu.org/bugzilla/show_bug.cgi?id=57511 [gnu.org]

  • Affects me (Score:3, Interesting)

    by Anonymous Coward on Monday January 20, 2014 @12:15PM (#46015101)

    One of the projects I work on will compile and run perfectly with GCC 4.6 and any recent version of Clang. However, compiling under GCC 4.7 or 4.8 causes the program to crash, seemingly at random. We have received several bug reports about this and, until we can track down all the possible causes, we have to tell people to use older versions of GCC (or Clang). Individual users are typically fine with this, but Linux distributions standardize on one version of GCC (usually the latest one) and they can't/won't change, meaning they're shipping binaries of our project known to be bad.

    • by tibit ( 1762298 )

      You probably have buggy code that depended on implementation-defined behavior (or even undefined behavior), and it came back to bite you. It's on you to instrument your app to get crash reports and figure out what went wrong - if it's "broken" under both 4.7 and 4.8, it's very likely your own bug.

    • Re: (Score:3, Informative)

      by cheesybagel ( 670288 )

      Run your program under valgrind and fix the damned bugs.
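
      For anyone unfamiliar, a minimal example of the kind of latent bug valgrind flags even when a program appears to run fine:

```cpp
#include <cstring>

int main() {
    char* buf = new char[8];
    std::strcpy(buf, "12345678");  // 8 chars + '\0' = 9 bytes: heap overflow
    delete[] buf;                  // valgrind: "Invalid write of size 1"
    return 0;
}
```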

    • On a related note, does anyone have any suggestions on how to track down such bugs? Are there, for example, code-analysis tools that will highlight code with undefined behavior likely to give different results when optimized, or valid code that may trigger known compiler bugs? It seems like such a thing would be immensely valuable - if I have a compiler-related mystery bug *somewhere* in my codebase, being able to narrow that down to even the 0.1% of lines containing "suspicious" code could make the difference.

      • A project specifically about finding undefined behavior is STACK [mit.edu]. It didn't find any problems on the two projects I tried it on, but one of those is rather small and the other is pretty mature, so maybe most of the undefined behavior has been fixed already.

        Just setting the warning levels a bit higher ("-Wall -Wextra" in GCC; despite the name "-Wall" doesn't actually enable all warnings) will already help a lot in spotting potentially dangerous constructs.

        Also useful is Clang's analyzer mode ("clang --analyze").
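
        As a small illustration, code like the following compiles silently by default but is flagged once the warning levels above are enabled (the function here is invented for illustration):

```cpp
#include <vector>

int count_items(const std::vector<int>& v) {
    int n = 0;
    // g++ -Wall -Wextra: -Wsign-compare warns that the signed i is
    // compared against the unsigned v.size().
    for (int i = 0; i < v.size(); ++i)
        ++n;
    return n;
}
```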

        • Thanks!

          I've been playing with cppcheck for a while now, it should be interesting seeing how STACK (ahem) stacks up. I find myself almost hoping it finds some nasty bugs-waiting-to-happen in my code just to see it in action "firsthand" as it were.

    • by Anonymous Coward

      In addition to the previously mentioned valgrind, try compiling with "-Wall -Wextra -pedantic" under recent versions of GCC and Clang.

    • by jmv ( 93421 )

      There's about a 99% chance that the "problem" with gcc >=4.7 is that your code isn't compliant with the C standard (e.g. relies on undefined behaviour) and there's a new gcc optimization that makes a new (legal) assumption that your program violates.
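
      A classic instance of that pattern, sketched for illustration (not taken from any particular bug report):

```cpp
// Dereferencing p lets the optimizer assume p is non-null for the
// rest of the function, so a newer GCC may legally delete the later
// null check - code that "worked" on 4.6 then crashes on 4.7/4.8.
int read_flag(int* p) {
    int v = *p;          // UB if p is null
    if (p == nullptr)    // this branch may be optimised away
        return -1;
    return v;
}
```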

  • by necro81 ( 917438 ) on Monday January 20, 2014 @12:21PM (#46015169) Journal
    So, as has always been the case: use optimizers with caution, and verify the results. This is standard software development procedure. Some optimizations are deterministic and straightforward, and therefore pretty low risk; other optimizations can have unpredictable results that can break code.
  • by gnasher719 ( 869701 ) on Monday January 20, 2014 @12:29PM (#46015269)
    He actually observed that different assembler code was generated - well, how do you think you can generate _faster_ assembler code without generating _different_ assembler code?

    The article does _not_ claim that any code works incorrectly or gives different results. The article _doesn't_ examine any user-reported issues. So on both counts, the article summary is totally wrong.
    • by TheRaven64 ( 641858 ) on Monday January 20, 2014 @12:51PM (#46015517) Journal

      Add to that, when we test compiler optimisations we do it on some body of code, on some set of microarchitectures, and enable an optimisation if it is a net benefit over our sample set. We don't (and can't) test every possible combination of code and microarchitectures. One of my favourite stories in this regard is actually a CPU optimisation rather than a compiler one. A newer generation of ARM core improved the branch predictor, and most things got faster. One core library used by a certain mobile phone OS got noticeably slower. It turned out that on the old CPU, the wrong branch was being predicted at a specific point, which caused a load instruction to be speculatively executed and then discarded. When they improved the prediction, the correct path was taken. In this case, the value of the load was required some time later. The bad behaviour was costing them a pipeline flush, but it was pulling the data into the cache. The good behaviour was causing them to block on a memory read. A stall of a dozen or so cycles became a stall of a hundred or so cycles, even though the new behaviour was, in principle, the better one.

      For compilers, you'll see similar problems. A few years ago, I found that my Smalltalk compiler was generating faster code than gcc for a fibonacci microbenchmark. It turned out that gcc just happened to hit a pathological case for cache layout in their code, which was triggering a load of instruction cache misses. Trivial tweaks to the C code made it an order of magnitude faster.

      If you really care about performance for your code, you should be regularly building it with the trunk revision of your favourite compiler and reporting regressions early. Once there's a test case, we have something to work with.

      • As an undergrad we'd get a variety of new machines for my particular year, so with every new class, it seemed, we were on a different machine the department had just acquired, rather than the tried and true VAXen. One of the machines was a new RISC-like computer (Pyramid) used by our compiler course. Every so often, however, the machine would crash and leave all the students in the terminal lab milling about until it came back up. The computer center staff were stumped by this until they tracked it down to one...

    • by tibit ( 1762298 )

      It's even worse: I just have no idea what the article's point is, other than having a stab at some poor-man's innuendo. It's like if the author set out to write something, then promptly forgot what. Definitely doesn't cog well, that one.

    • by Megol ( 3135005 ) on Monday January 20, 2014 @01:15PM (#46015747)
      Not assembler code - assembly code. Assembler = compiler for assembly code.

      (Pet peeve - sorry)

  • I _cannot wait_ to see how much hilarity ensues in the Gentoo world, where it's real common for random clowns with no debugging (or bug reporting) ability to have -Oeverything set.
  • by inglorion_on_the_net ( 1965514 ) on Monday January 20, 2014 @12:47PM (#46015487) Homepage

    Having been somewhat involved in migrating a lot of C++ code from older versions of gcc to gcc 4.8.1, I can tell you that 4.8.1 definitely has bugs, in particular with -ftree-slp-vectorize. This doesn't appear to be a huge problem: almost all the (correct) C++ code we threw at the compiler produced correct output, meaning that the quality of the compiler is very good overall. If you do find a bug, and you have some code that reproduces the problem, file a bug report and the gcc devs will fix it. At any rate, gcc 4.8.2 has been out for a number of months now, so if you're still on 4.8.1, you may want to upgrade.
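
    For context, this is the kind of straight-line code -ftree-slp-vectorize (enabled at -O3 in GCC 4.8) targets; if you suspect a vectorizer bug, rebuilding with the pass disabled is a quick differential test (the example function is illustrative):

```cpp
// Compile with: g++ -O3 file.cpp            (SLP vectorization on)
// versus:       g++ -O3 -fno-tree-slp-vectorize file.cpp
void add4(float* __restrict a, const float* __restrict b) {
    a[0] += b[0];
    a[1] += b[1];
    a[2] += b[2];
    a[3] += b[3];
}
```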

  • cmpxchg8b (Score:2, Informative)

    by little1973 ( 467075 )

    I haven't tried this with the latest version, but even a version 4.x GCC cannot generate inline code for the 8-byte version of cmpxchg (cmpxchg8b) in 32-bit code. Doing it in a separate function is OK.

    I think the problem is that this instruction takes up almost all of the registers, and GCC cannot cope with that if you want to do it inline.

    cmpxchg8b is useful for lock-free code.
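
    A sketch of one way to coax an 8-byte compare-and-swap out of GCC on 32-bit x86, using the __sync builtin (built with, e.g., -m32 -march=i586 so cmpxchg8b is available; whether GCC inlines it or emits a call is exactly the version-dependent limitation described above):

```cpp
#include <cstdint>

uint64_t cas64(uint64_t* addr, uint64_t expected, uint64_t desired) {
    // May compile to an inline lock cmpxchg8b, or to a library call,
    // depending on the GCC version and the -march setting.
    return __sync_val_compare_and_swap(addr, expected, desired);
}
```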

  • Is gcc 4.8 the one where the compiler source was completely converted to C++?

    /me ducks.
