Comparing G++ and Intel Compilers and Vectorized Code

Posted by timothy
from the different-lenses dept.
Nerval's Lobster writes "A compiler can take your C++ loops and create vectorized assembly code for you. It's obviously important that you RTFM and fully understand compiler options (especially since the defaults may not be what you want or think you're getting), but even then, do you trust that the compiler is generating the best code for you? Developer and editor Jeff Cogswell compares the g++ and Intel compilers when it comes to generating vectorized code, building off a previous test that examined the g++ compiler's vectorization abilities, and comes to some definite conclusions. 'The g++ compiler did well up against the Intel compiler,' he wrote. 'I was troubled by how different the generated assembly code was between the 4.7 and 4.8.1 compilers—not just with the vectorization but throughout the code.' Do you agree?"
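For anyone who wants to reproduce this kind of test, here is a minimal sketch. The loop and file name are made up for illustration; -O3, -march=native and -S are standard g++ options, while the vectorization-report flags shown in the comments differ between compiler versions (and between g++ and icpc), so check your compiler's manual for the exact spelling.

    // vec.cpp -- a saxpy-style loop that auto-vectorizers typically handle well
    void scale_add(float* __restrict out, const float* __restrict in,
                   float a, int n)
    {
        for (int i = 0; i < n; ++i)
            out[i] = a * in[i] + out[i];
    }

    // g++  -O3 -march=native -fopt-info-vec -S vec.cpp   (newer g++ releases;
    //      older ones used -ftree-vectorizer-verbose=N instead)
    // icpc -O3 -xHost -vec-report=2 -S vec.cpp           (older icpc spelling)

Inspecting the resulting .s file for packed instructions such as mulps/addps (or their AVX equivalents) shows whether the loop actually vectorized.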

  • by serviscope_minor (664417) on Thursday December 19, 2013 @11:55AM (#45736743) Journal

    I don't think it's troubling.

    Firstly they beat on the optimizer a *lot* between major versions.

    Secondly, the compiler does a lot of micro-optimizations (e.g. in the peephole optimizer) to choose between essentially equivalent snippets. If they change the scheduling information and other resource models between versions, you'd expect the output to change a lot.

    Plus I think that quite a few interesting problems, such as block ordering, are NP-hard. If they change the parameters of the heuristics they use for those NP-hard problems, that will give very different outputs too.

    So no, not that bothered, myself.

  • by david.emery (127135) on Thursday December 19, 2013 @12:14PM (#45736953)

    Unfortunately, that's not unique to GCC. I've seen this happen with several different compilers for different programming languages over the years. Worse, I've seen it with the same compiler but different optimizer settings.

    In one case, our system didn't work (it segfaulted) with the optimizer engaged, and didn't meet its timing requirements without the optimizer. And the problem wasn't in our code; it was in a commercial product we bought. The compiler vendor, the commercial product vendor (and the developer of that product, not the same company we bought it from), and our own people spent a year pointing fingers at each other. No one wanted to (a) release source code and then (b) spend the time stepping through things at the instruction level to figure out what was going on.

    And the lesson I learned from this: Any commercial product for which you don't have access to source code is an integration and performance risk.

  • by david.emery (127135) on Thursday December 19, 2013 @12:33PM (#45737151)

    Well, in part that depends on your market. Most of my work has been in military systems or air traffic systems, where the cost of failure >> lost opportunity cost. That's a point a lot of people forget; not all markets (and therefore the risk calculations for bugs, etc) are created equal.

  • by PhrostyMcByte (589271) <phrosty@gmail.com> on Thursday December 19, 2013 @01:03PM (#45737481) Homepage
    Your projects were likely doing something which resulted in undefined behavior. It's been extremely rare to have GCC break working standards-compliant code.
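    A hedged illustration of the kind of thing meant here -- the function below is made up, but the signed-overflow folding it relies on is the classic example:

        // Signed overflow is undefined behaviour, so at -O2 g++ may assume
        // x + 1 > x always holds and fold this "overflow check" to false.
        // The code appears to work at -O0 and "breaks" once optimized.
        bool will_overflow(int x)
        {
            return x + 1 < x;   // UB when x == INT_MAX
        }

        // Building with -fwrapv (or, on newer compilers, testing with
        // -fsanitize=undefined) flushes out this kind of latent bug.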
  • by drawfour (791912) on Thursday December 19, 2013 @02:12PM (#45738229)
    There is a reason for warnings -- it's because you're doing something wrong. Unfortunately, the compiler lets you do it anyway, probably because there is a ton of legacy code that would suddenly "break" if warnings were errors by default. But that doesn't mean you should stop trying to fix these issues. Many of them only appear benign until you stumble onto exactly the situation the warning was trying to warn you about. Static analysis tools are also your friend. That doesn't mean you can blindly trust them -- they do produce false positives. But they're far better than inspecting the code yourself: you'll miss something far more often than the tools will give you a false positive.
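    A small made-up example of a warning that looks benign right up until it isn't: g++ -Wall (via -Wparentheses) flags the '=' vs '==' slip below, and adding -Werror refuses to compile it at all. cppcheck and clang's scan-build are two commonly used static analyzers that catch similar things.

        #include <cstdio>

        void report(int status)
        {
            if (status = 0)              // warning: assignment used as truth value;
                std::puts("ok");         // '==' was intended, so this branch is dead
            else
                std::puts("failed");     // and this one always runs
        }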
  • Mantle? (Score:2, Insightful)

    by Anonymous Coward on Thursday December 19, 2013 @02:16PM (#45738263)

    Mantle is a good idea insofar as it should kick Microsoft and/or NVIDIA up the behind. We desperately need someone to cure us of the pain that is OpenGL and the lack of cross-platform compatibility that is Direct3D.

    Obviously NVIDIA won't play ball with Mantle, but I've got a feeling they might have to eventually, given that some AAA game developers are going to code a path for it. When it starts showing up how piss-poor our current high-level layers are compared to what the metal can do, they'll have no choice.

  • by Darinbob (1142669) on Thursday December 19, 2013 @03:05PM (#45738735)

    GCC also works with many CPUs that the Intel compiler does not. That includes x86-compatible chips from other vendors, as well as advanced features in Intel chips that were originally introduced by competing clones. So maybe Intel's compiler is nice, but that's irrelevant if you don't even use Intel hardware in your products.

    If Intel really is basing their compiler on secret architecture documents, then people should be able to deduce what's going on by looking at the generated assembly: find some goofy generated code that doesn't seem to make sense given the public documents, benchmark it to compare, figure out there's a hidden feature, and then make use of it.
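    A rough sketch of that kind of comparison (the file and function names are placeholders; the flags are standard g++/icpc options):

        // compare.cpp -- trivial kernel to compare code generation on
        void add_arrays(const float* a, const float* b, float* c, int n)
        {
            for (int i = 0; i < n; ++i)
                c[i] = a[i] + b[i];
        }

        // g++  -O3 -march=native -S compare.cpp -o gcc.s
        // icpc -O3 -xHost        -S compare.cpp -o icc.s
        //
        // Diff the two listings, benchmark both, and look for instruction
        // sequences that only make sense if the compiler knows something
        // the public manuals don't mention.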

  • by Anonymous Coward on Thursday December 19, 2013 @06:20PM (#45740961)

    If the CPU reports it supports SSE2, and the compiler supports it, I expect it to bloody well use those instructions when told to, not silently produce fucking x87 garbage. Real rocket science, apparently.
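    For what it's worth, the x87 default really is a 32-bit-only thing with g++; a quick sketch (the file name is made up, the flags are standard g++ options):

        // On 32-bit x86, g++ emits x87 floating-point code by default; asking
        // for SSE2 scalar math takes both flags:
        //
        //   g++ -m32 -O2 -msse2 -mfpmath=sse scalar.cpp
        //
        // On x86-64 targets, SSE2 is the baseline and is used automatically.
        double half(double x)
        {
            return x * 0.5;   // with -mfpmath=sse this should become mulsd, not fmul
        }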
