Programming IT Technology

Genetic Algorithms for GCC Optimization 45

captain igor writes "For the power users in the house who enjoy taking the time to squeeze every last drop of performance out of their programs, here's an interesting little program I ran across today called Acovea. Out since 2003, Acovea's main function is using genetic algorithms to determine an optimal set of GCC optimization flags to squeeze the most performance out of a given program. Certainly an interesting concept, definitely worth a look. Some nice results on a P4 and Opteron can be found here."
  • by MrIrwin ( 761231 ) on Monday April 26, 2004 @07:50AM (#8971118) Journal
    Potentially one could distribute make scripts that would optimize the build of critical sections on installation.

    It's a feature on top of the existing advantages of "compiling into" instead of "installing onto" a system, and a feature that is pretty much exclusive to OSS.

  • I hate to sound this pessimistic, but being a developer, and looking at the flags suggested by the article, it seems that not all the flags are safe in all cases. Having had trouble with some GCC optimizations breaking code, I'd just accept what the author of a piece of code has selected as correct.

    Of course it's different if you're a developer too, but...

  • Humm~~~ (Score:5, Informative)

    by Leffe ( 686621 ) on Monday April 26, 2004 @07:58AM (#8971134)
    In the article the author mentioned that the flags toggled by the -O options were unknown.

    I know that the listings in the manual are fairly accurate, though not perfect. If you want to know exactly what flags are activated when you compile, use the -Q flag (undocumented, AFAIK ;)) together with the -v flag to see the information.

    gcc -v -Q file.c

    Output:

    [...]
    options enabled: -fpeephole -ffunction-cse -fkeep-static-consts
    -freg-struct-return -fgcse-lm -fgcse-sm -fsched-interblock -fsched-spec
    -fbranch-count-reg -fcommon -fgnu-linker -fargument-alias -fident
    -fmath-errno -ftrapping-math -m80387 -mhard-float -mno-soft-float
    -malign-double -mieee-fp -mfp-ret-in-387 -mstack-arg-probe -mcpu=pentium
    -march=i386
    [...]


    One thing I noticed with one of my GCCs is that -fomit-frame-pointer was not activated on -O3, even though the manual says it is...
    • Interestingly enough, adding a -fomit-frame-pointer does not change the listing of options enabled. Perhaps it's an alias for another option that was listed instead?
    • One thing I noticed with one of my GCCs is that -fomit-frame-pointer was not activated on -O3, even though the manual says it is...
      -fomit-frame-pointer is only enabled with -O on architectures that support debugging without a frame pointer. For example, it is enabled on x86-64 but not on x86-32.
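    The kind of comparison the parent is doing by eye can be automated. A minimal sketch: parse the "options enabled:" section that `gcc -v -Q` prints for two optimization levels and diff the flag sets. The sample strings below are illustrative stand-ins, not real gcc output, and the parsing assumes the multi-line layout shown above.

    ```python
    # Sketch: diff the "options enabled" lists printed by `gcc -v -Q`
    # at two optimization levels, to see which flags actually change.

    def parse_enabled(verbose_output: str) -> set:
        """Extract the -f/-m flags from the 'options enabled:' section."""
        flags = set()
        in_section = False
        for line in verbose_output.splitlines():
            if line.startswith("options enabled:"):
                in_section = True
                line = line[len("options enabled:"):]
            elif in_section and not line.lstrip().startswith("-"):
                in_section = False   # section ended
            if in_section:
                flags.update(tok for tok in line.split() if tok.startswith("-"))
        return flags

    # Toy outputs for two hypothetical -O levels:
    o1 = parse_enabled("options enabled: -fpeephole -fcommon -fident")
    o3 = parse_enabled("options enabled: -fpeephole -fcommon -fident -fgcse-lm")
    print(sorted(o3 - o1))   # flags newly enabled at the higher level
    ```

    Feeding it the real output of `gcc -v -Q -O1 file.c` versus `-O3` would show exactly which flags a given GCC build toggles, manual notwithstanding.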
  • by wowbagger ( 69688 ) on Monday April 26, 2004 @08:03AM (#8971158) Homepage Journal
    It does not look like he checked the value of using -mfpmath=sse,387 - only -mfpmath=sse.

    I would question just using SSE for floating point if the code is written just using double values - IIRC SSE doesn't like doing doubles very well. Allowing the compiler to use both the 387 mathco registers as well as the SSE registers might be a win here.

    The other point about using SSE for floating point would be to use simple floats and see what difference the math options make.
    • by MarcQuadra ( 129430 ) * on Monday April 26, 2004 @09:52AM (#8971859)
      There's a good reason for that: GCC can emit code with '-mfpmath=sse,387', but it hasn't been 'modeled' properly, nor can GCC properly predict the outcome of trying to use registers that might be shared or step on each other. I'd run SCREAMING from that flag; '-mfpmath=sse' gets you properly-formed code for a fact, and that's what we're all looking for.
      • Major problems (Score:1, Informative)

        by Anonymous Coward
        I have an application which I tried with sse,387, and the speed was slightly worse than with sse alone. On top of that, I got random errors which I tracked down to an if () ... else ... block being executed where neither the if nor the else branch was taken. A compiler bug for sure.
  • Ouch (Score:5, Insightful)

    by Scarblac ( 122480 ) <slashdot@gerlich.nl> on Monday April 26, 2004 @08:05AM (#8971166) Homepage

    Q: How can you tell you have perhaps gone slightly overboard in making compiler optimization options available?

    A: Your users have not just given up trying to reason out what they should do, or even brute forcing every possible combination, they're inventing fucking genetic algorithms to find out what works best.

    Basically, the act of "calling gcc from the command line" is now officially a murky problem space to attack with pseudo-random hill climbing stuff...

    • Re:Ouch (Score:4, Funny)

      by GigsVT ( 208848 ) on Monday April 26, 2004 @08:18AM (#8971219) Journal
      I just see this as an extension to the whole "extreme programming" idea of "code something, anything, until you get the output you want for your test cases, then stop coding".

      It's a good idea in theory, but it seems much better to have a well thought out approach.
      • Re:Ouch (Score:5, Informative)

        by Scarblac ( 122480 ) <slashdot@gerlich.nl> on Monday April 26, 2004 @08:54AM (#8971414) Homepage

        That's a bit unfair, another EP idea is to refactor endlessly, always improving the code as much as possible. In the situations where EP is good, the alternative is "code something, anything, until you think your hack does what the customer/boss wants done within an hour, and besides it did work with the two things you tried out, so let's put it live". In that sort of environment (taken from my job...), your description of EP would be a marked improvement.

        But anyway, this isn't about EP. My opinion of genetic algorithms is "something you try when you don't understand the problem". Instead of solving the problem you try out a bunch of randomly generated guesses, slightly systematically, and hope it works out. It's applicable to situations with a lot of variation in different elements, where you don't know much about the relations between the choices for different elements, etc.

        So I'd say it's a bad idea in theory, but even so it sometimes works when you can't find a solution by thinking.

        And the idea that finding the right command line options for gcc is somehow a good problem domain for these things, well... :-)

      • I just see this as an extension to the whole "extreme programming" idea of "code something, anything, until you get the output you want for your test cases, then stop coding".

        Or... write test cases and then run a genetic algorithm until the right code is generated :)

        Of course, you'd need to write a whole lot of test cases, it would probably be easier to write the actual function yourself.
      • Re:Ouch (Score:5, Insightful)

        by RML ( 135014 ) on Monday April 26, 2004 @12:20PM (#8973308)
        I just see this as an extension to the whole "extreme programming" idea of "code something, anything, until you get the output you want for your test cases, then stop coding".

        It's a good idea in theory, but it seems much better to have a well thought out approach.


        Figuring out how different optimization passes interact is too complicated for humans to do correctly. The interactions tend to be subtle, unexpected, and sometimes completely counterintuitive. The only way to get it right is to run lots of real-world testcases with different combinations of optimizations and tune the defaults based on the results. Acovea just automates that for the nice folks working on gcc.
    • by RML ( 135014 )
      Q: How can you tell you have perhaps gone slightly overboard in making compiler optimization options available?

      A: Your users have not just given up trying to reason out what they should do, or even brute forcing every possible combination, they're inventing fucking genetic algorithms to find out what works best.


      Actually, this isn't intended for users, really. It's intended to help the compiler writers figure out what combinations of passes work best, so that the compiler writers can give the users simple
    • by devphil ( 51341 ) on Monday April 26, 2004 @02:04PM (#8974430) Homepage
      you have perhaps gone slightly overboard in making compiler optimization options available

      ...I strongly agree. No other compiler in history has this many knobs and dials. They interact in strange and unpredictable ways. Even worse, the -f (functionality/features) options can't be tested with all the possible -m (machine-specific) ones due to the number of platforms required. So we get bug reports complaining that -ffoo combined with -mbar on the Foonly 3000 causes random explosions killing a busload of nuns, and all we can do is shrug and say, "the -ffoo designers didn't have a Foonly, so just Don't Do That."

      One of the things I'm pushing for GCC 4.x is to take an axe to the -f switches. It won't get consensus, of course, but it'll raise interesting discussion.

      now officially a murky problem space to attack with pseudo-random hill climbing stuff...

      There's an insult in there somewhere, but I'm not sure what you're trying to attack.

      Optimization options have always been a murky problem space in GCC. No other compiler targets as many processors as we do. The Right Thing on one chip will not be the Right Thing on another; that's why we have this mess.

      As for what you call "pseudo-random hill-climbing stuff," probably with little Dogbert-like waves of your hand, I suggest you take some formal courses in evolutionary algorithms. For solving non-linear discontinuous problem spaces, they are extremely effective, and go well beyond "hill climbing stuff."

      One of the original goals of Acovea was to find better combinations of switches for popular platforms, and then make those the default for future major versions. I haven't followed the project as closely as I'd wished, so I can't say whether that's still the goal. Hope so.

      • There's an insult in there somewhere, but I'm not sure what you're trying to attack.

        No, no, no, I love gcc, it wasn't meant as an insult at all. I just couldn't quite come to grips with the whole idea of using genetic algorithms for something like this. It's surreal to me. The world has become a bit weirder for me and I can't really manage to put the feeling into words :-)

        I can imagine joking about this over some beers in a pub with some other nerds, but some people actually made this and people are ser

      • No other compiler in history has this many knobs and dials.

        I disagree with that. The Texas Instruments Code Composer Studio compiler for the C67xx series DSPs is WAY more complicated, and it's truly scary how amazing the optimizer is - even the assembler has optimization flags....

        see docs here [ti.com]

        I believe there are now more tools to graphically analyze optimization bottlenecks.

        I love GCC though and appreciate the multi-platform optimization issues that arise. I believe what we need are computer languag

  • by MacroRex ( 548024 ) on Monday April 26, 2004 @08:06AM (#8971173)
    Using a GA basically means it randomly tries some flags. While this is nice and automated, I'd rather have the developer understand his/her code and the compiler to the extent that the best flags can be picked deterministically. Besides, the "optimizer" might very well pick flags that work with the test cases, but break the executable somewhere else. At worst this might result in some terribly hard-to-track security vulnerability.
    • Most optimizations break code because developers do not follow the language standard. For example, -fstrict-aliasing breaks code because people do not follow the aliasing rules of the language they are using. The Linux kernel does not follow them, so it has to use -fno-strict-aliasing. Note this flag was turned on by default at -O2 for 2.95 and 2.95.1, but turned off for 2.95.3, and then re-enabled for 3.0 while people fix their code. Most people have fixed their code but some code out there st
    • by Tom7 ( 102298 )
      Unsafe "optimizations" don't deserve to be called optimizations and should never be enabled by a tool like this.
    • I'm not sure you really understand how Genetic Algorithms (GA's) work. While they do contain a random element, they follow an evolutionary process that produces improved solutions in each "generation", based on genetic operations such as 'mutation'. So rather than just selecting "some" flags, the algorithm gradually proceeds toward an optimal combination. These algorithms are already commonly used in the analysis of medical and engineering data, so their use in compiler optimisation is hardly a major risk.
    • by shaka999 ( 335100 ) on Monday April 26, 2004 @10:02AM (#8971944)
      I use GA's for circuit optimization all the time. They are very effective and often find things I wouldn't have thought of.

      That said, you don't take their results and just run with them. I find that once a solution is found, I have to go back and understand what it did and why. In the case of circuit optimization I need to make sure that the optimizer isn't just exploiting a modeling problem (this has happened a few times). I'd imagine the same thing applies when optimizing compiler flags.

      After the optimizer finishes selecting flags, the developer should go back and make sure that all the flags make sense and aren't risky. It's at this point that a developer can decide what he/she wants to use.
    • From the article:

      I envision Acovea as an optimization tool, similar in purpose to profiling. Traditional function-level profiling identifies the algorithms most influential in a program's performance; Acovea is then applied to those algorithms to find the compiler flags and options that generate the fastest code.

      In addition to just trying some new flags on critical sections, perhaps you can actually try to understand what parts of these algorithms may be improved instead of just trying a bunch of opti
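  • The "selection plus mutation, not just random tries" point above can be made concrete with a toy run. A minimal genetic algorithm over GCC flag subsets, in the spirit of Acovea - but note the flag list and the fitness function here are stand-ins I made up: a real run would compile and benchmark the target program for each genome instead of scoring it arithmetically.

    ```python
    # Toy GA over GCC flag subsets. A genome is a 0/1 vector saying
    # which flags are on. The fitness function is a stand-in: it
    # pretends the first three flags help and the rest hurt, where
    # Acovea would time an actual compiled benchmark.
    import random

    FLAGS = ["-fgcse", "-funroll-loops", "-fomit-frame-pointer",
             "-fstrict-aliasing", "-ftracer", "-fpeel-loops"]

    def fitness(genome):
        return sum(g if i < 3 else -g for i, g in enumerate(genome))

    def evolve(pop_size=20, generations=30, seed=1):
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in FLAGS] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[:pop_size // 2]          # keep the fittest half
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = rng.sample(survivors, 2)
                cut = rng.randrange(1, len(FLAGS))   # one-point crossover
                child = a[:cut] + b[cut:]
                if rng.random() < 0.1:               # occasional mutation
                    i = rng.randrange(len(FLAGS))
                    child[i] ^= 1
                children.append(child)
            pop = survivors + children
        best = max(pop, key=fitness)
        return [f for f, on in zip(FLAGS, best) if on]

    print(evolve())
    ```

    Because the fittest half always survives, the best genome never gets worse from one generation to the next - which is exactly why this converges faster than purely random flag picking.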

  • Gentoo (Score:4, Interesting)

    by motown ( 178312 ) on Monday April 26, 2004 @08:46AM (#8971377)
    It would be really cool if this technology could somehow be integrated into the Gentoo [gentoo.org] project.

    Of course it would be unreasonable to have each single ebuild compiled and benchmarked several times on each user's PC, but these genetic algorithms could be used to predetermine the optimal compiler settings for each architecture/ebuild combination, store this information in a central database, and have portage automatically select the optimal compiler setting from that database each time it compiles an ebuild.

    No more figuring out what the best compiler options are: the ebuild maintainers will take on that job for you! :)

    • It would be really cool if this technology could somehow be integrated into the Gentoo [gentoo.org] project.

      I'm a gentoo fan, but I'd definitely be in favor of this being done in a distributed kind of way, where the user can feed back a profile mix of usage patterns (e.g., Web server, desktop, video encoding, scientific applications, etc.) and hardware, and fetch the best compiler flags from a server - from that database you described.

      Otherwise, can you imagine doing NFLAG factorial complete builds with s

    • but these genetical algorithms could be used to predetermine the optimal compiler settings for each architecture/ebuild-combination, store this information in a central database

      That database should be called "the GCC source tree".

      Once a particular set of flags has been shown to be consistently better for a majority of sample real-world code, those settings should become the new defaults for that platform on, say, -O2.

      Doing this is slightly tricky, given the current GCC codebase, but that's why God

  • Dupe ... (Score:5, Informative)

    by dago ( 25724 ) on Monday April 26, 2004 @08:58AM (#8971434)
    yep, a pretty old one (18/11/2003), but still, here's the original story [slashdot.org]

  • ... but I think there's a future in using genetic algorithms in the development of gcc itself. Set up a bunch of Pentium4 HT computers, compile gcc compilers with various optimisations, then have them compile something like apache or the linux kernel, run it and benchmark, and get back to compiling the next gcc. Let the whole system run for months, and I wonder if gcc could get close to the Intel compiler.

    Optimisation features of all processors are documented, but not all compilers know how to us
    • I think that optimisation would first come from analyzing individual programmatic constructs (patterns) rather than the huge conglomerates that are whole applications. But then again I'm an embedded developer so I tend to think small.
  • Why not just brute force this problem, maybe with some distributed computing? GAs are heuristic, and will never be guaranteed to find the global maximum for a given problem. Complicating the issue is the nature of the fitness landscape - series of adjacent vectors in the search space likely do not have smoothly-changing fitnesses, and so a GA is not well-suited to finding the solution.
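    Back-of-the-envelope arithmetic shows why brute force loses here even with distributed computing. The numbers below are illustrative assumptions (roughly 60 independent binary -f flags, a 30-second compile-and-benchmark cycle, and a typical GA budget of population times generations), not measured figures:

    ```python
    # Why exhaustive search over flag combinations is hopeless.
    n_flags = 60                # assumed count of independent binary flags
    seconds_per_trial = 30      # assumed compile + benchmark time

    exhaustive = 2 ** n_flags   # every on/off combination
    ga_trials = 40 * 200        # population x generations (typical GA budget)

    print(f"exhaustive trials: {exhaustive:e}")
    print(f"exhaustive time:   {exhaustive * seconds_per_trial / 3.15e7:.1e} years")
    print(f"GA trials:         {ga_trials} "
          f"(~{ga_trials * seconds_per_trial / 3600:.0f} hours)")
    ```

    Exhaustive search needs on the order of 10^18 builds (trillions of years at 30 s each); a GA gets a good-enough answer in thousands of builds, which is the 99%-vs-100% trade-off the heuristic buys you.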

  • PostgreSQL (Score:4, Interesting)

    by Earlybird ( 56426 ) <slashdot @ p u r e f i c t ion.net> on Monday April 26, 2004 @12:03PM (#8973164) Homepage
    PostgreSQL has been using a genetic algorithm internally for some time, to optimize query plans.

    A code optimizer seems like a natural application for GAs. If you can prove one piece of code's logical equivalence to another, you can have a code generator produce random versions of the same code (functions, loops, blocks) and then run a GA to find the best-performing version. On the other hand, compilation might take ages to run.

  • But it sure is old news [sergiocarvalho.com] for nerds...
  • by Anonymous Coward
    Poetic that an evolutionary algorithm, analogous to the one in nature that evolved the human brain, would be used to optimize compiler options in pareto-optimal search spaces (different platforms).

    Brute forcing all permutations is a very silly way to do this, IMHO. Compare an EA solving the travelling salesman problem close to optimally in a short time with the NP-completeness of finding the absolute peak. 99% is good enough if 100% takes longer than the age of the universe to figure out for large inputs.

    Al
  • Acovea was linked here in Genetic Algorithms and Compiler Optimizations [slashdot.org]
