GNU is Not Unix

GCC 3.3.1 Released

Wiz writes "The latest and greatest version of gcc is now out - v3.3.1! As an update to the current version, it is bug fixes only. You can find the list of changes here and you can download it from their mirror sites. Enjoy!"
  • by Laven ( 102436 ) *
    Suitable for kernel yet?
  • P4/SSE2 fixed (Score:2, Informative)

    by blwrd ( 455512 )
    This finally fixes the pentium4/sse2 bug. Waiting for ebuild... ;)
  • Speed? (Score:4, Interesting)

    by JukkaO ( 199949 ) on Saturday August 09, 2003 @06:39PM (#6656685)

    Rumour has it that plain-old-C compilation is getting slower with every gcc release these days.

    I don't have any measurements, I'm just wondering whether the new and cool feature stuff and possible speed increases in the resulting object code warrant migration from, say, 2.95.x whatever.

    Standards conformance improvements are another thing but for the casual developer I guess gcc's been pretty good for quite a while now.

    • Jan Hubicka, SuSE Labs, has contributed a new superblock formation pass, enabled using -ftracer. This pass simplifies the control flow of functions, allowing other optimizations to do a better job.

      He also contributed the function reordering pass (-freorder-functions) to optimize function placement using profile feedback.


      It remains to be seen whether the extra performance gained by running these would offset the extra time spent running them, especially under a self-built version of gcc. The reorder-functions option was way overdue in gcc.

      BTW: "bug-fix release, my ass." You don't add stuff like this in a bug-fix release.
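
      For reference, both passes are driven by profile feedback. A rough sketch of the workflow (hypothetical file names; check your gcc version's docs for the exact flags):

        gcc -O2 -fprofile-arcs -o myprog myprog.c     # instrumented build
        ./myprog < typical-input                      # collect a profile
        gcc -O2 -fbranch-probabilities -ftracer -freorder-functions \
            -o myprog myprog.c                        # rebuild using the profile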

    • The major compilation slowdown occurred between 2.95 and 3.0.
      It was mainly due to the new garbage-collection-based memory management scheme and the new inliner.
      Since GCC 3.0, compilation speed hasn't changed much, except maybe for some programs that happened to be particularly bad for the inliner heuristics.

      Marcel
    • Here's a table (Score:5, Insightful)

      by mec ( 14700 ) <mec@shout.net> on Monday August 11, 2003 @12:59AM (#6663125) Journal
      A little google whoring turns this up:

      GCC Compilation Comparison [myownlittleworld.com]

      The rumors have some foundation. For one particular C program, on one particular machine, at the particular optimization level of -O2:

      gcc 3.0.4 takes 28% more time than gcc 2.95.3
      gcc 3.1.1 takes 24% more time than gcc 3.0.4
      gcc 3.2.3 takes 7% more time than gcc 3.1.1
      gcc 3.3 takes 5% more time than gcc 3.2.3
      gcc 3.4* takes 6% more time than gcc 3.3
      gcc 3.5* takes 9% more time than gcc 3.4*

      The "3.4*" and "3.5*" are cvs versions as of a certain date, as these versions are far from release.

      Here are some release dates:

      2001-03-22 gcc 2.95.3
      2002-02-21 gcc 3.0.4
      2002-07-26 gcc 3.1.1
      2003-04-23 gcc 3.2.3
      2003-05-14 gcc 3.3

      Correlating these:

      gcc 3.0.4, 11 months, 28%
      gcc 3.1.1, 5 months, 24%
      gcc 3.2.3, 9 months, 7%

      The next gcc will be gcc 3.3.2 and it is estimated for October 1. If it meets that date, and if it continues to have the same performance as gcc 3.3 and gcc 3.3.1, then that would be: 4 months, 5%.

      If you use Moore's Law to estimate processor speed, then your CPU is getting 100% faster every 18 months, or about 4% faster per month. So in the period from 2.95.3 to 3.1.1, gcc was getting slower at about the same rate as processors were getting faster. Since 3.1.1, gcc has been getting slower at just 1% a month or so, while processors get faster at 4% a month.

      Refinements to my model welcomed.
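
      If anyone wants to check the arithmetic, the per-month figures are just compounded rates; a minimal sketch in C using the numbers from the tables above:

        #include <math.h>
        #include <stdio.h>

        /* Total slowdown over an interval -> compounded per-month rate:
           (1 + total)^(1/months) - 1. */
        static double monthly_rate(double total, double months)
        {
            return pow(1.0 + total, 1.0 / months) - 1.0;
        }

        int main(void)
        {
            printf("2.95.3 -> 3.0.4: %.1f%%/month\n", 100 * monthly_rate(0.28, 11.0));
            printf("3.0.4 -> 3.1.1:  %.1f%%/month\n", 100 * monthly_rate(0.24, 5.0));
            printf("3.1.1 -> 3.2.3:  %.1f%%/month\n", 100 * monthly_rate(0.07, 9.0));
            printf("Moore's Law:     %.1f%%/month\n", 100 * monthly_rate(1.00, 18.0));
            return 0;
        }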

      As far the trade-off goes: "compile speed" is one dimension and "new and cool features" is another dimension and "object code speed" is yet another dimension. There is no universal answer about trade-offs between dimensions, you just have to make the decision yourself.
      • At least the percentages in these stats need to be recalculated. In fact, from the absolute numbers you can see that GCC 3.2.3 is faster than 3.1.1 at every optimization level, yet the author's percentage table claims it is slower.

        Also, to help developers, an effort has been made to speed up GCC 3.3.x at -O0 where it is faster than GCC 3.1.x and 3.2.x.

        Marcel
        • from the absolute numbers you can see that GCC 3.2.3 is faster than 3.1.1 at every optimization level

          Wha?

          3.1.1:   ---      213.58   252.63   396.93   494.16
          3.2.3:   193.48   224.66   269.43   424.74   519.85

          These are run times, in seconds ... what the heck are you talking about?

          Also, to help developers, an effort has been made to speed up GCC 3.3.x at -O0 where it is faster than GCC 3.1.x and 3.2.x

          That's true.

          • I don't know where you get those numbers from, but I'm talking about the numbers at http://www.myownlittleworld.com/computers/gcctable.html

            The numbers in the first table on the page show GCC 3.1.1 taking more time than 3.2.3 in all cases except -O1, where there is only a 0.01s difference. Also, I have not been able to correlate the absolute numbers of the first table with the percentages of the second table on that page.

            Marcel
            • ... and I think it's either the web server at myownlittleworld.com or my own Mozilla browser.

              I see seriously different numbers with Mozilla and Lynx. The Mozilla numbers are as I described, and the Lynx numbers are as you described.

              The page in Mozilla says "All snapshots were from 20030614" and the page in Lynx says "All snapshots were from 20030725".

              And the numbers for 3.1.1 and 3.2.3 are significantly different between the versions of the page.

              Ouch!

      • Actually, 3.3.2 will be faster than 3.3.1 if someone accepts my patch. The person who ran these benchmarks reran them after my patch for 3.4 went in, so 3.4 is in fact faster than it was.
  • by tzanger ( 1575 )

    Does anyone know when dead code removal will be introduced to gcj? I'm linking statically to keep system dependencies very small, but my binary is reaching huge proportions, purely because dead code removal does not work (i.e. if I never use libc's printf(), don't put it in the final binary).

    • by r6144 ( 544027 ) <r6k&sohu,com> on Saturday August 09, 2003 @10:38PM (#6657500) Homepage Journal
      Your problem is that glibc does so much initialization on startup that a lot of code, such as printf(), (might) get used by it, so the linker has to include it in the statically linked executable. If this is to be fixed, it is glibc that should be fixed, to avoid unnecessary internal dependencies.

      Also, compared to libgcj, glibc's contribution to executable size is quite small (about 300~400k), so maybe optimizing libgcj is more important.

      Dead code removal means that part of the source code that will never be executed (as decided by the compiler) will not turn into executable code. For the simplest example, in "if (0) { ... }" the whole statement will be skipped in executable code. Sometimes it is harder, for example "a = foo(); if (a == null) a = this; BLAH BLAH BLAH; if (a == null) { ... }".
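
      To make that concrete, a toy C example (hypothetical code, just to show what the compiler can and cannot prove dead):

        #include <stdio.h>

        void f(int x)
        {
            int *p;

            if (0) {                     /* provably dead: no code is emitted */
                printf("never runs\n");
            }

            p = (x > 0) ? &x : NULL;     /* only known at run time, */
            if (p == NULL) {             /* so this branch is kept  */
                printf("may run\n");
            }
        }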

    • by norwoodites ( 226775 ) <pinskia@ g m a il.com> on Saturday August 09, 2003 @10:49PM (#6657533) Journal
      The dead code removal has to be done in the linker, not in GCC. Note that the linker is part of the binutils project, not GCC; complain to them, not to us.

      Also, statically linking anything will grow your executable more than needed.

      Dead code removal in your own code (as opposed to libraries) is already done by GCC and is still being improved.
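
      That said, you can already approximate library-level dead code removal with section-based garbage collection; a sketch with hypothetical file names (no promises about how well it interacts with gcj and a static glibc):

        gcc -ffunction-sections -fdata-sections -c foo.c   # one section per function
        gcc -static -Wl,--gc-sections -o foo foo.o         # let the linker drop unused sections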
  • by leonbrooks ( 8043 ) <SentByMSBlast-No ... .brooks.fdns.net> on Saturday August 09, 2003 @07:48PM (#6656921) Homepage
    The -traditional C compiler option has been removed. It was deprecated in 3.1 and 3.2. (Traditional preprocessing remains available.) The <varargs.h> header, used for writing variadic functions in traditional C, still exists but will produce an error message if used.

    Bugger, that's gonna make a lot of older stuff harder to compile. Is there any particular reason that the grim reaper went postal with this version?

    • Because maintaining it was a pain in the ass, and blocked the fixing of other bugs.

      If you must compile older code, rather than a newer version of that code, then use an older compiler.

    • by norwoodites ( 226775 ) <pinskia@ g m a il.com> on Saturday August 09, 2003 @10:21PM (#6657454) Journal
      But most of the time, to get these programs to run correctly on any recent GCC, you would have to update them anyway, because of aliasing rules and other reasons.
      Remember, most of the time when you use -traditional you really wanted the traditional preprocessor rather than the traditional C compiler, so you can still use -traditional-cpp.

      The reason GCC removed -traditional is that ISO C89 is already 13 years old and it was getting too hard to maintain those features.

      Updating the code to ISO C90 is not that hard anyway, because for varargs it is mostly just a matter of replacing <varargs.h> with <stdarg.h> and such.
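
      For the record, that replacement is usually mechanical. A minimal sketch of the ISO <stdarg.h> style that replaces the old <varargs.h>/va_alist idiom:

        #include <stdarg.h>
        #include <stdio.h>

        /* ISO C names a fixed parameter and starts the list after it;
           traditional C used va_alist/va_dcl with no fixed parameters. */
        int sum(int count, ...)
        {
            va_list ap;
            int i, total = 0;

            va_start(ap, count);
            for (i = 0; i < count; i++)
                total += va_arg(ap, int);
            va_end(ap);
            return total;
        }

        int main(void)
        {
            printf("%d\n", sum(3, 1, 2, 3));   /* prints 6 */
            return 0;
        }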
  • It seems (Score:5, Funny)

    by luekj ( 692478 ) on Saturday August 09, 2003 @11:46PM (#6657734) Journal
    That the newer gcc gets, the fewer people get excited about it. Maybe they should try to spice things up a little bit, maybe name the next one gnu ginac (i.e. gcc is not a compiler). That'll get them riled up, for sure.
  • GNU says "No" to SCO (Score:3, Informative)

    by Gudlyf ( 544445 ) <<moc.ketsilaer> <ta> <fyldug>> on Monday August 11, 2003 @02:34PM (#6667793) Homepage Journal
    Check out the 'README.SCO' file with the latest distribution:

    As all users of GCC will know, SCO has recently made claims concerning alleged copyright infringement by recent versions of the operating system kernel called Linux. SCO has made irresponsible public statements about this supposed copyright infringement without releasing any evidence of the infringement, and has demanded that users of Linux, the kernel most often used with the GNU system, pay for a license. This license is incompatible with the GPL, and in the opinion of the Free Software Foundation such a demand unquestionably violates the GNU General Public License under which the kernel is distributed.

    We have been urged to drop support for SCO Unix from this release of GCC, as a protest against this irresponsible aggression against free software and GNU/Linux. However, the direct effect of this action would fall on users of GCC rather than on SCO. For the moment, we have decided not to take that action. The Free Software Foundation's overriding goal is to protect the freedom of the free software community, including developers and users, but we also want to serve users. Protecting the community from an attack sometimes requires steps that will inconvenience some in the community. Such a step is not yet necessary, in our view, but we cannot indefinitely continue to ignore the aggression against our community taken by a party that has long profited from the commercial distribution of our programs. We urge users of SCO Unix to make clear to SCO their disapproval of the company's aggression against the free software community. We will have a further announcement concerning continuing support of SCO Unix by GCC before our next release.

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...