GCC 3.3 Released 441
devphil writes "The latest version of everyone's favorite compiler, GCC 3.3, was released today. New features, bugfixes, and whatnot, are all available off the linked-to page. (Mirrors already have the tarballs.) Let the second-guessing begin!"
woo! (Score:5, Funny)
gcc 3.3 fails on glibc 2.3.2 (Score:4, Interesting)
Re:gcc 3.3 fails on glibc 2.3.2 (Score:5, Insightful)
Re:gcc 3.3 fails on glibc 2.3.2 (Score:2, Interesting)
The compiler should at least be able to compile a) the kernel and b) glibc.
Given the release timeframe, a) is no problem. But b) usually takes half a year to a year to show up in a new release. They should at least release gcc and glibc at the same time, or at least provide patches. Besides, the glibc and gcc people are largely the same people, so they should know best.
Ridiculous (Score:4, Insightful)
On the other hand, the argument that the gcc folks should make sure that the *kernel* (presumably the Linux kernel) compiles is absolutely ridiculous. The kernel has been long broken and not language-compliant. I think recent compilers can compile it, but that's very recent, and hardly the fault of the gcc people. The Linux kernel has no association with gcc, and is not an amazingly clean project. Gcc is used in far more places than Linux is -- on just about every OS and architecture in the world. Blocking a gcc release because the Linux kernel doesn't compile would be insane. Gcc is *far* bigger than Linux. It is the standard available-everywhere compiler.
When someone misuses English, do you correct them or change the entire language to fit their mistake?
Re:Ridiculous (Score:3, Informative)
libc and glibc are not quite the same, however. libc refers to any implementation of the standard C library, while glibc is the GNU version. I use gcc with the MSVCRT, the Cygwin libc and the FreeBSD libc. To me glibc is just another piece of software that people who are not
Re:gcc 3.3 fails on glibc 2.3.2 (Score:5, Informative)
Re:gcc 3.3 fails on glibc 2.3.2 (Score:5, Informative)
Most of the time, this is not obvious stuff. A while ago, gcc 2.95 (or was it 2.96?) broke the kernel because of the strict aliasing rules: gcc assumes that an (int*) and a (float*) can't point to the same area (even now, the kernel needs to be compiled with -fno-strict-aliasing). One of the reasons why gcc 3.3 breaks the kernel now is that in some places, the kernel assumes that an inline function will be inlined, otherwise it breaks. The older versions of gcc always inlined those functions, but the new version takes inline merely as a hint (like "register"), which is compliant with the standard. The kernel needs to be fixed to say "inline this or die" or something like that instead.
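To illustrate the aliasing point, a made-up fragment (not kernel code): under strict aliasing gcc may assume the two pointers never overlap, which is exactly the assumption -fno-strict-aliasing turns off.

int alias_example(int *i, float *f)
{
    *i = 1;
    *f = 2.0f;   /* if f actually points at the same memory as i, this is undefined */
    return *i;   /* gcc is allowed to return 1 here without re-reading memory */
}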
Re:gcc 3.3 fails on glibc 2.3.2 (Score:2, Interesting)
Re:gcc 3.3 fails on glibc 2.3.2 (Score:4, Informative)
SERVAR == TEH VARY SLWO (Score:4, Informative)
The preprocessor no longer accepts multi-line string literals. They were deprecated in 3.0, 3.1, and 3.2.
The preprocessor no longer supports the -A- switch when appearing alone. -A- followed by an assertion is still supported.
Support for all the systems obsoleted in GCC 3.1 has been removed from GCC 3.3. See below for a list of systems which are obsoleted in this release.
Checking for null format arguments has been decoupled from the rest of the format checking mechanism. Programs which use the format attribute may regain this functionality by using the new nonnull function attribute. Note that for all functions for which GCC has a built-in format attribute, an appropriate built-in nonnull attribute is also applied.
The DWARF (version 1) debugging format has been deprecated and will be removed in a future version of GCC. Version 2 of the DWARF debugging format will continue to be supported for the foreseeable future.
The C and Objective-C compilers no longer accept the "Naming Types" extension (typedef foo = bar); it was already unavailable in C++. Code which uses it will need to be changed to use the "typeof" extension instead: typedef typeof(bar) foo. (We have removed this extension without a period of deprecation because it has caused the compiler to crash since version 3.0 and no one noticed until very recently. Thus we conclude it is not in widespread use.)
The -traditional C compiler option has been removed. It was deprecated in 3.1 and 3.2. (Traditional preprocessing remains available.) The <varargs.h> header, used for writing variadic functions in traditional C, still exists but will produce an error message if used.
General Optimizer Improvements
A new scheme for accurately describing processor pipelines, the DFA scheduler, has been added.
Pavel Nejedly, Charles University Prague, has contributed a new file format used by the edge coverage profiler (-fprofile-arcs).
The new format is robust and diagnoses common mistakes where profiles from different versions (or compilations) of the program are combined, resulting in nonsensical profiles and slow code produced with profile feedback. Additionally, this format allows extra data to be gathered. Currently, overall statistics are produced, helping optimizers to identify hot spots of a program globally, replacing the old intra-procedural scheme and resulting in better code. Note that the gcov tool from older GCC versions will not be able to parse the profiles generated by GCC 3.3 and vice versa.
Jan Hubicka, SuSE Labs, has contributed a new superblock formation pass enabled using -ftracer. This pass simplifies the control flow of functions, allowing other optimizations to do a better job.
He also contributed the function reordering pass (-freorder-functions) to optimize function placement using profile feedback.
New Languages and Language specific improvements
C/ObjC/C++
The preprocessor now accepts directives within macro arguments. It processes them just as if they had not been within macro arguments.
The separate ISO and traditional preprocessors have been completely removed. The front-end handles either type of preprocessed output if necessary.
In C99 mode preprocessor arithmetic is done in the precision of the target's intmax_t, as required by that standard.
The preprocessor can now copy comments inside macros to the output file when the macro is expanded. This feature, enabled using the -CC option, is intended for use by applications which place metadata or directives inside comments, such as lint.
The method of constructing the list of directories to be searched for header files has been revised. If a directory named by a -I option is a standard system include directory, the option is ignored to ensure that the default search order for system directories and the special treatment of system header files are not defeated.
A few more ISO C99 features now work correctly.
A new function attribute, nonnull, has been added which allows pointer arguments to functions to be specified as requiring a non-null value.
What does it mean? (Score:3, Interesting)
BTW, there is a preliminary ebuild in Gentoo.
Re:What does it mean? (Score:5, Informative)
Not many visible changes. Developers have better profiling, which means eventually if they care they can make software faster. Also, you're going to find a lot more compiler warnings, and perhaps the odd piece of software which doesn't compile at all. In the short run, nothing changes. In the long run, programs become better as they stick to better programming guidelines (since gcc doesn't support "bad" programming as well as the previous version).
I've been using gcc 3.3 for months from CVS, and have had no problems with it (except for compiling with -Werror).
Re:What does it mean? (Score:3, Interesting)
Not very promising!! Basically you're saying this won't make much difference to the end user in terms of speed. I'm not arguing -- I'm agreeing.
Personally, I would much rather have a slow compiler which gets the most out of my system. Apparently the gcc2.95-age compilers are faster than the gcc3 series: in my book that's a good thing. But has anyone done any testing? How long does
Re:SERVAR == TEH VARY SLWO (Score:2)
>Intel 386 family
>Windows NT 3.x, i?86-*-win32
Does this mean that they are not going to support win32 any more? I still use mingw (a Windows port of gcc) as my primary compiler because it's free, and I don't have money to replace my PII 350 Windows 98 system.
Still buggy for Dreamcast (Score:3, Interesting)
Sigh (Score:5, Interesting)
Sigh, indeed... (Score:4, Informative)
Right?
Re:Sigh (Score:5, Informative)
inline (Score:5, Informative)
GCC has always treated inline as such a hint, but the heuristics for how to use the hint have changed, so some functions that used to be inlined are no longer inlined.
The kernel has some functions that *must* be inlined, not for speed but for correctness. GCC provides a different way to specify this, an "inline this function or die" flag. Development kernels use this flag.
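Roughly, the "inline this or die" spelling looks like this (the attribute is real GCC syntax; the function itself is made up):

/* With always_inline, gcc inlines the call regardless of its usual
   size heuristics instead of treating the keyword as a mere hint. */
static inline __attribute__((always_inline)) int add_one(int x)
{
    return x + 1;
}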
Re:inline (Score:3, Insightful)
-- Brian
Re:inline (Score:5, Informative)
It was a lot harder in real-mode programming, where you couldn't jump to distant code because you had to change segment registers and you had to make sure you backed them up first. Hard to guarantee with C, easy with ASM.
Besides, there are many optimizations that a compiler has to guess about. It's very hard for it to know if you're relying on the side effects of an operation. If you're looping and not doing anything, are you touching volatile memory each time (where the reads could be controlling memory-mapped hardware), or doing something else similar? That's the most obvious example. There are a ton of pages about compiler optimization. It's really quite fascinating.
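The volatile case, as a stock sketch (the address and bit layout here are invented):

#include <stdint.h>

#define STATUS_REG ((volatile uint32_t *)0x40001000u)  /* hypothetical device register */

void wait_until_ready(void)
{
    /* Because the pointer is volatile-qualified, every iteration really
       reads the hardware; without volatile, the compiler could hoist the
       load out of the loop or delete the loop entirely. */
    while ((*STATUS_REG & 0x1u) == 0)
        ;  /* busy-wait */
}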
Re:Sigh (Score:4, Interesting)
not really; it's a combination of kernel developers trying things to deal with 'intelligent' inlining, or implementing hacks when they discover an idiosyncrasy with GCC. As the gcc team 'resolves' (fixes?
The goal, though, is that using the latest kernel with the latest compiler will generate the most correct code. Simply pointing a finger at the kernel developers is incorrect; both sides can be the cause of compiler failures.
disclaimer: not a kernel developer, just a more-than-casual observer.
'fester
Re:Sigh (Score:3, Insightful)
A bug in a deprecated GCC extension (Score:3, Interesting)
I believe the plan is to add a warning in 3.4 and remove it in 3.5.
Re:Sigh (Score:5, Informative)
Re:Sigh (Score:5, Informative)
The hand written parser (Score:5, Informative)
It also exposes tons of errors in existing C++ programs, so expect lots of whining when GCC 3.4 is released.
GCC 3.4 will have precompiled headers (thanks Apple), which will speed compilation up a lot for projects that use them.
I don't have a life (Score:4, Funny)
PCH vs. compile server (Score:3, Informative)
You need to reorganize your Make files to use pch's efficiently. You should
1) Not change any of your source or header files.
2) Add a new "all.h" header including all other headers, and precompile that header and only that header, whenever any other header changes.
3) all.h should not be included directly from any file, instead compile with a special flag that "pre-includes" all.h.
4) Because of header guards (which you must use), none of the normal
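Putting items 1-3 together, a rough sketch (file names invented; -include is gcc's existing "pre-include" flag, and the exact precompile command will be whatever the 3.4 docs say):

/* all.h -- the umbrella header, the only one that gets precompiled */
#ifndef ALL_H
#define ALL_H
#include <stdio.h>
#include <stdlib.h>
#include "project_config.h"   /* hypothetical project headers */
#include "project_types.h"
#endif

/* Build sketch:
 *   gcc -x c-header all.h          # regenerate all.h.gch when any header changes
 *   gcc -include all.h -c foo.c    # compile each .c with all.h pre-included
 */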
Re:Sigh (Score:3, Informative)
Bounds Checking (Score:5, Interesting)
Although it doesn't seem to work with glibc... this is quite annoying, although it probably will be fixed and re-released in a few days
Re:Bounds Checking (Score:5, Informative)
Indeed, that SIGSEGV becomes a SIGABRT instead. This is dynamic bounds checking; it won't find anything until the bounds error occurs at runtime, so you won't find it any earlier. All it does is make sure that no bounds errors escape *without* crashing the process.
Although it doesn't seem to work with glibc... this is quite annoying, although it probably will be fixed and re-released in a few days
I guess you didn't read the documentation. This is a "feature". It breaks the C ABI, forcing you to recompile all libraries used in the program, including glibc.
Re:Bounds Checking (Score:4, Interesting)
I hear they have added some more advanced and aggressive bounds checking.
What are the run-time performance implications of this bounds checking? It sounds very nice for debugging, and a great thing to turn on even in production code that may be vulnerable to buffer overflow attacks, but it can't be free. A bit of Googling didn't turn up anything; does anyone know how expensive this is?
Be careful (Score:3, Funny)
ABI? (Score:4, Interesting)
Mostly compatible, but... (Score:5, Informative)
Re:Mostly compatible, but... (Score:2, Informative)
Matsushita MN10200, mn10200-*-*
Motorola 88000, m88k-*-*
IBM ROMP, romp-*-*
Also, some individual systems have been obsoleted:
Alpha
Interix, alpha*-*-interix*
Linux libc1, alpha*-*-linux*libc1*
Linux ECOFF, alpha*-*-linux*ecoff*
ARM
Generic a.out, arm*-*-aout*
Conix, arm*-*-conix*
"Old ABI," arm*-*-oabi
StrongARM/COFF, strongarm-*-coff*
HPPA (PA-RISC)
Generic OSF, hppa1.0-*-osf*
Generic BSD, hppa1.0-*-bsd*
HP/UX versions 7, 8, and 9,
Re:Mostly compatible, but... (Score:3, Informative)
Say what?? Don't you mean that only support for Windows NT 3.x has been obsoleted within the i386 family, which is what the changelog actually says?
Re:Mostly compatible, but... (Score:3, Insightful)
Re:Mostly compatible, but... (Score:3, Interesting)
Two things I don't understand. (Score:5, Interesting)
The preprocessor no longer accepts multi-line string literals.
Why was this removed?
And:
The <varargs.h> header, used for writing variadic functions in traditional C, still exists but will produce an error message if used.
This I don't understand at all. Does it mean we can't write void somefunc(int argc,
Someone, please explain...
Re:Two things I don't understand. (Score:5, Informative)
Why was this removed?
Link to GCC mail archive on this topic [gnu.org]. It seems like overkill, for sure.
The <varargs.h> header, used for writing variadic functions in traditional C, still exists but will produce an error message if used.
This I don't understand at all. Does it mean we can't write void somefunc(int argc,
No. The functionality is still there, it's just included via <stdarg.h> instead of <varargs.h>.
Re:Two things I don't understand. (Score:5, Interesting)
Re:Two things I don't understand. (Score:5, Informative)
The preprocessor no longer accepts multi-line string literals.
Standard C doesn't allow newline characters in strings. You can still put in '\n' escape characters to represent newlines.
Does it mean we can't write void somefunc(int argc, ...) style funcs any more?
You can, but in the implementation, you need to use the Standard C header stdarg.h instead of the traditional C header varargs.h.
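A minimal stdarg.h version of such a function (the function itself is just an example):

#include <stdarg.h>

/* Sum 'count' additional int arguments, the Standard C way. */
int sum_ints(int count, ...)
{
    va_list ap;
    int i, total = 0;

    va_start(ap, count);          /* 'count' is the last named parameter */
    for (i = 0; i < count; i++)
        total += va_arg(ap, int);
    va_end(ap);
    return total;
}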
Re:Two things I don't understand. (Score:3, Informative)
char *A="A
B";
char *B="B";
char *C="C";
You can't do that anymore. See how many problems this can cause if you did not have the second quote on the third line?
Re:Two things I don't understand. (Score:3, Insightful)
You can still write that using escape sequences, but not with a raw newline inside the literal.
Not to mention the fact that it's unclear exactly what A contains. I can see that it begins with "A" and ends with "B", but what's in between? Some amount of whitespace, clearly, including a newline, but there could be an arbitrary number of tabs and spaces in there as well.
Writing the newline out explicitly is much better code.
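For instance, either of these spellings (variable names mine) keeps the literal on one logical line and the newline explicit:

char *A_escaped = "A\nB";          /* escape sequence */
char *A_concat  = "A\n"
                  "B";             /* adjacent string literals are concatenated */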
Re:Two things I don't understand. (Score:2)
The second is talking about the `varargs.h' header, the original way of doing variadic functions.
Maybe it means varargs.h (Score:3, Informative)
If you use stdarg.h, nothing should break.
I think here "the header" means "varargs.h", which is the old way for writing variadic functions. Should not appear in any reasonably new (post 1995) code.
SuSE already uses it (Score:5, Interesting)
I just got SuSE 8.2 installed this week, which includes GCC 3.3, and noticed that the kernel is also compiled with GCC 3.3. From 'dmesg':
Re:SuSE already uses it (Score:2)
Yes, there is a difference.
Re:SuSE already uses it (Score:2, Insightful)
Yes, it does say 'prerelease', which I didn't miss (I did type that dmesg string in, y'know). The date isn't too far in the past, though, so differences between that and the release version shouldn't be that great.
Should be quite stable (Score:2)
Of course, if you want a REALLY reliable compiler, don't use a dot-zero release, whether the official release or the one from a dot-zero release of a distribution.
Compile-time performance (Score:5, Informative)
The new support for precompiled headers will help to some extent but is by no means a panacea. There are a lot of restrictions and caveats. The good news is that the GCC team are very well aware of the compile-time issue and (according to extensive discussions on the mailing list a few weeks back) will be making it a high priority for the next (3.4) release.
Incidentally, for those wanting a nice free-beer-and-speech IDE to use with this, the first meaningful release of the Eclipse CDT is at release-candidate stage and is looking good.
Re:Compile-time performance (Score:2)
Of course, it would still be nice for gcc not to get slower, or maybe even to get faster
Re:Compile-time performance (Score:2)
Re:Compile-time performance (Score:3, Insightful)
gcc 3.x compilers have serious C++ perfs issues (Score:5, Informative)
Re:gcc 3.x compilers have serious C++ perfs issues (Score:4, Informative)
Re:gcc 3.x compilers have serious C++ perfs issues (Score:2)
Re:gcc 3.x compilers have serious C++ perfs issues (Score:3, Informative)
Than
We've fixed that for 3.4, and for 3.3 to some (Score:5, Insightful)
"...to some extent." Why give a Subject: line textbox that won't let me use all of it? Grrr.
Anyhow. One of the big speed hits for iostream code was the formatting routines. Some other reply has a subject like "if you're using fstream you're not interested in performance anyhow," which is so wrongheaded I won't even bother to read it. There's no reason why iostreams code shouldn't be faster than the equivalent stdio code: the choice of formatting operations is done at compile-time for iostreams, but stdio has to parse the little "%-whatever" formatting specs at runtime.
However, many iostreams libraries are implemented as small layers on top of stdio for portability and compatibility, which means that particular implementation will always be slower.
We were doing something similar until recently. Not a complete layer on top of stdio, but some of the formatting routines were being used for correctness' sake. We all knew it sucked, but none of the 6 maintainers had time to do anything about it, and the rest of the world (that includes y'all, /.) was content to bitch about it rather than submit patches. Finally, Jerry Quinn started a series of rewrites and replacements of that section of code, aimed at bringing performance back to 2.x levels. One of the newer maintainers, Paolo Carlini, has been working unceasingly at iostream and string performance since.
So, all of that will be in 3.4. Chunks of it are also in 3.3, but not all. (I don't recall exactly how much.)
getc_unlocked (Score:2)
As for the mmap option... IMHO it is a little bit complicated, and glibc seems to use mmap on read-only streams anyway.
Everyone loves GCC? (Score:4, Informative)
For Java, Sun One Studio (crappy name)/Netbeans (inaccurate name) floats my boat. There is a light C++ module for Netbeans but I haven't tried it...no need.
Give Kylix a try [borland.com] - there is a free version you know:
Borland® Kylix(TM) 3 Open Edition delivers an integrated ANSI/ISO C++ and Delphi(TM) language solution for building powerful open-source applications for Linux®, licensed under the GNU General Public License
Download it here [borland.com].
Re:Everyone loves GCC? (Score:3, Interesting)
Re: (Score:2)
nonnull function attribute (Score:4, Informative)
Seems useful, though I suspect many dereferenced pointers are set to NULL at runtime, and so not spottable during build.
Note: I didn't change the wording above at the [sic], but I believe that this should read "all pointer arguments" instead.
Re:nonnull function attribute (Score:4, Interesting)
It's possible to check at compile time. It's not so much that the compiler detects whether a parameter is null or not at compile time, but whether it can be. For example:
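(a sketch of what the example might look like -- do_work's signature is a guess, but nonnull is the real GCC 3.3 attribute)

#include <stdlib.h>

void do_work(char *buf) __attribute__((nonnull(1)));

void caller(void)
{
    char *p = malloc(64);
    do_work(p);    /* p may still be NULL here */
}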
can trigger a warning or error, because malloc() does not return a "nonnull" pointer, and so passing p to do_work is dangerous. On the other hand, given code that checks p against NULL before the call, the compiler can work out that the call is safe. This is the kind of thing LCLint, for example, can do with its annotations.
Intel C++ Compiler 7.1 Rules (Score:4, Interesting)
Re:Intel C++ Compiler 7.1 Rules (Score:2, Informative)
Of course as always, Gcc is still your number 1 choice for anything other than x86 compilation.
Re:Intel C++ Compiler 7.1 Rules (Score:3, Informative)
Re:Intel C++ Compiler 7.1 Rules (Score:3, Insightful)
How's its performance on SPARC III? Does it optimize well for the Athlons? How about the PowerPC CPUs? And the MIPS CPUs? Does it cross-compile for the IBM mainframes? Does it run on them?
Although it is not 100% compatible with all the gcc features, and therefore can't compile the Linux kernel, ...
Oh.
How about the object code? Can its object code be linked to code compiled by gcc, or is using this an all-or-nothing proposition?
I hope the day
Another one? WHY?!!! (Score:2)
Why, for the love of god, is there a new version of the de facto standard C compiler every week or two? Why can't binary compatibility be maintained? WHAT sort of changes and development occur in the land of compiling a language that (as far as I know) isn't changing??!!
This isn't a rant--these are serious questions. I don't understand why so many changes are being done to a com
From the changes (Score:2, Insightful)
The C and Objective-C compilers no longer accept the "Naming Types" extension (typedef foo = bar); it was already unavailable in C++. Code which uses it will need to be changed to use the "typeof" extension instead: typedef typeof(bar) foo. (We have removed this extension without a period of deprecation because it has caused the compiler to crash since version 3.0 and no one noticed until very recently. Thus we conclude it is not in widespread use.)
Or rather, gcc version >= 3.0 is not in widespread u
So when do we get a working debugger for g++? (Score:3, Interesting)
Re:this is all well and good (Score:2, Informative)
There is one. It's called Visual SlickEdit, and it costs $249.
Re:this is all well and good (Score:2)
Oh man, don't spend that money....
anjuta [sourceforge.net] is better and much more usable than Visual SlickEdit as a Linux-based GCC IDE for C and C++, and will create your project's Makefiles and everything else automatically.
I've used it for 6 months and it's easier than anything Microsoft has ever written in their IDE, and is certainly better than SlickEdit when you compare the costs.
I strongly recommend that EVERY programmer download and try it right now
Re:this is all well and good (Score:2)
Could that be because most programmers don't really do that much printing of code? I haven't printed out any source since my form-feed dot matrix printer died years and years ago.
That said, pipe your code through GNU enscript [people.ssh.fi] to print it, and it should generate some decent syntax highlighting. Also, if you're interested in publishing code with highlighting to a web page, Vim can outp
Re: (Score:2)
Re:this is all well and good (Score:2)
Re:this is all well and good (Score:2, Interesting)
Re:this is all well and good (Score:3, Insightful)
Borland C++ was okay 6 years ago, but after VC6 came
Re:this is all well and good (Score:3, Informative)
False. GNU make checks the last file modification time. If you have any problems getting consistent incremental builds, it is probably because the Makefile has incorrect dependency information. For example, if a header file changes the size of a struct, every source file that includes it should be recompiled. An automated tool like "gcc -MM" is the best way to ensure the dependencies are correct.
Re:this is all well and good (Score:4, Interesting)
Are you sure you are not getting precedence confused with order of evaluation between sequence points?
C++ has fairly flexible rules in that regard, the much discussed (on comp.lang.c++) undefined behavior and implementation-dependent behaviour. For example, i=i++; invokes undefined behaviour that may vary between compiler settings. My instinct would be that this is more likely to be the problem than a compiler error in most cases. You should post the problem code to see if that is the case.
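To spell out the i=i++ case, a trivial made-up demo:

void sequence_point_demo(void)
{
    int i = 1;
    i = i++;      /* undefined behaviour: i is modified twice between
                     sequence points, so any result is possible */
    i = i + 1;    /* well-defined */
    i++;          /* also well-defined */
}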
Re:this is all well and good (Score:5, Funny)
Nah.
Funny as hell, though - Visual Studio is an absolute joy to use.
Compared to what?
Having your head nailed to a table?
Re:this is all well and good (Score:5, Interesting)
My favorite feature was the scripting ability. You could write VB Scripts (or start by recording them as a macro) to accomplish tasks. I wrote several VB Scripts that wrote out comments in the code.
KDevelop is the only thing I have seen that's close to Visual Studio. I have C++ Builder 3.0 Professional at home, but I still like the design and ease of use of Visual Studio. The C++ Builder interface is missing some things--like scripting.
Re:this is all well and good (Score:2)
I think the argument can be made (Score:4, Interesting)
that open source requires more skill on the part of the developer to get through the learning curve.
A greater amount of knowledge about what is happening at all levels is mandatory to make that GNU/Linux system happen.
Whether this is a bug or a feature probably depends on your current location on the learning curve. The more I interact with open source, the more I like the fact that there are relatively fewer secrets about what is occurring, a feature that seems lost by the time you reach the West Coast...
Re:I think the argument can be made (Score:3, Insightful)
No offense, but that's just so much self-justification. Personally, I'm growing tired of that kind of faux elitist stance.
Re:this is all well and good (Score:3, Informative)
Amen. I wouldn't have thought it possible to write a product that makes CVS look like a sensible choice for source control if I hadn't seen it with my own eyes.
We used to use SourceSafe, now we use CVS. CVS is an horrific train-wreck of an application but compared to VSS it's a triumph of software engineering.
slower than the last release.... (Score:3, Interesting)
The following changes have been made to the IA-32/x86-64 port:
SSE2 and 3dNOW! intrinsics are now supported.
Support for thread local storage has been added to the IA-32 and x86-64 ports.
The x86-64 port has been significantly improved.
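For the SSE2 intrinsics mentioned above, usage looks roughly like this (the function name is made up; build with -msse2):

#include <emmintrin.h>   /* SSE2 intrinsics header */

/* Add four packed 32-bit integers in a single instruction. */
__m128i add_four_ints(__m128i a, __m128i b)
{
    return _mm_add_epi32(a, b);
}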
If you want compile-time performance, look at
Precompiled Headers [gnu.org]
Re:slower than the last release.... (Score:4, Funny)
Re:Precompiled header support (Score:3, Informative)
Using Precompiled Headers
Often large projects have many header files that are included in every source file. The time the compiler takes to process these header files over and over again can account for nearly all of the time required to build the project. To make builds faster, GCC allows users to `precompile' a header file; then, if builds can use the precompiled header file they will be much faster.
Re:Precompiled header support (Score:3, Informative)
Geoffrey Keating of Apple Computer, Inc., with support from Red Hat, Inc., has contributed a precompiled header implementation that can dramatically speed up compilation of some projects.
GCC will have it for 3.4.
Re:Precompiled header support (Score:3, Interesting)
If you follow the advice of any good programming book, you shouldn't be including unnecessary header files anyway.
One of the reasons you need precompiled header support in Windows is that to write any meaningful Windows program, you need to include <windows.h>, which includes everything.
Re:Precompiled header support (Score:2, Informative)
Including everything in a precompiled header file will in fact make the compilation much slower than if you didn't use precompiled headers at all.
<windows.h> is a very bad example of how to make an efficient precompiled header. Granted, it may have been what started the entire precompiled header business, but the time won from precompiled headers in general is greater than what you'd think.
The key to create a good preco
Re:Precompiled header support (Score:3, Insightful)
It's not necessary, in the sense that you'll still get correct code without it.
If you follow the advice of any good programming book, you shouldn't be including unnecessary header files anyway.
I don't. Precompiled headers just means that I get my executable more quickly. Some headers are bigger than your usual C or C++ files!
Re:Hmph (Score:3, Insightful)
And I hope you aren't putting "using namespace std" in new code. Ugh.
Re:Hmph (Score:3, Funny)
The line I remembered was "things you've been doing for years that seemed OK are now suddenly wrong." I consider myself a pretty astute programmer, but every new compiler version seems to have some new warning to add.
As a result, I consider my relationship with my compiler no less volatile than that with my wife. Keeps me on my toes anyway.
-HopeOS
Re:Hmph (Score:5, Interesting)
Now I understand what Bjarne Stroustrup wrote, when he described
The standard hasn't changed since 1998.
The extensions are, in many cases, older than the standard. Now they conflict with rules added by the standard. One or the other has to give. And, of course, no matter what happens, somebody out there will declare that GCC "obviously" made the wrong choice.
If you think it's easy, why don't you give it a try? Hundreds of GCC developers await your contributions on the gcc-patches mailing list.
If you don't like it, you should demand your money back.
Again, the standard was published in 1998. The three changes you describe were decided upon even before then, and they haven't changed since. You've had 5 years to walk down to the corner bookstore and buy a decent book, or search on the web for "changes to C++ since its standardization". None of those changes are due to GCC, and trying to shift the blame to GCC only points out your employer's laziness.
You've had half a decade. Catch the hell up.