GCC 3.0 Released
Phil Edwards, GCC Developer, wrote in to say: "The first major release of the GNU Compiler Collection in nine years, GCC 3.0, is finished. There is a long list of new features, including a new x86 back-end, a new C++ library, and a new C++ ABI, to pick my three favorites. Note that the GCC project does not distribute binary packages, only source. And right now the server is heavily loaded, so if you intend to get the source tarball, please /. one of the many ftp mirror sites instead. Plans for 2.95.4 (bugfix release), 3.0.1 (bugfix release), and 3.1 (more user-visible features) are all in progress." MartinG points to this mailing list message announcing the upload.
Re:Why does GCC require so much memory? (Score:2)
Since you seem to have a technical inclination, you might want to look at the source and see for yourself. Even a grep/diff between the present and the last two versions may prove enlightening.
pingmeep
Re:Why does GCC require so much memory? (Score:2)
On the other hand, support for templates in MSVC++ is notoriously poor. For one thing, the MS compiler & linker don't support automatic template instantiation. You may not have noticed this because the IDE does the bookkeeping for you and generates the template declarations needed by the compiler. For some reason, most compilers are faster when you turn off automatic instantiation and explicitly list the template instances you need.
Also, perhaps the MS compiler isn't performing some of the same optimizations on template code.
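For illustration, here's a minimal sketch of what "explicitly listing the instances" looks like in standard C++ (the twice template is invented for the example; -fno-implicit-templates is the g++ flag that turns automatic instantiation off):

// Build with something like: g++ -fno-implicit-templates -c twice.cc
template <typename T>
T twice(T x) { return x + x; }

// Explicit instantiations: the compiler emits exactly these,
// so no collector has to rediscover them at link time.
template int twice<int>(int);
template double twice<double>(double);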
Re:so (Score:2)
GCC, as most other compilers, bootstraps itself, that is, a small assembler program compiles a subset of the language in which a compiler for the whole language is then implemented (xgcc).
No, it doesn't. That would be grossly nonportable. It uses your existing C compiler to compile itself, without optimization, and then compiles itself with that (to get the benefit of its optimizations &c); then it recompiles itself with *that* and compares the last two. If you don't have a C compiler, you can't build GCC (equally, if you don't have an Ada compiler, you can't build GNAT).
Re:Performance increases in final code? (Score:3)
2.95:
hello world
0.01user 0.00system 0:00.00elapsed 142%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (52major+9minor)pagefaults 0swaps
3.0:
hello world
0.01user 0.00system 0:00.00elapsed 125%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (31major+11minor)pagefaults 0swaps
Re:GJC! GJC! (Score:2)
GJC [bell-labs.com] is indeed cool, and it had the name before GCJ [gnu.org]. Watch those acronyms. :)
BTW, gcj and gjc work together. :)
time for .0? (Score:2)
Has anyone done any performance tests? Some benchmarks, maybe? It would be really interesting to see this GCC against the upcoming commercial Intel C/C++ compiler.
Re:New libraries... Wahoo! (Score:2)
Now, stringstream still isn't in the standard library in Red Hat 7.1! One can only say that it's taken too long.
Re:C++ standards (Score:2)
I'm greatly anticipating the new C++ library. Finally we will have stringstream! For me the C++ library was the largest hole in the C++ implementation. The compiler itself has been quite good for some time, feature-wise.
Re:Why does GCC require so much memory? (Score:5)
This is not a bad thing. In fact, it is a very valid (I'd say good!) design decision. Explanation below.
The reason explicit instantiation is so much faster is that the compiler doesn't have to compile your code twice. Automatic template instantiation requires some sort of support outside the compiler proper. For all the gory details, I recommend Stan Lippman's Inside the C++ Object Model. It's a little out of date and inaccurate (or more properly, misleading) at times, but for anyone interested in why C++ works the way it does and what sort of decisions the compiler makes when generating code, it's a great book. It dispels many of the common myths about C++'s performance and makes an honest evaluation of the cases where performance is negatively affected.
But I digress. One of the most popular strategies for automatic template instantiation involves some sort of "collector" program. The basic idea is to collect all the object files that go into the final link and look for undefined symbols that refer to template code. The GNU collect2 program does this for g++. Once the symbols have been identified, the compiler needs some way of knowing how to recompile the source files that contain the template elements. Strategies include using control files generated by the compiler and collector, entries in the object files themselves (strings or symbols are common) or a combination of the two. Other strategies are possible as well. The driver script (the IDE in VC++) gathers this information and reinvokes the compiler to recompile the source files containing the needed template code, passing flags to tell the compiler to instantiate particular templates.
After having implemented some of this, I have to tell you that it is all a tremendous pain in the neck. It's also quite, quite convenient and necessary for the user. :)
As for the MS IDE, that's just another strategy for handling the problem. No compiler that I know of fully handles automatic template instantiation by itself. The closest that a compiler could come to this would be to aggregate the collector actions into the compiler as a separate phase. This is really no different than running a separate program, and the "compiler" becomes the driver script (think g++), with the compiler proper (i.e. translation and transformation) being but one (usually, more than one) phase of compilation.
Is this not true under Windows? I'm curious, as they should have many of the same problems. Optimizing template code is expensive because there is a lot of it and most of it is inlined. Inlining is not as trivial as one might initially expect, and it has large implications for transformation (optimization). Inlining usually greatly expands the size and scope of functions that are transformed. There are more nodes, more symbols and more analysis bookkeeping to handle. Many compiler algorithms have complexity of N^2 or worse (lots are NP-complete!) so things get dicey as code size expands. Strangely enough, this is also why transformation can speed up compilation -- it often removes nodes and symbols from the program!
Re:FOR loops: a question, ANSI C++, C++98, C++99.. (Score:2)
There's a nifty preprocessor hack you can use to get around this problem in compilers that don't properly support the standard:
#define for if(0); else for

This causes the scope of the for control variable to be correct, without affecting other control flow semantics. The only practical disadvantage is your compiler may warn about the constant value in a conditional. And your sensibilities may be offended by using the preprocessor to redefine a keyword. :)
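To see the effect (a tiny made-up example): each for now expands to "if(0); else for (...)", so on a pre-standard compiler the control variable's scope ends with the else clause instead of leaking into the enclosing block:

for (int i = 0; i < 3; i++) { /* ... */ }
for (int i = 0; i < 3; i++) { /* ... */ }  /* no "redefinition of i" error */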
partially correct (HP-UX) -you need a K&R compiler (Score:4)
AFAIK, gcc requires a *K&R* C compiler, as documented in the first edition of The C Programming Language. It need not support function prototypes or the void type (I think).
On UNIX systems that do not natively support and include gcc, one uses the system's C compiler to generate xgcc, which is GNU C (but not compiled by GNU C). One then uses xgcc to generate a GNU-compiled gcc. I don't know why xgcc is not normally installed and used, but I assume that it would be an ease-of-debugging issue (and you can also debug gcc-optimized code, which most vendor compilers will not do).
HP-UX natively includes a K&R (non-ANSI) C compiler. It is almost useless, but it will successfully compile gcc. On most other commercial UNIX systems, if you lack a compiler, you must rely upon someone who has a compiler to generate a version of gcc for you (which accounts for the popularity of packaged gcc versions on many platforms). This can also be complicated by licensing of the system include files and libraries.
Re:pragmas (Score:2)
#pragma once
It's not good, and don't call me Shirley.
Seriously, though, the gcc approach to this is, I think, better, if rather more verbose and awkward-looking. As I understand it, gcc looks for include guards, code like
#ifndef _MYFILE_H
#define _MYFILE_H
...
#endif
and if it finds it, it treats that like #pragma once for the given file. So there's no incompatibility with compilers that don't support #pragma once. Meanwhile, if for some necessary evil you need to include MyFile.h again, simply #undef _MYFILE_H and go.
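A tiny sketch of that last trick, reusing the guard name from the example above:

#include "MyFile.h"   /* first inclusion; defines _MYFILE_H */
#undef _MYFILE_H      /* clear the guard... */
#include "MyFile.h"   /* ...so the file is read and processed again */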
It is worse than that... (Score:2)
I'd prefer them to switch to 3.0 at the earliest possible opportunity, though.
"a repeat of the gcc-2.96-REDHAT fiasco?" (Score:3)
In most other ways, the unofficial gcc 2.96 is an improvement over 2.95, and for C code, compatibility is as good as between any two releases of gcc. Mostly, GCC 2.96 catches a few bugs that 2.95 failed to notice.
Re:g++ 3.0 compilation speed (Score:3)
Other programs can compile much faster or much slower, depending on what the bottleneck used to be and what features they exercise.
There are several implementations of precompiled headers for GCC, which are likely to give a large boost in compilation speed when one of them is selected for inclusion.
Re:How about PGCC and EGCS.. (Score:4)
I don't know about PGCC, but since GCC 3.0 has a brand new ia32 backend with focus on Pentium II performance, chances are that PGCC is no longer relevant.
What about Redhat's users though! (Score:2)
It may be great for GCC that RedHat provided them with a wide-scale test of their software, but what about users of RedHat who were stuck with a buggy version of GCC for months?
Re:so (Score:2)
So, since the original gcc was non-gpl derived, then everything built with gcc, while it may be gpl derived, is truly non-gpl derived.
So, if you wanna build an application based on gpl code and not distribute it does that mean it's okay?
(Yes...for the GPL or death people, this is a joke)
Why does GCC require so much memory? (Score:3)
I remember trying to build TAO (the ACE ORB) a few years ago. Under Linux using GCC, a couple of files consumed all available virtual memory (requiring 250MB). Even when I had sufficient memory, it would take my 128MB machine 45 to 70 minutes to compile those particular files (lots of swapping). The same files under Windows with MSVC would take less than 20MB and thus compile in under a minute. What is GCC doing that requires so much memory? Is it me, or is this new inliner (that is such an improvement) still a memory-hungry hog? Why? (Technical answers preferred.)
Re:FOR loops: a question, ANSI C++, C++98, C++99.. (Score:2)
The Irix C++ compiler has this problem as well, but worse than VC++, because it produced an error if you declared a variable more than once (apparently VC++ silently allows this, which is why I was not even aware it did the scoping wrong until I tested it explicitly). The Irix problem meant that two for loops with the same "local" index variable would not compile, and since we also had compilers that obeyed the C++ rules, we had to insert the "int" declaration before the first for loop. This also broke a macro that relied on a local variable.
Fortunately the switch "-LANG:ansi-for-init-scope=ON" turns it on (they seem to have figured out how to be even more verbose than gcc, sigh...) I just recently learned you turn this off (ie switch to C++ mode) with
Re:Umm... (Score:2)
This isn't supposed to take 3 months like M$ sometimes seems to think it should.
The way programmers are pushed today, the first x.0 is mostly more of a beta than a release.
Not really a problem if you get more time for bugfixing, but since the company behind them doesn't earn money unless it's out there, we won't see a change before that changes.
// yendor
--
It could be coffee.... or it could just be some warm brown liquid containing lots of caffeine.
Re:Here's the funny thing... (Score:2)
I believe those reasons pretty much account both for why it doesn't matter and for why Red Hat made the decisions they did.
Re:how much stuff with this break? (Score:2)
Furthermore, even the gcc guys have admitted that using gcc 2.96 (which was really the name of the development branch) sped up development because of the bugfixes that were generated.
Why don't we all complain about them including glibc 2.2 as well? I mean, that breaks binary compatibility with all the other distributions too.
Re:The GCC developers should thank RedHat (Score:2)
(Well, actually for a few weeks from now. This is a *.0 release, so I'll give it a bit of time to settle out, and use 2.95x for now.)
Caution: Now approaching the (technological) singularity.
Re:Umm... (Score:4)
Having said that, this is the first time (that I can remember) that I've seen an officially-planned x.0.1 bugfix release announced at the same time as the x.0 release itself.
Cheers,
Tim
Re:how much stuff with this break? (Score:4)
Not much, hopefully.
The only major thing that can affect the binary packages is the new C++ ABI. But for plain C programs, there should be no big difference. Most of the Linux programs and libraries are written in C and should not be affected significantly. This could be a problem for Qt and KDE packages, though.
See also the list of caveats [gnu.org] on the GCC web site.
Re:Still the slowest compiler around? (Score:2)
It has been discussed many times, and there are some experimental implementations.
Or is it really so difficult to correctly implement it?
It is not only difficult to correctly implement it, it is also difficult to agree on what it should do. People say "precompiled headers," but what the hell does that mean? What do you compile header files into? Do you compile a single header file into some kind of "precompiled header object file" (doesn't save you that much but is easy to use), or do you use some kind of per-project database (saves you more but may be more difficult to use), or what?
Note also that GCC developers use Makefiles, while most commercial Windows developers use an integrated IDE. It is easier to do all kinds of performance tricks when everything is integrated in one big program that you control.
I know there are many people interested in the problem, and I believe some may actually be working on it. It just isn't that easy if you want something that isn't a kludge. My impression (un-verified) is that many of the existing compilers that support pre-compiled header files use various kludges that may be useful but not necessarily well-designed. A pre-compiled header file solution for GCC has to be something we can live with for many years.
Re:GJC! GJC! - GCJ! GCJ! (Score:4)
We did consider the name GJC (back in '96 when the project was started), but for some reason I don't remember (I think it was trademark-related) we decided on GCJ.
I'm very glad to see GCJ in a mainstream GCC release, and hope it will finally get the attention I think it deserves.
Re:Here's the funny thing... (Score:2)
I doubt it. There *is* no C99 compliant compiler... and since C and C++ standards almost always seem to contradict themselves in one or two spots, it's unlikely that there ever will be one.
And it's not such a bad thing - the face="" parameter to the font tag is invalid HTML. I guess the web in general should be red in the face?
--
Evan "And I'll invent *another* dozen specs *after* lunch!" E.
Re:Linux kernel (Score:2)
Good thing I didn't put it on our production server (no I didn't really consider it).
Re:FOR loops: a question, ANSI C++, C++98, C++99.. (Score:3)
This is per the ANSI/ISO C++ standard.
Re:GJC! GJC! (Score:2)
It could be. If I'm understanding it correctly, many common operations have to be performed at runtime anyway (i.e. checking type safety for downcasts, which is usually the case if you use the collection classes, or checking array boundaries, etc.). Please also consider that critical sections are already implemented natively by the Java environments you can find out there, and I doubt these are going to gain anything...
There probably will be a speedup because of optimizations (i.e. inlining, or loop unrolling optimized for the specific platform), but I'd be far more cautious before making enthusiastic statements like that.
Re:its all quite unambiguous... (Score:2)
{
    int i;
    {
        int i;
    }
}
Is this legal, or a redefinition?
Re:Red Hat's problem... (Score:2)
Exactly. Now, step back and think about WHEN Red Hat 7.0 was released.... Red Hat had to make a hard decision: release a 1.5-year-old C++ compiler that did not track the standard, or release the latest C++ off of the 3.0 development snapshots. They did the latter because they felt it would serve more of their customers' interests, and they had the technical staff to back that decision up.
Then, they got burned from both sides. The folks who were writing non-standard C++ complained because their programs no longer compiled. The folks who were tracking the GCC development and the ANSI standard complained that it did not go far enough.
The kicker is that, had they been Sun, releasing acc, people would have groaned because their code broke, put a bunch of #ifdefs in their code (or modified their autoconfigs) and gone on with life. But, because we have a porthole into the development process, we feel we're qualified to second-guess the distribution-creation process. Personally, I'm impressed that Red Hat (or Debian or SuSE, etc.) can package a distribution at all, given the huge number of projects and no real coordination between them.
--
Aaron Sherman (ajs@ajs.com)
Re:Red Hat's problem... (Score:2)
Red Hat's 2.96 was a pre-release of 3.0, which they released because they felt that their customers needed many of those features, and that back-fitting them to 2.95 would be too large a task for too little return to the GCC effort (in which, recall, Cygnus is a MAJOR participant).
--
Aaron Sherman (ajs@ajs.com)
Re:Red Hat's problem... (Score:4)
You have to understand, Red Hat bowed to their customers. Many of their C++ customers told them they needed ANSI C++ compliance badly. GCC 2.96 offered that.
Red Hat has a history of working hard on the compiler, and distributing a custom version. They were the first distribution to ship egcs (remember, 6.2 ran egcs for the C++ compiler). They also did a lot of work on making egcs work for the Linux kernel.
--
Aaron Sherman (ajs@ajs.com)
Re:FOR loops: a question, ANSI C++, C++98, C++99.. (Score:2)
Which is why I always code these sorts of loops as:

int i;
for (i = 0; i < x; i++) {
    ...
}
for (i = 0; i < x2; i++) {
    ...
}

Just to avoid problems on either compiler.
Re:New libraries... Wahoo! (Score:2)
There are two C++ libraries: v2 and v3. Only strstreams were available in v2. v2 has been dead for a couple of years now; it gets bare-minimum maintenance while everybody works on v3.
GCC 2.9x, including the RH versions, ship with v2. Many people have written their own implementation of stringstreams for use with 2.9x.
v3 has had stringstreams for quite a while, but GCC 3.0 is the first official release to ship with v3.
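For anyone who hasn't used them yet, here's roughly what v3's stringstreams buy you (a minimal standard-C++ sketch, nothing GCC-specific assumed):

#include <sstream>
#include <iostream>

int main() {
    std::ostringstream out;
    out << "pi is about " << 3.14159;     // type-safe formatting into a string
    std::cout << out.str() << std::endl;  // prints "pi is about 3.14159"

    std::istringstream in("42 7");
    int a, b;
    in >> a >> b;                         // parse back out: a == 42, b == 7
    return 0;
}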
Not just the name-mangling algorithm, either. (Score:2)
My girlfriend just asked me this question. Here was my answer:
An ABI for a platform/language/environment combination specifies things like the byte sex (endianness), the location of certain global variables (global offset pointers, etc), which registers are used for passing parameters during a function call, whether it's the calling function or the "callee" function that saves and restores register contents during function calls (and which registers can be ignored), the order of parameters passed on the stack... basically all the things that have to be agreed upon at the bitwise level in order for you to use Compiler A to make foo.o, and me to use Compiler B to make bar.o, and then to be able to link foo.o and bar.o together successfully. Usually for C++, everything had to be done with the same compiler because there wasn't an ABI that everyone could agree on. Now various vendors can use various compilers and this stuff will "just work" on IA-64 families.
Re:so (Score:3)
My theory, however, involves a freak accident involving a cage full of monkeys, a box of hand-held hole-punchers, a large stack of stiff paper and a punchcard reader.
Re:how much stuff with this break? (Score:2)
And isn't it because of the policies of the gcc developers, and of the developers of the Linux kernel itself, that Red Hat had to also supply their "kgcc" secondary system-space compiler?
Or is there someone out there who thinks those people are obligated to support the "gcc-2.96-REDHAT" stuff?
Re:how much stuff with this break? (Score:2)
Red Hat's problem... (Score:4)
Or will Red Hat users now be forced to continue using what has become obsolete software?
Re:its all quite unambiguous... (Score:2)
Before C99, no version of the C language allowed a declaration in the middle of a block (i.e. between two statements), although C++ does allow it. Your explanation assumes that such a thing is permitted in C.
C99 specifically addresses how constructs like "for (int i=0; i<10; i++) STMT" should be handled; it specifies that the compiler should treat it as if there were a new enclosing block around the "for".
So C99 says that:
for (int i=0; i<10; i++) STMT
should be transformed to act like (in older C standards):
{ int i; for (i=0; i<10; i++) STMT }
C99 makes it unambiguous indeed.
(I can't speak towards the ISO C++ standard, but I would imagine that they also stipulate that the scope of the defined variables ends after STMT above.)
Re:partially correct (HP-UX) -you need a K&R compi (Score:4)
No, it does not require such a compiler; it is required to be bootstrappable with such a compiler.
And K&R did have void. They didn't have pointer to void.
On UNIX systems that do not natively support and include gcc, one uses the system's C compiler to generate xgcc, which is GNU C (but not compiled by GNU C).
Not really. xgcc is the name of the result of the stage1 and stage2 bootstraps; the stage2 one is created by the stage1 one (i.e. by GCC).
I don't know why xgcc is not normally installed and used
It effectively is, if you type "make install". The bootstrap process just ensures that the result of the stage3 bootstrap has object files identical to those of the stage2 bootstrap, which formed xgcc. In other words, that optimized GCC compiles itself into the same optimized GCC - a consistency check. Those binaries are what get installed, so it is effectively the same GCC.
but I assume that it would be an ease-of-debugging issue (and you can also debug gcc-optimized code, which most vendor compilers will not do).
Nothing to do with it. Everything after stage1 is a consistency check.
Re:so (Score:2)
Imagine that you want your compiler to support a "vertical tab" escape, '\v'. When you write the compiler, you'll have some statement somewhere that reads a character and decides what to do with it, and in that statement you'll have something like:
case '\v':
    putchar(0x0B);
    break;
Compile this code, and suddenly your compiler can recognize the vertical tab character. Now, since it can, you can simplify the above code. You modify it to:
case '\v':
    putchar('\v');
    break;
You compile this code with your new compiler and, because it can now recognize the '\v' character escape, everything works. Now you can just replace the original source with the above source and, using your compiler, it will compile. Strangely enough, however, nowhere in the source is it evident that '\v' == 0x0B!
This can be applied in nefarious ways, as well. Let's imagine that I want to install a back door in the 'login' program. I can write the code to give 'login' a backdoor, but a code audit will show that it's there. Instead, I can modify the C compiler so that when it recognizes that it's compiling 'login' it will modify the code to have a backdoor. However, as above, an audit of the C compiler source will show that this is going on.
The solution? I modify the compiler such that when it recognizes that it is compiling a compiler, it adds in both the code that recognizes the 'login' binary being compiled and adds a backdoor, and the code that recognizes a compiler and makes it modify compilers in the proper way. Then I compile the new compiler and replace its source with the old, unmodified source.
Now anyone can audit the source code for the compiler and find that it's perfectly clean. If they compile it on the system with the modified compiler, however, their compiler will have both the 'login' backdoor and will make compilers that have the backdoors included. All from clean source.
The moral of the story? It doesn't matter how trusted your source is, you always place an implicit trust in lower level utilities unless you're writing opcodes for the processor directly (and even then, a processor microcode virus isn't so far-fetched that you can completely disregard the risk). That's why your C library and your compiler, while seemingly unrelated to system security, are actually a critical part of your system's integrity.
The GCC developers should thank RedHat (Score:5)
I'm sure this release came about sooner, with a lot fewer bugs, due to Red Hat's move to use the earlier snapshot in their distro.
-Karl
My informal benchmark... (Score:3)
To my surprise, the Jikes version ran much faster, about 2X, than the native code. Only when I recompiled with GCJ with the option to skip array-bounds checking did the native version run at about the same speed as Jikes.
Re:how much stuff with this break? (Score:2)
It was a snapshot, which was then QAed and patched selectively for some time before release - it was better than any other alternative at the time, as the compiler we shipped before was rather old, and 2.95.x was buggy. We also got support for IA64, so we didn't have to use lots of different compilers for different architectures.
The only thing wrong about what happened was that we didn't properly communicate that this was a Red Hat, not an FSF, release. Other than that, it showed the power of free software by allowing us to do what we felt was needed and not being locked in - nothing wrong about that. It has served us well for two releases now - in the future, we will eventually move to gcc 3.0(.x?), but this was obviously not an available compiler back then.
PS: Since our initial release, Mandrake has done the switch as well.
Re:Here's the funny thing... (Score:2)
(Although a number of compilers still provided a non-strictly-conforming 'traditional' mode which would allow such constructs, along with various other bits and pieces that we (and our code) had all got used to)
K.
Re:In 100 Words Or Less Describe ABI (Score:4)
It's the format for putting things like function names in object files so that the linker, when fixing up function calls across object files, can match the call with the correct function.
While this isn't a problem in C (just use the function's name for christ's sake), C++ allows overloaded functions; multiple functions with the same name but different parameter lists. For the linker to match the correct call to the correct function, the parameter list needs to be munged into the function name stored in an object file.
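For instance (a sketch; the exact strings are compiler-dependent): take two overloads of a function foo. Old g++ mangled them along the lines of foo__Fi and foo__Fd, while the new ABI adopted for GCC 3.0 encodes the parameter types right into the symbol:

void foo(int x)    { }  /* roughly _Z3fooi under the new ABI */
void foo(double x) { }  /* roughly _Z3food */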
The new way of doing it is generally less clunky and takes up less space in your object files than the old way, but it is incompatible with it.
(Note that this only matters where GCC is being used as a native compiler to compile files that only need to be linked with each other. If compiling for a platform where gcc is not the native compiler (e.g. using GCC on, say, solaris, alongside the default compiler) you need to use the ABI defined for that platform to allow your object files to be linked with all the other object files you have that were compiled with the native compiler.)
Yeah, it's more than 100 words. Sue me
Twiddling my thumbs (Score:2)
Normally, I'll snag most anything off the server to live on the bleeding edge, but not this time...
5 points (Score:3)
a) printf statements and I/O inside loops in a performance benchmark? hello, McFly... you aren't really testing the compiler there.
b) gcc only as source??? (see your installation media for any free unix to get binaries, cygwin for win32, etc. etc. GNU may only distribute source but other folks can and do distribute binaries, and I'm sure gcc 3.0 binaries will be released for your platform of choice Real Soon Now)
c) FP perf: what do those numbers mean? There is no explanation given of how they're arrived at or what scale they're on. "Naked numbers unlike naked ladies aren't terribly interesting."
d) Ease of Use/Installation: Totally subjective and totally irrelevant to the merits of the compiler. Just because a preteen could install VC++ doesn't make its code any better.
e) "overhyped","not ready" gcc: ok, so you're a troll. Just try not to be flaming stupid while you're at it. If it isn't ready then why is the operating system I'm using to type this reply on built with it? Is gcc the best compiler ever, well, no, there's no such thing. Frankly I wish gcc supported something more recent in the fortran family than F77 (not that I like Fortran per se but as a scientific coder it's sort of common and stuff).
--
News for geeks in Austin: www.geekaustin.org [geekaustin.org]
Re:Umm... (Score:4)
No matter how good your QA process, the chances of catching and squashing every single bug before release are minimal.
Unless you're Microsoft; then you're an incompetent, obnoxious, FUD-spouting dinosaur, and every bug that escapes is an indictment of you, your business practices, design methodology, family heritage, preference for breakfast cereal, haircut and anything else associated with you or anyone you have ever met, slept with, laid eyes upon, or casually passed on the street.
Re:its all quite unambiguous... (Score:2)
This made some sense, in that declarations in one scope should last until the end of that scope, so it made the language simpler (for compiler writers (not users)), but it was definitely wrong in the sense that it wasn't very useful.
In summary: The standard got it right, and the original poster got it backwards.
Re:how can I keep two versions of gcc.. (Score:2)
By setting your environment variables properly, so that you only use one at a time (and running a shell alias each time you want to switch the compiler you use).
But really, that is only necessary if you use a prebuilt compiler from someone else. If you download the source and read the build instructions, you will see that it is no problem at all. I don't remember any details, but it was definitely not an issue last time I experimented a bit with the gcc source (as an experiment in using it as a backend for my own toy language, but gcc turned out to be much too difficult for a toy language :-)
Re:so (Score:5)
If you look in this Makefile, you'll see that in stages one and two it builds a program called xgcc, which it later deletes, but not before it compiles your cross-compiler.
The nice thing about doing this is that the compiler that is finally built when the entire compilation process is complete doesn't necessarily have to be "real C" in the source code. It could be a nice intermingling of any number of languages. It isn't, but it does give them that freedom.
So to review:
gcc (installed) -> xgcc -> new gcc compiler
Re:so (Score:5)
This process is somewhat similar to the beginning of the universe, which according to Perl zealots started when tiny bits of eval() statements arose from quantum fluctuations. These immediately produced more and more eval()s, resulting in a big bang of code. Eventually, other functions appeared, forming the Universe as we know today.
There is also a controversial theory which asserts that the first GCC was written with assembler, and that the first assembler was written in binary code by Real Men (TM), but the evidence for this is questionable.
Re:Red Hat's problem... (Score:5)
The existence of a new version of software does not (unless you're dealing with the ugly, ugly world of MS Office documents and their lack of backwards compatibility) automatically break older versions. You will not go bald, get cancer, or get attacked by a pack of rabid dogs just because you're using an older version of gcc. You will not see:
[erasmus@localhost ~]$ gcc -o test test.c -Wall
gcc: Version 3.0 has been released. You must upgrade. Sorry.
Furthermore, from the Slashdot blurb (right at the top of the page -- you don't even have to click a link), we've got the following:
Plans for 2.95.4 (bugfix release), 3.0.1 (bugfix release), and 3.1 (more user-visible features) are all in progress.
In short, the 2.9x line (which Redhat admittedly bastardized a bit by grabbing a snapshot and calling it 2.96) will still be fixed for bugs.
Also, for a production system, new, untested code is considered unacceptable. There are bugs in the 3.0 version of gcc, period. Over time, they will get fixed. But just like running an experimental kernel (or even the very first new stable release of a previously experimental kernel) is a great way to shoot yourself in the foot, you don't want to jump to gcc 3.0 unless you've got a reason to use it. And people who have a reason will generally be able to locate and install a copy of gcc 3.0, anyway. Hell, give it a few weeks and they should be able to locate and install an RPMed copy of gcc 3.0. And I'm sure there are people out there who will need gcc 3.0 -- it just won't be the core Redhat demographic, yet.
In short, did the people at work bitch at me for running RedHat 6.2 on our production machines? No. Would they have bitched if I had upgraded to 7.0 when it came out, broken things in the process, and used the excuse that they just need to wait a year for RedHat 7.2? Hell, yes. Even Slashdot does something similar -- there's plenty of lag between the latest version of Slashcode and what's running on Slashdot.
Re:Umm... (Score:3)
Actually, what is happening is that as the tools we create and use become more and more sophisticated (and thus more lines of code are in use), it becomes harder and harder to catch all the possible things that could go wrong in an application. With big projects like the Linux kernel, GCC, Mozilla (now in beta), KDE, Gnome, XFree86 - it is just realistic to assume that even though the developers worked very hard for a stable release, people will find bugs.
GJC! GJC! (Score:5)
I can finally write in Java and not get made fun of by my elite C++ hax0r friends!
In case you weren't aware, GCJ is the first Gnu toolset for Java, and it's not just a nasty rehash of Sun's stuff...it's JRE, JIT and NATIVE CODE COMPILER rolled into one. They have an odious refutation of the Write Once Run Anywhere credo which I don't necessarily agree with (the guy must be writing some pretty fierce code if he's had problems like he mentions; I've done distributed Java with the Swing libraries for about a year and never had a problem that wasn't related to Netscape sucking). What I care about, though, is the speedups. Finally, all my keen little utility programs I've written in clean, attractive Java code (to do stuff like rename files, play music and so on) will run as fast as OS-level stuff. I intend on compiling the sweet ass netbeans [netbeans.org] IDE as soon as they get AWT working. Maybe I'll finally be able to get it to run as fast on my shitty Celeron windows machine as it does on my MACOS lappy.
GNU TOOLS FOR LINUX: BECAUSE LINUX USERS HAVE A RIGHT TO CLEAN, ATTRACTIVE, EFFICIENT OBJECT ORIENTED CODE, TOO.