GCC 3.2 Released
bkor forwards the GCC 3.2 release announcement, without attributing it as such: "The GCC 3.2 release is now available on, or making its way to, the GNU FTP sites. The purpose of this release is to provide a stable platform for OS distributors to use when building their next OS releases. A primary objective was to stabilize the C++ ABI; we believe that the interface to the compiler and the C++ standard library are now stable. There are almost no other bug fixes or improvements in this compiler relative to GCC 3.1.1. Be aware that C++ code compiled by GCC 3.2 will not interoperate with code compiled by GCC 3.1.1. More detail about the release is available. Many people contributed to this release -- too many to name here!"
Finally, ABI stabilization. Now about optimization (Score:5, Informative)
For example, if I compile the modified Quake engine project I work on without -fno-strict-aliasing, bizarre graphical errors occur. (Or they used to; I need to check with 3.2 now.)
Or if I compile with -march=athlon I get fairly mixed results: code that sometimes segfaults for no apparent reason, etc.
Anyway, congrats to the gcc3.2 team...
Re:Finally, ABI stabilization. Now about optimizat (Score:5, Informative)
Bugs that come and go depending upon whether strict aliasing rules are assumed or not are generally due to broken code. The C standard is quite explicit about when aliasing is allowed and when it isn't. (Aliasing is when there are two or more pointers to the same region of memory. This is generally OK if the pointers are of the same type, or if an appropriate union is used. Two pointers of different types pointing to the same region of memory are generally verboten (char* is an exception).)
The aliasing rules tend to be a source of trouble since violating them was fairly common in pre-standard days. (The V6 Unix kernel used to use generic pointers -- like "register *p" -- just about everywhere, something that is prohibited under ANSI.) They exist to allow the compiler to optimize based on the assumption that only pointers of the appropriate type can be used to access a stored value, and thus that value can be assumed to be unmodified (allowing redundant accesses to be eliminated) in a larger number of cases.
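For example (a minimal sketch; the function and variables are made up), this is the kind of code the optimizer is allowed to break:

    // With strict aliasing the compiler may assume *i and *f never overlap,
    // since they have incompatible types, so it can keep *i cached across
    // the store to *f.
    long broken(long *i, float *f) {
        *i = 1;
        *f = 2.0f;     // if f really aliases i, this store is "invisible"
        return *i;     // may legally be optimized to "return 1"
    }

    // Calling it with both pointers aimed at the same object violates the
    // aliasing rules, so the result depends on the optimization level:
    //     long x;
    //     broken(&x, (float *)&x);   // undefined behaviour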
A Google search on "C aliasing" will turn up a fair amount of info on the subject.
Re:Finally, ABI stabilization. Now about optimizat (Score:2, Informative)
Even if the aliasing case is indeed a bug in the code somewhere, the fact that -march=i686 works perfectly while -march=athlon can cause X to segfault, the program to segfault, or the program to not show all the graphics leads me to believe it still needs help.
Don't get me wrong, I was happy to see the new Athlon option, but it doesn't do me much good when even fairly minimal code gets compiled incorrectly...
Re:Finally, ABI stabilization. Now about optimizat (Score:5, Informative)
Read the GCC documentation for the details on the alias analysis.
The new C standard contains very strict rules about pointers: writing into an array through a "double *" pointer and reading it back through a "long *" pointer, for example, is now undefined.
Have you tried Intel's compiler, set to maximum optimization?
Re: Finally, ABI stabilization. Now about optimiza (Score:4, Insightful)
You do know that GCC has been tested more with optimization turned on than with -O0 (no optimization)?
About two years ago, I was compiling linuxconf. The Makefiles forced -O0 (no optimization), and its author, when asked, said that "there will be errors from the compiler when I turn on optimization, so I force it to be off for everyone."
It turned out there was a bug in his code. It wasn't gcc's fault; it just showed up when you used optimization. But then, Linuxconf's code has been ugly as hell since I first saw it.
Code that won't compile (or breaks) at -O1 is crap.
Re:Finally, ABI stabilization. Now about optimizat (Score:2)
And Occam's razor doesn't seem to apply here. You're seeing a problem, and there are two equally possible sources.
Re:Finally, ABI stabilization. Now about optimizat (Score:3, Insightful)
If you have no insight whatsoever into the internals of a system -- such as is the case when viewing the inner workings of the universe from a pre-Einstein perspective -- then Occam's Razor can indeed prove useful.
But when you are told by people who understand what's going on inside what are the implications of failures appearing when compiling with certain optimizations enabled, then Occam's Razor no longer applies in the way that you think.
In fact, it applies in the exact opposite way, to wit:
A compiler represents, compared to the vast amount of code it compiles, maybe .01% of the total code involved (compiler plus code compiled by that compiler).
(The GCC compiler probably represents much less than that.)
Testing the compiler includes making sure it "correctly" compiles a substantial portion of the target software.
If the compiler offers an optimization that end users can turn on, one which is carefully documented and at least moderately well-tested, but that is known to expose bugs in the code it is compiling, yet has not been found to itself have a bug when enabled while compiling a substantial portion of the code... then the presumption has to be that the bug is in the application being compiled, not in the compiler.
Otherwise, one would presume a much larger body of software would fail in ways that are easy to track down to a compiler bug (especially in GCC's case, where the compilation phases are so transparent via RTL dumps and the like) when that optimization was enabled.
In this particular case, the kind of application bug involved is of a type that was widely deployed due to a combination of factors; so even though I know the internals of GCC argue pretty strongly that a bug in the optimization being discussed could hide most of the time and still bite only a few applications, I'd tend to lean, based on what you've said, towards a 75-25 likelihood of application bug versus GCC bug.
In short: general rules are very useful, but they cease to apply to the extent specific information is available.
Re:Finally, ABI stabilization. Now about optimizat (Score:2)
I've had many bugs that disappear with "-g" turned on.
No, no, no. You didn't just turn off optimization there, you turned on debug mode as well. Debug mode is well known to do things that regular compiles don't -- including initializing variables to zero and the like. Most coders have run into situations where compiling with debug on works and without it doesn't, and they are amongst the more difficult bugs to stomp out. But they (generally) have nothing to do with optimization. I've seen bugs from compilers before, even in things as benign as loop unrolling.
Do not mix apples and oranges. No optimization != debug mode.
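The classic example of that class of bug (a sketch, not from any particular program) is an uninitialized variable that only looks initialized because the unoptimized debug build happened to zero the stack:

    int sum_first_ten() {
        int total;                    // never initialized
        for (int i = 0; i < 10; ++i)
            total += i;               // adds to garbage in an optimized build
        return total;                 // "works" only when the stack was zeroed
    }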
-g and bugs (Score:2)
When I was at Microsoft, we would build three versions of Word: the "ship" build (full optimizations), the "debug" build (no optimizations, debug enabled, asserts enabled) and the "hybrid" build (full optimizations, no debug, but asserts still enabled). I still do this.
steveha
no (Score:2)
you could call it a bug in 2.95 (Score:3, Informative)
As an example, some code used to do things like write through a float* and then read the value back through a long* (since on 32-bit systems, both are 32-bit values). This used to work, but under the current C and C++ standards it is undefined. If the compiler does no optimization (-O0), you might get lucky and the code might still work, because you'll be physically reading and writing the same memory address with same-sized pointers. But if you allow the compiler to optimize, it will take advantage of the fact that something written through a float* can only legally be read back through a float*, and then your code breaks.
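A minimal sketch of that pattern, plus a well-defined way to get the same bit-level view (the names are made up; it assumes float and long are both 32 bits, as in the 32-bit case described above):

    #include <cstring>

    // Undefined under the aliasing rules: reads a float object through a long lvalue.
    long bits_broken(float *f) {
        return *(long *)f;
    }

    // Well-defined: copy the bytes instead. GCC optimizes the memcpy away,
    // so there is no extra runtime cost.
    long bits_ok(float f) {
        long result = 0;
        std::memcpy(&result, &f, sizeof f);
        return result;
    }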
compiles FreeBSD great (Score:2, Informative)
What about binary-only packages (Score:2, Interesting)
yes they will work (Score:5, Informative)
Re:What about binary-only packages (Score:2)
Re:What about binary-only packages (Score:2, Informative)
1. Use ldd to determine which shared libraries are used -- at least, the ones that were specified at link time. Libraries loaded at run time you'd need to determine by testing, and perhaps with strace.
2. Put copies of compatible versions of these libraries in a directory set up for this purpose.
3. Write a script that sets LD_LIBRARY_PATH to that directory, runs the program, and unsets it afterwards (a minimal sketch follows below). Don't put this directory in your regular library search path.
Then the binary should look in LD_LIBRARY_PATH first for the libraries.
If it's SUID/SGID... you'd probably need to do something more, like imprisoning the program in a chroot() jail with its own set of libraries, because ld.so will ignore the LD_LIBRARY_PATH variable in that case.
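A minimal launcher along the lines of step 3, sketched in C++ (the library directory and program path are hypothetical):

    #include <cstdio>
    #include <cstdlib>
    #include <unistd.h>

    int main(int, char **argv) {
        // Make the dynamic linker look in the directory of old libraries first.
        setenv("LD_LIBRARY_PATH", "/opt/old-libs", 1);

        // Replace this process with the old binary, passing our arguments along;
        // the modified environment only affects the program being launched.
        execv("/opt/old-apps/bin/oldprog", argv);

        std::perror("execv");   // only reached if the exec itself failed
        return 1;
    }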
GCC3.2 and GDB compatibility (Score:2, Interesting)
Re:GCC3.2 and GDB compatibility (Score:3, Informative)
That day is not here.
The reason you can't do things like useful debugging of optimized programs isn't because GCC can't produce the debug info necessary. It can, and would (IE patches exist that make it do so, and would be accepted without any trouble), but GDB can't handle the information.
This is unlikely to change unless someone whacks the GDB team upside the head. Most are embedded developers, and just don't get that the majority of GDB users don't care about cross-debugging 18 versions of the same chip, over a serial line. They want to be able to debug their desktop programs.
This is why things like namespace support, dwarf2 location expression/location list support, etc, are *not* priorities for gdb, but things like "multi-arch" (using one gdb to debug 18 OS/ISA/ABI variations of a given architecture) are *requirements* for targets not to be obsoleted.
Re:GCC3.2 and GDB compatibility (Score:3)
<HUMOR>
Darn that Microsoft! You can't even debug Linux code properly without needing an IE patch!
</HUMOR>
Thank God! (Score:5, Funny)
Good thing I didn't waste an entire f*cking week compiling Gentoo 1.3 with GCC 3.1. It would have been a STUPID WASTE of time if I had done that. Yeah, good thing I saw this coming.
GOD DAMNED PIECE OF F*CKING SH*T!
Re:Thank God! (Score:2)
Re:Thank God! (Score:4, Insightful)
Re:Thank God! (Score:3, Informative)
Re:Thank God! (Score:3, Informative)
Yes, it's very nice that it was mentioned in the announcement for GCC 3.1.1 [gnu.org].
Huh? (Score:2)
Especially since this fact was announced weeks ago. Amazing skillz of foresight ya got there.
Stabilized C++ (Score:2, Insightful)
Hopefully they will freeze the C++ interface for a long time, and Linux/BSD distributions will not have backward compatibility issues when upgrading GCC. THANK YOU SO MUCH to all the GCC developers/contributors.
UltraSparc, Linux, and RAID1 (Score:3, Informative)
The Debian 3.0r0 install went fine (although for those trying it, be sure to select "rescue" when you boot off the CD). I recompiled my kernel using egcs64 and added RAID1 support into the kernel (with RAID0 as a module). I was able to set up my RAID partitions without difficulty, and the RAID0 arrays mount just fine. Unfortunately, when I try to mount the RAID1 arrays I get an oops, and any attempts to access the RAID device after that simply hang (that's a technical term).
After a few searches on various mailing list archives, I found this post on the Linux-Sparc list [theaimsgroup.com]. I tried this particular patch and was able to mount the RAID1, but after a few minutes copying data to the drive gave me another oops.
So, one supposition was that the oops was due to a compiler bug, but since egcs64 is so old (from 1998 I think) it's not going to get fixed. So I was looking at GCC 3.1.1 yesterday and got it installed but I was unable to compile a kernel with it (make couldn't find the compiler).
Is GCC 3.2 usable for Sparc? The GCC site had a report of a successful build of 2.4.18 using GCC 3.1.1 so I expect 3.2 would also work for UltraSparc. However, I tried to get GCC 3.1 working on a Gentoo install I did on my U30 and it died a horrible death. If GCC 3.2 will work, how do I install it as a replacement for GCC 2.95 and egcs64? When I installed the debs for 3.1.1 they didn't seem to replace either GCC or egcs. Can I pass some arguments to make-kpkg to provide the location of the compiler, as well as the -m64 option for UltraSparc?
Re: UltraSparc, Linux, and RAID1 (Score:2)
Huh? Have a gcc with exactly this name in your path (check with "type -p gcc"). That didn't work?!
Just compiled a kernel with it. (Score:4, Interesting)
803130 Aug 15 13:18 vmlinuz
804713 Aug 6 09:08 vmlinuz.old
At least by a tiny bit. Those are both Linux 2.4.19 kernels built from the same configuration.
Re:Just compiled a kernel with it. (Score:3, Funny)
ABI ?? (Score:2, Interesting)
Re:ABI ?? (Score:5, Informative)
ABIs define what is necessary for two pieces of compiled code to interoperate properly. So you have OS ABIs (which define syscall interfaces, argument passing, etc.), programming language ABIs (C++'s ABI generally includes the virtual table format, name mangling format, exception handling format, etc.), and so on.
Think of it as the API defined for compiled code.
Compiled code that is compliant with a given ABI will interoperate properly with other code compliant to that ABI.
Re: ABI ?? (Score:3, Informative)
API: Application Programming Interface.
ABI: Application Binary Interface.
The ABI is the convention used when creating object files (.o): what the calling convention is at the assembly level, how symbols are named, that sort of stuff.
Doesn't compile glibc 2.2.5 (Score:3, Informative)
Too bad it isn't Friday, or else I'd just blow it off for the weekend. I'll probably look into fixing it now. (Don't worry, I'll Google first.)
Re:Doesn't compile glibc 2.2.5 (Score:2)
Looks like Ulrich Drepper didn't think the patch that was proposed was quite right, and has his own [redhat.com].
Precompiled headers (Score:4, Interesting)
Re:Precompiled headers (Score:3, Informative)
Re:Precompiled headers (Score:5, Informative)
What does this mean for OS X? (Score:4, Interesting)
AFAIK, the entirety of Jaguar is compiled with GCC 3.1 [google.com]. Replacing all the libraries with v3.2 is gonna be some mighty huge software updates...
Re: What does this mean for OS X? (Score:3, Informative)
This only affects C++ code. And only libraries and object files.
Also, it's perfectly possible to have both compilers on the same system. No need to rush for gcc-3.2, anyway.
Re:What does this mean for OS X? (Score:4, Informative)
Re:What does this mean for OS X? (Score:2, Interesting)
The problem isn't only system libraries that are based on C++, but also their developer tools, which use gcc 3.1. This means that when Apple moves to 3.2, they will have to inform everybody that all their libraries will also have to be redistributed with their programs.
In the meantime I worry that bugs in 3.1 which are fixed in 3.2 will be difficult to merge back into 3.1, which only Apple will be using. What a pain |-\
I'm not sure if drivers are affected, because they use a limited C++ subset. If they are, Apple will probably maintain 3.1 for a long time, even though they will be the only ones doing it |-\ Anyway, there is no telling if 3.2 is stable either!
Re:What does this mean for OS X? (Score:4, Informative)
Switching Compilers (Score:3, Interesting)
Re: Switching Compilers (Score:4, Informative)
If you compile programs or libraries with GCC 3.2, they won't be able to link against libraries that were compiled with prior GCC compiler versions. But this only affects C++ code! C code is unaffected.
And: this isn't really a problem if you compile on your own anyway. You don't need to "switch compilers"; just do a parallel installation. For example, ../gcc-3.2/configure --program-suffix=32 --prefix=/usr && make bootstrap will end up installing it as "gcc32" on your system.
Stable C++ ABI??? (Score:5, Interesting)
1. This means that C++, including objects and classes, will (with a bit of grunt work) be able to be integrated with real OO scripting languages just as easily as C. It's the constantly changing C++ ABI that has prevented, until now, "easy" bridging of, say, C++'s object model to Common Lisp's CLOS without having to recompile everything in sight at the drop of a hat. It will now be possible to produce a C++-to-Lisp analogue of, say, CMUCL's excellent "alien:" Lisp package (nothing to do with the deb2rpm tool), or a SWIG-but-for-proper-C++ for Python and Perl.
2. It will mean that third-party binary distribution of C++ code is a lot more viable. Remember the way Netscape, RealPlayer and Flash used to break with every new Red Hat release? Well, that was primarily because of libstdc++ not linking properly due to the changing ABI.
3. This should also mean that the prelink "hack" and its ld.so-integrated successor can stabilise and become part of standard Linux distros -- no more agonisingly slow KDE startup times!
(Sorta-kinda OT) - GCC3 and GCC 2.95.3 coexist? (Score:2)
Some time back, I decided to try out GCC 3.0.1 (I think it was). I downloaded the code and ./configure'd it with what the configure script said was the option to add a suffix (with the notion that I'd end up with my 'default' compiler being "gcc" (GCC 2.95.3) but have the option of trying something with "CC=gcc3").
I did a "make bootstrap" and a "make install"... and ended up with two "gcc" binaries on the hard drive - no suffixes. It ended up causing a lot of odd problems due to compiled programs trying to link to both versions of the libraries and so on. Of course, there is no "make uninstall"... I finally managed to track down all the files GCC 3.0.1 had installed by hand and delete them, and now my system works again...
I'd actually like to try out gcc 3.1.1, at least - any advice on getting it to coexist as an obvious, separate "option" from the default gcc, and will the same advice actually work for gcc 3.2?
Re: (Sorta-kinda OT) - GCC3 and GCC 2.95.3 coexist (Score:2)
Use /path/to/configure --prefix=/usr --program-suffix=SOMETEXT and you will end
up with a binary installed as " /usr/bin/gccSOMETEXT ".
This works.
Re:(Sorta-kinda OT) - GCC3 and GCC 2.95.3 coexist? (Score:4, Insightful)
"Uninstalling" and "upgrading" becomes as easy as "for F in `ls -l
At the aforementioned job, the directory I installed to was actually
Re:(Sorta-kinda OT) - GCC3 and GCC 2.95.3 coexist? (Score:3, Interesting)
I configure GCC 3.2 like this:
This way, *all* files will be installed in subdirectories inside
To make sure GCC will work:
- make a symlink
- add
I just installed GCC 3.2 an hour ago, but I installed GCC 3.1 using the same method (except that the prefix is
Benchmarks of generated code? (Score:2)
Are there any benchmarks anywhere that compare the speed of code generated by GCC 3.2, 3.1, 3.0 and 2.9x?
That would be interesting to see, especially on modern processors with SSE, SSE2, etc.
Re:Benchmarks of generated code? (Score:2)
GCC 3.1.1/3.2 optimize even better.
Hints for compiling gcc-3.2 (Score:3, Informative)
- you really should get/compile/install binutils 2.13.90.0.4 first!
- make sure you specify "--enable-shared --enable-threads=posix --enable-__cxa_atexit" when you do a 'configure' of GCC-3.2. Otherwise it won't be fully ABI compatible!!!
- then the usual "make bootstrap" etc
Good luck, Max
C++ stability (Score:3, Interesting)
Does this automatically imply that GCC is now fully C++ standards compliant? If not, then what is left to change?
Re:C++ stability (Score:2, Informative)
Re:C++ stability (Score:2, Informative)
Apples and oranges (Score:5, Informative)
The C++ standard says nothing about ABIs. (Well, there are some layout rules when dealing with POD structs, but nothing about a C++ ABI.)
We're not meeting the C++ standard in two regards (at least I can't think of any more): first, we don't have export for templates. That will largely be a fallout of the precompiled header projects (two or three PCH branches have been in the repository for a long time now; both Apple and Red Hat have been contributing their implementations).
Second, we don't do two-stage name lookup for templates, which most users don't need to worry about. That will come when the rewrite of the current 15-year-old parser is finished (and there are branches doing that already as well).
Also, keep in mind that although the compiler C++ ABI is stable, the C++ library ABI is not. Declaring it stable at this point would be a massively stupid thing to do; there are far too many optimizations to be made still, and those involve changing the ABI. For example, there's a reworking of the memory allocator that currently exists on my whiteboard, and as soon as it gets finished off and checked in, the library ABI will be broken. Vendors already have methods in place for dealing with multiple versions of a library installed; this will be nothing new to them.
GCC 3.2 and Solaris (Score:2)
-Aaron
Athlon performance? (Score:3, Interesting)
works with Gentoo i686 glibc-2.2.5 gnome2 (Score:2, Informative)
That is, gcc 3.2 is the ONLY gcc on this computer. So the ABI interop issue isn't a problem, I suppose.
A mess for those running old/current distros... (Score:3, Interesting)
But now I have to recompile every single C++ library on the system in order to make the new compiler truly useful.
Yet, at the same time, I want to be able to run my old C++ binaries and also want to be able to drop back to my older compiler if necessary.
The best way to accomplish this, I suppose, is to put the new libraries in their own directory and modify gcc 3.2's spec file to include a -L directive to the linker to reference those libraries at link time and a -rpath directive at link time so that the resulting binary references that directory at runtime.
The other options I thought of involve either renaming the libraries (thus requiring manual intervention when compiling any package that would refer to the libraries by their "old" names) or changing their major version number (surely a serious abuse of the purpose of the major number! And it would cause problems when linking using the old compiler anyway). Neither of those are very palatable.
Anyone have better ideas on how to do this?
Re:A mess for those running old/current distros... (Score:3)
Worked fine for me, although I'm not sure if that was the wisest thing to do.
Re:A mess for those running old/current distros... (Score:3, Informative)
Newer versions of gcc have different sonames for the libstdc++ library. For example, gcc 3.1 used libstdc++.so.4. Backward binary compatibility is a non-issue. The only tricky issue is recompiling C++ libraries that you wish to use for your development with the new compiler.
Re:Is this front page material? (Score:4, Insightful)
This is front-page material because GCC is one of the most important building blocks of the free/open-source software world.
GCC is the de facto compiler for GNU/Linux and *BSD systems. Furthermore, the Linux kernel currently hasn't been ported off GCC. Without GCC, free *NIX systems would have nowhere near the importance they have now.
Re:Is this front page material? (Score:2)
So let's celebrate by giving front page space to a version update!
Re:Is this front page material? (Score:3, Insightful)
Seeing news like this to be discussed and picked apart in a large forum is always a good idea in my opinion. GCC is important for the way my gentoo linux box operates, so yes, I'm VERY interested in these articles. This is news that lets me be aware of issues I may encounter in the future.
It seems to me people complained when an article was posted about Yet Another Kernel Release. I can understand those people weren't interested and may have had better sources elsewhere, but I found slashdot a great place to discuss these issues since the beginning.
Re:Is this front page material? (Score:2)
Re:Is this front page material? (Score:2)
And now for Macintosh as well! MacOSX uses gcc as standard compiler.
Re:Is this front page material? (Score:2)
Red Hat is waiting, Mandrake is waiting....
So is this front-page material?
Definitely yes. It informs very well not only about GCC itself, but also about the other projects that are waiting for this GCC release.
Re:Is this front page material? (Score:3, Informative)
gcc is the Foundation of Open Source (Score:2)
Everything GNU is predicated on the gcc compilers -- which makes a major release of the compiler very pertinent to anyone who cares about free software.
And given that gcc 3.2 is released specifically for Linux and BSD distro developers, I'd say it's kinda important. ;)
Re:Breaking interoperability... again??? (Score:5, Informative)
Martin Tilsted
Re:Breaking interoperability... again??? (Score:3, Interesting)
They also promised that the ABI was finished in 3.0. In fact, that was one of the reasons given for taking so long in getting 3.0 out the door. Not a good track-record, IMHO.
Still, it will be good to get closer to standards-compliance. C++ has been standardized for four years, now, and fully compliant compilers are just now starting to appear. GCC has actually been fairly good compared to some (cough, Microsoft, cough) in adhering to the standard.
Re:Sorry, stupid Q: What is an ABI? (Score:2, Informative)
"Calling convention" isn't an accurate enough term since it could describe a number of things in application development. ABI is better since it accurately implies it has something to do with the binaries themselves.
Essentially, the ABI is how object files and libraries are linked together.
Re:Sorry, stupid Q: What is an ABI? (Score:2, Informative)
C++ supports operator overloading and method overloading, so you can have:

enum xyz { X, Y };

class foo {
public:
    foo operator+(int);
    foo operator+(const foo &);
    void method(int);
    void method(int, int);
    void method(xyz, const char *);
};
How can the linker tell which of the three method overloads was meant to be called? The compiler must (well, all of them work this way) mangle the name, so you actually end up calling something like foo__method__int or foo__method__int_int (only it mangles it even more). As long as the compiler mangles names the same way as the (static or dynamic) libraries it links against, they'll link fine. Incidentally, extern "C" prevents the name mangling (C doesn't support overloading).
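For instance (a rough sketch; the exact mangled strings depend on the ABI the compiler follows):

    // Under the common C++ ABI the two overloads foo::method(int) and
    // foo::method(int, int) get distinct mangled symbols, roughly
    // _ZN3foo6methodEi and _ZN3foo6methodEii, so the linker never has to
    // choose between them.
    //
    // extern "C" switches mangling off: this function is emitted as a plain
    // "legacy_entry" symbol, callable from C, but it cannot be overloaded.
    extern "C" int legacy_entry(int x) {
        return x + 1;
    }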
The second issue is the virtual lookup table. Basically, every class with virtual functions has a hidden array of function pointers for its virtual methods (since the compiler/programmer doesn't know at compile time which implementation will ultimately be invoked). When a virtual function is called, instead of jumping to a subroutine directly (foo__method__int(this, int)), the generated code follows the instance's hidden vtable pointer to get the function pointer, and calls that function. The format of the virtual function table differs between compiler vendors, and between gcc versions in this case.
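A minimal example of the dispatch being described (the class names are made up):

    struct Shape {
        virtual double area() const = 0;   // gives Shape and its children a vtable
        virtual ~Shape() {}
    };

    struct Circle : Shape {
        double r;
        Circle(double radius) : r(radius) {}
        double area() const { return 3.14159 * r * r; }
    };

    double report(const Shape &s) {
        // Not a direct call: the generated code follows s's hidden vtable
        // pointer, fetches the slot for area(), and calls through it. The
        // layout of that table is exactly what the C++ ABI pins down.
        return s.area();
    }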
Re:Breaking interoperability... again??? (Score:2)
But it's nice they finally settled on a stable ABI, so all releases forward of this one will be compatible.
Re:Breaking interoperability... again??? (Score:2)
Re:Breaking interoperability... again??? (Score:2)
The ABI was changed for (hopefully) good reasons. These reasons probably included speed, ease of compilation/optimization, stability, etc. Further, it is safe to presume that these issues could not have been addressed successfully without changing the ABI. There comes a time when backwards compatibility is more of a hindrance than not.
Finally, and perhaps this is a misperception of mine, I would think that most of the individuals using this software have access to the source code for the programs they use. So... they can recompile everything. Not a trivial task, to be sure, but still possible.
Re: Breaking interoperability... again??? (Score:4, Insightful)
Please remember that the C++ standards committee encouraged vendors to use different (incompatible) ABIs for C++. C++ compilers were not supposed to interoperate, because the committee thought this would never work: the compiler has to do far too many things outside the object files (compilation units) for exception handling and initialization code.
And, for all compilers I know, they were right.
You really cannot blame the GCC people for this. Whenever they have to change the internal handling of exceptions, templates and the like, changing the ABI really is the best choice.
The C++ standard was never designed to make code compiled by different compilers link. And gcc2, gcc3.1, and gcc3.2 are different compilers because the internal handling of these very complex structures changed.
Re:Breaking interoperability... again??? (Score:2)
Maintaining backwards-compatibility at all costs has its own disadvantages... I wonder how much of the disk bloat of modern Windows installations is due to code written to ensure that 10-year-old Win3.x apps, or 20-year old DOS apps, will still work?
An answer from a maintainer (Score:5, Informative)
Hi. I'm one of the hundred-odd GCC maintainers.
Because the idea of backwards compatibility never occurred to any of us until we read your Insightful post. My God, what a concept! I'll go tell them at once!
Seriously, what makes you think the entire team doesn't already understand this point? Do you think such decisions are made lightly? Go read the archives; they agonized over this for months, and that was before the heavy debating started.
Here's the simple fact: there is a C++ ABI designed for compatibility and interoperability. Here's another simple fact: there were bugs in our implementation of that ABI. The choice was to be backwards-compatible with GCC 3.1 and incompatible with other vendors implementing the same spec -- which would pretty much defeat the purpose of a common ABI -- or to fix the bugs and break compatibility in a couple of corner cases.
Of course, after all the details are worked out is when all of the geniuses with answers to all of life's problems decide to reveal The One True Solution on /. posts...
Re:Breaking interoperability... again??? (Score:2, Interesting)
I would interpret this as saying: if you have, say, a shared library that was compiled with 3.1.1 and you attempt to use that library with a program compiled with 3.2, then it won't work, because the compiled results are not interoperable.
But you can simply recompile both parts using 3.2 and they will work fine. You don't need to change the source code.
Re:Breaking interoperability... again??? (Score:2)
Things compiled with MSVC 6.0 works with
There's nothing special or great about source compatibility. It is a _requirement_ plain and simple. You're not supposed to change source code just to make them compile with several versions of the same compiler.
What we're talking about here is BINARY compatibility.
Re:Translation: (Score:2, Funny)
Heh..Like to keep your job?
Or are you just planning to be the sole maintainer for the life of the code? =)
Re:Translation: (Score:2)
So write readable code. And comment. They're inseparable necessities.
Unless, of course, you do want to maintain the code forever. Personally, I'd rather go off and write new, nifty stuff than maintain old, cruddy stuff forever (and yes, all the old code is cruddy by definition -- if you can't think of improvements to something then you didn't learn anything).
Re:Any good compilers out there. (Score:4, Informative)
Since release 3.1, GCC produces *fast* code. On my Pentium 233 MMX, bzip2 is 25% faster when compiled with GCC 3.1 than the binary produced by GCC 2.95.2. The optimizers have been greatly improved and can compete with Intel C++. In some areas GCC is faster, while in others it is slower than Intel C++. But all in all, GCC is quite good.
Re:Any good compilers out there. (Score:3, Informative)
Re:Any good compilers out there. (Score:2)
I only tested this on one program though, so I'm not sure whether all executables will be 40% smaller.
Re:Any good compilers out there. (Score:2, Insightful)
I'd like to see a real benchmark of this. The same exact thing is said about every language/compiler by its proponents (think java vs c/c++, etc)...
Re:Any good compilers out there. (Score:2)
Re:Heh I wonder if it compiles the Linux kernel (Score:2)
As soon as it is done...let me look, oh crap out of disk space, okay, fixed that...I'll build some kernels with it. 2.4.19 for my work machines, and I'll try out the latest 2.5 for my machine at home cause it has a nice sound card only supported by ALSA.
Re:Heh I wonder if it compiles the Linux kernel (Score:2)
Terratec EWS MT-88. That is MultiTrack 8 channels in 8 channels out. True 96kHz/24-bit sample rate/depth. I run it into/outof an Alesis 10 channel rack mount mixing board, using an 8 channel punch-in snake. The board outputs to Alesis M1 Active monitors.
I write custom synths/noise generators/filters/effects, that I'll be compiling with GCC 3.2. Ha! Kept it on topic.
Re:Heh I wonder if it compiles the Linux kernel (Score:2)
Being an electro-music head myself, I've never seen much stuff in *nix that I could use in my music production.
Re:Heh I wonder if it compiles the Linux kernel (Score:2, Insightful)
I have two types of tools that I have written. When I first started messing with audio creation, I followed the usual Unix tool route: read from standard in, write to standard out. I wrote filters to do things like:
cat 16-bit-unsigned.raw | noise-gate 1024 > 16-bit-unsigned.raw~
To set all samples below 1024 to 0.
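The whole filter is only a few lines. A minimal sketch (it assumes raw native-endian 16-bit unsigned samples, as in the example, with the threshold as the first argument):

    #include <cstdio>
    #include <cstdlib>

    int main(int argc, char **argv) {
        // Threshold defaults to 1024 if not given on the command line.
        unsigned threshold = (argc > 1) ? std::strtoul(argv[1], 0, 10) : 1024;
        unsigned short sample;   // assumes unsigned short is 16 bits

        // Read one sample at a time; zero anything below the threshold.
        while (std::fread(&sample, sizeof sample, 1, stdin) == 1) {
            if (sample < threshold)
                sample = 0;
            std::fwrite(&sample, sizeof sample, 1, stdout);
        }
        return 0;
    }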
I basically scoured the web for C implementations of various digital filters and effects. Once I got the idea of what was going on, I started to write my own.
I also have written tone generators for all the basic wave forms.
What I'm working with most now is a program I call an audio renderer. I had used POV-Ray since the 0.5 release, and loved the scene description language (very C-like). I thought it would lend itself well to generating a stream of samples rather than pixels. So far the results are very interesting. I'd like to release it at some point, but I used most of POV-Ray's parser, and some other bits of code. It is most definitely a derived work, even if it does something totally different. Unfortunately, POV-Ray's license is kind of restrictive in this respect. Besides, it isn't ready for primetime yet, and the POV Team are talking about relicensing as GPL, so maybe I'll get it to a point where I think others could use it about the time POV gets GPLed.
Wow, I wrote a lot. Please go gentle on the moderation.
Re:Heh I wonder if it compiles the Linux kernel (Score:2, Interesting)
My friends are the ones who are musicians. They have their own tools and seem to use Windows or just dedicated studio hardware. I mostly make noise; I think they are calling it Intelligent Dance Music this week.
Most of my stuff comes from my head and is performed in code. If I don't like how something sounds I change the code. So I have to do little production work.
I use the inputs on the mixing board (along with its mic preamps) when I need to acquire something from the analog world, such as vocals from one of my friends (I have promised I will not try to sing again).
I divide the vocal sample up into phrases that I can then "render" into the final output.
One thing that is nice is to render into an 8-channel audio file, separating vocals or other parts that are very discrete, so I can get a rough idea of what levels I should mix each channel at using the analog sliders. I can just loop the track or selection and sit there and tweak until it sounds good, then look at the values on the sliders to try in a final 2-channel rendering.
So, to simply answer your question: I don't sequence or produce in the standard definitions of the words.
Oh well who cares about the moderation, I'm talking about something I enjoy.
Re: kde speedups ? (Score:3, Interesting)
Isn't this rather part of ld(1) (binutils)?
Re:Mandrake (Score:3, Informative)
Re:Is that wise? (Score:2)
Re:API _FINALLY_ Stable?! (Score:2, Informative)
Re:The compiler who cried wolf? (Score:2, Insightful)
That's just like the "viewer response" I saw CNN airing sometime around 2001-09-13:
It's a nonsensical question, although you can take personal action to resolve your concerns, if you choose. (You can simply decide to feel safe, or to never feel safe again; you can decide to undertake the kind of in-depth study of C++ ABI issues and GCC's code base to determine whether it's stable, or to accept that it'll never be stable. These are all coping mechanisms. Nobody else can answer the questions for you.)
In the meantime, if you want to "stick with 3.1's slightly-broken ABI", then what's preventing you from doing that?