acidblood writes "GCC 3.1.1 has been released. It brings many improvements in performance, code optimization, and standards compliance, plus a few bug fixes in the C++ ABI (full changelog here). Download from the main GNU FTP site or use the nearest mirror."
Wait for 3.2 (Score:5, Informative)
If you plan on building a distribution (hello Red Hat and Mandrake), it's probably wise to wait for gcc 3.2, since binary compatibility will change. Binaries from gcc 3.2 are incompatible with anything built by older releases.
Re:Wait for 3.2 (Score:4, Informative)
(there's quite some discussion about this now in the Gentoo forums)
Red Hat: 3.2 is in Rawhide (Score:5, Informative)
Well, 3.2 hasn't been released yet, but that hasn't stopped Red Hat from including it in their Rawhide release:
gcc-3.2-0.1.i386.rpm
I assume it's a pre-release, and they intend to move to a full release before Rawhide becomes 8.0. That should be a relatively safe bet for them, considering not only their unique position with regard to gcc, but also that the GCC web page [gnu.org] cites an expected release date for 3.2 of 2002-07-2x.
Re:Red Hat: 3.2 is in Rawhide (Score:2)
Re:Red Hat: 3.2 is in Rawhide (Score:1)
Nope. RedHat (and SuSE and ...)
Basically they didn't want the same thing to happen with 2.96 vs 3.0 again.
Re:Wait for 3.2 (Score:1)
Mandrake just jumped to gcc-3.2 in their devel. branch. Beta 2 of Mandrake 9.0, all subsequent betas, and 9.0 final will use gcc-3.2.
Re:Wait for 3.2 (Score:2, Insightful)
Why does the ABI keep changing? (Score:2)
I mean, okay, I understand that at some point there may be a good reason. But it seems that before releasing a new version of a compiler with a new ABI, you'd get everything *right* first, given the huge number of compatibility problems the switch causes.
So maybe the move from gcc 2.7 to 3.0 was well-founded, and Red Hat just had the bad luck of shipping an interim release. But now there's going to be a release incompatible with 3.0/3.1? Come *on*, guys!
GCC is great, but this causes tons of grief for all the developers out there trying to use C++ in their code and support their users.
Re:Why does the ABI keep changing? (Score:3, Interesting)
And 3.2 is compatible with the V3 ABI. Sure, they could just keep the current ABI, and remain incompatible with compilers from Intel and other commercial vendors forever. That doesn't seem like a particularly great path to me, though.
The reason 3.2 is coming out a few days after 3.1.1 is so RedHat, Mandrake, FreeBSD, SuSE, etc. can have time to QA it for their next releases. I don't know of any distributions using 3.0 or 3.1 anyway: Debian and the *BSDs are still on 2.95.x, and RedHat/Mandrake are on a 3.0 beta; not sure about SuSE, though. So basically the 3.{0,1}/3.2 ABI changes don't actually affect anyone: while 3.2 is incompatible with the earlier 3.0/3.1 releases, 3.2 and 3.0/3.1 are "equally incompatible" with whatever those systems are using now.
SUSE asked for 3.2 (Score:2)
SUSE was one of the parties that asked for an early 3.2 release, so they could base their distribution on that instead of 3.1. Red Hat, Debian and FreeBSD were three other names I remember from the discussion.
Re:Why does the ABI keep changing? (Score:1)
Because the C++ ABI is _really_ hard to get right. In the current case there was of course a design phase that made the ABI as complete as possible, but in the course of the implementation some bugs were discovered that made compilation of some valid code impossible. The only way around this is to change the ABI.
In contrast to earlier releases, the GCC 3.x ABI is based on a written, cross-vendor standard, which means the implementation can have bugs. The bugs were subtle enough not to be caught before 3.0, but given the diversity of C++ coding styles, it is really hard to say how common the affected code is.
They are implementing a standard ABI (Score:2)
Before, the GCC C++ ABI was whatever GCC produced, so the ABI couldn't really have bugs.
They could of course have declared the GCC ABI to be the ABI described in the standard, except for the bugs not caught by 3.0. However, I'm glad they are going with the written standard rather than trying to Microsoft their bugs into being a de facto standard.
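To make the compatibility issue concrete, here is a minimal sketch (the file, namespace, and function names below are invented for illustration; they are not from the thread or the GCC documentation) of why a change of mangling scheme breaks linking even when the source code never changes:
// abi_demo.cc -- illustration only; all names here are made up for this sketch.
#include <iostream>
namespace geom {
    // Under the V3 (multi-vendor, "Itanium") C++ ABI that gcc 3.2 follows,
    // this function's symbol comes out roughly as _ZN4geom4areaEii.  Older
    // g++ releases used their own, different mangling, so object files from
    // the two compilers name the same function differently and cannot be
    // linked together, even though the source is identical.
    int area(int w, int h) { return w * h; }
    // extern "C" suppresses C++ mangling entirely, which is why plain C
    // interfaces stay binary compatible across all of these releases.
    extern "C" int area_c(int w, int h) { return w * h; }
}
int main() {
    std::cout << geom::area(3, 4) << "\n";    // 12
    std::cout << geom::area_c(3, 4) << "\n";  // 12, via an unmangled symbol
    return 0;
}
The point of a written, cross-vendor standard is that every conforming compiler on a platform emits the same symbol names and layout for code like this, which is what would let libraries from different compilers be mixed in one program.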
Speaking of standards ... (Score:1)
# G++ now supports the "named return value optimization": for code like
A f () {
A a;
return a;
}
G++ will allocate 'a' in the return value slot, so that the return becomes a no-op. For this to work, all return statements in the function must return the same variable.
This is a wonderful thing, but does this go against the C or C++ specification?
Re:Speaking of standards ... (Score:2)
It does affect the behavior (Score:2)
It does, since the A constructor and destructor will be called fewer times, and they may have side effects. However, the optimization is explicitly allowed by the standard, so code that depends on those side effects in the example is broken.
Re:It does affect the behavior? (Score:1)
You mean without the optimization, A's copy constructor is called when returning the value, and with the optimization it isn't? I thought the optimization just assigned the register that the value a is stored in, so the return statement did not require a move instruction. If so, they should have been more explicit in describing the optimization!
Re:Speaking of standards ... (Score:1)
Not true. It _does_ affect the behaviour if the constructors or destructor have side effects (such as static instance counters, or cout << s, or whatever).
However, the standard explicitly indicates that this optimisation and change in behaviour is permitted, and therefore that you mustn't rely on every apparent constructor actually being called.
The standards body permitted the change in behaviour because it decided the potential for optimisation was worth it.
THL.
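For what it's worth, the behavioural point being argued in this subthread is easy to demonstrate. A minimal sketch (the Tracer class below is invented for illustration; it is not from the changelog or the thread):
#include <iostream>
// A type whose special member functions have visible side effects, so we
// can observe whether the copy made on return actually happens.
struct Tracer {
    Tracer()              { std::cout << "construct\n"; }
    Tracer(const Tracer&) { std::cout << "copy\n"; }
    ~Tracer()             { std::cout << "destroy\n"; }
};
// Every return statement names the same local, so the compiler is allowed
// to build 't' directly in the caller's return slot and skip the copy.
Tracer make() {
    Tracer t;
    return t;
}
int main() {
    Tracer t = make();
    // With the named return value optimization the usual output is just
    //   construct
    //   destroy
    // whereas a compiler that performs no elision prints at least one extra
    // "copy"/"destroy" pair.  Both behaviours are permitted by the C++
    // standard, so code must not rely on every apparent copy taking place.
    return 0;
}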
um, am I missing something? (Score:2)
# MIPS: RiscOS, mips-*-riscos*
Um, I was under the impression that RiscOS [riscos.com] was written for the ARM processor(s)?
Did somebody just turn two pages at once?
Re:um, am I missing something? (Score:1)
Seriously though, MIPS [mips.com] (I believe) ported Unix to their processors, and called it RiscOS, independent of the RISCOS [riscos.com] guys.
Re:um, am I missing something? (Score:1)
Heh. Bet you didn't know about AIX/i386 then.
Yes, it seriously did exist. Or at least, GCC supported it. I suppose it's possible someone added support for it as a joke...
Re:um, am I missing something? (Score:1)
AIX/i386 gets mentioned a few times in the OS/2 Warp manuals, in the chapters that deal with disk partitioning and the IBM Boot Manager. Other than that, it's pretty much forgotten.
Re:um, am I missing something? (Score:1)
I think I was one of the few people to ever try doing software development on AIX/i386 (it was back when an IBM Model 70 was hot stuff).
The only reason we did it was that the IT manager at one of our biggest clients was willing to pay $$$ for us to port from SCO Unix to AIX/i386. This guy was the poster child for 'nobody ever got fired for buying IBM' (hell, he even bought a Model 50 for home because 'by the end of the year, everyone will be making MicroChannel machines').
The only good thing about doing the port was that the contract kept the company alive through a lean period.
3.2 Binary Compatibility Break (Score:1)
"The C++ ABI now conforms to the V3 multi-vendor standard."
That's a very good reason to break binary compatibility; they should be congratulated on the change. This could well mean we will have C++ libraries working with each other across compilers and versions on the same platform. Always a positive.