GCC 5.2 Released 91
AmiMoJo writes: The release of GCC 5.2 brings lots of bug fixes and a number of new features. The change list is extensive, featuring improvements to the C compiler, support for the OpenACC parallel-programming standard, improvements for embedded systems, updates to the standard library, and more.
5.2 (Score:5, Informative)
All of those new features were in 5.1. 5.2 is just a few bug fixes.
Re: (Score:2)
All of those new features were in 5.1. 5.2 is just a few bug fixes.
Yes, under the old numbering scheme 5.1 would have been 5.0.0, and this 5.2 would have been 5.0.1.
Good day (Score:1, Informative)
It's a good day to be a developer apparently, as Visual Studio 2015 was also released today.
Re: (Score:3, Informative)
Ugh. Visual Studio 2015 requires Windows 8.1. No thanks.
* https://www.visualstudio.com/e... [visualstudio.com]
Here are the lists of bugs fixed in each:
* GCC 5.2 compiler issues [gnu.org]
* MSVC 2015 compiler issues [msdn.com]
Additional MSVC 2015 bug fixes ...
* MSVC 2015 Features [msdn.com]
* MSVC 2015 (C++11/14/17) [msdn.com]
* MSVC 2015 STL part 1 [msdn.com]
* MSVC 2015 STL part 2 [msdn.com]
Re:Good day (Score:5, Informative)
Windows 7 Service Pack 1
Windows 8.1
Windows 8
Windows Server 2008 R2 SP1
Windows Server 2012
Windows Server 2012 R2
Windows 10
But can compile for a target as old as WinXP.
Re: (Score:2, Offtopic)
But can compile for a target as old as WinXP.
Only if you have a version of Visual Studio installed that supports WinXP. That started with VS2012, where you could still target WinXP only if you also had VS2010 installed; VS2012 was the first version of VS to use a predecessor's compiler environment to support older platforms.
Re: (Score:3)
1. Most corporate environments don't allow tweaks like startisback. Most don't allow much registry tweaking either.
2. Metro is a pain in the ass on the desktop. Win10 fixes some of this, but still leaves much to be desired.
3. People who like to focus on getting the job done don't want extra hassles and steps added to their workflows by idiot managers who insist on software upgrades because newversion > currentversion.
Re: (Score:2)
Truth (Score:2, Interesting)
Instead of "bug fixes and new features", why isn't software ever delivered with the simple truth? "Lots of new bugs."
Re: (Score:2)
Instead of "bug fixes and new features", why isn't software ever delivered with the simple truth? "Lots of new bugs."
Because it's not simply "lots of new bugs". It's a rearrangement of bugs.
All joking aside, metrics over the years have indicated that once a software product reaches maturity, the total number of bugs on file for further releases will be relatively constant.
People still use GCC? (Score:1, Insightful)
I switched to clang long ago, haven't really looked back except to read Stallman's sad rantings
Re: (Score:2, Interesting)
I use clang as well because I develop on a Mac, and I don't much care about clang vs. GCC. I just hope that clang continues to be compatible with GCC and that GCC continues to move forward. I guess we are now in a kind of golden age for developing on UNIX, with such compatible toolchains to choose from.
But because clang is so reliant on Apple's benefaction, I see this as a potential point of failure. What if Apple has a change of heart and puts all their efforts into one language I don't care about, or they start to deviate from GCC compatibility?
Re: (Score:2)
> I use clang as well because I develop on a mac
Sadly clang doesn't support OpenMP yet ... so still using GCC 5.x on OSX 10.9
Re:People still use GCC? (Score:4, Informative)
You are a complete moron for using OpenMP.
That was very helpful; full of thoughtful reasoning and examples. The level of detail in explaining the rationale for avoiding OpenMP was, to say the least, above and beyond. You sir or madam have made us proud. Well done.
Re: (Score:1)
You must be new here.
That's what Slashdot discussions are for!
Re:People still use GCC? (Score:5, Insightful)
Why would I waste my time explaining things to idiot morons?
Knowledge should be passed along, not hoarded. Everyone is at a different place on the learning curve. In practical terms, that means everyone is an idiot moron with respect to someone else - or, in your case, obviously many others.
Re:People still use GCC? (Score:5, Informative)
I'm not the AC, but I'll try to share the knowledge.
I'm a kernel programmer and worked on a Linux-based realtime high-def broadcast-quality H.264 video encoder that used a hybrid mix of multiple cores and FPGAs, so I'm fairly familiar with at least one use case.
OpenMP is useful for parallelizing workloads via compiler pragmas. That is, take an app that is designed for a single CPU, add some pragmas and some OpenMP calls, and let the compiler parallelize it. It does this [mostly] by parallelizing loops that it finds.
Parallelizing [simple] loops can be done in [at least] two ways:
(1) A single loop can be parallelized across multiple cores
(2) If a function does loop A followed by loop B and loop A and B share no data, they can be done in parallel.
OpenMP assumes a shared memory architecture (e.g. all cores are on the same motherboard). Contrast this with MPI, which can go "off board" [via a network link]. There are hybrid implementations that use both in a complementary fashion.
A good use case for this is weather prediction/simulation which is highly compute intensive but doesn't have realtime requirements. We just want our final answer ASAP, but what the program does moment-to-moment doesn't matter. Another use case is protein folding.
But neither OpenMP nor MPI is well suited to a realtime situation that requires precise control over latency. Also, OpenMP doesn't support compare-and-swap. And it's prone to race conditions.
Ideally, designing a given app from the ground up for parallelism is a better choice. If one does that, the fanciness of OpenMP isn't required. My last implementation of an OpenMP equivalent [that also incorporated what MPI does] was ~1000 lines of code, because the app was pre-split into threads set up in a pipeline. It supported a multi-master, distributed, map/reduce equivalent using worker threads [still within 1000 lines].
Consider the second loop parallelization case. It's easy enough for a programmer to see that loop A and loop B are disjoint and put them in separate threads (e.g. A and B). But, if one is aware of this, the splitup can be done even if loop A and B share some data because one can control the synchronization between threads precisely. Extend this to 40-50 threads that have a more complex dependency graph.
Note that latency means that a given thread A will deliver its results to thread B in a finite/precise/predictable/repeatable amount of time. In video processing, each stage must finish processing within the time allotted for a video frame [usually 1/30th of a second]. With extra buffering, that can be relaxed a bit, but the average must stay at 1/30th and can't vary too widely (e.g. no frame could take [say] 1/2 second).
Thus the AC, although snide, is partially right. If I were doing an implementation, I believe the result would be better not using OpenMP. But I've got 40+ years doing realtime systems. Not everybody does. Most consumers of OpenMP [and/or MPI] are scientists/researchers who are [no doubt] experts in their field, but they're usually not expert-level programmers. And they usually don't have the restrictions imposed by a realtime system. Notable exceptions: programming for MRI/PET/etc. machines.
Re: (Score:2)
Thanks for the info. I'm familiar with MP concepts but not openMP specifically. As for the AC being right or wrong, it doesn't matter. Neither of the AC comments in the tree were helpful in any real way and they were simply being snide, childish, dicks for no reason -- so 0/10 would not hire :-)
Re: (Score:1)
Re: (Score:3)
Pretending to be an armchair expert who is still struggling to understand ad hominem [wikipedia.org] attacks just makes you look like a complete tool, but then what can you expect from an Anonymous Coward. /sarcasm Clearly the rest of us are doing it wrong.
Re: (Score:2)
For all practical purposes, CLANG is useless if you want to develop C++.
CLANG is incompatible with libstdc++ or Microsoft's C++ library, which means you have to use the libc++ that they supply. Unfortunately libc++ is not available on Windows, so any app that uses C++ features is out. On Linux, if you want to use the C++ features, it is pretty much impossible to cross-link against libc++ and the other libs on your distribution that may be compiled with g++, so you have to compile every library you want to use yourself.
Re: (Score:3)
WTF? I use Clang on Linux and link against the GNU C++ runtime library all the time. It works just fine. Why are you spreading FUD?
Re: (Score:2)
Just to clarify: were you able to link an app compiled with CLANG with libstdc++ compiled with g++, and another C++ lib compiled with g++, like say libqt4? And it ran without any problems?
If this is true I apologize to the CLANG community. But the fact is our team has not been able to get this to work on RHEL, and I admit we did not dig too much deeper into that, since anecdotal info on the web indicated that this is not possible. Our code did not even link, and the same code compiled with g++ linked and ran fine.
Re: (Score:2)
Yes, one supported configuration for building MAME [mamedev.org] is using Clang on Linux. It links against distro-provided, GCC-built libstdc++ and Qt4. It definitely works using Clang 3.4 or later on Fedora 20 or later. I've also successfully built applications with Clang against distro-provided, GCC-built libstdc++, xerces-c and Clang on CentOS 6 and later, and Fedora 20 and later.
There are some issues with experimental C++14 mode in Clang that cause it to choke on some of the libstdc++ headers, but these are real known issues.
Re: (Score:2)
Changed version numbering scheme (Score:2, Informative)
This is just a bugfix release. With the old (GCC 4) versioning scheme this would've been called 5.0.1.
autotools is no fun (Score:5, Insightful)
I've been configuring a toolchain for Algoram's programmable radio transceiver, which has a SmartFusion 2 containing a Cortex M3. Until today, I've been working with GCC 5.1. Building GCC for cross-compilation on a no-MMU, no-FP processor and a software platform that doesn't support shared libraries isn't trivial, though it should be. GCC has many configure scripts, one for each library that it builds and at least one for the compiler. You run across many configure issues which are difficult to debug. For example, the configure file, a macro-expanded shell script, doesn't have source code line numbers from its configure.ac file. Error messages do not in general indicate the actual problem, and are difficult to trace. Figuring out what to fix is far from trivial. I ended up not being able to use multilibs (which would have allowed me to build for FP processors like Cortex M4F as well), couldn't link in ISL, couldn't build libjava.
Some of these are beginner problems - I'm new to building cross-toolchains and have avoided autotools as much as possible before this project. But not all of them.
One would think that we could build a better system today than such voluminous M4 and shell. Perhaps basing it on a test framework might be the right approach.
Try Stack Overflow and --synclines (Score:5, Informative)
Perhaps you could demonstrate the difficulty of building a cross-GCC by phrasing your rant in the form of a good Stack Overflow question [jonskeet.uk]. Explain what you are trying to do, what web search queries you used, what you tried, what you expected, and what each failure looked like. If they are in fact "beginner problems", getting the question onto SO should eventually help future web searchers find the answer more easily. Or if Stack Overflow scares you [pineight.com], you might try looking at how it was done in devkitARM [devkitpro.org].
For line numbers in an M4 script, have you tried adding --synclines [gnu.org]?
If error messages from some compiler or interpreter are unhelpful, have you tried filing bugs against said compiler or interpreter to improve the usefulness of its error messages?
Re: Try Stack Overflow and --synclines (Score:1)
Why would he waste time asking a question at SO, when a likely outcome is that some power-tripping mod will come along and incorrectly deem it a "bad" question, and then lock it?
Re: (Score:2)
So long as a question meets the guidelines set forth in Jon Skeet's essay, actions by "some power-tripping mod" can be appealed on Meta Stack Overflow.
Re: Try Stack Overflow and --synclines (Score:5, Insightful)
This isn't really a problem for StackOverflow. It's a problem for the developers of GCC and its libraries, and a policy problem for the overall GNU project in that Autotools is IMO too much of a mess to live, and is a barrier to participation as it stands. That's why I talk about it here instead of just submitting it as a bug report.
I would like to see someone come up with an alternative. That alternative is not CMake or Scons, etc., because those are build systems rather than systems that probe a platform for fine differences in the programming environment and produce a set of macro switches as output.
Re: (Score:2)
This is assuming that a shell capable of running ./configure is available on all of your platforms.
Re: (Score:2)
No. Make, a cross compiler, and assembler need to be able to run on your HOST platform. It's configure that's worthless if it won't run on the TARGET platform.
Re: (Score:2)
That's not the way it works. When cross-compiling, configure runs on the host. It tests that things compile. It doesn't test that they run.
Re: (Score:2)
At least that's how it's supposed to work, but I have seen way too many broken ones where it's easier to replace the build system than it is to make it cross-compile.
Re: (Score:2)
If you're building on *n?x, do more *n?x systems have cmake or a shell installed? If you're building on Windows, it comes with neither.
Re: Try Stack Overflow and --synclines (Score:5, Informative)
You can think of CMake as autoheader, autoconf, automake and libtool rolled up into one tool. It's a scripting language which is evaluated, the end products being generated files of any type you like, plus the files for a specified build system, typically Makefiles on UNIX, but could be many other types. It's a superset of the functionality of the autotools, and is vastly more maintainable and flexible, not to mention portable to non-POSIX platforms. It's not the nicest language I've encountered, but it's certainly better than the multi-language mess of m4/shell/templates we live with in the Autoconf world.
You can do all the feature testing with CMake that you can with Autoconf. For example, in place of AC_TRY_COMPILE, you use CHECK_C_SOURCE_COMPILES [cmake.org], or the equivalent in another language. There are variants for all sorts of other feature tests and checks, same as with Autoconf. But in general, I think it solves current portability problems somewhat better and more portably than the Autotools, which seem to still be stuck in the '90s in terms of the problems they try to solve. Example: portably enabling threading.
When creating custom headers for macros from the feature test results, in place of AC_CONFIG_HEADER you would use configure_file [cmake.org], which does exactly the same thing using CMake variables.
After 15 years of autotools usage, I converted my most important projects to CMake around 22 months ago, and haven't looked back. Most recently, I did conversions from autotools for bzip2 and libtiff. In both cases, the conversion is pretty much a 1:1 change from Autoconf macro or Automake variable to the corresponding CMake macro/function/variable.
Regards,
Roger
Re: (Score:2)
I don't really get your point:
It seems to be "all build systems suck but autotools is more suitable than cmake, scons etc".
This seems to be a common opinion, since build systems are the first line of defence against anyone trying to compile the program, of course. Naturally, the systems designed to be better than autoconf and make are usually written by people who understand neither, and as a result work worse on all but the simplest projects.
What would your ideal build/configuration system be?
I thin
Re: (Score:2)
CMake, Scons, etc. are mainly targeted at dependency-based building of programs. Autotools doesn't really build anything. It goes through a long list of system facilities, determining if each is present. For many, perhaps most of them, it builds a little C program that exercises the facility, and sees if it compiles.
Now, there's another poster who says you really can do this with CMake, which I'll have to look at.
Re: (Score:2)
I should correct that: automake builds things by creating makefiles. The configure script created by autoconf is concerned with configuration rather than building, but its output is input to automake.
Re: (Score:2)
My reply is a bit disorganised, as are my thoughts on this matter.
CMake, Scons, etc. are mainly targeted at dependency-based building of programs. Autotools doesn't really build anything. It goes through a long list of system facilities, determining if each is present. For many, perhaps most of them, it builds a little C program that exercises the facility, and sees if it compiles. Now, there's another poster who says you really can do this with CMake, which I'll have to look at.
I'm not sure. My experiences with CMake have been somewhat less than stellar. Cross compiling seems to be very much a second class citizen, whereas autoconf whines loudly if you break such things. As such cLAPACK actually won't cross compile, or wouldn't last time I tried it at any rate.
Re: (Score:2)
I'm not sure. My experiences with CMake have been somewhat less than stellar. Cross compiling seems to be very much a second class citizen, whereas autoconf whines loudly if you break such things. As such cLAPACK actually won't cross compile, or wouldn't last time I tried it at any rate.
That has been my experience too.
I'm actually not much a fan of automake. I personally quite like autoconf plus GNU Make.
Autoconf is kind of the partner of automake, right?
Re: (Score:2)
Autoconf is kind of the partner of automake, right?
Kinda: they're a pair of tools. Autoconf generates the configure script (the feature-probing part) from configure.ac. Automake automates the creation of makefiles; it turns Makefile.am files into Makefile.in templates that the generated configure script fills in.
You can use autoconf alone though.
Re: (Score:2)
Hi Bruce,
As an example, take a look at the script for libtiff [github.com], lines 180-402 in particular since these are copying exactly what the original configure script does. The rest is also copying the configure script (options, etc.), but this section is the feature tests.
Re: (Score:2)
Roger,
This is great. It does look like a 1:1 mapping to what we expect autoconf to do, except neater and maintainable.
The only problem with selling this to GNU folks is that it would make CMake a prerequisite to everything. But I think it's worth it. And then there's inertia. And the language isn't as pretty as we'd like.
Can you see any other possible objections?
Thanks
Bruce
Re: (Score:3)
As another poster commented, it's another tool to install, which is a burden. And if you use a newer version, it makes it harder to build the package on older distributions. Though as you can see at the top of my example, you can specify a minimum cmake version, and also, through its policy mechanism, general behaviour matching a specific version with tweaks for individual policies. So it's certainly possible to be backward compatible if you make the effort, at the expense of not using newer-version-only features.
Re: (Score:2)
Now, there's another poster who says you really can do this with CMake, which I'll have to look at.
Did you find out if this works? I'm interested because last time I tried to do complex cross-platform compilation with cmake, I eventually gave up completely and wrote my own build script for several different projects. I would be happy to know that actually it does work.
I think the best way to go about this might be to create a front-end to autotools. Once you have a really nice syntax system, then eventually the underlying system can be re-implemented. But setting up the syntax system is something a s
Re: (Score:2)
Re: (Score:2)
Be bold in fixing closed questions (Score:1)
The most useful posts I see on [Stack Overflow], in terms of questions that I want answered, tend to be the ones that were locked for being in the incorrect format
Then put it into the correct format. Anyone can suggest edits to a question on a Stack Exchange site, and if the edit is accepted, the question goes to the reopen queue.
Re: (Score:2)
The mods are a bunch of language-lawyering pedants who regularly reject questions because of how they're worded
Funny you mention this. In the past month, many of the SO results that Google found for me had been closed, but there were some really good answers posted before the questions were locked. I thought they were good questions and the answers were perfect, yet some mods decided to shut them down. At least they got answered quickly before the mod-trolls hit.
Re:Try Stack Overflow and --synclines (Score:4)
Besides devKitARM, there is the collection of toolchains mentioned here [elinux.org]. I am getting most of my clues from the Emcraft toolchain, which is the only one for the SmartFusion. And we're great friends with Emcraft, but I want something a bit newer and a different build-tree style.
My last approach to the libstdc++ mailing list, here [gnu.org], was left unanswered. I figured out the problem behind that one, but it would have been nice to get some advice.
Autoconf doesn't have a --synclines flag, but I might be able to pass it in the M4 environment variable. I'll give it a try.
Re: (Score:3)
Yes, I can get a pre-built toolchain or a building kit, but it doesn't really solve the problem of not being able to build the current GCC with the right settings in its configure script and to use it with the right C library and kernel headers for my device. Should I modify any of those toolchain kits to do that, they'll come up with the same errors.
Re:autotools is no fun (Score:5, Informative)
Autotools has the same problem. Each system it targets has its own weird idiosyncrasies. Autotools has grown up over time to handle all those idiosyncrasies, as people had them.
Are autotools perfect? No, they are kind of weird. And moreover, when people write build scripts, they often write the most hacky code (which is the problem Maven's strictness is designed to solve). Some codebases are so poorly written, that to get them to cross-compile, you need to modify the source itself. That has nothing to do with the build system, it's because of a lousy programmer.
However, for all its annoyances, there's just no other system that has the same flexibility in configuring a codebase to run on weird systems as autotools. They have all the features; the only weirdness is the strange syntax. (As an aside, I've never used a macro system that had easy-to-remember commands. I don't know if this is a problem with macro systems in general, or just a problem with the ones I've used, or just a problem with my memory.)
I hope they fixed the size issue. (Score:1)
A compiler for embedded systems needs to understand and use a smaller footprint. The current version cannot run in less than 128MB of memory. No swapping part of it out. No pipes between stages. ALL MUST BE IN MEMORY at the same time. What a waste.
Re:I hope they fixed the size issue. (Score:5, Informative)
A compiler for embedded systems needs to understand and use a smaller foot print. Current version cannot run in less than 128MB of memory.
I think the idea is that even if you're targeting an embedded system, you're still hosting the compiler on something with a keyboard big enough to comfortably edit code, such as a server, desktop PC, or laptop PC. Even dinky little Atom-powered tablets come with 2 GB of RAM nowadays.
But there's a different size issue: footprint of the GNU implementation of the C++ standard library when it is statically linked. Years ago, I used devkitARM, a distribution of GCC targeting the Game Boy Advance, a platform with 256 KiB* of main RAM, 32 KiB of tightly coupled fast RAM generally used for the stack and certain inner loops, and 96 KiB of video memory. (It also had up to 32 MiB of cartridge ROM, but not if receiving the program from a GameCube or from the first player in a multiplayer network.) I compiled C++ "hello world" programs using the static Newlib and libstdc++ that shipped with devkitARM. A hello world program using <cstdio> was less than 16 KiB, including the statically linked terminal. So as long as I stuck to C libraries, I was fine. But a similar program using <iostream> produced a 180,032-byte executable, even after turning on relatively aggressive options to remove dead code. That left less than one-third of main RAM free for actual program code and data. I debugged into it, and the culprit turned out to be "locale" stuff (date, time, and currency formatting) that got initialized on std::cout even if the program never printed any date, time, or currency types.
* That's 262,144 bytes, about a quarter of a megabyte, or about a four-thousandth of a gigabyte.
Re: (Score:3)
Conclusion: never use iostreams.
Re:I hope they fixed the size issue. (Score:5, Insightful)
The compiler? Why would you want to run that on an embedded system?
Did they fix multilib vs gnueabihf (Score:3)
Did they fix the conflict between gcc-multilib and gcc-arm-linux-gnueabihf?
Since Ubuntu Trusty was released, over a year ago, there has been a conflict between gcc-multilib (needed for building and running 32-bit application on 64-bit Intel/AMD architectures) and several cross compiler suites including gcc-arm-linux-gnueabihf (the cross-compiler suite needed for developing applications for ARM processors, such as those on the BeagleBones and many Internet of Things devices.)
This means if you want to do cross-development and you have a 64-bit machine running a 64-bit install and doing builds for itself for both 64 and 32 bit environments, or running some 32-bit applications, you can't just install the cross-tools from the repository and dig in. You need a separate machines for cross-development, or you need to take time out to do your own hacking of the tools.
I've looked around the net for solutions: The issue seems to be a disconnect between teams, primarily over conflicting uses of the symlink at /usr/include/asm. But I haven't found any clear description of how to work around this, nor has the problem been fixed in the repositories. After over a year I find this very disappointing.
Has this been addressed with this new release of the underlying compilers?
Re: (Score:2)
Wouldn't it be easier to check your bugzilla ticket?
There were already several tickets out on it (and flames from the maintainers about duplicates) so I didn't file another. None of them seemed to indicate that anyone, let alone the core compiler crew, was intending to do anything about it.
I was hoping someone more actively engaged in either reporting or dealing with the issue might happen to be participating here and care to chime in. B-)
Meanwhile, others should know that there IS (or, we can hope, WAS) a conflict.
Re: (Score:2)
Also: It's not clear to me which group should be handling this. It seems to be a conflict between how two projects downstream of the compiler itself are handling a global namespace.
I'd only expect the compiler guys to fix it if they decided the downstream stuff was a problem and pulled part of it into their own stuff to settle the matter.