
GCC 5.2 Released

AmiMoJo writes: The release of GCC 5.2 brings lots of bug fixes and a number of new features. The change list is extensive: improvements to the C compiler, support for the OpenACC parallel-programming standard, improvements for embedded systems, updates to the standard library, and more.
  • 5.2 (Score:5, Informative)

    by Anonymous Coward on Monday July 20, 2015 @10:15AM (#50145197)

    All of those new features were in 5.1. 5.2 is just a few bug fixes.

    • All of those new features were in 5.1. 5.2 is just a few bug fixes.

      Yes, in the old numbering system 5.1 would have been 5.0.0, and this 5.2 would have been 5.0.1.

  • Good day (Score:1, Informative)

    It's a good day to be a developer, apparently: Visual Studio 2015 was also released today.

  • Truth (Score:2, Interesting)

    by Anonymous Coward

    Instead of "bug fixes and new features", why isn't software ever delivered with the simple truth? "Lots of new bugs."

    • Instead of "bug fixes and new features", why isn't software ever delivered with the simple truth? "Lots of new bugs."

      Because it's not simply "lots of new bugs". It's a rearrangement of bugs.

      All joking aside, metrics over the years have indicated that once a software product reaches maturity, the total number of bugs on file for further releases will be relatively constant.

  • by Anonymous Coward

    I switched to clang long ago and haven't really looked back, except to read Stallman's sad rantings.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      I use clang as well because I develop on a Mac, and I don't much care about clang vs. GCC. I just hope that clang continues to be compatible with GCC and that GCC continues to move forward. I guess we are now in a kind of golden age for developing on UNIX, with such compatible toolchains to choose from.

      But because clang is so reliant on Apple's benefaction, I see this as a potential point of failure. What if Apple has a change of heart and puts all their effort into one language I don't care about, or they start to deviate from GCC compatibility?

      • > I use clang as well because I develop on a mac

        Sadly, clang doesn't support OpenMP yet... so I'm still using GCC 5.x on OS X 10.9.

      • For all practical purposes, Clang is useless if you want to develop C++.

        Clang is incompatible with libstdc++ and with Microsoft's C++ library, which means you have to use the libc++ that they supply. Unfortunately, libc++ is not available on Windows, so any app that uses C++ features is out. On Linux, if you want to use C++ features, it is pretty much impossible to cross-link against libc++ and the other libs on your distribution that may be compiled with g++, so you have to compile every library you want to use yourself.

        • by _merlin ( 160982 )

          WTF? I use Clang on Linux and link against the GNU C++ runtime library all the time. It works just fine. Why are you spreading FUD?

          • Just to clarify: were you able to link an app compiled with Clang against libstdc++ compiled with g++, and another C++ lib compiled with g++, like, say, libqt4? And it ran without any problems?

            If this is true I apologize to the Clang community. But the fact is our team has not been able to get this to work on RHEL, and I admit we did not dig much deeper into it, since anecdotal info on the web indicated that this is not possible. Our code did not even link, while the same code compiled with g++ linked and ran.

            • by _merlin ( 160982 )

              Yes, one supported configuration for building MAME [mamedev.org] is using Clang on Linux. It links against distro-provided, GCC-built libstdc++ and Qt4. It definitely works using Clang 3.4 or later on Fedora 20 or later. I've also successfully built applications with Clang against distro-provided, GCC-built libstdc++ and xerces-c on CentOS 6 and later, and Fedora 20 and later.

              There are some issues with experimental C++14 mode in Clang that cause it to choke on some of the libstdc++ headers, but these are real known issues.

      • I don't think clang+llvm will just fail once Apple drops support. There are so many other companies supporting clang+llvm and using them to do important things. See: http://llvm.org/Users.html [llvm.org]
  • by Anonymous Coward

    This is just a bugfix release. With the old (GCC 4) versioning scheme this would've been called 5.0.1.

  • by Bruce Perens ( 3872 ) <bruce@perens.com> on Monday July 20, 2015 @10:53AM (#50145517) Homepage Journal

    I've been configuring a toolchain for Algoram's programmable radio transceiver, which has a SmartFusion 2 containing a Cortex M3. Until today, I've been working with GCC 5.1. Building GCC for cross-compilation to a no-MMU, no-FP processor and a software platform that doesn't support shared libraries isn't trivial, though it should be. GCC has many configure scripts, one for each library that it builds and at least one for the compiler. You run across many configure issues which are difficult to debug. For example, the configure file, a macro-expanded shell script, doesn't carry source code line numbers from its configure.ac file. Error messages do not in general indicate the actual problem, and are difficult to trace. Figuring out what to fix is far from trivial. I ended up not being able to use multilibs (which would have allowed me to build for FP processors like the Cortex M4F as well), couldn't link in ISL, and couldn't build libjava.

    Some of these are beginner problems - I'm new to building cross-toolchains and have avoided autotools as much as possible before this project. But not all of them.

    One would think that we could build a better system today than such voluminous M4 and shell. Perhaps basing it on a test framework might be the right approach.

    • Perhaps you could demonstrate the difficulty of building a cross-GCC by phrasing your rant in the form of a good Stack Overflow question [jonskeet.uk]. Explain what you are trying to do, what web search queries you used, what you tried, what you expected, and what each failure looked like. If they are in fact "beginner problems", getting the question onto SO should eventually help future web searchers find the answer more easily. Or if Stack Overflow scares you [pineight.com], you might try looking at how it was done in devkitARM [devkitpro.org].

      For line numbers in an M4 script, have you tried adding --synclines [gnu.org]?

      If error messages from some compiler or interpreter are unhelpful, have you tried filing bugs against said compiler or interpreter to improve the usefulness of its error messages?

      • by Anonymous Coward

        Why would he waste time asking a question at SO, when a likely outcome is that some power-tripping mod will come along and incorrectly deem it a "bad" question, and then lock it?

        • by tepples ( 727027 )

          So long as a question meets the guidelines set forth in Jon Skeet's essay, actions by "some power-tripping mod" can be appealed on Meta Stack Overflow.

          • by Bruce Perens ( 3872 ) <bruce@perens.com> on Monday July 20, 2015 @12:11PM (#50146189) Homepage Journal

            This isn't really a problem for StackOverflow. It's a problem for the developers of GCC and its libraries, and a policy problem for the overall GNU project in that Autotools is IMO too much of a mess to live, and is a barrier to participation as it stands. That's why I talk about it here instead of just submitting it as a bug report.

            I would like to see someone come up with an alternative. That alternative is not CMake or Scons, etc., because those are build systems rather than systems that probe a platform for fine differences in the programming environment and produce a set of macro switches as output.

              You can think of CMake as autoheader, autoconf, automake and libtool rolled up into one tool. It's a scripting language which is evaluated, the end products being generated files of any type you like, plus the files for a specified build system (typically Makefiles on UNIX, though many other generators are available). It's a superset of the functionality of the autotools, and is vastly more maintainable and flexible, not to mention portable to non-POSIX platforms. It's not the nicest language I've encountered, but it's certainly better than the multi-language mess of m4/shell/templates we live with in the Autoconf world.

              You can do all the feature testing with CMake that you can with Autoconf. For example, in place of AC_TRY_COMPILE, you use CHECK_C_SOURCE_COMPILES [cmake.org], or the equivalent in another language. There are variants for all sorts of other feature tests and checks, same as with Autoconf. But in general, I think it solves current portability problems somewhat better and more portably than the Autotools, which seem to still be stuck in the 90s in terms of the problems they try to solve. Example: portably enabling threading.

              When creating custom headers for macros from the feature test results, in place of AC_CONFIG_HEADER you would use configure_file [cmake.org], which does exactly the same thing using CMake variables.
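
              To make that concrete, here's a minimal sketch of how those two pieces fit together (the HAVE_USLEEP name and the config.h.in line are illustrative for this example, not from any particular project):

                include(CheckCSourceCompiles)
                CHECK_C_SOURCE_COMPILES("
                  #include <unistd.h>
                  int main(void) { return usleep(1000) == 0 ? 0 : 1; }
                " HAVE_USLEEP)
                # config.h.in would contain:  #cmakedefine HAVE_USLEEP 1
                configure_file(config.h.in config.h)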

              After 15 years of autotools usage, I converted my most important projects to CMake around 22 months ago, and haven't looked back. Most recently, I did conversions from autotools for bzip2 and libtiff. In both cases, the conversion is pretty much a 1:1 change from Autoconf macro or Automake variable to the corresponding CMake macro/function/variable.

              Regards,
              Roger

            • I don't really get your point:

              It seems to be "all build systems suck but autotools is more suitable than cmake, scons etc".

              This seems to be a common opinion, since build systems are the first line of defence against anyone trying to compile the program, of course. Naturally, the systems designed to be better than autoconf and make are usually written by people who understand neither, and as a result work worse on all but the simplest projects.

              What would your ideal build/configuration system be?

              I think…

              • CMake, Scons, etc. are mainly targeted at dependency-based building of programs. Autotools doesn't really build anything. It goes through a long list of system facilities, determining if each is present. For many, perhaps most of them, it builds a little C program that exercises the facility, and sees if it compiles.

                Now, there's another poster who says you really can do this with CMake, which I'll have to look at.
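
                  For illustration, those little test programs are typically just a few lines of C. Here's a sketch of the sort of thing configure compiles for a check like AC_CHECK_FUNCS([fork]) (the code autoconf actually generates differs in detail):

                    /* If this compiles and links, configure concludes that
                       fork() exists and defines HAVE_FORK in config.h. */
                    #include <unistd.h>
                    int main(void) { return fork ? 0 : 1; }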

                  • I should correct that: automake builds things by creating makefiles. The configure script created by autoconf is concerned with configuration rather than building; automake's output (Makefile.in) is input to the configure script, which fills it in to produce the final makefiles.

                • My reply is a bit disorganised, as are my thoughts on this matter.

                  CMake, Scons, etc. are mainly targeted at dependency-based building of programs. Autotools doesn't really build anything. It goes through a long list of system facilities, determining if each is present. For many, perhaps most of them, it builds a little C program that exercises the facility, and sees if it compiles. Now, there's another poster who says you really can do this with CMake, which I'll have to look at.

                  I'm not sure. My experiences with CMake have been somewhat less than stellar. Cross compiling seems to be very much a second class citizen, whereas autoconf whines loudly if you break such things. As such cLAPACK actually won't cross compile, or wouldn't last time I tried it at any rate.

                  • I'm not sure. My experiences with CMake have been somewhat less than stellar. Cross compiling seems to be very much a second class citizen, whereas autoconf whines loudly if you break such things. As such cLAPACK actually won't cross compile, or wouldn't last time I tried it at any rate.

                    That has been my experience too.

                    I'm actually not much of a fan of automake. I personally quite like autoconf plus GNU Make.

                    Autoconf is kind of the partner of automake, right?

                    • Autoconf is kind of the partner of automake, right?

                      Kinda: they're a pair of tools. autoconf generates the configure script, which probes the system for features; automake automates the creation of makefiles by generating Makefile.in templates, which the configure script fills in.

                      You can use autoconf alone though.
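
                      For illustration, a minimal autoconf-only setup looks something like this (hypothetical project; no automake involved, just a hand-written Makefile.in for configure to fill in):

                        # configure.ac
                        AC_INIT([hello], [1.0])
                        AC_PROG_CC
                        AC_CHECK_FUNCS([fork mmap])
                        AC_CONFIG_HEADERS([config.h])
                        AC_CONFIG_FILES([Makefile])
                        AC_OUTPUT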

                • by rl117 ( 110595 )

                  Hi Bruce,

                  As an example, take a look at the script for libtiff [github.com], lines 180-402 in particular, since these reproduce exactly what the original configure script does. The rest also mirrors the configure script (options, etc.), but this section is the feature tests.

                  • Roger,

                    This is great. It does look like a 1:1 mapping to what we expect autoconf to do, except neater and more maintainable.

                    The only problem with selling this to GNU folks is that it would make CMake a prerequisite to everything. But I think it's worth it. And then there's inertia. And the language isn't as pretty as we'd like.

                    Can you see any other possible objections?

                    Thanks

                    Bruce

                    • by rl117 ( 110595 )

                      As another poster commented, it's another tool to install, which is a burden. And if you use a newer version, it makes it harder to build the package on older distributions; though as you can see at the top of my example, you can specify a minimum cmake version, and also, through its policy mechanism, request general behaviour matching a specific version with tweaks for individual policies. So it's certainly possible to be backward compatible if you make the effort, at the expense of not using newer-version-only features.

                • Now, there's another poster who says you really can do this with CMake, which I'll have to look at.

                  Did you find out if this works? I'm interested because last time I tried to do complex cross-platform compilation with cmake, I eventually gave up completely and wrote my own build script for several different projects. I would be happy to know that actually it does work.

                  I think the best way to go about this might be to create a front-end to autotools. Once you have a really nice syntax system, then eventually the underlying system can be re-implemented. But setting up the syntax system is something a s…

        Besides devkitARM, there is the collection of toolchains mentioned here [elinux.org]. I am getting most of my clues from the Emcraft toolchain, which is the only one for the SmartFusion. And we're great friends with Emcraft, but I want something a bit newer and a different build-tree style.

        My last approach to the libstdc++ mailing list, here [gnu.org], was left unanswered. I figured out the problem behind that one, but it would have been nice to get some advice.

        Autoconf doesn't have a --synclines flag, but I might be able to pass it in the M4 environment variable. I'll give it a try.
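
        What I have in mind is something like this (untested, and the wrapper name is made up; Autoconf documents M4 as naming the m4 program to run, so the flag probably has to go in a wrapper script rather than in the variable itself):

          #!/bin/sh
          # m4-synclines: hypothetical wrapper that makes GNU m4 emit
          # #line-style sync markers in its output
          exec m4 --synclines "$@"

        and then M4=$PWD/m4-synclines autoreconf --force --install.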

    • by phantomfive ( 622387 ) on Monday July 20, 2015 @12:23PM (#50146345) Journal
      Building things that will compile on (or in your case, for) completely different systems is not a trivial problem. You have to deal with completely different library formats (sometimes even on the same system, as is the case with Windows), name-mangling conventions, MMUs (as you mentioned), completely different processors... Figuring out all the places your program needs to be flexible is not easy.

      Autotools faces the same problem: each system it targets has its own weird idiosyncrasies. Autotools has grown up over time to handle those idiosyncrasies as people hit them.

      Are autotools perfect? No, they are kind of weird. Moreover, when people write build scripts, they often write the most hacky code (which is the problem Maven's strictness is designed to solve). Some codebases are so poorly written that to get them to cross-compile, you need to modify the source itself. That has nothing to do with the build system; it's the fault of a lousy programmer.

      However, for all its annoyances, there's just no other system with the same flexibility for configuring a codebase to run on weird systems as autotools. They have all the features; the only weirdness is the strange syntax. (As an aside, I've never used a macro system that had easy-to-remember commands. I don't know whether that's a problem with macro systems in general, just the ones I've used, or just my memory.)
  • by Anonymous Coward

    A compiler for embedded systems needs to understand and use a smaller footprint. The current version cannot run in less than 128MB of memory. No swapping part of it out. No pipes between stages. ALL MUST BE IN MEMORY at the same time. What a waste.

    • A compiler for embedded systems needs to understand and use a smaller footprint. The current version cannot run in less than 128MB of memory.

      I think the idea is that even if you're targeting an embedded system, you're still hosting the compiler on something with a keyboard big enough to comfortably edit code, such as a server, desktop PC, or laptop PC. Even dinky little Atom-powered tablets come with 2 GB of RAM nowadays.

      But there's a different size issue: the footprint of the GNU implementation of the C++ standard library when it is statically linked. Years ago, I used devkitARM, a distribution of GCC targeting the Game Boy Advance, a platform with 256 KiB* of main RAM, 32 KiB of tightly coupled fast RAM generally used for the stack and certain inner loops, and 96 KiB of video memory. (It also had up to 32 MiB of cartridge ROM, but not if receiving the program from a GameCube or from the first player in a multiplayer network.) I compiled C++ "hello world" programs using the static Newlib and libstdc++ that shipped with devkitARM. A hello world program using <cstdio> was less than 16 KiB, including the statically linked terminal code. So as long as I stuck to C libraries, I was fine. But a similar program using <iostream> produced a 180,032-byte executable, even after turning on relatively aggressive options to remove dead code. That left less than one-third of main RAM free for actual program code and data. I debugged into it, and the culprit turned out to be "locale" stuff (date, time, and currency formatting) that got initialized for std::cout even if the program never printed any date, time, or currency values.

      * That's 262,144 bytes, about a quarter of a megabyte, or about a four-thousandth of a gigabyte.
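
      To make the contrast concrete, the two programs were essentially of this shape (reconstructed for illustration, not the original source):

        // Version A: C stdio only. Statically linked, it stayed under 16 KiB.
        #include <cstdio>
        int main() { std::puts("hello, world"); }

        // Version B: iostreams. std::cout's static initialization drags in
        // locale machinery, yielding the ~176 KiB executable described above.
        #include <iostream>
        int main() { std::cout << "hello, world\n"; }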

    • by Lisandro ( 799651 ) on Monday July 20, 2015 @11:45AM (#50145951)

      The compiler? Why would you want to run that on an embedded system?

  • Did they fix the conflict between gcc-multilib and gcc-arm-linux-gnueabihf?

    Since Ubuntu Trusty was released, over a year ago, there has been a conflict between gcc-multilib (needed for building and running 32-bit applications on 64-bit Intel/AMD architectures) and several cross-compiler suites, including gcc-arm-linux-gnueabihf (the cross-compiler suite needed for developing applications for ARM processors, such as those on the BeagleBones and many Internet of Things devices).

    This means that if you have a 64-bit machine running a 64-bit install that builds for both its own 64- and 32-bit environments, or runs some 32-bit applications, you can't just install the cross-tools from the repository and dig in. You need a separate machine for cross-development, or you need to take time out to do your own hacking of the tools.

    I've looked around the net for solutions. The issue seems to be a disconnect between teams, primarily over conflicting uses of the symlink at /usr/include/asm. But I haven't found any clear description of how to work around this, nor has the problem been fixed in the repositories. After over a year, I find this very disappointing.

    Has this been addressed with this new release of the underlying compilers?
