
C++ the Clear Winner In Google's Language Performance Tests 670

Posted by Soulskill
from the meep-meep dept.
Paul Dubuc writes "Google has released a research paper (PDF) that suggests C++ is the best-performing language on the market. It's not for everyone, though. They write, '...it also required the most extensive tuning efforts, many of which were done at a level of sophistication that would not be available to the average programmer.'"
  • Common knowledge (Score:5, Insightful)

    by PitaBred (632671) <(slashdot) (at) (pitabred.dyndns.org)> on Wednesday June 15, 2011 @01:05AM (#36446386) Homepage

    I thought this was common knowledge. Did anyone ever doubt that?

    • by Anonymous Coward on Wednesday June 15, 2011 @01:11AM (#36446412)

      It's been common knowledge for at least a decade that Java is 6 months away from being quicker than C++.

      • by rvw (755107)

        It's been common knowledge for at least a decade that Java is 6 months away from being quicker than C++.

        Just wait for Java++! Oh wait...

      • Talk to Java heads and they'll tell you Java is already faster than C++. They can show you some contrived tests to demonstrate this, too! Of course, they never seem to have a good answer for why, if that's the case, all performance-critical apps (like games, audio software, etc.) are written in something else, or why Java Linpack pulls like 750 Mflops on a system where natively compiled (from C) Linpack gets 41 Gflops. However, they are sure it is faster!

        • by mario_grgic (515333) on Wednesday June 15, 2011 @05:48AM (#36447900)
          That's because Java floats are not the same as C floats, so the compiler can't use the CPU's FPU to do the work directly. Java float types make stronger guarantees than C floats (which are usually just restricted to the underlying hardware float). This is exactly the reason why Fortran beats C or C++ when it comes to numerical computation: the Fortran compiler can output faster code because the language makes some guarantees, and it's easier/possible to optimize it better.
          • This is exactly the reason why Fortran beats C or C++ when it comes to numerical computation.

            Those are different guarantees. Fortran provides some better floating point facilities for numerics, like errors on a loss of precision, but those don't affect speed on common hardware. The main thing about FORTRAN is that the lack of array aliasing and pointers makes optimization much easier. C now has the restrict keyword which gives you similar options for optimization.

          • by w_dragon (1802458)
            This is also true of integer types - Java guarantees a long is 64-bits. C++, well, on a 32-bit machine a long will be 32 bits with any compiler I'm aware of. On 64 bit Windows a long remains 32 bits. On 64 bit Linux a long is 64 bits. Makes Java easier to write in, but if you try to run it on a 32-bit machine you'll take a performance hit if you're using a long where an int would do.
        • by rjstanford (69735) on Wednesday June 15, 2011 @08:09AM (#36449112) Homepage Journal

          Talk to Java heads and they'll tell you Java is already faster than C++. They can show you some contrived tests to demonstrate this, too!

          Take a look at the comments on http://jeremymanson.blogspot.com/2011/06/scala-java-shootout.html [blogspot.com] about the paper:

          Here's one from the top:

          The benchmark did not use Java collections effectively. It was generating millions of unnecessary objects per second, and the runtime was swamped by GC overhead. In addition to the collections-related fixes I made (as described in the paper), the benchmark also used a HashMap when it should have just stored a primitive int in Type; it also used a LinkedList where it should have used an ArrayDeque. In fact, after being pressed on the matter by other Googlers (and seeing the response the C++ version got), I did make these changes to the benchmark, which brought the numbers down by another factor of 2-3x. The mail I sent Robert about it got lost in the shuffle (he's a busy guy), but he later said he would update the web site.

          Changes which, I might add, are still far easier for the average Java peon-on-the-street to understand than the C++ equivalents. The fact that the paper was comparing one program in C++ that had been optimized to within an inch of its life with another program, in Java, that had had someone spend about an hour "cleaning it up a little," makes for a grossly unfair comparison.

          The fact that the "naive" (far more common) programs were all relatively the same speed was insightful.

    • by AuMatar (183847) on Wednesday June 15, 2011 @01:11AM (#36446422)

      There's a lot of Java-ites who claim that Java is just as fast. They're idiots, but they're vocal.

      • by drolli (522659)

        Oh yes. Why shouldn't a GC language, where the GC has to search through lists regularly instead of you telling the memory manager what to clean up by handing it the pointer, be faster?

        The magic of the buzzwords creates a reality distortion field around the code.

        (I like Java, but I know how to profile code.)

        • Re:Common knowledge (Score:5, Informative)

          by EvanED (569694) <evaned@ g m ail.com> on Wednesday June 15, 2011 @01:36AM (#36446568)

          A GC is a net loss, but don't think it doesn't have good effects mixed in there. Allocation of memory is a few instructions with a GC; malloc() can't hope to be in that same league. On the whole, GCs improve the memory locality of your program as it runs, with some substantial hiccups at the collections; manual memory management hinders it as your heap becomes more fragmented.

          On the whole, I think most programs nowadays should be written in a managed language; the performance gap just doesn't matter for most things.

          • Re: (Score:2, Interesting)

            by Anonymous Coward

            Things are not so rosy on systems where RAM is scarce and/or there is no swap, as you can see from the headaches of Android developers. On Android, each allocation seems to take a chunk of your soul, and the SDK comes with tools that help the programmer detect where allocations happen so they can avoid them. Most of the time, pre-allocating chunks of memory yourself is more efficient than letting Android do it. So tell me, where is the advantage over malloc there?

            • Re: (Score:3, Funny)

              by EvanED (569694)

              Just for future reference, not everything I say is necessarily applicable in every situation. For instance, Java does not run very fast at all on my abacus. It would also be possible to craft some programs where the Java version would exhibit pathologically bad behavior and/or the C++ version would behave particularly well, and the consequences of GCs moving things around wouldn't be a big deal.

          • Re: (Score:3, Informative)

            by MadKeithV (102058)

            On the whole, I think most programs nowadays should be written in a managed language; the performance gap just doesn't matter for most things.

            I've been developing professionally (disclaimer: in C++ ;-) ) for 10 years now, I've put up with people saying exactly this for 10 years, and for 10 years it hasn't been generally true at all.
            I don't mean to say managed languages have no place - I love using Python for scripting, and we've started doing GUI work in C#/.Net (oh no!) at my company as well, but there's always a dark performance corner where you end up having to go back to a language that gives you more low-level control. The performance gap

            • by Joce640k (829181)

              I've been developing professionally (disclaimer: in C++ ;-) ) for 10 years now, I've put up with people saying exactly this for 10 years, and for 10 years it hasn't been generally true at all.

              Me too, and I really can't remember the last time I worried about memory management or had a memory leak in C++. It's just all automatic when you do it properly.

              The problem with GC is that they told people to use it for things other than RAM. For files, network connections, etc. GC is a very bad idea. You need those resources freed in a timely manner, not when some mindless robot thinks it might be worth doing.

              In Java there's no way to automate this process (no equivalent to C++ stack unwinding) so every t

          • by drolli (522659)

            Yes, I also prefer GC languages, but the task in the article was very specific.

            I would even say that if you have such a very specific task, you write (or customize) your own malloc (not unusual when doing numerics) to prevent things like heap fragmentation. (On the other hand, one could write a specific GC...)

          • The allocation/deallocation advantages that GC languages have can be given to C++ through techniques such as memory pools.

            You can actually see this in the all-too-common "hey, this code is faster in Java than C++" examples that you see from time to time. They will be allocating and de-allocating many small objects. Without using a memory pool the C++ code will be slower, whilst the Java program, which internally uses a memory pool through its garbage collector, will be faster. Of course, if you use a memory

        • Re:Common knowledge (Score:4, Interesting)

          by Pseudonym (62607) on Wednesday June 15, 2011 @02:44AM (#36446954)

          Why shouldn't a GC language, where the GC has to search through lists regularly instead of you telling the memory manager what to clean up by handing it the pointer, be faster?

          When you're doing serious structure-hackery (as opposed to, say, string-hackery or numeric-hackery) in a non-GC language, you often end up having to structure your code around variable lifetimes, rather than structuring it around the algorithms like you're supposed to. The lack of GC can mean that some algorithms are not viable, and can result in the developer picking a worse algorithm instead. This kind of cost won't easily show up on a profile, but the cost is there, and it's nonzero.

          Compiler writers in particular know this, which is why even GCC uses GC. Yes, it's home-built application-specific carefully-applied-and-tuned GC, but it's GC nonetheless. I have a theory that one of the reasons why most languages use GC is that most languages are optimised for writing their own compiler in.

          Oh, and in a multi-threaded application, free() can be more expensive (in a latency sense) than malloc(). A well-tuned memory allocator maintains multiple arenas, so it can allocate memory from whichever one has the lowest thread contention. But deallocating memory requires returning it to the arena from whence it came; you have no choice in this. Some high-performance applications (e.g. database servers) have been known to avoid the latency of free() by handing off critical memory blocks to a dedicated thread which just sits in a loop calling free(). Essentially, it's a GC thread, only it's a manual one.

          Don't get me wrong. I don't use Java for pretty much anything (definitely more of a C++-head). But one should never underestimate the cost of manual memory management, or of any other resource for that matter.

          • Re:Common knowledge (Score:5, Informative)

            by TheRaven64 (641858) on Wednesday June 15, 2011 @04:23AM (#36447420) Journal

            Compiler writers in particular know this, which is why even GCC uses GC. Yes, it's home-built application-specific carefully-applied-and-tuned GC, but it's GC nonetheless.

            No it isn't, it's just the Boehm GC with half a dozen small patches applied.

            Oh, and in a multi-threaded application, free() can be more expensive (in a latency sense) than malloc()

            Not really. Recent malloc() implementations use per-thread pools, so free() just returns the memory to the pool (so do most GCs). Multithreading is where you really start to see a big win for GC though. If you're careful with your data structure design, then you can just about use static analysis to work out the lifetime of objects in a single-threaded environment. When you start aliasing them among multiple threads, then it becomes a lot harder. You end up with objects staying around a lot longer than they are actually referenced, just to make sure that they are not freed while one thread still has access to them.

            It's also worth noting that most examples of manual memory management outperforming GC are, like most examples of hand-crafted asm outperforming a compiler, quite small. If you can do whole-program analysis and write special cases for your algorithm, then you can almost certainly outperform general code, whether it's a compiler optimisation or a garbage collector[1]. If you can do this over an entire program, then you're very lucky - most of us need to deal with libraries, and a lot of us are writing libraries, so we can't control what the people calling our code will be doing.

            As an anecdote, I recently implemented GC support in the GNUstep Objective-C runtime. I tested a few programs with it, and found that memory usage in normal operation dropped by 5-10%, with no noticeable performance change. The code inside the runtime was simplified in a few cases because it now uses the general GC rather than a specialised ad-hoc GC for a few data structures that need to have multiple critical-path readers and occasional writers.

            Some high-performance applications (e.g. database servers) have been known to avoid the latency of free() by handing off critical memory blocks to a dedicated thread which just sits in a loop calling free()

            This will cause a serious performance hit on modern malloc() implementations. The memory will be allocated from one thread's pool and then returned to another, meaning that almost every allocation in your 'performance critical' thread is going to require acquiring the global allocation lock. I'd be surprised if code using this pattern scaled to more than 4-8 cores.

            [1] Although, interestingly, Hans Boehm's team found that the per-object allocator pools most of the C++ advocates in this thread are suggesting have very poor performance compared to even a relatively simple GC. They tend to result in far more dead memory being kept by the program (reducing overall performance by making swapping more likely) and, if implemented correctly, require much more synchronisation than a per-thread general allocator in multithreaded code.

      • Java is fast (Score:5, Insightful)

        by wurp (51446) on Wednesday June 15, 2011 @01:41AM (#36446590) Homepage

        In some situations Java will be faster than unoptimized C++ - JIT compilation will do enough of a better job than vanilla C++ to make the difference. In general, C++ will clearly be faster. However, I think what most of the people you're qualifying as idiots get up in arms about (rightly) is the assumption that so many programmers seem to make that Java will be many times slower than C++. That's (usually) just wrong.

        In particular, here's what Google's analysis had to say about it on page 9:

        Jeremy Manson brought the performance of Java on par with the original C++ implementation

        They go on to say that they deliberately chose not to optimize the Java further, though several of the other C++ optimizations would also have applied to Java.

        For most programming tasks, use the language that produces testable, maintainable code, and which is a good fit for the kind of problem you're solving. If it performs badly (unlikely on modern machines), profile it and optimize the critical sections. If you have to, write the most critical sections in C or assembly.

        If you're choosing the language to write your app based on how it performs, you are likely the one making bad technical decisions.

        • Re:Java is fast (Score:5, Insightful)

          by Splab (574204) on Wednesday June 15, 2011 @01:51AM (#36446680)

          For me it doesn't matter which language is faster, I'm using the one that solves my problem the fastest (e.g. shippable product fastest) and right now, Java is the main player for me.

          Also, since our CPUs aren't getting any faster, we need languages that make safe threading easier, and on that topic Java is miles ahead of C++. (Java used to have an utterly broken threading model, but these days it works [tm].)

          • You mean suspend/resume? That was deprecated like 10 years ago, but the functions required to do it right were in from the beginning (since 1.0).

      • Does throwing insults somehow make you right?

        You (and the mods who rated your comment "insightful") would do well to take an objective look at the facts here. If you'd bothered to RTFA, you'd realise that this is an apples to oranges comparison. The C++ code was optimised far beyond the Java code:

        "E. Java Tunings: Jeremy Manson brought the performance of Java on par with the original C++ version. This version is kept in the java_pro directory. Note that Jeremy deliberately refused to optimize the code furt

    • This paper was created with the sole purpose of getting accepted for an upcoming Scala conference. Also, copied from my post on OSNews:

      This paper has some pretty serious issues as has been discussed extensively https://groups.google.com/forum/#!topic/golang-nuts/G8L4af-Q9WE [google.com] and http://www.reddit.com/r/programming/comments/hqkwk/google_paper_com [reddit.com]...
      For instance nearly all the optimization for the C++ code could have been applied to the Java code. Also according to Ian Lance Taylor:

      Despite the name, the "Go Pro" code was never intended to be an example of idiomatic or efficient Go. Robert asked me to take a look at his code and I hacked on it for an hour to make it a little bit nicer. If I had realized that he was going to publish it externally I would have put a lot more time into making it nicer.

      I'm told that the program as compiled by 6g is significantly faster now than it was when Robert did his measurements.

      So yeah, in general a

    • by beelsebob (529313)

      More importantly... Did anyone ever care about that? I spend my days writing computer game engines, and I don't remember the last time I worried about the performance of a language.

    • by martin-boundary (547041) on Wednesday June 15, 2011 @02:27AM (#36446864)
      It's reasonable to doubt that C++ is faster than ASM, and it's reasonable to doubt that C++ is faster than C. And if we're talking about hand tuned numerical libraries, it's reasonable to doubt that C++ is faster than FORTRAN.
  • by Toksyuryel (1641337) on Wednesday June 15, 2011 @01:08AM (#36446400)

    RTFA and take a good hard look at what they compared it to: Java, Scala, and Go. This post is a complete non-story.

    • FTA: "All languages but C++ are garbage collected, where Scala and Java share the same garbage collector."

      That's got to play a factor here.

    • by jd (1658)

      I agree. The article is nonsense. Whilst one of the comments suggested comparing it to assembly, that's perhaps a bit unfair. However, to be a test of "languages on the market" I would have expected the following to be there for certain:

      • C99
      • D (Digital Mars' language)
      • Fortran 2008
      • Erlang
      • Forth

      Now, if you want to consider "the best", then you'd also want to include a few of the more experimental languages:

      • Occam-Pi
      • Unified Parallel C
      • AspeCt-oriented C
      • NESL

      The problem with selecting benchmarks is that it's easy to pic

  • ... and? (Score:4, Informative)

    by Durandal64 (658649) on Wednesday June 15, 2011 @01:11AM (#36446418)
    Wow, they compared a whole four languages: C++, Java, Go and Scala, of which, C++ is the fastest. Is this seriously a surprise to anyone?
    • by drolli (522659)

      They should have included Fortran, IMHO.

      • Re:... and? (Score:4, Interesting)

        by Canberra Bob (763479) on Wednesday June 15, 2011 @02:07AM (#36446748) Journal

        It would have been interesting to see a C, C++ and Fortran shootout on some heavy number crunching. Throw in some OpenGL, OpenCL and assembly for good measure. We always get to see how high-level languages compare, when in reality, for most apps that are written in higher-level languages, raw speed is one of the lesser factors when choosing a language. Yet we never see shootouts between the lower-level languages which would be used if speed truly was a concern.

        • by beelsebob (529313)

          Better yet, throw in Single Assignment C, and then run the heavy number crunching on a machine with many cores/cluster and see what happens.

        • Re:... and? (Score:4, Interesting)

          by TheRaven64 (641858) on Wednesday June 15, 2011 @05:09AM (#36447680) Journal

          It would have been interesting to see a C, C++ and Fortran shootout on some heavy number crunching

          No it wouldn't. For this kind of algorithm, C, C++ and Fortran will generate the same compiler IR and you'll get exactly the same code out of the back end. The difference between compilers will be much greater than the difference between languages. Actually, it is already. For C, C++ and Fortran, EKOPath is 10-200% faster than GCC in real-world tests and synthetic benchmarks. There's more difference between GCC and EKOPath for C than there is between my Smalltalk compiler with Smalltalk and GCC with C, which is why language benchmarks are largely meaningless when you're talking about serial code.

          Language benchmarks are important when it comes to parallel code. For example, in Erlang I've written code that scaled up to 64 cores without even trying. In C/C++ I'd have to think very carefully about the design for that. Go is similar: it encourages you to write concurrent code. Idiomatic Go on a 64-processor machine would be a lot faster than idiomatic C++ on the same system, even if the C++ compiler did a lot more aggressive optimisation.

    • What do you suggest then? Assembly?

      • by Morth (322218)

        C would have been an interesting language to compare. We actually rewrote some C code in C++ and saw a speed benefit. Of course, the original C code was very object-oriented in this case, using structs with function pointers.

      • C? Compiled with clang? Or maybe more than one algorithm?
    • On a single algorithm! I hope it was a good one.

    • by jd (1658)

      If I'd assigned someone a project like that, I'd have mandated a minimum of 12 languages (4 popular, 4 traditional, 4 experimental) with a further breakdown mixing various methodologies (you want some parallel languages, some functional, some procedural, some stack, some OO, and maybe some with more exotic forms of abstraction).

      My suspicion is that C++ will rank good against some, not so good against others, but where the balance shifts according to what you're trying to do.

      Now, if you're insisting on pure

  • by GoodnaGuy (1861652) on Wednesday June 15, 2011 @01:11AM (#36446424)
    Yes, it's true that C/C++ is generally faster than other languages, but when it comes to writing bug-proof code, it's not so good. It's very easy to write past the end of arrays and use bad pointers, amongst other things. From a career point of view, C/C++ is bad. I should know; my main expertise is in it, and I am struggling to find a job. There seems to be way more jobs for Java and C# programmers.
    • There may be more jobs, but in my (albeit limited) experience all the fun ones require C++ :)
    • by Erbo (384) <obreerboNO@SPAMgmail.com> on Wednesday June 15, 2011 @01:28AM (#36446520) Homepage Journal
      And, while C++ will always necessarily be faster to execute, there's no question that the other three languages will be faster and more straightforward to develop in. (Which, in general, makes them a net win, as programmer time is almost always more expensive than computer time, except in certain corner cases which should be obvious.)

      Why? Three words: Automatic memory management.

      No more worrying about whether you've allocated the right buffer size for your data...and maybe allocated too little, resulting in an overrun, or allocated too much and wasted the memory. And no more forgetting to free that memory afterwards, resulting in a memory leak. You can write expressions like "f(g(x))" without having to worry about how the return value from "g" is going to be freed, allowing a more "natural" coding style for compound expressions.

      I maintain that automatic memory management, while not great for code-execution performance, is probably the single biggest boon to developer productivity since the full screen-mode text editor. (Not saying "emacs" or "vi" here. Take your pick.)

      Granted: You can retrofit a garbage-collecting memory manager onto C++...but that code will rob your C++ code of some of that enhanced execution performance which is probably the reason why you chose to develop in C++ in the first place.

      • Re: (Score:2, Interesting)

        by Anonymous Coward

        Seriously, everybody using the heap directly in C++ should reconsider their coding practice. Memory management, automatic or manual, is something that should never be exposed at the interface level. And, unlike Java, in C++ you can actually achieve this.

        Good code in C++ simply doesn't use the heap, except perhaps for some low-level implementation stuff. If you are concerned about buffer overruns or 'delete' responsibility, you are not using C++ correctly. In my C++ code, I have hardly one new/delete per 10K lines of cod

      • by Pseudonym (62607)

        No more worrying about whether you've allocated the right buffer size for your data...and maybe allocated too little, resulting in an overrun, or allocated too much and wasted the memory.

        You've clearly never programmed in C++. Or, more likely, all of your C++ code has been C code in a very thin disguise.

      • by Stele (9443)

        That you got 5 Insightful indicates that most people are quite ignorant of many of C++'s features.

        It's very easy to have "automatic memory management" in C++ without resorting to GC. STL containers and reference-counted smart pointers make this trivial. I have written numerous commercial applications in C++ and most of them do not have a single explicit "delete" or "free" statement. And I get templates, which lets the optimizer do some amazing things.

    • by smellotron (1039250) on Wednesday June 15, 2011 @01:30AM (#36446530)

      Yes, it's true that C/C++ is generally faster than other languages, but when it comes to writing bug-proof code, it's not so good.

      Well when you put it that way, it's not a surprise. C and C++ are different languages with different approaches for effectively achieving low error rates. If you approach them as a single "C/C++" language, you'll end up inheriting the weaknesses of both languages without likewise inheriting the strengths of either.

    • by msclrhd (1211086)

      And in C#/Java it is easy to fail to clean up a resource by not closing/destroying it -- by not placing it in a try...finally or using block.

      Or, to use your examples, in C#/Java it is easy to create bugs by not properly trapping a NullPointerException or ArrayIndexOutOfBounds exception, causing the program to close unexpectedly!

      I'm not saying that C/C++ are simple, or that they are not error prone, just that you need to be careful in all languages you program in. Any program has bugs in it.

      • by jeremyp (130771)

        You shouldn't be trapping either of those exceptions. They are both indicative of an incorrect assumption on the part of the programmer which means that the program's state is no longer trustworthy. The only safe thing to do is get out and start again.

    • by Alex Belits (437) *

      Any given programmer ALWAYS produces a constant (amount * severity) of bugs per time.

    • by fyngyrz (762201)

      Yes, it's true that C/C++ is generally faster than other languages, but when it comes to writing bug-proof code, it's not so good.

      No, you mean YOU aren't so good. A good C programmer won't produce any more bugs than a good any-other-language programmer; we know how to handle memory, multi-threading, and those other issues that terrify those who have only worked with languages with training wheels permanently attached.

  • by istartedi (132515) on Wednesday June 15, 2011 @01:15AM (#36446446) Journal

    This jibes with "common sense" and the computer-language shoot-out [debian.org]

    It's not useless. It's nice to see multiple studies with different approaches coming to the same conclusions.

  • basic (Score:5, Funny)

    by theheadlessrabbit (1022587) on Wednesday June 15, 2011 @01:16AM (#36446454) Homepage Journal

    They didn't test BASIC? Lame...

  • I'm disappointed; everybody knows that when the going gets tough, FORTRAN gets going
  • by meburke (736645) on Wednesday June 15, 2011 @01:24AM (#36446498)

    Notice one of the comments pointed out that Borland Pascal was one of the fastest executing languages next to ASM. I remember that Borland Pascal (in 19991) executed almost 10 times faster than Borland C++ on a consistent basis on the same systems.

    This only points out that tests need to compare apples and apples. I would be quite surprised if any C++ can execute an FFT as fast as my Lahey FORTRAN95.

    If I was going to pick only one language to work with, it would probably be LISP, but Haskell comes a very close second. I like code that does exactly what I want it to do with no side effects.

    There is much more to comparing languages than is reported in the article, including testing the language's suitability for a given task.

    • by Daniel Dvorkin (106857) on Wednesday June 15, 2011 @01:30AM (#36446528) Homepage Journal

      I remember that Borland Pascal (in 19991) executed almost 10 times faster than Borland C++ on a consistent basis on the same systems.

      Apparently, the reason it executed so fast was because it was reaching 18000 years into the future to run on the computers of the galaxy-spanning civilization built by the hyperspatial god-beings who will be our distant descendants. Man, Borland had some great tech in its day.

    • I don't know about that; maybe it was faster than Borland C++, but I did a lot of work (disassembly) on Borland Pascal compilation units and executables back in the day, and the code was horrible. HORRIBLE. It didn't even have the most basic peephole optimizations (though someone wrote an external application to do that). It was fast to compile, though, due to being one-pass, but that right there sacrifices optimizations.

      So if TP/BP was faster than BC++, it was only because BC++ must have been even worse than I

    • > I would be quite surprised if any C++ can execute an FFT as fast as my Lahey FORTRAN95.

      Have you looked at the Fastest Fourier Transform in the West (FFTW)? [fftw.org] It's written in C - but the funny thing is, it's written in C by an OCaml program.

      I think this is the way forward for truly performance intensive code. Not doing it in C++, but writing dedicated compilers for specific subroutines, churning out C, assembly, or compiler specific intermediate language code. Functional languages should excel at this, the

      • " Functional languages should excel at this, they have been ruling the program transformation/analysis space for a long time."

        I think the key is homoiconicity, not functionalness. Prolog isn't a functional language, and it's mainly used for domain specific languages/one-off compilers.
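
      The generator-writes-C idea above can be sketched in a few lines. FFTW's real generator (genfft) is an OCaml program doing serious algebraic simplification; this is only a toy illustration of the same pattern, with a made-up function name, emitting fully unrolled, branch-free C for a fixed problem size:

      ```cpp
      #include <cassert>
      #include <sstream>
      #include <string>

      // Toy sketch of "a dedicated compiler for a specific subroutine":
      // emit straight-line C for a dot product of fixed size n, so the C
      // compiler sees no loop and no branches at all.
      std::string emit_dot_product(int n) {
          std::ostringstream out;
          out << "double dot" << n << "(const double* a, const double* b) {\n";
          out << "    return ";
          for (int i = 0; i < n; ++i) {
              if (i) out << " + ";
              out << "a[" << i << "]*b[" << i << "]";
          }
          out << ";\n}\n";
          return out.str();
      }
      ```

      The emitted text is ordinary C, so it can be fed to any C compiler; the "cleverness" lives in the generator, not in the generated source.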

  • Assembly (Score:4, Interesting)

    by Baby Duck (176251) on Wednesday June 15, 2011 @01:26AM (#36446506) Homepage
    If pure speed is the sole criterion with tuning effort having zero consideration, wouldn't masterful Assembly or opcode be the fastest?
    • Re:Assembly (Score:4, Funny)

      by MachDelta (704883) on Wednesday June 15, 2011 @01:36AM (#36446560)

      A prof of mine used to say that "writing a program in assembly is like writing a novel in math."

      Anything longer than a haiku and you'll want to blow your brains out.

      • Also (Score:4, Interesting)

        by Sycraft-fu (314770) on Wednesday June 15, 2011 @03:25AM (#36447158)

        It often turns out programmers are not as good at assembly as they might think. I'm not saying a little hand-tuned assembly isn't useful for some things, but the "I can do better than a compiler" attitude usually has no basis in reality. Good compilers are pretty clever at optimizing things. So maybe, if you have an area of your code that uses a lot of time (as determined by a profiler; don't think you are good at identifying it yourself), you write a hand-tuned inline assembly version of it (perhaps starting from the assembly the compiler generates). But you don't go and write the whole program in assembly.

        C++, of course, is something you can quite easily write a very large program in, or an operating system for that matter. Not quite as easy as a fully managed language, but perfectly feasible at large scale, and indeed it is what a large number of projects use.

    • No. I learned programming in assembler (Zilog Z80), and used asm (MC68000, 8086-386, ARM) often until maybe 1995, when I realized that modern C/C++ compilers were generating code some 20% faster than what I was able to write in asm. For "ordinary" programs this is still the case today, more so with the increasing quality of compilers. The exception is small specialized functions, like the inner loop of a video encoder, where you can still use SSE better than the compiler.
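
      The kind of SSE inner loop meant above looks roughly like this (a minimal sketch using standard intrinsics, x86 only; not code from the paper):

      ```cpp
      #include <cassert>
      #include <immintrin.h>  // SSE intrinsics; requires an x86 target

      // Add two float arrays four lanes at a time with SSE; this is the sort
      // of hand-vectorized inner loop a compiler may or may not match.
      void add_floats_sse(const float* a, const float* b, float* out, int n) {
          int i = 0;
          for (; i + 4 <= n; i += 4) {
              __m128 va = _mm_loadu_ps(a + i);           // unaligned load, 4 floats
              __m128 vb = _mm_loadu_ps(b + i);
              _mm_storeu_ps(out + i, _mm_add_ps(va, vb)); // 4 adds in one instruction
          }
          for (; i < n; ++i)  // scalar tail for leftover elements
              out[i] = a[i] + b[i];
      }
      ```

      Modern compilers often auto-vectorize loops this simple on their own; intrinsics start paying off in shuffles, saturating arithmetic, and other patterns the vectorizer misses.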
    • by oiron (697563)

      Reasonable C/C++ code can be as good as (or even better than) hand-tuned assembler. There's just no benefit to writing in ASM for most things.

      Basically, C/C++ code should compile down to nearly the same code you'd get from hand-written assembler for almost anything, as long as your compiler is good.

  • by Phantom of the Opera (1867) on Wednesday June 15, 2011 @01:41AM (#36446592) Homepage
    Of course it's going to be a C that wins. It's pushed close to the iron. C++ is for careful, exacting personalities; I simply don't have the patience to use it on a day-to-day basis. Scripting is for the frenetic like me. If you pick a scripting language, you're selling out performance to keep your rapid development and sanity. You can write beautiful, safe stuff in C++ too. Use what you're comfortable with.
  • '...it also required the most extensive tuning efforts, many of which were done at a level of sophistication that would not be available to the average programmer.' I think that's the problem in a lot of systems today. Too many "average" programs are being written by too many "average" programmers, and this is why there are problems such as memory leaks, etc. I feel too many have been spoilt by the ridiculous memory and processing speeds available in today's computers. Whatever happened to understandi
    • I don't think that quote is right. If you look at the optimizations done in the C++ version, the main performance increase came from using hash_map instead of map. This is easily within the sophistication of the average programmer; at least, it's no harder than their Java optimizations, which were:

      "Replaced HashSet and ArrayList with much smaller equivalent data structures that don’t do boxing or have large backing data structures."

      Also, I don't think the test was quite fair for Java. Here is what the paper says:

      Note that Jeremy deliberately refused to optimize the code further, many of the C++ optimizations would apply to the Java version as well.

      In other words, partially optimized Java is slower than fully optimized C++? Well isn't that amazing. If they're going to do it, t
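
      The hash_map-for-map swap mentioned above is a one-line type change in standard C++ (a generic sketch, not the benchmark's actual code; std::unordered_map is the standardized descendant of the old hash_map):

      ```cpp
      #include <cassert>
      #include <map>
      #include <string>
      #include <unordered_map>
      #include <vector>

      // std::map is a balanced tree: O(log n) per operation, sorted iteration.
      // std::unordered_map is a hash table: amortized O(1) lookups, no order.
      // If the code never relies on sorted order, only the type name changes.
      template <typename Map>
      Map word_counts(const std::vector<std::string>& words) {
          Map counts;
          for (const auto& w : words)
              ++counts[w];  // identical interface for both containers
          return counts;
      }
      ```

      Whether that counts as "sophisticated tuning" is exactly the parent's point: picking the right container is routine work in any of the tested languages.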

    • Of course on Slashdot everyone is above average.

  • by sgt101 (120604) on Wednesday June 15, 2011 @01:44AM (#36446620)

    From the paper (section 6.E, Java Tunings): 'Note that Jeremy deliberately refused to optimize the code further, many of the C++ optimizations would apply to the Java version as well.'

  • by Anonymous Coward

    ... most people are doing it wrong.

    Good use of C++ fills a very small niche of people that want a relatively high-level language but care about a statement like "the compiler generates good code for this"... You want some of the properties of C, like being close to the hardware and generating straightforward machine code. Add to that some things that make OO easier. Add type safety. Add templates and function objects, which due to inlining gives you much better machine code than the typical C approach o
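
    The inlining advantage over the typical C approach can be made concrete (a standard-library comparison, not tied to the paper): qsort compares through an opaque function pointer the optimizer can't see through, while std::sort receives the comparator as a template parameter and can inline it.

    ```cpp
    #include <algorithm>
    #include <cassert>
    #include <cstdlib>
    #include <vector>

    // C-style comparator: called through a function pointer on every compare.
    int compare_ints(const void* a, const void* b) {
        return *static_cast<const int*>(a) - *static_cast<const int*>(b);
    }

    void sort_both(std::vector<int>& c_style, std::vector<int>& cpp_style) {
        std::qsort(c_style.data(), c_style.size(), sizeof(int), compare_ints);
        std::sort(cpp_style.begin(), cpp_style.end(),
                  [](int a, int b) { return a < b; });  // inlinable comparator
    }
    ```

    Both produce the same result, but the std::sort instantiation is specialized for the comparator type, which is why it typically beats qsort in benchmarks.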

  • The cases in which performance is critical are getting fewer and fewer, now that hardware is getting faster and faster. It's not strange that Microsoft is focusing on JavaScript and HTML5. It seems that at the moment the greatest effort in performance improvement is being put into JavaScript.
  • Nope, C++ certainly not for everyone. But the most powerful tools rarely are.

  • Many talk about time to develop, and certainly C/C++ sucks in that capacity. And many talk about code complexity and the likelihood of bugs due to memory management, and certainly there's much to be said there too. But C/C++ fails me in one very simple business category: code robustness.

    Every project I work on has clients doing one very important thing -- turning the entire thing upside-down and backwards three times. In C/C++, I'd get to start from scratch every time. In Perl, I get to shuffle a few lines o

  • Lowest-level language with careful tweaking yields best performance. Film at 11.
  • by antifoidulus (807088) on Wednesday June 15, 2011 @02:56AM (#36447032) Homepage Journal
    There is no such thing as a real "computer language speed test"; this is really a test of compilers, VMs/interpreters, and environments. The first question that has to be raised is, of course: when the program is running on hardware, what "language" is it written in? The hardware sure doesn't give a shit. You can compile almost any language into native code, including VM-driven languages such as Java and Perl. Now granted, TFA states that they ran their loop 15k times to try to minimize the effect of the load time and run as much JIT-compiled code as possible, but that's still not the same as compiling it directly into native code.

    Which brings up another point: are they really testing the "programming language" (which is just a bunch of specifications and, usually, implementation hints), or are they testing the compiler/environment they are on? Code compiled with GCC and run on a Linux box will probably perform differently than code compiled with Microsoft's compiler running on Windows, which will behave differently than code compiled with LLVM/Clang running on OS X. You can probably say the same thing about Java compilers; I'm assuming they used the Oracle reference javac, but there are other Java compilers out there. How do you test the speed of a "language" when so much of that performance depends on the compiler and environment you are running on?

    Which leads into my final point: when does a program stop being a program written in that language? Although not tested this time, probably the best example of this point is Ada. Anyone who has coded in Ada knows how insanely strict it can be; it constantly does things like bounds checking to ensure that data stored in subtypes is within the bounds of those types. However, on most Ada compilers most of these checks can be disabled with just a couple of compiler flags. Obviously the resulting code is going to be faster than if you kept the checks in, but does it stop being Ada at that point? You can make a similar case for Java and JNI. JNI is completely legal in the Java language specification, but when you use JNI, does your program stop being a Java program? Could you have optimized it further by using JNI?

    This is merely a test of whatever compilers/VMs they used in whatever environment they ran the code in, nothing more, nothing less.
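
    The Ada point above has a rough C++ analogue (my own comparison, not from the paper): the same source can run with or without safety checks, and for benchmarking purposes the "fast" variant arguably isn't quite the same language. vector::at is bounds-checked and throws; operator[] is unchecked.

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <stdexcept>
    #include <vector>

    // Checked access, analogous to Ada with its runtime checks left on:
    // an out-of-bounds index raises a well-defined exception.
    bool checked_read_throws(const std::vector<int>& v, std::size_t i) {
        try {
            (void)v.at(i);      // bounds-checked read
            return false;
        } catch (const std::out_of_range&) {
            return true;        // v[i] here would be unchecked UB instead
        }
    }
    ```

    Which of the two access styles a benchmark uses changes its numbers, just as disabling Ada's checks does, so comparisons need to state exactly which dialect of "the language" was measured.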
