The Great Computer Language Shootout

kato writes: "Doug Bagley has posted results from benchmarking 29 different language implementations solving 25 different problems (he's written ~600 of the 725 programs so far). The languages include C/C++, Perl, Python, Eiffel, BASH, Tcl, and OCaml. The problems range in complexity from "Hello, World!" to the Sieve of Eratosthenes and Matrix Multiplication. The results can be sorted by speed, memory usage, or lines of code. You can also give one particular program more weight than another (if you are doing more client/server code than "Hello, World!") and find the fastest/smallest/shortest language implementation. I can see many of my programs being written in OCaml from now on." Update: 07/04 12:42 PM by CT : The site is apparently now redirecting people back here. I guess technically that's an error message, just not a helpful one. Update: 07/05 8:40 PM by M : Please don't email. The link is broken. We know. The guy is running a server at home on a metered connection, and doesn't want any more traffic.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Anonymous Coward
    And most important of all: Because it gives awesome blowjobs.
  • by Anonymous Coward
    Australians don't drink Fosters! yuk! We export the stuff and make others drink it :)
  • by Anonymous Coward
  • Use the google cached page if need be. Slashdot may consider linking to these cache pages when a site is being hosted on a relatively low power server. http://www.google.com/search?q=cache:776Xtlcr-4M:www.bagley.org/~doug/shootout/craps.shtml+&hl=en
  • If you think Rob Malda actually writes any code, you're mistaken.
  • and I would just like to note that you are GAY.

    --
  • While trying to retrieve the URL: http://216.30.46.6:8080/~doug/shootout/

    The following error was encountered:

    Connection Failed
    The system returned:

    (111) Connection refused
    The remote host or network may be down. Please try the request again.

    Damn you, you blew it all up!


  • The problem is that it's much easier to measure a program's performance than its maintainability. Most managers prefer facts to conjecture even if it means missing out on the "big picture."

    Combine that with the fact that it's often different developers doing the initial coding and support, and you get the situation where C is preferred to other languages that help you reduce the number of errors earlier on.
  • Actually, Java is at a huge loss with its native libraries.

    Languages like PERL and Python know they're natively slow, so they do all the real work underneath in C/C++ libs (that's how PERL's regexp's fly, and how Python-Numeric is within a small constant factor of number crunching in straight C).

    But Java has some *terrible* language design choices that really limit speed:

    - Everything is an Object. This means that you can't have typed containers (i.e. templates), so you have to do expensive dynamic downcasts when you remove objects from the container (heh, in C you'd just say "I know it's a struct foo*!" and cast away happily).

    - Except *not* everything is an object -- primitive types are second-class citizens, so to use the container library on ints or floats you have to use the heavy capital-I Integer, etc.

    - All method calls are virtual. This is a small but insidious memory-lookup on every function call... (can be avoided in most places in C++, but is sometimes necessary).

    ...

    If you really want to go fast, you use OCaml -- C and C++ are, relatively speaking, very hard languages to optimize. Strong typing (without Java's silly "strongly typed, then throw away the type info!") allows you to do much more rigorous analysis at compile time, so you have a much stronger picture of what the data flow graph is. For example, in C/C++ it's almost impossible to draw the data flow graph anywhere but in the basic block (think "within a set of {}") -- and even to do that the optimizer is making assumptions. In OCaml the data flow diagram contains the whole program, so the compiler can optimize across function calls.
    ...

    Note to the lisp junkies: get over syntax. Someone should write a "straight abstract syntax tree" frontend to OCaml just to make you guys happy. Gee whiz.

  • And in my experience, most engineers don't know how to do basic performance tuning.

    The most insightful thing I have read all day on /.

    Good show!

  • size() doesn't use "lazy" or "overeager" evaluation; it's an accessor method. A vector needs to keep track of what its size is -- how else does it know when it needs to allocate more memory for itself? If it "dynamically" computed its size, push_back() would be an O(n) operation (and compliant implementations need to do better than this in C++).

  • The real news in here, for me at least, is the frequent gaps between C and C++ implementations of the same thing. For the few sources I managed to look at (yes, the server is /.'ed to hell) the main culprit seems to be the STL.

    Sure -- you are definitely "closer to the metal" with C, at least when you write C++ that looks like C++. But the nice thing about C++ is that it gives you a choice -- you can trade speed for maintainer/programmer efficiency. Or you can write C++ that looks like C. I do both, depending on the context.

    I love it, but I agree with your point about the learning curve. I'd recommend "Accelerated C++" as a great book for getting up to speed on it. I've found the STL to be vastly superior to the half-assed attempts at container class design I see in most textbooks (a lot of the authors barely seem to comprehend the fact that linked lists are not random access data structures, making a basic conceptual error in using indices instead of iterators).

    I don't like the streams libraries either

    Love 'em. Typesafe, fast, extensible. Works for me. As a side note, the printf("hello world") vs. cout comparison...

    Does this go to reinforce my position that in this day and age Java is a logical choice for pretty well everything bespoke and not performance critical?

    It's too bad it's not standardised. It's a nice "slow" language. I've made a lot of use of reference counted "envelope" classes in C++, and having everything GC'd is certainly a nice idea if performance isn't critical. Unfortunately, for most of my work it is. If there was a mature Java Qt port, I'd consider it.

  • Thank you for reminding me why I read comments at 4.

    Don't you think that not drinking the beer you've been drinking for years just because it suddenly became trendy is just as bad as starting to drink a new beer just because it suddenly became trendy? The original poster did mention that they'd been drinking it "for many years now". How did you form the conclusion that they were a "sophisticated beer drinker", who was only drinking it because it "is the most trendy beer right now"?

    Later,
    Blake.

    P.S. As many people care what they drink as what you think. And at least their post was funny.
    --
    I'm sure this is flamebait, and should probably be moderated down...
  • I took the liberty of benchmarking 25 spoken languages by comparing their methods for expressing the phrase "Wuzzzzzzaaaaaah" from the Budweiser commercial. Some of these may not be optimized, as I am a bit of a newbie at some of these languages. Here is an excerpt:

    ...

    Australian English: "I'm not saying any such thing. Give me a Foster's."


    The Australian version should probably be "Mmmmmmaaaaaaaattttteeeeeee", since one of our advertising agencies (not a creative cell in their bodies) ripped off the American ads and produced a bunch of crappy "let's yell mate" ads... the ads sucked enough that I can't even remember what they were for.

    Also, Australians don't drink Fosters. We just export the crap to unsuspecting Americans (possibly as revenge for Tom dumping Nicole, opening up the possibility she might actually come back here).
  • You know, these benchmarks are completely worthless unless he can prove that he wrote the optimal code possible for each language.

    --
  • However, I have worked with PL/I and do so on a day-to-day basis. I also know C++, Perl and many other languages. I find PL/I both flexible and fast; it has pointers and other neat things like sliding DSECTs. It would be interesting to see an OO implementation of PL/I. Also, I wish there was a version of PL/I for Linux!
  • A much fairer contest is the pseudoknot benchmark.

    Why should this be fairer than the approach taken in the shootout? I think it's better to have several categories/programs. Floating-point performance is the last thing I care about.

    Still, it's a better example than a self-confessed beginner trying out toy problems.

    You probably haven't visited the site yet, have you? Solutions have been provided by different individuals, usually people actually working with the languages, too.

    You disagree with a solution? Provide a better one!

  • by vs ( 21446 )
    For real-world use, would you choose a language that compiled out to run 5% faster if it was also 20% harder to maintain? IT has a misplaced fixation on speed.

    For real-world use, would you choose a language that is much more apt to the job if it wasn't Java,C or C++?

  • What happened? I get an error from slashdot (can't find ~doug/shootout)
  • From a BRIEF read, I could not see where OCaml supports the notion of introspection. Can I ask a class or object what methods it supports and then invoke them? If not, it would fail to make this OO Programmer happy...

    Please respond via email - thanks.
  • It was Bjarne Stroustrup himself who said:

    "With C you can shoot yourself in the foot quite easily. With C++ it is harder, but if you do shoot yourself in the foot, you blow your whole leg off."
  • by Anonymous Coward
    PHP scored the way it did because the benchmark author forgot to enable inline optimizations.

    PHP disables this by default, because GCC's optimizer goes nuts on 64-bit targets when building PHP's script execution engine. You could see a 2 to 5 fold increase in execution speed over the posted results by simply configuring PHP using `--enable-inline-optimization'.

    When I notified the benchmark author about this issue, he simply replied that he expects most users NOT to check out any possible optimizations. If a user cares about speed, why wouldn't he try to explore such possibilities? Or was the benchmark author just lazy, trying to cram as many languages into his little benchmark as possible?

  • Having a lameness filter on Slashdot is like having a shit filter on your ass.
  • Well, I attempted to fill in some of his blanks, and took five minutes to write the PHP regular expression example. I was going to post it here, to perhaps inspire somebody to one-up me, so that the slashdot community might actually do something useful.

    Instead I fought the lameness filter, despite using the 'code' type, and generally proved that Taco can't code for shit.

    This is exactly why automatic censorship software is a terrible idea, and must be stopped.

    And to top it off, when I attempted to submit this message, I received a screen saying: 'This comment has been submitted already, 276169 hours , 30 minutes ago. No need to try again.' Apparently I posted this message earlier... in 19 fuckin 70.

    --

  • At least the first version does what you expect.
    I think you probably meant:
    for (i = some_length_func() - 1; i >= 0; i--)

    That's a really scary error, so fundamental, it's the kind of thing that would make me want to hunt out and replace all code written by that developer.
  • the upcount version evaluates the length function N times, the downcount evaluates once.

    Although, I must admit I normally use
    for (i = 0; i < a.size(); i++)
    when I know that a.size() uses a cached value
  • The foot test [logica.com] is already done.
  • The PhoneCode project is important because it attempted to measure programmer productivity and code quality. While not nearly rigorous enough for more than a conversation, it makes for a very interesting conversation.

    Chris Cothrun
    Curator of Chaos
  • He CNAME'd www.bagley.org to point to www.slashdot.org since his network can't even handle the SYN flood from connection attempts (my guess for the reason). It's probably the guy's home LAN.

  • Maybe Slashdot could start caching the sites they are going to link to beforehand, and then change the links to the cache if there's a problem. It could also be useful in case some site changes their content because of Slashdot referencing them and trying to refute the previous content.

  • The ratio between two languages' propensities to produce flawed programs is not going to be a constant. The ratio will rise with bad programmers and approach 1 with good programmers. Giving the language the chore of accomplishing program correctness indicates a lack of good programmers (as defined above).

    Of course I want absolute correctness in air traffic control systems and medical instrumentation systems. So the work in making sure programs are correct is certainly important, and I wouldn't want work to make a better language in this regard to stop. But what about also working to achieve better programmers?

    The quality level of programmers coming out of colleges is going down. I'm sure that is significantly due to the larger numbers entering the field. But this also means a college degree is essentially worthless in determining the quality of the programmer. Given that experience is also a factor, that pretty much means something else needs to be done to determine who is, and who is not, the quality programmer.

    I propose this solution. In addition to having a language that ensures provable correctness, let's also have a language that has none of those features at all. Force everyone to program in that language first for a few years and use it to weed out the bad programmers. Once we have selected the best and brightest programmers based on not only their education, but also the results of real world programming effort produced from experience, then we have them work in the provably correct languages. Now we'll have the best programmers combined with the best language. This should ensure the best results.

    Personally, I'd rather have a good programmer writing in a bad language than a bad programmer writing in a good language. I write in C (considered a bad language in this context) and before that in assembly (probably considered very bad). Of course I'm not perfect and frequently there are bugs in the code I write. But these bugs are typically the result of things like not keeping track of how I change things around (I discover I need 2 variables instead of 1 for something, and forgot to change some of the references to the new one), or misunderstanding the interfaces (there are such things as bugs in documentation and man pages, as well as just unclear and ambiguous writing). But I have gotten better on those things over the years. Experience helps you realize you need to develop correct models for what you are doing ahead of time, be more careful about changes, and be sure the understanding of the interfaces is precise and accurate with no shadow of a doubt.

    How can a "correct" language ensure that a programmer does not use the wrong variable? What if I transpose the rows and columns of the output matrix because I exchanged the variables that index where each result is stored in a square matrix? Writing wrong programs correctly is certainly very possible. In real world systems, what's going to happen isn't so much a programmer getting out of the bounds of an array (something a good programmer won't do even in assembly, and something any programmer won't do in a correct language) as it is the programmer simply not understanding what is really going on in the system they are coding for. What if a programmer mistakenly assumes the data is coming in in metric units when really it is coming in in old English units, and he simply doesn't code in the conversion somewhere? We could lose a space probe ... or we could lose an airplane full of people. Think it isn't happening? Systems are often designed to compensate for errors in data measurement all the time, and once deployed may well be functional even without the English-to-metric conversion in place. But the functional tolerances will now be shifted and performance specifications will be other than intended or documented.

    A bad language can still have good code written if the programmer is good. But a bad programmer is a disaster waiting to happen in any language. And if you think the language is going to protect against that, consider that since a correct language does indeed do that protection to an extent, that it now opens the door for even lamer programmers to start doing coding. Eventually, you hit the bottom again and you have someone who is such an idiot, that no language in the world can prevent them from screwing up.

  • > > ...unless he can prove that he wrote the optimal code possible for each language

    > Uh, how is this possible?

    Thank you for reinforcing my point.

    These benchmarks mean absolutely nothing.

    --
  • The real news in here, for me at least, is the frequent gaps between C and C++ implementations of the same thing. For the few sources I managed to look at (yes, the server is /.'ed to hell) the main culprit seems to be the STL.

    God I hate that thing. I mean, yes, it's very abstract and nice and fast by abstract library standards. But it has a learning curve like the north face of Everest, FFS. I've coded in C++ for a number of years now and have ended up with a personal style that perhaps more mirrors objective C than anything else, it's starting to look like that wasn't such a bad choice.

    I don't like the streams libraries either.

    Bjarne Stroustrup would hate me.

    Does this go to reinforce my position that in this day and age Java is a logical choice for pretty well everything bespoke and not performance critical? (Slow language made by people that make big computers. Hmmmmmmm)

    Dave
  • Reason 1, VB sucks as an enterprise language [ddj.com]. Where by "enterprise" I mean able to be maintained better by large teams that have high churn rates. Because they do.

    Reason 2, VB will not pay off your mortgage as fast as Java. See "enterprise" and "churn" above.

    Other than that I agree wholeheartedly. With the possible exception of using php server side. Now _there_ is a language that doesn't know shit about object orientation.

    Dave
  • When notifying the benchmark author about this issue, he simply replied that he expects most users NOT to checkout any possible optimizations.

    To a certain extent I agree with him. I suspect there's a lot of php code that will be run on 'off the shelf' Cobalts etc. where rebuilding the interpreter is not really a practical proposition.

    Besides, you have to draw a line in the sand somewhere, and that's the simplest.

    Too many languages? Yeah, I think so. I would've been happier with just C, C++, Java and possibly Python myself. Must be some mileage in comparing entire web application platforms too:

    Homepage favourite: Apache+PHP+MySQL
    Red Hat: Apache+PHP+PostgreSQL
    Java trendies: Apache+Jakarta+JSP+Oracle
    MS compliant: IIS+ASP+SQL7

    For instance.

    Dave
  • Actually, even if he did generate optimal code the benchmarks are meaningless. How maintainable is the code? How many bugs did he generate when writing it? How long did it take to write? How much will it cost me to train all my Visual Basic developers to switch over to OCaml? And so on, and so forth...
  • Well, that's not quite true; you can argue that a language newbie will pick the most 'natural' way to write something in a given language. If that's true (and I know that's arguable in itself), then a newbie's benchmark is a pretty good comparison of how the languages stack up.

    E.g.: I am a Python freak, but for some things, 'regular' Python idioms (like for loops) are slow, but functional Python features (like map()) are much faster. I would use map() but that would be cheating, 'coz I wouldn't know the same shortcut for Perl, for example...


  • Thank you. God knows why you got flamebaited; some good info. I will investigate OCaml for my own enlightenment :)
  • 1. I agree, I would love to give this a go with perl though (pure perl), it would be interesting to see whether it copes as well.

    2. Yeah, lots of people seem to have jumped on the java bandwagon and dumped code into the public domain.

    3. Encountered this already: people who want Java for an obvious perl job (text manipulation) because it means they can hire any old Java programmer to maintain it. Not sure I agree with this motivation, but I guess it's out there :)

    4. Fair enough.

    Thanks for the info :) It's interesting, but I find I can do 90% of what I need to do in perl... the mark of a general language, perhaps?
  • Thanks for the info :) Lots to think about.
  • by PhiRatE ( 39645 )
    Ok, this is Not a Troll. I honestly would like some serious opinions on Java's strengths, in consideration of the following points: (I use perl in a lot of these examples 'cos it's my favorite language, but they apply in other cases too)

    1. Portability
    Ok, everyone raves about how portable Java is, but well, so is perl (interpreter on numerous platforms, bytecode compiler too)

    2. Large publically available codebase
    Erm. CPAN?

    3. Rapid development
    Ok, In my opinion, there are very few things that could be built as fast or faster in Java (by an expert Java programmer) than in perl (by an expert perl programmer). I say this for several reasons:
    - Terse syntax under perl, less typing (cheap shot, but valid nonetheless)
    - extremely flexible types mean easy reusability and a total lack of numeric problems
    - Built-in high-level data structures (hashes, arrays) and the syntax to operate efficiently on them (list assignment)

    4. Speed. Ok, admittedly, for the most part people maintain that java is slow. I am aware that this is considered to be a VM issue more than a language issue, however I think it's worth pointing out that in many cases, the data types used most often in large applications are slow because they're written in Java. Take the classic hash for instance. Perl has it built in. C has it in a library which gets nicely compiled, C++ in a template, again compiled. Java, on the other hand, has all its high level data types written in Java. An enormous performance hit considering the amount of use these kinds of types get in heavy duty applications.

    Now, again in my opinion, a JIT could make up for the speed issue, and I only claim that perl is plenty portable, not that it has an advantage over Java in this respect.

    I am willing to concede that there is more Java code out there than perl code, but I don't believe there is so much that it makes a big difference.

    This being the case, and assuming my assertion about speed of development is true, why are people using Java? What benefits do Java programmers believe they gain (technical, not $$ :), as opposed to using perl, or even one of the functional languages (ocaml was presented here, but there are others, Erlang etc., which profess even greater programmer efficiency gains than perl)?

  • The fact that the C++ vector implementation increments and decrements a counter on pushes and pops instead of when size() is called

    Problems there:

    • The ISO C++ Standard doesn't require this behavior. The vector (or any other container) is allowed to not do the count of elements until size() is called.
    • The Standard recommends, but does not require, that vector et al use an O(1) size() -- thereby requiring the behavior which you describe -- but the time/space tradeoff choice is left to the implementation.
    • The STL from HP/SGI is the basis of the implementation of the "STL subset" of the C++ Standard Library for many compilers, including GCC 3.0. And it does not always do as you write: for list, it postpones the count until size() is called, leading to O(n) behavior.
    • Thus, use v.empty() rather than v.size() == 0 for tests.

    The FAQ at the SGI STL site discusses this further. It's good reading.

  • Doug Bagley makes an open invitation for readers to suggest improvements to the code he has written.

    I agree that the results cannot be considered definitive, but they are very far from "completely worthless". It is as important how a programming language fares in the hands of its average programmers as in the hands of its gurus.

  • Why should this be fairer than the approach taken in the shootout?

    I don't object to the solutions so much, since he is taking submissions. I object to the problems, which are all trivial. Floating point performance may be the last thing that you care about. Personally, I find performance in computing Ackermann's function slightly less important.

    Pseudoknot is real-world, CPU-intensive and hard. Yes, it's not a complete benchmark suite. I believe that I said as much. My main point is that shootouts based on the ability to solve trivial toy problems are useless.

    Off the top of my head, these are the sorts of things I want to see in a "real world" benchmark suite:

    • MPEG compression or decompression.
    • Something to do with string matching, such as shortest edit script (diff) or shortest common superstring.
    • Something to do with structure manipulation. Perhaps a compiler optimisation pass, such as partial redundancy elimination.
    • A nontrivial alpha-beta or A* search problem, like chess solving.

    I'm sure you have your own pet ideas too.

  • It's written that "The spirit of the prophet is subject to the prophet." In particular, how do we know that the C code running so much faster than the Bash code isn't because he's just a better C coder? Well, maybe that's a bad example, but what about Forth versus Lisp? Clearly programming skill for these languages provides a huge boost in code efficiency. It's great that he knows so many languages, but is he really a master of all of them? Is *anyone* a master of all of them?

    -Ted
  • Actually, I think you're very wrong. These may not absolutely show what each compiler can accomplish when John Carmack is writing something for them, but what they can show is what each compiler might be able to accomplish when an average programmer makes an attempt at writing code for them. If one compiler/scripting language handles the average person's average code in a better than average way, don't you think that would be a much better metric to base your choice of language on than the extremes? After all, it's the average everyday programmers that do 90% of all the work.
  • While waiting for Doug's server to be unslashdotted, you can also check out an earlier effort, Lutz Prechelt's PhoneCode Project [uni-karlsruhe.de] in which competitive implementations of the same problem in different languages were measured on several criteria.
    Doug's project is much more ambitious, but since he wrote most of the code it may not be as competitively written.
  • Java is a logical choice for pretty well everything bespoke and not performance critical?

    Absolutely.

    I'm a researcher on RDF and Semantic Web apps, with a real-world site to build too. Most of our current work is IIS/ASP/JScript, and this is excruciating to work with compared to some of the newer stuff we've done (Servlets). The reason is simple, but compelling; availability of existing high-quality open sourced code libraries. The smartest people out there are building what I need, they're letting me use it, and they're doing it for Java.

  • But how many people actually know if the size() method in the vector class in whatever library they use (e.g. Java, C++, etc) actually use lazy evaluation or overeager evaluation?
    Well, in C++ at least, the standard recommends (though does not strictly require) that accessors such as size() have constant-time execution. I'm not sure how many C++ programmers could have told you that, though.
  • Your code transformation bit is right on (ML makes the tradeoff of a more powerful syntax instead of a simple and easily-manipulated abstract syntax tree), but the types thing is definitely bullshit. Types are not just about speed, they are vital to modularity and debugging. ML-family languages have the ability to write generic code (with type inference) just as easily as lisp, and do indeed allow you to easily "delay deciding types" with structured data types. (On the contrary, lisp does not let you statically assure that something has a particular type in the same powerful way as ML-family languages do.)

    Lisp is a cool language; there's no war here (as there might be with the C++ folks ;)) but I think you need a little more practice with a powerful type systems before you can make statements such as the one you made.
  • Seriously - very interesting project

    Besides becoming fodder for flamewars, I can't see what good a novice writing "Hello World" and its equivalents in a dozen or so languages and claiming that it is a benchmark is supposed to achieve.

    --
  • The programs in question shouldn't be optimized, they should be representative of how it's commonly done. Big difference.
  • Because it doesn't have an interactive, iterative, intuitive programming environment that lets you control, inspect, and explore all aspects of the runtime system. (not just an "IDE" a la emacs, something more complete)

    Because it's not as elegantly simple and consistent as other languages (i.e. Smalltalk or Self).

    I know what it's like to be a language bigot (I'm one too), but no language is perfect.

  • But how many people actually know if the size() method in the vector class in whatever library they use (e.g. Java, C++, etc) actually use lazy evaluation or overeager evaluation?

    Well, speaking as a Java programmer, I check the Java library source code all the time for issues just like this... and Vector.size() simply returns a counter variable, so there's no calculation, just method call overhead to worry about. Actually I avoid using Vectors altogether as they are synchronized; ArrayLists are much faster for single threaded access. (And take up less memory to boot)

  • "But most "enterprise" applications are not CPU bound, anyway."
    Another poster already addressed the cost of (expensive) people vs. (cheap) hardware.

    But I'd point out that in an enterprise situation, one trip to the database can easily use up more time than is spent doing raw calculation to service a typical request. It is much better to optimize database design and queries, and to optimize the system to do fewer queries, than to squeeze out a few CPU cycles. When optimizing a system, go for the big lumps first.

  • But when it comes to distribution and concurrency Java seems so complex with its synchronization and shared data between threads.

    If you want to make a scalable program that distributes almost transparently, try Erlang instead. Erlang's message passing rules!

    Too bad that Ericsson did not release Erlang as Open Source earlier...

    Sounds neat...I'm gonna go take a quick look this morning.

    But I disagree that Java doesn't handle threading well, compared to C++.

    I may be off base, because I have not done a lot of multithreaded development in C++. I used to develop ActiveX controls, which followed "apartment model" threading. This meant that Windows guaranteed instances would only be accessed by one thread at a time. Only data static to the class needed synchronization. I only used "critical section" code a few times, but I thought I remembered it being part of the Windows API, rather than C++ itself. Some guru will correct me if I am wrong.

    It is true in Java that you must think about synchronization, if your object will be accessed by more than one thread. But the language provides for synchronized methods, and more generally for synchronizing on any desired object.

    Not transparent, but pretty convenient and flexible. If you design your interfaces well, you can make your system thread-safe object by object.

  • Seriously - very interesting project - he'll catch flak for being a newbie I'm sure - but a great endeavour all around. More power to him!
    He's not exactly a newbie I think, he just isn't a guru in every programming language ever implemented. I don't think any of us are.
    --
  • I dunno about that. With the speed it's loading my money's on his webserver going first. Sure hope he doesn't use that to code on because it looks like it's about to be a victim of the /. tactical strike.
  • bah, doesn't anyone use INTERCAL ? I guess that's more like drinking cyanide than shooting yourself, but still...
  • In my experience, most programmers generate awful, bug ridden code thanks to premature (and frequently incorrect) `optimization'.

    I'd agree with that. I've often seen programmers introduce early twists and turns into basic data flow in hopes of creating some performance improvement down the line. In real life, most performance problems are unanticipated and need to be determined by empirical measurement, not by convoluting the architecture.

    Of course, there are some obvious performance problems that can be found by early design review, before coding even starts, but usually those are in areas where the problem is well understood. For instance, in a process system, you know that context switch is going to be a performance bottleneck, so you don't introduce 2K data copies at each switch, and in a B-tree system, you know that you want the index blocks to carry as many items per block as possible, so you don't have a 32-byte key when a 4-byte key will do. (These are both real-world examples of performance problems in Apple software which could have been headed off by review.)

    Tim

  • I've been working pretty exclusively in Java for about 3 years now in corporate IT environment (with previous experience in Fortran, Cobol, Smalltalk, C/C++, Assembler, etc.). I've done fat client apps, browser-based apps, and comms middleware - so I think I have a pretty decent picture. To address your questions:
    1. Portability - means write-once-test-everywhere (as it should!). I've moved pure Java from NT to OS/2 to *nix to os/390 without rework. Results may vary of course.
    2. Public Code - tons of code available all over the damn place. Gamelan, IBM Alphaworks, Sourceforge, and a zillion other sites....
    3. Rapid Development - the double-edged sword. I agree that Perl's terse style allows for rapid development, but you have to balance off against maintainability (esp. in corp. environment). If you like text-editors and terse syntax, Perl is likely the winner. If you like (fairly) economical syntax/construction and the option of IDE development - go with Java.
    4. Speed - never a problem with Java in my experience. Standard hardware configs are way powerful these days. Just not something I think too hard about anymore (doesn't mean I write bloated code, though).

    Hope this helps. In the end, I choose Java because it is a very nice balance of strong OO syntax, good packaging options, and VERY strong industry support. However, it's not the best tool for everything (today). There are times when C/C++ (for example) makes more sense (i.e. when you have to do something very close to the hardware). However, I find that 90% of what I need I can do in Java.

  • Now hold on a second... i learned O'Caml in an intro course also, and yeah, it was pretty annoying at the time, but since then, i've looked into it independently, and it really does seem like quite a nice language. Incidentally, Doug Bagley's Shootout was one of the first pieces of evidence i found that O'Caml is actually a language you can get real work done in. Aside from that, i emailed one of my professors and got his opinions on the language. He said that unless he's writing driver code, or something similarly low-level, it's his language of choice, adding that it's a shame industry is so harsh to newer languages.

    Ok, about O'caml [O'caml = Objective Caml = a variant of ML with objects]. I had to use it for a PL class last semester. It's kind of nice, but don't expect it to be in any way like any language you've ever used (unless you know some other ML variant, of course).

    This is true -- it's unlike anything you've ever used before. But that's not a bad thing, necessarily! This is a different way of thinking about programming, and many people feel it's better than the traditional procedural models.

    In many ways it's very nice. In other ways it will drive you absolutely insane. For example, O'caml has very strong typing rules (which are good). It has almost no diagnostics when it decides it doesn't like your code (which is very very bad). It also apparently sometimes gets very confused. For example, sometimes you'll get something like this:

    Almost no diagnostics? Did you use the debugger? ocamldebug has many features of gdb, supposedly. Although i'll admit i have little experience with it, from what i've seen, it provides a comprehensive debugging environment.

    Granted, the interactive interpreter has few debugging features, but it's not really meant for debugging large chunks of code -- that's why they have a debugger :)

    foo.ml, line 100, chars 16-30: this expression has type int, but is used here with type int

    This can be a little maddening after a few hours.


    This happens when you write screwy code -- the types may have the same name, but they aren't the same underneath. That's another thing you have to look out for -- things don't change, they just get new environments. This is what functional programming is all about, and why it's much better for certain tasks (like things where you have to step back, or undo -- think web browser, text editor...). Incidentally, i doubt it happens with built-in types like int...
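    For what it's worth, a seemingly circular message like that can come from type shadowing with user-defined types; here's a minimal sketch (the type and names are made up for illustration):

```ocaml
(* Redefining a type in the same module creates a brand-new type that
   merely reuses the old name -- values of the first type are not
   values of the second. *)
type counter = Count of int

let old_value = Count 1            (* belongs to the first [counter] *)

type counter = Count of int        (* a distinct type shadowing the first *)

let bump (Count n) = Count (n + 1) (* operates on the *new* counter *)

let () = assert (bump (Count 1) = Count 2)
(* [bump old_value] would be rejected with something like
   "This expression has type counter but ... type counter" *)
```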

    Other annoyances: no function overloading. So to print a string, use print_string. To print an int, use print_int. And so on. This is just how it works (I think you can write a wrapper which will check the type and dispatch the right call, but that's fairly irritating in and of itself).

    No function overloading equates to good type-checking. If you allow functions or operators to be overloaded, you compromise the type inference system, which is one of the nicest features of O'Caml. (Besides, you can do things like print_string ("n = " ^ string_of_int n). You just have to be a bit more cognizant of what you're passing around...)
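    To make this concrete, here is a minimal sketch of the per-type conversion functions, plus the hand-rolled "dispatch" wrapper the earlier comment alludes to (the printable and to_string names are made up):

```ocaml
(* Each type has its own conversion function in the standard library. *)
let () = print_endline ("n = " ^ string_of_int 42)
let () = print_endline ("x = " ^ string_of_float 1.5)

(* A wrapper that "dispatches on type" must make the type explicit,
   via a variant: *)
type printable = I of int | F of float | S of string

let to_string = function
  | I n -> string_of_int n
  | F x -> string_of_float x
  | S s -> s

let () = print_endline (to_string (I 7))
```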

    mjd (of perl fame) actually gave a nice talk on why strong typing is so cool [plover.com] -- ML's type system was able to prevent an infinite loop bug! The anecdote starts at slide 27 [plover.com], though for full effect, i'd recommend you read the entire talk [plover.com]

    Another thing that irritates me (probably just because I'm from a C/C++/Perl background), is that sometimes you use no ';' to end a statement, sometimes one, sometimes a pair.

    You only use a single semi-colon when you want to throw away the return-value of an expression -- it's an imperative feature. That's what's so cool about O'Caml: it doesn't tie you down to any one paradigm, but rather it supports them all -- functional, imperative, object-oriented, ...

    And the double semi is for when the compiler can't tell you've started a new statement. It actually makes quite a bit of sense once you understand the reasoning behind it.
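    A minimal sketch of both rules (the greet function is made up):

```ocaml
(* ';' sequences two expressions, discarding the first one's unit value;
   the last expression is the function's result. *)
let greet name =
  print_string "hello, ";    (* unit result thrown away, keep going *)
  print_endline name

let () = greet "world"

(* ';;' only terminates a toplevel phrase where the parser can't tell
   one has ended -- e.g. before a bare expression in a .ml file: *)
;;
print_endline "done"
```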

    Also, I found the documentation to be very erratic (the modules docs are quite complete, but try finding simple examples of how to do OO, without reading the BNF grammar they put in the docs - that's no way to learn a language).

    Unfortunately, i'm inclined to agree with you here. Their documentation is very erudite, and difficult to comprehend. And regrettably, there are almost no good books on the language. (My professor pointed me to The Functional Approach to Programming, by Cousineau and Mauny, but it's more of a textbook, and doesn't cover the object-oriented features of the language.) Presumably this is due to the fact that big businesses don't recognize O'Caml by name (case in point: how many good C books are there? what about LISP?).

    Nice stuff: strong typing, cool type matching stuff, bytecode and native compilers, seems like a decent module system.

    Regarding the module system, i don't have enough experience to say anything intelligent about it myself, but all my professors are in love with it. They say it's more advanced than anything available, even in Java (the other language they teach us).

    But if you don't already know it, good luck. You'll need it.

    You make it sound so painful! :) On the contrary, i think any geek who gives it a whirl will find it refreshing and mind-expanding. <strike>Like LSD!</strike>

    majiCk
  • It's not a valid criticism, in that the test programs are all very simple to implement. Also, the fact that some unexpected languages made it to the top suggests a relatively low level of bias.
  • Well, there isn't static typing in Lisp. There is strong typing, but not static. Some implementations (CMUCL, for example) have some ability to make such static checks. But an implementation needn't, and in fact, may not be able to. For example, in your second example, what if we do (bar (read *standard-input*))? There's no way to know at compile time what type of data read is going to return, so reasonable static checking seems somewhat impossible here.

    Also, the second example doesn't have to error. If you look here [xanalys.com] for example, you will see that if we do something like (bar 0) the spec. declares that the consequences are undefined. If you then check the definitions here [xanalys.com], you'll see that this doesn't necessarily mean that an error is signaled. It may be, but it needn't be.

  • Yes, programming in OCaml can be frustrating. It has a lousy GUI except if you have the emacs binding (which is pretty cool). Debugging can be very frustrating since the error messages are vague. The documentation is erroneous. Learning OCaml can take a lot of time, especially if you have never touched the functional language realm. But...

    In OCaml you can do a lot of nice things: throwing around functions of functions, which can simplify a lot of things. This is a signature feature of functional languages. It supports pattern matching, a bit like Perl's (although not as sophisticated). You can build ASTs, trees, hashtables, and other ADTs effortlessly using the language's basic constructs (unlike C/C++ or Java, which support them in libraries). Thus, it is very well suited to theoretical and compiler research.
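    As an illustration of how little machinery an ADT takes, here is a minimal sketch of a binary search tree using only core language constructs (the names are made up):

```ocaml
type 'a tree =
  | Leaf
  | Node of 'a tree * 'a * 'a tree

(* Insert via pattern matching; the tree is persistent: each insert
   returns a new tree, sharing the untouched branches. *)
let rec insert x = function
  | Leaf -> Node (Leaf, x, Leaf)
  | Node (l, y, r) when x < y -> Node (insert x l, y, r)
  | Node (l, y, r) when x > y -> Node (l, y, insert x r)
  | t -> t                        (* already present *)

(* In-order traversal flattens the tree into a sorted list. *)
let rec to_list = function
  | Leaf -> []
  | Node (l, x, r) -> to_list l @ (x :: to_list r)

let t = List.fold_left (fun acc x -> insert x acc) Leaf [3; 1; 2]
let () = assert (to_list t = [1; 2; 3])
```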

    I think you C/C++ gurus should try reverse engineering the binary output of OCaml and look at the code-generation phase to see why it is so fast. Try here [inria.fr] for further reference on OCaml.

  • After writing a few 10klocs of O'CAML, I have rarely come across the problems you mention, and they were never really serious. Overloading and dynamic typing are supported by the new G'CAML features, which I would expect to become standard. I do think the O'CAML syntax could stand some improvement to eliminate common gotchas, in particular in imperative code.
  • Don't feel bad about feeling bad about STL. STL's design seems to have suffered from the "second system effect": it's far too general. And in the process, performance and error checking were left behind.
  • Unfortunately, your objections are pretty typical. Most CL users seem to confuse a great experience with a particular implementation with what the language standard actually guarantees. I think the CL standard is probably what ultimately doomed Lisp, failing at ensuring compatibility, making the development of new implementations unnecessarily burdensome, and succeeding at stifling language innovation and evolution.

    Some specific points:

    • Threads only exist in some experimental versions of CMU CL.
    • ANSI CL is full of unspecified behaviors. Check "defconstant" and "/" for some particularly egregious examples.
    • I claimed that ANSI CL lacked efficient binary I/O, meaning, bulk operations like READ-ARRAY and WRITE-ARRAY. It does.
    • and on and on...

    I always thought, and I still think, that Lisp is a great idea, and specific implementations have been wonderfully productive for me. I do hope that someone will start a project like O'CAML for a full-featured modern Lisp system: something with the clean design of Scheme, but with some more practical design choices (e.g., no call/cc, more built-in types, etc.).

  • I'm a lot more interested in "this is how fast straightforward, readable, portable code runs" as opposed to "this is how fast obscure code runs on one particular processor configuration after three weeks of manual tuning by several CS Ph.D. candidates".
  • One distinction needs to be made clear: language definition vs. language implementation. Threads, sockets, binary I/O, etc. do exist in all major implementations of the CL language, free (CMUCL [cons.org], CLisp [slashdot.org]) and commercial (Franz's Allegro [franz.com], Xanalys' Lispworks [xanalys.com], Digitool's MCL [digitool.com]) alike. It is true that the ANSI standard for CL doesn't include threads or sockets, but it does have binary I/O and well-defined runtime errors, and bit-vector is neither inefficient nor a hack (see for yourself: the standard is online [cmu.edu]). We'd appreciate it if you'd check your facts before posting; spreading misinformation is immoral. However, if you insist on using language features only if they're in the standard, then you have even more problems with OCaml: it doesn't even have a standard (ANSI, IEEE, ISO, whatever) yet!

    As for type declarations, they're there both for speed and correctness. The optional type declarations in CMUCL can serve both purposes. There is a tradeoff here: on the one hand, the need to keep track of the type information is a burden on the programmer when it is inessential to the logic of the program; OTOH, compiler needs this information to produce compact code and sometimes, catch errors. I like the CL's approach the best: it provides you with the option ONLY when you need it. Also, when programming in CL, I find type errors are rare; most bugs are logical.

    Python is a very nice language, especially for beginners. But I doubt anyone who knows CL well will prefer Python over CL. The expressiveness, the flexibility, the speed, the maturity of compiler and runtime system technologies; those are enough reasons for me to stick with CL.

    As an aside: I like Python, and I've been watching Python's development for a while; however, I've yet to see a PEP (Python Enhancement Proposal) that cannot be implemented *within* the language in CL in a page or two using macros, the MOP, etc. Therefore I conclude that it is much less effort to bring libraries to CL than it is to bring Python up to the level of CL in terms of language maturity.

    In short, I think it is much easier to find happiness in CL! I believe most people will feel the same, too, if they're persistent enough to master the few beautiful concepts underlying the design of CL.

  • Wonder where CGI scripting fits in there. My, that page is slow to play with.

    Hrm. Never even heard of Ocaml. Have to look that one up. If you give Lines of Code a '1' multiplier in addition to CPU Speed, it comes out on top in both native and non-native implementations. Java also ranks surprisingly high. Eiffel ranks pretty low. Oh well. Can't have everything.

    Too many implementations of Scheme, though. How many do you need? =P
  • It introduces programmers to languages they've never heard of.

    It gives several examples of working code in each language.

    It demonstrates an innovative way to deal with being slashdotted :)

  • by EngrBohn ( 5364 ) on Wednesday July 04, 2001 @03:01AM (#110705)

    I am not dismissing the value of this work, but so long as developer & maintainer time is more valuable than processor time, when you choose a language, you should consider how well each language will make the developers' and (especially) the maintainers' jobs easier.

    An example of how this can be considered is found in the appendices of "A Gentle Introduction to Software Engineering" [af.mil] (2,565 KB; MS Word format) by the Air Force's Software Technology Support Center (STSC). The appendices come from a document written in March 1996, so it's a little dated (predates the C++ standard, does not address Java, Python, or Perl), but the idea is there. (I like that it keeps repeating "Bad programmers can write bad code in any language, but a good language facilitates the production of good code by good programmers.".)

    Including the performance achievable with each language is already in the formulation offered by STSC. The obvious caveat is that you look at the performance of benchmarks related to your application.


    cb
  • by John Whitley ( 6067 ) on Tuesday July 03, 2001 @05:06PM (#110706) Homepage
    Actually, generics are Under Development. See Jun Furuse's page below for details. There will be some time to hash around with the theory and implementation (as with the merger of O'Caml and Jacques Garrigue's O'Labl work in O'Caml 3.x), and barring major roadblocks, I would expect to see G'Caml functionality in a future major release of O'Caml.

    http://pauillac.inria.fr/~furuse/generics/ [inria.fr]

  • by Black Parrot ( 19622 ) on Tuesday July 03, 2001 @10:22PM (#110707)
    For real-world use, would you choose a language that compiled out to run 5% faster if it was also 20% harder to maintain?

    IT has a misplaced fixation on speed.

    --
  • by brianvan ( 42539 ) on Tuesday July 03, 2001 @10:33AM (#110708)
    He forgot native Assembly for whatever platform he's working on.

    And he forgot to test each implementation in single processor, SMP, or Beowulf cluster (Imagine one!) on his platform.

    For that matter, he forgot to test it on many different platforms.

    And with different hard drive and memory types too.

    And at different elevations... computing runs faster in the Colorado air...

    And he forgot to test it with or without water cooling or a Peltier cooler... or both...

    Finally, he forgot what the sun looks like.
  • by Pseudonym ( 62607 ) on Tuesday July 03, 2001 @08:34PM (#110709)

    A much fairer contest is the pseudoknot benchmark [soton.ac.uk]. The idea is to take one real-world task (not a partial task like matrix multiplication), and get experienced programmers to write a program to solve the problem in whatever is the most natural way for some language. The results are then benchmarked on equivalent hardware.

    Of course it's not representative of all programs. Pseudoknot is a floating point-intensive search problem, which is not the sort of thing that I do most of the time. Still, it's a better example than a self-confessed beginner trying out toy problems.

  • by Pseudonym ( 62607 ) on Tuesday July 03, 2001 @09:05PM (#110710)
    I see no mention of how long it took him to code any of these...

    Or how difficult it would be to fix a bug or add a new feature. Or how robustly it would perform in the presence of other kinds of failure (e.g. unexpected input, hardware failure etc). Or how easy or difficult it would be for a larger group of people to work on the same program. Or how easy it would be to adapt his programs to work with other pre-existing programs.

  • by crucini ( 98210 ) on Tuesday July 03, 2001 @04:14PM (#110711)
    If you don't think "Hello World" is worthwhile, enter a weight of 0 for it and press Recalculate Scores. That's why Doug provided the flexible interface. Of course the server is probably too slow for you to access right now.
  • by egomaniac ( 105476 ) on Tuesday July 03, 2001 @04:42PM (#110712) Homepage
    I'd strongly disagree with that. Nobody, nowhere, writes optimal code. What good does it do you if you can prove that language FooBar has the best optimal implementation of problem X, but nobody in the real world actually writes it that way? (Hint: none at all)

    I think the important metric is not that the programs are optimal, but that they are representative of what a programmer of average skill in a particular language would produce. After all, one of the benefits of a good language is how easy it is to use -- and we presume that this ease of use will be reflected by better-written benchmarks. It's a lot easier, for instance, to write good code in Java or Smalltalk than in assembly, so why shouldn't those languages be able to show some benefit from that in a test like this?

    Of course, ease of use is impossible to quantify in a test like this, but I'd argue that shooting for optimal (i.e. written by somebody far more skilled than an average programmer) will seriously distort one's expectations of real-world usability of these languages.
  • Here's a good example of this:

    for (int i = 0; i < vector.size(); i++)
        do_something(vector.element_at(i));

    This code potentially has O(n^2) complexity if the size of the vector is calculated dynamically each time size() is called, versus

    int size = vector.size();

    for (int i = 0; i < size; i++)
        do_something(vector.element_at(i));

    which should have O(n) complexity with regard to traversing the vector. The interesting thing is that the first version could also be O(n), depending on how the size() method is implemented in the vector class. But how many people actually know whether the size() method in the vector class of whatever library they use (e.g. Java, C++, etc.) actually uses lazy or eager evaluation?

    NOTE: Regardless of which language used. If a benchmark is run using vectors which dynamically calculates their size and contain a large amount of elements then the code in the first part will ALWAYS run slower than the code in the second part even if the comparison is Java and C/C++.


    --
  • by Carnage4Life ( 106069 ) on Tuesday July 03, 2001 @07:51PM (#110714) Homepage Journal
    size() doesn't use "lazy" or "overeager" evaluation; it's an accessor method.

    Evaluation is deemed overeager or eager if the value is computed as you go along instead of just when it is needed. The fact that the C++ vector implementation increments and decrements a counter on pushes and pops, instead of counting when size() is called, is eager evaluation.
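    That counter-maintenance idea, sketched in OCaml (a hypothetical minimal stack; the names are made up):

```ocaml
(* Track the size eagerly: update a counter on every push/pop so the
   size call is a constant-time accessor, never a recount. *)
type 'a stack = { mutable items : 'a list; mutable size : int }

let create () = { items = []; size = 0 }

let push s x =
  s.items <- x :: s.items;
  s.size <- s.size + 1

let pop s = match s.items with
  | [] -> None
  | x :: rest ->
      s.items <- rest;
      s.size <- s.size - 1;
      Some x

let size s = s.size   (* O(1): no traversal of [items] *)
```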

    --
  • From his site: Disclaimer No. 1: I'm just a beginner in many of these languages, so if you can help me improve any of the solutions, please drop me an email. Thanks.

    Me thinks that his email server will be slashdotted before his website is, with every geek in the world telling him how to improve the programs he wrote.

    Which is a good thing - I'd love to see before and after results, with after being after all the /.'er changes are added - course that assumes /.ers would be able to agree on the best route of action. ROFLMAO!

    Seriously - very interesting project - he'll catch flak for being a newbie I'm sure - but a great endeavour all around. More power to him!

  • by janpod66 ( 323734 ) on Tuesday July 03, 2001 @05:52PM (#110716)
    I agree that CL's syntax is nice, as is its interactive system. But the CL language definition has serious limitations, foremost lack of a number of important facilities like threads, sockets, efficient binary I/O, well-defined runtime errors, and full reflection. CMU CL also has a number of problems, including lack of threads (except experimental in one version) and lack of packed binary structures (there is only an inefficient hack based on typed arrays).

    Furthermore, type declarations are not there for speed, they are there for correctness. ML programs (O'CAML or SML) usually run correctly once they compile; with CL, you spend a lot of time unit-testing for silly type errors.

    I think an updated version of CL would be great, something that is based on UNICODE, throws out the old pathname and character set cruft, gets rid of some other obsolete features, defines error handling and reflection carefully, and adds threads, sockets, and binary I/O. But I don't see it happening. Most of the people who want CL-like interactivity are now using Python or Java+Beanshell. The syntax isn't as nice, but they are so much more practical.

  • by leifb ( 451760 ) on Tuesday July 03, 2001 @04:14PM (#110717)
    I see no mention of how long it took him to code any of these...
  • I can see many of my programs being written in OCaml from now on.

    Ok, about O'caml [O'caml = Objective Caml = a variant of ML with objects]. I had to use it for a PL class last semester. It's kind of nice, but don't expect it to be in any way like any language you've ever used (unless you know some other ML variant, of course).

    In many ways it's very nice. In other ways it will drive you absolutely insane. For example, O'caml has very strong typing rules (which are good). It has almost no diagnostics when it decides it doesn't like your code (which is very very bad). It also apparently sometimes gets very confused. For example, sometimes you'll get something like this:

    foo.ml, line 100, chars 16-30: this expression has type int, but is used here with type int

    This can be a little maddening after a few hours.

    Other annoyances: no function overloading. So to print a string, use print_string. To print an int, use print_int. And so on. This is just how it works (I think you can write a wrapper which will check the type and dispatch the right call, but that's fairly irritating in and of itself).

    Another thing that irritates me (probably just because I'm from a C/C++/Perl background), is that sometimes you use no ';' to end a statement, sometimes one, sometimes a pair.

    Also, I found the documentation to be very erratic (the modules docs are quite complete, but try finding simple examples of how to do OO, without reading the BNF grammar they put in the docs - that's no way to learn a language).

    Nice stuff: strong typing, cool type matching stuff, bytecode and native compilers, seems like a decent module system.

    But if you don't already know it, good luck. You'll need it.

  • by kreyg ( 103130 ) <kreyg@shawREDHAT.ca minus distro> on Tuesday July 03, 2001 @04:10PM (#110719) Homepage
    Apparently the server was running Commodore 64 BASIC.... on native hardware.
  • by donglekey ( 124433 ) on Tuesday July 03, 2001 @04:13PM (#110720) Homepage
    I don't know about that. In C you would have to include headers and define the main function. In perl easy things are easy so it would just be
    shoot($foot);

    Thanks perl!
  • by clary ( 141424 ) on Tuesday July 03, 2001 @04:59PM (#110721)
    Ok...gonna lose a couple of karma points again, but...
    Does this go to reinforce my position that in this day and age Java is a logical choice for pretty well everything bespoke and not performance critical?
    I have been living in the Java/EJB enterprise application world for a couple of years now. Purely from a development point of view, Java has some very nice features.
    • The combination of jar-file packaging, pure interfaces, and dynamic class-loading by name makes it easy to use a "pluggable component" approach. In my opinion, this goes beyond traditional OO development in making large-scale development manageable.
    • Automatic memory management and no pointers, love it or hate it, saves boatloads of time in chasing down those old runaway pointer bugs. Of course, you can still leak memory, but you have to work a little harder to do it.
    • The built-in Java libraries are as good as or better than the C runtime, if you ask me.
    • It looks damned nice on the resume these days.
    Of course, if you need to write anything that is CPU-bound and has to be balls-to-the-wall fast, you are still better off using C++, or probably C or assembler. But most "enterprise" applications are not CPU bound, anyway.
  • by srichman ( 231122 ) on Tuesday July 03, 2001 @05:20PM (#110722)
    ...unless he can prove that he wrote the optimal code possible for each language

    Uh, how is this possible? Even if the semantics of all these languages were formally specified (which, of course, they're not) and even if the semantics included execution time information (which I've never ever seen before), the task of proving optimality seems impossible. No, I take that back, it is impossible.

  • by metalogic ( 445469 ) on Tuesday July 03, 2001 @04:39PM (#110723)
    I am a Common Lisp (CL) programmer; OCaml doesn't make me happy:

    Because code transformation in OCaml is not pretty. Owing to CL's clean and uniform syntax, macros in CL are simple and elegant; this metaprogramming capability gives it unlimited expressiveness and adaptability.

    Because OCaml doesn't provide the option to delay deciding types. In CL, I write generic code to speed up development; I then (optionally) declare types in bottleneck code segments to speed up program execution -- it makes perfect sense (remember the 90/10 rule).

    Because OCaml doesn't have a Meta Object Protocol.

    CMUCL is a highly optimized, Free compiler and execution system for Common Lisp (http://cmucl.cons.org/cmucl). Please come and help make it better (http://www.telent.net/cliki)!

  • by Bryan Andersen ( 16514 ) on Tuesday July 03, 2001 @10:28AM (#110724) Homepage
    Personally I like C because I can shoot myself in the foot faster and with less effort with it.
  • by braque ( 16684 ) on Tuesday July 03, 2001 @04:15PM (#110725)
    ... is my favorite language: Brainfuck [koeln.ccc.de].
  • by LKBONG ( 19998 ) on Tuesday July 03, 2001 @10:44AM (#110726)
    Here [google.com]
  • by RobertFisher ( 21116 ) on Tuesday July 03, 2001 @04:33PM (#110727) Journal
    As pointed out by a previous poster, this author is a beginner in most of these languages. I had an interesting experience in a graduate level CS class here in Berkeley on optimizing a matrix-matrix multiply routine [berkeley.edu]. (Interestingly enough, the class was on parallel computing -- the point of the exercise was to learn just HOW INCREDIBLY important the serial part of one's algorithm is, even when one has oodles of processors available).

    The results [berkeley.edu] are interesting for a number of reasons.

    (1) The "naive" algorithm, even with optimization, performed at about 50 MFlops (the Suns we used had a theoretical peak of 333 MFlops).

    (2) With excellent optimization, pulling out all the stops, and using all the tricks available (unrolling loops, deallocating pointers to local variables, etc.), teams of just two students working for a week could get EXCELLENT performances (ie, within 10-20% of theoretical peak), approaching those of Sun's built-in library, and exceeding those of some existing libraries (like PHiPAC).

    (3) Different groups with different approaches got very widely disparate results -- some barely exceeding those of the naive algorithm.

    In sum, how one goes about coding an algorithm can make ENORMOUS differences in the performance of a code. This is particularly true with numerical algorithms in C and other languages with pointers, where some compilers have great trouble optimizing routines using pointers, since the values are not known at compilation time. Taking this into account, I wouldn't give this guy's results much credence at all.

    However, with the help of a lot of experts in the various languages, it will be possible to get a much better appraisal of the relative performance of different languages. A close analogy exists between these benchmarks and the SPEC [slashdot.org] open standards evaluations for CPUs. The only fair way they found to compare across all CPUs and compilers was to allow both a very strict non-optimal compilation and a no-holds-barred compilation. The same is true here -- we need teams to go no-holds-barred in the creation of the best possible code for each language.

    Bob
  • They responded by setting their DNS to point back to Slashdot, except for bagley.org, which they forgot to change and which still points to the site. Interesting way of dealing with the Slashdot effect -- maybe they're trying to save bandwidth costs, or to punish Slashdot.
  • by fanatic ( 86657 ) on Tuesday July 03, 2001 @04:14PM (#110729)
    Hrm. Never even heard of Ocaml.

    It's the Irish version of Perl. (ouch.)

    --
  • by clary ( 141424 ) on Tuesday July 03, 2001 @04:39PM (#110730)
    ...it sounds like the PL/I of the new millennium! (smirk)
  • by kalifa ( 143176 ) on Tuesday July 03, 2001 @10:32AM (#110731)
    Because it has all the great functional features that can make Lisp programmers happy.

    Because it has a wonderful OO model which can make all OO programmers happy.

    Because it has super fast compilers that can make C and C++ programmers happy.

    Because it is great for imperative programming and for functional programming.

    Because it is great for procedural programming and for OO programming.

    Because it is as multiplatform and portable as Java.

    Because it is designed to please everyone without compromising on anything, and, put more simply, because it can reconcile the C, Java, Lisp and C++ communities.

    Because it can even be used equally well as a scripting language or a systems language.

    Because it is great for teaching AND for the real world.

    Because its compilers are libre software and its design and development are carried out in a very open fashion.
  • I took the liberty of benchmarking 25 spoken languages by comparing their methods for expressing the phrase "Wuzzzzzzaaaaaah" from the Budweiser commercial. Some of these may not be optimized, as I am a bit of a newbie at some of these languages. Here is an excerpt:

    American English: "Wuzzzzzzzzzzzzzahhh!"
    British English: "What is up?"
    Japanese: "Konichiwaaaaaaaaaaaaa!"
    Spanish: "Holaaaaaaaaaaaaa!"
    Welsh: "Wyffffffwyfffffffffffff!"
    The binary language of water vaporators:
    "0100000000000000000000000000000000"
    Australian English: "I'm not saying any such thing. Give me a Foster's."
    Irish English: "I'm not saying any such thing. Give me another pint of Guinness."
    Javanese: "I'm not saying any such thing. Give me another pint of Guinness. No, I'm not the Irish guy in Javanese traditional dress, I'm from Java!"
  • by tim_maroney ( 239442 ) on Tuesday July 03, 2001 @04:32PM (#110733) Homepage
    I found this comment at one of the articles in his bibliography [uni-karlsruhe.de] very interesting:
    the performance variability due to different programmers... is on average as large or even larger than the variability due to different languages

    In other words, if engineers spent more time studying best practices and efficient algorithm design, that might well improve performance more than spending time in religious wars about the merits of pet languages.

    In my experience, basic performance tuning can easily create an order of magnitude improvement regardless of the language. And in my experience, most engineers don't know how to do basic performance tuning. That's got to be more important than language selection in the average case.
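    A minimal illustration of the kind of language-independent tuning the comment above is talking about (the helper functions are hypothetical, invented for this sketch): hoisting a loop-invariant computation out of a loop condition turns an accidentally quadratic routine back into a linear one, with no change to the language or the algorithm's intent.

    ```c
    #include <assert.h>
    #include <string.h>

    /* Calls strlen() in the loop condition on every iteration, so the
       whole string is re-scanned each time: O(n^2) overall. */
    static size_t count_vowels_slow(const char *s) {
        size_t n = 0;
        for (size_t i = 0; i < strlen(s); i++)
            if (strchr("aeiouAEIOU", s[i]))
                n++;
        return n;
    }

    /* Same logic, but the length is computed once before the loop:
       O(n) overall. */
    static size_t count_vowels_fast(const char *s) {
        size_t n = 0;
        size_t len = strlen(s);
        for (size_t i = 0; i < len; i++)
            if (strchr("aeiouAEIOU", s[i]))
                n++;
        return n;
    }
    ```

    On long inputs the difference between these two can easily exceed an order of magnitude, which is exactly the sort of gain basic tuning delivers regardless of language choice.
    
    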

    Tim
