Rosetta Code Study Weighs In On the Programming Language Debate

An anonymous reader writes: Rosetta Code is a popular resource for programming language enthusiasts to learn from each other, thanks to its vast collection of idiomatic solutions to clearly defined tasks in many different programming languages. The Rosetta Code wiki is now linking to a new study that compares programming language features based on the programs available in Rosetta Code. The study targets the languages C, C#, F#, Go, Haskell, Java, Python, and Ruby on features such as succinctness and performance. It reveals, among other things, that: "functional and scripting languages are more concise than procedural and object-oriented languages; C is hard to beat when it comes to raw speed on large inputs, but performance differences over inputs of moderate size are less pronounced; compiled strongly-typed languages, where more defects can be caught at compile time, are less prone to runtime failures than interpreted or weakly-typed languages."
  • by Anonymous Coward

    If you are characterizing performance on "large inputs" without quantifying machine behaviors (cache, TLB, RAM), you're doing it wrong.

    • Re:Machine specific (Score:4, Interesting)

      by i kan reed ( 749298 ) on Wednesday September 24, 2014 @10:08AM (#47983755) Homepage Journal

      Pragmatically, almost no one actually codes software with that aspect of the target platform in mind. Unless you're writing drivers, OSes, or something else that might need to know EXACTLY how many cycles an op is going to take, details like your cache behavior are never going to be part of what you build your code around.

      And RAM sizes are large enough that a "large" input is easily contained entirely within even smallish RAM.

      As long as they used a consistent testbed between languages, it's an excellent heuristic for language effects on performance in the real world.

      • And RAM sizes are large enough that a "large" input is easily contained entirely within even smallish RAM.

        /old-man mode: ...and this is EXACTLY what's wrong with programmers today! No eye for efficiency! :old-man mode/

        Cue the endless round of arguments about how so-and-so wrote an app that only used one bit of RAM to execute four jobs, yeah - but in all seriousness, the attitude of 'RAM is cheap' has done more to create bloat than any other attitude.

      • And a cache miss costs ~200 clock cycles.

        Size isn't the only thing; locality of reference matters just as much or more, and languages like C/C++ let a programmer think about and control such things. With everything powered by batteries now, efficiency isn't something we always want to solve by throwing more hardware at it, although such a phone might be more resistant to bending.
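
        For illustration, here's a Python sketch of the two access patterns over a flat list standing in for a C array (in CPython the interpreter overhead dwarfs the actual cache effects, so take it as a picture of the pattern, not a benchmark):

        N = 1000
        # One flat buffer emulating an N x N matrix in row-major order,
        # the way C would lay it out in memory.
        matrix = [0] * (N * N)

        def sum_row_major():
            # Consecutive addresses: the cache-friendly traversal.
            total = 0
            for i in range(N):
                base = i * N
                for j in range(N):
                    total += matrix[base + j]
            return total

        def sum_column_major():
            # A stride of N elements per step: cache-hostile in C.
            total = 0
            for j in range(N):
                for i in range(N):
                    total += matrix[i * N + j]
            return total
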
      • Try looking at some of the workloads that Hadoop et al. deal with; individual files can be in excess of the largest available disk size.
  • So it's telling us just what we already knew? Interesting.
  • by Squidlips ( 1206004 ) on Wednesday September 24, 2014 @09:28AM (#47983389)
    especially if it makes the code unreadable. Give me the verbose, easy-to-read code any time. If you really, really want succinctness, use Perl or, even better, APL, and don't worry about the next poor slob who has to maintain your code.
    • Re: (Score:3, Informative)

      by beelsebob ( 529313 )

      The problem is that OOP languages rarely have more readable code. Instead, they typically just have code with more boilerplate.

      • Depends on the OO language, I would say:
        o Smalltalk
        o Groovy
        o Simula
        all easy to read and with no boilerplate code.

        • I think it depends more on the coder. Spaghetti code can be written in any language. Good, readable code can be written in almost any language (not sure about Brainfuck). Some languages go farther than others to force the user to write good code, but it's still a choice.

          • I guess the parent was more concerned about languages like Java/C#, where you have to "repeat" public/private all over the place at every definition and have to "manually" write getters and setters (if you believe in them).

            But yes: obviously you can write Spaghetti code in every language.
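
            A Python sketch of the contrast (Point, CheckedPoint and x are made-up names): the property mechanism keeps the plain-attribute syntax, so no getter/setter ceremony is needed until you actually want it.

            class Point(object):
                # Plain attribute: no getter/setter ceremony at all.
                def __init__(self, x):
                    self.x = x

            class CheckedPoint(object):
                # When validation becomes necessary, a property slips in
                # behind the same dotted syntax; callers never have to
                # switch to getX()/setX().
                def __init__(self, x):
                    self.x = x  # goes through the setter below

                @property
                def x(self):
                    return self._x

                @x.setter
                def x(self, value):
                    self._x = float(value)

            p = CheckedPoint("3.5")
            print(p.x)  # 3.5, stored as a float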

            • My point was more than just that you can write spaghetti code in every language. It was also that you can write good code in any language. (Except for obvious exceptions that are designed to be difficult... I didn't want a 'what about Brainfuck' reply.)

              I think sometimes new languages are presented as a 'cure' for bad programming. New language constructs and rules are designed to try to force a bad coder to be a good one. As a result we get an endless churning of languages, obsoleting old code and making maintenance harder.

              • I did not raise the topic of 'Brainfuck'; if it was not you, it was one of our parents.

                Regarding 'bad programming', there are many ways to program badly.

                I don't really think new languages address that; what do you have in mind there? Imho Objective-C is an ugly language, and Swift, as its successor, is much less ugly.

                Java and C# are easy languages ... but their strength is the platform/VM, not the language itself. Hence people 'invented' Scala or Groovy etc.

                Bad programmers still find a way to write bad code. Even 'good' languages can't prevent that.

    • by Theovon ( 109752 )

      Verilog vs. VHDL. I find that the verboseness of VHDL (which requires like 3 times as much typing) actually impedes readability. Sure, there are situations where VHDL can catch a bug at synthesis time that Verilog can't, but the rest of the time, it just makes VHDL unwieldy.

      • VHDL has a lot of (useless) metaprogramming. A lot of the standard operators and values are actually defined in header files and stuff. Even things like high and low, if I recall correctly. It's like the designers wanted a language that could describe anything, given the right boilerplates and templates, but didn't care to implement the actual things you need for FPGA programming as built-ins.

    • Comment removed based on user account deletion
      • by steveha ( 103154 )

        I was surprised at how many instructions that developers previously spread out over multiple lines are now packed into highly idiomatic one-liners.

        As with many things, Python one-liners can be good or bad. When done correctly they are awesome.

        Consider this code:

        bad = any(is_bad_word(word) for word in words_in(message))

        If words_in() is a generator that yields up one word at a time from the message, and is_bad_word() is a function that detects profanity or other banned words, then this one-liner checks to see whether the message contains any banned word, short-circuiting at the first match.
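
        A self-contained version, with hypothetical stand-ins for the two helpers:

        import re

        BANNED = {"darn", "heck"}  # hypothetical banned-word list

        def words_in(message):
            # Generator: lazily yields one lowercased word at a time.
            for match in re.finditer(r"\w+", message):
                yield match.group(0).lower()

        def is_bad_word(word):
            return word in BANNED

        message = "Well heck, that escalated quickly"
        bad = any(is_bad_word(word) for word in words_in(message))
        print(bad)  # True; any() stops at the first banned word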

    • by Anonymous Coward

      I think, perhaps, there might be a difference between syntactic succinctness and semantic succinctness.

      Perl and APL let you express a lot of operations in a few characters. Syntactically succinct, hard to read.

      A lot of functional languages give you (or give you the ability to write) more powerful operators, so that fewer operations are required to accomplish a task. Semantic succinctness: easy to read, if you know what the operators do (and they aren't buggy or inconsistent).

      I'm just pulling this out of my ass, though, so take it with a grain of salt.

      • Perl [lets] you express a lot of operations in a few characters. Syntactically succinct, hard to read.

        That really depends on your experience level, like in anything. Reading a wiring diagram is arcane to the uninitiated, but once you know what symbols represent what types of circuit pieces (including resistors, capacitors, diodes, FETs, etc.), it is both syntactically succinct and easy to read: you can tell what goes where at a glance, without reading and parsing a lot of text.

        Same thing in Perl. Once you actually learn it, it becomes easier and faster to read than, say, Java, because there is less text to wade through.

    • by Flammon ( 4726 )
      Succinct does not equate to unreadable.
    • Poorly done, cryptic succinctness can indeed make code impenetrable. Yet overdone verbosity can destroy readability just as thoroughly. When the language is naturally succinct, it's easy to ensure that it contains enough context to be readable. When the language is overly verbose, you generally can't slim it back down to readable conciseness.

    • by sberge ( 2725113 )
      Anyone who writes code should care about succinctness. For every line of code you write there is a probability that you make a mistake. The bigger your program, the more room for mistakes.
      • I used to think this as well but it is not backed up by the data in the study linked to above (read the actual pdf paper). They compare the number of errors per language and the less succinct languages (C, C#, Java etc) actually have fewer errors than the succinct ones like Python and Ruby. In fact Python, which was the most succinct language, had the lowest percentage (79%) of programs that ran without timeout or errors.
        • by mrego ( 912393 )
          What really matters is productivity. If it took 100 hours to get a program compiled and working perfectly in one language, versus only 5 hours in another, that would be very significant. Ease of coding, number of errors, and maintainability (i.e. understandability from the readability of the code) are secondary. Readability obviously is enhanced by the 'meta-code', in other words COMMENTS, which can be added in nearly any language. So what language are coders most productive in? (Naturally that depends on the coder and the task.)
      • by LQ ( 188043 )
        We don't usually supply a proof of correctness with every program. The program itself must stand as an argument for its own correctness. Sometimes breaking it down into more verbose steps is a way to clarify to future maintainers what your intention was. Also densely packed code is much harder to modify later.
    • by ljw1004 ( 764174 )

      You're conflating "succinct" with "terse"...

      F#, Swift, and other functionally inspired languages let you omit the wordy boilerplate that gets in the way of readability.

      For instance, algebraic data types (a.k.a. discriminated unions, or enums in Swift) are less wordy than declaring an inheritance tree like you would in Java/C#, and pattern matching is a shorter, more readable way to deconstruct the data than virtual methods.
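
      Python has neither algebraic data types nor pattern matching, but a rough sketch of the contrast might look like this (Shape, Circle and Rect are made-up names):

      # Inheritance-tree style: one class per case, one virtual method each.
      class Shape(object):
          def area(self):
              raise NotImplementedError

      class Circle(Shape):
          def __init__(self, r):
              self.r = r
          def area(self):
              return 3.14159 * self.r ** 2

      class Rect(Shape):
          def __init__(self, w, h):
              self.w, self.h = w, h
          def area(self):
              return self.w * self.h

      # Tagged-union style: the data is a tagged tuple, and one function
      # "pattern matches" on the tag the way a match expression would.
      def area(shape):
          if shape[0] == 'circle':
              _, r = shape
              return 3.14159 * r ** 2
          elif shape[0] == 'rect':
              _, w, h = shape
              return w * h

      print(area(('circle', 2.0)))  # 12.56636
      print(Rect(3, 4).area())      # 12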

    • by T.E.D. ( 34228 )

      especially if it makes the code unreadable. Give me the verbose, easy to read code any time.

      Interesting. When I first hit the Internet in the '80s back in the Usenet era, there were lots of folks using this exact argument to promote platform-native assemblers over high-level programming languages. I didn't think any of you guys were still around...

    • Give me the verbose, easy to read code any time.

      There are different ways to interpret simplicity and "easy to read", especially local vs. global.

      I've come across this during code reviews, where a few lines of "complex" code (e.g. a fold) can replace multiple screens of "simple" code ("doThingA(); doThingB(); doThingC(); .."). Each "simple" line is straightforward and clear, but it's difficult to see the big picture of *what we're trying to achieve*. In contrast, it's obvious what the "complex" solution is trying to do, although it may take a few minutes to see how it does it.
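
      A contrived Python sketch of the trade-off (orders and its (price, quantity) pairs are hypothetical):

      from functools import reduce  # built in on Python 2, imported on 3

      orders = [(9.99, 2), (4.50, 1)]

      # "Simple" style: each step is obvious, the overall intent is implicit.
      total = 0
      for price, quantity in orders:
          total = total + price * quantity

      # "Complex" style: one fold that states the whole computation at once.
      total = reduce(lambda acc, o: acc + o[0] * o[1], orders, 0)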

    • by naasking ( 94116 )

      especially if it makes the code unreadable. Give me the verbose, easy to read code any time

      So I can surmise that you program in Ada?

  • strongly-typed languages, where more defects can be caught at compile time, are less prone to runtime failures than interpreted or weakly-typed languages

    Isn't that kind of the point?

    Is this supposed to be something we didn't know? Or just confirming something we did?

    • by Anonymous Coward

      if you've ever talked with a python dev^H^H^Hzealot, you would think it doesn't matter and you must be a bad developer to rely on strong types.

      • by Anonymous Coward

        if you've ever talked with a python dev^H^H^Hzealot, you would think it doesn't matter and you must be a bad developer to rely on strong types.

        Funny, the first python zealot I ever met was a bad developer who didn't do ANY error checking.

        So, are python developers all overconfident morons?

        Then again, what do you expect from a language which makes whitespace actually part of the syntax.

        I've never been able to look at python as anything other than a lazy man's language since.

      • Python is strongly-typed. It is not statically-typed, however.

    • by Anonymous Coward

      Neither, because it is incorrect. Python, for example, is a strongly-typed language that will not catch the overwhelming majority of type errors at compile-time.

      Static typing is what can catch type errors at compile time.
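
      A minimal Python illustration of the difference:

      def f():
          return "1" + 1  # compiles to bytecode without complaint

      # Strong typing: no silent coercion, so this fails loudly...
      # ...but only at the moment f() actually runs (dynamic checking).
      try:
          f()
      except TypeError as e:
          print(e)  # e.g. cannot concatenate 'str' and 'int' objects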

      • Python has a single type, which includes integers, bignums, floats, lists, strings, classes, functions, etc. *and errors*. In Python, a run-time error is a value which can be manipulated just like anything else. For example, this code passes errors into functions, adds together a couple of errors and prints some errors out:


        def myFunction(x):
            print "CALLED"
            print x + ' world'

        myFunction('hello')

        try:
            myFunction(100)
        except TypeError as e:
            # The error is an ordinary value: bind it, pass it around, print it.
            print e

    • strongly-typed languages, where more defects can be caught at compile time, are less prone to runtime failures than interpreted or weakly-typed languages

      Isn't that kind of the point?

      Is this supposed to be something we didn't know? Or just confirming something we did?

      Mostly confirmation. It's good to have empirical evidence: https://news.ycombinator.com/i... [ycombinator.com]

  • by Anonymous Coward on Wednesday September 24, 2014 @09:59AM (#47983673)

    The difference, as the summary noted, is that when using a scripting language, you are trading compile-time errors (build breaks) for runtime errors that your users will see.

    If you write 'C' code, would you declare all your parameters and return types as 'void*'?
    If you write 'Java' code, would you declare all your parameters and return types as 'Object'?

    Why someone would willingly give up the function of a compiler is beyond me. Sure, use scripts for little tasks / prototyping etc. Any long-term project should be using a proper language that provides type-checking (at compile time) and proper encapsulation, so that 'private' means 'private' (looking at you, Groovy). I don't want to be forced to read every line of your crappy code just to figure out what object type your method supports because you are too damn lazy to define it in the method's interface.

    When you change the behavior of the method and assume different input/output object-types, I want that to be a BUILD-BREAK instead of me once again having to reverse engineer your code.

    • I think the test-driven advocates would say that relying on the compiler is OK for that one particular kind of error, but you really should be writing tests to catch that kind of error along with many others.

      The reality is probably, as you kind of imply, sometimes you have a task that is more suited to one approach or another.

      • Writing tests for 100% code coverage takes a good amount of effort. If you don't do this, and you use a weakly typed language, then you risk not seeing the issue until that rare code path is actually exercised.

      • relying on the compiler is OK for that one particular kind of error, but you really should be writing tests to catch that kind of error along with many others.

        Tests *only* make sense if the compiler is trustworthy. If you can't trust the compiler, you can't trust anything it produces, including your tests! Test suites are *far* less trustworthy in comparison, so anything that *can* be handled by the compiler *should* be handled by the compiler.

        Anything that's handled by the compiler doesn't need to be tested, since the tests wouldn't compile!


        -- Our function
        boolAnd :: Bool -> Bool -> Bool
        boolAnd True True = True
        boolAnd _ _ = False

        -- A redundant test: the type checker already guarantees that
        -- boolAnd only accepts Bools and only returns a Bool.
        testBoolAnd :: Bool
        testBoolAnd = boolAnd True False == False

      • by PJ6 ( 1151747 )

        I think the test-driven advocates would say that relying on the compiler is OK for that one particular kind of error, but you really should be writing tests to catch that kind of error along with many others.

        The reality is probably, as you kind of imply, sometimes you have a task that is more suited to one approach or another.

        The nature of testing is that complete coverage grows combinatorially with state. What you're saying is you don't want to eliminate the possibility of an entire class of errors, but rather rest this (rather significant) burden on testing. From my point of view that's like abandoning DRI in a database and saying tests can detect foreign key constraint violations and all the other things DRI can check. While technically true, it just doesn't make any practical sense.

    • by tomhath ( 637240 )
      But look at the results:

                       C    C#   F#   Go   Haskell  Java  Python  Ruby
      # ran solutions  391  246  215  389  376      297   676     516
      % no error       87%  93%  89%  98%  93%      85%   79%     86%

      Java was one of the poorest at actually running successfully.

    • I wouldn't want to draw too close a comparison between scripting and non-scripting languages, but this part that you reference

      The difference, as the summary noted, is that when using a scripted-language, you are trading all your compile-time (build breaks) for runtime errors that your users will see.

      Is not true for all scripting languages. AutoHotkey, for instance, gained a #Warn directive that, among other things:
      1) Tells you when you have a local variable with the same name as a global. The local trumps the global, unless you explicitly declare it global.

    • In my opinion the basic trade-off is that "scriptish" languages can be written to be closer to pseudo-code and thus easier to read and grok. Strong/heavy typing tends to be verbose and redundant, slowing down reading.

      Better grokkability often means fewer "conceptual" errors, but at the expense of more "technical" errors, such as type mismatches. There's no free lunch, only trade-offs.

      In some projects the conceptual side overpowers the technical-error side, and vice versa. It also depends on the personality of the team.

  • by mr_mischief ( 456295 ) on Wednesday September 24, 2014 @10:23AM (#47983905) Journal

    Simply because a language is billed as a "scripting" language (by which people tend to mean it's distributed as source code and partially compiled for each execution, rather than compiled once and distributed as object code; it's rarely about actually scripting other programs) doesn't mean there's no programming paradigm associated with it. Such languages can support procedural, functional, actor-based, object-oriented, logical, dataflow, reactive, late-binding, iterative, recursive, concurrent, and whatever other paradigms and methods people want. Some of them support mixing and matching even in the same program.

    Languages that are typically fully compiled can even be run in an interpreter. C-- comes to mind. Often languages known for interpretation (most of which are actually partially compiled rather than interpreted line-by-line) have support for compiling at least portions of a program up front, too. Examples include Python's .pyc files, LuaJIT, Facebook's HHVM, Steel Bank Common Lisp, and Reini Urban's work on perlcc.

    People making claims about one type of language vs. another should really keep straight what types they are talking about.

  • It always comes down to personal preference.
    All I care about is performance. If I want performance, I will learn how to use C++, regardless of what new writing methods I need to learn.
    People who can't/don't want to learn a "better language" will always try to brickwall every other language with an excuse that suits their "locked-in" writing ability.

    Funny how C++ is missing from these "let's try to justify programming languages using a graph made by a 2-year-old with crayons" tests.

    • by T.E.D. ( 34228 )

      All i care about is performance. If i want performance, i will learn how to use C++,

      Actually, if all you care about is performance, generally Fortran is the language for you. It's when you start caring about something like toolchain support and your own established codebase that C++ starts looking like a much better choice to most folks. But if you "don't care" about those things, here's your compiler [intel.com].

      • by Smerta ( 1855348 )
        Sincere question - I've heard that Fortran blows away (or at least beats) C++ for scientific/calculation programming, and considering the two languages' history and "raison d'etre", I'm not surprised... but can you lend any insight into what accounts for that, specifically? I mean, if I create arrays or matrices or whatever in C++, and I pay attention to cache effects, etc., it seems like my C++ still can't be as fast when it's compiled down into machine code... I've never seen a good explanation of what's going on.
      • Compiled COBOL and Forth will also beat the ass off C/C++.

    • The paper justifies its choice of languages, and hence why C++ wasn't included.

      Anyway, if all you care about is performance then why use C++? FORTRAN is faster.

    • by Anonymous Coward

      Seconded.

      C++ (and C) are the only languages I've ever used while working on runtime code for games (and really, C hasn't been the primary language for the past 20 years or so).

      But I've used all sorts of languages for non-runtime code (Python, C# and DOS-Batch being the 3 most commonly used).

      For instance, our GUI tools are mostly written in C# (with some legacy C++ still in use for some older tools). But I prefer Python (or sometimes DOS-Batch) for faceless tools (build farm, installers, miscellaneous scripts).

    • Given a project with a deadline, and all projects have deadlines, the performance statement is subjective and depends on the project. Yes, C++ will natively run faster than Java, C#, scripting, etc. (somewhere around 5% faster than current JITs), but chasing memory leaks can consume a significant amount of a C/C++ project, and that time could be spent optimizing the algorithms/code in other languages.

      For a "Hello World" application, C/C++ win. For a team project involving significant complexity the whole life
  • Why aren't there more languages that allow strong typing where desired but weak typing where not? One can kind of emulate this with generic "variant" or "object" types in some languages, but you have to keep declaring everything "object". If I want dynamic parameters, I shouldn't have to put any type-related keyword in the parameter definition.

    For example, one should be able to type function foo(x, y)... versus function foo(x:int, y:string)... for weak and strong typing, respectively. And types could be checked only where they are declared.
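
    Python's optional annotations come close to this (Python 3 syntax; an external checker such as mypy enforces them, the interpreter itself does not):

    # Dynamic where you want it: no type declared, none checked.
    def foo(x, y):
        return x, y

    # Declared where you want it: a checker flags misuse before runtime.
    def bar(x: int, y: str) -> str:
        return y * x

    bar(3, "ab")   # fine: "ababab"
    bar("ab", 3)   # still runs (str * int is legal) but mypy flags it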

    • Strong vs. weak typing; static vs. dynamic type checking. Tablizer's post skids across all four, mangling the differences and making it hard to discern his point.
  • My /. account still works. Haven't used it in... err... a few years.

    Just here to bump this article for my friend Mike and his fantastic work on RC.

  • by Alomex ( 148003 ) on Wednesday September 24, 2014 @12:55PM (#47985785) Homepage

    If I understand the statistics correctly, the average program has 71 lines of code. Those are mickey-mouse tests, for which scripting languages shine. All the verbosity of imperative languages becomes handy when you have tasks that are a few hundred KLOC long.

    This is a lesson Perl learned the hard way: once your program is long enough, you beg on your knees for a strong static type checking system.

  • The problem with programming language evaluations is that they tend to be based on small snippets of code, like this one, or data from novice student programmers, or worse, popularity. Yet what really tends to matter is how much trouble a language causes in large systems and in later years. That's where high costs are incurred, because changes in module A affect something way over in module Z. Undetected cross-module bugs, high costs of changing something because too much has to be recompiled, that sort of thing.

  • The biggest surprise for me is how well Go does:

    "Go is the runner-up but still significantly slower with medium effect size: the average Go program is 18.7 times slower than the average C program. Programs in other languages are much slower than Go programs, with medium to large effect size (4.6–13.7 times slower than Go on average)."

    My only objection is that they classify Go as "procedural" along with C, Ada, PL/1 and FORTRAN. It may not have inheritance (a good thing in my book!) but it has many OO features.
