Which Programming Languages Are Most Prone to Bugs? (i-programmer.info) 247

An anonymous reader writes: The i-Programmer site revisits one of its top stories of 2017, about researchers who used data from GitHub for a large-scale empirical investigation into static typing versus dynamic typing. The team investigated 20 programming languages, using GitHub code repositories for the top 50 projects written in each language, examining 18 years of code involving 29,000 different developers, 1.57 million commits, and 564,625 bug fixes.

The results? "The languages with the strongest positive coefficients - meaning those associated with a greater number of defect fixes - are C++, C, and Objective-C, along with PHP and Python. On the other hand, Clojure, Haskell, Ruby and Scala all have significant negative coefficients, implying that these languages are less likely than average to result in defect-fixing commits."

Or, in the researchers' words, "Language design does have a significant, but modest effect on software quality. Most notably, it does appear that disallowing type confusion is modestly better than allowing it, and among functional languages static typing is also somewhat better than dynamic typing."

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Ukab the Great ( 87152 ) on Monday January 01, 2018 @03:01AM (#55842203)

    Brainfuck

  • by Waccoon ( 1186667 ) on Monday January 01, 2018 @03:05AM (#55842211)

    You already have to be a genius to understand functional languages, so of course those people make fewer mistakes.

    I love it when functional fans insist it's more analogous to how the brain really thinks. That's why so few people can figure out how to do things that way.

Functional programming is not more complicated than imperative programming styles.
      But unfortunately functional languages like Haskell often have a strange syntax, that is all.

  • 2018 (Score:5, Insightful)

    by AHuxley ( 892839 ) on Monday January 01, 2018 @03:08AM (#55842217) Journal
    Rediscovers how great Ada would have been for the consumer.
    • by romiz ( 757548 )
Perhaps if one day we get a compiler that does not require you to choose between an expensive license or coding only GPLv2 projects. Until then, all those GCC efforts only serve as an advertisement for GNAT Pro, and the only users will continue to be those avionics companies that are used to paying a lot.
      • Is that really the case? GNAT is part of GCC isn't it, which allows you to use the output and the libraries without counting as a derived work.

        • by romiz ( 757548 )
It seems that I was confused: you don't have the same standard libraries when you use AdaCore's GNAT Libre, GNAT Pro, or Ada support in GCC. And the Ada support integrated into the GCC mainline supports the linking exception. There is a better explanation [stackoverflow.com], but all this is quite confusing...
      • There's absolutely nothing wrong with writing GPLv2 code in Ada. I can think of one major issue that could have been avoided entirely had Ada been the language of choice for a GPL application: the Heartbleed bug in OpenSSL [wikipedia.org].

        "If software is safe, it cannot harm the world. If software is secure, the world cannot harm it." -- John Barnes, author of several Ada programming texts

      • What drugs are you on?
    • by HiThere ( 15173 )

The first problem I had with Ada was that strings of different lengths were of different types. That could be solved with bounded or unbounded strings, but then you couldn't compare against literals. There were other problems, and there were always ways around them, but I had to use unchecked conversion too often for things that were perfectly safe. The entire thing was a mess due to over-concern with simple type conversions. Dynamically allocated storage is also a mess, though, given the original idea

    • by Tom ( 822 )

      It would actually be a good idea to rediscover some of the advances of the past that have been lost to history. My domain is information security, and in many areas we are worse than our ancestors in the early days of computing were. Updating some of the academic research that has fallen by the wayside could do wonders for us.

  • Complexity (Score:5, Insightful)

    by Anonymous Coward on Monday January 01, 2018 @03:24AM (#55842273)

Or could it be that software written in C++ usually tends to be large, complex software where performance is important, along with various other complicating factors, while software written in Ruby, for example, tends to be simpler?

    Sounds like this 'study' started with a conclusion already in mind.

    • That was my first thought. It turns out that they ranked popularity by number of stars.

      When you think of popular pieces of open source written in each language, what do you think of? Here are the ones from the paper:

      C projects: Linux, git, php-src.
      Python projects: Flask, django, reddit.
      JavaScript projects: Bootstrap, jquery, node.
      Java projects: Storm, elasticsearch, ActionBarSherlock

      If you didn't know, ActionBarSherlock is a piece of Android infrastructure. Given that, these all seem very reasonable.

      How abo

      • by AuMatar ( 183847 )

ActionBarSherlock is a replacement for a piece of Android architecture to backport functionality that existed in 4.0+ to 2.2+ (or thereabouts). It's been deprecated for years, as nobody writes for those old versions, and Google has been releasing its own backporting libraries for years. So no, it's not reasonable.

      • by jeremyp ( 130771 )

        Mongo seems reasonable, but... those other two! There's no Mozilla, no Boost, no tensorflow, no LLVM... not even webkit, just two webkit clients, both of them javascript bindings.

        Do any of those projects use GitHub as anything other than a mirror? LLVM, for example, is hosted in its own SVN repository. I imagine most people who want the source go direct to the source, as it were.

        • Fair point. It's also true that a lot of these projects don't have that many people working on them.

          LLVM and tensorflow are good examples of software projects where you need to have a minimum level of knowledge of the problem domain to make any meaningful contribution. So while a lot of people use those projects, not very many contribute.

          Boost, of course, has a very strict code review policy before it accepts new functionality.

Or could it be that software written in C++ usually tends to be large, complex software where performance is important, along with various other complicating factors, while software written in Ruby, for example, tends to be simpler?

      Sounds like this 'study' started with a conclusion already in mind.

      Yeah. Another possible conclusion from this data is that C++ is more commonly or easily debugged, and thus more bugs are found and fixed, where they are left unfixed in the other languages.

      • by Entrope ( 68843 )

        No one else has pointed out the old saw that there are three kinds of lies (lies, damned lies, and statistics)? Or that one could -- and someone actually did -- literally write a book on "How to Lie with Statistics"?

      • by HiThere ( 15173 )

        Sorry, that hypothesis doesn't fly. It may be harder to debug C or assembler than C++, but most other languages provide more usable debugging facilities than does C++, and most of them are easier to write unit tests for. The unit testing for C++ is basically an add-on. Even assert statements in C++ are crippled, unless you use an extension. (Assert statements should include an optional message that is printed with the error, and which can dump variables of interest in a formatted way.)

        So C++ is more dif

  • by shess ( 31691 ) on Monday January 01, 2018 @03:31AM (#55842303) Homepage

Something the linked article didn't seem to address is that the population for each language will differ. The average Haskell programmer is going to be very different from the average C++ programmer, or, god forbid, the average Python programmer.

    Also, while they did try to address problem domains, I don't think they addressed systemic issues. For historical reasons, there are many projects which use C or C++ simply because of what they need to interface with to get the job done. For instance, there simply aren't going to be that many browser projects which aren't written in C++.

    Personally, I think the interesting take-home is not the difference between languages, it's how small the number of commits for security and memory issues was.

    • by serviscope_minor ( 664417 ) on Monday January 01, 2018 @04:58AM (#55842469) Journal

      Also, while they did try to address problem domains, I don't think they addressed systemic issues.

I don't think they do: none of them has things like zero-overhead abstractions, zero-cost memory allocation and so on. And some of them (like Go) lack the kind of abstractions present in many modern languages.

      For instance, there simply aren't going to be that many browser projects which aren't written in C++.

Of the three remaining extant engines - Firefox's, WebKit/Blink, and Edge's Trident - all except Firefox's are written in C++. Firefox is partly Rust now.

Rust, I think, is one of the very few languages aimed at the same problem domain as C++ by people who understand enough C++ to know what that problem domain was. Look for example at Pike's rants on Go and how it was designed to replace C++ and didn't: many C++ programmers skimmed the features and said something like "oh, that'll make my program slower, more verbose, buggier and harder to write". Rust, on the other hand, has the same machine model as C++ but a very, very different type system.

It's never going to replace C++ across the board, that's for sure, but it's proven capable of replacing C++ in a niche where formerly there were no contenders.

    • I'd venture that the small number of commits for security issues is because many developers 1) don't mark issues as security issues (security not being foremost in their mind) and 2) many developers can't recognize issues as affecting security (which is even scarier).

  • by mykepredko ( 40154 ) on Monday January 01, 2018 @03:58AM (#55842355) Homepage

    This is an interesting study, but I don't know if the results can be extrapolated to include closed source software.

    My problem with this is that I don't see any evidence of:
a) Projects in the study have a published project plan with somebody managing it at a high level (I would think the Linux kernel could be thought of as having a plan with strong central management). I tend to believe that projects to which multiple individuals contribute (with varying levels of understanding of the software, the app's background, and issues experienced during development) would be at a much lower quality level than something managed by a strong, continuous team - this doesn't seem to be a consideration when I RTFA (popularity of projects seems to be a bigger issue).
b) Different development tools used by different developers. In terms of the C/C++ typing issues, Windows software developed and built in Visual Studio, the Eclipse text editor with MinGW, or something like Komodo Edit with Cygwin and user-written makefiles will identify different typing issues and may generate code that works differently, especially with regard to identifying and handling typing issues. I would like to know how many bug fixes are the result of something that isn't flagged and works fine in VS but doesn't work when built with MinGW, leading to a fix.
    b.1) I'm not 100% sure of the methodology used in this study, but wouldn't a file that originally had tabs for indentation, which an editor automatically changes into spaces, be misidentified as a "fix" if it's uploaded back into the repository? This is a combination of b) and c).
    c) Different coding styles. I know of several Open Source projects in which a developer has re-formatted code simply because they don't think it's in the "correct" style and they have difficulty reading it resulting in them changing it so they can follow it better. To be fair, I'm sure a lot of us have done that because some people have very different and strongly felt ideas about how code should be formatted.
    d) Lack of formal testing methodologies. I don't think many Open Source projects have strong, automated regression testing processes and methodologies before allowing a new release.
e) Difference in functional use of different languages. I would think that methods written in C, C++ and Objective-C would provide more low-level functionality than those in Clojure, Haskell or Scala. Ruby probably fits somewhere between the two groups.

    Comments?

  • Python (Score:5, Insightful)

    by _merlin ( 160982 ) on Monday January 01, 2018 @04:16AM (#55842389) Homepage Journal

    I know I'll get flamed for this, but Python is really error-prone in a particular area, and that's its ridiculously weak name resolution rules. In a language like C, Perl, or even PHP, names are resolved during the compile phase. The compiler knows which definition of a name is going to be used at any point. Python doesn't have this - when it runs across a name, it walks up the scope hierarchy looking for a candidate.

    This means that code can run happily for months or even years, until it just crashes with an undefined name error. This could be because of a rarely-used code path with a typo in it, botched refactoring of a rarely-used code path, or a particular set of rare circumstances where a global name isn't set before the code gets to a certain place.

    The usual response is that unit tests should catch this. But let's face it, 100% unit test coverage is pretty rare, particularly for the kind of fast turnaround stuff that Python's frequently used for. Also, unit testing isn't necessarily going to simulate a corner case where a global doesn't get set before code that uses it executes. It also makes refactoring more risky because there's no point where the compiler can tell you you're referencing a name that's no longer defined, or no longer has a certain method/field.

    This is the kind of area where it's really useful if the compiler can help you, and Python's ridiculously weak name resolution rules make that completely impossible.
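A minimal, hypothetical sketch of the failure mode described above (the function and flag names are invented for illustration):

```python
def monthly_report(values, audit=False):
    total = sum(values)
    if audit:
        # Rarely-taken path: "totl" is a typo for "total".
        # Python only resolves the name when this line actually
        # executes, so nothing complains until the audit branch runs.
        return f"Audit total: {totl}"
    return f"Total: {total}"

print(monthly_report([1, 2, 3]))        # runs happily: Total: 6
# monthly_report([1, 2, 3], audit=True) # NameError: name 'totl' is not defined
```

A linter such as pyflakes would catch this particular typo, but not every variant - e.g. a global that is only assigned on some code paths before use.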

  • source http://wiki.c2.com/?AplLanguag... [c2.com]

    [6] L(L':')L,L drop To:
    [7] LLJUST VTOM',',L mat with one entry per row
    [8] S1++/\L'(' length of address
    [9] X0/S
    [10] LS((L)+0,X)L align the (names)
    [11] A((1L),X)L address
    [12] N0 1DLTB(0,X)L names)
    [13] N,'',N
    [14] N[(N='_')/N]' ' change _ to blank
    [15] N0 1RJUST VTOM N names
    [16] S+/\' 'N length of last word in name

    As mangl

    • Actually thinking about it, it was always easier to just code a new function than try to read someone else's old stuff.

    • For whatever reason APL always reminds me of Arthur C Clarke's classic story "The Nine Billion Names of God." If anyone ever writes a readable APL program perhaps the stars in the sky will, without any fuss, go out.

  • The base concept is bulls**it on its own.

It's more like spoken or written human languages to me:
    You need to study, learn and practice before becoming proficient.

    If you think that you need a fast solution, then the language you know best is among the right solutions.
    Assembly isn't more error-prone than English.
    It just depends on whether or not you are an idiotic programmer or an easy-going speaker.

    • The base concept is bulls**it on its own.

It's more like spoken or written human languages to me: You need to study, learn and practice before becoming proficient.

      If you think that you need a fast solution, then the language you know best is among the right solutions. Assembly isn't more error-prone than English. It just depends on whether or not you are an idiotic programmer or an easy-going speaker.

      Correct. No one language over another is more prone to bugs; it's the maturity of the programmers writing the code. Mature programmers won't have memory issues in C, C++, etc; immature programmers will have all kinds of bugs in any language - even memory errors in Java.

  • by munch117 ( 214551 ) on Monday January 01, 2018 @06:06AM (#55842571)

A Python program can be very self-diagnosing: when something goes wrong, it presents as an exception traceback from an uncaught exception.

    A lot of bug reports I get go like this: Someone sends me a screenshot with a traceback, I look up the line of the error, find that the error is obvious, fix it, commit the fix, and I still have time for a cup of coffee before 5 minutes have passed. The reporter may not be happy because they can't get on with their work until I cut a new version, but other than that this sort of bug is of very little consequence: no data files have been corrupted or anything like that.

    Then there's the other kind of bug, the subtle kind where everything seems to be working fine, but someone checked the output and it just isn't right: the totals on the report don't add up or something. These are the hard ones. And then you have to dig in and hypothesise and experiment and bisect and so on. Of course those bugs happen in Python programs as well.

    But I bet the kind of bugs that put Python over average are the first kind, and that Python is below average on the second kind. Which is a good tradeoff.
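A sketch of why the first kind of bug is quick to diagnose (the function and key names here are made up): the traceback from an uncaught exception pinpoints the file, line, and failing expression.

```python
import traceback

def fetch_total(record):
    # Deliberate bug for illustration: "totl" is a typo for "total".
    return record["totl"]

try:
    fetch_total({"total": 42})
except KeyError:
    report = traceback.format_exc()

# The formatted traceback names the failing function and line,
# which is why a screenshot of it is often enough to fix the bug.
print(report)
```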

    • by Ihlosi ( 895663 )
      Then there's the other kind of bug, the subtle kind where everything seems to be working fine, but someone checked the output and it just isn't right: the totals on the report don't add up or something.

      Even worse are horrid bugs (think "buffer overflow") that in practice result in minor performance degradation (still well within the requirements).

Or, my favorite so far - using an uninitialized variable that by complete coincidence is always zero at this point in this compile run, and zero is the value t

    • by Entrope ( 68843 )

      No, that is a shitty tradeoff. If easily spotting a bug when someone tells you where to look is the most common situation, you are writing too much obviously bad code, and you should look at every line you write as a possible bug and write more (or better) unit tests.

      In the group I work with, if you look at code that gets committed to shared repositories, the most common Python errors involve runtime detection of errors on code paths that only run in uncommon situations: in particular, use of undefined fun

      • In my line of work, silently computing the wrong value could easily cost my company millions. A minor production stop is manageable, but delivering bad product unnoticed is a disaster. So for me it's a good tradeoff. YMMV.

        Statically typed languages often have some sort of default value system, e.g. default-construction in C++. I see that as an unacceptable risk: If I ever try to use the value of a variable that has not been explicitly assigned, then I want an error trap. I do not want the number zero, the

        • by _merlin ( 160982 )

          A mistake in my line of work can cost millions in seconds. If you want a program to blow up on accessing an uninitialised member in C++, use boost::optional (or std::optional if you're using C++17), or a dynamically allocated object so it'll fault on a null pointer access. In python, an attribute error could be caused by a typo, incomplete refactoring, or an API change that you didn't deal with properly, besides a value being used before it's initialised.

          At any rate, this is the kind of thing that you sho

          • If you use boost::optional for every single declaration you make, then you get the same type of runtime errors as in Python. Except with lower quality tracebacks. I expect you don't, and my point stands.
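The error-trap behaviour being argued for can be sketched in Python (the class and attribute names are hypothetical): an attribute that was never assigned raises immediately on read, rather than silently yielding a default value.

```python
class Order:
    def __init__(self, price, discounted=False):
        self.price = price
        if discounted:
            self.final_price = price * 0.9
        # final_price is deliberately left unset on the other path,
        # mimicking a forgotten initialisation.

print(Order(100, discounted=True).final_price)  # 90.0
# Order(100).final_price raises AttributeError - a loud error trap
# instead of a silent default like 0.
```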

            • by _merlin ( 160982 )

You can't see past the limitations of your favourite tool. In C++ or other sane languages, a lot of things can be scoped so they can't be used before they're initialised. You get an error at compile time rather than having it blow up when a code path is hit in production. You actually get tools to reduce mutable state, which allows for better optimisation as well as identifying a whole class of errors at compile time. Functional languages (e.g. Erlang or F#) do this even better. Your whole point seems to

    • But I bet the kind of bugs that put Python over average are the first kind, and that Python is below average on the second kind. Which is a good tradeoff.

      It would be no different in C++ with end users programming in the modern idiom on top of mature application libraries that support and encourage the modern idiom.

      C++ is the GTA III of programming languages.

      cout << "Open, world! [wikipedia.org]" << '\n';

  • by Qbertino ( 265505 ) <moiraNO@SPAMmodparlor.com> on Monday January 01, 2018 @07:06AM (#55842675)

    Comparing PHP with Scala is like comparing "Game of Thrones" with "Ulysses".
Any n00b can program something useful in PHP within an hour. That's the whole point of PHP. That's why we have such amazingly feature-complete systems like WordPress. Granted, the architecture of these PHP systems is so bizarre that any reasonably seasoned programmer will not believe his eyes when he looks at the actual code - but it does work (most of the time) and it is useful.

Scala is a programming language that forces you to know what you are doing. Yeah, no shit it has fewer bugs. If I don't know what a JVM is and what bytecode is, there is little chance I'll even get started with Scala. Only an experienced programmer will get the point of Scala in the first place. Thus Scala code has fewer bugs. No surprise here.

    My 2 cents.

    • You apparently know little about PHP. That may or may not be a terrible thing, but holding up WordPress as a piece of the PHP ecosystem is unrepresentative, even disingenuous. It was obsolete when written and it can never be meaningfully improved. It's its own self-sustaining ecosystem that has nothing to do with anything written in PHP in the 15 years since then.

      Modern PHP looks a bit like JavaScript, actually, and the frameworks are all very boring MVC. It's also (somehow) one of the faster scripting lang

You know little about the real world: the majority of web frameworks that use PHP, such as Drupal, are badly written garbage. It is the language of the careless, the language used by builders of sites that get infected and that spread malware and cause identity theft.

        PHP developers are like those that join the school band and want to play the triangle, blocks or cymbals.

        • Mentioning Drupal can be taken as an indication that you know nothing about PHP frameworks. You are simply displaying your prejudices.

take it as an indication that I have to deal with their incompetent code, the security breaches, data corruption, and resource problems they cause every fucking working day

If you work in an organization that uses obsolete frameworks like Drupal, then you deserve what you get.

uh huh, but make a list of the top CMS systems and you find the top three by market share are WordPress (60%), Joomla (6%), and Drupal (4.5%).

                • Weren't we talking about frameworks a minute ago? Joomla and Drupal are dead projects. I don't know why you're crying up a 4.5% market share. Well, except that you seem to be an idiot with an axe to grind.

That in and of itself makes the results next to useless.

In particular, considering pre-2011 C++ and modern C++ (C++11 and later) as the same language from a prone-to-bugs POV is ridiculous. Sure, since it's backwards compatible you can continue to shoot yourself in the foot like it's 2010 (or 2000 - sheesh!) if you really want to, but if you're using C++ nowadays and having problems like memory leaks or dangling pointers then YOU are the problem, not the language.

    I'm sure other languages have similar issues - if you don't use

    • by Entrope ( 68843 )

      Well, they are using data from GitHub, so even though some of the projects go back 18 years, the projects should all be fairly well-maintained. However, the study's authors give no good reason to think that the kind of linear regression they use is appropriate. Even assuming a linear model is appropriate, their choice of variables seems arbitrary. For example, why is "log of project age" the right parameter to use, rather than some quantized version of age, or the square root of the age? Did they choose

  • This is one of those flamebait topics that is basically pointless to debate. It's too general. There are too many ways to define a bug, and many of them depend heavily on indirect/abstract qualities of the language, such as what sort of people use it or what sort of problems it's most commonly used to solve. It's just impossible to remove enough of the unknowns and side-effects of one sort of bug to give a useful answer on any other.

For example, if you're going to judge a language on the code-to-fix rati

  • Looking at programming languages is good but this report implies there are other factors more important at play. What is the demographic of a good programmer? What is the marker of a good programmer who does not produce bugs? Ivy league vs. "Scheme certificate in 90 days" training programs? Wyoming programmers vs. California programmers? Just graduated 20 somethings vs. 50 year olds? Traditional CS programs vs. explicit software development training programs?

    We all have our biases, but let's see what, if an

  • convert problems I don't know how to solve into problems I *do* know how to solve. That's what programming is.

    So using that methodology, I have to ask here: which programming languages are the most popular?

  • And the most buggy software is always C, C++, Objective C or anything else that encourages a human to manipulate pointers and/or memory.

    Most buggy framework? WPF, by far. I haven't had to test UWP yet, but it looks like an even bigger, overly elaborate clusterfuck.

    Sometimes newer is not better. I despised MFC, loved Winforms, which is inflexible and dull as dishwater, but simple, obvious and easily testable (everything Microsoft appears to hate). It's all been downhill after that.

    And don't get me started on
