Programming IT Technology

Performance Bugs, 'the Dark Matter of Programming Bugs', Are Out There Lurking and Unseen (forwardscattering.org) 266

Several Slashdot readers have shared an article by programmer Nicholas Chapman, who talks about a class of bugs that he calls "performance bugs". From the article: A performance bug is when the code computes the correct result, but runs slower than it should due to a programming mistake. The nefarious thing about performance bugs is that the user may never know they are there -- the program appears to work correctly, carrying out the correct operations, showing the right thing on the screen or printing the right text. It just does it a bit more slowly than it should have. It takes an experienced programmer, with a reasonably accurate mental model of the problem and the correct solution, to know how fast the operation should have been performed, and hence if the program is running slower than it should be. I started documenting a few of the performance bugs I came across a few months ago, for example (on some platforms) the insert method of std::map is roughly 7 times slower than it should be, std::map::count() is about twice as slow as it should be, std::map::find() is 15% slower than it should be, aligned malloc is a lot slower than it should be in VS2015.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by TWX ( 665546 ) on Wednesday March 22, 2017 @11:15AM (#54088637)

    It's stupid to call them "the dark matter of programming bugs". We were just accustomed to this being the way Microsoft did things, not a bug, a feature.

    That stems from Microsoft's early days writing for IBM, when it was paid per thousand lines of code. As such, it made sense that software was not written efficiently: the programmer was not rewarded for efficiency, the code merely had to fit within available memory. Unfortunately, it seems this practice has not stopped, given the sheer size of Microsoft operating systems relative to the amount of end-user functionality that has been added since the days of, say, Windows 3.1.

  • by xxxJonBoyxxx ( 565205 ) on Wednesday March 22, 2017 @11:18AM (#54088683)
    >> that the user may never know they are there

    They will if they try to run a lot of them on a machine with finite resources, like a phone. Or if it's a process that's iterated frequently, like a "big data" operation. But if the end user STILL doesn't notice it...then it's hard to call it a bug.

    On the other hand, the performance/just-get-er-done trade-off is well known to programmers of all stripes. (At least I hope it is -- are people really finding new value in the article?) There's the quick and dirty way (e.g., a script), then there's the "I can at least debug it" way (e.g., a program developed in an IDE), and then there's the optimized way, where you're actually checking whether key sections of code (again, especially the iterated loops) are going as fast as possible. Generally your time/cost goes up as your optimization increases, which becomes part of the overall business decision: should I invest for maximum speed, maximum functionality, maximum quality, etc.
    • Although a "scripting" language without a serious performance bug may actually process a larger data set faster than a program written in a systems language like C, if the latter program contained a serious performance bug. But not making obvious mistakes is hardly "optimization". At the risk of picking a somewhat contrived example, the decision not to use a bubble sort is not something I'd call optimization. It's just common sense.
  • Is it a problem, if no one notices? Most code can be optimized more than it is, but if everyone's happy with how it works, what's the problem?
  • The first example: std::map::insert should do nothing if an item is already present. So guarding the insert with a find() check first is 100% equivalent to calling insert directly, yet the direct call is roughly 7 times slower.

    This violates the important principle that when using a library, the obvious way to do things should also be the fastest. Instead, hacks are required to make your code fast, and that shouldn't happen.

    I assume the explanation is probably that std::find is small enough to be inlined, while std::insert…
    • by pruss ( 246395 )

      Actually, the explanation in the article is that there is a memory allocation for a node done *before* checking whether the object is present. So if the object is present, there is a pointless memory allocation and deallocation done. Nothing to do with inlining, and an easy fix for the library: just swap the order of the check for presence and the memory allocation.

  • Losing Battle (Score:5, Insightful)

    by ghoul ( 157158 ) on Wednesday March 22, 2017 @11:26AM (#54088779)

    It is a losing battle to try to solve performance in the programmer space. The compiler does a much better job of optimization thanks to a multitude of compiler tricks, including static and dynamic analysis, cache analysis and so on. The programmer trying to write the most efficient code should instead spend his/her time using out-of-the-box algos as far as possible, since the compiler knows how to fine-tune those. Next, they should run a profiling tool like JProfiler and see where the job is actually spending its time, rather than guessing at the heaviest part of the program. With multiple cores, multiple instruction pipelines and optimizing compilers, the bottleneck is often not where we would think it to be. Once we find the bottleneck using a profiling tool, then we can optimize it. In most cases 2% of the code is causing 98% of the bottleneck, so it's a much better use of programmer time (which is of course more expensive than computer time in most cases) to work backwards:
    1. Write your code so that it's correct, irrespective of efficiency,
    2. profile, and then
    3. fix the bottlenecks,
    rather than trying to find the most efficient algorithms before you write your code.

  • Two Solutions (Score:5, Insightful)

    by UnknownSoldier ( 67820 ) on Wednesday March 22, 2017 @11:27AM (#54088789)

    Programmers love to use the cop-out

    "Premature Optimization is the root of all evil"

    dogma which is complete bullshit. It tells me your mindset is:

    "Oh, we'll 'fix' performance issues later."

    Except later never comes. /Oblg. Murphy's Computer Law: [imgur.com]

    * There is never time to do it right, but there is always time to do it over.

    As Fred Brooks said in Mythical Man-Month.

    "Show me your flowcharts and conceal your tables, and I shall continue to be mystified.
      Show me your tables, and I won't usually need your flowcharts; they'll be obvious."
    -- Fred Brooks

    Which can be translated into the modern vernacular as:

    * Show me your code and I'll wonder what your data structures are,
    * Show me your data and I'll already know what your code is

    There are 2 solutions to this problem of crappy library code.

    1. You are benchmarking your code, ALONG THE WAY, right?

    Most projects "tack-on" optimization when the project is almost completed. This is completely BACKWARDS. How do you know which functions are the performance hogs when you have thousands to inspect?

    It is FAR simpler to be constantly monitoring performance from day one. Every time new functionality is added, you measure. "Oh look, our startup time went from 5 seconds to 50 seconds -- what the hell was just added?"

    NOT: "Oh, we're about to ship in a month, and our startup time is 50 seconds. Where do we even begin in tracking down thousands of calls and data structures?"

    I come from a real-time graphics background -- aka games. Every new project our skeleton code runs at 120 frames per second. Then as you slowly add functionality you can tell _instantly_ when the framerate is going down. Oh look, Bob's latest commit is having some negative performance side effects. Let's make sure that code is well designed, and clean BEFORE it becomes a problem down the road and everyone forgets about it.

    2. You have a _baseline_ to compare against? Let's pretend you come up with a hashing algorithm, and you want to know how fast it is. The *proper* way is to

    * First, benchmark how fast you can slurp data from a disk, say 10 GB of data. You will never be FASTER than this! 100% IO bound, 0% CPU bound.
    * Then, add a single-threaded benchmark where you just sum bytes.
    * Maybe you add a multi-threaded version.
    * Then you measure _your_ spiffy new function.

    Library vendors, such as Dinkumware who provide the CRT (the C run-time library), _should_ be catching these shitty performance bugs, but sadly they don't. The only solution is to be proactive.

    The zeroth rule in programming is:

    * Don't Assume, Profile!

    Which is analogous to what carpenters say:

    * Measure Twice, Cut Once.

    But almost no one wants to MAKE the time to do it right the first time. You can either pay now, or pay later. Fix the potential problems NOW before they become HUGE problems later.

    And we end up in situations like this story.

    • by Anonymous Coward

      All well and good in theory, but have fun being beaten to market by your competitor who made it work to the customer's satisfaction faster because they didn't do in-depth performance testing every step of the way and just made sure it wouldn't be noticeably slow overall.

      I'm all for performance optimization while developing solutions, so long as you aren't adding steps to delivering a solution that have no business value. If a widget has to function fast, by all means, test the crap out of it early and often.

    • Please develop something non-trivial that has to meet marketing and shipping deadlines while fixing defects reported in past releases, then come back and re-read your own post to see how much of what you preach you have actually practiced.
    • "Premature optimisation is the root of all evil"

      is an aphorism that is exactly trying to get across what you say at the end

      "Don't assume, Profile!"

      Basically, the guy that originally said it was trying to say you can't guess what the problems are going to be before you lay the code down. Write the code correctly, but don't try to tinker with your scheduling algorithm to make it provably optimal when it's going to be dwarfed by order-of-magnitude problems with the network code.

    • Re:Two Solutions (Score:4, Insightful)

      by HornWumpus ( 783565 ) on Wednesday March 22, 2017 @01:22PM (#54089959)

      You are arguing against the misquote: 'Early optimization is the root of all evil'.

      Not the accurate quote: "Premature optimization is the root of all evil."

      If you know a block of code is low-level, tight and potentially slow, it is not premature to write it with efficiency in mind from day one.

      • Unless you know it is only called once after the software is deployed.

        Or you know you have to ship in a few days, and writing it "perfect" takes longer than those days left.

        Should I continue? I could probably find 200 reasons why "premature optimization" is as unpleasant as other "premature ..." things.

        • If you know that, it's 'premature' to optimize it. Unless of course the 'one time run' will take weeks, better test it against real world datasets. I've seen DBAs lock up systems with update scripts that would have run for _years_ (for apparently simple schema updates).

          Not all early optimization is premature. Redefining 'premature' as you just did, doesn't change the basic truth.

    • Re: Two Solutions (Score:3, Informative)

      You don't seem to understand what Knuth meant by premature optimization. He specifically refers to optimizing prior to profiling. You claim it is bullshit and then go on to argue for its veracity by saying profiling first is correct just as Knuth was saying.
  • Check the links -- decent code and analysis, short and simple. I recently found a very similar bug in both PHP and HHVM with their trim() function (and variants thereof). In both PHP and HHVM, trim() unconditionally allocates more memory, even if there is no white-space on either end of the string to trim. It is faster to write PHP code to check for white-space on both ends and then conditionally call trim() on the string.

    • I recently found a very similar bug in both PHP and HHVM with their trim() function

      When you have a moment, can you submit a bug over at bugs.php.net?

      • by darkain ( 749283 )

        I'm still trying to find a small isolated test case, which is proving difficult. The script in question is a data importer dealing with ~100,000 rows and ~10 columns at a time, so 1 million entities being trimmed. In smaller cases, the built-in trim is faster, but once it gets past around 100k calls, it is faster to manually check if data could be trimmed before calling trim. At 1 million, it is an order of magnitude faster. But building a small test case doesn't yield these same results. When I figure out the right combina…

    • Actually, it's not (only?) a library bug, but a(lso) programmer bug. Using std::make_pair(x, x) makes a pair, duh! Complaining about it is silly. Hint: the initializer-list version of insert() is faster than the pair version (at least on sane platforms; Microsoft can be weird about C++).

  • The programmer is writing crappy and unoptimized code.
  • It's a myth of the cult of unit-testing that all tests passed == no bugs. Correctness bugs outlive all executed tests, and are a worse form of dark matter because they give no outward signs.
    • Depends on how well the unit test is written. I spend three times more time writing the unit test than the actual code. Mostly because I test for what the results should be and every possible edge case that I think could happen. Once I'm satisfied with the unit test, I'll tweak the code for speed.
        • So it's highly dependent on what you think could happen, and what satisfies your sense of sufficiency. Of course, your time needs to be weighed against the cost of failure from the bug manifesting itself. Despite all that's been invested in development methodology, this remains a black art.
        • So it's highly dependent on what you think could happen, and what satisfies your sense of sufficiency.

          It's called programming. The [computer | self-driving car | AI | programmable girlfriend] is no better than the person who wrote the software.

          Despite all that's been invested in development methodology, this remains a black art.

          That's why I always laugh when a computer science graduate gets on his high horse. Those of us in the trenches know better.

  • I interact with a piece of code I don't maintain that has this issue. A piece of Visual Basic (don't get me started) that copies from a source to a destination. However instead of using B=A to move the data, the author used copy(A) and paste(B). The data therefore interacts with the clipboard, slowing the process down.

    The issue might not be noticeable on a small amount of data. However, I use this piece of code to move gigabytes of data every day.
  • We were required to install it on our Linux servers - we run CentOS (same as RH). Every few days, the stupid monitor is suddenly eating 99%-100% of the CPUs... for *hours*. Overnight.

    I attached strace to it, and it's in some insanely tight loop, looking at its own threads.

    Maybe if I prove that it's doing it on multiple servers (it is, but I have to catch it -- nothing's reporting this, unless it runs the system so hard it throws heat-based machine checks), and put a ticket in, and *maybe* the team that forc…

  • by nospam007 ( 722110 ) * on Wednesday March 22, 2017 @11:43AM (#54088999)

    Being an old fart, I remember that in my day the worst performance problems were caused by programmers with their own badly written libraries of functions and objects that they included everywhere; most of those dated from their very first weeks as a programmer, and they sucked badly.

  • by middlebass ( 1918164 ) on Wednesday March 22, 2017 @11:56AM (#54089137) Homepage
    In 1973 or 1974 I was the systems programming manager at the National Academy of Sciences, after our IBM mainframe had been upgraded to the first version of the OS supporting virtual storage. Many programs, mostly Fortran programs, were running much slower than they used to. The problem was two-dimensional arrays and how they were initialized and searched. If you're looping through the wrong subscript in a big array, you cause a page fault every time you increment it. Flash storage makes that a much smaller problem than it was with disk drives, but I'm sure most programmers today have no idea that this problem exists or how to handle it.
    • by swilver ( 617741 )

      So, I profile that code, I find, hm, odd, this loop is taking a lot of time.

      It's accessing some array. Weird -- array accesses are normally blazingly fast.

      Oh look, it's a two dimensional array, that's something you don't see every day!

      Let's play a bit with the code, hey, it's fast now that I'm looping through it differently... hm, I wonder how that thing is laid out in memory... oh, could it be that it is causing cache line misses / page faults / disk cache misses (yes, those abstractions are present at every…

  • The rule of thumb for programming anything is, first make it work, then make it work better / faster.

    If the first pass works well enough and fast enough, it doesn't matter if the code was written in an efficient manner. If somebody used bubble sort for an array of 5 items, who cares? If the array becomes larger, now you have a performance bug.

    It's literally wasteful to spend time on performance enhancement before you know which performance problems actually occur in real life. Another name for premature perfor…

    • by swilver ( 617741 )

      In my experience, way too many programmers go for the obvious, short-cutting, direct, layer-breaking solution because it only requires writing a few lines of code. Out of an infinite set of possible solutions for the problem, they choose the one that saves them the most characters typed.

      Experienced ones, however, will reduce the solution space and reject solutions that don't adhere to architectural criteria right from the start, and write something that looks a little more complicated but in fact results in…

      • [...] looks a little more complicated [...]

        A different programmer looks at it a year later, determines that it looks too complicated, and refactors the code to be simpler, "in a far more elegant, better-scaling and maintainable solution."

  • "an experienced programmer, with a reasonably accurate mental model of the problem"

    Well, that's one of the many things that separate programmers from coders, as I designate them. I.e., the ones who know and do vs. the ones who have a limited clue at most but do it nonetheless and behave like they'd be the gods of algorithm and program development. In my book, and in my area of work, code that gives the result but is 2-7-whatever times slower than it could be with some actual knowledge besides tapping c…
    • Seems like you're complaining about script kiddies and not coders. I'm a coder because I got my programming degree from a community college and I don't work professionally as a programmer (I work in IT support), but I can think through a coding problem better than most programmers I know.
  • by account_deleted ( 4530225 ) on Wednesday March 22, 2017 @02:08PM (#54090349)
    Comment removed based on user account deletion
  • by johanwanderer ( 1078391 ) on Wednesday March 22, 2017 @02:50PM (#54090721)
    For an explanation: http://thedailywtf.com/article... [thedailywtf.com]
  • by Gravis Zero ( 934156 ) on Wednesday March 22, 2017 @03:07PM (#54090845)

    Isn't this why we have profilers like Valgrind to identify slow functions?
