Performance Bugs, 'the Dark Matter of Programming Bugs', Are Out There Lurking and Unseen (forwardscattering.org) 266
Several Slashdot readers have shared an article by programmer Nicholas Chapman, who talks about a class of bugs that he calls "performance bugs". From the article: A performance bug is when the code computes the correct result, but runs slower than it should due to a programming mistake. The nefarious thing about performance bugs is that the user may never know they are there -- the program appears to work correctly, carrying out the correct operations, showing the right thing on the screen or printing the right text. It just does it a bit more slowly than it should have. It takes an experienced programmer, with a reasonably accurate mental model of the problem and the correct solution, to know how fast the operation should have been performed, and hence if the program is running slower than it should be. I started documenting a few of the performance bugs I came across a few months ago, for example (on some platforms) the insert method of std::map is roughly 7 times slower than it should be, std::map::count() is about twice as slow as it should be, std::map::find() is 15% slower than it should be, aligned malloc is a lot slower than it should be in VS2015.
Stupid analogy (Score:3)
It's stupid to call them "the dark matter of programming bugs". We were just accustomed to this being the way Microsoft did things, not a bug, a feature.
That stems from Microsoft, originally writing for IBM, being paid per thousand lines of code. As such it made sense that software was not written efficiently because the programmer was not rewarded for efficiency, it merely had to fit within available memory. Unfortunately it seems that this practice has not stopped given the sheer size of Microsoft operating systems relative to the amount of end-user functionality that has been added since the days of say, Windows 3.1.
Re:Stupid analogy (Score:5, Interesting)
And yet from an end-user point of view, Windows 8 and subsequent basically headed right back to Program Manager.
When I think back to my Windows 3.1 experience, I had a launcher in the form of Program Manager, a file tree browser called File Manager, I had the ability to run several programs at the same time, I had the ability to play video and sound including playing music from file and from CD, I could access network storage and map resources to use as if they were local, and I could even use a web browser to access the fledgling Internet. Hell, the local college was part of the Internet so I had 10BaseT connectivity to what was available at the time.
My point is that while the back-end of 16-bit Windows 3.1 is essentially gone, the way that people use Windows operating systems is substantially similar to the way it was almost 25 years ago. Obviously particulars have changed, but when you fundamentally look at the end-user experience versus the increase in hardware requirements and the sheer size of the install base you must wonder where all that effort really went, because from the end-user point of view it's not really all that obvious.
Re: (Score:2)
There has been A LOT of added functionality since the Windows 3.1 days! Not just changes in the UI, but changes to security, APIs, and so on. Windows, and for that matter other operating systems, are far more complex than they have ever been.
Possibly, but at least developers don't have to deal with the segmented memory model and other 16-bit limitations from Windows 3.1. Writing programs to run in the original 16-bit Windows API was one of the most byzantine things I have ever done.
Shhh...don't tell him about scripting languages... (Score:5, Insightful)
They will if they try to run a lot of them on a machine with finite resources, like a phone. Or it's a process that's iterated frequently, like a "big data" operation. But if the end user STILL doesn't notice it...then it's hard to call it a bug.
On the other hand, the performance/just-get-er-done trade-off is well known to programmers of all stripes. (At least I hope it is - are people really finding new value in the article?) There's the quick and dirty way (e.g., a script), and then there's the "I can at least debug it" way (e.g., a program developed in an IDE), and then there's the optimized way, where you're actually seeing if key sections of code (again, especially the iterated loops), are going as fast as possible. Generally your time/cost goes up as your optimization increases, which becomes part of the overall business decision: should I invest for maximum speed, maximum functionality, maximum quality, etc.
Re: (Score:3)
Is It A Problem? (Score:2)
Re: (Score:2)
when you are maimed in your self-driving car because its computer was too slow to pick the crazy driver out of the crowd
You can't blame the self-driving car for an accident if the crazy driver drives out of a crowd into traffic. Cars in other lanes are predictable events. Cars driving on the sidewalk and running over pedestrians are not predictable events.
Re: (Score:2)
why not? normal drivers are found at fault ALL THE TIME for NOT avoiding hazards
Some accidents are simply unavoidable and can't be prevented.
Re: (Score:2)
You certainly can if the car could have avoided him by being written a little better.
If the accident is avoidable. Some accidents are not. I read an article last year that self-driving cars won't be safe until all the human drivers are off the road.
std::insert is double awful (Score:2)
This violates the important principle that when using a library, the obvious way to do things should be the fastest. So hacks are required to make your code fast, and that shouldn't happen.
I assume the explanation is probably that std::find is small enough to be inlined, while std::insert
Re: (Score:2)
Actually, the explanation in the article is that there is a memory allocation for a node done *before* checking whether the object is present. So if the object is present, there is a pointless memory allocation and deallocation done. Nothing to do with inlining, and an easy fix for the library: just swap the order of the check for presence and the memory allocation.
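For the calling side, the workaround pattern implied here looks roughly like this (a sketch only; as noted, the real fix belongs inside the library):

```cpp
#include <map>
#include <string>
#include <utility>

// Look the key up first so the common "already present" case never touches the
// allocator; only a genuinely new key triggers a node allocation.
void insert_if_absent(std::map<int, std::string>& m, int key, const std::string& value)
{
    auto hint = m.lower_bound(key);                  // no allocation: just a tree search
    if (hint == m.end() || hint->first != key)       // only allocate when the key is new
        m.insert(hint, std::make_pair(key, value));  // hinted insert: O(1) with a good hint
}
```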
Losing Battle (Score:5, Insightful)
It is a losing battle to try and solve performance in the programmer space. The compiler does a much better job of optimization thanks to a multitude of compiler tricks, including both static and dynamic analysis, cache analysis, and so on. The programmer trying to write the most efficient code should instead spend his/her time using out-of-the-box algos as far as possible, since the compiler knows how to fine-tune those. Next, they should run a profiling tool like JProfiler and see where the job is actually spending its time, rather than guessing which is probably the heaviest part of the program. With multiple cores, multiple instruction pipelines, and optimizing compilers, the bottleneck is oftentimes not where we would think it to be. Once we find the bottleneck using a profiling tool, then we can optimize it. In most cases 2% of the code is causing 98% of the bottleneck, so it's a much better use of programmer time (which is of course more expensive than computer time in most cases) to work backwards:
1. Write your code so that it's correct, irrespective of efficiency,
2. profile, and then
3. fix the bottlenecks,
rather than trying to find the most efficient algorithms before you write your code.
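For step 2, even a crude wall-clock measurement beats guessing before you reach for a real profiler. A minimal sketch, with do_work() as a made-up stand-in for the suspected hot spot:

```cpp
#include <chrono>
#include <cstdio>
#include <numeric>
#include <vector>

// Hypothetical stand-in for the code under suspicion.
static long long do_work()
{
    std::vector<int> v(1'000'000, 1);
    return std::accumulate(v.begin(), v.end(), 0LL);
}

int main()
{
    auto start = std::chrono::steady_clock::now();
    volatile long long result = do_work();   // volatile: keep the call from being optimized out
    auto stop = std::chrono::steady_clock::now();

    auto us = std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count();
    std::printf("do_work returned %lld in %lld us\n",
                static_cast<long long>(result), static_cast<long long>(us));
}
```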
Re: (Score:2)
The article is talking about Visual Studio, possibly the worst compiler in the world. There isn't much optimization going on there.
Re:Losing Battle (Score:4, Informative)
Visual Studio is an IDE. You can swap out the Microsoft compiler if you don't like it.
Two Solutions (Score:5, Insightful)
Programmers love to use the cop-out
"Premature Optimization is the root of evil"
dogma, which is complete bullshit. It tells me your mindset is: "We'll optimize it later."
Except later never comes. /Oblg. Murphy's Computer Law: [imgur.com]
* There is never time to do it right, but there is always time to do it over.
As Fred Brooks said in The Mythical Man-Month: "Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowcharts; they'll be obvious."
Which can be translated into the modern vernacular as:
* Show me your code and I'll wonder what your data structures are,
* Show me your data and I'll already know what your code is
There are 2 solutions to this problem of crappy library code.
1. You are benchmarking your code, ALONG THE WAY, right?
Most projects "tack-on" optimization when the project is almost completed. This is completely BACKWARDS. How do you know which functions are the performance hogs when you have thousands to inspect?
It is FAR simpler to be constantly monitoring performance from day one. Every time new functionality is added, you measure. "Oh look, our startup time went from 5 seconds to 50 seconds -- what the hell was just added?"
NOT: "Oh, we're about to ship in a month, and our startup time is 50 seconds. Where do we even begin in tracking down thousands of calls and data structures?"
I come from a real-time graphics background -- aka games. Every new project our skeleton code runs at 120 frames per second. Then as you slowly add functionality you can tell _instantly_ when the framerate is going down. Oh look, Bob's latest commit is having some negative performance side effects. Let's make sure that code is well designed, and clean BEFORE it becomes a problem down the road and everyone forgets about it.
2. You have a _baseline_ to compare against? Let's pretend you come up with a hashing algorithm, and you want to know how fast it is. The *proper* way is to
* First benchmark how fast you can slurp data from a disk, say 10 GB of data. You will never be FASTER than this! 100% IO bound, 0% CPU bound.
* Then, add a single-threaded benchmark where you just sum bytes.
* Maybe, you add a multi-threaded version
* Then you measure _your_ spiffy new function.
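A rough sketch of such a harness, with a made-up hash_my_way() standing in for the function under test and an assumed local test file:

```cpp
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <fstream>
#include <iterator>
#include <numeric>
#include <vector>

// Hypothetical "spiffy new function" -- FNV-1a used purely as a placeholder.
static std::uint64_t hash_my_way(const std::vector<char>& data)
{
    std::uint64_t h = 0xcbf29ce484222325ULL;
    for (char c : data) { h ^= (unsigned char)c; h *= 0x100000001b3ULL; }
    return h;
}

template <typename F>
static double time_ms(F&& f)
{
    auto t0 = std::chrono::steady_clock::now();
    f();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main()
{
    // Baseline 1: how fast can we even get the bytes off disk? (pure I/O)
    std::vector<char> data;
    double read_ms = time_ms([&] {
        std::ifstream in("big_test_file.bin", std::ios::binary);   // assumed test input
        data.assign(std::istreambuf_iterator<char>(in), std::istreambuf_iterator<char>());
    });

    // Baseline 2: trivial single-threaded pass that just sums bytes (CPU floor).
    std::uint64_t sum = 0;
    double sum_ms = time_ms([&] {
        sum = std::accumulate(data.begin(), data.end(), std::uint64_t{0},
                              [](std::uint64_t a, char c) { return a + (unsigned char)c; });
    });

    // The function we actually care about, judged against those baselines.
    std::uint64_t h = 0;
    double hash_ms = time_ms([&] { h = hash_my_way(data); });

    std::printf("read: %.1f ms, byte-sum: %.1f ms (sum=%llu), hash: %.1f ms (h=%llu)\n",
                read_ms, sum_ms, (unsigned long long)sum, hash_ms, (unsigned long long)h);
}
```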
Library Vendors, such as Dinkumware who provide the CRTL (C-Run Time Library), _should_ be catching these shitty performance bugs, but sadly they don't. The only solution is to be proactive.
The zeroth rule in programming is:
* Don't Assume, Profile!
Which is analogous to what carpenters say:
* Measure Twice, Cut Once.
But almost no one wants to MAKE the time to do it right the first time. You can either pay now, or pay later. Fix the potential problems NOW before they become HUGE problems later.
And we end up in situations like this story.
Re: (Score:2)
All well and good in theory, but have fun being beaten to market by your competitor who made it work to the customer's satisfaction faster because they didn't do in-depth performance testing every step of the way and just made sure it wouldn't be noticeably slow overall.
I'm all for performance optimization while developing solutions so long as you aren't adding steps to delivering a solution that have no business value. If a widget has to function fast, by all means, test the crap out of it early and often
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
You only ship bug free code? Someone is lying, to yourself perhaps.
Re: (Score:2)
I worked in plenty of organizations that only shipped bug free software.
I personally had only one single bug (created by myself) delivered into production the last 30 years.
However, in recent years I often worked in organizations that unfortunately accepted bugs going into production ... probably a reason why I don't work for them anymore ... it is just too annoying to have a stupid non-working process and not be able to use the tools you want, etc.
Re: (Score:2)
It's a stretch to claim you only ship code with no known bugs. To claim you ship bug free code is just silly.
Non 'show stoppers' are documented and worked around, every day.
Re: (Score:2)
There never was a defect/bug report in my career blaming the defect on me, except the one I mentioned (that is, _production code_).
And as I said before: most of the time I worked the last 30 years in teams that had ZERO bugs in production. The issue trackers etc. prove that.
The severity "showstopper" or "minor" has absolutely nothing to do with that.
Re: (Score:2)
Further the inefficiency is in the STL implementation.
Marketing and shipping dead lines are always important. We ship real software to real users who pay real dollars. 50K per seat per year. We raised 12 million in our IPO back in 1996. We are 9 billion market cap today. I stayed all the way through. The architecture I designed and implemented back in 1996 is scaling up
Re: (Score:3)
"Premature optimisation is the root of all evil"
is an aphorism that is exactly trying to get across what you say at the end
"Don't assume, Profile!"
Basically, the guy that originally said it was trying to say you can't guess what the problems are going to be before you lay the code down. Write the code correctly, but don't try to tinker with your scheduling algorithm to make it provably optimal when it's going to be dwarfed by order-of-magnitude problems with the network code.
Re:Two Solutions (Score:4, Insightful)
You are arguing against the misquote: 'Early optimization is the root of all evil'.
Not the accurate quote: "Premature optimization is the root of all evil'.
If you know a block of code is low-level, tight, and potentially slow, it is not premature to write it with efficiency in mind from day one.
Re: (Score:2)
Unless you know it is only called once after the software is deployed.
Or you know you have to ship in a few days, and writing it "perfect" takes longer than those days left.
Should I continue? I could probably find 200 reasons why "premature optimization" is as unpleasant as other "premature ..." things.
Re: (Score:2)
If you know that, it's 'premature' to optimize it. Unless of course the 'one time run' will take weeks, better test it against real world datasets. I've seen DBAs lock up systems with update scripts that would have run for _years_ (for apparently simple schema updates).
Not all early optimization is premature. Redefining 'premature' as you just did, doesn't change the basic truth.
Re: (Score:2)
You are redefining "premature" as "early".
We all are talking about "premature", you are not ;D
Re: (Score:2)
Reread the thread.
Re: Two Solutions (Score:3, Informative)
Check the links (Score:2)
Check the links, decent code and analysis. Short and simple. I recently found a very similar bug in both PHP and HHVM with their trim() function (and variants thereof). In both PHP and HHVM, trim() unconditionally allocates more memory, even if there is no whitespace on either end of the string to trim. It is faster to write PHP code to check for whitespace on both ends and then conditionally call trim() on a string.
Re: (Score:2)
When you have a moment, can you submit a bug over at bugs.php.net?
Re: (Score:2)
I'm still trying to find a small isolated test case, which is proving difficult. The script in question is a data importer dealing with ~100,000 rows and ~10 columns at a time, so 1mil entities being trimmed. In smaller cases, built in trim is faster, but once it breaks around 100k calls, it is faster to manually check if data could be trimmed before calling trim. At 1mil, it is an order of magnitude faster. But building a small test case doesn't yield these same results. When I figure out the right combina
Re: (Score:2)
Actually, it's not (only?) a library bug, but a(lso) programmer bug. Using std::make_pair(x, x) makes a pair, duh! Complaining about it is silly. Hint: the initializer list version of insert() is faster than the pair version (at least on sane platforms, Microsoft can be weird about c++)
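For illustration, the call forms being contrasted look roughly like this; which one is faster (and by how much) depends on the standard-library implementation:

```cpp
#include <map>
#include <string>
#include <utility>

void demo(std::map<int, std::string>& m)
{
    // Pair form: make_pair builds a temporary pair up front -- the path the
    // comment above says can be slower on some implementations.
    m.insert(std::make_pair(1, std::string("one")));

    // Braced / initializer-list style form being recommended above.
    m.insert({2, "two"});

    // For completeness: emplace forwards its arguments and constructs the
    // element in place, another common way to avoid a temporary pair.
    m.emplace(3, "three");
}
```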
Sounds like a creative way to say that... (Score:2)
It's a myth... (Score:2)
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
So it's highly dependent on what you think could happen, and what satisfies your sense of sufficiency.
It's called programming. The [computer | self-driving car | AI | programmable girlfriend] is no better than the person who wrote the software.
Despite all that's been invested in development methodology, this remains a black art.
That's why I always laugh when a computer science graduate gets on his high horse. Those of us in the trenches know better.
Story Time! (Score:2)
The issue might not be noticeable on a small amount of data. However, I use this piece of code to move gigabytes of data every day.
CarbonBlack is a perfect example (Score:2)
We were required to install it on our Linux servers - we run CentOS (same as RH). Every few days, the stupid monitor is suddenly eating 99%-100% of the CPUs... for *hours*. Overnight.
I attached strace to it, and it's in some insanely tight loop, looking at its own threads.
Maybe if I prove that it's doing it on multiple servers (it is, but I have to catch it - nothing's reporting this, unless it runs the system so hard it throws heat-based machine checks), and put a ticket in, and *maybe* the team that forc
Lazy is the rule (Score:3)
Being an old fart, in my day, I remember the worst performance problems were caused by programmers with their own badly written library of functions and objects that they included everywhere, most of those were from their very first weeks of being a programmer and they sucked badly.
Re: (Score:2)
The opposite extreme is where everyone linked their project to a left-pad package and the developer pulls the package in a hissy fit, breaking the Internet at the same time. A left-pad function is something that every programmer should be able to pull out of their ass.
http://www.haneycodes.net/npm-left-pad-have-we-forgotten-how-to-program/ [haneycodes.net]
A Very Old Performance Problem, Mostly Forgotten (Score:3, Interesting)
Re: (Score:2)
So, I profile that code, I find, hm, odd, this loop is taking a lot of time.
It's accessing some array. Weird, array accesses are normally blazingly fast.
Oh look, it's a two dimensional array, that's something you don't see every day!
Let's play a bit with the code, hey, it's fast now that I'm looping through it differently... hm, I wonder how that thing is laid out in memory... oh, could it be that it is causing cache line misses / page faults / disk cache misses (yes, those abstractions are present at every
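A minimal sketch of the loop-order difference being described (each row is stored contiguously, so the inner loop should walk the last index):

```cpp
#include <cstddef>
#include <vector>

// Two ways to sum a 2D array. The arithmetic is identical; only the memory
// access order differs.
double sum_row_major(const std::vector<std::vector<double>>& a)
{
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)          // walk each row...
        for (std::size_t j = 0; j < a[i].size(); ++j)   // ...contiguously: cache-friendly
            s += a[i][j];
    return s;
}

double sum_column_major(const std::vector<std::vector<double>>& a)
{
    double s = 0.0;
    std::size_t cols = a.empty() ? 0 : a[0].size();     // assumes a rectangular array
    for (std::size_t j = 0; j < cols; ++j)              // jump between rows on every access:
        for (std::size_t i = 0; i < a.size(); ++i)      // each read is likely a cache miss
            s += a[i][j];
    return s;
}
```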
Not bugs until they cause problems (Score:2)
The rule of thumb for programming anything is, first make it work, then make it work better / faster.
If the first pass works well enough and fast enough, it doesn't matter if the code was written in an efficient manner. If somebody used bubble sort for an array of 5 items, who cares? If the array becomes larger, now you have a performance bug.
It's literally wasteful to spend time on performance enhancement before you know which performance problems actually occur in real life. Another name for premature perfor
Re: (Score:2)
In my experience, way too many programmers go for the obvious, short-cutting, direct, layer-breaking solution because it only requires writing a few lines of code. Out of an infinite set of possible solutions for the problem, they choose the one that saves them the most characters typed.
Experienced ones however will reduce the solution space and reject solutions that don't adhere to architectural criteria right from the start, and write something that looks a little more complicated but in fact results in
Re: (Score:3)
[...] looks a little more complicated [...]
A different programmer looks at it a year later, determines that it looks too complicated, and refactors the code to be simpler, "in a far more elegant, better scaling and maintainable solution."
coder vs programmer (Score:2)
Well, that's one of the many things that separate programmers from coders as I designate them. I.e., the ones who know and do vs. the ones who have a limited clue at most but do it nonetheless and behave like they'd be the gods of algorithm and program development. In my book, and in my area of work, a code that gives the result but it's 2-7-whatever times slower than it could be with some actual knowledge besides tapping c
Re: (Score:2)
Re: (Score:2)
IT guy! My mail is crashing! It says assertion failed! Fix it with your coding skills! Quickly! IT guy stop surfing donkey porn and fix the code now! I need my mail!
Sorry. I work in InfoSec, not Help Desk. Call 1-800-IBM-HELP.
Re: (Score:3)
You really must grow out of help desk and learn some useful stuff...imbecile fucker. And stop pretending you understand something about computers or even know how to read English errors. You are a fucking waste of space on this planet.
That's kind of harsh to say to an AC. :P
Re: (Score:2)
Funny how the guy claiming he's really great at thinking through things can't catch obvious errors in his own writing.
Perfect grammar isn't a requirement for thinking through problems. My obvious error is the usage of "than" (comparison) rather than "then" (time). A common mistake that I make because it's related to my learning disability of being unable to distinguish similar-sounding words (i.e., than/then, glass/grass or ear/year) due to hearing loss in one ear. Than/then was a particular challenge until a college instructor explained the differences.
Comment removed (Score:5, Funny)
That's my speedup loop, you clod! (Score:4, Funny)
Hold the phone. (Score:3)
Isn't this why we have profilers like Valgrind to identify slow functions?
Re: (Score:2)
Not only that, but "more slowly than it should have" all too often ignores sanity checks and edge-case processing, which slow down what would run fast 95% of the time but break the other 5% of the time.
Re: (Score:2)
intra-program sanity checks shouldn't be necessary (or ever deployed!) outside of debug mode.
Unless sanity checks are a bottleneck (rare) they should be left in. Users will do things to your program that you never imagined. When I deploy, I use a custom assert that emails me a stack trace and keylog on failure. That is infinitely more valuable than a useless complaint like "Your program keeps crashing for no reason".
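Something along these lines (report_failure() is a hypothetical stand-in for the e-mail/stack-trace hook; here it just prints):

```cpp
#include <cstdio>
#include <cstdlib>

// Hypothetical reporting hook: in the setup described above this would e-mail a
// stack trace and recent key log; here it simply writes to stderr.
static void report_failure(const char* expr, const char* file, int line)
{
    std::fprintf(stderr, "ASSERTION FAILED: %s at %s:%d\n", expr, file, line);
    // ...send the report to the developers here...
}

// A "soft" assert that stays enabled in release builds: report and keep going
// (or call std::abort() instead, if crashing with a report is preferable).
#define CHECK(expr)                                        \
    do {                                                   \
        if (!(expr))                                       \
            report_failure(#expr, __FILE__, __LINE__);     \
    } while (0)

int main()
{
    int users = 0;
    CHECK(users > 0);   // fires, gets reported, program continues
    return 0;
}
```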
Re: (Score:3)
It's bad assumptions, like cache misalignment or TLB thrashing. The same code will run full-speed on one system but 10x slower on another with slightly different cache characteristics. Performance might even change with an upgrade to the OS, or even just a system library that re-arranges memory layout in a trivial way.
That's why I'd like to see a nondeterministic compiler some day. Take dozens of good-ish but alternative decisions differently every time. I once did some very rudimentary checks with a Scheme compiler that already generated very good code but there were some weird things going on with inlining. Tiny changes in inlining thresholds were changing the performance deterministically with a factor of several. I suspected the way the code interacted with the CPU was the culprit. But since all CPUs are different (a
Re: (Score:2)
There are usually a handful of "best" solutions, depending on your demands. There is a best solution when it comes to computing time. Another one for memory footprint. And so on.
So you cannot find a solution that is the best in all situations. But you can determine whether a solution is not the best in any situation.
Re: (Score:2)
And for nearly all applications, they don't give a shit. Computers are today faster than they need to be for nearly all applications the average office runs into.
Re:All too true (Score:5, Insightful)
I came here to say this, mostly.
I *know* that there are plenty of places in our software that I could spend an hour or two, and rewrite an algorithm to run in 1/5th the time. And I don't care at all, because the cost is too low to measure, and usually, performance bottlenecks are elsewhere.
Who really cares if I can get a loop to run in 800ns instead of 1500ns, when the real bottleneck is a complex SQL query 11 lines up that joins 11 tables together and takes 3 full seconds to run?
Scale it... (Score:2)
Indeed. A human being can not even perceive a difference between 1 millisecond and 1 microsecond.
But, repeated a million times, the former turns into nearly 17 minutes, whereas the latter is still merely a second. Food for thought...
Re: (Score:2)
Re: (Score:2)
Indeed. A human being can not even perceive a difference between 1 millisecond and 1 microsecond.
But, repeated a million times, the former turns into nearly 17 minutes, whereas the latter is still merely a second. Food for thought...
And you say that as if most programs/applications nowadays required that many iterations, or even a minute, to complete a run. Also, oftentimes programs/applications are web-based and/or deal with database stuff that always has a bottleneck issue elsewhere, as the GP already stated.
Anyway in programming, I always prefer correctness over speed (and I believe all computer scientists prefer the same). You can always try to optimize a program as long as it runs correctly. If a program isn't running correctly, it i
Re: (Score:2)
The observation I posted is just as applicable to the SQL-queries and even the database-servers themselves.
Re: (Score:2)
It's not even clear if some of the stuff he says is a bug. For example, his aligned memory allocation example takes 100ns longer than it "should" when calling an Intel-specific function. It's not at all clear what the Intel function does differently, if anything... Seems to be part of one of their frameworks that makes cross-platform aligned memory allocation easier.
It may not be comparing like-for-like. I have a feeling Microsoft will respond to his bug report with little enthusiasm.
Re:All too true (Score:5, Insightful)
Same here
The users are getting a correct result. Good.
The developers moved on to something else that's also important. Good.
The machine is doing 15% more work than strictly necessary... Is it slowing down the users? No. Are we getting hammered by the electricity bill? No. Is the machine getting tired? No. So what exactly is the problem?
Like the real Donald (Knuth) said: "premature optimization is the root of all evil (or at least most of it) in programming".
Re: (Score:2)
I would say that performance is probably dead last on any software company's mind, unless something is so slow that it gathers user complaints or affects the use of a device (for example, an embedded controller in a vend-a-goat machine has a software loop that fires off every five seconds but winds up taking 6-7 seconds to complete, or a daily backup takes 26-27 hours to complete.)
Performance can always be improved, but oftentimes, it is a case of diminishing returns. In reality, it will not
Re: (Score:2)
an embedded controller in a vend-a-goat machine has a software loop that fires off every five seconds.
Is 17,280 goats per day actually necessary?
Re: (Score:2)
Re: (Score:3)
Who really cares if I can get a loop to run in 800ns instead of 1500ns, when the real bottleneck is a complex SQL query 11 lines up that joins 11 tables together and takes 3 full seconds to run?
To misquote a not-quite-famous Congress critter: A nanosecond here, a nanosecond there, eventually we are talking about real time (minutes).
You do not exist in a vacuum. You think your application is the only one but I have hundreds of "applications", written by people just like you, who thought that the real bottleneck is elsewhere so why worry about this particular bottleneck?
This attitude is why I can notice the delay in a particular window opening up or why I see "hiccups" in the smoothness of my screen
Re: (Score:3)
It would take as long as it remains cheaper to run the inefficient query than to recode it.
Re: (Score:3)
Sayeth the noob who didn't think about how long testing the change would take...
Agreed that replacing tested/working code with new "more efficient" code does incur a re-validation cost.
On the other hand, that's also an argument for writing the more-efficient implementation the first time, rather than waiting until some later release. Since you know it's all going to have to go through the testing cycle at least once, why waste your QA group's time testing slow/throwaway code, when you could have them spend that time testing the code you actually want your program to contain? (Assumin
Re: (Score:3, Insightful)
Maybe in your world, but when weighed down with sloggy operating systems and minimal memory (typical of many Windows 10 installations TODAY), code can get pretty slow.
For a very long time now, there have been libs that add breakpoints to examine how long processes are taking, think: debug mode, that can pinpoint problem areas pretty easily. Not enough coders use them.
It gets worse when a user has 94 Chrome tabs open, something in Office, and an AV app running.... all on a laptop whose processor speed is me
Re: (Score:2)
You'd think someone who's been here as long as you would have heard of profilers.
Your first tool should be the performance monitor. In my experience most slow tasks will show 0% CPU utilization for a large part of the wait, because some coder has never heard of firing queries asynchronously and then doing local work while waiting.
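A minimal sketch of that pattern with std::async (run_query() is a made-up stand-in for whatever blocking database or network call is involved):

```cpp
#include <cstdio>
#include <future>
#include <string>
#include <vector>

// Hypothetical stand-in for a slow database / network call.
static std::vector<std::string> run_query(const std::string& sql)
{
    // ... blocks on the server for a while ...
    return {"row1", "row2"};
}

int main()
{
    // Kick the query off on another thread instead of blocking on it.
    auto pending = std::async(std::launch::async, run_query, std::string("SELECT ..."));

    // Do the local work that doesn't depend on the result.
    std::puts("formatting the report shell, loading templates, etc.");

    // Only now block, and only for however long the query still needs.
    std::vector<std::string> rows = pending.get();
    std::printf("query returned %zu rows\n", rows.size());
}
```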
Re: (Score:2)
Sure, I've heard of and have used profilers.
But performance monitors often only give point-to-point execution times, not "network I/O took 3242ms" or "Auth timed out 3x" sorts of details.
I like logs, syslogs, and other methods of determining execution problems, too, because sometimes: it's not actually the code, it's the host, the UI, the wm, the phase of the moon. Best to know.
Re: (Score:2)
You see a client sitting at 0% utilization, you click 'break all' in the debugger. You're looking at the offending block. It's real useful.
Then you start into the logs to see how it got there, if it's not obvious.
It's a particularly useful method when faced with unfamiliar code that is sucking big wet donkey balls.
Re: (Score:2)
I like execution-time libs that give me full stats, so the database guy doesn't argue with the network guy who doesn't argue with the team that did middleware, etc etc.
DevOps, SCRUM, and other continuous development systems often eschew this, because they're running under fire control rather than improving incrementally. This said, I've seen a few SCRUM teams that were fast and surgical and rightly proud of their work.
Re: (Score:2)
A good team can produce results with any POS methodology, even SCRUM. Typically despite the formal methods, not because of them.
Stats can prevent circular blame pointing? Not in my experience with the 'barely competent'.
When network operations go from acceptable to too slow, it's been my experience that people who don't know what they are doing have been fucking with them, e.g. six outer joins to the same table, or queries that were building temp tables but nobody noticed, etc. Sure sometimes the data volu
Re: (Score:2)
I guess the premise needs to be someone above barely competent.
We'll agree that your method works for fire control. Projects should NOT BE fire control in most cases. Sadly, many are. While I like to be the Red Adair of systems malfunctions, I'll also take a less stressful life. At the end of fire control, there is often a pile of ashes. It's possible to lead a long and successful life, and not deal with but a small pile of ashes. Others seem to need them for daily lunch.
Re: (Score:2)
Fires happen. Even with the best of teams. HR gets involved and a seat is filled. Six months later you get back to a block of code and WTF happened? How did this pass QA and code review?
Nobody likes it, but I've yet to find a long-term solution. I've quit, more than once, over teams being ruined by rapid growth and "he was the best we interviewed". He was clearly a net-negative worker who, at best, could have maintained some simple reports. Shouldn't have gotten out of probation, even if you accept the premis
Re: (Score:3)
Videogame programmers care *very much* about all these sorts of performance issues. Not coincidentally, many videogame programmers use custom containers, and nearly ALL of them use custom allocators for exactly this reason.
That being said, not everyone programs real-time pseudo-simulations like we do. But you should very much care about ensuring the most basic building blocks of code everyone uses are highly optimized at the very least. The more often code is called, the more attention should be paid to
Re: (Score:3)
Yes, ... umm... no.
Not anymore. Yes, it still matters with high end, AAA titles. For everyone else, there's Unity. And Unreal Engine. And ... something I forgot now. Game programming used to be one of the few areas where you really needed top programmers that can come up with creative ways to cut a few extra cycles. Just think of the infamous 1/sqrt(x) [wikipedia.org] hack.
I wouldn't expect many who currently claim to be game programmers to understand it. Let alone come up with something close to it.
Even in games, efficien
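For reference, the 1/sqrt(x) hack mentioned above is the well-known Quake III fast inverse square root; a rough modern-C++ rendering (memcpy instead of pointer punning keeps it well-defined) looks like this:

```cpp
#include <cstdint>
#include <cstring>

// Approximate 1/sqrt(x) via the famous bit-level hack plus one Newton-Raphson step.
float fast_rsqrt(float x)
{
    std::uint32_t i;
    std::memcpy(&i, &x, sizeof(i));        // reinterpret the float's bits as an integer
    i = 0x5f3759df - (i >> 1);             // the "magic number" initial guess
    float y;
    std::memcpy(&y, &i, sizeof(y));
    y = y * (1.5f - 0.5f * x * y * y);     // one Newton-Raphson iteration refines the guess
    return y;
}
```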
Re: (Score:2)
Computers are today faster than they need to be for nearly all applications the average office runs into.
For nearly everything except rebooting. I still don't understand the problem. It's better than it was, but we're still back in the old days of waiting for the TV to warm up.
Re: (Score:2)
When you can fix those to get them to run in seconds you get really popular. (Perhaps not so popular with the original "developer", but that's part of life, eh?)
Re: Evil bugs (Score:2)
Re: (Score:3)
Fundamental library code is either as fast as possible, or useless. You never know who will use the library code or how, so you have to assume plentiful use cases where every instruction matters. The std::map code is particularly bad (even in Clang).
When you're delivering the end product, sure, don't optimize until proven necessary. That's a different world than library code. Not every thing is your thing, surprising as that may be.
Re: (Score:2)
Depends. It's a bug if the code is doing something different than what it's supposed to be doing.
If your sort algorithm is supposed to run in O(N log N) but it actually runs in O(N^2) then I'd call that a bug. Algorithmic complexity can be a requirement just as important as the output. After all, the output hardly matters if your users die of old age before the algorithm finishes.
If your code is performing unnecessary work then that might be a bug, depending on the aut
Re: (Score:2)
In many simple situations, yes.
But there are plenty of cases where you are operating on such a large scale that using programming resources to optimize performance is a good tradeoff. As an example: in one customer case a 1% total increase in efficiency maps to 5000+ euro/month in the costs they pay to host the solution; on a yearly basis that buys quite a few programmer-days of optimization.
Re: (Score:2)
If performance sucks, buy a faster computer. Speed covers a multitude of sins.
You're not a programmer, right? See things aren't so simple. A poor design decision will still be slow on a faster computer, and the speedup you get will not correlate to the extra money spent. Spending double on a new computer will not cause your program to finish in half the time if it's waiting on a poorly constructed network request.
Re: (Score:2)
You're not a programmer, right?
Not professionally. I work in IT Support.
Spending double on a new computer will not cause your program to finish in half the time if it's waiting on a poorly constructed network request.
Those days are long over. I switched out my quad-core processor for an eight-core processor. Performance overall didn't improve much because most of my applications are single-core.
Re: (Score:3)
Re: (Score:2)
Assuming that there aren't other disadvantages, if there's a faster algorithm or faster implementation that achieves the same goal, the slower approach is slower than it should be. For instance, using bubble sort when you could use quicksort.
Just barely meeting your requirements leaves you vulnerable to a competitor who can do a better job.
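To make the asymptotic point concrete, a minimal sketch of the two choices (bubble sort is harmless for a handful of elements, but its O(N^2) cost becomes the "performance bug" once N grows; std::sort is the O(N log N) drop-in):

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// O(N^2): fine for 5 elements, a performance bug for 500,000.
void bubble_sort(std::vector<int>& v)
{
    for (std::size_t i = 0; i + 1 < v.size(); ++i)
        for (std::size_t j = 0; j + 1 < v.size() - i; ++j)
            if (v[j] > v[j + 1])
                std::swap(v[j], v[j + 1]);
}

void sort_fast(std::vector<int>& v)
{
    std::sort(v.begin(), v.end());   // O(N log N) in typical implementations
}
```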
Re: (Score:3)
Real programmers use only the 1 and 0 keys
Keys? Real programmers use jumper wires directly on the memory bus pins of the CPU.
Re: (Score:2)
No. Butterflies. [xkcd.com]
Re: (Score:3)
Re: (Score:3)
Klingon coding...
Copy con: program.exe
Then enter structure, opcodes and data with Alt-keypad.
Comments...where would those go?
Re: (Score:3)
You can optimize for dev time or CPU time. Which is cheaper?
The inexcusable is optimizing for neither. e.g. server side javascript.
Re: (Score:2)
Two types of performance. If you don't get 'time to market' nobody will ever see the delay to complain about it.
Which isn't an excuse for just plain bad programming. Which is a constellation of potential mistakes.