Programming As If Performance Mattered 615

Junks Jerzey writes "Saw the essay 'Programming as if Performance Mattered', by James Hague, mentioned at the Lambda the Ultimate programming language weblog. This is the first modern and sensible spin on how optimization has changed over the years. The big 'gotcha' in the middle caught me by surprise. An inspiring read." Hague begins: "Are particular performance problems perennial? Can we never escape them? This essay is an attempt to look at things from a different point of view, to put performance into proper perspective."
  • Damn! (Score:4, Funny)

    by Spruce Moose ( 1857 ) on Wednesday May 05, 2004 @11:40PM (#9070662)
    If only I had written my first post program with performance in mind, I might not have failed it!
  • by Anonymous Coward on Wednesday May 05, 2004 @11:41PM (#9070667)
    Is the time it takes me to do the performance optimization worth it in time or money?
    • by irokitt ( 663593 ) <archimandrites-iaur AT yahoo DOT com> on Wednesday May 05, 2004 @11:56PM (#9070740)
      Probably not, but if you are working on an open source project, we're counting on you to make it faster and better than the hired hands at $COMPANY. That's what makes OSS what it is.
    • Is the time it takes me to do the performance optimization worth it in time or money?

      The question I ask is, can the server handle the load any other way? As far as my company is concerned, my time is worth nothing. They pay me either way. The only issue is, and will always be, will it work? Throwing more hardware at the problem has never solved a single performance problem, ever.

      We've been slashdotted twice. In the pipeline is a database request, a SOAP interaction, a custom apache module, and an X
    • "Is the time it takes me to do the performance optimization worth it in time or money."

      To a certain extent. I've seen that excuse for some pretty bad/slow code out there.

      Writing efficient and somewhat optimised code is like writing readable, extensible code: if you design and write with that in mind, you usually get 90% of it done for very, very little (if any) extra work. Bolt it on later and you usually get a mess that doesn't actually do what you intended.

      A good programmer should always keep both clean c
  • What annoys me (Score:4, Insightful)

    by Anonymous Coward on Wednesday May 05, 2004 @11:42PM (#9070672)
    is that MS Word 4 did all I need, and now the newest Office is a thousand times the size and uses so much more CPU and RAM but does no more.

    A sad indictment.
  • by Nick of NSTime ( 597712 ) on Wednesday May 05, 2004 @11:43PM (#9070678)
    I code in managed environments (.NET and Java), so I just let some mysterious thing manage performance for me.
    • by metlin ( 258108 ) * on Thursday May 06, 2004 @12:08AM (#9070786) Journal
      Contrary to popular belief, managed code environments do optimize code a whole lot more than you would think!

      Joe Beda [eightypercent.net], the guy from Microsoft behind Avalon, had a discussion on Channel9 where he talked about why managed code is not that bad a thing after all.

      Like I mentioned in an earlier post, managed code helps optimize the code for some of the bad programmers out there who cannot do it themselves, and takes care of a lot of exceptions and other "troublesome" things :) So, in the long run, it may not be that bad a thing after all.

      There are two facets to optimization - one is optimization of the code per se, and the other is the optimization of the project productivity - and I think managed code environments do a fairly good job of the former and a very good job of the latter.

      My 0.02.
  • by ObviousGuy ( 578567 ) <ObviousGuy@hotmail.com> on Wednesday May 05, 2004 @11:44PM (#9070683) Homepage Journal
    You can spend all your time optimizing for performance, and by the time you finally release your product, your competitor, whose main objective was to get the product out the door faster and who uses a slower algorithm, is already first in mindshare with your customers. Not only that, the processors you thought you were targeting are already a generation behind, and the algorithm that was going to hold back your competition runs perfectly fast on new processors.

    Performance gains occur at the hardware level. Any tendency to optimize prematurely ought to be avoided, at least until after v1.0 ships.
    • by corngrower ( 738661 ) on Wednesday May 05, 2004 @11:52PM (#9070724) Journal
      Any tendency to optimize prematurely ought to be avoided, at least until after v1.0 ships.


      Assuming there is a second version, which there may not be because potential customers found that the performance of v1.0 sucked.

    • by metlin ( 258108 ) * on Thursday May 06, 2004 @12:00AM (#9070755) Journal
      Well said.

      However, I will dispute the claim that performance gains happen only at the hardware level - although programmers cannot really optimize every tiny bit, there is no harm in encouraging good programming.

      The thing is that a lot of programmers today have grown NOT to respect the need for performance - they just assume that the upcoming systems will have really fast processors and infinite amounts of RAM and disk space, and write shitty code.

      I agree that, as Knuth said, premature optimization is the root of all evil. However, writing absolutely non-optimized code is evil in itself - when a problem can be simplified in order and time, it's criminal not to :)

      A lot of times, programmers (mostly the non-CS folks who jumped on the programming bandwagon) write really bad code, leaving a lot of room for optimization. IMHO, this is a very bad practice, something we have not really been paying much attention to because we always have faster computers coming up.

      Maybe we never will hit the hardware barrier, but if we ever do, I'm sure this will show through.
      • by techno-vampire ( 666512 ) on Thursday May 06, 2004 @12:24AM (#9070868) Homepage
        The thing is that a lot of programmers today have grown NOT to respect the need for performance - they just assume that the upcoming systems will have really fast processors and infinite amounts of RAM and disk space, and write shitty code.

        That's not the only reason. Programmers usually get to use fast machines with lots of RAM and disk space, and often end up writing programs that need everything those machines have.

        Back in the DOS days, I worked on a project that had a better way of doing things. We kept one machine of only reasonable speed as the testbed. Its memory setup wasn't tuned at all, because we didn't expect our customers to know how to do that, and the programs we were writing couldn't rely on expanded or extended memory. If what you wrote wouldn't run on that machine, it didn't matter how well it worked on your own machine; you had to tweak it to use less memory.

      • by Moraelin ( 679338 ) on Thursday May 06, 2004 @05:48AM (#9071855) Journal
        Well, yes and no.

        I still don't think you should start doing every single silly trick in your code, like unrolling loops by hand, unless there's a provable need to do so. Write clearly, use comments, and use a profiler to see what needs to be optimized.

        That is coming from someone who used to write assembly, btw.

        But here's the other side of the coin: I don't think he included better algorithms in the "premature optimization". And the same goes for having some clue of your underlying machine and architecture. And there's where most of the problem lies nowadays.

        E.g., there is no way in heck that an O(n * n) algorithm can beat an O(log(n)) algorithm for large data sets, and data sets _are_ getting larger. No matter how much loop unrolling you do, no matter how cleverly you replaced the loops to count downwards, it just won't. At best you'll manage to fool yourself that it runs fast enough on those 100-record test cases. Then it goes into production with a database with 350,000 records. (And that's a small one nowadays.) Poof, it needs two days to complete now.

        And no hardware in the world will save you from that kind of a performance problem.

        E.g., if most of the program's time is spent waiting for a database, there's no point in unrolling loops and such. You'll save... what? 100 CPU cycles, when you wait 100,000,000 cycles or more for a single SQL query? On the other hand, you'd be surprised how much of a difference it can make if you retrieve the data in a single SQL query, instead of causing a flurry of 1000 individual connect-query-close sequences.

        (And you'd also be surprised how many clueless monkeys design their architecture without ever thinking of the database. They end up with a beautiful class architecture on paper, but a catastrophic flurry of queries when they actually have to read and write it.)

        E.g., if you're using EJB, it's a pointless exercise to optimize 100 CPU cycles away, when the RMI/IIOP remote call's overhead is at least somewhere between 1,000,000 and 2,000,000 CPU cycles by itself. That is, assuming that you don't also have network latency adding to that RPC time. On the other hand, optimizing the very design of your application, so it only uses 1 or 2 RPC calls, instead of a flurry of 1000 remote calls to individual getters and setters... well, that might just make or break the performance.

        (And again, you'd be surprised how many people don't even know that those overheads exist. Much less actually design with them in mind.)

        So in a nutshell, what I'm saying is: Optimize the algorithm and design, before you jump in to do silly micro-level tricks. That's where the real money is.
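
        A minimal sketch of the single-query-versus-flurry point above, in Python (the standard library's sqlite3 stands in for a real database; the table and column names are invented for illustration):

            import sqlite3

            conn = sqlite3.connect(":memory:")
            conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
            conn.executemany("INSERT INTO orders VALUES (?, ?)",
                             [(i, i * 1.5) for i in range(1000)])

            # The "flurry of queries": one round trip per record.
            def totals_one_by_one(ids):
                return [conn.execute("SELECT total FROM orders WHERE id = ?",
                                     (i,)).fetchone()[0] for i in ids]

            # The batched version: fetch the whole working set in one query.
            def totals_batched(ids):
                marks = ",".join("?" * len(ids))
                rows = conn.execute(
                    "SELECT id, total FROM orders WHERE id IN (%s)" % marks, list(ids))
                return dict(rows.fetchall())

        Against an in-process SQLite file the gap is small; against a networked database, where every query pays connection and latency overhead, collapsing the flurry into one statement is exactly the design-level win described above.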
        • by cat_jesus ( 525334 ) on Thursday May 06, 2004 @11:30AM (#9074630)
          E.g., there is no way in heck that an O(n * n) algorithm can beat an O(log(n)) algorithm for large data sets [...] And again, you'd be surprised how many people don't even know that those overheads exist. Much less actually design with them in mind.
          Not much surprises me these days. I had to rewrite a SQL trigger one time and I was very concerned about optimization because that sucker would get called all the time. I was shocked to discover that in this one particular instance a cursor solution was more efficient than a set-processing approach. I was so surprised that I wrote up a very nice paper about it for a presentation to my department, along with standard O notation and graphs.

          No one knew what O notation was.

          Not long after that I found out about Knoppix. I burned a few disks and gave them out. Only one other person knew what Linux was. It wasn't my manager.

          Just last week one of our servers had a problem with IIS. "What's IIS?", my manager asks.

          Here are some other gems:

          "We can't put indexes on our tables! It will screw up the data!"

          "I've seen no evidence that shows set processing is faster than iterative processing" -- this one from our "Guru".

          "What is a zip file and what am I supposed to do with it?" -- from one our our senior systems programmers in charge of citrix servers.

          "What do you mean by register the dll?" -- from the same sysprog as above

          They pushed a patch out for the Sasser worm and about 2% of the machines got the BSOD. I finally decided to give the fix a try on my machine and it died too. I booted into safe mode and rolled it back. Everyone else had to get their machines reimaged because desktop support couldn't figure out what was wrong. Lucky for my neighbor, I was able to recover most of his data before they wiped it clean. He made the mistake of letting his machine loop through reboots for two days, which hosed his HD. Of course the PC "experts" couldn't recover any data because the machine wouldn't boot up.

          Yes, I am in programmer purgatory. I am reluctant to say hell because I'm sure it can get worse. No, I'm not kidding.
    • by almaw ( 444279 ) on Thursday May 06, 2004 @04:07AM (#9071609) Homepage
      > Performance gains occur at the hardware level.
      > Any tendency to optimize prematurely ought to be
      > avoided, at least until after v1.0 ships.

      Performance gains occur at the algorithm level. It doesn't matter how much hardware you throw at a problem if it needs to scale properly and you have an O(n^3) solution.
      • by Theatetus ( 521747 ) * on Thursday May 06, 2004 @06:55AM (#9072064) Journal
        It doesn't matter how much hardware you throw at a problem if it needs to scale properly and you have an O(n^3) solution.

        Well, maybe you're not ignoring it since you said "if it needs to scale properly". But that's a very crucial "if", and the "scale properly" only refers to certain situations.

        If the array you need to sort might have several million members and you won't be sorting more than a few dozen of those arrays, yes you should use an O(n lg n) or whatever sort routine. OTOH, if the array itself is smaller (a few hundred members) but you have to sort several hundred thousand of them, quicksort or merge sort will be remarkably slow compared to the much-maligned bubble sort.

        Big-O notation gives an asymptotic bound on growth, not the running time itself. For small data sets, not only is there no guarantee that the lower big-O algorithm will be faster, it's in fact usually the case that the allegedly "less efficient" algorithm will actually be faster.
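
        One way to see the constant-factor effect in practice is to race two pure-Python sorts against each other; a rough sketch (absolute numbers will vary by machine and interpreter):

            import random
            import timeit

            def insertion_sort(a):
                a = list(a)
                for i in range(1, len(a)):
                    key, j = a[i], i - 1
                    while j >= 0 and a[j] > key:
                        a[j + 1] = a[j]
                        j -= 1
                    a[j + 1] = key
                return a

            def merge_sort(a):
                if len(a) <= 1:
                    return list(a)
                mid = len(a) // 2
                left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
                out, i, j = [], 0, 0
                while i < len(left) and j < len(right):
                    if left[i] <= right[j]:
                        out.append(left[i]); i += 1
                    else:
                        out.append(right[j]); j += 1
                return out + left[i:] + right[j:]

            for n in (8, 64, 1024):
                data = [random.random() for _ in range(n)]
                for sort in (insertion_sort, merge_sort):
                    t = timeit.timeit(lambda: sort(data), number=50)
                    print(f"n={n:5d}  {sort.__name__:15s} {t:.4f}s")

        On the tiny inputs the O(n^2) insertion sort tends to win on sheer low overhead; somewhere around a thousand elements the O(n log n) sort pulls away, which is the asymptote reasserting itself.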

        • by An Onerous Coward ( 222037 ) on Thursday May 06, 2004 @10:21AM (#9073709) Homepage
          A project I was doing last semester had just what you described: thousands of arrays of twenty members each. I was still able to double the performance by switching from bubblesort to quicksort. Besides, you never know when those arrays are going to get bumped up from a few hundred members to a few thousand.

          I'm still a firm believer in the principle that bubblesort is never the way to go.
      • by Vengie ( 533896 ) on Thursday May 06, 2004 @07:33AM (#9072215)
        You're ignoring constants. Constants can sometimes be large. That is why Strassen's matrix multiply takes longer than the naive method on small matrices.

        Scarily, you have just enough knowledge to sound like you know what you're talking about. Sometimes it DOES matter how much hardware you throw at the problem, lest you forget the specialized hardware DESIGNED to crack DES.

        How about in your next computer I replace all the carry-lookahead adders with ripple-carry adders? Please look up those terms if you don't know them. I'm sure you'd be unpleasantly surprised.
    • by hankaholic ( 32239 ) on Thursday May 06, 2004 @06:32AM (#9071999)
      I was going to moderate this post "Overrated", but I'd rather just explain why you're wrong in stating that the "algorithm that was going to hold back your competition runs perfectly fast on new processors".

      Certain algorithms take more-than-proportionately longer as the data size increases. For example, if you're writing route-planning software, each additional stop on a route might cause the number of calculations required to (roughly) double.

      In such a case, hardware twice as powerful would cut the running time in half, but as soon as the user added two more data points, performance would be worse than on the original machine.

      To clarify a tad, let's say FedEx decides to optimize the routes drivers in Montana are travelling. Assume that there are 10,000 stops and 200 drivers, and that your code runs in, say, an hour on FedEx's machines.

      Assume that you've used an algorithm for which each additional data point doubles the amount of computation required. Now FedEx deciding to hire 10 more drivers means that your route planning software is going to take 2^10 times as long to plan their routes (since it doubles for each new data point, that's 2^1 for one driver, 2^2 for two, 2^3 for three...).

      The point is that tiny operations add up when you've chosen the wrong algorithm. Despite the fact that runtime was fine using FedEx's CPU farm in the original situation, your disregard for efficiency will cause the route-planning time to take not the overnight-batch-job-friendly hour, but a stunning 1024 times as long (hint: over a month).

      Say a new big fast machine enters the market, with four times the CPU power. FedEx will still need 256 times as many machines to perform the same calculations in under an hour, or at least, say, 32 times as many in order to be able to perform them overnight.

      All because you decided that choosing algorithms based on performance was poppycock.

      Prematurely optimizing on a microscopic level may be "bad", but choosing the proper algorithm can make the difference between a product with a future and a product with too many designed-in limitations to be able to handle greater-than-expected loads.

      (CS fans will note that TSP was an unrefined example to pull out given the whole P/NP thing, but that's the point -- sticky situations can and will arise for which no amount of source-level optimization will save the day.)
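
      The arithmetic behind those figures, spelled out as a toy calculation (the doubling-per-stop growth and the hardware numbers are the hypothetical ones above):

          base_hours = 1                      # the original overnight batch run
          extra_stops = 10                    # ten new drivers/stops
          slowdown = 2 ** extra_stops         # work doubles per stop -> 1024x
          print(base_hours * slowdown / 24)   # about 42.7 days: "over a month"

          hw_speedup = 4                      # the shiny new machines
          print(slowdown / hw_speedup)        # still 256x more machine-hours needed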
  • by equex ( 747231 ) on Wednesday May 05, 2004 @11:46PM (#9070690) Homepage
    I remember times when 7.14 MHz and 256K of RAM were enough to drive a multitasking windowed OS (Amiga).
    I've seen glenz vectors and roto-zoomers on the Commodore 64.
    Modern OSes, especially Windows, seem super-sluggish when you see what is possible on those old computers if you just care to optimize the code to the max.
  • Don't agree (Score:5, Interesting)

    by Docrates ( 148350 ) on Wednesday May 05, 2004 @11:51PM (#9070718) Homepage
    While the author's point seems to be that optimization and performance are not all that important, and that you can achieve better results with how you do things and not what you use, I tend to disagree with him.

    The thing is, in real-life applications, playing with a Targa file is not the same as service-critical, 300-user, number-crunching, data-handling systems, where a small performance improvement must be multiplied by the number of users/uses, by many many hours of operation and by years in service to understand its true impact.

    Just now I'm working on an econometric model for the Panama Canal (they're trying to make it bigger and need to figure out if it's worth the effort/investment) and playing with over 300 variables and 100 parameters to simulate dozens of different scenarios can make any server beg for more cycles, and any user beg for a crystal ball.
    • Re:Don't agree (Score:5, Insightful)

      by mfago ( 514801 ) on Thursday May 06, 2004 @12:08AM (#9070787)
      Not what I got out of it at all, rather:

      Clear concise programs that allow the programmer to understand -- and easily modify -- what is really happening matter more than worrying about (often) irrelevant details. This is certainly influenced by the language chosen.

      e.g. I'm working on a large F77 program (ugh...) that I am certain would be much _faster_ in C++ simply because I could actually understand what the code was doing, rather than trying to trace through tens (if not hundreds) of goto statements. Not to mention actually being able to use CS concepts developed over the past 30 years...
      • Do NOT convert to C++ under any circumstances!

        Fortran 77 sucks.

        But C++ sucks, in different ways.

        Fortran 95 is a much better language than Fortran 77, and for many things, better than C++ as well.

        It is practically a new language with an old name.

        If you currently have an F77 codebase, it is almost certainly far better to start using Fortran 95.

        Essentially all Fortran 95 implementations have compile and run-time checks which can make things as safe as Java, and when you take off the checks, things will run ve
        • So true... (Score:3, Interesting)

          by warrax_666 ( 144623 )
          You have to work much, much harder in C++ to get anywhere near FORTRAN performance, so much so that it's almost never worth the effort.

          One of the most dangerous things (optimization-wise) in C++, I've found, is the temporary-creation problem. You have to be insanely careful to avoid creating temporaries to get any sort of reasonable performance... (or maybe I just need a better compiler than GNU GCC?)

          Templates are powerful and dangerous.

          Not quite sure why you would consider them dangerous, but they are Turi

    • Re:Don't agree (Score:5, Insightful)

      by Frobnicator ( 565869 ) on Thursday May 06, 2004 @01:33AM (#9071097) Journal
      Just now I'm working on an econometric model ... in real life applications, playing with a Targa file is not the same as service critical, 300 users, number crunching, data handling systems, where a small performance improvement must be multiplied by the number of users/uses, by many many hours of operation and by years in service to understand its true impact. ... I'm playing with over 300 variables and 100 parameters to simulate dozens of different scenarios can make any server beg for more cycles, and any user beg for a crystal ball.

      I don't think that fits into the description the article was talking about.

      The point of this article is not targeted at you. I've seen interns as recently as last year complain about the same things mentioned in the article: division is slow, floating point is slow, missed branch prediction is slow, use MMX whenever more than one float is used, etc.

      The point I get out of the article is not to bother with what is wasteful at a low level, but to be concerned about the high levels. A common one I've seen lately is young programmers trying to pack all their floats into SSE2. Since that computation was not slow to begin with, they wonder why all their 'improvements' didn't speed up the code. Even the fact that they are doing a few hundred unnecessary matrix ops (each taking a few hundred CPU cycles) didn't show up in profiling. Their basic algorithms, in a few cases I'm thinking of, are either very wasteful or could have been improved by a few minor adjustments.

      The article mentions some basic techniques: choosing a different algorithm, pruning data, caching a few previously computed results, finding commonalities in data to improve the algorithm. Those are timeless techniques, which you probably have already learned since you work on such a big system. Writing your code so that you can find and easily implement high-level changes is generally more important than rewriting some specific block of code to run in the fewest CPU cycles.

      A very specific example. At the last place I worked, there was one eager asm coder who wrote template specializations of most of the STL classes for intrinsic types in pure asm. His code was high quality, and had very few bugs. He re-wrote memory management so there were almost no calls to the OS for memory. When we used his libraries, it DID result in some speed gains, and it was enough to notice in wall-clock time.

      However... Unlike his spending hundreds of hours on this low-return fruit, I could spend a day with a profiler, find one of the slower-running functions or pieces of functionality, figure out what made it slow, and make some small improvements. Usually, a little work on 'low-hanging fruit', stuff that gives a lot of result for a little bit of work, is the best place to look. For repeatedly computed values, I would sometimes cache a few results. Other times, I might see if there is some system function that can be made to do the same work. On math-heavy functions, there were times when I'd look for a better solution, or an 'accurate enough but much faster' one, using calculus. I'd never spend more than two days optimizing a bit of functionality, and I'd get better results than our 'optimize it in asm' guru.

      Yes, I would spend a little time thinking about data locality (stuff in the CPU cache vs. ram) but typically that doesn't give me the biggest bang for the buck. But I'm not inherently wasteful, either. I still invert and multiply rather than divide (it's a habit), but I know that we have automatic vectorizers and both high-level and low-level optimizers in our compilers, and an out-of-order core with AT LEAST two floating point, two integer, and one memory interface unit.

      And did I mention, I write software with realtime constraints; I'm constantly fighting my co-workers over CPU quotas. I read and refer to the Intel and AMD processor documentation, but usually only to see which high-level functionality best lends itself to the hardware. I am tempted to 'go straight to the metal' occasionally, or to count the CPU cycles of everything, but I know that I can get bigger gains elsewhere. That's the point of the article, I believe.
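
      The "cache a few results" trick described above is often a one-liner in a high-level language; a sketch, where expensive_lookup is a made-up stand-in for whatever the profiler actually flagged:

          import cProfile
          import functools

          @functools.lru_cache(maxsize=1024)
          def expensive_lookup(key):
              # stand-in for a repeated slow computation or query
              return sum(i * i for i in range(50_000)) + key

          def report(keys):
              return [expensive_lookup(k % 10) for k in keys]

          # Profile first, optimize what actually shows up, then re-measure.
          cProfile.run("report(range(1000))", sort="cumulative")

      A day with the profiler plus changes of this size is exactly the low-hanging fruit being described.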

  • by ErichTheWebGuy ( 745925 ) on Wednesday May 05, 2004 @11:51PM (#9070719) Homepage

    should really read that essay! Maybe then we wouldn't need [slashdot.org] dual-core 4-6 GHz CPUs and 2GB of RAM to run their new OS.
    • Maybe then we wouldn't need dual-core 4-6 GHz CPUs and 2GB ram to run their new OS.

      The reason they're targeting this kind of system is that the hardware will probably be cheaper than Windows itself by the time Longhorn comes out.

      I'm sure they'll let you switch off the flash features that need it, though. All recent versions of Windows have been able to degrade to roughly the same performance standard as the previous version if you choose the right options.
  • by Godeke ( 32895 ) * on Wednesday May 05, 2004 @11:53PM (#9070725)
    One of the concepts touched upon is the idea that optimization is only needed after profiling. Having spent the last few years building a system that sees quite a bit of activity, I have to say that we have only had to optimize three times over the course of the project.

    The first was to get a SQL query to run faster: a simple matter of creating a view and supporting indexes.

    The second was also SQL related, but on a different level: the code was making many small queries to the same data structures. Simply pulling the relevant subset into a hash table and accessing it from there fixed that one.

    The most recent one was more complex: it was similar to the second SQL problem (lots of high-overhead small queries) but with a more complex structure. I built an object to cache the data in a set of hashes and "emulated" the MoveNext/EOF() ADO-style access the code expected.

    We have also had minor performance issues with the XML documents we throw around; we may have to fix that in the future.

    Point? None of this is "low level optimization": it is simply reviewing the performance data we collect on the production system to determine where we spend the most time and making high level structural changes. In the case of SQL vs a hash based cache, we got a 10 fold speed increase simply by not heading back to the DB so often.

    Irony? There are plenty of other places where similar caches could be built, but you won't see me rushing out to do so. For the most part performance has held up in the face of thousands of users *without* resorting to even rudimentary optimization. Modern hardware is scary fast for business applications.
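
    A toy sketch of that third fix in Python terms: fetch once, cache, and emulate the MoveNext/EOF() access the old code expects (names are invented; the real object wraps a set of hashes as described):

        class CachedRecordset:
            """Rows fetched once up front; walked with ADO-style calls."""
            def __init__(self, rows):
                self._rows = list(rows)   # the one trip to the database
                self._pos = 0

            def eof(self):
                return self._pos >= len(self._rows)

            def current(self):
                return self._rows[self._pos]

            def move_next(self):
                self._pos += 1

        # rs = CachedRecordset(conn.execute("SELECT ...").fetchall())
        # while not rs.eof():
        #     handle(rs.current()); rs.move_next()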
  • by jesup ( 8690 ) * <randellslashdot@ ... p.org minus city> on Wednesday May 05, 2004 @11:57PM (#9070743) Homepage
    66 fps on a 3 GHz machine, doing a 600x600 simple RLE decode...

    Ok, it's not bad for a language like Erlang, but it's not exactly fast.

    The big point here for the author is "it's fast enough". Lots of micro- (and macro-) optimizations are done when it turns out they aren't needed. And writing in a high level language you're comfortable in is important, if it'll do the job. This is a good point.

    On the other hand, even a fairly naive implementation in something like C or C++ (and perhaps Java) would probably have achieved the goal without having to make 5 optimization passes (and noticeable time spent examining behavior).

    And even today, optimizations often do matter. I'm working on code that does pretty hard-real-time processing on multiple threads and keeps them synchronized while communicating with the outside world. A mis-chosen image filter or copy algorithm can seriously trash the rest of the system (not overlapping DMA's, inconvenient ordering of operations, etc). The biggest trick is knowing _where_ they will matter, and generally writing not-horrible-performance (but very readable) code as a matter of course as a starting point.

    Disclaimer: I was a hard-core ASM & C programmer who for years beta-tested 680x0 compilers by critiquing their optimizers.
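
    For anyone who hasn't read the essay yet, the kernel of the task is plain run-length decoding; a toy version in Python (generic RLE, not the exact Targa packet format the essay works with):

        def rle_decode(packets):
            """packets: iterable of (count, value) runs -> flat list of pixels."""
            out = []
            for count, value in packets:
                out.extend([value] * count)
            return out

        # a 600x600 single-channel frame's worth of runs, roughly
        frame = rle_decode([(600, row % 256) for row in range(600)])
        assert len(frame) == 600 * 600

    Even a naive loop like this is cheap per pixel; the essay's point is that a handful of measure-and-restructure passes got a high-level version to "fast enough" without dropping to C.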
    • Some implementations of popular dynamic languages (e.g., LISP, Scheme), let you do some type inference and/or some explicit declarations, and will spit out machine code or C that will do the job that much faster. Tweak your algorithm in the slow version of the language and then produce a program that runs ten times faster with an optimizing compiler.

      The Squeak [squeak.org] VM is a great example of this. The whole thing is written in Squeak itself. Running a Smalltalk VM this way is painfully slow, but a Smalltalk->C
  • by KalvinB ( 205500 ) on Thursday May 06, 2004 @12:05AM (#9070771) Homepage
    Working on a heavily math-based application, speed is necessary to the point that the user is not expected to wait a significant amount of time without something happening. I have a large background in game programming working on crap systems and it comes in handy. My tolerance for delays goes to about half a second for a complete operation. It doesn't matter how many steps are needed to perform the operation, it all just has to be done in less than half a second on a 1200MHz system. My main test of performance is seeing how long it takes Mathematica to spit out an answer compared to my program. Mathematica brags about being the fastest and most accurate around.

    When operations take several seconds a user gets annoyed. The program is perceived to be junk and the user begins looking for something else that can do the job faster. It doesn't matter if productivity is actually enhanced. It just matters that it's perceived to be enhanced or that the potential is there.

    You also have to consider if the time taken to complete an operation is just because of laziness. If you can easily make it faster, there's little excuse not to.

    For distributed apps you have to consider the cost of hardware. It may cost several hours of labor to optimize but it may save you the cost of a system or few.

    In the world of games, half a second per operation works out to 2 frames per second, which is far from acceptable. Users expect at minimum 30 frames per second. It's up to the developer to decide what's the lowest system they'll try to hit that target on.

    You have to consider the number of users that will have that system vs the amount it will cost to optimize the code that far.

    In terms of games you also have to consider that time wasted is time possibly better spent making the graphics look better. You could have an unoptimized mesh rendering routine, or a very fast one and time left over to apply all the latest bells and whistles the graphics card has to offer.

    There are countless factors in determining when something is optimized enough. Games more so than apps. Sometimes you just need to get it out the door and say "it's good enough."

    Ben
  • by Debian Troll's Best ( 678194 ) on Thursday May 06, 2004 @12:09AM (#9070789) Journal
    I'm currently completing a degree in computer science, with only a few courses left to take before I graduate. Man, I wish I had read that article before last semester's 'Systems Programming and Optimization' course! It really puts a lot of things into perspective. So much of a programmer's time can get caught up in agonizing over low-level optimization. Worse than that are the weeks spent debating language and design choices with fellow programmers in a team. More often than not, these arguments boil down to personal biases against one language or another, due to perceived 'slowness', rather than issues such as 'will this language allow better design and maintenance of the code', or 'is a little slow actually fast enough'?

    A particular illustration of this was in my last semester's 'Systems Programming and Optimization' course. The professor set us a project where we could choose an interesting subsystem of a Linux distro, analyze the code, and point out possible areas where it could be further optimized. I'm a pretty enthusiastic Debian user, so I chose to analyze the apt-get code. Our prof was very focused on low-level optimizations, so the first thing I did was to pull apart apt-get's Perl codebase and start to recode sections of it in C. At a mid-semester meeting, the professor suggested that I take it even further, and try using some SIMD/MMX calls in x86 assembly to parallelize package load calls.

    This was a big ask, but my partner and I eventually had something working after a couple of weeks of slog. By this stage, apt-get was *flying* along. The final step of the optimization was to convert the package database to a binary format, using a series of 'keys' encoded in a type of database, or 'registry'. This sped up apt-get a further 25%, as calls to a machine-readable-only binary registry are technically superior to old-fashioned text files (and XML was considered too slow).

    Anyway, the sting in the tail (and I believe this is what the article highlights) was that upon submission of our project, we discovered that our professor had been admitted to hospital to have some kidney stones removed. In his place was another member of the faculty...but this time, a strong Gentoo supporter! He spent about 5 minutes reading over our hand-coded x86 assembly version of apt-get, and simply said "Nice work guys, but what I really want to see is this extended to include support for Gentoo's 'emerge' system...and for the code to run on my PowerMac 7600 Gentoo PPC box. You have a week's extension."

    Needless to say, we were both freaking out. Because we had focused so heavily on optimization, we had sacrificed a lot of genericity in the code (otherwise we could have just coded up 'emerge' support as a plug-in for 'apt-get'), and also we had tied it to Intel x86 code. In the end we were both so burnt out that I slept for 2 days straight, and ended up writing the 'emerge' port in AppleScript in about 45 minutes. I told the new prof to just run it through MacOnLinux, which needless to say, he wasn't impressed with. I think it was because he had destroyed his old Mac OS 8 partition to turn it into a Gentoo swap partition. Anyway, me and my partner both ended up getting a C- for the course.

    Let this be a lesson...read the article, and take it in. Optimization shouldn't be your sole focus. As Knuth once said, "premature optimisation is the root of all evil". Indeed Donald, indeed. Kind of ironic that Donald was the original professor in this story. I don't think he takes his work as seriously as he once did.

    • WTF are you talking about?

      I'm staring at the apt codebase on my screen just now, and it's all C++, baby. Ok, so there is a trivial amount of perl; sloccount summary:

      Totals grouped by language (dominant language first):
      cpp: 26481 (89.75%)
      sh: 2816 (9.54%)
      perl: 209 (0.71%)

      This is for apt-0.5.14, but I can't imagine that the newest version in unstable (0.5.24) would be that different.

      Now, if the rest of your story is true, that's mind-boggling. If the new teacher refused to jud
  • by wintermute42 ( 710554 ) on Thursday May 06, 2004 @12:13AM (#9070803) Homepage

    The economist Brian Arthur is one of the proponents of the theory of path dependence [bearcave.com]. In path dependence something is adopted for reasons that might be determined by chance (e.g., the adoption of MS/DOS) or by some related feature (C became popular in part because of UNIX's popularity).

    The widespread use of C and C++, languages without bounds checking in a world where we can afford bounds checking, is not so much a matter of logical decision as of history. C became popular, C++ evolved from C and provided some really useful features (objects, expressed as classes). Once C++ started to catch on, people used C++ because others used it and an infrastructure developed (e.g., compilers, libraries, books). In short, the use of C++ is, to a degree, a result of path dependence. Once path-dependent characteristics start to appear, choices are not necessarily made on technical virtue. In fact, one could probably say that the times when we make purely rational, engineering-based decisions (feature X is important so I'll use language Y) are outweighed by the times when we decide on other criteria (my boss says we're gonna use language Z).

  • by kaan ( 88626 ) on Thursday May 06, 2004 @12:15AM (#9070818)
    All projects are an exercise in scheduling, and something is always bound to fall off the radar given real-world time constraints. In my experience, the right thing to do is get a few smart people together to isolate any problem areas of the product, and try to determine whether that code might produce performance bottlenecks in high-demand situations. If you find any warning areas, throw your limited resources there. Don't fret too much about the rest of the product.

    In the business world, you have to satisfy market demands and thus cannot take an endless amount of time to produce a highly optimized product. However, unless you are Microsoft, it is very difficult to succeed by quickly shoving a slow pile of crap out the door and calling it "version 1".

    So where do you optimize? Where do you concentrate your limited amount of time before you miss the window of opportunity for your product?

    I know plenty of folks in academia who would scoff at what I'm about to say, but I'll say it anyway... just because something could be faster doesn't mean it has to be. If you could spend X hours or Y days tweaking a piece of code to run faster, would it be worth it? Not necessarily. It depends on several things, and there's no really good formula; each case ought to be evaluated individually. For instance, if you're talking about a nightly maintenance task that runs between 2am and 4am when nobody is on the system, resource consumption doesn't matter, etc., then why bother making it run faster? If you have an answer, then good for you, but maybe you don't and should thus leave that 2-hour maintenance task alone and spend your time doing something else.

    For people who are really into performance optimization, I say get into hardware design or academia, because the rest of the business world doesn't really seem to make time for "doing things right" (just an observation, not my opinion).
  • by xant ( 99438 ) on Thursday May 06, 2004 @12:16AM (#9070819) Homepage
    Less code is faster than more code! Simply put, it's easier to optimize if you can understand it, and it's easier to understand if there's not so much of it. But when you optimize code that didn't really need it, you usually add more code; more code leads to confusion and confusion leads to performance problems. THAT is the highly-counterintuitive reason premature optimization is bad: It's not because it makes your code harder to maintain, but because it makes your code slower.

    In a high-level interpreted language with nice syntax--mine is Python, not Erlang, but the same arguments apply--it's easier to write clean, lean code. So high-level languages lead to (c)leaner code, which is faster code. I often find that by choosing the right approach and implementing it in an elegant way, I get performance far better than I was expecting. And if what I was expecting would have been "fast enough", I'm done -- without optimizing.
    • by AaronW ( 33736 ) on Thursday May 06, 2004 @01:52AM (#9071185) Homepage
      Less code does not equal faster code. You can usually get the best performance increases by using a better algorithm. For example, if you're doing a lot of random adding and deleting of entries, a hash table will be much faster than a linked list. This will have the greatest impact.

      Other things that can help are doing your own memory management at times (i.e. freelists), since that will be faster than malloc/new and will have less memory overhead. Also, design your storage around your data. If you know you'll allocate up to 64K instances of a small item, allocate an array of 64K of them and maintain a freelist. This will use a lot less memory than dynamically allocating each item and will result in better locality.

      I write code in the embedded space, where memory usage and performance are both equally important. Usually getting the last ounce of performance out of the compiler doesn't make much difference.

      A good real-world example: I replaced the malloc code provided by a popular embedded OS with dlmalloc, which glibc's allocator is based on. The dlmalloc code is *much* more complicated, and the code path is much longer, but due to much better algorithms, operations that took an hour with the old simple malloc dropped down to 3 minutes. It went from exponential to linear time.

      -Aaron
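
      A rough Python illustration of the hash-versus-linked-list point above (dict standing in for the hash table, list for the scanned structure; exact numbers are machine-dependent):

          import random
          import timeit

          items = list(range(10_000))
          random.shuffle(items)

          def churn_list():
              store = []
              for x in items:
                  store.append(x)
              for x in items[::2]:
                  store.remove(x)          # linear search on every delete

          def churn_dict():
              store = {}
              for x in items:
                  store[x] = True
              for x in items[::2]:
                  del store[x]             # constant-time expected delete

          print("list:", timeit.timeit(churn_list, number=3))
          print("dict:", timeit.timeit(churn_dict, number=3))

      The gap widens as the data grows, which is the "better algorithm first" point; none of it depends on squeezing the last ounce out of the compiler.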
  • You would think that with all the years put into developing computer languages, as well as the decades of software engineering, that these algorithms and techniques would make their way into best practices.

    This, of course, has already begun, with many frequently used algorithms like sorting or hashing being made part of the language core libraries, but duplicated effort still occurs much more often than it should.

    This is one instance where Microsoft has really come through. Their COM architecture allows for inter-language reuse of library code. By releasing a library which is binary compatible across different languages, as well as backwards compatible with itself (v2.0 supports v1.9), the COM object architecture takes much of the weight of programming difficult and repetitive tasks out of the hands of programmers and into the hands of library maintainers.

    This kind of separation of job function allows library programmers the luxury of focusing on optimizing the library. It also allows the client programmer the luxury of ignoring that optimization and focusing on improving the speed and stability of his own program by improving the general structure of the system rather than the low level mundanities.

    Large libraries like Java's and .Net's as well as Smalltalk's are all great. Taking the power of those libraries and making them usable across different languages, even making them scriptable would bring the speed optimizations in those libraries available to everyone.
  • Code tweaking (Score:5, Insightful)

    by Frequency Domain ( 601421 ) on Thursday May 06, 2004 @12:20AM (#9070849)
    You get way more mileage out of choosing an appropriate algorithm, e.g., an O(n log n) sort instead of O(n^2), than out of tweaking the code. Hmmm, kind of reminds me of the discussion about math in CS programs.

    Every time I'm tempted to start micro-optimizing, I remind myself of the following three simple rules:

    • 1) Don't.

    • 2) If you feel tempted to violate rule 1, at least wait until you've finished writing the program.

    • 3) Non-trivial programs are never finished.
  • by Dominic_Mazzoni ( 125164 ) * on Thursday May 06, 2004 @12:24AM (#9070865) Homepage
    Proper programming perspective? Please. People-centered programming? Pretty pathetic.

    Programmer's purpose: problem-solving. Programmers prefer power - parallelizing, profiling, pushing pixels. Programmers prefer Pentium PCs - parsimonious processing power. Pentium-optimization passes Python's popularity.

    Ponder.

    [Previous painful posts: P [slashdot.org], D [slashdot.org]]
  • by corngrower ( 738661 ) on Thursday May 06, 2004 @12:27AM (#9070885) Journal
    Often times to get improved performance you need to examine the algorithms used. At other times, and on certain cpu architectures, things that slow your code can be very subtle.

    If your code must process a large amount of data, look for ways of designing your program so that you process the data serially. Don't try to bring large amounts of data in from a database or data file all at once if you don't have to. Once you are no longer able to contain the data in physical memory, and the program starts using 'virtual' memory, things slow down real fast. I've seen architects forget about this, which is why I'm writing this reminder.

    On the other hand, I've worked on a C++ project where, in a certain segment of the code, it was necessary to write our own container class to replace one of the std:: classes, for performance on the SPARC architecture. Using the std:: container would cause the subroutines to nest deeply enough that the CPU registers needed to be written out to slower memory. The effect was enough to be quite noticeable in the app.

    With today's processors, to optimize for speed you have to think about memory utilization, since running within cache is noticeably faster than from main memory. Things are not as clear-cut, so far as speed optimization goes, as they once were.
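
    A small sketch of the "process the data serially" advice above, using a generator so only one chunk is ever resident (the chunk size and cursor are placeholders; any DB-API cursor with fetchmany() works the same way):

        def rows_in_chunks(cursor, chunk_size=1000):
            """Yield rows without materializing the whole result set in memory."""
            while True:
                chunk = cursor.fetchmany(chunk_size)
                if not chunk:
                    break
                for row in chunk:
                    yield row

        # total = sum(row[1] for row in rows_in_chunks(cur))
        # memory stays O(chunk_size) no matter how large the table gets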

  • by StevenMaurer ( 115071 ) on Thursday May 06, 2004 @12:33AM (#9070899) Homepage
    This article seems to be something that I learned twenty years ago... performance is an aspect of good design.

    That is why I insist on "optimization" in the beginning. Not peephole optimization - but design optimization. Designs (or "patterns" in the latest terminology) that are fast are also naturally simple. And simple - while hard to come up with initially - is easy to understand.

    But that's also why I discount any "high level language is easier" statement, like this fellow makes. It is significantly harder to come up with a good architecture than to learn to handle a "hard" language. If you can't do the latter (including understanding resource allocation, threads, and other basic concepts), you certainly aren't going to manage the former. Visual Basic is not an inherently bad language in which you can't program well; it just attracts bad programmers.

    And the same goes for many of the newer "Basics": these "managed languages" that let people "code" without really understanding what they're doing. Sure, you can get lines of code that way. But you don't get a good product.

    And then the whole thing falls apart.
  • by BigZaphod ( 12942 ) on Thursday May 06, 2004 @12:35AM (#9070912) Homepage
    There seem to be two basic causes of bad performance:

    1. Mathematically impossible to do it any other way.
    2. Modularity.

    Of course crap code/logic also counts, but it can be rewritten.

    The problem with modularity is that it forces us to break certain functions down at arbitrary points. This is handy for reusing code, of course, and it saves us a lot of work. It's the main reason we can build the huge systems we build today. However, it comes with a price.

    While I don't really know how to solve this practically, it could be solved by writing code that never ever calls other code. In other words, the entire program would be custom-written from beginning to end for this one purpose. Sort of like a novel which tells one complete story and is one unified and self-contained package.

    Programs are actually written more like chapters in the mother of all choose-your-own-adventure books. Trying to run the program causes an insane amount of page flipping for the computer (metaphorically and actually :-))

    Of course this approach is much more flexible and allows us to build off of the massive code that came before us, but it is also not a very efficient way to think about things.

    Personally, I think the languages are still the problem because of where they draw the line for abstractions. It limits you to thinking within very small boxes and forces you to express yourself in limited ways. In other words, your painting can be as big as you want, but you only get one color (a single return value in many languages). It is like we're still stuck at the Model T stage of language development--it comes in any color you want as long as it's black!
  • by Jason Pollock ( 45537 ) on Thursday May 06, 2004 @12:38AM (#9070921) Homepage

    I am hearing a lot of people saying that you shouldn't optimise prior to the first release. However, it is very easy to select a design or architecture that caps your high-end performance. Therefore, there is some optimisation that needs to be done early.

    When you're architecting a system that is going to take tens of man years of effort to implement, you need to ensure that your system will scale.

    For example, a project I recently worked on hit a performance wall. We had left optimisation for later, always believing that it shouldn't be done until last. However, while the architecture chosen was incredibly nice and clear, it limited performance to a third of what was required. Back to the drawing board; we just doubled the project cost - ouch.

    Even worse, there are performance differences on each platform! For example, did you know that throwing an exception is 10,000 times slower than a return statement in HP/UX 11? Solaris is only a little better at 2 orders of magnitude. Linux is (I understand) a dead heat.

    So, while low-level optimisation of statements is silly early in the project, you do need to ensure that the architecture you choose is going to meet your performance requirements. Some optimisations are definitely necessary early in the project.

    The article also talks about tool selection, suggesting that the extra CPU could be better used to support higher level languages like Erlang. If a system has CPU to spare, I agree, use what you can. The projects I work on always seem to lack in CPU cycles, disk write speed, and network speed. You name it, we're short of it. In fact, a large part of our marketing strategy is that we are able to deliver high performance on low end systems. What would happen to us if we dropped that edge? We're working with a company that has implemented a real-time billing system in Perl. Not a problem, until you try and send it 500 transactions/second. Their hardware budget? Millions to our 10s of thousands. Who do you think the customer likes more?

    Jason Pollock
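
    His numbers are for exception handling on particular Unix platforms, but the habit of measuring on your own platform is cheap to copy. A sketch of that kind of micro-benchmark, in Python for brevity:

        import timeit

        def via_return(x):
            return None if x < 0 else x

        def via_exception(x):
            if x < 0:
                raise ValueError("negative")
            return x

        def call_return():
            return via_return(-1)

        def call_exception():
            try:
                return via_exception(-1)
            except ValueError:
                return None

        print("return:   ", timeit.timeit(call_return, number=500_000))
        print("exception:", timeit.timeit(call_exception, number=500_000))

    Whatever ratio comes out on your platform is the number that should drive the design, not folklore carried over from another one.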
  • by epine ( 68316 ) on Thursday May 06, 2004 @12:42AM (#9070933)

    Sigh. One of the best sources of flamebait is being right for the wrong reasons.

    Surely C++ must rate as the least well understood language of all time. The horrors of C++ are almost entirely syntactic, beginning with the decision to maintain compatibility with the C language type declaration syntax and then adding several layers of abstraction complexity (most notably namespaces and templates).

    There are only two areas where I fear C++ for program correctness. The first is making a syntactic brain fart leading to an incorrect operator resolution or some such. These can be tedious to ferret out, but most of these battles are fought with the compiler long before a defect makes it into the production codebase.

    My second source of fear concerns interactions of exception unwinding across mixtures of object oriented and generic components. I see this as the only case where managed memory provides a significant advantage: where your program must incorporate exception handling. If you can't manage your memory correctly in the absence of the exception handling mechanism, I really don't believe you can code anything else in your application correctly either. I think exceptions are mostly a salvation for poor code structure. If all your code constructs are properly guarded, you don't need an error return path. Once a statement fails to achieve a precondition for the code that follows, the code path that follows will become a very efficient "do nothing" exercise until control is returned to a higher layer by the normal return path, whereupon the higher layer of control can perform tests about whether the objectives were achieved or not and take appropriate measures. I think the stupidest optimization in all of programming is cutting a "quick up" error return path that skips the normal path of program execution so that the normal path of execution can play fast and loose with guard predicates.

    The four languages I use regularly are C, C++, PHP, and Perl. Perl is the language I'm least fond of maintaining. Too many semantic edge cases that offer no compelling advantage to motivate remembering the quirk. C++ has many strange cases, but for C++ I can remember the vast majority of these well enough, because I've stopped to think about how they evolved from taking the C language as a starting point.

    I happen to love PHP for the property of being the most forgettable of all languages. I forget everything I know about PHP after every program I write, and it never slows me down the next time I sit down to write another PHP program. The managed memory model of PHP appeals to me in a way that Java doesn't, because as an inherently session-oriented programming model, PHP has a good excuse for behaving this way.

    I have a love/hate relationship with both C and C++. I write one program at a high level of abstraction in C++ and then when I return to C it feels like a breath of fresh air to live for a while in an abstraction free zone, until the first time I need to write a correctness safe string manipulation more complicated than a single sprintf, and then I scream in despair.

    The part of my brain that writes correct code writes correct code equally easily in all of these languages, with Perl 5 slightly in the rear.

    If I really really really want correct code I would always use C++. The genericity facilities of C++ create an entire dimension of correctness calculus with no analog in most other programming languages. The template type mechanism in C++ is a pure functional programming language just as hard-core as Haskell, but because C++ is a multi-paradigm language, in C++ you only have to pull out the functional programming hammer for the slice of your problem where nothing less will do.

    What I won't dispute is that C++ is a hard language to master to the level of proficiency where it becomes correctness friendly. It demands a certain degree of meticulous typing skills (not typing = for ==). It demands an unflagging determination to master the sometim
  • by the_skywise ( 189793 ) on Thursday May 06, 2004 @12:45AM (#9070942)
    So much as an attempt to "prove" that programming to the metal is no longer necessary or desirable. (i.e., "After all, if a C++ programmer was truly concerned with reliability above all else, would he still be using C++?")

    The analogy is all wrong. These days there are two distinct types of "optimization": algorithmic, and the traditional "to the metal" style.

    During college I worked with the English department training English students to use computers as their work had to be done on a computer. (This was before laptops were commonplace) The theory was that word processing allowed students a new window into language communication. To be able to quickly and painlessly reorganize phrases, sentences and paragraphs showed the students how context, clarity and meaning could change just by moving stuff around.

    This is what the author has discovered. That by being able to move code actions around, he can experiment and "play" with the algorithm to boost speed while keeping error introduction to a minimum. (Ye olde basic anyone?)

    He mistakenly equates this with "advanced technologies" like virtual machines and automatic memory buffer checking. In reality, we've just removed the "advanced technologies" from the process (i.e., pointers, dynamic memory allocation, etc.). (Ye olde BASIC anyone?)

    There's nothing wrong with this. Though I am a C++ programmer by trade, I was far more productive when I was professionally programming Java. But that was because I had LESS creative control over the solution, thanks to the language syntax: no changing passed-in variables, no multiple inheritance, etc. So when I'm thinking about how to lay out the code, there's a pretty limited number of ways I'm going to go about doing it.

    It's like the difference between having the Crayola box of 8 crayons and the mondo-uber box of 64. If you're going to color the green grass with the box of 8, you've got: Green. If you've got 64 colors, you're going to agonize over blue-green, green-blue, lime green, yellow-green, pine green and GREEN.

    That doesn't make C++ less "safe" than Java. Sure, you can overwrite memory. But you can also create a Memory class in C++ ONCE which will monitor the overflow situation FOR you and never have to worry again.
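
    A rough sketch of what such a class might look like (an assumption on my part, not a complete or hardened implementation; in practice std::vector::at() already gives you the same bounds-checked guarantee):

    #include <cstddef>
    #include <stdexcept>
    #include <vector>

    template <typename T>
    class CheckedBuffer {
    public:
        explicit CheckedBuffer(std::size_t n) : data_(n) {}

        T& at(std::size_t i)
        {
            if (i >= data_.size())
                throw std::out_of_range("CheckedBuffer: index past the end");
            return data_[i];
        }

        std::size_t size() const { return data_.size(); }

    private:
        std::vector<T> data_;
    };

    // Usage: CheckedBuffer<unsigned char> pixels(width * height);
    // an out-of-range pixels.at(i) is caught here instead of silently corrupting memory.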

    But back to optimization:
    66 fps seems really fast. But in a game context it's still kind of meaningless. Here's why. You're not just displaying uncompressed images. You're also doing AI, physics, scoring, digital sound generation, dynamic music, user input, possibly networking. As a game programmer, you don't stop at 66 fps. Because if you can render at 132 fps, then you can still ship at 66 fps and have half of every frame left over to do some smarter AI or pathfinding. Or if you get it up to 264 fps, then rendering only takes 1/4 of each frame, and maybe you can add true dynamic voice synthesis so you don't have to prerecord all your speech!
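
    To put rough numbers on that (my arithmetic, not the parent's): a 66 fps target gives you about 1000/66, roughly 15 ms, per frame. If the renderer alone could run at 132 fps it only needs about 7.5 ms of that, leaving the other half of every frame for AI, physics, and sound; at 264 fps rendering takes about 3.8 ms, a quarter of the budget.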

    Ultimately, my point is this (and I think this is what the author intended): you're going to get bugs in whatever language you write in. That's the nature of the beast. VMs and 4th generation languages take away the nitty gritty of programming while still providing a lot of performance power. And in a lot of cases, that's a good thing. But it's still nothing more than a "model" of what's really going on in the hardware. If you're really going to push the limits of the machine you have to be able to control all aspects of it. Now, it's getting harder to do that in Windows. We spend more time coding to the OS than to the metal. But in the embedded systems category, and in console video game systems, the metal still reigns, and if you're going to develop a game that will push the hardware, you're going to need a programming language that will let you speak machine language. Not one that's going to protect you from yourself.

    As it was in the beginning, as it always will be: Right tool for the right job.
  • by Animats ( 122034 ) on Thursday May 06, 2004 @01:04AM (#9071015) Homepage
    "I was also bothered--and I freely admit that I am probably one of the few people this would bother--by the fragility of my decoder."

    And, sure enough, there's a known, exploitable buffer overflow in Microsoft's RLE image decoder.

  • uhh.. yeah (Score:3, Interesting)

    by XO ( 250276 ) <{blade.eric} {at} {gmail.com}> on Thursday May 06, 2004 @01:10AM (#9071036) Homepage Journal
    ok, so this guy is saying that.. he found 5 or 6 ways to improve the performance of his program by attacking things in an entirely different fashion... ok..

    back in the day, i discovered a really great trick... you might represent it as something like... :

    int a = 0;      /* a flag holding 0 or 1 */
    a = 1 - a;      /* flips it */

    this is a zillion times more efficient than if(a == 1) a = 0; else a = 1;

    it is also about the same as a ^= 1; if you were going to use bitwise operators.

    OK. Great.

    • Re:uhh.. yeah (Score:3, Informative)

      by prockcore ( 543967 )
      this is a zillion times more efficient than if(a == 1) a = 0; else a = 1;

      This is the one time where I'll step up and say that VC actually does a few neat tricks for the ternary operator.
      c=(a>b)?0:1 /* or c=!(a>b), it's the same code */
      translates to
      cmp b,a
      sbb c,c
      inc c
      there are other variants of this, I'll leave it as an exercise to the reader to figure out what is going on.
  • by alanwj ( 242317 ) on Thursday May 06, 2004 @01:29AM (#9071085)
    The problem, in my opinion, is that people go about optimizing in the wrong place.

    You can spend all day optimizing your code to never have a cache-miss, a branch misprediction, divisions, square roots, or any other "slow" things. But if you designed an O(n^2) algorithm, my non-optimized O(n) algorithm is still going to beat it (for sufficiently large n).

    If the asymptotic performance of your algorithm is good, then the author is right, and you may not find it worth your time to worry about further optimizations. If the asymptotic performance of your algorithm is bad, you may quickly find that moving it to better hardware doesn't help you so much.
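
    A throwaway illustration of the point (not from the parent post): both functions answer "does this vector contain a duplicate?", and no amount of micro-tuning will let the first keep up with the second once n gets large.

    #include <cstddef>
    #include <unordered_set>
    #include <vector>

    bool hasDuplicateQuadratic(const std::vector<int>& v)   // O(n^2)
    {
        for (std::size_t i = 0; i < v.size(); ++i)
            for (std::size_t j = i + 1; j < v.size(); ++j)
                if (v[i] == v[j])
                    return true;
        return false;
    }

    bool hasDuplicateLinear(const std::vector<int>& v)      // O(n) expected
    {
        std::unordered_set<int> seen;
        for (int x : v)
            if (!seen.insert(x).second)   // insert fails -> already seen
                return true;
        return false;
    }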

    Alan
    • by Flyboy Connor ( 741764 ) on Thursday May 06, 2004 @02:58AM (#9071421)
      To put it in different terms: Optimisation is in finding a good algorithm, not in tweaking code details.

      To give a nice example: a colleague of mine worked on a program that took two months to execute (it consisted of finding the depth of all connections between all nodes in a graph containing 50,000 nodes). Since the customer needed to run this program once a month, this took far too long. So my colleague rewrote the whole program in assembly, which took him a few months, managing to reduce the required time to, indeed, one month.

      My boss then asked me to take a look at it. Together with a mathematician I analysed the central function of the program, and we noticed that it was, basically, a matrix multiplication. We rewrote the program in Delphi in an hour or so, and reduced the required running time to less than an hour.

      I won't spell out the lesson.

  • by ibullard ( 312377 ) on Thursday May 06, 2004 @01:47AM (#9071164)
    Quote:
    Even traditional disclaimers such as "except for video games, which need to stay close to the machine level" usually don't hold water any more.

    Yeah, as long as you write simple, 2D games [dadgum.com] (like the author of the essay does) that would be true. Complex 3D games are another matter. I write games for a living, and even if you're only within sight of the cutting edge you're writing at least some assembly and spending a lot of time optimizing C++.

    Now I'm not knocking all he says or saying that good games need to be in C++ and assembly. Some games rely heavily on scripting languages to handle the game mechanics and world events. There's a lot less assembly code than there used to be. However, the core engine that handles graphics, physics, AI, and I/O is going to be written in C++ and assembly and will be for the foreseeable future.

    If I published a game that required a 3GHz computer to display 576x576 images at 66fps, I'd be laughed off the Internet. A PS2 has a 300MHz processor and needs to display a 512x448 image 30-60 times every second.

    • by prockcore ( 543967 ) on Thursday May 06, 2004 @02:35AM (#9071343)
      Yeah, as long as you write simple, 2D games(like the author of the essay does) that would be true.

      Not only that, but even simple 2d games can need optimizing. Perhaps they need optimizing because they're on an inherently slow platform (like Flash or a cell phone), or perhaps they need optimizing because they're multiplayer (and games with bad network code are immediately obvious and usually fail miserably)

      I find it strange that so many programmers here talk about things being "fast enough" or "not worth my time"... yet any article about Mozilla, OpenOffice, Windows, OS X, damn near any software package with a GUI is filled with complaints about slowness and bloat.

      Makes you wonder what IS worth their time.
      • Mozilla is slow because its GUI is written using a flexible script interpreter, rather than being hard coded in C++.

        I don't know why OpenOffice is slow, I've never analysed the way it works in enough detail. I'm sure the reason is fairly obvious to anyone who knows the code base well enough to comment.

        Windows isn't really slow, but has some annoying features that have been added recently that can slow you down; for instance in the user interface it will try to open files of certain types to display infor
  • by Kunta Kinte ( 323399 ) on Thursday May 06, 2004 @01:53AM (#9071191) Journal
    Maybe a little offtopic.

    But if you haven't heard of it, http://www.javaperformancetuning.com/ [javaperfor...tuning.com] is a good source of performance tips for Java.

  • by Kris_J ( 10111 ) * on Thursday May 06, 2004 @02:21AM (#9071287) Homepage Journal
    Initially develop the entire project in a language you can develop fast. Once it works (and you're sure it does what the client wants), find out where the most CPU time is spent, then optimise those bits. "Optimise" may just mean having a good look at your code and working out a better way of doing it, or it might mean writing a library in assembler. Either way, optimise last.
  • by nimblebrain ( 683478 ) on Thursday May 06, 2004 @02:35AM (#9071346) Homepage Journal

    After years of developing, I really take to heart two things:

    1. Premature optimization often makes better optimizations down the line much more difficult
    2. It's 90% guaranteed that the slowdown isn't where or what you thought it was

    Profilers are the best thing to happen to performance since compilers - really. I encounter a number of truths, but many myths about what degrades performance. A few examples of each:

    Performance degraders

    • Mass object construction
    • Searching sequentially through large arrays
    • Repeated string concatenation (there are techniques to mitigate this)
    • Staying inside critical sections for too long

    Not performance degraders

    • Lots of object indirection
    • Lots of critical sections

    The "lots of object indirection" myth is one I encounter frequently. Object A calls Object B calls Object C, and it "intuitively" looks like it must be slow (Computer A calling Computer B, etc. would be slow), but even with stack frame generation, these are lightning fast compared with even the likes of "date to string" functions, never mind line-drawing commands or notification-sending.

    The reason that particular myth is dangerous is that it's the single most pervasive myth (IMHO) that leads to premature optimization. People take out layers of object indirection and make it harder to put in better solutions later. I had an object that recorded object IDs in a list and let you look them up later. If I had "flattened" that into the routine that needed it, I might have effected a 0.1% speed increase (typical range for many premature optimizations). As it stood, because it hid behind an interface (equivalent to an ABC for C++ folks), when I had finally implemented a unit-tested red/black tree, it was trivial (~5 minutes) to drop in the new functionality. That's not an isolated case, either.

    Mind you, I profiled the program to determine the slowdown first. Because so many searches on the list were misses (and therefore full scans), the search was taking up 98.6% of the entire operation. Switching to the red/black tree dropped the search down to 2.1%.
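
    Not the original code, but roughly the shape that makes that kind of swap trivial: callers only see the interface, so replacing the linear scan with a tree (std::set here, standing in for the red/black tree) touches exactly one class.

    #include <algorithm>
    #include <set>
    #include <vector>

    struct IdIndex {                        // the "ABC" the callers depend on
        virtual ~IdIndex() {}
        virtual void record(int id) = 0;
        virtual bool contains(int id) const = 0;
    };

    class ListIdIndex : public IdIndex {    // the original O(n) lookup
    public:
        void record(int id) { ids_.push_back(id); }
        bool contains(int id) const
        { return std::find(ids_.begin(), ids_.end(), id) != ids_.end(); }
    private:
        std::vector<int> ids_;
    };

    class TreeIdIndex : public IdIndex {    // the drop-in O(log n) replacement
    public:
        void record(int id) { ids_.insert(id); }
        bool contains(int id) const { return ids_.count(id) != 0; }
    private:
        std::set<int> ids_;                 // typically a red/black tree
    };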

    All in all, if you have a slow program, profile it. There is no substitute for a well-written profiler. Stepping through and "feeling" how long it takes in a debugger, while it can point you in rough directions, will miss those things that take 50 ms out of the middle of each call to the operation you're checking. Manually inserting timing calls can be frustrating enough to maintain or slow down your program enough that you can't narrow down the performance hit.

    gprof works well with gcc and its relatives (make sure to add -pg to your flags), but I'm not sure if there's a good open source option out there for people using other tools that doesn't require you to alter your source.
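
    For anyone who hasn't tried it, the whole gprof workflow is only three commands (the file name and the deliberately slow function below are made up for the example):

    // hotspot.cpp -- build and profile with:
    //   g++ -pg -o hotspot hotspot.cpp    (-pg instruments the binary)
    //   ./hotspot                         (writes gmon.out on exit)
    //   gprof hotspot gmon.out            (flat profile, hottest calls first)
    #include <cstdio>

    static double slowSum(int n)            // should top the flat profile
    {
        double s = 0.0;
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                s += 1.0 / (1 + i + j);
        return s;
    }

    int main()
    {
        std::printf("%f\n", slowSum(20000));
        return 0;
    }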

    In the Windows world, we recently got in the professional version of AQTime 3 [automatedqa.com]. It's an astounding package, allowing you numerous reports, pie charts and call graphs, saving the last few runs, calculating differences in performance between runs, allowing attachment to running processes, on top of a pretty nice way to define areas of the program to profile. The single nicest thing about it, though, is the performance. We turned on full profiling (that is, profiling all methods in all modules, including all class library and third party components) on the largest project we had, and it ran with perhaps a 30% slowdown. If you've used profilers before, you know how astounding that is ;)

    Profiling applications always surprises me. In one case, a space-making algorithm I was running on controls seemed a little pokey; I found out more than 50% of the time spent was on constantly verifying that the lists were sorted. Today, I was investigating a dialog that looked like it must hav

  • wtf (Score:4, Insightful)

    by fred fleenblat ( 463628 ) on Thursday May 06, 2004 @02:50AM (#9071391) Homepage
    The article made it sound like the optimizations he was doing at the Erlang level were somehow "better" than optimizations done in a language like C++ because he could just try out new techniques w/o worrying about correctness. His array bounds and types would be checked and all would be good.

    BS.

    First of all, Erlang won't catch logical or algorithmic errors, which are quite common when you're optimizing.

    Second, you can optimize just fine in C++ the same way just as easily, IF YOU ARE A C++ programmer. You just try out some new techniques the same way you always do. So array bounds aren't checked. You get used to it and you just stop making that kind of mistake or else you get good at debugging it. Hey at least you have static type checking.

    In fact you might be able to do a better job of optimization because you'll be able to see, right in front of you, low level opportunities for optimization and high level ones also. C++ programmers aren't automatically stupid and blinded by some 1:1 source line to assembly line ratio requirement.
  • engineering (Score:3, Interesting)

    by sir_cello ( 634395 ) on Thursday May 06, 2004 @05:43AM (#9071833)

    A lot of this discussion here is either crap, a rehash or was covered in Engineering 101.

    Basically, you have some requirements for the product, and you optimise according to those requirements. Performance is just one variable; time to market, scalability, reliability, security, usability, cost, etc. are the many others.

    The requirements for a product in a fast moving market entry company are less about performance and more about rollout ASAP.

    The requirements for the same product two years later may be to improve performance to achieve scalability requirements.

    If you're writing some sort of overnight build or batch application: whether it takes an extra hour or not may not matter, because it has a 12 hour window to run in.

    If you're writing an order processing system, then performance and end-to-end turnaround time will be vitally important, but you won't focus on the assembly, you'll focus on the algorithms, design and architecture.

    If you're writing a compression or encryption module: you probably will work on the assembly.

    In all of the above cases, before you optimise anything, you profile and understand how the optimisation is going to pay back in real terms.

    In my experience, you cannot prescribe any of this: you need to take it on case by case basis because every product and circumstance is different.

  • Coding while blind (Score:5, Interesting)

    by xyote ( 598794 ) on Thursday May 06, 2004 @06:40AM (#9072015)
    The biggest problem I see with performance is lack of visibility of performance factors. At the hardware level there is cache (which is supposed to be transparent) and really deeply pipelined processors. This can have a major effect on what would otherwise be an optimal algorithm. And the hardware keeps changing, so what may have been optimal at one point will become suboptimal later on.

    In software, the biggest problem is lack of performance directives. POSIX pthreads is one of the biggest offenders here. Best performance practices in pthreads are based on how common implementations work. POSIX allows implementations that would cause major performance problems for so-called best pthread programming practices. For example, POSIX allows pthread_cond_signal implementations to wake all waiting threads, not just one. There are programs that depend on pthread_cond_signal waking only one thread for performance, in order to avoid "thundering herd" problems. So while standards allow portability of correct programs, they do not necessarily allow portability of performance.

    We need explicit performance directives.
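
    To make the pthread_cond_signal example concrete, here is a sketch (mine, a bare single-slot handoff, not production code) of the pattern in question: the while loop keeps it correct under any conforming implementation, but the performance story depends entirely on whether pthread_cond_signal really wakes just one waiter.

    #include <pthread.h>

    static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
    static int work_available = 0;

    void producer_post(void)
    {
        pthread_mutex_lock(&lock);
        work_available = 1;
        /* Intended to wake exactly one consumer. An implementation that wakes
         * them all is still conforming; the extras just reacquire the lock,
         * see work_available == 0, and sleep again -- the thundering herd. */
        pthread_cond_signal(&ready);
        pthread_mutex_unlock(&lock);
    }

    void consumer_wait(void)
    {
        pthread_mutex_lock(&lock);
        while (!work_available)              /* guards against extra wakeups */
            pthread_cond_wait(&ready, &lock);
        work_available = 0;
        pthread_mutex_unlock(&lock);
    }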
  • by scorp1us ( 235526 ) on Thursday May 06, 2004 @08:18AM (#9072504) Journal
    If you spend hours tweaking code to eliminate a few instructions, even instructions in a loop, then you are just wasting your time.


    Real optimizations come before you write your program. Take for example that loop that you removed an instruction or two from. Say it is searching an array and looks like:

    for (i = 0; i < strlen(x); i++) {
        if (x[i] == '&')
            ;   /* do something with the match */
    }


    There are two things wrong. One, you call strlen() repeatedly. strlen() is Θ(n), so you have a loop that executes n times at a cost of n per iteration: n*n = n^2. That's one of the slowest algorithms around. Maybe your compiler is smart enough to see that x is not being modified and will hoist an s = strlen(x) out of the loop and compare against s for you, but probably not.
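
    The hoisted version, written out as a complete function (the name and the counting body are mine; the point is just that strlen() runs once, making the scan O(n) instead of O(n^2)):

    #include <string.h>

    size_t countAmpersands(const char *x)
    {
        size_t count = 0;
        size_t len = strlen(x);              /* hoisted out of the loop */
        for (size_t i = 0; i < len; i++) {
            if (x[i] == '&')
                count++;
        }
        return count;
    }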


    The other thing is when searching an array, try to give it structure. If your array contains sorted characters, then you can find an element in log2(n). Of course, if you sort by frequency instead (most commonly accessed at the top), then your simple linear-scan loop *might* do better.
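
    For example (a sketch, not from the post), once the characters are kept sorted the membership test is a one-liner over a log2(n) binary search:

    #include <algorithm>
    #include <vector>

    bool containsSorted(const std::vector<char>& sorted, char target)
    {
        // Precondition: 'sorted' really is sorted.
        return std::binary_search(sorted.begin(), sorted.end(), target);
    }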


    The article is right: constant-time operations (type checking, etc.) are asymptotically infinitesimal in algorithms. The article's real problem is O(n), but on a 2D image where n = x*y pixels you can't do any better. Note that it is not n^2 (even though it makes a square picture), because you're operating on pixels. So pixels will be your unit of measure - NOT time.


    Which is my 2nd point. Don't measure in time or instructions. Measure in OPERATIONS. Operations are not instructions or lines of code. An operation is every time you have to look at a unit. It is a logical unit of cost. Hardware can be changed out. We know that hardware (performance) doubles every 18 months. The constant-time instructions will get cheaper. (Clocks per cycle are irrelevant as well.) But your loops will remain the biggest part of your program.


    With that, here's a crash course in CS:

    1. Loops are most of the program (in time and operations). Try not to use them.
    2. To avoid loops, structure your data. Giving structure means assumptions, and assumptions mean you can skip irrelevant sections of data.
    3. Determine your dataset and minimize worst-case occurrences. Find out what order of data or instructions will make your n*log2(n) algorithm become n^2. Then find a way around it.
    4. And optimize for the average case. That is, if you never sort more than 6 numbers at a time, an n^2 algorithm will beat an n*log2(n) algorithm.
    5. If your data structure introduces overhead (most will), find your most common or costly operation and optimize your data structure for that (searching, sorting, etc.). If you do a combination, determine the ratio and optimize for that. The cost of the overhead is usually small compared to the speedup the data structure gives your common operation.
    6. The most obvious and easiest-to-code algorithm is usually the slowest (bubble sort vs. radix or quicksort).
    7. Mastery of the above is the difference between a $50K programmer and a $90K programmer.


      To learn more, take a data structures class at your local university. Please review Calculus II before doing that though.
