
Hardware Is Cheap, Programmers Are Expensive

Sportsqs points out a story at Coding Horror which begins: "Given the rapid advance of Moore's Law, when does it make sense to throw hardware at a programming problem? As a general rule, I'd say almost always. Consider the average programmer salary here in the US. You probably have several of these programmer guys or gals on staff. I can't speak to how much your servers may cost, or how many of them you may need. Or, maybe you don't need any — perhaps all your code executes on your users' hardware, which is an entirely different scenario. Obviously, situations vary. But even the most rudimentary math will tell you that it'd take a massive hardware outlay to equal the yearly costs of even a modest five person programming team."
  • Timing is everything (Score:4, Interesting)

    by BadAnalogyGuy ( 945258 ) <BadAnalogyGuy@gmail.com> on Saturday December 20, 2008 @11:50AM (#26183933)

    Sure, right now it may be more expensive to hire better developers.

    But just wait a couple more months when unemployment starts hitting double digits. You'll be able to pick up very good, experienced developers for half, maybe a third of their current salaries.

    Sure, invest in some HW now. That stuff will always be handy. But don't just go off and assume that developers will be expensive forever.

    • by tsa ( 15680 )

      For a third of the price of a developer you can buy an enormous amount of hardware.

      • Re: (Score:3, Funny)

        by ijakings ( 982830 )

        Yeah, but you have to appreciate where all this enormous amount of hardware goes. It doesn't just come on a truck you can dump things on; it has to come via a series of tubes. Oh... wait.

      • Re: (Score:3, Interesting)

        by nabsltd ( 1313397 )

        Big deal.

        Right now, I've got a problem with a software upgrade that has to convert the Oracle database to the new version. The conversion takes 40 hours on a database with fewer than 6 million rows, because the code starts a transaction, updates one row, then ends the transaction. After seeing the actual SQL being used, I realized it could all be replaced by "UPDATE thetable SET field1 = field2 + constant WHERE field3 = anotherconstant".

        I literally could not buy hardware fast enough to overcome the stupidity of these
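
        A rough JDBC sketch of the difference (the table and column names come from the quoted SQL; the id column, the literal constants, and the connection URL are hypothetical):

          import java.sql.*;

          public class RowAtATimeVsSetBased {
              public static void main(String[] args) throws SQLException {
                  try (Connection conn = DriverManager.getConnection(args[0])) { // JDBC URL passed in
                      conn.setAutoCommit(false);

                      // Slow path (what the upgrade tool effectively does):
                      // one UPDATE and one commit per row; millions of round trips.
                      try (Statement q = conn.createStatement();
                           ResultSet rs = q.executeQuery(
                               "SELECT id, field2 FROM thetable WHERE field3 = 7");
                           PreparedStatement ps = conn.prepareStatement(
                               "UPDATE thetable SET field1 = ? WHERE id = ?")) {
                          while (rs.next()) {
                              ps.setLong(1, rs.getLong("field2") + 42);
                              ps.setLong(2, rs.getLong("id"));
                              ps.executeUpdate();
                              conn.commit(); // the per-row commit is the killer
                          }
                      }

                      // Fast path: one set-based statement, one commit.
                      try (Statement st = conn.createStatement()) {
                          st.executeUpdate(
                              "UPDATE thetable SET field1 = field2 + 42 WHERE field3 = 7");
                          conn.commit();
                      }
                  }
              }
          }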

        • This comes down to choosing the right tool for the job. Perhaps their application is written using frameworks which enable/enforce this kind of normal transactional processing of requests.

          Now if the existing application is the only way that they know how to get to the data, then it may easily become the golden-hammer / silver bullet that gets used for performing an upgrade, rather than writing an external sql script which they might not be familiar with. Add to this the common convention of "nothing touc
        • by Bodrius ( 191265 ) on Saturday December 20, 2008 @05:24PM (#26186363) Homepage

          The hardware vs. programmer-time equation is always based on the assumption that the programmer is worth their qualifications.

          You are correct that this is an unrealistic assumption but, like the "rational self-interest" assumption in economics, it is a very useful one.

          Given a set of uniformly competent programmers, you quickly reach the point of diminishing returns on optimizing performance over hardware - but that's because a competent programmer should implement code with reasonable performance in the first place. Sadly, some people think they can compensate one with the other (competence vs hardware), when that is an entirely different problem, entirely different variable (e.g.: an incompetent programmer with more time is not always a good thing).

          First you have to reach the level of competence where you can talk about performance optimizations in the first place. What you describe is not 'unoptimized code' or a naive-but-reasonable implementation; it's gross incompetence (assuming SQL qualifications were claimed in the first place).

          As you said, you can't pay for enough hardware to compensate for that. But in the same vein, you really, *really* do not want to pay for more of that programmer time either.

        • by Xest ( 935314 ) on Saturday December 20, 2008 @06:43PM (#26186871)

          Indeed. It depends entirely on the problem; this is where computational complexity comes in, but cheap programmers won't even know what computational complexity is. The more complex the problem, the more knowledgeable your programmers need to be to come up with novel solutions.

          You only have to look at most combinatorial optimization problems to see where you may run into trouble. A cheap programmer may try to brute-force it, and no matter how much hardware you throw at the problem, that method simply isn't going to work for all but the smallest of data sets. You're going to have to get someone who knows the tricks (algorithms such as ACO, ant colony optimization) to produce acceptable solutions in a sensible time frame (a toy sketch of the gap follows at the end of this comment).

          But you don't even need the hardest COPs to demonstrate the types of problems you may run into; even the most basic COPs can throw lesser-skilled programmers, whilst better programmers can implement a solution without even needing to look up any references.

          It's another case of cutting corners. To the companies considering this option: sure, if you want to hire cheaper programmers and throw hardware at the problem, that's fine. Just don't come crying when your entire system keels over under the weight of a problem it can't solve with the method implemented to solve it, when you then have to get someone in to do the job properly, and when you find yourself with a load of hardware lying around that you never actually needed had it been done right to start with.

          Cheap programmers are great for throwaway or non-mission critical software, but make sure you have at least some good programmers around who have the computer science background underlying their software engineering abilities to deal with the tough/complex stuff.
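
          To make that brute-force vs. heuristic gap concrete, here is a toy TSP sketch (the distance matrix is whatever your problem supplies; nearest-neighbour stands in for smarter heuristics like ACO). Exhaustive search is O((n-1)!) and dies around n = 15 no matter the hardware; the greedy pass is O(n^2) and always finishes, at the price of a non-optimal tour:

            public class TspGap {
                // Exhaustive search over all tours from city 0: optimal, but O((n-1)!).
                // Call as: seen[0] = true; bruteForce(d, seen, 0, 1, 0);
                static double bruteForce(double[][] d, boolean[] seen, int city, int count, double len) {
                    if (count == d.length) return len + d[city][0]; // close the tour
                    double best = Double.MAX_VALUE;
                    for (int next = 0; next < d.length; next++) {
                        if (!seen[next]) {
                            seen[next] = true;
                            best = Math.min(best, bruteForce(d, seen, next, count + 1, len + d[city][next]));
                            seen[next] = false;
                        }
                    }
                    return best;
                }

                // Nearest-neighbour heuristic: O(n^2), tractable at any n, not optimal.
                static double nearestNeighbour(double[][] d) {
                    int n = d.length, city = 0;
                    boolean[] seen = new boolean[n];
                    seen[0] = true;
                    double len = 0;
                    for (int step = 1; step < n; step++) {
                        int nearest = -1;
                        for (int next = 0; next < n; next++)
                            if (!seen[next] && (nearest < 0 || d[city][next] < d[city][nearest]))
                                nearest = next;
                        len += d[city][nearest];
                        seen[nearest] = true;
                        city = nearest;
                    }
                    return len + d[city][0];
                }
            }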

    • by samkass ( 174571 ) on Saturday December 20, 2008 @12:03PM (#26184015) Homepage Journal

      We'll see. The good developers probably won't be in the first wave of folks looking for jobs. I know our company is still in the "we have to figure out how to hire fast enough to do next year's work" mode.

      Where having good engineering really helps, though, is in version 2.0 and 3.0 of the product, and when you try to leverage embedded devices with some of the same code, and when you try to scale it up a few orders of magnitude... basically, it buys you flexibility and nimbleness on the market that the "throw more hardware at the problem" folks can't match.

      Despite Moore's Law being exponential over time (so far), adding additional hardware is still sub-linear for any snapshot in time. So it's not going to automatically solve most hard scalability problems.

      • Although you mention scalability and flexibility, I don't think you really hit the nail on the head.

        Performance and scalability are NOT the same. They are fundamentally different. You can have a weakly performing software product that scales nicely, and you can easily have a high performance application that doesn't scale at all.

        Understanding this difference can be the make/break point in whether or not a mildly profitable company can become a world-changer! It's fairly easy to write high-performance softwa

    • by diskofish ( 1037768 ) on Saturday December 20, 2008 @12:04PM (#26184017)
      I think there will be more developers looking for work in the future, but I don't think the price is going to drop THAT much. I just think you'll be able to find qualified developers more easily.

      As for the article, it makes a lot of sense when you're running in a controlled environment. It's really a no-brainer in consulting work: upgrading hardware or optimizing software will both meet the customer's needs, only the hardware upgrade is $2,000 and the software optimization costs $20,000.

      Of course, if you're releasing software into the wild and it needs to run on many different machines, you'd better make sure it performs well, especially if it's a retail product. So spend the extra money and make it really good.
      • by jopsen ( 885607 ) <jopsen@gmail.com> on Saturday December 20, 2008 @12:58PM (#26184363) Homepage

        Of course, if you're releasing software into the wild and it needs to run on many different machines, you'd better make sure it performs well, especially if it's a retail product. So spend the extra money and make it really good.

        People usually buy a product before they realize the performance sucks... And retailers always say it's just because your computer isn't new enough... which makes people buy new computers, not complain about the software...
        - Or maybe I'm wrong...

        But I don't know many non-computer-freaks who can tell you the specs of their computer, even fewer who compare them to the minimum requirements of a game, and almost nobody who knows that the recommended system spec is actually the minimum requirement for any practical purpose...
        And I don't blame them... I'm a nerd, not a gamer, and I can't tell the difference between most modern graphics cards...

        • Re: (Score:3, Insightful)

          by gbjbaanb ( 229885 )

          The biggest problem is that poorly optimised software can be ok (everyone runs Java or .NET acceptably, and they're not exactly resource light), but some poorly written software can be dreadfully slow - so much so that throwing more hardware at it will never work.

          You know, the websites written as a single jpeg image cut into 100 pieces, the loops that iterate over themselves several times to get 1 piece of data, etc etc. I'm sure we've all seen stuff that makes us gawk in wonder that someone actually did it

      • Re: (Score:3, Insightful)

        Imagine server software so craptastically written that the maximum number of people who can use the app at any one time is, say, 20. Now imagine that when you double the hardware capacity, user capacity only goes up by a factor of, say, 1.4.

        Still sure you're _only_ going to throw hardware at the issue when business wants the application online for a couple of thousand people?
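
        Back-of-the-envelope, using the parent's numbers (capacity 20 on the original box, each hardware doubling multiplying capacity by 1.4):

          public class SubLinearScaling {
              public static void main(String[] args) {
                  // capacity = 20 * hardware^k, where k = log(1.4)/log(2) ~ 0.485
                  double k = Math.log(1.4) / Math.log(2);
                  // Solve 2000 = 20 * h^k for h:
                  double h = Math.pow(2000.0 / 20.0, 1 / k);
                  System.out.printf("hardware needed: about %.0fx the original box%n", h); // ~13,000x
              }
          }

        Roughly 13,000 times the original hardware to serve a couple of thousand users; at that point nobody is "just throwing hardware" at anything.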
        • Re: (Score:3, Funny)

          by deraj123 ( 1225722 )
          Did I use to work with you? I got to experience this first hand once - I left when we had gone from 3 servers to 84. (our factor of capacity increase was a bit better, the first server supported about 25, the 84th about 15...)
    • Re: (Score:3, Funny)

      by dexmachina ( 1341273 )
      And if that still doesn't appeal to you, Walmart sometimes has developers going as loss leaders during the Christmas season...you can pick one up today for a fraction of its wholesale value!
  • Not sure if this site copied Jeff Atwood's post with permission or not, so I'm posting the original link to Coding Horror: http://www.codinghorror.com/blog/archives/001198.html [codinghorror.com]
  • I agree. (Score:5, Insightful)

    by theaveng ( 1243528 ) on Saturday December 20, 2008 @11:53AM (#26183945)

    Recently my boss reviewed my schematic and asked me to replace 1% resistors with 2% or 5% "because they are cheaper". Yes, true, but I spent most of the day doing that, so he spent about $650 on the task, thereby spending MORE, not less.

    So yeah, I agree with the article that it's often cheaper to specify faster hardware, or more-expensive hardware, than to spend hours and hours of expensive engineer/programmer time trying to save pennies.

    Or as Benjamin Franklin said, "Some people are penny-wise, but pound foolish." You try to save pennies and waste pounds/dollars instead.

    • Re:I agree. (Score:4, Interesting)

      by tinkertim ( 918832 ) on Saturday December 20, 2008 @12:05PM (#26184025)

      Or as Benjamin Franklin said, "Some people are penny-wise, but pound foolish." You try to save pennies and waste pounds/dollars instead.

      The same can be true in programming, but usually the scenario describes development itself, i.e. premature optimization. If your team is experienced, the only reason for this would be people trying to do big things in small spaces.

      I think it comes down to what you need, what you want and what you need to spec for your software to actually run.

      If you're willing to spend quite a bit of money on some really talented people, what you need as far as hardware (at least memory) can be reduced significantly.

      What you want is to roll a successful project in xx months and bring it to market, so raising the hardware bar seems sensible.

      Then we come down to what you can actually spec as far as requirements for the clients who want to use your software. Microsoft ended up lowering the bar for Vista in order to appease Intel and HP... look what happened.

      If your market is pure enterprise, go ahead and tell the programmers that 4 GB of RAM and a newer dual-core CPU is the minimum spec for your stuff. If your market is desktop users... maybe a bad idea.

      I don't think there's a general rule or 'almost always' when contemplating this kind of thing.

    • by StCredZero ( 169093 ) on Saturday December 20, 2008 @12:06PM (#26184029)

      This only works for certain cases. Some of your problems are too many orders of magnitude too big to throw hardware at.

      Before you do anything: Profile, analyze, understand.

      It might be useless to spend a month of development effort on a problem that you can solve by upgrading the hardware. It's also useless to spend the money on new hardware and the administrator time setting it up and migrating programs and data, when you could've just known that wouldn't have helped in the first place.

      Two questions I used to ask when giving talks: "Okay, who here has used a profiler? [hands go up] Now who has never been surprised by the results? [almost no hands]"

      Before you spend money or expend effort, just take some easy steps to make sure you're not wasting it. Common sense.
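
      A profiler gives the real picture, but even a zero-effort timing harness beats guessing. A minimal sketch (the workload is a placeholder for whatever code is under suspicion):

        public class MeasureFirst {
            static long workload() { // placeholder: substitute the suspect code path
                long sum = 0;
                for (int i = 0; i < 50_000_000; i++) sum += i % 7;
                return sum;
            }

            public static void main(String[] args) {
                workload(); // warm-up pass so the JIT has compiled the hot loop
                long t0 = System.nanoTime();
                long result = workload();
                long t1 = System.nanoTime();
                System.out.printf("result=%d, took %.1f ms%n", result, (t1 - t0) / 1e6);
            }
        }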

      • Re: (Score:3, Insightful)

        by ceoyoyo ( 59147 )

        There's the hardware, cooling, space, someone to administer it, replacing it for the next twenty years (or isn't your code going to last that long?)....

        Of course, in my line of work the goal is to go from "a million years" to "realtime" so all the hardware in the world isn't really going to help much.

    • by aliquis ( 678370 )

      Uhm, you get paid / cost more than $650/day? Or is that your consulting rate billed to him? Will whatever you did only be used once?

      Can't the wrong resistance give wrong results, since the design was calculated assuming exact values?

      • Re: (Score:3, Funny)

        by theaveng ( 1243528 )

        Engineers are billed at about $90 an hour. That includes wages, health benefits, rental for the cubicle space, and heating.

        • Re: (Score:3, Interesting)

          by theaveng ( 1243528 )

          I don't know why this is "funny"? Ask a manager sometime how much he charges per hour for his programmers/engineers, and he'll tell you $90 or maybe even $100.

          What we actually get PAID is far below that. ;-)

      • Re: (Score:3, Insightful)

        by petermgreen ( 876956 )

        Can't the wrong resistance give wrong results, since the design was calculated assuming exact values?
        We can't achieve perfection, so we have to be able to deal with variation in our designs. Designers should know when to specify precision components and when something more run-of-the-mill is OK (1% resistors are kinda on the edge: they used to be regarded as precision, but manufacturing improvements mean 1% resistors are pretty cheap nowadays).

        What the parent was getting at was that swappi

    • Re: (Score:3, Informative)

      But it is so much fun to explain to the bean counter who ordered twice as many disk drives of half the capacity you specified, because their painstaking research found them a few percent cheaper per byte, that they now have to add in the cost of twice as many RAID card channels or storage servers, rack expenses, et cetera when figuring out how much money they saved the company.

      • Re: (Score:3, Insightful)

        This is a failure on your part. Bean counters are not penny-wise, pound foolish. They do need a concrete financial analysis, however, to prove that you aren't just blowing smoke up their skirt.

        Because most of the time, programmers are doing just that.

        And also, programmers often fail to understand the cost of money, and that sometimes it is better to spend more tomorrow than a little bit today.

    • Re: (Score:3, Insightful)

      by ultranova ( 717540 )

      So yeah I agree with the article that's it's often cheaper to specify faster hardware, or more-expensive hardware, than to spend hours-and-hours on expensive engineers/programmers trying to save pennies.

      Multiplied by how many servers? Now that is the question.

      I mean, if you have a thousand-server farm already, then a speedup of just one percent is going to save you from having to buy (and power, manage and eventually replace) ten servers. How much developer time is that one percent really going to cost?

    • Re: (Score:3, Interesting)

      by mrand ( 147739 )

      The difference between 1% and 5% is 0.1 cents according to Digikey, so it is going to take 650k resistors to recoup that cost. Assuming your board has 100 resistors on it, the cost is recouped after selling ~6,500 boards. Only you can tell me if the board is going to sell that many over its lifetime.

      Having said that, if you are going to need a 1% resistor somewhere for a reason, it makes even less sense to use both 1% and 5% for that value. Just buy the 1% and eliminate the duplicate effort required to

    • Re: (Score:3, Funny)

      by bwcbwc ( 601780 )

      Uhh, you can't "throw hardware" at a hardware design. In the HW manufacturing case, you WANT to spend money on the upfront design to reduce the parts cost.

      If your design is forced to use 1% resistors instead of 2%, you'd better have been building a medical device or something else with tightly regulated specifications. Otherwise, when your boss says to use 2% or 5%, tell him to loosen the specs; otherwise you're just over-engineering.

  • by malefic ( 736824 ) on Saturday December 20, 2008 @11:54AM (#26183955)
    "10,000! We could almost buy our own ship for that!" "Yeah, but who's going to fly it kid? You?"
  • by Ckwop ( 707653 ) on Saturday December 20, 2008 @11:55AM (#26183963) Homepage

    http://www.codinghorror.com/blog/archives/001198.html [codinghorror.com]

    Give the person who actually wrote the article the ad revenue rather than this bottom feeding scum.

  • Better recalculate the trade-offs for the current economic crisis:

    TFA says the average programmer with my experience level should be getting a salary of around $50/hour, but you'll see I've recently advertised myself at $8/hour. [majorityrights.com]

    How many hundreds of thousands of jobs have been lost in Silicon Valley alone recently?

    The crisis has gutted demand for hardware as well, but things are changing so fast, yesterday's calculations are very likely very wrong. Tomorrow, hyperinflation could hit the US making hard

    • by MickLinux ( 579158 ) on Saturday December 20, 2008 @12:19PM (#26184117) Journal

      Well, unless the $8/hr is an introductory rate (that is, the first 200 hrs are at $8.50, then after that you go up to $15 or $20/hr), you could do better by joining a construction site. At our place (prestress, precast concrete plant), we are paying warm bodies $10/hr.

      Show that you can read drawings, and you can quickly rise up to $12-$14/hr. Which is, admittedly, a pittance, but if you live in a trailer home, you can make ends meet. Then you can still program in your spare time, and keep the rights to your work, to boot.

      • I think some people would take less money rather than work outside in the winter. Working outside in the summer isn't always a picnic either.

      • by Baldrson ( 78598 ) *
        It's true that for permanent on-site work my compensation requirements are much higher, so my advertised $8/hour for remote temporary consulting is apples to oranges against the $50/hour permanent salary (annualized to $99k) given in TFA. But I think it trades fairly when you consider that employers don't want to commit to fixed recurring costs in the present economic climate, and the vast majority of programming work can be done remotely.
    • Re: (Score:3, Funny)

      by 0xdeadbeef ( 28836 )

      I will send you $20 if you post a photo of yourself holding a sign that says "A non-white immigrant paid me $20 to hold this sign."

  • Back in the mainframe days (when you were likely to be charged for every byte of storage and CPU cycle), hardware was viewed as expensive. But at least in my career, since about 1980, programmer time has been viewed as the most expensive piece.

  • by MpVpRb ( 1423381 ) on Saturday December 20, 2008 @11:59AM (#26183983)

    With cheap hardware readily available, I agree that, for many projects, it makes no sense to spend lots of time optimizing for performance. When faced with this situation, I optimize instead for readability and easy debugging, at the expense of performance.

    But, and this is a big but, fast hardware is no excuse for sloppy, bloated code. Bad code is bad code, no matter how fast the hardware. Bad code is hard to debug, and hard to understand.

    Unfortunately, bad or lazy programmers, combined with clueless managers, fail to see the difference. They consider good design to be the same as optimization, and argue that both are unnecessary.

    I believe the proper balance for powerful hardware is well-thought-out, clean, unoptimized code.

    • by nine-times ( 778537 ) <nine.times@gmail.com> on Saturday December 20, 2008 @12:32PM (#26184209) Homepage

      I think if you're paying for programming vs. hardware, you're just paying for different things. I would think that would be somewhat obvious, given their very different nature, but apparently there's still some uncertainty.

      The improvements you get from optimizing software are limited but reproducible for free: "free" in the sense that if I have lots of installations, all of them benefit from any improvement made to the code. Improvements from adding new hardware cost money each time you add hardware, as well as costing more in terms of power, A/C, administration, etc. On the other hand, the benefits you can get from adding new hardware are potentially unlimited.

      And it's meaningful that I'm saying "potentially" unlimited, because sometimes effective scaling comes from software optimization. Obviously you can't always drop in new servers, or drop more processors/RAM into existing servers, and have that extra power end up being used effectively. Software has to be written to take advantage of extra RAM and more CPUs, and it has to be written to scale across servers and handle load balancing and such.

      The real answer is that you have to look at the situation, form a set of goals, and figure out the best way to reach those goals. Hardware gets you more processing power and storage for a given instance of the application, while improving your software can improve security, stability and performance on all your existing installations without adding hardware. Which do you want?

  • by Analogy Man ( 601298 ) on Saturday December 20, 2008 @11:59AM (#26183985)
    Toss as much CPU and memory as you want at a chatty transaction and you won't solve the problem. What about the cost to your 2,000 users of the application who wander off to the coffee machine while they wait for an hourglass to relinquish control to them? Over the years I have seen wanton ignorance about efficiency, scalability and performance from programmers who ought to know better.
    • by AmberBlackCat ( 829689 ) on Saturday December 20, 2008 @01:10PM (#26184463)
      I remember being subject to a class called Data Structures & Algorithms, which was all about choosing the right data structures to efficiently handle a problem, and then calculating the efficiency of algorithms to make sure it works well when you throw a huge amount of data or cycles at it. I also seem to remember being subject to Computer Architecture, in which we were taught about the underlying structure of the computer to better understand the hardware before we try to write software for it. I wonder if they teach that in those call centers in India, or those single programming courses companies make people take when they're trying to cut out the cost of programmers altogether.
  • by Marcos Eliziario ( 969923 ) on Saturday December 20, 2008 @12:04PM (#26184019) Homepage Journal

    As someone who has been there and done that, I can say that throwing hardware at a problem rarely works.
    If nothing else, faster hardware tends to increase the advantage of good algorithms over poorer ones.
    Say I have an algorithm that runs in O(N) and a functionally equivalent one that runs in O(N^2). Now let's say you need to double the size of the input while keeping the execution time constant. For the first algorithm you will need a machine that is 2X faster than the current one; for the second, O(N^2), you'll need a 4X faster machine (a quick numeric check follows below).
    Let's not forget that you need things not only to run fast but to run correctly, and then the absurdity of choosing less-skilled programmers plus more expensive hardware becomes painfully evident.

    PS: Sorry for the typos and other errors: English is not my native language, and I had a bit too much beer last night.
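
    A quick numeric check of the scaling claim above (the workload functions are hypothetical stand-ins for real O(N) and O(N^2) algorithms):

      public class BigOScaling {
          static long linearWork(long n)    { return n; }     // operations for an O(N) algorithm
          static long quadraticWork(long n) { return n * n; } // operations for an O(N^2) algorithm

          public static void main(String[] args) {
              long n = 1_000_000L;
              // Doubling the input doubles O(N) work but quadruples O(N^2) work,
              // so the quadratic algorithm needs a 4x faster machine to keep pace.
              System.out.println("O(N)   ratio: " + (double) linearWork(2 * n) / linearWork(n));       // 2.0
              System.out.println("O(N^2) ratio: " + (double) quadraticWork(2 * n) / quadraticWork(n)); // 4.0
          }
      }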

  • Throwing hardware at a bad application is ALWAYS the right way to go.

    There's an old saying "Never throw good money after bad."

    GC

  • Wrong objective (Score:2, Insightful)

    by trold ( 242154 )

    Good hardware running code written by bad programmers just means the code will fail faster. The primary goal of a programmer is to make the code work, and that does not change no matter how fast your hardware is.

    • The article seems to assume that bad programmers write slow but correct code, which is a big assumption. But the observation on cost also means that good programmers should focus on correctness rather than performance.

      Just to illustrate how difficult it is to get correctness right, on page 56 [google.com] of The Practice of Programming by Kernighan and Pike---very highly regarded book and highly regarded authors---there is a hash table lookup function that is combined with insert to perform optional insertion when the k

      • by julesh ( 229690 ) on Saturday December 20, 2008 @03:36PM (#26185613)

        But the observation on cost also means that good programmers should focus on correctness rather than performance.

        Just to illustrate how difficult it is to get correctness right, on page 56 of The Practice of Programming by Kernighan and Pike---very highly regarded book and highly regarded authors---there is a hash table lookup function that is combined with insert to perform optional insertion when the key is not found in the table. It assumes that the value argument can be safely discarded if insertion is not performed. That assumption works fine with integers, but not with pointers to memory objects, file descriptors, or any handle to a resource. An inexperienced programmer trying to generalize int value to void *value will induce memory leak on behalf of the user of the function.

        Or, for a modest increase in hardware requirements to get the same performance, we can introduce automatic resource management (aka garbage collection) which makes this particular little difficulty go away.

  • Not always the case (Score:4, Interesting)

    by mark99 ( 459508 ) on Saturday December 20, 2008 @12:10PM (#26184059) Journal

    In a lot of big orgs it is amazing how expensive it can be to upgrade your hardware, or add to an existing farm. Not because of the hardware cost, but because of all the overhead involved in designing/specifying the setup, ordering, waiting for it to come, getting space for it, installation, patching, backing up, etc.

    In fact I've seen several orgs where the cost of a "virtual server" is almost as much as a physical one, because the cost of all this servicing is so high. Whether or not this is necessary I don't want to debate here, but it is undeniably the case.

    So I think the case for throwing hardware at issues is not as clear cut as this article implies.

  • What a crock... (Score:5, Insightful)

    by johnlcallaway ( 165670 ) on Saturday December 20, 2008 @12:11PM (#26184063)
    For purely CPU-driven applications, I would agree with this statement. But NONE of the business applications I write are bogged down by CPU. They are bogged down by I/O: user uploads/downloads, network, or disk access.

    I have yet to see any application that was fixed for good by throwing hardware at it. Sooner or later the piper has to be paid and the problem fixed. Someone improved response time by putting in a new server? Does that mean they had web/app/database/data all on one machine? Bad, bad, BAD design for large applications; nowhere to grow. At least if it's tiered and using a SAN with optical channels, more servers can be added. Sometimes more, not faster, is better. And resources can be shared to make optimal use of the servers that are available.

    The FIRST step is to determine WHY something is slow. Is it memory-, CPU-, or I/O-bound? That doesn't take a rocket scientist; looking at sar in Unix or Task Manager in Windows can show you that. Sure, if it's CPU-bound, buying faster CPUs will fix it.

    The comment about developers having good boxes isn't the same issue as application hardware. My latest job gives every developer a top-notch box with two monitors; I was in heaven. Unfortunately, it can't stop there: I also need development servers with the disk space and memory to test large data sets BEFORE they go into production.

    Setting expectations is the best way to manage over-optimization. Don't say "I need a program to do this"; state "I need a program to do this work in this time frame". It is silly to make a daily batch program that takes 2 minutes run 25% faster. But it's not silly to make a web page respond in under 2 seconds, or a 4-hour batch job run in 3, *if* it is needed. Without the expectation, there is no starting or stopping point. Most developers will state "it's done" when the right answer comes out the other end, while a few may continue to tune it until it's dead.
      You concentrate on CPU. Many web apps, probably including the one that I am looking at now (stats from the live system are still pending...), could go faster with more and better caching, i.e. more memory on the web or database tier. That's hardware too.

    • by davecb ( 6526 ) * <davecb@spamcop.net> on Saturday December 20, 2008 @01:16PM (#26184509) Homepage Journal

      Throwing hardware at a problem means the writer failed to use his sysadmin staff to do basic capacity planning while there wasn't a problem.

      And as johnlcallaway said, the problem isn't usually CPU: most bottlenecks are either disk I/O or code-path length.

      I'm a professional capacity planner, and it seems only the smartest 1% of companies ever think to bring me in to prevent problems. A slightly larger percentage do simple resource planning using the staff they already have. A good example of the latter is Flickr, described by John Allspaw in The Art of Capacity Planning [oreilly.com], where he found I/O was his problem and I/O wait time was his critical measurement.

      Failing to plan means you'll hit the knee in the response-time curve, and instead of a few fractions of a second, response time will increase (degrade) so fast that some of your customers will think you've crashed entirely.

      And that in turn becomes the self-fulfilling prophecy that you've gone out of business (;-()

      Alas, the people who fail to plan seem to be the great majority, and they suffer cruelly for it. The last few percent are those unfortunates whose professional staff planned, warned, and were ignored. Their managers pop up, buy some CPUs or memory to solve their I/O problem, scream at their vendor for not solving the problem, and then suddenly go quiet. The hardware usually shows up on eBay, so I think you can guess what happened.

      --dave

  • Get a rope (Score:5, Interesting)

    by Anonymous Coward on Saturday December 20, 2008 @12:11PM (#26184067)

    I almost feel an order of magnitude more stupid for reading that article. Throwing more hardware at a problem definitely makes sense for a small performance issue, but that is rarely the case. The whole idea makes me sick as a developer. This reminds me of the attitude of many developers of a certain web framework out there: instead of fixing real problems, they cover up fatal flaws in their architecture with a hardware band-aid. There's no denying it can work sometimes, but at quite a high cost, and it's completely inappropriate for some systems. Not everyone is just building a stupid to-do-list application with a snappy name.

    Consider that many performance problems have a hard upper limit when you graph them: at some point, throwing more hardware at the problem does absolutely nothing. Further, the long-term benefit of hardware is far less than the potential future contributions of a highly paid, skilled programmer.

    Another issue is that there are plenty of performance problems I have seen that cannot be scaled away just by adding more hardware. A classic example is some RDBMS packages with certain applications. Often databases can be scaled vertically (limited by RAM and I/O performance) but not horizontally, because of problems with stale data, replication, application design, etc. A programmer can fix these issues so that, yes, you can then add more hardware, but it is far more valuable in the long term to have someone who enables you to grow properly.

    Actually fixing an application is a novel idea, don't you think? If my air conditioning unit is sometimes not working, I don't go and install two air conditioning units. I either fix the existing one or rip it out and replace it.

    Further, there are plenty of performance problems that can never be solved with hardware. Tight looping is one I often see: it does not matter what you throw at it, the system will be eaten. Another example is a garbage-collection issue. Adding more hardware may help, but typically just delays the inevitable. Scaling horizontally in this case does next to nothing, because if every user hits the same problem you have not exactly bought more time (therefore you must go vertically as well, only really delaying the problem).

    The mentality of this article may be innocent in some ways, but it reminds me of this notion that IT people are resources and not actual humans. Creativity, future productivity, problem solving skills, etc are far more valuable to any decent company than a bunch of hardware that is worthless in a few months and just hides piss poor work by the existing employees.

    It feels like a return to the dot-com bubble and F'd Company. I am sure plenty of companies following this advice can look forward to articles about their own failures. If someone proposes adding hardware for a sane reason, say to accommodate a few thousand more visitors with some more load-balanced servers, by all means do so. If your application just sucks and you need to add more servers to cover up mistakes, it is time to look elsewhere, because your company is a WTF.

    • Re:Get a rope (Score:5, Insightful)

      by Thumper_SVX ( 239525 ) on Saturday December 20, 2008 @02:53PM (#26185273) Homepage

      Besides, one thing that's not covered in the article is that hardware carries a much higher residual maintenance cost.

      In order to maintain production, many companies these days insist that hardware be in-warranty and thus replaceable at a moment's notice. There also comes a point at which the ongoing cost of the hardware far exceeds the cost of a single programmer writing a decent app that doesn't need it.

      I have recently saved my company the equivalent of my salary, doubled, over the next two years, purely in the cost of maintenance contracts for around 150 servers. Granted, this was using virtualization rather than programming to combat the problem, but in this case it made sense. The concept is still the same regardless.

  • Wait, what? (Score:5, Interesting)

    by zippthorne ( 748122 ) on Saturday December 20, 2008 @12:12PM (#26184071) Journal

    Surely that might work for a one-off, but if you're selling millions or even thousands of copies of your software, even a $100 increase in hardware requirements costs the economy millions. Just because it doesn't cost YOU millions doesn't mean you don't see the cost.

    If your customers are spending millions on hardware, that money is going to the hardware vendors, not to you. And more importantly, that money represents wasted effort. Effort that could otherwise be used to increase real wealth, thus making the dollars you do earn more valuable.

    So I guess the lesson is: if you're CERN, throw hardware at it. If you're Adobe, get a lot of good programmers/architects.

    • Re:Wait, what? (Score:4, Insightful)

      by zrq ( 794138 ) on Saturday December 20, 2008 @03:21PM (#26185497) Journal

      So I guess the lesson is: if you're CERN, throw hardware at it. If you're Adobe, get a lot of good programmers/architects.

      Actually, I think that is the wrong way round. Places like CERN do 'throw hardware at it', lots of hardware, and it still isn't enough.

      Modern desktop systems have gigabytes of memory, hundreds of gigabytes of disk and multi-core processors... and in the Adobe example you are using them to display PDF documents or Flash movies. Your application would typically be using less than 1% of the available resources. Spending lots of money optimizing the performance does not make commercial sense.

      Large science projects like CERN are pushing the limits of hardware and software. They typically deal with data sets, data rates and processing requirements that are orders of magnitude larger than most systems can cope with.

      A typical science desktop application needs to be able to process and display gigabyte data sets, often comparing more than one dataset visually in real time. A typical eScience grid service needs to handle extremely large (petabyte) datasets in real time, and you can't drop data or pause for a moment: the data stream is live and you only get one chance to process and store it.

      Same applies to Google, Yahoo, FaceBook etc. If your application is pushing the hardware to the limits, then optimizing the software to increase performance by 5% is worth a lot of developer time.

  • by itzdandy ( 183397 ) on Saturday December 20, 2008 @12:13PM (#26184077) Homepage

    I think you need to complicate this logic a bit by taking into account the added electricity required to power the extra servers, run the servers at a higher load, or run the clients at a higher load, as well as the increased air-conditioning cost.

    Also, time is money: if a program takes more time, users sit idle more, which also has a cost.

    Best practice? Program as efficiently as possible. Programming expenses are spent once, while the power bill lasts forever.

  • by 3seas ( 184403 ) on Saturday December 20, 2008 @12:16PM (#26184097) Homepage Journal

    ... throw the money at genuine software engineering (not pseudo-engineering) so that we have much better tools to program with.

  • A problem that has a nonlinear impact on performance cannot be solved by adding two more servers...
    The simplest example is an index in a database. Before adding the index a query takes 2 days to execute; after adding it, the query executes in 100 milliseconds. How can you solve that by adding more hardware? You also usually cannot solve I/O issues between app and DB servers by "just adding two more servers"...
    Not to mention that when it comes to scaling the DB you really cannot just depend on "adding of anoth
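
    A minimal JDBC sketch of that example (the table, column and index names are hypothetical): no amount of extra hardware turns a full-table scan into an index lookup, but one DDL statement does.

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.Statement;

      public class AddIndex {
          public static void main(String[] args) throws Exception {
              try (Connection conn = DriverManager.getConnection(args[0]); // JDBC URL passed in
                   Statement stmt = conn.createStatement()) {
                  // Before: SELECT ... WHERE customer_id = ? scans every row (days, per the parent).
                  // After: the same query becomes an index lookup (milliseconds).
                  stmt.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)");
              }
          }
      }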

  • by olyar ( 591892 ) on Saturday December 20, 2008 @12:18PM (#26184111) Homepage Journal

    One thing not in the equation here: hardware is cheap, but having that hardware managed isn't so cheap. When you scale from a couple of servers to a big bank of servers, you have to pick up system admins to manage all of those boxen.

    Less expensive than a programmer (sometimes), but certainly not free.

  • People Are Expensive (Score:3, Informative)

    by Greyfox ( 87712 ) on Saturday December 20, 2008 @12:18PM (#26184113) Homepage Journal
    And using them inefficiently is also expensive. If you're looking for a quick fix perhaps you should first consider your company's processes and the tools you use to support those processes. If you can hire a programmer or two to write and maintain tools that allow you to eliminate some of the meetings you have to have every week because no one knows what's going on, you'll find it doesn't take very long for him to pay for himself.
  • But not with enough emphasis. To the suggested procedure:
    1. Throw cheap, faster hardware at the performance problem.
    2. If the application now meets your performance goals, stop.
    3. Benchmark your code to identify specifically where the performance problems are.
    4. Analyze and optimize the areas that you identified in the previous step.
    5. If the application now meets your performance goals, stop.

  • Absolutely True (Score:5, Interesting)

    by titzandkunt ( 623280 ) on Saturday December 20, 2008 @12:21PM (#26184135)
    When I was young, eager and naive I worked at a place that was doing some pretty heavyweight simulations which took a good three-four days on a (I think) quad-processor Sun box.

    It was quite a big site with a relatively high turnover of decent hardware. Next to the IT support team's area was a room about 6 yards by 10 yards almost full to the ceiling with older monitors, printers and a shitload of commodity PCs. And I'd just started reading about mainstream acceptance of Linux clustering for parallelizable apps.

    Cue the lightbulb winking into life above my head!

    I approached my boss, with the idea to get those old boxes working again as a cluster and speed things up for the modelling team. He was quite interested and said he'd look into it. He fired up Excel and started plugging in some estimates...

    Later that day I saw him and asked him what he thought. He shook his head. "It's a non-starter," he said. Basically, if the effort involved in getting a cluster up and working - including porting the apps - was more than about four man-weeks, it was cheaper and a lot safer just to dial up the Sun rep, invoke our massive account (and commensurate discount) with them, and buy a beefier model from the range. And the existing code would run on it just fine with no modifications.

    A useful lesson for me in innovation risk and cost.
  • Because throwing more hardware at the problem will fix your software bugs. Oh wait...

  • What point is he trying to make? Programmers do not spend 100% of their time on optimisation. They have to design front ends, create business logic, debug, document, and optimise when necessary. Let's say the average programmer spends 10% of his or her time on optimisation. That's maybe $8000 per year per programmer.

    Now assume that the application has a low number - say 10 customers per programmer, for a server application, and each customer instance needs 2 boxes. So the programmer optimisation cost is cur

  • Nothing new (Score:3, Insightful)

    by fermion ( 181285 ) on Saturday December 20, 2008 @12:33PM (#26184221) Homepage Journal
    This has been the trend for a very long time. Once, a long time ago, people wrote code in assembly. Even not so long ago, say 20 years, there were enough applications where it still made sense to do assembly simply because it was the only way for affordable hardware to perform well.

    Ten years ago many web servers were hand-coded in relatively low-level compiled languages. Even though hardware had become cheaper, and the day of the RAID rack of PCs was coming upon us, to get real performance one had to have software developers, not just web developers.

    Of course cheap, powerful hardware has made all that a thing of the past. There is no reason for an average software developer to have anything but a passing familiarity with assembly. There is no reason for a web developer to know anything other than interpreted scripting languages. Hardware is, and always has been, cheaper than people. That is why robots build cars. That is why IBM sold a boatload of typewriters. That is why the Jacquard loom was such a kick-butt piece of machinery.

    The only question is how much cheaper hardware is, and when it makes sense to replace a human with a machine, or maybe a piece of software. This is not always clear. There are still relatively undeveloped places in the world where it is cheaper to pay someone to wash your clothes by hand than to buy and maintain a washing machine.

    • Re: (Score:3, Interesting)

      Once, a long time ago, people wrote code in assembly... Of course cheap powerful hardware has made that all a thing of the past.

      Actually, I would argue that advances in compilers and interpreters have been just as important to that trend as advances in hardware.

  • The main goal of writing solid code isn't to lower resource requirements... it's to increase maintainability.

    Sure, you can hack out shitty code and make up for it with more hardware to handle the memory leaks and bloat... and probably save some money in the short term. In the long term, though, when you need to add something to your mess of spaghetti code, you're going to spend much more programmer time... which is what you were trying to save from the get-go.

    I'm a firm believer that a little extra time and mone

  • This is why interpreted or semi-interpreted programming languages make so much sense, especially for stuff such as web applications. Here you can scale to whatever the best hardware is, even changing CPU, without worrying that you will need to recode or recompile. The same can't generally be said for languages such as C++. It's ironic that you would have to choose an approach that is probably less optimal to get cheaper long-term improvements in performance.

  • by br00tus ( 528477 ) on Saturday December 20, 2008 @12:39PM (#26184249)

    This uses servers as an example, but what about desktops? We use Windows desktops where I am, and having AIM and Outlook open all the time is more or less mandatory for me. Plus there are these virus-scanning programs always running which eat up a chunk of resources. I open up a web browser and one or two more things and stuff starts paging out to disk. I'm a techie and sometimes need a lot of stuff open.

    We have a call center on our floor, where the people make less than one third what I do, and who don't need as many windows open, yet they get the exact same desktop I do. My time is three times more valuable than theirs, yet the company gives me the same old, low-end desktop they get, resulting in more of my productive time being lost - those seconds I wait when I switch from an ssh client to Outlook and wait for Outlook to be usable add up to minutes and hours eventually. Giving everyone the same desktop makes no sense (I should note I eventually snagged more RAM, but the point is about general company policy more than my initial problems).

  • by Todd Knarr ( 15451 ) on Saturday December 20, 2008 @12:40PM (#26184261) Homepage

    The first is that the hardware cost isn't the only cost involved. There are also the costs of running and maintaining that hardware. Many performance problems can't be solved by throwing just a single bigger machine at them, and every one of the multiple machines means more complexity in the system, another piece that can fail. And it introduces more interactions that can cause failures. An application may be perfectly stable using a single database server, but throw a cluster of 3 database servers into the mix and a problem with the load-balancing between the DB servers can create failures where none existed before. Those sorts of failures can't be addressed by throwing more hardware at the problem; they need code written to stabilize the software. And that sort of code requires the kind of programmer that you don't get cheap right out of school. So now you're spending money on hardware and you're still having to hire those pesky expensive programmers you were trying to avoid hiring. And your customers are looking at the failure rates and deciding that maybe they'd like to go with your competitor who's more expensive but at least delivers what he promises.

    Second is that, even if the problem is one that can be solved just by adding more hardware, inexperienced programmers often produce code whose performance profile isn't linear but super-linear. That is, doubling the load doesn't require twice the hardware to maintain performance, it requires an order of magnitude more hardware. It doesn't take long for the hardware spending to become completely unbearable, and you'll again be caught having to spend tons of cash on enough hardware to limp along, while spending tons of money on really expensive programmers to get the software to where its performance curve is supportable, and watching your customers bail to someone offering better than same-day service on transactions.

    Go ask Google. They're the poster boy for throwing hardware at the problem. Ask them what it took on the programming-expertise side to create software that would let them simply throw hardware at the problem.

  • by Alpha830RulZ ( 939527 ) on Saturday December 20, 2008 @12:46PM (#26184289)

    If your performance problem is in an Oracle or SQL Server database, throwing more hardware at the problem probably has a license fee attached to it, and that can easily be measured in multiple developer salaries. This also causes people to scale using bigger boxes, rather than more boxes, and that gets you out of the range of commodity hardware and into the land of $$$$$.

    Which is why I don't care to deliver on Oracle, but my employer hasn't figured out that Postgres and MySQL will work for a lot of problems, and is still fellating the Oracle and IBM reps.

  • More factors (Score:3, Insightful)

    by Lazy Jones ( 8403 ) on Saturday December 20, 2008 @01:07PM (#26184429) Homepage Journal
    Generally, investing in hardware means more people with salaries like programmers' on the payroll (designing the architecture, the maintenance tools, installing the software and hardware, keeping it running...). A lot of these things can be automated or done with little effort, but it takes someone as competent (and expensive) as a good programmer to get it right.

    In the long run, your best investment is still the good programmer, as long as you can keep him happy and productive, because then you can grow more/faster (by buying hardware as well).

  • by johnrpenner ( 40054 ) on Saturday December 20, 2008 @01:15PM (#26184505) Homepage

    Andy Hertzfeld, engineer on the original Macintosh team:

    Steve was upset that the Mac took too long to boot up when you first turned it on, so he tried motivating Larry Kenyon by telling him: "Well, you know, how many millions of people are going to buy this machine? It's going to be millions of people. Let's imagine you can make it boot five seconds faster. Well, that's five seconds times a million, every day. That's fifty lifetimes. If you can shave five seconds off that, you're saving fifty lives." And so it was a nice way of thinking about it, and we did get it to go faster. (PBS, Triumph of the Nerds, Part 3)

  • by Sique ( 173459 ) on Saturday December 20, 2008 @01:22PM (#26184559) Homepage

    When I was a programmer, we once had a programming job at a large bank. One of our main reports ran across all booked loans and calculated the future finance stream (interest and amortization), either until the debt was paid off or up to 40 years at current interest rates. This report was sent to the Federal Bank for control, and to the department tasked with managing the bonds that raised enough capital for further loans.

    This report took 200 processor hours to complete. To get it done, it was split into 18 tranches, each running 11 hours. So it was possible to complete the job during a weekend run on 18 processors, and restart it twice in case of errors.

    A colleague of mine took on the task of rewriting the report to speed it up. For that, she hooked into each booking that changed the loan amount, interest rate, repayment, end-of-contract or amortization, and modified it to write a flag into a table.

    Then she rewrote the central report to store the finance stream each time it was calculated. Loans unchanged since the last calculation didn't have the flag set, so the report reused the old calculation. This sped up the report about 150 times: instead of 200 processor-hours, it now completed within 1:20 h.

    It allowed four large RS/6000s to be put out of service, the service contracts to be cancelled, and the report to be rescheduled to run daily instead of on weekends, saving weekend man-hours. With the daily report going to the bond management department, the finance controlling unit also became interested and used the report results to refine their own tools. This together easily paid for the programming time put into the report.

    As you can see: there are programming tasks where just throwing more computing power at the problem doesn't solve it. It doesn't even have to be a high-level programming job; sometimes it's a dull task (finding all the points in a bookkeeping system where a booking changes the finance stream of a loan is a dull task!), but if someone gets it done, it pays off easily.
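
    The pattern generalizes: flag what changed, recompute only that. A minimal sketch of the dirty-flag caching described above (the types and in-memory storage are hypothetical; the real flags lived in a database table):

      import java.util.*;

      public class IncrementalReport {
          static final Map<Long, double[]> cache = new HashMap<>(); // loanId -> finance stream
          static final Set<Long> dirty = new HashSet<>();           // set by the booking code

          // Hooked into every booking that changes a loan's finance stream.
          static void flagChanged(long loanId) { dirty.add(loanId); }

          static double[] streamFor(long loanId) {
              if (dirty.contains(loanId) || !cache.containsKey(loanId)) {
                  cache.put(loanId, recompute(loanId)); // expensive, but only for changed loans
                  dirty.remove(loanId);
              }
              return cache.get(loanId); // unchanged loans reuse the stored calculation
          }

          static double[] recompute(long loanId) {
              return new double[480]; // placeholder for the 40-year interest/amortization projection
          }
      }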

    • Re: (Score:3, Insightful)

      by Krishnoid ( 984597 ) *
      It makes me think that applying data related to this together with Moore's law could produce a heuristic to estimate the relative benefits of each approach:
      • Say you can optimize the code to give you a shot (P probability) at speeding up your entire operation by a factor of N or by M orders of magnitude, for a cost of D dollars in person-hours
      • speedup/dollar == f(P, N or M, D), a mostly multiplicative estimate assuming you can get a rough idea of P from a profiling run and a little thought about the archi
  • by 0xdeadbeef ( 28836 ) on Saturday December 20, 2008 @01:47PM (#26184747) Homepage Journal

    The idea expressed in that article isn't just stupid, it is economy destroying, civilization threatening, mind-bogglingly stupid.

    The author is trying to solve the problem of inadequate resources by spending more to increase the brute-force effort toward his already-failing solution. It is the mythical man-month expressed in CPU horsepower.

    That isn't improving your situation, that is merely delaying your inevitable downfall. You're running to stand still, and eventually your organization will collapse of exhaustion, while your competitors, who invested in smart design and smart people, lap your corpse.

    And if you simply can't afford better people, then your reach is exceeding your grasp. Scale back your ambition, plan for when you can, or accept your niche and buy the third party solutions produced by experts who can write scalable software.

  • by wolf12886 ( 1206182 ) on Saturday December 20, 2008 @06:31PM (#26186779)

    The bottom line is, software improvement is a one time cost, once its done, it's done.

    Hardware solutions, on the other hand, though cheaper outright, are recurring (you'll need to keep upgrading that hardware as it becomes outdated) and scale up with demand (if your load doubles, you'll need to double the hardware as well).

    This is why, except in cases where demand won't increase or the extra hardware is unlikely to become outdated, software solutions tend to be the more economical choice.
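    A toy back-of-the-envelope run of that argument, with completely made-up numbers (a one-time fix versus hardware that is re-bought every refresh cycle while demand grows):

        // Illustrative only: compares a one-time software fix against
        // recurring hardware spend that scales with demand.
        class BreakEven {
            public static void main(String[] args) {
                double softwareFix = 50000;  // one-time programmer cost (assumed)
                double serverCost = 5000;    // per server per refresh cycle (assumed)
                int servers = 4;             // extra servers needed today (assumed)
                double demandGrowth = 1.5;   // demand multiplier per cycle (assumed)

                double hardwareTotal = 0;
                for (int cycle = 1; cycle <= 5; cycle++) {
                    hardwareTotal += servers * serverCost;
                    System.out.printf("cycle %d: software %.0f, hardware %.0f%n",
                            cycle, softwareFix, hardwareTotal);
                    servers = (int) Math.ceil(servers * demandGrowth);
                }
            }
        }

    Under these assumptions the hardware route catches up with the one-time fix by the second refresh cycle and keeps climbing after that.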

  • Stupid Idea (Score:3, Interesting)

    by musicmaker ( 30469 ) on Sunday December 21, 2008 @12:36AM (#26188903) Homepage

    Ever come across the n+1 selects problem in Hibernate? How many junior devs are good enough to figure out what's going on? Not many.

    It means that if you are fetching 1,000 records from the database, it can take as much as 1,000 times as long as it should. Is halving your dev-team cost really worth a 1,000-fold increase in hardware costs because your programmers don't understand the technology properly?
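    For anyone who hasn't hit it, here is a minimal JPA/Hibernate-style sketch of the problem and the standard fix. The entities and fields are hypothetical, but the "join fetch" cure is the usual one:

        import java.util.List;
        import javax.persistence.*;

        @Entity
        class Customer {
            @Id Long id;
            // Lazy by default for collections: touching it triggers a query.
            @OneToMany(mappedBy = "customer")
            List<Policy> policies;
        }

        @Entity
        class Policy {
            @Id Long id;
            @ManyToOne Customer customer;
            double premium;
        }

        class NPlusOneDemo {
            // 1 select for the customers, then +1 select per customer as
            // each lazy collection is touched: 1,001 queries for 1,000 rows.
            static double naive(EntityManager em) {
                double total = 0;
                for (Customer c : em.createQuery(
                        "select c from Customer c", Customer.class)
                        .getResultList())
                    for (Policy p : c.policies)
                        total += p.premium;
                return total;
            }

            // The fix: fetch the association in the same query (1 select).
            static double fetched(EntityManager em) {
                double total = 0;
                for (Customer c : em.createQuery(
                        "select distinct c from Customer c join fetch c.policies",
                        Customer.class)
                        .getResultList())
                    for (Policy p : c.policies)
                        total += p.premium;
                return total;
            }
        }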

    • Re:Stupid Idea (Score:5, Interesting)

      by Shados ( 741919 ) on Sunday December 21, 2008 @12:24PM (#26191721)

      Ever come across the n+1 selects problem in Hibernate

      Common issue indeed, and actually a problem with Hibernate... in this day and age, there are algorithms that object-relational mappers could implement to avoid at least the most common scenario where this happens in Hibernate or LINQ to SQL/Entity Framework... I'm not sure why it never gets fixed.

      That being said, if you read the article (I know, I know, Slashdot), they're talking about premature optimisation. Basically, things like avoiding Hibernate completely because of its overhead, or optimising every single query as much as possible (even if performance is acceptable) to save every last bit of juice, so that your app can run in 10 megs of RAM instead of 100. They're not, in any way, talking about using shitty programmers; they advocate using GOOD developers' time more efficiently (solving real problems instead of spending too much time on performance).

      Almost everyone who replied saying it was stupid brought up either how programming mistakes can screw things up, or how it's possible to make a system that doesn't scale at all. All those people missed the point.

      You can build a system that is slower but still scales and is still correctly done a LOT (and I mean a LOT) faster, if you don't get nitpicky and try to optimise everything as you go. You simply avoid doing anything totally dumb, code according to best practice, etc., but you're not going to rewrite your system in C to avoid garbage collection, you're not going to rewrite the framework's data structures to squeeze out 1% more performance, and you won't avoid Hibernate (completely) just to dodge the mapping overhead. You can tap into these time-saving paradigms just by upgrading your hardware. You still need competent developers!!! But those competent developers can do more in less time.

      That's -all- the author was advocating.

  • by tacocat ( 527354 ) <tallison1&twmi,rr,com> on Sunday December 21, 2008 @07:54AM (#26190441)

    Where I currently work, a pizza-box server costs, per year, as much as 2.5 developer salaries over the same period. That's grossly out of balance with this article.

    Perhaps there is a reason some companies need Government Bailouts...

  • Consumer products (Score:3, Insightful)

    by WarJolt ( 990309 ) on Sunday December 21, 2008 @08:16AM (#26190505)

    if(units() * savings() > programmercost())
        hireprogrammer();

    When you sell a million units, a penny means $10,000 and $1 means a brand-new Lamborghini. I guess this article only covers enterprise software, where the number of machines running your code could be in the thousands. The opposite argument can be made for consumer products, where the unit counts are in the millions.

  • by roman_mir ( 125474 ) on Sunday December 21, 2008 @12:20PM (#26191701) Homepage Journal

    I'm late to this story, but I'd like to add something to it for the sake of the late readers anyway :)

    In the second half of 2001 I was on a project for a long time defunct company called WorldInsure (hey, former Corelan guys, any of you still out there, working for Symcor by any chance?)

    So, I came in about halfway into the one-year project; within a few months the most senior developer on the project left, but the team was still about 40 people in total. The application ran to something like five megabucks by the end, but the client didn't want to pay the last million because the performance was outrageously slow: 12 concurrent transactions per second, as opposed to the 200 the client wanted, on two 4-way Sun servers that were gigantic for the time.

    The app was a very detailed, page-after-page insurance questionnaire that branched into more and more pages and questions as previous questions were answered. At some point a PDF was generated from the answers and provided on one of the last pages. The problem was moving from page to page: the waits were too long, approaching a minute for some pages.

    I was asked to speed it up. Long story short, after 1.5 months of tinkering with code produced by a bunch of novices, here is the list of improvements that I can remember at this point:

    1. Eliminated about 80% of unnecessary database reads by removing TopLink.
    2. Eliminated about 80% of unnecessary database writes by changing the way data was persisted: instead of persisting the entire data set on each new page, only the incremental changes were persisted.
    3. Reduced page pre-processing by getting rid of the XSLT transformers on XML structures and switching to JSPs instead.
    4. Removed cluster IO thrashing by reducing the session object size from an unnecessary 1 MB to a manageable 10 KB.
    5. Reduced CPU load by caching certain data sets instead of reprocessing them on each request within a user session.
    6. Decoupled PDF generation into a separate application, communicating the request to generate the PDF via a simple home-grown message queue done with a database table (a rough sketch of the idea follows this list). This was one of the more serious problems in the app, because it could bring down a server thanks to the buggy code in the Adobe PDF generator used at the time. In fact, the original application ran PDF generation as a separate Java application, restarted after about every 5 generations and invoked via Runtime.exec() so as not to bring down BEA WebLogic. Later on this entire portion was rewritten and the Adobe code thrown away. I am sure the Adobe code is fine today, but at the time it was a real pig.
    7. Removed many, many unnecessary System.out.println calls and replaced them with proper logging where needed.
    8. Fixed the home-grown servlet manager (similar to Struts' main servlet); this code was freaking ugly as hell and totally unstable.
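    Since point 6 comes up a lot, here is a rough, hypothetical sketch of such a table-based queue. Table and column names are invented, and a real version would also need row claiming so two workers can't grab the same request:

        import java.sql.*;

        // Worker process, separate from the app server: polls the table,
        // renders the PDF, marks the row done. If the PDF library crashes,
        // it takes down only this worker, never the web application.
        class PdfQueueWorker {
            void pollOnce(Connection con) throws SQLException {
                long requestId;
                long answersId;
                try (Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(
                             "SELECT id, answers_id FROM PDF_REQUESTS " +
                             "WHERE status = 'NEW'")) {
                    if (!rs.next()) return;   // nothing queued
                    requestId = rs.getLong("id");
                    answersId = rs.getLong("answers_id");
                }
                generatePdf(answersId);       // the risky part
                try (PreparedStatement ps = con.prepareStatement(
                        "UPDATE PDF_REQUESTS SET status = 'DONE' WHERE id = ?")) {
                    ps.setLong(1, requestId);
                    ps.executeUpdate();
                }
            }

            void generatePdf(long answersId) {
                // render the questionnaire answers to PDF (omitted)
            }
        }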

    There were some other smaller fixes, but the main bulk is listed here. By the end of the month and a half, the app was doing over 300 simultaneous transactions per second.

    300/12: that's a 25-fold performance improvement from code alone. I am not at all convinced that this improvement could have been achieved through hardware at all, but even if it could, it would have cost much more than I did at the time (about 70 CAD/hr for 1.5 months).

    Oh, did I mention that the client coughed up the last million bucks after that? After all, the code met their performance expectations and exceeded them by at least half.
