Hardware Is Cheap, Programmers Are Expensive
Sportsqs points out a story at Coding Horror which begins:
"Given the rapid advance of Moore's Law, when does it make sense to throw hardware at a programming problem? As a general rule, I'd say almost always. Consider the average programmer salary here in the US. You probably have several of these programmer guys or gals on staff. I can't speak to how much your servers may cost, or how many of them you may need. Or, maybe you don't need any — perhaps all your code executes on your users' hardware, which is an entirely different scenario. Obviously, situations vary. But even the most rudimentary math will tell you that it'd take a massive hardware outlay to equal the yearly costs of even a modest five person programming team."
Timing is everything (Score:4, Interesting)
Sure, right now it may be more expensive to hire better developers.
But just wait a couple more months when unemployment starts hitting double digits. You'll be able to pick up very good, experienced developers for half, maybe a third of their current salaries.
Sure, invest in some HW now. That stuff will always be handy. But don't just go off and assume that developers will be expensive forever.
Re: (Score:2)
For a third of the price of a developer you can buy an enormous amount of hardware.
Re: (Score:3, Funny)
Yeah, but you have to appreciate where all this enormous amount of hardware, enormous amount of hardware goes. It doesn't just come on a truck you can dump things on, it has to come via a series of tubes. Oh... Wait.
Re:Timing is everything (Score:5, Insightful)
Re: (Score:3, Interesting)
Big deal.
Right now, I've got a problem with a software upgrade where it has to convert the Oracle database to the new version. The conversion takes 40 hours on a database with fewer than 6 million rows, because the code starts a transaction, updates one row, then ends the transaction. After seeing the actual SQL being used, it could all be replaced by "UPDATE thetable SET field1 = field2 + constant WHERE field3 = anotherconstant".
I literally could not buy hardware fast enough to overcome the stupidity of these
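For the curious, the difference in JDBC terms looks roughly like this (a sketch: the connection details and the two constants are hypothetical; the UPDATE statement is the one quoted above):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class SetBasedUpdate {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details.
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/orcl", "user", "pass");
        conn.setAutoCommit(false);

        // The upgrade's pattern: one transaction per row. With ~6 million
        // rows that is ~6 million round trips and ~6 million commits:
        //   for each id: UPDATE thetable SET field1 = ... WHERE id = ?; COMMIT;

        // The set-based equivalent: one statement, one commit.
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE thetable SET field1 = field2 + ? WHERE field3 = ?")) {
            ps.setInt(1, 42); // 'constant' from the SQL above (value hypothetical)
            ps.setInt(2, 7);  // 'anotherconstant' (value hypothetical)
            ps.executeUpdate();
        }
        conn.commit();
        conn.close();
    }
}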
Re:Right tool for the job; factor in hidden costs (Score:3, Interesting)
Now if the existing application is the only way that they know how to get to the data, then it may easily become the golden-hammer / silver bullet that gets used for performing an upgrade, rather than writing an external sql script which they might not be familiar with. Add to this the common convention of "nothing touc
Re:Timing is everything (Score:4, Insightful)
The $hardware$ vs. $programmer-time$ equation is always based on the assumption that the programmer is always worth their qualifications.
You are correct that this is an unrealistic assumption but, like the "rational self-interest" assumption in economics, it is a very useful one.
Given a set of uniformly competent programmers, you quickly reach the point of diminishing returns on optimizing performance rather than buying hardware - but that's because a competent programmer implements code with reasonable performance in the first place. Sadly, some people think hardware can compensate for competence, when that is an entirely different problem with an entirely different variable (e.g., an incompetent programmer with more time is not always a good thing).
First you have to reach the level of competence where you can talk about performance optimization in the first place. What you describe is not 'unoptimized code', and it is not a naive but reasonable implementation - it's gross incompetence (assuming SQL qualifications were claimed in the first place).
As you said, you can't pay for enough hardware to compensate for that. But in the same vein, you really, *really* do not want to pay for more of that programmer time either.
Re:Timing is everything (Score:4, Insightful)
Indeed. It depends entirely on the problem - this is where computational complexity comes in, but cheap programmers won't even know what computational complexity is. The more complex the problem, the more knowledgeable your programmers will need to be to come up with novel solutions.
You only have to look at most combinatorial optimization problems to see where you may run into trouble: a cheap programmer may try to brute-force it, and no matter how much hardware you throw at the problem, that method simply isn't going to work for all but the smallest of data sets. You're going to have to get someone who knows the tricks (algorithms such as ACO, ant colony optimization) to produce acceptable solutions in a sensible time frame.
But you don't even need the hardest COPs to demonstrate the kinds of problems you may run into; even the most basic COPs can throw lesser-skilled programmers, while better programmers can implement a solution without even needing to look up any references.
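To put numbers on the brute-force point, here is a throwaway sketch (the (n-1)!/2 tour count is the standard figure for a symmetric travelling salesman problem):

import java.math.BigInteger;

public class BruteForceBlowup {
    public static void main(String[] args) {
        // Brute force on TSP means checking every tour: (n-1)!/2 of them
        // for n cities.
        for (int n : new int[] {10, 15, 20, 25}) {
            BigInteger tours = factorial(n - 1).divide(BigInteger.valueOf(2));
            System.out.printf("%2d cities -> %s tours%n", n, tours);
        }
        // At 25 cities that is ~3.1e23 tours; even checking a billion tours
        // per second would take roughly ten million years. No hardware
        // budget closes that gap - a decent heuristic does.
    }

    static BigInteger factorial(int n) {
        BigInteger f = BigInteger.ONE;
        for (int i = 2; i <= n; i++) f = f.multiply(BigInteger.valueOf(i));
        return f;
    }
}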
It's another case of cutting corners. To the companies considering this option: sure, if you want to hire cheaper programmers and throw hardware at the problem, that's fine. Just don't come crying when your entire system keels over under the weight of a problem it can't solve with the method implemented to solve it, and you then have to get someone in to do the job properly - and when you find yourself with a load of hardware lying around that you never actually needed had it been done right to start with.
Cheap programmers are great for throwaway or non-mission critical software, but make sure you have at least some good programmers around who have the computer science background underlying their software engineering abilities to deal with the tough/complex stuff.
Re:Timing is everything (Score:5, Insightful)
We'll see. The good developers probably won't be in the first wave of folks looking for jobs. I know our company is still in the "we have to figure out how to hire fast enough to do next year's work" mode.
Where having good engineering really helps, though, is in version 2.0 and 3.0 of the product, and when you try to leverage embedded devices with some of the same code, and when you try to scale it up a few orders of magnitude... basically, it buys you flexibility and nimbleness on the market that the "throw more hardware at the problem" folks can't match.
Despite Moore's Law being exponential over time (so far), adding additional hardware is still sub-linear for any snapshot in time. So it's not going to automatically solve most hard scalability problems.
Performance Vs. Scalability (Score:3, Interesting)
Although you mention scalability and flexibility, I don't think you really hit the nail on the head.
Performance and scalability are NOT the same. They are fundamentally different. You can have a weakly performing software product that scales nicely, and you can easily have a high performance application that doesn't scale at all.
Understanding this difference can be the make/break point in whether or not a mildly profitable company can become a world-changer! It's fairly easy to write high-performance softwa
Re:Timing is everything (Score:5, Informative)
As for the article, it makes a lot of sense when you're running in a controlled environment. It's really a no-brainer in consulting work: upgrading hardware or optimizing software will both meet the customer's needs, only the hardware upgrade is $2,000 and the software optimization costs $20,000.
Of course, if you're releasing software into the wild and it needs to run on many different machines, you'd better make sure it performs well, especially if it's a retail product. So spend the extra money and make it really good.
Re:Timing is everything (Score:5, Interesting)
Of course, if you're releasing software into the wild and it needs to run on many different machines, you'd better make sure it performs well, especially if it's a retail product. So spend the extra money and make it really good.
People usually buy a product before they realize the performance sucks... And retailers always say that it's just because your computer isn't new enough... Which makes people buy new computers, not complain about the software...
- Or maybe I'm wrong...
But I don't know many non-computer-freaks who can tell you the specs of their computer, even fewer who compare them to the minimum requirements of a game, and almost nobody who knows that the recommended system spec is actually the minimum requirement for any practical purpose...
And I don't blame them... I'm a nerd, not a gamer, and I can't tell the difference between most modern graphics cards...
Re: (Score:3, Insightful)
The biggest problem is that poorly optimised software can be OK (everyone runs Java or .NET acceptably, and they're not exactly resource-light), but some poorly written software can be dreadfully slow - so much so that throwing more hardware at it will never work.
You know: the websites written as a single JPEG image cut into 100 pieces, the loops that iterate over themselves several times to get one piece of data, etc., etc. I'm sure we've all seen stuff that makes us gawk in wonder that someone actually did it
Re: (Score:3, Insightful)
Still sure you're _only_ going to throw hardware at the issue when business wants the application online for a couple of thousand people?
Re: (Score:3, Funny)
Re: (Score:3, Funny)
The Original Story from Coding Horror (Score:2, Informative)
I agree. (Score:5, Insightful)
Recently my boss reviewed my schematic and asked me to replace 1% resistors with 2 or 5% ones "because they are cheaper". True, but I spent most of the day doing that, so he spent about $650 on the task, thereby spending MORE, not less.
So yeah, I agree with the article that it's often cheaper to specify faster hardware, or more-expensive hardware, than to spend hours and hours of expensive engineer/programmer time trying to save pennies.
Or as Benjamin Franklin said, "Some people are penny-wise, but pound-foolish." You try to save pennies and waste pounds/dollars instead.
Re:I agree. (Score:4, Interesting)
The same can be true in programming, but usually the scenario describes development itself, i.e. premature optimization. If your team is experienced, the only reason for this would be people trying to do big things in small spaces.
I think it comes down to what you need, what you want and what you need to spec for your software to actually run.
If you're willing to spend quite a bit of money on some really talented people, what you need as far as hardware (at least memory) can be reduced significantly.
What you want is to roll a successful project in xx months and bring it to market, so raising the hardware bar seems sensible.
Then we come down to what you can actually spec, as far as requirements for the clients who want to use your software. Microsoft ended up lowering the bar for Vista in order to appease Intel and HP... look what happened.
If your market is pure enterprise, go ahead and tell the programmers that 4GB and a newer dual-core CPU is the minimum spec for your stuff. If your market is desktop users... it may be a bad idea.
I don't think there's a general rule or 'almost always' when contemplating this kind of thing.
But first: Profile, Analyze, Understand (Score:5, Insightful)
This only works in certain cases. Some problems are orders of magnitude too big to throw hardware at.
Before you do anything: Profile, analyze, understand.
It might be useless to spend a month of development effort on a problem that you can solve by upgrading the hardware. It's equally useless to spend the money on new hardware, plus the administrator time to set it up and migrate programs and data, when you could have known up front that it wouldn't help.
Two questions I used to ask when giving talks: "Okay, who here has used a profiler? [hands go up] Now who has never been surprised by the results? [almost no hands]"
Before you spend money or expend effort, just take some easy steps to make sure you're not wasting it. Common sense.
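The cheapest version of "measure first", as a sketch (the workload and method name are stand-ins; a real profiler such as VisualVM will tell you far more):

public class MeasureFirst {
    public static void main(String[] args) {
        // Never buy hardware or rewrite code for a bottleneck you haven't
        // measured; crude wall-clock timing is the absolute minimum bar.
        long t0 = System.nanoTime();
        suspectOperation();
        long ms = (System.nanoTime() - t0) / 1_000_000;
        System.out.println("suspectOperation: " + ms + " ms");
    }

    // Hypothetical stand-in for the code under suspicion.
    static void suspectOperation() {
        long sum = 0;
        for (int i = 0; i < 50_000_000; i++) sum += i;
        if (sum == 42) System.out.println("unreachable");
    }
}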
Re: (Score:3, Insightful)
There's the hardware, cooling, space, someone to administer it, replacing it for the next twenty years (or isn't your code going to last that long?)....
Of course, in my line of work the goal is to go from "a million years" to "realtime" so all the hardware in the world isn't really going to help much.
Re: (Score:2)
Uhm, you get paid / cost more than $650/day? Consulting cost for him, or what? Will whatever you did only be used once?
Can't the wrong resistance give wrong results, since the design calculations assumed exact values?
Re: (Score:3, Funny)
Engineers are billed at about $90 an hour. That includes wages, health benefits, rental for the cubicle space, and heating.
Re: (Score:3, Interesting)
I don't know why this is "funny"? Ask a manager sometime how much he charges per hour for his programmers/engineers, and he'll tell you $90 or maybe even $100.
What we actually get PAID is far below that. ;-)
Re: (Score:3, Insightful)
Can't the wrong resistance give wrong results, since the design calculations assumed exact values?
We can't achieve perfection, so we have to be able to deal with variation in our designs. Designers should know when to specify precision components and when something more run-of-the-mill is OK (1% resistors are kind of on the edge; they used to be regarded as precision parts, but manufacturing improvements mean 1% resistors are pretty cheap nowadays).
What the parent was getting at was that swappi
Re: (Score:3, Informative)
But it is so much fun to explain to the bean counter who ordered twice as many disk drives of half the capacity you specified, because their painstaking research found they were a few percent cheaper per byte, that now they have to add in the cost of twice as many RAID card channels or storage servers, rack expenses, et cetera when figuring out how much money they saved the company.
Re: (Score:3, Insightful)
This is a failure on your part. Bean counters are not penny-wise, pound foolish. They do need a concrete financial analysis, however, to prove that you aren't just blowing smoke up their skirt.
Because most of the time, programmers are doing just that.
Also, programmers often fail to understand the cost of money, and that sometimes it is better to spend more tomorrow than a little bit today.
Re: (Score:3, Insightful)
Multiplied by how many servers - now that is the question.
I mean, if you have a thousand-server farm already, then a speedup of just one percent is going to save you from having to buy (and power, manage and eventually replace) ten servers. How much developer time is that one percent really going to cost ?
Re: (Score:3, Interesting)
The difference between 1% and 5% is 0.1 cents according to Digikey, so it is going to take 650k resistors to recoup that $650. Assuming your board has 100 resistors on it, the cost is recouped after selling ~6,500 boards. Only you can tell me if the board is going to sell that many over its lifetime.
Having said that, if you are going to need a 1% resistor somewhere for a reason, it makes even less sense to use both 1% and 5% for that value. Just buy the 1% and eliminate the duplicate effort required to
Re: (Score:3, Funny)
Uhh, you can't "throw hardware" at a hardware design. In the HW manufacturing case, you WANT to spend money on the upfront design to reduce the parts cost.
If your design is forced to use 1% resistors instead of 2%, you'd better be building a medical device or something else with tightly regulated specifications. Otherwise, when your boss says to use 2% or 5%, tell him to loosen the specs. Otherwise you're just over-engineering.
Re: (Score:3, Insightful)
Which [potentially] shows why he's a boss - and you aren't. That $650 (overpaid in salary to you) is a one-time cost - but it can also represent considerable savings: in setup time, if 2 or 5% resistors are the standard wherever your circuits are manufactured; in total hardware cost across a large production run (even more so if your design contains many resistors); etc., etc. Any engineer worth a damn knows enough accounting to be able to figure this stuff out.
I think you missed the point. The guy was saying that he's well aware of the cost savings of using cheaper resistors, but that he'd already done the analysis. The boss overrode him using financial criteria alone, rather than doing what a good engineer does, which is find a balance between cost and functionality (or reliability, or performance, or accuracy, or whatever your project's target criteria are). Chances are, that design will go into production and not meet spec, which means the expense of a redesign a
But who's going to fly it? (Score:5, Insightful)
Re:But who's going to fly it? (Score:5, Funny)
Typical Management Response:
"You bet I could!, I'm not such a bad programmer myself!"
Original Article here... (Score:5, Informative)
http://www.codinghorror.com/blog/archives/001198.html [codinghorror.com]
Give the person who actually wrote the article the ad revenue rather than this bottom feeding scum.
Recalculate for the crisis (Score:2, Interesting)
TFA says the average programmer with my experience level should be getting a salary of around $50/hour, but you'll see I've recently advertised myself at $8/hour. [majorityrights.com]
How many hundreds of thousands of jobs have been lost in Silicon Valley alone recently?
The crisis has gutted demand for hardware as well, but things are changing so fast that yesterday's calculations are very likely very wrong. Tomorrow, hyperinflation could hit the US making hard
Re:Recalculate for the crisis (Score:5, Interesting)
Well, unless the $8/hr is an introductory rate (that is, the first 200 hrs are at $8.50, then after that you go up to $15 or $20/hr), you could do better by joining a construction site. At our place (prestress, precast concrete plant), we are paying warm bodies $10/hr.
Show that you can read drawings, and you can quickly rise up to $12-$14/hr. Which is, admittedly, a pittance, but if you live in a trailer home, you can make ends meet. Then you can still program in your spare time, and keep the rights to your work, to boot.
Re: (Score:2)
I think some people would take less money rather than work outside in the winter. Working outside in the summer isn't always a picnic either.
Re: (Score:2)
Re: (Score:3, Funny)
I will send you $20 if you post a photo of yourself holding a sign that says "A non-white immigrant paid me $20 to hold this sign."
This has been true since at least 1980. (Score:2)
Back in the mainframe days (when you were likely to be charged for every byte of storage and every CPU cycle), hardware was viewed as expensive. But at least in my career, since about 1980, programmer time has been viewed as the most expensive piece.
Re:This has been true since at least 1980. (Score:5, Interesting)
I agree...to a point (Score:3, Insightful)
With cheap hardware readily available, I agree that, for many projects, it makes no sense to spend lots of time optimizing for performance. When faced with this situation, I optimize instead for readability and easy debugging, at the expense of performance.
But, and this is a big but, fast hardware is no excuse for sloppy, bloated code. Bad code is bad code, no matter how fast the hardware. Bad code is hard to debug, and hard to understand.
Unfortunately, bad or lazy programmers, combined with clueless managers, fail to see the difference. They consider good design to be the same as optimization, and argue that both are unnecessary.
I believe the proper balance for powerful hardware is well-thought-out, clean, unoptimized code.
Re:I agree...to a point (Score:5, Insightful)
I think if you're paying for programming vs. hardware, you're just paying for different things. I would think that would be somewhat obvious, given their very different nature, but apparently there's still some uncertainty.
The improvements you get from optimizing software are limited but reproducible for free - "free" in the sense that if I have lots of installations, all of them can benefit from any improvement made to the code. Improvements from adding new hardware cost money each time you add hardware, plus more in terms of power, A/C, administration, etc. On the other hand, the benefits you can get from adding new hardware are potentially unlimited.
And it's meaningful that I'm saying "potentially" unlimited, because sometimes effective scaling comes from software optimization. Obviously you can't always drop in new servers, or drop more processors/RAM into existing servers, and have that extra power end up being used effectively. Software has to be written to take advantage of extra RAM and more CPUs, and it has to be written to scale across servers and handle load balancing and such.
The real answer is that you have to look at the situation, form a set of goals, and figure out the best way to reach those goals. Hardware gets you more processing power and storage for a given instance of the application, while improving your software can improve security, stability, and performance on all your existing installations without adding hardware. Which do you want?
Assuming of course hardware is the bottleneck (Score:5, Interesting)
Re:Assuming of course hardware is the bottleneck (Score:5, Interesting)
Re:Assuming of course hardware is the bottleneck (Score:4, Funny)
I remember being subjected to a class called Data Structures & Algorithms ...
Yes, I believe Niklaus Wirth taught that one. Supposedly when you've added the Data Structures to the Algorithms, you get Programs or something.
How clueless can someone get? (Score:5, Insightful)
From someone who has been there and done that: throwing hardware at a problem rarely works.
If nothing else, faster hardware tends to increase the advantage of good algorithms over poorer ones.
Say I have an algorithm that runs in O(N) and a functionally equivalent one that runs in O(N^2). Now let's say you need to double the size of the input while keeping the execution time constant. For the first algorithm you will need a machine that is 2X faster than the current one; for the O(N^2) one you'll need a 4X faster machine, since the work grows as c*(2N)^2 = 4*c*N^2.
Let's not forget that you need things not only to run fast, but to run correctly, and the absurdity of choosing less-skilled programmers with more expensive hardware becomes painfully evident.
PS: Sorry for the typos and other errors: English is not my native language, and I had a bit too much beer last night.
Yes... that's the answer... (Score:2)
Throwing hardware at a bad application is ALWAYS the right way to go.
There's an old saying "Never throw good money after bad."
GC
Re: (Score:2)
The above was sarcasm by the way. ;)
Wrong objective (Score:2, Insightful)
Good hardware running code written by bad programmers just means the code will fail faster. The primary goal of a programmer is to make the code work correctly, and that doesn't change no matter how fast your hardware is.
objective is correctness, always (Score:3, Insightful)
The article seems to assume that bad programmers write slow but correct code, which is a big assumption. But the observation on cost also means that good programmers should focus on correctness rather than performance.
Just to illustrate how difficult it is to get correctness right, on page 56 [google.com] of The Practice of Programming by Kernighan and Pike---very highly regarded book and highly regarded authors---there is a hash table lookup function that is combined with insert to perform optional insertion when the k
Re:objective is correctness, always (Score:4, Insightful)
But the observation on cost also means that good programmers should focus on correctness rather than performance.
Just to illustrate how difficult it is to get correctness right, on page 56 of The Practice of Programming by Kernighan and Pike---a very highly regarded book by highly regarded authors---there is a hash table lookup function that is combined with insert to perform optional insertion when the key is not found in the table. It assumes that the value argument can be safely discarded if insertion is not performed. That assumption works fine with integers, but not with pointers to memory objects, file descriptors, or any handle to a resource. An inexperienced programmer trying to generalize int value to void *value will introduce a memory leak on behalf of the user of the function.
Or, for a modest increase in hardware requirements to get the same performance, we can introduce automatic resource management (aka garbage collection), which makes this particular little difficulty go away.
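A sketch of that point in a garbage-collected language (an illustration of the idea, not the Kernighan and Pike code): the idiomatic API defers building the candidate value at all, and anything built and discarded is reclaimed by the collector.

import java.util.HashMap;
import java.util.Map;

public class LookupOrInsert {
    public static void main(String[] args) {
        Map<String, byte[]> table = new HashMap<>();

        // computeIfAbsent invokes the factory only when the key is missing,
        // so no throwaway value is constructed on a hit; and were one
        // constructed and discarded, the garbage collector would reclaim it.
        byte[] first = table.computeIfAbsent("key", k -> new byte[1024]);
        byte[] again = table.computeIfAbsent("key", k -> new byte[1024]);
        System.out.println(first == again); // true: one insertion, no leak
    }
}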
Not always the case (Score:4, Interesting)
In a lot of big orgs it is amazing how expensive it can be to upgrade your hardware or add to an existing farm. Not because of the hardware cost, but because of all the overhead involved in designing/specifying the setup, ordering, waiting for it to arrive, finding space for it, installation, patching, backups, etc.
In fact I've seen several orgs where the cost of a "virtual server" is almost as much as a physical one, because the cost of all this servicing is so high. Whether or not this is necessary I don't want to debate here, but it is undeniably the case.
So I think the case for throwing hardware at issues is not as clear cut as this article implies.
What a crock... (Score:5, Insightful)
I have yet to see any application that was fixed for good by throwing hardware at it. Sooner or later, the piper has to be paid and the problem fixed. Someone improved response time by putting in a new server?? Does that mean they had web/app/database/data all on one machine?? Bad, bad, BAD design for large applications - nowhere to grow. At least if it's tiered and using a SAN with Fibre Channel links, more servers can be added. Sometimes more, not faster, is better. And resources can be shared to make optimal use of the servers that are available.
The FIRST step is to determine WHY something is slow. Is it memory-, CPU-, or I/O-bound? That doesn't take a rocket scientist; looking at sar on Unix or Task Manager on Windows can show you that. Sure, if it's CPU-bound, buying faster CPUs will fix it.
The comment about developers having good boxes isn't the same as for applications. My latest job gives every developer a top-notch box with two monitors; I was in heaven. Unfortunately, it can't stop there. I also need development servers with the disk space and memory to test large data sets BEFORE they go into production.
Setting expectations is the best way to manage over-optimization. Don't say "I need a program to do this"; state "I need a program to do this work in this time frame". It is silly to make a daily batch program that takes 2 minutes run 25% faster. But it's not silly to make a web page respond in under 2 seconds, or a 4-hour batch job run in 3, *if* that is needed. Without the expectation, there is no starting or stopping point. Most developers will state "it's done" when the right answer comes out the other end, while a few may continue to tune it until it's dead.
Re: (Score:2)
You concentrate on CPU. Many web apps, probably including the one that I am looking at now (stats from the live system are still pending...), could go faster with more and better caching, i.e. more memory on the web or database tier. That's hardware too.
Prevent the problems, don't patch them! (Score:5, Interesting)
Throwing hardware at a problem means the writer failed to use his sysadmin staff to do basic capacity planning while there wasn't a problem.
And as johnlcallaway said, the problem isn't usually CPU: most bottlenecks are either disk I/O or code-path length.
I'm a professional capacity planner, and it seems only the smartest 1% of companies ever think to bring me in to prevent problems. A slightly larger percentage do simple resource planning using the staff they already have. A good example of the latter is Flickr, described by John Allspaw in The Art of Capacity Planning [oreilly.com], where he found I/O was his problem and I/O wait time was his critical measurement.
Failing to plan means you'll hit the knee in the response-time curve, and instead of a few fractions of a second, response time will increase (degrade) so fast that some of your customers will think you've crashed entirely.
And that in turn becomes the self-fulfilling prophecy that you've gone out of business (;-()
Alas, the people who fail to plan seem to be the great majority, and they suffer cruelly for it. The last few percent are those unfortunates whose professional staff planned, warned, and were ignored. Their managers pop up, buy some CPUs or memory to solve their I/O problem, scream at their vendor for not solving the problem, and then suddenly go quiet. The hardware usually shows up on eBay, so I think you can guess what happened.
--dave
Get a rope (Score:5, Interesting)
I almost feel an order of magnitude more stupid for having read that article. Throwing more hardware at a problem definitely makes sense for a small performance issue, but that is rarely the case. The whole idea makes me sick as a developer. It reminds me of the attitude of the developers of a certain web framework out there: instead of fixing real problems, they cover up fatal flaws in their architecture with a hardware band-aid. There's no denying it can work sometimes, but at quite a high cost, and it's completely inappropriate for some systems. Not everyone is just building a stupid to-do-list application with a snappy name.
Consider that many performance problems have an upper limit: at some point, throwing more hardware at the problem is going to do absolutely nothing. Further, the long-term benefit of hardware is far less than the potential future contributions of a highly paid, skilled programmer.
Another issue is that there are plenty of performance problems that cannot be scaled away just by adding more hardware. A classic example is some RDBMS packages with certain applications. Often databases can be scaled vertically (limited by RAM and I/O performance) but not horizontally, because of problems with stale data, replication, application design, etc. A programmer can fix these issues so that you can then add more hardware, but it is far more valuable in the long term to have someone who enables you to grow properly.
Actually fixing an application is a novel idea, don't you think? If my air conditioning unit is sometimes not working, I don't go and install two air conditioning units. I either fix the existing one or rip it out and replace it.
Further, there are plenty of performance problems that can never be solved with hardware. Tight looping is one that I often see: it does not matter what you throw at it, the system will be eaten. Another example is a garbage-collection issue. Adding more hardware may help, but typically just delays the inevitable. Scaling horizontally in this case does next to nothing, because if every user hits the same problem, you have not exactly bought more time (therefore you must also go vertical, which again only delays the problem).
The mentality of this article may be innocent in some ways, but it reminds me of the notion that IT people are resources and not actual humans. Creativity, future productivity, problem-solving skills, etc. are far more valuable to any decent company than a bunch of hardware that is worthless in a few months and just hides piss-poor work by the existing employees.
It feels like a return to the dot-com bubble and F'd Company. I am sure plenty of companies following this advice can look forward to articles about their own failures. If someone proposes adding hardware for a sane reason - say, to accommodate a few thousand more visitors with some more load-balanced servers - by all means do so. If your application just sucks and you need to add more servers to cover up mistakes, it is time to look elsewhere, because your company is a WTF.
Re:Get a rope (Score:5, Insightful)
Besides, one thing that's not covered in the article is that hardware has a much higher residual maintenance cost.
In order to maintain production, many companies these days insist that hardware be in-warranty and thus able to be replaced at a moment's notice. There comes a point as well at which the amount that the hardware will cost on an ongoing basis far exceeds the cost of a single programmer to write a decent app that doesn't need it.
I have recently saved my company the equivalent of twice my salary over the next two years, purely in the cost of maintenance contracts for around 150 servers. Granted, this was using virtualization rather than programming to combat the problem, but in this case it made sense. The concept is still the same regardless.
Wait, what? (Score:5, Interesting)
Surely that might work for a one-off, but if you're selling millions or even thousands of copies of your software, even a $100 increase in hardware requirements costs the economy millions. Just because it doesn't cost YOU millions doesn't mean you don't see the cost.
If your customers are spending millions on hardware, that money is going to the hardware vendors, not to you. And more importantly, that money represents wasted effort. Effort that could otherwise be used to increase real wealth, thus making the dollars you do earn more valuable.
So I guess the lesson is: if you're CERN, throw hardware at it. If you're Adobe, get a lot of good programmers/architects.
Re:Wait, what? (Score:4, Insightful)
Actually, I think that is the wrong way round. Places like CERN do 'throw hardware at it' - lots of hardware - and it still isn't enough.
Modern desktop systems have gigabytes of memory, hundreds of gigabytes of disk, and multi-core processors... and in the Adobe example you are using them to display PDF documents or Flash movies.
Your application would typically be using less than 1% of the available resources.
Spending lots of money optimizing the performance does not make commercial sense.
Large science projects like CERN are pushing the limits of hardware and software. They typically deal with data sets, data rates, and processing requirements that are orders of magnitude larger than most systems can cope with.
A typical science desktop application needs to be able to process and display gigabyte data sets, often comparing more than one data set visually in real time. A typical eScience grid service needs to handle extremely large (petabyte) data sets in real time, and you can't drop data or pause for a moment - the data stream is live and you only get one chance to process and store it.
The same applies to Google, Yahoo, Facebook, etc. If your application is pushing the hardware to its limits, then optimizing the software to increase performance by 5% is worth a lot of developer time.
energy consumption increases? (Score:3, Insightful)
I think you need to complicate this logic a bit by taking into account the added electricity required to power the extra servers, run the servers at a higher load, or run the clients at a higher load, as well as the increased air-conditioning cost.
Also, time is money. If a program takes more time, there is more time for users to sit idle, which also has a cost.
Best practice? Program as efficiently as possible. Programming expenses are paid once, while the power bill lasts forever.
Maybe its simpler than that..... (Score:3, Insightful)
... throw the money at genuine software engineering (not pseudo-engineering) so that we have much better tools to program with.
Depends on type of problem (Score:2, Informative)
A problem that has a nonlinear impact on performance cannot be solved by adding two more servers...
The simplest example is an index in a database. Before adding the index, a query takes 2 days to execute; after adding it, the query executes in 100 milliseconds. How can you solve that by adding more hardware? You also usually cannot solve I/O issues between app and DB servers by "just adding two more servers"...
Not to mention that when it comes to scaling of DB you really can not just depend on "adding of anoth
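A sketch of the index case (schema and connection details hypothetical): the fix is algorithmic - a full table scan becomes an index lookup - which is exactly the kind of improvement extra servers cannot buy.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AddIndex {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://dbhost/app", "user", "pass");
        try (Statement st = conn.createStatement()) {
            // Without the index, "... WHERE customer_id = ?" scans every row
            // (the 2-day case above); with it, the planner does an index
            // lookup (the 100 ms case).
            st.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)");
        }
        conn.close();
    }
}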
Hardware doesn't just configure itself (Score:3, Insightful)
One thing not in the equation here: hardware is cheap, but having that hardware managed isn't so cheap. When you scale from a couple of servers to a big bank of servers, you have to pick up sysadmins to manage all of those boxen.
Less expensive than a programmer (sometimes), but certainly not free.
People Are Expensive (Score:3, Informative)
Depends, as noted in the article (Score:2)
But not with enough emphasis. The suggested procedure:
1. Throw cheap, faster hardware at the performance problem.
2. If the application now meets your performance goals, stop.
3. Benchmark your code to identify specifically where the performance problems are.
4. Analyze and optimize the areas that you identified in the previous step.
5. If the application now meets your performance goals, stop.
Absolutely True (Score:5, Interesting)
It was quite a big site and had a relatively high turnover of decent hardware. Next to the IT support team's area was a room about 6 yards by 10 yards, almost full to the ceiling with older monitors, printers, and a shitload of commodity PCs. And I'd just started reading about mainstream acceptance of Linux clustering for parallelizable apps.
Cue the lightbulb winking into life above my head!
I approached my boss, with the idea to get those old boxes working again as a cluster and speed things up for the modelling team. He was quite interested and said he'd look into it. He fired up Excel and started plugging in some estimates...
Later that day I saw him and asked him what he thought. He shook his head. "It's a non-starter," he said. Basically, if the effort involved in getting a cluster up and working - including porting the apps - was more than about four man-weeks, it was cheaper and a lot safer just to dial up the Sun rep, invoke our massive account (and commensurate discount) with them, and buy a beefier model from the range. And the existing code would run just fine with no modifications.
A useful lesson for me in innovation risk and cost.
Yeah, that'll work (Score:2)
Because throwing more hardware at the problem will fix your software bugs. Oh wait...
I'm not sure I understand the article (Score:2)
Now assume that the application has a low number - say 10 customers per programmer, for a server application, and each customer instance needs 2 boxes. So the programmer optimisation cost is cur
Nothing new (Score:3, Insightful)
Ten years ago many web servers were hand-coded in relatively low-level compiled languages. Even though hardware had become cheaper, and the day of the RAID rack of PCs was coming upon us, to get real performance one had to have software developers, not just web developers.
Of course, cheap powerful hardware has made all that a thing of the past. There is no reason for an average software developer to have anything but a passing familiarity with assembly. There is no reason for a web developer to know anything other than interpreted scripting languages. Hardware is, and always has been, cheaper than people. That is why robots build cars. That is why IBM sold a boatload of typewriters. That is why the Jacquard loom was such a kick-butt piece of machinery.
The only question is how much cheaper hardware is, and when it makes sense to replace a human with a machine, or maybe a piece of software. This is not always clear. There are still relatively undeveloped places in the world where it is cheaper to pay someone to wash your clothes by hand than to buy and maintain a washing machine.
Re: (Score:3, Interesting)
Actually, I would argue that advances in compilers and interpreters have been just as important to that trend as advances in hardware.
Well... (Score:2)
The main goal of writing solid code isn't to lower resource requirements... it's to increase maintainability.
Sure, you can hack out shitty code and make up for it with more hardware to handle the memory leaks and bloat... and probably save some money in the short term. In the long term, though, when you need to add something to your mess of spaghetti code, you're going to spend much more programmer time... which is what you were trying to save from the get-go.
I`m a firm believer that a little extra time and mone
Java, PHP et al (Score:2)
This is why interpreted or semi-interpreted programming languages make so much sense, especially for stuff such as web applications. Here you can scale to whatever the best hardware is, even changing CPU architecture, without worrying that you will need to recode or recompile. The same can't generally be said for languages such as C++. It's ironic that you would have to choose an approach that is probably less optimal to get cheaper long-term improvements in performance.
What about desktops? (Score:3, Insightful)
This uses servers as an example, but what about desktops? We use Windows desktops where I am, and having AIM and Outlook open all the time is more or less mandatory for me. Plus there are these virus-scanning programs always running which eat up a chunk of resources. I open up a web browser and one or two more things and stuff starts paging out to disk. I'm a techie and sometimes need a lot of stuff open.
We have a call center on our floor, where the people make less than one third what I do, and who don't need as many windows open, yet they get the exact same desktop I do. My time is three times more valuable than theirs, yet the company gives me the same old, low-end desktop they get, resulting in more of my productive time being lost - those seconds I wait when I switch from an ssh client to Outlook and wait for Outlook to be usable add up to minutes and hours eventually. Giving everyone the same desktop makes no sense (I should note I eventually snagged more RAM, but the point is about general company policy more than my initial problems).
A couple of things that were ignored (Score:3, Insightful)
The first is that the hardware cost isn't the only cost involved. There's also the costs of running and maintaining that hardware. Many performance problems can't be solved by throwing just a single bigger machine at the problem, and every one of the multiple machines means more complexity in the system, another piece that can fail. And it introduces more interactions that can cause failures. An application may be perfectly stable using a single database server, but throw a cluster of 3 database servers into the mix and a problem with the load-balancing between the DB servers can create failures where none existed before. Those sorts of failures can't be addressed by throwing more hardware at the problem, they need code written to stabilize the software. And that sort of code requires the kind of programmer that you don't get cheap right out of school. So now you're spending money on hardware and you're still having to hire those pesky expensive programmers you were trying to avoid hiring. And your customers are looking at the failure rates and deciding that maybe they'd like to go with your competitor who's more expensive but at least delivers what he promises.
Second is that, even if the problem is one that can be solved just by adding more hardware, inexperienced programmers often produce code whose performance profile isn't linear. That is, doubling the load doesn't require twice the hardware to maintain performance; it requires an order of magnitude more hardware. It doesn't take long for the hardware spending to become completely unbearable, and you'll again be caught having to spend tons of cash on enough hardware to limp along, while spending tons of money on really expensive programmers to get the software to where its performance curve is supportable, and watching your customers bail to someone offering better than same-day service on transactions.
Go ask Google. They're the poster boy for throwing hardware at the problem. Ask them what it took on the programming-expertise side to create software that would let them simply throw hardware at the problem.
This only works with LAMP/FOSS (Score:3, Funny)
If your performance problem is in an Oracle or SQL Server database, throwing more hardware at the problem probably has a license fee attached to it, and that can easily be measured in multiple developer salaries. This also causes people to scale using bigger boxes, rather than more boxes, and that gets you out of the range of commodity hardware and into the land of $$$$$.
Which is why I don't care to deliver on Oracle, but my employer hasn't figured out that Postgres and MySQL will work for a lot of problems, and is still fellating the Oracle and IBM reps.
More factors (Score:3, Insightful)
In the long run, your best investment is still the good programmer, as long as you can keep him happy and productive, because then you can grow more/faster (by buying hardware as well).
programmers save lifetimes (Score:5, Interesting)
Andy Hertzfeld, engineer on the original Macintosh team:
Steve was upset that the Mac took too long to boot up when you first turned it on, so he tried motivating Larry Kenyon by telling him: "Well, you know, how many millions of people are going to buy this machine - it's going to be millions of people - and let's imagine that you can make it boot five seconds faster. Well, that's five seconds times a million every day; that's fifty lifetimes. If you can shave five seconds off that, you're saving fifty lives." And so it was a nice way of thinking about it, and we did get it to go faster. (PBS, Revenge of the Nerds, Part 3)
Re:programmers save lifetimes (Score:5, Funny)
Someone evil (like me) may ponder the other side of that. If I can *put* another 5 seconds on the boot time then I can effectively kill 50 people and get away with it. But then I'm a bastard.
Anecdotal counter-example (Score:5, Insightful)
When I was a programmer, we once had a programming job at a large bank. One of our main reports ran across all booked loans and calculated the future finance stream (interest and amortization) either until the debt was paid off, or up to 40 years at current interest rates. This report was sent to the Federal Bank for control, and to the department tasked with managing the bonds to raise enough capital for further loans.
This report took 200 processor hours to complete. To get it done, it was split into 18 tranches, each running 11 hours. So it was possible to complete the job during a weekend run on 18 processors, and restart it twice in case of errors.
A colleague of mine took on the task of rewriting the report to speed it up. To do that, she hooked into each booking that changed the loan amount, interest rate, repayment, end-of-contract date, or amortization, and modified it to write a flag into a table.
Then she rewrote the central report to store the calculated finance stream each time it was computed. Loans unchanged since the last calculation didn't have the flag set, so the report reused the old calculation. This sped up the report about 150 times: instead of 200 processor hours, it now completed within 1:20 h.
It allowed us to take four large RS/6000s out of service, cancel their service contracts, reschedule the report to run daily instead of on weekends, and save on weekend man-hours. With the daily report going to the bond management department, the finance controlling unit also became interested and used the report results to refine their own tools. Together this easily paid for the programming time put into the report.
As you can see, there are programming tasks where just throwing more computing power at the problem doesn't solve it. It doesn't even have to be some high-level programming job; sometimes it's a dull task (finding all the points in a bookkeeping system where a booking changes the finance stream of a loan is a dull task!), but if someone gets it done, it pays off easily.
Re: (Score:3, Insightful)
And idiots are fatal (Score:4, Insightful)
The idea expressed in that article isn't just stupid, it is economy-destroying, civilization-threatening, mind-bogglingly stupid.
The author is trying to solve the problem of inadequate resources by spending more to increase the brute-force effort toward his already-failing solution. It is the mythical man-month expressed in CPU horsepower.
That isn't improving your situation; that is merely delaying your inevitable downfall. You're running to stand still, and eventually your organization will collapse from exhaustion while your competitors, who invested in smart design and smart people, lap your corpse.
And if you simply can't afford better people, then your reach is exceeding your grasp. Scale back your ambition, plan for when you can, or accept your niche and buy the third-party solutions produced by experts who can write scalable software.
Hardware is cheap, Software lasts forever (Score:3, Insightful)
The bottom line is, software improvement is a one-time cost: once it's done, it's done.
Hardware solutions, on the other hand, though cheaper outright, are recurring (you'll need to keep upgrading that hardware as it becomes outdated) and scale up with demand (as demand doubles, the hardware must double as well).
This is why, except in cases where demand won't increase, or the extra hardware is unlikely to become outdated, software solutions tend to be the more economical choice.
Stupid Idea (Score:3, Interesting)
Ever come across the n+1 selects problem in Hibernate? How many junior devs are good enough to figure out what's going on? Not many.
It means that if you are fetching 1,000 records from the database, it can take as much as 1,000 times as long as it should. Is halving your dev-team cost really worth a 1,000-fold increase in hardware costs because your programmers don't understand the technology properly?
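Stripped of the ORM, the n+1 shape looks like this (a sketch using an in-memory H2 database; table names hypothetical - in Hibernate itself, a fetch join is what collapses the loop into the single joined query at the end):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class NPlusOne {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo");
        try (Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE customers(id INT PRIMARY KEY, name VARCHAR(50))");
            st.execute("CREATE TABLE orders(id INT PRIMARY KEY, customer_id INT)");
        }

        // The n+1 pattern a naive mapping emits: one query for the list...
        List<Integer> ids = new ArrayList<>();
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT customer_id FROM orders")) {
            while (rs.next()) ids.add(rs.getInt(1));
        }
        // ...plus one lazy-load query per row: 1,000 orders, 1,000 round trips.
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT name FROM customers WHERE id = ?")) {
            for (int id : ids) {
                ps.setInt(1, id);
                ps.executeQuery().close();
            }
        }

        // The fix: one joined query - one round trip, however many rows.
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT o.id, c.name FROM orders o "
                     + "JOIN customers c ON c.id = o.customer_id")) {
            while (rs.next()) { /* consume */ }
        }
        conn.close();
    }
}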
Re:Stupid Idea (Score:5, Interesting)
Common issue indeed, and actually a problem with Hibernate... in this day and age, there are algorithms that object-relational mappers could implement to avoid at least the common scenario in which this happens in Hibernate or LINQ to SQL/Entity Framework... I'm not sure why it never gets fixed.
That being said, if you read the article (I know, I know, Slashdot), they're talking about premature optimisation. Basically, things like avoiding Hibernate completely because of its overhead, or optimising every single query as much as possible (even if performance is acceptable) to save every last bit of juice, so that your app can run on 10 megs of RAM instead of 100. They're not, in any way, talking about using shitty programmers; they advocate using GOOD developers' time more efficiently (solving real problems, instead of spending too much time on performance).
Almost everyone who replied saying it was stupid brought up either how programming mistakes can screw things up, or how it's possible to make a system that doesn't scale at all. All those people missed the point.
You can build a system that is slower, but still scales and is still correctly done, a LOT (and I mean a LOT) faster if you don't go nitpicky and try to optimise everything as you go. You simply avoid doing anything totally dumb, code according to best practice, etc., but you're not going to rewrite your system in C to avoid garbage collection, you're not going to rewrite the framework's data structures to squeeze out 1% more performance, and you won't avoid Hibernate (completely) just to avoid the mapping overhead. You can tap into these time-saving paradigms just by upgrading your hardware. You still need competent developers!!! But those competent developers can do more in less time.
That's -all- the author was advocating.
Can not disagree more (Score:3, Interesting)
Where I am currently working, a pizza-box server has an annual cost of 2.5 developer salaries for the same period. It's grossly out of balance with this article.
Perhaps there is a reason some companies need Government Bailouts...
Consumer products (Score:3, Insightful)
if (units() * savings() > programmercost())
    hireprogrammer();
When you sell a million units, a penny means $10,000 and $1 means a brand-new Lamborghini. I guess this article only covers enterprise software, where the number of machines running your code might be in the thousands. The opposite argument can be made for consumer products, where unit counts are in the millions.
some anecdotal evidence to the contrary (Score:5, Interesting)
I am late here for this story, but I would like to add something to it for the sake of the late readers anyway :)
In the second half of 2001 I was on a project for a long time defunct company called WorldInsure (hey, former Corelan guys, any of you still out there, working for Symcor by any chance?)
So, I came in about halfway into the one-year project; within a few months the most senior developer on the project left, but the team was still about 40 people in total. The application was something like 5 megabucks by the end, but the client didn't want to pay the last million, because the performance was outrageously slow: 12 concurrent transactions per second, as opposed to the 200 the client wanted, on 2 (gigantic for the time) 4-way Sun servers.
The app was a very detailed page-after-page insurance questionnaire that would branch into more and more pages and questions as previous questions were answered. At some point a PDF was generated with the answers and provided on one of the last pages. The problem was moving from page to page: the waiting times were too long, approaching a minute for some pages.
I was asked to speed it up. Long story short, after 1.5 months of tinkering with code produced by a bunch of novices, here is the list of improvements that I can remember at this point:
1. Removed about 80% of unnecessary database reading by removing TopLink.
2. Removed about 80% of unnecessary database writing by changing the way the data was persisted. Instead of persisting the entire data set on each new page, only the incremental changes now were persisted.
3. Reduced the page pre-processing by getting rid of the XSLT transformers on XML structures and switching to JSPs instead.
4. Removed cluster I/O thrashing by reducing the session object size from an unnecessary 1 MB to a manageable 10 KB.
5. Reduced CPU load by caching certain data sets instead of reprocessing them on each request within a user session.
6. Decoupled PDF generation into a separate application, communicating the request to generate the PDF via a simple home-grown message queue done with a database table (see the sketch after this list). This was one of the more serious problems within the app, because it could bring down a server due to buggy code in the Adobe PDF generator used at the time. In fact, the original application ran the PDF generation as a separate Java application that would be restarted after about 5 generations and invoked via a Runtime.exec() call so as not to bring down BEA WebLogic. Later on this entire portion was rewritten and the Adobe code thrown away. I am sure that today the Adobe code is fine and all, but at the time it was a real pig.
7. Removed many, many, many unnecessary System.out.println calls, and replaced them with proper logging where needed.
8. Fixed the home-grown servlet manager (similar to Struts' main servlet); this code was freaking ugly as hell and totally unstable.
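Item 6's database-table queue boils down to something like this (a minimal sketch; table, column, and method names are hypothetical):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class PdfQueue {
    // Producer (the web app): enqueue a job instead of generating the PDF
    // in-process, so a crash in the PDF library cannot take down the app server.
    static void enqueue(Connection conn, int questionnaireId) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO pdf_jobs (questionnaire_id, status) VALUES (?, 'NEW')")) {
            ps.setInt(1, questionnaireId);
            ps.executeUpdate();
        }
    }

    // Consumer (a separate process): poll for new jobs, generate, mark done.
    static void drainOnce(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT id, questionnaire_id FROM pdf_jobs WHERE status = 'NEW'")) {
            while (rs.next()) {
                int jobId = rs.getInt("id");
                // generatePdf(rs.getInt("questionnaire_id")); // hypothetical call;
                // if it crashes, only this worker process dies
                try (PreparedStatement ps = conn.prepareStatement(
                        "UPDATE pdf_jobs SET status = 'DONE' WHERE id = ?")) {
                    ps.setInt(1, jobId);
                    ps.executeUpdate();
                }
            }
        }
    }
}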
There were some other, smaller fixes, but the main bulk is listed here. By the end of the month and a half, the app was doing over 300 simultaneous transactions per second.
300/12 - that's a 25x code performance improvement. I am not at all convinced that this improvement could have been achieved through hardware at all, but even if it could, it would have cost much more than what I cost at the time (about CAD 70/hr for 1.5 months).
Oh, did I mention that the client coughed up the last million bucks after that? After all, the code met their performance expectations and exceeded them by half, at least.
Re: (Score:2)
Personally I don't understand how they are comparable at all - since when does hardware do the programming? =P
The number of programming scenarios where more hardware solves anything must be severely limited. Optimisation of software vs. adding more hardware?
Re:Frist? (Score:5, Funny)
"Natalie Portman can't act for shit and she has the tits of an 11-year old girl. Grits are bland and best served to the inbred, down-syndrome-afflicted inhabitants of the Southern United States."
OK, OK, ya got me horny, hungry, and nostalgic for the folks back home, but what was your point?
Re:Frist? (Score:5, Interesting)
that's the point - they DO get off on it!
As for the rest, if you REALLY want to improve productivity:
The real productivity killers are poor morale, poor management, poor communications, poor specifications, poor research, lack of time for testing, lack of time for documenting, lack of time for "passing on knowledge" to other people, etc. Not hardware.
Yes, hardware IS cheap. Poor management is the killer - in every field. Just ask anyone who has been on a death march project. Or bought GM stock a year ago. Or who supported John McCain, then watched Sarah Palin become his "bimbo eruption." They all have one thing in common - people who thought they knew better, didn't do their research properly, and then screwed the pooch.
First Java Post? (Score:5, Funny)
Who will be the first to post "ICodeInJavaWithClassesWithReallyReallyReallyLongNames.youIgnorantClod();" ?
Re:First Java Post? (Score:5, Funny)
std::i_code_in_cpp_with_crptc_cls_names::you_insns_clod();
Re:Frist? (Score:5, Insightful)
If they're watching movies all day long, just fire them. No need to re-orient their monitors.
Re:Frist? (Score:4, Insightful)
Re:Frist? (Score:5, Informative)
A handy note for those who don't know: under X11, in addition to the rgb and bgr subpixel orderings, you can choose the vrgb and vbgr vertical orientations, which allow subpixel rendering (ClearType-style) on rotated LCD screens.
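For instance (a sketch, assuming your toolkit honors Xft resources), a rotated panel can be declared in ~/.Xresources:

! tell Xft the subpixels run vertically, red at the top
Xft.rgba: vrgb

Then load it with xrdb -merge ~/.Xresources and restart the application.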
Re: (Score:3, Insightful)
When developers ask for a new monitor or dual monitors, let them have 'em but mandate that the monitors be in a vertical orientation [about.com]as opposed to the typical horizontal orientation. That way, they'll have to use the monitors for efficient viewing of code rather than watching movies all day long.
Well, look here. There's a lot of personal preference involved in efficient text handling, and arbitrarily forcing programmers to work in landscape or portrait just so they don't watch movies is ridiculous. Matter of fact, if you have coders doing that on the job, either give them the requisite attitude adjustment, or just fire their happy little asses and hire some responsible citizens. Maybe in their next position they'll be a little more focused.
Furthermore, I don't know about you but the apps I devel
Re: (Score:3, Informative)
Well, recent currency fluctuations aside, it has certainly been the case that historically UK prices were well above those of the US, hence the coining of the phrase Rip-off Britain [wikipedia.org]. Stuff like the Tesco-Levi's jeans battle [bbc.co.uk], where an independent retailer was barred from importing and re-selling goods from the US, reinforced the perc
Re: (Score:3, Funny)
extremism is bad at EVERYthing.
See? You used the shift key. That wasn't so hard, now, was it?
Re:Another u.s. specific problem. cost of living (Score:4, Informative)
Are you high? Granted, I haven't been to Europe (France, Germany, Netherlands) since 2006, but I can't name a single thing that was less expensive, and I live in one of the most expensive cities in the US ($9 Beer Night).
I specifically went looking for cheap Lacoste stuff in France, and there was essentially dollar-to-euro price parity, while the exchange rate was about 1.5:1. In other words, while I would pay $70 for a shirt in the US (just got a couple for $30 each on sale), in France the same shirt was going for 70 euros. Food and drink prices seemed roughly comparable as well. Consumer electronics, however, were considerably more expensive than in the US, as was gas. The metro was no less expensive than the DC subway, and the trains weren't cheaper than Amtrak, though 1000% better. I'd have to assume the reason you believe it's actually cheaper is that you enjoy being hammered with taxes all the time.