
Hardware Is Cheap, Programmers Are Expensive

Sportsqs points out a story at Coding Horror which begins: "Given the rapid advance of Moore's Law, when does it make sense to throw hardware at a programming problem? As a general rule, I'd say almost always. Consider the average programmer salary here in the US. You probably have several of these programmer guys or gals on staff. I can't speak to how much your servers may cost, or how many of them you may need. Or, maybe you don't need any — perhaps all your code executes on your users' hardware, which is an entirely different scenario. Obviously, situations vary. But even the most rudimentary math will tell you that it'd take a massive hardware outlay to equal the yearly costs of even a modest five person programming team."

  • I agree. (Score:5, Insightful)

    by theaveng ( 1243528 ) on Saturday December 20, 2008 @11:53AM (#26183945)

    Recently my boss reviewed my schematic and asked me to replace the 1% resistors with 2% or 5% ones "because they are cheaper". Yes, true, but I spent most of the day doing it, so he spent about $650 on the task, thereby spending MORE, not less.

    So yeah, I agree with the article that it's often cheaper to specify faster or more expensive hardware than to spend hours and hours of expensive engineers' and programmers' time trying to save pennies.

    Or as Benjamin Franklin said, "Some people are penny-wise, but pound foolish." You try to save pennies and waste pounds/dollars instead.

  • by malefic ( 736824 ) on Saturday December 20, 2008 @11:54AM (#26183955)
    "10,000! We could almost buy our own ship for that!" "Yeah, but who's going to fly it kid? You?"
  • by MpVpRb ( 1423381 ) on Saturday December 20, 2008 @11:59AM (#26183983)

    With cheap hardware readily available, I agree that, for many projects, it makes no sense to spend lots of time optimizing for performance. When faced with this situation, I optimize instead for readability and easy debugging, at the expense of performance.

    But, and this is a big but, fast hardware is no excuse for sloppy, bloated code. Bad code is bad code, no matter how fast the hardware. Bad code is hard to debug, and hard to understand.

    Unfortunately, bad or lazy programmers, combined with clueless managers, fail to see the difference. They consider good design to be the same as optimization, and argue that both are unnecessary.

    I believe the proper balance for powerful hardware is well-thought-out, clean, unoptimized code.

  • by samkass ( 174571 ) on Saturday December 20, 2008 @12:03PM (#26184015) Homepage Journal

    We'll see. The good developers probably won't be in the first wave of folks looking for jobs. I know our company is still in the "we have to figure out how to hire fast enough to do next year's work" mode.

    Where having good engineering really helps, though, is in version 2.0 and 3.0 of the product, and when you try to leverage embedded devices with some of the same code, and when you try to scale it up a few orders of magnitude... basically, it buys you flexibility and nimbleness on the market that the "throw more hardware at the problem" folks can't match.

    Despite Moore's Law being exponential over time (so far), adding more hardware still gives sub-linear returns at any given snapshot in time. So it's not going to automatically solve most hard scalability problems.

  • by Marcos Eliziario ( 969923 ) on Saturday December 20, 2008 @12:04PM (#26184019) Homepage Journal

    From someone who has been there and done that: I can say that throwing hardware at a problem rarely works.
    If nothing else, faster hardware tends to increase the advantage of good algorithms over poorer ones.
    Say I have an algorithm that runs in O(N) and a functionally equivalent one that runs in O(N^2). Now let's say you need to double the size of the input while keeping the execution time constant. For the first algorithm you will need a machine that is 2X faster than the current one; for the O(N^2) one you'll need a machine that is 4X faster.
    Let's not forget that you need things not only to run fast but also to run correctly, and then the absurdity of pairing less skilled programmers with more expensive hardware becomes painfully evident.
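
    A minimal sketch of that arithmetic (the cost models are the only input; the numbers are just for illustration):

        # How much faster a machine you need in order to double the input size
        # while keeping the wall-clock time constant, for two cost models.
        def required_speedup(scale, cost):
            return cost(scale) / cost(1)

        linear    = lambda n: n        # O(N)
        quadratic = lambda n: n * n    # O(N^2)

        print(required_speedup(2, linear))     # 2.0 -> a 2X faster machine suffices
        print(required_speedup(2, quadratic))  # 4.0 -> you need a 4X faster machine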

    PS: Sorry for the typos and other errors: English is not my native language, and I had a bit too much beer last night.

  • by StCredZero ( 169093 ) on Saturday December 20, 2008 @12:06PM (#26184029)

    This only works in certain cases. Some of your problems are too many orders of magnitude too big to solve by throwing hardware at them.

    Before you do anything: Profile, analyze, understand.

    It might be useless to spend a month of development effort on a problem that you can solve by upgrading the hardware. It's also useless to spend the money on new hardware and the administrator time setting it up and migrating programs and data, when you could've just known that wouldn't have helped in the first place.

    Two questions I used to ask when giving talks: "Okay, who here has used a profiler? [hands go up] Now who has never been surprised by the results? [almost no hands]"
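
    For anyone in the no-hands group, a first profiling run is cheap; a minimal Python example (the workload is a stand-in for whatever you suspect is slow):

        import cProfile
        import pstats

        def main():
            # placeholder workload; point this at the code you actually suspect
            sum(i * i for i in range(10**6))

        cProfile.run("main()", "profile.out")           # collect timings
        stats = pstats.Stats("profile.out")
        stats.sort_stats("cumulative").print_stats(10)  # top 10 offenders, often a surprise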

    Before you spend money or expend effort, just take some easy steps to make sure you're not wasting it. Common sense.

  • by unity100 ( 970058 ) on Saturday December 20, 2008 @12:06PM (#26184033) Homepage Journal
    Everything from a measly meal to healthcare is so expensive that any kind of rare labor becomes exponentially expensive, because people need multiples of their pay to make any advance in their standard of living, given the cost of living.

    In the U.S., because of the ease with which you let mega-corporations run rampant as they yelp 'hands off business', you are paying a fortune for almost anything that is sold much cheaper in other countries. Even the SAME corporations sell the same products for much less in Europe, while giving you the shaft on the price of the same product in the U.S.

    'Hands off business' was supposed to 'create jobs', 'increase the standard of living' and so on.

    Did it? What we see currently is exactly the opposite.

    The wealth did not 'trickle down' (and why the hell should it anyway), you're losing jobs while the cost of living stays almost the same (after all, corporations have to make profits so that they can provide jobs, don't they? but where are the jobs), and the spiral goes deeper and deeper.

    I blame this on one thing alone: extremism.

    Extremism is bad at EVERYthing. Every aspect of life, social or personal, without exception.

    When you go extreme on something, you break other things to the point that it becomes a disaster. Just make a list of such things you have experienced in your own life, and you'll see.

    Business and the economy are just features of social life, and they are no exception. If you go to the extreme on ANY side, be it extreme 'freedom' or extreme regulation, it breaks down.

    America went to the extreme lawless end over the last 30 years; it cost the entire world a crisis. North Korea went to the extreme of control over the last 50 years; it cost its people poverty.

    Balance is the answer, balance is the key. Take Europe as an example: with all its faults, the system seems to be working exceptionally well. A lot of small European countries that should not have any significance at all, given their lack of natural resources and manpower, produce and create much more than the U.S. in proportional terms. Not only that, but the standard of living of their people is much, much higher.

    One word: balance.
  • Wrong objective (Score:2, Insightful)

    by trold ( 242154 ) on Saturday December 20, 2008 @12:10PM (#26184057) Homepage

    Good hardware running code written by bad programmers just means the code will fail faster. The primary goal of a programmer is to make the code work, and that does not change no matter how fast your hardware is.

  • What a crock... (Score:5, Insightful)

    by johnlcallaway ( 165670 ) on Saturday December 20, 2008 @12:11PM (#26184063)
    For purely CPU-driven applications, I would agree with this statement. But NONE of the business applications I write are bogged down by the CPU. They are bogged down by I/O: user uploads/downloads, the network, or disk access.

    I have yet to see any application that was fixed for good by throwing hardware at it. Sooner or later, the piper has to be paid and the problem fixed. Someone improved response time by putting in a new server?? Does that mean they had web/app/database/data all on one machine?? Bad, bad, BAD design for large applications: nowhere to grow. At least if it's tiered and using a SAN with optical channels, more servers can be added. Sometimes more, not faster, is better. And resources can be shared to make optimal use of the servers that are available.

    The FIRST step is to determine WHY something is slow. Is it memory-, CPU-, or I/O-bound? That doesn't take a rocket scientist; looking at sar on Unix or Task Manager on Windows can show you that. Sure, if it's CPU-bound, buying faster CPUs will fix it.
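
    A rough first look at that question, assuming the psutil package is installed (the interpretation is workload-specific, and the sampling here is deliberately crude):

        import psutil

        cpu = psutil.cpu_percent(interval=5)    # % CPU over a 5-second sample
        mem = psutil.virtual_memory()           # RAM usage / swap pressure
        io_before = psutil.disk_io_counters()
        # ... let the suspect workload run for a while ...
        io_after = psutil.disk_io_counters()

        print("CPU busy: %d%%" % cpu)
        print("Memory used: %d%%" % mem.percent)
        print("Bytes read/written:",
              io_after.read_bytes - io_before.read_bytes,
              io_after.write_bytes - io_before.write_bytes)
        # A pegged CPU suggests faster CPUs might help; heavy I/O or swapping suggests they won't.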

    The comment about developers having good boxes isn't the same as the one about applications. My latest job gives every developer a top-notch box with two monitors; I was in heaven. Unfortunately, it can't stop there. I also need development servers with enough disk space and memory to test large data sets BEFORE they go into production.

    Setting expectations is the best way to manage over-optimization. Don't say "I need a program to do this"; state "I need a program to do this work in this time frame". It is silly to make a daily batch program that takes 2 minutes run 25% faster. But it's not silly to make a web page respond in under 2 seconds, or a 4-hour batch job run in 3, *if* that is needed. Without the expectation, there is no starting or stopping point. Most developers will state "it's done" when the right answer comes out the other end, while a few may continue to tune it until it's dead.
  • by itzdandy ( 183397 ) on Saturday December 20, 2008 @12:13PM (#26184077) Homepage

    I think you need to complicate this logic a bit by taking into account the added electricity required to power the extra servers, run the servers at a higher load, or run the clients at a higher load, as well as the increase in air-conditioning costs.

    Also, time is money. If a program takes more time, there is more time for users to sit idle, which also has a cost.

    Best practice? Program as efficiently as possible. Programming expenses are paid once, while the power bill lasts forever.
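
    A back-of-the-envelope version of that trade-off (every number below is a made-up assumption):

        # Hypothetical: does one round of optimization beat adding servers,
        # once power and cooling are included?
        servers_avoided  = 2
        watts_per_server = 400           # draw under load
        pue              = 1.8           # cooling/overhead multiplier
        kwh_price        = 0.12          # $/kWh
        years            = 3

        power_cost = (servers_avoided * watts_per_server * pue / 1000.0
                      * 24 * 365 * years * kwh_price)
        hardware_cost = servers_avoided * 3000
        tuning_cost = 2 * 40 * 100       # two weeks of programmer time at $100/hr

        print(power_cost + hardware_cost, "vs", tuning_cost)
        # The power bill recurs every year; the tuning is paid for once.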

  • by 3seas ( 184403 ) on Saturday December 20, 2008 @12:16PM (#26184097) Homepage Journal

    ... throw the money at genuine software engineering (not pseudo-engineering) so that we have much better tools to program with.

  • by olyar ( 591892 ) on Saturday December 20, 2008 @12:18PM (#26184111) Homepage Journal

    One thing not in the equation here: hardware is cheap, but having that hardware managed isn't so cheap. When you scale from a couple of servers to a big bank of servers, you have to hire system admins to manage all of those boxen.

    Less expensive than a programmer (sometimes), but certainly not free.

  • by ShieldW0lf ( 601553 ) on Saturday December 20, 2008 @12:30PM (#26184191) Journal
    Everyone knows a blind mapmaker will finish his work much faster on a motorcycle than he will on foot. This is basically the same thing...
  • by nine-times ( 778537 ) <nine.times@gmail.com> on Saturday December 20, 2008 @12:32PM (#26184209) Homepage

    I think if you're paying for programming vs. hardware, you're just paying for different things. I would think that would be somewhat obvious, given their very different nature, but apparently there's still some uncertainty.

    The improvements you get from optimizing software are limited but reproducible for "free": if you have lots of installations, all of them benefit from any improvement made to the code. Improvements from adding new hardware cost money each time you add new hardware, as well as costing more in terms of power, A/C, administration, etc. On the other hand, the benefits you can get from adding new hardware are potentially unlimited.

    And it's meaningful that I'm saying "potentially" unlimited, because sometimes effective scaling comes from software optimization. Obviously you can't always drop in new servers, or add more processors/RAM to existing servers, and have that extra power end up being used effectively. Software has to be written to take advantage of extra RAM and more CPUs, and it has to be written to scale across servers, handle load balancing, and so on.

    The real answer is that you have to look at the situation, form a set of goals, and figure out the best way to reach those goals. Hardware gets you more processing power and storage for a given instance of the application, while improving your software can improve security, stability, and performance on all your existing installations without adding any hardware. Which do you want?

  • by pikine ( 771084 ) on Saturday December 20, 2008 @12:33PM (#26184219) Journal

    The article seems to assume that bad programmers write slow but correct code, which is a big assumption. But the observation on cost also means that good programmers should focus on correctness rather than performance.

    Just to illustrate how difficult it is to get correctness right: on page 56 [google.com] of The Practice of Programming by Kernighan and Pike---a very highly regarded book by highly regarded authors---there is a hash table lookup function that is combined with insert to perform an optional insertion when the key is not found in the table. It assumes that the value argument can be safely discarded if the insertion is not performed. That assumption works fine with integers, but not with pointers to memory objects, file descriptors, or any handle to a resource. An inexperienced programmer trying to generalize int value to void *value will introduce a memory leak on behalf of the user of the function.
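
    The same trap survives even in a garbage-collected language once the value is a resource handle rather than plain memory. A small Python analogue (not the book's code, just the same shape of "lookup, insert if missing" API, using raw OS file descriptors):

        import os

        # dict.setdefault() is "lookup, insert if missing": the value argument
        # is always evaluated, and silently dropped if the key already exists.
        fds = {}

        def lookup_or_insert(name, path):
            # os.open() hands back a raw integer descriptor; if 'name' is already
            # in the table, the freshly opened descriptor is dropped on the floor
            # and nothing, not even the garbage collector, will ever close it.
            return fds.setdefault(name, os.open(path, os.O_RDONLY))

        a = lookup_or_insert("passwd", "/etc/passwd")
        b = lookup_or_insert("passwd", "/etc/passwd")   # the second os.open() leaks an fd
        assert a == b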

  • Nothing new (Score:3, Insightful)

    by fermion ( 181285 ) on Saturday December 20, 2008 @12:33PM (#26184221) Homepage Journal
    This has been the trend for a very long time. Once, a long time ago, people wrote code in assembly. Even not so long ago, say 20 years, there were enough applications where it still made sense to do assembly simply because it was the only way for affordable hardware to perform well.

    Ten years ago many web servers were hand-coded in relatively low-level compiled languages. Even though hardware had become cheaper, and the day of the RAID rack of PCs was coming upon us, to get real performance one had to have software developers, not just web developers.

    Of course, cheap, powerful hardware has made all that a thing of the past. There is no reason for an average software developer to have anything but a passing familiarity with assembly. There is no reason for a web developer to know anything other than interpreted scripting languages. Hardware is, and always has been, cheaper than people. That is why robots build cars. That is why IBM sold a boatload of typewriters. That is why the Jacquard loom was such a kick-butt piece of machinery.

    The only question is how much cheaper hardware is, and when it makes sense to replace a human with a machine, or maybe a piece of software. This is not always clear. There are still relatively undeveloped places in the world where it is cheaper to pay someone to wash your clothes by hand than to buy and maintain a washing machine.

  • by br00tus ( 528477 ) on Saturday December 20, 2008 @12:39PM (#26184249)

    This uses servers as an example, but what about desktops? We use Windows desktops where I am, and having AIM and Outlook open all the time is more or less mandatory for me. Plus there are these virus-scanning programs always running which eat up a chunk of resources. I open up a web browser and one or two more things and stuff starts paging out to disk. I'm a techie and sometimes need a lot of stuff open.

    We have a call center on our floor, where the people make less than one third what I do, and who don't need as many windows open, yet they get the exact same desktop I do. My time is three times more valuable than theirs, yet the company gives me the same old, low-end desktop they get, resulting in more of my productive time being lost - those seconds I wait when I switch from an ssh client to Outlook and wait for Outlook to be usable add up to minutes and hours eventually. Giving everyone the same desktop makes no sense (I should note I eventually snagged more RAM, but the point is about general company policy more than my initial problems).

  • by Todd Knarr ( 15451 ) on Saturday December 20, 2008 @12:40PM (#26184261) Homepage

    The first is that the hardware cost isn't the only cost involved. There's also the costs of running and maintaining that hardware. Many performance problems can't be solved by throwing just a single bigger machine at the problem, and every one of the multiple machines means more complexity in the system, another piece that can fail. And it introduces more interactions that can cause failures. An application may be perfectly stable using a single database server, but throw a cluster of 3 database servers into the mix and a problem with the load-balancing between the DB servers can create failures where none existed before. Those sorts of failures can't be addressed by throwing more hardware at the problem, they need code written to stabilize the software. And that sort of code requires the kind of programmer that you don't get cheap right out of school. So now you're spending money on hardware and you're still having to hire those pesky expensive programmers you were trying to avoid hiring. And your customers are looking at the failure rates and deciding that maybe they'd like to go with your competitor who's more expensive but at least delivers what he promises.

    Second is that, even if the problem's one that can be solved just by adding more hardware, inexperienced programmers often produce code whose performance profile isn't linear, it's exponential. That is, doubling the load doesn't require twice the hardware to maintain performance, it requires an order of magnitude more hardware. It doesn't take long for the hardware spending to become completely unbearable, and you'll again be caught having to spend tons of cash on enough hardware to limp along while spending tons of money on really expensive programmers to try to get the software to where its performance curve is supportable, all while watching your customers bail to someone offering better than same-day service on transactions.

    Go ask Google. They're the poster boy for throwing hardware at the problem. Ask them what it took on the programming-expertise side to create software that would let them simply throw hardware at the problem.

  • More factors (Score:3, Insightful)

    by Lazy Jones ( 8403 ) on Saturday December 20, 2008 @01:07PM (#26184429) Homepage Journal
    Generally, investing in hardware means more people with programmer-like salaries on the payroll (designing the architecture and the maintenance tools, installing the software and hardware, keeping it running ...). A lot of these things can be automated or done with little effort, but it takes someone as competent (and expensive) as a good programmer to get it right.

    In the long run, your best investment is still the good programmer, as long as you can keep him happy and productive, because then you can grow more/faster (by buying hardware as well).

  • Re:I agree. (Score:3, Insightful)

    by smack.addict ( 116174 ) on Saturday December 20, 2008 @01:10PM (#26184475)

    This is a failure on your part. Bean counters are not penny-wise, pound foolish. They do need a concrete financial analysis, however, to prove that you aren't just blowing smoke up their skirt.

    Because most of the time, programmers are doing just that.

    And also, programmers often fail to understand the cost of money, and that sometimes it is better to spend more tomorrow than a little bit today.

  • by Sique ( 173459 ) on Saturday December 20, 2008 @01:22PM (#26184559) Homepage

    When I was a programmer, we once had a job at a large bank. One of our main reports ran across all booked loans and calculated the future finance stream (interest and amortization), either until the debt was paid off or up to 40 years out, at current interest rates. This report was sent to the Federal Bank for review, and to the department tasked with managing the bonds that raised enough capital for further loans.

    This report took 200 processor hours to complete. To get it done, it was split into 18 tranches, each running 11 hours. So it was possible to complete the job during a weekend run on 18 processors, and restart it twice in case of errors.

    A colleague of mine took on the task of rewriting the report to speed it up. For that, she hooked into every booking that changed the loan amount, the interest rate, the repayment, the end of contract or the amortization, and modified it to write a flag into a table.

    Then she rewrote the central report to store the finance stream each time it was calculated. Loans that were unchanged since the last calculation didn't have a flag set, so the report reused the old result. This sped up the report by a factor of about 150: instead of 200 processor hours, it now completed within 1:20 h.
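
    The mechanics are nothing exotic; a toy sketch of the dirty-flag idea (all names hypothetical):

        # Cache the expensive per-loan projection; recompute only the loans
        # that a booking has touched since the last run.
        cached_streams = {}   # loan_id -> previously calculated finance stream
        dirty = set()         # loan_ids flagged by the booking transactions

        def record_booking(loan_id):
            # called from every booking that changes principal, rate, repayment, ...
            dirty.add(loan_id)

        def run_report(loans, project_stream):
            for loan_id, loan in loans.items():
                if loan_id in dirty or loan_id not in cached_streams:
                    cached_streams[loan_id] = project_stream(loan)  # the expensive part
            dirty.clear()
            return cached_streams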

    It allowed us to take four large RS/6000s out of service, cancel their service contracts, reschedule the report to run daily instead of on weekends, and save on weekend man-hours. Once the report went to the bond management department daily, the finance controlling unit also became interested and used the results to refine their own tools. All together, this easily paid for the programming time put into the report.

    As you can see, there are programming tasks where just throwing more computing power at the problem doesn't solve it. It doesn't even have to be some high-level programming job; sometimes it's a dull task (finding every point in a bookkeeping system where a booking changes the finance stream of a loan is a dull task!), but if someone gets it done, it pays off easily.

  • Re:I agree. (Score:3, Insightful)

    by ScrewMaster ( 602015 ) * on Saturday December 20, 2008 @01:28PM (#26184599)

    Recently my boss reviewed my schematic and asked me to replace the 1% resistors with 2% or 5% ones "because they are cheaper". Yes, true, but I spent most of the day doing it, so he spent about $650 on the task, thereby spending MORE, not less.

    Which [potentially] shows why he's a boss - and you aren't. That $650 (overpaid in salary to you) is a one-time cost - but it can also represent considerable savings: in setup time, if 2% or 5% resistors are the standard wherever your circuits are manufactured; in the total cost of hardware across a large production run (even more so if your design contains many resistors); etc., etc. Any engineer worth a damn knows enough accounting to be able to figure this stuff out.

    I think you missed the point. The guy was saying that he's well aware of the cost savings of using cheaper resistors, and that he'd already done the analysis. The boss overrode him using financial criteria alone, rather than doing what a good engineer does, which is to try to find a balance between cost and functionality (or reliability, or performance, or accuracy, or whatever your project's target criteria are). Chances are that design will go into production and not meet spec, which means the expense of a redesign and lost manufacturing time. I see that happen all the time.

  • Re:I agree. (Score:3, Insightful)

    by ultranova ( 717540 ) on Saturday December 20, 2008 @01:28PM (#26184603)

    So yeah, I agree with the article that it's often cheaper to specify faster or more expensive hardware than to spend hours and hours of expensive engineers' and programmers' time trying to save pennies.

    Multiplied by how many servers? Now that is the question.

    I mean, if you already have a thousand-server farm, then a speedup of just one percent is going to save you from having to buy (and power, manage and eventually replace) ten servers. How much developer time is that one percent really going to cost? And this assumes perfect scalability, which is almost certainly not the case.

    The bigger the site you already have, the more it makes sense to buy programmer time instead of hardware, because program optimizations are multiplied by the size of the site.
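
    In numbers (the per-server figure is a made-up, all-in annual cost):

        servers         = 1000
        speedup         = 0.01    # a one percent optimization
        cost_per_server = 5000    # hypothetical: purchase amortization + power + admin, per year

        servers_saved = servers * speedup            # 10 machines
        budget = servers_saved * cost_per_server     # $50,000 per year
        print("That 1%% is worth $%d of developer time, every year" % budget)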

  • by GravityStar ( 1209738 ) on Saturday December 20, 2008 @01:28PM (#26184605)
    Imagine server software so craptastically written that the maximum number of people who can use the app at any one time is, say, 20. Now imagine that when you double the hardware capacity, user capacity only goes up by a factor of, say, 1.4.

    Still sure you're _only_ going to throw hardware at the issue when the business wants the application online for a couple of thousand people?
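
    Running those numbers out (the 1.4 factor implies roughly square-root scaling, and everything else is hypothetical):

        import math

        base_users = 20
        # doubling hardware only buys a 1.4x capacity gain => capacity ~ hardware**0.49
        exponent = math.log(1.4) / math.log(2)

        target_users = 2000
        hw_multiple = (target_users / base_users) ** (1 / exponent)
        print("Hardware multiple needed: %.0fx" % hw_multiple)   # on the order of 10,000x
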
  • by Anonymous Coward on Saturday December 20, 2008 @01:30PM (#26184617)

    Exactly. I live up in Canada, where the cost of living is MUCH lower, and we'd hire just about anyone who wants to work for more than that. A couple of months ago we hired a guy in his 50s who spends most of his day screwing PCBs into enclosures at like $12/hr (no qualifications needed whatsoever, you just need to want to get up in the morning), including good benefits and all (medical insurance and everything). And that's indoor work (no weather or anything), not particularly physically intensive, etc. I wouldn't even expect to find a half-way competent programmer, even straight out of school, at $8/hr (no offense).

    Hell, I have to turn down tons of existing clients at like $25/hr myself... I just don't have the time. If I could somehow manage to work 120 hours a week, my boss would gladly use them all up!

  • by ceoyoyo ( 59147 ) on Saturday December 20, 2008 @01:33PM (#26184643)

    There's the hardware, cooling, space, someone to administer it, replacing it for the next twenty years (or isn't your code going to last that long?)....

    Of course, in my line of work the goal is to go from "a million years" to "realtime" so all the hardware in the world isn't really going to help much.

  • by 0xdeadbeef ( 28836 ) on Saturday December 20, 2008 @01:47PM (#26184747) Homepage Journal

    The idea expressed in that article isn't just stupid, it is economy-destroying, civilization-threatening, mind-bogglingly stupid.

    The author is trying to solve the problem of inadequate resources by spending more to increase the brute-force effort behind his already-failing solution. It is the mythical man-month expressed in CPU horsepower.

    That isn't improving your situation, that is merely delaying your inevitable downfall. You're running to stand still, and eventually your organization will collapse from exhaustion, while your competitors, who invested in smart design and smart people, lap your corpse.

    And if you simply can't afford better people, then your reach exceeds your grasp. Scale back your ambition, plan for when you can, or accept your niche and buy the third-party solutions produced by experts who can write scalable software.

  • by gbjbaanb ( 229885 ) on Saturday December 20, 2008 @02:08PM (#26184905)

    The biggest problem is that poorly optimised software can be OK (everyone runs Java or .NET acceptably, and they're not exactly resource-light), but some poorly written software can be dreadfully slow - so much so that throwing more hardware at it will never work.

    You know, the websites written as a single JPEG image cut into 100 pieces, the loops that iterate over themselves several times to get one piece of data, etc., etc. I'm sure we've all seen stuff that makes us gawk in wonder that someone actually did it like that. (If not, take a look at TheDailyWTF [thedailywtf.com].)

    So, although hardware is now so much more powerful that it makes some sense to run 'easy' languages like Java, C# or a scripting language, that still doesn't mean you can get away with cheap, poor programmers. (If you think it does, you can hire someone for $100 to rewrite your entire app on RentACoder, assuming you find one who knows the right codez. :)

    Something no one considers in this 'hardware cost vs. programmer cost' debate is the user. If you have an app that is used by 10 users, it could be that you don't care so much about software quality. If it's used by 10 million users, you'd be saving the wrong pennies by not spending the money to write it with as much skill as you can hire.

  • Re:Get a rope (Score:5, Insightful)

    by Thumper_SVX ( 239525 ) on Saturday December 20, 2008 @02:53PM (#26185273) Homepage

    Besides, one thing that's not covered in the article is that hardware has an exponentially higher residual maintenance cost.

    In order to maintain production, many companies these days insist that hardware be in-warranty and thus able to be replaced at a moment's notice. There comes a point as well at which the amount that the hardware will cost on an ongoing basis far exceeds the cost of a single programmer to write a decent app that doesn't need it.

    I have recently saved my company the equivalent of my salary, doubled for the next two years purely in the cost of maintenance contracts for around 150 servers. Granted, this was using virtualization rather than programming to combat the problem, but in this case it made sense. The concept is still the same regardless.

  • Re:Frist? (Score:5, Insightful)

    by goatpunch ( 668594 ) on Saturday December 20, 2008 @03:03PM (#26185369)

    If they're watching movies all day long, just fire them. No need to re-orient their monitors.

  • Re:Wait, what? (Score:4, Insightful)

    by zrq ( 794138 ) on Saturday December 20, 2008 @03:21PM (#26185497) Journal

    So I guess the lesson is: if you're CERN, throw hardware at it. If you're Adobe, get a lot of good programmers/architects.

    Actually, I think that is the wrong way round. Places like CERN do 'throw hardware at it', lots of hardware, and it still isn't enough.

    Modern desktop systems have gigabytes of memory, hundreds of gigabytes of disk and multi-core processors ... and in the Adobe example you are using all that to display PDF documents or Flash movies. Your application would typically be using less than 1% of the available resources. Spending lots of money optimizing the performance does not make commercial sense.

    Large science projects like CERN are pushing the limits of hardware and software. They typically deal with data sets, data rates and processing requirements that are orders of magnitude larger than most systems can cope with.

    A typical science desktop application needs to be able to process and display gigabyte data sets, often comparing more than one dataset visually in real time. A typical eScience grid service needs to be able to handle extremely large (petabyte) datasets in real time, and you can't drop data or pause for a moment - the data stream is live and you only get one chance to process and store it.

    The same applies to Google, Yahoo, Facebook, etc. If your application is pushing the hardware to its limits, then optimizing the software to increase performance by 5% is worth a lot of developer time.

  • by julesh ( 229690 ) on Saturday December 20, 2008 @03:36PM (#26185613)

    But the observation on cost also means that good programmers should focus on correctness rather than performance.

    Just to illustrate how difficult it is to get correctness right: on page 56 of The Practice of Programming by Kernighan and Pike---a very highly regarded book by highly regarded authors---there is a hash table lookup function that is combined with insert to perform an optional insertion when the key is not found in the table. It assumes that the value argument can be safely discarded if the insertion is not performed. That assumption works fine with integers, but not with pointers to memory objects, file descriptors, or any handle to a resource. An inexperienced programmer trying to generalize int value to void *value will introduce a memory leak on behalf of the user of the function.

    Or, for a modest increase in hardware requirements to get the same performance, we can introduce automatic resource management (aka garbage collection) which makes this particular little difficulty go away.

  • Re:Frist? (Score:4, Insightful)

    by cnettel ( 836611 ) on Saturday December 20, 2008 @03:51PM (#26185715)
    Pivoting means that ClearType, or your favorite flavor of subpixel rendering, won't work. And I really do prefer ClearType, in the same way I prefer dualmon and high resolution.
  • by Krishnoid ( 984597 ) * on Saturday December 20, 2008 @04:04PM (#26185803) Journal
    It makes me think that applying data related to this together with Moore's law could produce a heuristic to estimate the relative benefits of each approach:
    • Say you can optimize the code to give you a shot (P probability) at speeding up your entire operation by a factor of N or by M orders of magnitude, for a cost of D dollars in person-hours
    • speedup/dollar == f(P, N or M, D), a mostly multiplicative estimate assuming you can get a rough idea of P from a profiling run and a little thought about the architecture
    • Applying some form of Moore's law to your hardware setup to compare c (CPU speed), i (I/O speed), and m (amount of memory) of your existing setup, vs C, I, and M for a new setup, costing H dollars in total upgrade costs
    • speedup/dollar == g(c,i,m,C,I,M,H), where g involves knowledge of how much your operation depends on the speed of the various components, again assisted by the profiling run, and likely depends on C/c, I/i, and M/m ratios

    One could compare the speedup/dollar in both cases, and if they're off by some major multiplicative factor adjusted against the absolute dollar figure involved in each case ($100::$200 (expense report) != $3400::$3500 or $3000::$6000 (purchase order)), you'd have a good first guess to use. In your situation, buying even 100x faster hardware wouldn't have improved the situation, and it seems like with one good profiling run (assuming the tools are available), your colleague could have easily made the case, at least in numbers.
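
    A rough sketch of what those two estimates could look like as code; the weights for g are the part that has to come from the profiling run, and every number below is hypothetical:

        def software_speedup_per_dollar(p, n, d):
            # p: probability the optimization pans out, n: expected speedup factor,
            # d: cost of the person-hours in dollars
            return (p * n) / d

        def hardware_speedup_per_dollar(c, i, m, C, I, M, H, weights):
            # weights: how much the workload depends on cpu / io / memory,
            # taken from the profiling run; they should sum to 1.0
            w_cpu, w_io, w_mem = weights
            speedup = w_cpu * (C / c) + w_io * (I / i) + w_mem * (M / m)
            return speedup / H

        # Hypothetical comparison: a 70% shot at a 3x win for $8,000 of programmer
        # time, versus doubling CPU and RAM (I/O unchanged) for $12,000.
        sw = software_speedup_per_dollar(0.7, 3.0, 8000)
        hw = hardware_speedup_per_dollar(2.0, 1.0, 8, 4.0, 1.0, 16, 12000,
                                         weights=(0.6, 0.3, 0.1))
        print(sw, hw)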

  • by Bodrius ( 191265 ) on Saturday December 20, 2008 @05:24PM (#26186363) Homepage

    The $hardware$ vs. $programmer-time$ equation is always based on the assumption that the programmer is actually worth their qualifications.

    You are correct that this is an unrealistic assumption but, like the "rational self-interest" assumption in economics, it is a very useful one.

    Given a set of uniformly competent programmers, you quickly reach the point of diminishing returns on optimizing performance rather than buying hardware - but that's because a competent programmer should implement code with reasonable performance in the first place. Sadly, some people think they can compensate for one with the other (competence vs. hardware), when that is an entirely different problem, an entirely different variable (e.g. an incompetent programmer with more time is not always a good thing).

    First you have to reach the level of competence where you can talk about performance optimizations at all. What you describe is not 'unoptimized code', and it is not a naive but reasonable implementation - it's gross incompetence (assuming SQL qualifications were claimed in the first place).

    As you said, you can't pay for enough hardware to compensate for that. But in the same vein, you really, *really* do not want to pay for more of that programmer time either.

  • by wolf12886 ( 1206182 ) on Saturday December 20, 2008 @06:31PM (#26186779)

    The bottom line is, software improvement is a one-time cost: once it's done, it's done.

    Hardware solutions, on the other hand, though cheaper up front, are recurring (you'll need to keep upgrading that hardware as it becomes outdated) and scale up with demand (if demand doubles, you'll need to double the hardware as well).

    This is why, except in cases where demand won't increase, or the extra hardware is unlikely to become outdated, software solutions tend to be the more economical choice.

  • by Xest ( 935314 ) on Saturday December 20, 2008 @06:43PM (#26186871)

    Indeed. It depends entirely on the problem; this is where computational complexity comes in, but cheap programmers won't even know what computational complexity is. The more complex the problem, the more knowledgeable your programmers will need to be to come up with novel solutions.

    You only have to look at most combinatorial optimization problems to see where you may run into trouble: a cheap programmer may try to brute-force it, and no matter how much hardware you throw at the problem, that method simply isn't going to work for anything but the smallest of data sets. You're going to have to get someone who knows the tricks (algorithms such as ACO) to produce acceptable solutions in a sensible time frame.
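
    A toy illustration of the gap (nearest-neighbour here is just a crude stand-in for the smarter heuristics mentioned above, such as ACO):

        import math

        # Brute-forcing a 25-city tour means checking 24!/2 orderings:
        print("%.1e candidate tours" % (math.factorial(24) / 2))   # ~3.1e23; no hardware will help

        # A greedy nearest-neighbour pass visits each city once in O(n^2)
        # and yields a usable (if suboptimal) tour.
        def nearest_neighbour(dist):
            n = len(dist)
            tour, unvisited = [0], set(range(1, n))
            while unvisited:
                here = tour[-1]
                nxt = min(unvisited, key=lambda city: dist[here][city])
                tour.append(nxt)
                unvisited.remove(nxt)
            return tour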

    But you don't even need the hardest COPs to demonstrate the types of problems you may run into; even the most basic ones can throw lesser-skilled programmers, whilst better programmers can implement a solution without even needing to look up any references.

    It's another case of cutting corners. To the companies considering this option: sure, if you want to hire cheaper programmers and throw hardware at the problem, that's fine. Just don't come crying when your entire system keels over under the weight of a problem it can't solve with the method implemented to solve it, when you then have to get someone in to do the job properly, and when you find yourself with a load of hardware lying around that you never actually needed had it been done right to start with.

    Cheap programmers are great for throwaway or non-mission-critical software, but make sure you have at least some good programmers around who have the computer science background underlying their software engineering abilities to deal with the tough/complex stuff.

  • Re:I agree. (Score:3, Insightful)

    by petermgreen ( 876956 ) <plugwash.p10link@net> on Saturday December 20, 2008 @11:14PM (#26188485) Homepage

    Can't the wrong resistance give wrong results, since the calculations of what is needed were done using exact values?
    We can't achieve perfection, so we have to be able to deal with variation in our designs. Designers should know when to specify precision components and when something more run-of-the-mill is OK (1% resistors are kind of on the edge; they used to be regarded as precision parts, but manufacturing improvements mean 1% resistors are pretty cheap nowadays).

    What the parent was getting at was that swapping 1% resistors for 2% or 5% resistors in a design is a fool's errand unless you are producing huge volumes. You can't just blindly change tolerances; you have to check whether it is appropriate in each case, and that takes time. It takes a lot of 1% resistors to equal the cost of a day of engineering time.

  • Re:Frist? (Score:3, Insightful)

    by ScrewMaster ( 602015 ) * on Sunday December 21, 2008 @01:30AM (#26189091)

    When developers ask for a new monitor or dual monitors, let them have 'em, but mandate that the monitors be in a vertical orientation [about.com] as opposed to the typical horizontal orientation. That way, they'll have to use the monitors for efficient viewing of code rather than watching movies all day long.

    Well, look here. There's a lot of personal preference involved in efficient text handling, and arbitrarily forcing programmers to work in landscape or portrait just so they don't watch movies is ridiculous. Matter of fact, if you have coders doing that on the job, either give them the requisite attitude adjustment, or just fire their happy little asses and hire some responsible citizens. Maybe in their next position they'll be a little more focused.

    Furthermore, I don't know about you but the apps I develop are generally not used with the monitor in a vertical configuration (matter of fact, given the nature of the software I work on that would be completely inappropriate) so it would nullify the advantages of a dual-monitor setup if I were forced to use them the way you describe.

    Continuing this theme, you can't just say, "programmers work better with monitors oriented THIS way." Sure, if you're hacking assembler code a vertical setup might (might!) be better for you because the lines tend to be relatively short, unless you're like me and like lots of comments. If you're coding in .Net or Java, you're probably happier with a horizontal layout given how wordy those languages are (.Net in particular, Christ on a crutch and I thought Cobol was verbose.) I've also found that when I'm editing source code, it's often nice to have the IDE running vertical, with the other, horizontal monitor for both my debug output and the application display itself.

    So, I'd say this: give your developers the tools, training and any good advice they need, and then let each of them figure out what works best. Otherwise you're just another overbearing manager more interested in exerting his authority, rather than running an efficient, productive development team. Beware of arbitrary constraints ... they're rarely helpful and usually counterproductive, because of considerable variation between individuals. We're not all alike, and we're not all maximally productive in the same identical environment.

    It's such a simple idea that I'm surprised that more businesses and coders haven't caught on to it.

    Well, now you know.

  • by Anonymous Coward on Sunday December 21, 2008 @07:20AM (#26190343)

    The reality is that programmers are scarce and increasingly so. The vast majority of those masquerading as programmers are merely coders - folks who know a language, a proprietary library and an IDE. Real programmers are engineers - problem solvers who also know languages and how to use them to solve the problems.

    As a result of this misdescription, productivity is low - largely because "solutions" usually prove inadequate on the first round and require massive reworking (witness "service packs") - so costs are high. Those who measure "programmer productivity" in lines of code per unit of time perpetuate the problem. The real measure has to be fully functional and robust solutions delivered per unit of time. Against that measure there will always be a few real programmers who are vastly more affordable than the rank and file, even though their apparent rates may be higher. The fundamental business problem is to identify them.

  • Consumer products (Score:3, Insightful)

    by WarJolt ( 990309 ) on Sunday December 21, 2008 @08:16AM (#26190505)

    if(units() * savings() > programmercost())
        hireprogrammer();

    When you sell a million units, a penny means $10,000 and a dollar means a brand-new Lamborghini. I guess this article only covers enterprise software, where the number of machines running your code might be in the thousands. The opposite argument can be made for consumer products, where unit counts are in the millions.
