
Hardware Is Cheap, Programmers Are Expensive

Posted by Soulskill
from the optimization-takes-effort dept.
Sportsqs points out a story at Coding Horror which begins: "Given the rapid advance of Moore's Law, when does it make sense to throw hardware at a programming problem? As a general rule, I'd say almost always. Consider the average programmer salary here in the US. You probably have several of these programmer guys or gals on staff. I can't speak to how much your servers may cost, or how many of them you may need. Or, maybe you don't need any — perhaps all your code executes on your users' hardware, which is an entirely different scenario. Obviously, situations vary. But even the most rudimentary math will tell you that it'd take a massive hardware outlay to equal the yearly costs of even a modest five person programming team."
This discussion has been archived. No new comments can be posted.

  • Timing is everything (Score:4, Interesting)

    by BadAnalogyGuy (945258) <BadAnalogyGuy@gmail.com> on Saturday December 20, 2008 @11:50AM (#26183933)

    Sure, right now it may be more expensive to hire better developers.

    But just wait a couple more months when unemployment starts hitting double digits. You'll be able to pick up very good, experienced developers for half, maybe a third of their current salaries.

    Sure, invest in some HW now. That stuff will always be handy. But don't just go off and assume that developers will be expensive forever.

  • by Baldrson (78598) * on Saturday December 20, 2008 @11:56AM (#26183965) Homepage Journal
    Better recalculate the trade-offs for the current economic crisis:

TFA says the average programmer with my experience level should be getting a salary of around $50/hour, but you'll see I've recently advertised myself at $8/hour. [majorityrights.com]

    How many hundreds of thousands of jobs have been lost in Silicon Valley alone recently?

The crisis has gutted demand for hardware as well, but things are changing so fast that yesterday's calculations are very likely very wrong. Tomorrow, hyperinflation could hit the US, sending hardware prices through the roof due to the exchange rate.

  • by Analogy Man (601298) on Saturday December 20, 2008 @11:59AM (#26183985)
Toss as much CPU and memory as you want at a chatty transaction and you won't solve the problem. What about the cost of your 2,000 users of the application wandering off to the coffee machine while they wait for an hourglass to relinquish control to them? Over the years I have seen wanton ignorance about efficiency, scalability and performance from programmers who ought to know better.
  • Re:I agree. (Score:4, Interesting)

    by tinkertim (918832) on Saturday December 20, 2008 @12:05PM (#26184025) Homepage

    Or as Benjamin Franklin said, "Some people are penny-wise, but pound foolish." You try to save pennies and waste pounds/dollars instead.

    The same can be true in programming, but usually the scenario describes development itself, i.e. premature optimization. If your team is experienced, the only reason for this would be people trying to do big things in small spaces.

    I think it comes down to what you need, what you want and what you need to spec for your software to actually run.

If you're willing to spend quite a bit of money on some really talented people, what you need as far as hardware goes (at least memory) can be reduced significantly.

    What you want is to roll a successful project in xx months and bring it to market, so raising the hardware bar seems sensible.

    Then we come down to what you can actually spec, as far as requirements for your clients who want to use your software. Microsoft ended up lowering the bar for Vista in order to appease Intel and HP .. look what happened.

If your market is pure enterprise, go ahead and tell the programmers that 4GB of RAM and a newer dual-core CPU is the minimum spec for your stuff. If your market is desktop users .. it may be a bad idea.

    I don't think there's a general rule or 'almost always' when contemplating this kind of thing.

  • Not always the case (Score:4, Interesting)

    by mark99 (459508) on Saturday December 20, 2008 @12:10PM (#26184059) Journal

In a lot of big orgs it is amazing how expensive it can be to upgrade your hardware or add to an existing farm. Not because of the hardware cost, but because of all the overhead involved in designing/specifying the setup, ordering, waiting for it to come, getting space for it, installation, patching, backing up, etc.

In fact, I've seen several orgs where the cost of a "Virtual Server" is almost as much as a physical one, because the cost of all this servicing is so high. Whether or not this is necessary I don't want to debate here, but it is undeniably the case.

    So I think the case for throwing hardware at issues is not as clear cut as this article implies.

  • Get a rope (Score:5, Interesting)

    by Anonymous Coward on Saturday December 20, 2008 @12:11PM (#26184067)

I almost feel an order of magnitude more stupid for reading that article. Throwing more hardware at a problem definitely makes sense for a small performance issue, but that is rarely the case. The whole idea makes me sick as a developer. This reminds me of the attitude of many developers of a certain web framework out there. Instead of fixing real problems, they cover up fatal flaws in their architecture with a hardware band-aid. There's no denying it can work sometimes, but at quite a high cost, and it's completely inappropriate for some systems. Not everyone is just building a stupid to-do-list application with a snappy name.

Consider that many performance problems, plotted on a graph, have an upper limit. At some point throwing more hardware at the problem is going to do absolutely nothing. Further, the long-term benefit of hardware is far less than the potential future contributions of a highly paid, skilled programmer.

Another issue is that there are plenty of performance problems I have seen that cannot be scaled easily just by adding more hardware. A classic example is some RDBMS packages with certain applications. Often databases can be scaled vertically (limited by RAM and I/O performance), but not horizontally, because of problems with stale data, replication, application design, etc. A programmer can fix these issues so that you can then add more hardware, but it is far more valuable in the long term to have someone who enables you to grow this way properly.

    Actually fixing an application is a novel idea, don't you think? If my air conditioning unit is sometimes not working, I don't go and install two air conditioning units. I either fix the existing one or rip it out and replace it.

    Further, there are plenty of performance problems that can never be solved with hardware. Tight looping is one that I often see. It does not matter what you throw at it, the system will be eaten. Another example is a garbage collection issue. Adding more hardware may help, but typically delays the inevitable. Scaling horizontally in this case would do next to nothing because if every user hits this same problem, you have not exactly bought more time (therefore you must go vertically as well, only really delaying the problem).

    The mentality of this article may be innocent in some ways, but it reminds me of this notion that IT people are resources and not actual humans. Creativity, future productivity, problem solving skills, etc are far more valuable to any decent company than a bunch of hardware that is worthless in a few months and just hides piss poor work by the existing employees.

It feels like a return to the .com bubble and F'd Company. I am sure plenty of companies following this advice can look forward to articles about their own failures. If someone proposes adding hardware for a sane reason, say to accommodate a few thousand more visitors with some more load-balanced servers, by all means do so. If your application just sucks and you need to add more servers to cover up mistakes, it is time to look elsewhere because your company is a WTF.

  • Wait, what? (Score:5, Interesting)

    by zippthorne (748122) on Saturday December 20, 2008 @12:12PM (#26184071) Journal

    Surely that might work for a one-off, but if you're selling millions or even thousands of copies of your software, even a $100 increase in hardware requirements costs the economy millions. Just because it doesn't cost YOU millions doesn't mean you don't see the cost.

    If your customers are spending millions on hardware, that money is going to the hardware vendors, not to you. And more importantly, that money represents wasted effort. Effort that could otherwise be used to increase real wealth, thus making the dollars you do earn more valuable.

So I guess the lesson is: if you're CERN, throw hardware at it. If you're Adobe, get a lot of good programmers/architects.

  • by Krishnoid (984597) * on Saturday December 20, 2008 @12:16PM (#26184095) Journal
    And at least one skilled person from that era [thedailywtf.com] in a leadership position made this tradeoff with significant economic -- as well as entertaining and educational -- consequences.
  • by MickLinux (579158) on Saturday December 20, 2008 @12:19PM (#26184117) Journal

    Well, unless the $8/hr is an introductory rate (that is, the first 200 hrs are at $8.50, then after that you go up to $15 or $20/hr), you could do better by joining a construction site. At our place (prestress, precast concrete plant), we are paying warm bodies $10/hr.

    Show that you can read drawings, and you can quickly rise up to $12-$14/hr. Which is, admittedly, a pittance, but if you live in a trailer home, you can make ends meet. Then you can still program in your spare time, and keep the rights to your work, to boot.

  • Absolutely True (Score:5, Interesting)

    by titzandkunt (623280) on Saturday December 20, 2008 @12:21PM (#26184135)
    When I was young, eager and naive I worked at a place that was doing some pretty heavyweight simulations which took a good three-four days on a (I think) quad-processor Sun box.

It was quite a big site and had a relatively high turnover of decent hardware. Next to the IT support team's area was a room about 6 yards by 10 yards, almost full to the ceiling with older monitors, printers and a shitload of commodity PCs. And I'd just started reading about mainstream acceptance of Linux clustering for parallelizable apps.

    Cue the lightbulb winking into life above my head!

    I approached my boss, with the idea to get those old boxes working again as a cluster and speed things up for the modelling team. He was quite interested and said he'd look into it. He fired up Excel and started plugging in some estimates...

Later that day I saw him and asked him what he thought. He shook his head. "It's a non-starter," he said. Basically, if the effort involved in getting a cluster up and working - including porting of apps - was more than about four man-weeks, it was cheaper and a lot safer just to dial up the Sun rep, invoke our massive account (and commensurate discount) with them and buy a beefier model from the range. And the existing code would run just fine with no modifications.

    A useful lesson for me in innovation risk and cost.
  • by tukang (1209392) on Saturday December 20, 2008 @12:39PM (#26184253)

From a purely algorithmic perspective you are correct, but it will be easier to implement that O(n) algorithm in a high-level scripting language like Python than it would be to implement it in assembly or even C, and I think that's where the submitter's argument of relying on hardware to make up the speed difference makes sense.
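tukang's point can be sketched concretely. As an illustrative example (not from the comment itself), an O(n) frequency count is one line of real logic in Python, where C or assembly would need far more code for the same algorithm:

```python
# Sketch: the algorithmic win matters more than the language.
# A linear-time frequency count via a hash map; in a lower-level
# language the same idea takes dozens of lines to express.
from collections import Counter

def word_counts(words):
    """O(n) frequency count over an iterable of words."""
    return Counter(words)

counts = word_counts(["a", "b", "a", "c", "a"])
print(counts["a"])  # 3
```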

  • Re:Frist? (Score:5, Interesting)

    by tomhudson (43916) <barbara.hudsonNO@SPAMbarbara-hudson.com> on Saturday December 20, 2008 @12:55PM (#26184345) Journal

    Natalie Portman can't act for shit and she has the tits of an 11-year old girl. Grits are bland and best served to the inbred, down-syndrome-afflicted inhabitants of the Southern United States. Get off it already.

    that's the point - they DO get off on it!

    As for the rest, if you REALLY want to improve productivity:

    • HARDWARE
      1. Dual monitors. They pay for themselves within weeks. This is a real no-brainer.
      2. Dual monitors. They pay for themselves within weeks. This is a real no-brainer.
      3. Dual monitors. They pay for themselves within weeks. This is a real no-brainer.
      4. Did I mention dual monitors? They really make a difference ...
    • PEOPLE
      1. Learn to manage people. The biggest time-waster is bad management.
      2. Learn some communications skills. This applies to everyone. Management, programmers, get your "people skills" in order.
      3. Give people the time they need to better self-organize. Unrealistic deadlines waste time as corners are cut.
      4. Learn to manage projects. This includes cutting features right at the beginning, instead of the usual "we have this checklist of features", and then the inevitable "feature creep", followed by the "what can we cut so we can ship the *^&@&%&^% thing?"

    The real productivity killers are poor morale, poor management, poor communications, poor specifications, poor research, lack of time for testing, lack of time for documenting, lack of time for "passing on knowledge" to other people, etc. Not hardware.

    Yes, hardware IS cheap. Poor management is the killer - in every field. Just ask anyone who has been on a death march project. Or bought GM stock a year ago. Or who supported John McCain, then watched Sarah Palin become his "bimbo eruption." They all have one thing in common - people who thought they knew better, didn't do their research properly, and then screwed the pooch.

  • by jopsen (885607) <jopsen@gmail.com> on Saturday December 20, 2008 @12:58PM (#26184363) Homepage

Of course, if you're releasing software into the wild and it needs to run on many different machines, you'd better make sure it performs well, especially if it's a retail product. So spend the extra money and make it really good.

People usually buy a product before they realize the performance sucks... And retailers always say it's just because your computer isn't new enough... Which makes people buy new computers, not complain about the software...
    - Or maybe I'm wrong...

    But I don't know many non-computer-freaks who can tell you the specs of their computer, fewer still who can compare them to the minimum requirements of a game, and almost nobody who knows that the recommended system spec is actually the minimum requirement for any practical purpose...
    And I don't blame them... I'm a nerd, not a gamer, and I can't tell the difference between most modern graphics cards...

  • by AmberBlackCat (829689) on Saturday December 20, 2008 @01:10PM (#26184463)
    I remember being subject to a class called Data Structures & Algorithms, which was all about choosing the right data structures to efficiently handle a problem, and then calculating the efficiency of algorithms to make sure it works well when you throw a huge amount of data or cycles at it. I also seem to remember being subject to Computer Architecture, in which we were taught about the underlying structure of the computer to better understand the hardware before we try to write software for it. I wonder if they teach that in those call centers in India, or those single programming courses companies make people take when they're trying to cut out the cost of programmers altogether.
  • by johnrpenner (40054) on Saturday December 20, 2008 @01:15PM (#26184505) Homepage

    Andy Hertzfeld, engineer on the original Macintosh team:

Steve was upset that the Mac took too long to boot up when you first turned it on, so he tried motivating Larry Kenyon by telling him: "Well, you know, how many millions of people are going to buy this machine? It's going to be millions of people. Let's imagine that you can make it boot five seconds faster. Well, that's five seconds times a million every day; that's fifty lifetimes. If you can shave five seconds off that, you're saving fifty lives." And so it was a nice way of thinking about it, and we did get it to go faster. (PBS, Triumph of the Nerds, Part 3)

  • by davecb (6526) * <davec-b@rogers.com> on Saturday December 20, 2008 @01:16PM (#26184509) Homepage Journal

    Throwing hardware at a problem means the writer failed to use his sysadmin staff to do basic capacity planning while there wasn't a problem.

And as johnlcallaway said, the problem isn't usually CPU: most bottlenecks are either disk I/O or code-path length.

    I'm a professional capacity planner, and it seems only the smartest 1% of companies ever think to bring me in to prevent problems. A slightly larger percentage do simple resource planning using the staff they already have. A good example of the latter is Flickr, described by John Allspaw in The Art of Capacity Planning [oreilly.com], where he found I/O was his problem and I/O wait time was his critical measurement.

Failing to plan means you'll hit the knee in the response-time curve, and instead of a few fractions of a second, response time will increase (degrade) so fast that some of your customers will think you've crashed entirely.

    And that in turn becomes the self-fulfilling prophecy that you've gone out of business (;-()

Alas, the people who fail to plan seem to be the great majority, and they suffer cruelly for it. The last few percent are those unfortunates whose professional staff planned, warned, and were ignored. Their managers pop up, buy some CPUs or memory to solve their I/O problem, scream at their vendor for not solving the problem, and then suddenly go quiet. The hardware usually shows up on eBay, so I think you can guess what happened.

    --dave
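The "knee" davecb describes falls out of basic queueing theory. As a hedged sketch (a simplified model, not anything from the comment): in the elementary M/M/1 model, mean response time is R = S / (1 - rho) for service time S and utilization rho, so it stays nearly flat and then blows up as utilization approaches 1:

```python
# Hedged sketch: M/M/1 mean response time R = S / (1 - rho).
# Near rho = 1 the denominator vanishes and response time explodes,
# producing the "knee" in the response-time curve.
def response_time(service_time, utilization):
    """Mean response time for an M/M/1 queue."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1.0 - utilization)

S = 0.1  # seconds of service time per request
for rho in (0.5, 0.9, 0.99):
    print(f"utilization {rho:.2f} -> response {response_time(S, rho):.2f}s")
```

At 50% utilization a 0.1s request takes 0.2s; at 99% it takes 10s, which is exactly the "you've crashed entirely" experience.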

  • Re:Frist? (Score:2, Interesting)

    by Ethanol-fueled (1125189) * on Saturday December 20, 2008 @01:37PM (#26184675) Homepage Journal
When developers ask for a new monitor or dual monitors, let them have 'em, but mandate that the monitors be in a vertical orientation [about.com] as opposed to the typical horizontal orientation. That way, they'll have to use the monitors for efficient viewing of code rather than watching movies all day long. It's such a simple idea that I'm surprised more businesses and coders haven't caught on to it.
  • by nabsltd (1313397) on Saturday December 20, 2008 @01:52PM (#26184801)

    Big deal.

    Right now, I've got a problem with a software upgrade where it has to convert the Oracle database to the new version. The conversion takes 40 hours on a database with less than 6 million rows because the code starts a transaction, updates one row, then ends the transaction. After seeing the actual SQL being used, it could be replaced by "UPDATE thetable SET field1 = field2 + constant WHERE field3 = anotherconstant".

    I literally could not buy hardware fast enough to overcome the stupidity of these programmers, and it would be far better to pay a lot more money for the people instead. Unfortunately, this is not an in-house product, and I don't get to pick the programmers that an outside company is going to hire.
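The contrast nabsltd describes can be sketched with SQLite standing in for Oracle (the table and column names follow his example; the surrounding code and constants are invented for illustration):

```python
# Sketch of row-at-a-time commits vs. one set-based UPDATE,
# using SQLite in place of Oracle; names follow the comment's example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE thetable (field1 INT, field2 INT, field3 INT)")
conn.executemany("INSERT INTO thetable VALUES (0, ?, 1)",
                 [(i,) for i in range(1000)])

# Slow pattern (what the upgrade tool did): begin a transaction,
# update one row, commit, repeat for every row:
#   for rowid, f2 in conn.execute("SELECT rowid, field2 FROM thetable"):
#       conn.execute("UPDATE thetable SET field1 = ? WHERE rowid = ?",
#                    (f2 + 10, rowid))
#       conn.commit()

# Set-based replacement: one statement, one transaction.
with conn:
    conn.execute("UPDATE thetable SET field1 = field2 + 10 WHERE field3 = 1")

print(conn.execute(
    "SELECT field1 FROM thetable WHERE field2 = 5").fetchone()[0])  # 15
```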

  • Re:Nothing new (Score:3, Interesting)

    by Percy_Blakeney (542178) on Saturday December 20, 2008 @02:16PM (#26184959) Homepage

    Once, a long time ago, people wrote code in assembly... Of course cheap powerful hardware has made that all a thing of the past.

    Actually, I would argue that advances in compilers and interpreters have been just as important to that trend as advances in hardware.

  • Re:I agree. (Score:3, Interesting)

    by theaveng (1243528) on Saturday December 20, 2008 @03:06PM (#26185389)

    I don't know why this is "funny"? Ask a manager sometime how much he charges per hour for his programmers/engineers, and he'll tell you $90 or maybe even $100.

    What we actually get PAID is far below that. ;-)

  • Re:Frist? (Score:2, Interesting)

    by zrq (794138) on Saturday December 20, 2008 @03:29PM (#26185557) Journal

    Dual monitors. They pay for themselves within weeks. This is a real no-brainer.

    Ok, I'll bite .. what do you actually use the 2nd monitor for ?

  • Re:First Java Post? (Score:2, Interesting)

    by mR.bRiGhTsId3 (1196765) on Saturday December 20, 2008 @03:39PM (#26185637)
    It is also sometimes nice to be able to have some form of documentation open next to the code you are writing. I can't imagine needing 2 monitors stacked vertically to effectively edit something.
  • by mcrbids (148650) on Saturday December 20, 2008 @03:42PM (#26185653) Journal

    Although you mention scalability and flexibility, I don't think you really hit the nail on the head.

    Performance and scalability are NOT the same. They are fundamentally different. You can have a weakly performing software product that scales nicely, and you can easily have a high performance application that doesn't scale at all.

    Understanding this difference can be the make/break point in whether or not a mildly profitable company can become a world-changer! It's fairly easy to write high-performance software. But it's quite a bit more difficult to build software that scales!

    It all really comes down to understanding the Schlemiel the Painter algorithm [wikipedia.org] which is RAMPANT in software designs.

    Quite literally, there is simply no way to avoid these types of algorithms, but by designing your software correctly, you can limit the effect of these algorithms on the overall scalability of your software stack as the problem set grows larger and larger.

And that's software that scales. For example, PHP often scales very nicely because, although it's not a fantastic performer, its "share nothing" approach means that adding more processes and/or servers doesn't particularly impact your original infrastructure. But if you don't design your application right, PHP can scale miserably, depending on how you manage your resources.

    If you write software, ask yourself: what if the whole world were using your product? Could you handle it? Whatever your answer, if you feel sure of your answer, it's probably because you don't yet understand exactly what it means to scale.
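The canonical Schlemiel-the-Painter example is repeated string concatenation, which re-copies everything accumulated so far on each step (O(n^2) total work), versus a single-pass join. A minimal sketch:

```python
# Schlemiel the Painter in code: each concatenation walks back over
# the whole string built so far, so total work grows quadratically.
def schlemiel(parts):
    s = ""
    for p in parts:
        s = s + p  # copies everything accumulated so far, every time
    return s

def linear(parts):
    return "".join(parts)  # single pass over all parts

parts = ["x"] * 1000
assert schlemiel(parts) == linear(parts)  # same result, very different cost
print(len(linear(parts)))  # 1000
```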

  • Re:Absolutely True (Score:2, Interesting)

    by Orestesx (629343) on Saturday December 20, 2008 @04:25PM (#26185943)

    Four man-weeks is, what, $10,000? How much was the machine that he bought? My guess is that it wasn't cheaper to buy a new machine, but it was easier and safer.

    The risk is the thing. If it doesn't work, you're screwed.

  • Re:I agree. (Score:3, Interesting)

    by mrand (147739) on Saturday December 20, 2008 @04:36PM (#26186017)

The difference between 1% and 5% is 0.1 cents according to Digikey, so it is going to take 650k resistors to recoup that cost. Assuming your board has 100 resistors on it, the cost is recouped after selling ~6500 boards. Only you can tell me if the board is going to sell that many over its lifetime.

Having said that, if you are going to need a 1% resistor somewhere anyway, it makes even less sense to stock both 1% and 5% parts for that value. Just buy the 1% and eliminate the duplicate effort required to buy, receive, stock, inventory, etc., a second component of the same value.

          Marc
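mrand's figures are mutually consistent if the fixed overhead being recouped is $650; that figure is inferred from "650k resistors", not stated in the comment. A quick check in integer arithmetic, working in mills (tenths of a cent) to avoid float rounding:

```python
# Sanity check of the comment's numbers, in mills (1 mill = 0.1 cent).
# The $650 fixed cost is inferred, not stated in the comment.
price_delta_mills = 1          # 1% part costs 0.1 cents more than 5%
fixed_cost_mills = 650 * 1000  # $650 of one-time overhead, in mills
resistors = fixed_cost_mills // price_delta_mills
boards = resistors // 100      # assuming 100 resistors per board
print(resistors, boards)  # 650000 6500
```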

  • by crowne (1375197) on Saturday December 20, 2008 @04:50PM (#26186139) Homepage
    This comes down to choosing the right tool for the job. Perhaps their application is written using frameworks which enable/enforce this kind of normal transactional processing of requests.

    Now if the existing application is the only way that they know how to get to the data, then it may easily become the golden-hammer / silver bullet that gets used for performing an upgrade, rather than writing an external sql script which they might not be familiar with. Add to this the common convention of "nothing touches my database except my application", which has proven to be useful by preventing rogue updates which cause application 'bugs', and the golden-hammer / silver bullet becomes even more appealing.

An SQL script wouldn't be the only non-application choice; there are quite a few good ETL solutions available, however these tend to cost quite a bit, and perhaps the vendor does not want to impose another licence fee on the client. This brings up a point more relevant to the main article thread: when deciding whether to throw people or hardware at a particular problem, you always have to be aware of the hidden costs, i.e. licences for all software used including pre-requisites, network capacity, server-room capacity, power & cooling, etc. Naturally, wide-spread use of open source software makes the initial calculation of software licencing cost a lot easier, although I'm sure there are those who could argue that the savings on open-source licences are eroded by the necessary additional staff costs.
  • Stupid Idea (Score:3, Interesting)

    by musicmaker (30469) on Sunday December 21, 2008 @12:36AM (#26188903) Homepage

Ever come across the n+1 selects problem in Hibernate? How many junior devs are good enough to figure out what's going on? Not many.

It means that if you are fetching 1,000 records from the database, it takes as much as 1,000 times as long as it should. Is halving your dev-team cost really worth a 1,000-fold increase in hardware costs because your programmers don't understand the technology properly?
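The N+1 pattern is easy to demonstrate outside any ORM. Here is a minimal sketch with SQLite standing in for Hibernate, using an invented schema and counting the queries issued:

```python
# Minimal N+1 selects demonstration: an invented two-table schema,
# with a counter standing in for the ORM's query log.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY);
CREATE TABLE items (order_id INT, name TEXT);
INSERT INTO orders VALUES (1), (2), (3);
INSERT INTO items VALUES (1, 'a'), (2, 'b'), (3, 'c');
""")

queries = 0

def q(sql, *args):
    global queries
    queries += 1
    return conn.execute(sql, args).fetchall()

# N+1 pattern: one query for the parents, then one per parent row.
orders = q("SELECT id FROM orders")
for (oid,) in orders:
    q("SELECT name FROM items WHERE order_id = ?", oid)
print(queries)  # 4 queries for just 3 parent rows

# The fix: a single join is 1 query regardless of row count.
queries = 0
q("SELECT o.id, i.name FROM orders o JOIN items i ON i.order_id = o.id")
print(queries)  # 1
```

With 1,000 parent rows the first pattern issues 1,001 queries, which is exactly the 1,000x slowdown described above.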

  • by Anonymous Coward on Sunday December 21, 2008 @07:47AM (#26190419)

The more you consider what a programmer makes, the more you move toward outsourcing to another country all over again. I thought we dealt with this years ago and realized that it's rarely, if ever, a good idea to outsource many IT jobs to other countries, and I'll let you ponder why.

  • by tacocat (527354) <tallison1 AT twmi DOT rr DOT com> on Sunday December 21, 2008 @07:54AM (#26190441)

Where I am currently working, a pizza-box server has an annual cost of 2.5 developers' salaries for the same period of time. It's grossly out of balance with this article.

    Perhaps there is a reason some companies need Government Bailouts...

  • by roman_mir (125474) on Sunday December 21, 2008 @12:20PM (#26191701) Homepage Journal

    I am late here for this story, but I would like to add something to it for the sake of the late readers anyway :)

In the second half of 2001 I was on a project for a long-since-defunct company called WorldInsure (hey, former Corelan guys, any of you still out there, working for Symcor by any chance?)

So, I came in about halfway into the one-year project; within a few months the most senior developer on the project left, but the team was still about 40 people in total. The application cost something like 5 MegaBucks by the end, but the client didn't want to pay the last million because the performance was outrageously slow: 12 concurrent transactions per second, as opposed to the 200 the client wanted, on two 4-way Sun servers that were gigantic for the time.

The app was a very detailed page-after-page insurance questionnaire that would branch into more and more pages and questions as previous questions were answered. At some point a PDF was generated with the answers and provided on one of the last pages. The problem was with moving from page to page; the waiting times were too long, approaching a minute for some pages.

    I was asked to speed it up. Long story short, after 1.5 months of tinkering with code produced by a bunch of novices, here is the list of improvements that I can remember at this point:

    1. Removed about 80% of unnecessary database reading by removing TopLink.
    2. Removed about 80% of unnecessary database writing by changing the way the data was persisted. Instead of persisting the entire data set on each new page, only the incremental changes now were persisted.
    3. Reduced the page pre-processing by getting rid of the XSLT transformers on XML structures and switching to JSPs instead.
4. Removed cluster I/O thrashing by reducing the session object size from an unnecessary 1MB to a manageable 10KB.
    5. Reduced CPU load by caching certain data sets instead of reprocessing them on each request within a user session.
6. Decoupled PDF generation into a separate application and communicated the request to generate the PDF via a simple home-grown message queue done with a database table. This was one of the more serious problems within the app, because it could bring down a server due to the buggy code in the Adobe PDF generator that was used at the time. In fact, the original application ran the PDF generation as a separate Java application that would be restarted after about 5 generations and would be called via a System.execute call so as not to bring down the BEA WebLogic server. Later on, this entire portion was rewritten and the Adobe code thrown away. I am sure that today the Adobe code is fine and all, but at the time it was a real pig.
    7. Removed many many many many unnecessary System.out.println calls, and replaced with proper logging where needed.
    8. Fixed the home grown servlet manager (similar to Struts main servlet), this code was freaking ugly as hell and totally unstable.

    There were some other smaller fixes, but the main bulk is listed here. By the end of the month and a half the app was doing over 300 simultaneous transactions per second.

300/12: that's a 25x code performance improvement. I am not at all convinced that this improvement could have been achieved through hardware at all, but even if it could, it would have cost much more than what I cost at the time (around 70 CAD/hr for 1.5 months).

Oh, did I mention that the client coughed up the last million bucks after that? After all, the code met their performance expectations and exceeded them by half at least.
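The database-table message queue from item 6 can be sketched roughly as follows (SQLite with invented table and column names; the real system was Oracle/WebLogic-era Java):

```python
# Rough sketch of a message queue done with a database table,
# as in item 6 above. Table and column names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE pdf_jobs (
    id INTEGER PRIMARY KEY,
    payload TEXT,
    status TEXT DEFAULT 'pending')""")

def enqueue(payload):
    """App side: record a PDF-generation request and return."""
    with conn:
        conn.execute("INSERT INTO pdf_jobs (payload) VALUES (?)", (payload,))

def claim_next():
    """Worker side: grab the oldest pending job and mark it taken."""
    with conn:
        row = conn.execute(
            "SELECT id, payload FROM pdf_jobs "
            "WHERE status = 'pending' ORDER BY id LIMIT 1").fetchone()
        if row:
            conn.execute("UPDATE pdf_jobs SET status = 'taken' WHERE id = ?",
                         (row[0],))
        return row

enqueue("questionnaire-42")
job = claim_next()
print(job[1])  # questionnaire-42
```

The point of the design is isolation: the crashy PDF generator runs in its own process and polls the table, so it can die and restart without taking the app server down with it.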

  • Re:Stupid Idea (Score:5, Interesting)

    by Shados (741919) on Sunday December 21, 2008 @12:24PM (#26191721)

    Ever come across the n+1 selects problem in hibernate

Common issue indeed, and actually a problem with Hibernate... in this day and age, there are algorithms that can be implemented in object-relational mappers to avoid at least the common scenario in which this happens in Hibernate or LINQ to SQL/Entity Framework... I'm not sure why it never gets fixed.

That being said, if you read the article (I know, I know, slashdot), they're talking about premature optimisation. Basically, things like avoiding Hibernate completely because of its overhead, or optimising every single query as much as possible (even if performance is acceptable) to save every last bit of juice, so that your app can run in 10 megs of RAM instead of 100. They're not, in any way, talking about using shitty programmers; they advocate using GOOD developers' time more efficiently (solving real problems, instead of spending too much time on performance).

Almost everyone who replied saying it was stupid brought up either how programming mistakes can screw things up, or how it's possible to make a system that doesn't scale at all. All those people missed the point.

You can make a system that is slower but still scales and is still correctly done a LOT (and I mean a LOT) faster, if you don't go nitpicky and try to optimise everything as you go. You simply avoid doing anything totally dumb, code according to best practice, etc., but you're not going to rewrite your system in C to avoid garbage collection, you're not going to rewrite the framework's data structures to squeeze out 1% performance, and you won't avoid Hibernate completely to dodge the mapping overhead. You can tap into these time-saving paradigms just by upgrading your hardware. You still need competent developers!!! But those competent developers can do more in less time.

That's all the author was advocating.
