Programming

What Today's Coders Don't Know and Why It Matters

jfruhlinger writes "Today's programmers have much more advanced languages and more forgiving hardware to play with — but it seems many have forgotten some of the lessons their predecessors picked up in a more resource-constrained era. Newer programmers are less adept at identifying hardware constraints and errors, at developing thorough specifications before coding, and at low-level skills like programming in assembly language. You never know when a seemingly obsolete skill will come in handy. For instance, Web developers who cut their teeth in the days of 14.4 Kbps modems have a leg up in writing apps for laggy wireless networks."
  • Newsflash (Score:2, Insightful)

    by Anonymous Coward

    Experienced people have experience in things they have experienced

    • newsflash - many schools no longer teach low-level programming.

      If you want to get good, truly good, you'll have to learn some of this on your own. And in the good old days, all you had to do was count up clock cycles and you were done. Modern processors with multi-stage pipelines and out-of-order execution are much harder to hand-optimise. When a pipeline stall is very expensive, you don't worry so much about clocks.

  • Fashion (Score:4, Informative)

    by funkatron ( 912521 ) on Friday August 05, 2011 @05:38PM (#37001556)
    So your particular skillset has fallen out of vogue for a while; it happens. If this stuff is useful, it'll come back. For instance, a lot of the hardware-related skills mentioned are still around; they're just considered a specialisation these days. In most situations it's safe to assume that the hardware either performs within spec or that a lower layer (the OS, etc.) is dealing with any irregularities.
    • So your particular skillset has fallen out of vogue for a while; it happens. If this stuff is useful, it'll come back. For instance, a lot of the hardware-related skills mentioned are still around; they're just considered a specialisation these days. In most situations it's safe to assume that the hardware either performs within spec or that a lower layer (the OS, etc.) is dealing with any irregularities.

      I'm actually a youngin' who took interest in the lower layers, and developed my skill set around that. I'm kind of waiting for the oldies to retire/kick the bucket and open up more positions for people like me. I anticipate that I'll be one of the hawt shite programmers once the population of systems programmers starts dwindling...

      • by lgw ( 121541 )

        Pretty much. There are very few comp sci programs these days that teach anything relevant to systems programming, and kernel jobs still pay nicely as a result. I stay away from the kernel stuff myself, as I find it quite tedious, but there are plenty of user-mode systems jobs around in Silly Valley - not so much elsewhere. (Stay away from embedded, though; that field pays crap for some reason I've never grasped.)

        We're re-inventing the mainframe all over again, and that promises to be a lot of work.

  • They don't know that old trick from liblawn
    Lawn::GetOffLawn(kid);

    • by Sarusa ( 104047 )

      This obviously calls for a LawnFactoryFactorySingletonFactory pattern

      • Re: (Score:3, Funny)

        No, it clearly demands a combination of the observer pattern with the command pattern: You observe your lawn, and if you see kids, you command them to get off it.

    • by billcopc ( 196330 ) <vrillco@yahoo.com> on Friday August 05, 2011 @05:58PM (#37001832) Homepage

      That lib requires cooperative event handling in the kid class. I much prefer the longer, but deterministic form:

      if ( $myLawn->getContents()->filter(['type' => 'kid'])->count() > 0 ) {
          $myShotgun = new Shotgun();
          $myShotgun->loadAmmo();
          $myLawn->getOwner()->wieldObject($myShotgun);
          for ( $i = 5; $i > 0; $i-- ) { sleep(1); } // five-second grace period
          while ( $myLawn->getContents()->filter(['type' => 'kid'])->count() > 0 ) {
              $myShotgun->fire();
          }
      }

    • by JamesP ( 688957 )

      What?

      it should be get_off(my_lawn), none of this modern 'object orientation' nonsense

      or maybe

      lea ax,[my_lawn]
      call get_off

  • The problem is (Score:5, Insightful)

    by geekoid ( 135745 ) <dadinportland@yaFREEBSDhoo.com minus bsd> on Friday August 05, 2011 @05:42PM (#37001596) Homepage Journal

    they aren't trained to engineer software, and the industry hasn't incorporated good engineering practices.

    • by ADRA ( 37398 )

      Coming from a legacy modernization project: just because people wrote programs 10, 20, or 30 years ago doesn't mean the code was good, or that the developers knew what they were doing. One would hope that decades of development experience would teach a well-rounded set of skills, and often it does.

      To sum up, I don't think a brat 5 years out of school learning technology X today is any less capable than a brat 5 years out of school learning technology Y was in the '80s/'90s.

    • Re:The problem is (Score:5, Interesting)

      by wrook ( 134116 ) on Friday August 05, 2011 @07:20PM (#37002692) Homepage

      There aren't good engineering practices in software. This is why I abjectly refuse to call myself an engineer (that and I'm *not* an engineer). Can you tell me with a known degree of certainty the probability that a software component will fail? What procedures would you put into place to give you that number (keeping in mind the halting problem)? What procedures would you put into place to mitigate those failures? Because I'm drawing a big gigantic blank here.

      Look, I'm all for using "Software Engineering" practices. Personally, in my career I have championed TDD, peer review, acceptance tests written in advance by someone other than the programmers, etc, etc, etc. But this isn't engineering. The best I can tell anyone is, "Hey, it doesn't break on *my* machine. Yeah, I think the probability of it breaking on your machine is 'low'. No, I wouldn't like to specify a number, thank you very much." Why do you think software never comes with a warranty?

      I often wonder what we could do to actually make this an engineering discipline. For one thing, I think we really need to invest in developing stochastic testing techniques. We need to be able to characterise all the inputs a program can take and to test them automatically in a variety of different ways. But this is the stuff of research. There are some things we can do now, but it's all fairly nascent technology. Maybe in 20 years... :-P
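
      One concrete form of that "characterise the inputs and test them automatically" idea is property-based testing. A minimal, hedged sketch using the hypothesis library, with a toy function under test (not a claim about what the poster had in mind):

      # Property-based testing: declare the space of valid inputs and let the
      # tool generate randomized cases. The function under test is a toy example.
      from hypothesis import given, strategies as st

      def run_length_encode(s):
          """Collapse runs of repeated characters into (char, count) pairs."""
          out = []
          for ch in s:
              if out and out[-1][0] == ch:
                  out[-1] = (ch, out[-1][1] + 1)
              else:
                  out.append((ch, 1))
          return out

      @given(st.text())  # hypothesis generates arbitrary strings, including nasty ones
      def test_round_trip(s):
          decoded = "".join(ch * n for ch, n in run_length_encode(s))
          assert decoded == s  # property: decode(encode(x)) == x for all inputs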

      • Re:The problem is (Score:4, Insightful)

        by TheLink ( 130905 ) on Saturday August 06, 2011 @01:04AM (#37004512) Journal
        Years ago people could already look at BIND, Sendmail and other "ISC goodness" and work out the probability that they would have at least one exploitable vulnerability a year (>90%) ;). http://www.isc.org/advisories/bind?page=8

        The problem is many people (bosses, project managers, developers, etc) don't understand the big difference between "Software Engineering" and say Civil Engineering.

        In Civil Engineering, creating all the blueprints and plastic models necessary is typically 1/10th the cost of building the "Real Thing", and they make up a smaller portion of the total cost.

        For software, creating the source code (the drafts, the blueprints) costs more than 100 times the cost of "make all" (building the "Real Thing"), and it forms a large portion of the total cost.

        So if "stuff happens" and you need to spend 50% more to fix the design, with Civil Engineering the bosses are more likely to agree (unhappily) to "fix the design", because nobody can afford to build the whole building a few times till you get the design right...

        Whereas with "Software Engineering", the bosses are more likely to go "Ship and sell it, we'll fix it in the next release!".

        And if you're a boss, you'd likely do the same thing ;).

        So even if you could work out the probabilities of some software component failing, nobody would care. Because all you need to work out is: which bugs need to be fixed first, out of the hundreds or thousands of bugs.

        That changes if you are willing to spend 10x the cost (in $$$ and time) of creating each internal "release". By the time the 10th (final) release is written and tested (specs remaining the same - no features added) the stuff should work rather reliably. But you'd be 5 years behind everyone else...
  • That would explain why Runescape runs perfectly well over dialup or EDGE (4 KB/sec) speeds while most other games do not: the original creators probably had some experience that way.

  • It doesn't matter. (Score:5, Insightful)

    by man_of_mr_e ( 217855 ) on Friday August 05, 2011 @05:42PM (#37001604)

    "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil" - Donald Knuth

    Most developers will never need their apps to run in constrained environments, and most employers don't want to spend money to eke out performance when average performance is perfectly fine.

    Too many programmers get caught up in trying to make something the fastest, or the most memory efficient, or the best use of bandwidth, when most of the time it just doesn't matter. Such things are expensive, and in the long run it's cheaper to be fast and sloppy than slow and lean.

    • by jackb_guppy ( 204733 ) on Friday August 05, 2011 @05:55PM (#37001790)

      I love D. Knuth and have read his sorting and searching book many times over, always finding something good.

      SPEED does matter, and so do SIZE and BANDWIDTH. It is important to design things right the first time, versus the loops and loops of daily optimization that most code written today goes through. An understanding of record locks, index optimization and other multiplexing methods is still needed today. I see too much sucking of 1+ million pieces into memory to find 1 item, "because it is fast".

      Yes, this sounds like "get off my grass", but "fast and sloppy" is a waste of everyone's resources, not just a single computer's.

      • Those are important... *IF* they are important. Duh. Most of the time, they are not. Thus, complaining that developers who don't need to write lean apps aren't writing lean apps is kind of pointless.

        • Re: (Score:2, Insightful)

          by Anonymous Coward

          The thing is, people use that quote to be *LAZY*. Yeah, most of the time it doesn't matter. But guess what: when it does, you may find yourself rebuilding an entire framework because you made just plain stupid mistakes, and LOTS of them.

          Between someone who understands what optimizations are available and your code jockey who just whips up some code, the difference is miles wide.

          I used to think the same way. "It doesn't matter much" - but then I realized it does matter. It matters a lot. Think if all of your prog

        • Re: (Score:3, Insightful)

          by Anonymous Coward

          Nobody is suggesting you optimise to shave off the odd byte or machine cycle.

          However, you should optimise for the task in the sense of picking appropriate algorithms and structures.

          What I've seen in industry is that a lot of developers pay attention neither to elegance nor efficiency. And this really bites you in the pants when the code meets real data.

          Anyway, once you decide to ignore resource constraints in your engineering, what on earth is left that's challenging? Honestly, you might as well flip burg

      • SPEED does matter, and so do SIZE and BANDWIDTH.

        Sometimes, one or more of those matters, and sometimes it matters enough that an otherwise correct naive implementation will not suffice, but most of the time, focussing on correctness first and optimizing only where problems become apparent is better than building for speed, size, or other performance aspects first.

        And if the problem analysis has been done well, you'll know the times when speed, size, etc. have critical inflexible constraints ahead of time,

      • by Mitchell314 ( 1576581 ) on Friday August 05, 2011 @06:14PM (#37001994)
        You want to spend the most effort to conserve the most expensive resource. And that is not CPU, RAM, or disk time. It's human time. Hell, even working for a low wage, a person is expensive. Thus the most effort should be put into having them expend the least effort. Unless you have a case where the hardware time is getting expensive, but that's the exception, as hardware costs go down while salaries don't.

        And no, that's not an excuse to be sloppy. "Back in the ancient days" it was important to write good code for the limited resources. Now you still need to write good code, but the constraints are relaxed. But we still need code that is maintainable, dependable, extendable, flexible, understandable, etc.
        • Re: (Score:2, Insightful)

          by Anonymous Coward

          And all those users sitting waiting 5 minutes for the page to load, for the data to completely draw, or whatever?

          You do read thedailywtf.com, don't you? Plenty of stories where a 'quick app' becomes mission critical, and can't handle slightly (?) larger datasets.

          Well, those aren't _our_ users.. their time is _free_. And they can _leave_ if they don't like it. ....

        • You seem to have this mental model that more efficient code must take longer to develop. But not making bad decisions may take up exactly zero time if you are in the habit of making good decisions.

          A simple example is the ordering of loops. Exchanging the order of loops after the fact may take extra time, but writing the loops in the right order from the start doesn't take more time than writing them in the wrong order.
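
          A hedged sketch of that loop-ordering point in plain Python (the effect is far larger in C, where rows are contiguous in memory, but it is measurable even here):

          import time

          N = 1000
          matrix = [[1] * N for _ in range(N)]  # each inner list is one row

          def sum_row_major():
              total = 0
              for i in range(N):        # walk each row...
                  for j in range(N):    # ...element by element, in layout order
                      total += matrix[i][j]
              return total

          def sum_column_major():
              total = 0
              for j in range(N):        # walk each column...
                  for i in range(N):    # ...jumping between rows on every step
                      total += matrix[i][j]
              return total

          for fn in (sum_row_major, sum_column_major):
              start = time.perf_counter()
              fn()
              print(fn.__name__, time.perf_counter() - start)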

          • You seem to have this mental model that more efficient code must take longer to develop.

            I did not say, claim, or imply that. I was talking about factoring in developer time as a resource.

        • So to save one developer day we sacrifice 5 user minutes per day × 250 days per year × 100,000 users? I know I have been the victim of this.

        • by Sycraft-fu ( 314770 ) on Friday August 05, 2011 @07:21PM (#37002698)

          You need to wait until the shit is done, then profile it. You suck at knowing what needs to be optimized; no, I don't care how good you think you are. Ask the experts, like Knuth or Abrash.

          So if the speed matters, or the RAM usage, or whatever: you write the program, then you profile it, in real usage, and see what happens. You then find where spending your time is worth it.

          For example, suppose you find a single function uses 95% of all the execution time. Well, until you've optimized that, it is stupid to spend time optimizing anything else, because even a small gain in that function will outweigh a massive gain elsewhere. You need to find those problem spots, those areas of high usage, and optimize them first, and you can't do that until the program is written and profiled.

          That is pretty common, too, the "couple areas that use all the resources" thing. It is not usual for the cost to be spread across all the code. So you need a profiler to identify those areas, and you then need to focus your effort. You can then break out the ASM hackery for those few lines that have to be super fast, if needed, and you'll achieve most if not all of the result you would from doing the whole thing low level.
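
          A minimal sketch of that write-then-profile workflow, using Python's standard-library profiler (suspected_hotspot and main are hypothetical stand-ins for real code):

          import cProfile
          import pstats

          def suspected_hotspot(n):
              return sum(i * i for i in range(n))  # stand-in for real work

          def main():
              for _ in range(100):
                  suspected_hotspot(100_000)

          cProfile.run("main()", "profile.out")            # measure, don't guess
          stats = pstats.Stats("profile.out")
          stats.sort_stats("cumulative").print_stats(10)   # the 10 costliest call sites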

          • Again, I agree to an extent... However some decisions up front can be just plain bad... For example: querying a database, pulling a fairly complicated set of joins for a result set, and then only using 2 fields of it because the procedure already existed; then augmenting this information in a loop, where another query is made for each iteration, pulling still more information across the wire to use three fields, instead of writing a simpler query with a left join (roughly the anti-pattern sketched below).

            I've seen many times over where a convention is used, and a res
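
            A hedged sqlite3 sketch of that query-per-iteration anti-pattern versus the single join (table and column names hypothetical):

            import sqlite3

            conn = sqlite3.connect(":memory:")
            conn.executescript("""
                CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
                CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
            """)

            def n_plus_one():
                # one extra query per row: tolerable at 10 rows, deadly at a million
                for order_id, customer_id in conn.execute(
                        "SELECT id, customer_id FROM orders").fetchall():
                    conn.execute("SELECT name FROM customers WHERE id = ?",
                                 (customer_id,)).fetchone()

            def single_join():
                # one round trip; the database plans the whole job at once
                return conn.execute("""
                    SELECT o.id, c.name
                    FROM orders o LEFT JOIN customers c ON c.id = o.customer_id
                """).fetchall()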
        • by mcrbids ( 148650 )

          Hell, even working for low wage, a person is expensive. Thus the most effort should be put in having them do the least effort.

          Yeah, but here's where it gets weird: Software is highly valuable because it allows a single person to do the same task a very large number of times.

          Because of this, it's not like stacking logs or gluing tile. If you have sufficient leverage, you can spend a stupid amount of money getting everything just right and profit immensely from it. (See: Apple)

          On the other hand, most softwar

    • by billcopc ( 196330 ) <vrillco@yahoo.com> on Friday August 05, 2011 @06:17PM (#37002054) Homepage

      Is it truly cheaper to be sloppy? Hardware keeps getting cheaper and faster, sure, but not matching the pace at which code is getting slower.

      Just look at your average web server. Today's machines are over a hundred times faster than they were 10 years ago, and we're not doing anything significantly different. Serving up text and graphics, processing forms, same old b.s. So then, why aren't we serving 100 times more pages per second? Apache keeps getting fatter, PHP seems to do a "sleep(100)" after each line, and don't even get me started on Ruby.

      There was a time, not so long ago, when I would spend an hour tweaking an oft-used assembler loop, and the end result was a 3x speedup or more. I'm not saying we should rewrite everything in assembler, but I think we've become so far removed from the actual machine, relying on the compiler to "do the right thing", that people don't even have the slightest clue how to distinguish fast code from slow. How often do we use benchmarks to test different solutions to the same problem? Almost never! People bust out the profiler only when things go wrong, and even then they might say "just add CPU/RAM/SSD" and call it a day.

      Or, if we must motivate the hippies, call it "green optimisation". Yes, faster code finishes quicker, using less power to complete the same job. If we're dealing with web servers, faster code would require fewer cluster nodes, or maybe free up some CPU time for another VM on the host, and those 60A circuits aren't cheap either. If spending an extra day optimizing my code could save me $2000/mo off my colo bill, I'd be a fool not to invest that time.

      • by Vellmont ( 569020 ) on Friday August 05, 2011 @07:55PM (#37003028) Homepage


        Today's machines are over a hundred times faster than they were 10 years ago

        The raw CPU power times the number of cores is 100 times faster. How much faster is the I/O? Serving up web pages is mostly about I/O: I/O from the memory, I/O from the database, I/O to the end user. The CPU is usually a small part of it.

        You actually sound like a perfect example of what the article is talking about: people who don't understand where the bottlenecks lie. Hell, it even mentioned the misunderstanding of the I/O bottleneck that exists today.

    • by antifoidulus ( 807088 ) on Friday August 05, 2011 @06:30PM (#37002214) Homepage Journal
      It all comes down to scale, ultimately. It's rare in the computer science field to see code that runs x% slower than a more optimized version at both very small and very large scales. Coders that don't know how the hardware and lower-level software interfaces work tend not to write very scalable code, because they have no idea how the computers actually work, and even less of an idea of how a lot of them work together.

      As an example, consider a database with very poorly designed primary and secondary keys. This choice will either:

      a) not matter in the least, because the tables are so small and queries so infrequent that the other latencies (e.g. network, hard disk, etc.) will dwarf the poor key choice, or

      b) quickly make the entire database unusable, as the time it takes for the database software to search through every record for things matching the given query takes forever and just kills the disk.

      I've seen plenty of (b), largely caused by total ignorance of how databases, and the hard disks they live on, work. The indices are there not only for data modeling purposes, but also to minimize the amount of uber-expensive disk I/O necessary to perform most queries. And while you may be in situation (a) today, if your product is worth a damn it may very well get bigger and bigger until you end up in (b), and by the time you are in (b) it may be very, very expensive to re-write the application to fix the performance issues (if they can be fixed without a total re-write).

      Anyone who codes should have at least a fundamental grasp of computer architecture, and realize what computer hardware is good, and bad, at. That isn't "premature optimization" as you seem to think it is; it is a fundamental part of software design. "Premature optimization" in this context means things like changing your code to avoid an extra integer comparison or two, that kind of thing. It is not "let's just pretend the computer is a magic device and ignore performance concerns because it's easier".
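
      A hedged sketch of that (a)-versus-(b) fork, using SQLite from Python's standard library (the schema is hypothetical); EXPLAIN QUERY PLAN shows the full-table scan turning into an index search:

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, payload TEXT)")

      query = "SELECT payload FROM events WHERE user_id = ?"

      # Without an index the plan is a full-table scan - case (b) once the table grows.
      print(conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall())

      conn.execute("CREATE INDEX idx_events_user ON events(user_id)")

      # With the index the plan becomes an index search, regardless of table size.
      print(conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall())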
    • There's a big difference between "optimization", and "designed with standard performance considerations in mind" though. The latter should always be done. The former, only when there's a genuine need to run As Fast As Possible (tm).

      Too many new coders today have no grasp of EITHER. They don't understand even the basics of algorithmic efficiency ("big O" notation, the difference between a linear and constant time lookup, etc), the cost of memory allocation (doing things like unnecessarily creating a new i
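
      For the linear-versus-constant-time lookup point, a quick standard-library illustration (sizes illustrative):

      import timeit

      items_list = list(range(1_000_000))
      items_set = set(items_list)

      # Membership test on a list scans element by element: O(n).
      print(timeit.timeit(lambda: 999_999 in items_list, number=100))
      # The same test on a set is a hash probe: O(1).
      print(timeit.timeit(lambda: 999_999 in items_set, number=100))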

    • Most of the time it just doesn't matter. Such things are expensive, and in the long run it's cheaper to be fast and sloppy than slow and lean.

      You have GOT to be in management. The vast majority of the cost of a software project goes into support. If it costs twice as much to do it right up front, you will save 10 times that amount in support costs down the road. You can then spend that money evolving and improving your software rather than being forced to spend it trying to keep it going day to day. Attitudes like yours are the number one thing wrong with the software industry (well, besides software patents, but that's another argument).

  • I am an electrical engineer. Here's some news for ya: we electrical engineers do learn these skills, so don't bother complaining about the state of the world, because new students are taught these vital insights in computing every day.
  • but other than that I fail to see the outrage. I also don't see a lot of value in learning things you won't likely need to use. What's the cost/benefit to learning and mastering assembly if you aren't going to need it? Building software as if you have low resources is fine, so long as you aren't compromising quality to make sure it will run on archaic hardware. Making things as lean and fast as you can is always a plus... if you have the time. Which is another thing today's programmers deal with more: ins
    • What's the cost/benefit to learning and mastering assembly if you aren't going to need it?

      Fewer people master it, therefore it is more rewarding, both intellectually and financially.

      Would you rather be a replaceable coding monkey, doing what anyone else could do, or be an expert software architect that your company relies on?

  • Link in the summary is to the print page. Thanks jfruhlinger!
  • I remember when bits were made of actual stone, hand crafted, and strung by hand into computational frames. Today's kids don't even understand that this is what we are referring to when we talk about "Frame Works" today. Those Frames were tons of work, let me tell you.

    Computational systems used these frames so that sometimes you had to go through several of them to get to your goal (These were the early AND gates), and other times you allowed users to pick which one to pass through (OR gates). When
  • by AirStyle ( 2363888 ) on Friday August 05, 2011 @05:53PM (#37001766)
    I do agree with you. I'm new to programming myself, but I've always felt the need to learn more about the computer than just the high-level language. That's why I want to take up Perl. Apparently, there's still a strong Ruby community out there, so I might take that up as well. On top of that, I like to plan out my programs. I like to know exactly what it will do before I do it, which may require writing out the code first. I'm only two years into programming, so I still have a long way to go. I just want to make sure that what I do is very efficient so that all my future supervisor has to do is sit back and trust me.
  • One of those interviewed in the article complained about the fact that modern day programmers try to solve the problem through an IDE or debugger, instead of putting in statements which change the output of the program. They wanted printf debugging. While I do value a good tracing subsystem, I for one, am grateful for modern debuggers which let me view the state of the system without having to modify/redeploy the code.
    • One of those interviewed in the article complained about the fact that modern day programmers try to solve the problem through an IDE or debugger, instead of putting in statements which change the output of the program. They wanted printf debugging. While I do value a good tracing subsystem, I for one, am grateful for modern debuggers which let me view the state of the system without having to modify/redeploy the code.

      Oooh, I remember printing out debug statements. When I was at Uni.

      Tried doing it once while working on a massive program when I got a job after Uni; it was near useless due to the scale of the system. Figured out how to use a decent debugger properly (we might have been taught how to use a basic one at Uni) and haven't looked back.

      Recently found out with the debugger I am using that I can change variable values mid execution - can't do that with print statements. You're right - modern debuggers are great.

      • Not only can you change variables during execution, you can manually move the execution pointer around, you can recover from unhandled exceptions, and you can edit the source code during a breakpoint and then continue without having to restart your application.

        You can also still direct things to the Output window in the IDE if you fancy the printf style statements.
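
        For what it's worth, the change-a-variable-mid-execution trick isn't IDE-only either; a minimal sketch with Python's standard-library debugger (function name and values hypothetical):

        import pdb

        def handle_request(retries):
            pdb.set_trace()  # execution pauses here; at the (Pdb) prompt you can
                             # type "retries = 5" and then "continue"
            return retries

        handle_request(1)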

    • by wrook ( 134116 )

      I didn't read TFA, but here are some thoughts to consider. printf debugging is only useful if you have an uncomplicated subsystem. You have to be able to quickly find the area where the problem may lie. You have to be able to easily identify and set up the scenario that is causing the problem. You have to easily be able to modify the code, redeploy and run it. I find that a lot of people will say, "My system is too complex for that". What they don't quite grasp is that their system is broken.

      If you ne

  • by alostpacket ( 1972110 ) on Friday August 05, 2011 @05:55PM (#37001786) Homepage

    we had to code uphill in 10 feet of snow on an abacus using Roman numerals.

    • by sconeu ( 64226 )

      You had Roman numerals? You lucky, jammy, bastard! I'd have killed for Roman numerals. We had to use tally marks. And if we didn't put the slash to mark the fifth, the compiler got confused and core dumped on us!

    • by tool462 ( 677306 )

      Roman numerals? Then how did you terminate your strings?!

  • Comments/phrases like these completely fail to grasp that things like this are RELATIVE. What is 'resource constrained' today isn't what was seen as 'resource constrained' 20 years ago. Likewise, many young programmers _today_ (including myself) DID in fact learn to code in what would be seen as resource constrained environments compared to today's machines. I cut my teeth on an 8MB Win95 machine and later a 32MB machine. Sure, that amount of RAM to play with is an insane luxury if we're thinking back to ea

  • How the fuck do you forget something you were never taught in the first place?

    This article should really read: "Crotchety old programmers fail to pass on tricks of the trade, then complain anyways"

  • Eh, didn't someone remind us of this a couple of months ago? Seems like someone really has an axe to grind with modern coders. Get a life, you suspicious person!

  • by coolmadsi ( 823103 ) on Friday August 05, 2011 @06:16PM (#37002022) Homepage Journal
    I get the feeling that hardware errors were a lot more common back in the author's day; they don't come up very often now.

    One of the first things I learnt when troubleshooting problems is that it is probably a problem with your code, and not the hardware or external software libraries (apart from rare cases).

  • by DeadCatX2 ( 950953 ) on Friday August 05, 2011 @06:19PM (#37002082) Journal

    We still use straight C, sometimes even intermixed with assembly. We know all about resource constraints, given that our microcontrollers sometimes only support a few kB for instructions and data RAM. As far as debugging goes, I'll see your debug-with-printf and I'll raise you a debug-with-oscilloscope-and-GPIO-toggling.
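
    For anyone who hasn't seen debug-by-GPIO: a hedged MicroPython-style sketch (pin number and the routine under test are hypothetical) - the pulse width on the scope is the execution time:

    from machine import Pin

    debug_pin = Pin(13, Pin.OUT)  # a spare GPIO wired to the scope probe

    def do_the_work():
        pass  # stand-in for the interrupt handler or hot path being timed

    def instrumented():
        debug_pin.value(1)  # rising edge: work starts
        do_the_work()
        debug_pin.value(0)  # falling edge: pulse width == execution time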

    • Oh, the things I didn't know when I started doing embedded software development. I learned a rather frustrating lesson: Microchip's PIC16 series only has a 7-level call stack. Ironically, I was only overflowing it when I called a routine to output debugging information.

      To further muddy the waters, the device transmits data to a server where everything is written in C#. My work can be frustrating at times. Much less so since we switched development to the PIC32 series (128k of RAM? Heaven!)

    • by jepaton ( 662235 )

      Even on slightly fancier processors you can have limited JTAG debugging support. Severe limitations on the number of instruction breakpoints and data breakpoints can limit the usefulness of the debugger for everyday work. Even in not-particularly-time-critical software, single-stepping through the code can be impossible - either the bug is time dependent (e.g. errors in hardware drivers or race conditions) or normal execution relies on timing (e.g. communications). The debugger is useful only for a very narr

  • it will last longer and be less hassle

  • "I see poor understanding of the performance ranges of various components," says Bernard Hayes, PMP (PMI Project Mgmt Professional), CSM (certified Scrum Master), and CSPO (certified Scrum Product Owner).

    There's such a thing as a certified Scrum Product Owner? Am I now being encouraged to go get management trained on how to be certified at owning Scrum Products?

    I'm not sure if I can take what someone with a set of certifications this ridiculous says seriously.

  • Every once in a while I read an article on Slashdot about how the current generation of programmers churn out only the shittiest of code; how they have no idea how a computer works; how they could never program without a fully-featured IDE with Intellisense that renders instantly. As an undergrad in a CS/ECE discipline, this has always surprised me-- I can only speak for the curriculum at my school, but I can assure you, these mythical 'lost programming skills' are alive and well. Most of the supposed mis

  • Some old-school developers prematurely optimize for things we no longer need to optimize for (and shouldn't). From an older post of mine [slashdot.org]:

    A recent experience with an ex-coworker illustrated this pretty well for me:

    Said fellow, call him "Joe", had about 30 years of COBOL experience. We're a Python shop but hired him based on his general coding abilities. The problem was that he wrote COBOL in every language he used, and the results were disastrous. He was used to optimizing for tiny RAM machines or tight resource allocations and did things like querying the database with a rather complex join for each record out of quite a few million. I stepped in to look at his code because it took about 4 hours to run and was slamming the database most of the time. I re-wrote part of it with a bit of caching and got the run-time down to 8 seconds. (Choose to believe me or not, but I'd testify to those numbers in court.) I gave it back to him, he made some modifications, and tried it again - 3 hours this time. I asked him what on Earth he'd done to re-break the program, and he'd pretty much stripped out my caching. Why? Because it used almost half a gig of RAM(!) on his desktop, and he thought that was abhorrent.

    Never mind that it was going to be run on a server with 8GB of RAM, and that I'd much rather use .5GB for 8 seconds than 1MB for 3 hours of intense activity.
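
    A minimal sketch of the kind of caching change described (sqlite3 standing in for the real database; table, column, and record names are hypothetical):

    import sqlite3
    from collections import namedtuple

    Record = namedtuple("Record", "region product")  # stand-in for the input rows

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE rates (region TEXT, product TEXT, rate REAL)")
    conn.execute("INSERT INTO rates VALUES ('west', 'widget', 1.5)")

    def process(rec, rate):
        pass  # stand-in for the real per-record work

    def joes_way(records):
        # one complex query per record: millions of round trips to the database
        for rec in records:
            (rate,) = conn.execute(
                "SELECT rate FROM rates WHERE region = ? AND product = ?",
                (rec.region, rec.product)).fetchone()
            process(rec, rate)

    def cached_way(records):
        # prefetch the whole lookup table into a dict once; RAM is the cheap resource here
        rates = {(region, product): rate
                 for region, product, rate in conn.execute(
                     "SELECT region, product, rate FROM rates")}
        for rec in records:
            process(rec, rates[(rec.region, rec.product)])

    cached_way([Record("west", "widget")] * 1000)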

    So Joe isn't every COBOL programmer, but you and I both know that he's a lot of them. But back to the direct point, how much of that 250GLOC was written with the assumption that it'd be running on 512KB machines or with glacial hard drives or where making the executable as tiny as possible was an extreme priority? Doing things like storing cache data in hash tables would've been obscenely expensive back in the day, so those old algorithms were designed to be hyper-efficient and dog slow. Whether you think that constitutes "working well" is up to you.

    He was optimizing for resources that were no longer constrained, and consequently pessimizing for the resources we actually cared about. RAM? Dirt cheap, at least for the dataset sizes involved in that project. Much more expensive was all the extra disk and CPU load he was creating on the company-wide database server (which is sufficiently powerful to serve the entire company when it's not being deliberately assaulted).

    I'm not "anything goes" by any means, and I'm the guy responsible for making sure that lots of processes can peacefully coexist on an efficiently small number of servers. But for all intents and purposes, most of our apps have unlimited resources available to them. If they want to use 100% of a CPU core for 5 minutes or 2GB of RAM for half an hour a day, so be it. I'd much rather run simple, testable, maintainable code that happens to use a lot of server power than lovingly hand-mangled code that no one but the original programmer can understand and which interacts with the rest of the network in entertainingly unpredictable ways.

  • by codepunk ( 167897 ) on Friday August 05, 2011 @07:30PM (#37002786)

    Over the last few years, the most disturbing trend I have seen is programmers who do not have the ability to ship a product. You can talk all the shit you want about architecture, cute technology, and methodology, but if you don't ship product you don't count.

  • by hughbar ( 579555 ) on Saturday August 06, 2011 @05:00AM (#37005626) Homepage
    First, to declare an interest: I'm 60, pre-internet, and generation COBOL/Assembler - get-off-my-lawn generation too!

    That said, there are a great many good reasons for doing things economically, concisely and elegantly: Occam's razor, optimal use of resources [a financial matter too], no constant upgrades and server/desktop refreshes, no endless addition of extra storage because stuff is duplicated everywhere. By the way, if someone starts a project to de-duplicate the web down to a 'safe' level [only ten copies of each thing, not 2K], I'll sign up.

    I also resent those, mainly government, who send me half a dozen self-congratulatory JPEGs with each email. You're wasting bandwidth and blocking up the pipes, you folks.

    So, if we pay a little attention, we won't need the sprawling power-hungry data centres that all the big players seem to be building. We won't need the constant hardware refreshes [admittedly a lot of those are Windows 'upgrades'].

    Anyways, don't listen to me, I'm old and grumpy, but take a look at this: http://en.wikipedia.org/wiki/Green_computing [wikipedia.org] especially Product Longevity.
