Programming

What Today's Coders Don't Know and Why It Matters

jfruhlinger writes "Today's programmers have much more advanced languages and more forgiving hardware to play with — but it seems many have forgotten some of the lessons their predecessors picked up in a more resource-constrained era. Newer programmers are less adept at identifying hardware constraints and errors, at developing thorough specifications before coding, and at low-level skills such as assembly-language programming. You never know when a seemingly obsolete skill will come in handy. For instance, Web developers who cut their teeth in the days of 14.4 Kbps modems have a leg up in writing apps for laggy wireless networks."
  • by billcopc ( 196330 ) <vrillco@yahoo.com> on Friday August 05, 2011 @07:17PM (#37002054) Homepage

    Is it truly cheaper to be sloppy? Hardware keeps getting cheaper and faster, sure, but it isn't keeping pace with how quickly code is getting slower.

    Just look at your average web server. Today's machines are over a hundred times faster than they were 10 years ago, and we're not doing anything significantly different. Serving up text and graphics, processing forms, same old b.s. So then, why aren't we serving 100 times more pages per second? Apache keeps getting fatter, PHP seems to do a "sleep(100)" after each line, and don't even get me started on Ruby.

    There was a time, not so long ago, when I would spend an hour tweaking an oft-used assembler loop, and the end result was a 3x speedup or more. I'm not saying we should rewrite everything in assembler, but I think we've become so far removed from the actual machine, relying on the compiler to "do the right thing", that people don't even have the slightest clue how to distinguish fast code from slow. How often do we use benchmarks to test different solutions to the same problem? Almost never! People bust out the profiler only when things go wrong, and even then they might say "just add CPU/RAM/SSD" and call it a day.
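
    Here's roughly what I mean by benchmarking competing solutions to the same problem, as a minimal Python sketch (the lookup scenario is made up purely for illustration):

        import timeit

        # Two ways to answer "is this value one of the allowed ones?":
        # a linear scan over a list versus a hash lookup in a set.
        allowed_list = list(range(50000))
        allowed_set = set(allowed_list)

        def check_list():
            return 49999 in allowed_list   # worst case: walks the whole list

        def check_set():
            return 49999 in allowed_set    # hash lookup, roughly constant time

        for fn in (check_list, check_set):
            elapsed = timeit.timeit(fn, number=1000)
            print(f"{fn.__name__}: {elapsed:.4f}s for 1000 calls")

    Same answer, orders of magnitude apart, and the benchmark takes two minutes to write. That's the kind of measurement that rarely gets done.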

    Or, if we must motivate the hippies, call it "green optimisation". Yes, faster code finishes quicker, using less power to complete the same job. If we're dealing with web servers, faster code would require fewer cluster nodes, or maybe free up some CPU time for another VM on the host, and those 60A circuits aren't cheap either. If spending an extra day optimizing my code could save me $2,000/mo off my colo bill, I'd be a fool not to invest that time.

  • by antifoidulus ( 807088 ) on Friday August 05, 2011 @07:30PM (#37002214) Homepage Journal
    It all comes down to scale, ultimately. It's rare in this field to see code that runs only x% slower than a more optimized version at both very small and very large scales; the gap usually explodes as things grow. Coders who don't know how the hardware and lower-level software interfaces work tend not to write very scalable code, because they have no idea how a single computer actually works, and even less of an idea of how a lot of them work together.

    As an example, consider a database with very poorly designed primary and secondary keys. This choice will either:

    a) not matter in the least, because the tables are so small and queries so infrequent that other latencies (e.g. network, hard disk, etc.) will dwarf the poor key choice, or

    b) quickly make the entire database unusable, because every query forces the database software to search through every record for matches, which takes forever and just kills the disk.

    I've seen plenty of (b), largely caused by total ignorance of how databases, and the hard disks they live on, work. The indices are there not only for data modeling purposes, but also to minimize the amount of uber-expensive disk I/O needed to answer most queries. And while you may be in situation (a) today, if your product is worth a damn it may very well get bigger and bigger until you end up in (b), and by then it may be very, very expensive to fix the performance issues (if they can be fixed at all without a total re-write).
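
    To make (a) versus (b) concrete, here's a toy sketch using Python's sqlite3 module (table and column names invented; an in-memory database actually understates the gap, since the disk I/O described above is exactly what's missing here):

        import random
        import sqlite3
        import time

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
        conn.executemany(
            "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
            [(random.randrange(100000), random.random() * 100) for _ in range(500000)],
        )

        def time_lookups(label):
            start = time.perf_counter()
            for _ in range(1000):
                conn.execute(
                    "SELECT SUM(total) FROM orders WHERE customer_id = ?",
                    (random.randrange(100000),),
                ).fetchone()
            print(label, time.perf_counter() - start, "seconds for 1000 queries")

        time_lookups("unindexed")   # every query scans all 500,000 rows
        conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
        time_lookups("indexed")     # every query walks a small B-tree instead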

    Anyone who codes should have at least a fundamental grasp of computer architecture, and realize what computer hardware is good and bad at. That isn't "premature optimization" as you seem to think it is; it is a fundamental part of software design. "Premature optimization" in this context means things like contorting your code to avoid an extra integer comparison or two. It does not mean "let's just pretend the computer is a magic device and ignore performance concerns because it's easier".
  • Re:This... (Score:5, Interesting)

    by JamesP ( 688957 ) on Friday August 05, 2011 @07:49PM (#37002388)

    the worse code, yes

    But it runs fast on old or not-so-fast hardware

    Example: http://en.wikipedia.org/wiki/Fast_inverse_square_root [wikipedia.org]
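
    For reference, here's the trick behind that link transliterated into Python, with struct doing the bit reinterpretation (the original is C on 32-bit floats, so treat this as an illustration rather than something you'd actually ship):

        import math
        import struct

        def fast_inverse_sqrt(x):
            """Quake-style approximation of 1/sqrt(x) for a positive 32-bit float."""
            # Reinterpret the float's bits as a signed 32-bit integer,
            # apply the magic constant, reinterpret back, then do one
            # Newton-Raphson step to tighten the estimate.
            i = struct.unpack(">i", struct.pack(">f", x))[0]
            i = 0x5F3759DF - (i >> 1)
            y = struct.unpack(">f", struct.pack(">i", i))[0]
            return y * (1.5 - 0.5 * x * y * y)

        print(fast_inverse_sqrt(4.0), 1 / math.sqrt(4.0))   # ~0.499 vs 0.5

    The point of the original wasn't elegance; it was that on the hardware of the day it beat calling sqrt by a wide margin.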

  • by hism ( 561757 ) <hism.users@sf@net> on Friday August 05, 2011 @08:13PM (#37002620)

    Every once in a while I read an article on Slashdot about how the current generation of programmers churn out only the shittiest of code; how they have no idea how a computer works; how they could never program without a fully-featured IDE with Intellisense that renders instantly. As an undergrad in a CS/ECE discipline, this has always surprised me-- I can only speak for the curriculum at my school, but I can assure you, these mythical 'lost programming skills' are alive and well. Most of the supposed missing skills are addressed in the following mandatory courses, required for graduation from CS/ECE at my school:

    • - One data structures class and one algorithms class - We learn about properties of fundamental data structures; how to analyze and design algorithms for CS, combinatorial, and graph theory problems; complexity classes
    • - One intro compilers course - Nobody graduates without writing a compiler for a C-like language that outputs MIPS assembly.
    • - One operating systems course - You can either write an OS for a MIPS emulator, or you can write an OS for a physical 68000-based CPU, using C (without the standard library of course) and assembly. So yeah, we have debugged without an IDE, by staring at hex string dumps, for days at a time.
    • - One course on design patterns/architecture - Yes, we learn some Software Engineering principles too, even though it is an academic institution.

    Yeah, sure, some folks at the bottom of the class may be shitty programmers who scraped by with a "barely passed" in the mandatory courses. But surely they existed 10, 20, 30, 40 years ago as well. I'm not sure where people are getting this idea that new coders have no skills. What do they think we do for four years, set up MySQL and write PHP for it?

  • Re:The problem is (Score:5, Interesting)

    by wrook ( 134116 ) on Friday August 05, 2011 @08:20PM (#37002692) Homepage

    There aren't good engineering practices in software. This is why I abjectly refuse to call myself an engineer (that and I'm *not* an engineer). Can you tell me with a known degree of certainty the probability that a software component will fail? What procedures would you put into place to give you that number (keeping in mind the halting problem)? What procedures would you put into place to mitigate those failures? Because I'm drawing a big gigantic blank here.

    Look, I'm all for using "Software Engineering" practices. Personally, in my career I have championed TDD, peer review, acceptance tests written in advance by someone other than the programmers, etc, etc, etc. But this isn't engineering. The best I can tell anyone is, "Hey, it doesn't break on *my* machine. Yeah, I think the probability of it breaking on your machine is 'low'. No, I wouldn't like to specify a number, thank you very much." Why do you think software never comes with a warranty?

    I often wonder what we could do to actually make this an engineering discipline. For one thing, I think we really need to invest in developing stochastic testing techniques. We need to be able to characterise all the inputs a program can take and to test them automatically in a variety of different ways. But this is the stuff of research. There are some things we can do now, but it's all fairly nascent technology. Maybe in 20 years... :-P
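
    For what it's worth, one concrete form of that idea already exists as property-based testing (QuickCheck and its descendants): you characterise the space of valid inputs and the tool generates, runs, and shrinks cases for you. A minimal sketch using Python's Hypothesis library, with a made-up round-trip property:

        from hypothesis import given, strategies as st

        # Toy code under test: a run-length encoder and its decoder.
        def rle_encode(s):
            out = []
            for ch in s:
                if out and out[-1][0] == ch:
                    out[-1][1] += 1
                else:
                    out.append([ch, 1])
            return out

        def rle_decode(pairs):
            return "".join(ch * count for ch, count in pairs)

        # Property: decoding an encoding gives back the original input,
        # for any string Hypothesis can generate, not just hand-picked cases.
        @given(st.text())
        def test_rle_roundtrip(s):
            assert rle_decode(rle_encode(s)) == s

        test_rle_roundtrip()   # generates many inputs and shrinks any failure

    It's not a failure-probability number, but it's a long way from "works on my machine".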

  • Some old-school developers prematurely optimize for things we no longer need to optimize for (and shouldn't). From an older post of mine [slashdot.org]:

    A recent experience with an ex-coworker illustrated this pretty well for me:

    Said fellow, call him "Joe", had about 30 years of COBOL experience. We're a Python shop but hired him based on his general coding abilities. The problem was that he wrote COBOL in every language he used, and the results were disastrous. He was used to optimizing for tiny-RAM machines or tight resource allocations, and did things like querying the database with a rather complex join for each record out of quite a few million. I stepped in to look at his code because it took about 4 hours to run and was slamming the database most of the time. I re-wrote part of it with a bit of caching and got the run-time down to 8 seconds. (Choose to believe me or not, but I'd testify to those numbers in court.) I gave it back to him, he made some modifications, and tried it again - 3 hours this time. I asked him what on Earth he'd done to re-break the program, and it turned out he'd pretty much stripped out my caching. Why? Because it used almost half a gig of RAM on his desktop, and he thought that was abhorrent.

    Never mind that it was going to be run on a server with 8GB of RAM, and that I'd much rather use .5GB for 8 seconds than 1MB for 3 hours of intense activity.
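
    For anyone who hasn't seen the pattern, here's a toy sketch of the difference (all names are hypothetical; FakeDB stands in for the real database client):

        import functools
        from collections import namedtuple

        Record = namedtuple("Record", "customer_id amount")

        class FakeDB:
            """Stand-in for the real database; every lookup counts as one round trip."""
            def __init__(self):
                self.calls = 0

            def lookup_customer(self, customer_id):
                self.calls += 1
                return {"id": customer_id, "name": "customer-%d" % customer_id}

        # One query per record: what the COBOL-style version did.
        def enrich_uncached(records, db):
            return [(r, db.lookup_customer(r.customer_id)) for r in records]

        # Cache repeated lookups in memory: more RAM, drastically fewer queries.
        def enrich_cached(records, db):
            @functools.lru_cache(maxsize=None)
            def customer(customer_id):
                return db.lookup_customer(customer_id)
            return [(r, customer(r.customer_id)) for r in records]

        records = [Record(customer_id=i % 100, amount=1.0) for i in range(100000)]
        db = FakeDB()
        enrich_uncached(records, db)
        print("uncached round trips:", db.calls)
        db = FakeDB()
        enrich_cached(records, db)
        print("cached round trips:", db.calls)

    The cached version trades memory for database round trips, which is exactly the trade a server with RAM to spare should be making.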

    So Joe isn't every COBOL programmer, but you and I both know that he's a lot of them. But back to the direct point, how much of that 250GLOC was written with the assumption that it'd be running on 512KB machines or with glacial hard drives or where making the executable as tiny as possible was an extreme priority? Doing things like storing cache data in hash tables would've been obscenely expensive back in the day, so those old algorithms were designed to be hyper-efficient and dog slow. Whether you think that constitutes "working well" is up to you.

    He was optimizing for resources that were no longer constrained, and consequently pessimizing for the resources we actually cared about. RAM? Dirt cheap, at least for the dataset sizes involved in that project. Much more expensive was all the extra disk and CPU load he was creating on the company-wide database server (which is sufficiently powerful to serve the entire company when it's not being deliberately assaulted).

    I'm not "anything goes" by any means, and I'm the guy responsible for making sure that lots of processes can peacefully coexist on an efficiently small number of servers. But for all intents and purposes, most of our apps have unlimited resources available to them. If they want to use 100% of a CPU core for 5 minutes or 2GB of RAM for half an hour a day, so be it. I'd much rather run simple, testable, maintainable code that happens to use a lot of server power than lovingly hand-mangled code that no one but the original programmer can understand and which interacts with the rest of the network in entertainingly unpredictable ways.

  • Whither specs? (Score:2, Interesting)

    by naasking ( 94116 ) <naasking@gmaEULERil.com minus math_god> on Friday August 05, 2011 @09:53PM (#37003438) Homepage

    [...] developing thorough specifications before coding [...]

    We have more code verification, static typing, contracts, tests and assertions than ever. Somehow I doubt the above.

    And if by 'specification' you mean a spec on paper separate from the code, that's because the lessons of the past have taught us that such specs are never current, so what's the point? With more expressive, higher-level languages, the programming language becomes the specification language, and a specification in such a language is naturally executable: the code is the specification, given sufficiently sophisticated types, or an executable program can be extracted from the specification, à la Coq.
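
    As a mundane illustration short of Coq: types plus contracts and assertions already carry much of the specification, and they're checked every time the code runs. A small Python sketch (the function and its contract are invented for the example):

        from typing import List

        def merge_sorted(xs: List[int], ys: List[int]) -> List[int]:
            """The signature plus these assertions are the executable spec."""
            assert xs == sorted(xs) and ys == sorted(ys), "precondition: inputs are sorted"
            out, i, j = [], 0, 0
            while i < len(xs) and j < len(ys):
                if xs[i] <= ys[j]:
                    out.append(xs[i])
                    i += 1
                else:
                    out.append(ys[j])
                    j += 1
            out += xs[i:] + ys[j:]
            assert out == sorted(xs + ys), "postcondition: output is the sorted merge"
            return out

        print(merge_sorted([1, 3, 5], [2, 4]))   # [1, 2, 3, 4, 5]

    A paper spec that said the same thing would already be drifting out of date.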

  • Re:tl;dr (Score:4, Interesting)

    by ultranova ( 717540 ) on Friday August 05, 2011 @10:37PM (#37003676)

    When a critical mass of programmers just doesn't worry about efficiency, you get the sort of behavior that makes other engineers cringe. Software right now is sort of like the auto industry in the 50's: we're all about adding cool-looking tail fins while sticking a cast-iron engine block up front.

    No, software right now is turning from a craft into an industry, which means that artisans are being replaced by minimum-wage drones and automated code generators. Of course quality is going to suffer as a result.

  • Re:tl;dr (Score:5, Interesting)

    by jellomizer ( 103300 ) on Friday August 05, 2011 @11:16PM (#37003884)

    Having developed on old platforms, then new ones, and then back to old ones, I can say you actually get much better quality out of the new stuff. We look back at the old software with pride at how well it runs, forgetting the decades of errors and problems it had in the past, and the years of effort it took to get it running at that level.

    The old stuff seems to run faster, but not really. Old WordPerfect: no real-time spell check, no fonts; bold and italics were the big features, along with multi-column layouts and margins. The display was 80x25, the app still crashed and lost all your work, and it took minutes to load off your floppy.

    Can new developers write tighter code? Sure, they're not stupid. But how much do you lose by doing it? Getting to market too late? Losing multi-platform support? Code that's hard to maintain? Vulnerable to hacking?

    I just heard a report that high-frequency trading systems are easy to hack into because they have been developed for speed above all else, leaving room for attackers to get in. As anyone who has done coding knows, it takes 10% of the effort to get a program to do what it needs to do, and the rest to make sure humans don't cause it to do something else.
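
    A toy sketch of that "other 90%", refusing malformed input instead of acting on it (the message format and limits are invented for illustration):

        VALID_SIDES = {"BUY", "SELL"}

        def parse_order(msg):
            """Validate an order message before anything downstream touches it."""
            side = msg.get("side")
            symbol = msg.get("symbol")
            qty = msg.get("qty")
            price = msg.get("price")
            if side not in VALID_SIDES:
                raise ValueError("bad side: %r" % (side,))
            if not (isinstance(symbol, str) and symbol.isalpha() and 1 <= len(symbol) <= 5):
                raise ValueError("bad symbol: %r" % (symbol,))
            if not (isinstance(qty, int) and 0 < qty <= 1000000):
                raise ValueError("bad quantity: %r" % (qty,))
            if not (isinstance(price, (int, float)) and 0 < price < 1000000):
                raise ValueError("bad price: %r" % (price,))
            return {"side": side, "symbol": symbol.upper(), "qty": qty, "price": float(price)}

        print(parse_order({"side": "BUY", "symbol": "ACME", "qty": 100, "price": 12.5}))

    Skip those checks in the name of speed and you've left the door open in exactly the way that report describes.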
