Programming Software IT Technology

Old-School Coding Techniques You May Not Miss 731

CWmike writes "Despite its complexity, the software development process has gotten better over the years. 'Mature' programmers remember manual intervention and hand-tuning. Today's dev tools automatically perform complex functions that once had to be written explicitly. And most developers are glad of it. Yet, young whippersnappers may not even be aware that we old fogies had to do these things manually. Esther Schindler asked several longtime developers for their top old-school programming headaches and added many of her own to boot. Working with punch cards? Hungarian notation?"
  • Some, not all... (Score:5, Insightful)

    by bsDaemon ( 87307 ) on Thursday April 30, 2009 @12:17AM (#27768223)

    Some of those are obnoxious and it's good to see them gone. Others, not so much. For instance, sorting/searching algorithms, data structures, etc. Don't they still make you code these things in school? Isn't it good to know how they work and why?

    On the other hand, yeah... fuck punch cards.

    • Re: (Score:3, Insightful)

      by AuMatar ( 183847 )

      It's absolutely essential to know how those work and why. If not, you'll use the wrong one and send your performance right down the crapper. While you shouldn't have to code one from scratch anymore, any programmer who can't do a list, hash table, bubble sort, or btree at the drop of a hat ought to be kicked out of the industry.

      • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Thursday April 30, 2009 @12:58AM (#27768459) Journal

        any programmer who can't do a list, hash table, bubble sort, or btree at the drop of a hat ought to be kicked out of the industry.

        Why?

        Lists, hash tables, and sorting are already built into many languages, including my language of choice. The rest, I can easily find in a library.

        When performance starts to matter, and my profiling tool indicates that the sorting algorithm is to blame, then I'll consider using an alternate algorithm. But even then, there's a fair chance I'll leave it alone and buy more hardware -- see, the built-in sorting algorithm is in C. Therefore, to beat it, it has to be really inappropriate, or I have to also write that algorithm in C.

        It's far more important that I know the performance quirks of my language of choice -- for instance, string interpolation is faster than any sort of string concatenator, which is faster than straight-up string concatenation ('foo' + 'bar').

        And it's far more important that I know when to optimize.

        Now, any programmer who couldn't do these at all should be kicked out of the industry. I could very likely code one quickly from the Wikipedia article on the subject. But by and large, the article is right -- in the vast majority of places, these just don't matter anymore.

        Not that there's nowhere they matter at all -- there are still places where asm is required. They're just a tiny minority these days.
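
        For the record, here's roughly what I mean by "code one quickly from the Wikipedia article" -- a throwaway bubble sort sketch (Java purely for illustration, names made up; in real code you'd still call the built-in sort):

        // Throwaway bubble sort sketch: repeatedly swap adjacent out-of-order
        // elements until a full pass makes no swaps. O(n^2), which is exactly
        // why you normally just call the built-in sort instead.
        public class BubbleSortSketch {
            static void bubbleSort(int[] a) {
                boolean swapped = true;
                while (swapped) {
                    swapped = false;
                    for (int i = 1; i < a.length; i++) {
                        if (a[i - 1] > a[i]) {
                            int tmp = a[i - 1];
                            a[i - 1] = a[i];
                            a[i] = tmp;
                            swapped = true;
                        }
                    }
                }
            }

            public static void main(String[] args) {
                int[] a = {5, 1, 4, 2, 8};
                bubbleSort(a);
                System.out.println(java.util.Arrays.toString(a));   // [1, 2, 4, 5, 8]
            }
        }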

        • by Ruie ( 30480 ) on Thursday April 30, 2009 @01:23AM (#27768591) Homepage

          any programmer who can't do a list, hash table, bubble sort, or btree at the drop of a hat ought to be kicked out of the industry.

          Why?

          Because if these well-known tasks are difficult for them, their job title is really typist, not programmer. The challenge is not to write bubble sort day in and day out, but to be several levels above that, so it is as easy as computing six times seven or reading road signs.

        • True story (Score:5, Interesting)

          by Moraelin ( 679338 ) on Thursday April 30, 2009 @02:07AM (#27768831) Journal

          Let me tell you a true story to illustrate why I think people should still learn that stuff.

          ACT I

          So at one point I'm in a room with what looks like two particularly unproductive Wallys. Though it's probably unfair to call both Wally, since at least one looks like the hard working kind... he just makes as much progress as a courier on a treadmill.

          So Wally 1 keeps clicking and staring at the screen all week and spewing things like "Unbelievable!" every 5 minutes. My curiosity gets the better of me and I ask what's happening.

          "Look at this," goes Wally 1, and I indeed move over to see him toiling in the debugger through a Hashtable with String keys. He's looking at its bucket array, to be precise. "Java is broken! I added a new value with the same hash value for the key, and it just replaced my old one! Look, my old value was here, and now it's the new one!"

          "Oh yes, we had that bug too at the former company I worked for," chimes in Wally 2. "We had to set the capacity manually to avoid it."

          I clench my teeth to stop myself from screaming.

          "Hmm," I play along, "expand that 'next' node, please."

          "No, you don't understand, my value was here and now there's this other key there."

          "Yes, but I want to see what's in that 'next' node, please."

          So he clicks on it and goes, "Oh... There it is..."

          Turns out that neither of them had the faintest fucking clue what a hash table is, or for that matter what a linked list is. They looked at its hash bucket and expected nothing deeper than that. And, I'm told, at least one of them had been in a project where they actually coded workarounds (that can't possibly make any difference!) for its normal operation.

          ACT II

          So I'm consulting at another project and essentially they use a HashMap with String keys too. Except they created their own key objects, nothing more than wrappers around a String, and with their own convoluted and IMHO suboptimal hash value calculation too. Hmm, they must have had a good reason, so I ask someone.

          "Oh," he goes, "we ran into a Java bug. You can see it in the debugger. You'd add a new value whose key has the same hash value and it replaces yours in the array. So Ted came up with an own hash value, so it doesn't happen any more."

          Ted was their architect, btw. There were easily over 20 of them merry retards in that project, including an architect, and none of them understood:

          A) that that's the way a hash table works, and more importantly

          B) that it still worked that way even with Ted's idiotic workaround. It's mathematically impossible to write a hash there that doesn't cause the same collisions anyway, and sure enough Ted's hash produced them too.

          ACT III

          I'm talking to yet another project's architect, this time a framework, and, sure enough...

          "Oh yeah, that's the workaround for a bug they found in project XYZ. See, Java's HashMap has a bug. It replaces your old value when you have a hash collision in the key."

          AAARGH!

          So I'm guessing it would still be useful if more people understood these things. This isn't just an abstract complaint about cargo-cult programming. We're talking about people, and sometimes whole teams, who ended up debugging into it when they had some completely unrelated bug, and spent time on it. And then spent more time coding "workarounds" which can't possibly make any difference. And then spent more time fixing the actual bug they had in the first place.
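
          For anyone who wants to see what the Wallys were missing: a hash table bucket isn't a single slot, it's the head of a little linked list, and lookups compare actual keys with equals(), not just hash codes. A stripped-down sketch of the idea -- made-up class names, nothing to do with the real java.util.HashMap source:

          class TinyMap {
              static class Entry {
                  final String key;
                  Object value;
                  Entry next;          // the 'next' node the debugger view was hiding
                  Entry(String key, Object value, Entry next) {
                      this.key = key; this.value = value; this.next = next;
                  }
              }

              private final Entry[] buckets = new Entry[16];

              void put(String key, Object value) {
                  int i = (key.hashCode() & 0x7fffffff) % buckets.length;
                  for (Entry e = buckets[i]; e != null; e = e.next) {
                      if (e.key.equals(key)) { e.value = value; return; }   // same key: replace
                  }
                  buckets[i] = new Entry(key, value, buckets[i]);           // collision: chain a new node
              }

              Object get(String key) {
                  int i = (key.hashCode() & 0x7fffffff) % buckets.length;
                  for (Entry e = buckets[i]; e != null; e = e.next) {
                      if (e.key.equals(key)) return e.value;                // equals() on the key, not the hash
                  }
                  return null;
              }
          }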

          • Re:True story (Score:5, Insightful)

            by fractoid ( 1076465 ) on Thursday April 30, 2009 @03:29AM (#27769341) Homepage
            My mother, who was programming before a fair few of us (including me) were born, once told me this: If you think you've found a bug in a compiler, or an operating system, or a programming language, or a well-known commonly used library... you're wrong.

            Of course, this doesn't hold true 100% of the time, especially when you're pushing the limits of new versions of large 3rd party libraries, but when one is just starting to program (and hence using very well known, well tested libraries and code) it's true 99.99% of the time.

            (Oh, btw, I love your sig. Makes me laugh every time. :)
          • Re:True story (Score:4, Informative)

            by noidentity ( 188756 ) on Thursday April 30, 2009 @03:36AM (#27769391)

            Turns out that neither of them had the faintest fucking clue what a hash table is, or for that matter what a linked list is. They looked at its hash bucket and expected nothing deeper than that. And, I'm told, at least one of them had been in a project where they actually coded workarounds (that can't possibly make any difference!) for its normal operation.

            It's all too often that people get the wrong view of a program from the debugger, either because it's not showing what's really there or because they're not interpreting it correctly. If you think something's wrong based on what you see in the debugger, write a test program first. More often than not, the test program will pass. After all, the compiler's job is to output code that meets the language specification regarding side effects, not to make things look right in the debugger. In this case, the developer should have written a simple test that inserted two entries whose keys had the same hash code and verified that he really could only access one of them in the container. He would have found that they were both still there.
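
            Something like this five-minute test would have settled it -- "Aa" and "BB" are a handy pair because they genuinely have the same hashCode() in Java:

            import java.util.HashMap;
            import java.util.Map;

            public class CollisionTest {
                public static void main(String[] args) {
                    Map<String, String> map = new HashMap<String, String>();
                    map.put("Aa", "first value");    // "Aa" and "BB" share hashCode() 2112,
                    map.put("BB", "second value");   // so they land in the same bucket

                    System.out.println(map.size());      // 2 -- the collision didn't overwrite anything
                    System.out.println(map.get("Aa"));   // first value
                    System.out.println(map.get("BB"));   // second value
                }
            }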

          • by julesh ( 229690 ) on Thursday April 30, 2009 @04:33AM (#27769679)

            So Wally 1 keeps clicking and staring at the screen all week and spewing things like "Unbelievable!" every 5 minutes. My curiosity gets the better of me and I ask what's happening.

            Is it just me who would be _much_ more tempted to say, "You keep using that word. I don't think it means what you think it means."?

            (Yes, I know it's a slight misquote, but it's close enough to be really tempting...)

          • Re:True story (Score:5, Interesting)

            by rve ( 4436 ) on Thursday April 30, 2009 @04:34AM (#27769683)

            Though it's probably unfair to call both Wally, since at least one looks like the hard working kind... he just makes as much progress as a courier on a treadmill.

            The hard working kind is the worst, because a manager can't really see why such a team member isn't working out.

            I used to work with one of those. This Wally was very smart, a genius in fact; extremely articulate and fluent in several world languages, a PhD, a decade of experience as an architect and developer for various high-profile customers. A fantastic work ethic: worked 10 hours a day and meticulously tested everything they checked in, so the countless bugs they introduced never showed up in normal user testing. Race conditions, memory leaks, thread-safety issues, thousands upon thousands of lines of unreachable code, countless more lines of workarounds for supposed bugs in 3rd-party tools that were actually the proper results of their improper input.

          • Re:True story (Score:5, Insightful)

            by Have Brain Will Rent ( 1031664 ) on Thursday April 30, 2009 @06:23AM (#27770329)
            I think you've got the bar a little high there. I'd settle for not continuing to run into bugs that happen because people wrote code that copies a string into a buffer without knowing whether the buffer was big enough to hold it. Or, not quite a bug, people who place arbitrary, and small, limits on the size of strings (or numbers) -- 'cause god forbid that anyone have a name longer than 12 characters, or a feedback comment longer than 40 characters, or ...
        • by Moraelin ( 679338 ) on Thursday April 30, 2009 @02:26AM (#27768945) Journal

          And a few more examples of cargo-cultism, from people who were never trained to understand what they were doing, but someone thought that was OK because the Java standard library does it for them anyway.

          1. The same Wally 1 from the previous story had written basically this method:

          public static void nuller(String x) {
              x = null;
          }

          Then he called it like this, to try to get around an immutable field in an object. Let's say we have a class called Name, which holds an immutable String. So you create it with that string and can't change it afterwards. It has a getName() but no setName(). So he tried to do essentially:

          Name name = new Name("John Doe");
          nuller(name.getName());

          I understand he spent a week trying to debug why it didn't work before he asked for help.
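
          For anyone wondering why that can't possibly work: Java passes object references by value, so assigning null to the parameter only rebinds the method's local copy -- and even if it didn't, he'd be nulling the returned reference, not the field inside Name. A minimal sketch:

          public class NullerDemo {
              static void nuller(String x) {
                  x = null;               // rebinds the local parameter 'x' only
              }

              public static void main(String[] args) {
                  String s = "John Doe";
                  nuller(s);              // a copy of the reference goes in; 's' is untouched
                  System.out.println(s);  // prints "John Doe"
              }
          }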

          2. From Ted's aforementioned project:

          So they used the wrapper classes like Integer or Character all over the place instead of int or char. This was back in Java 1.3 times too, so there was no automatic boxing and unboxing. The whole code was a mess of getting the values boxed as parameters, unboxing them, doing some maths, boxing the result. Lather, rinse, repeat.

          I ask what that's all about.

          "Oh, that's a clever optimization Ted came up with. See, if you have the normal int as a parameter, Java copies the whole integer on the stack. But if you use Integer it only copies a pointer to it."

          AAARGH!
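
          For the whippersnappers who never saw pre-1.5 Java: without autoboxing, that "optimization" turned every bit of arithmetic into an object allocation plus an unboxing call, all to avoid copying a 4-byte int. Roughly like this (toy example, not their actual code):

          public class BoxingDemo {
              public static void main(String[] args) {
                  // Ted's way: each intermediate result is a fresh heap object.
                  Integer a = new Integer(2);
                  Integer b = new Integer(3);
                  Integer boxedSum = new Integer(a.intValue() + b.intValue());

                  // What it "optimized" away: two ints on the stack, no allocation at all.
                  int x = 2;
                  int y = 3;
                  int plainSum = x + y;

                  System.out.println(boxedSum + " " + plainSum);
              }
          }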

        • by IntentionalStance ( 1197099 ) on Thursday April 30, 2009 @03:29AM (#27769343)
          This is one of my favourite quotes:

          "The First Rule of Program Optimization: Don't do it. The Second Rule of Program Optimization (for experts only!): Don't do it yet." - Michael A. Jackson

          That being said, when I hit the experts only situation I can usually get 2 orders of magnitude improvement in speed. I just then have to spend the time to document the hell out of it so that the next poor bastard who maintains the code can understand what on earth I've done. Especially given that all too often I am this poor bastard.
        • Re:Some, not all... (Score:4, Interesting)

          by A beautiful mind ( 821714 ) on Thursday April 30, 2009 @06:20AM (#27770307)

          It's far more important that I know the performance quirks of my language of choice -- for instance, string interpolation is faster than any sort of string concatenator, which is faster than straight-up string concatenation ('foo' + 'bar').

          This reminds me of a very educational example [perlmonks.org]. On Perl forums the question sometimes arises: which is faster, single quotes or double quotes? The difference is that the latter interpolates variables.

          People in the know have pointed out multiple times that the single vs. double quote issue is a benchmarking mistake. See, Perl is a VM-based language with compile-time optimizations. The code people write with single or double quotes gets compiled down to the same thing. This is the kind of knowledge that is useful: knowing a bit of the theory and design of the underlying language, instead of having benchmark results but not knowing how to interpret them...

        • by Rockoon ( 1252108 ) on Thursday April 30, 2009 @07:17AM (#27770619)

          When performance starts to matter, and my profiling tool indicates that the sorting algorithm is to blame, then I'll consider using an alternate algorithm.

          Sure, the profiler might say that you are taking n% of your time in it, but how are you going to objectively know that that n% can be reduced significantly? Is your profiler an artificial intelligence?

          That, my friend, is the problem with canned solutions. You never really know if the implementation is decent, and in some cases you don't even know which algorithm is being used. Further, if you are a clueless canned-solution leverager, you probably don't know the pitfalls associated with a given algorithm.

          Do you know what triggers quicksort's worst-case behavior?

          Do you know why a Boyer-Moore string search performs fairly badly when either string is short?

          Do you know the worst case behavior of that hash table implementation? Do you know what the alternatives are? What is its memory overhead?

          Are any of the canned solutions you use cache oblivious?


          Now let's get into something serious. Algorithmic performance deficiencies are often used in denial-of-service attacks, and any time you use a canned solution you are setting yourself up as an easy target. Your profiler will never tell you why your customer is experiencing major problems, because the attack isn't happening on your development machine(s).

          ...and finally, being ignorant is not something to be proud of. Seriously. Your answer to discovering that the canned solution isn't acceptable is to "buy more hardware." Developers don't get to make that decision. Customers do... and that's assuming the hardware exists. If I were your boss I would fire you immediately for being a willfully ignorant bad programmer.
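
          To make the quicksort question concrete, here's a sketch (not anyone's library code) of a textbook quicksort with a naive last-element pivot. Feed it already-sorted input -- something an attacker can easily arrange -- and it does ~n^2/2 comparisons instead of ~n log n:

          public class WorstCase {
              static long comparisons = 0;

              // Textbook quicksort, Lomuto partition, last element as the pivot.
              static void quicksort(int[] a, int lo, int hi) {
                  if (lo >= hi) return;
                  int pivot = a[hi], i = lo;
                  for (int j = lo; j < hi; j++) {
                      comparisons++;
                      if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
                  }
                  int t = a[i]; a[i] = a[hi]; a[hi] = t;
                  quicksort(a, lo, i - 1);
                  quicksort(a, i + 1, hi);
              }

              public static void main(String[] args) {
                  int n = 2000;
                  int[] sorted = new int[n];
                  for (int k = 0; k < n; k++) sorted[k] = k;   // already sorted: worst case for this pivot choice
                  quicksort(sorted, 0, n - 1);
                  System.out.println(comparisons);             // ~2,000,000 comparisons, not the ~22,000 of n log n
              }
          }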

    • Re:Some, not all... (Score:5, Interesting)

      by Blakey Rat ( 99501 ) on Thursday April 30, 2009 @12:42AM (#27768383)

      I did a ton of work in THINK C 5 on Mac OS 7. Programming in C on a computer with no memory protection is something I never want to experience again. Misplace a single character, and it's reboot time-- for the 13th time today.

      What's *really* weird is that at the time I didn't think that was particularly strange or difficult. It was just the way things were.

  • by NotQuiteReal ( 608241 ) on Thursday April 30, 2009 @12:17AM (#27768229) Journal
    Heh, I had to turn in a punched card assignment in college (probably the last year THAT was ever required)... but I was smart enough to use an interactive CRT session to debug everything first... then simply send the corrected program to the card punch.

    I was an early adopter of the "let the machine do as much work as possible" school of thought.
    • by rnturn ( 11092 ) on Thursday April 30, 2009 @12:43AM (#27768385)

      ``I had to turn in a punched card assignment in college (probably the last year THAT was ever required)... but I was smart enough to use an interactive CRT session to debug everything first... then simply send the corrected program to the card punch.''

      Jeez. You must have taken the same course that I did. (Probably not actually.) In my case it was a programming class emphasizing statistics taught by someone in the business school who actually wanted card decks turned in. (This was probably no later than, maybe, '80/'81.) I did the same thing you did. I wrote all the software at a terminal (one of those venerable bluish-green ADM 3As) and when it was working I left the code in my virtual card punch. When I sent a message to the operator asking to have the contents sent off to a physical card punch, his message back was "Seriously?

  • by fractoid ( 1076465 ) on Thursday April 30, 2009 @12:18AM (#27768237) Homepage
    The one thing I don't think I'll ever, ever miss is writing loaders for some of the stupider file formats out there. Sure, it's not hard, per se, to write a .bmp loader, but once you've done it once or twice it gets old. Eventually I wrote a helper image library to do it all, but it still would occasionally come across some obscure variant that it wouldn't load. Far worse were early 3D model formats; even now I tend to stick with .md2 for hobby projects just because it's simple, does what I want, and EVERYTHING exports to it.
  • by GreatDrok ( 684119 ) on Thursday April 30, 2009 @12:30AM (#27768291) Journal

    Yeah, some of these are pretty old. I do remember working on a machine where the compiler wasn't smart enough to make the code really fast, so I would get the .s file out and hand-edit the assembly code. This resulted in some pretty spectacular speedups (8x, for instance). Mind you, more recently I was forced to do something similar when working with some SSE code written for Intel chips which was strangely slower on AMD. Turned out it was because the Intel chips (PIII and P4) were running on a 32-bit bus, where memory access in bytes was pretty cheap. The Athlons were on the 64-bit EV6 bus and struggled more with byte accesses, so they were slower. Once I added some code to lift the data from memory in 64-bit chunks and then do the reordering it needed using SSE, the AMD chips were faster than the Intel ones.

    Sometimes I think we have lost more than we have gained, though, with our reliance on compilers being smarter. It was great fun getting in there with lists of instruction latencies and manually overlapping memory loads and calculations. Also, when it comes to squeezing the most out of machines with few resources, I remember being amazed when someone managed to code a reasonably competent chess game into 1K on the Sinclair ZX81. Remember too that the ZX81 had to store the program, variables, and display all in that 1K. For this reason, the chess board was up at the top left of the screen. It was the funniest thing to be writing code on a 1K ZX81: as the memory got full you could see less and less of your program, until the memory was completely full and you could only see one character on screen....

  • by MacColossus ( 932054 ) on Thursday April 30, 2009 @12:30AM (#27768293) Journal
    Documentation!
  • What a retard! (Score:5, Insightful)

    by Alex Belits ( 437 ) * on Thursday April 30, 2009 @12:30AM (#27768297) Homepage

    First of all, most of the actual practices mentioned are well alive today -- it's just that most programmers don't have to care about them because someone else already did the work. And some (systems and library developers) actually specialize in doing just those things. Just recently I had a project that almost entirely consisted of x86 assembly (though at least 80% of it was in assembly because it was based on very old code -- similar projects started now would be mostly in C).

    Second, things like spaghetti code and Hungarian notation are not "old"; they were just as stupid 20 years ago as they are now. There never was a shortage of stupidity, and I don't expect one any time soon.

  • Dirty old Fortran (Score:4, Interesting)

    by wjwlsn ( 94460 ) on Thursday April 30, 2009 @12:31AM (#27768305) Journal

    Hollerith constants
    Equivalences
    Computed Gotos
    Arithmetic Ifs
    Common blocks

    There were worse things, horrible things... dirty tricks you could play to get the most out of limited memory, or to bypass Fortran's historical lack of pointers and data structures. Fortran-90 and its successors have done away with most of that cruft while also significantly modernizing the language.

    They used to say that real men programmed in Fortran (or should I say FORTRAN). That was really before my time, but I've seen the handiwork of real men: impressive, awe-inspiring, crazy, scary. Stuff that worked, somehow, while appearing to be complete gibberish -- beautiful, compact, and disgustingly ingenious gibberish.

    Long live Fortran! ('cause you know it's never going to go away)

    • by techno-vampire ( 666512 ) on Thursday April 30, 2009 @12:59AM (#27768463) Homepage
      They used to say that real men programmed in Fortran (or should I say FORTRAN).

      Years ago I programmed with a true genius. His language of choice was PL/1, but sometimes he had to use FORTRAN to fit in with what other people were doing. Any fool could write FORTRAN code in any language they wanted, but he was the only man I ever saw write PL/1 in FORTRAN.

    • Re: (Score:3, Interesting)

      by csrster ( 861411 )

      Hollerith constants, Equivalences, Computed Gotos, Arithmetic Ifs, Common blocks

      There were worse things, horrible things... dirty tricks you could play to get the most out of limited memory, or to bypass Fortran's historical lack of pointers and data structures.

      Long live Fortran! ('cause you know it's never going to go away)

      Didn't most of the tricks just boil down to "Define one big integer array in a common block and then pack all your data, whatever its type, into that"? All my PhD research was done with code like that. It was mostly written by my supervisor years and years before even that and I never actually learned how it worked.

  • by belmolis ( 702863 ) <billposerNO@SPAMalum.mit.edu> on Thursday April 30, 2009 @12:37AM (#27768353) Homepage

    For some reason the article says that only variables beginning with I, J, and K were implicitly integers in Fortran. Actually, it was I-N.

  • Duff's Device (Score:5, Interesting)

    by Bruce Perens ( 3872 ) * <bruce@perens.com> on Thursday April 30, 2009 @12:41AM (#27768371) Homepage Journal
    Duff's Device [wikipedia.org]. Pre-ANSI C-language means of unrolling an arbitrary-length loop. We had an Evans and Sutherland Picture System II at the NYIT Computer Graphics Lab, and Tom wrote this to feed it IO as quickly as possible.
  • by AuMatar ( 183847 ) on Thursday April 30, 2009 @12:42AM (#27768379)

    First off, most of the things on the list haven't gone away, they've just moved into libraries. It's not that we don't need to understand them, it's just that not everyone needs to implement them (especially the data structures one -- having a pre-written one is good, but if you don't understand them thoroughly you're going to write really bad code).

    On top of that, some of their items:

    *Memory management -- still needs to be considered in C and C++, which are still top-5 languages. You can't even totally ignore it in Java -- you get far better results from the garbage collector if you null out your references properly, which does matter if your app needs to scale.

    I'd even go so far as to say ignoring memory management is not a good thing. When you think about memory management, you end up with better designs. If you see that memory ownership isn't clear-cut, it's usually the first sign that your architecture isn't correct. And it really doesn't cause that many errors with decent programmers (if any -- memory errors are pretty damn rare even in C code). As for those coders who just don't get it: I really don't want them on my project even if the language doesn't need it. If you can't understand the request/use/release paradigm, you aren't fit to program.

    *C style strings

    While I won't argue that it would be a good choice for a language today (heck, even in C, if it weren't for compatibility I'd use a library with a separate pointer and length), it's used in hundreds of thousands of existing C and C++ libraries and programs. The need to understand it isn't going to go away anytime soon. And anyone doing file parsing or network IO needs to understand the idea of terminated data fields.

    • by shutdown -p now ( 807394 ) on Thursday April 30, 2009 @04:52AM (#27769799) Journal

      you get far better results from the garbage collector if you null out your references properly, which does matter if your app needs to scale.

      You don't get any difference at all if you null out local variables. In fact, you may even confuse the JIT into thinking that the variable lifetime is larger than it actually has to be (normally, it is determined by actual usage, not by lexical scope).
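
      Right -- where nulling references genuinely pays off is in long-lived structures, not locals. The classic illustration (a hand-rolled stack sketch along the lines of the well-known Effective Java example, not code from any particular project) is clearing the slot in the backing array on pop():

      public class ArrayStack {
          private Object[] elements = new Object[16];
          private int size = 0;

          public void push(Object e) {
              if (size == elements.length) {
                  elements = java.util.Arrays.copyOf(elements, 2 * size);
              }
              elements[size++] = e;
          }

          public Object pop() {
              if (size == 0) {
                  throw new java.util.EmptyStackException();
              }
              Object result = elements[--size];
              elements[size] = null;   // without this, the popped object stays reachable
              return result;           // through the array and can never be collected
          }
      }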

  • Yes, I'm old (Score:5, Insightful)

    by QuantumG ( 50515 ) * <qg@biodome.org> on Thursday April 30, 2009 @12:42AM (#27768381) Homepage Journal

    * Sorting algorithms

    If you don't know them, you're not a programmer. If you don't ever implement them, you're likely shipping more library code than application code.

    * Creating your own GUIs

    Umm.. well actually..

    * GO TO and spaghetti code

    goto is considered harmful, but that doesn't mean it isn't useful. Spaghetti code, yeah, that's the norm.

    * Manual multithreading

    All the time. select() is your friend, learn it.

    * Self-modifying code

    Yup, I actually write asm code.. plus he mentions "modifying the code while it's running".. if you can't do that, you shouldn't be wielding a debugger, edit and continue, my ass.

    * Memory management

    Yeah, garbage collection is cheap and ubiquitous, and I'm one of the few people that has used C++ garbage collection libraries in serious projects.. that said, I've written my own implementations of malloc/free/realloc and gotten better memory performance. It's what real programmers do to make 64 gig of RAM enough for anyone.

    * Working with punch cards

    Meh, I'm not that old. But when I was a kid I wrote a lot of:

    100 DATA 96,72,34,87,232,37,49,82,35,47,236,71,231,234,207,102,37,85,43,78,45,26,58,35,3
    110 DATA 32,154,136,72,131,134,207,102,37,185,43,78,45,26,58,35,3,82,207,34,78,23,68,127

    on the C64.

    * Math and date conversions

    Every day.

    * Hungarian notation

    Every day. How about we throw in some reverse polish notation too.. get a Polka going.

    * Making code run faster

    Every fucking day. If you don't do this then you're a dweeb who might as well be coding in php.

    * Being patient

    "Hey, we had a crash 42 hours into the run, can you take a look?"
    "Sure, it'll take me about 120 hours to get to it with a debug build."

    • by Animats ( 122034 ) on Thursday April 30, 2009 @12:59AM (#27768465) Homepage

      Self-modifying code
      Yup, I actually write asm code.. plus he mentions "modifying the code while it's running".. if you can't do that, you shouldn't be wielding a debugger.

      Code that generates code is occasionally necessary, but code that actually modifies itself locally, to "improve performance", has been obsolete for a decade.

      IA-32 CPUs, even modern superscalar ones, still support self-modifying code for backwards compatibility. (On most RISC machines it's disallowed, and code is read-only, to simplify cache operations.) But the performance is awful. Here's what self-modifying code looks like on a modern CPU:

      Execution is going along, with maybe 10-20 instructions pre-fetched and a few operations running concurrently in the integer, floating point, and jump units. Alternate executions paths may be executing simultaneously, until the jump unit decides which path is being taken and cancels the speculative execution. The retirement unit looks at what's coming out of the various execution pipelines and commits the results back to memory, checking for conflicts.

      Then the code stores into an instruction in the neighborhood of execution. The retirement unit detects a memory modification at the same address as a pre-fetched instruction. This triggers an event which looks much like an interrupt and has comparable overhead. The CPU stops loading new instructions. The pipelines are allowed to finish what they're doing, but the results are discarded. The execution units all go idle. The prefetched code registers are cleared. Only then is the store into the code allowed to take place.

      Then the CPU starts up, as if returning from an interrupt. Code is re-fetched. The pipelines refill. The execution units become busy again. Normal execution resumes.

      Self-modifying code hasn't been a win for performance since the Intel 286 (PC-AT era, 1985) or so. It might not have hurt on a 386. Anything later, it's a loss.

    • Re: (Score:3, Interesting)

      by bennomatic ( 691188 )
      Another former C64 user here. I remember typing in lots of those ML programs, whiling away my summer vacation... But I also cut my programming teeth there; no sooner had I learned BASIC than it seemed too slow and limiting to me, so I picked up a book on 6510 assembler, another book on the C64 architecture, and started writing self-modifying code -- how else could you do what you needed in 38K of available RAM?

      I still remember little dribs and drabs, like how "032 210 255" was the assembled code for JSR $FFD2.
    • Re:Yes, I'm old (Score:4, Interesting)

      by Lord Ender ( 156273 ) on Thursday April 30, 2009 @10:21AM (#27772445) Homepage

      I write web apps. I never have to sort anything, except when I ask the database to give me data in a certain order. Why would it be useful for me to implement and be intimately familiar with sorting algorithms? I haven't used them since college.

  • Memory Management (Score:3, Insightful)

    by Rob Riepel ( 30303 ) on Thursday April 30, 2009 @12:49AM (#27768407)

    Try overlays...

    Back in the day we had to do all the memory management by hand. Programs (FORTRAN) had a basic main "kernel" that controlled the overall flow, and we grouped subprograms (subroutines and functions) into "overlays" that were swapped in as needed. I spent hours grouping subprograms into roughly equal-sized chunks just to fit into core, all the while trying to minimize the number of swaps necessary. All the data was stored in huge COMMON blocks so it was available to the subprograms in every overlay. You'd be fired if you produced such code today.

    Virtual memory is more valuable than full screen editors and garbage collection is just icing on a very tall layer cake...

  • by bcrowell ( 177657 ) on Thursday April 30, 2009 @12:51AM (#27768425) Homepage

    Circa 1984, when I did summer programming jobs at Digital Research (purveyors of CP/M), one of the programmers there showed me how you could put a transistor radio inside the case of your computer. You could tell what the computer was doing by listening to the sounds it picked up via the RF emissions from the computer. For instance, it would go into a certain loop, and you could tell because the radio would buzz like a fly.

    Documentation was a lot harder to come by. If you wanted the documentation for X11, you could go to a big bookstore like Cody's in Berkeley, and they would have it in multiple hardcover volumes. Each volume was very expensive. The BSD documentation was available in the computer labs at UC Berkeley in the form of 6-foot-wide trays of printouts. (Unix man pages existed too, but since you were using an ADM3A terminal, it was often more convenient to walk over to the hardcopy.)

    On the early microcomputers, there was no toolchain for programming other than MS BASIC in ROM. Assemblers and compilers didn't exist. Since BASIC was slow, if you wanted to write a fast program, you had to code it on paper in assembler and translate it by hand into machine code. But then in order to run your machine code, you were stuck because there was no actual operating system that would allow you to load it into memory from a peripheral such as a cassette tape drive. So you would first convert the machine code to a string of bytes expressed in decimal, and then write a BASIC program that would do a dummy assignment into a string variable like 10 A$="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx". Then you would write self-modifying code in BASIC that would find the location where the string literal "xxx...." was stored, and overwrite it with your machine code. So now if you gave the LIST command, it would display the program on the screen, with the string literal displayed as goofy unprintable characters. Then you would code the program so it would execute the machine code stored at the address of the variable A$. Finally you'd save the program onto cassette.

  • by Nakoruru ( 199332 ) on Thursday April 30, 2009 @12:52AM (#27768435)

    You will never find a programming language that frees you from the burden of clarifying your thoughts.

    http://www.xkcd.com/568/

  • The Story of Mel (Score:5, Informative)

    by NixieBunny ( 859050 ) on Thursday April 30, 2009 @12:58AM (#27768457) Homepage
    If you're going to talk about old school, you gotta mention Mel [pbm.com].
  • by roc97007 ( 608802 ) on Thursday April 30, 2009 @01:01AM (#27768479) Journal

    "Top-down" coding produced readable but horribly inefficient code. Doesn't do any good for the code to work if it doesn't fit in the e-prom.

    "Bottom up" code produced reasonably efficient spaghetti. Good luck remembering how it worked in 6 months.

    "Inside-out" coding was the way to go.

    You wrote your inside loops first, then the loop around that, then the loop around that. Assuming the problem was small enough that you could hold the whole thing in your head at one time, the "inside-out" technique guaranteed the most efficient code, and was moderately readable.

    At least, that's the way I remember it. 'S been a long time...

    Now, these new-fangled tools do all the optimizing for you. 'S taken all the fun outta coding.

  • by domulys ( 1431537 ) on Thursday April 30, 2009 @01:08AM (#27768517)
    x = x xor y
    y = x xor y
    x = x xor y

    Now you know!
    • by bziman ( 223162 ) on Thursday April 30, 2009 @08:46AM (#27771287) Homepage Journal
      This might be okay if you are SO constrained that you can't afford one register's worth of temp space, but if you're into performance, this is 4-8x slower than using a temp variable in every language I've tried it on. Run your own benchmarks and see what I mean. Also, don't obfuscate your code just to be "clever".
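
      For reference, the two versions side by side (a quick Java sketch, nothing profound). The XOR trick only works on integer types, and if you ever "swap" a slot with itself it silently zeroes the value:

      public class SwapDemo {
          // The XOR trick: no temp, but integer types only, and i == j zeroes the slot.
          static void xorSwap(int[] a, int i, int j) {
              a[i] = a[i] ^ a[j];
              a[j] = a[i] ^ a[j];
              a[i] = a[i] ^ a[j];
          }

          // The boring version the compiler and JIT are perfectly happy with.
          static void tempSwap(int[] a, int i, int j) {
              int tmp = a[i];
              a[i] = a[j];
              a[j] = tmp;
          }

          public static void main(String[] args) {
              int[] a = {1, 2};
              xorSwap(a, 0, 1);    // now 2, 1
              tempSwap(a, 0, 1);   // back to 1, 2
              System.out.println(a[0] + " " + a[1]);
          }
      }
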
  • 1. Writing on a slate. Man, did that compile slow!
    2. When you heard "stack," it meant firewood.
    3. The error message was a ruler across the knuckles.
    4. Memory overloads. My own memory, I mean.
    5. If you wanted to change your resolution, you had to wait until New Years.
    6. Try coding when you're drinking the original mountain dew.
    7. The rantings of code guru SFBM, the inventor of open Morse code.

  • by mykepredko ( 40154 ) on Thursday April 30, 2009 @01:58AM (#27768785) Homepage

    Back around 1980, the most common piece of self modifying code was to implement a 16 bit index read/write/goto instruction in the Apple ]['s (and Atari and C64) 6502 processor.

    The processor has an 8-bit index register, but to allow it to access more than 256 addresses you could either create 256 versions of the operation (each one accessing a different 256-byte block in memory) or put the function in RAM and modify the instruction that selected the 256-byte block.

    Sorry, I no longer have the code and my 6502 manuals are packed away, but I'm sure somebody out there remembers and has an example.

    myke

  • New Headaches (Score:4, Insightful)

    by Tablizer ( 95088 ) on Thursday April 30, 2009 @02:26AM (#27768949) Journal

    The biggest "new" headache that will probably end up in such an article 20 years from now is web "GUIs", A.K.A. HTML-based interfaces. Just when I was starting to perfect the art of GUI design in the late 90's, the web came along and changed all the rules and added arbitrary limits. Things easy and natural in desktop GUI's are now awkward and backassward in a browser-based equivalent.

    Yes, there are proprietary solutions, but the problem is that they are proprietary solutions and require users to keep Flash or Active-X-net-silver-fuckwhat or Crashlets or hacky JimiHavaScript up-to-date, making custom desktop app installs almost seem pleasant in comparison, even with the ol' DLL hell.

    On a side note, I also came into the industry at the tail end of punched cards (at slower shops). Once the card copy machine punched the holes about 1/3 mm off, making them not read properly, but on *different* cards each pass thru. It's like including 0.5 with binary numbers, or 0.4999 and 0.5001 with a quantum jiggle.

    Good Times
       
