Old-School Coding Techniques You May Not Miss 731
CWmike writes "Despite its complexity, the software development process has gotten better over the years. 'Mature' programmers remember manual intervention and hand-tuning. Today's dev tools automatically perform complex functions that once had to be written explicitly. And most developers are glad of it. Yet, young whippersnappers may not even be aware that we old fogies had to do these things manually. Esther Schindler asked several longtime developers for their top old-school programming headaches and added many of her own to boot. Working with punch cards? Hungarian notation?"
Begone, common file format loaders! (Score:4, Interesting)
Re:Hungarian Notation (Score:3, Interesting)
However, specific notation on some things IS a good thing. Conventions like CONSTANTS, m_memberVariables, and so forth are good because they remind you that the variable in question is an exception to what you'd normally expect (that it's a number, a string, or an object). They're not strictly necessary any more (my current workplace just uses upper camel case for everything, for instance, and my last job used trailing underscores to denote member variables, which was downright annoying), but IMO they're good for preventing brain-fail errors. Recognising that the programmer is the source of all errors is the first step towards getting rid of them. Well, except in Borland Turbo C++.
Get down to the metal (Score:4, Interesting)
Yeah, some of these are pretty old. I do remember working on a machine where the compiler wasn't smart enough to make the code really fast so I would get the .s file out and hand edit the assembly code. This resulted in some pretty spectacular speedups (8x for instance). Mind you, more recently I was forced to do something similar when working with some SSE code written for the Intel chips which was strangely slower on AMD. Turned out it was because the Intel chips (PIII and P4) were running on a 32 bit bus and memory access in bytes was pretty cheap. The Athlons were on the 64 bit EV6 bus and so struggled more so were slower. Once I added some code to lift the data from memory in 64 bit chunks and then do the reordering it needed using SSE the AMD chips were faster than the Intel ones.
Sometimes I think we have lost more than we have gained, though, with our reliance on compilers being smarter. It was great fun getting in there with lists of instruction latencies and manually overlapping memory loads and calculations. Also, when it comes to squeezing the most out of machines with few resources, I remember being amazed when someone managed to code a reasonably competent chess game into 1K on the Sinclair ZX81. Remember too that the ZX81 had to store the program, variables, and display all in that 1K. For this reason, the chess board was up at the top left of the screen. It was the funniest thing to be writing code on a 1K ZX81: as the memory got full you could see less and less of your program, until the memory was completely full and you could only see one character on screen....
Dirty old Fortran (Score:4, Interesting)
Hollerith constants
Equivalences
Computed Gotos
Arithmetic Ifs
Common blocks
There were worse things, horrible things... dirty tricks you could play to get the most out of limited memory, or to bypass Fortran's historical lack of pointers and data structures. Fortran-90 and its successors have done away with most of that cruft while also significantly modernizing the language.
They used to say that real men programmed in Fortran (or should I say FORTRAN). That was really before my time, but I've seen the handiwork of real men: impressive, awe-inspiring, crazy, scary. Stuff that worked, somehow, while appearing to be complete gibberish -- beautiful, compact, and disgustingly ingenious gibberish.
Long live Fortran! ('cause you know it's never going to go away)
Duff's Device (Score:5, Interesting)
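For anyone who's never actually seen the device, here's a sketch of the classic form in C. (Tom Duff's original copied every byte to a single memory-mapped output register; this memory-to-memory copy is the variant most people quote.)

```c
#include <stddef.h>

/* Duff's Device: loop unrolling where a switch jumps into the
   middle of a do-while to absorb the leftover iterations.
   Valid (if alarming) C: the case labels sit inside the loop. */
void duff_copy(char *to, const char *from, size_t count) {
    if (count == 0) return;
    size_t n = (count + 7) / 8;   /* number of 8-way passes */
    switch (count % 8) {
    case 0: do { *to++ = *from++;
    case 7:      *to++ = *from++;
    case 6:      *to++ = *from++;
    case 5:      *to++ = *from++;
    case 4:      *to++ = *from++;
    case 3:      *to++ = *from++;
    case 2:      *to++ = *from++;
    case 1:      *to++ = *from++;
            } while (--n > 0);
    }
}
```

The switch jumps straight into the middle of the unrolled do-while to handle the remaining `count % 8` bytes, which is exactly the part that horrifies people the first time they see it.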
Re:Some, not all... (Score:5, Interesting)
I did a ton of work in THINK C 5 on Mac OS 7. Programming in C on a computer with no memory protection is something I never want to experience again. Misplace a single character, and it's reboot time-- for the 13th time today.
What's *really* weird is that at the time I didn't think that was particularly strange or difficult. It was just the way things were.
Re:Punched cards - there was a machine for that (Score:5, Interesting)
Jeez. You must have taken the same course that I did. (Probably not actually.) In my case it was a programming class emphasizing statistics taught by someone in the business school who actually wanted card decks turned in. (This was probably no later than, maybe, '80/'81.) I did the same thing you did. I wrote all the software at a terminal (one of those venerable bluish-green ADM 3As) and when it was working I left the code in my virtual card punch. When I sent a message to the operator asking to have the contents sent off to a physical card punch, his message back was "Seriously?"
Re:Someone doesn't get data compression (Score:3, Interesting)
Yeah, that was odd. I could see that if the final field of each assembly instruction was an address and everything was aligned to 2-word boundaries (msb-first), or you didn't use memory past a certain boundary (lsb-first), then you could save memory by compacting all the instructions by one bit (and then packing them together). Same for registers, or if you didn't use instructions with op-codes over a certain threshold. But if you were really saving one bit per instruction and you managed to compress 7k into 5k, then your instructions were only 3.5 bits on average to begin with, which doesn't seem very likely. Something definitely got lost in translation there.
radio in the computer case (Score:5, Interesting)
Circa 1984, when I did summer programming jobs at Digital Research (purveyors of CP/M), one of the programmers there showed me how you could put a transistor radio inside the case of your computer. You could tell what the computer was doing by listening to the sounds it picked up via the RF emissions from the computer. For instance, it would go into a certain loop, and you could tell because the radio would buzz like a fly.
Documentation was a lot harder to come by. If you wanted the documentation for X11, you could go to a big bookstore like Cody's in Berkeley, and they would have it in multiple hardcover volumes. Each volume was very expensive. The BSD documentation was available in the computer labs at UC Berkeley in the form of 6-foot-wide trays of printouts. (Unix man pages existed too, but since you were using an ADM3A terminal, it was often more convenient to walk over to the hardcopy.)
On the early microcomputers, there was no toolchain for programming other than MS BASIC in ROM. Assemblers and compilers didn't exist. Since BASIC was slow, if you wanted to write a fast program, you had to code it on paper in assembler and translate it by hand into machine code. But then in order to run your machine code, you were stuck because there was no actual operating system that would allow you to load it into memory from a peripheral such as a cassette tape drive. So you would first convert the machine code to a string of bytes expressed in decimal, and then write a BASIC program that would do a dummy assignment into a string variable like 10 A$="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx". Then you would write self-modifying code in BASIC that would find the location where the string literal "xxx...." was stored, and overwrite it with your machine code. So now if you gave the LIST command, it would display the program on the screen, with the string literal displayed as goofy unprintable characters. Then you would code the program so it would execute the machine code stored at the address of the variable A$. Finally you'd save the program onto cassette.
swapping two values without a temporary variable (Score:5, Interesting)
x = x xor y
y = x xor y
x = x xor y
Now you know!
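In C, the same trick sketched as a function (the aliasing guard matters: XOR-swapping a variable with itself zeroes it):

```c
/* Swap two unsigned ints via XOR, no temporary.
   If x and y point to the same object the trick would zero it,
   hence the guard. */
void xor_swap(unsigned *x, unsigned *y) {
    if (x == y) return;
    *x ^= *y;
    *y ^= *x;   /* *y is now the original *x */
    *x ^= *y;   /* *x is now the original *y */
}
```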
Re:Hungarian Notation (Score:5, Interesting)
They missed the Apple ][ (6502) 16 bit index (Score:3, Interesting)
Back around 1980, the most common piece of self modifying code was to implement a 16 bit index read/write/goto instruction in the Apple ]['s (and Atari and C64) 6502 processor.
The processor has an 8-bit index register, so to access more than a 256-byte range you could either create 256 versions of the operation (each one accessing a different 256-byte block of memory) or put the function in RAM and modify the instruction that selected the 256-byte block.
Sorry, I no longer have the code and my 6502 manuals are packed away, but I'm sure somebody out there remembers and has an example.
myke
True story (Score:5, Interesting)
Let me tell you a true story to illustrate why I think people should still learn that stuff.
ACT I
So at one point I'm in a room with what looks like two particularly unproductive Wallys. Though it's probably unfair to call both Wally, since at least one looks like the hard working kind... he just makes as much progress as a courier on a treadmill.
So Wally 1 keeps clicking and staring at the screen all week and spewing things like "Unbelievable!" every 5 minutes. My curiosity gets the better of me and I ask what's happening.
"Look at this," goes Wally 1, and I indeed move over to see him toiling in the debugger through a Hashtable with String keys. He's looking at its bucket array, to be precise. "Java is broken! I added a new value with the same hash value for the key, and it just replaced my old one! Look, my old value was here, and now it's the new one!"
"Oh yes, we had that bug too at the former company I worked for," chimes in Wally 2. "We had to set the capacity manually to avoid it."
I clench my teeth to stop myself from screaming.
"Hmm," I play along, "expand that 'next' node, please."
"No, you don't understand, my value was here and now there's this other key there."
"Yes, but I want to see what's in that 'next' node, please."
So he clicks on it and goes, "Oh... There it is..."
Turns out that neither of them had the faintest fucking clue what a hash table is, or for that matter what a linked list is. They looked at its hash bucket and expected nothing deeper than that. And, I'm told, at least one of them had been in a project where they actually coded workarounds (that can't possibly make any difference, either!) for its normal operation.
ACT II
So I'm consulting at another project and essentially they use a HashMap with String keys too. Except they created their own key objects, nothing more than wrappers around a String, and with their own convoluted and IMHO suboptimal hash calculation too. Hmm, they must have had a good reason, so I ask someone.
"Oh," he goes, "we ran into a Java bug. You can see it in the debugger. You'd add a new value whose key has the same hash value and it replaces yours in the array. So Ted came up with an own hash value, so it doesn't happen any more."
Ted was their architect, btw. There were easily over 20 of them merry retards in that project, including an architect, and not one of them understood:
A) that that's the way a hash table works, and more importantly
B) that it still worked that way even with Ted's idiotic workaround. By the pigeonhole principle it's mathematically impossible to code a hash for a finite bucket array that doesn't produce collisions, and sure enough Ted's version produced them too.
ACT III
I'm talking to yet another project's architect, this time a framework, and, sure enough...
"Oh yeah, that's the workaround for a bug they found in project XYZ. See, Java's HashMap has a bug. It replaces your old value when you have a hash collision in the key."
AAARGH!
So I'm guessing it would still be useful if more people understood these things. This isn't just an abstract complaint about cargo-cult programming. We're talking about people, and sometimes whole teams, who ended up debugging into normal hash table behaviour when they had some completely unrelated bug, and spent time on it. And then spent more time coding "workarounds" which can't possibly make any difference. And then spent more time fixing the actual bug they had in the first place.
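For anyone who'd like to see the "bug" in miniature, here's a toy chained hash table in C. The four-bucket table and the colliding keys are my own choices for illustration; Java's HashMap uses the same bucket-array-plus-linked-list layout, just bigger.

```c
#include <stdlib.h>
#include <string.h>

/* Toy chained hash table: colliding keys share a bucket via a
   linked list ("next"), exactly what showed up in the debugger. */
#define NBUCKETS 4

struct node {
    const char *key;
    int value;
    struct node *next;
};

static struct node *buckets[NBUCKETS];

static unsigned hash(const char *s) {
    unsigned h = 0;
    while (*s) h = h * 31 + (unsigned char)*s++;
    return h % NBUCKETS;
}

static void put(const char *key, int value) {
    unsigned b = hash(key);
    for (struct node *n = buckets[b]; n; n = n->next)
        if (strcmp(n->key, key) == 0) { n->value = value; return; }
    struct node *n = malloc(sizeof *n);
    n->key = key;
    n->value = value;
    n->next = buckets[b];   /* new head; old entries stay reachable */
    buckets[b] = n;
}

static int get(const char *key, int *out) {
    for (struct node *n = buckets[hash(key)]; n; n = n->next)
        if (strcmp(n->key, key) == 0) { *out = n->value; return 1; }
    return 0;
}
```

With four buckets, "a" and "e" hash to the same bucket, yet both stay retrievable: the newer entry just becomes the head of the chain, with the older one sitting behind it in 'next'.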
Re:Dirty old Fortran (Score:3, Interesting)
Hollerith constants
Equivalences
Computed Gotos
Arithmetic Ifs
Common blocks
There were worse things, horrible things... dirty tricks you could play to get the most out of limited memory, or to bypass Fortran's historical lack of pointers and data structures.
Long live Fortran! ('cause you know it's never going to go away)
Didn't most of the tricks just boil down to "Define one big integer array in a common block and then pack all your data, whatever its type, into that"? All my PhD research was done with code like that. It was mostly written by my supervisor years and years before even that and I never actually learned how it worked.
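Translated to C, the big-array trick looks roughly like this (the array name and offsets are made up; memcpy stands in for the type-punning that EQUIVALENCE did, and the offsets assume 4-byte ints):

```c
#include <string.h>

/* A C sketch of the Fortran trick: one big shared integer array
   ("blank common") that every routine packs its own data into at
   agreed-upon offsets, whatever the real type. */
static int blank_common[1024];

/* Store a double at a given int-offset by raw copy, the way
   EQUIVALENCE overlaid storage of different types. */
void pack_double(int offset, double v) {
    memcpy(&blank_common[offset], &v, sizeof v);
}

double unpack_double(int offset) {
    double v;
    memcpy(&v, &blank_common[offset], sizeof v);
    return v;
}
```

Every routine that touches the block has to agree on the offsets by convention alone, which is exactly why code like that was so hard to learn from.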
Re:Some, not all... (Score:1, Interesting)
I can squeeze (as a rule) at least a 10% performance improvement out of the code that the others I program with write, purely because I bothered to learn things like the sorting algorithms, or that a for loop counting down and testing >= 0 can be cheaper where possible (or can be split into a while loop). Incremental changes that make vast improvements to the performance of the code. This, incidentally, is one of the reasons I detest people learning Perl or Python first. Ease of the language or simple learning aside, the tendency is to write awful code in the long run, at least in the programmers I've seen.
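For what it's worth, the count-down loop described above looks like this in C; whether it actually beats the count-up form nowadays is entirely up to the compiler, which will usually make this transformation itself:

```c
/* Count down to zero: the loop test compares against an
   immediate zero (often a single flag check on old CPUs)
   instead of reloading a variable bound each iteration. */
long sum_countdown(const int *a, int n) {
    long s = 0;
    for (int i = n - 1; i >= 0; --i)
        s += a[i];
    return s;
}
```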
Re:Eliminate Structured Programming? (Score:4, Interesting)
"goto cleanup;" however, is hard to mess up.
hehe, have a look at Ian Kent's code in autofs sometime.
His use of "goto cleanup;" is an infinite source of double free bugs.
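For contrast, the idiom done defensively: every pointer starts as NULL and the single exit path frees each one exactly once, so there's nothing to double-free. (The function and buffers here are illustrative, not autofs code.)

```c
#include <stdlib.h>

/* The "goto cleanup" idiom done safely: one exit path, and every
   pointer is NULL until it owns something, so cleanup can never
   free twice no matter which allocation failed. */
int process(size_t n) {
    int rc = -1;
    char *a = NULL, *b = NULL;

    a = malloc(n);
    if (!a) goto cleanup;
    b = malloc(n);
    if (!b) goto cleanup;

    /* ... real work would go here ... */
    rc = 0;

cleanup:
    free(b);   /* free(NULL) is a no-op, so this is always safe */
    free(a);
    return rc;
}
```

The double-free bugs come from breaking either rule: freeing in the body as well as in cleanup, or jumping to cleanup with a pointer that was never initialized.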
Re:Yes, I'm old (Score:3, Interesting)
I still remember little dribs and drabs, like "032 210 255" being the assembled code for JSR $FFD2, the kernel jump table's address for "print a character to the screen."
Those were the days.
Re:swapping two values without a temporary variabl (Score:2, Interesting)
x,y = y,x
Is the easiest way to do it surely?
Re:True story (Score:5, Interesting)
Though it's probably unfair to call both Wally, since at least one looks like the hard working kind... he just makes as much progress as a courier on a treadmill.
The hard working kind is the worst, because a manager can't really see why such a team member isn't working out.
I used to work with one of those. This Wally was very smart, a genius in fact: extremely articulate and fluent in several world languages, a PhD, a decade of experience as an architect and developer for various high-profile customers. A fantastic work ethic: worked 10 hours a day and meticulously tested everything they checked in, so that the countless bugs this person introduced never showed up in normal user testing. Race conditions, memory leaks, thread-safety issues, thousands upon thousands of lines of unreachable code, countless more lines of workarounds for supposed bugs in 3rd-party tools that were actually the proper results of their improper input.
Re:swapping two values without a temporary variabl (Score:3, Interesting)
Now you know!
That's a terrible way of doing it! It guarantees that the compiler and processor can't do anything smart with reordering variable accesses since the values are now officially dependent on each other, despite not really being.
Re:True story (Score:3, Interesting)
Re:Some, not all... (Score:4, Interesting)
This reminds me of a very educational example [perlmonks.org]. On Perl forums the question sometimes arises: which is faster, single quotes or double quotes? (The difference being that the latter interpolates variables.)
People in the know have pointed out multiple times that the single-vs-double-quote issue is a benchmarking mistake. See, Perl is a VM-based language with compile-time optimizations. The code that people write as single or double quotes gets compiled down to the same thing. This is the kind of knowledge that is useful: knowing a bit of the theory and design of the underlying language, instead of having benchmark results but not knowing how to interpret them...
String Packing (Score:3, Interesting)
Re:Some, not all... (Score:4, Interesting)
You mustn't do any real-time processing with any serious volumes of data. I do market data systems for my day job. All the microseconds I can save add up. Yesterday, I knocked several seconds off the time required to do an initial load by getting rid of some redundant copying that was going on. Today I improved the response latency for certain market events by changing the data type used for a key in a map. You might not need to understand this stuff when you're doing typical desktop applications, but then you'll have to be content with being a hack. The real software engineers will always have to understand.
Re:Some, not all... (Score:4, Interesting)
I suppose it depends on what the programmer is doing. A programmer that does little more than code web scripts that interface with previously coded databases.... yeah, they can probably manage to have a career doing that and not ever need to code a sorting algorithm, or do anything equally complicated. However, if you're tasked with writing a game, or an OCR scanner, or a natural language parser, fundamental "obsolete" concepts such as sorting will suddenly become extremely important. Not so much because you're going to need to suddenly learn how to program a computer to sort a bunch of integers, but you might be faced with a multidimensional structure with millions of elements and need to come up with an efficient way to organize and access it... and previously coded "sort" algorithms probably aren't going to be of much help. You're going to want to have some idea of where to start.
-Restil
Re:Hungarian Notation (Score:4, Interesting)
Re:swapping two values without a temporary variabl (Score:3, Interesting)
I still get a kick out of instant compilers! (Score:2, Interesting)
My first enterprise application was HAND WRITTEN on coding sheets and handed to a data entry operator who typed it in, and I had to wait 3-5 days for a compilation report and code listing printouts. Needless to say, the first report was mostly syntax errors (bad handwriting, smudges, and data entry errors). Then I had to hand in modifications on new sheets -- these would only take a day or two to come back!
A few years later I worked on a PRIME mini and had to submit all C applications into a queue for compilation -- sometimes 30 minutes, but on high-load days it could take 8 hours -- just to compile the damn code! Again I picked up the results in the form of a printout.
But now, even to this day, I still get a huge kick out of compiling, building, and linking (including generating and optimizing) 1 million+ lines of code right on my desktop in seconds.
Re:Yes, I'm old (Score:4, Interesting)
I write web apps. I never have to sort anything, except when I ask the database to give me data in a certain order. Why would it be useful for me to implement and be intimately familiar with sorting algorithms? I haven't used them since college.
Re:Punched cards - there was a machine for that (Score:3, Interesting)
Around 1980, there was a program called SPIKE, which was a virtual keypunch. It put an image of a punch card up on the screen (character graphics on a Z-100). As you typed, it wrote the characters (uppercase) at the top of the card, and blacked out the appropriate squares.
It had a mode to do blind typing (not all keypunches printed the characters at the top of the card--that cost extra). It also had a 'dropdeck' command to shuffle the lines of your file.
Re:Some, not all... (Score:2, Interesting)
Too true. Some of the worst performing code I've ever seen was written by someone who "optimized" the C source first.
Sure, it worked on the broken compiler he was coding for, but when it hit a compiler with a GOOD optimizer, it sucked: The code was unreadable, and the "naive" and "maintainable" way of writing the code was much faster.
Your search word in the ANSI/ISO C standard is "sequence point". Follow that up with "invariant hoist", "common sub-expression", "register kills", and "non-volatile condition registers".
(Back in the day, hand-optimizing like this could work on an x86 chip, because there were no registers to speak of, so nothing could be held in the CPU. But on a RISC chip, that's just not the case. By creating extra sequence points, the hand-"optimized" source forced the compiler to serialize calculations it could otherwise have overlapped.)
Given this code:
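(The snippet didn't survive the posting; judging from the discussion that follows, it was presumably a compound conditional along these lines, with illustrative names:)

```c
/* Presumed shape of the lost example: several independent tests
   and'ed together, which a good optimizer can evaluate out of
   program order and keep in condition registers. */
int in_range(int x, int y, int limit) {
    if (x > 0 && y != 0 && y < limit)
        return 1;
    return 0;
}
```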
A good optimizer, especially on a target with multiple condition registers, will just keep the result of the 'x>0' comparison in a CR, 'and'ing it with the other tests. It will also start the test of 'y' long before the 'if' statement logically appears, so there's no slow-down in the pipeline.
So don't fiddle with non-algorithmic changes to the code, until long after you've done everything else. If it makes a big difference, you need a better compiler anyway; or you need to re-think your declarations. Profile it, and only spend time impairing readability where performance really matters.
Replacing bubble sort with quick sort is worth it. "register boolean x_positive=x>0;" isn't.
Also, languages like C and C++ really suck if you put function calls inside tight loops. Unless the compiler "knows" the far side of the call, it has to assume memory and registers are killed (within the limits of the ABI), and so has to write variables to main storage and pull them back on either side of the function calls.
So, what you want is to hand large chunks of work around, not single values.
That'll be a big win for caching, too; if you're going to micro-optimize, you'd better pay attention to your cache stride and associativity.
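The "hand large chunks around" advice in miniature, as a hypothetical hot loop: one call per buffer instead of one opaque call per element, so the inner loop stays inlined and register-resident:

```c
#include <stddef.h>

/* Scale a whole buffer in one call. Across an opaque per-element
   call the compiler must assume registers and memory are killed
   (within the ABI), spilling variables around every call; keeping
   the loop inside one function avoids all of that. */
void scale_buffer(double *a, size_t n, double k) {
    for (size_t i = 0; i < n; i++)
        a[i] *= k;
}
```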
Re:Premature optimizations (Score:2, Interesting)
Re:swapping two values without a temporary variabl (Score:1, Interesting)
y = x + y - (x = y)