Learning Computer Science via Assembly Language
johnnyb writes "
A new book was just released which is based on a new concept: teaching computer science through assembly language (Linux x86 assembly language, to be exact). This book teaches how the machine itself operates, rather than just the language. I've found that the key difference between mediocre and excellent programmers is whether or not they know assembly language. Those who do tend to understand computers themselves at a much deeper level.
Although unheard of today, this concept isn't really all that new; in years past there wasn't much choice. Apple computers came with only BASIC and assembly language, and there were books on assembly language written for kids.
This is why the old-timers are often viewed as 'wizards': they had to know assembly language programming. Perhaps this current obsession with learning through 'easy' languages is the wrong way to do things. High-level languages are great, but learning them will never teach you about computers. Perhaps it's time that computer science curricula start teaching assembly language first."
Linux x86 assembly? (Score:3, Insightful)
Re:Linux x86 assembly? (Score:5, Informative)
Not the point! (Score:5, Insightful)
My first IBM PC job was in C, but I had to learn 8086 assembly so that I could debug, since there was no source-level debugging when using overlays.
Anyway, how do you find a compiler bug if you can't read the code the compiler generates?
Re:Not the point! (Score:4, Funny)
Re:Not the point! (Score:5, Informative)
Tom
Just to clarify (Score:5, Informative)
However, in the real world, NANDs are cheap (2-3 transistors), so that's what everyone uses.
4-bit Full Adder Using Relays or Vacuum Tubes (Score:5, Funny)
However, in the real world, NANDs are cheap (2-3 transistors), so that's what everyone uses.
Well, NANDs are easy to make with MOSFETs or vacuum tubes.
But I suggest that, in order to simplify the learning of digital logic and avoid the whole nastiness of De Morgan's laws, we should adopt relays as our primary logic device.
Think about it: two relays with their contacts in parallel = OR. Two relays with their contacts in series = AND. A relay with normally-closed contacts = NOT.
In this way, all design work can be done with natural logic (AND, OR, NOT) rather than "efficient" NAND, NOR, etc.
On top of that, your computer would make satisfying clicking sounds reminiscent of a pinball machine's scorekeeping system or an old elevator controller while you're crunching SETI@Home units.
I'm building a 4-bit binary full adder with nothing but relays in order to demonstrate their sheer computing power, and was hoping that someone could write me drivers to allow it to have practical uses.
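(For the curious, here's a quick C sketch of the logic such a contraption would implement; it's a hypothetical illustration, not relay driver code. Each full adder computes sum = a XOR b XOR carry-in, with carry-out as the majority of the three inputs, and four of them rippled together add two 4-bit numbers.)

    #include <stdio.h>

    /* One full adder: sum = a XOR b XOR cin, carry-out = majority(a, b, cin). */
    static void full_add(int a, int b, int cin, int *sum, int *cout)
    {
        *sum  = a ^ b ^ cin;
        *cout = (a & b) | (a & cin) | (b & cin);
    }

    int main(void)
    {
        int a = 0x9, b = 0x5, carry = 0, result = 0;   /* 9 + 5 */
        for (int i = 0; i < 4; i++) {                  /* ripple the carry upward */
            int s;
            full_add((a >> i) & 1, (b >> i) & 1, carry, &s, &carry);
            result |= s << i;
        }
        printf("%d + %d = %d (carry out %d)\n", a, b, result, carry);  /* 9 + 5 = 14 */
        return 0;
    }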
in reality... (Score:5, Informative)
In the CMOS world, pass gates are much cheaper than amplifying gates (in the size vs. speed vs. power tradeoff), although you can't put too many pass gates in a row (signal degradation). So in fact MUXes (multiplexers that pass one of two inputs under the control of a third) and XORs (use input A to pass either !B or B) are used quite a bit.
Some background might be helpful to think about the more complicated AOI structure, though...
In a CMOS NAND gate, the pull-up side is two p-type pass gates in parallel from the output to Vdd (the positive rail), so that if either of the two p-type gates is low, the output is pulled high. For the pull-down side, two n-type pass gates are in series to ground, so both n-type gates have to be high before the output is pulled to ground. This gives us a total of 4 transistors for a CMOS NAND, where the longest pass gate depth is 2 (the pull-down). The pull-up is restricted to be the complement function of the pull-down in CMOS (otherwise either the pull-up and pull-down will fight, or nobody will pull, causing the output to float and/or oscillate).
A 2-input NOR gate has the p-type in series and the n-type in parallel (for the same # of transistors).
Due to a quirk of semiconductor technology, n-type transistors are easier to make powerful than p-type, so a NAND is often slightly faster than a NOR (the two series n-types in a NAND gate are better at pulling down than the two series p-types in a NOR gate are at pulling up). However, this isn't the end of the story...
Notice that you can build a 3-input NAND by just adding more p-type transistors in parallel to the pull-up and more n-types in series to the pull-down. You can make even more complicated logic by putting the pull-up and pull-down transistors in combinations of series and parallel configurations. The most interesting CMOS configurations are called AOI (and-or-invert), since they are the ones you can make with simple series/parallel combinations of pass transistors for the pull-up and pull-down.
For most CMOS semiconductor technologies, you are limited to about 4 pass gates in series or parallel before the noise margin starts to kill you and you need to stop using pass gates and just start a new amplifying "gate". Thus most chips are designed to use 4-input AOI gates where possible and smaller gates to finish out the logic implementation.
Thus "everyone" really uses lots of different types of gates (including simple NAND and XORS as well as more complicated AOI).
Re:Vacuum tubes? (Score:5, Funny)
Re:Linux x86 assembly? (Score:5, Informative)
So in truth, the kernel is the car. Asm can be the road, it can be the engine, it can be the passengers, it can be the wind resistance; it can be virtually any component. But nonetheless, if you're writing an application sitting on top of the kernel, you are going to need to speak to the kernel's API at some point (or the API of a layer sitting on top of it), just as if you're writing a Windows application in asm, or C, or VB, you need to be speaking to the Win32 API.
Asm is no different than any other language: knowing the language is great and all, but it's worthless without learning the proper APIs you'll need to actually write a program that does something. That's a major flaw in most programming tutorials. They'll teach C or another language and not mention a single word about the APIs one needs to know to actually write a program that does more than calculate pi.
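(To make the point concrete, here's a minimal, hypothetical C sketch of "speaking to the kernel's API" on Linux directly through the generic syscall() wrapper rather than the usual library routines; the write and exit system calls are all it touches.)

    #define _GNU_SOURCE
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
        static const char msg[] = "hello from the kernel API\n";

        /* write(2): send the message to file descriptor 1 (stdout) */
        syscall(SYS_write, 1, msg, sizeof(msg) - 1);

        /* exit(2): terminate the process with status 0 */
        syscall(SYS_exit, 0);
        return 0;   /* never reached */
    }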
Re:Linux x86 assembly? (Score:5, Informative)
I've worked before with programmers who had little experience programming 'bare hardware'; they do really foolish things like forgetting to initialize timers, set up stack pointers, and the like.
Writing bare ASM code for a processor (where it boots up out of your own EPROM or on an emulator) is good experience in minimalism. It can give you a good feeling when the project is all done and you can say you did it all yourself.
For those interested in getting into this kind of thing, start with a PIC embedded controller and a cheap programmer. You can get PIC assembly language tools for free, and build a programmer, or buy a kit for one, that plugs into your serial or parallel port. Your first PIC machine can be the CPU, a clock crystal, a few resistors and capacitors, and the LED you want to blink, or whatever else intrigues you. If you're not into complex soldering and/or layout and complex schematics, you can buy pre-etched boards you just plug the PIC into.
Another easy-start processor would be the 68HC11. It has a bootstrap built into ROM. Basically, you can jumper the chip so it wakes up listening on the serial port for code you send down the wire at it, and burns it into the EEPROM memory in the 'HC11 chip itself. Move the jumper and reboot the chip, and it's running your code.
I think this is far more interesting than just writing apps that run on an operating system you didn't roll yourself.
Re:Linux x86 assembly? (Score:5, Informative)
Re:Linux x86 assembly? (Score:4, Interesting)
I learned assembly as an undergraduate at Penn State (before attending a year of grad school at Drexel), but what I got out of the course had far more to do with understanding architecture (something not relevant for most developers, but much more relevant for hardware engineers).
Assembly does not have any of the high-level features (OOP, libraries, etc.) that developers need to know these days. It's rarely used, even in embedded programming, since C/C++ compilers have gotten quite efficient and are available even for open-core (similar to open source) processors for use on FPGAs.
On the other hand, assembly is important to know for computer engineering undergrads and graduates interested in architecture, and having taught in the CompEng department there, I can say that the depth of assembly in the curriculum there is not sufficient.
Re:Linux x86 assembly? (Score:4, Insightful)
Also, you do know that compilers are written by programmers don't you?
Re:Linux x86 assembly? (Score:5, Insightful)
Assembly has little to nothing to do with either programming or computer science anymore. Computer science (IMHO) deals with the study of software engineering and algorithms
If it truly is a science, then someone who finishes a Bachelors and Masters program in Computer Science had better be capable of contributing to the advancement of this field.
This could be through the development of new languages, in which case I hope they know a thing or two about assembly in the first place.
Otherwise, a couple of Computer Science degrees would simply mean someone is a techno-wonk, a professional student, or just a technician rather than a professional engineer/scientist.
--
Disclaimer: 90% of the programmers out there do not need a Computer Science degree, and 90% of the jobs out there for developers don't need CS graduates.
Re:Linux x86 assembly? (Score:5, Funny)
What, you don't build your own processors? What fun is that?
Re:Linux x86 assembly? (Score:4, Insightful)
I learned Asm before I learned C, and I must say that was a good way of going about it. I'm glad I don't view C as some sort of "hocus pocus", and I never did. Everything just made sense. Nowadays you've got Joe Blow with dollar signs in his eyes and his shiny new degree in using Java, who doesn't understand the little black box he's entering commands into.
It sort of pisses me off, because I don't want to put silly little buzzwords in my resume like C#, or Java, or
Re:Linux x86 assembly? (Score:5, Interesting)
I don't know why, but just saying the words 'assembly language', sends a chill down my spine. I guess I am too weak minded to learn it.
Maybe individual brains just work in different ways. In school, I knew some people who were good with high-level languages but just couldn't hack assembler. They could not get down to that absolute minimal step-by-step instruction level. I'm not sure what that says about those of us who use assembler. :) BTW, I certainly don't advocate assembler as a first computer language - second, perhaps.
Re:Linux x86 assembly? (Score:3, Insightful)
Syntax, OS interfaces... (Score:5, Interesting)
Second, OS interfaces for making system calls (e.g., to read files, open network connections, etc.) are different in Linux versus DOS or Windows.
Re:Syntax, OS interfaces... (Score:5, Informative)
Incorrect. There are at least four different assemblers and standards:
AS - GNU Assembler (gas). AT&T standard, as commonly used on Linux. The syntax hasn't changed since the '60s, which is both very good and very bad. I personally think it should be retired.
MASM - Microsoft Assembler. Intel-standard assembly. The syntax is nice, but there are some ambiguous operators (does [] mean the address, or the value at that address? The meaning changes depending on the context). This is typically what the commercial Windows world uses. MASM itself is mostly obsolete: the Visual C compiler can now do everything it could and supports all modern CPU instructions (even on Visual C++ 6 if you install the latest CPU pack).
NASM - Netwide Assembler. An assembler that set out to put right all the things that were wrong with MASM. The syntax is excellent, ambiguous operators are cleared up, documentation is also excellent, it interoperates beautifully with Visual C on Windows and GNU C on Linux. Ideally NASM would replace AS as the standard now that it's open source.
TASM - Borland Turbo Assembler. Based around the Intel standards, but does things slightly differently. Has extensions which allow for easy object-oriented assembly programming - which can make for some very nice code. Had a MASM compatibility mode, but nobody in their right mind used that if they could help it. I had version 5, but I don't believe they've kept it up to date, so it's obsolete now.
There are a couple of others as well, most notably AS86 (which was the leading independent solution for writing assembler back in the DOS days).
Re:Syntax, OS interfaces... (Score:5, Informative)
x86 instructions that deal with two operands can be written two ways:
instr src,dest
instr dest,src
The Intel standard (used by NASM, TASM, MASM) is dest,src. The AT&T standard (used by gas) is src,dest.
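(A concrete sketch, assuming GCC on x86: the same one-line operation written for each assembler family. The C wrapper just uses GCC's inline assembler, which speaks AT&T syntax by default; the Intel form is shown in the comment for comparison.)

    #include <stdio.h>

    /* The same addition in both syntaxes:
       AT&T (gas):    addl $5, %eax     # source first, destination last
       Intel (NASM):  add  eax, 5       ; destination first, source last */
    static int add_five(int x)
    {
        __asm__("addl $5, %0" : "+r"(x));   /* inline asm uses AT&T operand order */
        return x;
    }

    int main(void)
    {
        printf("%d\n", add_five(37));   /* prints 42 */
        return 0;
    }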
Re:Linux x86 assembly? (Score:5, Interesting)
Yes. Although it requires understanding the CPU's native capabilities to the same degree, Linux uses AT&T syntax, whereas most of the Wintel world uses (unsurprisingly) Intel/Microsoft syntax.
Personally, although I far prefer coding C under Linux, I prefer Intel-syntax assembly. Even with many years of coding experience, I find AT&T syntax unnecessarily convoluted and somewhat difficult to quickly read through.
The larger idea holds, however, regardless of what assembler you use. I wholeheartedly agree with the FP: people who know assembly produce better code by almost any measurement except "object-oriented-ness", which assembly makes difficult to an extreme. On that same note, I consider that one of the better arguments against OO code: it simply does not map well to real-world CPUs, thus introducing inefficiencies in the translation to something the CPU does handle natively.
Re:Linux x86 assembly? (Score:5, Insightful)
Maxim: cycles are cheap, people are expensive. For the *vast majority* of software it is significantly better value to design and build a well-architected OO solution than to optimise for performance in languages and methodologies that are more difficult to implement and maintain. Who cares if it's not very efficient? It'll run twice as fast in 18 months, and will be a lot cheaper to change when the client figures out what they actually wanted in the first place. But I guess you already knew that.
Re:Linux x86 assembly? (Score:5, Interesting)
True. This topic, however, goes beyond mere maximizing of program performance. Put simply, if you know assembler, you can take the CPU's strengths and weaknesses into consideration while still writing readable, maintainable, "good" code. If you do not know assembly, you might produce simply beautiful code, but then have no clue why it runs like a three-legged dog.
it is significantly better value to design and build a well architected OO solution
Key phrase there: "well-architected". In practice, the entire idea of "object reuse" counts as a complete myth (I would say "lie", but since it seems like more of a self-deception, I won't go that far). I have yet to see a project where more than a handful of objects from older code would provide any benefit at all, and even those that did required subclassing to add and/or modify over half of their existing functionality. On the other hand, I have literally hundreds of vanilla-C functions I've written over the years from which I draw with almost every program I write, and that require no modification to work correctly (in honesty, the second time I use them, I usually need to modify them to generalize better, but after that, c'est fini).
Who cares if it's not very efficient - it'll run twice as fast in 18 months
Y'know, I once heard an amusing joke about that... "How can you tell a CS guy from a programmer?" "The CS guy writes code that either won't run on any machine you can fit on a single planet, or will run too slowly to serve its purpose until technology catches up with it in a few decades." Something like that; I killed the joke, but you get the idea.
Yeah, computers constantly improve. But the clients want their shiny new software to run this year (if not last year, or at least on 5-year old equipment), not two years hence.
Re:Linux x86 assembly? (Score:5, Insightful)
About
Re:Linux x86 assembly? (Score:5, Insightful)
Gack! I perhaps have phrased myself rather poorly. Throughout this entire thread, I have not meant to refer to writing even a single line of actual assembly code. I don't mean that humans can do it better than compilers (though often true, for small sections of code), I don't mean that asm always runs faster than the comparable C (again, often true), and I don't in any way mean that asm reads more clearly than a high-level language (about as false as they come).
Perhaps an example would help...
In C, I can make a 10-dimensional array (if the compiler will let me) as a nice, easily-readable organization of... Well, of something having 10 dimensions (superstrings?). I can make a pointer to a structure that contains an array of pointers to linked lists (which sounds obscure, but I can imagine it as a straightforward way to implement, say, a collection of variable-length metadata on a set of files). I can choose to have my loop indices run in row-major or column-major order, with no high-level reason to choose either way.
From an assembly point of view, I realize exactly the hellish task involved in dereferencing the first two examples. I realize that row-major vs. column-major ordering has a significant impact on the quantity of dereferencing needed. Even further, I realize that by choosing row-major or column-major indexing, I can ensure cache locality, or obliterate it.
The specific examples I just gave perhaps seem absurdly obvious to any decent programmer. But countless other, more subtle, differences in how I would choose to lay out my code come from an understanding of what the compiler will likely do with that code, and how the CPU will eventually have to deal with it. Such choices have no superficially obvious relation to the CPU; without that understanding they would look more like stylistic preferences than careful decisions with significant performance implications.
How about the size of an array, for example? Sometimes using a power of two will help immensely (if it allows a constant shift vs. a multiply), and sometimes it will hurt immensely (if you plan to use it such that almost every access competes for the same cache line). Things like that, which a high-level-only programmer simply will not know without experiential (i.e., programming in assembly) knowledge of the underlying architecture.
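(A small C sketch of the row-major point, offered as an illustration rather than anything from the book: C lays out a[i][j] row by row, so the first loop below walks consecutive addresses while the second strides through memory N doubles at a time, and the two can differ enormously in cache behavior even though they compute the same sum.)

    #include <stdio.h>

    #define N 1024

    static double a[N][N];   /* statically allocated 8 MB array, zero-initialized */

    /* Cache-friendly: the inner loop touches consecutive addresses. */
    static double sum_row_major(void)
    {
        double s = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += a[i][j];
        return s;
    }

    /* Cache-hostile: the inner loop jumps N * sizeof(double) bytes per step. */
    static double sum_column_major(void)
    {
        double s = 0.0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += a[i][j];
        return s;
    }

    int main(void)
    {
        printf("%f %f\n", sum_row_major(), sum_column_major());   /* same result */
        return 0;
    }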
Re:Linux x86 assembly? (Score:5, Insightful)
That is not the point. The point is that knowing one assembly language gives far more insight into what higher-level languages actually do. It is, e.g., very difficult to explain the actual workings of a buffer-overflow exploit to somebody without any assembly knowledge. Or what a pointer is. Or what paging does. Or what an interrupt is. Or what impact the stack has and how it is used for function arguments. Or how much memory a variable needs...
The only processor I know that actually made assembly programming almost a C-like experience was the Motorola 68xxx family. On the Atari ST, e.g., there were complex applications written entirely in assembly. Today it would indeed be foolish to do a larger project in assembly language, but that is not the point of the book at all.
Bottom line: You need to understand the basic tools well. You don't need to restrict yourself to their use or even use them often. But there is no substitute for this understanding.
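(To illustrate the buffer-overflow point above, here is a deliberately broken, hypothetical C sketch. The local array lives on the stack a fixed distance below the saved return address, so an unchecked copy of a long argument can overwrite that address; explaining *why* that redirects execution pretty much requires knowing how the call stack looks at the machine level.)

    #include <stdio.h>
    #include <string.h>

    static void greet(const char *arg)
    {
        char name[16];
        strcpy(name, arg);            /* no bounds check: the classic bug */
        printf("Hello, %s\n", name);  /* a long arg has already trashed the stack */
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            greet(argv[1]);
        return 0;
    }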
Re:Linux x86 assembly? (Score:5, Insightful)
Have you never used a decent class library? Writing reusable classes requires a much more careful approach to design and implementation than writing classes for one time use, and most people can't afford to spend that kind of time. The power of reusability lies in the fact that I can go out and buy a library of useful classes and feel pretty good that the code therein has already been well tested, usually at much lower cost and higher quality than I could produce myself.
Whether it's building a user interface with PowerPlant, Cocoa, or MFC, or manipulating data with STL, the amount of code that I reuse far exceeds the amount that I write myself.
Assembly for speed? (Score:5, Insightful)
I'm not saying that there aren't places where low-level details are critical, but for the most part they just draw attention away from the thing that has the most impact on performance:
Application Architecture.
The choices of algorithms and data structures are far more important than any low-level details. But low-level details are more fun, and tend to make us feel more manly or guruly or something, so we tend to focus on them instead. In practice I find that using low-level languages or super-optimized tools makes it hard to worry about high-level structure, so the structure gets ignored.
I once worked on a project in which people were seriously freaking out over the performance hit in using virtual functions while parsing the configuration file.
At the same time, the application (a firewall) was performing multiple linear searches through linked lists of several hundred items per packet. These searches were very carefully optimized, so they had to be fast... (sigh). When I switched the system to use STL dictionaries (and later hashes), total throughput jumped threefold, yet some of the developers were worried about the cost of the templates and virtual functions used.
The fact that the algorithm is more important than the details of the implementation is a lesson that everyone (myself included) needs to keep getting pounded into them, because it's so easy to forget.
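(The same lesson in a small, hypothetical C sketch: per-lookup work drops from O(n) for the lovingly micro-optimized linear scan to O(log n) for a binary search over a sorted table, a difference no amount of instruction-level tweaking of the scan loop will make up.)

    #include <stdlib.h>

    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    /* O(n): the kind of loop that tends to get micro-optimized. */
    static int linear_contains(const int *rules, size_t n, int key)
    {
        for (size_t i = 0; i < n; i++)
            if (rules[i] == key)
                return 1;
        return 0;
    }

    /* O(log n): sort the table once with qsort(), then bsearch() per packet. */
    static int sorted_contains(const int *sorted_rules, size_t n, int key)
    {
        return bsearch(&key, sorted_rules, n, sizeof *sorted_rules, cmp_int) != NULL;
    }

    int main(void)
    {
        int rules[] = { 80, 443, 22, 8080, 25 };
        size_t n = sizeof rules / sizeof rules[0];

        qsort(rules, n, sizeof rules[0], cmp_int);   /* one-time setup cost */
        return !(linear_contains(rules, n, 22) && sorted_contains(rules, n, 22));
    }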
There are places where assembler and hardware details matter a great deal. But they are usually places that contain a lot of repetition that can't be removed algorithmically. Graphics are the obvious example.
A recent example:
My brother-in-law gave me one of those boards with pegs in which you try to jump your way down to a single peg remaining. I have no idea what it's called, but anyway....
I decided to be cute, and wrote a 100-line Python script over lunch to find all possible solutions. I was surprised when it hadn't found a single solution by the time I was finished eating. I was a lot more surprised when it hadn't found anything by the end of the day.
So I killed it and started optimizing for performance, tweaking, and trying different things. This kept me occupied over lunch for a couple of weeks, but didn't produce anything else. Finally I started doing some analysis of the problem. The first thing I found was that the search space (for the board I had) was roughly 10**18.
It didn't matter how much I tweaked the details of my search; it wasn't going to find very many solutions in less than a century (actually, it looks like a naive full search will take several thousand years).
So, after wasting several weeks of lunch breaks, I have redefined the problem (find *a* solution) and rewritten my search to use a heuristic. I finished everything but the heuristic at lunch a couple of days ago. The new system will take 100 or even 1000 times as long to perform a jump, but I'm expecting to find a solution before I'm dead.
So, don't get bogged down in the details of an implementation. They won't usually take you very far.
Actually, they DON'T. (Score:5, Interesting)
Actually, they don't.
A study was done, some decades ago, on the issue of whether compilers were approaching the abilities of a good assembly programmer. The results were surprising:
While a good assembly programmer could usually beat the compiler if he really hunkered down and applied himself to the particular piece of code, on the average his code would be worse - because he didn't maintain that focus on every line of every program.
The programmer might know all the tricks. But the compiler knew MOST of the tricks, and applied them EVERYWHERE, ALL THE TIME.
Potentially the programmer could still beat the compiler in reasonable time by focusing on the code that gets most of the execution. But the second part of Knuth's Law applies: "95% of the processor time is spent in 5% of the code - and it's NOT the 5% you THOUGHT it was." You have to do extra tuning passes AFTER the code is working to find and improve the REAL critical 5%. This typically was unnecessary in applications (though it would sometimes get done in OSes and some servers).
This discovery led directly to two things:
1) Because a programmer can get so much more done and working right with a given time and effort using a compiler than using an assembler, and the compiler was emitting better assembly on the average, assembler was abandoned for anything where it wasn't really necessary. That typically means:
- A little bit in the kernel where it can't be avoided (typically bootup, the very start of the interrupt handling, and maybe context switching). (The Unix V6 kernel was 10k lines, of which 1.5k was assembler, and the assembly fraction got squeezed down from then on.)
- A little bit in the libraries (typically the very start of a program and the system call subroutines)
- Maybe a few tiny bits embedded in compiler code, to optimize the core of something slow.
2) The replacement of microcoded CISC processors (e.g., PDP-11, VAX, 68K) with RISC processors (e.g., SPARC, MIPS). (x86 was CISC but hung in there due to inertia and cheapness.)
Who cares if it takes three instructions instead of one to do some complex function, or if execution near jumps isn't straightforward? The compiler will crank out the three instructions and keep track of the funny execution sequence. Meanwhile you can shrink the processor and run the instructions at the microcode engine's speed, which can be increased further by reducing the number of gates and the length of wiring, and end up with a smaller chip (which means higher yields, which means making use of the next, faster fab technology sooner).
CISC pushed RISC out of general-purpose processors again once the die sizes got big: you can use those extra gates for pipelining, branch prediction, and other stuff that lets you gain back more by parallelism than you lost by expanding the execution units. But RISC is still alive and well in embedded cores (where you need SOME crunch but want to use most of the silicon for other stuff) and in systems that don't need the absolute cutting edge of speed or DO need a very low power-per-computation figure.
The compiler advantage over an assembly programmer is extreme both with RISC and with a poorly-designed CISC instruction set (like the early x86es). Well-designed CISC instruction sets (like PDP11, VAX, and 68k) are tuned to simplify the compilers' work - which makes them understandable enough that the tricks are fewer and good code is easier for a human to write. This puts an assembly programmer back in the running. But on the average the compiler still wins.
(But understanding how assembly instruction sets work, and how compilers work, are both useful for writing better code at the compiler level. Less so now that optimizers are really good - but the understanding is still helpful.)
Re:Actually, they DON'T. (Score:4, Insightful)
Okay, you had me up to that sentence. It's my understanding from people I know who actually used VAX assembly that it was a bear, especially if you had to decode the assembly by hand. It had variable-length opcodes; if I remember right, a single instruction could extend upwards of 60 bytes. Oh, and that's not to mention the 11 different addressing modes, which could be mixed and matched on all three operands (dest, left data, right data). Because all instructions could store directly back to memory, it was a beast to decode by hand.
Call me crazy, but somehow that much mixing and matching just doesn't sound like [fun].
I didn't actually use the VAX instruction set, though I did use the PDP-11's, both writing and reverse-engineering. They were both designed by the same guy (Gordon Bell) or teams he led. I'm prepared to defer to people who actually dealt with it that the VAX instruction set was difficult.
But in the case of the PDP11 (where the few-instructions, lots-of-address-modes style came into its own), the plethora of modes actually simplified the job of the assembly programmer (and the reverse engineer). I can tell you this from experience.
Multiple address modes shrank the job of understanding the instruction set by splitting it into two parts: a much smaller set of base instructions than you'd need without the symmetry, and a small set of addressing modes. Rather than having an explosion of special-purpose instructions with their own addressing modes, you have an explosion of combinations of the members of two small sets. Once you learned the two sets, the proper combination to achieve the result you wanted was obvious.
The tricks were few - and brilliant.
- The indirect-increment and indirect-decrement modes let you use any register as a stack pointer or index register, for instance. The "official" stack pointer was just the one that was implied by certain other features: interrupt and subroutine calls, primarily.
- With the program counter as one of the general registers, applying indirect-auto-increment to it gave you inline constants.
- The indirect and indirect increment/decrement addressing modes made any register a pointer. (They also led directly to the ++, --, +=, and -= operators of the C language.)
- The register+offset mode and base/offset duality gets you to particular elements of an argument array, or index into a fixed-location array.
and so on.
If you understand a few things about C (Array/pointer duality, walking arrays with auto-increment, etc.), you have exactly the understanding you need to grok the modes of the PDP11 instruction set. (Which is hardly surprising: I understand that much of C was inspired by that instruction set.)
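(A small C sketch of the idioms in question, purely illustrative: walking an array through a pointer with post-increment, the construct that maps almost one-to-one onto the PDP-11's auto-increment addressing mode, plus pointer arithmetic standing in for array indexing.)

    #include <stddef.h>
    #include <stdio.h>

    /* strlen written the "PDP-11 way": load indirect through p, then bump p. */
    static size_t my_strlen(const char *s)
    {
        const char *p = s;
        while (*p++)                 /* dereference, then auto-increment */
            ;
        return (size_t)(p - s - 1);  /* pointer difference = characters walked */
    }

    int main(void)
    {
        printf("%zu\n", my_strlen("PDP-11"));   /* prints 6 */
        return 0;
    }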
Re:Linux x86 assembly? (Score:5, Interesting)
For example, on the 6502 family (like the 6510 from the C64), you have only three registers: X, Y and A. These registers can only hold a byte each. Most of the variables you have are stored in the zero page, the 256-byte range from address $00-$FF.
Then on the 68k CPU (as in the Amiga, Atari, etc.) you have several more registers which can be used more freely. You have the D0-D7 data registers and the A0-A7 address registers. These can be operated on as bytes, words or longwords as you wish, from wherever you want.
The x86 assembly is written the "wrong way", and is pretty confusing at times. Where I would say "move.l 4,a6" on the 68k, I have to say "mov dx,4" on the x86. Takes a few minutes to adjust each time.
Once you master assembly language on one CPU, it's pretty easy to switch to another.
I still think the 680x0 series are the best.
Not So New Concept (Score:5, Insightful)
While starting Computer Science students off with assembly (without first introducing them to a high-level language) may be a relatively new concept these days, the idea of teaching low-level languages to Computer Science students is not a revolutionary technique whatsoever. Every decent Computer Science curriculum includes several semesters of courses in which assembly language is required, so that students can demonstrate their knowledge of basic computer processes.
That reminds me of a great fortune:
"The C Programming Language -- A language which combines the
flexibility of assembly language with the power of assembly language."
Re:Not So New Concept (Score:3, Informative)
While I admit that it helps you understand the device more, I have to say it's much less intuitive and enjoyable than high-level programming (not that I'm the type to find scripting fun, but you know what I mean).
Re:Not So New Concept (Score:5, Interesting)
Forgetting the Most Important Point (Score:5, Funny)
Nothing else in the Universe can make students grateful -- grateful! -- to be allowed to use C
Re:Not So New Concept (Score:5, Insightful)
I will agree with the parent post: this is not a new concept. Now, teaching assembly to beginners, that might be new.
And I don't know if "great" coders know assembly, but I think knowing assembly is a useful tool in being able to program efficient code. If you understand concepts like division, how bad it is, what the computer is actually doing when your C/C++ or whatever language (that is not interpreted) is compiled, then you are well on your way to being able to produce efficient code.
Re:Not So New Concept (Score:5, Informative)
Re:Not So New Concept (Score:4, Informative)
What's dishonest? It would have been dishonest had I registered a new account to make the submission. However, the way that you know that I was the author was because I _did not_ resort to dishonest tactics. I simply wrote from the third person, which is exactly how I wrote the back cover text for the book, the press releases, etc.
I don't even spamguard my email address, because I want people to know who I am and be able to reach me easily.
Re:Not So New Concept (Score:5, Insightful)
"A new book was just released which is based on a new concept
Re:Not So New Concept (Score:5, Funny)
flexibility of assembly language with the power of assembly language."
The way I heard it was far drier humor: "C: The language combining the power of assembly with the ease of use of assembly."
Re:Not So New Concept (Score:4, Insightful)
Now for my own rant on the topic:
My first programming classes at the University of Virginia had me programming in VHDL and m68k assembler. This wasn't the intent of the curriculum or the CS faculty at all, but rather a result of some schedule conflicts and a first year advisor who wasn't in the CS department and didn't know any better. It was a disaster. Normally students here get their first taste of assembly in a course that works its way down to it from the high-level languages they'd been introduced to first. My pain with assembly distracted from the course material on architecture. I learned more about efficient programming during two lectures on C# than I did while banging my head against the desk writing m68k assembler.
Anyone who has benchmarked the C++ standard template library extensively will tell you that its fairly complicated, safely implemented data structures are incredibly fast. Using an STL deque as an array, without any deque-specific operations, is actually faster than using a raw array, and a raw array is essentially the same machine code whether you write it in C++ or assembly. The STL vector is faster still.
Moral of the story? Compilers are amazing. This has not always been the case, but it is now. Writing code in assembly results in a product that takes forever to create, is less likely to correctly handle special cases, is impossible to debug, is unreadable to those who did not write it, and often is slower than compiled high-level code. In fact, JIT compilers are getting good enough that bytecode languages are nearly as fast as well.
It's true that assembly can often be used to get a bit of a speedup. This is why game developers will often write everything in C or C++ except for 3 or 4 functions that they'll implement in assembly. The advantages that high-level languages give for correctness vastly outweigh any minuscule performance gains in almost all circumstances. Assembly's advantages in compactness are becoming moot too, as embedded devices are now capable enough to run full-fledged operating systems.
Aside from teaching architecture, though, assembly does teach an important skill to CS majors, and that is staying up all night to find the one-line bug that is making everything go wrong.
Re:Not So New Concept (Score:3, Insightful)
Otherwise, true x86 assembly is not THAT difficult to understand. People bitch and complain, but it really isn't that hard. Once you understand the concept of registers and interrupts, there really isn't that much more to it besides learning the commands. It really isn't bad at all.
Re:Forget Computer Science! (Score:4, Insightful)
By definition. Science is the application of a rigorous discipline in an attempt to understand nature. Computing has nothing to do with understanding nature and everything to do with implementing logic in physical systems.
This is not to degrade computing. In fact, computing is provably correct, as it is based on logic. Science is a statistical endeavour in which nothing is proven, but theories are constructed which demonstrate usefulness and have not yet been disproven. Computing is, however, built on the results of science.
Maths is a branch of logic. Science is a branch of logic. They are of course cross-fertilising. Computing is so close to father logic as to be almost indistinguishable - it's just a way of logic happening in acceptable time scales.
Of course, you might disagree.
Cheers!
Knuth (Score:5, Informative)
MIXAL (Score:3, Informative)
Somewhere in the middle... (Score:5, Insightful)
There has to be a happy medium IMHO, and I think this is a great start. While my Grandfather was on the cutting edge of the PC revolution, he now has trouble figuring out email, etc, because he operates at too LOW a level (and I feel that he now has no business being online!). Then you have the users who have the same problems because they operate at too HIGH a level (AOL, etc...). The majority of programmers nowadays fall about smack in the middle of these two groups, but I'd argue they should be a little closer to the lower levels than they currently are.
I learned LOGO and BASIC as a kid, then grew into COBOL and C, and learned a little assembly in the process. I now use C++, Perl, and (shudder) Visual Basic (when the need arises). My introduction to programming at a young age through very simple languages really helped to whet my appetite, but I think that my intermediate experiences with low-level languages help me to write code that is a lot tighter than some of my peers'. Let's hope this starts a trend; it would be great if more young (and current) programmers appreciated the nuts and bolts!
Re:Somewhere in the middle... (Score:5, Funny)
Just say to him "Well Grandpa, my motto is anyone who can't describe, with exacting detail, all the functions of every organ in the human body doesn't deserve to live."
Re:Somewhere in the middle... (Score:5, Insightful)
I'm with you there. I learned C, C++ and assembler while at university, and came out with the ability to jump into anything. Give me any language and I can guarantee I'll be churning out useful code in a VERY short amount of time.
Compare this to my brother, 12 years younger than me, who has just completed the same comp.sci course at the same uni and knows only one language: Java. Things change, not always for the better. I know many courses haven't gone to the dogs as much as that, but many have. I'm not surprised the idea of teaching coders how the computer works is considered 'novel'.
I can see a great benefit for humanity the closer computers move to 'thinking' like people, for people. But that's just not done at the hardware level; it's done higher. The people who can bring that to the world are coders, and as far as I'm concerned, thinking in the same way as the hardware works is absolutely essential for comp.sci. Less so for IT.
Re:Somewhere in the middle... (Score:5, Informative)
A recent article devoted to the *macho* side of programming
made the bald and unvarnished statement:
Real Programmers write in FORTRAN.
Maybe they do now,
in this decadent era of
Lite beer, hand calculators, and "user-friendly" software
but back in the Good Old Days,
when the term "software" sounded funny
and Real Computers were made out of drums and vacuum tubes,
Real Programmers wrote in machine code.
Not FORTRAN. Not RATFOR. Not, even, assembly language.
Machine Code.
Raw, unadorned, inscrutable hexadecimal numbers.
Lest a whole new generation of programmers
grow up in ignorance of this glorious past,
I feel duty-bound to describe,
as best I can through the generation gap,
how a Real Programmer wrote code.
I'll call him Mel,
because that was his name.
I first met Mel when I went to work for Royal McBee Computer Corp.,
a now-defunct subsidiary of the typewriter company.
The firm manufactured the LGP-30,
a small, cheap (by the standards of the day)
drum-memory computer,
and had just started to manufacture
the RPC-4000, a much-improved,
bigger, better, faster --- drum-memory computer.
Cores cost too much,
and weren't here to stay, anyway.
(That's why you haven't heard of the company, or the computer.)
I had been hired to write a FORTRAN compiler
Mel didn't approve of compilers.
"If a program can't rewrite its own code",
he asked, "what good is it?"
Mel had written,
in hexadecimal,
the most popular computer program the company owned.
It ran on the LGP-30
and played blackjack with potential customers
at computer shows.
Its effect was always dramatic.
The LGP-30 booth was packed at every show,
and the IBM salesmen stood around
talking to each other.
Whether or not this actually sold computers
was a question we never discussed.
Mel's job was to re-write
the blackjack program for the RPC-4000.
(Port? What does that mean?)
The new computer had a one-plus-one
addressing scheme,
in which each machine instruction,
in addition to the operation code
and the address of the needed operand,
had a second address that indicated where, on the revolving drum,
the next instruction was located.
In modern parlance,
every single instruction was followed by a GO TO!
Put *that* in Pascal's pipe and smoke it.
Mel loved the RPC-4000
because he could optimize his code:
that is, locate instructions on the drum
so that just as one finished its job,
the next would be just arriving at the "read head"
and available for immediate execution.
There was a program to do that job,
an "optimizing assembler",
but Mel refused to use it.
"You never know where it's going to put things",
he explained, "so you'd have to use separate constants".
It was a long time before I understood that remark.
Since Mel knew the numerical value
of every operation code,
and assigned his own drum addresses,
every instruction he wrote could also be considered
a numerical constant.
He could pick up an earlier "add" instruction, say,
and multiply by it,
if it had the right numeric value.
His code was not easy for someone else to modify.
I compared Mel's hand-optimized programs
with the same code massaged by the optimizing assembler program,
and Mel's always ran faster.
That was because the "top-down" method of program design
hadn't been invented yet,
and Mel wouldn't have used it anyway.
He wrote the innermost parts of his program loops first,
so they would get first choice
of the optimum address locations on the drum.
Programming or CompSci (Score:5, Insightful)
writing an RB tree or an A* search in assembly would be a huge pain in the ass, if you ask me.
compsci is in large part about data structures: how to choose the right data structure, how to get the most out of an algorithm by picking the best data structure, etc...
but i didn't read the book, so i'll just go back to my websurfing now...
CompSci's about more than that... (Score:3, Insightful)
Re:Programming or CompSci (Score:4, Insightful)
I did learn assembly language first. I wouldn't claim to be a wizard (although I'm certainly an old-timer); but I concur with the premise that learning assembly language makes you a better programmer *and* computer scientist. Assembly language exposes you to the basic architecture of the computers that most of us work with, and I believe that helps one to understand everything from why certain data structures are preferable in certain situations to basic computational complexity.
Not new (Score:3, Informative)
Good idea, Bad Idea (Score:3, Insightful)
Re:Good idea, Bad Idea (Score:3, Informative)
Re:Good idea, Bad Idea (Score:3, Interesting)
What we need if we want to stay at the forefront is for there to be fewer programmers of a higher quality. For the past decade we have had too many people not getting turned off by CS. So they enter the workforce as commodity disposable workers, and they tarnish the reputation of you guys who are good too. Now our jobs go to India and either they have better or equal work
Re:Good idea, Bad Idea (Score:3, Interesting)
The problem with your idea is that the "simple to use" languages that are most often taught are too simple and are too many generations removed from what the computer was actually doing. Years ago, even using BASIC on a TRS-80 would teach you a lot about how the machine worked. To get the best performance out of the thing you had to at least
Re:Good idea, Bad Idea (Score:5, Insightful)
I completely disagree. Assembly is actually one of the simplest languages around. There is little syntax, and hardly any magic words that have to be memorized. Assembly makes an excellent tool for learning basic CS fundamentals; you get a very direct feeling for how CPUs work, how data structures can be implemented, and why they behave the way they do. I wouldn't recommend assembly for serious programming, but for getting an understanding of the fundamentals, it's hard to beat.
Re:Good idea, Bad Idea (Score:4, Insightful)
I disagree. Personally, I learned Basic, then x86 asm, then C (then quite a few more, but irrelevant to my point). Although I considered assembly radically different from the Basic I started with, it made the entire concept of "how the hell does that Hello World program actually work?" make a whole lot more sense.
From the complexity aspect, yeah, optimizing your code for a modern CPU takes a hell of a lot of time, effort and research into the behavior of the CPU itself. But to learn the fundamental skill of coding in assembler, I would consider it far less complex than any high-level language. You have a few hundred instructions (of which under a dozen make up 99% of your code). Compare that to C, where you have literally thousands of standard library functions, a good portion of which you need to understand to write any non-trivial program.
There are already problems with people interested in CS getting turned off by intro/intermediate programming classes.
You write that as though you consider it a bad idea...
We have quite enough mediocre high-level hacks (which I don't mean in the good sense, here) flooding the market. If they decide to switch to English or Art History in their first semester, all the better for those of us who can deal with the physical reality of a modern computer. I don't say that as an "elitist" - I fully support those with the mindset to become "good" programmers (hint: If you consider "CS" to have an "S" in it, you've already missed the boat) in their efforts to learn. But it has grown increasingly common for IT-centric companies to have a handful of gods, with dozens or even hundreds of complete wastes-of-budget who those gods need to spend most of their time cleaning up after. We would do better to get rid of the driftwood. Unfortunately, most HR departments consider the highly-paid gods as the driftwood, then wonder why they can't produce anything decent.
Hmm, okay, rant over.
New? (Score:5, Insightful)
Wussies (Score:5, Funny)
Re:Wussies (Score:5, Funny)
REAL programmers use cat >
Your book? (Score:5, Informative)
What you meant to say was that your new book has just been released. If you're going to pimp your wares on Slashdot, at least put an appropriate disclaimer on. That said, I completely agree with the premise of the book. I've met a lot of mediocre programmers, and a few good ones. But I've never yet met a real star that didn't have some background in assembly language programming. Personally, I haven't written anything in assembly in well over a decade. But that fact that I can do so if needed makes me a better programmer, and I'd recommend it to any aspiring coder as a key skill to learn. I wouldn't say IA32 is a particularly nice introduction (I'd start with a cleaner, simpler architecture, such as 6502), but it is at least widely available to anyone that wants to study it...
Re:Your book? (Score:5, Interesting)
Re:Your book? (Score:5, Interesting)
Had he admitted that it was his own book, I might've actually read about the book, read the summary and seen if I was interested (as I'm a developer and have a degree in CS).
But when he doesn't admit that and writes obviously biased remarks regarding knowing assembly to be a good programmer, I can't help but view it skeptically. In fact when I saw the response that it was his book I didn't even bother reading it anymore. All the words he posted lose credibility.
And considering I got Score: 5 quite quickly, I have a feeling other people would agree with me. People don't like being "tricked" into buying stuff. It's the same reason why people don't like vendor lock-in and hate Microshaft.
If you tell people it's your book and you give an open and honest review/opinion regarding it, people will actually respect that and read about the book. Hey, it's just my guess, but I think I'm right.
Assembly language IS EASIER (Score:3, Insightful)
Probably a bad idea (Score:3, Insightful)
1) Difficulty - Assembly is harder to learn (and create meaningful programs with) than C++, or Java, which is replacing C++ in a lot of college curricula. This means that students will be spending more time learning assembly, and less time learning about complicated algorithms and the things you really should be learning about (since languages change but algorithms are standard).
2) Job practicality - 99% of CS grads aren't going to use assembly in their day to day jobs. They will most likely be programming in Java, or VB, or some web language (PHP/ASP/etc). Maybe some C++. But unless you are doing something that requires the control that assembly can provide, like real-time software or game engine development, you simply aren't using assembly at work.
If it's harder to learn/teach, and you won't use it after you graduate, I can't see the point in teaching it at universities.
Mark
Re:Probably a bad idea (Score:5, Insightful)
It's like learning Latin. Nobody actually uses it, but it can give you a deeper understanding of the languages that are based on it.
TW
Available under GNU FDL (Score:5, Informative)
http://savannah.nongnu.org/projects/pgubook/ [nongnu.org]
It's also being used at Princeton [princeton.edu]
assembly? bah - real men program with punchcards (Score:3, Informative)
For those of you insane enough to take the plunge, check out this FREE online introduction course [ccsu.edu] (no reg, don't ya love it). The guy who wrote it is pretty wacky. I took his Java introductory course [ctstateu.edu] and it was hip as well as very educational.
From the linked page (Score:5, Funny)
Yeah. Give my GF a book on Linux Assembly programming. That should get those panties off in a hurry.
Computer Science Curriculums (Score:3, Interesting)
Here at the University of New Brunswick (Canada), they may not teach assembly language "first", but we do have a second-year course dedicated to assembly and the inner workings of computers. My only problem with it is that we learn the Motorola M68HC11 CPU and not current ones. Sure, it's easier to learn and understand, but most computers we work on today are x86-based.
My 2 cents.
A good idea (Score:3, Insightful)
For the record (Score:5, Interesting)
I think it's a little weird to call this "Learning Computer Science via Assembly Language." It's programming, not computer science. Computer science is really only marginally about computers. It has to do more with algorithms, logic, and mathematics.
You can study computer science, and produce new knowledge in the field, without ever touching a computer.
This misunderstanding is, I think, part of the reason so many students drop out of CompSci. They head into it thinking it's about programming, and are startled to find that computation and programming are not equivalent.
That's why the Compilers course at PSU is considered the "filter" which kills off all the students who aren't really interested in computer science. They really need to spin off a separate "Software Engineering" school for these students, since what they really want to study is programming.
Disclosure: The submitter is the Author. (Score:5, Informative)
Re:Disclosure: The submitter is the Author. (Score:4, Insightful)
Great concept. (Score:5, Insightful)
Anyone who disagrees with this probably doesn't have much experience coding in assembler to begin with. Asm really is fairly easy; the trick is that most who teach asm actually spend too much time on those computer concepts and not enough time on actual real coding. It's wonderful understanding how the machine works, and necessary to write good assembler, but you should start with the two pages of understanding that are needed to "get" asm at all.
Then teach language basics and THEN teach about the machine using actual programs (a text editor, other simple things), explaining the reason they are coded the way they are in small chunks. Instead of handing out a chart of BIOS calls and a tutorial on basic assembler, introduce BIOS calls in actual use in a program; most of them are simple enough that when shown in use they are quite clear and anyone can understand them.
After all, assembler (pretty much any assembler) is composed of VERY simple pieces. It's understanding how those pieces can be fit together to form a simple construct, how those simple constructs form a simple function, and how those simple functions form a simple yet powerful program that teaches someone programming. Learning to program this way keeps things easy, but still yields a wealth of knowledge about the system.
It also means that when you write code for the rest of your life, you'll have an understanding of what this or that form of loop does in C (insert language here) and why one is going to be faster, since simply looking at the C (insert language here) concepts doesn't show any benefit of one over the other.
X86??? OMG that sucks... (Score:5, Insightful)
Is it just me???
I DO think it's a good idea to be teaching assembly, though I'm not so sure about making it the core of a comp sci program. I started playing with assembly fairly early, on the 6502 and Z80, and then later with the 68000 and IBM 370. It's good to know, but I wouldn't do major stuff in it anymore. That's what high-level languages are for. You only drop to assembly when you have to, for speed or space.
Not a good idea (Score:3, Interesting)
emulator (Score:4, Insightful)
Assembler-first might work with beginners if it was on an emulator where they could see exactly what was happening, and there was no way to crash it. Otherwise, I just don't see the point of making things harder.
Of course if you really want to make it hard, you hand every twelve-year-old kid a copy of Knuth and a hardware implementation of Knuth's hypothetical processor. Then our generation could be completely assured of job security.
It depends where you want to go in CS (Score:5, Insightful)
Implementation specific vs. generic... (Score:5, Informative)
For example, a b-tree data structure is fundamentally the same thing whether you implement it in 32-bit ARM assembly language or 16-bit x86 assembly language or C or Java.
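(As a purely illustrative sketch, here's roughly what the shape of a B-tree node looks like in C; the same fields would exist, under different notation, in ARM or x86 assembly or Java. The branching factor chosen here is arbitrary.)

    #define MIN_DEGREE 16   /* arbitrary branching factor, for illustration only */

    struct btree_node {
        int  nkeys;                                   /* keys currently in use */
        int  leaf;                                    /* nonzero if no children */
        long keys[2 * MIN_DEGREE - 1];                /* keys, kept in sorted order */
        struct btree_node *children[2 * MIN_DEGREE];  /* child pointers (unused in a leaf) */
    };

    int main(void)
    {
        struct btree_node root = { .nkeys = 0, .leaf = 1 };   /* an empty tree */
        return root.nkeys;                                    /* 0: nothing stored yet */
    }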
To understand how assembly language works, you need to understand how a processor works, how instruction decoding works, how register transfer language works, how clocking a processor makes it accomplish things. To understnad how registers hold values electrically and transfer values between registers you need to understand some physics and electronics.
To understand how a compiler takes a source language and translates it into a target language, you need to understand a little about the kinds of languages computers can understand (Context-Free Languages) and how they can parse them (Context-Free Grammars). Delving into that field will lead to the core theory of computer science, what is possible with machines in general and what is impossible.
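(For a taste of that compiler step, here's a toy, hypothetical C sketch: a recursive-descent parser-evaluator for the tiny context-free grammar expr := term (('+'|'-') term)*, term := factor (('*'|'/') factor)*, factor := digit | '(' expr ')'. A real compiler translates to a target language instead of evaluating, but the parsing idea is the same.)

    #include <stdio.h>

    static const char *p;      /* cursor into the input string */
    static int expr(void);     /* forward declaration: the grammar is recursive */

    static int factor(void)
    {
        if (*p == '(') {
            p++;               /* consume '(' */
            int v = expr();
            p++;               /* consume ')' */
            return v;
        }
        return *p++ - '0';     /* a single-digit number */
    }

    static int term(void)
    {
        int v = factor();
        while (*p == '*' || *p == '/')
            v = (*p++ == '*') ? v * factor() : v / factor();
        return v;
    }

    static int expr(void)
    {
        int v = term();
        while (*p == '+' || *p == '-')
            v = (*p++ == '+') ? v + term() : v - term();
        return v;
    }

    int main(void)
    {
        p = "(1+2)*4-5";
        printf("%d\n", expr());   /* prints 7 */
        return 0;
    }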
A real computer science program at a university will take you through all of these subjects over several years, allowing for some excursions into other things like databases and cryptography. A real computer science program is always theory with projects that are applied to actual implementations.
How about an alternate? (Score:3, Funny)
Ugh... x86 (Score:3, Insightful)
Learning x86 isn't a bad idea, and for most nerds programming assembly that's probably where it will be most useful, but I just think it's a better idea to start off programming ASM on something a little more enjoyable so that you can really learn to appreciate it before diving in with x86.
Learn assembly first, by all means (Score:4, Insightful)
To be a programmer without ever... (Score:4, Funny)
To be a programmer without ever learning assembly language is like being a professional race car driver without understanding how your carburetor (sic) works.
To which I reply: To be a book writer without ever learning how to spell properly is like trying to teach programming by starting with assembly languages.
ASM is not the place to start. (Score:5, Informative)
Having taught an assembly/intro computer architecture class, I agree with the sentiment that students who get "under the hood" gain valuable knowledge and working skills: not just pounding out ASM, but learning how the machine works. Point agreed.
Also having taught first year computer science students, and seen how some of academia's transitions in pedagogy affected students... I have to say that the idea of teaching first year students in assembly is friggin' daft.
My reasoning is the same as why I strongly advocated an objects-first teaching model. It is increasingly critical for students to build a strong sense of software design and abstraction early on. This foundation makes students much better prepared to solve problems of many different scales (asm to component-systems) in the long run.
There's evidence from a paper in one of the Empirical Studies of Programmers workshops that this approach does trade off design skills against purely algorithmic reasoning for students at the end of their first year. But my own experience, as well as that of some prominent Comp Sci Education (CSE) folks, seems to indicate that this is far more than compensated for as a student's skills grow.
Here's my theory as to why this is the case:
The details of debugging, algorithmic thinking, and problem solving are very much skill-building exercises that really require time and exposure to improve. But in my experience it is much more difficult for students to build a good design sense on their own. Once the framework for thinking in terms of good abstractions is laid down, it provides much stronger support for later filling in all of those gory low-level details.
Historical perspective: Ironically, this same reasoning is much of why I believe that academia's switch to C++ from languages like Pascal, Modula-2, etc. was an educational disaster for many years. The astute reader is now thinking: "hey, you just said you like objects-first; what up?" In the Procedural Era, many schools wouldn't expose students to C in the first year, as it had too many pitfalls that distracted from learning the basics of algorithmic thinking and important abstraction skills. Once the foundation was in place, it was okay to switch 'em to C for the rest of the program.
When C++ and the early object boom really hit, there was big pressure to teach first-year students using C++. At one point in the mid-'90s, upwards of 75% of 4-year institutions were teaching their first year in C++. Thus a language with plenty more pitfalls than C, previously shunned for its pedagogical failings, entered the classroom. Combined with a lack of proper OO mental retooling on the part of first-year instructors and faculty, this made for something of a skills disaster on a broad scale. At best, students learned "Modula-C" instead of good OO style. At worst, they were so confused by this melange of one-instance classes and sloppy hybrid typing that they didn't get a cohesive foundation whatsoever.
This book (Score:5, Informative)
Computer Scientists aren't programmers (Score:5, Insightful)
Meanwhile, the coder types graduate with a B.S. or maybe a master's, then go into commercial development shops and crank out code, forgetting as much as they can about red-black trees and other subtle CompSci concepts.
So if you want to crank out programmers, then assembly is probably a good thing. God knows I learned a lot from the assembly classes I took.
If you're trying to scare students away, then assembly is also a good tactic. Nothing like a good hex dump to make some non-CompSci students' eyes glaze over. Sort of like making people take biology or physics, but instead of teaching about cells and Newtonian motion, jumping right into the finer points of quantum mechanics or amino acid chemistry.
On the other hand, for 2nd-year CompSci students, assembly is probably a good thing to get out of the way. It really sucks, for example, to take economics for 4 years only to learn at the end, "just kidding, reality is too complex to model, so these are all just gross oversimplifications." Sort of like thinking programming == Java, then finding out how it all _really_ works.
CS != uber-coder degree (Score:5, Insightful)
Perhaps it's time that computer science curriculums start teaching assembly language first.
It's more critical they actually teach computer science first, instead of programming. A new CS hire, assuming their school was worth a damn, can learn a new language. I want to know if they have the math background to understand the problems that will be handed to them and that they have the ability to self-learn.
You're a little bit confused. (Score:5, Insightful)
Computer science isn't "knowing computers on a deeper level." Computer science is algorithms and lots of math. Computer scientists don't care about how a computer works. They don't care about the language either. They are interested in data structures and how to work with them. What language is in use is really unimportant, be it Java or Assembly.
From a ring counter to OOP (Score:5, Interesting)
That was in 1965 and the unit was the only one for our whole school district and worth about $30,000 (CDN - about $35000 US at the time)
From there to today's Object Oriented Programming languages has been an interesting time. I wouldn't have missed it for anything, and I honestly think that living through it has given me a perspective that many more recent programmers don't have and IMHO need, sometimes.
Where "brute force and ignorance" solutions are practical, there is no gain in knowing enough about the underlying hardware and bit twiddling to make things run 1000% faster after spending 6 months re-programming to manually optimize. In fact, since (C and other) compilers have become easily architecture tuned, there really are few areas where speed gains from hardware knowledge can be had, let alone made cost effective. Most are at the hardware interface level - the drivers - most recently USB for example.
If you're happy programming Visual Basic, and your employer can afford the hardware costs of ramping up your single-CPU "solution" to deal with millions of visitors instead of hundreds, then you don't need to know anything about the underlying hardware at the bit level.
On the other hand, if you need to wring the most "visits" out of off-the-shelf hardware somehow, then you need to know enough to calculate the theoretical CPU cycles per visit.
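The arithmetic behind that is simple back-of-the-envelope stuff. Here's a sketch in C with made-up numbers (the clock rate, traffic, and peak factor are hypothetical illustrations, not measurements):

#include <stdio.h>

/* How many CPU cycles can each visit consume before a single box
 * falls behind?  All figures below are invented for illustration. */
int main(void)
{
    const double clock_hz       = 2.4e9;      /* 2.4 GHz CPU           */
    const double visits_per_day = 2.0e6;      /* 2 million hits/day    */
    const double peak_factor    = 4.0;        /* peak load vs. average */

    double visits_per_sec   = visits_per_day / 86400.0 * peak_factor;
    double cycles_per_visit = clock_hz / visits_per_sec;

    printf("peak visits/sec : %.1f\n", visits_per_sec);
    printf("cycle budget    : %.0f cycles per visit\n", cycles_per_visit);
    return 0;
}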
Somewhere between these two extremes lies reality.
Today I use my hardware knowledge mostly as a "bullshit filter" when dealing with claims and statistics from various vendors. I have an empirical understanding of why (and under what circumstances) a processor with 512 KB of L2 cache and a 500MHz front-side bus might be faster than a processor with 256 KB of L2 cache and an 800MHz FSB, and vice versa. Same thing for the cost-effectiveness of SCSI vs. IDE when dealing with a database app vs. access to large images in a file system (something that came up today with a customer when spec'ing a pair of file servers, one for each type of application).
Back in the mid 70s I dealt with people who optimized applications on million $ machines capable of about 100 online users at one time. Today I deal with optimization on $1000-$3000 machines with thousands of 'active' sessions and millions of 'hits' per day. Different situations but similar problems. Major difference is in the cost of "throwing hardware at the problem" (and throwing the operating systems to go with the HW - but then I use Linux so not much of a difference ;)
Bottom line is that understanding the underlying hardware helps me quite a bit - but only as a specialist in optimization and cost-effectiveness now, not in getting things to work at all as in the past.
Re:Not necessarily the mark of a great programmer (Score:4, Funny)
ps. Don't make them learn x86 assembly. I think that's banned under the Geneva convention.
Re:Not necessarily the mark of a great programmer (Score:3, Insightful)
Re:No, but... (Score:5, Informative)
Try looking at the asm output from GCC at -O2 on those two statements.
Knuth had reasons for using ASM that were a lot better than that. It gives you a better idea of how things are laid out in memory, because you have to do it yourself. It's easier to do detailed performance analysis of algorithms, because you can get exact cycle counts. (Which in turn helps train your intuition, and teaches you how to tell from a CPU's instruction set how well it will do at various things, so you can tune your algorithms.) You can look at how cache affects things.
Take a look at his reasons [stanford.edu].
Re:It is not the language, it is the paradigm. (Score:5, Informative)
1. Imperative
-- 1a. Procedural (Pascal/C/BASIC)
-- 1b. Object-Oriented (Eiffel/Smalltalk/Java/C++)
-- 1c. Assembly language
2. Functional-Type
-- 2a. Pseudo-functional (Scheme/Lisp)
-- 2b. Pure functional (Haskell/ML/Pure lambda calculus)
3. Declarative (Prolog)
Imperative languages are based on the execution of individual commands. Fundamentally they are based on the concept of assignment -- moving data from one place to another. Functional languages are based on the evaluation of expressions and the absence of side-effects. Pseudo-functional languages have variables, loops, and side-effects but are mainly based on functional concepts. Declarative languages are based on the concept of goals, and the recursive description of how those goals should be achieved, or the definition of what constitutes achievement of the goals.
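You can see the assignment-versus-expression distinction even within a single language. Here's a contrived C sketch (C is not a functional language, so the second version only has a functional flavor, but it shows the difference in mindset; the names are my own):

#include <stdio.h>

/* Imperative style: the answer emerges from repeated assignment to
 * total and i, i.e. from state changing step by step. */
long sum_to_imperative(long n)
{
    long total = 0;
    for (long i = 1; i <= n; i++)
        total += i;
    return total;
}

/* Functional flavor: no assignments, no side-effects; the answer is
 * defined as an expression in terms of a smaller case. */
long sum_to_functional(long n)
{
    return n == 0 ? 0 : n + sum_to_functional(n - 1);
}

int main(void)
{
    printf("%ld %ld\n", sum_to_imperative(10), sum_to_functional(10));  /* both print 55 */
    return 0;
}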
I'm not sure why you consider Forth a declarative language. To me it seems more like an imperative language with an unusual syntax.
Re:nah (Score:4, Interesting)
Also, you can use object-oriented principles in assembly, just like you can in C. There aren't any convenient keywords or enforced methodologies (i.e. everything is an object), but you can gather sections of code that tie data and associated functionality together (a rough C sketch of the idea appears after the list below). You learn the advantages of modular programs. You learn how to count in binary. It forces you not to code carelessly. You learn good coding principles that you can apply to higher-level languages. In general, you get another tool in your toolbox. I'd say there are different flavors of programming and they aren't mutually exclusive, i.e.:
Object Oriented
Procedural
Event-Based
Interpreted
Managed Memory (garbage collection)
Assembly
To leave one of these out of a CS program leaves me wondering how good the program is.
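As promised above, here's a rough C sketch of tying data and functionality together by hand with a struct and a table of function pointers (all of the names are my own invention). This is roughly what an OO compiler generates for you, and the same trick works in assembly, where the "vtable" is just a table of addresses:

#include <stdio.h>

struct shape;                               /* forward declaration        */

struct shape_ops {                          /* the hand-rolled "vtable"   */
    double (*area)(const struct shape *self);
    const char *name;
};

struct shape {
    const struct shape_ops *ops;            /* which "class" am I?        */
    double a, b;                            /* dimensions, reused per shape */
};

static double rect_area(const struct shape *s)   { return s->a * s->b; }
static double circle_area(const struct shape *s) { return 3.14159265 * s->a * s->a; }

static const struct shape_ops rect_ops   = { rect_area,   "rectangle" };
static const struct shape_ops circle_ops = { circle_area, "circle"    };

int main(void)
{
    struct shape shapes[] = {
        { &rect_ops,   3.0, 4.0 },          /* 3 x 4 rectangle            */
        { &circle_ops, 2.0, 0.0 },          /* circle of radius 2         */
    };

    /* "virtual dispatch": the call site doesn't know the concrete type */
    for (int i = 0; i < 2; i++)
        printf("%s area = %f\n", shapes[i].ops->name,
               shapes[i].ops->area(&shapes[i]));
    return 0;
}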
Your analogy is incorrect, among other things (Score:4, Insightful)
Being able to tell your crew that you think your car is leaning out under hard acceleration, or that your suspension is too stiff or unbalanced, is made easier if you understand the physics and engineering involved. Most professional race car drivers know a very great deal about these things. Unless they were born rich, most dedicated racers build and repair their own cars and know a great deal about the tools of their trade.
I have an EE degree. I was taught how to build registers from logic gates; how to build counters and adders from those; how to form the basics of a primitive CPU and implement one in VHDL; how to program x86 assembly; and how the electrical signals interact to make those things possible, down to the physics of semiconductors that makes those logic gates work (a toy C sketch of the gate-to-adder idea appears at the end of this post). All of those things have made me able to program computers more effectively at a high level. Why would we expect less from a CS program? Computational engines, computers, are the things that drive the CS profession. I would expect anyone in the field to be intimately aware of their theoretical underpinnings.
Ironically, they have also made me a much better driver, as I am more intimately aware of the workings of my car's EFI system, and how to tune it, than most people may be.
Would you go to a doctor who has never taken chemistry? Didn't think so.
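As an aside promised above, here's a toy C model of building an adder out of nothing but gate-level AND, OR, and XOR operations (purely illustrative and entirely my own sketch; real hardware would be described in VHDL or Verilog, but the logic is identical):

#include <stdio.h>

/* One-bit full adder: sum = a ^ b ^ cin, carry = majority(a, b, cin). */
static void full_adder(unsigned a, unsigned b, unsigned cin,
                       unsigned *sum, unsigned *cout)
{
    *sum  = a ^ b ^ cin;
    *cout = (a & b) | (a & cin) | (b & cin);
}

/* Chain four of them, the carry rippling from bit 0 up to bit 3. */
static unsigned add4(unsigned x, unsigned y, unsigned *carry_out)
{
    unsigned result = 0, carry = 0;
    for (int i = 0; i < 4; i++) {
        unsigned s;
        full_adder((x >> i) & 1, (y >> i) & 1, carry, &s, &carry);
        result |= s << i;
    }
    *carry_out = carry;
    return result;
}

int main(void)
{
    unsigned carry;
    unsigned r = add4(0x9, 0x7, &carry);   /* 9 + 7 = 16 -> 0 with carry set */
    printf("sum = %u, carry = %u\n", r, carry);
    return 0;
}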