Why Learning Assembly Language Is Still Good 667
nickirelan writes "Why Learning Assembly Language Is Still a Good Idea by Randall Hyde -- Randall Hyde makes his case for why learning assembly language is still relevant today. The key, says Randall, is to learn how to efficiently implement an application, and the best implementations are written by those who've mastered assembly language. Randall is the author of Write Great Code (from No Starch Press)."
Debugging (Score:5, Insightful)
Another reason: Sooner or later, you'll need to debug something without a source-level debugger. Knowing how to debug raw assembly language has saved my ass many times.
don't bother........ (Score:5, Insightful)
Hate to say, but the kind of optimization you learn about by knowing assembly language is just not necessary for most programmers these days.
I learned programming in the 80's, and I did learn assembly language, starting with 6502 assembly. I would subconsciously code my C so that it would produce faster code. Every block of code I wrote would be optimized as much as practical. My code was fast and confusing.
When coding Perl or Java I would keep in mind the details of the underlying virtual machine so I could avoid wasteful string concatenation or whatever. I cache things whenever possible, use temp variables all the time, etc., etc.
I've spent the last few years trying to UNLEARN this useless habit. There is just no need. And in highly dynamic languages like Ruby, it's pointless. You can't predict where the bottlenecks will show up. Almost every project I've worked on has either had no performance problems, or had a couple of major performance problems that were solved by profiling and correcting a bad algorithm.
Stuff like XP and agile development have it right: code as simply as possible, don't code for performance, then when you need performance you can drill down and figure out how to do it.
To me a beautiful piece of code is one that is so simple it does exactly what it needs, and nothing more, and it reads like pseudo-code. Minimalism is the name of the game.
So my advice is, don't learn assembly language. Learn Lisp or another abstract language. Think in terms of functions and algorithms, not registers and page faults. Learn to program minimally.
On another note, the tab in my Konqueror for this article reads: "Slashdot | Why Learning Ass...". Heh. :-)
Best, efficient or cheap (Score:4, Insightful)
Smaller code? We can hope... (Score:4, Insightful)
Misuse of high level languages such as Visual Basic, as well as off-the-shelf components for everything, has led to a level of code bloat in today's applications that is inexcusable.
(note: off the shelf components and high level languages aren't inherently bad, just not always suitable for commercial applications.)
Also, given that modern optimizing C compilers can often optimize better than humans, it may make sense to embed critical sections of assembly into C code, and let the compiler optimize the rest...
Also, whatever happened to profiling? Has this become a lost art among developers? Time your code. See where it bogs down. Find the fat. Cut it out. Please.
--
This post brought to you by Save the CPU Cycles!
Yes and no... (Score:5, Insightful)
The machine thinks one way. A human thinks in another. Code that is well designed for easy updating, and extending, is code that is easy for a human to understand. If that is not the most efficient way for the machine to do it, that may be the price for 'great' code in this project. (The ideal balance depends on the project, of course. A kernel should be machine-efficient, for example.)
Re:Schools not teaching assembly anymore (Score:5, Insightful)
Some of the more serious assembly languages still scare the hell out of me though; have you ever looked at the assembly for the TI C6x DSPs? It took me quite a while to come to terms with something simple, and the C compiler can use the 8 parallel execution units better than I can...
I disagree (Score:5, Insightful)
But let's be honest here. Computer Science 101: an efficient algorithm coded in an inefficient way will always beat out an inefficient algorithm coded by hand in 100% optimized assembly. I'll put my crudely coded Javascript quicksort against your finely honed 100% assembly bubblesort any day. Not only will my algorithm beat the pants off of yours, but I'll also code it in far less time and with way fewer debugging sessions than you would. Also, the higher-level the language, the better it is for security. How easy is it to introduce things like buffer overflows, array-out-of-bounds errors, etc. in assembly? How easy is it to do that in Java, C#, etc.?
So yes, writing in assembly language is still good and has its places. But let's keep it to those places, shall we?
It's like learning any language (Score:3, Insightful)
Exposure useful, mastery not needed (Score:5, Insightful)
Knowing what assembly is and how it works is beneficial. Mastery of assembly is completely pointless for anyone outside of OS kernel, compiler construction, and embedded development... which probably means you. Your time will be better spent figuring out how to make Java programs 10% faster most of the time.
desire to teach someone 6502 assembly language (Score:5, Insightful)
Not just efficiency but correctness (Score:4, Insightful)
So What? (Score:5, Insightful)
That may seem impressive to you (especially if you're fourteen), but the fact is that exploits can be done in almost any language.
In other news, this doesn't have a hell of a lot to do with the posted article, either.
Re:don't bother........ (Score:5, Insightful)
Re:So What? (Score:4, Insightful)
Re:Schools not teaching assembly anymore (Score:5, Insightful)
I'm not saying that everyone has to become a proficient assembly level programmer but I think a lot of people would be a lot better HOL programmers if they understood something about assembly language. I wonder how many Windows buffer overflow exploits are simply the result of someone not understanding that just because you can't express it in a HOL doesn't mean you can't exploit it from assembly code.
Not sure about his premise (Score:2, Insightful)
Second, and perhaps more important, I think most of the performance issues in such environments stem from architectural, not algorithmic, issues. Think J2EE, where if you are not careful enough you end up doing a round trip to the server every time you set a property in an object.
In a sense, he covers this in the article when he writes:
"This doesn't mean, of course, that a practicing engineer should sacrifice project schedules, readable and maintainable code, or other important software attributes for the sake of efficiency.
What it does mean is that the software engineer should keep efficiency in mind while designing and implementing the software."
But of course, most of his emphasis is on knowing how the machine works. I subscribe more to the modern view that efficiency (and other non-functional requirements, such as the maintainability also mentioned in the above quote) is an architectural concern: something that shapes the architecture, and that no amount of algorithm substitution and the like can fix if it's gotten wrong in the first place.
far more important than optimization (Score:5, Insightful)
Re:Smaller code? We can hope... (Score:3, Insightful)
In the past, I used to do a lot of assembly language programming, but would always end up being burnt by having to completely rewrite everything for a new CPU/graphics card. It's much more productive to write a generic algorithm in C/C++ and use the assembly output to identify where the optimisations can be made. In nearly all cases, I could restructure the C code to match the optimum assembler output.
Re:don't bother........ (Score:1, Insightful)
1) Minimalism and assembly are not mutually exclusive. As you said, optimize at the end, and that optimization could be implemented with assembly.
2) You can't tell me you are not a better programmer because you know assembly. Yes, we don't need to think about machine architecture or vm design as much as we used to, but you can't tell me it's not extremely useful in the circumstances that we do.
The more languages you know, it changes the way you program. Yes, learn assembly, learn lisp, but also learn eiffel, haskell, objective-c and prolog too.
Re:don't bother........ (Score:5, Insightful)
I have seen FAR FAR too many students in my various college programming classes who think nothing of calling functions with 15 parameters and copies of large data structures (not references) and other such things. I really think that assembly should be one of the FIRST things taught to future programmers. So many people I've run up against don't have any idea how computers work. Sure, things are "mentioned" in classes, but so much is lost on them. Something as simple as "passing 2 things is much MUCH faster/easier than passing 10" doesn't get taught.
By passing 10 things, their job is easy. That's all they see. They don't know about registers (other than that they exist and are sorta like variables on the CPU). So they don't know that to pass 10 things you might put some in registers but the rest will have to be passed in memory (which is slow), as opposed to putting everything in registers (if at all possible), which is faster (especially for simple functions).
The only problem with assembly is the catch-22 mentioned in the article: you have to do all sorts of "magic" to print to the screen or read from the keyboard, which can be confusing, and it takes a while to get students to the point where they can start to understand that magic. My school teaches assembly (sorta) on little 68HC12 development boards that have built-in callable routines equivalent to printf and such, so there is little voodoo involved, which is nice.
I'm not saying assembly is necessary, but I DEFINITELY think it's important for programmers to learn how things work under the compiler. I have seen FAR too many hideous bits of code that no one who understood the underpinnings of assembly would ever dream of writing.
Re:don't bother........ (Score:5, Insightful)
Learning assembly isn't all about optimization, either. Being familiar with how the machine works right down to the core will make you a better programmer, period. Personally speaking, it also helps develop that zen-like ability to "think like the computer", and that helps you program not just more efficiently but more effectively, since you can think things out better. You can't tell me you're not a better programmer for having been exposed to it... it simply changes the way you think about the machine.
It can also be argued that "beautiful" code has no bearing on performance. It's also that kind of "oh, performance isn't an issue anymore" and "make the source code pretty" thinking that's the reason we now need gigahertz+ machines with 128MB of RAM just to write a goddamn letter... it's really quite sad that so many programmers just let their applications fill the hardware vacuum they think their users will have, or should have, just because they didn't take an extra day to think about what they're doing and write their code a little more efficiently.
=Smidge=
Re:don't bother........ (Score:2, Insightful)
x86 ain't what it used to be (Score:4, Insightful)
Nowadays, the x86 ISA is just an API...god knows how the core actually executes instructions and in what order, which makes it very hard to optimise code beyond a certain point. You get more mileage from optimising memory access patterns and doing other such dull, dull, dull work. I get my asm coding fix elsewhere nowadays.
Re:Because you can kill any 2.6.x kernel (Score:5, Insightful)
"This bug is confirmed to be present when the code is compiled with GCC version 3.3 and 3.3.2 and used on Linux kernel versions 2.4.2x and 2.6.x. It has been tested to work on, and crash, several lame free-shell provider servers."
If it affects all 2.4 and 2.6 linux kernels, I wouldn't call servers affected 'lame'. Especially free-shell provider servers. That's lame, testing a local exploit on a public shell server.
The problem with assembly... (Score:5, Insightful)
I haven't read this book, but I'd hope there would be some pretty good justification for the above statement. I suspect there isn't, though. First of all, who defines what the "best implementation" is?
As Knuth says, the first rule of program optimization is: "Don't do it." Trying to optimize a program while you're writing it leads to all sorts of problems, including difficult-to-maintain code and increased time and budget for the project, and often the code you optimized isn't even a hot spot anyway.
I used to be very concerned about making my code fast, but I have (over the decades) decided that making it obvious is much more important than speed, particularly in the initial implementation. Profiling allows you to concentrate on the 20% of the code that the program is actually spending 80% of its time in, instead of guessing where the hot spots are going to be.
I've found that another benefit of using simpler code is that I'm more likely to throw away whole sections of it and try radically different algorithms or mechanisms. More complicated code I find I'll just tweak instead of dumping wholesale. Radically different approaches can lead to 10x speedups, where tweaks of existing code may give you 2x speedups if you're lucky.
Don't get me wrong, I'm all for trying different approaches. I'm not sure I would have come to the same conclusion I have now if I hadn't spent quite a long time trying to write optimized code. It was a very different world back then, but I know I wasted a lot of time optimizing code that didn't at all need it. It was an experience though.
Sean
Re:Schools not teaching assembly anymore (Score:5, Insightful)
ECE (especially those with a heavy electrical engineering lean) people deal with microprocessors. Motorola chips have special features that you can't access with most C compilers and thus it is necessary to know assembly.
Also, until recently, finding a good C compiler wasn't cheap. Now, of course, there are free ones.
Coming from an ECE program without a microprocessors class in which you apply Assembly will make you less competitive than the graduates coming from schools in which engineers are taught both practical assembly application, and high level languages.
Learning Assembly from Hyde made me a better coder (Score:2, Insightful)
What Randy Hyde taught me is that it is important for a beginning programmer to quickly learn what kind of instructions the high level code you write is being translated into, how basic machine organization works, and how the compiler and OS figure into running your code. Writing assembly is no longer important (thank goodness); however, the process of LEARNING how to write assembly is a crucial step in a well rounded CS curriculum.
When Randy Hyde taught x86 Assembly to us at UC Riverside it was the toughest lower-division class and the weeder for the people who shouldn't be computer scientists. Without that core people are making it to upper-division and performing very poorly in OS, compilers and architecture.
Re:don't bother........ (Score:5, Insightful)
From an old fart who likes assembly language, total agreement.
Assuming the primary goal is performance, the blunt reality is that about 90% of the code is irrelevant to that performance. Any screw-ups in that code, particularly attempts to "improve" performance, will have indirect deleterious effects on it.
Re:Schools not teaching assembly anymore (Score:2, Insightful)
Re:Debugging (Score:4, Insightful)
No, you can't exactly debug a fubar memory stack with just printf. Maybe in your hello-world program, but not when things get complicated. :) Trust me, I know. I'm writing a rather large network application at the moment and somewhere along the line I must've overshot an array; one mistake can ruin a whole application, and printf'ing won't help you.
Learning how to debug is just as precious to a programmer as learning how to code.
Re:Smaller code? We can hope... (Score:5, Insightful)
You, sir, are insane. Much of my job involves pushing around regular expressions and hash tables (aka associative arrays aka dictionaries). I know several flavors of assembler on distinct hardware platforms (x86, 68k, 6502, MIPS) so I say this out of experience rather than fear of the unknown: I'd rather swallow my own tongue than write anything non-trivial in a low-level language.
Seriously, a lot of people who know what they're doing have provided a huge library of functionality for me to pick and choose from. If I need to write a GUI app, I'll do it in Python with GTK or QT bindings. I am competent to build it in assembler, but why? It wouldn't be portable, it'd shave a very small amount of size from the end product (most of the project's resources are likely to be spent in the GUI libraries and not the core of the program), and would take 20 times longer than necessary.
There are a very few areas where low-level languages make sense. I haven't touched any of them in years.
This is the wrong reason to learn assembly (Score:4, Insightful)
I'm fully in favor of most programmers learning some assembly. But learning how to write efficient code is not the reason to learn it. Assembly should be learned thoroughly by systems programmers (who write operating systems, core libraries, compilers, etc.) and certain embedded programmers, because they might actually need to use that skill directly. Other programmers should learn some, to the extent that it teaches them what's really going on inside the machine, but they should not dwell on it (unless they find it fun). Efficiency should focus on choosing (or developing) the proper algorithms for the application being developed.
If one is going to do programming where pointers would be used (systems programming and lower-level applications programming, such as in C), then I suggest learning assembly as the first language. Two or more decades ago, that advice worked because most people didn't learn to program until they took a class in it. Nowadays, people destined to be programmers are learning some programming by around age 10 (usually in whatever language is easiest to get started in, which is generally not the best for developing larger applications). By the time they've done a lot of programming, they either "get it" with regard to pointers, or don't, and are set that way for life. This is unfortunate (and results in much insecure programming).
Re:Debugging (Score:3, Insightful)
Especially if you're trying to slay one of those infernal "only shows up in a release build" bugs.
Counterpoint (Score:5, Insightful)
With the incredible power provided to us by modern CPUs, efficiency is just about completely irrelevant for 99% of non-game applications. Think: when was the last time you thought "This word processor just doesn't respond to my keypresses fast enough" or "AIM takes way too long to open a new IM window"? The reason these programs aren't getting "faster" (as the article complains) is that there is no way to do so. They spend 99.9% of their time waiting for user input already.
Optimizing code which doesn't need optimization is Bad with a capital 'B'. When optimizing code, there is almost always a tradeoff between efficiency and maintainability. Efficiency often requires cutting corners, killing opportunities for future expansion, or, at the very least, writing ugly code. When that added efficiency does not lead to any noticeable benefit to the user, why do it?
Now, granted, you shouldn't use an O(n) algorithm when an O(lg n) one exists to solve the same problem. However, knowing the difference between O(n) and O(lg n) has nothing to do with knowing assembly. The only benefits you can get out of knowing assembly are constant-multiplier speed increases. And, frankly, shaving off 50% of 0.1% CPU time used is not going to help much.
Really, the speed of modern CPUs is sickening. I can't count the number of times I've written a piece of code, thought "This is going to be so slow...", then watched it execute near-instantaneously. Even when running programs in a prototype programming language I'm working on -- which currently runs about 40x slower than C, because it's a crappy prototype -- this happens to me regularly. The only time your code is going to be noticeably slow is if you are processing a very, very large data set or you are using slow algorithms. In the former case, sure, knowing assembly will help, but such cases are extremely rare in typical applications. In the latter case, find a better algorithm.
Long article makes one point. (Score:4, Insightful)
All too often, high-level language programmers pick certain high-level language sequences without any knowledge of the execution costs of those statements. Learning assembly language forces the programmer to learn the costs associated with various high-level constructs. So even if the programmer never actually writes applications in assembly language, the knowledge makes the programmer aware of the problems with certain inefficient sequences so they can avoid them in their high-level code.
Fair enough. Like he says, it works with any speed processor to make things faster.
Most of the rest sounds like praise of free software. Free software does not suffer "unrealistic software development schedules." Free software authors can go read the source code to gcc, and both gcc and the GNU debugger have had more attention lavished on them than any proprietary equivalent.
right on (Score:3, Insightful)
Don't forget Reverse Engineering (Score:5, Insightful)
Re:Schools not teaching assembly anymore (Score:5, Insightful)
Not that I disagree with your point (actually I agree with it a lot!)
When you just start throwing processors (or thread contexts) at the problem, you will find soon enough that any problem is I/O bound...
Paul B.
It is this kind of bullsh*t attitude (Score:3, Insightful)
Look, exposure to low level languages and limited memory make you think about what you really need in the program and what you can do without.
In one shop I would take code from the consulting firm, trim out 40-60% of the code, and end up with a job running 25-35% faster (and cheaper). And then there were the bugs: 50 lines of useless code in each of three programs was costing the company $60K per year in usage charges to one department!
Or how about transfer time added into an EDI program twice for each item calculated? It was causing Disney to hold $10 million of inventory in the warehouses until I found it.
And it isn't just assembler. It is low level DBMS functionality too. I had one job where the first test ran 12 hours. By re-coding it in a completely different manner it took 5 minutes. The 12 hour method was the method recommended by the vendor of the DBMS. The 5 minute version was mine.
Finally, code can be in a high level language and still be efficient. And there are still plenty of devices out there with limited memory - how about PDAs? or Dive Computers?
Re:don't bother........ (Score:5, Insightful)
Use of assembly doesn't preclude thinking in terms of functions and algorithms. Like nearly any form of programming, it pretty much requires such thought -- in abundance. But given that I have a limited amount of attention to spend on each line of code, focusing on registers, branches, instruction sets and memory layout takes away from time better spent on clarity, modularity, and algorithmic sophistication.
It's much easier to take code written for clarity and correctness and make it fast, than take code written for speed and make it clear and correct. That's what profilers and coverage tools are for. Once you've measured, code your inner loops and bit-fiddling in assembler if you must, but only after your program is working and well-tested.
Problems with article (Score:5, Insightful)
- He faults inefficient coding for the failure of software speed to keep up with CPU speed (or at least calls it a "large part"). This is much less true than he lets on; Amdahl's Law means that the CPU is less and less responsible for the speed of an application, while things such as disk seek/transfer times, memory access times, and network latency all play huge roles in the speed of your computer's software.
- He seems to think that it's not terribly hard to become an "efficient" assembly language programmer. Bzzt, wrong! In the modern era of superscalar architectures, pipelining, processor-specific instructions, branch delays, and memory hierarchies, it takes a hell of a lot of knowledge and experience to beat the performance of a good compiler.
- He apparently hasn't tried any large assembly language porting efforts lately. I'd love to see the effort involved in porting a large x86 assembly language program to a MIPS architecture, all the while maintaining that coveted "ultra-efficiency". The reality is that a good compiler can be reasonably efficient at porting a program to a new architecture, while a programmer usually isn't.
- He also apparently hasn't tried debugging a large chunk of assembly code lately. It is a fact of life that it is very difficult to debug assembly. By using a high-level language, you are increasing the readability of your software, which tends to decrease the number of bugs.
I could go on, but needless to say, I'm not impressed with the numerous assumptions and generalizations about assembly that he makes. Learning assembly will make your high-level programming better, and limited use of it can be appropriate, but using it all over the place is a huge mistake.
Re:don't bother........ (Score:3, Insightful)
I have seen FAR too many hideous bits of code due to people not knowing appropriate data structures. I have seen FAR too many hideous bits of code just because people didn't know what clean code looks like (or didn't know why it helps maintenance).
Sure, you can save some time by passing fewer arguments on the stack. You generally will not save much time that way; if somebody passes 15 arguments to a function, it is usually because they do something with most of them, and that bounds the potential speedup. But you can often save mountains of execution time by using a better algorithm. You can also save mountains of developer time by trying to write clear, concise code in the first place.
Knowing how the CPU works is just one facet of competent programming, and is important to remember; but too many people focus on just one facet and ignore the others.
Crappy programmers make crappy programs (Score:3, Insightful)
A bad assembly programmer will write unreadable and unmaintainable code.
A bad C programmer will write unreadable and unmaintainable code.
A bad Java programmer will write unreadable and unmaintainable code.
A good assembly programmer will write beautifully elegant and efficient code.
A good C programmer will write beautifully elegant and efficient code.
A good Java programmer will write beautifully elegant and efficient code.
Learning assembly language will help you understand how the guts work, and yes, it can be used to optimize the last 20% that compilers can't do (ie for embedded and graphics). It will help you crack registration keys, solve stack bugs, and write viruses because you will have a better understanding of why certain behavior is prevalent in computer code. But you don't absolutely require it to write efficient and clean code.
Re:HLA?...nah. (Score:3, Insightful)
So? HLA is a good teaching tool. Randy is now a university professor.
Even if you don't use assembly, it does make you a better programmer. Learn assembly and pointers make sense. As a teaching tool, assembly has some advantages. You are programming an abstract machine at a simplified level. (If you read Knuth, you are programming a fictional abstract machine!) Some data structures and algorithms are easy to explain in assembly.
Plus, understanding assembly gives you a sanity check on C code. For example, look at the hoops some C programmers go through to get the double length result of a multiply or the quotient and remainder from a divide operation at the same time.
In particular, I remember a library routine that came with a well known MS-DOS C compiler by a very talented programmer. I could see that the source code was written in C and it was a convoluted mess. Worse, it was wrong! About 20 lines of 8088 code would have been equivalent (or 4 lines of '386 code). The author was trying to do some low level bit whacking in C that should have been done in asm in the first place.
Because you can't always trust compilers (Score:5, Insightful)
I've been a programmer for over a decade and I've always found the worst problems to debug are when the problems aren't in your code but in the compiler. Compilers are programs too and have their own bugs. They aren't always 100% accurate at generating correct machine code for your source. And until the compiler gets fixed in the next patch or rev, you may be stuck with broken code unless you switch compilers.
Sometimes disassembling the problem code and inlining correct assembly can be the difference between shipping a product and missing a deadline because you've spent months waiting for the next compiler version to fix your problem for you.
ed
Absolutely (Score:5, Insightful)
Now (~5yrs later) I'm a fully capable programmer and an even better designer. My preference is C, binary file formats, networking protocols, crafting elegant solutions for multiplexing IO. I'm lead on a project used in production by many companies large and small.
I genuinely feel assembler is a vital part of the learning process for a programmer.
Re:don't bother........ (Score:3, Insightful)
To my mind, any programmer (I would not refer to the ivory towers of CS here
Paul B.
Re:Schools not teaching assembly anymore (Score:2, Insightful)
I do think this is important, and am spending a lot of time on my own to learn as much as I can assembler/C-wise, since I know jobs I get here and there (like my summer job) will require high-level languages (Java, C#) and I don't want to lose that low-level knowledge.
It's actually harder for us (the younger generation), because there have been so many abstractions that you really have to be on top of things to understand what's going on at the machine level. At this point, programs tend to go from source to bytecode to being executed on the physical machine, and in order to optimize programs you have to know what's going on at every step. That and you have to have a lot of discipline to learn the low level stuff, since the high level stuff can make you so productive without even thinking about it or putting in any effort. But this discipline ultimately makes you a better high-level programmer too, so it's worth it regardless...
Realtime video enhancement filters needs assembly! (Score:4, Insightful)
I had a lot of the habits that you describe, and I now program simply in C++ for either Linux or XP.
However, I ran into some performance issues with certain critical loops that were executed millions of times, such as a loop that iterates through pixels in image processing, and I wanted to view the disassembly. I understood enough assembly to optimize a tight loop in plain C, and to verify that the compiler's output was just as good as hand-coded non-MMX assembly. (Some compilers do an amazing job now.) The only way to improve performance further in my case would have been to write MMX/SSE/SSE2 for that 0.05% of the program, and even so, I deemed it not worth the effort.
Now, if you are talking about realtime video filters, such as deinterlacing and sharpening (think Adobe Photoshop-style plugins executed for every interlaced video field, 60 times per second for NTSC, 50 for PAL), you still need SIMD operations such as MMX/SSE/SSE2 assembly if you want to do lots of video enhancement in realtime on a live video source.
One example program is the open-source dScaler project - dScaler Realtime Video Processor [dscaler.org]. You can do REALTIME sharpening filters, denoising filters, motion-compensated deinterlace filters, 3D-like chroma filters, diagonal-jaggie removal filters, etc. - all of the above simultaneously, on a LIVE real-time video source from a cheap $30 PCI TV tuner card, on today's high-end Pentium 4 and Athlon systems. None of this would be possible without assembly language. Now they are talking about adding realtime HDTV enhancement (1080 interlaced -> 1080 progressive). Run your cable/satellite/DVD box through a home theater PC running dScaler, hook the PC to your HDTV, and the live homemade "upconversions-on-the-fly" you see are shockingly better looking than the bad-quality upconverted video you watch on TV. (Important: Don't use S-Video output; connect the VGA output directly to the TV using a component-output adaptor. It's 6 times sharper than S-Video. See AVSFORUM's Home Theater Computers Forum [avsforum.com] for more information about getting HDTV-quality video out of your computer to your HDTV television, especially if the television does not have a native VGA input.)
(For watching live realtime video-processed video, I don't recommend a $30 TV tuner card; the power users like to get more expensive cards such as the approx-$250 PDI Deluxe card, a Conexant 23882-compatible card that actually has a Y-Pr-Pb component input for computers! Supposedly better analog signal-to-noise ratio, better A/D converter electronics, better power filtering.)
The point is that you don't need assembly language most of the time, but there definitely are times when it's exceedingly, absolutely critical.
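To give a flavor of what those MMX/SSE/SSE2 inner loops look like, here is a toy brightness filter (my own sketch, not dScaler code). Note that compilers now expose these instructions as C intrinsics, so you can write this without a raw .asm file, though you still have to know the instruction set:

```c
#include <emmintrin.h>  /* SSE2 intrinsics, baseline on x86-64 */
#include <stdint.h>
#include <stddef.h>

/* Hypothetical filter kernel: add a brightness offset to every 8-bit
 * pixel, saturating at 255, 16 pixels per instruction.
 * Assumes n is a multiple of 16. */
void brighten(uint8_t *pix, size_t n, uint8_t delta)
{
    __m128i d = _mm_set1_epi8((char)delta);
    for (size_t i = 0; i < n; i += 16) {
        __m128i p = _mm_loadu_si128((const __m128i *)(pix + i));
        p = _mm_adds_epu8(p, d);    /* saturating unsigned byte add */
        _mm_storeu_si128((__m128i *)(pix + i), p);
    }
}
```

A scalar C loop doing the same thing needs a compare-and-clamp per pixel; the SIMD version does 16 saturating adds in one instruction, which is why these filters can run per-field at 50/60 Hz.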
Re:Debugging (Score:4, Insightful)
1) A race condition.
2) A deadlock.
3) A performance anomaly in I/O bound code.
4) A stack-smasher.
5) Heap corruption.
6) The, uh, printf family of functions.
Re:Smaller code? We can hope... (Score:2, Insightful)
It's also not about optimizing instructions.. to truly optimize on many modern processors requires years of study of that particular architecture in the first place.. that kind of task is best left to the compiler nowadays.
The point is that by coding some stuff in assembly, you learn a lot more about how the machine really works, how the OS really interacts with the machine, and your code.
Again, it's not about optimizing instructions but more about having a critical understanding of what is really going on.. taking out the magic.
Understanding things like:
How are arguments passed? By register, or by stack? Which is faster? Why?
How are shared libraries handled? How does an interrupt affect things?
What is a context-switch?
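To make the first of those questions concrete, here's a tiny example (mine, not the parent poster's). Compile it and read the assembly the compiler emits:

```c
/* Compile with `gcc -O2 -S args.c` and read args.s to see the calling
 * convention in action. On SysV x86-64 the three arguments arrive in
 * registers (edi, esi, edx) and the result leaves in eax; on classic
 * 32-bit x86 they are pushed on the stack instead, which is why
 * argument passing has a cost at all. */
int add3(int a, int b, int c)
{
    return a + b + c;
}
```

Five minutes with the .s file teaches you more about "by register or by stack" than any amount of language-reference reading.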
Re:Because you can kill any 2.6.x kernel (Score:3, Insightful)
I then asked people on IRC channels if anyone was willing to test this before compiling other kernels myself, and I instantly got replies like "Works, took down free-shell provider #ali on EFNet". People who tested on their own systems also confirmed crashes. I'm not sure if I cut&pasted the phrase "lame free-shell provider" from my IRC window or if I actually typed it myself.
I do not encourage hacking; doing so is bad whatever your target is.
Btw, the "l33t howto" is meant to be funny, you know.. a joke.
Another reason: so your mental model is correct (Score:5, Insightful)
Example: When programming in languages like C or C++, you have to know what a stack frame is and basically how it's implemented, so that when something goes wrong you can correctly diagnose the problem. If you just know the corresponding language syntax (i.e. the scoping rules), you won't have the first clue where to start.
This applies to Java as well - just replace "machine code" with "bytecode" and "CPU" with "virtual machine".
In all these cases, a compiler takes your program specification (the source code) and produces the *real* program (in machine code or bytecode) - and that is what is executed and that is what you will be debugging and analysing. If you don't understand basically what machine code is and how it works, you will keep running into brick walls. I've seen this over and over again - the new graduates who just can't see why their program is behaving the way it does, because they never did assembly programming, or studied the run-time environment of programming languages, and so have these bizarre ad-hoc mental models of what's happening that bears little or no relation to reality.
I'm not saying that assembler should be used any more than it is currently, but if we are going to be using compiled languages (C, C++, Java), then it simply *must* be taught. There is simply no way to avoid this if you want to be a half-way productive programmer in those environments.
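For example, here's the classic stack-frame bug that trips up exactly those graduates (a made-up minimal case, not from any particular codebase):

```c
/* BUG: returns the address of a local variable. `local` lives in
 * broken()'s stack frame, which is torn down (and may be reused by the
 * very next call) the moment the function returns. If you know how
 * frames work at the machine level, the problem is obvious; if you only
 * know the scoping rules, it looks like it should work. */
int *broken(void)
{
    int local = 42;
    return &local;      /* undefined behavior; compilers warn here */
}

/* Fix: the caller owns the storage, so the value outlives the call. */
void fixed(int *out)
{
    *out = 42;
}
```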
Re:don't bother........ (Score:4, Insightful)
That is exactly the point! Programming in assembly is another of these facets, and just as important as all the others. Instead of shouting at each other, maybe you should recognize that you're both right!
Re:Assembly vs. scripting (Score:1, Insightful)
Shocking!
Re:Debugging (Score:5, Insightful)
But if Perl is written in C, wouldn't that mean that Perl can never be "faster" than C?
To put it more concretely, couldn't I just write a program in C that does EXACTLY what the Perl program does, down to the last data structure? And if I did, wouldn't that mean that Perl can't ever (theoretically) be faster than C?
You could even take it a step further. You could write an exact duplicate of the Perl program, leave out the parts of the Perl interpreter that you don't use, and you probably would end up with an overall faster program. Thus, in most cases, C could trump Perl.
Re:don't bother........ (Score:5, Insightful)
SO WHAT! Programs should only be optimized if:
1) the program is doing stuff so intensive that it runs slow
or
2) It is being run all the time in the background by the system and can slow down the system as a whole.
98% of the time it just does not matter.
Easily readable code is FAR MORE IMPORTANT.
I have written code in more than half a dozen different languages, but my favorite language these days is Python. It runs ten times slower than C, but in most cases, it just doesn't matter. Most of the time, the code feels instantaneous.
*A* big ass computer??? (Score:3, Insightful)
What if you want to run your app on (God forbid!) something smaller than a big-ass computer?
Last time I checked the price difference between a $100 ARM-based unit and $5,000 G5 was still $4,900, so if you intend to run your program on a 1000 units you can as well pay $900,000 to a guy who can port from the latter to the former and keep the cool $4M.
Or if you think about selling your solution (HW+SW) to some millions of customers... (Wow!)
Paul B.
Re:Smaller code? We can hope... (Score:3, Insightful)
I don't think there are many people hoping for a complete return to assembly language. However, a return to _understanding_ assembly language and how a computer works underneath is what is needed. I wrote a book on assembly language for just that purpose. The code I write professionally is usually Perl, Python, PHP, or C. However, knowing assembly language makes me a much better coder in all of those languages.
Anyway, see my sig for the book I wrote on this.
Re:assembly still doesn't seem like the best choic (Score:5, Insightful)
Of course, some of these, even if you don't HAVE to know assembly language to understand them, knowing assembly language makes it easier to understand. Most people who know assembly language have a much more concrete view of the differences between pointers and values. When you have personally had to think about whether to push the value or the memory location, when you have to think about which addressing mode you need to use in that situation, it makes the idea of pointers and stacks and calling conventions a TON more concrete. It also makes many of the ideas of sequencing and linearity a lot more concrete. This is something that I've found a lot of new programmers have difficulty with - they have trouble thinking in straight linear fashion, and assembly language absolutely forces you to think that way.
Anyway, that's the reason I wrote my book on assembly language. See my sig for more info. Randall Hyde actually wrote me a pretty good review on barnesandnoble.com. I got a good one from Joel Spolsky, too.
With all due respect... (Score:2, Insightful)
I agree with the premise of the article and I understand the logic behind his arguments. I too, believe every programmer should have a strong math foundation and learn assembly language - regardless of the architecture. Generally speaking, knowing the "nuts and bolts" is what separates the 4-year CS degree programmers from the 2-year, fast-track-to-developer "programmers". All too often schools do future developers and the companies they end up working for a disservice by churning out less-than-ready programmers with a weak foundation in the why's and theory in exchange for a bunch of how-to's on a particular language syntax.
However, regarding the article, there's a piece of the puzzle missing. In the case of a byte-code language like Java or C#, the code isn't compiled to assembly or machine code that's easy (short of a debugger disassembly) for a developer to get at. In the case of C# or any other .NET CLR language, it's nearly impossible to know how the compiler generates assembly instructions. In the case of Java, one may be able to study the JIT compiler of an open-source implementation like Blackdown, but that's not necessarily the best option or even the most efficient implementation either. There are so many different JIT compilers for Java, which one would you study? With the .NET CLR, the only possibility would be to study the source to ROTOR to gain any insight into the JIT compiler.
The implication of the article is that the programmer has some access or insight to the compiler output other than looking at the object files. I submit that, especially in the case of a byte-code language, that insight isn't nearly as available. After all, we're talking about programmers that don't know assembly, in all probability don't know what an object file is, and perhaps don't have the expertise to know how or where to garner such information.
I would like to see these programmers learn the difference between the stack and the heap. It's not exactly easy, especially when you don't know where to begin, to figure out how the compiler for a high-level language creates assembly instructions from source. Learning that information (think YACC and Bison) is on the same level as assembly, combinatorics, and two's complement. Again I say, it's not enough to learn assembly - without a solid background in math, data structures, and other concepts, the 2-year degree programmer is, in all probability, lost. You can't learn or use assembly without two's complement, binary, and hex arithmetic. How could you figure out how to add 64-bit numbers when you only have 32-bit registers?
Like I said, I don't disagree with the premise of the article. However, simply learning (or trying to learn) assembly is not a cure-all for all programmer deficiencies. You just can't take a person who can't add 2 hex numbers, run them through Assembly 101, and expect the Mona Lisa of software. I suppose there are broader implications to learning assembly, like having to learn all of the aforementioned skills first, but that was never indicated in the article.
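That 64-bit question has a concrete answer worth spelling out. Here's a sketch in C (my own illustration, not from the article) of what a compiler does on a 32-bit machine: add the low words, detect the carry, and fold it into the high-word add, which is exactly the job of the x86 ADD/ADC instruction pair:

```c
#include <stdint.h>

/* Add two 64-bit numbers (alo:ahi + blo:bhi) using only 32-bit
 * arithmetic, the way a 32-bit CPU does it with ADD then ADC. */
void add64(uint32_t alo, uint32_t ahi, uint32_t blo, uint32_t bhi,
           uint32_t *rlo, uint32_t *rhi)
{
    *rlo = alo + blo;
    uint32_t carry = (*rlo < alo);  /* unsigned wraparound means carry out */
    *rhi = ahi + bhi + carry;
}
```

Without understanding two's complement and unsigned wraparound, the carry test on the middle line looks like magic, which is the poster's point.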
Re:Schools not teaching assembly anymore (Score:3, Insightful)
Retail, banking, most all of the big financial companies all have decades of legacy COBOL programs hanging around that need maintenance and upgrades. It's not uncommon for a Fortune 500 company to have several hundred programmers writing in COBOL, but have fewer than a dozen programmers capable of writing in assembler. Of those, perhaps one or two is fluent enough to maintain an ancient BAL (Basic Assembly Language) program.
There are many more jobs for COBOL speakers than there are for BAL speakers. A community college's primary focus is very different from that of a research university. They are there to teach skills that will make its students employable. A COBOL coder can get a job. A BAL coder can get in the unemployment line behind the IBMer with 20 years BAL experience.
Assembly's still important. (Score:2, Insightful)
Understanding of assembly / machine code can also be quite important for hardware people. Example: when debugging a new board design with a logic analyzer hooked up to the system's buses, C/C++ knowledge isn't going to help decipher that long list of numbers being transferred from RAM to processor.
Re:far more important than optimization (Score:4, Insightful)
Btw: Most of those "inner loops" you could save tons of time on with assembly have slowly disappeared with vertex shaders and GPUs in general.
Re:don't bother........ (Score:3, Insightful)
Agreed -- but I would argue that if you are making lots of function calls that take 10 arguments each, your code isn't very readable either. In most cases like that, the thing to do is create a class that contains all the parameters and just pass in a reference to an object of that class. Then your code is both efficient AND easy to understand.
Re:Counterpoint (Score:2, Insightful)
There is truth in what you say, but I don't think this is a counterpoint to what he said. What he is lamenting is the lost art of being aware and caring about performance at the micro-level, as opposed to the macro-level.
In a way, he acknowledges that knowing assembly itself isn't required. Simply knowing the way the machine works (and having a basic understanding of assembly) provides most of the benefits. Good programmers do not pass 2K structs by value to functions in C, they pass pointers. Good programmers move independent code out of loops. And that is better code in every way - more efficient, because it saves executing it repeatedly, and because if it's independent, then looping it doesn't actually make semantic sense. Later readers of the code might wonder if they're just misreading it, and maybe it isn't independent code... now they have to check for obscure relationships.
The best point to take away from the article is that efficiency choices are not just the "big" ones like which algorithm to use, they are everywhere. Every programming decision has efficiency concerns, from the choice of algorithm down to the program design (how much data must be passed around?) and the size of data structures (big ones use more memory, which could affect cache locality).
That doesn't mean that for every choice, you must pick the one that is the most efficient. It simply means that you keep effiency in mind, and you take it into account when choosing. That way when you pick a simpler (but slower) algorithm, you're at least knowingly making the tradeoff... same if you pick the faster algorithm.
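As a concrete instance of the 2K-struct point above, compare these two signatures (a toy sketch; the names are mine):

```c
#include <stddef.h>

typedef struct { char data[2048]; } Blob;   /* the "2K struct" */

/* By value: every call site copies all 2048 bytes onto the stack. */
size_t by_value(Blob b)
{
    return sizeof b.data;
}

/* By pointer: every call site passes one machine word. */
size_t by_pointer(const Blob *b)
{
    return sizeof b->data;
}
```

Both behave the same here, but a programmer who has watched the compiler emit the 2048-byte copy never writes the first one by accident again.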
To some extent you are missing the point (Score:5, Insightful)
But here is where you are missing the point.
It is not the only way and sometimes it is the wrong way.
With virtual machines and interpreted languages, knowing the machine code of the CPU becomes pointless. You need to know what is costly for _your_ language.
What I think that you _really_ meant was that your students should understand compiler technology. That is how you understand the cost of different language constructs in a way that is portable across compiled, interpreted and byte code languages. This is unfortunately not something that you can have beginners learn, it is more of a third year thing.
Also, I've programmed professionally for 10+ years in most programming languages known to man, and I agree with your parent poster. Write simple and sensible code. Optimize when needed. More often than not, you will find that the code is fast enough. If it isn't, then you saved so much time writing the code originally, that you can spend a lot of time optimizing the problem areas.
If you disagree with this approach (write simple code and optimize when needed) then I'm willing to bet that you've never programmed outside a university environment.
Re:Debugging (Score:1, Insightful)
Not exactly. If I was able to write an assembler in BASIC, it doesn't mean that BASIC runs as fast as assembly language.
RE: No, really, don't bother. (Score:4, Insightful)
A number of us are true computer science students, and we cut our teeth on assembly, so to speak. That being said, I disagree that it is necessary (or even good) to understand the machine at the low level. I have never done x86 development (the instruction set and memory models never made sense to me) and I have never seen the JVM byte-code that I use daily. Nor do I care to.
If you're writing code that is supposed to be optimized for the machine, you've missed close to a decade of compiler development. Dealing with multiple pipelines, delayed branching, etc is best left to a machine. I have more pressing issues to solve - like delivering good software.
The compiler optimizations are pretty astounding today. The JVM run-time optimizations are amazing. My knowledge of hardware architecture is 20+ years old. I'll trust the compiler writers as well as the JVM designers.
The focus for the bulk of us is on maintainable applications that can be delivered "on time, within budget, blah blah blah." Illogical algorithms and/or writing code for the computer and not for the human don't help anybody. In fact, I'd probably just throw it out and start again - it's the fastest and least stressful way to deal with it.
The most important tool to hone and keep tuned is your mind. Those with good logical reasoning and critical thinking are going to do well. They are the ones *I* look up to.
I would suggest teaching unit testing (ie, JUnit) - including what to test and how to test correctly (both difficult topics) - and debugging skills (which I wished I had more of when I started) instead.
If you want to cover hardware, use a book like CODE (by Charles Petzold) to give people an idea of computer structure. Nothing more than that - and even that isn't required.
Re:Problems with article (Score:1, Insightful)
Doing nothing but ASM is a) going to make security practices much harder to visualize and keep track of, b) make it much harder to understand code written a few weeks ago and present opportunities for security related bugs there, and c) make tracking down bugs harder.
I personally value more advanced security over it going fast.
In addition to this, programming is NOT about making stuff go fast. That, to me, is a secondary concern. It may be pertinent to games, but that's about it. Instead, we now value more complex functions. We don't WANT the same old word processor to go faster, we want the word processor to be able to do more things. We only have so much programming time, have it spent on features, not speed. There are obviously concerns about bloat, and there are tradeoffs.
Keeping these tradeoffs in mind, ASM is one extreme of the scale, and all kinds of high level scripting languages the other end of the scale. You use the right tool for the right damn job.
On another note, the main premise of the article, I think that it is indeed a valuable skill to know and learn, and that it will help visualize advanced function.
Why stop there? (Score:2, Insightful)
Alternatively, they could learn more about the problem domain, so that the design better reflects it and is more flexible in a changing environment. Nah, that really would be a waste of time.
Duh. (Score:4, Insightful)
DUH!
Imagine a mechanic who has built his own engine from scratch, or a painter who has made his own paint, or a musician who has crafted his own instrument. Those seeking to exercise their art at the lowest possible level will always have insight superior to those who don't.
Re:Debugging (Score:5, Insightful)
As tools have matured, I think we can see that the emphasis on time has only increased. Reusability and abstraction for more reliable interfacing are important OOP goals. Standard and broad libraries, IDEs, GC, etc. are all geared toward saving someone time. There are less obvious perspectives on this paradigm as well - low, economical maintenance and error-free code, for example.
I think a lot of programmers have gotten into the habit of sacrificing time for performance, and vice versa. In some form, this will always be true. However it is an asset of the programmer to have the choice of exactly what component of his project he should spend the most time on. Hopefully, this is the design.
Re:Because you can't always trust compilers (Score:3, Insightful)
To take it even further, you can't even trust machine language all the time -- look at Intel's floating point bug a decade ago. There's a reason why x86 processors now have ways of patching the microcode.
Re:don't bother........ (Score:5, Insightful)
The things that are most important in performance are, generally speaking, algorithms. It's important to understand things like:
Etc. The sort of thing you're proposing, with stuff like function call arguments, loop conditionals, etc. are micro-optimizations, and are very seldom worthwhile for programmers. Micro-optimizations are almost always best left to the compiler writers, who can, in effect, program them once and let everyone reap the benefits.
Consider your example in particular: A function with 2 arguments instead of 10 isn't really faster. First off, it's only slower on an x86 - many architectures have these things called registers, which you can use for things like function arguments. Second, those function arguments are spilled to the stack just before the function call jump. This means they're extremely hot in the D$, and will hardly even be any more expensive than a register to reload at all. Third, if you break the big function into a lot of little ones, you're incurring more call overhead and more pressure on the I$. Fourth, breaking the function up causes multiple copies of the function prologues and epilogues, which will easily overwhelm the register spilling cost. Etc. etc. etc.
In other words, those 10 arguments are only microscopically slower, and may even be faster!
In this case, the student should avoid writing a function with 10 arguments if it makes the code clearer. The value of this sort of incredibly trivial micro-optimization is fundamentally dwarfed by the value of readable code - if the student can read her code, that means she'll have fewer bugs and can spend more time optimizing it. And that's what'll really make the program faster.
You should only consider worrying about optimizations at this level if you've already optimized your program at an algorithm level fully, have profiled it, and determined some particular pieces are extremely dense hot-spots that need to be improved by hand. But if you're doing that, you may want to consider recoding the hot-spots in assembly anyway.
These days, these sorts of hot spots tend to be media codecs, and the way to speed those up is to use SIMD instructions - which can only be used properly from asm. So even before worrying about these sorts of extremely tight micro-optimizations, you'd want to recode in assembly just to use the special vector instructions! And in asm, readability is even harder to obtain, so you'll probably avoid a lot of the sort of micro-optimizations a crazy compiler will do just so you can make sure the code works right.
Re:desire to teach someone 6502 assembly language (Score:3, Insightful)
Flash-forward a quarter of a century, and we have bloated, massively complex "development systems" such as Microsoft's Visual Studio and others, which have such an incredible amount of complexity that no mere human memory could possibly learn all of it. Add to this the fact these development tools are changed regularly, arbitrarily and with malice aforethought, and it isn't hard to see why overall productivity hasn't increased as much as one might have expected. The truth is that the market-driven creeping-featuristic aspect of modern programming environments has, in many ways, proven detrimental to the development process.
In fact, it's been shown in several studies I've read over the past decade or so that because of this ridiculous degree of complexity, programmers simply learn a subset of the available instructions, just enough to get the job done, and rarely bother to learn anything more. And that's entirely justifiable, because if they DID invest huge amounts of time trying to learn all of the available features, odds are the language vendor will have changed the language anyway. Consequently, most of the power of modern tool chains is simply never realized by most users of them.
I agree... and disagree... (Score:4, Insightful)
1) the program is doing stuff so intensive that it runs slow
or
2) It is being run all the time in the background by the system and can slow down the system as a whole.
98% of the time it just does not matter.
I agree with points 1 and 2; however, if you're doing any non-trivial programming, I wholeheartedly disagree with the 98% figure. Not every bit of code is a throw-away piece. If your code only runs occasionally, and it's not performance critical, then yes, make readability your main priority.
But a lot of the time your code is performance critical, or at least will be in the future. Code has this tendency to stick around once it's actually working. Too often have I seen code that, when it was written, was not getting used a lot. It worked, so nobody thought anything of it. As they scaled the system up, that code became a major bottleneck and eventually somebody had to go back and optimize the hell out of it, or simply throw it out and rewrite it to be fast.
It's good to write readable code. But it's also good to write code that is reasonably optimized at the same time. No need to go to extremes, just don't do stupid things like passing around huge 4 kilobyte variables to functions and such (example I've seen). Pass a pointer instead. Or a reference. Just write smart code. You can still make readable code while making it optimal enough to scale pretty well. Only very, very rarely do you have something that needs to be super well optimized, and then you usually are better off writing the critical sections in machine code anyway.
Easily readable code is FAR MORE IMPORTANT.
Easy readability is far more important when that code scales to the level you need it to scale to. Readable code that doesn't actually work in the system you're trying to put it in is worse than useless.
Re:Debugging (Score:2, Insightful)
Please explain how printf will let you debug the following (more than just a binary chop):
An extensive logfile, and intelligent use of grep/awk/etc to analyse the behaviour of the system over time ?
Leastways, that's what I do...
Re:Schools not teaching assembly anymore (Score:2, Insightful)
Erm, if you want interactive response out of a CPU-bound application, multithreading is exactly what you want to do. Timeslices are smooth, waiting til your computation finishes is jerky. You want snappy interaction if it's user input driving the calculation in the first place.
Re:Debugging (Score:3, Insightful)
Oh yes. Furthermore, if you can analyse what is doing the heavy lifting and somehow, anyhow, gain an advantage there, you can end up with a program that is faster overall even though everywhere outside the heavy lifting it is significantly slower.
You could even take it a step further. You could write an exact duplicate of the Perl program, leave out the parts of the Perl interpreter that you don't use
Now you reach the turf where hackers and home-grown systems have an unfair advantage. Three or so significant problems that you don't have to worry about are enough for poor talent to triumph. The downsides are that you wind up with archaic systems that no one dares touch, and that circumstances can change so that the non-problems become problems.
Re:Why it's still good... (Score:5, Insightful)
Personally I think the curriculum for coding should begin with asm, and the student should work his way up to the higher-level languages: C, Pascal, and finally Java or Perl.
Then it wouldn't be an afterthought that some things are actually easier to accomplish in asm (or most things, depending on your line of work). Not to mention that the eventual nested nested nested loop that just needs to be optimized shouldn't be a roadblock for any programmer.
Re:don't bother........ (Score:1, Insightful)
Are they supposed to pass all those 10 things or not? If they are, then passing some in registers and the rest on the stack is going to be loads faster than marshalling them all into a structure and passing a pointer to it. Shit, what's any piece of the program doing manipulating 10 independent pieces of state anyway? This has less to do with performance than with crappy high level design.
Small point or two (Score:3, Insightful)
Many posters seem to be promoting compilers over learning the nuts and bolts of a particular architecture yourself: somebody had to learn it to write that compiler! Probably a whole bunch of somebodies, because writing a good compiler is a hard task.
Even then, if you don't understand how your language is compiled, you cannot properly debug your code. With C/C++, it isn't just a case of stack frames, there's memory allocation, pointer dereferencing, etc. Sometimes you need to look at the assembler level to get a grip on some bugs.
I know from experience that compilers are buggy, don't perform the same way with the same switches on different platforms and while they may optimize generally better than a human, sometimes it's a bug to optimize at all!
Some appreciation of the assembly level is better than none at all.
Re:Not exactly on that not exactly. (Score:3, Insightful)
You're talking about a program in an interpreted language that spits out machine code. It's kind of like pretending a horse riding a human is the same as a human riding a horse - although the same two animals are involved in both cases, the overall situation is completely different.
That said, I still say Perl is going to have a darned hard time being faster than C, since it has to precompile the program every time it executes, whereas a C executable is already in machine code. Not saying it can't happen, just saying the situations where it happens probably aren't all that common.
Re:Schools not teaching assembly anymore (Score:1, Insightful)
Optimization vs. Algorithms (Score:1, Insightful)
If there's a better algorithm, you need to use it.
If optimizing your code can make it 10 times faster, you need to optimize it.
Now if optimizing your code will yield less than a 2x increase (with less than simple changes), then perhaps you shouldn't bother, especially if it makes the code a lot harder to work with - but the same applies to algorithms as well. There's no need to implement a complicated sorting algorithm to sort five items, even if it will double the speed.
The problem is people usually don't know that their code can be optimized anymore than they know that a better algorithm exists.
People don't intentionally write slow code, they write slow code for the same reason they use slow algorithms, because they don't know any better.
Just as you should research algorithms for whatever you're doing, you should also learn how to write fast code in whatever language you are using. The only reason assembly language does well here is that more code directly equals more work for the processor, whereas in a high-level language fifty lines of code may execute very quickly while one single line elsewhere takes twenty times as long as those 50 lines.
Learning assembly isn't necessary. All that's necessary is knowing what the difference between "($one, $two, $three) = $things =~
BTW, a compiler being "optimizing" doesn't mean it creates the best code, it just means it doesn't create as bad of code as it could. You can create a compiler that always turns certain C statements into certain code, then you can make an optimizing one that eliminates some code since it knows that this followed by that doesn't need a few of those instructions, but you can't replace someone who knows what they're doing.
After all, isn't the reason you're supposed to use C's strcat instead of copying the string yourself because strcat uses the movsb instruction, which is faster than any loop the compiler would create on its own?
Of course, even so, you could still do better using assembly, depending on what you're doing. If you implemented Pascal-style strings you would remove the requirement of doing an scasb on the strings to find their lengths, which you need to do before the movsb. This could be a major optimization depending on the size of your strings and how often you're using strcat.
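The trade-off described here can be sketched in Python (the class and helper names are illustrative, not from the comment): a C-style string has to be scanned end to end (what scasb does) before its length is known, while a Pascal-style string carries its length with it, so appending never needs a rescan.

```python
def strlen(buf):
    """C-style length: scan byte by byte for the NUL terminator."""
    n = 0
    while buf[n] != 0:
        n += 1
    return n

class PascalString:
    """Pascal-style string: the length travels with the data, O(1) to read."""
    def __init__(self, data=b""):
        self.data = bytearray(data)
        self.length = len(data)

    def append(self, more):
        self.data += more          # no scan to find the end first
        self.length += len(more)

assert strlen(bytearray(b"hello\x00")) == 5
s = PascalString(b"foo")
s.append(b"bar")
assert s.length == 6 and bytes(s.data) == b"foobar"
```

With many repeated appends, the scan-every-time approach costs quadratic work overall, which is why tracking the length can be the bigger win than any instruction-level trick.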
Re:Debugging (Score:4, Insightful)
Okay, I'm also writing a large network application, and I find printf statements very helpful indeed. It's Windows, so my main debugging tools tend to be message-boxes and fprintf to files. Even though there's a good "debugger" available, it's quite often at too low a level to see what's actually happening in the program.
One problem is running multi-process, multi-threaded code on several computers at once. Sure, debuggers can be made to work that way if you install Visual Studio on each machine you're testing on, but it can be inconvenient to say the least.
With print statements, your program can alert you in real time, using all the functions available in code but not in a debugger, and doesn't need to be carefully compiled as debug with the appropriate modules running in a debugger with the right break-points set. Just add a message box, and your program will tell you what it's doing.
Debuggers are great for examining memory structures, or for when you really don't know what the hell your program is doing, but for most purposes a few well-placed print statements and a logical series of tests can help you find the problem.
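The "few well-placed print statements" approach can be as small as a trace helper like the sketch below; the name and format are just one possible choice. Writing to stderr keeps diagnostics out of the program's normal output, which matters for the kind of multi-process runs described above.

```python
import sys
import time

def trace(msg, *args):
    """A well-placed print statement: timestamp plus a formatted message."""
    line = "[%.3f] %s" % (time.time(), (msg % args) if args else msg)
    print(line, file=sys.stderr)   # stderr keeps it out of normal output
    return line

trace("worker %d handling request %s", 3, "GET /status")
```

On several machines at once, the same lines can go to a per-host log file instead, which is far less setup than installing a debugger everywhere.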
I go for clarity first, linecount second. (Score:3, Insightful)
Of course this is after thinking about the design for a while. Rarely have I just gone and said "Ok I'm going to code this function now." without thinking "Ok what will this break? What will I want to add in the future and will it mean redoing this a different way?" I spend a lot more time thinking about how to get the best flexibility out of a design than actually coding. I tend to think up the data structures first, by figuring out what data I need and what I'm going to do with it. Once I have that, I construct objects around the structures and figure out what methods I need to manipulate them. Everything revolves around the data, not the process. I try to keep the objects segregated on a "need to know" basis so that it's easier to swap out one implementation for another later. I don't want to be sitting forever rewriting the entire project because I've found that quicksort sucks for this data and I'm better off with a heap than a linked list.
When I actually get around to coding, my first concern is clarity. If I can't read my code, I'm going to be screwed when I have to debug it tomorrow, and totally lost when I need to add a feature or rework something next month. I tend to name my variables and functions for what they are, sometimes with fairly long names. Sure, it's a pain in the ass to type cosine_offset and I can typo it a lot, but it's a hell of a lot easier than figuring out that w stands for cosine offset a month from now. Compilers catch name typos; they won't tell you what the hell you were thinking when you wrote the code. I find the stupid "hungarian warts" where you get names like pfstrqlrdwFOO absolutely worthless. The only prefix I use is p, for pointer, because pointer screwups are bad and it's possible to forget something is a pointer and requires different operators. I can usually figure out if something's an int or float or string pretty easily by following the names and what functions it comes from/goes to. GetInputString(myinput) makes it pretty obvious what myinput is; I don't need pcstrz in front of it.
Having other people able to read your code is a huge bonus. Readability = less time debugging = more time to improve or optimize later. I don't know ASM, but if my code is readable by someone who does, it's a lot easier to get them to help me if they don't have to spend a week going "WTF is this variable for? What is the point of this function?"
Another thing I do that was taught to me by a very realistically-oriented CS professor is go for low linecount. Outside of clarity, linecount is king. Why? Because fewer lines are easier to comprehend, tend to have fewer bugs, and can be easier to optimize (into ASM, for example). Functions over 50 lines are rare for me. My functions do exactly one thing that they're named for; I don't group an entire 9-step procedure into a single long function. If I have to do that then it'll be a function calling a bunch of other functions in order. That way when I have a bottleneck, I can just rewrite 25 lines of code instead of poring over 250 looking for where the slow code is and misunderstanding parts of it.
Granted I'm not the best programmer. I'm probably not even an average programmer around here given my lack of experience. I had a basic course in ASM at college that mostly covered MIPS, and an EE course that covered up through basic CPUs and registers. I know some of the concepts but have never written in ASM. Maybe when I finish the main parts of my current project I'll learn x86 ASM so I can optimize it.
even better: learn to write a compiler (Score:3, Insightful)
Randall states that CS students don't learn assembly language anymore, or that if they do, they aren't being taught well. I have to disagree.
In our CS curriculum (Universiteit Leiden, NL) we had courses that taught us
(1) how to design a CPU, including ALU, load/store architecture and microinstructions for a fake CPU that interprets (a subset of) the MIPS instruction set. (digital systems design)
(2) how to write a compiler from a subset of Pascal to MIPS code. The end result had to be tested on SGI Indys. Present students still have to do this, but on SPIM, since the Indys are gone, how sad.
(compiler construction *)
My point is that our curriculum is arguably representative of the standard CS bachelor, at least in Europe, and that learning everything from high-level languages up to micro-instructions greatly surpasses dabbling with programming in assembly.
I do agree with his main point though: you should know these things. My feeling is that more people are already acquainted with Randall's dark art than he thinks.
(*) offtopic: this course was the major stumbling point of half of all CS students. Some handed in the assignments 4 years late, after their final MSc paper.
It appears that either you are good at the math side and suck at coding or it is the other way around. I did fairly well here, but I won't go into my math
Assembly is bogus (Score:5, Insightful)
I look after pnm2ppa, which is a print processor to convert pnm image bitmaps from Ghostscript to PPA, which are HP's worst ever printers. Ever. They are so dumb, they make Bush look like a Mensa candidate.
When I first came to the code, it was written by someone who thought they knew better than the compiler, and structured the code accordingly.
We had hand-unrolled loops, unusual and rampant use of the "register" keyword, the occasional volatile, and strange padding in structures to try to align the data to what he thought the processor would use. There were arithmetic "if"s, nasty pointer usage, throwing away type information (i.e. casting to void *), and strange methods of going through the data.
When I hand-simplified all the code, it went about 15% faster. In one inner-loop case it went over 100% faster after re-rolling a single inner loop, because the person who unrolled it didn't understand how branch prediction worked and understood even less about large-data-structure walking and L1/L2 cache interaction. gcc 3.3 improved the performance of the code by about another 15%.
But you know what made the biggest change? A simple replacement of floating point gamma correction with a lookup table ordered in the simplest possible way. That shaved literally 30+ seconds off every page render on my PIII/800.
And you know what? The new gamma LUT is shorter and more readable, and is easier to tune for color-correct output. It costs about 4 MB of RAM.
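A minimal sketch of that lookup-table trade in Python, assuming a 1-D 256-entry table and a typical gamma of 2.2 (the original's 4 MB table presumably covered full color): spend a little memory once at startup so each pixel costs a single array index instead of a floating-point pow().

```python
GAMMA = 2.2  # illustrative gamma value, not taken from pnm2ppa

# Precompute the correction once for every possible 8-bit input value...
LUT = [min(255, int(255.0 * (v / 255.0) ** (1.0 / GAMMA) + 0.5))
       for v in range(256)]

def correct(v):
    return LUT[v]   # ...so per-pixel work is one index, no pow() call

assert correct(0) == 0
assert correct(255) == 255
assert correct(128) > 128   # 1/2.2 gamma brightens midtones
```

The endpoints map to themselves and midtones are lifted, exactly what per-pixel pow() would compute, but the float math happens 256 times total instead of once per pixel per page.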
Assembly has no place in the modern day programmer's skillset. Humans do not know how to schedule instructions properly. They do not know how branch prediction will work unless the data they use is static. They should not waste their time on understanding the difference in L1 cache strategies (which are wildly different in the x86 families and AMD Opterons). They cannot work out how to best keep the data pipeline full on a wide range of processors. But you can help compilers work this out for you by:
* Design the system the correct way the first time - what do you actually need to do? Don't do anything else
* Learn and keep up with the best generic algorithms for a wide range of activities (such as sorting, arrays, dictionaries, etc) and keep a library of well tested and bug free examples
* Write simple, clear, maintainable code
* Never, ever, ever throw away type information
* Never, ever, ever throw away data-aliasing information
* Never, ever, use the "register" keyword
* Never use "volatile" unless you know why you need it
* Document tests, data and code properly. This pays off big time every time you come to add new features or fix old ones
Lastly, program like a software engineer not a cowboy. Code must be correct then fast. Not fast and wrong.
Re:I agree... and disagree... (Score:2, Insightful)
And if that bottleneck was clearly written it will likely be much easier to refactor/optimize. It's really hard without profiling to know which pieces of code will be the bottleneck with current usage and especially future usage. For any decently large program, optimizing the whole program is IMO a supreme waste of time and likely to make it late, brittle, and buggy.
If your code is performance critical now, write it simply, profile its execution, fix the slow parts.
Don't spend a lot of time optimizing code now that *might* be performance critical in the future. Chances are that it won't be, or that the requirements will have changed enough that the parts you spent all that time optimizing aren't the critical parts or need rewriting for functional changes anyway.
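"Write it simply, profile its execution, fix the slow parts" can be done entirely with the standard library; cProfile tells you where the time actually went before you touch anything. The function here is just a stand-in for the code under test.

```python
import cProfile
import io
import pstats

def running_total(items):          # stand-in for the code being profiled
    total = 0
    for i in items:
        total += i
    return total

prof = cProfile.Profile()
prof.enable()
result = running_total(range(100_000))
prof.disable()

out = io.StringIO()
pstats.Stats(prof, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()    # shows which calls dominated the run time
```

Only after a report like this fingers a real hotspot is it worth dropping to lower-level tricks, which is exactly the XP/agile ordering the parent describes.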
The strategy I've developed is as follows:
It took me a long time to get over the notion that every program should be as efficient and fast as I could make it. But my code productivity, maintainability and correctness have improved noticeably since I have.
Obligatory Quotes:
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. - Donald Knuth
The price of reliability is the pursuit of the utmost simplicity.
Premature optimization is the root of all evil in programming.
-- C.A.R. Hoare
More computing sins are committed in the name of efficiency (without necessarily achieving it) than for any other single reason - including blind stupidity.
- W.A. Wulf, A Case Against the GOTO
Rules of Optimization:
Rule 1: Don't do it.
Rule 2 (for experts only): Don't do it yet.
- Michael A. Jackson
Re:Debugging (Score:2, Insightful)
This is the rub, as it were. And, I believe the core of the article's message.
It is not merely an issue of what level language the implementation is written in but the ability and skill of the programmer based on their underlying knowledge and ability with the medium.
Optimising C or Perl code can be done until the cows come home, but in each case the result will depend on the efficiency and power of the compiler or interpreter that renders it into something the machine can understand.
In human languages it is similar to reading a foreign text based on our own fluency with that language or reading it via babelfish. Any given processor only knows its own language. It relies on how good babelfish is at converting what we write.
An ability to write in assembler puts us much closer to the needs of the target audience. In this case, the processor.
Even if we then choose to write in a higher level language, we will, at least, be able to "massage" the function to the processor's requirements where that is possible in our chosen language.
Code optimisation will be based on making things as efficient for the thing actually doing the work, in this case the processor, rather than making things easier for us. This requires us to have an understanding of what it is doing and writing to that objective and end.
Re:Good luck... (Score:4, Insightful)
All things being equal, any CPU-bound program written in C is highly likely to be faster than the same in Perl.
The reason I write lots of Perl and hardly any C these days is because virtually everything I write is I/O bound and just doesn't need the performance. I may as well use Perl and save myself a heap of time.
Pardon me? (Score:3, Insightful)
All things being equal, any equivalent program written in C is highly likely to be far longer, more complex, and more difficult to write than the same in Perl--that's because Perl does a ton of things for you, many of which I have already outlined. Of course, if you don't need these features in the first place, it might make sense to write it in C. But given an arbitrary Perl program and then being tasked to write an 'equivalent' program in C... well, it's just going to end up being way more complex. If that Perl program uses eval, you might just have to write an entire Perl interpreter.
Of course this entire line of argument has been ridiculous from the start--Perl and C work together just fine, so you can always use Perl for the rapid development, and C for speed, as needed. In fact, there are tons of Perl modules that just consist of a Perl API to a bunch of C functions, usually for speed, or just to use an existing C API in Perl.
I know what you mean about being I/O bound--I do a lot of PHP programming. Now, PHP can be an embarrassingly slow language sometimes, but that doesn't matter when you're busy waiting on a network connection to a database!
Re:Debugging (Score:5, Insightful)
Just because you have access to source code does not mean you can do source-level debugging. Under Windows at least, the target binary must be built correctly, the correct symbols must be available, the source used to actually build the target must be available, etc.
While I was at Microsoft, most of my debugging was done using the console-based debuggers: i386kd/alphakd/etc for kernel-mode, and cdb/ntsd for user-mode. For many years, these debuggers were incapable of any form of source-level debugging, so we did without.
Knowing how to read disassembled code in the debugger and match it up with source code is a vital skill, far more important than the ability to write assembly language from scratch.
For and Against (Score:4, Insightful)
For some applications - embedded systems, drivers, games (my specialty), or any other real-time application - assembly is either very important to understand or actually essential. There is (or was) no other way to program the PS2's vector units, for instance.
For database work, batch or text processing, network admin, or anything else where speed is nice but not a show-stopper, "make it work" is much more important than "make it work fast."
I've always felt knowledge of assembly makes one a better programmer regardless of the application. Even on a high level, understanding why (unnecessarily) using data types larger than the system's register size is going to hurt performance can only be a good thing. Understanding assembly is fairly fundamental to understanding computers, as opposed to just using them.
Re:Schools not teaching assembly anymore (Score:3, Insightful)
They put CS, ECE and EE students together and grouped us up for the first two courses - assembly programming was the first. Wiring chips together and programming FPGAs was the second. The capstone was a general "how computers work from top to bottom" course that discussed processor design and had us writing some MIPS assembly. Anyway, it was a miserable series of classes to actually take, but very useful and great in getting engineers from slightly different disciplines to practice working together. The CS guys tended to really help a group in assembly programming while the other guys tended to be more competent when wiring things together.
I really feel that the time I took understanding computer architecture is a bit more useful than the time I spent learning assembly - although to a certain extent learning assembly forces you to learn the architecture. Keep in mind that with a relatively small set of data, all the work will be done on the CPU at blazingly fast speeds, while a single fetch out to disk changes the timescales you're working in by orders of magnitude.
Being able to read assembly is good for debugging but I'm not convinced it is required to write great code. Pure blazing speed is important for some applications, but for many it just isn't. The guy in the article mentions word processors and how they aren't 16,000 times faster than they used to be. Frankly, I'm not sure how you'd measure that. I know the Macs I used in high school were slow enough that a fast typist could outpace their ability to render new characters, but my current word processor puts up characters quickly enough. It also checks my spelling and grammar on the fly and unlike old word processors it's more or less WYSIWYG and not markup. The only slow part is launch time. That's more a product of big code requiring disk access to be loaded than of slow code.
For me, good code is the following: 1. It gets the jobs done. 2. It's very readable. 3. It's designed in such a way that I can change it without destroying the system
For most applications though, the easiest algorithm to read is the best one to use. That said, I agree with the author that it's best to know when you are writing code that is inefficient. It should make you wince.
I winced a lot when writing a mock-up of a simple app that ran against the contact database of my customer. However, despite each request looking through every record in the 2 tables I worked with instead of using some sensible SQL, it ran effectively instantaneously on an average machine. So would I look at that level of inefficiency as awful code? Probably. Should I? I'm not so sure.
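The "look through every record" pattern versus what sensible SQL would have done (an indexed lookup) can be sketched like this; the records and field names are invented for illustration:

```python
records = [
    {"id": 1, "name": "alice"},
    {"id": 2, "name": "bob"},
    {"id": 3, "name": "carol"},
]

def scan_lookup(rows, wanted_id):
    """The mock-up's approach: touch every record, O(n) per request."""
    for row in rows:
        if row["id"] == wanted_id:
            return row
    return None

# What an index buys you: built once, then O(1) per request.
by_id = {row["id"]: row for row in records}

assert scan_lookup(records, 2) == by_id[2]
```

For a few hundred contacts both are effectively instantaneous, which is the parent's point: the wince-worthy version was still fast enough for the job at hand.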
Re:Debugging (Score:3, Insightful)
And that's exactly why assembly is mostly irrelevant (with a few exceptions, of course).
Assembly is faster than C only if you have a lot of time to spare. And C is faster than [your favorite higher level language here] only if you have lots and lots of time to spare.
The problem is: there is no REALLY good high level language for generic application development (things like word processors, web browsers, etc). However, a language like C# could become such a thing.
I think that in a few years' time only the time-critical parts of an application will be written in C and the hardware-dependent parts in assembly. For all the rest there is [insert future high level language here].
After careful consideration... (Score:5, Insightful)
I do disagree on several points that have been raised, but they don't defeat the final conclusion:
- I do agree that premature optimization has been lethal to many software projects. But I have met as many people who commit PO in HLLs as in assembler, so this is not an argument for or against the language.
- The comparisons of startup times and code sizes with the '80s (the 80's! Why, in the 70's we had only... never mind) are amusing, but uninformative; there are a lot more services embedded in the average OS or word processor today. There is a degree of bloat, but the statistics are misleading.
- Hand-crafted assembly code is unlikely to be optimal in light of processor pipelining, multiple execution units, and scheduling. I used to know how many clock cycles each instruction in the PDP-11 instruction set would take to execute for each addressing mode; this information is not nearly so useful for today's processors.
- There are architectural considerations beyond assembly. As early as 1983 a colleague of mine brought a VAX-11/780 (a screamer for its day) to its knees, and came to me complaining bitterly about the processor and/or compiler performance. It turned out that the code in question, which used massive multi-dimensional arrays (in FORTRAN), had compiled into a two-instruction loop (three-operand multiply and an increment/branch), but the code was generating six page faults per iteration! He would not have avoided the problem just by using assembler, but my deeper understanding of the machine led to the identification of the problem.
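The same locality lesson can be sketched without FORTRAN: both loops below compute the same sum, but walking a large row-major array column-first touches memory with a huge stride, which is exactly what generates cache misses and, at the scale of that VAX job, page faults. (Pure-Python timings won't reproduce the effect faithfully; the access pattern is the point.)

```python
ROWS, COLS = 200, 300
grid = [[r * COLS + c for c in range(COLS)] for r in range(ROWS)]

def sum_row_major(g):
    total = 0
    for row in g:              # contiguous: friendly to cache and paging
        for v in row:
            total += v
    return total

def sum_col_major(g):
    total = 0
    for c in range(COLS):      # strides across rows: poor locality
        for r in range(ROWS):
            total += g[r][c]
    return total

assert sum_row_major(grid) == sum_col_major(grid)
```

Swapping the loop order is a source-level fix; no assembler is involved, which supports the parent's point that the needed knowledge is of the machine, not of the mnemonics.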
All that being said, the title of the article is "Why Learning Assembly Language is Still Good." At the end of the day, while I opt to write in Java (or Objective-C, which I'm just picking up), I am better equipped to write good code knowing assembler, and a few other things behind the language and runtime I'm using.
Re:Schools not teaching assembly anymore (Score:3, Insightful)
I interview people for jobs all the time, and you can tell right off the ones who've been taught a language as a tool (VB'ers) from the ones who visualise a computer from the ground up. The latter think around problems and beyond the API.
Re:JIT? (Score:4, Insightful)
Assm and Perl (Score:3, Insightful)
Speaking about Perl and Assembly, it is important to mention that there are modern object-oriented assembly languages with asynchronous I/O, events, threads, multiple inheritance, garbage collection, built-in Unicode support, etc. See: Parrot [parrotcode.org] Assembly, and IMCC [parrotcode.org]:
"IMC stands for Intermediate Code; IMCC stands for Intermediate Code Compiler. You will also see the term PIR which is for Parrot Intermediate Representation and means the same as IMC, but for some each Parrot developer has his favorite term. PIR was the original term, where IMC seems to be the vernacular. It is an intermediate language that compiles either directly to Parrot Byte code, or translates to Parrot Assembly language. It is the preferred target language for compilers for the Parrot Virtual Machine. PIR is halfway between a High Level Language (HLL) and Parrot Assembly (PASM)."
How is IMCC different than Parrot Assembly language? "PASM is an assembly language, raw and low-level. PASM does exactly what you say, and each PASM instruction represents a single VM opcode. Assembly language can be tough to debug, simply due to the amount of instructions that a high-level compiler generates for a given construct. Assembly language typically has no concept of basic blocks, namespaces, variable tracking, etc. You must track your register usage and take care of saving/restoring values in cases where you run out of registers. This is called spilling.
"IMC is medium level and a bit more friendly to write or debug. IMCC also has a builtin register allocator and spiller. IMC has the concept of a "subroutine" unit, complete with local variables and high-level sub call syntax. IMCC also allows unlimited symbolic registers. It will take care of assigning the appropriate register to your variables and will usually find the most efficient mapping so as to use as few registers as possible for a given piece of code. If you use more registers than are currently available, IMCC will generate instructions to save/restore (spill) the registers for you. This is a significant piece of every compiler.
"While it is possible to write more efficient code by hand directly in PASM, it is rare. IMC is still very close to PASM as far as granularity. It is also common for IMCC to generate instructions that use less registers than handwritten PASM. This is good for cache performance."
For a good introduction to Parrot, read Parrot: Some Assembly Required [perl.com] by Simon Cozens. There is a great article (also on ONLamp.com) Building a Parrot Compiler [onlamp.com] by Dan Sugalski (I have no idea why it wasn't posted on the Slashdot front page).
(By the way, for those who read off-line, here is a printable version [onlamp.com] of the linked Why Learning Assembly Language Is Still a Good Idea article in one piece.)
Why C isn't always fastest (Score:5, Insightful)
Superficially, that seems an obvious truth, but it doesn't necessarily hold in practice for several reasons:
In other words, with today's compiler technology, and more importantly today's run-time environments, C is no longer automatically the king of performance, and it is both theoretically and practically possible for much higher level languages to outperform even hand-optimised compiled C code.
Of course, the price you pay is the initial overhead for the JIT compilation process, usually when a program first loads. However, this is one area where rapidly increasing hardware speeds really tell, because faster hardware directly reduces the overhead of that bootstrapping process, so the playing field gets more level the faster hardware gets.
I expect traditional, compile-only technologies to fade into the background over time; in the programming language "performance vs. safety+power" spectrum, they aim at a target nobody will need to hit any more. There will always be a need for LLLs, if only to write the underlying platforms to support HLLs, but for regular application development, their days are numbered.
The Doctor Analogy (Score:3, Insightful)
I disagree that assembler is faster than compiled (Score:4, Insightful)
In theory, a really good assembler programmer could produce more highly optimized code, but not on a consistent basis and within schedule constraints.
I don't argue that assembler isn't useful. I learned more about how computers work when I took an 8080 assembler class in college. And for certain problem domains like embedded systems, assembler is often necessary. But I don't write any more code in assembler than absolutely necessary.
I think that's pretty much completely wrong (Score:3, Insightful)
In addition, games don't just compete on framerate. Games have to be very, very, very stable. If a word processor crashes, people seem to be able to live with it. If a game crashes just as you were about to rack up a new hiscore, then the player is going to be very cross indeed. The more assembly is used, the more unstable the game is likely to become.
Thirdly, there's the difficulty of time. Assembly would obviously take more time, for benefits that are pretty much negligible. Coding inner loops in assembly really wouldn't improve framerate by any great degree, and would just contribute to the overall expense and instability of the system.
There's no problem knowing assembly. In many cases, understanding how computers work is very useful. But coding in assembly on modern PCs/consoles, even for video games, isn't likely to have very good results.
3D optimisation is all about having good algorithms. Case in point: I designed a 3D OpenGL demo in C++. I then designed a similar demo in Python. The one in Python ran much faster, partly because I knew what I got wrong on the C++ demonstration, but mainly because in Python I could code better optimisations (various frustum culling and octrees were the main ones) in a much shorter space of time.
In C++/SDL I had to write algorithms for taking a bitmap and converting it into an array, and then creating a heightmap. In python/pygame, image loading was done for me, and creating a heightmap was quite simple. The rest of the time I used designing an octree interface for my terrain.
Cart Before Horse (Score:4, Insightful)
But I'd turn the premise around - I think that the best programmers learn how to code at the lowest level because they want to and are interested in it. Then they learn about both the benefits and complications (pipeline stalls, cache effects...).
But on another level, teaching assembler in college is increasingly difficult. Students in many CS programs are hard pressed to learn much more than Java and C# - very few know any language other than those in the C/C++/Java/C# family plus Perl and Python. Instead they learn all about GUI's, IDE's, .NET and so on.
I'd love to see students really learn assembly language (though ideally it would be for something other than the plug-ugly x86 series architectures), but then I'd also love to see them learn Lisp, Haskell and a few more languages, as well as Unix, Windows, VMS and a few more OS's, as well as HTML, XML, TeX and a few more ways to mark up information, as well as OpenGL, Postscript, X windows library calls and a few more graphics systems, as well as Calculus, Linear Algebra and a bit more math, as well as.... (well, you get the idea).
What assembly should be used for (Score:3, Insightful)
However, I think the only times assembly is really needed are the following (in a few cases, C++ will do the job almost as well):
I think it IS important to know how computers work - as in the CPU, registers, etc. But in most cases, it's simply not needed.
If you want to optimize, you can learn how to optimize code in your own language. For example, 2 * 2 * 2 will generally be much faster than calling a power function to compute 2^3. StrComp()-type functions are faster than If (UCase(string1) = UCase(string2)), etc.
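Both micro-optimizations named above translate into Python roughly as follows (a sketch of the pattern, not the VB functions themselves): repeated multiplication avoids a general-purpose power routine, and a single dedicated case-insensitive comparison avoids uppercasing both strings just to throw the copies away.

```python
def cube_by_mul(x):
    return x * x * x        # plain multiplies

def cube_by_pow(x):
    return x ** 3           # general-purpose exponentiation

def eq_nocase(a, b):
    """One dedicated comparison instead of uppercasing both operands."""
    return a.casefold() == b.casefold()

assert cube_by_mul(4) == cube_by_pow(4) == 64
assert eq_nocase("Hello", "hELLO")
```

Whether either saves measurable time in any given program is, of course, exactly what profiling is for.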
Efficiency is not the biggest problem (Score:3, Insightful)
As a user, I rarely care about efficiency. I'd much rather developers spent more time on:
* Usability
* Security
* Learnability
As a developer, I also rarely care about efficiency. I'd much rather developers spent more time making their code:
* Readable
* Maintainable
* Debuggable
I also agree with other comments that even if you think efficiency is important, assembly by itself does not help very much in understanding what's efficient. That's because so many other factors besides the lines of code impact how your program runs, such as:
* Compiler and runtime environment (e.g., JVM)
* OS implementation (e.g., scheduler, virtual memory management)
* CPU architecture (e.g., pipeline, cache, superscalar execution)
Would knowing assembly be better than not knowing it? Sure. But considering all the things that it would benefit a programmer to know, how important is assembly? For the vast majority of applications out there, I think assembly is not nearly as important as many other things.
Re:Because you can kill any 2.6.x kernel (Score:2, Insightful)
If only you had said "any Windows system", you would have gotten +5 insightful w/o a second thought.
one size fits all? (Score:3, Insightful)
The profession of programming has broadened to the point where it is IMPOSSIBLE to advocate a one-size-fits-all mantra to all programmers.
Computers have become so ubiquitous and the things they do so broad that the profession itself has splintered into many different subcategories each with their own INDIVIDUAL best practices.
So I think anyone trying to advocate a way of doing things should always qualify it by specifying the environment that they think suits it best.
Someone writing code for a PIC is going to approach it differently from a device-driver author, who is going to approach it very differently from a web programmer writing C# code to deliver web pages, or a game programmer doing real-time graphics, or someone writing an embedded app who is concerned about battery life, or someone writing a missile guidance system or medical instrument who is concerned about stability.
That's not to say that there aren't any universal best practices, but everything is a cost-benefit assessment. There are cases where the time investment in the optimization will not pay off with a measurable increase in efficiency that actually helps the bottom line to any degree. Ultimately the best programmers are those who are able to satisfy BUSINESS needs, not just their own perfectionism.
This is true of just about any form of creation. Someone building a cookie-cutter apartment building approaches it differently from the replacement World Trade Center or a tent or a treehouse. Take artwork. A comic strip artist is not going to lavish the same amount of detail on his daily strip that went into the Sistine Chapel. The work that goes into creating a Hyundai is different from what goes into a Bentley or a Hummer or a tractor or a motorcycle.
Always customize your approach to the niche you're in.