Why Learning Assembly Language Is Still Good

nickirelan writes "Why Learning Assembly Language Is Still a Good Idea by Randall Hyde -- Randall Hyde makes his case for why learning assembly language is still relevant today. The key, says Randall, is to learn how to efficiently implement an application, and the best implementations are written by those who've mastered assembly language. Randall is the author of Write Great Code (from No Starch Press)."
This discussion has been archived. No new comments can be posted.


  • by Nasarius ( 593729 ) on Friday June 11, 2004 @09:47PM (#9404128)
    Interesting stuff. It's not clear whether it's the Gentoo patches or the move to 2.4.26 that fixes the bug, though. Hang on, I'll check... I run Gentoo, but always with a vanilla kernel.

    Yep, it kills 2.6.6.

  • by Trigulus ( 781481 ) on Friday June 11, 2004 @09:48PM (#9404132) Journal
    Most of what I see come out of "computer science" curricula, including BS and MS holders, is woefully unskilled and sloppy. You don't want these people touching assembly. I learned assembly without even realizing it as a child with a Radio Shack Microcontroller Trainer: one of those old kits with all the spring terminals, except this one had a hex keypad for entering programs. I can't find a link to the product; it's too damn old.
  • Assembly language will always be needed to optimize certain types of algorithms that don't translate efficiently into C. Try writing an FFT algorithm in C on a DSP, and compare it to what can be done in native assembly. The difference can be an order of magnitude or more. Some processors have special-purpose modulo registers and addressing modes (such as bit-reverse) that don't translate well into C, at least not without extending the language. Fixed-point arithmetic operations are not supported in ANSI C either, but are a common feature on special-purpose processors.

    For low power/embedded applications, efficiency makes sense as well. Every CPU cycle wasted chips away at battery power. A more efficient algorithm means a smaller ROM size, and the CPU can either be clocked slower (can use cheaper memory and/or CPU) or put into a halted state when it isn't needed. (longer battery life) Coding small ISRs in assembly makes sense as well, as C compilers often must make worst case assumptions about saving processor context.

    That being said, only a fool would try to re-write printf or cout in assembly if they have a C/C++ compiler handy. Hand optimization is best used as a silver bullet for the most computationally intensive or critical functions.

  • Re:I disagree (Score:2, Informative)

    by no longer myself ( 741142 ) on Friday June 11, 2004 @09:58PM (#9404196)
    I'll put my crudely coded Javascript quicksort algorithm against your finely honed 100% assembly bubblesort algorithm any day.

    Depends on what you are sorting. Anyone will tell you that the quicksort algorithm works fastest when dealing with total randomness, but a properly designed bubblesort can quickly shake out a couple of minor swaps much faster than a quicksort.

  • Re:Which Platforms? (Score:4, Informative)

    by Archeopteryx ( 4648 ) <benburch.pobox@com> on Friday June 11, 2004 @10:01PM (#9404212) Homepage
    I would start with an emulated 8-bit microprocessor or microcontroller, such as the Z-8 or the 68HC908. This way they can run the emulation on a platform they already have, and such devices, embedded within ASICs, are the most likely target for a pure assembly effort anyway.

    Just a couple of years ago I did a fairly large 68HC908 application for a housekeeping processor entirely in assembler.
  • by Anonymous Coward on Friday June 11, 2004 @10:05PM (#9404232)

    Here they are. [intel.com]

    Or you may prefer AMD-64, here [amd.com].

  • by jay2003 ( 668095 ) on Friday June 11, 2004 @10:18PM (#9404288)
    "With C++ or at least a C compiler" And then your compiler generates bad assembly, and if you don't understand assembly, you can't figure out what the problem is. Compilers outputting incorrect assembly has happened to me several times.
  • by Chazmyrr ( 145612 ) on Friday June 11, 2004 @10:20PM (#9404295)
    In the real world, you generally don't write an application targeted to a specific CPU. You trust that your compiler is generally going to produce efficient machine code for your algorithm. Sometimes it won't, but if there isn't a performance problem with that particular section of code, it usually isn't worth the effort to do anything about it.

    The point of using a profiler to optimize your application is that usually you're going to identify a couple of key areas where you need to do some tuning, because your algorithm is not efficient regardless of the architecture on which it's running.

    Further, within the x86 family, different CPUs have different performance characteristics. The most efficient machine code on one may be the slowest on another. So to write the most efficient programs for x86, according to this guy's definition, I would end up having to implement run time CPU checking with different code paths depending on the result.

    Guess what? It isn't worth the additional development time unless the result is substantial. How do we know which improvements would be substantial? We profile the code.

    This guy needs to take a leave of absence from teaching and work as a programmer for a while before he shoots off his mouth on topics where he clearly has no clue.
  • by duffhuff ( 688339 ) on Friday June 11, 2004 @10:20PM (#9404297)
    I fully agree with the parent. Assembly coding is critical in embedded and DSP development.

    It's nice to have a compliant C/C++ compiler for your target device; some even have language extensions allowing you to take advantage of more specific features on the chip. However, when it comes down to speed and critical timing issues, assembly can't (yet) be beat.

    On some recent DSP projects we used the supplied C compiler to do simple things like control logic and other tasks, and it works wonderfully for that type of task, but the critical algorithms were all hand-tuned assembly. Often we would use the compiler output as a start and work from there; we would invariably blow the doors off it performance-wise, though the compiler did surprise us a few times.

    The biggest impact comes with tight timing restrictions. If you've only got 500 microseconds to get a sample done, or process a set of data, *and* you've got to do it in a very tight memory space with minimal power consumption, you've got to rely on your assembly language skills to work it out. It's gruelling work, but we can't all just buy faster chips or more memory and use the compiler.

  • by Dark Nexus ( 172808 ) on Friday June 11, 2004 @10:33PM (#9404354)
    You forget one thing. You're only talking about programming for a PC. Pretty narrow view.

    Considering that embedded processors, where every single bit of performance is important, outnumber PCs by a LARGE margin, it's still applicable.

    Sure, most of the code for them would be done in C as well, but those habits you gained from assembly would come in handy.
  • assembly as a game (Score:3, Informative)

    by j1m+5n0w ( 749199 ) on Friday June 11, 2004 @10:36PM (#9404369) Homepage Journal

    This is a bit of a digression, but I'd like to point out that not only is assembly useful to implement games, it can be a game in itself: the 7th annual International Functional Programming Contest [upenn.edu], CoreWars [koth.org] (though I don't expect them to displace the market share of UT2004).

    -jim

  • by nerdsv650 ( 175205 ) <nerdsd&nerdy1,com> on Friday June 11, 2004 @10:42PM (#9404391) Homepage
    Don't bother, my a**. This is the idiotic notion that has yielded Windows and all its associated bloatware. If you don't understand your machine, will you think to pad structures to cache line boundaries? Will you know to declare variables with stricter alignment restrictions before those with more relaxed restrictions? True, this may not always be portable from one architecture to the next, but the reality is that you'll at least know enough to locate the macros that yield the cache-line size and use them for structures that are likely to end up in arrays. Code written by someone with a strong assembly language background is less likely to fragment the heap, since they'll likely combine small allocations or endeavor to allocate equal-size data items. I could go on, and on, and on, but those who believe already do; those who don't are plain lazy. IMO, of course.

    -michael
  • Re:Debugging (Score:3, Informative)

    by Profane MuthaFucka ( 574406 ) <busheatskok@gmail.com> on Friday June 11, 2004 @10:43PM (#9404397) Homepage Journal
    http://www.datatek.net/Humor/Mel,%20the%20Programmer

    Or, just Google for Mel the Programmer and hit I'm feeling lucky.
  • by Venner ( 59051 ) on Friday June 11, 2004 @10:48PM (#9404427)
    Well, I'm a recent (May '03) computer engineering graduate. We programmed in assembly on MIPS, x86, Motorola, and several other architectures that aren't coming to mind at the moment.

    The CompE curriculum at my university was very electronics / hardware oriented, so there was quite a bit of asm to learn. Not to mention VHDL, etc, which is also quite valuable stuff to know. Nothing like building individual logic from scratch and putting it together to simulate a microprocessor. Then trying to implement it on a chip.
  • Re:it's a tradeoff (Score:1, Informative)

    by Anonymous Coward on Friday June 11, 2004 @10:53PM (#9404451)
    You sound like someone who has read Knuth. Write clean and then find the bottlenecks. But remember that Knuth wouldn't argue against learning assembly language, because to become a wizardly programmer you must do many things simultaneously: understand how the computer works and be able to think like it (digital electronics and assembly programming), know how to translate abstract ideas into code (discrete mathematics and high-level languages), know how to make something abstract enough to be extended gracefully, know how to document code so it can be understood, and know how to find bugs (a thorough understanding of the architecture -- OS, hardware, and application -- and an in-depth knowledge of memory operations, only taught by debugging assembly code). Assembly isn't taught so that you can learn tricks. It's taught so that you can have the knowledge to become a system expert if needed. This is similar to how most automotive mechanics don't know much about the metallurgical properties of the metals they are working on, but if you want to build a Formula 1 racecar you had better have one who does. Your company's VB monkeys don't need to know assembly, but it would be very nice to have at least one person who does.
  • Re:Counterpoint (Score:3, Informative)

    by Temporal ( 96070 ) on Friday June 11, 2004 @10:53PM (#9404452) Journal
    Actually, I left something out... the most common reason for slowness in any program is I/O. Reading from hard drives or from the network can be very, very slow. A good way to improve efficiency is to minimize these operations, even at the expense of added CPU usage. Of course, knowing assembly isn't going to help you with this one either.
  • by Brandybuck ( 704397 ) on Friday June 11, 2004 @11:13PM (#9404542) Homepage Journal
    No, you're the one without a clue. Read his article. He is saying that it is useful to *learn* assembly, not that it is useful to use assembly for everything you write.
  • Oklahoma State University seems to be at least phasing it out of some of their programs. OSU-Okmulgee doesn't even teach it at all. How they plan on truly educating their students in their new security training when they don't know how code works at the assembly level I have no idea.
  • by Anonymous Coward on Friday June 11, 2004 @11:24PM (#9404607)
    I fail to see how Java byte code is not a "true assembly language." It has been implemented in hardware to various degrees, after all.
  • embedded code (Score:3, Informative)

    by EmbeddedJanitor ( 597831 ) on Friday June 11, 2004 @11:35PM (#9404652)
    It is close to impossible to write embedded code without using at least some assembler. All C programs - err well just about -- need some assembly code to set things up before calling main().

    Another reason why assembler will always be useful is that many CPUs have valuable instructions that have no C equivalent. Therefore, to use these instructions one typically needs to write assembler.

    Frequently it is simpler to write a function that accesses hardware directly in assembler than it is to write it in C. With C the compiler will often jerk you around and optimise away specific behaviour that you want.

  • by johnnyb ( 4816 ) <jonathan@bartlettpublishing.com> on Friday June 11, 2004 @11:49PM (#9404707) Homepage
    You need to know the low-level stuff for a few reasons:

    1) You are a programmer, and knowing how the computer functions is your job

    2) Many of the high-level constructs are better understood when you know what it is they are trying to abstract. It will also keep you from doing stupid things like making everything in Java a BigNum or whatever that is.

    3) The ideas of references and pointers are a lot fuzzier for programmers who never learned assembly language. The difference between a pointer and a value is harder to grasp.

    4) Debugging is a lot easier when you know assembly language, because you know how the parts fit together. You understand what a calling convention is, you understand how memory mapping works, you understand how the stack works - you can just see the whole picture of how the machine is processing your data.

    There are even some optimizations you can still do in higher-level languages that you get from knowing assembly language. For example, in C, the first member of a struct is accessed faster because the compiler can do straight indirect addressing rather than base-pointer addressing. It might also convince you to rewrite your loops so they have a better chance of fitting entirely into the instruction cache. But even without these things, knowing assembly language is useful for the four reasons I outlined above. It's also useful for people who are having trouble learning to code, because it forces them to think on a much more exacting, step-by-step, concrete level.
  • by Hemlock Stones ( 636570 ) on Friday June 11, 2004 @11:56PM (#9404744)
    Which "real world" are you talking about? The real world sitting on a desktop or the real world in a modern fighter jet that can't be flown unless computer software is used to keep it stable? Where time concerns are critical and every last machine cycle counts. Welcome to the "real world" of hard real-time computing. Where a slip in time costs lives. Profilers can't be used here because they increase execution time.

    Or how about in the embedded "real world" with 64K (that's 65,536) bytes (or less) of program code space and 512 bytes (or less) of RAM? Believe it or not, there are literally thousands of these computers for every desktop computer on earth. Where not only time efficiency but also cost, power consumption (lower-speed processors), and code and data efficiency are critical. Profilers aren't much use here.

    It may also surprise you to know that some compiled programs do run-time CPU type checking and use different code paths depending on the results, for exactly the reasons you give. A profiler can't always help produce code that is efficient enough on all the variations of the x86 CPU architecture. Few programs need this level of efficiency, but when they do, there are few alternatives.

    And, last: no compiler can produce really good optimized code without a programmer with an intimate knowledge of the instruction set and hardware architecture of the processor the compiler is producing machine instructions for.

    Maybe you need to learn more about "other" types of "real world" computer software uses before you shoot your foot off talking about subjects you clearly do not know enough about.
  • Re:Debugging (Score:3, Informative)

    by adamruck ( 638131 ) on Saturday June 12, 2004 @12:25AM (#9404864)
    in a word... no

    I can write two programs to do the same thing... one that takes a second, one that takes a year. Both written in C, of course. If optimized C takes a year and Perl takes a second... there ya go... counterexample.

    Of course I would have to see it to believe it.

  • JIT? (Score:4, Informative)

    by magnum3065 ( 410727 ) on Saturday June 12, 2004 @12:41AM (#9404922)
    I suggest you watch this presentation [sourceforge.net] on the Psyco just-in-time compiler for Python and do some research on the Transmeta Crusoe processor to learn about run-time optimization.
  • by Methuseus ( 468642 ) <methuseus@yahoo.com> on Saturday June 12, 2004 @12:42AM (#9404926)
    And your point has nothing to do with assembly in particular. You are basically saying to use object oriented programming practices. This is not something that assembly will teach you better than any other programming language.
  • by Anonymous Coward on Saturday June 12, 2004 @12:46AM (#9404941)
    There are many so-called rom-hackers out there who would love someone like you to tutor them in the ways of 6502 assembly for hacking console games that use the 6502 processor, such as the NES, 2600, TG16, etc. I've dabbled in 6502 asm for a while now, making demos and reverse engineering NES roms, and you really do get an intimate feel for the computer/console you're working with because of assembly language. Anyways, there are many hungry people out there in the console dev/rom-hacking scene wanting your help...

    -- HighT1mes
  • by Leeji ( 521631 ) <slashdot&leeholmes,com> on Saturday June 12, 2004 @01:01AM (#9404978) Homepage

    I wrote a C# performance comparison tool [leeholmes.com] to help me in this respect. When you're trying to optimize a hot-spot in your program, you can click on the "ILDasm Result" tab to see how .Net compiles it down to its Microsoft Intermediate Language (MSIL) representation.

    I've also got the book, "Inside Microsoft .Net IL Assembler" [amazon.com] that's very helpful. For whatever reason, you can get it for like $2 + shipping used from Amazon.

  • Re:Debugging (Score:3, Informative)

    by damiam ( 409504 ) on Saturday June 12, 2004 @01:06AM (#9404993)
    Huh? When you write a Perl program, it's run by an interpreter, which is written in C. Your argument would make sense if assembly was an interpreted language, which it's not (Bochs, VPC, and JVMs excepted).
  • Re:Counterpoint (Score:3, Informative)

    by Reziac ( 43301 ) on Saturday June 12, 2004 @01:16AM (#9405030) Homepage Journal
    ...when was the last time you thought "This word processor just doesn't respond to my keypresses fast enough."

    Erm... today? Seriously. Today.

    See, I had this mere 4 megs of actual data stored in 48,000 records, that happened to come to me as 1500 pages of fairly clean HTML tables (originally in about 50 separate files), and I needed to convert it all to a single file in comma-delimited format.

    It choked every modern program I tried to get to handle it -- and that was while it was still in much smaller chunks of half a meg or less. The only Win32 app it didn't crash outright was EditPad, and even that worthy got a bit laggy on the needful search and replace.

    I wound up doing most of the conversion by hand, in ... are you ready for this?? WordPerfect 5.1 for DOS. In about 2 minutes to S&R the table structures to CSV, and 5 seconds to sort the records. And no editing lag at all.

    As it happens, WP5.1 was written in ASM.

    Well, you asked :)

  • Re:ARGH (Score:3, Informative)

    by Bastian ( 66383 ) on Saturday June 12, 2004 @02:04AM (#9405163)
    I want everyone who thinks this to go out and play the Mac version of Halo. Notice how amazingly slow it is. This isn't because PPC is a slow architecture - this is because the people who ported the program didn't pay attention to the underlying architecture they were porting to.

    I'm not saying everyone has to be a hardcore assembly programmer, but I am saying understanding assembly and the underlying architecture on which you are developing software can make a huge difference. The fact of the matter is, an optimising compiler is not a magic bullet any more than anything else is a magic bullet. The optimiser is great for doing the real low-level instruction shifting type optimizations that a lot of people think are the real optimizations.

    But the optimiser can't help you with poor data structure design, poor memory access patterns, or any other of a host of high-level decisions that still have a lot to do with architecture.

    It's not about hand-optimised assembly anymore, anyway - at least, not if you're working on a beefy architecture with out-of-order execution and all that. But you still need to have a general idea of what kinds of things your particular computer does well and what kinds of things it drags ass on, otherwise you can still end up writing your own little version of Halo PPC.
  • by Anonymous Coward on Saturday June 12, 2004 @02:44AM (#9405291)
    Moderators: Please note that "twitter" is a known fanatical sycophant whose obnoxious offtopic rants are legend here on Slashdot. It doesn't matter what the topic is, he'll find a way to scrape in some pointless Microsoft bashing. While nobody expects us to love Microsoft in any way, his particular style of calling anyone he replies to "troll" or "liar" or "fanboy" because he happens to disagree with whatever they're saying is well documented and should not be rewarded. If anything, twitter is the type of person that should not be part of the open source/free software community. He is anathema to all that is good about free software.

    I'm posting this so that you (the moderator) have some context to consider twitter and not mod him up whenever he posts his filler preformatted rants about installing Knoppix or whatever that unfortunately get him karma every single time and allow him to continue posting his trademark toxic crap (read on) day in and day out. You may consider this a troll - I consider it community service. And I ain't kidding.

    If you're a /. subscriber, I invite you to look through some of his posting history [slashdot.org]. I guarantee that you'll be hard pressed to find someone that is more "out there" than twitter. You'll also probably notice he's got quite an AC following. Don't just read his posts, make sure you go through the replies.

    To get an idea of what I'm talking about, check this [slashdot.org] post out. I mean, this is an article about email disclaimers, right? The parent of the post is complaining about the ads in the linked page and so on, and twitter actually goes off on a rant to blame it on Microsoft and recommend Lynx. WTF?

    Here's another. In this post [slashdot.org] twitter not only calls the OP a troll but attempts to "tell it like it is" while making some vague argument about "GNU". Yes, if you're confused, you're not alone. The reply (modded +4) proceeds to simply destroy his bogus argument. You will notice he did not reply. This is what some people call "drive-by advocacy". A sort of I'll just leave you with my thoughts here and move on to the next flamebait kind of deal. In fact, he almost never replies because he knows that his fanatical arguments simply do not hold up to any sort of discussion. It's not that he's chosen the wrong cause - he's just going at it in a completely wrong way.

    More? Just read though this [slashdot.org] post and the subsequent replies. I guess this stands on its own. Or this [slashdot.org]. Or this [slashdot.org].

    More? Bad spelling in astounding conspiracy theories [slashdot.org], more [slashdot.org] offtopic [slashdot.org] FUD [slashdot.org] and uninformed "I'm right, look at me" rants [slashdot.org], promptly proven wrong. Worse even, twitter wants to be RMS [slashdot.org], apparently [slashdot.org] (that first one is a winner). I mean, really [slashdot.org]. You think [slashdot.org]?

    FUD [slashdot.org], FUD [slashdot.org], FU [slashdot.org]

  • by trisweb ( 690296 ) on Saturday June 12, 2004 @03:54AM (#9405465) Journal
    I find the opposite is true... in fact, a class on machine structures is required at Berkeley, and our professor is very enthusiastic about teaching assembly. In fact, he probably made most of the points in this article over the semester. So, "No one really needs to know this stuff, and you'll never use it, but it is required by this program so we've got to struggle through the next several weeks studying this material." was not what I experienced at all. We were thoroughly taught how a processor runs our code, how the processor itself works, how to optimize machine code, etc. Great class.
  • by drolli ( 522659 ) on Saturday June 12, 2004 @04:08AM (#9405490) Journal
    Actually programming in Jython, Java & Matlab, some of the principles which I learned while programming on a C128 and an 8051 with 128 bytes, and in C on Linux, are the following:
    • Never Copy - if not necessary (be careful with the "=" operator in C++ and other high-level Lang.)
    • Aggregate Information as soon as possible
    • Preserve Locality
    • Save Memory always - your Computer has lots of RAM, but little Cache.
    • Be careful with 2D arrays and similar structures, and consider the cache organization
    • Try to use Regular automata wherever appropriate instead of more complex structures.
    Some more important non-algorithmic ideas?
  • Re:Counterpoint (Score:3, Informative)

    by gnu-generation-one ( 717590 ) on Saturday June 12, 2004 @04:23AM (#9405524) Homepage
    "With the incredible power provided to us by modern CPU's, efficiency is just about completely irrelevant for 99% of non-game applications."

    And this is why your computer still takes a minute to boot-up, despite being 8 million times faster than a 1980s computer that boots in 6 seconds.

    "Think... when was the last time you thought "This word processor just doesn't respond to my keypresses fast enough."

    Last time I loaded Microsoft Word. It takes about 10 seconds to start on my 2GHz machine at work.

    "The reason why these programs aren't getting "faster" (as the article complains) is because there is no way to do so. They spend 99.9% of their time waiting for user input already."

    So tell me why KPPP (a dial-up network program with just 3 buttons) takes 15 seconds to load in my WindowMaker desktop, or why KDE itself takes almost 25 seconds to load? Waiting for my user-input, is it?

    "When that added efficiency does not lead to any noticeable benefit to the user, why do it?"

    Everyone here has used fast applications and slow applications. The fast applications feel nice to use and let you get stuff done. The slow applications are annoying and frustrating and difficult to use. I'd certainly count speed (and memory footprint) as a benefit to the user.
  • by Anonymous Coward on Saturday June 12, 2004 @05:55AM (#9405752)

    Understanding how a computer works is more important than learning to write in assembly language. In fact, assembly code has simpler constructs than most high level languages. The perceived complexity of assembly language is due to the tedium of programming solutions to even simple problems.

    A good start then is to learn how a computer works by reading "Code" by Charles Petzold or "Computer Organisation & Design: The Hardware/Software Interface" by Hennessy and Patterson. With such an understanding you will know when to write in assembly instead of a high-level language. When you ought to write code in assembly, grab an instruction set reference manual for your architecture and/or a tutorial on how to interface C/C++ functions with assembly, and you are off.

    Assembly language is surely on the way out. It becomes irrelevant in distributed, cluster, or grid computing. How will you optimise if your cluster is made of heterogeneous processors? A variant of the problem has already cropped up with modern processors supporting different kinds of SIMD instructions, leading to messy template-switched code. Optimisation then shifts strongly towards choosing efficient algorithms rather than fiddling with assembly. In any case, code generation should be the headache of the compiler, where it's better solved than each application trying to do it on its own.

  • by RPoet ( 20693 ) on Saturday June 12, 2004 @06:27AM (#9405816) Journal
    Yes, here [theaimsgroup.com].
  • by EsbenMoseHansen ( 731150 ) on Saturday June 12, 2004 @07:06AM (#9405894) Homepage
    Misquoting is not nice. The actual quote was
    This exploit has been
    reported used to take down several "lame free-shell providers" servers (this is illegal in most parts of the world and strongly discouraged)
    Emphasized left as in the original.
  • Re:Which Platforms? (Score:3, Informative)

    by TheRaven64 ( 641858 ) on Saturday June 12, 2004 @07:41AM (#9405949) Journal
    Why use a real machine at all? In my second year as an undergrad we were expected to build an optimising compiler for a simple language. The output for this compiler was assembly for a very simple virtual machine (the Tiny Machine from Kenneth C. Louden's compilers book. Here's the source code [swan.ac.uk] if anyone wants to play with it). I think this (combined with exposure to things like URMs and courses on microprocessor design) gave a better grounding in efficient program design than learning any real assembly language.

    If I had learned assembly as a child, it would have been 6502 or Z80; as an undergrad, it would have been x86 (or a small subset of x86 assembly, since it really is horrible). Now most of the coding I do is on PowerPC, so none of them would have been any use, and they would have taught me misleading things (for example, register allocation is horrible on x86 and context switches are very expensive; on PowerPC most of the registers are general purpose and context switches have about the same overhead as function calls).

    I doubt that learning a real assembly language is much use (unless you learn PIC assembly, or similar and then develop embedded systems), but using something like the Tiny Machine is very good experience (and since it runs on about any platform with a C compiler and a libc it eliminates hardware constraints).

  • by bro1 ( 143618 ) on Saturday June 12, 2004 @08:35AM (#9406083) Homepage
    As far as I understood from the article the author does not advocate writing software in ASM. He is merely advocating learning ASM in order to understand how to write efficient programs in high level languages.

    Which does not actually make much sense for virtual machine based languages such as Java and C#.
  • I agree, kinda (Score:2, Informative)

    by nicnak ( 727633 ) on Saturday June 12, 2004 @08:45AM (#9406110)
    I have seen too many people wasting their time reordering code and coming up with obscure ways of doing an if statement that might use a few less CPU cycles.

    But I still think in order to program efficient code you must know assembly. The best courses I took were about compilers. They let you know exactly how the compiler will optimize the assembly you know, so you know what is worth optimizing in a high-level language.

    For example, a lot of the things people do, like rearranging local variable instantiation, have absolutely nothing to do with how the machine code is actually written. The first thing a compiler does is translate your code into a meta-code that has the exact same syntax style everywhere. All the variable declarations are moved to the front; every for, while, and do-while is turned into a while. All the code is normalized into a functional equivalent.

    Then the code is optimized: if statements may be inverted to their logical opposite, extra variables eliminated, and code rearranged to increase parallelism.

    If you write code with this in mind you focus on the problems that matter, like whether a loop is O(n-squared) or just O(n).

    But in order to know what to optimize in code, you still need to know assembly.
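    A minimal C sketch of that complexity point: both functions below compute the same prefix sums, but the first recomputes each sum from scratch (O(n²)) while the second carries a running total (O(n)). No amount of cycle-shaving on the first one beats fixing the algorithm.

    ```c
    #include <assert.h>
    #include <stddef.h>

    /* O(n^2): recomputes the sum from scratch for every prefix. */
    static void prefix_sums_slow(const int *a, long *out, size_t n) {
        for (size_t i = 0; i < n; i++) {
            long s = 0;
            for (size_t j = 0; j <= i; j++)
                s += a[j];
            out[i] = s;
        }
    }

    /* O(n): carries a running sum from one element to the next. */
    static void prefix_sums_fast(const int *a, long *out, size_t n) {
        long s = 0;
        for (size_t i = 0; i < n; i++) {
            s += a[i];
            out[i] = s;
        }
    }

    int main(void) {
        int a[] = {3, 1, 4, 1, 5, 9};
        size_t n = sizeof a / sizeof a[0];
        long slow[6], fast[6];
        prefix_sums_slow(a, slow, n);
        prefix_sums_fast(a, fast, n);
        for (size_t i = 0; i < n; i++)
            assert(slow[i] == fast[i]);  /* same answers, very different cost */
        return 0;
    }
    ```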

    -nicnak
  • Re:Debugging (Score:4, Informative)

    by prisonernumber7 ( 540579 ) on Saturday June 12, 2004 @01:26PM (#9407385) Homepage
    This is called a heisenbug [astrian.net], in case you are wondering. They occur mostly due to a smashed stack and are indeed damn hard to track down.

    You can of course use assembly to track the bug down, but I find that tedious. If you are programming in plain C (and not C++), you can run lint, a tool that statically checks source code, early and often. When lint reports no more possible problems, you are done.

    If you happen to use C++ you'll probably have to shell out big bucks for a linter, or be out of luck, because only commercial C++ linters are available.

    Still, that's why I always keep a Linux system with valgrind [kde.org] available, which is, among other things, a memory debugging tool (unfortunately valgrind does not work on any of the BSDs). Valgrind will scream and give a stack backtrace when your program does something wrong, be it an off-by-one error, memory being read uninitialized, or whatever. A truly brilliant tool.

  • Gibson Research (Score:2, Informative)

    by bluyonder ( 643628 ) on Saturday June 12, 2004 @01:59PM (#9407562)
    For more on the joys of Assembly programming, you should go see Gibson Research [grc.com].
  • Re:Debugging (Score:3, Informative)

    by Elladan ( 17598 ) on Saturday June 12, 2004 @02:39PM (#9407805)
    1) A race condition.

    Debuggers are useless for this, since they perturb the timing and execution order of the program. Only some hardware-level debuggers running on external machines are any use here.

    The normal way to find these is print statements and thinking.
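    For illustration, a minimal POSIX-threads sketch of the classic case: several threads increment a shared counter. Remove the mutex calls and the read-modify-write races, so updates get lost and the final count is unpredictable; with them, the result is deterministic.

    ```c
    #include <assert.h>
    #include <pthread.h>

    #define NTHREADS 4
    #define ITERS    100000

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Each thread bumps the shared counter. Without the mutex, the
     * increment is a racy read-modify-write and updates get lost. */
    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < ITERS; i++) {
            pthread_mutex_lock(&lock);
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[NTHREADS];
        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(t[i], NULL);
        assert(counter == (long)NTHREADS * ITERS);  /* deterministic with the lock */
        return 0;
    }
    ```

    (Compile with `-pthread`.)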

    2) A deadlock.

    Debuggers are useful here. But you can debug a deadlock with print statements by putting a timeout in your lock primitives that dumps the call stack. That's useful if you don't have a debugger, for example if your software is deployed somewhere and you need to debug the production system.
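    A minimal sketch of that timeout idea, assuming POSIX `pthread_mutex_timedlock` (here it just prints a warning rather than dumping a full call stack): one thread holds the lock too long, and the other gives up with ETIMEDOUT instead of hanging forever.

    ```c
    #define _POSIX_C_SOURCE 200809L
    #include <assert.h>
    #include <errno.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Try to take the lock, but give up (and complain) after `secs`
     * seconds instead of hanging forever: a cheap deadlock detector
     * you can leave in a deployed production system. */
    static int lock_with_timeout(pthread_mutex_t *m, int secs) {
        struct timespec deadline;
        clock_gettime(CLOCK_REALTIME, &deadline);
        deadline.tv_sec += secs;
        int rc = pthread_mutex_timedlock(m, &deadline);
        if (rc == ETIMEDOUT)
            fprintf(stderr, "possible deadlock: lock held too long\n");
        return rc;
    }

    static void *holder(void *arg) {
        (void)arg;
        pthread_mutex_lock(&lock);
        sleep(3);                 /* simulate a thread that never lets go */
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, holder, NULL);
        sleep(1);                               /* let the holder grab the lock */
        int rc = lock_with_timeout(&lock, 1);   /* times out, doesn't deadlock */
        assert(rc == ETIMEDOUT);
        pthread_join(t, NULL);
        return 0;
    }
    ```

    (Compile with `-pthread`.)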

    3) A performance anomaly in I/O bound code.

    Debuggers are essentially useless here. Since the code is I/O bound, you probably want logging and possibly some sort of profiler code, if you don't think it's a network issue.

    4) A stack-smasher.

    Debuggers tend to be useless here since the stack gets shredded before the program trips an exception. You might be able to find some stack information with a lot of effort, but it's sketchy.

    Try logging instead - you'll hopefully be able to localize where the error comes from. At that point, maybe the debugger can help, but usually you can just look at the code and see the error.

    5) Heap corruption.

    Debuggers are mostly useless here. What you want is a heap debugging library. Writing your own is usually best, since you usually need custom hacks to track down the source of the corruption.
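    As a hypothetical minimal version of such a library: the wrapper below (names `dbg_malloc`/`dbg_check` are made up for this sketch) puts a canary word on each side of every allocation, so an overrun can be detected by checking the canaries rather than waiting for a crash.

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define CANARY 0xDEADBEEFu

    /* Allocate n bytes with a canary word before and after the user region. */
    static void *dbg_malloc(size_t n) {
        unsigned char *p = malloc(n + 2 * sizeof(uint32_t));
        if (!p) return NULL;
        uint32_t c = CANARY;
        memcpy(p, &c, sizeof c);                         /* front canary */
        memcpy(p + sizeof(uint32_t) + n, &c, sizeof c);  /* rear canary  */
        return p + sizeof(uint32_t);
    }

    /* Return 1 if both canaries are intact, 0 if the block was overrun.
     * (A real library would also record the size and call site itself.) */
    static int dbg_check(void *user, size_t n) {
        unsigned char *p = (unsigned char *)user - sizeof(uint32_t);
        uint32_t front, rear;
        memcpy(&front, p, sizeof front);
        memcpy(&rear, p + sizeof(uint32_t) + n, sizeof rear);
        return front == CANARY && rear == CANARY;
    }

    int main(void) {
        size_t n = 8;
        char *buf = dbg_malloc(n);
        assert(dbg_check(buf, n));   /* fresh block: canaries intact */
        buf[n] = 'X';                /* off-by-one write past the end... */
        assert(!dbg_check(buf, n));  /* ...tramples the rear canary */
        free(buf - sizeof(uint32_t));
        return 0;
    }
    ```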

    6) The, uh, printf family of functions.

    Heh. write(2)

    I program for a living, and I hardly ever use a debugger. For that matter, I hardly ever *have* a debugger. I do recognize that this is somewhat unusual though - most programmers do more apps work, where debuggers are common.

    The ironic thing is, in systems level work, I really need to understand assembly. ;-)
