Hacker's Delight

Ben Olmstead writes with the review below of Henry S. Warren's Hacker's Delight, which is not about tricking folks into providing sensitive information, but rather about how to cleverly manipulate computers into doing more work on their part with less work on yours. Read on for his brief review.
Hacker's Delight
author: Henry S. Warren Jr.
pages: 320
publisher: Addison Wesley Professional
rating: Excellent
reviewer: Ben Olmstead
ISBN: 0201914654
summary: Collected Tips & Tricks for Programmers

Hacker's Delight is an impressive compendium of clever tricks for programmers. Warren concentrates on micro-optimizations -- few of the tricks in this book operate on more than 3 or 4 words of memory -- and he displays an impressive knowledge of diverse computer systems in the process.

Who Should Read This Book

Hacker's Delight is hardcore in its presentation and subject matter. I would not recommend this for a beginning programmer -- fully understanding the material requires at least some knowledge of concepts such as assembly and machine language. However, anyone who writes performance-critical software should read this book, even if they do not plan to write assembly code, both to learn the tricks given and to learn the concepts behind them.

What's Good

The book is organized into chapters of related tricks. Within each chapter, Warren presents a few techniques that perform related tasks -- for example, in Chapter 3, he presents tricks for rounding (up or down) to the next power of 2, rounding to a multiple of a known power of 2, and detecting power-of-2 boundary crossings (e.g., checking whether an access crosses a page boundary). For each trick, he discusses why it works, whether the technique is generally applicable, related tricks which might be better in specific situations, and where a trick might be used in the real world.
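To give a flavor of the genre: rounding up to the next power of 2 can be done entirely without branches by smearing the highest set bit rightward. (This is the classic formulation for 32-bit words; a version of it appears in the book as clp2.)

    unsigned clp2(unsigned x)    /* round up to a power of 2 */
    {
        x = x - 1;       /* so an exact power of 2 maps to itself */
        x |= x >> 1;     /* smear the highest set bit...          */
        x |= x >> 2;
        x |= x >> 4;
        x |= x >> 8;
        x |= x >> 16;    /* ...into every lower bit position      */
        return x + 1;    /* all-ones below the top bit, plus one  */
    }

Note the edge cases: x = 0 and x > 2^31 both wrap around to 0. Warren discusses exactly this kind of boundary behavior for each trick.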

Warren keeps his discussion architecture-neutral, while noting architecture-specific optimizations and pitfalls for individual tricks -- in the process, he displays a vast array of knowledge about specific processors, from 1960s mainframes to x86, MIPS, PPC, Alpha, and others. He also skims the surface of hardware-design issues in a few places -- for example, he devotes a page or two to explaining why computers use base 2 for arithmetic, and why this is the most efficient choice.

What's Bad

This is an extremely dense book, and there are sections which are difficult to understand. Furthermore, there are many tricks which, while interesting, would be difficult to apply to real-world applications, and use of these tricks does violate the Keep It Simple, Clock Cycles Are Cheap And Someone May Have To Understand Your Code philosophy which is harped upon so heavily (not without reason) in modern software design. However, someone writing a compiler or high-performance code may feel that the benefit outweighs the potential risk.

The Summary

If you want a better understanding of the hardware on which your code runs, or you need to squeeze clock cycles, or you just enjoy seeing clever tricks, this is an excellent book. If you primarily use high-level languages such as VB, Perl, Python, etc., this may not be the right book for you. Be prepared for very dense material.


You can purchase Hacker's Delight from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.

This discussion has been archived. No new comments can be posted.
  • by Gortbusters.org ( 637314 ) on Thursday January 16, 2003 @10:49AM (#5094639) Homepage Journal
    Almost as interesting as those lovely discrete math textbooks were. This sounds more like 'Optimizer's Delight.'

    To be honest, 'Hacker's Delight' sounded more like a cookbook title.
    • It also sounds like an interesting song:

      I said a hip hop the hippie the hippie
      to the hip hip hop, a you dont stop
      the hack it to the bang bang boogie say up jumped the boogie to the rhythm of the boogie, the beat
    • "To be honest, 'Hacker's Delight' sounded more like a cookbook title."

      It'll only be a cookbook title if the hacker's doing something to tweak an AMD processor, and manages to deep fry something in the computer. :)

    • To be honest, 'Hacker's Delight' sounded more like a cookbook title.

      By Jeffrey Dahmer?

Much of the basic material in this book was previously published in 1996 under the more mundane title of:
      The PowerPC Compiler Writer's Guide
      ISBN 0-9649654-0-2
      edited by Steve Hoxey, Faraydon Karim, Bill Hay and Hank Warren.

      That book obviously had no qualms about targeting a narrow audience and it served its purpose well.

      When Sun was awarded patent #6,073,150 it had a familiar ring. Sure enough: figure 3-25 on page 50 of "PowerPC Compiler Writer's Guide". Only the patent was awarded in 2000 and the Guide was published in 1996. That was about where I lost respect for the patent process.

  • omg (Score:1, Funny)

    by Anonymous Coward

    i said a hip hop a hippie the hippie
    to the hip hip hop, a you dont stop
    a rock to the bang bang boogy say upchuck the boogy,
    to the rhythm of the boogity beat.

    now what you hear is not a test, i'm hacking to the motherfuckin beat
    and me, rob malda, and the rest are gonna try and move your feet
    see i am timothy and i'd like to say hello
    to the ACs, freaks, and logged-in kooks, and all the goatse trolls
    but first i gotta bang bang the boogie to the boogie
    say up jump the boogie to the bang bang boogie
    let's rock, you dont stop
    rock the rhythm that will make your body rock
    well so far youve heard my voice but i brought two friends along
    and next on the mike's my man hemos
    come on, hem, sing that song
  • Dubious value? (Score:5, Insightful)

    by SuperMario666 ( 588666 ) on Thursday January 16, 2003 @10:49AM (#5094648)
    Furthermore, there are many tricks which, while interesting, would be difficult to apply to real-world applications.

    Maybe you should break open the old CS textbook instead. IMO, learning general principles would be a much better use of your time.
    • Re:Dubious value? (Score:5, Insightful)

      by KDan ( 90353 ) on Thursday January 16, 2003 @11:19AM (#5094911) Homepage
Seriously depends what you're doing. If you're writing the next enterprise application, sure, optimization tricks are not really your main concern... If you're writing a game engine, though...

I remember back when I was younger and had much more free time (*longing sigh*) I spent most of a term and a summer writing a 3D Wolfenstein-like engine, mostly under the careful instruction of a book: Tricks of the Game Programming Gurus [amazon.com]. The book was great, and though it gave some optimizing ideas here and there, the resulting engine was very slow (esp. compared to the wolf3d engine, which was so perfectly smooth... and the engine I made didn't even do monsters and doors and items). So then I turned to another book I had, called "PC Interdit", which was written in French and oriented towards Pascal rather than the C I was using, but explained a number of optimization tricks which made all the difference (examples: page flipping in Mode X instead of double-buffering in mode 13h, basics of coding fast assembler functions to optimize C functions, etc). Before using that book's advice, my engine would run at something like 10 fps on my 486DX4 100 MHz in turbo mode, and 1 fps more or less without turbo mode... After the optimizations, it ran very smoothly in turbo mode and at least 5-6 fps in non-turbo.

      So if you're programming a game engine, those books are really really useful. Or in fact if you're programming anything where squeezing every tiny bit of performance is critical. If you're programming a J2EE servlet engine, though, then for sure, it's a waste of your time.

      Daniel
Not really. I worked on several "Enterprise" applications where shaving time off algorithms was vital to the competitiveness of the application. Graphics tricks are somewhat more specialised, but look inside a real flight simulator and you will see how the programmers try to get everything to work as fast as possible (they work at the limits of the hardware).

The real problem for commercial users is that this level of optimisation is dirty, i.e., difficult to test and maintain. However, it is usually only needed for a very small percentage of the application.

Ahh, it's good to see the great old habits are still alive. I had a C-64 and would rewrite parts of EA's graphics routines to increase their speed, then work on cool stuff for the C-64.

To the outside viewer, code optimizing is more likely to be viewed as magic than anything else. Very few people I knew could reduce code on the 6502 like I could, and those few I met back then (1983, maybe '84, onwards) were highly gifted and made optimizing code seem like magic.

Onepoint
  • The author.. (Score:5, Informative)

    by J x ( 160849 ) on Thursday January 16, 2003 @10:52AM (#5094670)
    I know the author well. Here's some background for you slashdotters who may doubt his expertise:

    Henry S. Warren, Jr., has had a forty-year career with IBM, spanning from the IBM 704 to the PowerPC. He has worked on various military command and control systems and on the SETL project under Jack Schwartz at New York University. Since 1973 he has been with IBM's Research Division, focusing on compilers and computer architectures. Hank currently works on the Blue Gene petaflop computer project. He received his Ph.D. in computer science from the Courant Institute at New York University.
    • Re:The author.. (Score:2, Flamebait)

      by derch ( 184205 )
      Dude, you ripped that background from the BN site.

      If you really know him, write up your own background.
  • by Jethro On Deathrow ( 641338 ) on Thursday January 16, 2003 @10:54AM (#5094705) Journal
...is the exact opposite of afternoon delight, I would imagine.
  • I said a hip, hop, hippy, hippy to the hip hop hacking you don't stop a hacking until the bang bin boogie said backslash the boogie to the rhythm of the boogity beat..

    What you hear is not a test, I'm hacking to the beat. And me, the compiler, and my code are gonna start to move your screen.

    See, I am das MB and I'd like to say hello
    To the linux loners and the mac fairys and the losers on windows.

    But first I gotta..bang slash bin slash P E R L said hack kernel yes hack hack the kernel until the whole machine runs like hell.

    Proper.
    • Offtopic? I think not. You can't review a book called "Hacker's Delight" and not have somebody do a bad parody of "Rapper's Delight." It's a given.

      But I suppose I can't expect your average slashdot moderator to understand the great works of old school hip hop.
      • From dasmegabyte:
        Offtopic? I think not. You can't review a book called "Hacker's Delight" and not have somebody do a bad parody of "Rapper's Delight." It's a given.

        But I suppose I can't expect your average slashdot moderator to understand the great works of old school hip hop.

        Yeah! and I thought it was pretty good ... but then they up and made your post offtopic as well. That doesn't make any sense either. These moderators are relentless ... or maybe it's just one moderator who just can't stand to see anything about that song. Damn them youngsters with their foul-mouthed rap music! Why back in my day...

    • love the reference

know it's not in the spirit of rapping to revise, but how 'bout a slight change to make a better rhyme in the second line

      What you hear is not a test, I'm hacking to the beat. And me, the compiler, and my code are gonna
      give your cpu some heat
  • by Anonymous Coward
This is a great book for those looking to expand their minds beyond the usual low-level-phobic computer science pulp. I do not employ any of the book's techniques in my code, but I'm glad to know of them.
  • Sounds cool (Score:4, Interesting)

    by photon317 ( 208409 ) on Thursday January 16, 2003 @11:02AM (#5094777)

Sounds like he knows his stuff. The world needs more asm-aware programmers. High-level languages, and the whole "keep the source simple, waste the abundant cycles" philosophy, are important things. The problem IMHO is that these are techniques to be applied by a fully-fledged programmer, who is capable of doing it the hard way in C or even asm -- but too many modern programmers have only ever known the world of OO languages. The Leaky Abstractions paper applies here too.
  • by stevey ( 64018 ) on Thursday January 16, 2003 @11:06AM (#5094809) Homepage

    If you like this topic you may well appreciate this Assembly Language Gems Page [df.lth.se]

    It's a little biased towards x86 assembly, but there are some neat tricks there, and some stunningly lovely code.

  • Recommended (Score:3, Informative)

    by Anonymous Coward on Thursday January 16, 2003 @11:06AM (#5094810)
I got this a couple of months ago and found it rather good -- if you are looking to shave a couple of cycles off your implementation of integer logarithms, have a look at it. I'd agree with the reviewer that it is rather dense, and you'll need to be numerate (graduate maths or CS) to understand the algorithms, but not to find it useful. There are also quite a few amusing anecdotes from the author's time at IBM. Worth the cover price.

    Jim Green
  • ...he devotes a page or two to explaining why computers use base 2 for arithmetic, and why this is the most efficient choice.

    Why is that? I always figgered it had something to do with it being easier/cheaper to build hardware that only needs to store and detect 2 states (on/off) than multiple intermediate states.

    • Re:Base 2 (Score:5, Informative)

      by KDan ( 90353 ) on Thursday January 16, 2003 @11:31AM (#5095005) Homepage
      Well, I remember when I was reading a book about assembler they expressed it beautifully by saying that if school taught kids binary numbers instead of the decimal system, the entire mathematics syllabus could be taught in a couple of months with time to spare.

Binary maths makes many integer operations ridiculously simple, and while it's true that it's cheaper and more feasible to detect 2 states than 10, there's also a certain simplicity you get by building everything from binary logic gates which wouldn't quite be there if you used some sort of decimal logic gates...

      Basically, binary arithmetic is really simple so can be optimized really well and is much more universal, in the wider philosophical sense, than decimal arithmetic. Everything in the universe seems to revolve around a binary concept, rather than a decimal one... matter/antimatter, existence/non-existence, quantum spin states, etc.

      Daniel
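A tiny illustration of that simplicity (a minimal sketch, not from any particular book): all of integer addition falls out of two one-bit operations, XOR for the sum and AND for the carry.

    unsigned add_by_logic(unsigned a, unsigned b)
    {
        while (b != 0) {
            unsigned carry = a & b;   /* positions where both bits are 1   */
            a = a ^ b;                /* sum of the bits, ignoring carries */
            b = carry << 1;           /* carries move one position left    */
        }
        return a;                     /* done once no carries remain       */
    }

A decimal version of the same loop would need a lookup table or division where the binary one needs two gates per bit.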
      • Basically, binary arithmetic is really simple so can be optimized really well and is much more universal, in the wider philosophical sense, than decimal arithmetic. Everything in the universe seems to revolve around a binary concept, rather than a decimal one... matter/antimatter, existence/non-existence, quantum spin states, etc.

        You know, there are two types of people in the world ...
      • Everything in the universe seems to revolve around a binary concept, rather than a decimal one... matter/antimatter, existence/non-existence, quantum spin states, etc.

        Please explain in what sense non-existence is something "in the universe."

        • There's about two billion examples I could point out. Here's one:

In semiconductors, you may have heard of "holes" that carry charge. They can be treated as particles when you do the maths, as if they really existed. Yet all they are is electrons missing from otherwise-filled valence bands. So in this case, lack of existence IS most definitely something. If it weren't, computers would not exist.

          But that's a trivial example. On the more philosophical side, existence is defined by non-existence. If there were electrons everywhere, it would be meaningless to say that there are electrons somewhere. Just like at the moment, because there is spacetime everywhere that is somewhere, it's meaningless to say that there's a bit of spacetime here or there. Anywhere which exists is spacetime. However the discreteness of the existence of particles in that spacetime is a core concept to understand the universe.

          Of course, if you dig a bit deeper it all gets a whole lot more confusing when you find that those "discrete" particles are probability distributions smeared out across the entire universe. Still, I don't think that diminishes in any way the importance of non-existence.

          That's the sense in which I thought of it when I posted the previous post.

          Daniel
    • Simple answer is that states are represented by voltage ranges. If a voltage is in one range, the state is TRUE. If it's in another range, it's FALSE.

      The neat thing with that is that it can only be TRUE or FALSE. There may be an indeterminate state in the middle which is neither, but at that point the receiving chip will hold the current state until the input voltage goes to TRUE or FALSE.

      Now consider that you have three states. Say one state is 0-1V, another state is 2-3V and the third is 4-5V. It's impossible to go from the first state to the third state without passing through the second state. So how does your receiving chip know whether you really wanted the second state, or whether you're just en route to the third state? Simple answer is that it can't, so you'd need to have some requirement like staying in a state for some length of time before the state's confirmed. This would be a complete pain in the arse, so no-one ever tried doing this.

      There's also a compatibility issue. Electronics derived from relay logic, and relays can only be on or off, so there was a bunch of legacy material already on Boolean logic.

      Grab.
      • Actually, it doesn't matter that much whether you can get from the `zero' state to the `two' state with or without going through the `one' state first. Even in current processors, interference (inductive coupling), ground bounce, and other effects may cause what should be a `zero' to bounce up into the voltage range that would make it a `one'. That's one reason why the processor uses a clock. It doesn't matter what states the signal goes through so long as it has settled down to the correct voltage by the time the clock ticks and you sample the voltage (well, less the setup time of the latch...).

As mentioned by a comment a few posts down, machines used to use base ten (BCD = binary-coded decimal), where each base-10 digit was encoded with a 4-bit binary value. Besides the increased complexity of base-10 arithmetic circuits, since each digit uses four bits, there are 6 possible states per digit that go unused. So for example, with two BCD digits you can represent 0-99, but with those same eight bits in binary you could represent 0-255. I think that's what people may be talking about when they say binary is more "efficient" than BCD. Otherwise, from an information-theoretic perspective, I don't think one unit is any better than the other (it would be like saying "kilograms" are more efficient than "grams" or "milligrams" or even "slugs").
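For the curious, the packing in question is only a couple of lines (a sketch of plain packed BCD; the helper names are made up, not from the thread):

    #include <stdint.h>

    /* One decimal digit per nibble: two digits span 0-99 in the same
       eight bits that cover 0-255 in pure binary. */
    uint8_t to_bcd(uint8_t n)   { return (uint8_t)(((n / 10) << 4) | (n % 10)); }
    uint8_t from_bcd(uint8_t b) { return (uint8_t)((b >> 4) * 10 + (b & 0x0F)); }

The six unused states per nibble (0xA-0xF) are exactly where the "inefficiency" lives.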

  • by RT Alec ( 608475 ) <alec@slashdot.chuck l e . com> on Thursday January 16, 2003 @11:09AM (#5094829) Homepage Journal
    I am pleased to see the correct use of the term "hacker". Now if we could just work on the folks at CNN...
  • by _Sprocket_ ( 42527 ) on Thursday January 16, 2003 @11:12AM (#5094849)
I noticed this book at the local Barnes and Noble. Unfortunately, it was (and still is) mis-categorized and firmly stuck in the "Security" area of the technical / computer section.

Now I know that I'm toying with the usual hacker/cracker jihad. Nonetheless, it seems the definition of "hacker" associated with security is so ingrained into society that it manages to overcome even the content of the book itself. I would have thought the B&N folks, being in the book profession, would manage to catch this. Judging a book by its cover and all that (makes me wonder where a book called 'Pinky Fuzzy Bunnies' that studies furry erotica would land).

    Of course, B&N are not the definitive measure of language. Where they stick a book doesn't go much beyond acknowledging one use of our much-flamed word. It doesn't negate the history of the word nor offer final proof of its popular definition. But it does show the power of that popular definition despite the obvious intent of the book's author.

    Be it for good or not - there it is.
  • by shoppa ( 464619 ) on Thursday January 16, 2003 @11:13AM (#5094855)
This sort of subject has been around for years, and gets rediscovered every so often by a "new" generation of hackers. (Look, for instance, at the big deal made about Duff's Device [tuxedo.org] when C came along.) The problem is that implementations of these ideas are often non-portable. (To other architectures, languages, or even the next version of the compiler.)

That's not to say that I don't enjoy reading about these clever things; there is a lot to be learned by studying this stuff. But implementing them is usually a mistake these days, if for no other reason than that there's already a portable way to do it which is probably more efficient. To go back to the Duff's Device example, almost all compilers will implement loop unrolling already. And that's a C-language trick, supposedly already a high-level language. Note I said supposedly! :-)
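For readers who haven't seen it, here is Duff's device in essentially its original form (declarations modernized; count must be positive). The destination is a memory-mapped output register, which is why to is never incremented:

    void send(volatile short *to, short *from, int count)
    {
        int n = (count + 7) / 8;     /* number of trips through the loop */
        switch (count % 8) {         /* jump into the middle of the body */
        case 0: do { *to = *from++;
        case 7:      *to = *from++;
        case 6:      *to = *from++;
        case 5:      *to = *from++;
        case 4:      *to = *from++;
        case 3:      *to = *from++;
        case 2:      *to = *from++;
        case 1:      *to = *from++;
                } while (--n > 0);
        }
    }

The switch disposes of the count % 8 leftovers, then falls into the eight-fold unrolled do-while for the rest.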

    • there's already a portable way to do it which is probably more efficient.

      Nitpick: Duff's device is portable

      To go back to the Duff's Device example, almost all compilers will implement loop unrolling already

      Even where the number of iterations is not known until runtime, as in Duff's Device?

      • Nitpick: Duff's device is portable

        But you've got to be sure that you don't unroll the loop so much that you go out of your processor's I-cache...

        And a programming trick that works *only* in C is hardly a portable one.

        Even where the number of iterations is not known until runtime, as in Duff's Device?

        Yes. Unrolling a loop is *not* rocket science. Any compiler from the last decade will know how to do it better than you can.

This sort of subject has been around for years, and gets rediscovered every so often by a "new" generation of hackers. (Look, for instance, at the big deal made about Duff's Device [tuxedo.org] when C came along.) The problem is that implementations of these ideas are often non-portable. (To other architectures, languages, or even the next version of the compiler.)

      I'm not sure why you mentioned Duff's device. Duff's device is portable. If I had an ANSI C spec in front of me, I could quote you chapter and verse that explicitly allows it.

      I think it is a damn spiffy idiom. It is relatively obvious what it means (at least after you've seen it once) and why you would use it. Yes, modern C compilers may do loop unrolling, so it is probably not something that is worth expending a lot of time on. As Tom Duff told me:

      I did it once. I haven't done it since. I don't recommend you do it.

I too believe that code can be too clever, but increasingly I see code written with very little regard to either space or time behavior. Gratuitously inefficient code should not be tolerated, and understanding idioms like Duff's device puts one in a mindset where such inefficiencies are revealed more often.

      • Duff's Device had its reason for being, but Duff is right; don't do it now. Can you say "irreducible flow graph"? Sure, I knew you could. Modern compilers will get themselves out of the pickle Duff's Device and analogous code schemata put them in, if they do at all, by replicating code, thus undoing one of the things people think clever about it.
  • Be Wary (Score:4, Funny)

    by OldStash ( 630985 ) on Thursday January 16, 2003 @11:13AM (#5094861)
    manipulate computers into doing more work on their part with less work on yours

To paraphrase the great Terry Pratchett: "Beware labour-saving devices which are smaller than their manuals".
  • I tried all kinds of tricks to get my computer to work harder. And whaddya know, one night it patched its speech software into the modem and called the union on me.

    Ingrate! If it weren't for me, it'd be running gene sequences all day and night. Computers have no sense of perspective.

  • will it teach me how to hack Windows ME??? It's so hard -- I can't figure it out!
  • oh please (Score:2, Interesting)

    by tps12 ( 105590 )
    For 99% of people, these kinds of unreadable but "neat" optimizations are going to have no impact on execution time whatsoever. Good algorithm design and efficient architecture -- and yes, optimization, once you've profiled and located a bottleneck -- are worth far more than stupid bit shifting tricks, and your code will actually end up maintainable. If you follow the advice in this book, you're liable to produce code that looks like the Linux kernel.
    • Re:oh please (Score:3, Insightful)

      by Pseudonym ( 62607 )
      For 99% of people, these kinds of unreadable but "neat" optimizations are going to have no impact on execution time whatsoever.

      So? Most technical books are useless for 99% of people. I personally have no use for The Black Art of C# in 21 Days For Idiots. For my part, I used to write compilers. This book would have been invaluable for me. I guess I was in the 1%.

The only danger in this kind of book is that people will use the techniques in it blindly, in which case they arguably shouldn't be writing software anyway (or at the very least should have it thoroughly reviewed before committing).

      If you follow the advice in this book, you're liable to produce code that looks like the Linux kernel.

      So if you have to produce code like the Linux kernel, it sounds like the book for you.

Incidentally, the Linux kernel is that way for several reasons, some of which are valid (e.g. profiling or back-of-the-envelope calculations showed that something was going to be a bottleneck) and some of which made sense at the time. For an example of the latter, see do_pipe() in fs/pipe.c. If starting the kernel again today, it would make a lot more sense to use C++, which would make all those gotos unnecessary, but it's a bit late now.

  • explaining why computers use base 2 for arithmetic, and why this is the most efficient choice

I always thought that ternary computers [google.com] were theoretically more efficient, from a mathematical point of view.

  • Hard to understand? (Score:5, Interesting)

    by cperciva ( 102828 ) on Thursday January 16, 2003 @11:21AM (#5094928) Homepage
    use of these tricks does violate the Keep It Simple, Clock Cycles Are Cheap And Someone May Have To Understand Your Code philosophy

In some cases, this may be true, but not always. If you want to increment a multiple-precision value, the textbook method is

    int i, carry = 1;

    for (i = 0; i < n; i++) {
        carry += x[i];
        x[i] = carry % radix;
        carry /= radix;
    }

while the "cute trick" method is

    int i = 0;

    while (++x[i] == 0) i++;

The textbook method takes a while to recognize, just because it's very similar to many other loops; but the second is distinctive and can be recognized immediately. If I'm maintaining someone else's code, I'd much prefer to see the second.
Many times the 'tricks' method is less readable, and nothing more.

A good compiler will recognize equivalent code blocks and produce the same optimized code for them. It should unroll shorter fixed-length loops and automatically inline function calls when it determines that it's not worth the overhead of pushing a call frame onto the stack.

In the end it's the design as a whole that will determine efficiency. For instance, recursion. As soon as it's learned, a coder has a tendency to use the 'cute trick' of recursion everywhere, and doesn't realize that it's rarely the optimal solution to a problem.

Personally, I loathe the 'cute-ass cryptic trick' coding philosophy. I constantly battle with all the bad habits I picked up having been born and raised on the Commodore 64. Unconditional branches (goto), cramming all the code you can onto one line, one- or two-letter variable names, functions longer than a chapter of the Bible. Blech.
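On the recursion point, a trivial pair makes it concrete (a sketch, not from the post): both functions compute the same sum, but the naive recursive one consumes a stack frame per element unless the compiler manages to turn it into a loop.

    long sum_rec(const int *a, int n)    /* elegant; O(n) stack if unoptimized */
    {
        return n == 0 ? 0 : a[0] + sum_rec(a + 1, n - 1);
    }

    long sum_iter(const int *a, int n)   /* same result, constant stack */
    {
        long s = 0;
        for (int i = 0; i < n; i++)
            s += a[i];
        return s;
    }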
    • by miu ( 626917 )
      But those distinctive shortcuts that everyone recognizes can cause you to "label" the code and move on while reviewing code that is due for maintenance.

      Everyone recognizes what:

      while (*dst++ = *src++) ;
      is supposed to be doing, but I recently cleared a mystery bug in a very old routine that used such code. The problem had been missed for years because people "recognized" the code and moved on - despite the fact that there were several problems with it.
In principle, I like the sound of this book. However, I have a shelf full of so-called 'secrets of the masters' books, each weighing in at around half a ton and containing 800 pages all stating the obvious. I look forward to hearing comments from those who have actually bought the thing.
    If you want a better understanding of the hardware on which your code runs, or you need to squeeze clock cycles, or you just enjoy seeing clever tricks, this is an excellent book. If you primarily use high-level languages such as VB, Perl, Python, etc., this may not be the right book for you.
Time for me to state the obvious... I have worked on many applications that run unnecessarily slowly as a result of an accumulation of inefficient code. Sure, it is often better to sacrifice raw performance for portability, maintainability and plain readability, but code does not need to be obscure to be efficient. Optimisers take much of the hard work out of achieving this, but taking the time to examine compiler output once in a while will help you write high-level code in such a way as to give the optimiser room to strut its stuff. If anything, there is often more to be gained in programs written in high-level languages (VB, Perl, Python, etc.) if the coder takes time to examine the structure of his code and attempts to eliminate bottlenecks. Inefficiency is not a function of the development tools, it's a function of laziness.
  • Whoa! (Score:3, Funny)

    by mschoolbus ( 627182 ) <`moc.liamg' `ta' `yelirsivart'> on Thursday January 16, 2003 @11:24AM (#5094955)
    3313 bytes in body

    I almost thought that was 31337 or something!
  • I thought that hacking was how to cleverly manipulate computers into doing more work on their part with way too much work on yours. Just get out of the house and fricken buy a skillet... no need to hack one up. [handyscripts.co.uk]
  • HACKMEM (Score:5, Interesting)

    by ctrimble ( 525642 ) <ctrimbleNO@SPAMthinkpig.org> on Thursday January 16, 2003 @11:46AM (#5095119)
    HACKMEM is a document from the Elder Days at the MIT AI lab. It's not about optimisation, like Hacker's Delight, but it's full of lots of cool math/comp sci tidbits. I first discovered it back in the 80s when I was a script kiddie looking for cracking info (I hadn't understood the distinction between hacking and cracking at the time) and discarded it as lame. I revisited it about five years later after spending some time in the CS department and realised what a gem it really is.

    Here's a sample:

    ITEM 63 (Schroeppel, etc.):
    The joys of 239 are as follows:

    * pi = 16 arctan (1/5) - 4 arctan(1/239),
    * which is related to the fact that 2 * 13^4 - 1 = 239^2,
    * which is why 239/169 is an approximant (the 7th) of sqrt(2).
    * arctan(1/239) = arctan(1/70) - arctan(1/99) = arctan(1/408) + arctan(1/577)
    * 239 needs 4 squares (the maximum) to express it.
    * 239 needs 9 cubes (the maximum, shared only with 23) to express it.
    * 239 needs 19 fourth powers (the maximum) to express it.
    * (Although 239 doesn't need the maximum number of fifth powers.)
* 1/239 = .00418410041841..., which is related to the fact that 1,111,111 = 239 * 4,649.
    * The 239th Mersenne number, 2^239 - 1, is known composite, but no factors are known.
    * 239 = 11101111 base 2.
    * 239 = 22212 base 3.
    * 239 = 3233 base 4.
    * There are 239 primes < 1500.
    * K239 is Mozart's only work for 2 orchestras.
    * Guess what memo this is.
    * And 239 is prime, of course.
    HACKMEM [inwap.com]
    • The 239th Mersenne number, 2^239 - 1, is known composite, but no factors are known.

      Not true. It wasn't even true when it was written: It takes only about a minute with pencil and paper to discover that 2^239 - 1 is a multiple of 479.
      • It takes only about a minute with pencil and paper to discover that 2^239 - 1 is a multiple of 479.

        If you have a minute, then, how does that work?

        • Re:HACKMEM (Score:2, Informative)

          by legerde ( 583680 )
I'm no math wizard, but bc is a nice tool...

    scale=6
    ((2^239)-1)/479
    1844308000812509738604694677184674859945181520746992649396248644355553.000000

seems to be without a remainder, though this is not a proof.
        • Re:HACKMEM (Score:4, Interesting)

          by cperciva ( 102828 ) on Thursday January 16, 2003 @02:45PM (#5096760) Homepage
          If you have a minute, then, how does that work?

All prime factors of 2^p-1 are of the form 2kp+1 for some k. If we're looking for a factor, the obvious place to start is with k=1, which tells us that we should look at 2*239+1 = 479.

Now, 2^7 = 128, so
2^14 mod 479 = 98
2^29 mod 479 = 2*98^2 mod 479 = 48
2^59 mod 479 = 2*48^2 mod 479 = 297
2^119 mod 479 = 2*297^2 mod 479 = 146
2^239 mod 479 = 2*146^2 mod 479 = 1

so 2^239-1 is a multiple of 479.
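The same chain can be checked mechanically with square-and-multiply modular exponentiation (a small standalone sketch, not cperciva's code):

    #include <stdio.h>
    #include <stdint.h>

    /* Compute (base^exp) mod m by repeated squaring. */
    static uint64_t powmod(uint64_t base, uint64_t exp, uint64_t m)
    {
        uint64_t r = 1;
        base %= m;
        while (exp != 0) {
            if (exp & 1)
                r = r * base % m;     /* fold in this bit of the exponent */
            base = base * base % m;   /* square for the next bit */
            exp >>= 1;
        }
        return r;
    }

    int main(void)
    {
        /* Prints 1, confirming that 479 divides 2^239 - 1. */
        printf("2^239 mod 479 = %llu\n",
               (unsigned long long)powmod(2, 239, 479));
        return 0;
    }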
  • by Ella the Cat ( 133841 ) on Thursday January 16, 2003 @11:49AM (#5095147) Homepage Journal

I've not read the book yet, but I do have a general worry: optimisation isn't always done in the right context or for the right reasons. Code that runs faster in a small test program can break down when part of a larger program (by thrashing the cache, for example). What's the point of optimising something that's seldom invoked? In other words, always ask an enthusiastic optimiser to show you their profiling results.

    My favourite hacks are Jim Blinn's floating point tricks - 10% accurate square roots and reciprocals that blow away a floating point unit and are just what you need in graphics and games.
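Blinn's exact code isn't reproduced here, but the best-known member of that family treats the float's bit pattern as an integer (the magic constant below is the widely circulated one, not necessarily Blinn's; one Newton step brings the error to a fraction of a percent):

    #include <stdint.h>
    #include <string.h>

    float approx_rsqrt(float x)                  /* rough 1/sqrt(x), x > 0 */
    {
        uint32_t i;
        float y;
        memcpy(&i, &x, sizeof i);                /* view the IEEE-754 bits */
        i = 0x5f3759df - (i >> 1);               /* crude halve-and-negate of the exponent */
        memcpy(&y, &i, sizeof y);                /* back to float: initial guess */
        return y * (1.5f - 0.5f * x * y * y);    /* one Newton-Raphson step */
    }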

  • I just discovered the weirdest bug ever. I've tested it with NT4sp6 and win2Ksp2.

Try creating or renaming any file on your system to con.* (* could be anything, like .html, .jpg, .txt). The OS won't let you; it'll give an error. NT and 2K give different error messages. Under NT it just says the name is already used. Under 2K it says it's a reserved device name.

  • I found some copies here. [amazon.com] I might get one.
Shouldn't all software be built with performance in mind? Personally, I think that many developers have taken a liking to today's high-speed CPUs, as it allows them to practically ignore performance issues. While it is true that in some areas performance tweaking is still vital, such as server software and high-end gaming, it seems that most mundane software, such as office tools, is overbloated and slow. Granted, it isn't as important, perhaps, but it's still annoying when on my dual Athlon machine I have speed issues, like when moving my media player causes the movie to stall and skip. I still chuckle at the popularity of Phoenix, an apparently slimmed-down and tweaked browser. Even with fewer features than Mozilla, it stands on its own just because it's fast. Does anyone else feel that programming in general today has been sloppy and aimed more at getting more features than having a fast, stable program?
    • aimed more at getting more features than having a fast, stable program

      As I understand it, "stable" is kind of in opposition to both "fast" and "features". The reviewer's point is that super-optimized code tends to have strange tricks in it that are difficult to read and understand. That makes the code hard to maintain and increases the likelihood of bugs, so performance tweaking isn't a great idea unless speed is really important to an application.

    • Do the math. As a first approximation, optimization of software running on a desktop machine has to save the users as much or more time than it takes the developer to produce and support the optimization. This means that if the statements you are optimizing will not be executed hundreds of millions of times, it's not worth worrying about. OTOH, if MS could shave one second off the start-up time for Word or IE, that would be worth tens of millions of dollars annually in additional productivity for the US economy overall.
  • by carambola5 ( 456983 ) on Thursday January 16, 2003 @01:48PM (#5096213) Homepage
    The reviewer speaks truth about this book. It is quite dense and, in many cases, violates the "Code should be easily decoded by future programmers" rule.

    I got this book for Christmas because I specifically asked for it. My mom was a bit put off by the title, though. The title refers to the original definition of "hacker," so don't get excited if you're all about computer security. There's nothing in there for you.

One of my favorite concepts in this book is the author's use of non-breaking (branch-free) code. As many of you know, the mechanism for feeding instructions to the CPU requires a bit of quasi-premonition. Riddle your code with many if-, while-, and (the hideous) goto-statements, and you will end up with slow code due to the seemingly random jumps around memory. Use some of the methods in this book, however, and you will end up with more efficient code in the long run. Need I remind you of the speedup gained when you use non-breaking code within a lengthy while loop?
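Two staples of that branch-free style, in the spirit of the book though not quoted from it (both assume two's complement and an arithmetic right shift of signed values, caveats of exactly the kind the book flags):

    #include <stdint.h>

    int32_t abs_bf(int32_t x)             /* |x| with no branch */
    {
        int32_t m = x >> 31;              /* 0 if x >= 0, else -1 */
        return (x + m) ^ m;
    }

    int32_t min_bf(int32_t x, int32_t y)  /* min with no branch */
    {                                     /* caveat: x - y must not overflow */
        return y + ((x - y) & ((x - y) >> 31));
    }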
  • Hacker culture (Score:2, Informative)

    by The OPTiCIAN ( 8190 )
I have come to despise the whole hacker culture based on the use of the sort of tricks the review illustrated. I comment my code like crazy, avoid confusing booleans, put null on the left-hand side of comparisons, etc.

    But unfortunately, the other people on my team do none of that, and it would only be more painful if they were trying the sort of stunts this book focuses on.
I may be missing the point here... but, in today's software world, shouldn't I be worried about writing reliable, easily read code?

And then let my compiler read through my code and determine the most efficient way to turn it into assembly? I mean, I would rather multiply by 2 than bitshift, and then let my compiler turn it into whatever it needs to in order to run the fastest.

In fact, I really wouldn't be surprised if many of the better compilers already do most of these tricks for us, and we don't even know it.
But then again, I can't say for sure, since I have not read the book.
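For what it's worth, the instinct is right: any optimizing compiler canonicalizes these two functions into identical code, so you can write the readable one (a trivial sketch, easy to confirm by inspecting the assembly output):

    unsigned twice_mul(unsigned x)   { return x * 2;  }   /* the readable form    */
    unsigned twice_shift(unsigned x) { return x << 1; }   /* what both compile to */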
    • Of course, it may be doing other tricks we don't know about.
      I once tracked a bug to the compiler optimizing away a whole branch. Neat, but it wasn't an if (1==0) branch; it should have been reachable. Very confusing, even in a debugger -- it just seemed to jump over lines of C code. Only when I compared the assembler output files, optimized against non-optimized, did I notice what happened. Of course, having some assembler knowledge did help.
      It was a major Unix vendor and their compiler.

"I'm a mean green mother from outer space" -- Audrey II, The Little Shop of Horrors

Working...