The Law of Leaky Abstractions 524

Joel Spolsky has a nice essay on the leaky abstractions which underlie all high-level programming. Good reading, even for non-programmers.
  • Great examples (Score:5, Interesting)

    by Ratface ( 21117 ) on Thursday November 14, 2002 @11:31AM (#4668427) Homepage Journal
    I loved the article - great examples, especially this favourite bugbear of mine:

    "Problem: the ASP.NET designers needed to hide the fact that in HTML, there's no way to submit a form from a hyperlink. They do this by generating a few lines of JavaScript and attaching an onclick handler to the hyperlink. The abstraction leaks, though. If the end-user has JavaScript disabled, the ASP.NET application doesn't work correctly, and if the programmer doesn't understand what ASP.NET was abstracting away, they simply won't have any clue what is wrong."

    This is a perfect illustration of why I will never hire a web developer who cannot program HTML by hand. The kids who are going to university nowadays and building everything using Visual Studio or Dreamweaver will never be able to ensure that a site runs in an acceptable cross-platform way.

    Another favourite bugbear that this example reminds me of is people building page layouts using 4 levels of embedded tables where they really only needed 1 or 2.

    Great article!
  • Re:Informative (Score:4, Interesting)

    by Bastian ( 66383 ) on Thursday November 14, 2002 @11:35AM (#4668462)
    I think backward (really more slantwise or sideways) compatibility is almost certainly one of the reasons why C++ treats string literals as arrays of characters.

    I program in C++, but link to C libraries all the time. I also pass string literals into functions that have char* parameters. If C++ didn't treat string literals as char*, that would be impossible.
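
    A minimal C++ sketch of that point (c_library_func is a hypothetical stand-in for any C API that takes a C string): a string literal is already an array of characters and converts straight to a pointer, while a std::string has to be unwrapped with c_str() before it can cross into C.

    #include <cstdio>
    #include <string>

    // c_library_func stands in for any C function declared with a C-string parameter.
    extern "C" void c_library_func(const char* s) { std::puts(s); }

    int main() {
        c_library_func("hello");                 // the literal is already an array of char
        std::string owned = "hello again";
        c_library_func(owned.c_str());           // a std::string must be unwrapped first
    }
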
  • by Nexus7 ( 2919 ) on Thursday November 14, 2002 @11:40AM (#4668508)
    I guess a "liberal arts" major would be considered the quintessential "non-programmer". Certainly these people profess a non-concern for most technology, and of course, computing. I don't mean that they wouldn't know about Macs and PCs and Word, but we can agree that is a very superficial view of computing. But appreciating an article such as this "leaky abstractions" required some understanding of the way the networks work, even if there isn't any heavy math in it. In other words, the non-programmer wouldn't understand what the fuss is about.

    But that isn't how it's supposed to be. Liberal arts people are supposed to be interested in precisely this kind of thing, because it takes a higher level view of something that is usually presented in a way that only a CS major would find interesting or useful, and generalizes an idea to be applicable beyond the specific subject, networking.

    That is, engineers are today's liberal arts majors. It's time to get the so-called "liberal arts" people out of politics, humanities, governance, management and other fields of importance, because they just aren't trained to have or look for the conceptual basis of decision making and correctly apply it.
  • Re:Informative (Score:5, Interesting)

    by binaryDigit ( 557647 ) on Thursday November 14, 2002 @11:43AM (#4668524)
    I think it's a mistake to simply say that "high level languages make for buggier/bloated code". After all, many abstractions are created to solve common problems. If you don't have a string class then you'll either roll your own or have code that is complex and bug-prone from calling 6 different functions to append a string. I don't think anyone would argue that it's better to write your own line drawing algorithm and have to program directly to the video card, vs calling one OpenGL method to do the same (well, unless you need the absolute last word in performance, but that's another topic).
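
    A minimal C++ sketch of that contrast (the function names are made up; error handling is omitted): appending without a string class means juggling strlen, malloc, strcpy and strcat by hand, while with the abstraction it is one expression.

    #include <cstdlib>
    #include <cstring>
    #include <string>

    // Without the abstraction: measure, allocate, copy, copy, and remember to free.
    char* append_c(const char* a, const char* b) {
        char* out = static_cast<char*>(std::malloc(std::strlen(a) + std::strlen(b) + 1));
        std::strcpy(out, a);
        std::strcat(out, b);
        return out;                              // caller must free() this
    }

    // With the abstraction: one line, memory handled for you.
    std::string append_cpp(const std::string& a, const std::string& b) {
        return a + b;
    }

    int main() {
        char* c = append_c("foo", "bar");
        std::string s = append_cpp("foo", "bar");
        bool same = (s == c);
        std::free(c);
        return same ? 0 : 1;
    }
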
  • by today ( 27810 ) on Thursday November 14, 2002 @11:47AM (#4668556) Homepage
    The mechanical, electrical, chemical, etc, engineering fields all have various degrees of abstractions via object hiding. It just isn't called "object hiding" because these are in fact real objects and there is no need to call them objects because it is natural to think of them that way. When debugging a design in any of these fields, it is not unusual to have to strip down layers and layers of "abstraction" (ie, pry into physical objects) to get to the bottom of a real tough problem. Those engineers with the broadest skills are usually the best at dealing with such problems. There isn't really anything new in the article.
  • by redfiche ( 621966 ) on Thursday November 14, 2002 @11:48AM (#4668570) Journal
    Abstractions are good things, they help people understand systems better. An abstraction is a model, and if you use a model, you need to understand its limitations. High level languages have allowed a tremendous increase in programming productivity, with a price. But just as you cannot be really good at calculus without a thorough understanding of algebra, you cannot be a really good coder if you don't know what's going on underneath your abstractions.

    Great article, but don't throw out the high level tools and go back to coding Assembler.

  • by RobertB-DC ( 622190 ) on Thursday November 14, 2002 @11:50AM (#4668582) Homepage Journal
    As a VB programmer, I've *lived* leaky abstractions. Nowhere has it been more obvious than in the gigantic VB app our team is responsible for maintaining. 262 .frm files, 36 .bas modules, 25 .cls classes, and a handful of .ctl's.

    Much of our troubles, though, come from a single abstraction leak: the Sheridan (now called Infragistics [infragistics.com]) Grid control.

    Like most VB controls, the Sheridan Grid is designed to be a drop-in, no-code way to display database information. It's designed to be bound to a data control, which itself is a drop-in no-code connection to a database using ODBC (or whatever the flavor of the month happens to be).

    The first leak comes into play because we don't use the data control. We generate SQL on the fly because we need to do things with our queries that go beyond the capabilities of the control, and we don't save to the database until the client clicks "OK". Right away, we've broken the Sheridan Grid's paradigm, and the abstraction starts to leak. So we put in buckets -- bucketfuls of code in obscure control events to buffer up changes to be written when the form closes.

    Just when things were running smoothly, Sheridan decided to take that kid with his finger in the dike and send him to an orphanage. They "upgraded" the control. The upgrade was designed to make the control more efficient, of course... but we don't use the data control! It completely broke all our code. Every single grid control in the application -- at least one and usually more in each of 200+ forms -- had to have all-new buckets installed to catch the leaks.

    You may be wondering by now why we haven't switched to a better grid control. Sure enough, there are controls out there now that would meet 95% of our needs... but 1) that 5% has high client visibility and 2) the rest of the code works, by golly! No way we're going to rip it out unless we're absolutely forced to.

    By the way, our application now compiles to a svelte 16.9 MEG...
  • Bad term chosen IMO (Score:2, Interesting)

    by shadowtramp ( 573751 ) on Thursday November 14, 2002 @11:54AM (#4668611)
    Don't know how the term leak feels to the aforementioned non-programmers, but in programmers' slang the word leak has a distinct meaning. And it does diverge from what Joel uses it for.

    I think the term ooze would suit better in this case. It possesses a kind of dirtiness, and the feeling the word 'ooze' gives me fits well with the nature of the described problem. :o)

    Back to the article. To be serious, I think that Joel mixed too many different things together as examples of 'leaky abstraction', to no purpose. Situations that are too different make the concept fall apart. Here is what I mean:
    In the case of TCP/IP it denotes the limits of the abstraction. And regardless of programming background, every sane person should know those limits exist.
    In the case of page faults it's a matter of competence - there is no abstraction at all. You either know how your code is compiled and executed or you don't. It's the same as knowing what a phrase in a given language really means, or not. I simplify here.
    The case of C++ strings is the only good example I saw. What, in my opinion, the experience of STL and string class usage tells us here is: one should understand the underlying mechanics fully before relying on the abstraction's behaviour.

    In programming it is really simple to tell whether a given 'abstraction' will present you with an easter egg or not: if you can imagine an FSM for the abstraction, you will definitely know when to use it.
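
    A sketch of one way to read "imagine the FSM": write the abstraction's states down explicitly and the places where it can leak become visible. The states below are invented for a TCP-style connection and are not the real TCP state machine.

    #include <cassert>

    // States a connection-like abstraction can be in.
    enum class ConnState { Closed, Connecting, Established, Failed };

    ConnState on_event(ConnState s, char event) {
        switch (s) {
            case ConnState::Closed:      return event == 'o' ? ConnState::Connecting : s;   // open()
            case ConnState::Connecting:  return event == 'a' ? ConnState::Established       // ack arrived
                                               : event == 't' ? ConnState::Failed : s;      // timeout
            case ConnState::Established: return event == 'c' ? ConnState::Closed : s;       // close()
            case ConnState::Failed:      return s;            // nothing recovers a failed connection
        }
        return s;
    }

    int main() {
        ConnState s = on_event(on_event(ConnState::Closed, 'o'), 't');
        assert(s == ConnState::Failed);   // the Failed state is exactly where the abstraction leaks
    }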

  • by unfortunateson ( 527551 ) on Thursday November 14, 2002 @12:02PM (#4668690) Journal
    For something like IP packets, leaky is acceptable, but for many of those other abstractions, constipated might be a better adjective. Some of the tools and technologies out there (remember 4GL report-writers?) were big clogging masses that just won't pass.

    The first thing I do when I start in on a new technology (VBA, CGI, ASP, whatever) is to start digging in the corners and see where the model starts breaking down.

    What first turned me on to Perl (I'm trying hard not to flamebait here) was the statement that the easy things should be easy, and the hard things possible.

    But even Perl's abstraction of scalars could use a little fiber to move through the system. Turn strict and warnings on, and suddenly your "strings when you need 'em" stop being quite so flexible, and you start worrying about when it's really got something in it or not.

    On the HTML coding model breaking down, my current least-fave is checkboxes: if unchecked, they don't return a value to the server in the query, making it hard to determine whether the user is coming at the script the first time and there's no value, or just didn't select a value.

    Then there's always "This space intentionally left blank.*" Which I always footnote with "*...or it would have been if not for this notice." Sure sign of needing more regularity in your diet.
  • by Bastian ( 66383 ) on Thursday November 14, 2002 @12:03PM (#4668695)
    I got into that problem at a career fair the other day. Someone asked me if I had VB experience, and I said I didn't because I can't afford a copy of VB, but I was familiar with other flavors of BASIC, event-driven programming, and other RAD kits, so I would probably be able to learn VB in an afternoon.

    He just looked at me as if I had said something in Klingon.

    I've been getting the distinct impression that the good programmer who knows his shit but doesn't have skills X, Y, and Z from the start is fucked because the people who do the hiring are clueless enough about programming that all they can do is watch for X, Y, and Z on a resume and fail to notice anything else.
  • by gillbates ( 106458 ) on Thursday November 14, 2002 @12:07PM (#4668738) Homepage Journal

    Amen.

    I can't tell you how many times this has happened to me. After 5 years of programming, my favorite language has become assembler - not because I hate HLL's, but rather, because you get exactly what you code in assembler. There are no "Leaky Abstractions" in assembly.

    And knowing the underlying details has made me a much better HLL coder. Knowing how the compiler is going to interpret a while statement or for loop makes me much more capable of writing fast, efficient C and C++ code. I can choose algorithms which I know the compiler can optimize well.
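
    A simplified picture of that lowering, sketched in C++ comments (real compilers go on to rotate, unroll and vectorise the loop, so this is the mental model rather than the exact output):

    int sum(const int* a, int n) {
        int s = 0, i = 0;
        while (i < n) {        // test:  if (!(i < n)) goto done;
            s += a[i];         // body:  s += a[i];
            ++i;               //        ++i;
        }                      //        goto test;
        return s;              // done:  return s;
    }

    int main() {
        int a[4] = {1, 2, 3, 4};
        return sum(a, 4) == 10 ? 0 : 1;
    }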

    And inevitably, at some point in a programmer's career, they'll come across a system in which the only available development tool is an assembler - at which point, the HLL-only programmer becomes completely useless to his company. This actually happened to me quite recently - my boss doesn't want to foot the bill for the rather expensive C++ compiler, so I'm left coding one of my projects in assembly. Because my education was focused on learning algorithms, rather than languages, my transition to using assembly has been a rather graceful one.

  • by Frums ( 112820 ) on Thursday November 14, 2002 @12:11PM (#4668775) Homepage Journal

    The problem that this article points to is a byproduct of large scale software development primarily being an exercise in complexity management. Abstraction is the foremost tool available in order to reduce complexity.

    In practice a person can keep track of between 4 and 11 different concepts at a time. The median lands around 5 or 6. If you want to do a self-experiment, have someone write down a list of twenty words, then spend 30 seconds looking at them without using mnemonic devices such as anagrams to memorize them, then put the list away. After thirty more seconds, write down as many as you can recall.

    This rule applies equally when attempting to manage a piece of software - you can only really keep track of between 4 and 11 "things" at the same time, so the most common practice is to abstract away complexity - you reduce an array of characters terminated by a null character and a set of functions designed to operate on that array to a String. You went from half a dozen functions, a group of data pieces, and a pointer to a single concept - freeing up slots to pay attention to something else.
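
    A deliberately tiny sketch of that reduction (not production code; copying is simply disabled to keep it short): the null-terminated array, the length bookkeeping and the helper functions still exist, but the reader now tracks a single name.

    #include <cstring>

    class String {
        char*       data_;
        std::size_t len_;
    public:
        explicit String(const char* s)
            : data_(new char[std::strlen(s) + 1]), len_(std::strlen(s)) {
            std::strcpy(data_, s);
        }
        ~String() { delete[] data_; }
        std::size_t length() const { return len_; }
        const char* c_str() const { return data_; }
        String(const String&) = delete;              // copying omitted to keep the sketch small
        String& operator=(const String&) = delete;
    };

    int main() {
        String s("hello");
        return s.length() == 5 ? 0 : 1;
    }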

    The article is completely correct in its thesis that abstractions gloss over details and hide problems - they are designed to. Those details will stop you from being productive because the complexity in the project will rapidly outweigh your ability to pay attention to it.

    This range of attention sneaks into quite a few places in software development:

    • Team sizes: teams of between four and ten people are generally the most productive - they, and the project manager can track who is doing what without gross context switching.
    • Object models: When designing a system there will generally be between four and eleven components (which might break into more at lower levels of abstraction). Look at most UML diagrams - they will have four to eleven items (unless they were autogenerated by Rose).
    • Methods on an object: When it is initially created an object will generally have between four and eleven methods - after that it is said to start to smell, and could stand to be decomposed into multiple objects.
    • Vacation Days in the US: Typically between five and ten - management can think about that many at one time, any more and they cannot keep track of them all in their head so there are obviously too many ;-)
    • Layers in the standard networking stack
    • Groups in a company
    • Directories off of /

    Other schemes exist for managing complexity, but abstraction is decidedly human - you don't open a door, rotate, sit down backwards, rotate again, bend legs, position your feet, extend left arm, grasp door, pull door shut, insert key in ignition, extend right arm above left shoulder, grasp seatbelt, etc... you start the car. Software development is no different.

    There exist people who can track vast amounts of information in their heads at one time - look at Emacs - IIRC RMS famously wrote it the way he did because he could keep track of what everything did; no one else can, though. There also exist mnemonic devices aside from abstraction for managing complexity - naming conventions, taxonomies, making notes, etc.

    -Frums

  • Neal Stephenson... (Score:4, Interesting)

    by mikeee ( 137160 ) on Thursday November 14, 2002 @12:26PM (#4668901)
    Neal Stephenson talks about something similar in In the Beginning was the Command Line. He calls it interface shear; he's specifically referring to the UI as an abstraction (an interesting idea in itself). His take on it was that abstractions are metaphors, and that "interface shear"/"leaky abstractions" occur in regions where the metaphors break down.

    Interesting stuff...
  • by PacoSuarez ( 530275 ) on Thursday November 14, 2002 @12:37PM (#4668979)
    I think the article is great. And this principle can also be applied to Math. Theorems are much like library function calls. You can use them in your own proofs, without caring about how they are proved, because someone has already taken care of that for you. You prove that the hypotheses are true, and you get a result which is guaranteed to be true.

    The problem is that in real Math, you often need a slightly different result, or you cannot prove that the hypotheses are true in your situation. The solution often involves understanding what's "under the hood" in the theorem, so that you can modify the proof a little bit and use it.

    Every professional mathematician knows how to prove the theorems that he/she uses. There is no such thing as a "high-level mathematician" who doesn't really know the basics but only uses sophisticated theorems stacked on top of each other. The same should be true in programming, and this is what the article is about.

    The solution? Good education. If anyone wants to be considered a professional programmer, he/she should have a basic understanding of digital electronics, micro-processor design, assembly language (at least one), OS architecture, C, some object oriented language, databases... and should be able to understand the relationship between all those things, because when things go wrong, you may have to go to any of the levels.

    It's a lot of things to learn, but there is no other way out. Building software is a difficult task, and whoever sells you something else is lying.

  • joelindenial (Score:5, Interesting)

    by epine ( 68316 ) on Thursday November 14, 2002 @12:37PM (#4668980)

    In physics the abstractions leak. Newton's laws leak like crazy. Einstein's theories leak. Presently there are no fundamental theories in physics which don't leak like crazy when quantum mechanics and gravity interact.

    In sports the abstractions leak. That's how we get players like Gretzky and pay a lot of money to watch what they do.

    And how about the reason why C++ didn't define a native string type: because there isn't any way to implement a string class that serves all possible applications. The premise of C++ is not being stuck with someone else's choice about which part of the abstraction should leak. Because C++ doesn't define a native string type, the user is free to replace the default standard string implementation with any other string implementation and have it integrate with the language on an equal footing with the standard string type.
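
    One way to read "equal footing", sketched in C++ (TinyString is a toy type invented for this example): code written generically doesn't care whether it is handed std::string or a home-grown type, so neither is privileged by the language.

    #include <cstddef>
    #include <iostream>
    #include <string>

    // A deliberately tiny home-grown string type.
    struct TinyString {
        const char* p;
        std::size_t size() const { return std::char_traits<char>::length(p); }
    };
    std::ostream& operator<<(std::ostream& os, const TinyString& s) { return os << s.p; }

    // Generic code treats both string types on the same terms.
    template <class Str>
    void report(const Str& s) { std::cout << s << " (" << s.size() << " chars)\n"; }

    int main() {
        report(std::string("standard"));
        report(TinyString{"home-grown"});
    }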

    If a language imposes standard abstractions, it only takes one abstraction you can't live with to make that choice of language untenable. Which is how C++ has been so successful despite being the worst of all possible languages (except for all the others).
  • by Anonymous Coward on Thursday November 14, 2002 @12:39PM (#4669004)
    I agree with Joel, but some people seem to be taking it as a call to stop abstracting. That's silly.

    Humans form abstractions. That's what we do. If your abstractions are leaking with detrimental consequences, then it could be because the programming language implementation you're using is deficient, not because you shouldn't be abstracting.

    Try a high-performance Common Lisp compiler some time. Strong dynamic typing and optional static typing, macros, first-class functions, generic-function OO, restartable conditions, first-class symbols and package systems make abstraction much easier and less prone to arbitrary decisions and problems that are really:

    (i) workarounds for the methods-in-one-class rule of "ordinary" single-dispatch OO

    (ii) workarounds for the association of what an object is with the name of the object rather than with the object itself (static typing is really saying "this variable can only hold this type of object"; dynamic typing is saying "this object is of this type"). Some languages mix these issues up, or fail to recognise the distinction.

    (iii) workarounds for the fact that most languages, unlike forth and lisp, are not themselves extensible for new abstractions

    (iv) workarounds for the fact that one cannot pass functions as parameters to functions in some languages (doesn't apply to C, thanks to function pointers - here's where the odd fact that low-level languages are often easier to form new abstractions in comes in; a small sketch follows at the end of this comment)
    (v) workarounds for namespace issues

    (vi) workarounds for crappy or nonexistent exception processing

    Plus, Common Lisp's incremental compile cycle means faster development, and its defined behaviours for in-place modification of running programs make it good for high-availability systems.
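
    A small C++ sketch of item (iv) above (the functions are made up): passing behaviour as a parameter with a plain C-style function pointer, the escape hatch the comment credits to C.

    #include <cstdio>

    // apply() takes behaviour as a parameter via a function pointer.
    int apply(int (*f)(int), int x) { return f(x); }

    int twice(int x)  { return 2 * x; }
    int square(int x) { return x * x; }

    int main() {
        std::printf("%d %d\n", apply(twice, 21), apply(square, 7));   // prints "42 49"
        return 0;
    }
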
  • by iamwoodyjones ( 562550 ) on Thursday November 14, 2002 @12:54PM (#4669125) Journal
    When told to convert Fortran code over to C (over a million lines) I knew it was going to take me forever. f2c doesn't work in this case since the code is soooo messed up to begin with. So, I found myself doing repetitive conversions over and over again that are specific to the code base.

    Solution:
    Created a perl script that translates parts of it for me and highlights the rest that has to be hand changed and looked over.

    So, to solve one problem I created a slew of new problems, with the script freaking out and messing up code.

    So far though, it's saved me every bit of that time that I would have spent working on tedious simple stuff. Which in turn allows me to post to Slashdot more!!!!
  • by Badgerman ( 19207 ) on Thursday November 14, 2002 @01:10PM (#4669303)
    Loved this article. Sent it on to my manager and a co-worker.

    One thing I liked especially is the danger of the Shiny New Thing. It may be neat and cool and save time, but knowing how to use it does not mean that you can do anything else - or function outside of it.

    Right now I'm on an ASP.NET project - and some ASP.NET stuff I actually like. But the IDE actually makes it harder to program responsibly, and even to utilize .NET effectively. Unless one understands some of the underpinnings of this NEW technology, you can't actually take advantage of it. Throw in the generated-code issues, and the IDE - an abstraction of an abstraction - really is disadvantageous.

    A friend of mine just about strangled some web developers he worked with as they ONLY use tools (and they love all the Shiny New Ones) and barely know what the tools produce. This has led to hideous issues of having to configure servers and designs to work with their products as opposed to them actually knowing how they work. The guy's a saint, I swear.

    I think managers and employers need to be aware of how abstract things can get, and realize good programmers can "drill down" from one layer to another to fix things. A Shiny New Thing made with Shiny New Things does NOT mean the people who did it are talented programmers, or that they can haul your butt out of a jam when the Shiny New Thing loses its shine.

  • by YU Nicks NE Way ( 129084 ) on Thursday November 14, 2002 @01:41PM (#4669623)
    And I had my mod points expire this morning...

    He's exactly right. No leaky abstractions? I once worked on a project that was delayed six months because a simple, three-line assembler routine that had to return 1 actually returned something else about one time in a thousand. The code was basically "Load x 5 direct; load y addr ind; subt x from y in place", and we could see in the logic analyzer that the contents of the address which was to be moved into register y were 6. Literally, 999 times in a thousand, that left a 1 in register y. The other time...

    We sent the errata off to the manufacturer, who had the good grace to be horrified. It then took six months to figure out how to work around the problem.

    And, hey, guess what? Semiconductor holes are a leaky abstraction, too. And don't get me started on subatomic particles.
  • by Zinho ( 17895 ) on Thursday November 14, 2002 @01:46PM (#4669670) Journal
    No one would cut a lawn with scissors

    You'd be surprised what people will cut lawns with. In Brasilia (Capital of Brasil) the standard method of trimming lawns is to use a machete. No, I'm not talking about hacking down waist-high grass, I'm talking about trimming 3-inch high grass down to two inches by hacking repeatedly at it with a machete, trying to swing parallel to the ground as best you can. No, you don't do this yourself, you hire someone to do it. And if you're a salaried groundskeeper, it makes sure that you always have something to do - you wouldn't want to be found slacking off during the day. On rare occasions I've seen people using hedge trimmers (aka big scissors) instead. My family was the only one I knew about in our neighborhood that even owned an American-style lawn mower. My parents were too cheap to hire a full-time groundskeeper, and I have lots of brothers and sisters who work for free :)

    Moral of the story: if it works and fits the requirements better, someone will do it.
  • Re:Informative (Score:3, Interesting)

    by MrResistor ( 120588 ) <.peterahoff. .at. .gmail.com.> on Thursday November 14, 2002 @02:12PM (#4669990) Homepage
    even assembler is an abstraction

    I have to disagree. Every assembly instruction directly maps to a machine code instruction, so there is absolutely nothing hidden or being done behind the scenes.

    Assembly is just mnemonics for machine code. There is no abstraction in assembly since it doesn't hide anything, it simply makes it easier for humans to read through direct substitution. You might as well say that binary is an abstraction; you'd be equally correct.

    Also, there is no such thing as an "assembly compiler". There are assemblers, which are not compilers.

  • Here goes.... (Score:3, Interesting)

    by gillbates ( 106458 ) on Thursday November 14, 2002 @02:16PM (#4670027) Homepage Journal

    many high level abstractions simply do not exist in assembly language.

    Consider the following assembly language code:

    WHILE [input.txt.status] != [input.txt.eof] main_loop
    mov bx,infile_buffer
    call input.txt.read_line
    call input.txt.tokenize
    call evaluate_expression

    IF [expression_result] == 1 expression_match
    call write_fields
    ENDIF expression_match
    ENDWHILE main_loop

    Okay, so this is a little snippet of some assembly language I've just recently worked on. Here's the declaration for the input file:

    textfile input.txt

    That's it. Is this readable? Is it abstracted at a high enough level? The primary difference between assembly and an HLL is that in assembly one must invent one's own logical abstractions for a real-world problem, where languages such as C/C++ simply provide them.

    You've probably noticed that I'm using a lot of macros. In fact, classes, polymorphism, inheritance, and virtual functions are all easily implemented with macros. I'm using NASM right now (though I'm using my own macro processor), and it works very well. Because I understand both the high-level concepts and low level details, I can code rather high-level abstractions in a relatively low level language such as assembler. I get the best of both worlds: the ease of HLL abstraction with the power of low level coding.

    Please tell me what you think of this - I would honestly like to know. For the past few years, I've been working on macro sets and libraries that make coding in assembly seem more like a HLL. I've also set rules for function calls, like a function must preserve all registers, except those which are used to pass parms. With a well developed library of classes and routines, I've found that I can develop applications quickly and painlessly. Because I stick to coding standards, I'm able to reuse quite a bit (> 50%) of my assembly code.

    You might be tempted to ask, "Why not just write in a HLL then?" I do. In fact, I prefer to write in C++. But when the need arises, it's nice to be able to apply the same abstractions of a HLL in assembly. It just so happens that the need has arisen - I'm working on a project that will last a few weeks, and my boss doesn't consider it fiscally responsible to buy a $1200 compiler that will be used for such a short time.

    Interestingly, the use of assembly has made me a better programmer. Assembly forces one to think about what one is doing before coding the solution, which usually results in better code. Assembly forces me to come up with new abstractions and solutions that fit the problem, rather than fitting the problem into any given HLL's logical paradigm. Once I prove that the abstract algorithm will indeed solve the problem, I'm then free to convert the algorithm into assembly. Notice that this is the opposite of the way most HLL coders go about writing code - they find a way in which to squeeze a real-world problem into the paradigm of the language used. Which leaves them at a loss when "leaky abstractions" occur. Assembly has the flexibility to adapt to the solution best suited to a problem, whereas HLLs, while very good at solving the particular problem for which they were designed, perform very poorly for solving problems outside of their logical paradigms. While assembly is easily surpassed by C/C++, Java, or VB for many problems, there are simply some problems that cannot be solved without it. But even if one never uses assembly professionally, learning it forces one to learn to develop logical abstractions on one's own - which in turn increases general problem-solving ability, regardless of the language in which one writes.

    I see the key difference between a good assembly coder and an HLL coder as being that the assembly language coder must invent high-level abstractions, whereas the HLL coder simply learns and uses them. So assembly is a bit more mental work.

  • by biobogonics ( 513416 ) on Thursday November 14, 2002 @02:16PM (#4670031)
    After 5 years of programming, my favorite language has become assembler - not because I hate HLL's, but rather, because you get exactly what you code in assembler. There are no "Leaky Abstractions" in assembly.

    Ah, but you are wrong, and I'm speaking as someone who has written over 100,000 lines of assembly code. The great majority of the time, when you're faced with a programming problem, you don't want to think about that problem in terms of bits and bytes and machine instructions and so on. You want to think about the problem in a more abstract way.


    I'd love to put Randy Hyde (author of High Level Assembly) in the same room with Monte Davidoff (Multician, PL/I fan and author of the math package in Altair Basic).

    Sometimes the abstraction is best cast at a lower level; that's one reason Knuth used MIX in his "Art of Computer Programming". Other times, higher-level languages don't do the job.

    Here are three examples:

    1) Write a transparent filter for Windows 9x that runs in a DOS box. It must handle binary files without discarding LF on input and prefixing LF with CR on output. Try various C compilers and fail.

    2) Translate Microsoft MBF (Microsoft Binary Format) single precision to IEEE singles. Yes you can do it in C, but the assembly version is compact and elegant (ignoring exponent underflow and de-normalization). Portable - no! (A rough C++ sketch of the bit shuffling follows this list.)

    3) Examine the built-in random number generator in PCC 1.2c (DeSmet's C). It was supposed to be the same algorithm used in the so-called "minimal standard" (also common to APL) but it's buggy. Not only does the C-generated library code completely screw up by confusing unsigned and signed arithmetic, but it's a horror to debug. Even restricting yourself to 8088 code, a routine using simulated division is faster, cleaner and easier to verify as correct. On a 386+, even in real mode DOS, an assembly routine is a snap.
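
    A rough C++ sketch of example 2, assuming the commonly documented MBF single layout and ignoring (as the parent does) exponent underflow and denormals:

    #include <cstdint>
    #include <cstring>

    // Assumed MBF single layout: bytes 0-1 hold the low 16 mantissa bits,
    // byte 2 holds the sign (bit 7) plus the high 7 mantissa bits, and
    // byte 3 is the exponent with bias 128 (0 meaning the value is zero).
    // MBF's mantissa is 0.1mmm... * 2^(exp-128); IEEE's is 1.mmm... * 2^(E-127),
    // so the exponent simply drops by 2.
    float mbf_to_ieee(const std::uint8_t mbf[4]) {
        if (mbf[3] == 0) return 0.0f;                             // MBF zero
        std::uint32_t sign     = (mbf[2] & 0x80u) ? 1u : 0u;
        std::uint32_t mantissa = (std::uint32_t(mbf[2] & 0x7Fu) << 16) |
                                 (std::uint32_t(mbf[1]) << 8) | mbf[0];
        std::uint32_t exponent = std::uint32_t(mbf[3]) - 2u;      // rebias and shift the radix point
        std::uint32_t bits     = (sign << 31) | (exponent << 23) | mantissa;
        float out;
        std::memcpy(&out, &bits, sizeof out);                     // portable type-pun
        return out;
    }

    int main() {
        const std::uint8_t one[4] = {0x00, 0x00, 0x00, 0x81};     // MBF encoding of 1.0
        return mbf_to_ieee(one) == 1.0f ? 0 : 1;
    }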

  • Re:Informative (Score:4, Interesting)

    by __past__ ( 542467 ) on Thursday November 14, 2002 @02:52PM (#4670419)
    If you're curious, yes, there was a B, but there was not actually an A (or rather, there was, but it was called ALGOL).
    Between ALGOL and B, there was BCPL (and CPL before that). Hence there was a dispute about whether the language following C should be called D or P (and AFAIK, for each name there were several experimental languages, none of which succeeded), until C++ became popular.
  • by Animats ( 122034 ) on Thursday November 14, 2002 @02:55PM (#4670458) Homepage
    There's been a trend away from non-leaky abstractions. LISP, for example, was by design a non-leaky abstraction; you don't need to know how it works underneath. So is Smalltalk. Perl is close to being one. Java leaks more, leading to "write once, debug everywhere". C++ adds abstractions to C without hiding anything, which increases the visible complexity of the system.

    It's useful to distinguish between performance-related leaks and correctness leaks. SQL offers an abstraction for which the underlying database layout is irrelevant except for performance issues. The performance issues may be major, but at least you don't have to worry about correctness.

    C++ is notorious for this; the language adds abstractions with "gotchas" inside. If you try to get the C++ standards committee to clean things up, you always hear 1) that it would break some legacy code somewhere, even if we can't find any examples of such code anywhere in any open source distro or Microsoft distro, or 2) that it only bothers people who aren't "l33t".
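
    A textbook example of this kind of gotcha (not necessarily the ones the parent had in mind): std::vector<bool> is specified as a packed bitset, so indexing it hands back a proxy object instead of a real reference to bool, and code that looks identical to the vector<int> case behaves differently.

    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int>  vi{1, 2, 3};
        std::vector<bool> vb{true, false, true};

        auto a = vi[0];   // int: a genuine copy of the element
        auto b = vb[0];   // not a bool: a proxy that still refers into the vector
        vb[0] = false;
        std::cout << a << ' ' << b << '\n';   // prints "1 0" -- b changed behind our back
    }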

    Hardware people used to insist that everything you needed to know to use a part had to be on the datasheet. This is less true today, because hardware designers are so constrained on power, space, heat, and cost all at once.

  • Plus ... (Score:3, Interesting)

    by SimonK ( 7722 ) on Thursday November 14, 2002 @05:14PM (#4671961)
    ... machine code itself is an abstraction in the first place. This is especially true for modern processors that reorder instructions, execute them in parallel, and in extreme cases convert them into an entirely different instruction set.
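
    One place where that leak shows up from ordinary source code, as a rough micro-benchmark sketch (timings are machine- and flag-dependent, and options like -ffast-math let the compiler reassociate the sums and erase the gap): both functions do the same arithmetic, but the single accumulator forms one long dependency chain while four accumulators let an out-of-order core overlap the additions.

    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    double sum_serial(const std::vector<double>& v) {
        double s = 0.0;
        for (double x : v) s += x;                           // every add waits for the previous one
        return s;
    }

    double sum_parallel(const std::vector<double>& v) {
        double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
        std::size_t i = 0, n = v.size() & ~std::size_t(3);
        for (; i < n; i += 4) {                              // four independent chains in flight
            s0 += v[i]; s1 += v[i + 1]; s2 += v[i + 2]; s3 += v[i + 3];
        }
        for (; i < v.size(); ++i) s0 += v[i];
        return (s0 + s1) + (s2 + s3);
    }

    int main() {
        std::vector<double> v(1 << 22, 1.0);
        auto t0 = std::chrono::steady_clock::now();
        double a = sum_serial(v);
        auto t1 = std::chrono::steady_clock::now();
        double b = sum_parallel(v);
        auto t2 = std::chrono::steady_clock::now();
        auto us = [](auto d) { return std::chrono::duration_cast<std::chrono::microseconds>(d).count(); };
        std::printf("%.0f %.0f  serial %lld us, four accumulators %lld us\n",
                    a, b, (long long)us(t1 - t0), (long long)us(t2 - t1));
    }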
