Knuth Got It Wrong

davecb writes "Think you've mastered the art of server performance? Think again. Poul-Henning Kamp, in an article at ACM Queue, finds an off-by-ten error in btrees, because they fail to take virtual memory into account. And he solves the problem in the open source 'Varnish' HTTP accelerator, for all of us to see and use."
  • by commodoresloat ( 172735 ) * on Monday June 14, 2010 @06:24PM (#32572660)

    "Who the hell are you and what are you doing in my house?"

  • by MrEricSir ( 398214 ) on Monday June 14, 2010 @06:28PM (#32572700) Homepage

    10 times faster? Yawn. Wake me up when it's 11 times faster.

  • Nope, he didn't (Score:5, Informative)

    by siddesu ( 698447 ) on Monday June 14, 2010 @06:29PM (#32572714)
    Knuth's analysis is valid in the framework of his assumptions, and what is described in the linked article has been known as the "cache-oblivious B-tree" for quite some time.
    • Agreed (Score:5, Insightful)

      by eldavojohn ( 898314 ) * <eldavojohnNO@SPAMgmail.com> on Monday June 14, 2010 @06:35PM (#32572792) Journal

      Knuth's analysis is valid in the framework of his assumptions, and what is described in the linked article has been known as the "cache-oblivious B-tree" for quite some time.

      Yeah, using this logic, once quantum computers get to the point of solving general problems there's going to be an awful lot of people who "got it wrong" because their algorithms do not apply in a quantum computer. Advancements in technology causing algorithms & data structures to be tweaked means the original person who thought them up "got it wrong"? Ridiculous.

      "Oh, RSA was brilliant. But they got it wrong. You see, they failed to account for xanxinglydiumapping in 2056 computers. Poor stupid wrong bastards."

      • Re: (Score:3, Interesting)

        by jonadab ( 583620 )
        I think "got it wrong" in the past tense is a little harsh, but I would say that a LOT of stuff in Knuth is wrong today, or at least terribly obsolete. It's not useless to be aware of, but you don't want to assume that everything is today still exactly the way Knuth described it in the digital stone age.

        I mean, just for example, Knuth spends whole chapters talking about sorting algorithms. Almost all modern programmers don't have any use for that stuff, because re-implementing sort in your application cod
    • Re:Nope, he didn't (Score:5, Insightful)

      by SashaMan ( 263632 ) on Monday June 14, 2010 @06:55PM (#32572986)

      Mod parent up. There was a comment on the article that referenced cache-oblivious algorithms, which was a new concept to me and very interesting. Basically, this class of algorithms assumes a memory hierarchy (e.g. fast RAM vs. slow disk) and is designed to limit the number of times the slower memory is accessed. Importantly, cache-oblivious algorithms are optimal REGARDLESS of the size of the cache. That's opposed to a cache-aware algorithm, like a normal B-tree, where the size of each node is set according to the page size of the machine.

      A very helpful overview here from MIT Opencourseware: http://www.catonmat.net/blog/mit-introduction-to-algorithms-part-fourteen/ [catonmat.net]
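
      As a concrete contrast, here is a minimal sketch of the cache-aware case described above, with assumed sizes (4 KB pages, 8-byte keys and pointers; an illustration, not code from the article): the B-tree fanout is derived from the page size so that one node fits in one VM page. A cache-oblivious layout reaches a similar effect without knowing the page size at all, e.g. by arranging nodes recursively in a van Emde Boas layout.

          /* Hypothetical cache-aware B-tree node: fanout chosen so that one
           * node occupies roughly one VM page.  All sizes are assumptions. */
          #include <stdint.h>

          #define PAGE_SIZE 4096u
          #define HDR_SIZE  16u
          #define KEY_SIZE  sizeof(uint64_t)
          #define PTR_SIZE  sizeof(void *)

          /* n keys + (n+1) child pointers + a small header fit in one page. */
          #define FANOUT ((PAGE_SIZE - HDR_SIZE - PTR_SIZE) / (KEY_SIZE + PTR_SIZE))

          struct btree_node {
              uint32_t nkeys;                        /* keys currently stored */
              uint32_t is_leaf;                      /* 1 for leaf nodes      */
              uint64_t key[FANOUT];                  /* sorted keys           */
              struct btree_node *child[FANOUT + 1];  /* children (internal)   */
          };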

      • Re: (Score:3, Interesting)

        by Darinbob ( 1142669 )
        Much of this also depends on the relative size of the data elements compared to sizes of page or cache lines.

        For instance if the b-tree node is roughly the size of a page, the entire node must be read into memory at once from disk. Even reading a single byte of the record may necessitate reading the entire page. However this size will likely be much larger than a cache line. So you could have quite a lot of variance in performance based on how much of the structure you access; say scanning through the wh
    • Re: (Score:3, Interesting)

      by lalena ( 1221394 )
      Yes, you are correct. I think the author could have just shown figures 5 & 6 and said obviously figure 6 is better and I would have gotten the point.
      Instead I had to read a bunch of text and timing charts before he simply showed what his improvement was. Yes, cache oblivious is worse, but that wasn't the problem Knuth was trying to solve. You could make further arguments that the paging system should group links by location AND popularity. You could move more popular links to the top of the tree so y
      • Many servers (Score:5, Informative)

        by tepples ( 727027 ) <.tepples. .at. .gmail.com.> on Monday June 14, 2010 @08:36PM (#32573746) Homepage Journal

        Unless you have many servers

        The article does in fact mention many servers, specifically replacing twelve Squid servers with three Varnish servers.

        it is cheaper to throw money at the problem in the form of physical RAM before you start thinking about problems like this.

        No matter how much physical RAM you have, you're still limited by the speed of the interface between RAM and the CPU. If a set of closely related nodes can fit in a cache line, this improves locality so that the CPU can do more work instead of twiddling its thumbs.

    • Re:Nope, he didn't (Score:5, Insightful)

      by Darinbob ( 1142669 ) on Monday June 14, 2010 @07:47PM (#32573432)
      Agreed. Very often these results get misused and misunderstood. Every operation has a cost, and you need to know what those costs are before you decide which one of those gets measured.

      For instance, it is common with sorting algorithms to measure (in Big-O notation) how many comparison operations there are. However, these may not necessarily be very expensive relative to other operations, such as the cost of swapping two elements. If the elements are large structures and the comparison keys are integers, then you should be concerned about the number of copies. But if the elements are pointers to structures and the keys are long strings, then the concern will be with the number of comparisons.

      Almost every textbook on algorithms assumes some sort of idealized computing device. In the real world there are multiple levels of memory speeds, from CPU caches through virtual memory to disk. There are other external factors that must be taken into account as well; for instance, you may not have enough memory, virtual or not, to hold all the data at once. Say you're sorting a database selection, or you're on a mainframe with limited memory space but reading customer data from tapes.
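
      To make the copy-versus-compare trade-off concrete, here is a small sketch (hypothetical record layout, assumed sizes) contrasting "sort the elements" with "sort pointers to the elements": qsort() copies whole elements when it swaps, so the element size drives the copy cost, while the comparator drives the comparison cost.

          /* Illustration only: sorting large records directly vs. sorting
           * pointers to them.  The 1 KB payload is an assumption. */
          #include <stdlib.h>

          struct record {
              long key;
              char payload[1024];   /* big payload makes element copies expensive */
          };

          static int cmp_record(const void *a, const void *b)
          {
              const struct record *ra = a, *rb = b;
              return (ra->key > rb->key) - (ra->key < rb->key);
          }

          static int cmp_record_ptr(const void *a, const void *b)
          {
              const struct record *ra = *(const struct record *const *)a;
              const struct record *rb = *(const struct record *const *)b;
              return (ra->key > rb->key) - (ra->key < rb->key);
          }

          void sort_by_copy(struct record *r, size_t n)
          {
              qsort(r, n, sizeof r[0], cmp_record);      /* copies ~1 KB per swap */
          }

          void sort_by_pointer(struct record **p, size_t n)
          {
              qsort(p, n, sizeof p[0], cmp_record_ptr);  /* swaps pointer-sized items */
          }
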
    • Re: (Score:3, Insightful)

      by jemfinch ( 94833 )

      No, and no. The data structure described by Kamp is not a B-tree, but a heap. Additionally, it's cache-aware, not cache-oblivious.

    • Re:Nope, he didn't (Score:4, Insightful)

      by JasterBobaMereel ( 1102861 ) on Tuesday June 15, 2010 @02:12AM (#32575314)

      Does it work : Yes

      Is there a better way : Yes

      Did it take 46 *years* for anyone to find a better way : Yes

      If Knuth's mistakes take this long to spot, then I still rate his code over most people's

  • by gnasher719 ( 869701 ) on Monday June 14, 2010 @06:30PM (#32572724)
    You should have read a bit further, to the part about B-trees and B*-trees.
  • by conspirator57 ( 1123519 ) on Monday June 14, 2010 @06:30PM (#32572726)

    article summary:

    "disk is slower than RAM. some doofs don't realize their system is swapping. ergo algorithm is bad."

    throw in 'Knuth is wrong' to generate page hits.
    ???
    profit.

    non-sensationalized takeaway: "remember swap is slow; try not to use it."

    • by Anonymous Coward

      Poul-Henning is definitely not a "doof". He's single-handedly responsible for a huge amount of the FreeBSD kernel. Yes, this is the same FreeBSD that powers Yahoo!, that provided a significant portion of Mac OS X, and runs hundreds of thousands of businesses world-wide.

      To suggest that phk doesn't know what he's talking about is absurd. He's one of the top three UNIX gurus in the entire world. In fact, the Internet today is what it is thanks to his hard work and dedication.

      • by MightyMartian ( 840721 ) on Monday June 14, 2010 @06:59PM (#32573028) Journal

        I don't think anyone accused him of not knowing what he's talking about. He's been accused of using an extremely hyperbolic headline to draw readers. Besides, what he's saying isn't exactly brand new.

        • by ugen ( 93902 )

          That's PHK. You should have seen him 15 years ago in the FreeBSD core. He's a very smart developer but even more so he's very adept at blowing his own horn.

      • by McNihil ( 612243 ) on Monday June 14, 2010 @07:04PM (#32573072)

        To place algorithm theory in the same space as implementation is just plain wrong... and the article does say that in a better way than kdawson's sensationalist header.

        The implementation of a particular algorithm on specific hardware is more in line with resource economy than anything else, and to compare it to the theoretical implementation is just ludicrous.

        For instance, one could with simple means devise a machine where only bubble sort would be practical because of the massive overhead of quicksort on said machine. This is trivial... for instance, the Casio Fx180 calculator.

      • Yes, but... (Score:3, Funny)

        by l00sr ( 266426 )

        He's one of the top three UNIX gurus in the entire world. In fact, the Internet today is what it is thanks to his hard work and dedication.

        Still, I'd trust Don Knuth over Poul-Henning any day--at least Knuth can spell his own first name correctly.

    • non-sensationalized takeaway: "remember swap is slow; try not to use it."

      Not really. Closer to "Remember, swap is slow. Think about how you use it."

      • Re: (Score:3, Funny)

        by dominious ( 1077089 )

        non-sensationalized takeaway: "remember swap is slow; try not to use it."

        Not really. Closer to "Remember, swap is slow. Think about how you use it."

        "Your response has exactly the same length as the parent's quote. Interesting"

    • Re: (Score:3, Insightful)

      by maraist ( 68387 ) *
      Did you read the article? Not that it's terribly innovative or anything, but he's not just saying that swap is slow. He's saying that when you need to page to disk (e.g. a database or a disk-spilling cache), don't use traditional B-trees or B+trees, which tend to store only a single level of the tree in a memory page / disk page. This causes O(log n) disk lookups to find the leaf node (where all B+tree data lives, and half of B-tree data lives). Instead store multiple levels of the tree in a contiguous mem/disk page.
      • Instead store multiple levels of the tree in a contiguous mem/disk page.

        I seem to recall a chapter in a programming book that I read maybe 15 years ago, talking about 'locality of reference' for optimisation. It was the same concept as this: Structure your memory usage to keep data that are accessed together, stored together. Then, whatever caching mechanism is in place will be able to spend less time paging (or spread the paging out over a larger number of CPU cycles) and thus have better throughput.
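
        A tiny sketch of that locality idea (hypothetical names, not from any particular book): allocating tree nodes out of one contiguous pool keeps nodes that are created, and usually traversed, together on the same few pages, instead of scattering individually malloc()'d nodes across the address space.

            /* Sketch: a contiguous node pool for locality of reference.
             * Nodes allocated close together in time end up close together
             * in memory, so a traversal touches fewer pages and cache lines. */
            #include <stdlib.h>

            struct node {
                long key;
                struct node *left, *right;
            };

            struct node_pool {
                struct node *slab;   /* one big contiguous allocation */
                size_t used, cap;
            };

            static int pool_init(struct node_pool *p, size_t cap)
            {
                p->slab = malloc(cap * sizeof *p->slab);
                p->used = 0;
                p->cap = cap;
                return p->slab ? 0 : -1;
            }

            static struct node *pool_alloc(struct node_pool *p)
            {
                /* No per-node malloc(): neighbours in the tree tend to be
                 * neighbours in memory, which is the whole point. */
                return p->used < p->cap ? &p->slab[p->used++] : NULL;
            }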

    • by gmack ( 197796 )

      More like: swap is slow. When you are using it, try to keep things you need to access at the same time close together, to minimize reads.

    • "doofs"? You should look up some information on the author :)

    • Re: (Score:3, Informative)

      by borgboy ( 218060 )

      In no way whatsoever did he say 'remember swap is slow; try not to use it.'

      That's as wrong as the idiotic summary.

      Here's a relevant quote:

      A 300-GB backing store, memory mapped on a machine with no more than 16 GB of RAM, is quite typical. The user paid for 64 bits of address space, and I am not afraid to use it.

      The article is about redesigning binary heaps to account for non-linear access times between nodes due to swap. This point is critical. He's NOT avoiding swap, he's planning for it.
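
      For anyone wondering what the quoted "memory mapped" backing store looks like in code, here is a minimal sketch (not Varnish's actual source; error handling trimmed) of mapping a large file with mmap(2) and letting the VM system page it in on demand:

          /* Sketch only: map a large backing file and let the OS do the paging. */
          #include <fcntl.h>
          #include <stddef.h>
          #include <sys/mman.h>
          #include <sys/stat.h>
          #include <unistd.h>

          void *map_backing_store(const char *path, size_t *len)
          {
              int fd = open(path, O_RDWR);
              if (fd < 0)
                  return NULL;

              struct stat st;
              if (fstat(fd, &st) < 0) {
                  close(fd);
                  return NULL;
              }

              /* With a 64-bit address space, mapping hundreds of GB is fine
               * even with far less physical RAM; untouched pages cost nothing. */
              void *p = mmap(NULL, (size_t)st.st_size, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
              close(fd);                      /* the mapping survives the close */
              if (p == MAP_FAILED)
                  return NULL;

              *len = (size_t)st.st_size;
              return p;
          }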

  • You mean people are implementing heaps like that?

    That would indeed be stupid. People have been using B-trees http://en.wikipedia.org/wiki/B-tree [wikipedia.org] instead of binary search trees for cache-access (or sequential disk I/O) reasons forever.

    More recently we have been talking about cache-oblivious and cache-aware algorithms: http://en.wikipedia.org/wiki/Cache-oblivious_algorithm [wikipedia.org]

    I should write an article about Z-order and be on the first page of slashdot...

  • by InsertWittyNameHere ( 1438813 ) on Monday June 14, 2010 @06:36PM (#32572794)
    Yes it's true. In some real-world applications an algorithm encounters its worst-case running time more than the predicted theoretical average-case running time. This is where case-by-case optimizations come into play.

    Knuth never claimed the algorithm was the best choice in YOUR particular case. Don't drag his name through your sensational mud for the sake of your slashvertisement.
    • by DragonWriter ( 970822 ) on Monday June 14, 2010 @06:47PM (#32572904)

      Yes it's true. In some real-world applications an algorithm encounters its worst-case running time more than the predicted theoretical average-case running time.

      That's actually not the issue.

      The issue is that in the real world, the assumptions underlying the calculation of algorithmic complexity may not hold. For instance, Knuth's analysis that the author of the article here holds to be misleading (not, as the Slashdot title suggests, "wrong") calculates the complexity based on the assumption of ideal random access memory, that is, memory for which all accesses are equal cost.

      In real-world computers using caches (as pretty much all real-world computers do, often at many different levels) that assumption does not hold -- accesses to some parts of the memory address space are much more expensive than accesses to other parts, and which parts of the address space are expensive changes over time (and how it changes over time potentially depends on the caching strategy used at every level below the one at which the algorithm is implemented).

      This means that algorithms in the real world can scale worse than their theoretical "worst-case", if that theoretical worst-case scaling is based on the assumption of constant memory access cost, since that assumption does not hold in the real world.

      • by Darinbob ( 1142669 ) on Monday June 14, 2010 @08:05PM (#32573536)
        But everyone already knows that (or should). Knuth knows that as well; caches and paging systems are not unknown things to him. He was writing a textbook for students and simplified a complicated problem to the point where it can be studied and analyzed more easily. Similar to teaching students Newtonian physics. It is sensationalist and naive to call Knuth wrong here.
        • He was writing a textbook for students and simplified a complicated problem to the point where it can be studied and analyzed more easily. Similar to teaching students Newtonian physics.

          I don't find it all that similar. Newtonian mechanics describes the behavior of the objects people see in their daily lives to within the measurable noise level. Ideal random-access memory does not. It's as if aerospace engineers were taught the version of gravity with g (Earth sea-level acceleration) instead of the version with G*m/(r + h)^2 (acceleration given arbitrary planetary mass, planetary radius, and altitude). g works for land vehicles but not spacecraft, and as a lead-in to discussion of G*m/(r

      • This means that algorithms in the real world can scale worse than their theoretical "worst-case", if that theoretical worst-case scaling is based on the assumption of constant memory access cost, since that assumption does not hold in the real world.

        I agree with all of your post except this part. Algorithmic complexity theory is about orders of magnitude, not about precise numbers. That's why we have O(n) as a complexity class but not O(2n), O(3n) as separate classes; saying that an algorithm has O(n) wor

  • by jfengel ( 409917 ) on Monday June 14, 2010 @06:37PM (#32572812) Homepage Journal

    There isn't any "off-by-ten error", and this isn't telling us anything we don't already know (in CS terms): implementation on an actual computer can perform differently from an abstract machine.

    What the author is saying (quite well) is that virtual-memory paging amounts to cache misses, which cause extra performance overhead. He found a case where it was significant and got a 10x speedup in his particular application.

    The article is a little over-zealous in its characterization, though it's careful to note that this is not actually a theoretical novelty. The summary, on the other hand, bastardizes and exaggerates it.

    The article is interesting, and worth reading, but if you RTFS without RTFA you'll be dumber than you were before. Thanks, kdawson.

    • by juuri ( 7678 )

      What's troubling to me about this article is it reads like this guy hasn't been paying attention for quite a while.

      Squid is his use case to beat? Has he been in any high performance app stack in the past five years?

      Squid... really?

      • Re: (Score:3, Interesting)

        by Lord Crc ( 151920 )

        Squid is his use case to beat? Has he been in any high performance app stack in the past five years?

        Squid... really?

        Considering Varnish was written primarily due to the lacking performance of Squid, I don't see why this is so bad?

      • Re: (Score:3, Interesting)

        by Cirvam ( 216911 )

        Other than Varnish and Squid, what caching reverse proxy software is there? It looks like Nginx added caching to their core recently, although I'm not exactly sure if it's intended to act in the same capacity as Squid and Varnish or to be more like mod_cache for lighttpd. I guess you could use Apache and mod_proxy but that's not exactly high performance. I know my employer looked at the various offerings and we ended up writing our own on top of a webserver called OKWS.

    • by steveha ( 103154 )

      My favorite quotes from the article:

      Would you believe me if I claimed that an algorithm that has been on the books as "optimal" for 46 years, which has been analyzed in excruciating detail by geniuses like Knuth and taught in all computer science courses in the world, can be optimized to run 10 times faster?

      It would of course be unjust and unreasonable to blame Williams for not realizing that Atlas had invalidated one of the tacit assumptions of his algorithm: only hindsight makes that observation possible.

  • I am sympathetic with his argument, but trying to figure out the best algorithm for a given OS and hardware combo is very difficult and usually not portable.

    • I am sympathetic with his argument, but trying to figure out the best algorithm for a given OS and hardware combo is very difficult and usually not portable.

      Assuming that the number of actual caching strategies is smaller than the number of hardware and OS combos (which it would seem likely to be, since at least some of those should share caching strategies), what is really needed is analysis of the determinants of algorithm performance that take into account different caching strategies. That's still diffi

    • If you make sure that you require fewer pages to do something, you will generate fewer page faults. I would think that this fact is rather universal. Of course, you can go further by tweaking the parameters in such a way that they run well on a specific processor.

      • I suppose, but have all modern OSes finally agreed on a universal LRU algorithm, or do we still have "clever" algorithms that try to figure out whether the application is user-centric or server-centric and then change the page expiry accordingly?

  • news at 11 (Score:3, Insightful)

    by inkyblue2 ( 1117473 ) on Monday June 14, 2010 @06:47PM (#32572906)

    Algorithm revised in light of real-world performance constraints! Read all about it!

    Seriously, we just rewrote a tree (that runs in a high traffic serving environment) this month at work because it wasn't streamlined just right to take full advantage of the underlying architecture. No one will write a paper about it.

    Also, hey kids, profile your code.

  • Why trust the OS? (Score:5, Interesting)

    by michaelmalak ( 91262 ) <michael@michaelmalak.com> on Monday June 14, 2010 @06:50PM (#32572942) Homepage
    I'm not sure what the advantages are of dancing around with the OS's virtual memory system. If it were me, I would detect the amount of physical memory available, allocate 80% of it, lock it into physical memory and manage the paging myself. It would be portable, not subject to changes in the VM algorithm by the OS authors, and easier to directly monitor and improve performance. But that's just me.
    • Re: (Score:2, Insightful)

      by Anonymous Coward
      Then someone starts two copies of your program and you lock 160% of available memory
    • In a virtual world it lies: I suspect that most operating systems lie about what physical hardware is actually there:

      VMware allows for over-commitment of most hardware (CPU, memory, and hard disk). Windows allows for over-commitment of memory.

      Since this is making assumptions about memory management: (Flash: Various algorithms may be tuned for optimized use in your specific use-case).

      In your case of grabbing 80% of memory: This works if you really need the memory - in which case you have to have it: If this forces Windows (li

      • In Windows, the API to allocate physical memory is VirtualAlloc [microsoft.com].

        Good point about VMWare. I hadn't thought about that.

        I just used 80% as an example. Obviously one would provide command-line switches or config options for either X% of memory or Y MB of memory.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Why is this funny? Almost all DBMS kernels do it -- they manage physical memory and disk blocks themselves. More to the point, Dr. Kamp is glossing over the fact that one doesn't use in-memory data structures where data may reside on disk.

      In this case, the suitable data structure is any variation of B+Tree, which actually contains children of a node in one memory page and when there is no more room in that page a page split happens. He used the wrong D/S to begin with, concludes Knuth was wrong and finally e

    • Very simply, because the OS developers are usually pretty smart, and when you try to outsmart them and fail you look like a tool. And you're going to fail. If I knew that everyone I posted a response like this to would pay up, I'd bet you $1k that you'd fail, and make mad cash.

    • Re: (Score:3, Insightful)

      by gnud ( 934243 )
      Because your paging algorithm will be much less tested, used and bugfixed than any OS paging algorithm. And the memory you allocated might be swapped out by the OS anyway.
  • well (Score:2, Informative)

    I don't have time to read through the article and verify the numbers (at least right now) but anyone who's even paged through TAOCP knows it was written for computers where "swap" was what the operator did when the tape was full. (Ok, they also had drum memory).
  • by turing_m ( 1030530 ) on Monday June 14, 2010 @06:54PM (#32572978)

    If you meet him some day, and you think this stuff is worth it, buy him a beer.

  • Stupid headline (Score:4, Insightful)

    by Sloppy ( 14984 ) on Monday June 14, 2010 @06:57PM (#32573020) Homepage Journal

    Good article (increase your locality to get fewer page faults). Stupidly wrong Slashdot headline and summary. "Off by ten error?" Please!

    kdawson, next time, RTFA before you post someone's lame summary.

  • by SashaMan ( 263632 ) on Monday June 14, 2010 @06:59PM (#32573040)

    The summary is wrong when it talks about "an off by ten error in btrees". In fact, the article talks about how normal binary heap implementations are slow when virtual memory is taken into account.

    In fact, b-trees ARE cache aware and ARE optimized to limit paging on disk. PHK's algorithm is essentially a cache-aware version of a binary heap.

    That is, binary tree is to b-tree as binary heap is to PHK's b-heap.
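
    To make the analogy concrete, here is the standard implicit binary-heap index arithmetic the article starts from, annotated with the page arithmetic that makes it hurt under virtual memory. Sizes are assumptions and this is only an illustration of the problem; PHK's B-heap instead remaps the indices so that a parent and several generations of its descendants share one page.

        /* Classic 1-based implicit binary heap.  With 8-byte items and 4 KB
         * pages, the first nine levels (511 items) share page 0, but deeper
         * parent/child steps almost always cross a page boundary -- the VM
         * behaviour the B-heap layout is designed to avoid. */
        #include <stddef.h>

        #define ITEM_SIZE      8u
        #define PAGE_SIZE      4096u
        #define ITEMS_PER_PAGE (PAGE_SIZE / ITEM_SIZE)   /* 512 slots per page */

        static inline size_t heap_parent(size_t i) { return i / 2; }
        static inline size_t heap_left(size_t i)   { return 2 * i; }
        static inline size_t heap_right(size_t i)  { return 2 * i + 1; }

        /* Which VM page a heap slot lives on, assuming the array is
         * page-aligned (slot 0 is unused for 1-based indexing). */
        static inline size_t heap_page(size_t i)   { return i / ITEMS_PER_PAGE; }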

  • by MadAndy ( 122592 ) on Monday June 14, 2010 @07:04PM (#32573074)
    Knuth's stuff assumes everything is RAM-resident - as long as you don't violate that what he wrote is as valid as ever. I'm quite certain that he'll have different suggestions for tasks involving storing data on disk. Even though the disk writes are implicit because the OS is doing them, they're still there, just as they would be had he coded them explicitly. So of course you're going to get poor performance using a RAM-resident algorithm for a disk-resident application.

    The RAM resident stuff is still useful, both at a lower level, but also for those of us creating applications that can live entirely in memory. A web site I did recently is careful to ensure that the entire primary data set can fit in memory, and for that site everything he wrote is still perfectly valid.

    In fact, for very high performing websites you try to ensure that at least most of your requests come from memory rather than disk, which makes Knuth's stuff more important than ever. If you can't do it in RAM then you'd better have a lot of spindles!

    • Knuth's stuff assumes everything is RAM-resident - as long as you don't violate that what he wrote is as valid as ever.

      CPU cache.

  • Continuing a discussion...

    Seems to me that this b-heap just reduces the number of pages probably needed from log(n) to log(n)/C, where C is the number of levels that are stored in the same page. And for removing a node it may need to access log(n)/C random pages. So this is just a constant-factor improvement... it's just that the constant is a number of pages, so it has a large real-world cost.

    I'd like to get people's thoughts on using timer wheels [lkml.org] instead, like the linux kernel uses for timers. Say you take say

    • Re: (Score:3, Informative)

      by chhamilton ( 264664 )

      The main reason they are using the buckets is to delay sorting costs as far as possible into the future so that there is less cost for most timers (as most timers are apparently deleted far before expiring). I'd suggest that the major performance gain is due to this lazy sorting, and not because their data structure avoids page faults. (Well, it does avoid page faults that the old linked list algorithm would have caused, but these page faults are due to the sorting going on in the original code which is a
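
      For readers who have not met them, here is a minimal sketch of a single-level hashed timer wheel (parameters are assumptions; an illustration of the bucket idea, not the Linux kernel's code): insertion is O(1) into the bucket for the expiry tick, and only that bucket is examined when the tick comes around, so a timer cancelled early never pays for any sorting.

          /* Sketch: single-level hashed timer wheel. */
          #include <stddef.h>

          #define WHEEL_SLOTS 256u           /* assumption: expiry tick modulo 256 */

          struct timer {
              unsigned long expires;         /* absolute tick of expiry */
              void (*fn)(void *);
              void *arg;
              struct timer *next;            /* singly linked bucket chain */
          };

          struct wheel {
              struct timer *slot[WHEEL_SLOTS];
              unsigned long now;             /* current tick */
          };

          static void wheel_add(struct wheel *w, struct timer *t)
          {
              unsigned idx = (unsigned)(t->expires % WHEEL_SLOTS);
              t->next = w->slot[idx];        /* O(1) insert, no sorting */
              w->slot[idx] = t;
          }

          static void wheel_tick(struct wheel *w)
          {
              unsigned idx = (unsigned)(++w->now % WHEEL_SLOTS);
              struct timer **pp = &w->slot[idx];

              while (*pp != NULL) {
                  struct timer *t = *pp;
                  if (t->expires == w->now) {   /* due now: unlink and run */
                      *pp = t->next;
                      t->fn(t->arg);
                  } else {
                      pp = &t->next;            /* wrapped around; not due yet */
                  }
              }
          }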

  • by peter303 ( 12292 ) on Monday June 14, 2010 @07:28PM (#32573294)
    I forget the exact amount, but it was like PI or E dollars for every typo. I am not sure what the payment is for an algorithmic error.
    • Re: (Score:3, Funny)

      by kybred ( 795293 )

      I forget the exact amount, but it was like PI or E dollars for every typo. I am not sure what the payment is for an algorithmic error.

      I think it's e^(i*pi/2) [wikipedia.org]

  • by shoppa ( 464619 ) on Monday June 14, 2010 @07:29PM (#32573306)

    "Virtual memory leads to virtual performance."

    Just what I always wanted, a B-tree implementation that is guaranteed to swap.

  • Extra, Extra!
    GOTO Successfully Used Without Harm!
  • I'm interested to see if this counts as a true error and thus deserving a check.

  • by cwills ( 200262 ) on Monday June 14, 2010 @11:12PM (#32574674)

    This really isn't anything new. Knuth didn't get it "wrong". He based his analysis of the algorithms assuming a system that had dedicated memory and where each instruction of code ran uninterrupted and in a consistent fashion.

    Certain memory access patterns are "bad" under a system that uses virtual memory, especially when the base system is memory constrained. This has been a well-known fact for decades. In fact, one of the maybe lost arts of programming was ensuring reference locality, not only of data, but also of code. It was a common practice to ensure that often-called subroutines or functions were either located in the same page of memory as the calling code, or to group all the often-called functions into as few pages of memory as possible.

    Basically, every address space has what is sometimes called a working set: a set of pages that have been recently referenced. There are three things that can happen with a working set: it can remain the same size, it can grow, or it can shrink. If it remains the same, there is no additional load on the operating system. If it shrinks, there is no additional load on the operating system; in fact this can help a memory-constrained system. A growing working set, however, can lead to a thrashing system. Some operating systems will monitor working set sizes and can adjust dispatch priorities and execution classes depending on what the recent working set size history is. An application with a growing working set may very well find itself at the end of the queue, way behind applications that have a static working set size.

    Take for example the following very simple program:

    static string buffer[4096][256]
    while not infile.eof() do
        infile.readinto(buffer[0], 256)
        outfile.writefrom(buffer[0], 256)
    end

    Here the working set of this program will be very small. Ignoring the file I/O routines, all the code and data references will be limited to basically a fixed section of memory. From a virtual memory standpoint, this is a "well behaved" application.

    Now take the following

    static string buffer[4096][256]
    while not infile.eof() do
        bindex = random(0,4095)
        infile.readinto(buffer[bindex], 256)
        outfile.writefrom(buffer[bindex], 256)
    end

    Functionally the same program, however the data reference pattern here is all over the place. The working set will be large, since many of the buffer pages will be referenced. The program never stays long on the same memory location.

    Finally take the following example

    static string buffer[4096][256]
    infile.readinto(buffer[0], 256*4096)  // fill the entire buffer
    for i = 0 to 4095 do
        numbercrunch( buffer[i] )
    end

    Here there will be an initially huge working set as the data is read in. However, the working set will shrink to a reasonable size once the numbercrunching phase starts since the data references will all be localized to a small block of memory.
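
    For the curious, a compact C rendering of the first two examples above (buffer dimensions and record size are the same assumptions as in the pseudocode):

        /* Sequential vs. random buffer use: the first copy loop re-uses one
         * 256-byte slot and keeps a tiny working set; the second touches
         * slots all over a 1 MB array and keeps a working set the size of
         * the whole buffer. */
        #include <stdio.h>
        #include <stdlib.h>

        #define NBUF  4096
        #define BSIZE 256

        static char buffer[NBUF][BSIZE];

        static void copy_sequential(FILE *in, FILE *out)
        {
            while (fread(buffer[0], 1, BSIZE, in) == BSIZE)
                fwrite(buffer[0], 1, BSIZE, out);       /* small working set */
        }

        static void copy_random(FILE *in, FILE *out)
        {
            while (!feof(in)) {
                int i = rand() % NBUF;                  /* reference pattern jumps around */
                if (fread(buffer[i], 1, BSIZE, in) != BSIZE)
                    break;
                fwrite(buffer[i], 1, BSIZE, out);       /* large working set */
            }
        }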

  • by phkamp ( 524380 ) on Tuesday June 15, 2010 @03:26AM (#32575568) Homepage

    What a misleading title, it is not even in the same continent as the article.

    A large number of people obviously didn't read the actual article.

    And I guess Knuth has quite a fanboi community on slashdot. I wonder if he really appreciates that?

    Some of those who did read the article do not seem to know the difference between a binary heap and a binary tree, and even the pretty strong clue to the difference in the text did not make them go check Wikipedia. 10 out of 10 for self-esteem, but 0 out of 10 for clue.

    Those who think CS should be unsullied by actual computers should make sure to note this belief on their resume. (Trust me, everybody you send your resume to will appreciate that.)

    Those who advocate getting rid of Virtual Memory must have much more money for RAM than is sensible. I wish I could afford that.

    About five comments try, in vain it seems, to explain the point of my article to the unwashed masses (kudos! but really, what are you doing here?)

    Not one comment rises to a level where I feel a need to answer it specifically. On Reddit over the weekend there were about a handful.

    Is that really par for the course on slashdot these days?

    Sic transit gloria mundi...

    Poul-Henning
