Knuth Got It Wrong
davecb writes "Think you've mastered the art of server performance? Think again. Poul-Henning Kamp, in an article at ACM Queue, finds an off-by-ten error in btrees, because they fail to take virtual memory into account. And he solves the problem in the open source 'Varnish' HTTP accelerator, for all of us to see and use."
Knuth didn't get it wrong (Score:5, Insightful)
don't use swap, doofs (Score:4, Insightful)
article summary:
"disk is slower than RAM. some doofs don't realize their system is swapping. ergo algorithm is bad."
throw in 'Knuth is wrong' to generate page hits.
???
profit.
non-sensationalized takeaway: "remember swap is slow; try not to use it."
Agreed (Score:5, Insightful)
Knuth's analysis is valid within the framework of his assumptions, and what the linked article describes has been known as the "cache-oblivious B-tree" for quite some time.
Yeah, using this logic, once quantum computers get to the point of solving general problems there's going to be an awful lot of people who "got it wrong" because their algorithms do not apply in a quantum computer. Advancements in technology causing algorithms & data structures to be tweaked means the original person who thought them up "got it wrong"? Ridiculous.
"Oh, RSA was brilliant. But they got it wrong. You see, they failed to account for xanxinglydiumapping in 2056 computers. Poor stupid wrong bastards."
Theoretical performance vs real-world performance (Score:3, Insightful)
Knuth never claimed the algorithm was the best choice in YOUR particular case. Don't drag his name through your sensational mud for the sake of your slashvertisement.
Knuth didn't get anything wrong (Score:5, Insightful)
There isn't any "off by ten error", and this isn't telling us anything we don't already know (in CS terms): implementation on an actual computer can be different in performance from an abstract machine.
What the author is saying (quite well) is that the virtual memory performance amounts to cache misses, which cause extra performance overhead. He found a case where it was significant and got a 10x speedup in his particular application.
The article is a little over-zealous in its characterization, though it's careful to note that this is not actually a theoretical novelty. The summary, on the other hand, bastardizes and exaggerates it.
The article is interesting, and worth reading, but if you RTFS without RTFA you'll be dumber than you were before. Thanks, kdawson.
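The paging effect the parent describes can be made concrete. Below is a minimal sketch (my own illustration, not the article's code) that counts how many distinct VM pages one root-to-leaf walk of a classic implicit binary heap touches, assuming 4 KiB pages and 4-byte entries; once the tree outgrows a page, nearly every deep level costs a separate page, which is exactly the effect Kamp attacks by packing page-sized subtrees.

```python
# Sketch, not from the article: count distinct VM pages touched by one
# root-to-leaf walk of an implicit binary heap (flat array, children of
# node i at 2i+1 and 2i+2). Page and entry sizes are assumptions.
PAGE = 4096   # assumed page size in bytes
ENTRY = 4     # assumed bytes per heap entry

def pages_touched(n):
    i, pages = 0, set()
    while i < n:
        pages.add(i * ENTRY // PAGE)  # which page entry i lives on
        i = 2 * i + 1                 # always descend to the left child
    return len(pages)

# For a small heap the whole walk stays inside one page. For a
# 10-million-entry heap the top ten levels share page 0, but each of
# the 13 deeper levels lands on its own page: 14 potential page faults.
```

A B-heap in the article's sense lays whole subtrees into each page, so the same walk crosses only a handful of pages instead of one per deep level.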
news at 11 (Score:3, Insightful)
Algorithm revised in light of real-world performance constraints! Read all about it!
Seriously, we just rewrote a tree (that runs in a high traffic serving environment) this month at work because it wasn't streamlined just right to take full advantage of the underlying architecture. No one will write a paper about it.
Also, hey kids, profile your code.
Ow my eyes (Score:1, Insightful)
R-G colorblindness makes it near impossible to distinguish the graphs from each other.
Re:Nope, he didn't (Score:5, Insightful)
Mod parent up. There was a comment on the article that referenced cache-oblivious algorithms, which was a new concept to me and very interesting. Basically, this class of algorithms assumes a memory hierarchy (e.g., fast RAM vs. slow disk) and is designed to limit the number of times the slower memory is accessed. Importantly, cache-oblivious algorithms are optimal REGARDLESS of the size of the cache. That's opposed to a cache-aware algorithm, like a normal B-tree, where the size of each node is set according to the page size of the machine.
A very helpful overview here from MIT Opencourseware: http://www.catonmat.net/blog/mit-introduction-to-algorithms-part-fourteen/ [catonmat.net]
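To illustrate the cache-aware side of that distinction, here is a toy sketch (names and constants are my own assumptions, not from the course or the article): a cache-aware B-tree picks its fanout from the machine's page size, so a lookup costs one page read per tree level.

```python
import math

PAGE_BYTES = 4096  # assumed page size
KEY_BYTES = 8      # assumed bytes per key

def btree_fanout(page=PAGE_BYTES, key=KEY_BYTES):
    # Cache-aware choice: size each node to fill exactly one page.
    return page // key

def lookup_page_reads(n, fanout):
    # Tree height = worst-case page reads per lookup.
    return math.ceil(math.log(n, fanout))
```

With 4 KiB pages and 8-byte keys the fanout comes out to 512, so a billion keys need only four page reads per lookup (versus about thirty for a binary tree with one node per page). The cache-oblivious trick is to get within a constant factor of this without ever knowing the page size, e.g. via a van Emde Boas layout.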
Stupid headline (Score:4, Insightful)
Good article (increase your locality to get fewer page faults). Stupidly wrong Slashdot headline and summary. "Off by ten error?" Please!
kdawson, next time, RTFA before you post someone's lame summary.
Re:Are you serious? Do you even know who phk is? (Score:4, Insightful)
I don't think anyone accused him of not knowing what he's talking about. He's been accused of using an extremely hyperbolic headline to draw readers. Besides, what he's saying isn't exactly brand new.
Re:don't use swap, doofs (Score:3, Insightful)
Re:Are you serious? Do you even know who phk is? (Score:5, Insightful)
To place algorithm theory in the same space as implementation is just plain wrong... and the article says as much, in a better way than kdawson's sensationalist header.
The implementation of a particular algorithm on specific hardware is more in line with resource economy than anything else, and subjecting it to comparison with the theoretical implementation is just ludicrous.
For instance, one could with simple means devise a machine where only bubble sort would be feasible because of the massive overhead of qsort on it. This is trivial... the Casio fx-180 calculator, for instance.
Seymour Cray on virtual memory (Score:4, Insightful)
"Virtual memory leads to virtual performance."
Just what I always wanted, a B-tree implementation that is guaranteed to swap.
Re:Why trust the OS? (Score:2, Insightful)
Re:Nope, he didn't (Score:5, Insightful)
For instance, it is common with sorting algorithms to measure (in Big-O notation) how many comparison operations there are. However, these may not be very expensive relative to other operations, such as the cost of swapping two elements. If the elements are large structures and the comparison keys are integers, then you should be concerned about the number of copies. But if the elements are pointers to structures and the keys are long strings, then the concern will be the number of comparisons.
Almost every textbook on algorithms assumes some sort of idealized computing device. In the real world there are multiple levels of memory speed, from caches down through virtual memory to disk. There are also other external factors to take into account, for instance when you don't have enough memory, virtual or not, to hold all the data at once: say you're sorting a database selection, or you're on a mainframe with limited memory space but reading customer data from tapes.
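The comparisons-vs-copies distinction above can be sketched with a toy example (record shapes and sizes invented for illustration): sort lightweight indices into an array of heavy records, so only small integers ever move, and count the comparisons, which become the operation that scales.

```python
import functools
import random

# Toy sketch: count the comparisons a sort performs when we move only
# lightweight indices and leave the heavy records in place.
random.seed(1)
counts = {"cmp": 0}

def counting_cmp(a, b):
    counts["cmp"] += 1
    return (a > b) - (a < b)

# 1000 records: a bulky payload plus a small integer sort key.
records = [("payload" * 50, random.randrange(10**6)) for _ in range(1000)]

# Sort indices by key: only ints are copied, so comparisons dominate.
order = sorted(range(len(records)),
               key=functools.cmp_to_key(
                   lambda i, j: counting_cmp(records[i][1], records[j][1])))
```

For 1000 random keys the comparison count lands in the usual O(n log n) range; if the records themselves were shuffled instead of the indices, the copy cost of the 350-byte payloads would dominate instead.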
Re:Theoretical performance vs real-world performan (Score:4, Insightful)
Re:Nope, he didn't (Score:3, Insightful)
No, and no. The data structure described by Poul-Henning Kamp is not a B-tree, but a heap. Additionally, it's cache-aware, not cache-oblivious.
That quote does sound like a doofus (Score:1, Insightful)
A 300-GB backing store, memory mapped on a machine with no more than 16 GB of RAM, is quite typical. The user paid for 64 bits of address space, and I am not afraid to use it.
As an undergraduate in the early 1990s, I am sure I was told about "out of core" algorithms within my first two years in school, including how people were starting to apply these to "out of cache" scenarios. In other words, everyone knows and takes for granted that you do not run a RAM-oriented algorithm such that its working set expands beyond the RAM space that has the access metrics you require.
And B-trees/B*-trees are the classic data structure for organizing data that is too large for your main working RAM. And the HPC community has been tuning hybrid data structures to live in multi-level RAM hierarchies for a long, long time.
I don't care how much of a "guru" the author is. If he wants to believe virtual memory is a transparent abstraction (a typical Unix acolyte mistake), he is ignoring a lot of computer science; virtual memory is just a slightly more graceful failure handler for temporary resource crises. If your average working set ever meets swapping, you are doing it wrong.
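The memory-mapping setup in the quote above can be sketched at toy scale (everything here is scaled down for illustration; the quote's real store is 300 GB mapped on a 16 GB machine):

```python
import mmap
import tempfile

# Sketch of the quoted approach: map a backing file and let the kernel
# page pieces in on demand. SIZE is a 1 MiB stand-in for the 300 GB store.
SIZE = 1 << 20

with tempfile.TemporaryFile() as f:
    f.truncate(SIZE)                 # sparse file: blocks allocated lazily
    with mmap.mmap(f.fileno(), SIZE) as mm:
        mm[0:5] = b"hello"           # first touch of a page faults it in
        readback = bytes(mm[0:5])
```

Whether this is "quite typical" is exactly what the parent disputes: the kernel will happily page the mapping in and out for you, but nothing stops the working set from thrashing once it exceeds RAM.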
Re:Nope, he didn't (Score:4, Insightful)
Does it work : Yes
Is there a better way : Yes
Did it take 46 *years* for anyone to find a better way : yes
If Knuth's mistakes take this long to spot, then I still rate his code over most people's.
Re:Nope, he didn't (Score:5, Insightful)
Re:Why trust the OS? (Score:3, Insightful)
Re:Careful there... (Score:3, Insightful)
Where n is the number of items stored, the lookup time is given by log(n)*COMPARISON_COST + log(n)*MEMORY_ACCESS_COST.
That gives O(log n) only if COMPARISON_COST and MEMORY_ACCESS_COST are *constant* - the entire point of the article is that MEMORY_ACCESS_COST is not constant, but increases as n increases.
Re:Careful there... (Score:3, Insightful)
Except it's a bounded increase. That's my whole point. There is nothing slower to cache from than the hard drive. You can assume every operation has the worst possible access time and you'd still end up with the same O() running time.
log(n)*COMPARISON_COST + log(n)*MEMORY_ACCESS_COST <= log(n)*COMPARISON_COST + log(n)*MEMORY_ACCESS_COST_MAX <= CONSTANT*log(n)