Can "Page's Law" Be Broken?
theodp writes "Speaking at the Google I/O Developer Conference, Sergey Brin described Google's efforts to defeat "Page's Law," the tendency of software to get twice as slow every 18 months. 'Fortunately, the hardware folks offset that,' Brin joked. 'We would like to break Page's Law and have our software become increasingly fast on the same hardware.' Page, of course, refers to Google co-founder Larry Page, last seen delivering a nice from-the-heart commencement address at Michigan that's worth a watch (or read)."
Re:I don't think that holds up (Score:5, Informative)
All he has done is put numbers into Wirth's law.
I remembered this as "software gets slower faster than hardware gets faster", but Wikipedia has a slightly different wording: "software is getting slower more rapidly than hardware becomes faster".
http://en.wikipedia.org/wiki/Wirth%27s_law
In fact, that article also cites a version called "Gates's Law", including the 50% reduction in speed every 18 months.
K.
Re:The 'easy' way (Score:5, Informative)
Make developers target a slow and memory constrained platform. Then you get stellar performance when it runs on the big machines.
Hardly. Have you never heard of space-time tradeoffs? That is, the most common compromise one has to make when selecting an algorithm for solving a problem. If you assume you have a highly constrained system, you'll select an algorithm that works within those constraints, which probably means favoring space over time. Conversely, if you know you're working on a machine with multiple gigabytes of memory, you'll do the exact opposite.
In short: there's *nothing wrong with using resources at your disposal*. If your machine has lots of memory, and you can get better performance by building a large, in-memory cache, then by all means, do it! This is *not* the same as "bloat". It's selecting the right algorithm given your target execution environment.
Grosch's (other) Law (Score:4, Informative)
Herb Grosch said it in the 1960s: anything the hardware boys come up with, the software boys will piss away.
Benefits of being able to render over 100 fps (Score:4, Informative)
Anything past ~70 fps is effectively imperceptible to the average human eye.
I disagree. If you can render the average scene at 300 fps, you can:
If you design the game to run at 70 fps for a slow and memory constrained machine [...] you've sacrificed a lot in visual quality.
A well-engineered game will have (or be able to generate) meshes and textures at high and low detail for close-up and distant objects respectively. On high-spec PCs, you can use the high-detail assets farther from the camera; on the slow and memory-constrained PCs that your potential customers already own, they get the low-detail assets but can still enjoy the game.
Re:Of Course (Score:3, Informative)
This attitude is actually the root cause of the problem. I've never heard this called "Page's Law," but in the industry it's known as "code bloat."
No, it's called making a design decision. If the RAM is cheaper than doing it with less RAM, then you buy more RAM. If it isn't, you spend more time on design. The only bad part about it is when it leads to excessive power consumption. Which is, you know, all the time. But that's really the only thing strictly WRONG with spending more cycles, or more transistors.
KDE4 is ~30% faster than KDE3 (Score:3, Informative)
Page's Law is really May's Law! (Score:3, Informative)
"Page's law" is simply a restatement of May's law:
"Software efficiency halves every 18 months, compensating Moore's Law".
David May is a British Computer scientist who was the lead architect for the Transputer. See:
http://en.wikipedia.org/wiki/David_May_(computer_scientist) [wikipedia.org]
and page 20 of:
http://www.cs.bris.ac.uk/~dave/iee.pdf [bris.ac.uk]
quote (Score:3, Informative)
A supercomputer is a device for turning compute-bound problems into I/O-bound problems.
-- Ken Batcher
Re:Of Course (Score:3, Informative)
I think the STL invokes a lot of copy constructors that a lot of people do not expect. The algorithms themselves tend to be well implemented, though. None of that really has much to do with OO: templates could have been replaced with member-function-free structs and lots of casting of void *.
Re:Of Course (Score:3, Informative)
On almost any modern system, a 32-bit aligned access is atomic as far as the CPU is concerned. If they were just reading the global array, there is no need to use a lock; in fact that's a placebo lock, which adds unnecessary burden on the memory or reservation system. In realtime systems it is common for global data to be modified only at certain times, when readers are not running.
Re:Of Course (Score:3, Informative)
This wasn't a realtime system, and they weren't just doing atomic updates to a single array. There were transactions that involved changes to multiple data structures, and there was no synchronization of any kind. There were several long-standing bugs caused by reader threads getting their data while another thread was halfway through an update.
Sorry for not including all this detail in my original post.
Re:Of Course (Score:3, Informative)
Then MS and Vista must have knocked your socks off!
Funny you should say that, because it's an example that drives the point home so well. For all the flak Vista got about performance, on the day it was released you could buy a PC, for less than the price of the cheapest iMac (a machine that would barely run OS X at all), that would run it *well* (dual-core, 2GB RAM, 256MB GPU).
In contrast, it took Apple *years* after OS X 10.0 - and due at least as much to dramatically faster hardware as improved software - before it could even be described as "not slow". You quite literally could not buy hardware that OS X ran well on for _years_ after its release. It wasn't until the G5 Macs (and a few $129 OS updates) that anyone could even begin to call it "fast" with a straight face.
However, the increases from 10.2 to 10.3, and 10.3 to 10.4 were impressive in their own right because 10.2 was where the new OS X reached speed parity with OS 9, IMO.
Well, I have to disagree. OS 9 was quite quick even on paltry ~200MHz G3s. By my standards, OS X doesn't run "fast" on anything less than a G5 (no, not even dual G4s). Even then, my Mum's 1.9GHz/2.5GB G5 iMac can be sluggish without reason.
I'm hoping that 10.6 will be the speed boost I was expecting since they are claiming to have focused on 'under the hood' improvements (whatever that really means).
It means they're modifying it to make better use of the 4+ cores/CPU future, just like Microsoft did with Vista and Windows 7. As a result, it will probably be slower on "low-end" single-core machines, and it won't improve much on "mid-range" dual-core machines.
Re:Of Course (Score:3, Informative)
Have you read his follow-on comment?
http://slashdot.org/comments.pl?sid=1252121&cid=28173153 [slashdot.org]
Here's the money quote:
There were several long-standing bugs caused by reader threads getting their data while another thread was halfway through an update.
That behaviour can't be intentional. :)