Can "Page's Law" Be Broken? 255

theodp writes "Speaking at the Google I/O Developer Conference, Sergey Brin described Google's efforts to defeat "Page's Law," the tendency of software to get twice as slow every 18 months. 'Fortunately, the hardware folks offset that,' Brin joked. 'We would like to break Page's Law and have our software become increasingly fast on the same hardware.' Page, of course, refers to Google co-founder Larry Page, last seen delivering a nice from-the-heart commencement address at Michigan that's worth a watch (or read)."
  • by Keith_Beef ( 166050 ) on Monday June 01, 2009 @09:07AM (#28166775)

    All he has done is put numbers into Wirth's law.

    I remembered this as "software gets slower faster than hardware gets faster", but Wikipedia has a slightly different wording: "software is getting slower more rapidly than hardware becomes faster".

    http://en.wikipedia.org/wiki/Wirth%27s_law

    In fact, that article also cites a version called "Gates's Law", including the 50% reduction in speed every 18 months.

    K.

  • Re:The 'easy' way (Score:5, Informative)

    by Abcd1234 ( 188840 ) on Monday June 01, 2009 @09:25AM (#28166981) Homepage

    Make developers target a slow and memory constrained platform. Then you get stellar performance when it runs on the big machines.

    Hardly. Have you never heard of space-time tradeoffs, i.e., the most common compromise one has to make when selecting an algorithm for solving a problem? If you assume you have a highly constrained system, then you'll select an algorithm that works within those constraints. That probably means favouring space over time. Conversely, if you know you're working on a machine with multiple gigabytes of memory, you'll do the exact opposite.

    In short: there's *nothing wrong with using resources at your disposal*. If your machine has lots of memory, and you can get better performance by building a large, in-memory cache, then by all means, do it! This is *not* the same as "bloat". It's selecting the right algorithm given your target execution environment.
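
    As a rough illustration of the tradeoff described above (a hypothetical sketch; the function and type names are invented), the same computation can be packaged either way in C++:

        // Space-time tradeoff sketch: cache results in memory when RAM is
        // plentiful, recompute on demand when it is constrained.
        #include <cmath>
        #include <cstdint>
        #include <unordered_map>

        // The "expensive" function we may want to cache. Illustrative only.
        double expensive(std::uint32_t n) {
            double acc = 0.0;
            for (std::uint32_t i = 1; i <= n; ++i)
                acc += std::sqrt(static_cast<double>(i));
            return acc;
        }

        // Big machine: trade memory for speed with an in-memory cache.
        class CachedEval {
        public:
            double operator()(std::uint32_t n) {
                auto it = cache_.find(n);
                if (it != cache_.end()) return it->second;  // hit: O(1) amortized
                double v = expensive(n);                    // miss: compute once
                cache_.emplace(n, v);
                return v;
            }
        private:
            std::unordered_map<std::uint32_t, double> cache_;  // grows with inputs
        };

        // Constrained machine: trade speed for memory by recomputing each call.
        inline double uncachedEval(std::uint32_t n) { return expensive(n); }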

  • Grosch's (other) Law (Score:4, Informative)

    by Anonymous Coward on Monday June 01, 2009 @09:34AM (#28167069)

    Herb Grosch said it in the 1960s: "Anything the hardware boys come up with, the software boys will piss away."

  • Anything past ~70 fps is really unnoticeable to the average human eye.

    I disagree. If you can render the average scene at 300 fps, you can:

    • Apply motion blurring (think 4x temporal FSAA) at 60 fps. Film gets away with 24 fps precisely because of motion blur.
    • Keep a solid 60 fps even through pathologically complex scenes.
    • Render at 60 fps even when four players have joined in on the same home theater PC.

    If you design the game to run at 70 fps for a slow and memory constrained machine [...] you've sacrificed a lot in visual quality.

    A well-engineered game will have (or be able to generate) meshes and textures at high and low detail for close-up and distant objects respectively. On high-spec PCs, you can use the high-detail assets farther from the camera; on the slow and memory-constrained PCs that your potential customers already own, they get the low-detail assets but can still enjoy the game.
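
    A minimal sketch of that high/low-detail idea (the names and thresholds are invented for illustration): pick the asset per object from its camera distance, scaled by a per-machine detail budget.

        struct Mesh { int triangleCount; };

        struct LodSet {
            Mesh high;  // close-up asset
            Mesh low;   // distant asset
        };

        // detailBudget > 1.0 on a high-spec PC pushes the high-detail assets
        // farther from the camera; < 1.0 on a low-spec PC pulls them in.
        const Mesh& selectLod(const LodSet& lods, float distance, float detailBudget) {
            const float kHighDetailRange = 50.0f;  // metres; illustrative value
            return (distance < kHighDetailRange * detailBudget) ? lods.high : lods.low;
        }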

  • Re:Of Course (Score:3, Informative)

    by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday June 01, 2009 @10:30AM (#28167833) Homepage Journal

    This attitude is actually the root cause of the problem. I've never heard this called "Page's Law," but in the industry it's known as "Code Bloat."

    No, it's called making a design decision. If the RAM is cheaper than doing it with less RAM, then you buy more RAM. If it isn't, you spend more time on design. The only bad part about it is when it leads to excessive power consumption. Which is, you know, all the time. But that's really the only thing strictly WRONG with spending more cycles, or more transistors.

  • by Shin-LaC ( 1333529 ) on Monday June 01, 2009 @10:30AM (#28167835)
    Mod parent up. And here [100fps.com] is a page that explains some common misconceptions.
  • Re:Of Course (Score:1, Informative)

    by Anonymous Coward on Monday June 01, 2009 @10:36AM (#28167939)
    Wow, maybe you should have listened to your senior programmers. Raw execution speed is not always the goal. Static allocation is deterministic, and in certain types of programming, slower and deterministic is better than faster and non-deterministic. You scoff at their O(N^2) algorithm without even considering all the ramifications. Let me guess: Java programmer?
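
    For what it's worth, a minimal sketch of the static-versus-dynamic point (the pool type and sizes are invented): the statically allocated pool has a known worst-case cost per operation, while heap allocation does not.

        #include <array>
        #include <cstddef>

        struct Message { char payload[64]; };

        // Static allocation: storage exists at program start; acquiring a slot
        // is a constant-time index bump, so worst-case latency is known.
        class StaticPool {
        public:
            Message* acquire() {
                return (next_ < pool_.size()) ? &pool_[next_++] : nullptr;  // O(1)
            }
            void reset() { next_ = 0; }  // release everything at a known point
        private:
            std::array<Message, 1024> pool_{};  // fixed footprint, chosen up front
            std::size_t next_ = 0;
        };

        // Dynamic allocation: flexible, but the allocator may search free lists
        // or ask the OS for pages -- fast on average, unpredictable worst case.
        Message* dynamicAcquire() { return new Message{}; }
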
  • by kojot350 ( 1330899 ) on Monday June 01, 2009 @10:40AM (#28167985)
    KDE4 is ~30% faster than KDE3, mainly because of the Qt4 vs. Qt3 improvements and a broad redesign of KDE itself...
  • by Winter Lightning ( 88187 ) on Monday June 01, 2009 @11:20AM (#28168531)

    "Page's law" is simply a restatement of May's law:

    "Software efficiency halves every 18 months, compensating Moore's Law".

    David May is a British computer scientist who was the lead architect for the Transputer. See:
    http://en.wikipedia.org/wiki/David_May_(computer_scientist) [wikipedia.org]
    and page 20 of:
    http://www.cs.bris.ac.uk/~dave/iee.pdf [bris.ac.uk]

  • quote (Score:3, Informative)

    by Jeremy Erwin ( 2054 ) on Monday June 01, 2009 @11:42AM (#28168815) Journal

    A supercomputer is a device for turning compute-bound problems into I/O-bound problems.

    -- Ken Batcher

  • Re:Of Course (Score:3, Informative)

    by mzs ( 595629 ) on Monday June 01, 2009 @12:25PM (#28169469)

    I think there are a lot of copy constructors called when using the STL that many people do not expect. The algorithms themselves tend to be well implemented. None of that really has much to do with OO, though; templates over plain structs with no member functions, plus lots of casting of void*, could have been used instead.
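
    A copy-counting sketch of where those surprises come from (the Widget type is invented; it declares only a copy constructor, so nothing gets moved):

        #include <iostream>
        #include <vector>

        struct Widget {
            Widget() = default;
            Widget(const Widget&) { ++copies; }  // count copy-constructions
            static int copies;
        };
        int Widget::copies = 0;

        void byValue(std::vector<Widget>) {}        // copies every element
        void byRef(const std::vector<Widget>&) {}   // copies nothing

        int main() {
            std::vector<Widget> w(100);

            Widget::copies = 0;
            byValue(w);
            std::cout << "by value: " << Widget::copies << " copies\n";  // 100

            Widget::copies = 0;
            byRef(w);
            std::cout << "by ref:   " << Widget::copies << " copies\n";  // 0

            // Growth without reserve() recopies elements on each reallocation.
            Widget::copies = 0;
            std::vector<Widget> grow;
            for (int i = 0; i < 100; ++i) grow.push_back(Widget());
            std::cout << "growth:   " << Widget::copies << " copies\n";  // > 100
        }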

  • Re:Of Course (Score:3, Informative)

    by mzs ( 595629 ) on Monday June 01, 2009 @03:39PM (#28172249)

    On almost any modern system, a 32-bit aligned access is atomic as far as the CPU is concerned. If they were just reading the global array, there was no need to use a lock; in fact it is a placebo lock that adds an unnecessary burden on the memory or reservation system. In realtime work it is common for global data to be modified only at certain times, when readers are not running.
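
    In modern C++ terms (which post-date this 2009 comment), the pattern might look like this sketch, assuming a naturally aligned 32-bit value:

        #include <atomic>
        #include <cstdint>

        // Aligned 32-bit value; a relaxed load compiles to a plain load on
        // mainstream CPUs, so readers pay nothing for "synchronization".
        std::atomic<std::uint32_t> g_sensorValue{0};

        std::uint32_t readerPath() {
            // No lock needed just to get an untorn value.
            return g_sensorValue.load(std::memory_order_relaxed);
        }

        void writerPath(std::uint32_t v) {
            // In the scheme described, this runs only at points where the
            // readers are known not to be executing.
            g_sensorValue.store(v, std::memory_order_relaxed);
        }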

  • Re:Of Course (Score:1, Informative)

    by LavosPhoenix ( 743501 ) on Monday June 01, 2009 @04:15PM (#28172815)
    yeah, because passing an implicit "this" pointer in C++ and passing a typed object pointer to a function are so vastly different in storage size. Not. And seriously, if you are loading tons of useless data into an object, you've completely missed the point of object-oriented programming in the first place. So don't blame your own failure to apply logic and reasoning in OOP as a general case that applies to all software. C# and Java use garbage collection, which involves non-deterministic reclamation of objects, and that will affect performance. Sure, GC may allow for easier lock-free structures, but it simply pushes the delay onto the GC, plus longer-term storage of the deleted objects, which then have to be reclaimed by a properly implemented lock-free GC. It's far more important to make sure your data fits in cache lines, preventing cache thrashing, where your data has to be reloaded into the CPU's cache from higher-level caches like L3 or RAM. Just take a look at Intel's Threading Building Blocks: none of its concurrent data structures are lock-free, but they do make sure to allocate objects to fit cache lines.
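
    A brief sketch of that cache-line point (64-byte lines are assumed, as on typical x86 parts; adjust for the actual target):

        #include <atomic>
        #include <cstddef>

        constexpr std::size_t kCacheLine = 64;

        // Bad: adjacent counters share a line, so writes from different threads
        // invalidate each other's caches even though the data is logically
        // private (false sharing).
        struct CountersPacked {
            std::atomic<long> a{0};
            std::atomic<long> b{0};
        };

        // Better: each counter owns a full line.
        struct alignas(kCacheLine) PaddedCounter {
            std::atomic<long> value{0};  // alignas pads the struct to a full line
        };

        PaddedCounter perThread[8];  // one line per slot; no false sharing
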
  • Re:Of Course (Score:3, Informative)

    by NewbieProgrammerMan ( 558327 ) on Monday June 01, 2009 @04:36PM (#28173153)

    This wasn't a realtime system, and they weren't just doing atomic updates to a single array. There were transactions that involved changes to multiple data structures, and there was no synchronization of any kind. There were several long-standing bugs caused by reader threads getting their data while another thread was halfway through an update.

    Sorry for not including all this detail in my original post.
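
    For illustration, a minimal sketch of the fix for that bug class (names invented; a single mutex is the simplest option, not necessarily the fastest): the transaction touching both structures has to become atomic as a unit.

        #include <map>
        #include <mutex>
        #include <string>
        #include <vector>

        class Directory {
        public:
            void addEntry(const std::string& name, int id) {
                std::lock_guard<std::mutex> lock(mu_);  // both updates or neither
                byName_[name] = id;                     // structure #1
                ids_.push_back(id);                     // structure #2
            }

            bool lookup(const std::string& name, int& idOut) const {
                std::lock_guard<std::mutex> lock(mu_);  // readers must lock too
                auto it = byName_.find(name);
                if (it == byName_.end()) return false;
                idOut = it->second;
                return true;
            }

        private:
            mutable std::mutex mu_;
            std::map<std::string, int> byName_;
            std::vector<int> ids_;
        };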

  • Re:Of Course (Score:3, Informative)

    by drsmithy ( 35869 ) <drsmithy@nOSPAm.gmail.com> on Monday June 01, 2009 @05:03PM (#28173615)

    Then MS and Vista must have knocked your sox off!

    Funny you should say that, because it's an example that drives the point home so well. For all the flak Vista got about performance, on the day it was released you could buy a PC - for less than the price of the cheapest iMac (a machine that would barely run OS X at all) - that would run it *well* (dual-core, 2GB RAM, 256MB GPU).

    In contrast, it took Apple *years* after OS X 10.0 - due at least as much to dramatically faster hardware as to improved software - before it could even be described as "not slow". You quite literally could not buy hardware that OS X ran well on for _years_ after its release. It wasn't until the G5 Macs (and a few $129 OS updates) that anyone could even begin to call it "fast" with a straight face.

    However, the increases from 10.2 to 10.3, and 10.3 to 10.4 were impressive in their own right because 10.2 was where the new OS X reached speed parity with OS 9, IMO.

    Well, I have to disagree. OS 9 was quite quick even on paltry ~200MHz G3s. By my standards, OS X doesn't run "fast" on anything less than a G5 (no, not even dual G4s). Even then, my Mum's 1.9GHz/2.5GB G5 iMac can be sluggish for no apparent reason.

    I'm hoping that 10.6 will be the speed boost I was expecting since they are claiming to have focused on 'under the hood' improvements (whatever that really means).

    It means they're modifying it to make better use of the 4+ cores/CPU future, just like Microsoft did with Vista and Windows 7. As a result, it will probably be slower on "low-end" single-core machines, and it won't improve much on "mid-range" dual-core machines.

  • Re:Of Course (Score:3, Informative)

    by ion.simon.c ( 1183967 ) on Monday June 01, 2009 @06:41PM (#28175055)

    Have you read his follow-on comment?

    http://slashdot.org/comments.pl?sid=1252121&cid=28173153 [slashdot.org]

    Here's the money quote:

    There were several long-standing bugs caused by reader threads getting their data while another thread was halfway through an update.

    That behaviour can't be intentional. :)

  • Re:Of Course (Score:1, Informative)

    by Anonymous Coward on Monday June 01, 2009 @08:54PM (#28176253)
    You gave an example of a complex system that can be done well without OO. GP asked for a complex system that can't be done well with OO.

"Everything should be made as simple as possible, but not simpler." -- Albert Einstein

Working...