
Can "Page's Law" Be Broken? 255

theodp writes "Speaking at the Google I/O Developer Conference, Sergey Brin described Google's efforts to defeat "Page's Law," the tendency of software to get twice as slow every 18 months. 'Fortunately, the hardware folks offset that,' Brin joked. 'We would like to break Page's Law and have our software become increasingly fast on the same hardware.' Page, of course, refers to Google co-founder Larry Page, last seen delivering a nice from-the-heart commencement address at Michigan that's worth a watch (or read)."

Can "Page's Law" Be Broken?

  • The 'easy' way (Score:3, Interesting)

    by Dwedit ( 232252 ) on Monday June 01, 2009 @08:55AM (#28166677) Homepage

    Make developers target a slow, memory-constrained platform. Then you get stellar performance when it runs on the big machines.

  • Re:Of Course (Score:2, Interesting)

    by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday June 01, 2009 @09:32AM (#28167051) Homepage Journal

    The law isn't linear, it's more sawtooth-style.

    All data looks notchy if you sample it at high resolution and don't apply smoothing.

    One catch in performance is that it sure is faster to use RAM for data, but there is also a lot of useless data floating around in RAM, which is a waste of resources.

    RAM is cheap these days. Storage devices are still slow, and the most interesting ones have a finite (though still large) number of writes.

    This often explains why old languages like C, COBOL, etc. are able to do the same thing as a program written in C++, Java, or C# at a fraction of the resource cost and at much greater speed. The disadvantage is that the old languages require more skill from the programmer.

    In fact you will often see today that a job that could be handled by a 555 and a couple of caps has been replaced with an internally clocked microcontroller, simply because it's a known platform and development is easy. When all you have is a vertical mill, everything looks like a machining project. But you can make a water block with a drill press...

  • Re:Of Course (Score:4, Interesting)

    by Carewolf ( 581105 ) on Monday June 01, 2009 @09:36AM (#28167109) Homepage

    Exactly. Firefox 3 vs. 2 is an excellent example, especially because Firefox between major releases has been known for the opposite: getting slower with each minor release.

    There are also examples of the opposite. KDE 3.x got faster and faster over the entire generation, while KDE 4.0 was much slower again; but 4.1, 4.2, and especially the upcoming 4.3 are many times faster than the 4.0 release.

    So I don't think Google's ideas are unique. The issue is well known and fought against in many different ways, especially in open source.

  • by cylcyl ( 144755 ) on Monday June 01, 2009 @09:39AM (#28167141)

    When companies get into a feature race, they forget that it quickly becomes a game of diminishing returns, as the features you add are less and less likely to interest your client base.

    However, if you improve the performance of your core functions (through UI or speed), your entire customer base gets the improvement and has a real reason to upgrade.

  • Re:Of Course (Score:1, Interesting)

    by Anonymous Coward on Monday June 01, 2009 @09:44AM (#28167221)

    One word: embedded. With the advent of low-power general-purpose computing, ARM netbooks operating again in the hundreds-of-MHz range, and battery life being prioritized above all else, Page's Law will get (and is getting) a thorough workout.

  • Re:Nope (Score:1, Interesting)

    by Anonymous Coward on Monday June 01, 2009 @09:52AM (#28167325)

    Make them work on a netbook with an 8.9" 800x600 display, 512MB of RAM (much less available with the OS and other applications running), and 4GB of Flash storage (much less available with the OS and other applications installed).

    The reason? There is such hardware currently in use out there.

  • Larger user base (Score:3, Interesting)

    by DrWho520 ( 655973 ) on Monday June 01, 2009 @09:54AM (#28167341) Journal
    Making later versions of software run more efficiently on a baseline piece of hardware may also make the software run more efficiently on lesser pieces of hardware. Does the increase in possible install base (since your software now runs on hardware slower than your baseline) justify a concerted effort to write software that runs more efficiently?
  • by toby ( 759 ) * on Monday June 01, 2009 @09:57AM (#28167375) Homepage Journal

    10.0, 10.1, 10.2, 10.3, and maybe 10.4 were a series of releases where performance improved with each update. I don't run 10.5, so I can't comment on whether the trend continues.

  • Re:Of Course (Score:3, Interesting)

    by Shin-LaC ( 1333529 ) on Monday June 01, 2009 @10:22AM (#28167713)
    That's not true. I ran 10.3 on a 233 MHz iMac G3 (a machine designed for Mac OS 9), and used that as my main machine for a couple of years. It ran fine.
  • Re:Of Course (Score:2, Interesting)

    by Anonymous Coward on Monday June 01, 2009 @10:28AM (#28167807)
    What you fail to grasp is what your senior programmers understand: heap allocation is non-deterministic. Any code that you write that mallocs after initialization is done wouldn't even pass a peer review where I work (doing safety-critical, fault-tolerant, real-time embedded). Maybe you should learn a little more before running off at the mouth.
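
    For illustration, one common way to follow that rule is to carve everything out of a statically sized pool -- a minimal sketch in plain C, with the sensor names and sizes invented for the example: the only setup happens during initialization, so the worst-case footprint is known before the code ever runs.

        #include <stddef.h>
        #include <stdint.h>

        /* Hypothetical example: all storage is reserved statically, so there is
         * no heap to fragment or to fail non-deterministically at runtime. */
        #define MAX_SENSORS 16

        typedef struct {
            uint32_t id;
            int32_t  last_reading;
        } sensor_t;

        static sensor_t sensor_pool[MAX_SENSORS];  /* fixed-size pool, no malloc */
        static size_t   sensor_count;

        /* "Allocation" is only legal during initialization. */
        int sensors_init(size_t count)
        {
            if (count > MAX_SENSORS)
                return -1;              /* fail loudly at startup, not mid-flight */
            sensor_count = count;
            for (size_t i = 0; i < count; ++i) {
                sensor_pool[i].id = (uint32_t)i;
                sensor_pool[i].last_reading = 0;
            }
            return 0;
        }

        /* After init, code only ever indexes into the pre-allocated pool. */
        void sensor_store(size_t idx, int32_t reading)
        {
            if (idx < sensor_count)
                sensor_pool[idx].last_reading = reading;
        }
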
  • by jollyreaper ( 513215 ) on Monday June 01, 2009 @10:37AM (#28167949)

    Business managers don't want to pay for great when good will do. Have you gotten the beta to compile yet? Good, we're shipping. I don't care if it was a tech demo, I don't care if you said your plan was to figure out how to do it first, then go back through and do it right. We have a deadline, get your ass in gear.

    Then the next release cycle comes around and they want more features crammed in -- or, fuck it, we'll just outsource it to India. We don't know how to write a decent design spec, so even if the Indians are good programmers, the language barrier and cluelessness will lead to disaster.

    And here's the real kicker -- why bother to write better when people buy new computers every three years? We'll just throw hardware at the problem. This is the factor that's likely to change the game.

    If you look at consoles, games typically get better the longer a console is on the market, because programmers become more familiar with the platform and what it can do. You're not throwing more hardware at the problem, not until the new console ships. That could be years and years away just for the shipping, and even more years until there's decent market penetration. No, you have to do something wonderful and new, and it has to be done on the current hardware. You're forced to get creative.

    With the push towards netbooks and relatively low-power systems (low-power by today's standards!), programmers won't be able to count on power outstripping bloat. They'll have to concentrate on efficiency or else they won't have a product.

    There's also the question of how much the effort is worth. $5000 in damage to my current car totals it, even if it could be repaired; I can go out and buy a new car. In Cuba, there's no such thing as a new car; there are only so many on the market. (Are they able to import any these days?) Anyway, that explains why the 1950s disposable rustbuckets are still up and running. When no new cars are available for love or money, the effort of keeping an old one running pays for itself.

    Excellence has to be a priority coming down from the top in a company. If cut-rate expediency is the order of the day, crap will be the result.

  • Re:Of Course (Score:4, Interesting)

    by hedwards ( 940851 ) on Monday June 01, 2009 @10:53AM (#28168159)
    That's definitely a large part of the problem, but probably the bigger problem is just the operating assumption that we can add more features because tomorrow's hardware will handle it. In most cases I would rather have the ability to add a plug-in or extension for things which are less commonly done with an application than have everything tossed in by default.

    Why this is news is beyond me; I seem to remember people complaining about MS doing that sort of thing years ago. Just because the hardware can handle it doesn't mean that it should: tasks should be taking less time as new advancements arrive, and adding complexity is only reasonable when it does a better job.
  • Re:Of Course (Score:3, Interesting)

    by Trillan ( 597339 ) on Monday June 01, 2009 @10:56AM (#28168197) Homepage Journal

    I found (and measured) 10.3 faster than 10.2 on my then-computer, and 10.4 faster than 10.3 (once indexing was complete). Numbers long since lost, though, sorry.

  • Re:Of Course (Score:4, Interesting)

    by Jeremy Erwin ( 2054 ) on Monday June 01, 2009 @11:08AM (#28168393) Journal

    Let me give you an example. 10.2 introduced Quartz Extreme, which offloaded certain 2D graphics operations from the CPU onto the graphics card. If you had a graphics card capable of supporting non-power-of-two textures, it was snappy.

    OS X 10.3 introduced Exposé, a method of manipulating windows that leveraged Quartz Extreme. Flashy, but it also made skilled users more productive. It was dog slow on any Mac with a non-QE graphics card, but it imposed a fairly minimal load on any modern Mac. 10.3 feels faster than 10.2, even though there's more going on in the background.

    As for memory, memory's cheap. I recall someone defining supercomputing as "buying processing power with increased memory usage..."

  • Re:Of Course (Score:5, Interesting)

    by grumbel ( 592662 ) <grumbel+slashdot@gmail.com> on Monday June 01, 2009 @11:35AM (#28168741) Homepage

    Well, that depends. OOP alone is certainly not to blame for all the slowdown, but abstraction in general is guilty of a lot of things. Today's software is just way too removed from the actual hardware to allow certain kinds of optimizations. Random example: in a 2D game on older hardware (say a GBA or similar) you could scroll by manipulating two bytes that represent the scroll offset; everything else was done in hardware. How do you scroll in a 2D game today? Full-screen refreshes, since you don't have any access to the hardware that would allow faster ways to scroll. So in the worst case you have to manipulate not 2 bytes, but around six million of them. That's quite a few orders of magnitude of difference, and you can't really optimize it away today.
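
    To make the gap concrete, here is a rough sketch in C. The two scroll-register addresses follow the GBA's documented background registers; the software path and everything else are invented for illustration:

        #include <stdint.h>

        /* GBA-style background scroll registers: writing a 16-bit offset here
         * makes the display hardware shift the whole background layer. */
        #define REG_BG0HOFS (*(volatile uint16_t *)0x04000010)
        #define REG_BG0VOFS (*(volatile uint16_t *)0x04000012)

        /* Scrolling on that kind of hardware: touch a couple of bytes per frame. */
        static void scroll_hardware(uint16_t x, uint16_t y)
        {
            REG_BG0HOFS = x;
            REG_BG0VOFS = y;
        }

        /* Scrolling with only an abstract framebuffer: redraw every pixel.
         * At 1920x1080 and 3 bytes per pixel that is roughly 6 MB per frame. */
        static void scroll_software(uint8_t *fb, int w, int h,
                                    const uint8_t *world, int world_w,
                                    int x, int y)
        {
            for (int row = 0; row < h; ++row)
                for (int col = 0; col < w; ++col) {
                    const uint8_t *src = &world[((y + row) * world_w + (x + col)) * 3];
                    uint8_t *dst = &fb[(row * w + col) * 3];
                    dst[0] = src[0]; dst[1] = src[1]; dst[2] = src[2];
                }
        }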

    Now for real games, of course, you might have a GPU that can handle that kind of throughput, and since modern games are 3D you don't really have a choice about full-screen refreshes to begin with. But as soon as you look at web games you can see all the problems: games in Flash or JavaScript most of the time run completely terribly, worse than games you might have played a decade or two ago, because those games don't even have GPU access but instead pump their data through layers upon layers of abstraction before they finally hit the graphics card.

    In the end I think the core problem is simply that today's software is written far too often for an abstract black box instead of for actual hardware. Web development especially is just way too removed from the actual machine to even have a chance of running quickly. To make things really fast you would have to optimize all the layers of abstraction that the code has to run through, but most often you just don't have control over them, as development is far more spread out these days. It's no longer your code and the hardware; it's your code, dozens or even hundreds of libraries, and then, maybe far far away, some piece of hardware again.

  • by mzs ( 595629 ) on Monday June 01, 2009 @11:50AM (#28168935)

    I just realized that in four days I will have been working as a dev for ten years. I've worked at a few places, and I think the reason for this is pretty straightforward: poor benchmarks, used poorly.

    We have all heard the mantra that optimizing early is evil, but there are two issues to contend with. You get to crunch time towards the end, and then there is no time to address performance issues in every project. By that time so much code has been written that you cannot address the performance issues in the most effective way: thinking about which algorithm to use for the dataset that ends up being the common case. So instead some profiling work gets done and the code goes out the door.

    So for success you need to have some performance measurements even early on. The problem is that in that case you end up with benchmarks that don't measure the right thing (which is what you discover near the end), or with worthless benchmarks that suffer from not being reproducible, taking too long to run, or not giving the dev any idea of where the performance problem really is.

    So what ends up happening is that only after the code base has been around for a while, and you get to rev n + 1, is there any real handle on any of this performance stuff. But often what happens then is that project management values feature additions, so as long as no single benchmark decreases by more than 2-5% and overall performance does not decrease by more than 15% compared to the pre-feature build, it gets the okay. Then a milestone arrives, there is again no time for systematic performance work, and it ships as is.

    The right approach at that stage would be to not allow a new feature in unless the overall benchmark improves by 2%, and to benchmark your competitors as well, but sadly that just does not happen except in the very rare good groups.
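
    For what it's worth, a gate like the one described above is simple enough to automate. A hypothetical sketch in C -- the thresholds are the ones mentioned above, and the benchmark names and timings are made up:

        #include <stdio.h>

        /* Hypothetical per-benchmark timings, in milliseconds (lower is better). */
        typedef struct {
            const char *name;
            double baseline_ms;   /* pre-feature build */
            double candidate_ms;  /* build with the new feature */
        } bench_t;

        /* Gate: no single benchmark regresses more than 5%, and the summed
         * runtime does not regress more than 15% versus the baseline build. */
        static int gate_ok(const bench_t *b, int n)
        {
            double base_total = 0.0, cand_total = 0.0;
            for (int i = 0; i < n; ++i) {
                double slowdown = (b[i].candidate_ms - b[i].baseline_ms) / b[i].baseline_ms;
                if (slowdown > 0.05) {
                    printf("FAIL: %s regressed %.1f%%\n", b[i].name, slowdown * 100.0);
                    return 0;
                }
                base_total += b[i].baseline_ms;
                cand_total += b[i].candidate_ms;
            }
            double overall = (cand_total - base_total) / base_total;
            if (overall > 0.15) {
                printf("FAIL: overall regression %.1f%%\n", overall * 100.0);
                return 0;
            }
            return 1;
        }

        int main(void)
        {
            bench_t results[] = {
                { "startup",   120.0, 123.0 },
                { "render",     45.0,  46.0 },
                { "save_load",  80.0,  79.0 },
            };
            printf(gate_ok(results, 3) ? "gate: OK\n" : "gate: blocked\n");
            return 0;
        }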

  • Re:Of Course (Score:3, Interesting)

    by AmiMoJo ( 196126 ) on Monday June 01, 2009 @01:02PM (#28169945) Homepage Journal

    But show me any complex system that can't be done well with OO.

    Emacs - proof you can write an entire operating system without a single line of OO code :)

  • Re:Of Course (Score:4, Interesting)

    by NewbieProgrammerMan ( 558327 ) on Monday June 01, 2009 @04:35PM (#28173135)

    I would like to see your fancy C++-with-templates stuff compile with some of the proprietary toolkits I have seen for small ARM and gate-array systems. Writing code that uses a number of fixed-size simple data structures, all written in C, makes it very easy to port to embedded systems. The moment you use something that seems as innocuous as C++ exceptions...

    This was not (and never was going to be) an application for an embedded or real-time system. I'm not sure what I said that left everyone with the impression that I'm bashing real-time or embedded development practices. I know (more now than I did before) that there are reasons for doing such things in those environments, but none of those applied in this situation.

    My point wasn't that they should switch to C++ or something else. Personally, I don't like fancy C++ template stuff; I'd rather just stick with ANSI C. What I was trying (but apparently failing) to do was make the point that needless memory bloat isn't some curse that only applies to OO development, as was suggested in the post I initially replied to.
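
    To make the quoted point about fixed-size C structures concrete, here is a minimal ring buffer sketch; the size and names are invented. It needs nothing from the runtime beyond a static array, which is why this kind of code ports to just about any toolchain:

        #include <stdint.h>

        /* Fixed-size ring buffer: no heap, no exceptions, no runtime support. */
        #define RB_SIZE 64u  /* must be a power of two */

        typedef struct {
            uint8_t  data[RB_SIZE];
            uint32_t head;   /* free-running write counter */
            uint32_t tail;   /* free-running read counter  */
        } ringbuf_t;

        static int rb_put(ringbuf_t *rb, uint8_t byte)
        {
            if (rb->head - rb->tail == RB_SIZE)
                return -1;                          /* full */
            rb->data[rb->head++ & (RB_SIZE - 1)] = byte;
            return 0;
        }

        static int rb_get(ringbuf_t *rb, uint8_t *out)
        {
            if (rb->head == rb->tail)
                return -1;                          /* empty */
            *out = rb->data[rb->tail++ & (RB_SIZE - 1)];
            return 0;
        }

        /* Usage: a zero-initialized buffer ("ringbuf_t rb = {0};") is ready to use. */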

  • Re:Of Course (Score:2, Interesting)

    by simplerThanPossible ( 1056682 ) on Monday June 01, 2009 @04:37PM (#28173171)

    Yes, the 2nd edition of the Dragon book rewrote the sample parser in OO. It was much simpler, clearer, and cleaner in the procedural version in the 1st edition.

  • Re:Of Course (Score:3, Interesting)

    by crmarvin42 ( 652893 ) on Monday June 01, 2009 @05:57PM (#28174481)
    I don't know why you brought up price. "Page's Law" makes no mention of the price of hardware or software, nor did I, so your bringing up the tangential "Macs cost more" meme seems superfluous.

    IMO, the speed of new software is best measured on older (6-12 month old) hardware, since the older version was not written with the newer hardware in mind, while the newer software was written with both the old and new hardware in mind (or at least should have been).

    I grant that early versions of OS X were not ready for prime time. That's why OS 9 was still shipping alongside OS X. But I have to disagree with your claim that the improvements in OS X were related primarily to increased hardware performance. My 800MHz PBG4 shipped with OS 9 and X (10.1, IIRC). It got faster with every OS update it was capable of using (Leopard requires 867MHz or better). The hardware didn't change at all, only the OS.

    OTOH, I tried installing Vista on a machine that was already running XP, and it was slower for just about everything. Once again, only the OS changed, not the hardware. That's why I brought up Vista: it's an example of newer software running slower on the same hardware, thus supporting this "Page's Law".
  • Re:Of Course (Score:4, Interesting)

    by billcopc ( 196330 ) <vrillco@yahoo.com> on Monday June 01, 2009 @07:33PM (#28175607) Homepage

    That's still about 100 times more memory than is required to edit a text file. How do you think people got by in the 286 days, when 640 KB was standard? Does vim allocate ridiculously oversized buffers just to show a blank screen?

    I don't mean to pick on vim specifically; all software is guilty of this pointless bloat. Instead of having tiny apps that load and run at lightning speed, we continue to build these sloppy behemoths that can't accomplish the simplest things without triggering a dozen page faults and diddling some redundant spinlocks. It's fine to add media to make things aesthetically pleasing, but code bloat benefits no one.

    With today's hardware and its ludicrous speed, we should be adding intentional delays to our code, because it should be running so damned fast that usability would suffer. The user should be the bottleneck, not the software. We have machines that are literally a thousand times faster than that heavy old 286, yet the load times for today's software are longer than booting WordPerfect 5.1 from a 360K floppy.

  • Not the same laws (Score:2, Interesting)

    by lie2me ( 1504525 ) on Tuesday June 02, 2009 @01:42PM (#28184971)

    "Wirth's law" is more quality related, as in "crappy SW can benefit from faster HW".

    "Gates's Law" is user-side observation, "speed of commercial software generally slows by fifty percent every 18 months thereby negating all the benefits of Moore's Law".

    "Page's Law" is reflection on SW development of a single company: "software gets twice as slow every 18 months... Google plans to reverse this trend and optimize its code."

    I wonder if anyone else noticed these differences.
