Can "Page's Law" Be Broken? 255

theodp writes "Speaking at the Google I/O Developer Conference, Sergey Brin described Google's efforts to defeat "Page's Law," the tendency of software to get twice as slow every 18 months. 'Fortunately, the hardware folks offset that,' Brin joked. 'We would like to break Page's Law and have our software become increasingly fast on the same hardware.' Page, of course, refers to Google co-founder Larry Page, last seen delivering a nice from-the-heart commencement address at Michigan that's worth a watch (or read)."
  • Of Course (Score:5, Insightful)

    by eldavojohn ( 898314 ) * <eldavojohn@noSpAM.gmail.com> on Monday June 01, 2009 @08:54AM (#28166665) Journal

    Can "Page's Law" Be Broken?

    I think it gets broken all the time. At least in my world. Look at Firefox 3 vs 2. Seems to be a marked improvement in speed to me.

    And as far as web application containers go, most of them seem to get faster and better at serving up pages. No, they may not be "twice as fast on twice as fast hardware" but I don't think they are twice as slow every 18 months.

    I'm certain it happens all the time; you just don't notice that ancient products like vi, Emacs, Lisp interpreters, etc. stay pretty damn nimble as hardware takes off into the next century. People just can't perceive an increase in speed when the program, like the user, is mostly waiting on I/O.

    • Comment removed (Score:4, Informative)

      by account_deleted ( 4530225 ) on Monday June 01, 2009 @08:59AM (#28166709)
      Comment removed based on user account deletion
      • by drsmithy ( 35869 ) <drsmithy&gmail,com> on Monday June 01, 2009 @10:55AM (#28168179)

        Agreed. Apple always manages to break it too with OS X. From 10.1 to 10.4 the OS notably improved in speed with each upgrade, even on older PPC G3 and G4 machines.

        Of course, when you're starting from a point of such incredibly bad performance, there's not really anywhere to go but up.

        It would have been more impressive if they'd somehow managed to make it slower with each release.

    • I can't speak to emacs, but these days vi is generally vim, which is much, much heavier than classic vi. It also does vastly more.

      • by Anonymice ( 1400397 ) on Monday June 01, 2009 @10:02AM (#28167441)

        I can't speak to emacs...

        RTFM.
        C-x M-c M-speak

    • Re:Of Course (Score:5, Insightful)

      by Z00L00K ( 682162 ) on Monday June 01, 2009 @09:24AM (#28166975) Homepage Journal

      The law isn't linear, it's more sawtooth-style.

      Features are added all the time, which bogs down the software; then there is an effort to speed it up, and then features are added again.

      One catch in performance is that it sure is faster to use RAM for data, but there is also a lot of useless data floating around in RAM, which is a waste of resources.

      And this is often the curse of object-oriented programming. Objects carry more data than necessary for many of their uses; only in a few cases is all of the object's data actually used. A lot of object-oriented programming is somewhat like using 18-wheelers for grocery shopping.

      This often explains why old languages like C, Cobol etc. are able to do the same thing as a program written in C++, Java or C# at a fraction of the resource cost and at much greater speed. The disadvantage is that the old languages require more skill from the programmer to avoid the classical problems of deadlocks and race conditions, as well as having to implement functionality for linked lists etc.
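
      A minimal C++ sketch of the per-object overhead being described, assuming a typical 64-bit ABI; the types and the rarely-used fields are hypothetical, chosen only to contrast a bare record with a heavier object:

      // Contrast a bare data record with an object that drags along
      // bookkeeping most call sites never touch. Sizes below are
      // implementation-dependent; the comments assume a typical 64-bit ABI.
      #include <iostream>
      #include <string>
      #include <vector>

      struct PointPOD {            // just the payload
          double x, y;
      };

      class PointObject {          // vptr plus rarely-used members
      public:
          virtual ~PointObject() = default;
          virtual double X() const { return x_; }
          virtual double Y() const { return y_; }
      private:
          double x_ = 0, y_ = 0;
          std::string label_;           // rarely used
          std::vector<double> history_; // rarely used
      };

      int main() {
          std::cout << "sizeof(PointPOD)    = " << sizeof(PointPOD) << "\n";
          std::cout << "sizeof(PointObject) = " << sizeof(PointObject) << "\n";
          // Typically 16 bytes vs. 80+ bytes per instance, before any heap
          // allocations the string or vector might make.
      }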

      • Re: (Score:2, Interesting)

        by drinkypoo ( 153816 )

        The law isn't linear, it's more sawtooth-style.

        All data looks notchy if you sample it at high resolution and don't apply smoothing.

        One catch in performance is that it sure is faster to use RAM for data, but there is also a lot of useless data floating around in RAM, which is a waste of resources.

        RAM is cheap these days. Storage devices are still slow, and the most interesting ones have a finite (though still large) number of writes.

        This often explains why old languages like C, Cobol etc. are able to do the same thing as a program written in C++, Java or C# at the fraction of the resource cost and at much greater speed. The disadvantage is that the old languages require more skills from the programmer

        In fact you will often see today that a job that could be handled by a 555 and a couple of caps has been replaced with an internally-clocked microcontroller simply because it's a known platform and development is easy. When all you have is a vertical mill, everything looks like a machining problem.

        • by tepples ( 727027 ) <tepplesNO@SPAMgmail.com> on Monday June 01, 2009 @09:50AM (#28167295) Homepage Journal

          RAM is cheap these days.

          Unless you would need to add RAM to millions of deployed devices. For example, the Nintendo DS has 4 MB of RAM and less than 1 MB of VRAM, and it broke 100 million in the first quarter of 2009. Only one DS game [wikipedia.org] came with a RAM expansion card.

        • In fact you will often see today that a job that could be handled by a 555 and a couple of caps has been replaced with an internally-clocked microcontroller simply because it's a known platform and development is easy.

          One microcontroller beats one special-function chip plus caps on part count, board space, power consumption, and probably cost. And it can take care of other odd jobs around the circuit as well.

        • Re: (Score:3, Insightful)

          by SL Baur ( 19540 )

          One catch in performance is that it sure is faster to use RAM for data, but there is also a lot of useless data floating around in RAM, which is a waste of resources.

          RAM is cheap these days.

          I thought we were talking about performance? Adding more memory for a slow app does not necessarily make it faster when parallel architectures are involved. Maybe that ought to be a law of its own. See http://lwn.net/Articles/250967/ [lwn.net]

          All memory is not created equal. As CPU speeds have risen, it is becoming increasingly expensive to access main memory - you need to keep things in the CPU cache(s). As NUMA architectures increase in importance, that will become even more true. Today's supercomputer
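
          A small C++ sketch of that effect: identical summation work, but one loop order walks memory sequentially while the other strides across rows and misses the cache. The matrix size is arbitrary and any timings are purely illustrative:

          // Same arithmetic, different memory access order.
          #include <chrono>
          #include <cstddef>
          #include <iostream>
          #include <vector>

          int main() {
              const std::size_t n = 4096;
              std::vector<double> m(n * n, 1.0);

              auto sum_rows = [&] {                 // contiguous, cache-friendly
                  double s = 0;
                  for (std::size_t i = 0; i < n; ++i)
                      for (std::size_t j = 0; j < n; ++j)
                          s += m[i * n + j];
                  return s;
              };
              auto sum_cols = [&] {                 // strided, cache-hostile
                  double s = 0;
                  for (std::size_t j = 0; j < n; ++j)
                      for (std::size_t i = 0; i < n; ++i)
                          s += m[i * n + j];
                  return s;
              };
              auto time = [](auto f) {
                  auto t0 = std::chrono::steady_clock::now();
                  volatile double r = f();
                  (void)r;
                  return std::chrono::duration<double>(
                             std::chrono::steady_clock::now() - t0).count();
              };

              std::cout << "row-major:    " << time(sum_rows) << " s\n";
              std::cout << "column-major: " << time(sum_cols) << " s\n";
          }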

      • And this is often the curse of object-oriented programming. Objects carries more data than necessary for many of the uses of the object. Only a few cases exists where all the object data is used.

        That sounds like bad software design that isn't specific to OO programming. People are perfectly capable of wasting memory space and CPU cycles in any programming style.

        For example, I worked with "senior" (~15 years on the job) C programmers who thought it was a good idea to use fixed-size global static arrays for everything. They also couldn't grasp why their O(N^2) algorithm--which was SO fast on a small test data set--ran so slowly when used on real-world data with thousands of items.
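
        A short C++ sketch of that scaling trap; the duplicate-finding task is a hypothetical stand-in, chosen only to contrast the two approaches:

        #include <cstddef>
        #include <unordered_set>
        #include <vector>

        // O(N^2): looks fine on a 100-element test set, hopeless on
        // real-world data with thousands of items.
        bool has_duplicates_quadratic(const std::vector<int>& v) {
            for (std::size_t i = 0; i < v.size(); ++i)
                for (std::size_t j = i + 1; j < v.size(); ++j)
                    if (v[i] == v[j]) return true;
            return false;
        }

        // O(N) expected: one pass with a hash set.
        bool has_duplicates_hashed(const std::vector<int>& v) {
            std::unordered_set<int> seen;
            for (int x : v)
                if (!seen.insert(x).second) return true;
            return false;
        }

        int main() {
            std::vector<int> v{3, 1, 4, 1, 5};
            // Both agree on toy inputs; only the hashed version scales.
            return has_duplicates_quadratic(v) == has_duplicates_hashed(v) ? 0 : 1;
        }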

        • Re: (Score:2, Interesting)

          by Anonymous Coward
          What you fail to grasp is what your senior programmers understand: heap allocation is non-deterministic. Any code that you write that mallocs after initialization is done wouldn't even pass a peer review where I work (doing safety-critical, fault-tolerant, real-time embedded). Maybe you should learn a little more before running off at the mouth.
        • Re:Of Course (Score:4, Interesting)

          by hedwards ( 940851 ) on Monday June 01, 2009 @10:53AM (#28168159)
          That's definitely a large part of the problem, but probably the bigger problem is the operating assumption that we can add more features just because tomorrow's hardware will handle it. In most cases I would rather have the ability to add a plug-in or extension for things which are less commonly done with an application than have everything tossed in by default.

          Why this is news is beyond me; I seem to remember people complaining about MS doing that sort of thing years ago. Just because the hardware can handle it doesn't mean that it should. Tasks should take less time as hardware advances; adding complexity is only reasonable when it does a better job.
      • by kieran ( 20691 )

        And this is often the curse of object-oriented programming. Objects carries more data than necessary for many of the uses of the object. Only a few cases exists where all the object data is used. A lot of object-oriented programming is somewhat like using 18-wheelers for grocery shopping.

        Surely this is a problem begging a solution in the form of smarter compilers?

      • Re:Of Course (Score:5, Insightful)

        by AmiMoJo ( 196126 ) on Monday June 01, 2009 @10:53AM (#28168151) Homepage Journal

        OO was never designed for speed or efficiency, only ease of modelling business systems. It became a fashionable buzz-word and suddenly everyone wanted to use it for everything, so you end up in a situation where a lot of OO programs really only use OO for allocating memory for new objects.

        I'm not trying to be a troll here, I just find it odd that OO is considered the be-all and end-all of programming to the point where people write horribly inefficient code just because they want to use it. OO has its place, and it does what it was designed to do quite well, but people should not shy away from writing quality non-OO code. I think a lot of programmers come up knowing nothing but OO these days, which is a bit scary...

      • Re: (Score:3, Insightful)

        by Twinbee ( 767046 )

        Since C structs are effectively C++ objects, would you be against using structs for the same reasons too?

    • Re:Of Course (Score:4, Interesting)

      by Carewolf ( 581105 ) on Monday June 01, 2009 @09:36AM (#28167109) Homepage

      Exactly. Firefox 3 vs. 2 is an excellent example, especially because Firefox between major releases has been known for the opposite: getting slower with each minor release.

      There are also examples of the opposite. KDE 3.x got faster and faster over the entire generation, while KDE 4.0 was much slower again; but 4.1, 4.2 and especially the upcoming 4.3 are many times faster than the 4.0 release.

      So I don't think Google's ideas are unique. The issue is well known and fought against in many different ways, especially in open source.

    • Re: (Score:3, Insightful)

      by bill_kress ( 99356 )

      If it gets broken, it's not a law.

      What's the obsession these days with calling observations "laws"?

      Moore's law is simply an observation, as is this. Actually, software will be as slow as the hardware of the period allows, rarely faster. Can we call this Bill's Law? Note the relationship between Moore's and Page's is obvious when you consider Bill's law.

      Hmph, it's all just observations that happen to hold true for a short period. If that period happens to be 20 years, we short-lived humans extrapolate it to

  • The 'easy' way (Score:3, Interesting)

    by Dwedit ( 232252 ) on Monday June 01, 2009 @08:55AM (#28166677) Homepage

    Make developers target a slow and memory constrained platform. Then you get stellar performance when it runs on the big machines.

    • Nope (Score:5, Funny)

      by Colin Smith ( 2679 ) on Monday June 01, 2009 @09:02AM (#28166731)

      You just get an app which uses 100k of RAM and 32gb of filesystem buffer.

       

    • Re:The 'easy' way (Score:5, Insightful)

      by imgod2u ( 812837 ) on Monday June 01, 2009 @09:19AM (#28166909) Homepage

      The problem there is that you get to a point where the user just won't notice "stellar" speeds. Take a video game, for instance. Anything past ~70 fps is really unnoticeable by the average human eye. If you design the game to run at 70 fps for a slow and memory-constrained machine, the user won't really notice his quad-SLI or whatever vacuum-cleaner box being any better. And you've sacrificed a lot in visual quality.

      • Anything past ~70 fps is really unnoticeable by the average human eye.

        I disagree. If you can render the average scene at 300 fps, you can:

        • Apply motion blurring (think 4x temporal FSAA) at 60 fps. Film gets away with 24 fps precisely because of motion blur.
        • Keep a solid 60 fps even through pathologically complex scenes.
        • Render at 60 fps even when four players have joined in on the same home theater PC.

        If you design the game to run at 70 fps for a slow and memory constrained machine [...] you've sacrificed a lot in visual quality.

        A well-engineered game will have (or be able to generate) meshes and textures at high and low detail for close-up and distant objects respectively. On high-spec PCs, you can use the high-detail assets farther from the camera; on the slow and memory-constrained PCs that your potential customers already own, they get the low-detail assets but can still enjoy the game.
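
        A minimal C++ sketch of that detail-selection idea; the tiers, distances, and the budget_scale knob are hypothetical, not taken from any particular engine:

        #include <iostream>

        enum class Detail { Low, Medium, High };

        // budget_scale > 1.0 on a high-spec machine pushes high-detail assets
        // farther from the camera; < 1.0 on a constrained machine pulls them in.
        Detail pick_detail(double distance_to_camera, double budget_scale) {
            const double near_cutoff = 20.0 * budget_scale;
            const double mid_cutoff  = 60.0 * budget_scale;
            if (distance_to_camera < near_cutoff) return Detail::High;
            if (distance_to_camera < mid_cutoff)  return Detail::Medium;
            return Detail::Low;
        }

        int main() {
            // Same object at the same distance: Medium on the big box, Low on the netbook.
            std::cout << static_cast<int>(pick_detail(35.0, 1.5)) << "\n";
            std::cout << static_cast<int>(pick_detail(35.0, 0.5)) << "\n";
        }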

        • by imgod2u ( 812837 ) on Monday June 01, 2009 @10:22AM (#28167715) Homepage

          I disagree. If you can render the average scene at 300 fps, you can:

          • Apply motion blurring (think 4x temporal FSAA) at 60 fps. Film gets away with 24 fps precisely because of motion blur.
          • Keep a solid 60 fps even through pathologically complex scenes.
          • Render at 60 fps even when four players have joined in on the same home theater PC.

          All of your points follow the argument "you can do 60 fps with higher quality", which was pretty much my argument...

          A well-engineered game will have (or be able to generate) meshes and textures at high and low detail for close-up and distant objects respectively. On high-spec PCs, you can use the high-detail assets farther from the camera; on the slow and memory-constrained PCs that your potential customers already own, they get the low-detail assets but can still enjoy the game.

          It could or it could not. The point is the game can utilize the computing power of higher-end systems. It isn't just designed for a slow and memory-constrained machine and then runs at blazing fps on faster systems; you can change visual quality settings to use more computing power.

        • by Shin-LaC ( 1333529 ) on Monday June 01, 2009 @10:30AM (#28167835)
          Mod parent up. And here [100fps.com] is a page that explains some common misconceptions.
    • Dwedit, the current maintainer of the PocketNES emulator for Game Boy Advance, wrote:

      Make developers target a slow and memory constrained platform.

      I hope you're not talking about something like the NES. There are some things that just won't fit into 256 KB of ROM and 10 KB of RAM, like a word processing document or the state of the town in a sim game like SimCity or Animal Crossing.

      Then you get stellar performance when it runs on the big machines.

      Only if the big machines use the same CPU and I/O architecture as the small machines. Otherwise, you need to use an emulator that brings a roughly 10:1 CPU penalty (e.g. PocketNES), or more

    • Re: (Score:3, Funny)

      Ah ha, the business model behind Android finally reveals itself :)

    • This is why I'm interested in checking the license details of Windows 7 Starter Edition.

      Designed to run on a netbook? Less bloat? Reduced cost to the consumer? Win-win.

      I use third party media players and don't care about Aero Glass. If it supports DX10, we have a new Windows gaming platform.
    • Re:The 'easy' way (Score:5, Informative)

      by Abcd1234 ( 188840 ) on Monday June 01, 2009 @09:25AM (#28166981) Homepage

      Make developers target a slow and memory constrained platform. Then you get stellar performance when it runs on the big machines.

      Hardly. Have you never heard of space-time tradeoffs? I.e., the most common compromise one has to make when selecting an algorithm for solving a problem? If you assume you have a highly constrained system, then you'll select an algorithm which will work within those constraints. That probably means selecting for space over time. Conversely, if you know you're working on a machine with multiple gigabytes of memory, you'll do the exact opposite.

      In short: there's *nothing wrong with using resources at your disposal*. If your machine has lots of memory, and you can get better performance by building a large, in-memory cache, then by all means, do it! This is *not* the same as "bloat". It's selecting the right algorithm given your target execution environment.
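
      A small C++ sketch of that space-for-time trade; expensive() is a hypothetical stand-in for real work, and on a constrained target you would shrink or drop the cache and recompute instead:

      #include <cstdint>
      #include <iostream>
      #include <unordered_map>

      std::uint64_t expensive(std::uint64_t n) {     // stand-in for real work
          std::uint64_t acc = 0;
          for (std::uint64_t i = 0; i < 10000000; ++i) acc += (n ^ i) % 97;
          return acc;
      }

      std::uint64_t cached_expensive(std::uint64_t n) {
          static std::unordered_map<std::uint64_t, std::uint64_t> cache;
          auto it = cache.find(n);
          if (it != cache.end()) return it->second;  // time saved...
          return cache[n] = expensive(n);            // ...at the cost of memory
      }

      int main() {
          std::cout << cached_expensive(42) << "\n"; // computed
          std::cout << cached_expensive(42) << "\n"; // served from the cache
      }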

    • by drsmithy ( 35869 )

      Make developers target a slow and memory constrained platform. Then you get stellar performance when it runs on the big machines.

      Non sequitur.

  • by rotide ( 1015173 ) on Monday June 01, 2009 @08:56AM (#28166685)
    Why would a company spend money to make software more efficient when the current incarnation does its job just fine?

    While I like the idea of being as succinct and efficient as possible with your code, at what point does it become fruitless?

    Obviously, if you're testing your code on a "new" workstation and it's sluggish, you'll find ways to make it work better. But if it works well? What boss is going to pay you to work on a project for no real benefit other than to point out it is very efficient?

    • Unlike workstations, where (as you say) the value of going from "workstation adequately responsive, 60% load" to "workstation adequately responsive, 30% load" is pretty much zero, it matters on servers, particularly servers running vast numbers of instances of a homogeneous workload. If you have thousands of instances, gains of even a few percent mean substantial reductions in the number of servers you need to run.
      • Unlike workstations where(as you say) the value of going from "workstation adequately responsive, 60% load" to "workstation adequately responsive, 30% load" is pretty much zero

        Not always. A notebook computer running at 60% load draws more current than one running at 30% load. But LCD backlights eat a lot of power too, and the licensing policy that Microsoft announced for Windows 7 Starter Edition (CPU less than 15 watts) might encourage CPU engineers to move more logic to the GPU and the chipset.

        If you have thousands of instances, gains of even a few percent mean substantial reductions in the number of servers you need to run.

        Moore's law predicts that transistor density on commodity integrated circuits doubles every 18 months. This means more cores can fit on the same size chip. If your applications are inhere

    • Actually, if you're doing test-directed development, you should have a test that tells you whether you've met your performance needs or not. Your management wants to know they have a certain amount of bang/$ to meet their performance budget.

      For user-interface stuff, that could be as simple as "3 seconds on average, no more than 5% over 20 seconds", for some number of simulated users on your development machine.

      So build a test framework and measure the first part of the program you write. For example, that
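
      A minimal C++ sketch of that kind of performance assertion, using the example budget above ("3 seconds on average, no more than 5% over 20 seconds"); handle_request() and the request count are hypothetical stand-ins for the code under test:

      #include <algorithm>
      #include <cassert>
      #include <chrono>
      #include <thread>
      #include <vector>

      void handle_request() {                        // stand-in for the real code path
          std::this_thread::sleep_for(std::chrono::milliseconds(2));
      }

      int main() {
          const int kRequests = 200;                 // simulated users/requests
          std::vector<double> seconds;
          for (int i = 0; i < kRequests; ++i) {
              auto t0 = std::chrono::steady_clock::now();
              handle_request();
              seconds.push_back(std::chrono::duration<double>(
                  std::chrono::steady_clock::now() - t0).count());
          }
          double avg = 0;
          for (double s : seconds) avg += s;
          avg /= seconds.size();
          auto slow = std::count_if(seconds.begin(), seconds.end(),
                                    [](double s) { return s > 20.0; });
          assert(avg <= 3.0);                        // "3 seconds on average"
          assert(slow <= kRequests * 5 / 100);       // "no more than 5% over 20 s"
      }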

    • by cylcyl ( 144755 ) on Monday June 01, 2009 @09:39AM (#28167141)

      When companies go into a feature race, they forget that it quickly becomes a game of diminishing returns, as the features you enable are less and less likely to interest your client base.

      However, if you improve the performance of your core functions (through UI or speed), your entire customer base gets an improvement and has a real reason to upgrade

    • Because it's not doing its job just fine if it's inefficient. There's a certain amount of inefficiency that's optimal or acceptable, but milliseconds can and do add up.

      Over the entire company, what might be a minor waste of time for one person can become significant very quickly, which is one of the reasons why updating computers and adding a second monitor can be such a profitable move for a company. Tweaks like that do cost money in the short term, but frequently pay off in the long term.

      The only t
  • coming from google (Score:2, Insightful)

    by ionix5891 ( 1228718 )

    who are trying to make software available only via a browser and clunky javascript

    makes this rather ironic

    • by Ilgaz ( 86384 )

      No, it is their justification for running an office suite in a Web browser, which I did back in 2001 with Think Free Office. The Think Free guys used to rely on Java, but that changed as technology progressed; now they use a mixture of Java, Ajax and HTML technologies. I think some Flash will be involved too.

      Of course, the same people who laughed at me for using an office suite written in Java now talk about what a modern idea Google invented (!) eight years later.

      Of course, native vs. interpreted application? I purchased Apple iWork, fu

    • Re: (Score:2, Insightful)

      by Anonymous Coward
      So... you don't think it would be a good idea for them to improve the efficiency of their browser and said software? To me it sounds like common sense, not irony... if you're going to run software in a browser via javascript, make it really efficient software.
    • by bcrowell ( 177657 ) on Monday June 01, 2009 @11:59AM (#28169103) Homepage

      coming from google who are trying to make software be available only via a browser and clunky javascript makes this rather ironic

      The transcript leaves out a few things from the video, the main one being that Brin gives a list of applications he has specifically in mind: gmail, chrome, and Native Client [googlecode.com]. Of these, only gmail is a javascript application. Chrome doesn't run in a browser, Chrome is a browser. And Native Client is an attempt to get out of the very situation you're complaining about, where web-based apps have to be written in javascript. NativeClient (NaCl) is a browser plugin that allows native x86 code to run in a browser. If you read the paper on NaCl I linked to above, the emphasis on security is impressive. They clearly understand what a disaster things like ActiveX have been in terms of security, and they're serious about making it safe with all kinds of fancy techniques.

      A couple of other observations:

      They're not kidding about making performance a priority, it's not a new priority for them, and they seem to be doing well at it. When I first tried the Google Docs spreadsheet, its performance was completely unacceptable. A year or so later, it was mentioned on Slashdot again. I was all set to make a snarky post about its poor performance, but then I stopped and decided to try it again to see if the performance was still as bad as I remembered. It was much better, so I posted on Slashdot to say so. I then got an email from one of the developers working on Google Docs to say he was glad I'd noticed the improvement, because it had been their main priority recently.

      In the video, Brin refers to "Page's law" as the "inverse of Moore's law." I would actually say it's not so much an inverse of it as a corollary of it. Developers are always going to be as sloppy as they can get away with being, and they're always going to prefer to work with languages and APIs that give them the maximum amount of abstraction, platform-independence, and expressiveness. Software houses are always going to market proprietary software based on features (which the user can read about before making a decision to buy), not on performance (which the user can't test until he's paid for the software and tried it out on his own machine). Therefore they're always going to write software that performs as badly as they can get away with. That means that if Moore's law improves hardware performance by a factor of x over a certain period of time, software developers are just naturally going to write software that performs worse by a factor of x over that same period of time.

      The really scary thing about browser-based apps, in my opinion, is that they represent a huge threat to open-source software, exactly at the moment when the OSS software stack is starting to be pretty comprehensive, mature, and usable. If you look at the web apps out there, essentially all of them are under proprietary licenses, and nearly all of them are impossible to run without a server running the completely closed-source server-side code. Although Google generally seems pretty friendly toward OSS, I don't really want to have to rely on their good intentions. They are, after all, a publicly traded company, whose only reason for existing is to maximize returns for their shareholders. From this perspective, NaCl is actually pretty scary. The default with javascript is that at least you get to see the source code of the client-side software, even if it's under a proprietary license; I think it's only natural for me to demand this if my web browser is going to run random code off of some stranger's web site. With NaCl, the default will be that all I ever get to see is the object code of the program. This is even worse than java applets; java is actually relatively easy to disassemble into fairly readable source code. (And in any case, java applets never caught on.)

  • 1) Historically: thwarting piracy. Bigger apps were harder to pirate. Copying 32 floppies = pain in the ass.

    2) The perception of value. More megabytes implies more features implies more value. You can charge more. Also, you can charge people again for what is basically the same product (there are companies that depend on this!)
  • by viyh ( 620825 ) on Monday June 01, 2009 @09:02AM (#28166735)
    "Page's Law" seems to be a tongue in cheek joke since it's sited primarily by the Google folks themselves. It definitely isn't true across the board. It's purely a matter of a) what the software application is and b) how the project is managed/developed. If the application is something like a web browser where web standards are constantly being changed and updated so the software must follow in suit, I could see where "Page's Law" might be true. But if the product is well managed and code isn't constantly grandfathered in (i.e., the developers know when to start from scratch) then it wouldn't necessarily be a problem.
    • by Keith_Beef ( 166050 ) on Monday June 01, 2009 @09:07AM (#28166775)

      All he has done is put numbers into Wirth's law.

      I remembered this as "software gets slower faster than hardware gets faster", but Wikipedia has a slightly different wording: "software is getting slower more rapidly than hardware becomes faster".

      http://en.wikipedia.org/wiki/Wirth%27s_law

      In fact, that article also cites a version called "Gates's Law", including the 50% reduction in speed every 18 months.

      K.

  • I haven't talked about it for a while, as I am tired of Google fanatics, but what is the point of running software with Administrator (Win) / Super User (Mac) privileges every 2 hours just to... check for updates?

    I speak about the Google Updater and I don't really CARE if it is open source or not.

    Not just that; you are setting a very bad example for the industry to use as a reference. They have already started saying "but Google does it".

    Is that part of the excuse? Because hardware guys beat the badly designed s

  • by fuzzyfuzzyfungus ( 1223518 ) on Monday June 01, 2009 @09:08AM (#28166795) Journal
    I'd suspect that Google probably will. Not because of any OMG special Google Genius(tm), but because of simple economics.

    Google's apps are largely web based. They run on Google's servers and communicate through Google's pipes. Since Google pays for every server-side cycle, and every byte sent back and forth, they have an obvious incentive to economize. Since Google runs homogeneous services on a vast scale, even tiny economies end up being worth a lot of money.

    Compare this to the usual client application model: even if the scale is equivalent, the maker of the software doesn't pay for the computational resources. Their only pressure is indirect (i.e. customers who don't buy because their machines don't meet spec, or customers who get pissed off because performance sucks). They thus have a far smaller incentive to watch their resource consumption.

    The client side might still be subject to bloat, since Google doesn't pay for those cycles; but I suspect competitive pressure, and the uneven javascript landscape, will have an effect here as well. If you are trying to sell the virtues of webapps, your apps are (despite the latency inherent in web communication) going to have to exhibit adequate responsiveness under suboptimal conditions (i.e. IE 6, cellphones, cellphones running IE 6), which provides the built-in "develop for resource constrained systems" pressure.
    • Compare this to the usual client application model: even if the scale is equivalent, the maker of the software doesn't pay for the computational resources. Their only pressure is indirect (i.e. customers who don't buy because their machines don't meet spec, or customers who get pissed off because performance sucks). They thus have a far smaller incentive to watch their resource consumption.

      Then why are games for PlayStation 2 still coming out years after the launch of the PLAYSTATION 3 console? If the incentive to run on existing deployed hardware were so small, major video game publishers would make their games PS3-exclusive even if the game's design didn't require it.

      • "customers who don't buy because their machines don't meet spec"

        Since consoles move in large, discrete steps, the particular indirect pressure noted above is extremely significant. In the case of the PlayStation, the PS2 was released ~2000 and the PS3 ~2007. Nothing in between, one or the other, and the PS2 has a vastly greater installed base. Because specs are fixed, requirements don't get to drift upward. They either stay still, or jump.

        PCs aren't wholly different, any publisher of "casual games", fo
  • When I was a little kid, I saw a new computing device: a Pacman cabinet at the local pinball parlour.

    Since then, I've seen dozens of implementations of it, and they fall into two camps: a knockoff that can hardly be called a Pacman-clone, or a full-up 100% authentic duplicate of the original. Of course the latter is done with emulation. Every important detail of the old hardware can be emulated so a true ROM copy can be run with the same timing and everything behaves properly. If you know the proper sec

  • Page's Law. (Score:3, Insightful)

    by C_Kode ( 102755 ) on Monday June 01, 2009 @09:16AM (#28166881) Journal

    Sounds like someone is trying to cement their legacy in history by stamping their name on common knowledge. :-)

  • We could also consider the possibility that a twice-as-fast computer on a twice-as-fast network pipe produces twice-as-much data which, in order to keep the same perceived speed, must be processed twice-as-quickly by another computer.

    • by tepples ( 727027 )

      We could also consider the possibility that a twice-as-fast computer on a twice-as-fast network pipe produces twice-as-much data

      But why is it producing twice-as-much data? Is it receiving twice-as-many requests? If so, from whom? Twice-as-many users? Or a single user doing twice-as-many things?

  • One thing that rarely comes up when discussing bloat and slow, underperforming applications is energy consumption. While you can shave a few percent off a server by maximizing hardware energy savings, you can in many cases save much more by optimizing its software.

    I think it all comes down to economics. As long as the hardware and software industry lives in symbiosis with their endless upgrade loop we will have to endure this. To have your customers buy the same stuff over and over again is a precious

  • From the transcript of the speech:

    "you never loose a dream"

  • by JamesP ( 688957 )

    It's simple, just don't use Java

    On a more serious note, my personal opinion is to have the developers use and test the programs on slower machines.

    Yes, they can profile the app, etc., but the problem is that it really doesn't create the "sense of urgency" that working on a slow machine does. (Note I'm not saying developers should use slow machines to DEVELOP, but there should be a testing phase on slow machines.)

    Also, slower machines produce more obvious profile timings.
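
    A minimal C++ sketch of the kind of cheap timing hook that makes the slow spots obvious when testing on the slow box; the section name is hypothetical, and this is no substitute for a real profiler:

    #include <chrono>
    #include <cstdio>

    // Prints how long a scope took when it exits.
    class ScopedTimer {
    public:
        explicit ScopedTimer(const char* name)
            : name_(name), start_(std::chrono::steady_clock::now()) {}
        ~ScopedTimer() {
            auto ms = std::chrono::duration<double, std::milli>(
                std::chrono::steady_clock::now() - start_).count();
            std::fprintf(stderr, "%s: %.2f ms\n", name_, ms);
        }
    private:
        const char* name_;
        std::chrono::steady_clock::time_point start_;
    };

    int main() {
        ScopedTimer t("load_document");   // hypothetical section being timed
        // ... the work you suspect is slow ...
    }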

  • Grosch's (other) Law (Score:4, Informative)

    by Anonymous Coward on Monday June 01, 2009 @09:34AM (#28167069)

    Herb Grosch said it in the 1960's: Anything the hardware boys come up with, the software boys will piss away.

  • Larger user base (Score:3, Interesting)

    by DrWho520 ( 655973 ) on Monday June 01, 2009 @09:54AM (#28167341) Journal
    Making later versions of software run more efficiently on a baseline piece of hardware may also make the software run more efficiently on lesser pieces of hardware. Does the increase in possible install base (since your software now runs on hardware slower than your baseline) justify a concerted effort to write software that runs more efficiently?
  • by toby ( 759 ) * on Monday June 01, 2009 @09:57AM (#28167375) Homepage Journal

    10.0, 10.1, 10.2, 10.3, and maybe 10.4 were a series of releases where performance improved with each update. I don't run 10.5 so I can't comment on whether the trend continues.

    • Comment removed based on user account deletion
    • Re: (Score:3, Insightful)

      by blueZ3 ( 744446 )

      Because 10.0 sucked? I don't know if it was intentional or not, but that was slow enough that I noticed that speed was an issue (and I was using only the most pedestrian of software--browser and email). It was as if the OS was completely unoptimized. If subsequent releases had gotten slower, they'd have been going backwards.

      My primary computer, my wife's computer, and our HTPC are all Macs, so I'm not trolling... but damn was it slow.

  • Hardware has advanced to the point that we don't care about performance all that much.

    What is more of a concern is how easy it is to write software, and how easy it is to maintain that software, and how easy it is to port that software to other architectures. Efficiency of code generally means efficient use of a single architecture. That's fine, but for code that has to last a long time (i.e., anything besides games), you want it to be written in a nice, easy-to-change way that can be moved around to diffe

    • Hardware has advanced to the point that we don't care about performance all that much.

      That might be true of software intended to run on desktop PCs. But for servers, you want efficiency so you can handle more requests from more users. And for software intended to be run on small, cheap, battery-powered devices, you want efficiency so you can underclock the CPU and run longer on a charge. You mentioned games, but a lot of applications for handheld and subnotebook computers aren't games.

  • by jollyreaper ( 513215 ) on Monday June 01, 2009 @10:37AM (#28167949)

    Business managers don't want to pay for great when good will do. Have you gotten the beta to compile yet? Good, we're shipping. I don't care if it was a tech demo, I don't care if you said your plan was to figure out how to do it first, then go back through and do it right. We have a deadline, get your ass in gear.

    Then the next release cycle comes around and they want more features, cram them in, or fuck it we'll just outsource it to India. We don't know how to write a decent design spec and so even if the Indians are good programmers, the language barrier and cluelessness will lead to disaster.

    And here's the real kicker -- why bother to write better when people buy new computers every three years? We'll just throw hardware at the problem. This is the factor that's likely to change the game.

    If you look at consoles, games typically get better the longer the console is on the market, because programmers become more familiar with the platform and what it can do. You're not throwing more hardware at the problem, not until the new console ships. That could be years and years away, just for the shipping, and even more years until there's decent market penetration. No, you have to do something wonderful and new and it has to be done on the current hardware. You're forced to get creative.

    With the push towards netbooks and relatively low-power systems (low-power by today's standards!), programmers won't be able to count on power outstripping bloat. They'll have to concentrate on efficiency or else they won't have a product.

    There's also the question of how much the effort is worth. $5000 in damage to my current car totals it, even if it could be repaired. I can go out and buy a new car. In Cuba, there's no such thing as a new car; there are only so many on the market. (Are they able to import any these days?) Anyway, that explains why the 1950's disposable rustbuckets are still up and running. When no new cars are available for love or money, the effort in keeping an old one running pays for itself.

    Excellence has to be a priority coming down from the top in a company. If cut-rate expediency is the order of the day, crap will be the result.

  • by Winter Lightning ( 88187 ) on Monday June 01, 2009 @11:20AM (#28168531)

    "Page's law" is simply a restatement of May's law:

    "Software efficiency halves every 18 months, compensating Moore's Law".

    David May is a British computer scientist who was the lead architect for the Transputer. See:
    http://en.wikipedia.org/wiki/David_May_(computer_scientist) [wikipedia.org]
    and page 20 of:
    http://www.cs.bris.ac.uk/~dave/iee.pdf [bris.ac.uk]

  • by mzs ( 595629 ) on Monday June 01, 2009 @11:50AM (#28168935)

    In four days I will have been working as a dev for ten years. I've worked at a few places, and I think the reason for this is pretty straightforward: poor benchmarks, used poorly.

    We have all heard the mantra that optimizing early is evil, but there are two issues to contend with. You get to crunch time towards the end, and then there is no time to address performance issues in every project. By that time so much code has been written that you cannot address the performance issues in the most effective way: thinking about what algorithm to use for the dataset that ends up being the common case. So instead some profiling work gets done and the code goes out the door.

    So for success you need to have some performance measurements even early on. The problem is that in that case you end up with some benchmarks that don't measure the right thing (that is what you discover near the end), or you have worthless benchmarks that suffer too much from not being reproducible, taking too long to run, or not giving the dev any idea of where the performance problem really is.

    So what ends up happening is that only after the code base has been around for a while and you get to rev n + 1 is there any real handle on any of this performance stuff. But often what ends up happening is that project management values feature additions, so as long as no single benchmark regresses by more than 2-5% and the overall performance does not decrease by more than 15% compared to the pre-feature build, it gets the okay. Then a milestone arrives, there is again no time for systematic performance work, and it ships as is.

    The right approach at that stage would be to not allow a new feature unless the overall benchmark improves by 2%, and to benchmark your competitors as well; but sadly that just does not happen, except in the very rare good groups.
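
    A small C++ sketch of such a gate, with the per-benchmark and overall thresholds left as tunable inputs (the 2% and 15% figures echo the numbers above and are not an established standard; the benchmark names are made up):

    #include <cstdio>
    #include <map>
    #include <string>

    // Returns true if the new build should be rejected: any single benchmark
    // slower than per_bench_limit, or the whole suite slower than total_limit.
    // Assumes both runs cover the same benchmark names.
    bool regressed(const std::map<std::string, double>& baseline_secs,
                   const std::map<std::string, double>& current_secs,
                   double per_bench_limit = 0.02, double total_limit = 0.15) {
        double base_total = 0, cur_total = 0;
        for (const auto& [name, base] : baseline_secs) {
            const double cur = current_secs.at(name);
            base_total += base;
            cur_total  += cur;
            if (cur > base * (1.0 + per_bench_limit)) {
                std::fprintf(stderr, "%s regressed: %.3fs -> %.3fs\n",
                             name.c_str(), base, cur);
                return true;
            }
        }
        return cur_total > base_total * (1.0 + total_limit);
    }

    int main() {
        std::map<std::string, double> before{{"parse", 1.00}, {"render", 2.00}};
        std::map<std::string, double> after{{"parse", 1.01}, {"render", 2.50}};
        return regressed(before, after) ? 1 : 0;   // render slipped 25%: reject
    }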

  • by henrypijames ( 669281 ) on Monday June 01, 2009 @12:14PM (#28169313) Homepage

    ... isn't software vs. hardware, but speed vs. functionality, i.e., in the history of most software, the decrease in speed is disproportionate to the increase in functionality. Of course, "disproportionate" is subjective, and new, advanced functionalities are generally more complicated and resource-intensive than old, basic ones. So a simple inverse-linear relationship might be unrealistic, but when many programs don't even manage to beat an inverse-quadratic ratio, there's definitely something wrong.
