
Reducing Firefox's Memory Use

An anonymous reader writes "Many people have complained about Firefox's memory use. Federico Mena-Quintero has a proposal for reducing the amount of memory used to store images, which, in his proof-of-concept code, 'reduced the cumulative memory usage... by a factor of 5.5.'"
This discussion has been archived. No new comments can be posted.

  • Easier solution (Score:5, Insightful)

    by plover ( 150551 ) * on Friday November 25, 2005 @03:11PM (#14114254) Homepage Journal
    Buy more memory. It seems to be the rest of the industry's answer to resource hogging software. Look at all the bloatware out there: XML, JVMs, .NET, etc. The rest of the world is building for 1GB boxes, so who cares how efficient their code is anymore?

    Coming from me, this is sarcasm, but it's a depressingly prevalent real attitude in the industry.

    • Re:Easier solution (Score:3, Interesting)

      by eggstasy ( 458692 )
      With CPU development practically stagnating, but RAM and HD storage still growing fast, it also strikes me as a pretty good solution. I wouldn't want to instance that CPU-hogging old JPEG algorithm 50 times just because I hit the back button.
      There's always a tradeoff between memory use and CPU consumption. If it's simple to do and has enough impact, maybe they should let the users decide?
      Or even automatically configure it depending on the user's hardware.
      • Re:Easier solution (Score:3, Interesting)

        by obi ( 118631 )
        it's not just the amount of space it takes in RAM and HD, but also the amount of time it takes to transport something that's 5.5x bigger. Have you seen hard drives become 5.5x faster recently? Or even RAM?

        At the same time, rendering/decompressing an image might be quite self-contained, and running 50 self-contained instances of that CPU-hogging old JPEG algorithm might parallelize quite easily, so would scale with the current trends in CPU development (more parallel - just throw more chips/cores/DSPs/GPUs
        • The other problem is that you'll still have to periodically close out the browser to free all the memory that's in use, meaning that you have to reload all those pages that you have open in various tabs. Forgive me for comparing Firefox to Windows, but I have to effectively "reboot" Firefox from time to time in order to get all that memory back. It makes it harder to send the message that Linux doesn't suffer from the same problems that Windows does when you have to restart the browser.

          That being said,

          • The other problem is that you'll still have to periodically close out the browser to free all the memory that's in use, meaning that you have to reload all those pages that you have open in various tabs. Forgive me for comparing Firefox to Windows, but I have to effectively "reboot" Firefox from time to time in order to get all that memory back.

            It's not only that. Firefox will periodically freeze in its tracks for about 1 minute; CPU usage climbs to 70-80% while it's doing god knows what.

            • It's not only that. Firefox will periodically freeze in its tracks for about 1 minute; CPU usage climbs to 70-80% while it's doing god knows what.
              The A9 and Google toolbars used to cause this whenever I hit an https page -- check for that.
          • It makes it harder to send the message that Linux doesn't suffer from the same problems that Windows does when you have to restart the browser.

            That it does. But Linux isn't any better than Windows when running an equivalent browser. Mozilla on Windows and Mozilla on Linux are about the same in almost all regards.

            So the important thing is to point out that there are plenty of other advantages in running Linux.
          • I actually don't see any memory issues with Firefox on Linux, whereas I definitely experience these glitches on the Windows machines I have to use at work. They are equivalent machines; I just never see Firefox memory or CPU use climb on Linux.

            ~Anders
          • The other problem is that you'll still have to periodically close out the browser to free all the memory that's in use, meaning that you have to reload all those pages that you have open in various tabs. Forgive me for comparing Firefox to Windows, but I have to effectively "reboot" Firefox from time to time in order to get all that memory back. It makes it harder to send the message that Linux doesn't suffer from the same problems that Windows does when you have to restart the browser.

            Is that because w
            • I understand your point, but how about something as simple as a "cleanup memory" button. I too often have several tabs open (4 instances right now with 24 tabs) but most of those tabs will sit idle most of the time, like my PHP/MySQL/M-W.com windows that are mainly used for occasional reference. Being able to force a cleanup would be good. I'd still save the time to reload the entire page, even if it does take a second to redraw when I flip back to the tab.

              I'm not totally sure that I subscribe to your

              • Re:Easier solution (Score:4, Interesting)

                by mallie_mcg ( 161403 ) on Saturday November 26, 2005 @10:50PM (#14121658) Homepage Journal
                I understand your point, but how about something as simple as a "cleanup memory" button. I too often have several tabs open (4 instances right now with 24 tabs) but most of those tabs will sit idle most of the time, like my PHP/MySQL/M-W.com windows that are mainly used for occasional reference. Being able to force a cleanup would be good. I'd still save the time to reload the entire page, even if it does take a second to redraw when I flip back to the tab.

                Great for people like us, who know what's going on in the background. Many people never run into the Firefox memory issues because they tend to use one program at a time and switch their machines off regularly. The solution should not be a manual thing; it should be possible to solve it with clever programming (perhaps an idle / lack-of-mouse-movement detection?). It's hard enough to train users to use computers to do their jobs without adding extra load to their fragile little minds. M
                • The solution should not be a manual thing; it should be possible to solve it with clever programming (perhaps an idle / lack-of-mouse-movement detection?). It's hard enough to train users to use computers to do their jobs without adding extra load to their fragile little minds.

                  Agreed. But the more complex you make it, the less likely it is to get done. I've seen threads on Bugzilla go on for years while people argue the nuances of an implementation. Better a manual process than no process at all.

          • You know, I never have to do that with Mozilla. In fact, 99% of the problems I see Firefox users complain about, I never experience with Mozilla. Hell, I leave Mozilla open for days on end, never bothering to close it when I go to work/sleep/eat/whatever. And that's on Windows, too--my linux box is too slow to tolerably run a gui (though lately, doing rather well with enlightenment, might have to try again..), in case you're wondering.

            Glad I switched back to Mozilla after Firebird...
      • Re:Easier solution (Score:3, Insightful)

        by GigsVT ( 208848 )
        With CPU development practically stagnating, but RAM and HD storage still growing fast

        What?

        I could believe HD, but RAM sizes have not kept up at all. You might have gotten a system with 128 megs, a 20 gig hard disk, and a CPU running 800 MHz a few years ago. These days you generally get a system with something like 512 megs and 200 gigs of storage running 3 GHz. Also, RAM prices have not dropped all that much. 512 megs of DDR2 is over $200.

        Yes, CPU speeds have stagnated in the last year or so, only growin
        • I would suggest that the limited expansion in RAM sizes in new computers is partly due to cost cutting to avoid unnecessary expense, and partly due to the fact that not many users need that much RAM because most programs haven't needed it.

          I managed to score an off-lease computer for dirt cheap decked out to 4GB of RAM, then realized that I have absolutely no need or use for that much memory. On a Windows 2000 computer, I rarely use more than 500MB.
          • With Knoppix and some other live CDs you can use a cheat code (I think it's knoppix toram or some such) and, when it is booting, it puts the entire OS into RAM if you have enough, like you have. I do this with a mini distro, Austrumi, and it makes it *blazingly fast*, as in quick. Like a megaprocessor upgrade or something. Knoppix is, I think, about 2 gigs uncompressed with the single-CD version, so 4 gigs of RAM would give you some decent cushion.
        • When you say 3 GHz, just be aware that that is "Pentium 4" GHz, which isn't the same as "Pentium III" GHz. We really haven't come as far as you say.
        • A 512MB DDR2 stick can be easily bought for 30-40 and a 512MB PC3200 (=DDR1) stick for 39-49.

          I recently upgraded my laptop with a 1GB DDR2 SODIMM module, total cost: 99.

          $200? Nice joke buddy but no cigar!
          Also, maybe pure MHz hasn't gone up more than 15-20%, but pure 'speed', or performance per MHz, has at least tripled. I got my AMD X2 4200+ for less than a P4/2533 cost only a few years ago, and the difference in practice in LAME/Xvid/games is on a completely different scale.
    • Re:Easier solution (Score:3, Informative)

      by Jeff DeMaagd ( 2015 )
      At first, I thought so, but I just looked at my memory usage and Firefox is the top hog, beating the next two with more combined memory use. I don't mind a program taking 50MB, but it's at 116MB right now.
      • Firefox alone, with only a few tabs open, can consume as much ram as xorg + kde (including amarok, kmail, and everything that loads up at startup on my box) + konqueror. Bloatware or what?
        • Bah, it is just cache. If I have the memory available why not use it? Unfortunately Firefox isn't at a Kernel level and can't know if perhaps some portion of the memory would be better spent overall if it were used on caching the local I/O for example.
          • Bah, it is just cache. If I have the memory available why not use it? Unfortunately Firefox isn't at a Kernel level and can't know if perhaps some portion of the memory would be better spent overall if it were used on caching the local I/O for example.

            Easy solution: Create a file in /tmp, memory-map it, and satisfy all requests for cache memory from the memory-mapped region. Then it should compete on equal footing with all other filesystem IO, and be pushed out of disk cache and pulled back in as appropri
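
            A minimal sketch of that idea, assuming Linux and POSIX mmap(); the tmpfile() approach and the cache size are made up for illustration, not anything Firefox actually does:

              // Back an image cache with a memory-mapped temp file so its pages
              // compete with ordinary file I/O in the kernel's page cache,
              // instead of living in anonymous process memory.
              #include <sys/mman.h>
              #include <unistd.h>
              #include <cstddef>
              #include <cstdio>
              #include <cstring>

              int main() {
                  const std::size_t cache_size = 64 * 1024 * 1024;   // 64 MiB, arbitrary

                  std::FILE* f = std::tmpfile();                     // unlinked temp file
                  if (!f) return 1;
                  if (ftruncate(fileno(f), cache_size) != 0) return 1;

                  void* cache = mmap(nullptr, cache_size, PROT_READ | PROT_WRITE,
                                     MAP_SHARED, fileno(f), 0);
                  if (cache == MAP_FAILED) return 1;

                  // Hand out regions of `cache` for decoded images here...
                  std::memset(cache, 0, 4096);                       // touch one page as a demo

                  munmap(cache, cache_size);
                  return 0;
              }
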

      • Try turning off "browser.cache.memory.enable", in about:config, and see what the usage is. I bet that's the majority of it, and 116M is just fine with me. If memory is tight, it might be a good thing to sacrifice though.
    • Re:Easier solution (Score:5, Informative)

      by Guspaz ( 556486 ) on Friday November 25, 2005 @04:17PM (#14114567)
      Firefox (on Windows) can and will suck up an infinite amount of memory. This is because under some circumstances (Well, always, at least for me and many other users) it does NOT remove the uncompressed images from memory when a tab is closed.

      If I am opening and closing a lot of image-heavy tabs, after a while, my firefox instance is sucking up 800MB of system memory, and the ONLY way to free it is to restart firefox.

      I don't care about firefox's memory usage with compressed versus uncompressed. If I'll get more speed with 90MB of uncompressed images, go for it. What I do have a problem with is how it doesn't bother to remove raw images that are no longer needed. Essentially, it is a really bad memory leak that they haven't fixed for ages.

      As for reducing actual memory usage, a hybrid solution is best. At the very least all images on other tabs should remain compressed, and then decompressed when switching to that tab, going back to the compressed images from the old tab (Disk cache them, or keep both compressed and uncompressed in memory).

      In addition, you can probably do smart caching of images on the current tab. As the article author mentioned, keep uncompressed copies only of images near the current viewport. Another solution might be to store everything as compressed, even in the current tab, and modify the rendering engine so that images are drawn asynchronously. A 100ms delay while scrolling will cause noticeable hitching, but if you draw the rest of that page and throw in the image 100ms later, the user will have a much smoother experience. They can keep scrolling while the image is loaded in.
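
      A rough sketch of that viewport-proximity policy; every name here is hypothetical (decode_jpeg() stands in for a real decoder), and a real browser would decode asynchronously rather than inline:

        // Keep decoded pixels only for images near the viewport; everything
        // else stays compressed and is re-decoded on demand.
        #include <cstdint>
        #include <optional>
        #include <vector>

        using Bytes = std::vector<std::uint8_t>;

        static Bytes decode_jpeg(const Bytes& compressed) {
            // Placeholder: a real implementation would call libjpeg or similar.
            return Bytes(compressed.size() * 10, 0);
        }

        struct CachedImage {
            Bytes compressed;             // always kept (relatively small)
            std::optional<Bytes> pixels;  // kept only while near the viewport
            int y_position = 0;           // vertical position in the document

            // Decode when within `margin` pixels of the viewport, otherwise
            // drop the decoded copy.
            void update_for_viewport(int top, int bottom, int margin = 2000) {
                bool is_near = y_position >= top - margin &&
                               y_position <= bottom + margin;
                if (is_near && !pixels)
                    pixels = decode_jpeg(compressed);
                else if (!is_near && pixels)
                    pixels.reset();       // free the uncompressed bitmap
            }
        };

        int main() {
            CachedImage img{Bytes(10000, 0xFF), std::nullopt, 5000};
            img.update_for_viewport(4000, 4800);     // near: gets decoded
            img.update_for_viewport(90000, 90800);   // far away: pixels dropped
            return 0;
        }
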
      • by plover ( 150551 ) * on Friday November 25, 2005 @04:50PM (#14114724) Homepage Journal
        You call it a leak. They call it a cache. Semantics, really. :-)
        • If the program knows how to reference the memory again if needed, then it's not technically a memory leak. So I guess what he's complaining about is "inefficient caching of tiles." Which is no small thing, I guess.
      • If you're viewing THAT many images, the Easiest Solution in this case would be to quit looking at all that goddamned porn!
      • Re:Easier solution (Score:3, Interesting)

        by jonadab ( 583620 )
        > What I do have a problem with is how it doesn't bother to remove raw images that are no longer
        > needed. Essentially, it is a really bad memory leak that they haven't fixed for ages.

        This is *partly* due to the way memory allocation and freeing work at the system call level. In a nutshell, memory that you free does not actually become free for other programs to use until your process exits. (As bad as this sounds, it's preferable to the situation wherein the system doesn't know what process owns the
        • Firefox doesn't re-use the memory though. It just keeps going up until it either consumes all system memory or it is restarted.

          As mentioned in other posts, it may not be a memory leak if it can still reference unused bitmaps, but it doesn't seem to be ever removing the old bitmaps from the cache.

          My solution to this so far has been SessionSaver; I can just terminate Firefox and re-open it. All my tabs and sessions are exactly like before I closed it, except memory usage is back to normal.
        • This is a relic from the C/C++ language, having to do with memory fragmentation and the non-relocatability of objects.
        • This is *partly* due to the way memory allocation and freeing work at the system call level. In a nutshell, memory that you free does not actually become free for other programs to use until your process exits. (As bad as this sounds, it's preferable to the situation wherein the system doesn't know what process owns the memory, so if an app has a leak the only way to reclaim it is to restart the whole system.)

          Are you sure? Leaked memory is memory which is still allocated to the leaking process, so you can s
          • > Leaked memory is memory which is still allocated to the leaking process

            Only if the operating system keeps track of which process it was that allocated the memory. In general, reasonable modern operating systems do this, of course, but there have been systems that didn't. Another thing that happened on those systems was that if one program had pointer errors, it could end up corrupting the memory of another process, and the OS wouldn't even know, much less prevent it. (That's also possible on systems
    • Time is expensive. RAM is cheap. Do the math.

      Say you have 500 customers; if they each have to get 512 MB of extra RAM to run your software, the cost of that would run about 500 * $50 = $25,000, give or take.

      Now, say instead, you get your development team to spend 3-4 weeks chasing down memory issues and optimizing the code to be lean and mean. Even if the team is very small (10 people), that just cost you $40,000 to $50,000 in salaries, not to mention the lost time they could be working on something else.

      S
      • In this case, though, the number of customers is on the order of a hundred million people.

        Care to do the math again? :)

        It's an important point, though. Sometimes programmer time isn't used terribly efficiently.
      • So....what you're saying is the Firefox team needs to give out a free stick of RAM with every download?

        I keed, I keed. (running 1.0 something on win2k athlon xp 1800 w 1 Gig RAM. It never crashes or hangs.)

      • by Kris_J ( 10111 ) *
        At work we have half a dozen PCs (of ~22) with the maximum amount of RAM installed that they support. The list includes my laptop. I am therefore for anything that reduces RAM use and against any lazy programming that requires more RAM.
    • Re:Easier solution (Score:2, Insightful)

      by jlarocco ( 851450 )

      Good idea. I don't use Firefox, but that approach will ensure that next time I think about switching browsers, I'll have one less option to consider.

    • You forgot to mention Windows and KDE.

      Seriously, the tendency is to think, "Well, our target market has at least 256 MB of memory and a 1 GHz CPU, so there's plenty of room and no need to optimize". It's easy to start thinking that way, and I suppose exponentially increasing memory demands are what drove manufacturers to keep lowering the cost-per-bit. The problem is that once multitasking operating systems became the rule, an application developer could no longer depend upon having all of a machine's r
    • This is one attitude that I do not understand. Yes, this will solve the problem but when I buy a new computer or more memory, I want a FASTER and more powerful machine.

      If the memory usage is low, then I can run more applications with the same spec!
  • Jerkiness (Score:3, Informative)

    by Froze ( 398171 ) on Friday November 25, 2005 @03:27PM (#14114328)
    A quick read through the article (sacrilege, I know) turned up mention of a small amount of jerkiness when uncompressing images. Does anyone have any idea how this would affect various hardware? For instance, I have a 333 MHz Pentium with ~144 MB of memory that I expect (WAG) would get extremely "jerky" using this.
    • Federico's proof-of-concept code didn't try to do any lookahead staging. He suggested that Firefox decompress images adjacent to, as well as within, the currently visible screen area.
  • X extension (Score:4, Interesting)

    by obi ( 118631 ) on Friday November 25, 2005 @03:29PM (#14114339)
    He gives three possibilities: storing it uncompressed in the X server, storing it uncompressed in the client, and storing it compressed in the client (and uncompressing it on the fly).

    I wonder if it might be interesting and worthwhile to have an X extension to store it compressed on the server? That way there's a lot less X traffic, and potentially a lot more applications could make use of it.

    The only condition is that you don't decompress it in Moz and recompress it to send to the X server, but just pass along the compressed data (there are some security implications with that, but I guess they could be dealt with).

    • Re:X extension (Score:3, Interesting)

      by obi ( 118631 )
      One thing I forgot:

      if it's done on the server-side, it's probably also easier to take advantage of fancy graphics hardware. Imagine a graphics card that is able to decompress JPEGs on the fly, for instance (considering pixel shaders in current 3D hardware, it's not too far fetched).
    • Re:X extension (Score:4, Interesting)

      by emj ( 15659 ) on Friday November 25, 2005 @03:48PM (#14114420) Journal
      The problem is that to store images compressed on the server you would have to decompress, resize, compress and then store it on the server, and you would still have to keep the original in the client. Lots of the pictures on the web are resized before being shown by the browser.
      • That's true, but the extension (and possibly the graphics hardware) could handle the resizing/viewport (and even rotating etc) operations for you too. Of course, I understand that it might be difficult to integrate such an API in Moz's codebase - I just don't know.
    • I wonder if it might be interesting and worthwhile to have an X extension to store it compressed on the server? That way there's a lot less X traffic

      I'd rather let Nomachine NX [gnome.org] deal with the optimizations of X protocol transfers.

    • Re:X extension (Score:4, Interesting)

      by theCoder ( 23772 ) on Friday November 25, 2005 @05:09PM (#14114842) Homepage Journal
      Well, along with that, why couldn't the X server compress the image itself, independent of the app? There wouldn't need to be any changes to the X protocol; this would be something done internal to the server. I suppose there could be an extension to the protocol that allows XPixmaps to be sent in JPEG compressed format or something as well, to reduce transmission time. If this was done, then apps like Firefox wouldn't have to change to get the benefit the author is describing.

      Besides, I'm not sure that storing the images compressed on the client side is going to work as well as the author hopes. In fact, it would increase the RSS of the firefox app, making people think that FF is even more bloated, even though it reduced memory usage overall. How many people have even heard of xrestop (I hadn't until I read the article)?

    • BAH! The obvious thing to do is to remove the graphics from the X-Server after a timeout on tabs that are hidden.

      So the current tab has its images on the X-Server, the hidden tabs have their images on the X-Server, but if you don't view them for 10 minutes they are freed on the X-Server.

      This way there is equal performance for viewing pages, and a slight lag when switching to a tab you haven't viewed in a while. That lag is so small it might as well be nothing. There will be a noticeable lag when you ar
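
      A sketch of that timeout policy, with entirely hypothetical types; the actual X calls are only hinted at in comments:

        // Periodically free server-side pixmaps for tabs not viewed recently.
        #include <chrono>
        #include <unordered_map>

        using Clock = std::chrono::steady_clock;

        struct Tab {
            Clock::time_point last_viewed = Clock::now();
            bool pixmaps_resident = true;
            void free_pixmaps()   { pixmaps_resident = false; /* XFreePixmap(...) */ }
            void reload_pixmaps() { pixmaps_resident = true;  /* re-upload images */ }
        };

        // Called from an idle timer; never touches the currently visible tab.
        void evict_stale_tabs(std::unordered_map<int, Tab>& tabs, int current_id,
                              std::chrono::minutes timeout = std::chrono::minutes(10)) {
            auto now = Clock::now();
            for (auto& [id, tab] : tabs)
                if (id != current_id && tab.pixmaps_resident &&
                    now - tab.last_viewed > timeout)
                    tab.free_pixmaps();   // costs a small re-decode lag later
        }

        int main() {
            std::unordered_map<int, Tab> tabs{{1, Tab{}}, {2, Tab{}}};
            evict_stale_tabs(tabs, 1);
            return 0;
        }
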
  • Suppose you click a link to a huge (say 10 meg) image from a slow server or on dial-up. Then you want to right-click and save as. Since Firefox might not keep a copy of the image in its compressed form, you may very well wind up downloading that file from the server again.

    It may or may not get cached; I've definitely noticed situations where it does not get cached and must be downloaded completely again. There's a bug report on this behavior, but I think it's closed as not a bug.
  • It's the GDI objects (Score:5, Interesting)

    by HughsOnFirst ( 174255 ) on Friday November 25, 2005 @03:37PM (#14114367)
    Windows has something called GDI objects (graphics device interface objects), and firefox uses too damn many of them.

    Somewhere after 5000 of them are in use, Windows slows down to a crawl and dies no matter how much memory you have, and with enough tabs and windows open Firefox will be using 4000+ of them all by itself.
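
    For what it's worth, a process can query its own GDI handle count with GetGuiResources(); a minimal Win32 sketch (the 5000 figure above is the poster's observation, not a documented limit):

      // Print the calling process's GDI and USER object counts.
      // Build with MSVC: cl gdicount.cpp user32.lib
      #include <windows.h>
      #include <cstdio>

      int main() {
          HANDLE self = GetCurrentProcess();
          DWORD gdi  = GetGuiResources(self, GR_GDIOBJECTS);
          DWORD user = GetGuiResources(self, GR_USEROBJECTS);
          std::printf("GDI objects: %lu, USER objects: %lu\n",
                      static_cast<unsigned long>(gdi),
                      static_cast<unsigned long>(user));
          // Task Manager can show the same counter for other processes via
          // View -> Select Columns -> GDI Objects.
          return 0;
      }
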
  • Synchronicity... (Score:5, Interesting)

    by benjamindees ( 441808 ) on Friday November 25, 2005 @04:10PM (#14114540) Homepage
    I'm right now running a copy of Opera [opera.com] on a system that's intentionally limited to 64 megs of RAM. It's working beautifully.

    I'm testing out browsers for use on some old machines as web kiosks. Basically, my choices are:

    • Konqueror - includes all of KDE (ugh)
    • Konqueror embedded - lacks maintenance
    • Firefox - seems to be slow and has issues when run without a window manager
    • Dillo - has website layout problems
    • and, Opera - seems to be the best choice


    These machines (P1), and lots of machines like them, pretty much max out at about 64 megs of RAM. I could probably find more RAM, but it'd be costly, and there are usually hardware compatibility problems.

    Although I'm leaning towards Opera at the moment, I was using Konqueror for a while. Linux does a great job of swapping, and Konqueror is quite snappy, so even with low memory it's a viable option. But, with all the libraries that Konqueror requires, 64 megs is kind of pushing it.

    And there is a decided trend in hardware towards less memory and faster processors. It's not uncommon to find Pentium III's with only 128 megs of RAM. Unfortunately, many open source programs are written without limited-memory machines in mind.

    It's kind of humbling to think that, as few as five years ago, a Pentium I with 64 megs of RAM would run an entire OS and web browser without so much as touching swap space. Today, you have to use apps designed for embedded machines to run in 64 megs of RAM, and you're lucky if you can run more than one app at a time.

    From my testing, Firefox is barely outside the range of viable options for a machine with 64 megs of RAM. But as with any performance tuning, there are probably trade-offs. And having lots of options is usually the best strategy. But I think these improvements suggested for Firefox would be beneficial in almost any scenario. Avoiding I/O seems to be the best strategy on any system newer than, say, a Pentium I, when web browsing. So uncompressing images on the fly in exchange for less memory usage would doubtlessly be a good trade-off.
    • This is a me too [slashdot.org] reply :)) Both Konqi and Opera do fine on low-end hardware; Konqi lags a bit behind Opera in start-up time (because of all the libs it loads) - but it is still much faster than Firefox or other Gecko-based browsers (Epiphany, Galeon). And I don't think the problem is with images. Gecko is damn slow, that is the problem, and so is the interface under Linux/BSD. Firefox works OK in Win 98 (on PII 300 MHz machines with 64 MB RAM) - until it eats up all the RAM, at least - while it is unusable on bsd/l
    • Re:Synchronicity... (Score:4, Informative)

      by brunes69 ( 86786 ) <`gro.daetsriek' `ta' `todhsals'> on Friday November 25, 2005 @05:42PM (#14114987)

      Konqueror - includes all of KDE (ugh)
      Konqueror embedded - lacks maintenance

      These are both false statements.

      For one, you don't need "all of KDE" to build and run Konqueror. All you need is the kdecore libraries, all of which put together have a much smaller footprint both in memory and disk space than Firefox. If you don't believe me, 'apt-get install konqueror' in Debian or any other distro that segments up KDE packages.

      For the second, Konqueror Embedded is built from the *exact same* CVS tree as Konqueror. Any commits to the rendering engine go to both browsers. So it does not 'lack maintenance'; it is very actively developed, just like Konqueror.

    • Hey, you missed out Internet Explorer in your list of testing!
  • by ianezz ( 31449 ) on Friday November 25, 2005 @04:19PM (#14114582) Homepage
    Since the X server can't deal (yet?) with compressed pixmaps by itself, and since we don't want to store the uncompressed pixmaps offscreen in the X server because it takes memory, the only way to do this is to have the application uncompress the pixmaps on the fly and upload the needed bits to the X server each time it needs them.

    That's fine when the X server and the application are on the same host, but it is less than ideal when the X server is on a different host (you really want to send the data just once in this case). It's probably better to have it both ways.

    Possible outcomes:

    • applications could do that by themselves via specific support in the toolkit (i.e. let GTK+, Qt and FLTK deal with it)
    • the X server could transparently (read: internally, with neither the applications nor the toolkits knowing of it) compress pixmaps uploaded to it, uncompressing them when needed (really bad, because the X server would also have to compress them)
    • the X protocol could be extended to deal with compressed pixmaps (the X server would just have to uncompress them, but that's a new extension and the applications/toolkits need to be modified to use it)
    • leave everything as it is, assuming that X servers mainly run on systems providing virtual memory, which is quite cheap (bad for small/embedded devices)
    • I thought of that too. But, in this instance, is it really important?

      Think of when a person would use a browser remotely over an X session. Isn't it more important to have a usable browser on the local machine than to have quick access to a browser via X11?

      And if they do run a remote browser, for something like LTSP, aren't they going to be using a local network, with a relatively fast connection anyway?

      So really you're talking about people who access a browser, remotely via X11, over a slow connection.
  • :-) Decreasing the memory footprint is fine. But for a start I would be happy not to see the message "out of memory" when downloading 30 kB of XUL code for my administration ;-)

    For a start, I'd prefer a bug-free Gecko that consumes a lot of memory. For me it is better to buy new memory than to delay the development of my XUL CMS.
  • by meanfriend ( 704312 ) on Friday November 25, 2005 @04:39PM (#14114670)
    This was a nifty piece of investigation, but it doesn't address the largest cause of Firefox memory usage. Namely, memory is not freed when tabs are closed.

    See:

    https://bugzilla.mozilla.org/show_bug.cgi?id=131456 [mozilla.org]

    Try a test. Fire up a clean FF and note memory usage. Go to somewhere like fark.com and open 50 links in tabs and note mem usage. Close every tab and see if mem usage goes down. It doesn't. Most people visit dozens of pages a day. Hundreds per week. After a while, the memory footprint of FF can grow to epic proportions (i.e. hundreds of megs) even with only a few tabs open because FF cannot release memory of closed tabs. I have to restart FF every week or so because I'm tired of it using 200MB for no good reason.

    It doesn't bother me so much that FF stores uncompressed images for tabs which are active (i.e. open, even if not visible). The article itself mentions a performance hit when storing compressed images. But why the f*** can't it free the memory when I close that tab? The fact that I explicitly closed it should indicate that I don't want it anymore. FF developers have acknowledged the problem but have said that there is no easy fix. Probably a poor design in the underlying architecture, though no one associated with the project would state it that bluntly.

    BTW, this article reminds me of one of the best reasons to use some sort of adblocking software. You save quite a bit of memory when you aren't caching a dozen useless images with every new web page you visit. Especially in light of the above bug, you can significantly slow down the expanding memory footprint with adblocking.
    • by Skapare ( 16644 ) on Friday November 25, 2005 @04:58PM (#14114778) Homepage

      I've switched to opening links in new windows a lot now. In part this is because I want to group a set of tabs together. And it's easier to just close the whole mess by closing that whole window (generally all on one site or about one topic). But it seems FF is not freeing up memory in these cases, either.

      I don't see why a tab or a new window should be different internally, though. It should only be a matter of associating the state of a loaded page with a given display context. The real issue, though, is memory management. Apparently something fundamentally wrong in the browser architecture is preventing that. I highly suspect it is due to over-abstraction and/or the inability of some tools they are using to properly destruct objects that are no longer needed. It does seem that large complex software projects such as this do tend to suffer a lot of complexity issues that result in basic things like freeing memory becoming impossible to do. I don't encounter these problems in my programs, but then, I don't do anything nearly as large as Firefox, nor do I use a team of developers, nor do I use all these abstract tools while ignoring their internal operation implementations. I'll be curious as to the actual, real cause.

    • Firefox simply never releases memory, AFAIK. But it'll reuse the memory.
    • by Anonymous Coward
      Sigh. You have a very poor understanding of memory usage patterns.

      Applications will seldom free memory back to the system, even though it has been freed within the program's memory manager (malloc etc). Most Unix systems give applications a single contiguous chunk of virtual memory that typically only grows rather than shrinks (due to memory fragmentation). Watching the process size is therefore a terribly ineffective way to diagnose a memory leak.

        AC
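
      A small illustration of this point, assuming Linux with glibc's malloc (behaviour varies by allocator): free a pile of small blocks while one block near the top of the heap stays live, and the resident size barely moves because the freed memory only returns to the allocator's free lists.

        // free() returns memory to the allocator, not necessarily to the OS.
        #include <cstdio>
        #include <cstdlib>
        #include <cstring>
        #include <vector>

        static void print_rss(const char* label) {
            std::FILE* f = std::fopen("/proc/self/status", "r");
            char line[256];
            while (f && std::fgets(line, sizeof line, f))
                if (std::strncmp(line, "VmRSS:", 6) == 0)
                    std::printf("%-10s %s", label, line);
            if (f) std::fclose(f);
        }

        int main() {
            print_rss("start");

            std::vector<char*> blocks;
            for (int i = 0; i < 100000; ++i) {          // ~100 MB of 1 KiB blocks
                char* p = static_cast<char*>(std::malloc(1024));
                std::memset(p, 1, 1024);                // touch so pages are resident
                blocks.push_back(p);
            }
            print_rss("allocated");

            char* pin = blocks.back();                  // keep the topmost block live,
            blocks.pop_back();                          // which prevents heap trimming
            for (char* p : blocks) std::free(p);
            print_rss("freed");                         // RSS stays high: the memory sits
                                                        // in glibc's free lists, not the OS
            std::free(pin);
            return 0;
        }
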
    • I wonder if it is related to the resource leak associated with "Bugzilla Bug 246974: CPU usage reaches 99% and will not go down"...
  • One of the things I occasionally do is set up older computers that would otherwise end up in an environmentally dangerous scrap heap so they can be used by people who would otherwise be unable to afford a computer (at least one with all legal software loaded). A lean configuration of Slackware has worked well before. I'm considering using Ubuntu in the future, but it raises the lower limit on memory. And Firefox on any of these has posed problems.

    But the real problem is these older machines are limited in h

  • by Clueless Moron ( 548336 ) on Friday November 25, 2005 @04:46PM (#14114708)
    While it looks like a nifty idea at first glance, this kind of memory optimization is ultimately pointless when you have a nifty demand-paged vmem kernel like Linux.

    Consider: since my box has 1G of memory, I do want the X server to hang on to all those pixmaps, because that makes firefax run fast. The hack would make it waste CPU time re-uncompressing images, whether it's needed or not.

    With the way Firefox works now, if memory does start to run short, well, that's when the kernel will start paging things out based on its clever working set algorithms. If a given pixmap area in the X server hasn't been accessed in quite a while, it'll get swapped out to disk and the memory reclaimed. If the pixmap is accessed later, it'll automatically page back in.

    I don't know about your box, but mine (Athlon XP2000+) can decompress JPEGs at a rate of around only 3MB per second. My disk drives, OTOH, are a hell of a lot faster than that.

    In other words, letting the OS do its job by tossing the images onto swap when necessary strikes me as a much better strategy than constantly sucking up CPU decompressing every image every time it's used just in case the memory might be needed.

    People worry too much about VMEM, IMHO. If I write a program that allocates 1G of memory, but then spins around using only 10k for the next hour, it'll have basically zero impact on the OS. Only ~10k of real RAM is actually getting used.
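
    That is easy to see on Linux with the default overcommit settings: reserve 1 GiB of address space, touch only ~10 kB of it, and VmSize jumps by a gigabyte while VmRSS grows by only a few pages.

      // Address space is cheap; only touched pages consume real RAM.
      #include <cstdio>
      #include <cstring>
      #include <sys/mman.h>

      static void show(const char* label) {
          std::FILE* f = std::fopen("/proc/self/status", "r");
          char line[256];
          while (f && std::fgets(line, sizeof line, f))
              if (!std::strncmp(line, "VmSize:", 7) || !std::strncmp(line, "VmRSS:", 6))
                  std::printf("%-7s %s", label, line);
          if (f) std::fclose(f);
      }

      int main() {
          show("before");
          void* big = mmap(nullptr, 1UL << 30, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);   // reserve 1 GiB
          if (big == MAP_FAILED) return 1;
          std::memset(big, 0, 10 * 1024);   // touch ~10 kB: three 4 KiB pages (~12 kB)
          show("after");
          munmap(big, 1UL << 30);
          return 0;
      }
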

    • Firefox runs on things other than Linux; in particular, isn't everyone supposed to be pushing for the IE-FF switch?
    • Or something perhaps a bit less sexist; that's all I could come up with on short notice. Point is, the operating system is fine and dandy for general-purpose computing, and has lots of tweaks, but at the end of the day the general-purpose OS must generalize its algorithms for ALL programs by definition, whereas the application itself knows best which parts of its working set are necessary and which are trivial.

      Exokernels take this concept to the extreme and let applications decide where to allocate their re
    • Was this admonition aimed at FireFox developers or Microsoft Windows developers?
      People worry too much about VMEM, IMHO. If I write a program that allocates 1G of memory, but then spins around using only 10k for the next hour, it'll have basically zero impact on the OS. Only ~10k of real RAM is actually getting used.

      Not true. Say that after that hour, your program needs to access all of its memory again (because you deiconified it, or whatever...). And that you have done other stuff during the last hour, that caused the rest of your gargantuan memory hog program to be paged out to dis

    • I don't know about your box, but mine (Athlon XP2000+) can decompress JPEGs at a rate of around only 3MB per second. My disk drives, OTOH, are a hell of a lot faster than that.

      No they aren't. Unless you are only reading very large files (from several megabytes up), seek time will kill your transfer rate. And paging memory seems to require lots of seeks.
    • because that makes firefax run fast

      Awesome. I want one.

      People worry too much about VMEM, IMHO. If I write a program that allocates 1G of memory, but then spins around using only 10k for the next hour, it'll have basically zero impact on the OS. Only ~10k of real RAM is actually getting used.

      Actually, it will use 12k on a system with 4k pagesize (such as x86).

      On a more serious note, if you are going to rely on swap to take care of caching, you don't want to store the data in the X server's address space. On a 32-bit machine, that space is 4 GB at absolute theor

  • It is an interesting notion, but I see problems with the methodology.

    The biggest problem is that there is a big hit to user experience. Changing tabs and scrolling faster than your wheel will suck.
  • recompress (Score:3, Insightful)

    by TheSHAD0W ( 258774 ) on Friday November 25, 2005 @05:05PM (#14114828) Homepage
    This is a pretty neat idea, but JPEG compression tends to be rather CPU-intensive for various reasons. I'd recommend either recompressing to a more reasonable format - LZW, for instance - or perhaps going to an intermediate compression also based on JPEG but requiring less CPU to deal with. For instance, decompressing to 32- or 64-bit fixed-length tuplets rather than the LZ or numerical compression used in the files.
    • No compression is necessary, only decompression, which is fast. The image comes compressed, so there's no computation on your part; you decompress it when it needs to be viewed, keep the compressed version in memory, and simply delete the uncompressed version when it's no longer needed.
      Regards,
      Steve
  • Extensions (Score:2, Interesting)

    Fine, compress images, but extensions, to those who use them, are a much bigger problem. I switched to Epiphany because I had a tendency to overload Firefox with extensions (and besides, ephy is faster even than a clean Firefox).

    When you have as few as 5 extensions, memory and CPU usage soar terribly. I realise this won't affect 80% of users the way images would, but it will certainly affect 95% of FF-using Slashdotters.

    Surprisingly flock doesn't appear to suffer as badly from either problem, perhaps they'v

  • by Skapare ( 16644 ) on Friday November 25, 2005 @05:33PM (#14114943) Homepage

    One possible contributor to the memory issue here, and in some other programs, could be the way the memory is allocated. Memory is obtained from the kernel in chunks that are a multiple of the page size. These pages cannot be returned to the kernel unless all usage of the entire page is gone. Memory usage for a typical object tends to be in small pieces. If the pieces allocated for one page (in one tab) are interleaved with pieces allocated for another page (in another tab), then closing one of those tabs, even if the mainline code destructs all objects which correctly free all their underlying memory allocations, does not necessarily result in pages being released back to the kernel.

    So how can memory allocations get scattered around like that? Consider that many objects need to persist as long as the page exists, but many others can be destructed because they are only needed when the page is being loaded or rendered. During loading and rendering, both sets of objects can be created in a mixture. Then the non-persistent ones would be destructed. Because of the order of allocation of underlying memory, the persistent objects tend to be interleaved with the non-persistent ones. That then means most memory pages may have some persistent object lying around, preventing them from being returned to the kernel.

    Solutions to this problem would be difficult. But I also think this effort would be valuable for any and all large projects that can face this kind of memory issue. Some means is needed to control the memory allocation, and in particular to allow grouping of memory into contexts. The first kind of context would be a context for each tab or window being opened by the browser user. That way, if a tab is closed, it should substantially destruct objects grouped together. But this can also be wasteful because the non-persistent objects that do get destructed after rendering is done cannot have their memory recycled by other contexts. So another dimension of context needs to be based on what is persistent vs. non-persistent, so that all non-persistent memory gets grouped together so it can be returned to the kernel as whole pages, which can then be recycled to other contexts (getting the pages from the kernel again).

    This would require a much more involved memory allocation system. Further, it would also require major changes in many of the abstract programming classes used by such large programs ... in ways that tend to be counter to what the abstraction is all about in the first place. Abstraction is supposed to hide details about the underlying implementation so the programmer can/should concentrate on application logic. But this is not really an optimal way to program when dealing with limited resource issues that need to be managed, such as the memory issues seen here. In particular, the various classes themselves won't know whether they are persistent (in the context of what the browser application needs) or not. Many instances of the very same class may be created for both persistent and non-persistent intents, so the class itself could never be designed to make any such assumptions (e.g. think of hiding the details in reverse ... the class does not see the details of the application, e.g. which instances are to be long lived and which are not).

    A concept that may help with this is one that would have to be applied to the whole of such object oriented programming, or even non-OO functions that also could allocate memory for such variant uses (this isn't fundamentally an OO problem ... OO merely exposes it due to the larger application scale that OO enables to be implemented). This concept is to create instance groups that can span laterally across all classes. It would require that each time an instance is created, that it be associated with a particular instance group. Then instead of destructing each instance individually, the group is destructed, which destructs all instances in the group. The implementation of all thi

    • So, you're basically talking about a NeXT/OSX-ish implementation of memory zones and the Objective C retain-release mechanism...



    • The solution to allocation inefficiencies due to fragmentation is well known, and it doesn't require any substantial changes to the programming model: use a coalescing garbage collector. Since you are moving objects, it requires either the use of mprotect() to trap stale references, or locking the object against mutation during the copy, or the assumption that one can 'stop the world' while copying objects.

      As usual, the overhead of copying can be tuned by policy, and mitigated through the use of a standard generational impleme
    • Wouldn't it be simpler just to have Firefox manage its own memory? I think it might be possible without a complete overhaul. (But I'm far from being a Firefox developer.)

      If the problem is that new windows/tabs create a lot of objects that each get interspersed in memory, then why don't they just:

      Overload "new" so that objects from each page are allocated from separate pools of memory. (using separate 4k memory pages for each tab/window wouldn't be a big deal)

      Many times, applications do something similar so
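
      A toy sketch of that suggestion (nothing here reflects Gecko's actual allocator; all names are made up): a per-tab arena hands out objects chunk by chunk, and closing the tab releases the whole pool at once instead of leaving small allocations interleaved with other tabs'.

        // Per-tab pool: overload a placement new that draws from the tab's arena.
        #include <cstddef>
        #include <new>
        #include <string>
        #include <vector>

        class TabArena {
            static constexpr std::size_t kChunk = 64 * 1024;  // toy: objects must fit in a chunk
            std::vector<char*> chunks_;
            std::size_t used_ = kChunk;                       // forces the first chunk

        public:
            void* allocate(std::size_t n) {
                const std::size_t a = alignof(std::max_align_t);
                n = (n + a - 1) & ~(a - 1);                   // keep alignment
                if (used_ + n > kChunk) {
                    chunks_.push_back(new char[kChunk]);
                    used_ = 0;
                }
                void* p = chunks_.back() + used_;
                used_ += n;
                return p;
            }
            ~TabArena() {                                     // tab closed: free whole chunks
                for (char* c : chunks_) delete[] c;
            }
        };

        void* operator new(std::size_t n, TabArena& arena) { return arena.allocate(n); }
        void operator delete(void*, TabArena&) noexcept {}    // used only if a ctor throws

        struct DomNodeish {                                   // stand-in for page objects
            int x = 0, y = 0;
            std::string text;
        };

        int main() {
            TabArena tab;                                     // one arena per tab/window
            DomNodeish* n = new (tab) DomNodeish{};           // allocated from the pool
            n->text = "hello";
            n->~DomNodeish();        // the arena never runs destructors itself
            return 0;                // ~TabArena frees every chunk in one go
        }
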
  • I have of course noticed that Firefox uses quite a lot of memory, but it is the CPU usage that is of a greater concern to me and actually causes me problems.

    I have an Athlon 64 3000+ and a gig of RAM, and I regularly go to use an application that has a genuine need for a lot of CPU, e.g. a game, only to find it runs slowly; upon investigation, Firefox is found to be using 50% or more of my CPU!

    I have investigated further and the problem seems to relate to tabs containing Flash; I have had to get into the
