Reducing Firefox's Memory Use
An anonymous reader writes "Many people have complained about Firefox's memory use. Federico Mena-Quintero has a proposal for reducing the amount of memory used to store images, which, in his proof of concept code, 'reduced the cumulative memory usage... by a factor of 5.5.'."
Easier solution (Score:5, Insightful)
Coming from me, this is sarcasm, but it's a depressingly prevalent real attitude in the industry.
Re:Easier solution (Score:3, Interesting)
There's always a tradeoff between memory use and CPU consumption. If it's simple to do and has enough impact, maybe they should let the users decide?
Or even automatically configure it depending on the user's hardware.
Re:Easier solution (Score:3, Interesting)
At the same time, rendering/decompressing an image might be quite self-contained, and running 50 self-contained instances of that CPU-hogging old JPEG algorithm might parallelize quite easily, so it would scale with the current trends in CPU development (more parallel - just throw more chips/cores/DSPs/GPUs
Re:Easier solution (Score:2)
That being said,
Re:Easier solution (Score:2)
It's not only that. Firefox will periodically freeze in its tracks for about a minute; CPU usage climbs to 70-80% while it's doing god knows what.
Re:Easier solution (Score:2)
Re:Easier solution (Score:2)
Re:Easier solution (Score:1)
That it does. But Linux isn't any better than Windows when running an equivalent browser. Mozilla on Windows and Mozilla on Linux are about the same in almost all regards.
So the important thing is to point out that there are plenty of other advantages in running Linux.
Re:Easier solution (Score:1)
~Anders
Re:Easier solution (Score:2)
Is that because w
Re:Easier solution (Score:2)
I'm not totally sure that I subscribe to your
Re:Easier solution (Score:4, Interesting)
Great for people like us, who know what's going on in the background. Many people never run into the Firefox memory issues because they tend to single-task on their machines, and switch their machines off regularly. The solution should not be a manual thing; it should be solvable with clever programming (perhaps idle detection based on a lack of mouse movement?). It's hard enough to train users to use computers to do their jobs without adding extra load to their fragile little minds. M
Re:Easier solution (Score:2)
Agreed. But the more complex you make it, the less likely it is to get done. I've seen threads on Bugzilla go on for years while people argue the nuances of an implementation. Better a manual process than no process at all.
Re:Easier solution (Score:1)
Glad I switched back to Mozilla after Firebird...
Re:Easier solution (Score:3, Insightful)
What?
I could believe HD, but RAM sizes have not kept up at all. You might have gotten a system with 128 MB, a 20 GB hard disk, and an 800 MHz CPU a few years ago. These days you generally get a system with something like 512 MB and 200 GB of storage running at 3 GHz. Also, RAM prices have not dropped all that much; 512 MB of DDR2 is over $200.
Yes, CPU speeds have stagnated in the last year or so, only growin
Re:Easier solution (Score:2)
I managed to score an off-lease computer for dirt cheap decked out to 4GB of RAM, then realized that I have absolutely no need or use for that much memory. On a Windows 2000 computer, I rarely use more than 500MB.
usefulness (Score:1)
Re:Easier solution (Score:1)
I see now that that is the highest-end stuff, but what you linked to was 256 MB. Looks like a more realistic price is somewhere in the middle, $80 or $90.
Re:Easier solution (Score:2)
It's actually cheaper than DDR [newegg.com] or SDRAM [newegg.com].
Even 200-pin DDR2 SODIMMs are reasonably priced. [newegg.com]
Re:Easier solution (Score:2)
Who modded this crap up? (Score:1)
I recently upgraded my laptop with a 1GB DDR2 SODIMM module, total cost: $99.
$200? Nice joke buddy but no cigar!
Also, maybe pure MHz hasn't gone up more than 15-20%, but pure 'speed', or performance/$, has at least tripled. I got my AMD X2 4200+ cheaper than a P4/2533 only a few years ago, and the difference in practice in LAME/Xvid/games is on a completely different scale.
Re:Who modded this crap up? (Score:1)
Re:Easier solution (Score:3, Informative)
Re:Easier solution (Score:2)
Re:Easier solution (Score:2)
Re:Easier solution (Score:2)
Easy solution: Create a file in /tmp, memory-map it, and satisfy all requests for cache memory from the memory-mapped region. Then it should compete on equal footing with all other filesystem I/O, and be pushed out of disk cache and pulled back in as appropri
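Something along these lines, maybe (a rough Linux/POSIX sketch with made-up sizes and minimal error handling, not anything Firefox actually does):

```cpp
// Sketch: back the image cache with a memory-mapped file in /tmp so the
// kernel's page cache decides when the data stays in RAM and when it gets
// written back and evicted, just like any other file I/O.
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <sys/mman.h>
#include <unistd.h>

int main() {
    const size_t cache_size = 64 * 1024 * 1024;  // 64 MB of backing store

    // Create an unlinked temp file so it disappears when the process exits.
    char path[] = "/tmp/imgcacheXXXXXX";
    int fd = mkstemp(path);
    if (fd < 0) { perror("mkstemp"); return 1; }
    unlink(path);
    if (ftruncate(fd, cache_size) != 0) { perror("ftruncate"); return 1; }

    // Map the file; cold pages of this region can be pushed out under
    // memory pressure and paged back in on access.
    void* cache = mmap(nullptr, cache_size, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
    if (cache == MAP_FAILED) { perror("mmap"); return 1; }

    // Pretend to store a decoded image at some offset in the cache.
    unsigned char* slot = static_cast<unsigned char*>(cache);
    memset(slot, 0xAB, 1024 * 768 * 4);  // fake 1024x768 RGBA bitmap

    printf("cache mapped at %p, first byte = 0x%02x\n", cache, slot[0]);
    munmap(cache, cache_size);
    close(fd);
    return 0;
}
```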
Re:Easier solution (Score:2)
Re:Easier solution (Score:5, Informative)
If I am opening and closing a lot of image-heavy tabs, after a while, my firefox instance is sucking up 800MB of system memory, and the ONLY way to free it is to restart firefox.
I don't care about firefox's memory usage with compressed versus uncompressed. If I'll get more speed with 90MB of uncompressed images, go for it. What I do have a problem with is how it doesn't bother to remove raw images that are no longer needed. Essentially, it is a really bad memory leak that they haven't fixed for ages.
As for reducing actual memory usage, a hybrid solution is best. At the very least, all images on other tabs should remain compressed, and then be decompressed when switching to that tab, going back to compressed images for the old tab (disk-cache them, or keep both compressed and uncompressed copies in memory).
In addition, you can probably do smart caching of images on the current tab. As the article author mentioned, keep uncompressed copies only of images near the current viewport. Another solution might be to store everything compressed, even in the current tab, and modify the rendering engine so that images are drawn asynchronously. A 100 ms delay while scrolling will cause noticeable hitching, but if you draw the rest of the page and fill the image in 100 ms later, the user will have a much smoother experience. They can keep scrolling while the image is loaded in.
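Roughly what I have in mind, as a toy sketch (invented names, stubbed-out decoder, nothing like Gecko's real image code): keep the compressed bytes for every image, and hold only a small, bounded set of recently painted decoded bitmaps.

```cpp
#include <cstdint>
#include <list>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

struct Bitmap { std::vector<uint32_t> pixels; };  // decoded RGBA

// Stand-in for a real JPEG/PNG decoder.
static Bitmap decode(const std::vector<uint8_t>& compressed) {
    return Bitmap{std::vector<uint32_t>(compressed.size(), 0xFFFFFFFF)};
}

class ImageCache {
public:
    explicit ImageCache(size_t max_decoded) : max_decoded_(max_decoded) {}

    void add(const std::string& url, std::vector<uint8_t> compressed) {
        compressed_[url] = std::move(compressed);  // compressed copy always kept
    }

    // Called when an image is about to be painted (e.g. near the viewport).
    const Bitmap& get_decoded(const std::string& url) {
        auto it = decoded_.find(url);
        if (it != decoded_.end()) {
            lru_.splice(lru_.begin(), lru_, it->second.second);  // mark as recent
            return it->second.first;
        }
        // Decode on demand; evict the least recently painted bitmap if full.
        if (!lru_.empty() && decoded_.size() >= max_decoded_) {
            const std::string victim = lru_.back();
            lru_.pop_back();
            decoded_.erase(victim);
        }
        lru_.push_front(url);
        auto res = decoded_.emplace(url,
            std::make_pair(decode(compressed_.at(url)), lru_.begin()));
        return res.first->second.first;
    }

private:
    size_t max_decoded_;
    std::unordered_map<std::string, std::vector<uint8_t>> compressed_;
    std::unordered_map<std::string,
        std::pair<Bitmap, std::list<std::string>::iterator>> decoded_;
    std::list<std::string> lru_;  // front = most recently painted
};

int main() {
    ImageCache cache(2);  // keep at most two decoded bitmaps around
    cache.add("a.jpg", std::vector<uint8_t>(100));
    cache.add("b.jpg", std::vector<uint8_t>(200));
    cache.get_decoded("a.jpg");  // decoded on first paint
    cache.get_decoded("b.jpg");
    return 0;
}
```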
Re:Easier solution (Score:4, Funny)
Re:Easier solution (Score:2)
Re:Easier solution (Score:3, Funny)
Re:Easier solution (Score:3, Interesting)
> needed. Essentially, it is a really bad memory leak that they haven't fixed for ages.
This is *partly* due to the way memory allocation and freeing work at the system call level. In a nutshell, memory that you free does not actually become free for other programs to use until your process exits. (As bad as this sounds, it's preferable to the situation wherein the system doesn't know what process owns the
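You can see the effect yourself with a small Linux-only experiment (assumes glibc malloc and /proc; the numbers are made up): allocate a pile of small blocks, free them all, and watch the resident size the kernel reports stay high because the allocator keeps the pages around for reuse.

```cpp
#include <cstdio>
#include <cstdlib>
#include <cstring>

// Print the VmRSS line from /proc/self/status (Linux-specific).
static void print_rss(const char* label) {
    FILE* f = fopen("/proc/self/status", "r");
    if (!f) return;
    char line[256];
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "VmRSS:", 6) == 0) {
            printf("%s %s", label, line);
            break;
        }
    }
    fclose(f);
}

int main() {
    print_rss("at start:   ");

    // Allocate and touch ~100 MB in small pieces, as a browser might.
    const int n = 100000;
    char** blocks = static_cast<char**>(malloc(n * sizeof(char*)));
    for (int i = 0; i < n; ++i) {
        blocks[i] = static_cast<char*>(malloc(1024));
        memset(blocks[i], 1, 1024);
    }
    print_rss("after alloc:");

    for (int i = 0; i < n; ++i) free(blocks[i]);
    free(blocks);
    print_rss("after free: ");  // usually still high: the pages stay with the process
    return 0;
}
```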
Re:Easier solution (Score:2)
As mentioned in other posts, it may not be a memory leak if it can still reference the unused bitmaps, but it doesn't seem to ever remove the old bitmaps from the cache.
My solution to this so far has been SessionSaver; I can just terminate Firefox and re-open it. All my tabs and sessions are exactly like before I closed it, except memory usage is back to normal.
Re:Easier solution (Score:2)
Re:Easier solution (Score:2)
Re:Easier solution (Score:2)
They don't have to, though. A VM with a relocating garbage collector will have a few chunks of memory, with the active data located at the beginning of each chunk right after garbage collection. It is quite possible to resize these chunks with "mmap" and "mremap" s
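For example, something like this (a Linux-only sketch with made-up sizes) can hand the tail of a compacted chunk back to the kernel:

```cpp
#define _GNU_SOURCE  // for mremap()
#include <cstdio>
#include <sys/mman.h>

int main() {
    const size_t old_size = 64 * 1024 * 1024;  // heap chunk before collection
    const size_t new_size = 16 * 1024 * 1024;  // live data after compaction

    void* chunk = mmap(nullptr, old_size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (chunk == MAP_FAILED) { perror("mmap"); return 1; }

    // ... the collector would move live objects to the first 16 MB here ...

    // Shrinking in place releases the trailing 48 MB back to the kernel.
    void* shrunk = mremap(chunk, old_size, new_size, 0);
    if (shrunk == MAP_FAILED) { perror("mremap"); return 1; }

    printf("chunk shrunk from %zu MB to %zu MB\n",
           old_size >> 20, new_size >> 20);
    munmap(shrunk, new_size);
    return 0;
}
```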
Re:Easier solution (Score:2)
Are you sure? Leaked memory is memory which is still allocated to the leaking process, so you can s
Re:Easier solution (Score:1)
Only if the operating system keeps track of which process it was that allocated the memory. In general, reasonable modern operating systems do this, of course, but there have been systems that didn't. Another thing that happened on those systems was that if one program had pointer errors, it could end up corrupting the memory of another process, and the OS wouldn't even know, much less prevent it. (That's also possible on systems
Its all about cost (Score:2)
Say you have 500 customers. If they each have to get 512 MB of extra RAM to run your software, the cost of that would run about 512 * $50 = $25,000, give or take.
Now, say instead, you get your development team to spend 3-4 weeks chasing down memory issues and optimizing the code to be lean and mean. Even if the team is very small (10 people), that just cost you $40,000 to $50,000 in salaries, not to mention the lost time they could be working on something else.
S
Re:Its all about cost (Score:3, Insightful)
Care to do the math again?
It's an important point, though. Sometimes programmer time isn't used terribly efficiently.
Re:Its all about cost (Score:2)
I keed, I keed. (running 1.0 something on win2k athlon xp 1800 w 1 Gig RAM. It never crashes or hangs.)
Re:Its all about cost (Score:3, Interesting)
Re:Easier solution (Score:2, Insightful)
Good idea. I don't use Firefox, but that approach will ensure that next time I think about switching browsers, I'll have one less option to consider.
Re:Easier solution (Score:2)
Seriously, the tendency is to think, "Well, our target market has at least 256 MB of memory and a 1 GHz CPU, so there's plenty of room and no need to optimize." It's easy to start thinking that way, and I suppose exponentially increasing memory demands are what drove manufacturers to keep lowering the cost per bit. The problem is that once multitasking operating systems became the rule, an application developer could no longer depend upon having all of a machine's r
Re:Easier solution (Score:1)
If the memory usage is low, then I can run more applications with the same spec!
Jerkiness (Score:3, Informative)
Re:Jerkiness (Score:2)
X extension (Score:4, Interesting)
I wonder if it might be interesting and worthwhile to have an X extension to store images compressed on the server? That way there's a lot less X traffic, and potentially a lot more applications could make use of it.
The only condition is that you don't need to decompress in Moz and recompress to send to the X server, but just pass along the compressed data (there are some security implications with that, though I guess they could be dealt with).
Re:X extension (Score:3, Interesting)
If it's done on the server side, it's probably also easier to take advantage of fancy graphics hardware. Imagine a graphics card that is able to decompress JPEGs on the fly, for instance (considering the pixel shaders in current 3D hardware, it's not too far-fetched).
Re:X extension (Score:4, Interesting)
Re:X extension (Score:2)
Re:X extension (Score:2)
I'd rather let Nomachine NX [gnome.org] deal with the optimizations of X protocol transfers.
Re:X extension (Score:4, Interesting)
Besides, I'm not sure that storing the images compressed on the client side is going to work as well as the author hopes. In fact, it would increase the RSS of the firefox app, making people think that FF is even more bloated, even though it reduced memory usage overall. How many people have even heard of xrestop (I hadn't until I read the article)?
Re:X extension (Score:2)
So the current tab has its images on the X server, the hidden tabs have their images on the X server, but if you don't view them for 10 minutes they are freed on the X server.
This way there is equal performance for viewing pages, and a slight lag when switching to a tab you haven't viewed in a while. That lag is so small it might as well be nothing. There will be a noticeable lag when you ar
Other good side effect (Score:1)
It may or may not get cached; I've definitely noticed situations where it does not get cached and must be downloaded completely again. There's a bug report on this behavior, but I think it's closed as not a bug.
It's the GDI objects (Score:5, Interesting)
Somewhere after 5000 of them are in use, Windows slows to a crawl and dies no matter how much memory you have, and with enough tabs and windows open Firefox will be using 4000+ of them all by itself.
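If you want to check this on your own box, here's a minimal Windows-only sketch using the Win32 GetGuiResources() call (link against user32.lib). It just prints the counts for the current process; to inspect another process such as Firefox you'd pass a handle opened with OpenProcess(PROCESS_QUERY_INFORMATION, ...) instead.

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    HANDLE self = GetCurrentProcess();
    DWORD gdi  = GetGuiResources(self, GR_GDIOBJECTS);   // GDI handle count
    DWORD user = GetGuiResources(self, GR_USEROBJECTS);  // USER handle count
    printf("GDI objects:  %lu\n", static_cast<unsigned long>(gdi));
    printf("USER objects: %lu\n", static_cast<unsigned long>(user));
    return 0;
}
```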
Re:It's the GDI objects (Score:2)
Re:It's the GDI objects (Score:2)
Firefox using 256+ MB of memory I can deal with, but using 5000 GDI objects is ridiculous; there are single pages that use 400+.
Re:It's the GDI objects (Score:1, Insightful)
The GDI problems in Win9x were never fixed outright, just reduced. And the problems in Windows 2000 and XP are completely different, since they derive from NT.
http://support.microsoft.com/default.aspx?kbid=126962 [microsoft.com]
I've had older versions of ZoneAlarm hit this limit:
http://support.microsoft.com/default.aspx?kbid=326591 [microsoft.com]
Synchronicity... (Score:5, Interesting)
I'm testing out browsers for use on some old machines as web kiosks. Basically, my choices are:
These machines (P1), and lots of machines like them, pretty much max out at about 64 megs of RAM. I could probably find more RAM, but it'd be costly, and there are usually hardware compatibility problems.
Although I'm leaning towards Opera at the moment, I was using Konqueror for a while. Linux does a great job of swapping, and Konqueror is quite snappy, so even with low memory it's a viable option. But, with all the libraries that Konqueror requires, 64 megs is kind of pushing it.
And there is a decided trend in hardware towards less memory and faster processors. It's not uncommon to find Pentium IIIs with only 128 megs of RAM. Unfortunately, many open source programs are written without limited-memory machines in mind.
It's kind of humbling to think that, as few as five years ago, a Pentium I with 64 megs of RAM would run an entire OS and web browser without so much as touching swap space. Today, you have to use apps designed for embedded machines to run in 64 megs of RAM, and you're lucky if you can run more than one app at a time.
From my testing, Firefox is barely outside the range of viable options for a machine with 64 megs of RAM. But as with any performance tuning, there are probably trade-offs. And having lots of options is usually the best strategy. But I think these improvements suggested for Firefox would be beneficial in almost any scenario. Avoiding I/O seems to be the best strategy on any system newer than, say, a Pentium I, when web browsing. So uncompressing images on the fly in exchange for less memory usage would doubtlessly be a good trade-off.
Re:Synchronicity... (Score:2)
Re:Synchronicity... (Score:4, Informative)
Konqueror - includes all of KDE (ugh)
Konqueror embedded - lacks maintenance
These are both false statements.
For one, you don't need "all of KDE" to build and run Konqueror. All you need is the kdecore libraries, all of which put together have a much smaller footprint, both in memory and disk space, than Firefox. If you don't believe me, 'apt-get install konqueror' on Debian or any other distro that splits KDE into separate packages.
For the second, Konqueror embedded is built from the *exact same* CVS tree as Konqueror. Any commits to the rendering engine go to both browsers. So it does not 'lack maintenance'; it is very actively developed, just like Konqueror.
Re:Synchronicity... (Score:2)
Trade network traffic and CPU for memory (Score:3, Interesting)
That's fine when the X server and the application are on the same host, but it is less than ideal when the X server is on a different host (you really want to send the data just once in this case). It's probably better to have it both ways.
Possible outcomes:
Re:Trade network traffic and CPU for memory (Score:1)
Think of when a person would use a browser remotely over an X session. Isn't it more important to have a usable browser on the local machine than to have quick access to a browser via X11?
And if they do run a remote browser, for something like LTSP, aren't they going to be using a local network, with a relatively fast connection anyway?
So really you're talking about people who access a browser, remotely via X11, over a slow connection.
Out of memory (Score:1)
For a start, I prefer a bug-free Gecko that consumes a lot of memory. For me it is better to buy new memory than to delay the development of my XUL CMS.
Interesting, but doesnt solve the biggest problem (Score:5, Interesting)
See:
https://bugzilla.mozilla.org/show_bug.cgi?id=1314
Try a test. Fire up a clean FF and note memory usage. Go to somewhere like fark.com, open 50 links in tabs, and note memory usage. Close every tab and see if memory usage goes down. It doesn't. Most people visit dozens of pages a day, hundreds per week. After a while, the memory footprint of FF can grow to epic proportions (i.e. hundreds of megs) even with only a few tabs open, because FF cannot release the memory of closed tabs. I have to restart FF every week or so because I'm tired of it using 200MB for no good reason.
It doesn't bother me so much that FF stores uncompressed images for tabs which are active (i.e. open, even if not visible). The article itself mentions a performance hit when storing compressed images. But why the f*** can't it free the memory when I close that tab? The fact that I explicitly closed it should indicate that I don't want it anymore. FF developers have acknowledged the problem but have said that there is no easy fix. Probably a poor design in the underlying architecture, though no one associated with the project would state it that bluntly.
BTW, this article reminds me of one of the best reasons to use some sort of ad-blocking software. You save quite a bit of memory when you aren't caching a dozen useless images with every new web page you visit. Especially in light of the above bug, you can significantly slow down the expanding memory footprint with ad blocking.
Re:Interesting, but doesnt solve the biggest probl (Score:4, Insightful)
I've switched to opening links in new windows a lot now. In part this is because I want to group a set of tabs together, and it's easier to just close the whole mess by closing that whole window (generally all on one site or about one topic). But it seems FF is not freeing up memory in these cases, either.
I don't see why a tab or a new window should be different internally, though. It should only be a matter of associating the state of a loaded page with a given display context. The real issue, though, is the memory management. Apparently something fundamentally wrong in the browser architecture is preventing that memory from being released. I highly suspect it is due to over-abstraction and/or the inability of some tools they are using to properly destruct objects that are no longer needed. It does seem that large, complex software projects such as this tend to suffer complexity issues that make basic things like freeing memory nearly impossible. I don't encounter these problems in my programs, but then, I don't write anything nearly as large as Firefox, nor do I use a team of developers, nor do I use all these abstract tools while ignoring their internal implementations. I'll be curious as to the actual, real cause.
Re:Interesting, but doesnt solve the biggest probl (Score:2)
Re:Interesting, but doesnt solve the biggest probl (Score:1)
Re:Interesting, but doesnt solve the biggest probl (Score:1, Informative)
Applications will seldom free memory back to the system, even when it has been freed within the program's memory manager (malloc etc.). Most Unix systems give applications a single contiguous chunk of virtual memory that typically only grows rather than shrinks (due to memory fragmentation). That makes watching a process's memory usage a terribly ineffective way to diagnose a memory leak.
AC
Re:Interesting, but doesnt solve the biggest probl (Score:2)
Which browser for older machines? (Score:2)
One of the things I occasionally do is set up older computers that would otherwise end up in an environmentally dangerous scrap heap, so they can be used by people who would otherwise be unable to afford a computer (at least one with all-legal software loaded). A lean configuration of Slackware has worked well before. I'm considering using Ubuntu in the future, but it raises the lower limit on memory. And Firefox on any of these has posed problems.
But the real problem is these older machines are limited in h
Re:Which browser for older machines? (Score:1)
Re:Which browser for older machines? (Score:1)
Don't try to play Operating System (Score:5, Insightful)
Consider: since my box has 1 GB of memory, I do want the X server to hang on to all those pixmaps, because that makes Firefox run fast. The hack would make it waste CPU time re-uncompressing images, whether it's needed or not.
With the way Firefox works now, if memory does start to run short, well, that's when the kernel will start paging things out based on its clever working set algorithms. If a given pixmap area in the X server hasn't been accessed in quite a while, it'll get swapped out to disk and the memory reclaimed. If the pixmap is accessed later, it'll automatically page back in.
I don't know about your box, but mine (Athlon XP 2000+) can decompress JPEGs at a rate of only around 3 MB per second. My disk drives, OTOH, are a hell of a lot faster than that.
In other words, letting the OS do its job by tossing the images onto swap when necessary strikes me as a much better strategy than constantly sucking up CPU decompressing every image every time it's used just in case the memory might be needed.
People worry too much about VMEM, IMHO. If I write a program that allocates 1 GB of memory, but then spins around using only 10k of it for the next hour, it'll have basically zero impact on the OS. Only ~10k of real RAM is actually getting used.
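A quick Linux-only demo of that claim (assumes /proc is available; mmap is used so the allocation is lazily backed by the kernel):

```cpp
#include <cstdio>
#include <cstring>
#include <sys/mman.h>

// Print the VmRSS line from /proc/self/status (Linux-specific).
static void print_rss(const char* label) {
    FILE* f = fopen("/proc/self/status", "r");
    if (!f) return;
    char line[256];
    while (fgets(line, sizeof line, f))
        if (strncmp(line, "VmRSS:", 6) == 0) { printf("%s %s", label, line); break; }
    fclose(f);
}

int main() {
    const size_t one_gb = 1024UL * 1024 * 1024;
    void* p = mmap(nullptr, one_gb, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    char* big = static_cast<char*>(p);
    print_rss("after reserving 1 GB:");

    memset(big, 1, 10 * 1024);  // touch only ~10 KB of it
    print_rss("after touching 10 KB:");  // resident size barely moves

    munmap(big, one_gb);
    return 0;
}
```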
Re:Don't try to play Operating System (Score:2)
Isn't everyone supposed to be pushing for an IE-to-FF switch?
Never send a boy in to do a man's job (Score:2)
Exokernels take this concept to the extreme and let applications decide where to allocate their re
Re:Don't try to play Operating System (Score:1)
Re:Don't try to play Operating System (Score:3, Interesting)
Not true. Say that after that hour, your program needs to access all of its memory again (because you deiconified it, or whatever...), and that you have done other stuff during the last hour that caused the rest of your gargantuan memory-hog program to be paged out to dis
Re:Don't try to play Operating System (Score:2)
No they aren't. Unless you are only reading very large files (from several megabytes up), seek time will kill your transfer rate. And paging memory seems to require lots of seeks.
Re:Don't try to play Operating System (Score:2)
Awesome. I want one.
Re:Don't try to play Operating System (Score:2)
Actually, it will use 12k on a system with 4k pagesize (such as x86).
On a more serious note, if you are going to rely on swap to take care of caching, you don't want to store the data in the X server's address space. On a 32-bit machine, that space is 4 GB at absolute theor
A bit basic (Score:2)
The biggest problem is that there is a big hit to user experience. Changing tabs and scrolling faster than your wheel will suck.
This is not either/or, people (Score:4, Interesting)
recompress (Score:3, Insightful)
Re:recompress (Score:1)
Regards,
Steve
Extensions (Score:2, Interesting)
When you have as few as 5 extensions, memory and CPU usage soar terribly. I realise this won't affect 80% of users like images would, but it will certainly affect 95% of FF-using Slashdotters.
Surprisingly flock doesn't appear to suffer as badly from either problem, perhaps they'v
Scattered memory allocations (Score:5, Interesting)
One possible contributor to the memory issue here, and in some other programs, could be the way memory is allocated. Memory is obtained from the kernel in chunks that are a multiple of the page size. These pages cannot be returned to the kernel unless all usage of the entire page is gone. Memory usage for a typical object tends to be in small pieces. If the pieces allocated for one page (in one tab) are interleaved with pieces allocated for another page (in another tab), then closing one of those tabs, even if the mainline code destructs all objects and those objects correctly free all their underlying memory allocations, does not necessarily result in pages being released back to the kernel.
So how can memory allocations get scattered around like that? Consider that many objects need to persist as long as the page exists, but many others can be destructed because they are only needed when the page is being loaded or rendered. During loading and rendering, both sets of objects can be created in a mixture. Then the non-persistent ones would be destructed. Because of the order of allocation of underlying memory, the persistent objects tend to be interleaved with the non-persistent ones. That then means most pages may have some persistent object lying around, preventing it from being returned to the kernel.
Solutions to this problem would be difficult. But I also think the effort would be valuable for any and all large projects that face this kind of memory issue. Some means is needed to control memory allocation, and in particular to allow grouping of memory into contexts. The first kind of context would be one for each tab or window the browser user opens. That way, if a tab is closed, its objects are destructed as a group. But this can also be wasteful, because the non-persistent objects that do get destructed after rendering is done cannot have their memory recycled by other contexts. So another dimension of context needs to be persistent vs. non-persistent, so that all non-persistent memory gets grouped together and can be returned to the kernel as whole pages, which can then be recycled to other contexts (getting the pages from the kernel again).
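A toy version of the per-tab context idea might look something like this (invented names, no error handling, Linux mmap for the backing pages, and it assumes no single allocation exceeds one block):

```cpp
#include <cstddef>
#include <cstdio>
#include <sys/mman.h>
#include <vector>

class TabArena {
public:
    // Allocate from this tab's own blocks; never mixed with other tabs.
    void* alloc(size_t n) {
        n = (n + 15) & ~size_t(15);                 // keep 16-byte alignment
        if (blocks_.empty() || used_ + n > kBlockSize) {
            void* p = mmap(nullptr, kBlockSize, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            blocks_.push_back(p);
            used_ = 0;
        }
        void* out = static_cast<char*>(blocks_.back()) + used_;
        used_ += n;
        return out;
    }

    // Closing the tab releases every block in one sweep.
    ~TabArena() {
        for (void* p : blocks_) munmap(p, kBlockSize);
    }

private:
    static constexpr size_t kBlockSize = 64 * 1024;  // one 64 KB arena block
    std::vector<void*> blocks_;
    size_t used_ = 0;
};

int main() {
    TabArena tab1, tab2;                  // two open tabs
    void* a = tab1.alloc(4000);           // objects for tab 1
    void* b = tab2.alloc(4000);           // objects for tab 2, separate pages
    printf("tab1 object at %p, tab2 object at %p\n", a, b);
    // When tab1 goes away, all of its pages go back to the kernel at once,
    // regardless of how its allocations were interleaved in time with tab2's.
    return 0;
}
```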
This would require a much more involved memory allocation system. Further, it would also require major changes in many of the abstract programming classes used by such large programs ... in ways that tend to be counter to what the abstraction is all about in the first place. Abstraction is supposed to hide details about the underlying implementation so the programmer can/should concentrate on application logic. But this is not really an optimal way to program when dealing with limited resource issues that need to be managed, such as the memory issues seen here. In particular, the various classes themselves won't know whether they are persistent (in the context of what the browser application needs) or not. Many instances of the very same class may be created for both persistent and non-persistent intents, so the class itself could never be designed to make any such assumptions (e.g. think of hiding the details in reverse ... the class does not see the details of the application, e.g. which instances are to be long lived and which are not).
A concept that may help with this is one that would have to be applied to the whole of such object-oriented programming, or even to non-OO functions that also allocate memory for such variant uses (this isn't fundamentally an OO problem ... OO merely exposes it due to the larger application scale that OO enables). This concept is to create instance groups that can span laterally across all classes. It would require that each time an instance is created, it be associated with a particular instance group. Then, instead of destructing each instance individually, the group is destructed, which destructs all instances in the group. The implementation of all thi
Re:Scattered memory allocations (Score:1)
So, you're basically talking about a NeXT/OSX-ish implementation of memory zones and the Objective C retain-release mechanism...
Re:Scattered memory allocations (Score:2)
It doesn't require any substantial changes to the programming model. Use a coalescing garbage collector. Since you are moving objects, it requires either the use of mprotect() to trap stale references, locking the object against mutation during the copy, or the assumption that one can 'stop the world' while copying objects. As usual, the overhead of copying can be tuned by policy, and mitigated through the use of a standard generational impleme
Re:Scattered memory allocations (Score:1)
If the problem is that new windows and tabs create a lot of objects that each get interspersed in memory, then why don't they just:
Overload "new" so that objects from each page are allocated from separate pools of memory (using separate 4k memory pages for each tab/window wouldn't be a big deal).
Many times, applications do something similar so
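Something like this, perhaps (hypothetical classes, a deliberately dumb pool, and a single global "current pool" pointer just to keep the sketch short):

```cpp
#include <cstddef>
#include <cstdio>
#include <new>

struct TabPool {
    static constexpr size_t kSize = 1 << 20;  // 1 MB per tab, for the sketch
    char buf[kSize];
    size_t used = 0;
};

static TabPool* g_current_pool = nullptr;     // set while building a given tab

struct DomNode {                              // hypothetical per-page object
    int data[16];

    // Class-level operator new carves memory out of the current tab's pool.
    static void* operator new(size_t n) {
        TabPool* p = g_current_pool;
        if (!p || p->used + n > TabPool::kSize) throw std::bad_alloc();
        void* out = p->buf + p->used;
        p->used += (n + 15) & ~size_t(15);    // keep 16-byte alignment
        return out;
    }
    // Individual deletes are no-ops; the whole pool is dropped with the tab.
    static void operator delete(void*) {}
};

int main() {
    TabPool* tab = new TabPool();             // pool owned by one tab
    g_current_pool = tab;
    DomNode* a = new DomNode();               // comes out of the tab's pool
    DomNode* b = new DomNode();
    printf("nodes at %p and %p share one pool\n", (void*)a, (void*)b);
    delete a; delete b;                       // no-ops
    delete tab;                               // closing the tab frees everything
    return 0;
}
```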
CPU Usage (Score:1)
I have of course noticed that Firefox uses quite a lot of memory, but it is the CPU usage that is of a greater concern to me and actually causes me problems.
I have an Athlon 64 3000+ and a gig of RAM, and I regularly go to use an application that has a genuine need for a lot of CPU, e.g. a game, only to find it running slowly; upon investigation, Firefox is found to be using 50% or more of my CPU!
I have investigated further and the problem seems to relate to tabs containing Flash; I have had to get into the
Re:CPU Usage (Score:1)
Yes, I had thought about the fact I'm not actually making use of the power-saving technology of my CPU due to this problem! Luckily the PC is fairly quick so I don't get a DoS-style slowdown, just a noticeable and annoying one.
I had been looking at various extensions myself to block the loading of SWF files; thanks for that piece of info. Sounds like Adblock is ideal if it can provide me with a hotkey to turn it back on when I am, e.g., viewing a Flash animation or game. Flash advertisements on the other ha
Re:CPU Usage (Score:2)
Re:CPU Usage (Score:1)