In IE8 and Chrome, Processes Are the New Threads
SenFo writes "To many of the people who downloaded Google Chrome last week, it was a surprise to observe that each opened tab runs in a separate process rather than a separate thread. Scott Hanselman, Lead Program Manager at Microsoft, discusses some of the benefits of running in separate processes as opposed to separate threads. A quote: 'Ah! But they're slow! They're slow to start up, and they are slow to communicate between, right? Well, kind of, not really anymore.'"
Have to watch what I say (Score:5, Funny)
I think they took that as architectural advice.
Re: (Score:2, Funny)
Re:Have to watch what I say (Score:4, Funny)
And here I was hoping I'd see a Spoonerism somewhere.
Re:Have to watch what I say (Score:5, Funny)
Re: (Score:3, Funny)
Processes (Score:5, Interesting)
Increases in computing power have made the perceived sluggishness of running multiple processes insignificant -- if Chrome won't run smoothly on that Pentium II of yours, then perhaps you should install command-line Linux anyway!
Regarding Chrome, check out this [slashdot.org] response to my comment I linked to above, posted on June 30. At the time, I thought it was just an extension of a good idea but since his comment was posted earlier than Chrome was released I'm beginning to wonder if that fellow had any inside knowledge...
[/tinfoil hat]
Re:Processes (Score:5, Insightful)
Well of course. It isn't even new in the browser world. In fact it's where we started.
The earliest browsers required you to run a new instance for each concurrently opened site. This presented onerous resource demands, so they made it more efficient by having multiple window instances run under one process, and with the advent of tabs that naturally carried over to tabs sharing one process.
This is so much ado about nothing. I can count on one hand the number of times I've had a problem with Firefox that would have been solved by it being in its own process.
Every bandwagoning technical lightweight is now stomping their feet that Firefox needs this yesterday, but really it is pretty low on the list of things that make a real improvement in people's lives. In fact I would go so far as to call it a gimmick. Presuming that the browser stops sites from doing stupid stuff (unlike IE, which will let a site kill the whole browser just by going into a perpetual loop in JavaScript), and that plug-ins aren't written by idiots, this is completely unnecessary.
Chrome's fast JavaScript engine is a real story, and Firefox's TraceMonkey doing it one better is another. Those are real improvements that really do matter.
Re: (Score:2, Insightful)
> I can count on one hand the number of times I've had a problem with Firefox that would have been solved by it being in its own process.
I restart firefox roughly twice per hour when developing my javascript application. Having 10 concurrent tabs executing heavy javascript/ajax generally hangs the browser.
Of course, extensions (in particular Firebug) are probably responsible for that, and it is painful but not a showstopper. A process-per-tab model would probably be better for my usage...
Re:Processes (Score:4, Funny)
Well there's your problem...
Re:Processes (Score:5, Insightful)
Re:Processes (Score:4, Informative)
A few onclick events and ajax calls do not make up an application. Something that requires heavy debugging does, and chances are you're reinventing the wheel if that's the case in Javascript (see: jQuery, MooTools, etc).
Re:Processes (Score:4, Funny)
Wait...did you expect someone replying to you on slashdot to know what the hell they were talking about?
Well there's your problem...
Um, duh (Score:2, Funny)
Every bandwagoner, technical lightweight is now stomping their feet that Firefox needs to get on this yesterday
Of course they want it yesterday... that is why they aren't as smart as you and I. They think you can go back in time.
Smart people like those reading this comment want it *today* or perhaps tomorrow morning. The honor roll students understand that today or even tomorrow might not be possible and instead are willing to wait a few days. The Mensa crowd and those working on Duke Nukem Forever or Perl6 are willing to wait until the code is the most architecturally perfect code ever written.
My point, for thos
Re:Processes (Score:4, Insightful)
I own a fairly old computer, and every time I open or close a javascript-heavy page, or open a PDF file, all the rest of my tabs become unusable for some seconds. It's not the end of the world, but I can't think of anything that I'd rather firefox devs spend their time on.
Re:Processes (Score:5, Interesting)
I can count on one hand the number of times I've had a problem with Firefox that would have been solved by it being in its own process.
Possibly you can, but the biggest one I can count is having a flash plug-in (or similar) crash the entire browser when there's only a problem on one tab. That happens more frequently than I'd care for, so if there was a change that only brought down one tab, that would be great.
Re: (Score:3, Insightful)
Even better would be running just the plugins in separate processes. This way you don't even lose the tab that crashes Flash, only the problematic Flash video.
Re: (Score:3, Insightful)
Agreed - crashes in the browser itself are rare - Firefox seems very reliable in that way.
However, crashes in plugins can be common, and trusting a big binary blob to run safely inside your process just seems like a bad idea. So I would say Firefox should definitely go for that part and not worry about the process-per-tab part.
Well like most good ideas it has been in Bugzilla for years!
https://bugzilla.mozilla.org/show_bug.cgi?id=156493 [mozilla.org]
Re: (Score:3, Insightful)
This is so much ado about nothing. I can count on one hand the number of times I've had a problem with Firefox that would have been solved by it being in its own process.
Every bandwagoner, technical lightweight is now stomping their feet that Firefox needs to get on this yesterday, but really this is pretty low on the list of things that make a real improvement in people's lives.
You've made here the classic mistake of thinking that everyone else uses a piece of technology in the same way that you do. This i
Re: (Score:3, Interesting)
It isn't even new in the browser world. In fact it's where we started.
Though it is pretty new for a mainstream browser to choose that as an explicit choice.
I can count on one hand the number of times I've had a problem with Firefox that would have been solved by it being in its own process.
Either you're lucky or I'm unlucky... there were a couple of years when I was restarting Firefox about once a day on average because its memory use would shoot up to the point where it was basically unusable, and clo
Re: (Score:3, Insightful)
Chrome is doing more than browsers originally did. It's got a master process that's watching over everything else. The processes are also running at multiple different privilege levels. This may not be something that's absolutely new, but it does show innovation. There's nothing wrong with evolutionary progress. So-called "revolutionary" ideas often end up being less useful.
I find that browsers will crash or hang fairly often if a page is poorly coded or a plugin reacts badly. Unfortunately, people will alw
Re:Processes (Score:5, Informative)
And, as us old-timers know, this architecture was the basis of the original Bell Labs unix system, back in the 1970s. Lots of little, single-task processes communicating via that newfangled sort of file called a "pipe". That was before the advent of our fancy graphical displays, of course.
Somewhat later, in the mid-1980s when we had graphical displays, the tcl language's tk graphics package encouraged the same design, building on the usual unix-style pipes and forked processes. The language had a simple, elegant "send" command that let a sub-process run arbitrary commands (typically function calls) inside the parent process using the sub-process's data. The idea was that you have a main process responsible for maintaining the on-screen window(s), while the work is done by the sub-processes. This design prevented blocking in the GUI, because the actions that would block were done in the other processes. The result was a language in which GUI tools could be developed very quickly and would take maximal advantage of any parallelism supplied by the OS.
But that was all decades ago, before most of our current programmers had ever touched a computer. Imagine if we only knew how to design things that way today. Is it possible that current software developers are rediscovering this decades-old sort of design?
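The little-processes-connected-by-pipes design described above can be sketched with Python's subprocess module standing in for the shell's pipe operator (the printf/sort pair is just an illustrative stand-in for small single-task tools):

```python
import subprocess

# Two small single-task processes connected by a pipe, in the spirit of
# "sort -u" pipelines on the original Unix systems.
producer = subprocess.Popen(["printf", "b\na\nb\n"], stdout=subprocess.PIPE)
consumer = subprocess.Popen(["sort", "-u"], stdin=producer.stdout,
                            stdout=subprocess.PIPE, text=True)
producer.stdout.close()  # let the producer see SIGPIPE if the consumer exits
output = consumer.communicate()[0]
print(output)  # prints "a" and "b", one per line: each process did one job
```

Each process stays small and single-purpose; the OS pipe is the only coupling between them.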
Re:Processes (Score:5, Interesting)
The funny thing is that I'm working on a shiny new HP 6910p laptop - the kind with ~6 hrs battery life, a good deal of memory, a fast CPU and even a decent GPU. Everyone goes on and on about how the "cost" of starting new processes is no longer significant, but I really noticed the difference. I run Firefox 3 with a whole bunch of plug-ins and a nice skin. That contraption, in spite of the plug-ins, feels quite a bit faster than Chrome does out of the box. I tested Chrome yesterday, and at the end of 6 hours of work (in which everything worked off the bat, even starting my Citrix apps from a web portal - kudos for that) I concluded that Firefox feels leaner and meaner, and went back to it.
One of the major gripes I have is that Firefox feels much quicker with regard to my proxy. We run a proxy configuration script that gives us different settings depending on which office we're in, and in Firefox I never notice the damn thing running. In Chrome, whenever I open a new tab, I see the damn thing executing. New process, sandbagging (or whatever you call it)... bah, humbug. I agree with the parent poster: after 3 versions of Firefox on my system, I can count the times process isolation would have come in handy on the fingers of one hand, and it makes the experience noticeably slower.
To cut a long story short: I appreciate it's a beta. Come release time I'll give it another whirl. But right now I don't see what the big hubbub is about, save the fact that there's another open source competitor on the market, which is always good. What is funny is that when you uninstall Chrome, a dialogue pops up asking "Are you sure you want to uninstall Google Chrome? - Was it something we said?"
Re:Processes (Score:5, Insightful)
It's primarily improvements in computer speed. Threads are very cheap on Windows (which is why .NET in particular is so heavily dependent on spawning tons of threads for many types of tasks) and processes remain fairly expensive, but that expense is somewhat minimized by being able to throw ten kajillion bogomips at the problem.
Re:Processes (Score:4, Informative)
Re:Processes (Score:5, Interesting)
Ahh, .NET threads are regular Win32 threads. They are scheduled by the OS, not the runtime. .NET code is (well, can be...) much, much more robust than C++ code since it's managed: type conversion errors, null references and other problems can be caught and recovered from more gracefully than in C++. This applies to Java too, of course.
Re: (Score:3, Interesting)
Re: (Score:3, Insightful)
Re: (Score:2)
Comment removed (Score:5, Funny)
Re:Processes (Score:5, Insightful)
Running each instance in a separate process is NOT new technology [...]
True, *nix has done that for the last three decades.
The point here (and of TFA) is that processes are finally becoming cheaper on Windows, making them usable for all the nasty stuff *nix has been indulging in all along.
On NT 3.5, creating a new process could take as long as 300ms. Imagine Chrome on the NT of those days: opening three tabs and starting three new processes would alone have taken about one second.
Unix never had the problem. It's Windows-specific. And they are improving.
Re: (Score:2)
Unix still has some pretty gnarly issues with threads being relatively expensive though, no? IIRC, they're nowhere near as cheap as Windows threads (and while you can do anything with processes that you can with threads, I think it's pretty clear that there are some big wins to retaining shared address space instead of doing IPC/shared memory files/whatever).
Re:Processes (Score:5, Informative)
Threads were historically expensive because historically nobody used them. E.g. on Windows, IO multiplexing had a history of bugs and didn't work in the beginning - you had to use threads (and people used threads then and still use them now instead of NT's IO multiplexing). On *nix, IO multiplexing has more or less always worked, so threads were (and are) rarely used.
Now that the number of CPUs has increased dramatically in recent years, threads have been optimized to be fast: developers now throw them as a panacea at any task at hand. (The most stupid use of threads I've seen this month: starting a new thread for every child application just to wait for its termination; due to stupid code, it still might miss the termination.)
As a system developer, I have gone through the user-space parts of the Linux 2.6 and Solaris 10 threads implementations (in a disassembler; x64 and SPARC64 respectively) and can say that they are implemented well. (I was looking for an example of an atomic ops implementation.) The kernel parts of both Linux and Solaris are well known to perform extremely well, since they were tuned to support extremely large Java applications (and Java only recently got IO multiplexing - before that, threads were the only option for performing multiple IO tasks simultaneously). HP-UX 11.x also didn't show any abnormalities during internal benchmarks, and generally its implementation is faster than that of Solaris 10 (after leveling results by CPU speed; SPARC64 vs. Itanic 2).
But I guess "slow *nix threads" is now a myth of the same kind as "slow Windows process creation." (The problem, of course, is that process creation on Windows will always remain expensive compared to *nix. Not that those milliseconds of difference matter much for desktop applications.)
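The relative costs are easy to probe with a rough, unscientific measurement. This Python sketch times spawning a handful of trivial threads versus processes; the absolute numbers depend entirely on the OS and hardware, which is exactly the point being argued above:

```python
import multiprocessing
import threading
import time

def work():
    pass  # trivial body: we measure only spawn-and-join overhead

def spawn_and_join(make, n=10):
    """Create, start, and join n workers built by make(); return elapsed seconds."""
    start = time.perf_counter()
    workers = [make() for _ in range(n)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return time.perf_counter() - start

t_threads = spawn_and_join(lambda: threading.Thread(target=work))
t_procs = spawn_and_join(lambda: multiprocessing.Process(target=work))
print(f"10 threads: {t_threads:.4f}s   10 processes: {t_procs:.4f}s")
```

On a modern Linux box both figures come out tiny; processes still cost more per spawn, but well under human-perceptible time for a handful of tabs.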
Re: (Score:3, Informative)
Are you kidding? This idea is the subject of a popular, but ignored request for enhancement [mozilla.org] filed back in Mozilla's Bugzilla in 2002!
It has 81 votes and 103 users on the CC list. The idea is ages old; the successful implementation is new.
Now if only Mozilla guys got to finally implement it in their browser... Otherwise you'll always get folks blaming the
So... (Score:5, Insightful)
...his argument that processes aren't really slower than threads anymore is because your processor is faster?
Re:So... (Score:4, Insightful)
Re: (Score:3, Funny)
Re: (Score:2)
No, his argument is that with a faster processor he no longer has to care that they are slower.
The threshold of caring is subjective. If you are launching a new process to respond to a mouse move message you probably still care that process launch is expensive.
If you are launching a process to create a new tab, something which is governed by human scale time perception, you probably don't care. Especially since almost all the pages you need are already in RAM, so you may not even hit the disk.
Oblig (Score:5, Funny)
"The 70's called...." I can't bring myself to say the rest....
Re:Oblig (Score:4, Funny)
1990 called, they'd like their joke back.
Re: (Score:2)
"The 70's called...." I can't bring myself to say the rest....
BURN!!!
Deja vu (Score:5, Insightful)
My head hurts, I'm confused.
Re: (Score:3, Interesting)
Well, the truth is that Chrome might not be as slow under Linux as it is under Windows.
If I remember correctly Windows is really slow at starting a new process while Linux is pretty fast. That was one reason why Apache was so slow on Windows and why they went to threads.
Re: (Score:3, Interesting)
But the speed at which Chrome and IE8 spawn new processes depends on user interaction. Unless you use something like FF's Linky extension that allows you to open 99 tabs at a time, you won't notice a performance hitch. I don't think you can click faster than your system can start processes - unless it's *really* maxed out and/or paging. Which, BTW, happened to me just yesterday when FF3's VM size approached 1GB (after a week or so.) Killing the process and letting it restore windows and tabs reduced the VM
Requirements/Trade-offs (Score:5, Insightful)
There are at least three problems here.
One is efficiency. Nobody will dispute that a properly implemented multi-threaded design is more efficient than spawning a new process per job. If you're writing a server to handle 100,000 connections simultaneously, you probably want to use threads.
One is necessity. If you're only going to have at most a couple hundred threads you don't need to think in terms of 100,000 processes - orders of magnitude change things.
The last is correctness. Most multi-threaded browsers aren't actually implemented correctly. So they grow in resource consumption over time and you have to do horrendous things like kill the main process and start over, which loses at least some state with current implementations.
So theory vs. reality vs. scale. There's no "one true" answer.
Re:Requirements/Trade-offs (Score:5, Informative)
"If you're writing a server to handle 100,000 connections simultaneously you probably want to use threads."
Actually, if you want to scale to 100000 connections then you will *not* want to use threads. Google "C10K problem".
Re: (Score:3, Informative)
A single process (single-threaded or multi-threaded) would have OS limits on the open file/socket descriptors much lower than 100000.
I haven't tried it yet myself but supposedly erlang servers do this kind of thing regularly. Somebody here probably knows if you use ulimit or whatever to tune that.
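For the curious, the descriptor limit mentioned above is queryable (and raisable up to the hard cap) from an unprivileged process. In Python that looks roughly like this, assuming Linux semantics; from a shell, `ulimit -n` does the same job:

```python
import resource

# The per-process file-descriptor cap that bounds "100,000 sockets in one
# process": (soft, hard) limits as the kernel reports them.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")

# An unprivileged process may raise its own soft limit up to the hard limit.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
print("raised soft limit to", resource.getrlimit(resource.RLIMIT_NOFILE)[0])
```

Going past the hard limit requires root (or a sysctl change), which is why single-process 100k-connection servers need deliberate tuning.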
Re:Deja vu (Score:5, Interesting)
Windows people never really understood processes, they cannot distinguish them from programs (look at CreateProcess). They traditionally don't have cheap processes and abuse threads.
In Linux we have NPTL now, so there is a robust threads implementation if you need it. I don't think processes are "superior" to threads (i.e., processes sharing their address space) or the other way round. They serve different purposes. If you need different operations sharing a lot of data, I would say go for threads.
Re: (Score:2)
Probably, but which one is more stable? They can argue all they want, but the results still speak for themselves.
Obviously it's not impossible that the IE8 team acknowledged this. Not unlike people blasting a politician for changing his stance on an issue from something stupid to something good for the masses. It's like being of the opinion that mysql_query("SELECT * FROM users where id = {$_GET['id']}"); is a good idea - you're still just plain wrong. Sure, you avoid the overhead of calling a mysql_rea
Re:Deja vu (Score:5, Insightful)
Both have pluses and minuses, as with anything. (I won't speak to the Unix model as I am not terribly conversant with it, but I know a good bit about the Windows model of threaded processes.)
A threaded process model has one enormous advantage: you stay within the same address space. Inter-process communication is annoying at best and painful at worst; you have to do some very ugly things like pipes, shared memory, or DBus (on Linux, that is). Using the threaded process model, I can do something like the following (it's C#-ish and off the cuff, so it probably won't compile, but it should be easy to follow):
In an isolated task model, this is nowhere near as simple. The problem, though, is that one thread can, at least in C++, take down the whole damn process if something goes sour. (You can get around that in .NET with stuff like catching NullReferenceExceptions, but you'll almost certainly be left in an unrecoverable state and have to either kill the thread or kill the program.) The Loosely Coupled Internet Explorer (LCIE) model is forced to use processes to avoid taking everything down when one tab barfs up its lunch.
Re: (Score:3, Informative)
As a minor nitpick, Sleep(0) can return immediately. You'll end up with the main thread burning CPU in a tight loop if nothing is waiting.
Thread.Join would be more appropriate, or using Monitor.* manually.
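In Python the same distinction looks like this: the worker shares the spawning thread's address space, and join() (the analogue of the Thread.Join suggested above) blocks until the worker finishes rather than busy-polling with Sleep(0):

```python
import threading
import time

results = {}  # shared address space: worker and main thread see the same dict

def worker():
    time.sleep(0.05)        # simulate work
    results["answer"] = 42  # visible to the spawner with no IPC at all

t = threading.Thread(target=worker)
t.start()

# A Sleep(0)-style busy wait would burn CPU and can resume before the worker
# is done; join() blocks until the thread has actually finished.
t.join()

print(results["answer"])  # 42, read straight out of shared memory
```

The convenience is real, but so is the risk the parent comments describe: that shared dict is exactly the kind of state a misbehaving thread can corrupt for everyone.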
Re: (Score:3, Insightful)
I forgot about longjmp, my bad. That said, with longjmp you can come back from it, *sort of*, but as you said, you can't really guarantee program state, and so that's not really very useful.
Re:Deja vu (Score:5, Insightful)
For the majority of local applications, a threaded model is superior. This is because local applications can be "trusted" in the sense that they don't need each child thread to run sandboxed, so they gain the benefits of greater efficiency without worrying about reduced security. A browser is quite a different beast - it is effectively an OS for running remote "applications" (read: web 2.0 style web sites). So it makes sense to run each one as a separate process.
Windows the OS still runs each application in its own process. So it's not right to compare it to Chrome and argue that it doesn't use separate processes, because it does - where it counts.
Re:Deja vu (Score:5, Insightful)
Yes, you are correct. Unix started with a process model based on fork() and explicit IPC. Threads were "grafted on" later. It tends to result in more robust software (good multi-threading is HARD).
In Linux a "thread" is a "process", just with more sharing. Thread creation is cheaper in Windows; process creation is cheaper in Linux. I tend to like the isolation that processes offer (multithreading brings with it the joy of variables that can appear to just change by themselves).
There was never any good reason NOT to use multiple processes in a browser, except one. The GUI was "unified" amongst the browser windows, and it was always presumed that it would be too difficult to coordinate the drawing of multiple browsers. Also, the menu bars and controls would have to be assigned to a separate process for each of the browsers. This can be done with an IPC channel, but that code would not have been portable between Unix and Windows at all.
Since process creation was SO expensive in Windows (in days of old), the "thread" or "lightweight thread" approach was used instead (to maximize portability).
It is an amazing testament to Google that they have achieved the multi-process, single UI model (I just don't know how they did the portability part).
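The sharing-versus-isolation trade-off described above can be demonstrated in a few lines of Python (assuming Linux fork() semantics for the child process):

```python
import multiprocessing
import threading

counter = [0]  # lives in the parent's address space

def bump():
    counter[0] += 1

# A thread shares the parent's memory: its write is visible afterwards.
t = threading.Thread(target=bump)
t.start(); t.join()
print(counter[0])  # 1

# A forked child gets its own (copy-on-write) address space: the same
# write happens in the child's copy, and the parent's counter is untouched.
p = multiprocessing.Process(target=bump)
p.start(); p.join()
print(counter[0])  # still 1
```

Across the process boundary, "variables that appear to change by themselves" simply cannot happen; that isolation is what the multi-process browser buys.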
Testament indeed (Score:3, Insightful)
It's not altogether clear that they have ...
Re:Deja vu (Score:5, Insightful)
This is a misunderstanding of the application.
Microsoft said 'Threads are better than Processes for a web server', where you're wasting a ton of resources creating a new process for every CGI script that's run. They were right! Now every major web server supports in-process applications that are created once per server (perhaps with a pool of shared app space) rather than once per request.
Microsoft has never said that all the applications on your computer should run in one thread... that's just crazy talk.
This is simply a decision by Microsoft and Google to treat a browser tab as an application, rather than as a document. Now that web pages do a lot more processing (and crashing), this makes more sense than the old way. There's nothing particularly bad about using threads instead... Firefox is just fine with threads, I see no reason for them to undertake a massive change due to misplaced hype.
It really has to do with how much the processes share. If most of the memory per process could be shared, threads are probably more efficient. If not, processes. I'm no browser architect though, so I'll leave it up to Google, Mozilla, and Microsoft to make their own decisions.
Re:Deja vu (Score:5, Funny)
Only one benefit discussed: isolation (Score:3, Informative)
Re:Only one benefit discussed: isolation (Score:5, Insightful)
From the other perspective, having used IE in the past, I know how easy it is for a page to open lots of popups. In fact, you could open so many popups that it would crash the browser.
Now that the browser likes opening new processes, an out of control web page can crash my whole OS instead?
Another Benefit: Killing Bad Pages (Score:3, Interesting)
Re: (Score:3, Interesting)
Er, wha? Threads will regularly kill a process when they're in a bad state in any sufficiently complex program, and given how nasty handling the Web can be, it really doesn't surprise me that web browsers crash.
Processes are both easier to use from a developer's point of view (because I assume part of LCIE is a developer-invisible shared memory model) and somewhat safer than just using threads. It's still possible to crash them, of course, but it's harder to crash than when using a threaded-process model
Re:Only one benefit discussed: isolation (Score:4, Interesting)
Big internet client applications are never properly designed in the first place.
I don't say that as a cynic; it's just that they are so damn big and pull in so many libraries, etc. When you're writing a web browser, you don't have time to write a GIF decoder, so you're going to use someone else's library. This type of thing happens over and over, dozens of times. You just can't audit all that code. But if there's a buffer overflow bug in just one of those libraries...
What excites me about this multiprocess approach isn't just the fact that we can recover from hung javascript. That's just a populist example. What I look forward to, is the problem getting split up into even more processes, with some of those processes running as "nobody" instead of the user, or some of them running under mandatory access controls, etc.
All that crap will never be fully debugged, so let's acknowledge that and protect against it.
Chrome's sandboxing is just the tip of the iceberg compared to what is possible, but it's a step in the right direction and (dammit, finally!!) has people talking about sandboxing as something to really work on. A thousand programmers all over the internet are going to adopt a trend that just happens to be a good trend. Thank you, Google.
Chrome = slow as hell (Score:3, Interesting)
Re: (Score:2)
I didn't time it myself but Chrome does seem really slow to start a new tab.
Re:Chrome = slow as hell (Score:5, Informative)
"to load 8 sites" ... 8 sites that you visit frequently and thus have cached on your Firefox installation, perchance?
Don't be so rash to judge. Chrome lags Firefox in plenty of other areas, but speed isn't generally one of them. I've heard many users say it loads pages more than twice as fast as Firefox, and scrolls much faster on graphics- and data-heavy pages.
The lack of extensions (such as adblock, firebug/firephp, flash block, noscript, coloured tabs) is the main reason why I've barely used it since I installed it.
Re: (Score:3, Insightful)
Indeed! So the OP is that much more of a drama queen for uninstalling it in "5 minutes".
Re: (Score:3, Informative)
Something is wrong with your PC then.
I love FF and have no interest in chrome without all the addons FF provides.
That being said, Chrome was insanely fast - really, really fast - easily the fastest web browser I've ever seen, including clean installs of FF1 / 1.5 / 2 and 3.
There's another cost to separate processes (Score:3, Interesting)
AV slowing the start of each process is really going to cause a performance hit.
Cheap and Dirty Crash Isolation... (Score:5, Insightful)
The real reason for processes instead of threads is cheap & dirty crash isolation. Who cares about RPC time, you don't do THAT much of it in a web browser.
But with more and more apps being composed IN the browser, you need separate processes to get at least some crash isolation between "apps".
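That cheap-and-dirty isolation is essentially what the OS gives you for free with processes. A minimal Python sketch, with a deliberately crashing child standing in for a misbehaving tab:

```python
import multiprocessing

def bad_tab():
    raise RuntimeError("plugin blew up")  # stand-in for a crashing page

crash = multiprocessing.Process(target=bad_tab)
crash.start()
crash.join()

# Only the child process died; the "browser" survives to notice and report it.
print("crashed tab exit code:", crash.exitcode)  # non-zero
print("other tabs unaffected")
```

Had bad_tab been a thread in the same process, the equivalent failure in native code could have taken the whole browser down with it.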
Re: (Score:3, Interesting)
And yet the now-famous :% crash takes down all of Chrome, not just the current tab. I had a chance to ask a Chrome developer about that, but I didn't get an answer. Perhaps crash-isolation isn't as good in practice as one would think, or perhaps that was just another "oops" on the part of the Chrome dev team, and we'll get real crash isolation in the next release.
Re: (Score:2)
Later releases don't do that any more. But I assume that one was because of a crash in the "supervisor process" - IE8 still has the problem of it being possible to crash the supervisor (UI) process and all child processes die with it.
Re: (Score:2)
But with more and more apps being composed IN the browser, you need isolation to get at least some crash isolation between "apps"
That is a good point. It should also help reduce the issue of a plug-in or stuck page freezing the whole browser.
One thing I would be curious about is how they handle the inter-process communication since, while they are separate processes, things like cookies need to be shared between them. I would also be curious what sort of memory overhead this causes.
I doubt that (Score:2)
I imagine that most people who knew what Chrome is and actually installed it also know what processes are.
Re: (Score:2)
Chrome polish!
Um, it's really a red herring (Score:2, Insightful)
The real issue here is that our OS's mechanisms for controlling resource sharing and protection among cooperating concurrent "lines" of execution (to avoid the words "process" or "thread") aren't as fine-grained as they could be. It's nearly an all-or-nothing choice between everything-shared ("threads") or very-little-shared ("processes"). Processes do get the advantage that the OS allows them to selectively share memory with each other, but threads don't get the natural counterpart, the ability to define
Re: (Score:2, Insightful)
That may be true on windoze. Linux has had fine-grained control of resource sharing between processes/threads for ages - clone(), mmap(), etc. Modern Linux threads are implemented as processes at the kernel level, in fact. The idea that "processes are slow" is windowsitis, like "command lines suck" - Windows processes may be slow and the Windows command line may suck, but processes and command lines in general don't necessarily suck.
Re: (Score:2)
Re: (Score:2)
aren't as fine-grained as they could be
See clone(2) [die.net]. Every significant resource related to a process is selectable when spawning a new thread of execution. pthread_create() and fork() are both implemented in terms of clone(). You may invent your own mix of shared or copied bits according to your specific needs.
Naturally the Windows case is far less general. First, clone() is too short. MinimumWindowsAPISymbolLengthIs(12). There is no fork(). This makes porting fun; see perl, cygwin, et al.
The design intent of Google's Chrome is, simply
victory of tabbed browsing - not defeat of threads (Score:2)
Tabbed browsing is now so normal that the problem of a crash in one tab bringing down all the others is a big deal. On Vista this happens a lot with IE7, and it's *the* single major annoyance for my geek GF on that platform.
Threads truly have their place, but this is a good use of a separate process per tab, because it keeps one tab from crashing the others in a way threads can't guarantee.
Re:victory of tabbed browsing - not defeat of thre (Score:2)
Your girlfriend is a geek who uses IE7 on Vista? Are you SURE?
And yes, a properly written multithreaded browser can prevent one tab from crashing another. The only way one tab could bring down the others would be if it spewed crap in the shared memory space. If you're letting web pages overwrite whatever memory they want then you've got big problems.
Re: (Score:3, Informative)
You cannot so isolate threads without effectively making them separate processes. If the threads *can* write into other memory, then there is the danger that broken code *will* do so. What code you write doesn't matter in the least- get a dangling pointer and you'll be writing to something random, which may very well be some other thread's memory space. If the OS doesn't enforce the isolation, you effectively have no isolation.
No code is perfect. The process model is an acknowledgement of that fact. When it
It's the Windows job creation scheme (Score:2)
It's the Windows job creation scheme [today.com] mentality applied to OS threading: processes are heavyweight in Windows [catb.org]. "Process-spawning is expensive - not as expensive as in VMS, but (at about 0.1 seconds per spawn) up to an order of magnitude more so than on a modern Unix." More work = more hardware.
Limitations (Score:5, Insightful)
There are some details to Chrome's sandboxing implementation [docforge.com] that limit its security benefits:
- The process limit is 20. Anything requiring an additional process once this limit is reached, such as opening another tab, will be assigned randomly to any of the existing 20 processes.
- Frames are run within the same process as the parent window, regardless of domain. Hyperlinking from one frame to another does not change processes.
There are also some problems where valid cross-site JavaScript doesn't work. Of course it's still only a beta. Some specific details are documented by Google [chromium.org].
Share-nothing processes (Score:3, Insightful)
Not only does that do away with most of the terrible multithreaded programming problems, it also lets you write an application that doesn't need to execute entirely on the same processor, or even the same machine: think concurrency, cloud computing, 1000-core processors, etc.
Look up the way Erlang programs work. Actor based programming is pretty sweet after you wrap your head around it.
Re: (Score:2)
You can do all that with threads and distributed objects too. I actually find distributed computing much cooler when your controller process accepts and executes threads from other nodes. Plus then you've got some code keeping an eye on those jobs.
Processes in Vista (Score:5, Informative)
I remember a story from a long time ago, during Longhorn's early development, where Microsoft did a study of the CPU cycles needed for various tasks between WinXP and Linux. I've never been able to track the study down again since, but I remember that creating a new process took about an order of magnitude more cycles on Windows than on Linux. Linux processes are also lighter-weight in general; Linux admins think nothing of having 100 processes running, while Windows admins panic when it hits 50.
(The basic reasoning goes that Linux has an awesome processes model because its thread model sucks, and Windows has an awesome thread model because its process model sucks. That's why Apache2 has pluggable modules to make it run with either forking or threading.)
A lot of development from the early Longhorn was scrapped, so how does Vista fare? Does its process model still suck?
Re:Processes in Vista (Score:5, Informative)
The basic reasoning goes that Linux has an awesome processes model because its thread model sucks,
NPTL has scaled to hundreds of millions of threads created in under 10 seconds on common hardware....
IPCommunications overhead? (Score:3, Interesting)
Having gotten several gray hairs from debugging thread-lock issues, I can't help but wonder how these processes do IPC. Presumably complex objects (in the OOP sense) have to be serialized and written to files or piped through sockets. That's not necessarily a bad idea, but it means the data pathways are exposed to the OS, and it's a potential security issue, too.
Hum? (Score:2)
Slow to start a process!? (Score:5, Insightful)
It's hilarious that anyone would think that. We're talking about a web browser, not a web server. Even on platforms where process creation is "slow," it's still going to feel instantaneous to a single human. It's not like the user is opening 100 tabs per second.
Re: (Score:2)
Erlang Browser (Score:3, Informative)
Seems like they have taken a page from Erlang here. If you were to write a browser in Erlang, one (Erlang) process per tab is exactly the way you would have written it. I think it shows that designing software for robustness, something previously done mostly for high-availability enterprise systems, is now reaching the desktop.
Wouldn't surprise me if the next cool browser innovation will be hot code swapping so that you won't have to close all your 5324 tabs just to install the latest browser security fix. At which point they have reinvented Erlang. :)
Re: (Score:3, Informative)
It has nothing to do with Erlang and everything to do with basic design principles. Erlang did not invent what it preaches.
It's different this time around... (Score:2)
Before, when people were arguing between threads vs. processes, most of the arguments assumed ONLY ONE CPU. Forgive me if I'm wrong on this, but my understanding is that one of the biggest reasons modern day programs (programs, not OSs) under-utilize modern multi-core CPUs is that all threads of a process remain on the same CPU as their parent process.
So by designing an application to spawn new processes instead of threads, it un-handcuffs the multi-core CPU and allows it to distribute the work between all
Re: (Score:3, Insightful)
Wrong. If a process has affinity fixed to a single core, then its threads will be similarly constrained. But threads on an unconstrained process will happily move between cores; that's why you can get really aggravating race conditions on multi-proc machines that don't appear for the same multi-threaded program on a single core machine.
Also, Apache and the like allow the option of threads vs. processes. Traditionally, Windows installs use thread and *nix installs use processes because Windows is optimize
quite ironic (Score:5, Interesting)
UNIX didn't have threads for many years because its developers thought processes were a better choice. Then a whole bunch of people coming from other systems pushed for threads to be added to UNIX, and eventually they were. Now, 30 years later, people are moving back to the processes-are-better view that UNIX originally pushed.
Microsoft and Apple have moved to X11-like window systems, Microsoft and Google are moving from threads to processes, ... Maybe it's time to switch to V7 (or Plan 9)? :-)
Re:quite ironic (Score:5, Insightful)
And debugging threads is easy? Oh boy, talk about a crack pipe.
It's not always one process per tab (Score:2)
Could someone give the gory details (Score:3)
Could someone give the gory details on how this all is accomplished? In particular, whether all processes access (and draw in) the same graphical context somehow, or they're just a bunch of z-ordered overlapping windows that move together when dragged?
Re:Could someone give the gory details (Score:5, Informative)
Basically, a single process (the main browser process) owns the window and draws to it. Renderer processes draw their web content into shared memory; the browser process transfers that data into a backing store, which it uses to paint the window. Coordination happens via inter-process message-passing (over pipes, it seems), but the rendering output travels via shared memory.
Re: (Score:3, Informative)
As has been mentioned before, this *is* how browsers started. Back in the Stone Age, when the first browsers were created, they were specialized applications that could view one site at a time. In order for you to open multiple pages, you had to start multiple instances of the application, thus multiple processes.
Eventually, it was deemed more efficient to allow a single process to open multiple pages using multiple threads. So, again, this is nothing new, just a reversal to old ideas whose merits are de