
In IE8 and Chrome, Processes Are the New Threads 397

SenFo writes "To many of the people who downloaded Google Chrome last week, it was a surprise to observe that each opened tab runs in a separate process rather than a separate thread. Scott Hanselman, Lead Program Manager at Microsoft, discusses some of the benefits of running in separate processes as opposed to separate threads. A quote: 'Ah! But they're slow! They're slow to start up, and they are slow to communicate between, right? Well, kind of, not really anymore.'"
This discussion has been archived. No new comments can be posted.

  • So... (Score:5, Insightful)

    by Anonymous Coward on Wednesday September 10, 2008 @04:59PM (#24952367)

    ...his argument that processes aren't really slower than threads anymore is because your processor is faster?

  • Deja vu (Score:5, Insightful)

    by overshoot ( 39700 ) on Wednesday September 10, 2008 @05:03PM (#24952415)
    I remember the "processes vs. threads" argument, but last time around wasn't it Microsoft arguing that a threaded process model was superior to an isolated task model like Linux had? Wasn't the Linux camp blowing the horn for the superior robustness and security of full task isolation?

    My head hurts, I'm confused.

  • Re:So... (Score:4, Insightful)

    by moderatorrater ( 1095745 ) on Wednesday September 10, 2008 @05:05PM (#24952447)
    Yep, kind of like how anti-aliasing isn't really slower than straight rendering any more because I've got a better video card.
  • Re:Processes (Score:5, Insightful)

    by ergo98 ( 9391 ) on Wednesday September 10, 2008 @05:05PM (#24952453) Homepage Journal

    Running each instance in a separate process is NOT new technology

    Well of course. It isn't even new in the browser world. In fact it's where we started.

    The earliest browsers required you to run a new instance for each concurrently opened site. That made for onerous resource demands, so browsers became more efficient by running multiple window instances under one process, and with tabs that model naturally carried over to all tabs sharing one process.

    This is so much ado about nothing. I can count on one hand the number of times I've had a problem with Firefox that would have been solved by it being in its own process.

    Every bandwagoner and technical lightweight is now stomping their feet that Firefox needs to get on this yesterday, but really this is pretty low on the list of things that make a real improvement in people's lives. In fact I would go so far as to call it a gimmick. Presuming that the browser's sandbox automatically stops sites from doing stupid stuff (unlike IE, which will let a site kill the whole browser just by going into a perpetual loop in JavaScript), and that plug-ins aren't written by an idiot, this is completely unnecessary.

    Chrome's great JavaScript engine is a real story, one-upped by Firefox's TraceMonkey doing one better. Those are real improvements that really do matter.

  • by nweaver ( 113078 ) on Wednesday September 10, 2008 @05:10PM (#24952541) Homepage

    The real reason for processes instead of threads is cheap & dirty crash isolation. Who cares about RPC time? You don't do THAT much of it in a web browser.

    But with more and more apps being composed IN the browser, you need process separation to get at least some crash isolation between "apps"

  • by Estanislao Martínez ( 203477 ) on Wednesday September 10, 2008 @05:12PM (#24952569) Homepage

    The real issue here is that our OS's mechanisms for controlling resource sharing and protection among cooperating concurrent "lines" of execution (to avoid the words "process" or "thread") aren't as fine-grained as they could be. It's nearly an all-or-nothing choice between everything-shared ("threads") or very-little-shared ("processes"). Processes do get the advantage that the OS allows them to selectively share memory with each other, but threads don't get the natural counterpart, the ability to define their own thread-local memory domains, protected from other threads. A more powerful OS concurrency API would allow you to say exactly what things are shared and which are private to each unit of concurrent execution.

  • Limitations (Score:5, Insightful)

    by truthsearch ( 249536 ) on Wednesday September 10, 2008 @05:13PM (#24952591) Homepage Journal

    There are some details to Chrome's sandboxing implementation [docforge.com] that limit its security benefits:

    - The process limit is 20. Anything requiring an additional process once this limit is reached, such as opening another tab, will be assigned randomly to any of the existing 20 processes.

    - Frames are run within the same process as the parent window, regardless of domain. Hyperlinking from one frame to another does not change processes.

    There are also some problems where valid cross-site JavaScript doesn't work. Of course it's still only a beta. Some specific details are documented by Google [chromium.org].

  • by Safiire Arrowny ( 596720 ) on Wednesday September 10, 2008 @05:19PM (#24952661) Homepage
    Share-nothing processes that communicate via message passing are the future as far as I can tell.

    Not only does that do away with most terrible multithreaded programming problems, but it also can let you write an application which does not need to execute all on the same processor or even the same machine, think concurrency, cloud computing, 1000 core processors, etc.

    Look up the way Erlang programs work. Actor based programming is pretty sweet after you wrap your head around it.
  • by Anonymous Coward on Wednesday September 10, 2008 @05:23PM (#24952731)

    That may be true on windoze. Linux has had fine-grained control of resource sharing between processes/threads for ages: clone(), mmap(), etc. In fact, modern Linux threads are implemented as processes at the kernel level. The idea that "processes are slow" is windowsitis, like "command lines suck": Windows processes may be slow and the Windows command line may suck, but processes and command lines in general don't necessarily suck.

  • Re:Processes (Score:2, Insightful)

    by 7 digits ( 986730 ) on Wednesday September 10, 2008 @05:23PM (#24952741)

    > I can count on one hand the number of times I've had a problem with Firefox that would have been solved by it being in its own process.

    I restart firefox roughly twice per hour when developing my javascript application. Having 10 concurrent tabs executing heavy javascript/ajax generally hangs the browser.

    Of course, extensions (in particular Firebug) are probably responsible for that, and it is painful but not a showstopper. A process-per-tab model would probably be better for my usage...

  • Re:Processes (Score:5, Insightful)

    by ThePhilips ( 752041 ) on Wednesday September 10, 2008 @05:26PM (#24952795) Homepage Journal

    Running each instance in a separate process is NOT new technology [...]

    True, *nix has been doing that for the last 3 decades.

    The point here (and of TFA) is that processes are finally becoming cheaper on Windows, making them usable for all the nasty stuff *nix has been indulging in all along.

    On NT 3.5, creating a new process could take as long as 300ms. Imagine Chrome on NT back then: opening three tabs, and thus starting three new processes, would alone have taken about a second.

    Unix never had this problem; it's Windows-specific. And they are improving.

  • by Sloppy ( 14984 ) on Wednesday September 10, 2008 @05:27PM (#24952803) Homepage Journal

    They [processes are] slow to start up

    It's hilarious anyone would think that. We're talking about a web browser, not a web server. Even on platforms where process creation is "slow", it's still going to be instantaneous from a single human's point of view. It's not like the user is opening 100 tabs per second.

  • by ceoyoyo ( 59147 ) on Wednesday September 10, 2008 @05:28PM (#24952831)

    From the other perspective, having used IE in the past, I know how easy it is for a page to open lots of popups. In fact, you could open so many popups that it would crash the browser.

    Now that the browser likes opening new processes, an out of control web page can crash my whole OS instead?

  • Re:Processes (Score:3, Insightful)

    by ShadowRangerRIT ( 1301549 ) on Wednesday September 10, 2008 @05:32PM (#24952913)
    Not to rain on your parade, but exactly how are you intending to use browsers in cluster computing? Are you expecting to have so many tabs that a full compute cluster is needed to run them? Your post seems to completely forget that we are talking about a web browser!
  • by bill_mcgonigle ( 4333 ) * on Wednesday September 10, 2008 @05:36PM (#24952979) Homepage Journal

    There are at least three problems here.

    One is efficiency. Nobody will seriously argue that a properly implemented multi-threaded design is less efficient than spawning a new process per job. If you're writing a server to handle 100,000 connections simultaneously, you probably want to use threads.

    One is necessity. If you're only going to have at most a couple hundred threads you don't need to think in terms of 100,000 processes - orders of magnitude change things.

    The last is correctness. Most multi-threaded browsers aren't actually implemented correctly. So they grow in resource consumption over time and you have to do horrendous things like kill the main process and start over, which loses at least some state with current implementations.

    So theory vs. reality vs. scale. There's no "one true" answer.

  • Re:Processes (Score:5, Insightful)

    by FishWithAHammer ( 957772 ) on Wednesday September 10, 2008 @05:42PM (#24953059)

    It's primarily improvements in computer speed. Threads are very cheap in Windows (which is why .NET in particular leans so heavily on spawning tons of threads for many types of tasks) and processes remain fairly expensive, but that expense is somewhat minimized by being able to throw ten kajillion bogomips at the problem.

  • by ShadowRangerRIT ( 1301549 ) on Wednesday September 10, 2008 @05:52PM (#24953217)

    Wrong. If a process has affinity fixed to a single core, then its threads will be similarly constrained. But threads on an unconstrained process will happily move between cores; that's why you can get really aggravating race conditions on multi-proc machines that don't appear for the same multi-threaded program on a single core machine.
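
    The "affinity applies to the whole process, threads included" rule can be seen directly on Linux with the Linux-only os.sched_setaffinity call (a sketch, assuming CPU 0 is available to this process):

```python
import os

if __name__ == "__main__":
    # Pin this process (and therefore every thread in it) to CPU 0.
    os.sched_setaffinity(0, {0})
    print(os.sched_getaffinity(0))
```

    Any thread the process creates after (or before) this call inherits the same CPU mask; there is no way for a thread to escape its process's affinity without changing it explicitly.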

    Also, Apache and the like allow the option of threads vs. processes. Traditionally, Windows installs use threads and *nix installs use processes, because Windows is optimized for threads and *nix for processes, though it only matters if the server is under load.

  • Re:Deja vu (Score:5, Insightful)

    by FishWithAHammer ( 957772 ) on Wednesday September 10, 2008 @05:54PM (#24953275)

    Both have pluses and minuses, as with anything. (I won't speak to the Unix model as I am not terribly conversant with it, but I know a good bit about the Windows model of threaded processes.)

    A threaded process model has one enormous advantage: you stay within the same address space. Inter-process communication is annoying at best and painful at worst; you have to do some very ugly things like pipes, shared memory, or DBus (on Linux, that is). Using the threaded process model, I can do something like the following (it's C# and off the cuff, but it should be easy to follow):

    using System;
    using System.Threading;

    class Foo
    {
        static Object o = new Object(); // mutex lock, used with C#'s built-in lock statement
        static SomeClass c = new SomeClass();

        static void Main(String[] args)
        {
            Thread t = new Thread(ThreadFunc1);
            Thread t2 = new Thread(ThreadFunc2);
            t.Start();
            t2.Start();
            while (t.IsAlive || t2.IsAlive) { Thread.Sleep(0); } // cede our timeslice
        }

        static void ThreadFunc1()
        {
            while (true)
            {
                lock (o) // only one thread may touch c at a time
                {
                    c.DoFunc1();
                }
            }
        }

        static void ThreadFunc2()
        {
            while (true)
            {
                lock (o)
                {
                    c.DoFunc2();
                }
            }
        }
    }

    In an isolated task model, this is nowhere near as simple. The problem, though, is that one thread can, at least in C++, take down the whole damn process if something goes sour. (You can get around that in .NET with stuff like catching NullReferenceExceptions, but you'll almost certainly be left in an unrecoverable state and have to either kill the thread or kill the program.) The Loosely Coupled Internet Explorer (LCIE) model is forced to use processes to avoid taking everything down when one tab barfs up its lunch.

  • Re:Deja vu (Score:5, Insightful)

    by shird ( 566377 ) on Wednesday September 10, 2008 @05:54PM (#24953279) Homepage Journal

    For the majority of local applications, a threaded model is superior. This is because local applications can be "trusted" in the sense that they don't need to run each child thread sandboxed etc., so they gain the benefits of greater efficiency without worrying about reduced security. A browser is quite a different beast: it is effectively an OS for running remote "applications" (read: web 2.0 style web sites). So it kind of makes sense to run each as a separate process.

    Windows the OS still runs each application in its own process. So it's not right to compare it to Chrome and argue that it doesn't use separate processes, because it does - where it counts.

  • by B3ryllium ( 571199 ) on Wednesday September 10, 2008 @05:56PM (#24953313) Homepage

    Indeed! So the OP is that much more of a drama queen for uninstalling it in "5 minutes".

  • Re:Processes (Score:4, Insightful)

    by gnud ( 934243 ) on Wednesday September 10, 2008 @05:58PM (#24953369)

    This is so much ado about nothing. I can count on one hand the number of times I've had a problem with Firefox that would have been solved by it being in its own process.

    I own a fairly old computer, and every time I open or close a JavaScript-heavy page, or open a PDF file, all the rest of my tabs become unusable for several seconds. It's not the end of the world, but I can't think of anything I'd rather the Firefox devs spend their time on.

  • Re:Deja vu (Score:5, Insightful)

    by ratboy666 ( 104074 ) <fred_weigel@[ ]mail.com ['hot' in gap]> on Wednesday September 10, 2008 @06:06PM (#24953503) Journal

    Yes, you are correct. Unix started with a process model based on fork() and explicit IPC. Threads were "grafted on" later. It tends to result in more robust software (good multi-threading is HARD).

    In Linux a "thread" is a "process", just with more sharing. Thread creation is cheaper in Windows; process creation is cheaper in Linux. I tend to like the isolation that processes offer (multithreading brings with it the joy of variables that can appear to just change by themselves).

    There was never any good reason to NOT use multiple processes in a browser, except one. The GUI was "unified" amongst the browser windows, and it has always been presumed that it would be too difficult to co-ordinate the drawing of multiple browsers. Also, the menu bars and controls would have to be assigned to a separate process for each of the browsers. This can be done with an IPC channel, but that code would not have been portable between Unix and Windows at all.

    Since process creation was SO expensive in Windows (in days of old), the "thread" or "lightweight thread" approach was used instead (to maximize portability).

    It is an amazing testament to Google that they have achieved the multi-process, single UI model (I just don't know how they did the portability part).

  • Testament indeed (Score:3, Insightful)

    by overshoot ( 39700 ) on Wednesday September 10, 2008 @06:10PM (#24953571)

    It is an amazing testament to Google that they have achieved the multi-process, single UI model (I just don't know how they did the portability part).

    It's not altogether clear that they have ...

  • Re:Deja vu (Score:5, Insightful)

    by The Raven ( 30575 ) on Wednesday September 10, 2008 @06:22PM (#24953789) Homepage

    This is a misunderstanding of the application.

    Microsoft said 'Threads are better than Processes for a web server', where you're wasting a ton of resources creating a new process for every CGI script that's run. They were right! Now every major web server supports in-process applications that are created once per server (perhaps with a pool of shared app space) rather than once per request.

    Microsoft has never said that all the applications on your computer should run in one thread... that's just crazy talk.

    This is simply a decision by Microsoft and Google to treat a browser tab as an application, rather than as a document. Now that web pages do a lot more processing (and crashing), this makes more sense than the old way. There's nothing particularly bad about using threads instead... Firefox is just fine with threads, I see no reason for them to undertake a massive change due to misplaced hype.

    It really has to do with how much the processes share. If most of the memory per process could be shared, threads are probably more efficient. If not, processes. I'm no browser architect though, so I'll leave it up to Google, Mozilla, and Microsoft to make their own decisions.

  • Re:Processes (Score:3, Insightful)

    by AaronLawrence ( 600990 ) * on Wednesday September 10, 2008 @06:51PM (#24954163)

    Agreed - crashes in the browser itself are rare - Firefox seems very reliable in that way.
    However crashes in plugins can be common, and indeed trusting a big binary blob to invasively use your process safely just seems like a bad idea. So I would say that firefox should definitely go for that part and not worry about the process-per-tab part.
    Well like most good ideas it has been in Bugzilla for years!
    https://bugzilla.mozilla.org/show_bug.cgi?id=156493 [mozilla.org]

  • Re:Processes (Score:3, Insightful)

    by Tumbleweed ( 3706 ) * on Wednesday September 10, 2008 @06:57PM (#24954239)

    This is so much ado about nothing. I can count on one hand the number of times I've had a problem with Firefox that would have been solved by it being in its own process.

    Every bandwagoner, technical lightweight is now stomping their feet that Firefox needs to get on this yesterday, but really this is pretty low on the list of things that make a real improvement in people's lives.

    You've made here the classic mistake of thinking that everyone else uses a piece of technology in the same way that you do. This is not the case. With the advent of tabs, I typically have multiple dozens of them open at any given time. I also often leave programs running for days on end. I surf a LOT of very different sites, most of which I've never been to before, and which often have lots of JavaScript and/or Flash. This makes for a very different experience than you describe, as I've often had Firefox crash, sometimes without any warning at all: just *poof*, the window is gone. Fortunately, the crash recovery features of FF3 are quite good. But they don't always work, and let me say that having to reopen a browser with several dozen tabs (many of which have youtube videos and whatnot) can take quite a while.

    I very much like the idea of each tab in its own process. Plus, this should work better on multi-core CPUs, which is also quite nice.

  • Re:Limitations (Score:2, Insightful)

    by corerunner ( 971136 ) on Wednesday September 10, 2008 @07:35PM (#24954655) Homepage
    In complete seriousness, what reason is there to ever have hundreds of tabs open? I typically have 2 browser windows per workspace, and up to 4 workspaces, but even with 20 tabs per window that's still only 160 tabs distributed across the equivalent of 8 monitors! How could anyone possibly remember where to find the site he is looking for in 400 tabs? Maybe I'm missing something...
  • by Anonymous Coward on Wednesday September 10, 2008 @08:00PM (#24954933)

    The problem with your logic is that it assumes that Microsoft has full control over all of the code that could potentially run within the browser process. This is not true as browser plug-ins such as Flash or the Java runtime are native code loaded within the browser process. In a single process environment if there is a bug in any of the plug-ins then the entire process is possibly subject to failure. Even if the bug does not cause the process itself to crash, it could potentially write to other portions of the process space causing unexpected results. If there is an unexpected failure on a thread the state of the entire process is suspect.

    IE8 and Chrome go about plug-in isolation in different ways. In IE8 the plug-in is loaded in the rendering process, whereas in Chrome the plug-in is loaded into a single dedicated plug-in process. In IE8 if you have three tabs open all using the same plug-in and the plug-in fails on one of those pages that tab closes but the other two tabs continue to run. In Chrome if the same situation happens then all three tabs remain open but the plug-in fails across all three. Neither is really better, although Chrome's model currently requires hackery to work as plug-ins have not been designed to be loaded and isolated in such a fashion.

    So it's not as simple as claiming that a browser written correctly should never fail. First, any sufficiently complex software will have bugs, and second, any software that loads native plug-ins in-process cannot account for the activity of those plug-ins. The process isolation model handles both.

  • Re:Processes (Score:3, Insightful)

    by spectre_240sx ( 720999 ) on Wednesday September 10, 2008 @09:29PM (#24955845) Homepage

    Chrome is doing more than browsers originally did. It's got a master process that's watching over everything else. The processes are also running at multiple different privilege levels. This may not be something that's absolutely new, but it does show innovation. There's nothing wrong with evolutionary progress. So-called "revolutionary" ideas often end up being less useful.

    I find that browsers will crash or hang fairly often if a page is poorly coded or a plugin reacts badly. Unfortunately, people will always make mistakes and there will always be things that are capable of crashing a rendering engine, but if separate processes are used, the effects can be limited to the tab / plugin that caused the problem. This opens up a lot of other potential separations. It would be great if they could separate text boxes from the tab somehow, so that if a tab went to hell, one's Slashdot post needn't go with it. Obviously you're not going to see a separate process for each input, but relegating all user-generated state to a process separate from the rendering engine might be a good idea.

    I'm having trouble understanding your argument about sandboxing, but it seems that you're for it. Separate processes greatly increase the degree of security in this case. Malicious coders would have to find a vulnerability in both the browser and the operating system to get around what Google is doing. If sandboxing is implemented in the browser alone, there's no operating system security to step in if there is a vulnerability in the browser.

    As for the JavaScript engine... Yes, Firefox's is faster, but it's also more mature. The architecture for V8 has a lot of potential, and who knows what kind of speed increases we'll see after further development. Also, there may be reasons other than speed that make a full JavaScript virtual machine a good way to go. It's good to have competing solutions which drive innovation.

  • Re:Processes (Score:5, Insightful)

    by apoc.famine ( 621563 ) <apoc.famine@NOSPAM.gmail.com> on Wednesday September 10, 2008 @09:50PM (#24956025) Journal
    Currently that's spelled "half of the internet". The other half is flash.
  • Re:quite ironic (Score:5, Insightful)

    by the_B0fh ( 208483 ) on Wednesday September 10, 2008 @10:43PM (#24956493) Homepage

    And debugging threads is easy? Oh boy, talk about a crack pipe.

  • Re:Deja vu (Score:3, Insightful)

    by FishWithAHammer ( 957772 ) on Thursday September 11, 2008 @12:08AM (#24957343)

    I forgot about longjmp, my bad. That said, with longjmp you can come back from it, *sort of*, but as you said, you can't really guarantee program state, and so that's not really very useful.

  • Re:Processes (Score:2, Insightful)

    by HanoiAnon ( 1361203 ) on Thursday September 11, 2008 @03:55AM (#24958841)

    Funny. Firefox crashes every single day for me, usually 2-10 times per day. If I go to youtube, it's guaranteed to crash within an hour.

    It sure would be nice if it didn't crash my entire browser session every single time the current tab crashed (hint: flash. Hint2: Flash.)

    I would *@#$@*# kill for a browser that didn't suck quite as horrifically as firefox.

    Uh... So why are you still using it??

  • Re:Processes (Score:3, Insightful)

    by Tweenk ( 1274968 ) on Thursday September 11, 2008 @08:27AM (#24960265)

    Even better would be running just the plugins in separate processes. This way you don't even lose the tab that crashes Flash, only the problematic Flash video.
