In IE8 and Chrome, Processes Are the New Threads 397

SenFo writes "To many of the people who downloaded Google Chrome last week, it was a surprise to observe that each opened tab runs in a separate process rather than a separate thread. Scott Hanselman, Lead Program Manager at Microsoft, discusses some of the benefits of running in separate processes as opposed to separate threads. A quote: 'Ah! But they're slow! They're slow to start up, and they are slow to communicate between, right? Well, kind of, not really anymore.'"
  • by michaelepley ( 239861 ) on Wednesday September 10, 2008 @05:06PM (#24952465) Homepage
    Running tabs in separate processes for fault/crash isolation is fine, but it's only one benefit. However, 1) tabs running in separate threads shouldn't bring down the entire browser if the application was properly designed in the first place; and 2) I'm sure we'll still find plenty of ways to crash the primary process and/or cause even separately running processes to do this.
  • Re:Hilarious... (Score:3, Informative)

    by dzfoo ( 772245 ) on Wednesday September 10, 2008 @05:21PM (#24952709)

    As has been mentioned before, this *is* how browsers started. Back in the Stone Age, when the first browsers were created, they were specialized applications that could view one site at a time. In order for you to open multiple pages, you had to start multiple instances of the application, thus multiple processes.

    Eventually, it was deemed more efficient to allow a single process to open multiple pages using multiple threads. So, again, this is nothing new, just a reversal to old ideas whose merits are debatable.

            -dZ.

  • Processes in Vista (Score:5, Informative)

    by hardburn ( 141468 ) <hardburn.wumpus-cave@net> on Wednesday September 10, 2008 @05:22PM (#24952725)

    I remember a story from a long time ago, during Longhorn's early development, where Microsoft did a study of the CPU cycles needed for various tasks on WinXP and Linux. I've never been able to track the study down again, but I remember that creating a new process took about an order of magnitude more cycles on Windows than on Linux. Linux processes are also lighter-weight in general; Linux admins think nothing of having 100 processes running, while Windows admins panic when it hits 50.

    (The basic reasoning goes that Linux has an awesome processes model because its thread model sucks, and Windows has an awesome thread model because its process model sucks. That's why Apache2 has pluggable modules to make it run with either forking or threading.)

    A lot of development from the early Longhorn was scrapped, so how does Vista fare? Does its process model still suck?
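The cost gap described above is easy to probe on your own machine. A rough sketch in Python (standing in for native code here; absolute numbers vary wildly by OS and hardware, which is the parent's point):

```python
# Rough timing sketch: cost of spawning OS processes vs threads.
# Illustrative only -- interpreter overhead dominates, but the relative
# gap between the two still shows up.
import multiprocessing
import threading
import time

def noop():
    pass

def time_spawns(spawn, n=20):
    """Return seconds taken to create, start, and join n workers."""
    start = time.perf_counter()
    workers = [spawn() for _ in range(n)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    t_threads = time_spawns(lambda: threading.Thread(target=noop))
    t_procs = time_spawns(lambda: multiprocessing.Process(target=noop))
    print(f"20 threads:   {t_threads:.4f}s")
    print(f"20 processes: {t_procs:.4f}s")
```

On a typical Linux box the process column is noticeably larger; on Windows the gap widens further, matching the study the parent recalls.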

  • by bluefoxlucid ( 723572 ) on Wednesday September 10, 2008 @05:32PM (#24952921) Homepage Journal

    The basic reasoning goes that Linux has an awesome processes model because its thread model sucks,

    NPTL has scaled to hundreds of millions of threads created in under 10 seconds on common hardware....

  • Erlang Browser (Score:3, Informative)

    by bjourne ( 1034822 ) on Wednesday September 10, 2008 @05:36PM (#24952975) Homepage Journal

    Seems like they have taken a leaf from the book of Erlang wisdom here. If you were to write a browser in Erlang, using one (Erlang) process per tab is exactly how you would have written it. I think it shows that designing software for robustness, something previously done mostly for high-availability enterprise systems, is now reaching the desktop.

    Wouldn't surprise me if the next cool browser innovation will be hot code swapping so that you won't have to close all your 5324 tabs just to install the latest browser security fix. At which point they have reinvented Erlang. :)

  • by B3ryllium ( 571199 ) on Wednesday September 10, 2008 @05:41PM (#24953045) Homepage

    "to load 8 sites" ... 8 sites that you visit frequently and thus have cached on your Firefox installation, perchance?

    Don't be so rash to judge. Chrome lags behind Firefox in many other areas, but speed isn't generally one of them. I've heard many users say that it loads pages more than twice as fast as Firefox, and also scrolls much faster on graphics- and data-intensive pages.

    The lack of extensions (such as adblock, firebug/firephp, flash block, noscript, coloured tabs) is the main reason why I've barely used it since I installed it.

  • Re:Erlang Browser (Score:3, Informative)

    by FishWithAHammer ( 957772 ) on Wednesday September 10, 2008 @06:08PM (#24953535)

    It has nothing to do with Erlang and everything to do with basic design principles. Erlang did not invent what it preaches.

  • by AbRASiON ( 589899 ) * on Wednesday September 10, 2008 @06:16PM (#24953683) Journal

    Something is wrong with your PC then.
    I love FF and have no interest in chrome without all the addons FF provides.
    That being said, Chrome was insanely fast, really, really fast - easily the fastest web browser I've ever seen, including clean installs of FF1 / 1.5 / 2 and 3.

  • by bill_mcgonigle ( 4333 ) * on Wednesday September 10, 2008 @06:39PM (#24953997) Homepage Journal

    A single process (single-threaded or multi-threaded) would have OS limits on the open file/socket descriptors much lower than 100000.

    I haven't tried it yet myself but supposedly erlang servers do this kind of thing regularly. Somebody here probably knows if you use ulimit or whatever to tune that.
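On POSIX systems, the descriptor ceiling mentioned above can be inspected and, up to the hard limit, raised from inside the process itself; the shell's `ulimit -n` adjusts the same knob. A small sketch using Python's `resource` module (POSIX-only; illustrative):

```python
# Inspect and, if permitted, raise this process's open-file-descriptor
# limit -- the knob the comment refers to as "ulimit".
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")

# A single-process server juggling 100,000 sockets needs soft >= 100000.
# Raising the soft limit up to the hard limit needs no special privileges;
# raising the hard limit itself requires root (or a limits.conf change).
target = 100_000
if soft < target <= hard:
    resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
```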

  • Re:Processes (Score:5, Informative)

    by ThePhilips ( 752041 ) on Wednesday September 10, 2008 @07:22PM (#24954525) Homepage Journal

    Threads historically were expensive because historically nobody used them. E.g., on Windows, IO multiplexing had a history of bugs and didn't work in the beginning, so you had to use threads (and people got used to threads and even now use them instead of NT's IO multiplexing). On *nix, IO multiplexing more or less always worked, thus threads were (and are) used rarely.

    Now, since the number of CPUs has increased dramatically in recent years, threads were optimized to be fast: developers now throw them as a panacea at any task at hand. (Stupidest use of threads seen this month: starting a new thread for every child application just to wait for its termination; due to sloppy code, it still might miss the termination.)

    As a system developer, I have gone through the user-space parts of the Linux 2.6 and Solaris 10 threads implementations (in a disassembler; x64 and SPARC64 respectively) and can say that they are implemented well. (I was looking for an example of an atomic-ops implementation.) The kernel parts of both Linux and Solaris are well known to perform extremely well, since they were tuned to support extremely large Java applications (and Java only now got IO multiplexing - before that, threads were the only option for performing multiple IO tasks simultaneously). HP-UX 11.x also didn't show any abnormalities during internal benchmarks, and generally its implementation is faster than that of Solaris 10 (after leveling the results by CPU speed; SPARC64 vs. Itanic 2).

    But I guess "slow *nix threads" is now a myth of the same kind as "slow Windows process creation." (The problem, of course, is that process creation in Windows will always remain expensive compared to *nix. Not that those milliseconds of difference matter much for desktop applications.)

  • by FooBarWidget ( 556006 ) on Wednesday September 10, 2008 @07:41PM (#24954707)

    "If you're writing a server to handle 100,000 connections simultaneously you probably want to use threads."

    Actually, if you want to scale to 100000 connections then you will *not* want to use threads. Google "C10K problem".
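The C10K-style alternative is event multiplexing: one thread watching every socket via epoll/kqueue, rather than one thread (or process) per connection. A minimal sketch of the pattern with Python's `selectors` module (an illustration, not a production server):

```python
# One thread, many sockets: register an accept handler for the listening
# socket and an echo handler per connection; a single loop dispatches
# whichever sockets are ready.
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server):
    conn, _ = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn):
    data = conn.recv(4096)
    if data:
        conn.sendall(data)
    else:
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

# The event loop -- a single thread serves every ready socket in turn:
# while True:
#     for key, _ in sel.select():
#         key.data(key.fileobj)
```

No per-connection stacks, no context switches between workers; the scalability ceiling becomes the descriptor limit and the kernel's readiness mechanism, not thread count.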

  • by Shin-LaC ( 1333529 ) on Wednesday September 10, 2008 @08:42PM (#24955409)
    I went looking for the same information earlier today. Surprisingly, the design document titled "How Chromium Displays Web Pages" [chromium.org] doesn't shed any light on that, at least at this time. You have to dive into the source [chromium.org] to find out.

    Basically, a single process (the one main browser process) owns the window and draws to it. Renderer processes draw their web content into shared memory; the browser process then transfers the data into a backing store, which it uses to paint the window. The process is coordinated via inter-process message-passing (using pipes, it seems), but the rendering output travels via shared memory.
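That split (small control messages over a pipe, bulk pixel data through shared memory) can be sketched with Python's multiprocessing primitives. The names here are illustrative, not Chromium's actual API:

```python
# Control messages travel over a pipe; the bulk "framebuffer" lives in
# shared memory so the pixels never have to be copied through the pipe.
import multiprocessing as mp

def renderer(conn, shm_buf):
    # "Render" a frame by filling the shared buffer, then signal readiness
    # with a tiny message over the pipe.
    for i in range(len(shm_buf)):
        shm_buf[i] = i % 256
    conn.send("frame_ready")
    conn.close()

if __name__ == "__main__":
    shm = mp.Array("B", 16, lock=False)      # shared "framebuffer"
    parent_conn, child_conn = mp.Pipe()
    p = mp.Process(target=renderer, args=(child_conn, shm))
    p.start()
    assert parent_conn.recv() == "frame_ready"  # control message via pipe
    p.join()
    print(bytes(shm[:8]))                    # bulk data read from shared memory
```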
  • Re:Processes (Score:4, Informative)

    by smartdreamer ( 666870 ) on Wednesday September 10, 2008 @09:36PM (#24955895)
    Actually, threads are heavier in Windows than in every other OS, including Linux, MacOSX and Singularity. Still, they are way cheaper than a process, and that is the whole point of their existence. As for .NET, it uses its own built-in threading model; .NET threads are different from OS threads.
  • Re:Processes (Score:4, Informative)

    by Firehed ( 942385 ) on Wednesday September 10, 2008 @09:55PM (#24956085) Homepage

    A few onclick events and ajax calls do not make up an application. Something that requires heavy debugging does, and chances are you're reinventing the wheel if that's the case in Javascript (see: jQuery, MooTools, etc).

  • Re:Processes (Score:5, Informative)

    by jc42 ( 318812 ) on Wednesday September 10, 2008 @09:58PM (#24956127) Homepage Journal

    Running each instance in a separate process is NOT new technology

    Well of course. It isn't even new in the browser world. In fact it's where we started.

    And, as we old-timers know, this architecture was the basis of the original Bell Labs unix system, back in the 1970s. Lots of little, single-task processes communicating via that newfangled sort of file called a "pipe". That was before the advent of our fancy graphical displays, of course.

    Somewhat later, in the mid-1980s when we had graphical displays, the tcl language's tk graphics package encouraged the same design, building on the usual unix-style pipes and forked processes. The language had a simple, elegant "send" command that would let a sub-process run arbitrary commands (typically function calls) inside the parent process using the sub-process's data. The idea was that you have a main process responsible for maintaining the on-screen window(s), while the work is done by the sub-processes. This design prevented blocking in the GUI, because the actions that would block were done in the other processes. The result was a language in which GUI tools could be developed very quickly, and would take maximal advantage of any parallelism supplied by the OS.

    But that was all decades ago, before most of our current programmers had ever touched a computer. Imagine if we only knew how to design things that way today. Is it possible that current software developers are rediscovering this decades-old sort of design?

  • Re:Deja vu (Score:3, Informative)

    by ozphx ( 1061292 ) on Thursday September 11, 2008 @12:08AM (#24957341) Homepage

    As a minor nitpick, Sleep(0) can return immediately. You'll end up with the main thread burning CPU in a tight loop if nothing is waiting.

    Thread.Join would be more appropriate, or using Monitor.* manually.
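The same pitfall exists outside .NET. In Python terms (illustrative; the parent is talking about the Win32/.NET primitives), polling with `sleep(0)` busy-spins in a tight loop, while `join` parks the caller until the worker actually finishes:

```python
# Blocking join vs busy-wait: the join variant lets the scheduler park
# the main thread instead of burning CPU polling for completion.
import threading
import time

done = []

def worker():
    time.sleep(0.1)
    done.append(True)

t = threading.Thread(target=worker)
t.start()

# Busy-wait (what Sleep(0) amounts to) -- burns CPU in a tight loop:
# while not done:
#     time.sleep(0)

# Blocking join: the main thread sleeps until worker returns.
t.join()
assert done == [True]
```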

  • by caerwyn ( 38056 ) on Thursday September 11, 2008 @12:34AM (#24957567)

    You cannot so isolate threads without effectively making them separate processes. If the threads *can* write into other memory, then there is the danger that broken code *will* do so. What code you write doesn't matter in the least: get a dangling pointer and you'll be writing to something random, which may very well be some other thread's memory space. If the OS doesn't enforce the isolation, you effectively have no isolation.

    No code is perfect. The process model is an acknowledgement of that fact. When it comes to plugins, the code is third-party- adobe, etc. Do you really want the stability of the browser as a whole to depend on third-party code?

    You acknowledge the security vulnerability possibility, but you seem to be missing that the process model is an example of the sort of sandboxing that is precisely appropriate for limiting the impact of those vulnerabilities. Given sufficient OS-level control, in fact, you could run plugins in subprocesses specifically barred from touching any other portion of the system- that's precisely the sort of the risk-mitigation that we want browsers to be moving toward.

    I certainly agree that the process model as currently implemented continues to have some efficiency issues, and the incompleteness of its implementation wrt Chrome reusing processes and the like impairs or removes some of its benefits. But I'm not sure how it can be held that the segregation and robustness benefits are actually downsides, as you seem to be saying - if Flash crashes or hangs due to a broken flash app, I'd really rather not lose the email I'm writing in another tab.
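The isolation argument in miniature, sketched in Python: a "plugin" process that dies abruptly cannot take the parent down, whereas a crashing thread shares the parent's address space and its fate. (`os._exit(1)` stands in for a native-code crash here.)

```python
# A child process that "crashes" hard: the parent observes the nonzero
# exit code and keeps running, exactly the property wanted for plugins.
import multiprocessing as mp
import os

def plugin():
    os._exit(1)  # simulate an abrupt native-code crash in the plugin

if __name__ == "__main__":
    p = mp.Process(target=plugin)
    p.start()
    p.join()
    print(f"plugin exit code: {p.exitcode}; browser still running")
```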

  • by Anonymous Coward on Thursday September 11, 2008 @05:38AM (#24959331)

    You are correct, but using processes would be even worse.

  • Re:Processes (Score:3, Informative)

    by the_olo ( 160789 ) on Thursday September 11, 2008 @06:45AM (#24959627) Homepage

    Corollary: Revamp the plugin architecture so that plugins have to run in a separate process.

    I'm beginning to wonder if that fellow had any inside knowledge...

    Are you kidding? This idea is the subject of a popular, but ignored request for enhancement [mozilla.org] filed back in Mozilla's Bugzilla in 2002!

    It has 81 votes and 103 users on CC list. The idea is ages old, the successful implementation is new.

    Now if only the Mozilla guys finally got around to implementing it in their browser... Otherwise you'll always get folks blaming the browser for crashes that are in fact caused by proprietary plugins.
