
 




In IE8 and Chrome, Processes Are the New Threads 397

SenFo writes "To many of the people who downloaded Google Chrome last week, it was a surprise to observe that each opened tab runs in a separate process rather than a separate thread. Scott Hanselman, Lead Program Manager at Microsoft, discusses some of the benefits of running in separate processes as opposed to separate threads. A quote: 'Ah! But they're slow! They're slow to start up, and they are slow to communicate between, right? Well, kind of, not really anymore.'"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by bigtallmofo ( 695287 ) * on Wednesday September 10, 2008 @03:57PM (#24952317)
    I may be inadvertently responsible for Internet Explorer 8's use of separate processes for each tab. Months ago, when they invited me to install the beta of their latest web browser, I told them to do something that sounds very similar to "Go fork yourself!"

    I think they took that as architectural advice.
  • Processes (Score:5, Interesting)

    by Ethanol-fueled ( 1125189 ) * on Wednesday September 10, 2008 @03:57PM (#24952331) Homepage Journal
    Running each instance in a separate process is NOT new technology; hell, any n00b who knows what JCreator is has seen that option before (see this [slashdot.org] comment I posted a while back).

    Increases in computing power have made the perceived sluggishness of running multiple processes insignificant -- if Chrome won't run smoothly on that Pentium 2 of yours, then perhaps you should install command-line Linux anyway! :)

    Regarding Chrome, check out this [slashdot.org] response to my comment I linked to above, posted on June 30. At the time, I thought it was just an extension of a good idea, but since his comment was posted before Chrome was released, I'm beginning to wonder if that fellow had any inside knowledge...

    [/tinfoil hat]
    • Re:Processes (Score:5, Insightful)

      by ergo98 ( 9391 ) on Wednesday September 10, 2008 @04:05PM (#24952453) Homepage Journal

      Running each instance in a separate process is NOT new technology

      Well of course. It isn't even new in the browser world. In fact it's where we started.

      The earliest browsers required you to run a new instance for each concurrently opened site. This presented onerous resource demands, so they made it more efficient by having multiple window instances run under one process, and when tabs arrived, that naturally carried over to all tabs running under one shared process.

      This is so much ado about nothing. I can count on one hand the number of times I've had a problem with Firefox that would have been solved by it being in its own process.

      Every bandwagoning technical lightweight is now stomping their feet that Firefox needs to get on this yesterday, but really this is pretty low on the list of things that make a real improvement in people's lives. In fact I would go so far as to call it a gimmick. Assuming the browser's sandbox automatically stops sites from doing stupid stuff (unlike IE, which will let a site kill it just by going into a perpetual loop in JavaScript), and that plug-ins aren't written by an idiot, this is completely unnecessary.

      Chrome's fast JavaScript engine is a real story, and Firefox's TraceMonkey doing one better is another. Those are real improvements that really do matter.

      • Re: (Score:2, Insightful)

        by 7 digits ( 986730 )

        > I can count on one hand the number of times I've had a problem with Firefox that would have been solved by it being in its own process.

        I restart Firefox roughly twice per hour when developing my JavaScript application. Having 10 concurrent tabs executing heavy JavaScript/Ajax generally hangs the browser.

        Of course, extensions (in particular Firebug) are probably responsible for that, and it is painful but not a showstopper. A process-per-tab model would probably be better for my usage...

      • Um, duh (Score:2, Funny)

        by coryking ( 104614 ) *

        Every bandwagoner, technical lightweight is now stomping their feet that Firefox needs to get on this yesterday

        Of course they want it yesterday... that is why they aren't as smart as you and I. They think you can go back in time.

        Smart people like those reading this comment want it *today* or perhaps tomorrow morning. The honor roll students understand that today or even tomorrow might not be possible and instead are willing to wait a few days. The Mensa crowd and those working on Duke Nukem Forever or Perl6 are willing to wait until the code is the most architecturally perfect code ever written.

        My point, for thos

      • Re:Processes (Score:4, Insightful)

        by gnud ( 934243 ) on Wednesday September 10, 2008 @04:58PM (#24953369)

        This is so much ado about nothing. I can count on one hand the number of times I've had a problem with Firefox that would have been solved by it being in its own process.

        I own a fairly old computer, and every time I open or close a JavaScript-heavy page, or open a PDF file, all the rest of my tabs become unusable for a few seconds. It's not the end of the world, but I can't think of anything I'd rather the Firefox devs spend their time on.

      • Re:Processes (Score:5, Interesting)

        by firefly4f4 ( 1233902 ) on Wednesday September 10, 2008 @05:04PM (#24953485)

        I can count on one hand the number of times I've had a problem with Firefox that would have been solved by it being in its own process.

        Possibly you can, but the biggest one I can count is having a flash plug-in (or similar) crash the entire browser when there's only a problem on one tab. That happens more frequently than I'd care for, so if there was a change that only brought down one tab, that would be great.

        • Re: (Score:3, Insightful)

          by Tweenk ( 1274968 )

          Even better would be running just the plugins in separate processes. This way you don't even lose the tab that crashes Flash, only the problematic Flash video.
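          The isolation Tweenk describes is exactly what POSIX process separation gives you for free: a fatal signal in a forked child never propagates to the parent. A minimal Python sketch of the idea (Unix-only; `run_isolated` and `buggy_plugin` are invented names for illustration, not anything from Chrome or Firefox):

          ```python
          import os
          import signal

          def run_isolated(fn):
              """Run fn in a forked child; report how it exited, without risking the parent."""
              pid = os.fork()
              if pid == 0:          # child: any crash below is contained in this process
                  fn()
                  os._exit(0)
              _, status = os.waitpid(pid, 0)
              if os.WIFSIGNALED(status):
                  return ("crashed", os.WTERMSIG(status))
              return ("exited", os.WEXITSTATUS(status))

          def buggy_plugin():
              os.kill(os.getpid(), signal.SIGSEGV)   # simulate a native plugin segfault

          print(run_isolated(buggy_plugin))   # the parent survives the child's "crash"
          print(run_isolated(lambda: None))   # a well-behaved "plugin"
          ```

          A plugin host built this way loses one video, not the browser.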

      • Re: (Score:3, Insightful)

        Agreed - crashes in the browser itself are rare - Firefox seems very reliable in that way.
        However, crashes in plugins can be common, and indeed trusting a big binary blob to invasively use your process safely just seems like a bad idea. So I would say that Firefox should definitely go for that part and not worry about the process-per-tab part.
        Well, like most good ideas, it has been in Bugzilla for years!
        https://bugzilla.mozilla.org/show_bug.cgi?id=156493 [mozilla.org]

      • Re: (Score:3, Insightful)

        by Tumbleweed ( 3706 ) *

        This is so much ado about nothing. I can count on one hand the number of times I've had a problem with Firefox that would have been solved by it being in its own process.

        Every bandwagoner, technical lightweight is now stomping their feet that Firefox needs to get on this yesterday, but really this is pretty low on the list of things that make a real improvement in people's lives.

        You've made here the classic mistake of thinking that everyone else uses a piece of technology in the same way that you do. This i

      • Re: (Score:3, Interesting)

        by EvanED ( 569694 )

        It isn't even new in the browser world. In fact it's where we started.

        Though it is pretty new for a mainstream browser to make that an explicit design choice.

        I can count on one hand the number of times I've had a problem with Firefox that would have been solved by it being in its own process.

        Either you're lucky or I'm unlucky... there was a couple years when I was restarting Firefox probably about once a day on average because the memory use would shoot up to the point where it was basically unusable, and clo

      • Re: (Score:3, Insightful)

        Chrome is doing more than browsers originally did. It's got a master process that's watching over everything else. The processes are also running at multiple different privilege levels. This may not be something that's absolutely new, but it does show innovation. There's nothing wrong with evolutionary progress. So-called "revolutionary" ideas often end up being less useful.

        I find that browsers will crash or hang fairly often if a page is poorly coded or a plugin reacts badly. Unfortunately, people will alw

      • Re:Processes (Score:5, Informative)

        by jc42 ( 318812 ) on Wednesday September 10, 2008 @08:58PM (#24956127) Homepage Journal

        Running each instance in a seperate process is NOT new technology

        Well of course. It isn't even new in the browser world. In fact it's where we started.

        And, as we old-timers know, this architecture was the basis of the original Bell Labs Unix system, back in the 1970s. Lots of little single-task processes communicating via that newfangled sort of file called a "pipe". That was before the advent of our fancy graphical displays, of course.

        Somewhat later, in the mid-1980s when we had graphical displays, the Tcl language's Tk graphics package encouraged the same design, building on the usual Unix-style pipes and forked processes. The language had a simple, elegant "send" command that would let a sub-process run arbitrary commands (typically function calls) inside the parent process using the sub-process's data. The idea was that you have a main process responsible for maintaining the on-screen window(s), while the work is done by the sub-processes. This design prevented blocking in the GUI, because the actions that would block were done in the other processes. The result was a language in which GUI tools could be developed very quickly, and would take maximal advantage of any parallelism supplied by the OS.

        But that was all decades ago, before most of our current programmers had ever touched a computer. Imagine if we only knew how to design things that way today. Is it possible that current software developers are rediscovering this decades-old sort of design?
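        The Bell Labs pattern jc42 describes, small worker processes wired to a coordinating parent through pipes, still works the same way today. A toy Python sketch of it (Unix-only; `pipeline_child` is an invented helper name, not part of any browser):

        ```python
        import os

        def pipeline_child(work):
            """Run work() in a forked child and stream its result back over a pipe."""
            r, w = os.pipe()
            pid = os.fork()
            if pid == 0:                      # child: the "worker" end of the pipeline
                os.close(r)
                with os.fdopen(w, "w") as out:
                    out.write(work())
                os._exit(0)
            os.close(w)                       # parent: read until the child closes its end
            with os.fdopen(r) as pipe_in:
                result = pipe_in.read()
            os.waitpid(pid, 0)
            return result

        print(pipeline_child(lambda: "hello from the worker process"))
        ```

        The parent stays responsive while the child does the work, which is precisely the blocking-avoidance property described above.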

      • Re:Processes (Score:5, Interesting)

        by Chrisje ( 471362 ) on Thursday September 11, 2008 @02:33AM (#24958699)

        The funny thing is that I'm working on a shiny new HP 6910p laptop. The kind with ~6 hrs battery life, a good deal of memory, a fast CPU and even a decent GPU. Everyone goes on and on about how the "cost" of starting new processes all the time is no longer significant, but I really noticed the difference. I run Firefox 3 with a whole bunch of plug-ins and a nice skin. That contraption, in spite of the plug-ins, feels quite a bit faster than Chrome does out of the box. I tested Chrome yesterday, and at the end of 6 hours of work (in which everything worked off the bat, even starting my Citrix apps from a web portal, kudos for that) I concluded that Firefox feels leaner and meaner, and went back to it.

        One of the major gripes I have is that Firefox feels much quicker with regard to my proxy. We run a proxy configuration script which gives us different settings depending on which office we're in, and in Firefox I never notice the damn thing running. In Chrome, whenever I open a new tab, I see the damn thing executing. New process, sandboxing (or whatever you call it)... bah, humbug. I agree with the parent poster: after 3 versions of Firefox on my system, I can count on the fingers of one hand the times that isolation would have come in handy, and it makes the experience noticeably slower.

        To cut a long story short: I appreciate it's a beta. Come release time I'll give it another whirl. But right now, I don't see what the big hubbub is about, save the fact that there's another open-source competitor on the market, which is always good. What is funny is that when you uninstall Chrome, a dialogue pops up asking "Are you sure you want to uninstall Google Chrome? - Was it something we said?"

    • Re: (Score:3, Interesting)

      Using processes over threads will also benefit when it comes to cluster computing. You can't really migrate a thread to another node, because then you have shared memory coherency issues. However, migrating a process is much easier.
      • Re: (Score:3, Insightful)

        Not to rain on your parade, but exactly how are you intending to use browsers in cluster computing? Are you expecting to have so many tabs that a full compute cluster is needed to run them? Your post seems to completely forget that we are talking about a web browser!
    • Re:Processes (Score:5, Insightful)

      by ThePhilips ( 752041 ) on Wednesday September 10, 2008 @04:26PM (#24952795) Homepage Journal

      Running each instance in a separate process is NOT new technology [...]

      True, *nix has done that for the last 3 decades.

      The point here (and of TFA) is that processes are finally becoming cheaper on Windows, making them usable for all the nasty stuff *nix has been indulging in all along.

      On NT 3.5, creating a new process sometimes took as long as 300 ms. Imagine Chrome on NT: if you opened three tabs and started three new processes, that alone would have taken about one second.

      Unix never had this problem. It's Windows-specific. And they are improving.
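      The comparison is easy to sanity-check on a modern Unix machine. A rough Python measurement of the fork + exit + wait round trip (the function name is invented; actual numbers vary widely by machine, so none are claimed here):

      ```python
      import os
      import time

      def time_process_creation(n=50):
          """Average cost of a fork()+_exit()+waitpid() round trip, in milliseconds."""
          start = time.perf_counter()
          for _ in range(n):
              pid = os.fork()
              if pid == 0:
                  os._exit(0)       # child does nothing but exit
              os.waitpid(pid, 0)
          return (time.perf_counter() - start) / n * 1000.0

      print(f"process creation: {time_process_creation():.3f} ms per process")
      ```

      On current hardware this typically lands well under the 300 ms NT 3.5 figure quoted above.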

      • Unix still has some pretty gnarly issues with threads being relatively expensive though, no? IIRC, they're nowhere near as cheap as Windows threads (and while you can do anything with processes that you can with threads, I think it's pretty clear that there are some big wins to retaining shared address space instead of doing IPC/shared memory files/whatever).

        • Re:Processes (Score:5, Informative)

          by ThePhilips ( 752041 ) on Wednesday September 10, 2008 @06:22PM (#24954525) Homepage Journal

          Threads were historically expensive because historically nobody used them. E.g. on Windows, IO multiplexing had a history of bugs and didn't work in the beginning, so you had to use threads (and people got used to threads and even now use them instead of NT's IO multiplexing). On *nix, IO multiplexing more or less always worked, so threads were (and are) used rarely.

          Now, since the number of CPUs has increased dramatically in recent years, threads have been optimized to be fast, and developers now throw them at any task at hand as a panacea. (Most stupid use of threads seen this month: starting a new thread for every child application just to wait for its termination; due to stupid code, it still might miss the termination.)

          As a system developer, I have gone through the user-space parts of the Linux 2.6 and Solaris 10 threads implementations (in a disassembler; x64 and SPARC64 respectively) and can say that they are implemented well. (I was looking for an example of an atomic-ops implementation.) The kernel parts of both Linux and Solaris are well known to perform extremely well, since they were tuned to support extremely large Java applications (and Java has only now got IO multiplexing; before that, threads were the only option for performing multiple IO tasks simultaneously). HP-UX 11.x also didn't show any abnormalities during internal benchmarks, and generally its implementation is faster than Solaris 10's (after leveling the results by CPU speed; SPARC64 vs. Itanic 2).

          But I guess "slow *nix threads" is now a myth of the same kind as "slow Windows process creation." (The problem, of course, is that process creation on Windows will always remain expensive compared to *nix. But those milliseconds of difference don't matter much for desktop applications.)
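          For reference, the *nix IO-multiplexing style contrasted with threads above looks like this: one thread watching many descriptors at once with select(). A small self-contained Python sketch, using socketpairs in place of real network connections:

          ```python
          import select
          import socket

          # Three connections, one thread: classic select()-based multiplexing.
          pairs = [socket.socketpair() for _ in range(3)]
          local_ends = [a for a, _ in pairs]

          # The "remote" ends each send a message; a single loop collects them all
          # without dedicating a thread per connection.
          for i, (_, remote) in enumerate(pairs):
              remote.sendall(f"msg {i}".encode())

          messages = []
          while len(messages) < 3:
              ready, _, _ = select.select(local_ends, [], [], 1.0)
              for sock in ready:
                  messages.append(sock.recv(1024).decode())
                  local_ends.remove(sock)  # each end delivers exactly one message here

          print(sorted(messages))  # → ['msg 0', 'msg 1', 'msg 2']
          ```

          The same shape scales to poll/epoll/kqueue; the point is that readiness notification replaces a thread per IO source.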

    • Re: (Score:3, Informative)

      by the_olo ( 160789 )

      Corollary: Revamp the plugin architecture so that plugins have to run in a separate process.

      I'm beginning to wonder if that fellow had any inside knowledge...

      Are you kidding? This idea is the subject of a popular, but ignored request for enhancement [mozilla.org] filed back in Mozilla's Bugzilla in 2002!

      It has 81 votes and 103 users on the CC list. The idea is ages old; the successful implementation is new.

      Now if only Mozilla guys got to finally implement it in their browser... Otherwise you'll always get folks blaming the

  • So... (Score:5, Insightful)

    by Anonymous Coward on Wednesday September 10, 2008 @03:59PM (#24952367)

    ...his argument that processes aren't really slower than threads anymore is because your processor is faster?

    • Re:So... (Score:4, Insightful)

      by moderatorrater ( 1095745 ) on Wednesday September 10, 2008 @04:05PM (#24952447)
      Yep, kind of like how anti-aliasing isn't really slower than straight rendering any more because I've got a better video card.
    • Re: (Score:3, Funny)

      by InlawBiker ( 1124825 )
      Yeah, exactly. Why wait for programmers to come up with solid multi-threaded code? $150 now gets you a dual-core CPU and 4 GB of RAM. Just hope your browser doesn't crash while you're there....
    • by clodney ( 778910 )

      No, his argument is that with a faster processor he no longer has to care that they are slower.

      The threshold of caring is subjective. If you are launching a new process to respond to a mouse move message you probably still care that process launch is expensive.

      If you are launching a process to create a new tab, something which is governed by human scale time perception, you probably don't care. Especially since almost all the pages you need are already in RAM, so you may not even hit the disk.

  • Oblig (Score:5, Funny)

    by plopez ( 54068 ) on Wednesday September 10, 2008 @04:00PM (#24952387) Journal

    "The 70's called...." I can't bring myself to say the rest....

  • Deja vu (Score:5, Insightful)

    by overshoot ( 39700 ) on Wednesday September 10, 2008 @04:03PM (#24952415)
    I remember the "processes vs. threads" argument, but last time around wasn't it Microsoft arguing that a threaded process model was superior to an isolated task model like Linux had? Weren't the Linux camp blowing the horn for the superior robustness and security of full task isolation?

    My head hurts, I'm confused.

    • Re: (Score:3, Interesting)

      by LWATCDR ( 28044 )

      Well, the truth is that Chrome might not be as slow under Linux as it is under Windows.
      If I remember correctly, Windows is really slow at starting a new process while Linux is pretty fast. That was one reason why Apache was so slow on Windows and why they went to threads.

      • Re: (Score:3, Interesting)

        by rrohbeck ( 944847 )

        But the speed at which Chrome and IE8 spawn new processes depends on user interaction. Unless you use something like FF's Linky extension that allows you to open 99 tabs at a time, you won't notice a performance hitch. I don't think you can click faster than your system can start processes - unless it's *really* maxed out and/or paging. Which, BTW, happened to me just yesterday when FF3's VM size approached 1GB (after a week or so.) Killing the process and letting it restore windows and tabs reduced the VM

    • by bill_mcgonigle ( 4333 ) * on Wednesday September 10, 2008 @04:36PM (#24952979) Homepage Journal

      There are at least three problems here.

      One is efficiency. Nobody will argue that a properly implemented multi-threaded design is less efficient than spawning a new process per job. If you're writing a server to handle 100,000 connections simultaneously, you probably want to use threads.

      One is necessity. If you're only going to have at most a couple hundred threads you don't need to think in terms of 100,000 processes - orders of magnitude change things.

      The last is correctness. Most multi-threaded browsers aren't actually implemented correctly. So they grow in resource consumption over time and you have to do horrendous things like kill the main process and start over, which loses at least some state with current implementations.

      So theory vs. reality vs. scale. There's no "one true" answer.

      • by FooBarWidget ( 556006 ) on Wednesday September 10, 2008 @06:41PM (#24954707)

        "If you're writing a server to handle 100,000 connections simultaneously you probably want to use threads."

        Actually, if you want to scale to 100,000 connections then you will *not* want to use threads. Google "C10K problem".

    • Re:Deja vu (Score:5, Interesting)

      by Anonymous Coward on Wednesday September 10, 2008 @04:43PM (#24953089)

      Windows people never really understood processes; they cannot distinguish them from programs (look at CreateProcess). They traditionally haven't had cheap processes, and they abuse threads.

      In Linux we have NPTL now, so there is a robust threads implementation if you need it. I don't think processes are "superior" to threads (processes sharing their address space) or the other way round. They are for different purposes. If you need different operations sharing a lot of data, I would say go for threads.

    • by Firehed ( 942385 )

      Probably, but which one is more stable? They can argue all they want, but the results still speak for themselves.

      Obviously it's not impossible that the IE8 team acknowledged this. Not unlike people blasting a politician for changing his stance on an issue from something stupid to something good for the masses. It's like being of the opinion that mysql_query("SELECT * FROM users where id = {$_GET['id']}"); is a good idea - you're still just plain wrong. Sure, you avoid the overhead of calling a mysql_rea

    • Re:Deja vu (Score:5, Insightful)

      by FishWithAHammer ( 957772 ) on Wednesday September 10, 2008 @04:54PM (#24953275)

      Both have pluses and minuses, as with anything. (I won't speak to the Unix model as I am not terribly conversant with it, but I know a good bit about the Windows model of threaded processes.)

      A threaded process model has one enormous advantage: you stay within the same address space. Inter-process communication is annoying at best and painful at worst; you have to do some very ugly things like pipes, shared memory, or D-Bus (on Linux, that is). Using the threaded process model, I can do something like the following (it's C#-ish and off the cuff, so it may not compile, but it should be easy to follow):

      class Foo
      {
          static readonly object o = new object(); // mutex lock, via C#'s built-in lock statement
          static SomeClass c = new SomeClass();     // static so the static thread funcs can reach it

          static void Main(string[] args)
          {
              Thread t = new Thread(ThreadFunc1);
              Thread t2 = new Thread(ThreadFunc2);
              t.Start();
              t2.Start();
              while (t.IsAlive || t2.IsAlive) { Thread.Sleep(0); } // cede time
          }

          static void ThreadFunc1()
          {
              while (true)
              {
                  lock (o) // both threads serialize on the same monitor
                  {
                      c.DoFunc1();
                  }
              }
          }

          static void ThreadFunc2()
          {
              while (true)
              {
                  lock (o)
                  {
                      c.DoFunc2();
                  }
              }
          }
      }

      In an isolated task model, this is nowhere near as simple. The problem, though, is that one thread can, at least in C++, take down the whole damn process if something goes sour. (You can get around that in .NET with stuff like catching a NullReferenceException, but you'll almost certainly be left in an unrecoverable state and have to either kill the thread or kill the program.) The Loosely Coupled Internet Explorer (LCIE) model is forced to use processes to avoid taking everything down when one tab barfs up its lunch.

      • Re: (Score:3, Informative)

        by ozphx ( 1061292 )

        As a minor nitpick, Sleep(0) can return immediately. You'll end up with the main thread burning CPU in a tight loop if nothing is waiting.

        Thread.Join would be more appropriate, or using Monitor.* manually.

    • Re:Deja vu (Score:5, Insightful)

      by shird ( 566377 ) on Wednesday September 10, 2008 @04:54PM (#24953279) Homepage Journal

      For the majority of local applications, a threaded model is superior. This is because local applications can be "trusted" in the sense that they don't need to run each child thread sandboxed etc., so they gain the benefits of greater efficiency without worrying about reduced security. A browser is quite a different beast - it is effectively an OS for running remote "applications" (read: web 2.0 style web sites). So it kind of makes sense to run each as a separate process.

      Windows itself still runs each application in its own process. So it's not right to compare it to Chrome and argue that it doesn't use separate processes, because it does - where it counts.

    • Re:Deja vu (Score:5, Insightful)

      by ratboy666 ( 104074 ) <fred_weigel@ho[ ]il.com ['tma' in gap]> on Wednesday September 10, 2008 @05:06PM (#24953503) Journal

      Yes, you are correct. Unix started with a process model based on fork() and explicit IPC. Threads were "grafted on" later. It tends to result in more robust software (good multi-threading is HARD).

      In Linux a "thread" is a "process", just with more sharing. Thread creation is cheaper in Windows; process creation is cheaper in Linux. I tend to like the isolation that processes offer (multithreading brings with it the joy of variables that can appear to just change by themselves).
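      The "more sharing" difference is easy to demonstrate: a thread's write to a variable is visible to the code that spawned it, while a forked child's write stays in the child's own copy of memory. A small Python illustration (Unix-only because of fork()):

      ```python
      import os
      import threading

      shared = {"value": 0}

      # A thread writes to the same address space, so the parent sees the change.
      t = threading.Thread(target=lambda: shared.__setitem__("value", 1))
      t.start()
      t.join()
      thread_sees = shared["value"]       # 1: memory is shared

      # A forked process gets a copy-on-write duplicate, so its write stays private.
      shared["value"] = 0
      pid = os.fork()
      if pid == 0:
          shared["value"] = 1             # mutates only the child's copy
          os._exit(0)
      os.waitpid(pid, 0)
      process_sees = shared["value"]      # still 0 in the parent

      print(thread_sees, process_sees)  # → 1 0
      ```

      That private copy is exactly the isolation praised here, and also why processes need explicit IPC where threads need none.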

      There was never any good reason NOT to use multiple processes in a browser, except one. The GUI was "unified" among the browser windows, and it was always presumed that it would be too difficult to coordinate the drawing of multiple browsers. Also, the menu bars and controls would have to be assigned to a separate process for each of the browsers. This can be done with an IPC channel, but that code would not have been portable between Unix and Windows at all.

      Since process creation was SO expensive in Windows (in days of old), the "thread" or "lightweight thread" approach was used instead (to maximize portability).

      It is an amazing testament to Google that they have achieved the multi-process, single UI model (I just don't know how they did the portability part).

      • Testament indeed (Score:3, Insightful)

        by overshoot ( 39700 )

        It is an amazing testament to Google that they have achieved the multi-process, single UI model (I just don't know how they did the portability part).

        It's not altogether clear that they have ...

    • Re:Deja vu (Score:5, Insightful)

      by The Raven ( 30575 ) on Wednesday September 10, 2008 @05:22PM (#24953789) Homepage

      This is a misunderstanding of the application.

      Microsoft said 'Threads are better than Processes for a web server', where you're wasting a ton of resources creating a new process for every CGI script that's run. They were right! Now every major web server supports in-process applications that are created once per server (perhaps with a pool of shared app space) rather than once per request.

      Microsoft has never said that all the applications on your computer should run in one thread... that's just crazy talk.

      This is simply a decision by Microsoft and Google to treat a browser tab as an application, rather than as a document. Now that web pages do a lot more processing (and crashing), this makes more sense than the old way. There's nothing particularly bad about using threads instead... Firefox is just fine with threads, I see no reason for them to undertake a massive change due to misplaced hype.

      It really has to do with how much the processes share. If most of the memory per process could be shared, threads are probably more efficient. If not, processes. I'm no browser architect though, so I'll leave it up to Google, Mozilla, and Microsoft to make their own decisions.

  • by michaelepley ( 239861 ) on Wednesday September 10, 2008 @04:06PM (#24952465) Homepage
    Tabs running in separate processes for fault/crash isolation is fine, but it's only one benefit. However: 1) tabs running in separate threads shouldn't bring down the entire browser if the application was properly designed in the first place; and 2) I'm sure we'll still find plenty of ways to crash the primary process and/or cause even separately-running processes to do this.
    • by ceoyoyo ( 59147 ) on Wednesday September 10, 2008 @04:28PM (#24952831)

      From the other perspective, having used IE in the past, I know how easy it is for a page to open lots of popups. In fact, you could open so many popups that it would crash the browser.

      Now that the browser likes opening new processes, an out of control web page can crash my whole OS instead?

    • There's another benefit to separating pages into processes. You can use standard OS tools (top, ps, Task Manager, System Monitor) to find processes that are eating up cycles and kill them. If I have 30 tabs open in Firefox, and one of them has some wonky JavaScript/Flash/Java that is munching the CPU, I have to kill the entire browser and start from scratch. With separate processes, I can shut down the specific offender and continue on (assuming it isn't the browser itself). I find this to be the most a
    • Re: (Score:3, Interesting)

      Er, wha? Threads will regularly kill a process when they're in a bad state in any sufficiently complex program, and given how nasty handling the Web can be, it really doesn't surprise me that web browsers crash.

      Processes are both easier to use from a developer's point of view (because I assume part of LCIE is a developer-invisible shared memory model) and somewhat safer than just using threads. It's still possible to crash them, of course, but it's harder to crash than when using a threaded-process model

    • by Sloppy ( 14984 ) on Wednesday September 10, 2008 @05:59PM (#24954259) Homepage Journal

      tabs running in separate threads shouldn't bring down the entire browser, if the application was properly designed in the first place

      Big internet client applications are never properly designed in the first place.

      I don't say that as a cynic; it's just that they are so damn big and pull in so many libraries, etc. When you're writing a web browser, you don't have time to write a GIF decoder, so you're going to use someone else's library. This type of thing happens over and over, dozens of times. You just can't audit all that code. But if there's a buffer overflow bug in just one of those libraries...

      What excites me about this multiprocess approach isn't just the fact that we can recover from hung javascript. That's just a populist example. What I look forward to, is the problem getting split up into even more processes, with some of those processes running as "nobody" instead of the user, or some of them running under mandatory access controls, etc.

      All that crap will never be fully debugged, so let's acknowledge that and protect against it.

      Chrome's sandboxing is just the tip of the iceberg compared to what is possible, but it's a step in the right direction and (dammit, finally!!) has people talking about sandboxing as something to really work on. A thousand programmers all over the internet are going to adopt a trend that just happens to be a good trend. Thank you, Google.
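      The "running as nobody" idea above corresponds to the classic setgid()/setuid() privilege drop. A hedged sketch of it in Python (uid/gid 65534 is "nobody"/"nogroup" on many Linux systems, an assumption; this illustrates the idea only, and is not Chrome's actual sandbox, which layers more mechanisms on top):

      ```python
      import os

      def drop_privileges(uid=65534, gid=65534):
          """If running as root, drop to an unprivileged uid/gid and stay there."""
          if os.getuid() != 0:
              return os.getuid()          # not root: nothing to drop
          os.setgroups([])                # shed supplementary groups first
          os.setgid(gid)                  # group before user, while we still can
          os.setuid(uid)                  # point of no return
          return os.getuid()

      # In a multi-process design, each worker would call this right after fork(),
      # before touching any untrusted page content.
      print(drop_privileges())
      ```

      The ordering matters: setuid() first would discard the right to change the group, which is a classic privilege-drop bug.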

  • by Xaximus ( 1361711 ) on Wednesday September 10, 2008 @04:06PM (#24952491)
    I haven't tried IE8, but I uninstalled Chrome 5 minutes after installing it. It took Firefox about 20 seconds to load 8 sites, while Chrome took over a minute. If it's going to be that slow, nothing else matters.
    • by LWATCDR ( 28044 )

      I didn't time it myself but Chrome does seem really slow to start a new tab.

    • by B3ryllium ( 571199 ) on Wednesday September 10, 2008 @04:41PM (#24953045) Homepage

      "to load 8 sites" ... 8 sites that you visit frequently and thus have cached on your Firefox installation, perchance?

      Don't be so rash to judge. Chrome has many other areas where it lacks compared to Firefox, speed isn't generally one of them. I've heard many users say that it loads pages more than twice as fast as Firefox, and also scrolls much faster on graphical/data-intensive pages.

      The lack of extensions (such as adblock, firebug/firephp, flash block, noscript, coloured tabs) is the main reason why I've barely used it since I installed it.

    • Re: (Score:3, Informative)

      by AbRASiON ( 589899 ) *

      Something is wrong with your PC then.
      I love FF and have no interest in chrome without all the addons FF provides.
      That being said chrome was insanely fast, really, really fast - easily the fastest web browser I've ever seen, including clean installs of FF1 / 1.5 / 2 and 3.

  • by sqlrob ( 173498 ) on Wednesday September 10, 2008 @04:08PM (#24952521)

    AV slowing the start of each process is really going to cause a performance hit.

  • by nweaver ( 113078 ) on Wednesday September 10, 2008 @04:10PM (#24952541) Homepage

    The real reason for processes instead of threads is cheap & dirty crash isolation. Who cares about RPC time, you don't do THAT much of it in a web browser.

    But with more and more apps being composed IN the browser, you need isolation to get at least some crash isolation between "apps"

    • Re: (Score:3, Interesting)

      by lgw ( 121541 )

      And yet the now-famous :% crash takes down all of Chrome, not just the current tab. I had a chance to ask a Chrome developer about that, but I didn't get an answer. Perhaps crash-isolation isn't as good in practice as one would think, or perhaps that was just another "oops" on the part of the Chrome dev team, and we'll get real crash isolation in the next release.

      • Later releases don't do that any more. But I assume that one was because of a crash in the "supervisor process" - IE8 still has the problem that the supervisor (UI) process can crash, and all child processes die with it.

    • But with more and more apps being composed IN the browser, you need isolation to get at least some crash isolation between "apps"

      That is a good point. It should also help reduce the issue of a plug-in or stuck page freezing the whole browser.

      One thing I would be curious about is how they handle the inter-process communication since, while they are separate processes, things like cookies need to be shared between them. I would also be curious what sort of memory overhead this causes.
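The crash-isolation point above can be sketched with a toy per-tab process model in Python (the URLs and the simulated crash are invented for illustration; this is not Chrome's actual code):

```python
import multiprocessing as mp
import os

URLS = ["http://good.example", "http://bad.example"]

def tab(url):
    """Simulated renderer process for one tab."""
    if "bad" in url:
        os._exit(1)  # simulate a hard crash inside this tab only

def run():
    # One process per "tab"; a crash in one leaves the others untouched.
    procs = {url: mp.Process(target=tab, args=(url,)) for url in URLS}
    for p in procs.values():
        p.start()
    for p in procs.values():
        p.join()
    return {url: p.exitcode for url, p in procs.items()}

if __name__ == "__main__":
    for url, code in run().items():
        print(url, "exited with code", code)
```

The parent (the "browser" process) survives and can inspect each child's exit code, which is exactly the recovery story a single shared address space can't offer.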

  • I imagine that most people who knew what Chrome is and actually installed it also know what processes are.

  • The real issue here is that our OS's mechanisms for controlling resource sharing and protection among cooperating concurrent "lines" of execution (to avoid the words "process" or "thread") aren't as fine-grained as they could be. It's nearly an all-or-nothing choice between everything-shared ("threads") or very-little-shared ("processes"). Processes do get the advantage that the OS allows them to selectively share memory with each other, but threads don't get the natural counterpart, the ability to define

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      That may be true on windoze. Linux has had fine-grained control of resource sharing between processes/threads for ages - clone(), mmap() etc. Modern linux threads are implemented as processes at the kernel level, in fact. The idea that "processes are slow" is windowsitis, like "command lines suck" - windows processes may be slow, the windows command line may suck, but processes or command lines in general don't necessarily suck.

    • I don't know if an analogue exists for unmanaged Windows code, but AppDomains in .NET seem to bridge the gap you describe. A single process can have multiple threads and multiple AppDomains, and threads operating in separate AppDomains have substantial protection against one another.
    • by TopSpin ( 753 ) *

      aren't as fine-grained as they could be

      See clone(2) [die.net]. Every significant resource related to a process is selectable when spawning a new thread of execution. pthread_create() and fork() are both implemented in terms of clone(). You may invent your own mix of shared or copied bits according to your specific needs.

      Naturally the Windows case is far less general. First, clone() is too short. MinimumWindowsAPISymbolLengthIs(12). There is no fork(). This makes porting fun; see perl, cygwin, et al.

      The design intent of Google's Chrome is, simply

  • Tabbed browsing is now so normal that the problem of a crash in one tab bringing down all the others is a big deal. On Vista, this problem happens a lot with IE7, and it's *the* single major annoyance for my geek GF on that platform.

    Threads truly have their place, but this is a good use of separate processes per tab because it keeps one tab from crashing the others, something threads can't guarantee.

    • Your girlfriend is a geek who uses IE7 on Vista? Are you SURE?

      And yes, a properly written multithreaded browser can prevent one tab from crashing another. The only way one tab could bring down the others would be if it spewed crap in the shared memory space. If you're letting web pages overwrite whatever memory they want then you've got big problems.

  • It's the Windows job creation scheme [today.com] mentality applied to OS threading: processes are heavyweight in Windows [catb.org]. "Process-spawning is expensive - not as expensive as in VMS, but (at about 0.1 seconds per spawn) up to an order of magnitude more so than on a modern Unix." More work = more hardware.

  • Limitations (Score:5, Insightful)

    by truthsearch ( 249536 ) on Wednesday September 10, 2008 @04:13PM (#24952591) Homepage Journal

    There are some details to Chrome's sandboxing implementation [docforge.com] that limit its security benefits:

    - The process limit is 20. Anything requiring an additional process once this limit is reached, such as opening another tab, will be assigned randomly to any of the existing 20 processes.

    - Frames are run within the same process as the parent window, regardless of domain. Hyperlinking from one frame to another does not change processes.

    There are also some problems where valid cross-site JavaScript doesn't work. Of course it's still only a beta. Some specific details are documented by Google [chromium.org].

  • by Safiire Arrowny ( 596720 ) on Wednesday September 10, 2008 @04:19PM (#24952661) Homepage
    Share-nothing processes that communicate via message passing are the future as far as I can tell.

    Not only does that do away with most terrible multithreaded programming problems, but it also can let you write an application which does not need to execute all on the same processor or even the same machine, think concurrency, cloud computing, 1000 core processors, etc.

    Look up the way Erlang programs work. Actor based programming is pretty sweet after you wrap your head around it.
    • by ceoyoyo ( 59147 )

      You can do all that with threads and distributed objects too. I actually find distributed computing much cooler when your controller process accepts and executes threads from other nodes. Plus then you've got some code keeping an eye on those jobs.
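The share-nothing, message-passing style described above can be sketched with Python's multiprocessing (a toy actor, not Erlang; the number-doubling behavior is invented for illustration):

```python
import multiprocessing as mp

def actor(inbox, outbox):
    # A share-nothing "actor": owns no shared state, reacts only to messages.
    while True:
        msg = inbox.get()
        if msg == "stop":
            break
        outbox.put(msg * 2)  # toy behavior: double each number received

def run():
    inbox, outbox = mp.Queue(), mp.Queue()
    worker = mp.Process(target=actor, args=(inbox, outbox))
    worker.start()
    for n in (1, 2, 3):
        inbox.put(n)
    results = [outbox.get() for _ in range(3)]
    inbox.put("stop")
    worker.join()
    return results

if __name__ == "__main__":
    print(run())
```

Because the actor owns no shared state, there is nothing to lock; the same code could in principle run the worker on another machine by swapping the queues for sockets.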

  • Processes in Vista (Score:5, Informative)

    by hardburn ( 141468 ) <hardburn@nosPaM.wumpus-cave.net> on Wednesday September 10, 2008 @04:22PM (#24952725)

    I remember a story from a long time ago, during Longhorn's early development, where Microsoft did a study of the cpu cycles needed for various tasks between WinXP and Linux. I've never been able to track the study down again since, but I remember that creating a new process took about an order of magnitude more cycles on Windows than Linux. Linux processes are also lighter-weight in general; Linux admins think nothing of having 100 processes running, while Windows admins panic when it hits 50.

    (The basic reasoning goes that Linux has an awesome processes model because its thread model sucks, and Windows has an awesome thread model because its process model sucks. That's why Apache2 has pluggable modules to make it run with either forking or threading.)

    A lot of development from the early Longhorn was scrapped, so how does Vista fare? Does its process model still suck?
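The relative spawn costs being debated here are easy to measure, at least roughly, in Python (absolute numbers depend heavily on the OS and hardware; only the thread/process ratio is meaningful):

```python
import multiprocessing as mp
import threading
import time

def noop():
    pass

def time_spawns(make_worker, n=20):
    # Create, start, and join n workers; return elapsed wall-clock time.
    start = time.perf_counter()
    workers = [make_worker() for _ in range(n)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    t_thread = time_spawns(lambda: threading.Thread(target=noop))
    t_proc = time_spawns(lambda: mp.Process(target=noop))
    print(f"threads: {t_thread:.4f}s  processes: {t_proc:.4f}s")
```

On a typical Linux box the process column is noticeably larger than the thread column but still far below anything a user would perceive per tab, which is the point the article is making.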

  • by Urban Garlic ( 447282 ) on Wednesday September 10, 2008 @04:23PM (#24952735)

    Having gotten several gray hairs from debugging thread-lock issues, I can't help but wonder how these processes do IPC. Presumably complex objects (in the OOP sense) have to be serialized and written to files or piped through sockets. That's not necessarily a bad idea, but it means the data pathways are exposed to the OS, and it's a potential security issue, too.
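A minimal illustration of the serialization the parent describes: Python's multiprocessing pickles objects through a pipe, so every complex object crossing the process boundary pays a copy each way (all names here are invented for illustration):

```python
import multiprocessing as mp

def child(conn):
    # The object arrives unpickled on this side of the pipe...
    obj = conn.recv()
    obj["seen_by_child"] = True
    conn.send(obj)  # ...and is pickled again on the way back
    conn.close()

def run():
    parent_end, child_end = mp.Pipe()
    worker = mp.Process(target=child, args=(child_end,))
    worker.start()
    parent_end.send({"url": "http://example.com", "depth": 2})
    result = parent_end.recv()
    worker.join()
    return result

if __name__ == "__main__":
    print(run())
```

The parent's dictionary is never mutated in place; the child only ever sees (and returns) a serialized copy, which is what makes the OS-visible data path both auditable and a surface worth securing.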

  • by Sloppy ( 14984 ) on Wednesday September 10, 2008 @04:27PM (#24952803) Homepage Journal

    They [processes] are slow to start up

    It's hilarious anyone would think that. We're talking about a web browser, not a web server. Even on platforms where process creation is "slow", it's still going to be instantaneous from a single human's point of view. It's not like the user is opening 100 tabs per second.

    • Reading a number of the prior posts, it looks like Windows process creation has overhead of roughly a tenth of a second. That's slow enough to be visible to the naked eye (barely). If that's roughly synchronous per core and I open up a multitab bookmark (say, 30 webcomics) that means the time taken simply to launch the processes on a dual core machine would be roughly 1.5 seconds. That doesn't count any setup work Chrome has to do after the process is created, it's just the OS overhead cost. That's non-
  • Erlang Browser (Score:3, Informative)

    by bjourne ( 1034822 ) on Wednesday September 10, 2008 @04:36PM (#24952975) Homepage Journal

    Seems like they have taken a leaf of Erlang wisdom here. If you were to write a browser in Erlang, using one (Erlang) process per tab is exactly the way you would have written it. I think it shows that designing software for robustness, something that was previously done mostly for high-availability enterprise systems, is now reaching the desktop.

    Wouldn't surprise me if the next cool browser innovation will be hot code swapping so that you won't have to close all your 5324 tabs just to install the latest browser security fix. At which point they have reinvented Erlang. :)

    • Re: (Score:3, Informative)

      It has nothing to do with Erlang and everything to do with basic design principles. Erlang did not invent what it preaches.

  • Before, when people were arguing between threads vs. processes, most of the arguments assumed ONLY ONE CPU. Forgive me if I'm wrong on this, but my understanding is that one of the biggest reasons modern day programs (programs, not OSs) under-utilize modern multi-core CPUs is that all threads of a process remain on the same CPU as their parent process.

    So by designing an application to spawn new processes instead of threads, it un-handcuffs the multi-core CPU and allows it to distribute the work between all

    • Re: (Score:3, Insightful)

      Wrong. If a process has affinity fixed to a single core, then its threads will be similarly constrained. But threads on an unconstrained process will happily move between cores; that's why you can get really aggravating race conditions on multi-proc machines that don't appear for the same multi-threaded program on a single core machine.

      Also, Apache and the like allow the option of threads vs. processes. Traditionally, Windows installs use thread and *nix installs use processes because Windows is optimize

  • quite ironic (Score:5, Interesting)

    by speedtux ( 1307149 ) on Wednesday September 10, 2008 @04:44PM (#24953109)

    UNIX didn't have threads for many years because its developers thought that processes were a better choice. Then, a whole bunch of people coming from other systems pushed for threads to be added to UNIX, and they did. Now, 30 years later, people are moving back to the processes-are-better view that UNIX originally was pushing.

    Microsoft and Apple have moved to X11-like window systems, Microsoft and Google are moving from threads to processes, ... Maybe it's time to switch to V7 (or Plan 9)? :-)

  • The comic is a lie. The allocation of processes is quite complicated chromes-process-model [marcchung.com]. With a standard install you can quite easily create multiple tabs per process. Basically, on a website, right-click on a link to a page in the same website and select open in new tab. The new tab is then allocated to the same process. Once you have such a tab you can navigate to elsewhere on the web, so you can easily end up with a situation where two different websites in two different tabs share the same process.
  • by melted ( 227442 ) on Wednesday September 10, 2008 @06:24PM (#24954541) Homepage

    Could someone give the gory details on how this all is accomplished? In particular, whether all processes access (and draw in) the same graphical context somehow, or they're just a bunch of z-ordered overlapping windows that move together when dragged?

    • by Shin-LaC ( 1333529 ) on Wednesday September 10, 2008 @07:42PM (#24955409)
      I went looking for the same information earlier today. Surprisingly, the design document titled "How Chromium Displays Web Pages" [chromium.org] doesn't shed any light on that, at least at this time. You have to dive into the source [chromium.org] to find out.

      Basically, a single process (the one main browser process) owns the window and draws to it. Renderer processes draw their web content into shared memory; the browser process then transfers the data into a backing store, which it uses to paint the window. The process is coordinated via inter-process message-passing (using pipes, it seems), but the rendering output travels via shared memory.
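That split (small control messages over a pipe, bulk pixel data through shared memory) can be modeled in miniature with Python's multiprocessing (a toy model of the design described above, not Chromium's code; the 16-byte "frame" is invented, and multiprocessing.shared_memory needs Python 3.8+):

```python
import multiprocessing as mp
from multiprocessing import shared_memory

def renderer(conn, shm_name, size):
    # "Draw" a frame directly into shared memory, then signal over the pipe.
    shm = shared_memory.SharedMemory(name=shm_name)
    shm.buf[:size] = bytes([7] * size)
    conn.send("frame_ready")  # small control message, not the pixels
    shm.close()
    conn.close()

def run(size=16):
    shm = shared_memory.SharedMemory(create=True, size=size)
    parent_end, child_end = mp.Pipe()
    worker = mp.Process(target=renderer, args=(child_end, shm.name, size))
    worker.start()
    msg = parent_end.recv()        # wait for the renderer's signal
    frame = bytes(shm.buf[:size])  # copy into the "backing store"
    worker.join()
    shm.close()
    shm.unlink()
    return msg, frame

if __name__ == "__main__":
    msg, frame = run()
    print(msg, list(frame[:4]))
```

Only the tiny "frame_ready" notification travels through the pipe; the pixel payload never gets serialized, which is why the shared-memory path keeps per-frame IPC overhead low.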
