For those of you interested in parallelism in browsers, I suggest you keep an eye on Mozilla's experimental new browser engine, Servo.[1] The goal is to make use of a variety of concurrency strategies (such as a Chrome-esque process-per-tab design[2]) and Rust's built-in support for memory-safe concurrency abstractions (fork/join, lightweight tasks, SIMD, etc.) to produce a ludicrously parallel[3] web browser. And even if Servo itself never happens to make its way into production, one of its purposes is to enable Mozilla to explore effective parallelism strategies to pursue with Gecko.
I'll admit I wasn't a fan of process-per-tab originally either. However, since every browser leaks memory, the ability to close some windows and reclaim the lost memory has been very useful to me (I tend to have 50+ tabs open at most times, for many days at a time).
That being said, I'd also be ok with process-per-window, as that would give me the same basic ability.
Hahaha, you and me and about 1% of the browser users (well on HN, probably more like 10%) have particular habits of keeping much larger numbers of tabs open than everyone else.
Have you imagined an alternate scheme to tabs, where there are 100s of thousands of potential "tabs" which can be organized, called up in groups, moved in clumps together, and shared?
Imagine each group of tabs like soldiers in a Command and Conquer game...
I also keep a lot of tabs open, that's why I'm using Firefox and not Chrome. Firefox is very memory-efficient recently, whereas Chrome is a memory hog and can't handle the number of tabs I keep open in Firefox.
Firefox also does a neat thing with tabs that have been opened and not used in a long time, effectively unloading the website and reloading it once you visit the tab (for some reason it doesn't do so with tabs that I keep open in the background on purpose, like Gmail). They also had memory fragmentation problems, which they gradually fixed and now I'm very, very pleased with Firefox. Chrome is really draining my resources at 30 tabs opened, whereas I've had Firefox open with 100 tabs without issues.
The one-process-per-tab model does have an advantage for misbehaving long-running apps. For example, I had problems with Asana at some point; I think it was leaking memory or something and Firefox couldn't handle it. In Chrome, you can find out which tabs are misbehaving and simply close them.
If Firefox can switch to the one process per tab model, but somehow keep its current efficiency in managing memory, that would be so awesome.
As a Firefox user myself, with the same problem of having hundreds of tabs open all the time, I've noticed a neat little trick to "unload" all the tabs from RAM if you need to free some up.
Just bring up the menu bar (or hit Alt), then go to File -> Exit, which should close all Firefox windows but save the current tab history. Then, simply restart Firefox, again go back to the menu, then do History -> Restore Previous Session.
Besides the few tabs that get auto-loaded (i.e. the ones you were viewing in each window), the remaining tabs should remain un-loaded until you manually re-visit that tab (at which point the page will be redownloaded or loaded from cache into RAM).
I tried solving this problem a while back using a task-based approach, where you create a meta-tab for a logical task and organize tabs under it. You can close, open, suspend, and archive tasks. History, bookmarks, and other browsing artifacts are associated with tasks. Sadly I couldn't take it beyond prototyping, despite multiple attempts since.
* [A task-focused approach to support sharing and interruption recovery in web browsers](http://vimeo.com/9088447)
I am still waiting for a plugin which lets me impose an efficient, low-cost organisation layer over tabs (I have made multiple feeble attempts at a plugin which organizes tabs using mind-maps, an org structure that is my personal favorite). I have tried almost all the plugins over the years which could potentially solve this for me, but have not found one that fits my needs.
interesting! I wonder if perhaps a logical way to combine "pages" is into a "book" or a "codex".
It would be kewl if this vein of technology grows. Perhaps cached tabs could store a snapshot for offline browsing and recall after the page is down, too...
...or if multiple web pages could be combined into a single browser page.
Yes, some kind of meta-layer would be neat.
Maybe even combine a "multipath" "narrative" "storyline" into a navigable structure for communicating ideas from multiple perspectives, like news stories or technical manuals or software/hardware/testing/docs of the same product..
sorry for all the quotes and sky-high speculation - i'm buzzing on caffeine, and very excited to chat with other people about the possibilities of extra-tabular information navigation!
I have found out that lots of people from other non-technical areas also have the "too many tabs opened" problem. We are working on a solution for that except for the soldiers part =)
The idea is that you can keep your browser synced (one or many browsers) and move tabs for later, search them, archive them, restore them and soon also share them. Take a look at http://listboard.it if you are interested.
I just signed up. I've been using and abusing multiple windows and then tabs in Opera's MDI interface since the modem days, when it was nice to have a few pages loading up and come back to them fully loaded instead of waiting for each one.
I'd love to have something indexing and closing old tabs for me in the background instead of manually managing them. Heuristics like 'this tab or any pages on this domain haven't been visited in a week, index the page and close the tab'. At times I use tabs as reminders, perhaps adding some intelligence like "This Kickstarter is expiring in 3 days and hasn't been viewed in a week, you still interested?" might be nice. Any tab with future dates approaching might be worth alerting people to. A summary page might be nice with the last closed tabs, the upcoming dates tabs mentioned above. Pinboard or the like integration would be pretty awesome, that is primarily how I'm managing tabs now.
The more I think about this the more excited I get. Of course the tough part is satisfying each of us who have their own reasons and their own process for managing multiple tabs.
The problem is that a lot of people now do research on things, like price research. They tend to open a dozen tabs to look through things. I think this is a problem that needs to be looked at.
I have been thinking of something that combines Bookmarks, History, and Open Tabs.
I think there is an opportunity to blend the concepts of tabs and bookmarks. Tabs could/should be aggressively swapped out to disk, or even unloaded entirely if the state they hold is unnecessary. Obviously we can just bookmark and then close a tab today if we don't care about its state, but that is not a workflow that the standard tab/bookmark UI facilitates.
(The standard tab UI also does not facilitate massive numbers of tabs, but tree style tabs solves that problem beautifully.)
Very interesting, thanks for the suggestion. I currently use Firefox and Chrome concurrently for various reasons, so I think I will probably get some use out of this.
I guess it's a holdover from when I used Linux more often, but for most applications it seems sensible to just let the WM manage tabs. I guess there are a couple of situations when that approach is worse, though:
- Tabs need to communicate with each other, or with a "host" application. In-process communication might be simpler than inter-process.
- Some people have WMs that don't provide an acceptable tabbing interface, and you want your application to have an acceptable interface everywhere. In this case I guess I'd suggest distributing a separate tabbing program along with your app, but that kind of separation of responsibility has definitely gone out of fashion.
If you'd like to try this out on Windows without affecting your main Firefox profile, we just released portable packages of the Firefox Nightly and Aurora builds at PortableApps.com yesterday: http://portableapps.com/news/2013-12-04--firefox-aurora-27-a...
They run self-contained in their own directory so you can quickly extract them to your Desktop or portable device. The installer downloads the latest build as you install it and configures it for standalone use. When you're done testing, you can just delete the FirefoxPortableNightly directory.
Bonus: The Nightly branch also has the new Australis UI redesign that they've been working on and is worth checking out.
I remember the days when multiprocessing was the only option and multithreading was only available on a few systems.
Now, with the security exploits many plugins have exposed and the way a misbehaving thread can bring the whole application down, we are moving back to the multiprocess model as a better sandboxing model.
Shared memory (be it shm or the heap in the same process) with its associated mutexes, semaphores, locks, and the like is a right pain to get right without introducing race conditions, deadlocks, etc.
Go is interesting as it uses "micro threads" (goroutines) and message passing CSP style but I haven't found a use for it yet.
Going multiprocess and using IPC doesn't intrinsically eliminate any of the tricky concurrency challenges with multi-threading. Neither do coroutines (goroutines).
When you have multiple processes talking to one another then, unless all your RPCs are completely stateless, you still need to orchestrate synchronisation in some form. There's also the problem that in 2013 we still don't really know how to do IPC/RPC really well, and doing so portably is hard (notice Mozilla are writing their own from scratch).
It's all very well new languages coming along and solving the trivial stuff (setting up message queues, isolating memory) but very few of them do anything to solve the real problems.
> It's all very well new languages coming along and solving the trivial stuff (setting up message queues, isolating memory) but very few of them do anything to solve the real problems.
I think that Rust makes a lot of effort to move things in the right direction. In Rust you can share memory, but unless you are in the unsafe sublanguage (which is clearly marked) the type system restricts you to one of: (1) copying messages; (2) transferring ownership of a message so that the original thread cannot race on it; (3) sharing immutable memory only; (4) taking a lock before accessing mutable data. We've seen huge engineering benefits from this: making parts of Servo thread-safe has been simply a matter of running the compiler repeatedly and letting the error messages tell us where to insert the locks; when it compiles we know the data races are gone. (You can still deadlock though.)
> Go is interesting as it uses "micro threads" (goroutines) and message passing CSP style but I haven't found a use for it yet.
But it also has shared heaps by default. So you still have to rely on everyone being an "adult". I wish shared heaps had been a specially enabled feature, not the default. But I guess the language was positioned to compete with C++ and Java, and isolated heaps would have meant a performance decrease in benchmarks -- and thus hurt people's willingness to adopt it.
For large concurrent systems, a lack of safety and fault tolerance often leads to their failure, but that is kind of hard to encode in a quick benchmark to impress people.
Here are a few languages/systems with default isolated heap runtimes between concurrency units: Dart's isolates, Erlang's processes, Nimrod's threads, Web Workers in modern browsers. Anyone know of more?
"I wish shared heaps would have been a specially enabled feature not the default."
I hope to someday see the language at least do the opposite, have a specially-declarable "isolated" goroutine. In theory, the compiler ought to be able to analyze a goroutine, determine that it shares no data at startup with another process (i.e., nothing in a closure or something), determine that it only communicates via value-passing channels (which courtesy of my previous restriction, can be analyzed by simply looking at what channels are passed in at startup time, and some analysis of the types of the channels), and thus guarantee that at least this goroutine is fully isolated. Pervasive usage of the new keyword I'm hypothesizing would allow a diligent programmer to recover most of the isolation advantages without having to rewrite Go entirely. It also ought to enable some other optimizations against these guaranteed-isolated goroutines, the biggest of which is that they no longer need to participate in a global stop-the-world GC, both in that they can continue running while that is occurring and that they also relieve the global GC from the task of scanning over them.
(In fact all the analysis ought to itself be fully automateable, and the user shouldn't have to declare it; I'd want them to still have the option to make a declaration so the compiler can tell them if they screwed up, though. I don't like such critical functionality being behind an opaque optimizer.)
But this certainly won't happen soon; there's a large enough list of stuff that comes before that.
> determine that it only communicates via value-passing channels (which courtesy of my previous restriction, can be analyzed by simply looking at what channels are passed in at startup time, and some analysis of the types of the channels),
That would preclude sending interfaces over channels (along with any other existential or mutable reference type), because the type system doesn't know whether the interface is closing over shared state. Not being able to send interfaces over channels would mean that channels would be restricted to only one kind of type, because Go doesn't have discriminated unions so interfaces are the only way to perform type-switch. Those goroutines would be so restricted as to be almost useless.
You cannot just bolt isolation on after the fact. You must design your language for it from the start.
That said, Go's race detector is very good and it's awesome that they focused on getting first-class support for runtime race detection so early.
"You cannot just bolt isolation on after the fact. You must design your language for it from the start."
Yes, I agree, and I'm sure I'm going to have many years of wishing they had. At the moment I don't have a better entrant in this field that is palatable to my coworkers, though. They've rather disliked Erlang (and not for lack of trying, and not for lack of good reasons, for that matter), Haskell's right out, and I'm running low on production-quality true isolation-based languages here. Several additional up-and-coming contenders; I'm sure if I could have used Rust-from-2018 I'd take that in a heartbeat, but, alas, it's 2013.
Sketched:
1. As neat as the clustering is, it's very opaque and hard to debug. And even after years of using Erlang, it's always a pain to set it up again, and opaque when it fails. This is a critical feature for the system I've written, and I just can't keep it stable. That others seem to have managed doesn't help me any, and I'm done pouring time down this sinkhole.
2. The syntax is quite klunky. I've been programming in Erlang for 6 years now, including for my job, and yes, I still don't like the syntax. In addition to ",.;", it's a terribly klunky functional language, wearing a lot of the trappings while failing to reap a lot of the benefits. And I don't just mean this is a minor inconvenience; it seriously inhibits me from wanting to create well-factored code, because there's so much work I can't factor away. (I have examples, but explaining them is a blog post, not a 6th-level nested HN post.)
3. As neat as OTP is (and it is neat), it tends to encourage a highly coupled programming approach to fit into "gen_server" (or whatever), and due to problem #2, many of the tools I'd use to solve that from either the imperative side or the FP side are not present, or too hard to use. The whole gen_X thing encourages very choppy and hard-to-follow code. If you pour enough work into it, you can get around that, but the language doesn't help you enough. It's also bizarrely hard to test the resulting OTP code, considering we started with a "functional language".
It's a brilliant language that was well ahead of its time, and I don't mean that merely as a "I want to be nice" parting comment; it is a brilliant language that was ahead of its time and every serious language designer should study it until they deeply understand it. Indeed, I will absolutely attribute a significant portion of my success in programming Go to the wisdom (no sarcasm) I learned from Erlang, and Go would be a better language today had the designers spent more time learning about it first. (It still wouldn't be an Erlang clone, but it would be a better language.) But it's just become increasingly clear that it has been a drag on my project, for a whole host of little reasons that add up. It was the right decision at the time, because virtually nothing else could do what it did when I started, but that's not true anymore.
Someone will be tempted to post a point-by-point rebuttal. My pre-rebuttal is, I've been programming in it for six years (so, for instance, if there's some "magic solution" to clustering that has somehow escaped my six years of Googling, well, I think I did my part), yes, I know all other languages will also have "little things" (and big things), and Erlang may be perfect for your project, absolutely no sarcasm.
Rust's "tasks" feature isolated memory, and, thanks to the magic of linear types, passing data from one to another is both statically-guaranteed to be safe and is never more expensive than the cost of copying a pointer.
> is never more expensive than the cost of copying a pointer
Well, if you're passing something that is pointer-sized, yes. But, say, `chan.send([0u8, .. 1_000_000])` will do at least one 1 MB memcpy to load that into the stream, and another 1 MB memcpy to load it out when `.recv()` is called.
>Here are a few languages/systems with default isolated heap runtimes between concurrency units: Dart's isolates, Erlang's processes, Nimrod's threads, Web Workers in modern browsers. Anyone know of more?
Tcl runs an interpreter per thread, and they communicate with each other using message passing.
Well, OS processes are the obvious solution. I was asking about languages/runtimes that feature that as a default.
> and no threads in favour of "select".
Not sure why you mentioned that. Callback chains can create concurrency contexts that can interfere with each other, much like multiple threads would. The granularity is much coarser, but you are not out of the woods.
On most modern operating system you can selectively set up shared memory between processes as well. One can implement a lower latency (but more dangerous) buffer sharing / message passing-by-reference scheme.
The biggest issue is performance, although you might be able to work around it with local RPC calls and passing memory ownership, as offered by Windows, Mac OS X, QNX, and Minix, to name the ones I am familiar with.
Just imagine the performance impact of something like Eclipse if each plugin was a separate process using message passing.
Maybe it is doable, not sure. Anyway I rather favor safety over performance.
> All IPC happens using the Chromium IPC libraries
Interesting that they chose to share code with Chrome. Since the two are competitors, I would have thought that they'd use completely separate implementations. It's interesting that open source makes this sharing possible.
If the existing implementation is perfectly good, writing your own is either pride, stupidity, or a learning exercise.
I would take for granted that the Firefox developers aren't stupid, and that they have enough interesting work to do that they aren't going to spend time on it as a learning exercise.
Ah, okay. It feels a bit hyperbolic to call it pride or stupidity rather than just a poor analysis of the efficacy of an existing solution. Thank you for your response.
> Interesting that they chose to share code with Chrome. Since the two are competitors, I would have thought that they'd use completely separate implementations.
That's what open source is about: collaboration instead of competition. Why compete when you can pool your resources?
Mozilla does have a habit of rejecting code like WebDB/SQLite and PNaCl, so people are always surprised to see them reusing code, especially from Google.
The difference here is that we wanted to implement a multiprocess architecture. We don't want to implement (P)NaCl, since we don't think they're good for the web. We have no qualms with integrating code from other sources, we've done a lot of it. (Google Breakpad, WebRTC, numerous image and video decoding libraries, Freetype, libffi, ANGLE, the list goes on...)
As a further observation on this, Robert O'Callahan has a blog post [1] explaining Mozilla's stance on (not) implementing bad-for-the-web features; as an example to show this stance is long-lived, he mentions that engineers at Netscape/Mozilla implemented (in 1999) ActiveX support for Gecko, but kept it disabled by default on purpose despite a possible market share impact.
I'm very excited about this. I usually drive Firefox Beta without any sorts of complaints, but installed nightly just to try this out live.
With my list of extensions[1] this doesn't seem to be particularly stable. It fails to bring up my tabs from last time. That would be OK for experimentation, had it not been for the fact that it also crashes regularly.
These two combined really are a test-stopper for me.
Note: I'm not complaining. I'm very pleased this is being worked on. I'm just commenting first-hand experience about the state of things, so that others can make up their minds if they want to give it a go as well.
Are the crashes listed in Firefox's about:crashes page? Filing bug reports with those crash IDs (which reference stack traces on crash-stats.mozilla.com) would be a big help.
Firebug might be a problem because it is tightly coupled to Firefox's internal debugging APIs.
Firebug is known to not work (and cause stability problems) at the moment (for reasons mentioned by cpeterso). We're working with addon developers to improve this situation.
From a code maintenance point of view, how do you manage to keep this 'branch' in sync with the main one?
I mean, every patch made to the real Firefox has to be carefully reviewed and backported to this multiprocess branch.
Is that a manual process? Or can it be automated like this:
1) Check if a new commit arrived on 'head'
2) Auto-backport it to the multiprocess branch
3) Try a build + run tests. Everything looks good? Keep it
4) Not good? Send an email to the multiprocess maintainer so that he has a look?
Since multiprocess is in the regular Nightly, the code is in the main line of development (mozilla-central). Based on the preference, it decides at runtime how to handle the content/chrome interface.
But when Mozilla does branch off separate trees, VCS merges and lots of automated tests are largely sufficient.
To try Electrolysis (multiprocess) in Firefox Nightly: in about:config, toggle the browser.tabs.remote pref and restart (still work-in-progress, don't expect a fully working browser).
I tried it; sadly FF Nightly crashed completely (not just a tab) about every 30 seconds with just 3 tabs open. The crash recovery dialog sent a log to Mozilla every time, so they can hopefully fix the bugs soon.
I welcome this just so we can determine which tabs are using the CPU persistently. I had to switch back to Chromium (after a good few weeks really giving FF another go) because I was sick of this issue. Firefox is smoother and more memory friendly than Chromium these days, a pleasure to use, but in Chrome I can kill hoggy tabs... so that's where I'm staying for the moment.
Now, this isn't snarky, but do you really run into issues like that, where it's noticeably bad in a particular tab, enough to need to kill it? What sort of sites, and what processor?
In Chromium it's generally runaway memory hogging tabs. Facebook is generally awful for example. Leave any Facebook page open all day (background of course, possibly on another virtual desktop) and you'll be staring at a multi-GB tab by evening.
I'm not sure what's causing the CPU utilisation in Firefox. It's common to blame extensions in the FF community because there's no easy way to determine where the problem is.
Every other day I hit the problem that a tab consumes so much CPU that it stutters. After closing that tab and reopening the same web site the issue is gone. That's using Chrome stable both on Linux and Android.
Very nice indeed, I crash Firefox a few times a day from hitting the memory limit.
Granted this is due to AB+ and Reddit Enhancement Suite. Although imo.im leaking memory over time doesn't help! (I am somewhat annoyed that an IM client takes up 500MB of memory, I miss Meebo!)
Right now FF is at a fairly svelte 1.8GB. Heh.
The other problem is that performance degrades dramatically as the number of open tabs increases. Once I hit 50 or so tabs scrolling becomes horribly jerky. From the sounds of it, this change may very well fix that as well.
(For reference I am on an insanely fast home built machine!)
>Once I hit 50 or so tabs scrolling becomes horribly jerky
That could be a GPU driver issue. GPU scheduling is generally horrible outside the "run a fullscreen game as fast as you can and screw everything else" use case.
Well in hopeful theory land those other tabs that aren't active shouldn't be hitting my GPU. :)
I can also pop over to IE and it scrolls a-ok! (To be fair, IE11 has beautiful scrolling, everything else looks jerky in comparison, it really is quite a lovely effect!)
But all my plugins are in FF, so.... with an SSD it is not like FF takes too long to come back up anyway! Still annoying though!
You can try Palemoon x64. I also had problems with crashing over memory limit. Was on Palemoon for 2 months and it works OK. I don't use many addons though.
My biggest problem with Firefox is its startup time. firefox.exe takes much longer to start than IE and Chrome, which are both very fast.
I am using Win7 on an i7-3960X, an Intel SSD, and 16 GB RAM.
If I install any plugin then it is much worse. (For this reason I am not using any plugins, which is a big loss.)
It is weird that this issue is seldom mentioned; I think it is much more important than other performance benchmarks such as JavaScript performance.
I don't have cold-start problems with no or few tabs in either.
... that said, try closing Chromium with 20 odd tabs open and then reopening it. It'll take bloody ages to reload all those tabs. Firefox lazy tab loading saves a metric buttload of time in this scenario.
Interestingly, that complaint no longer exists for me. Firefox really starts instantly. Which is unfortunate, as it's the first program in my task bar and I'm still used to using Win+Shift+1 to start a new instance. Which, if Firefox isn't already running, results in it asking whether I want to start in Safe Mode because I was holding Shift.
Tom's Hardware Guide's "Web Browser Grand Prix" measured Firefox's startup times being much faster than Chrome, for both cold and warm starts and single and multiple tabs.
As a Firefox user, I must say that these tests don't mean much, considering web browsers are updated every day, week, or month depending on the version used. Each browser wins at one point or another.
I am curious whether the lack of such complaints is because people usually start their browser in the morning and keep it open. I prefer to close my browser frequently and reopen it when needed.
I think you underestimate how good modern operating systems are at memory management. I have 30+ tabs open (rather than bookmarking) and Firefox hasn't been closed for days.
Spending developer time on things like startup and installers, which are just 1% of what software does, is not very productive. Yes, it's good to look at startup every now and again; however, it shouldn't be the main focus of any project.
I just hope Mozilla won't go extreme, and won't use a separate process for each tab like Chrome does. It produces memory bloat if you have many tabs open. While they say they'll mitigate memory issues, this should be balanced.
It's unfortunate how behind the curve Mozilla is on this. No denying this was a huge undertaking but the length of time it's taken has obviously been detrimental to Firefox usage, the only real reason I still use Chrome as my primary browser. Though I'll give Electrolysis a shot with Firefox Nightly and see how it works out.
Huh? I don't get this. Mozilla is switching from a single-process model to a multiprocess one. Chrome was built that way from day one. I hope you see that moving between models costs more time than picking one and supporting it forever.
You have a good point about switching cost. On the other hand, Chrome always could be run with --single-process (mainly there to measure the overhead of multiprocess), so it didn't really "pick one".
Once you have a multi-process setup in place, running in a single process is relatively simple, IPC just routes messages internally instead of across processes. Having a single-process setup and going to a multi-process one is a way, way, way larger effort.
The article explains this. We built the infrastructure, and we used it for out-of-process plugins on desktop, and for Firefox OS, but we went in search of lower-hanging fruit on desktop. There was also (IIRC) some worry about breaking extension compatibility, which would make it a nonstarter. I think there are some planned mitigations for that now, and we've fixed a lot of the easy wins for responsiveness, so we're pursuing this again.
This is more about responsiveness. It's not just about slow script, if you open a very large text file in a tab, the entire Firefox UI stalls. If you have some moderately heavy operation happening in a different tab, scrolling gets choppy across the board, switching tabs is janky, etc etc etc.
Testing it out now, so far so good! I might make this my default profile, I would love not having rogue tabs freeze the entire system. It's very nearly my one remaining thing I prefer Chrome for, Firefox has really improved lately.
As a dedicated FF user, I've been waiting for this for so long. I may finally be able to isolate which tab is grinding my computer to a halt! As usual, this is amazing work.
Gecko plugin peer here: Unfortunately, no. NPAPI has two modes: windowed and windowless. In windowless mode (roughly): we proxy input to flash, and flash renders into a buffer we provide it. In windowed mode, we create a native OS child window for flash and let it handle input and rendering directly. In this mode, without a way for flash to pass "unused" keys back to us, it will require some ugly hacks to steal hotkeys from it reliably.
Most sites run flash in windowed mode, and for good reason - flash's performance sucks in windowless mode, and it cannot make use of hardware acceleration (IIRC). Since Adobe's NPAPI flash seems to be essentially in stability mode, it's unlikely this will be improved :(
Now, in current multiprocess mode, we actually force flash to use windowless mode -- because support for windowed mode isn't finished yet (bug 923746). But the aforementioned performance issues mean that we'll probably remove that restriction once we support windowed mode in multiprocess.
This is a terrible idea. There is no justifiable reason to do this. The reasons given are weak, this is just more over engineering that will add an enormous amount of complexity and add no real value.
Lets look at the reasons given as to why they want to do this.
>Performance. Most performance work at Mozilla over the last two years has focused on responsiveness of the browser. The goal is to reduce "jank"—those times when the browser seems to briefly freeze when loading a big page, typing in a form, or scrolling.
You can do all of this with proper threading and task delegation. Putting things in separate processes will not magically make things better. The answer to "jank" is proper coding, not over engineering. Last time I checked there was the same "jank" in IE and Chrome even though they use MPs.
>Security. Technically, sandboxing doesn’t require multiple processes. However, a sandbox that covered the current (single) Firefox process wouldn’t be very useful. Sandboxes are only able to prevent processes from performing actions that a well-behaved process would never do. Unfortunately, a well-behaved Firefox process (especially one with add-ons installed) needs access to much of the network and file system.
This is BS. You could have three processes and have Firefox sandboxed completely. The main process runs in a low-integrity mode which limits its resource access to a single directory. The second process is a download delegation process (it takes a file after it is downloaded and moves it to the requested location while also promoting its integrity), running in normal integrity mode. The third process is a network communication delegate/proxy running in normal or possibly even low integrity. These two delegate processes will still be needed for the MP Firefox, so it is no more work to create them.
>Stability
This is the only true benefit, but it is of very little value. Firefox almost never crashes, and when it does, session restore brings you back to where you left off in seconds.
Cons? More complexity means more bugs. This is a workaround rather than really fixing Firefox. I am going to have 150+ extra processes in my task manager now. More memory use. More context switches in the operating system eating up resources and causing more system latency and overall slowdown (context switches at the kernel level, which will affect the whole OS).
The article explains it. Moving things off the UI thread (and other responsiveness related fixes) is what has been done so far as part of Snappy. This project is about putting web content in a different process than the Firefox UI.
[1] https://github.com/mozilla/servo/
[2] I wasn't exactly thrilled about this myself, but as long as Servo has to interact with C++ code (notably, SpiderMonkey) this was judged critical for security. Fortunately, pcwalton seems to believe that Servo's tab processes will occupy less memory than Chrome's.
[3] https://github.com/mozilla/servo/wiki/Design