> Modern smartphones have it right -- the user downloads an application, typically written in native code (or pseudo-native, for Android), and that application has only very limited access to the system.
But those access limits are enforced in similar ways to the access limits imposed by browsers on web content, and they are regularly broken in similar ways too. Just like a browser, the attack surface of a smartphone OS includes image rendering libraries, font rendering libraries, 2D and 3D graphics engines, media codecs, and, umm, JavaScript and HTML rendering engines.
Systems like iOS and Android have copied (and improved, I will freely admit) one of the web's most important features -- what the browser calls the same-origin policy. It's the idea that one developer's code doesn't automatically have access to the data from all your other apps/sites. But modern desktop operating systems still don't have this feature, which is one reason desktop users are still way more willing to use your web site than they are to download and run your native application.
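For anyone unfamiliar with it: the same-origin check boils down to comparing the (scheme, host, port) triple of two URLs. A minimal sketch in Python (the function name and the default-port handling are mine, not lifted from any browser):

```python
from urllib.parse import urlsplit

def same_origin(url_a, url_b):
    """Two URLs share an origin iff (scheme, host, port) all match."""
    a, b = urlsplit(url_a), urlsplit(url_b)
    # urlsplit reports no port when it's implicit, so fill in the
    # scheme's default: http://example.com == http://example.com:80.
    defaults = {"http": 80, "https": 443}
    port_a = a.port or defaults.get(a.scheme)
    port_b = b.port or defaults.get(b.scheme)
    return (a.scheme, a.hostname, port_a) == (b.scheme, b.hostname, port_b)

print(same_origin("https://example.com/a", "https://example.com:443/b"))  # True
print(same_origin("https://example.com/", "http://example.com/"))         # False
print(same_origin("https://example.com/", "https://app.example.com/"))    # False
```

Note that a subdomain is a different origin -- the policy is deliberately strict, which is exactly the property desktop OSes lack between native apps.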
When was the last time there was an outbreak of malware on Apple iTunes or Google Play? How many users were affected?
Compare that to the near-monthly announcement of new browser security vulnerabilities, either in the browser itself or a bundled component (e.g. Flash).
If a smartphone application exploits an image rendering or font library, it doesn't gain any additional permissions. It can't do anything with a vulnerable library that it couldn't do otherwise.
The important attack surface on smartphones is local kernel exploits, which are rare.
If every web page had its own full chroot/jail (or equivalent), then they would be as secure as native applications. But in the current model of web pages as accessed through a browser, this is impractical to implement.
Yes, those are all valid and important points -- as I said above, mobile operating systems have taken the browser's conceptual model and improved it. They've made technical improvements (using the kernel's security policies and mechanisms) and non-technical ones (providing a trusted distribution channel with the ability to ban malware apps). I agree that browsers are behind in important ways.
But I'm not so sure that local kernel exploits are that much rarer than, say, Chromium sandbox exploits, which play a similar role in mitigating vulnerabilities exposed to untrusted code. (Not all of those monthly vulnerability announcements actually allow code execution.)
I think in practice the curated App Store model helps much more to prevent outbreaks of attacks in the wild. Google Play, which does not review apps before publishing them, has had malware problems. And since users and usability are still a weak link in the permission system, not all mobile malware even needs to circumvent technical measures to gain the privileges it wants.
And browsers still offer safety advantages over native apps on the desktop systems that currently account for the majority of browser usage.
> If every web page had its own full chroot/jail (or equivalent), then they would be as secure as native applications.
This is exactly what Chrome has always done and other browsers are adopting: plugins have long run with elevated privileges (I'm no expert on why), but if you exploit an image rendering or font library, you get nothing, because the browser and each tab's renderer are separate processes, and the latter are heavily sandboxed through the OS. This bug is impressive, and the first of its kind, because it doesn't attack a plugin but the sandbox itself, indirectly, through the IPC mechanism. In that sense it's not much different from attacking the iOS sandbox: web pages have more access to cross-origin communications (and other functionality that must pass through the sandbox, such as downloads), and there seem to generally be more situations where the renderer has to synchronize with the browser, but that's mostly a matter of degree.
Chrome's sandbox is not equivalent to having a separate jail for every web site. It is designed to prevent web content from attacking non-browser apps and data, not to prevent one web page from attacking another (though Chrome, like all browsers, has other mechanisms to do this).
Chrome does not guarantee one process per tab, or even per origin. If you reach its internal process limit -- or if a page does something like window.open() that gives it a reference to another tab -- then it will render multiple sites in the same processes, not sandboxed from each other: http://code.google.com/p/chromium/issues/detail?id=81877
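To make that failure mode concrete, here's a deliberately oversimplified toy model -- the hard cap and the reuse rule are my invention for illustration, not Chromium's actual process-assignment heuristics. Once the limit is reached, new sites land in an existing renderer process, and with it an existing sandbox:

```python
class ToyBrowser:
    """Toy model of renderer-process assignment. NOT Chromium's real logic:
    real Chromium's limit is derived from available memory and its reuse
    policy is more involved. This just shows the consequence of any cap."""

    def __init__(self, process_limit):
        self.process_limit = process_limit
        self.processes = []  # each entry is a list of sites in one process

    def open_tab(self, site):
        """Return the index of the process the new tab's site is placed in."""
        if len(self.processes) < self.process_limit:
            self.processes.append([site])  # below the cap: dedicated process
            return len(self.processes) - 1
        # At the cap: double up into an existing process (round-robin here).
        idx = sum(len(p) for p in self.processes) % self.process_limit
        self.processes[idx].append(site)
        return idx

browser = ToyBrowser(process_limit=2)
a = browser.open_tab("a.example")
b = browser.open_tab("b.example")
c = browser.open_tab("c.example")  # cap hit: shares a process with a or b
print(c in (a, b))  # True -- two distinct sites, one sandbox
```

The point isn't the reuse policy; it's that any finite process limit means "one sandbox per site" is best-effort, not guaranteed.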
> If a smartphone application exploits an image rendering or
> font library, it doesn't gain any additional permissions.
> It can't do anything with a vulnerable library that it
> couldn't do otherwise.
Haven't phones on both platforms been rooted via local exploits, just like the ones you describe?
> The important attack surface on smartphones is local kernel exploits, which are rare.
The important attack surface on Chrome is sandbox model exploits, which are rare.