
I'm not 100% sure what you mean by that final sentence; could you maybe elaborate?


Sure.

Does WASM have better performance than the JVM? If not, could it have better performance theoretically? Is it more secure? How much slower than a regular binary would it be? etc.


Like Steve says, performance is a complicated thing to measure. For one view on it, Lucet ships with a suite of microbenchmarks that compare its execution of wasm with the same C code compiled natively. The `make bench` target runs these. The most alarming regressions are in simple functions that take string arguments - the arguments have to be copied into the sandbox, and then the results copied out, in order to run a very simple function.
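
To make that concrete, here's roughly what the host side has to do around such a call (illustrative C only, simulating the linear memory; this is not the actual Lucet API):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Stand-in for the guest's linear memory; a real runtime sets this up
       when the module is instantiated. */
    static uint8_t linear_memory[64 * 1024];

    /* Stand-in for a trivial exported guest function: it can only address
       offsets into its own linear memory, never host pointers. */
    static uint32_t guest_to_upper(uint32_t in_off, uint32_t len, uint32_t out_off) {
        for (uint32_t i = 0; i < len; i++) {
            uint8_t c = linear_memory[in_off + i];
            linear_memory[out_off + i] = (c >= 'a' && c <= 'z') ? c - 32 : c;
        }
        return len;
    }

    int main(void) {
        const char *host_string = "hello from the host";
        uint32_t len = (uint32_t)strlen(host_string);
        uint32_t in_off = 0, out_off = 1024;

        /* Copy the argument into the sandbox... */
        memcpy(&linear_memory[in_off], host_string, len);

        /* ...call the "guest"... */
        guest_to_upper(in_off, len, out_off);

        /* ...and copy the result back out. For a tiny function, these two
           memcpys can easily dominate the cost of the call. */
        char result[64] = {0};
        memcpy(result, &linear_memory[out_off], len);
        printf("%s\n", result);
        return 0;
    }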

So, in those cases, we don't expect to match native, but things will get better when the GC proposal lands in wasm, which adds support for operating on memory regions outside of wasm's linear memory. But in most applications we've experimented with, we haven't found this overhead to be a showstopper.


Ah, cool.

Performance is really difficult to properly measure, because each of these projects has a different performance profile, and new ones keep popping up, like Lucet did today! And if you write a benchmark, then something like https://hacks.mozilla.org/2018/10/calls-between-javascript-a... happens, and all of a sudden the numbers are all different. So it's really hard to speak about wasm generally this way; it's better to talk about specific implementations and use-cases, IMHO. And to understand that benchmarks need to be updated in order to still be relevant.

Security is an interesting axis; like the JVM (as far as I know), wasm is memory safe by design. But security is more holistic than that. One interesting thing about wasm is that you have to say up-front what things you want to call in the host, which provides the ability for the host to say "nope, you're not gonna be able to do that." And of course, logic bugs can lead to security vulnerabilities in all of these platforms.
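
For what it's worth, from C the "say up-front" part looks something like this (a sketch using clang's wasm import/export attributes; the module and function names are made up, and the build command is only an example):

    #include <stdint.h>

    /* The guest declares up front which host functions it wants to call.
       If the embedder chooses not to provide "host.log" when it
       instantiates the module, the guest simply never gets that
       capability; there is no way to reach the host behind its back. */
    __attribute__((import_module("host"), import_name("log")))
    void host_log(const char *msg, uint32_t len);

    __attribute__((export_name("run")))
    void run(void) {
        const char msg[] = "hello host";
        host_log(msg, sizeof msg - 1);
    }

    /* Built as a freestanding wasm module, e.g.:
       clang --target=wasm32 -nostdlib -Wl,--no-entry -o guest.wasm guest.c */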


I think wasm's memory safety is weaker than the JVM's. wasm's checking occurs at control flow points and the linear memory boundary, but does not extend down to individual objects. For example, arrays are not bounds checked in wasm.
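
A small illustration of the difference (C that you'd compile to wasm, e.g. with the wasi-sdk; the exact data layout is of course not guaranteed):

    #include <stdio.h>

    static char secret[16] = "hunter2";
    static int  numbers[4] = {1, 2, 3, 4};

    int main(void) {
        /* Index 10 is out of bounds for `numbers`, but it still lands
           inside the module's linear memory, so a wasm runtime does not
           trap; it just reads whatever data happens to live there.
           The JVM would throw ArrayIndexOutOfBoundsException here. */
        int oob = numbers[10];
        printf("read past the array: %d\n", oob);
        (void)secret;
        return 0;
    }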


The JVM and wasm have completely different memory safety stories on the inside of the sandbox, because wasm can run arbitrary C, so it has to cope with that somehow.

But the thing Steve is pointing out is that they're equivalent from the outside, where neither can corrupt the host environment.


But this is true of an ordinary Unix process too, via virtual memory. A Unix process can’t corrupt the kernel or other processes. But in practice there’s still a lot you can do with a buffer overflow vulnerability.


The issue with a Unix process is that it has the same rights as the user who ran it. The design goal of WASI is to provide a capability-based system[1], so an attacker exploiting a buffer overflow wouldn't be able to access things the original program wasn't supposed to access (rough sketch below the link).

[1]: https://hacks.mozilla.org/2019/03/standardizing-wasi-a-webas...
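
A sketch of how that looks from inside the guest, assuming a WASI runtime that has only been granted a ./data directory (the paths and the wasmtime invocation are just illustrative):

    #include <stdio.h>

    int main(void) {
        /* Under WASI, the program can only open paths under directories
           the host explicitly preopened, e.g.:
               wasmtime --dir=./data guest.wasm
           Everything else is denied, regardless of what the user who
           launched the runtime is allowed to read. */
        FILE *ok  = fopen("data/config.txt", "r");
        FILE *bad = fopen("/etc/passwd", "r");

        printf("granted path:   %s\n", ok  ? "opened" : "denied");
        printf("ungranted path: %s\n", bad ? "opened" : "denied");

        if (ok)  fclose(ok);
        if (bad) fclose(bad);
        return 0;
    }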


Then it's a bit bizarre to call it memory safe. I would say that "sandbox" is the far more established term for what it is or does. If I said that Linux/C is memory safe because of seccomp and/or user namespaces, wouldn't that be an unhelpful statement? Can we say that Chrome or QEMU are implemented on a memory safe platform?


Wasm doesn't really have "arrays", so I don't think wasm checking them would really make sense. I guess you could argue that this is a distinction without a difference. You still won't get memory unsafety, which, in my mind, is the higher-order bit. YMMV of course.


You do get memory unsafety. OpenSSL compiled to wasm would still be vulnerable to Heartbleed. The JVM would prevent it.


Please re-read what Rusky said; the point is about the boundary. Yes, wasm programs can mess up their own memory. That’s not what I’m talking about. I should be more clear about this in the future though, thanks.


I don't understand this point. We don't say that C is memory safe because of the kernel boundary, even though this boundary does provide important safety guarantees.

edit: I guess this viewpoint makes sense for the use case of "thousands of wasm programs in the same process." They are protected from each other, in a sense qualitatively the same as Unix processes. This is still a much weaker guarantee than the JVM provides.


The goal is to allow existing C code to run, so in the end you'll need the same freedom when it comes to memory access. The difference between this and a plain process is that the sandbox uses a capability-based security model, instead of giving all of the user's rights to the process [1].

[1]: https://hacks.mozilla.org/2019/03/standardizing-wasi-a-webas...


There is no mention of memory safety in that post. It's a very confusing term in this context. I wouldn't call running a binary with syscall interception (with QEMU, for example) "memory safe".


If the memory safety of your own code is a high priority, consider writing it in Rust. If you're trying to get a C codebase to run everywhere, that's fine too, but it seems unrealistic to expect the wasm runtime to magically make it memory safe.


It is possible to corrupt data on the stack due to out-of-bounds writes.
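
For example, address-taken locals end up on a "shadow stack" that is just another region of linear memory, so something like this (illustrative; the exact stack layout is not guaranteed) can silently clobber a neighbouring stack slot instead of trapping:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* Both arrays live on the C shadow stack inside linear memory. */
        char buf[8];
        char flag[8] = "safe";

        /* Writes past the end of `buf`. A wasm runtime only traps when an
           access leaves linear memory entirely, so adjacent stack data can
           be overwritten without any fault. */
        strcpy(buf, "AAAAAAAAAAAAAAA");

        printf("flag is now: %s\n", flag);
        return 0;
    }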


Yes, bugs can always exist.


Not sure what you mean by that, but probably not that stack corruption is a necessary evil of any language.


Not when you use a VM that actually cares about security at all layers, but anyway WASM is doing everything better. /s


Using such a VM requires rewriting existing C code. Wasm's approach allows that to happen incrementally, where it matters, by using memory safe languages like Rust, instead of relying on the VM.


Not at all, as proven by memory tagging on Solaris/SPARC, iOS and upcoming Android/ARM v8.3.

So leaving memory tagging out of WASM was a deliberate design decision.

There are already better alternatives to write secure code if using C is not a requirement, so again nothing new on WASM other than its hype.


Sure, and we use software memory tagging with things like LLVM sanitizers. It's great.

But a) the hardware Wasm needs to support doesn't have tagging, and b) the software equivalent requires support from the allocator and has a large performance penalty.

So yes, it was a deliberate design decision, taken in order to support existing C programs on the Web.

This is really getting tiring: learning from the past is great, but fetishizing it to the point of denying that anything new has any value is... not.


WASM wouldn't even be a thing if it weren't for Mozilla politics against PNaCl.

Tiring is the continuous advertising that WASM is the second coming of Christ in VM implementations.


Those "politics" were there for a reason. Wasm is a solid improvement over PNaCl, which I will note you have curiously left out of all your other old-VM-worship claims.


I see.

Nice, sounds like the Android/Fuchsia approach security-wise :)


> Does WASM have better performance than the JVM? If not, could it have better performance theoretically? Is it more secure? How much slower than a regular binary would it be? etc.

Hard to answer, as they're different domains. In theory, once optimized and JIT'd, the generated asm could be the same on non-GC'd sections. JVM bytecode is higher level, which means it both benefits (e.g. it can be optimized more) and is hamstrung (e.g. by GC). As for a "regular binary", the JVM knows no such thing, and really neither does WASM. It also depends on what "regular" is (i.e. what the code does), which runtimes you use, and how much is AOT'd vs JIT'd vs interpreted.


WASM has a subset of a subset of a percentage of the features that the JVM provides.

It's basically a runtime for a C/C++-like feature set [1]; the JVM already supports many of the features that are still on WASM's roadmap [2].

I would be very wary of any performance comparisons between WASM and ... pretty much anything else (except, possibly, bare C?).

[1] https://webassembly.org/docs/high-level-goals/

[2] https://webassembly.org/docs/future-features/


If I understood this correctly, that's what the Fastly people do. They compare it to bare C binaries, and aside from string performance, it holds up well.



