Hacker News

I think wasm's memory safety is weaker than the JVM's. Wasm's checking happens at control flow points and at the linear memory boundary, but does not extend down to individual objects. For example, array accesses are not bounds-checked in wasm.


The JVM and wasm have completely different memory safety stories on the inside of the sandbox, because wasm can run arbitrary C, so it has to cope with that somehow.

But the thing Steve is pointing out is that they're equivalent from the outside, where neither can corrupt the host environment.


But this is true of an ordinary Unix process too, via virtual memory. A Unix process can’t corrupt the kernel or other processes. But in practice there’s still a lot you can do with a buffer overflow vulnerability.


The issue with a Unix process is that it has the same rights as the user who ran it. The design goal of WASI is to provide a capability-based system[1], so an attacker exploiting a buffer overflow wouldn't be able to access anything the original program wasn't supposed to.

[1]: https://hacks.mozilla.org/2019/03/standardizing-wasi-a-webas...


Then it's a bit bizarre to call it memory safe. I would say "sandbox" is a far more established term for what it is and does. If I ran C code under seccomp and/or user namespaces and then claimed Linux/C is memory safe, wouldn't that be an unhelpful statement? Can we say that Chrome or Qemu are implemented on a memory safe platform?


Wasm doesn't really have "arrays", so I don't think wasm checking them would really make sense. I guess you could argue that this is a distinction without a difference. You still won't get memory unsafety, which, in my mind, is the higher-order bit. YMMV of course.


You do get memory unsafety. OpenSSL compiled to wasm would still be vulnerable to Heartbleed. The JVM would prevent it.


Please re-read what Rusky said; the point is about the boundary. Yes, wasm programs can mess up their own memory. That’s not what I’m talking about. I should be more clear about this in the future though, thanks.


I don't understand this point. We don't say that C is memory safe because of the kernel boundary, even though this boundary does provide important safety guarantees.

edit: I guess this viewpoint makes sense for the use case of "thousands of wasm programs in the same process." They are protected from each other, in a sense qualitatively the same as Unix processes. This is still a much weaker guarantee than the JVM provides.


The goal is to allow existing C code to run, so in the end you need the same freedom when it comes to memory access. The difference between this and a plain process is that the sandbox uses a capability-based security model, instead of giving all the user's rights to the process [1].

[1]: https://hacks.mozilla.org/2019/03/standardizing-wasi-a-webas...


There is no mention of memory safety in this post. It's a very confusing term in this context. I wouldn't call running a binary with syscall interception (with qemu, for example) "memory safe".


If the memory safety of your own code is a high priority, consider writing in Rust. If you're trying to get a C codebase to run everywhere, that's fine too, but it seems unrealistic to expect the wasm runtime to magically make it memory safe.


It is possible to corrupt data on the stack due to out-of-bounds writes.


Yes, bugs can always exist.


Not sure what you mean by that, but probably not that stack corruption is a necessary evil of any language.


Not when you use a VM that actually cares about security at all layers, but anyway WASM is doing everything better. /s


Using such a VM requires rewriting existing C code. Wasm's approach allows that to happen incrementally, where it matters, by using memory safe languages like Rust, instead of relying on the VM.


Not at all, as proven by memory tagging on Solaris/SPARC, iOS and upcoming Android/ARM v8.3.

So leaving memory tagging out of WASM was a deliberate design decision.

There are already better alternatives to write secure code if using C is not a requirement, so again nothing new on WASM other than its hype.


Sure, and we use software memory tagging with things like LLVM sanitizers. It's great.

But a) the hardware Wasm needs to support doesn't have tagging, and b) the software equivalent requires support from the allocator and has a large performance penalty.

So yes, it was a deliberate design decision, taken in order to support existing C programs on the Web.

This is really getting tiring: learning from the past is great, but fetishizing it to the point of denying anything new has any value is... not.


WASM wouldn't even be a thing if it wasn't for Mozilla politics against PNaCL.

Tiring is the continuous advertising that WASM is the second coming of Christ in VM implementations.


Those "politics" were there for a reason. Wasm is a solid improvement over PNaCl, which I will note you have curiously left out of all your other old-VM-worship claims.



