Hacker News
Debugging WebAssembly outside of the browser (hacks.mozilla.org)
140 points by syrusakbary on Sept 4, 2019 | 63 comments



I'm not sure if I like where this whole WebAssembly thing is going. On one hand I'm thrilled at the prospect of Python in the browser; on the other, this seems like a huge layered mess just waiting to trouble those poor devs.

I actually enjoy creating plain server-side rendered web applications with just a dusting of client-side JavaScript where it makes sense. The development flow is simple and debugging is easy.


WASM is still young. The tooling around it hasn't matured yet. Do you remember trying to debug JavaScript before the creation of the in-browser debugger? Desktop application developers back then probably felt the same way you do now.

> this seems like a huge layered mess just waiting to trouble those poor devs.

WASM is a VM. It's less complicated than a JavaScript engine. How complicated it seems in daily practice is going to come down to how good the tooling is.
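For a sense of how simple the core VM is: WebAssembly is a stack machine, and the heart of one fits in a few lines. This is purely an illustrative toy in Python, not a real WASM runtime; the instruction names mirror real opcodes but everything else is made up:

```python
# Toy stack-machine interpreter illustrating WASM's execution model.
# Instruction names mirror real WASM opcodes; the rest is a sketch.

def run(program):
    stack = []
    for op, *args in program:
        if op == "i32.const":
            stack.append(args[0])                # push an immediate
        elif op == "i32.add":
            b, a = stack.pop(), stack.pop()
            stack.append((a + b) & 0xFFFFFFFF)   # wrap to 32 bits
        elif op == "i32.mul":
            b, a = stack.pop(), stack.pop()
            stack.append((a * b) & 0xFFFFFFFF)
        else:
            raise ValueError(f"unknown opcode: {op}")
    return stack.pop()

# (2 + 3) * 4
result = run([
    ("i32.const", 2),
    ("i32.const", 3),
    ("i32.add",),
    ("i32.const", 4),
    ("i32.mul",),
])
```

A real engine adds validation, typed locals, linear memory, control flow and JIT compilation on top, but the execution model stays this straightforward.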


Minor nitpick: WebKit created a debugger a couple of years before Chrome existed. https://webkit.org/blog/61/introducing-drosera/


The Mozilla Suite had a JavaScript debugger (https://www-archive.mozilla.org/projects/venkman/) before either WebKit or Firefox existed. I won't claim it was the first, although it may well have been.


Netscape even had a JS debugger in '97 [0].

[0] http://www.rdwarf.com/lerickson/jscript/jsd.pdf


Wow, I wish I had known that back then. How did I miss that?


Don't worry, you didn't miss much. It was as stable as most software of that era.


Though surely it wasn't better than the IE JS debugger. "Error on line 0"


Yes, the bike shed should definitely be green.

I knew somebody was going to object to that little factoid despite its irrelevance to the point at hand. I've updated my comment to remove any reference to any specific in-browser debugger.


Apologies, didn’t intend to detract from your point. I thought people might be interested in the history of it since very few people even at the time seemed to know of its existence.


No worries. I should have chosen my words more carefully. And, to be fair, I didn't know WebKit was the first to create an in-browser debugger either. I always thought it was either Chrome or Firefox.


> Do you remember trying to debug Javascript before the creation of the in-browser debugger?

Yes, I used Visual Studio. You could debug the server in the same session. It was pretty sweet.


True, without tooling most platforms/languages would suck to work with.

But I believe that the more tooling a stack needs, the more wobbly the stack gets. The more wobble, the more error-prone and less fun it is to work with.


A good use case for WASM outside the browser is application plugins, or tools where it's better to distribute "WASM binaries" because client-side compilation from C, C++ or Rust might take too long or is too much hassle for the user.

It probably won't replace solutions that already work well, but it might be an option for the things that currently don't work.


Until you find out WebAssembly can't call any native functions (for example: to create a GUI, to call a .so / .dll export, etc.)

There was discussion of adding `dlopen` + `dlsym` to WASI but it got shot down. WebAssembly seems to be in a weird state of "look, I can run in and out of the browser, but I can't actually do anything useful out of the browser, and I can't access the DOM without going through JavaScript in the browser."


WASM embeddings like https://github.com/wasmerio/wasmer-c-api should allow that, at least calling from the embedder into WASM, and from WASM into the embedder.

(e.g. see https://github.com/wasmerio/wasmer-c-api/blob/master/wasmer-...)

There are similar embeddings for other languages (Python, Go, Ruby, etc.) here:

https://github.com/wasmerio
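The key idea these embeddings share is that a WASM module only gets the host functions the embedder explicitly passes in as imports, and the host calls back in through the module's exports. Here's a toy Python sketch of that shape; `ToyModule`, `greet`, and the `env.log` import are all made-up names standing in for a real embedder API like wasmer's, not actual wasmer calls:

```python
# Toy illustration of the WASM import/export model: a sandboxed module
# can only call host functions the embedder explicitly hands to it.
# All names here are hypothetical; this is not a real embedder API.

class ToyModule:
    """Stands in for an instantiated WASM module with imports/exports."""
    def __init__(self, imports):
        self._imports = imports  # capabilities granted by the embedder

    # An "exported" function the host can call into.
    def greet(self, name):
        # The module can only reach the host through its granted imports.
        log = self._imports["env"]["log"]
        log(f"greet called with {name!r}")
        return f"Hello, {name}!"

# Host side: decide exactly which capabilities the module receives.
messages = []
instance = ToyModule(imports={"env": {"log": messages.append}})
result = instance.greet("WASM")
```

The point of the design is that nothing is ambient: if the embedder doesn't pass in a `log` (or file, or GUI) function, the module simply has no way to reach it.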


I can't think of any good use cases for loading WebAssembly modules into native code. I also think it's a bad sign that, right out of the gate, the WebAssembly community is so fragmented across the different runtimes (each offering different functionality at the moment).


Think of plugins for applications like Autodesk Maya. Currently these are either cross-platform, but very slow Python plugins, or fast native DLLs which need to be compiled for each app-version/OS/CPU combination.

If Maya had an embedded WASM runtime it could support plugins which are fast (at least much closer to native speed than Python scripts), and you only need to distribute a single WASM blob.


Why not just use Lua?


Sometimes you need to embed libraries which are only available as C or C++, or other programming languages. If this code needs to be included into the Lua plugin as CPU/OS specific DLLs you're back at the current mess.


Fastly is building a WebAssembly runtime on its edge servers so customers can run their own sandboxed WebAssembly closer to clients:

https://www.fastly.com/blog/announcing-lucet-fastly-native-w...


Wasmer

Lucet

wasm-jit-prototype

None of them will let your WebAssembly code call an exported function from a shared library.

As an exercise, try to call https://docs.microsoft.com/en-us/windows/win32/api/winuser/n... from WebAssembly.


I think the plan was to get high performance stuff into the browser, with languages like C/C++ and Rust and not to replace JS with Python, C# or Java.

But well, I guess people will do it anyway, let's see how this plays out in the long run.


> I think the plan was to get high performance stuff into the browser, with languages like C/C++ and Rust and not to replace JS with Python, C# or Java.

On the contrary, for many people performance isn’t a factor, but they find the idea of using the same language (and thus code) client-side and server-side appealing.

This was one of the key features and benefits of Node.js, and I won't be surprised to see others trying to replicate the experience via WebAssembly.


Contrary to popular belief, performance is always a factor.

Every cycle your app wastes is a cycle another app can't use. It's a fraction of a watt that adds up over time, draining batteries faster, drawing electricity from the power grid and burning carbon fuels.

Each individual cycle may be cheap, but when we have a culture of development that doesn't respect performance, the waste adds up and has a real impact on the world. Just like all other resources, in aggregate, waste is harmful.

Even on the spectrum of "developer productivity", you have to seriously consider if the operational cost of needing beefier servers to run slow-language-X is less than the operational cost of hiring fast-language-Y developers.

I doubt anyone has done a serious study of the issue. Say C++ is 15x faster than Python. You might be able to say that a C++ dev is 2x the cost of a Python dev. Is the ratio of development cost to server cost more than 7.5x? What about other languages that aren't quite so fast, but aren't quite so expensive? You might find that C# is 10x faster than Python, but devs are only 1.25x the cost. Now your ratio of dev cost to server cost needs to be at least 8x to come out ahead.
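The break-even arithmetic above can be written out explicitly (using the same hypothetical speedup and salary numbers from the comment, which are illustrative, not measured):

```python
# Back-of-the-envelope break-even from the hypothetical numbers above:
# a language that is `speedup`x faster pays off once your
# dev-cost : server-cost ratio exceeds speedup / dev_cost_multiplier.

def breakeven_ratio(speedup, dev_cost_multiplier):
    """Dev-to-server cost ratio at which the faster language breaks even."""
    return speedup / dev_cost_multiplier

cpp_vs_python = breakeven_ratio(15, 2)        # C++: 15x faster, devs 2x cost
csharp_vs_python = breakeven_ratio(10, 1.25)  # C#: 10x faster, devs 1.25x cost
```

Under these made-up inputs, C++ breaks even at a 7.5x dev-to-server cost ratio and C# at 8x, which is the comparison the paragraph above is making.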

And honestly, good developers are expensive, no matter what language they're working in. Choosing a programming language to be able to optimize for cheap developers could be optimizing for bad developers who can't create the features you need. They might be half the price, but can they get the job done--after all the reworks and defect fixes and other learning curves--in less than 2x the time? I've never seen a programming language on its own have that kind of impact on development.

The point is not that "Language X is universally better than language Y". My point is more that "some languages are so incredibly wasteful that you seriously have to start considering whether or not you're saving money".


It's simply not possible to quantify performance in this way when comparing language implementations. Performance can also be constrained by environment and/or context, on top of the expensive abstractions the compiler or interpreter is required to make. But I'm honestly just being pedantic.

You make a good argument about the potential costs of abstractions when they are applied at a very large scale. You are, however, ignoring the cost of things that may be prevented with high-level language features, for instance errors in memory management, which in my opinion are far worse than wasted cycles. I'd wager most would choose a marginally higher power bill over Heartbleed.


Though I didn't call it out specifically, I wasn't ignoring it, hence the mention of C#.

Again, some languages are unnecessarily wasteful of cycles.


I don’t think a serious study of the issue can be done. Code review is the most difficult part of the development process (and a major advantage of open source), and businesses want to code the MVP as fast as possible without the competition copying them.

The ideal language isn’t the most efficient language but one that is fast to develop, and leaves a possibility to optimize later. I would think languages like Nim and Rust are best in that regard, and more recently JavaScript.


I agree with your comment about the "most efficient language". And I agree that the business interest is micro-focused on monthly cost versus yearly or even multi-year cost.

I think those two things are at odds with each other. I think a lot of technical decisions are being made on a short-sighted business level. And I think a lot of people on the technology side have internalized too much of the business message.

I'm not arguing against ever using languages like Python or Ruby or PHP or whatever. But I think a lot of people make assumptions that their language choice doesn't matter. In a lot of the same ways that building an entire economy on carbon fuels was "easy", it only worked because of externalities. As a profession, we need to be taking a longer view of our craft.

I don't think that it is true that it's impossible to make C++ secure. I don't think that it's true that--say--Java and C# are less productive than--say--Python or Ruby. On a surface level, certain things are easier, but once you start digging into real problems, there isn't much difference. And experienced developers in any language will be far beyond that surface syntax.

I just think that, as a profession, we have our priorities way out of whack.


Maybe your comment is based on thinking about a recent conversation instead of replying to my comment.

You make a few good points, but keep in mind that software development is both a centralization and a planning problem. And exercising planning requires having accurate feedback.


>On the contrary, for many people performance isn’t a factor, but they find the idea of using the same language (and thus code) client-side and server-side appealing.

Transpilation is a better option if you just care about using the same language because you have better control over the DOM and you can use HTML and CSS markup (which is the best widget set across any platform).


I think the USP of Node.js was that the majority of devs already knew JS.

What I want to say is, I question the premise that there are as many backend devs who just don't do frontend work because of JS as there were frontend devs who didn't do backend work because JS wasn't an option.

On the other hand the industry gets bigger by the day, maybe even a small percentage of devs doing Python frontends will be thousands of people :)


I work at a company where 2/3 of the teams currently use Blazor because they aren't comfortable/familiar enough with JS. My team uses a C# backend and a React/Redux/Material-UI front end (with node scripts for all orchestration and CI/CD).

It's a mostly MS shop and you'd be surprised how many developers in orgs really just don't do front end work because of JS.


Can you show some apps using Blazor?


Nothing is public/demoable at this point, sorry. They're using Bootstrap with a Material Design theme applied, nothing too special (CRUD). The app I'm working on has a more complex UI.


How long until the industry doesn't get bigger by the day, and frontend devs + backend devs alike get automated away when AI can generate maintainable + readable code for 95% of the "API + UI plumbing" that is out there?


Judging from how every "you don't need programmers" service or piece of software advertised so far has actually worked out, I doubt it will ever really happen in practice, at least not in the coming century or more.


I think high-level business analysts could write some Markdown-like, human-readable syntax for things like:

`apiRequest(method, url, requestParameters, headers, body)`

`databaseQuery(credentials, sql, bindings)`

The past 3-5 years of my professional career have just been shuffling data around from one point to another.


Such an AI first needs to understand, in minute detail, what it is actually building. Even two people can't always understand each other, or agree on what they are actually building.

I installed Rational Rose once, and... that was it. Code generation tools suck.


Some people may feel that way, but the general plan is not to replace JS. It's mentioned multiple times in official docs and blog posts.

https://webassembly.org/docs/faq/#is-webassembly-trying-to-r...


I'm not a fan of seeing WebAssembly spread outside the browser. If you want to sandbox binary code, there are many more efficient options, e.g. Linux containers (especially with tools like gVisor) or VMs. Vendors that only allow use of WebAssembly are leaving performance and compatibility on the table.


> If you want to sandbox binary code, there are many much more efficient options, eg. Linux containers (especially with tools like gvisor) or VMs.

WebAssembly is a VM, with an accompanying instruction format.[1]

> Vendors that only allow use of WebAssembly are leaving performance and compatibility on the table.

It'd be interesting to know what you're thinking of when you say that. For example, is Cloudflare leaving performance and compatibility on the table by using WebAssembly instead of Linux containers for Cloudflare Workers?

[1] https://en.wikipedia.org/wiki/WebAssembly#Stack_machine


And what about the Windows users?

WebAssembly is a perfectly good container and would by default have cross-platform capability.


Just like when I deploy JARs compiled on Windows across Linux servers, amazing!


Would you say the same about the JVM, JavaScript (SpiderMonkey + V8), RVM, CPython, and other bytecode VMs? I don't see how a bytecode VM leaves performance and compatibility on the table. If anything, from the JVM standpoint, the JIT features and platform separation improve performance and stability.


VMs might be more "efficient" in some ways but certainly not all ways. If you want to run 10,000+ sandboxes on a machine, you can do it with WebAssembly. With (hardware virtualization) VMs it would be rough.


Looks interesting, but why would I need to debug wasm binaries when I could debug the Rust code directly? And why would I want to convert it to wasm? In order to run it in the browser, right?


I don't see how you would "debug the Rust code directly" without debugging the wasm binaries. Wasm is a compilation target for Rust. To be clear: you compile Rust directly to wasm; it's not like you compile your Rust to a binary and then convert that to wasm. I do not believe there is an intermediate artifact.

As for your second question... there's motion towards bringing wasm runtimes to other environments, like devices or the server side. Kind of like the JVM or CLR, but closer to the metal (e.g. no garbage collector). The advantage on the server side is the same as with Node.js: sharing code with the client. The advantage for other environments is a portable file format with performance characteristics closer to native. My impression is that the reception is less enthusiastic than for wasm in the browser.


> I don't see how you would "debug the Rust code directly"

I think the intention of OP is that you compile the same Rust code to a native x86 or ARM binary and debug that. Most bugs are ISA-agnostic so that approach totally makes sense in many situations.

That's how I'm debugging my WASM code compiled from C/C++, and that also works automatically with IDE debuggers that don't know about WASM (like Xcode's or Visual Studio's).


While that can work with self-contained libs, quite often Rust WASM code uses wasm-bindgen/web-sys to talk to JS/DOM which makes in-browser debugging the only reasonable way.


Flash and Java allowed you to run the code in the browser, but still debug using the source code and not the VM bytecode.


A good wasm debugger would let you see the source code too, just like how regular native debuggers show you the source code instead of x86 assembly.


For bugs that only manifest when compiled to WASM but not when compiled to x86 or ARM machine code (quite rare, but happens), or bugs in platform-specific code (for instance code which talks to browser APIs).

In typical applications, the vast majority of bugs should happen in the platform-agnostic parts, so debugging a native build totally makes sense. It would still be great if WASM were as trivially debuggable as native code.

IMHO proper source-level debugging support in the browser dev-tools (or remote-debugging via VSCode) would make a lot more sense than debugging in gdb/lldb though.


Mainly because things might work differently in a WASM runtime than they would in native code. A chief example is filesystem access. The WebAssembly System Interface (https://wasi.dev) uses capability-based access to files, so WASM modules are unable to open or write to paths that are not explicitly passed to them, with appropriate permissions, by the WASM runtime.

So there's effectively another layer of sandboxing versus a native binary, and you might need to track down how your program behaves in that sandbox.

Similarly, if you're working with a program that imports several WebAssembly modules, you'll need a way to debug the combined system, versus any single module in isolation.
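The preopen idea described above can be sketched in a few lines. This is a toy Python model of capability-based path resolution, purely to illustrate the concept; it is not WASI's actual interface, and the `Sandbox` class and its method names are invented:

```python
import os

# Toy sketch of WASI-style capability-based file access: the host
# "preopens" directories, and sandboxed code can only resolve paths
# inside them. Illustrative only; not WASI's real API.

class Sandbox:
    def __init__(self, preopened_dirs):
        self._roots = [os.path.abspath(d) for d in preopened_dirs]

    def resolve(self, path):
        full = os.path.abspath(path)
        # Grant access only if the path falls under a preopened root.
        if any(full == r or full.startswith(r + os.sep) for r in self._roots):
            return full
        raise PermissionError(f"no capability for {path!r}")

# Host grants access to /tmp only.
sandbox = Sandbox(preopened_dirs=["/tmp"])
allowed = sandbox.resolve("/tmp/data.txt")  # fine: under a preopen
try:
    sandbox.resolve("/etc/passwd")          # denied: no capability granted
    denied = False
except PermissionError:
    denied = True
```

The practical consequence for debugging is exactly the point above: code that reads files happily as a native binary can fail inside the runtime because no capability for that path was ever granted.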


What if your code is browser-specific and uses the JavaScript-based API? You'll need to run it in the browser to debug it, right?


Yep! We're still working on how in-browser debugging should work, but we'll get there.

This post is specifically "Debugging WebAssembly Outside of the Browser," but the title on HN was unfortunately truncated.


I gotta say, that's something I'm not looking forward to


Looks like we're well on our way to a somewhat overly complicated reimplementation of slim binaries[1]

[1] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.108...


We had that already with the JVM and CIL. They never caught on in the browser.


No, we had a VM / runtime environment with those. Now it's much closer to the metal.


The more relevant improvement is that it's better-integrated with the browser; Java and Silverlight both had pretty horrendous user experiences (and the latter wasn't even usable outside Windows without hacking Wine/Mono into a browser plugin, e.g. Pipelight).

The sandboxing story also seems to be much stronger with WASM than either of those (where sandboxing felt more like an afterthought at best).

In other words: WASM still relies on a runtime/VM, but the difference is that the runtime/VM is astronomically better for the intended use case (and also, incidentally, a presumably-unintended use case) than most existing runtimes/VMs.


I think the title on this would be more clear if it was the full article title, "Debugging WebAssembly Outside of the Browser."


Fixed now.



