Despite the author's obvious enthusiasm: WASM has a long way to go before it is really useful. From my humble perspective there has not been much progress since the 1.0 release. As long as every access to the host (especially the GUI) is tunneled via JavaScript, one shouldn't be surprised at the low performance and the enormous memory consumption. See what happens behind the scenes when you set a pixel via the SDL API while running a WASM app in the browser. And there are plenty of good alternatives for the desktop already.
It does feel like WebAssembly development has stalled after the initial release. There are proposals for reference types, garbage collection, and host bindings that would greatly expand a WebAssembly binary's capabilities, but they haven't been implemented in any browser, much less standardized yet.
I'm still waiting for tail call optimization. Without support for it, a bunch of languages are either forced to skip WASM entirely, or have to resort to something like trampolines and give up performance.
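The trampoline workaround mentioned above can be sketched in Rust (the `Step` enum and the factorial state are purely illustrative): instead of each tail call consuming a stack frame, each "call" returns a value describing the next step, and a plain loop drives the computation.

```rust
// One "bounce" of the trampoline: either more work to do, or a final result.
enum Step {
    Continue(u64, u64), // (n, accumulator) for a tail-recursive factorial
    Done(u64),
}

// What would be a tail call becomes a returned Step value.
fn fact_step(n: u64, acc: u64) -> Step {
    if n <= 1 {
        Step::Done(acc)
    } else {
        Step::Continue(n - 1, acc * n)
    }
}

// The driver loop replaces the call stack: constant stack usage,
// at the cost of constructing and matching a Step per iteration.
fn trampoline(mut n: u64, mut acc: u64) -> u64 {
    loop {
        match fact_step(n, acc) {
            Step::Continue(next_n, next_acc) => {
                n = next_n;
                acc = next_acc;
            }
            Step::Done(result) => return result,
        }
    }
}
```

This is the performance trade-off referred to above: the loop is safe in a target without tail calls, but every logical call pays for the enum round-trip instead of being a plain jump.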
>WASM has a long way to go before it is really useful.
I disagree.
I can already write decent C# applications that run in the browser while using existing C# libraries for, e.g., parsing text, and they will run fine.
So WebAssembly allows me to take existing C# code and move it to the browser while developing in my fancy language instead of JavaScript, and I think that (dropping JS) is a very solid pro.
I just think the tooling (Visual Studio, VS Code, Rider, etc.) around C#'s Blazor has to mature in order to make this kind of development worth considering, but I wouldn't call that a "long way to go".
> I can already write decent C# applications that run in browser .. it will run fine.
It is undoubtedly technically possible, but not really practicable, unless you accept an exorbitant slowdown and much higher memory consumption than, e.g., when running on .NET. Personally, I don't find that attractive in any way, nor do I find it particularly efficient.
Try it with a representative application, where a whole JS-based single-page app is implemented in a language other than JS (e.g. C++/Qt), which is one of the core benefit propositions of WASM.
I think you have the performance differences backwards, wasm is usually quite a bit faster than comparable javascript. While it's true that right now there are additional overhead costs for browser APIs, that's changing, but it's also misunderstanding the value proposition of wasm. You can write an entire front end web application if you want, but it really shines for library code and computationally expensive code. And of course, the distribution benefits mentioned in the article.
I think you're just misunderstanding what wasm is. Writing browser applications in any language was never a "core benefit proposition"; it was just something you could do with the tooling. The value has always been in the library/platform layer. Objectively, pure wasm is faster than pure javascript; subjectively, the additional type safety from writing in Rust (which is, practically speaking, what most people shipping wasm write their source code in) makes people more confident that their complex code works.
Outside of a few hobbyists, nobody is writing entire web applications in wasm. We already have great frameworks for dealing with UI/DOM stuff.
Browser API calls add about 1µs of overhead. If your code is conscious of this, it won't be a problem. Much of the thought process behind React is to update the DOM as little as possible, because repainting usually takes a while. Similarly, your WASM code will have this small overhead, but even 1000 calls is just 1ms. Regardless, there are still much larger performance gains to be had from WASM-compatible languages when providing computationally expensive features.
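The arithmetic behind that comment can be sketched as a back-of-envelope model (the ~1µs per-call figure is the comment's estimate, not a measurement; the function names are made up for illustration):

```rust
// Assumed fixed cost of crossing the JS/WASM boundary, per host call.
const CALL_OVERHEAD_US: f64 = 1.0;

// Naive approach: one host call per item, so overhead scales linearly.
fn naive_cost_us(items: u32) -> f64 {
    items as f64 * CALL_OVERHEAD_US
}

// Batched approach: stage all items in linear memory on the WASM side,
// then flush with a single host call, paying the overhead once.
fn batched_cost_us(_items: u32) -> f64 {
    CALL_OVERHEAD_US
}
```

Under this model, 1000 per-item calls cost about 1ms of pure boundary overhead, while a batched flush costs about 1µs, which is why being "conscious of this" mostly means batching.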
My favorite example is Rapier http://rapier.rs/, check out the 3D demos. It blows every JS physics engine out of the water, and it’s using Conrod, a native gui library which has a webgl backend.
Thanks for the link. I tried the cubes 3d demo which takes about 45 seconds on my laptop until no cube moves anymore (around step 590); probably not the expected performance; if I find time I will run the Rust version locally.
"WebGL is fully integrated with other web standards, allowing GPU-accelerated usage of physics and image processing and effects as part of the web page canvas"
So I would guess that when your graphics chip is faster than mine you get a higher performance.
These two things have nothing to do with each other.
This is a physics engine running on your CPU, not your GPU.
The webgl window is run with your GPU and is extremely simple. Drawing a few dozen boxes is something graphics cards from 22 years ago would have no problem with.
And yet the graphics are actually the performance bottleneck.
I tried that on a 4k monitor, and it ran quite slowly. I then made the window smaller, and it ran much faster. It's bottlenecked on the graphics, not on the physics engine.
I think that says a lot more about your combination of software and hardware than webasm.
If someone says "webasm is slow, this demo took 45 seconds to run" and you say "it ran slow for me when I made the window so big that between my browser's webgl implementation, my gpu drivers and my graphics card, a hundred cubes make everything run at 1/10th the speed" what point is that making?
This subthread started because two people saw wildly different performance for the same demo, and some attributed that to wasm. My point was that given that the demo seems heavily dependent on graphics performance, it's unlikely that the observed performance differences had anything to do with wasm.
The demo is just a few boxes, it is not graphics limited. A 4k webgl window might have some sort of bottleneck in your web browser, drivers or gpu. The demo is a physics simulation though. I don't know why you would create a situation where anything would slow down on your computer while having nothing to do with CPU speed and then talk about it in a thread about webasm.
I'm not sure why anyone would think it makes sense to use a computer so ancient it can't run webgl and then use that to talk about webasm not being fast on a webgl demo.
>Even if true, it doesn't help much if you have to marshall through JS for nearly every call.
If your particular front end code is dominated by many calls to the DOM, then WASM may be a net performance loss currently, yes.
In cases where you are doing something computationally heavy then periodically updating the dom with results, then WASM can be a huge win.
In cases where the performance is fine either way, being able to use one language server and client is a huge win. I am already doing useful things with WASM and Rust. It is a very very nice workflow.
I've rewritten hot loops (decompression & texture detiling) from JavaScript into specialized AssemblyScript and seen a decent (2-3x) perf boost. I'm not sure how much of that is the JS not "running hot", since back then I was probably only doing it for ~100 textures. I can't imagine beating JS with a giant STL-using app built with a clang-based toolchain, but the AOT nature helps a lot for hot loops; the kind of stuff you'd write custom asm for in a game engine.
Of course I was making sure to do as minimal memory transfer as possible, and I already use a lot of weird patterns to try to minimize GC in JS.
> Even if true, it doesn't help much if you have to marshall through JS for nearly every call.
If you're doing a lot of cheap calls, it's probably not the optimal way to get a performance boost. I think WebAssembly shines at taking an expensive function and doing it faster. For example, let's say I wanted to multiply a few large matrices of values. In Javascript this would be VERY slow. In WebAssembly it would be a bit faster, but WebAssembly plus WebGPU allows me to do it REALLY fast. Yes, I'd then have to marshal the end result back to Javascript, but that's still cheaper than trying to do all that math natively in Javascript.
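A minimal Rust sketch of the kind of kernel worth pushing into WASM: a naive row-major matrix multiply over flat slices, the shape of code that compiles to tight loops over linear memory with no boxing or GC pressure. (A real WebGPU path would look very different; this only illustrates the "one expensive call, one marshal back" pattern.)

```rust
// Multiply two n x n matrices stored row-major in flat slices.
// In a WASM build, `a` and `b` would live in linear memory and the
// result would be marshalled back to JS once, after all the math.
fn matmul(a: &[f64], b: &[f64], n: usize) -> Vec<f64> {
    assert_eq!(a.len(), n * n);
    assert_eq!(b.len(), n * n);
    let mut c = vec![0.0; n * n];
    for i in 0..n {
        for k in 0..n {
            let aik = a[i * n + k]; // hoisted so the inner loop streams over b and c
            for j in 0..n {
                c[i * n + j] += aik * b[k * n + j];
            }
        }
    }
    c
}
```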
Calling from WASM to JS is "reasonably fast" now, of course it doesn't make any sense to make such a call for setting a single pixel, but this doesn't make sense anywhere else either.
IMHO the current main problem of WASM running in browsers is not WASM, but that most "HTML5 APIs" are too high level, too specialized, and built under the assumption that Javascript is too slow to allow lower level, more general APIs (worst example of this outdated thinking is WebAudio).
The next problem is: browser updates regularly break things, often unintended, but sometimes intended (like Chrome's splendid decision to start WebAudio contexts muted, which broke pretty much every WebAudio demo on the web).
And finally: features behind "security gates". Some features just show a passive popup (though the browsers could never agree on what the best UX for this is, so each one behaves differently), while other features show a popup that requires a user interaction. Other features can only be used from within a "short-lived input event handler". Yet other features only work over an HTTPS connection. And yet other features require specific response headers to be set by the web server (like SharedArrayBuffer support in Firefox, and probably soon-ish Chrome).
Please browser vendors I beg of you, make up your damn mind already about how to handle such security-sensitive features in a consistent way.
Then there's the always-lingering deprecation threat, like WebAudio's ScriptProcessorNode, despite it working perfectly fine for situations where audio samples must be generated on the browser thread and audio synthesis can't be moved into the audio thread.
All of this combined makes the browser platform a quite frustrating platform to work on for anything that isn't very simple webpages. Not as bad as Android development (which is at the bottom of my list), but it's starting to get close.
PS: Despite my ranting, I do actually like writing WASM stuff and putting it up on the web, at least this can be done without going through hoops like on closed platforms (e.g. Android or iOS). But it really could be better. Most of the actual problems are not something the people working on WASM can do anything about though.
> but this doesn't make sense anywhere else either
But that's exactly what happens when you draw into an SDL raster window. Even when you have vector drawing operations at a higher level, eventually pixels are modified; you can of course batch drawing operations and transport a patch of the screen to the host, but this still goes through an impressive machinery with a lot of copying.
> All of this combined makes the browser platform a quite frustrating platform to work on for anything that isn't very simple webpages
SDL also does other weird things (or at least did), like rendering each 2D sprite in its own draw call.
The way to handle something like setting unique pixels is to keep a pixel buffer in memory on the WASM side, set the pixels in there, and then once per frame copy this pixel buffer into a WebGL texture and render this through a single WebGL draw call, or blit the pixel buffer to a 2D canvas if WebGL is not an option (also once per frame).
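A minimal Rust sketch of that pattern (the struct and method names are illustrative, not a real API): all per-pixel work stays inside WASM linear memory, and only the once-per-frame handoff of the whole buffer crosses the boundary.

```rust
// A software framebuffer kept entirely on the WASM side.
struct Framebuffer {
    width: usize,
    height: usize,
    pixels: Vec<u32>, // one 0xAARRGGBB value per pixel, row-major
}

impl Framebuffer {
    fn new(width: usize, height: usize) -> Self {
        Framebuffer { width, height, pixels: vec![0; width * height] }
    }

    // Cheap: a plain write into linear memory, no host call involved.
    fn set_pixel(&mut self, x: usize, y: usize, color: u32) {
        if x < self.width && y < self.height {
            self.pixels[y * self.width + x] = color;
        }
    }

    // Once per frame, the host side would copy this whole slice into a
    // WebGL texture (or a 2D canvas ImageData) in a single call.
    fn frame(&self) -> &[u32] {
        &self.pixels
    }
}
```

The design point is that `set_pixel` can be called millions of times per frame for free; the JS boundary is paid exactly once, when `frame()` is handed to the texture upload.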
Unfortunately, with WASM we don't have many other ways to migrate a desktop GUI application at the moment. If I have to redesign the whole existing GUI application to run on WASM, the technology is not really attractive.
Microsoft is betting on (the not yet mature) Blazor [1,2] framework for porting .NET applications to WASM. Here's a CRUD app WASM example [3] -- there's an initial load time, but once loaded the app seems pretty responsive. There's a bunch more demos here [4] to get a sense of the performance.
There's even a HN clone built on Blazor [5]. This very thread can be found there.
> low performance and the enormous memory consumption
That has not been what I've seen from it. People have made video editors and ported game into web pages. You make a lot of claims in this thread about speed and memory but don't back any of them up.
Thanks for the measurements. That's much faster indeed than on my laptop, and even quite decent (besides the peak memory use). Compared to the C++ native versions of similar apps on my Linux i386 and Windows x86 machines, this is about a factor of 10/20 (Linux/Windows) more RAM in steady state and a factor of 25/30 (Linux/Windows) at peak. Start time is a factor of 6/3 (Linux/Windows) slower than the C++ apps (not QML).
Btw., I noticed that on my machine memory use is higher in Chromium and performance even worse than in Firefox. So it seems to depend on browser type/version, and someone in this thread said that there is no support for Safari.
True, but we have a use-case where we have a library we could conceivably rewrite in e.g. Rust and run multithreaded on the server, or single-threaded inside a Web Worker compiled as Wasm.
Wasn't it the primary intention of WASM/Emscripten that you precisely don't have to rewrite existing libraries, but simply compile them for WASM and reuse them in the browser?
I think Wasm outside the browser could benefit package managers. As far as I know, Rust and some other compiled languages distribute their crates/packages in source code form, which then have to be compiled before use. If packages were distributed in Wasm (plus headers/type declarations), you could JIT or quickly AOT compile them. Plus, you’d get safety guarantees to make sure a package isn’t doing anything it hasn’t been permitted to do.
I don't think it would help at all for compiled languages, actually. The reason they distribute source instead of binaries is mostly feature flags and conditional compilation.
Seems a bit off track too. WASM is a very lossy compile target, and that's not acceptable for most compiled languages, which need explicitness and assumptions about the target machine for their optimizations (ones written by the programmer, not the compiler). And the security concerns are tricky: sandboxing is always expensive.
WASM’s memory access sandboxing is actually pretty cheap in most implementations; all the major browser engines now use reserved virtual memory with a segfault handler on most systems instead of range checks.
This just cements my belief that the abstraction presented by system allocators is insufficient for modern applications. This shouldn't be done (or necessary) in userspace.
Good point about conditional compilation. It's a trade-off I'll leave for language implementors to make. I tend to think that easily preventing a malicious package from sending your file system contents to some server is usually worth a 10-20% perf loss, and that might even be made up by Wasm SIMD and other proposals.
Capability based security models should solve that without perf loss, it's just unfortunate they aren't widespread or useful enough yet.
And not to discount your threat model, which is a bit hyperbolic, but in applications where you're already making the decision to use a difficult-to-sandbox compiled language, you're not going to have the same justification.
WASM makes a lot of sense for the web where we have mounds of untrusted code, much of which needs to be fast, and is trivial to inadvertently be executed by an unsuspecting user. That's not necessarily true for native applications. The real travesty is the divide between web and native has been so blurred that it's hard to see where it is anymore.
The packages would have to settle on a stable ABI. This is virtually impossible for Rust (which has no ABI stability between binaries compiled with different compiler versions).
Unfortunately this doesn’t get you much in the way of safety guarantees either, since you can’t readily run these packages in separate WASM instances; Rust, C++, C etc. all assume a shared linear memory when linking.
I’m bullish on WASM in general but I think WASM for simplifying package management or for writing executables that just need restricted filesystem access is a solution in search of a problem. Various containerization efforts have solved most of the important problems here with less runtime and toolchain overhead.
I would agree with you, except that LLVM IR isn’t platform independent. For all of Wasm’s warts it’s the best supported low level cross platform IR we have.
It makes sense if you think about it -- the front-end that used LLVM to create this IR consumed a whole lot of platform specific information, like the layout of structs, system call numbers, library names, and function names, etc and distilled it to something that knows the results of those things but no longer knows how they were derived, so it couldn't be applied as-is to some other platform.
This is why WebAssembly needs a runtime framework for making things outside WebAssembly accessible to WebAssembly code -- it has to build an abstraction around all kinds of different systems.
It's even worse, though, for LLVM IR, since it targets a specific type of CPU in various ways, where WebAssembly is a single ISA.
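A small Rust illustration of that point (the struct is hypothetical): the compiler front-end has already resolved target-dependent sizes, offsets, and alignment like these before any IR is emitted, which is why the resulting IR can't simply be retargeted.

```rust
use std::mem::{align_of, size_of};

// With a C-compatible layout, the field offsets and total size below are
// fixed by the target ABI: `value` is pointer-sized (4 bytes on wasm32,
// 8 on x86_64), and padding after `tag` follows from its alignment.
#[repr(C)]
struct Message {
    tag: u8,
    value: usize,
}

// Returns (size, alignment) as the front-end baked them in for this target.
fn message_layout() -> (usize, usize) {
    (size_of::<Message>(), align_of::<Message>())
}
```

Compile this for wasm32 and for x86_64 and `message_layout()` reports different numbers; any IR derived from it has those numbers hard-coded, which is exactly the portability gap WebAssembly's single ISA avoids.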
I am wondering if it would be a huge project to create an "HTTP server" kind of Wasm binary, with PHP, Rails, or whatever server-side scripting a given project is using. I guess it would still require some quite extensive changes to the served HTML and JS.
My use-case would be to create single instances of a server-side project, for demonstration purposes, or serverless operation. Things like providing a webapp for https://kanboard.org or NextCloud.
As a bonus, this could be combined with webtorrent, or other distributed mechanisms to let multiple "servers" interoperate seamlessly, regardless of their server/client usage. Of course, this can already be done today, but the Fediverse usually relies on DNS and certificates, while few server-side apps are completely distributed. This could provide an incentive, and ultimately make it easier to self-host (maybe even ultimately running the Wasm binaries instead of docker containers).
I have some experience with Blazor WebAssembly, and in my opinion the technology is promising but immature. Whenever a complex page is rendered (a table with hundreds of entries, ...), you can definitely notice how slow and memory-hungry it is.
Probably Blazor is immature, but not WebAssembly? Blazor, btw, runs CIL modules in interpreted mode. Full AOT compilation was planned for .NET 5 but canceled. But you could check Rust frameworks like Yew. Here's a real-world demo: https://github.com/jetli/rust-yew-realworld-example-app
I am interested in how WASM will affect in-browser ad blockers like uBlock Origin that work on a finer grain than merely blocking entire domains. Will they still be effective?
Also, how will WASM affect browser extensions that change the look and feel of websites (like Stylish/Stylus), or the means of interacting with them (like Vimperator)?
These questions all boil down to how much control is the end-user going to have with WASM compared to what they had with the pre-WASM web?
The comparison with all the "native-on-web" tech (Flash/Applets/NPAPI) is not 100% accurate. A major reason these were used was that they let you bypass the stupid JS sandbox, something that WASM in the browser isn't designed to do.
"It would be fair to say that these attempts failed to establish wide adoption"
Wait, what? Flash spawned an entire generation of game developers and animators; it was ubiquitous. It had many problems, and I am generally not sad to see it gone, but saying it "failed to establish wide adoption" only works if this person has a completely different definition of "wide adoption" than I do.
Flash was largely constrained to games, but there were definitely a lot of people using it as the backbone, or an important part, of their interactive web apps too. Adobe tried to turn it into a desktop runtime, but there weren't too many people biting. I guess "actually runs tools people use on a daily basis" is the "wide adoption" they intend?
Does "Youtube was built around a Flash video playback component" count as "wide adoption"? Does "just about every Facebook game was built in Flash, including the vastly profitable ones that invented and perfected the concept of free-to-play-with-IAP" count? A hell of a lot of people exchanged a hell of a lot of money around stuff made with Flash.
You can't compile C and other languages to it (though Adobe was working on it: https://web.archive.org/web/20080827114529/http://www.onflex...) , and maybe that's the threshold they want for "wide adoption" given that there's a whole segment later on in this blog about how you can compile C to WASM?
(Judging by the photo at the top of this blog, it's eminently possible that Max Desaitov is young enough that their only encounter with Flash was "those games I was utterly obsessed with in single-digit ages and never realized were in Flash"... god I'm old)
Perhaps WASM finally has all the right infrastructure to finally become the One True Compiler Target, and all apps will disappear into The Cloud forever. Perhaps. Or perhaps in about twenty-five years, Max Desaitov will be writing a comment expressing disbelief at some doe-eyed child adding WASM to their list of previous attempts to Change Software Distribution Forever and then going on to rave excitedly about why the next attempt to do this will surely be The One. Or perhaps Max will be too busy scrambling around in the burnt-out remnants of a capitalist empire that finally collapsed, right now that feels like a worrisomely possible future, who knows.