Missing the Point of WebAssembly (wingolog.org)
199 points by ryukafalz on Jan 9, 2024 | 142 comments



Andy is spot-on, as usual.

Wasm is a software-defined abstraction boundary, possibly the most bullet-proof software-defined abstraction boundary to date, given the formal specification and machine-checked proofs of correctness. It has memory sandboxing properties that are easily and efficiently implementable on hardware, and it is low-level enough to run low-level languages at speeds very close to native compilation.

Others in the thread make analogies to things like the JVM, CLR, Parrot, Ocaml bytecode, and other things. None of those things have the rigor that Wasm has or are low-level enough to be an abstraction over hardware. And all of those come with ambient capabilities or language concepts that implicitly couple applications with the runtime system and/or other applications. Wasm by its API-less core spec does not. This was 100% intentional.

Andy mentioned WALI briefly in the article. We designed WALI to be a similar kind of "just one step up from the bottom" abstraction; if you're running on Linux, then the Wasm engine exposes a memory-sandboxed Linux syscall set with the exact same semantics (effectively pass-through), that allows building abstractions and security one layer up (outside of the engine).


Given WASM's goodness, has anyone built a WASM CPU? That is, WASM as the assembly language (instead of x86, arm, risc-v)? Heck, does that even make sense?


I've certainly thought about it :)

If you can write an interpreter, then you can implement it in hardware. Whether it will be _efficient_ compared to using another ISA is a different matter.

Off the top of my head: the unbounded block stack would need to be managed and spilled to memory as it grows (not unlike SPARC's register windows), manageable but complex. The operand stack either needs to be bounded or likewise spilled to memory. For a superscalar, the WASM bytecodes aren't the nicest to decode, but neither are x86 nor RISC-V (the C extension implies variable-length instructions).

Unsurprisingly, it would make a lot more sense to design a µArch that is well matched (eg. has rotation instruction) to WASM and for which translation is fairly straightforward. I'm excited about WASM exactly because it provides an interesting path for getting software on new and exciting uArchs. It's _MUCH_ less work to make a WASM translator than to do the full software stack.


With a pre-pass on the bytecode (e.g. during verification), one can compute a side data-structure that makes it possible to interpret Wasm without tracking the control stack dynamically. It works surprisingly well.
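A toy sketch of that pre-pass idea, in JS pseudocode (the opcode names and the one-jump-target-per-br simplification are illustrative, not the real Wasm encoding; a real sidetable also records value-stack adjustments):

```javascript
// One pre-pass over a simplified, already-structured "bytecode" records,
// for each br, the offset it should jump to, so the interpreter never
// tracks a control stack at run time.
function buildSidetable(code) {
  const sidetable = new Map(); // pc of a br -> pc it jumps to
  const blocks = [];           // stack of currently-open blocks
  code.forEach((op, pc) => {
    if (op === "block") {
      blocks.push({ pendingBrs: [] });
    } else if (op.startsWith("br ")) {
      const depth = Number(op.slice(3));
      // register this br with the block `depth` levels up the stack
      blocks[blocks.length - 1 - depth].pendingBrs.push(pc);
    } else if (op === "end") {
      // a br targeting a plain block jumps just past its end
      const { pendingBrs } = blocks.pop();
      for (const brPc of pendingBrs) sidetable.set(brPc, pc + 1);
    }
  });
  return sidetable;
}

const table = buildSidetable(["block", "block", "br 1", "end", "end", "nop"]);
// the br at pc 2 targets the outer block, so it jumps to pc 5
```

At run time a br is then just an indexed lookup plus a jump, which is exactly the kind of side structure that could plausibly live in a hardware cache.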


The OP asked about a WASM CPU. I assume he meant that literally, thus without preprocessing. Once you allow preprocessing, there's no limit to what you can do, and in fact you can do something really efficient (margin is too narrow to contain the full proof).


I know, but the safety guarantees of Wasm are established by validating the code first. The sidetable that Wizard's interpreter uses is small enough (about 1/3 the size of the original code) that it could fit in a hardware cache alongside the original code, kind of like a u-op cache. Wizard computes this sidetable during code validation, so there is no separate step.

This is the paper I wrote about it: https://dl.acm.org/doi/abs/10.1145/3563311

I think the technique could be adapted to make a Wasm CPU, but the issues mentioned (e.g. variable-length instructions) would complicate the CPU frontend. I think it being stack-based isn't as much of an issue, as register renaming will virtualize the Wasm operand stack. But given how complex modern CPUs are, it'd be hard to build a competitive super-scalar chip (really for any ISA) without some very serious investment.


Thanks for the paper.

> it'd be hard to build a competitive super-scalar chip (really for any ISA) without some very serious investment

"Hundreds of millions of dollars" cf. Mark Horowitz's Micro 35 keynote (https://www.youtube.com/watch?v=q8WK63joI_Y&t=1140s) but if you look at the stack in his graph, it's obvious that the architecture portion of this is tiny.

Making a superscalar core simulation (= "model"), or even an FPGA softcore, is within the reach of any sufficiently motivated individual and indeed there are several already. Of course the performance will be dramatically lower than a state of the art silicon implementation.


It could be done but it likely wouldn't perform well or be very useful (because the software surrounding the application also needs to exist.) The IR notably has a lot of indirections in the way e.g. function calls are represented (indirect call through a table which then refers to the module it needs to call), it's stack based, stuff like that.

Realistically speaking you are just better off targeting a compiler to whatever instruction set you have, because writing a (basic) compiler isn't too difficult. You can even just use wasm2c and then run a C compiler on it, assuming one exists for your target...


Have you ever considered building something like "Core War" [0] as a way to help people learn WebAssembly while having fun?

I played with it in the late 80s and that's how I learned some Assembly before I was a teenager.

[0]: https://en.wikipedia.org/wiki/Core_War

Edit: I meant Red Code, but that led to Assembly.


Can somebody knowledgeable enough describe how neko or hashlink compare with wasm in this regard?


Hashlink seems to have the ambient concept of e.g. objects directly in its design and bytecode format, whereas e.g. wasm instead thinks of that in terms of "reference types" which represent existence of some opaque object. So there's that.

But most importantly, Neko and HashLink do not have the same kind of explicit, formally defined semantics WebAssembly has. This is a big topic, but TL;DR: there is a logical specification of every WebAssembly operation and behavior that, all together, gives rise to a validation algorithm. This algorithm is basically a type checker, and like any type checker, it rules out bad behaviors. This validation algorithm can be run over an arbitrary WebAssembly file and tells you "yes, this is valid and safe" or "no, it isn't safe." By safe, we mean that certain behaviors are never possible, like stack underflow, or unstructured control flow, or an operator being called with an invalid operand (e.g. adding a string and a number). The lack of these behaviors means that when executing this program, certain isolation properties can be shown to hold -- even if the program was produced by a hostile and untrustworthy source.

That is very, very important for many use cases. Consider a browser, where it downloads a wasm file from an arbitrary server. How does it know that file was produced by a trustworthy compiler that produces correct wasm files? It can't know that. So instead it must validate the file first (according to the precise specification given by the wasm standard) before being allowed to execute it.
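In JS hosts, that yes/no check is even exposed directly. A minimal sketch:

```javascript
// The engine's validator as a pure yes/no check, via the standard
// WebAssembly.validate API. The first buffer is the smallest valid
// module (just the "\0asm" magic and version number); the second is garbage.
const emptyModule = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);
const garbage = new Uint8Array([0xde, 0xad, 0xbe, 0xef]);

const okEmpty = WebAssembly.validate(emptyModule); // true: a valid (empty) module
const okGarbage = WebAssembly.validate(garbage);   // false: rejected before it can ever run
```

The same check runs implicitly when a browser compiles a downloaded module, so invalid code is refused before execution rather than trusted on faith.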

The validation algorithm for WebAssembly is stated in precise mathematical language, the language of type theory. And there have been several independent efforts to validate that this algorithm is correct, in machine checked theorem provers, to show that it always produces a valid yes/no answer, among all possible WebAssembly programs. Type checking and mathematical theorem proving are very closely related, in the same field, and type checking algorithms have been formalized in this manner for a very long time. So it's a well understood topic, and there is very good reason to believe the WebAssembly validation algorithm is correct and accurate (contra the oft repeated "b-b-b-but the specification might be wrong!")

If you are writing a program where the trustworthiness of the generated code isn't a problem, then you don't need that feature. For example, in the case of Haxe, you are probably producing those programs so they can be downloaded by users who already trust you, or you are compiling them directly to native executables and then giving them that. They already trust the developer to not hack their machine. In practice, some people still use WebAssembly for those use cases because it's got mindshare and a bunch of implementations, and implementing it (at a basic level) is not too difficult, and the extra security and isolation guarantees are nice to have.


Thanks!


It is strange to me to see so much talk about all of the places WebAssembly could live, like IoT devices and edge lambdas and cross-architecture binaries, places that don't really need these problems to be solved, and yet so little talk about whether or not it can finally unshackle the web from the mess that is JavaScript. As I understand it, it's not even really possible today to make WebAssembly do anything meaningful in the browser without trampolining back out to JavaScript anyway, which seems like a remarkable missed opportunity.


> As I understand it, it's not even really possible today to make WebAssembly do anything meaningful in the browser without trampolining back out to JavaScript anyway, which seems like a remarkable missed opportunity.

It was always the goal for WebAssembly code living in the browser to have some form of object model/DOM integration, and interaction with JavaScript objects. What form it would take exactly wasn't clear. But that was never a goal for the original 1.0 specification (MVP), and it's not even in spec for the current 2.0 draft -- which mostly adds a bunch more useful instructions and features, but nothing earth-shattering.

The reality is that whatever solution gets accepted has to be supported and continue working for what is effectively ~infinite time, in multiple major browser engines. For custom runtimes it's sort of "whatever", but webpages can live forever so it's taking a lot of time to get right. But there was clearly a lot of value to be had without that, hence 1.0 was the "MVP."


While I don't necessarily agree with the unnecessary, unsupported, casual, & cheap (provided with zero substantiating/contestable support) contempt culture here ("unshackle the web from the mess that is JavaScript", "places that don't really need these problems to be solved")...

The WebAssembly component model is being developed to allow referring to and passing complex objects between different modules and the outside world, by establishing WebAssembly Interface Types (WIT). It's basically an ABI layer for Wasm. This is a prerequisite for host-object bridging, bringing in things like DOM elements.

Long running effort, but it's hard work and there's just not that many hands available for this deep work. Some assorted links with more: https://github.com/WebAssembly/component-model https://www.fermyon.com/blog/webassembly-component-model https://thenewstack.io/can-webassembly-get-its-act-together-...

Preview 2 should be arriving sometime soon.

It's just hard work. It's happening. And I think the advantages Andy talks about here illuminate very real reasons why this tech can be useful broadly. The ability to have plugins to a system that can be safely sandboxed is a huge win. That plugins can be written in any language allows a much wider ecosystem of interests to participate, versus everyone interested in extending your work also having to be a Java or C++ or Rust developer.


The type system introduced in Wasm GC allows sharing data between the host and other modules and is usable right now. DOM access is possible and being demonstrated in several language implementations.


I saw you mention this in a previous post, but I can’t find an example. Do you have one handy? What are the limitations?

Edit: is this it? https://www.spritely.institute/news/building-interactive-web...

Edit again: apparently I had seen this before, since I upvoted its HN submission. Oops! https://news.ycombinator.com/item?id=38477602


Yup, that's one of the examples! And here's another: https://spritely.institute/news/scheme-in-scheme-on-wasm-in-...


Wasm modules have been able to traffic in (opaque) host references since the reference-types proposal landed several years back.
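A small sketch of what that looks like: a hand-assembled module whose exported id function takes and returns an externref, so an arbitrary JS object flows through Wasm as an opaque reference (requires a runtime with reference types enabled, e.g. any recent browser or Node):

```javascript
// Hand-assembled Wasm equivalent of:
//   (func (export "id") (param externref) (result externref) local.get 0)
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // magic + version
  0x01, 0x06, 0x01, 0x60, 0x01, 0x6f, 0x01, 0x6f, // type: externref -> externref
  0x03, 0x02, 0x01, 0x00,                         // one function of that type
  0x07, 0x06, 0x01, 0x02, 0x69, 0x64, 0x00, 0x00, // export it as "id"
  0x0a, 0x06, 0x01, 0x04, 0x00, 0x20, 0x00, 0x0b, // body: local.get 0; end
]);

const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
const hostObject = { opaque: "to wasm" };
// the very same reference comes back; Wasm never looked inside it
const roundTripped = instance.exports.id(hostObject);
```

The module can hold and pass such references around, but (pre-GC) it can't inspect or construct them, which is what the sibling comment means by GC ref types making this practical.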


But GC ref types have made it practical.


are you sure? the last time I checked JavaScript was NOT able to access Wasm/GC objects


Yes, we have made use of JS invoking WASM-GC managed Scheme objects in our Hoot demos at Spritely.


In Chrome 120 I still get:

TypeError: WebAssembly objects are opaque

When trying to read/write WASM-GC objects in JS.


> While I don't necessarily agree with the unnecessary, unsupported, casual, & cheap (provided with zero substantiating/contestable support)

What I personally find annoying is the debate-bro attitude that expects every statement anyone makes that you disagree with to be made with an entire supporting argument with multiple examples or citations, especially when the general reasons someone might believe that statement are pretty well known. And expressing their opinion on the quality of JS wasn't unnecessary, it was literally necessary to explain why they are excited for the potential of WebAssembly.

> contempt culture

Oh that essay. Thinking a technology is bad and stating so, even making fun of it a bit, is not inherently mocking or degrading the skills or legitimacy of the people that use it. The author of that essay was right we shouldn't mock or degrade the skills (or other aspects) of anyone, but they can't seem to realize that making fun of a language isn't tantamount to mocking the people that use it, and if someone feels that's the case, maybe they should learn to separate their ego / sense of self worth from the tool they use, so we avoid language or editor flame wars by way of greater maturity, instead of by way of walking on eggshells. Especially since languages can be better or worse. Seeking a kinder, more inclusively spirited community is commendable, but expecting every opinion about language quality to be either kept in or perfectly sourced is a blind and misplaced excess of enthusiasm in that direction. And I say this as someone who supports things like DEI and CoCs.

For myself, I think JavaScript has a lot of warts, but not really any more than a lot of popular languages, and the core of it is very well designed indeed, but I just personally don't prefer to use it. Just in case you were going to assume I'm responding this way because I want to be able to make fun of JS. I don't. I just find this style of thinking annoying.


Wasm isn't a replacement for JavaScript. I think it's clever to leverage JavaScript as an interface to an ever-changing and extremely broad suite of native browser APIs, and keep Wasm small, fast, and secure.


> Wasm isn't a replacement for JavaScript

I think a lot of people want it to be a replacement for JavaScript. And I think that makes sense, if for no other reason than learning JavaScript is a hassle. Lots of folks think of themselves as a C# developer, or a Python developer or whatever. A lot of developers would rather make websites in their language of choice than learn JavaScript. (And who can blame them?). Webassembly (with GC bindings) should be able to make that possible.


Learning new languages may be component of it, but I think more generally, different languages are suitable for different problems and different developers. Being free to use the most suitable or preferred tool for a task is undoubtedly desirable, rather than having JavaScript as the only option.

And, while more of a personal opinion, JavaScript is not a well designed language - permanently shackling the web to it prevents progress forward.


Looks like most people have a misunderstanding: they think that WebAssembly is the only way to run their favourite language in the browser. However, there's another way that has existed for decades: simply use a compiler for their favourite language that produces JavaScript! As for Java, GWT has existed for almost 20 years. Another example I know of is Brython, which runs Python on top of JavaScript. However, it looks like Java developers don't like to write websites, even though GWT is there.

And in my experience, WebAssembly is not a game changer: it does not provide extra performance here, nor does it produce smaller binaries. I'm a developer of another Java-to-JavaScript compiler, TeaVM. I recently compared the performance of the JavaScript and WebAssembly targets, and in one particular case WebAssembly is slower: https://teavm.hashnode.dev/comparing-teavm-with-kotlinjs-pef.... There's another example where WebAssembly is only slightly better: https://teavm.org/gallery/jbox2d/index.html. In both cases the WebAssembly binary is huge compared to JavaScript. On one hand, this can be because I don't target Wasm GC, so binaries get bloated by the need to maintain a shadow stack. On the other hand, there's Google Closure Compiler, which produces Wasm GC binaries, and from what I've heard, WebAssembly binaries still don't win on size.


Indeed. node.js was popular in part because you can use the same language on the client and the server. All-WebAssembly no-JavaScript sites would allow the same thing with other languages.


> A lot of developers would rather make websites in their language of choice than learn JavaScript. (And who can blame them?).

Personally I’m quite happy to blame them. If I want to do ML I use Python, if I do statistics I use R, for messing with Windows I use C#, for using databases I use SQL and for the web I use JavaScript. I think that refusing to learn other languages is a foolish consistency.


If those all compiled to WASM, then you can mix and match freely.


Already possible in .NET for years, it is called Common Language Runtime for a reason.

Also TIMI on IBM i, language environments on z/OS, among other examples.


True, but imagine being able to interop with all the languages ;)


I don't need to imagine,

> More than 20 programming tools vendors offer some 26 programming languages — including C++, Perl, Python, Java, COBOL, RPG and Haskell — on .NET.

Taken from https://news.microsoft.com/2001/10/22/massive-industry-and-d...

In 2001.


https://en.wikipedia.org/wiki/List_of_CLI_languages

At first it seems like an impressive list of languages! But upon closer inspection, over half of them are either abandoned or are commercial products that have been coasting mostly unmaintained for a decade... By now even the JVM ecosystem is a lot healthier.

With WebAssembly, all languages[1] supported by LLVM can be compiled with no custom compiler nonsense. AND we get all the languages[2] that depend on those languages for free as well! The entire burden is on LLVM (and soon GCC) with everything benefiting downstream, surely you can see how it's different from CLI/JVM?

This time the compile-once-run-everywhere promise might actually be true.

1. C, C++, Zig, Rust, etc 2. Python, Perl, PHP, Lua, we can even do some inception with webassembly runtimes.


> With WebAssembly, all languages[1] supported by LLVM can be compiled with no custom compiler nonsense.

You can do that without WASM. This really reinforces the authors point.


Just like it happened during the last 25 years, the amount of languages where people will be willing to keep the effort, daily, will fade way.

Or the money from all those VC firms betting the farm on Webassembly startups will dry out.

Until someone comes up with yet another bytecode format, claiming yet again to be UNCOL from 1958.

https://en.wikipedia.org/wiki/UNCOL


LLVM is now the modern version of UNCOL, it seems.


What you're describing already exists today, that's exactly what Blazor is:

https://dotnet.microsoft.com/en-us/apps/aspnet/web-apps/blaz...

Blazor lets you build web apps using C#. It achieves that by running C# inside WebAssembly.


Blazor is silly. It’s more complicated to use than simply using React/Angular/whatever you want, and it’s less efficient, takes up huge amounts of space and is much harder to debug. Coupled with how hard it is to actually build reusable components in any sane way…

It may get there eventually, but as with most C# things these days, it’s mostly meant to sell other Microsoft products to the horde of developers who view themselves as single language developers. The real risk isn’t so much in the technology, however, it’s in whether or not Microsoft abandons it before it’s good like they have with their previous GUI frameworks going all the way back to ASP classic. I know that angular made a big leap from JS to TS, but aside from that, your major JS frameworks have been fairly stable in the same time frame that Microsoft has gone through web forms, mvc, razor, whatever the hell the web-api bundled with X period was and now Blazor, and what is the most telling of all is how virtually every Microsoft Office365 product is written in JS. Would you really bet on a semi-shitty technology that doesn’t even see adoption within the company that makes it? Well, I guess you do considering you’re using C#.


Yeah I know; but blazor is huge because it needs to ship its own garbage collector and object runtime. Google says 1.3mb for a hello world blazor app.

With wasm-GC, blazor should get way faster and smaller. And the blazor-(JS & DOM) bridge can be massively simplified too. I don't see any good reason for blazor apps to be any larger than the equivalent react app. And that will make it a much more viable platform.


There's also something that you might have actually already heard of called Unity3D. And it's actually widely used and supported and taught by more than a few people and companies and educational institutions.


Just use Flutter, that's what they're already doing, similar to Blazor. In fact, Chrome shipped WASM-GC and Flutter has an experimental build that uses it.


As much as many people want alternatives to JavaScript, there are also people who consider themselves JavaScript experts (or the various related languages like TypeScript). They’ve invested a ton of time and effort into getting to where they are.

There’s a balance between disruptive change and keeping these folks on board. Get it wrong and you’ll end up with it strangled in its cradle.

My own take is that low-code platforms will continue aggressively moving into this space, especially in the enterprise space. They come with their own problems and are often aggressively overhyped, but they will put pressure on “traditional” developers to move faster and look for better solutions.


"As much as many people want alternatives to JavaScript, there are also people who consider themselves JavaScript experts (or the various related languages like TypeScript). They’ve invested a ton of time and effort into getting to where they are."

I do not see a problem here. The idea is not to replace js (not realistic anyway), but to have a full working alternative to not have to use js anymore, so in the end you can really choose how to build for the web.

I also invested heavily into js ... but I am really looking forward to the day, I can use something sane again.


[flagged]


Personally I agree with you. I've touched code in about 5 different languages in the last month alone. But the majority of programmers out there aren't like you and me. Most programmers see themselves as a "java developer", or a "python developer" or an "android developer". They aren't full terrain engineers who go wherever the song takes us. Lots of programmers want to write code in their preferred language, close tickets, get paid and go home at the end of the day happy. It will always be an uphill battle to convince a lot of these people to spend their weekend learning javascript.

Its much easier to make wasm good. Let Go programmers use go to make websites. Why not? If its any consolation, they'll still need to learn HTML and CSS in the process. Javascript isn't going anywhere if they want to make the switch later.


You're right. I don't consider those people serious programmers, just uneducable monolinguistic coders, who are going to find their dead-end jobs easily replaced by AI.

ChatGPT is not just useful for banging out boilerplate code, it's much more useful for teaching programmers who are willing to learn new apis and how to code in new languages and better in the languages they already know.

And that opportunity is missed on monolinguistic coders who refuse to learn or even read the fucking manual, and just see AI as a replacement for posting "just give me the codes" and copying and pasting from Stack Overflow.

Real programmers aren't going to be replaced by AI. Programmers who refuse to use AI and refuse to learn are going to be replaced by AI and programmers who are capable and enthusiastic about learning.


> As I understand it, it's not even really possible today to make WebAssembly do anything meaningful in the browser without trampolining back out to JavaScript anyway, which seems like a remarkable missed opportunity.

That's the underlying messy API it's built on. There are specs to make the API more standardized like https://github.com/WebAssembly/WASI

But overall, yeah, it feels like a shiny new toy everyone is excited about and wants to use. Some toys can be fun to play with, but it doesn't mean we have to rewrite production systems in it. Sometimes, or most of the time, toys don't become useful tools.


You seem to think WebAssembly is a "toy" and nobody's really found a good use for it, but actually it's being used in some really successful and high-profile web apps, but it's so seamless that you probably don't even realize it.

Here are some of my favorite examples:

Figma

Google Earth

Photoshop Web

Any Unity games you've played on the web: WebAssembly.

Everything built using the Blazor framework in C#: runs on WebAssembly on the web.

I really don't think all of those sites are using WebAssembly because it's a cool new toy. They're using it because it's actually pretty useful technology that enables you to build really powerful software that runs on the web browser without being limited to JavaScript.


Isn't that specifically why they are called toys and not tools?


That's the idea! If we can just convince people who make product and technology decisions to see that and not push the next shiny toy as the new tool or library everyone has to use, we'd all be in good shape :-)


Because you think Unity3D should have stuck with its tried and true Flash back-end instead of moving on to shiny toy WebAssembly?

https://forum.unity.com/threads/return-flash-deployment-back...

>Flash was not discontinued "because it couldn't provide good graphics". It was discontinued, because we no longer believe that Flash as a platform has any future. That has not changed.


The main problem is that either you compile a WASM binary or you serve a big language runtime (itself compiled to WASM) all the time: this is what you'd have to do if using C# or Java for example (maybe you can use TeaVM, but maybe you can't).

For some reason browsers didn't want to "balkanize the web" for the really good reason of supporting multiple languages, they wanted to do it for very petty reasons like WebMIDI or wildly differing implementations of LocalStorage (e.g. how much you can put in there), or non-overlapping support for video codecs, etc etc etc.


Yep, we are still generating bindings, but I don’t think that’s going to change anytime soon. WebAssembly to JavaScript is what the C ABI is to Python, with an additional bounce back to JavaScript land.


I think once WasmGC is finished up, that will make shipping a VM inside a VM more viable. Opening up direct access to the DOM would be cool, but if it were up to me, I'd just skip it and render everything inside a <canvas/> and avoid all of the "build an application on top of a document format" crap that makes web apps so weird to make.


> if it were up to me, I'd just skip it and render everything inside a <canvas/>

That's very tempting, but you lose a lot if you do that, especially around text:

* Accessibility for screen readers, etc.

* The easy ability to select text, copy/paste between other apps, etc.

* Line splitting

* Support for multiple languages, RTL, and all their other rendering quirks.

Canvas has some text APIs, but they are fairly rudimentary. It's a hard problem.


> Canvas has some text APIs, but they are fairly rudimentary

That's an understatement for sure. `measureText` is not even close to feature parity with where Windows APIs were 25 years ago.

Or something like text underlines has to be done entirely manually.


I completely agree, and these are required features for a full app that is planning to reach the world.

It requires a lot of extra work, but real world apps like Figma [0] are rendering directly to the canvas, and frameworks like Flutter offer support for canvas rendering (via Skia's CanvasKit) [1] [2].

I guess my point is that if someone is using WASM to ditch JavaScript, why stop there? Ditch the DOM and CSS too! Render all of the graphics with OpenGl or WebGPU!

There is a class of applications that I want to explore that would be better suited to by doing just that. I don't believe every app should do that, just that for what I want to build, that's where I'm looking at going (hence my "if it were up to me" comment above).

[0] https://www.figma.com/blog/building-a-professional-design-to...

[1] https://docs.flutter.dev/platform-integration/web/renderers

[2] https://skia.org/docs/user/modules/canvaskit/


> render everything inside a <canvas/> and avoid all of the "build an application on top of a document format" crap that makes web apps so weird to make.

We tried this with flash already. It has the same issues. Browsers provide _a lot more_ than just rendering your html+css. There are cases where it makes sense, but generally, it's a terrible idea. It will be like trying to make a visual novel using youtube in-video links.


Not a very accessible solution.


It can be! I think that's the number 1 response when anyone wants to actually make a better platform. Projects like AccessKit are a really great step in the right direction https://github.com/AccessKit/accesskit


Cool, thanks for the share. I will check it out.


It can perform a lot of computational work, but yeah it can't yet interface directly with the DOM. I'm not sure about whether it can access the network or other things, but I don't believe that it can.


> but yeah it can't yet interface directly with the DOM

The Spritely folks are doing DOM manipulation from Scheme in Wasm, is there something beyond this that you're referring to? https://spritely.institute/news/building-interactive-web-pag...


When people say "can't yet interface directly with the DOM," the "directly" is the key bit. Projects in webassembly have been able to interact with the DOM forever, but they need to do it via JavaScript shims. I am not intimately familiar with Spritely, but from reading the post briefly, it appears this is exactly what they're doing:

  Scheme.load_main("hello.wasm", {}, {
    document: {
      body() { return document.body; },
      createTextNode: Document.prototype.createTextNode.bind(document),
      // ...
    },
  });
This requires boilerplate and adds overhead that would not exist if it were able to be done "directly."


I suspect as time goes on we'll see more and more of this become direct through optimization. My understanding is that this already happens for certain well-known methods, like if you import Math.sin or something V8 is going to optimize it to a direct call when it compiles the Wasm module that imports it. And maybe a non-JS interface will emerge, but there needs to be some kind of layer that preserves capability security. You wouldn't want any and every Wasm module to have access to the entire browser API. You will still want the ability to omit capabilities, attenuate them, etc. and for that a full programming language seems necessary and well... JS is already there.


Are there plans to add DOM support? That seems pretty critical for what I had thought wasm was supposed to be myself.


Wasm allows for importing host functions. Those host functions could provide DOM access. With Wasm GC it is possible to pass external references (DOM nodes, promises, etc.) to and from Wasm. What this looks like in practice is that a thin JS bootstrap script instantiates the Wasm module, passing along a set of JS functions that map to the declared Wasm imports and then the program is effectively driven from Wasm at that point.
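As a minimal sketch of that pattern (the module bytes below are a hand-assembled toy, not taken from any real project): the host builds an import object, instantiates the module with it, and the module can only call what it was explicitly given.

```javascript
// Toy Wasm module (hand-assembled bytes, illustrative only): it imports one
// host function "env.log" and exports "run", which calls log(42).
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // magic + version
  0x01, 0x08, 0x02, 0x60, 0x01, 0x7f, 0x00, 0x60, 0x00, 0x00, // types: (i32)->() and ()->()
  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76,                   // import module "env"
  0x03, 0x6c, 0x6f, 0x67, 0x00, 0x00,                         //   field "log", func of type 0
  0x03, 0x02, 0x01, 0x01,                                     // one local function of type ()->()
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01,       // export it as "run"
  0x0a, 0x08, 0x01, 0x06, 0x00, 0x41, 0x2a, 0x10, 0x00, 0x0b, // body: i32.const 42; call $log; end
]);

// The host decides exactly which capabilities the module receives.
// Here the "log" capability just records its argument.
const received = [];
const imports = { env: { log: (x) => received.push(x) } };

const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes), imports);
instance.exports.run();
console.log(received); // [42]
```

In a real app, the thin JS bootstrap would pass DOM-touching functions instead of `log`, and the program would be driven from Wasm after instantiation, exactly as described above.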


I'm not sure. All I know is that there are some hurdles to get past. One of them is that several DOM APIs were created specifically for JavaScript, which is...really disheartening.


WASM in the browser can do exactly one thing aside from what is in WASM spec - interface with JavaScript.

It's not even that slow to interface with JS from WASM if you do it right (i.e. you aren't passing strings back and forth).


A lot of use cases are applications that don't technically require a DOM but maybe just the ability to copy and paste text.

The biggest thing is some kind of graphics capability.

What I think happened, if you look at the web assembly design discussions, is that they made the initially correct decision to limit the scope to be more manageable, but then over the years that decision became something of a religion and even a strange badge of honor.

"Web assembly is not meant to replace JavaScript". It certainly is challenging to try to standardized on anything like device input or graphics APIs, but after say three years, competent and sane designers obviously should have been working on that.

Maybe I am a conspiracy theorist (I am) but I sometimes suspect that Google has deliberately limited the scope of web assembly in order to maintain the control that the dominance of Chrome provides them. Because if things like keyboard/mouse and graphics had some type of web assembly standard, then it would be easy for different runtimes to support them. Then before you know it, you don't need the browser for most things, you can just use your favorite web assembly runtime.

I will go so far as to suggest that this is an antitrust issue, and that a technically competent government (of which maybe none exist so far) would have somehow forced the development of an evolvable shared standard like web assembly that is not missing important pieces like graphics.

I see web assembly as the obvious successor to Java. It's been held back by some strange psychology and maybe even intentional sabotage.


While there is valid criticism of JS, I am not sure if I really want binary blobs executed in my browser.

Sure, for applications that really need to strip the overhead of a scripting language, it is very welcome. But for the average website? Not sure about it...

And security is another matter, there is a reason the WebAssembly sandbox is so restrictive.

Webassembly is awesome, but the solution needs to fit the problem.


How can a language be a mess? I hear lots of personal frustrations, but saying a language is a mess is too short-sighted, I believe.


This is how a language can be a mess:

    [] + [] = ''
    [] + {} = '[object Object]'
    {} + [] = 0
    {} + {} = NaN
    Array(16).join('wat' - 1) + ' Batman' = 'NaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaN Batman'
https://www.destroyallsoftware.com/talks/wat


You can find examples like these for basically every single language.


This is misleading: the original talk evaluated these in statement context, where a leading `{}` parses as an independent block statement rather than an object literal. `({} + [])` etc. are much more consistent.
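A quick check in expression context (something you can paste into any JS console) shows the parenthesized forms behaving consistently with the first two lines:

```javascript
// In statement context a leading {} parses as an empty block, not an object.
// Parentheses force expression context, so + is plain string concatenation:
console.log([] + []);    // ''  (both arrays coerce to empty strings)
console.log([] + {});    // '[object Object]'
console.log(({}) + []);  // '[object Object]'  -- not 0
console.log(({}) + {});  // '[object Object][object Object]'  -- not NaN
```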


Correct me if I'm wrong, but aren't most languages designed and controlled by some relatively centralized organization, whether a company or a nonprofit?

Javascript, on the other hand, is a loose standard that different companies kinda sorta adhere to. It was a rushed one-man project put out to compete with Java back in the 90s, then every browser tried to do their own somewhat compatible implementation. Javascript became the lowest common denominator, winning out not because it was better than say Java or ActiveX or Flash or whatever. It was just commonly available.

Fast forward a couple decades, the situation is both better and worse. We at least have ECMAScript now, and the core language has certainly gotten better and more dev-friendly.

Unfortunately, what ECMAScript covers is so limited and the standard lib is still so poor that lodash and similar are common just for basic data manipulation, not to mention the hundred DOM abstractions like React or Vue or Svelte, and occasionally Angular. And client-side state is its own nightmare, and then persisting things into cookies/localstorage/indexedDB is a whole other can of worms, along with different ways to do pub sub. Being tethered to file-based HTTP and HTML means that even basic routing is a hack on top of a hack.

There is no standard package manager (npm is a third-party project), and the ecosystem can't agree on which module format to use. Libs from two or three years ago will often be incompatible with new code.

And that's just assuming you're using a browser. Outside of that there's not even a standard runtime, between Node, Deno, Bun, etc.

Typescript is yet another third party add on, not something baked into the language.

Eslint, prettier, etc. are all third party things.

In the old days a lot of this used to just be baked into the language itself, or at least into an official IDE. If you ever had a chance to use Visual Studio (not VSCode) back then, everything was so well designed and integrated, from code organization to the standard libs to the UI editor.

Nowadays there's 300 ways from 200 companies to do anything in Javascript, none of it particularly good or secure or performant or easy to read.

In a way it's natural selection, I guess, with different implementations all competing for supremacy, but it definitely is messy. Every time I use JS (which is every workday) I can't help but wish some other, more coherent, language won out.


These are...weird criticisms. You basically seem to wish you were coding within a monopoly, for a single platform controlled by a single corporation. Perhaps Swift is where you would like to be?

JavaScript does incredibly well for the constraints it operates within. The fact that Node and Deno etc even exist is pretty remarkable given the primary purpose of JS.

The fact we even have ES modules is in large part because the third party NPM worked so well. Things start out messy then they gain traction and get standardised. JS isn't a loose standard at all, it's incredibly well defined, and few of its original Netscape origins really remain.

You have a bit of a point about TypeScript. It'd definitely be nicer if this was part of the language and didn't require so much third party tooling. It seems plausible that this too might happen in a few years. Perhaps browsers will gain first class support for it.


Take it from someone who has made meaningful code in JS since 2005:

- JS as a three week project is amazing

- JS as a lingua franca for the web is a disaster in the same league as null pointers. It is as if the backend world had decided to standardize on PHP early on. (If you disagree, try to name one thing other than syntax that isn't as bad in JS as in PHP. The syntax is actually a blessing in disguise: if JS had looked like PHP, it wouldn't have become a standard.)

- The fact that people have made a very productive ecosystem on top of the collection of footguns that is Javascript is of course equally impressive, if not more so

Please note that this post praises both the inventor of JS as well as the JS ecosystem.

But that does not mean that Javascript is a good language.


> These are...weird criticisms. You basically seem to wish you were coding within a monopoly, for a single platform controlled by a single corporation.

You know, I never really thought about it that way, but I think you're right!

<lazy dev hat>That WOULD be nice! A standard way to do anything, no need for unnecessary reinventions of the wheel... progress would slow down a bit, sure, but that's OK; nobody can keep pace with this lightning-fast explosion anyway, and slow and steady is more sustainable than volatile bubbles everywhere. I think I would enjoy working in such a system...</lazy hat off>

<open source hat on>Unfortunately, that'd also mean a closed ecosystem and we probably wouldn't have had the amateur dev boom that HTML/JS/CSS and its visible source gave us. I hate the messiness of that ecosystem, but I can't deny that the openness gave a boatload more people opportunities than the closed platforms. Maybe that's just the price we have to pay =/ </hats away>

> Perhaps Swift is where you would like to be?

I never tried. Is it nice? As a user I find iOS quite obnoxious, but maybe developing for it is different...?


> You basically seem to wish you were coding within a monopoly, for a single platform controlled by a single corporation.

Benevolent dictatorship has its advantages. The JavaScript ecosystem is messy -- exactly why is up for debate, and whether that's good, bad, or unimportant is also up for debate.


ECMA standardized ECMAScript in 1997, only two years after Netscape Navigator first shipped Javascript. ECMAScript and Javascript are the same thing. I've never encountered a language-level incompatibility between the different ECMAScript implementations. The same can't be said, e.g., about the extremely committee-driven C++.

The browser APIs are specced by W3C and WHATWG. For newer APIs there are usually differences in browser implementations, but these tend to get ironed out quite fast.


While this might be technically true, my own (anecdotal) experience has been quite different.

I've been writing JS since its invention, initially just using it (as a kid) for silly Geocities gimmicks and infinite window.alert() loops. I still remember how back in the day no two browsers had a totally compatible JS implementation (especially when Microsoft was busy trying to embrace-extend-extinguish), some things would be subtly different (like datetime handling, or window vs document, whatever)... it wasn't just me either, there used to be a lot of articles that discussed things like that. Here's one: https://burningbird.net/netscape-navigators-javascript-1-1-v...

In my memory, nobody really discussed ECMAscript back in those days. It was more about Netscape/Mozilla vs IE vs Phoenix's JS engines (can't remember if Phoenix used the same JS engine as Netscape), and then different versions of jQuery or early Angular, browser prefixes, early polyfills that you copied and pasted manually, etc. Even up till a few years ago, it was still quite common to have to check CanIUse to make sure a method was supported everywhere.

It was ES6 that really brought ECMAScript into popularity... this Google Trends graph shows the correlation somewhat, though both are totally dwarfed by "Javascript": https://trends.google.com/trends/explore?date=2004-01-01%202... (and it doesn't go back to the 90s, only 2004).

And the ECMAScript core definition is very narrow, dealing only with core language features and not the greater JS ecosystem, which is how we end up with a bazillion runtimes, TypeScript, and third-party libs for everything from components to deep copies to state and routing and testing. ECMAScript by itself is super barebones: maybe enough to write a simple serverless function with, but hard to write an entire app in without third-party frameworks and libs.

I can't speak to C++ (never worked enough in it), but even PHP by comparison had a wonderful standard library, and de-facto standards in the form of Symfony and Laravel that provided most of the missing use cases. In the JS world it wasn't until AngularJS that we really had something like that (but nobody wanted to use it), and now finally Next.js (which is thankfully super popular). Even then, you had to supplement those on the HTML side with something like Bootstrap, and on the CSS side with something like SASS or LESS. (Thankfully, CSS itself is also getting better these days.)

---------

I don't know that this anecdote really disproves anything you're saying (and it's not meant to), just illustrating that even when standards exist on paper, the ecosystem of a language is defined more by its real-world usage. And for both JS and its siblings (HTML, CSS, SVG, Canvas)... the journey to get to where we are today was very, very messy and full of a million incompatibilities.


> aren't most languages designed and controlled by some relatively centralized organization, whether a company or a nonprofit?

This is a relatively recent development in programming languages. For example, your next sentence:

> Javascript, on the other hand, is a loose standard that different companies kinda sorta adhere to.

This could also describe the history of C. For example, let's look at the rationale document for the first standardization of C: https://www.open-std.org/jtc1/sc22/wg14/www/docs/n802.pdf

I apologize in advance for quoting at length, but I could not decide what should be cut.

---------------------

The X3J11 Committee represents a cross-section of the C community: it consists of about fifty active members representing hardware manufacturers, vendors of compilers and other software development tools, software designers, consultants, academics, authors, applications programmers, and others.

<snip>

The Committee’s overall goal was to develop a clear, consistent, and unambiguous Standard for the C programming language which codifies the common, existing definition of C and which promotes the portability of user programs across C language environments.

The X3J11 charter clearly mandates the Committee to codify common existing practice. The Committee has held fast to precedent wherever this was clear and unambiguous. The vast majority of the language defined by the Standard is precisely the same as is defined in Appendix A of The C Programming Language by Brian Kernighan and Dennis Ritchie, and as is implemented in almost all C translators. (This document is hereinafter referred to as K&R.)

K&R is not the only source of “existing practice.” Much work has been done over the years to improve the C language by addressing its weaknesses. The Committee has formalized enhancements of proven value which have become part of the various dialects of C. Existing practice, however, has not always been consistent. Various dialects of C have approached problems in different and sometimes diametrically opposed ways. This divergence has happened for several reasons. First, K&R, which has served as the language specification for almost all C translators, is imprecise in some areas (thereby allowing divergent interpretations), and it does not address some issues (such as a complete specification of a library) important for code portability. Second, as the language has matured over the years, various extensions have been added in different dialects to address limitations and weaknesses of the language; these extensions have not been consistent across dialects.

One of the Committee's goals was to consider such areas of divergence and to establish a set of clear, unambiguous rules consistent with the rest of the language. This effort included the consideration of extensions made in various C dialects, the specification of a complete set of required library functions, and the development of a complete, correct syntax for C.

The work of the Committee was in large part a balancing act. The Committee has tried to improve portability while retaining the definition of certain features of C as machine-dependent. It attempted to incorporate valuable new ideas without disrupting the basic structure and fabric of the language. It tried to develop a clear and consistent language without invalidating existing programs. All of the goals were important and each decision was weighed in the light of sometimes contradictory requirements in an attempt to reach a workable compromise.

---------------------

In general, standards that follow this path, codifying existing practice, that serve to coordinate interest between various actors, tend to work better than specifications that are written up-front and then followed by implementation. You can see this operating in a smaller way within current standards bodies and organizations, as you yourself alluded to with "relatively centralized."

> Javascript became the lowest common denominator, winning out not because it was better than say Java or ActiveX or Flash or whatever. It was just commonly available.

This statement is shockingly similar to what people say about C, as opposed to various other languages that were in the same space around the same time.


That's exactly what makes it powerful and ensures its bright future. It is not controlled by one entity, non-working approaches die out by themselves, and there's constant innovation. Calling that a mess is just a personal problem.


Huh? What?

Nothing's more standard than Javascript these days. Not even close. It runs the same way on billions of devices across virtually all existing architectures.


>unshackle the web from the mess that is JavaScript

well_thats_your_opinion.jpeg

Javascript is great and the web became what it is today because of it, not in spite of it.


I find this explanation to still be non-compelling. I'm willing to believe I will be wrong and that this will still be the future. I just can't bring myself to think that people today will do any better a job of defining what computing pieces are and how they fit together than a few of the other attempts in the past. More, I'm not at all clear that this is something we even need to solve.

Standardized interconnect? Great idea. There will almost certainly be many such standardized connections. Standardized compute? I'm less sold. A large part of the decisions that go into how some compute is done is about making tradeoffs appropriate for a use case. Not finding the universally correct tradeoff.

Worse, if this is about sneaking compute into otherwise static data, as the example of fonts indicates, then a big no thank you from me. I am not too worried about "accidentally Turing complete," all told. But I also don't see the point in giving up and aiming for it everywhere.

The idea that you could maybe have portable extensions for stuff like GIMP is somewhat compelling. But... we have come a long way with safely allowing plugins without needing a whole new definition. This reminds me of the microkernel debate and how it ignores loadable modules for the kernel.


Wrong. WASM is just a bytecode format. There's nothing new here compared to JVM, Parrot, CLR, whatever other VM bytecode formats have been dreamed up. A lot of people have imagined that WASM (or eBPF, but that's another can of worms) somehow has some magic sauce that enables it to guarantee correctness or process isolation. The truth is your sandbox is only as good as the weakest link in (1) Your sandbox runtime (2) Your RPC/API call protocol that you're using to control the outside world from within your sandbox, which everyone has to build from scratch.


> There's nothing new here compared to JVM, Parrot, CLR, whatever other VM bytecode formats have been dreamed up.

WASM is far more low-level than those. It does not depend on the semantics of any existing programming language, and while it does sandbox the code it does not focus on providing mechanisms for establishing memory safety other than at the module level. The focus is almost solely on minimizing overhead compared to native code, even the GC implementation is kept as simple and efficient as possible.


> does not depend on the semantics of any existing programming language

So? The JVM does not depend on the semantics of any existing programming language. I think more research is needed on your part.


Sure it is bytecode but runs in the browser. That's the magic bit. Clients don't have to install .net, java, etc. And because it can be a target for some compiled languages, we can play cool games built in C and such in the browser.

It also has this property that it makes for very cool demos. It may be useful or not in larger production environments, but it's hard to deny that it looks cool when being shown off. The danger is when someone in charge becomes enamored with it and now the whole 20-person team is spending the next two years rewriting a production system in it.


> Sure it is bytecode but runs in the browser.

About half the posts here, if not more, are about WASI (WebAssembly outside the browser). There really isn't anything new to justify the level of hype; bytecode and code running in the browser have both been done before and eventually removed, decades ago (Java applets and ActiveX, respectively).


Also, Java applets and ActiveX objects were notoriously lacking in accessibility.

I see the same happening with WASM-based web apps if they don't find an elegant way to directly utilize the DOM.


Cardinal [0] is a wasm-based web app. What on earth would the DOM have to do with a modular synthesis environment? You seem to be imagining that wasm-based web apps implies a fairly strong connection to existing HTML/CSS/JS "pages". This is not the case and is an unnecessary cognitive barrier to thinking about what wasm (in-browser) could make possible.

[0] https://cardinal.kx.studio/live


And it's lacking in accessibility, just as the person you're responding to said.

> What on earth would the DOM have to do with a modular synthesis environment?

The fact that unlike people's cool and dreamy bespoke UIs drawn entirely on canvas, the DOM does actually have a competently implemented accessibility services layer that apps using it get for free out of the box.

Not to mention that you've never needed WASM to draw UIs entirely on canvas either. It's a silly idea with JS and a silly idea with WASM.


This application is written in C++.

It has no accessibility features because nobody has ever figured out how to do that for a virtual modular synthesis environment that is absolutely intended to be (a) extremely skeuomorphic and (b) extremely visual. The same issue exists in the native world too, and isn't going to be solved by somehow connecting a DOM with the canvas.


That is my main concern and I am following this WASM story with growing caution.


Most of the exuberance over WASM (including in this article) is over the potential for running it outside of the browser, such as (apparently) in IoT devices (this article) or as a Docker replacement[1].

For the record, I think WASM is a good idea in the browser (but is still of limited usability until they add direct DOM manipulation, and I feel like someone is intentionally blocking that)

[1] - https://news.ycombinator.com/item?id=34078737


> There's nothing new here compared to ... whatever other VM bytecode formats have been dreamed up

Well, there is: it was designed to be easy for already mature V8's codebase to adopt as a low-level IR, with minimal changes (e.g. V8 keeps the JS AST more or less intact during the codegen, it doesn't lower it into 3-address code or anything like that, so WASM too has structured loops and blocks but no general "goto"). Other VMs, I believe, were developed after the bytecode design for them was done.


Ok, so WASM was designed to be a more low-level interface to V8. This makes sense from a technical PoV and from promotion-driven culture at Google. But it doesn’t originate from any real-world problem, just a hammer looking for a nail.


Kind of ironic given PNaCL was equally designed for V8.


I worked a lot with the PNaCL team and they never told us that. Unless you have a reference I am aware of in a design doc. PNaCL was LLVM bitcode.


Used by V8 execution pipeline on Chrome.

It isn't as if it was given other execution runtime, so it certainly wasn't designed for Firefox, IE, Safari, Opera,...


No, PNaCl used LLVM initially and was working on a faster compiler internally (SubZero) before we refocused efforts on Wasm. You might be referring to the effort to port V8's compiler(s) to emit sandboxed code, but that was never merged into mainstream V8.


Not all bytecode formats are the same.

Steel-reinforced concrete is "just" another building material, but what you can do with it in practice is very different from wattle and daub.


Not all AAA batteries are the same, either. But for all intents and purposes, quibbling over differences is trivial.


For most modern CPUs, "machine code" is just another bytecode. Not much of a problem with that, it would seem.


Yeah, these kinds of posts selling WebAssembly as if it hadn't been done before, multiple times since the 1960s, are getting tiring.


> There is little point in using WebAssembly if you control both sides of a boundary

Mozilla is using this to secure native dependencies: the WASM boundary ensures the memory and function access is correct.

https://hacks.mozilla.org/2020/02/securing-firefox-with-weba...

> The core implementation idea behind wasm sandboxing is that you can compile C/C++ into wasm code, and then you can compile that wasm code into native code for the machine your program actually runs on.


WebAssembly is bringing the opacity of the apps to the web. Too many people learned how the sausage is made by clicking "View Source". Of course the ship has mostly sailed already with all the layers of abstraction but WebAssembly will be a nice funeral wreath.


For the library world, WASM aims to offer an alternative to dynamically loaded C libraries - which it doesn't really do yet.

For the container world, WASM aims to offer an architecture agnostic universal binary format that can be run anywhere - which it doesn't really do yet.

For the web world, WASM aims to offer ~~not exactly sure~~. People say it's not a replacement for JavaScript and refuse to consider it as at least an additive alternative.

So I guess WASM in the web is something like an enhanced Web Worker that allows hot paths to be optimized using lower level languages?

That said, WASM in the browser doesn't support threads (and likely won't because of restrictive security permissions on the web making memory shared between threads impractical).

Perhaps the use case for WASM is simply enabling the consumption of existing C/C++ libraries on the web (like ffmpeg) that would otherwise be impractical to port to JavaScript?

When you think about it, WASM was announced almost a decade ago. Golang had just released version 1. JavaScript didn't have Promises or ES Modules in the specification. Mozilla announced it was backing Rust and it had its first stable release. Feels like WASM hasn't really progressed that much in that time.

For me, I would love a world where web-technology-powered _applications_ (not blog posts or static sites) were PWAs backed by efficient WASM implementations (think Slack, Discord, VSCode, JIRA, etc.). Instead we get everything written as Electron apps, and even if you want to use WASM, you're limited to a single thread that can't even create a div.

I guess having amazing, efficient, cross platform desktop application experiences enabled by capable web technologies would compete too effectively with native apps, Flutter, WinUI or whatever other competing interests exist.

If anything, I would consider calling that scenario an actual "Web 3.0" - but it's probably a pipe dream at this point.


“People say it's not a replacement for JavaScript and refuse to consider it as at least an additive alternative.”

It’s not a complete replacement yet but you can write the vast majority of your code in WASM and use JS as glue code if you want.

“That said, WASM in the browser doesn't support threads”

That’s… not true? It has been supported in all major browsers for years


> use JS as glue code if you want

You can and this makes it appropriate for tech demos but it's a huge point of contention for engineers who are concerned about performance sensitive applications - particularly where boot time (TTI, TFP, etc) is important.

Further, deferring the loading of a wasm module to js evaluation forgoes the browser's ability to optimize based on static analysis, dropping launch optimizations.

Lastly, the added tooling adds an aspect of distrust. Web development is already saturated with complex tooling.

We cannot simply run `cargo build --target wasm32-unknown-unknown` and have it work. Additionally, if you want to code split you would need to split your application up into a wasm executable with dynamically linked wasm libraries - something that isn't supported at present.

> That’s… not true? It has been supported in all major browsers for years

At present, you must initialize multiple instances of a wasm module and pass a SharedArrayBuffer between those modules to share memory. This means threads are controlled in JS and not in WASM.

Additionally, SharedArrayBuffer requires cross-origin isolation, making it impractical to use in contexts like established applications which already use subdomains extensively (like Slack, Jira, etc).
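A sketch of the JS-side setup being described (the worker filename is hypothetical): a shared `WebAssembly.Memory` is created on the main thread and posted to each worker, so every Wasm instance maps the same linear memory.

```javascript
// A *shared* Wasm memory: its buffer is a SharedArrayBuffer, which
// structured cloning shares (rather than copies) across workers.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 4, shared: true });
console.log(memory.buffer instanceof SharedArrayBuffer); // true

// Sketch of the rest (worker.js is made up for illustration):
//   const worker = new Worker("worker.js");
//   worker.postMessage({ memory });  // thread creation lives in JS, not Wasm
//   // worker.js: WebAssembly.instantiate(bytes, { env: { memory } });
// On the web this requires cross-origin isolation (COOP/COEP headers).
```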


Where can I find more information about managed data types in WebAssembly?



  now that it will be cheaper and more natural to pass data back and forth with JavaScript, are we likely to see Wasm/GC progressively occupying more space in web applications?
Is there any tracking issue to see when JavaScript will finally be able to read and write Wasm/GC objects? The last time I checked, any such attempt would result in an error.


Part of Wasm's capability security model is that JS objects are opaque to Wasm and Wasm objects are opaque to JS. This is a good thing and it doesn't prevent interop.

Everyone asks about DOM nodes, so let's use that as an example. In Wasm, you could import a function known locally as $documentGetElementById. The host (JS) could then instantiate the module and provide Document.prototype.getElementById.bind(document) as the implementation of that import. $documentGetElementById's return type would be (ref extern), an opaque reference to a heap object from the host that can now be freely passed around the Wasm module or back to the host. You could then import any other DOM functions that you need to inspect/manipulate these objects.

On the flip side, if you have a Wasm object reference in JS, you need to call some exported Wasm function(s) to inspect/manipulate it.
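A runnable sketch of that opaque-reference round trip (the module bytes are a hand-assembled toy, and the `host.getObj` import name is made up for illustration):

```javascript
// Toy module: imports "host.getObj", which returns an externref, and exports
// "get", which just forwards it. To Wasm the reference is completely opaque;
// it can only pass it around or hand it back to the host.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x05, 0x01, 0x60, 0x00, 0x01, 0x6f,             // type: () -> externref
  0x02, 0x0f, 0x01, 0x04, 0x68, 0x6f, 0x73, 0x74,       // import module "host"
  0x06, 0x67, 0x65, 0x74, 0x4f, 0x62, 0x6a, 0x00, 0x00, //   field "getObj", func of type 0
  0x03, 0x02, 0x01, 0x00,                               // one local function, same type
  0x07, 0x07, 0x01, 0x03, 0x67, 0x65, 0x74, 0x00, 0x01, // export it as "get"
  0x0a, 0x06, 0x01, 0x04, 0x00, 0x10, 0x00, 0x0b,       // body: call $getObj; end
]);

const secret = { note: "stand-in for a host object such as a DOM node" };
const imports = { host: { getObj: () => secret } };
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes), imports);

// The very same reference comes back out; Wasm never looked inside it.
console.log(instance.exports.get() === secret); // true
```

In a browser, `secret` would be the result of something like `document.getElementById(...)`, and further imported DOM functions would be the only way for Wasm to inspect or mutate it.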


How does it not prevent interop if JS can't even read or write the struct data provided by wasm?

If they still require glue code hell, what's even the point?


Every day the predictions from that "Birth and Death of Javascript" talk come closer to reality


I agree with this, but even if you internalize this it's still very easy to miss the point of web assembly. People often talk about web assembly with hope that it will "free them from javascript". This is missing the point, the beauty of web assembly is that you can keep your scaffolding in javascript, something it's extremely good at, and have your performance too by calling into "native code" as a sort of half-ffi. In short I think compiling whole ass programs into wasm is trash, but when you compile little libraries, or even individual functions and then call them transparently from JS, that's when it really shines.

Being able to "natively" perform computations on data, and then spit the results back into a terse, expressive language like javascript in such a tightly integrated way, is heterogeneous programming at its best.


WebAssembly allows you to program web pages in a language other than Javascript.

The above statement glosses over a lot of very important details, but dwelling on those details misses the point: the critical use case of WebAssembly, and the problem it attempted to solve right now, is creating a webpage in a language other than JavaScript.

Everything else that the article mentions is an impressive feat on its own, but what WebAssembly could do is very different than what it does now. And the fact that now you can program a website in a language other than Javascript (or TypeScript) is a very tangible accomplishment that people who aren't enamored with computer science can appreciate, understand, and repeat.


I'd argue that's actually beside the point. Before WebAssembly there were plenty of good ways to write web apps in a language other than JavaScript. Some for example integrate it into the language tooling (like ClojureScript, which I have used and have fond memories of), some simply directly used asm.js (which I have also used).

So if the only use case of WebAssembly is to enable that, it is an utterly unexciting piece of incremental improvement.

Everything that's exciting about WebAssembly comes from where it can be used besides replacing JavaScript in a browser. It's a point I've repeatedly argued on this forum since two years ago (see https://news.ycombinator.com/item?id=30156799 and https://news.ycombinator.com/item?id=38138739)


Maybe we're just splitting hairs on what it means to be "a webpage", but I feel like people are _not_ writing webpages in WebAssembly. Instead, you get video games, interactive animations, photo editing apps, ... Basically, desktop-app like things, but generally without a DOM, without CSS. The look and feel diverges quite strongly, a user kind of intuits they're no longer in a regular web-app.

To my frustration, WebAssembly is not replacing JS/TS and its abominable toolchains; it ends up delivering a new suite of experiences, by adding another layer of abominable toolchains.

There's also low level "helper"-libraries getting developed in WASM, e.g. a database implementation, but wondering if that ever takes off, managing bindings is rough and library size/bootstrapping can end up bloated. And it certainly doesn't get us out of JS/TS-dev land.


Wasm GC makes it much more practical to write DOM-based web applications now. I think we'll start to see more examples of them now that Chrome and Firefox are both shipping Wasm GC in stable releases.


Thanks for clarification, will look into it


There are many places that have been using WASM for purposes similar to the ones in the article for quite a while now. For example, Cloudflare Workers have been using WASM for a year and a half.



I'm both excited about cross-language but also I think the people you describe as potential beneficiaries are those most apt to suffer under the new (anti-)regime.

> And the fact that now you can program a website in a language other than Javascript (or TypeScript) is a very tangible accomplishment that people who aren't enamored with computer science can appreciate, understand, and repeat.

Industrial software dev has resulted in gross difficult minified bundles & absurdities like vdom that are hard to view-source hack. But there's pretty ok un-minifying & it's surprisingly possible to get really far, if a user wants to seize the power & do some userscripting.

And many in the webdev world have steadily been trying to push back against this rampant industrialization. HTTP2 and ESM were hopefully going to make unbundling more viable, but that's still long suffering (ex: import-maps don't work for workers, https://github.com/WICG/import-maps/issues/2), and now that the http cache doesn't work cross-site many of the possible re-use upsides are dead. Still a goal for many though. WebComponents is still a vast un-frameworking potential; in many implementations it unravels the descent into abstraction that React framework begat. Classic web devs have a lot of heart to make the web pro-user, to un-framework things, but the industrial demands are hard to re-satisfy on these better paths & so much momentum is elsewhere.

Now compare to the wasm world. We have basically pure machine code running our web in wasm. It's utterly unintelligible. There are dozens of hundred-kB libraries that languages bring into userland just to do the language thing. I think people will have an incredibly poor time learning the web; I think the worst risk wasm has is in scaring people off from view-source, from casually observing & learning anything. Wasm is god smiting the tower of Babel: one language breaking & being cast down, & a hundred different tribes speaking their own languages. So much good will come from this diversity, we'll see so many different styles & tries from so many camps, and that will be a benefit, but it will in my view be drastically harder for those who "aren't enamored with computer science" to appreciate, understand, and repeat. Intelligibility is the loser, when Babel falls.


> We have basically pure machine code running our web in wasm. It's utterly unintelligible.

Whenever technology changes there's always an argument against the new thing.

In this case, I don't think that argument has merit. We've been running various flavors of compiled code since (almost) the beginning of computers, and the merits and drawbacks of that approach are well understood.


The web has been distinct & unique & closer to "the language of the gods" for being much better, more pro-user than the old industrial-computing compiled ways. Many webdevs got their start from view-source (and many would have been lost to computing if we were just another compiled blackbox). The aliveness of the web contrasts with the dead programming variety. The erosion of this aliveness & legibility has been mourned, and should be. It is worth fighting for the user.

I don't think your argument has merit, that merits can be ignored because this is how computers always used to be. Wasm's return to the dark times of industrialized blackbox computing will cut off the best users from the joy of being closer to the machine that they had on the web, and that return to darkness is a sad plight for mankind, even if that's how it always used to be. I treasure the light of the open alive web, and hope somehow it can survive this voyage into shadow.


We can and should provide a "view source" equivalent for native code as well. Package managers like Guix that know where to find the source and how to build it are a good step forward there.

Text formats can be minified and obfuscated, binary formats can include mechanisms to see and experiment with the source. It's all about the tooling you have, what that tooling makes easy, and what it makes hard.


And what, pray tell, does view-source look like in typical SPA? Even in the debugger (if the beginner knows what that is), what will the minified bundle of 50 libraries look like?


Sure, we can Whataboutism the good to death. There's so much Whataboutism to find here.

Does that Whataboutism wet blanket mean it should be that way? Does it mean we should make the situation even more infernal and bad?


Interesting. I knew another user who spoke similarly to you about the web (often exaggerating of the "wonders of the web" while WASM and similar technologies were the signaling of the "dark times"), are you an alt of rektide? It seems like they stopped posting just as your account was created, which leads me to believe so. Just a curious observation, as they too were on threads related to the web.


I'm surprised that he didn't mention Nebulet microkernel running WASM.


I learnt something important about webassembly today from this. Thank you!


I still hope for WASM to get its own rendering facility, and to ultimately enable what JavaFX almost did: make true desktop applications run in the browser, enabling users to drag them out of the browser to have them as actual, integrated desktop programs.


Why don't I like this idea? Another OS in the OS.


Having a point to miss would be great news for WebAssembly.


how to handle gc strings when most tools don't support wasm stringref?


Yeah the strings situation isn't so good right now. stringref is really good imo but different factions in the wasm group are opposed.

The best way to represent strings right now is (array (mut i8)) for UTF-8 or WTF-8 data. In Guile Hoot we are using stringref because it's convenient for the compiler to be able to emit (string.const "foo") and such, but we then perform a post-pass to remove all stringref instructions and replace them with instructions that operate on i8 arrays so the resulting binaries are usable. This approach has the advantage that string contents are visible to wasm so internal string manipulation is fast, but the downside is that strings have to be copied when going across the host/wasm boundary.
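The i8-array approach can be sketched in WAT; this is an illustrative fragment, not Hoot's actual lowering, and the names ($str, $hello, $make-hello) are made up:

```wat
(module
  ;; A "string" is just a mutable byte array holding UTF-8/WTF-8 data.
  (type $str (array (mut i8)))
  ;; Passive data segment holding a literal's bytes.
  (data $hello "hello")
  ;; Roughly what (string.const "hello") becomes after the post-pass:
  (func $make-hello (result (ref $str))
    (array.new_data $str $hello (i32.const 0) (i32.const 5)))
  ;; Length and byte access are ordinary GC array ops: fast, no host call.
  ;; Crossing to JS, though, means copying the bytes out and decoding.
  (func $byte-length (param $s (ref $str)) (result i32)
    (array.len (local.get $s))))
```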

An alternative approach would be to fully rely on the host to provide strings utilizing extern refs. This avoids the copying at the boundary problem of the first approach, but it has the disadvantage that all string operations need to call the host, so internal string manipulation is slow.



