"Each error scope stores only the first error it captures; any further errors it captures are silently ignored." is one of the worst design decisions I've ever seen. That means if code is expecting one error to occur, any unexpected errors that occur after it in the scope are unobservable. The only way to handle that would be to push/pop tons of scopes.
How hard is it to make an array of errors instead of a single slot? What's the upside of this other than laziness?
These kinds of errors are usually programmer errors that cause literally hundreds of downstream errors, because they put an object that other code later uses into an invalid state. In these cases only the first error is particularly useful; the rest are just noise.
You'll need some JavaScript to instantiate the WASM module and pass in a set of imported functions for using WebGPU. After passing in the capabilities it can be all WASM from there.
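Something like this, roughly (the import names, the integer-handle scheme, and app.wasm are all made up for illustration; it also assumes the module exports its memory and a main function):

    // Illustrative only: hand the module a couple of WebGPU capabilities as imports,
    // referring to GPU objects by integer handle on the WASM side.
    const adapter = await navigator.gpu.requestAdapter();
    const device = await adapter.requestDevice();

    const handles = [];  // host-side table of GPU objects
    const imports = {
      env: {
        create_buffer: (size, usage) =>
          handles.push(device.createBuffer({ size, usage })) - 1,
        write_buffer: (handle, offset, ptr, len) =>
          device.queue.writeBuffer(handles[handle], offset,
            new Uint8Array(instance.exports.memory.buffer, ptr, len)),
      },
    };

    const { instance } = await WebAssembly.instantiateStreaming(fetch("app.wasm"), imports);
    instance.exports.main();  // from here on it's all WASM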
Afaik, there's no means to pass in imported functions right now.
Rust has a very extensive bindgen that makes it kind of look like you can, but it's really a huge payload of JS running on the page's main thread that's converting back and forth, at significant cost.
We just shipped garbage collection, which is one precondition to actually being able to pass things around (so things passed in can participate properly in gc). Next is component-model, which allows for passing non-trivial objects around; currently it's just primitives like numbers that can be passed. After that goes in, hopefully it won't be long before we have host-object bridging, where platform objects can be sent across. https://github.com/WebAssembly/component-model
You can pass around arbitrary JS objects via the (ref extern) type. The (import "module" "name" ...) form can be used to declare an imported thing (function, global, memory, table). I don't do Rust, so I don't know what the limitations are with Rust on WASM right now, but speaking just about core WASM, all the basic pieces are there now.
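For example (purely illustrative: assume demo.wasm declares (import "host" "log" (func (param externref))) and exports stash/dump functions that store and later hand back an externref):

    // Sketch of the JS side: any JS object can cross the boundary as an externref.
    const imports = { host: { log: (x) => console.log(x) } };
    const { instance } =
      await WebAssembly.instantiateStreaming(fetch("demo.wasm"), imports);

    const node = document.createTextNode("hi");  // an arbitrary host object
    instance.exports.stash(node);  // goes in as externref, completely opaque to WASM
    instance.exports.dump();       // the module hands the same object back to host.log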
Afaik, no. What you are talking about is the future, not anything available on any platforms today.
Right now, different runtimes don't have any way to communicate what type an object or function is. That's what component model is trying to figure out. Without that, there's not really a way for a ref to do anyone any good, as far as I know.
If you can link any docs or examples, that'd be great. I feel like it's been a long long long wait, & I've been very eager. But if I'm mistaken, and passing stuff across boundaries is possible, that'd be amazing to see.
> Right now, different runtimes don't have any way to communicate what type an object or function is. That's what component model is trying to figure out. Without that, there's not really a way for a ref to do anyone any good, as far as I know.
All of that seems like more of a convenience than a necessity: even with just the current implementation of host references, it's perfectly possible to track objects purely through indices into a big table, using reference-counting on the module's side, with no JS assistance necessary.
In fact, wasm-bindgen has already implemented support for tracking JS objects as host references in this way, via the --reference-types flag [0]. I've tried out this flag with their WebGL example, and the shims are indeed very minimal: JS objects passed as input arguments go directly into the target functions, without any sort of decoding or lookups. However, JS objects as outputs are still inserted into the table by the shims (calling back into the module to get a free index), which I presume is just a limitation of Rust not being able to express a host reference stored in a local while calling another function.
So overall, there's no fundamental issue blocking native functions (or methods, through a bound Function.prototype.call) from being called from WASM with no shims at all, as long as the module can implement any sort of basic index allocator. Though I'd imagine sufficiently-simple shims would get compiled down to hardly anything regardless, in case that turns out to be more performant than Function.prototype.call.
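The index allocator in question really is trivial; here's a sketch of the idea, written in JS for readability (in practice this logic would live inside the module, with the slots backed by a table of externrefs and the reference counting done on the module's side, as described above):

    // Handle table: host references addressed by integer index, with a free list
    // and simple reference counting.
    const slots = [];
    const refcounts = [];
    const free = [];

    function alloc(ref) {
      const i = free.length ? free.pop() : slots.length;
      slots[i] = ref;
      refcounts[i] = 1;
      return i;  // this integer is all the module ever passes around internally
    }
    function retain(i) { refcounts[i]++; }
    function release(i) {
      if (--refcounts[i] === 0) { slots[i] = null; free.push(i); }
    }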
wasm-bindgen is writing its own ABI, but it still involves a ton of auto-generated JS stubs running for you in the background. It works great! You would never know it's there, but it has a serious performance hit versus actual host object bridging (what we're working towards) since it has to keep serializing things over MessageChannels. You can read some about the auto-generated shims here, https://rustwasm.github.io/docs/wasm-bindgen/contributing/de... . And you can see that wasm-bindgen doesn't actually send objects into Rust, but instead has shims here: https://rustwasm.github.io/docs/wasm-bindgen/contributing/de...
The Component Model is designed to eliminate the need for this autogenerated RPC system & many of these shims, and let WASM itself actually hold references & invoke/use objects in a genuinely shared fashion.
> but it has a serious performance hit versus actual host object bridging (what we're working towards) since it has to keep serializing things over MessageChannels.
I don't understand what you mean? With the --reference-types flag I mentioned, wasm-bindgen is literally passing the JavaScript objects in and out of the shims with absolutely no serialization or indirection on the JavaScript side: the only indirection is on the module's side, when it stores all the host references in a big table and passes around indices to them.
To give a concrete example, I've attached the entirety of the JavaScript code that wasm-bindgen generates with --reference-types enabled on its WebGL example [0]. You can see how many of the shims do absolutely nothing but call the underlying function, even when they take arbitrary JS objects as arguments:
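A few representative shims (this is the shape of the generated code rather than a verbatim copy; the generated names carry hash suffixes, shortened here to _xxx):

    // Each shim just forwards the JS objects it is handed; gl, buffer and program
    // arrive from WASM as opaque references, with no lookups or decoding.
    imports.wbg.__wbg_bindBuffer_xxx = function (gl, target, buffer) {
      gl.bindBuffer(target >>> 0, buffer);
    };
    imports.wbg.__wbg_useProgram_xxx = function (gl, program) {
      gl.useProgram(program);
    };
    imports.wbg.__wbg_drawArrays_xxx = function (gl, mode, first, count) {
      gl.drawArrays(mode >>> 0, first, count);
    };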
So as you can see, we already have direct bridging, as long as the JS objects can be treated as opaque. In fact, the code works just as well if we ditch the anonymous functions entirely, and import the native methods directly into the WASM module (though wasm-bindgen currently doesn't do this on its own, and I have no idea how this affects performance):
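Again illustrative, using the bound Function.prototype.call trick mentioned above so the receiver stays an ordinary first argument (and dropping the >>> 0 coercions for brevity):

    // Import the native methods themselves, with no wrapper functions at all.
    const GL = WebGLRenderingContext.prototype;
    imports.wbg.__wbg_bindBuffer_xxx = Function.prototype.call.bind(GL.bindBuffer);
    imports.wbg.__wbg_useProgram_xxx = Function.prototype.call.bind(GL.useProgram);
    imports.wbg.__wbg_drawArrays_xxx = Function.prototype.call.bind(GL.drawArrays);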
Serialization and deserialization are really only needed if we have record types filled with raw data like numbers and strings that we want to ergonomically manipulate on the JavaScript side. If we just need to track opaque handle objects, as with WebGL or WebGPU, then WASM's feature of host references is already sufficient to avoid serialization overhead.
Part of the security model of WASM is that guest values are opaque to the host and host values are opaque to the guest. You can get a lot of mileage out of (ref extern) right now. Like, if you import document.createTextNode, you know the extern ref you get back is a text node, and you can wrap the extern up as such in your WASM program. I have wrapped a subset of the DOM and Canvas APIs and made several small programs like a classic todo list example and a small retro video game rendered to canvas. I've been able to create elements, assign event handlers, use setTimeout/requestAnimationFrame, etc., all from within WASM. So while additional proposals could add more useful reference types (such as (ref string) from the stringref proposal), you can already do a lot with the types that are there, now that reference types + GC are part of core WASM.
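As a rough sketch of how that import side can be wired up (the module name game.wasm and the export names on_click and on_frame are made up, and string handling is omitted):

    // Event and animation-frame plumbing: host objects flow into WASM as externrefs,
    // and callbacks are dispatched back into exported functions by id.
    let instance;
    const imports = {
      dom: {
        documentBody: () => document.body,  // handed to WASM as an externref
        addClickListener: (element, handlerId) =>
          element.addEventListener("click", () => instance.exports.on_click(handlerId)),
        requestAnimationFrame: () =>
          requestAnimationFrame((t) => instance.exports.on_frame(t)),
      },
    };
    ({ instance } = await WebAssembly.instantiateStreaming(fetch("game.wasm"), imports));
    instance.exports.start();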
I don't quite understand what you mean. WASM modules don't have direct access to anything. They have to be granted capabilities by the host. Are you referring to optimizations that various engines do where they recognize well-known imports like Math.sin and compile things such that they don't invoke JS at all?
You need the host (whatever it is) to provide the capabilities to WASM. On the web, the host environment is JS. Browser engines are free to optimize well-known imports with or without any new proposals. I guess I just don't really see the issue...
However, I made the mistake of continuing to call them reference types, when in reality they are now called the component model.
As for the browser, yeah, that is what matters for WebGPU. Outside of the browser there are so many middleware options with much better tooling (GPU debuggers for Web 3D still aren't a thing 10 years later, aside from the poor SpectorJS) and access to the latest hardware features, instead of what was state of the art in 2015.
Whatever WGPU might do in addition to WebGPU naturally goes beyond the standard features a browser is expected to provide, and is thus no different from any other graphics middleware.
And it takes 2015 state-of-the-art hardware as its inspiration; in 10 years' time we might finally get mesh shaders and ray tracing, judging by how long it took for OpenGL ES 3.1 compute shaders to become available.