Cool, Steven Wittens is behind this. The header at https://acko.net/ is one of the first examples of WebGL I remember seeing in the wild, and still one of the cleanest. Looking forward to seeing where this goes!
The first practical usage of WebGL in the wild I can think of would be Google Maps, although it's easy to forget about the switch from the old server-side tiles to the vector WebGL thingy ( https://www.youtube.com/watch?v=X3EO_zehMkM ).
That still strikes me as one of the very few useful usages of WebGL that exist even today. There are other usages of course, like Figma, SketchUp, etc... But those rarely (if ever) benefit from the web's primary advantages of ephemerality & linkability, and would work perfectly fine as classic desktop apps. Kind of like an awful lot of tools in those spaces still are.
The majority of WebGL usage otherwise still seems to be ads & shopping. It seems likely that WebGPU is going to be much the same, although it has a chance of graduating out of the web platform as a potentially useful middleware abstraction for native apps that want to use the GPU.
Vampire Survivors, currently the most popular game on Steam Deck, is browser based and built with WebGL, Phaser, and Capacitor. I agree games like this have been rare, but perhaps we're about to hit a turning point: https://poncle.itch.io/vampire-survivors
Why not? I'm not a game developer, but between WebGPU and QUIC+WebTransport enabling UDP in the browser, I've been cautiously optimistic that web gaming will see a revival.
Flash gaming was hugely successful largely because the authoring tools were excellent and truly accessible. WebGPU, on the other hand, is one or two orders of magnitude less accessible (as it currently stands) than typical modern game dev, which is itself nothing compared to what Flash was back in the day. So on the one hand you have a tiny number of diehard tech nerds who can deliver some labor of love, and on the other hand you have gaming companies/studios, for whom the business case of shipping games constrained to low tens of MBs is largely nonexistent.
Flash still exists if you want it; it's just called Haxe now. Both major and a number of minor game development platforms can publish to the web. Checking itch.io, there were 120 web based games uploaded within the past day (including updates, I assume). I would guess there are more now than ever before, but because there are more easily available commercial games for very little money on sale, more people just get commercial games. Or they download free games, which give better controls than a browser. The games market in general is so much larger now, though, that I'm not convinced there are fewer people interested in web games than previously. I suspect it's more a case of people who were personally interested in web games previously, and aren't now, thinking that it's no longer a thing.
Still a great and useful tool (though over the years it became slower and slower with each release, just like any Adobe product), but the exports to HTML / AIR / OpenFL etc. are nowhere near as good as the OG Flash runtime of yore.
Useful for exporting movies and packaging assets / animations to then be consumed by a custom engine, but not worth the effort when you have Spine, etc., which are better supported and more widely used in the industry.
Scaleform still seems to be used by some AAA studios (I'm always surprised to see it), but I can't imagine they will continue to use it for long.
I thought that community just migrated to Unity and the other alternatives that export to HTML. Isn't even the next Unreal Engine supposed to have an export-to-web feature?
They do, but they never got something as good: it's multiple megabytes for a single empty scene, with a long initialization time, and it doesn't run as well as it should (especially on mobile). Compare that with old Flash games, which ran on old Pentiums, could be just a few KB, started instantly, and shipped as a single streamable file.
If you are only targeting the web, you're better off with a game engine specific to the web.
I'm not sure if I'm understanding what you mean about "linkability", but when I'm writing up something on Notion, I'm able to refer to specific elements of a Figma drawing and have the preview rendered right there in Notion. As far as I'm aware, it also updates along with the Figma drawing.
I've maintained a small WebGPU project for a little while, and haven't had to utilize any solutions like Use.GPU. I'm not here to express an opinion about it, but if you like using WebGL without adding large dependencies to your projects, you can leverage WebGPU the same way, with one important caveat: the packing of structured data into bindings.
In short, if you have a JavaScript object of named values you want to pass into a WGSL shader that has a corresponding binding, you have some homework to do. So I wrote a tiny (work in progress) utility to do it for you.
Just like gl-matrix ( https://glmatrix.net ) is a tiny library that trivializes geometric transforms for small projects, gpu-buffer ( https://github.com/Rezmason/matrix/blob/master/lib/gpu-buffe... ) trivializes passing data into WGSL shaders. You hand it a shader, and it'll return a set of objects that'll transform your JS objects into buffers to then bind.
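For readers wondering what that "homework" looks like: below is a rough, hand-rolled sketch using only the standard WebGPU API, with a made-up struct for illustration (gpu-buffer's actual interface may differ). The manual offset bookkeeping is exactly what a utility like this automates.

    // Hypothetical uniform layout matching this WGSL struct:
    //   struct Params {
    //     color: vec3<f32>,      // offset  0 (vec3 has 16-byte alignment, 12-byte size)
    //     intensity: f32,        // offset 12 (packs into the vec3's padding)
    //     resolution: vec2<f32>, // offset 16
    //     time: f32,             // offset 24
    //   };                       // total size rounds up to 32 bytes
    const params = { color: [1, 0.5, 0], intensity: 2.0, resolution: [640, 480], time: 0 };

    // Pack by hand, respecting WGSL's alignment rules.
    const data = new Float32Array(8);  // 32 bytes
    data.set(params.color, 0);         // floats 0..2
    data[3] = params.intensity;        // float 3
    data.set(params.resolution, 4);    // floats 4..5
    data[6] = params.time;             // float 6; float 7 is padding

    // Upload with the plain WebGPU API (`device` obtained elsewhere via requestDevice()).
    const buffer = device.createBuffer({
      size: data.byteLength,
      usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
    });
    device.queue.writeBuffer(buffer, 0, data);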
Note that what Use.GPU does is much more polymorphic than that. It uses getters inside shaders to allow e.g. attributes like point color and size to be either uniform or per vertex, without having to change the shader source.
It will also autogenerate bind groups and descriptors, including optimized support for volatile bindings that e.g. change every frame (like a front/back buffer).
This is necessary if you want to make composable shaders which can adapt to a variety of use cases.
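Very roughly, the getter idea looks like the sketch below (illustrative pseudo-WGSL in strings, not Use.GPU's actual generated code; plain WGSL has no bodiless function declarations, so that line stands in for the linker's import mechanism):

    // The main shader calls an abstract getColor(); at link time, whichever
    // implementation matches how the data is actually bound gets spliced in.
    const mainModule = /* wgsl */ `
      fn getColor(index: u32) -> vec4<f32>;  // resolved by the linker

      fn shadePoint(index: u32) -> vec4<f32> {
        return getColor(index);  // caller doesn't care where the color comes from
      }
    `;

    // Variant A: one color for the whole draw call (uniform).
    const uniformColor = /* wgsl */ `
      @group(0) @binding(0) var<uniform> color: vec4<f32>;
      fn getColor(index: u32) -> vec4<f32> { return color; }
    `;

    // Variant B: one color per vertex, read from a storage buffer.
    const perVertexColor = /* wgsl */ `
      @group(0) @binding(0) var<storage, read> colors: array<vec4<f32>>;
      fn getColor(index: u32) -> vec4<f32> { return colors[index]; }
    `;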
I think the design of standard APIs will increasingly cater to engine developers and folks willing to pore over specs, and the folks running smaller scale operations will have a harder time leveraging new things without considerable personal investment— unless folks implement higher level wrapper libraries, that is.
I personally disagree with you on a bunch of things— this is the Internet after all— but you've been undoubtedly empowering graphics programmers for years, and I appreciate you.
Maybe one day you'll realize that being disagreeable on the internet is another service I've been freely providing for public benefit, with very little thanks and at great personal cost.
Slightly OT: I'd never heard of WebGPU before. So, in theory, will it be feasible in a few years to run models like Stable Diffusion on my GPU through the browser, without fighting with conda, pip, and drivers? Or did I get the scope of this wrong?
You can already run ML models with GPU acceleration in WebGL using tensorflow.js and other libraries. WebGPU will make things better for sure, but I think the major obstacles to running large models like Stable Diffusion will not be solved by WebGPU.
WebGPU will not expose Nvidia-specific tensor cores, at least initially. But the main issues are with loading and processing gigabytes of data in browsers, which aren't addressed by WebGPU at all. You'll have difficulty downloading and storing all that data and quickly run into out of memory crashes.
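As a concrete example, a minimal tensorflow.js setup with the WebGL backend looks roughly like this (the model URL and input shape are placeholders):

    import * as tf from '@tensorflow/tfjs';

    // Select GPU-accelerated execution via WebGL (tf.js falls back to CPU if unavailable).
    await tf.setBackend('webgl');
    await tf.ready();

    // Placeholder URL: any TF.js graph model hosted alongside its weight shards.
    const model = await tf.loadGraphModel('https://example.com/model/model.json');

    // Run inference on a dummy input; the shape depends on the actual model.
    const input = tf.zeros([1, 224, 224, 3]);
    const output = model.predict(input) as tf.Tensor;
    console.log(await output.data());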
In principle, yes. With WebGPU (there was also a WebGL 2.0 Compute proposal) we get access to compute shaders, which makes it feasible/easier to evaluate a model like Stable Diffusion (rough sketch of a compute pass below).
The biggest issue I see is that those models (or rather their trained parameters) are usually pretty large (several GiB). So it'll take a while to set up in the browser before the evaluation can actually start. It'll also require a lot of bandwidth on both ends.
A lot of those things should already be doable with the fragment shaders we get from WebGL and a lot of hackery, like clever (ab)use of textures. So the actual reason we're not seeing a lot of this is probably not that it's impossible right now...
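To make the compute shader point concrete, here's a minimal WebGPU compute pass that doubles every element of a buffer (sizes and names are arbitrary; `device` comes from navigator.gpu.requestAdapter() / requestDevice()):

    const shader = device.createShaderModule({
      code: /* wgsl */ `
        @group(0) @binding(0) var<storage, read_write> data: array<f32>;

        @compute @workgroup_size(64)
        fn main(@builtin(global_invocation_id) id: vec3<u32>) {
          data[id.x] = data[id.x] * 2.0;
        }
      `,
    });

    const pipeline = device.createComputePipeline({
      layout: 'auto',
      compute: { module: shader, entryPoint: 'main' },
    });

    // 1024 f32 values to process.
    const buffer = device.createBuffer({
      size: 1024 * 4,
      usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC | GPUBufferUsage.COPY_DST,
    });

    const bindGroup = device.createBindGroup({
      layout: pipeline.getBindGroupLayout(0),
      entries: [{ binding: 0, resource: { buffer } }],
    });

    const encoder = device.createCommandEncoder();
    const pass = encoder.beginComputePass();
    pass.setPipeline(pipeline);
    pass.setBindGroup(0, bindGroup);
    pass.dispatchWorkgroups(1024 / 64);  // 16 workgroups of 64 invocations
    pass.end();
    device.queue.submit([encoder.finish()]);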
> The biggest issue I see is that those models (or rather their trained parameters) are usually pretty large (several GiB). So it'll take a while to set up in the browser before the evaluation can actually start. It'll also require a lot of bandwidth on both ends.
Might not be feasible due to memory constraints (I'm not sure), but browsers can load data from disk into memory without having to touch the network. So you could in theory ask the users to download the model separately, then ask them to select it from a file picker, and the browser can have access to it that way.
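For instance, something along these lines (a rough sketch; showOpenFilePicker() is Chrome-only, a plain <input type="file"> element works everywhere):

    // Ask the user to pick the model weights they downloaded separately.
    const [handle] = await window.showOpenFilePicker({
      types: [{ description: 'Model weights', accept: { 'application/octet-stream': ['.bin'] } }],
    });
    const file = await handle.getFile();

    // Read the weights straight from disk, no network involved. For multi-GB
    // models you'd likely stream in chunks rather than load everything at once.
    const reader = file.stream().getReader();
    let bytes = 0;
    for (;;) {
      const { done, value } = await reader.read();
      if (done) break;
      bytes += value.byteLength;  // feed each chunk to the model loader here
    }
    console.log(`read ${bytes} bytes from local disk`);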
Right. Another option is running the model locally from a web page, served from the project source directory, which is still a lot easier than setting up a platform specific GPU accelerated ML dev environment.
It also supports many more rendering features which were previously unavailable or so badly supported you couldn't use them. It's basically like going from 2005 to 2015 in terms of capabilities.
There are still notable gaps though, like no bindless resources or ray tracing. So don't expect e.g. Unreal Engine to run on WebGPU without some significant compromises.
I believe WebGL compute shader development has been halted[1], in favor of making WebGPU happen. Though it's possible to run it in dev builds in some cases, I'm pretty sure there's no browser that has it on by default, and likely this won't happen.
Apple does support WebGL2, but compute shader support is not part of that core spec. The demos[2] certainly don't work in Safari in my quick test.
You shouldn't be downvoted, your question is reasonable even if it demonstrates a misunderstanding. So it's not a wrapper over WebGL, it's a more modern, lower-level GPU programming model. There are equivalent efforts in native environments to replace OpenGL - notably Vulkan and (Apple's) Metal.
As far as I understand, the approach allows far more flexibility, at the expense of higher complexity. It's less "stateful" than WebGL, which basically gives you one big context object that manages all the state for you.
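A crude way to see the "less stateful" difference, with both snippets heavily abbreviated and all setup (context creation, shaders, pipeline) assumed to exist elsewhere:

    // WebGL: mutate hidden global state on the context, then draw.
    gl.useProgram(program);
    gl.bindBuffer(gl.ARRAY_BUFFER, glVertexBuffer);
    gl.vertexAttribPointer(0, 2, gl.FLOAT, false, 0, 0);
    gl.enableVertexAttribArray(0);
    gl.drawArrays(gl.TRIANGLES, 0, 3);

    // WebGPU: everything the draw needs is baked into explicit objects
    // (pipeline, buffers, bind groups) and recorded into a command encoder.
    const encoder = device.createCommandEncoder();
    const pass = encoder.beginRenderPass({
      colorAttachments: [{
        view: context.getCurrentTexture().createView(),
        loadOp: 'clear',
        clearValue: { r: 0, g: 0, b: 0, a: 1 },
        storeOp: 'store',
      }],
    });
    pass.setPipeline(renderPipeline);
    pass.setVertexBuffer(0, gpuVertexBuffer);
    pass.draw(3);
    pass.end();
    device.queue.submit([encoder.finish()]);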
For anyone wondering what the difference is between R3F and Use.GPU:
R3F is a react reconciler for three.js. You manipulate a three.js scene, which the classic non-reactive renderer then draws.
Use.GPU does not have a non-reactive model nor does it have a notion of a scene. Components just compose and expand to produce lambdas which make calls to the GPU.
It's basically React without a DOM, where components can _only_ render other React components.
The docs go into detail of how this is accomplished and what it means.
My experience is the opposite. I found react-three-fiber to be a leaky and, more importantly, unnecessary abstraction. It is very easy to embed https://threejs.org/ in any frontend framework and use it directly, without any wrapper.
> Use.GPU has a powerful WGSL shader linker. It lets you compose shaders functionally, with closures, using syntactically correct WGSL. It provides a real module system with per-module scope.
> The shader linker is very intentionally a stand-alone library, with no dependencies on rest of the run-time. It has minimal API surface and small size.
> Every shader is built this way. Unlike most 3D frameworks, there is no difference between built-in shaders and your own. There is no string injection or code hook system, and no privileged code in shader land. Just lots of small shader modules, used à la carte.
I see this project is written in Typescript and Rust, so.. is this shader linker written in Rust?
Could I, in principle, reuse it for running everything in Rust, with no Typescript in the frontend?
No, the only part that's Rust is the truetype font renderer, which uses ab_glyph. But even the conversion to SDFs is in Typescript.
Moving data in and out of WASM is still kind of slow, relatively speaking (until GC lands), and Rust's ergonomics are not a good fit for the API design which has a ton of optionals.
The shader linker uses a Lezer grammar rather than the official WGSL tree-sitter grammar, so the AST node structure is more optimized for quick unconditional consumption.
Can't run the demos on my M1 Mac - I can't find the #enable-unsafe-webgpu flag in chrome://flags. Anyone know a workaround? Or is it just not available on Apple Silicon?
Edit: I didn't read it properly - the flag is only available on the Chrome dev channel [0] (and presumably also Canary). The demos work great on my M1 now.
> WebGPU is only available for developers, locked behind a browser flag.
This isn't strictly true. There's an origin trial that enables you to use WebGPU in Chrome stable without flipping any flags, and even ship demos to real users today, on any supported platform. That's currently only Chrome on Windows and macOS, but more platforms are in progress.
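Feature-detecting it at runtime is straightforward either way; a sketch (the adapter can still come back null on unsupported hardware):

    if (!('gpu' in navigator)) {
      console.log('WebGPU API not exposed in this browser');
    } else {
      const adapter = await navigator.gpu.requestAdapter();
      if (!adapter) {
        console.log('WebGPU exposed, but no suitable GPU adapter');
      } else {
        const device = await adapter.requestDevice();
        console.log('WebGPU ready', device);
      }
    }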
Okay, what are the privacy and security implications? If WebGPU goes GA, and a website requests it and gets approved (if it's going to be behind a permission at all), what would it be able to learn about my machine, and what might it theoretically be able to do beyond "normal" GPU compute?
I feel that instead of reimplementing the React <Component> tree and its hook system (what they call "Live"), they could have made a custom React renderer instead (like React Three Fiber, for example). Focus on the interesting bits (WebGPU), don't reinvent the wheel (React).
Seems like my integrated i7 laptop GPU can't run any of these examples due to incomplete Vulkan support for Haswell under Linux. Sad, since I doubt it'll be completed.
Maybe I'll have to retire this almost 10 year old laptop sometime soon, even though it still runs pretty well.
I run Firefox Nightly and turning on dom.webgpu.enabled didn't enable the demos, even after a browser restart :/ Version is 106.0a1 (2022-09-09) (64-bit), I'm running on NixOS.
I know this is hacker news, but I think the idea of hacking general purpose compute infrastructure on top of graphics/gaming hardware is starting to get out of hand.
GPUs are already pretty generic. The main diff is that a GPU will execute N threads in lockstep, where N = 32 or 64, like a very wide and always-on SIMD.
The dedicated render pipeline still exists, and is still needed for things like Z buffering and interpolation, but you'd be surprised how much of what used to be fixed hardware is being emulated by the driver.
It turns out that graphics hardware is perfect for certain kinds of non-graphical scientific computing. Dedicated GPGPU hardware already exists, but people don't have it at home or on the regular computers they use.
I really don't see the issue. GPU is specialised hardware for certain computations - faster and/or more energy-efficient. Your analogy is weird too; if a CPU is a car, then a GPU is a specialized car that can go 1000x faster under certain specialized circumstances.
That said, your original issue was with general purpose compute infrastructure on top of GPU, which can be applied to this analogy too; using the Thrust SSC to transport a container is probably not the best. Possible, but suboptimal.
It has already. Chips that are ostensibly for graphics contain more and more silicon dedicated to general purpose workloads. It turns out doing it like this is usually cheaper than manufacturing dedicated parallel processors and GPUs separately, since the requirements have so much overlap.
It's mentioned at the top of the page that WebGPU isn't enabled by default in _any_ browser. The linked^1 caniuse page mentions this as well: while available in Safari, it has to be enabled in the developer config.
I'm unclear as to how to interpret this response. Did you mean 'Kyiv' and that I'm somehow anti-neologistic, or something?
- ed: damnit, I shoulda gone with 'anti neo-legoistic'.
- ed - ed: I genuinely meant my initial question - I'd not seen that word in that context, so wondered what it was. I thought it was perhaps some term I'd not encountered prior. I do know Americans call Lego pieces 'Legos', hence my follow-up question.