Use.GPU (usegpu.live)
259 points by andsoitis on Sept 9, 2022 | 81 comments



Cool, Steven Wittens is behind this. The header at https://acko.net/ is one of the first examples of WebGL I remember seeing in the wild, and still one of the cleanest. Looking forward to seeing where this goes!


The first practical usage of WebGL in the wild I can think of would be Google Maps, although it's easy to forget about the switch from the old server-side tiles to the vector WebGL thingy ( https://www.youtube.com/watch?v=X3EO_zehMkM ).

That still strikes me as one of the very few useful applications of WebGL that exist even today. There are other uses of course, like Figma, SketchUp, etc... But those rarely (if ever) benefit from the web's primary advantages of ephemerality & linkability, and would work perfectly fine as classic desktop apps. Kinda like an awful lot of tools in those spaces still are.

The majority of WebGL usage otherwise still seems to be ads & shopping. It seems likely that WebGPU is going to be much the same, although it has a chance of graduating out of the web platform as a potentially useful middleware abstraction for native apps that want to use the GPU.


How can you forget about games? The reason GPUs exist in the first place.


Browser-based games died with the death of Flash. WebGL(2) didn't bring them back, and WebGPU isn't going to, either.


Vampire Survivors, currently the most popular game on Steam Deck, is browser-based and built with WebGL, Phaser, and Capacitor. I agree games like this have been rare, but perhaps we’re about to hit a turning point: https://poncle.itch.io/vampire-survivors


Why not? I'm not a game developer, but between WebGPU and QUIC+WebTransport enabling UDP in the browser, I've been cautiously optimistic that web gaming will see a revival.


Flash gaming was hugely successful largely because the authoring tools were excellent and truly accessible. WebGPU, on the other hand, is one or two orders of magnitude less accessible (as it currently stands) than typical modern game dev, which is itself nothing compared to what Flash was back in the day. So on the one hand you have a tiny number of diehard tech nerds who can deliver some labor of love, and on the other hand you have gaming companies/studios, for whom the business case of shipping games constrained to low tens of MBs is largely nonexistent.


Flash still exists if you want it; it is just called Haxe now. Both major and a number of minor game development platforms can publish to the web. Checking itch.io, there were 120 web-based games uploaded within the past day (which I assume includes updates). I would guess there are more now than ever before, but because commercial games are easily available for very little money on sale, more people just buy commercial games, or download free games, which offer better controls than a browser. The games market in general is so much larger now that I'm not convinced there are fewer people interested in web games than previously. I suspect it is more that people who were personally interested in web games before, and aren't now, assume it is no longer a thing.


Plus I would assume that game devs of today would rather sell their games for a few bucks on Steam than give them away for free through the web.


> Flash gaming was hugely successful largely because the authoring tools were excellent and truly accessible.

Adobe Flash Professional still exists as Adobe Animate: https://www.adobe.com/products/animate.html


Still a great and useful tool (though over the years it became slower and slower with each release, just like any Adobe product), but the exports to HTML / AIR / OpenFL etc. are nowhere near as good as the OG Flash runtime of yore.

Useful for exporting movies and packaging assets / animations to be consumed by a custom engine, but not really worth the effort when you have Spine, etc., which are better supported and more widely used in the industry.

Scaleform still seems to be used by some AAA studios (I'm always surprised to see it), but I can't imagine they will continue to use it for long.


Most gamers are on mobile, so a proper app store app with a home screen icon will be more discoverable and easier for users to come back to.

Also, you can add support for notifications, and for the developer there are more options for monetization.


I thought that community just migrated to Unity and the other alternatives that export to HTML. Isn't the next Unreal Engine even supposed to have an export-to-web feature?


They do, but they never got something as good: it's multiple megabytes for a single empty scene, with a long initialization time, and it doesn't run as well as it should (especially on mobile). Consider that old Flash games ran on old Pentiums, could be just a few KB, started instantly, and were a single streamable file.

If you are only targeting the web, you're better off with a game engine specific to the web.


Epic already abandoned the Emscripten toolchain sadly.


Meet and other video messaging apps use the GPU to blur or replace your background before streaming your video to their services.


I'm not sure if I'm understanding what you mean about "linkability", but when I'm writing up something on Notion, I'm able to refer to specific elements of a Figma drawing and have the preview rendered right there in Notion. As far as I'm aware, it also updates along with the Figma drawing.


Oh man, I remember finding this site years ago; I had completely forgotten about it. What a piece of internet nostalgia.


I've maintained a small WebGPU project for a little while, and haven't had to utilize any solutions like Use.GPU. I'm not here to express an opinion about it, but if you like using WebGL without adding large dependencies to your projects, you can leverage WebGPU the same way, with one important caveat: the packing of structured data into bindings.

In short, if you have a JavaScript object of named values you want to pass into a WGSL shader that has a corresponding binding, you have some homework to do. So I wrote a tiny (work in progress) utility to do it for you.

Just like gl-matrix ( https://glmatrix.net ) is a tiny library that trivializes geometric transforms for small projects, gpu-buffer ( https://github.com/Rezmason/matrix/blob/master/lib/gpu-buffe... ) trivializes passing data into WGSL shaders. You hand it a shader, and it'll return a set of objects that'll transform your JS objects into buffers to then bind.
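
To give a concrete flavor of that homework (a minimal hand-rolled sketch, not gpu-buffer's actual API; the struct layout and names are made up for illustration): WGSL's alignment rules mean a vec3<f32> is aligned to 16 bytes, so a naive Float32Array of your values won't line up with what the shader expects.

    // Assumed WGSL binding:
    //   struct Params { color: vec3<f32>, intensity: f32, resolution: vec2<f32> };
    //   @group(0) @binding(0) var<uniform> params: Params;
    function packParams(p: {
      color: [number, number, number];
      intensity: number;
      resolution: [number, number];
    }): ArrayBuffer {
      const buffer = new ArrayBuffer(32);   // struct size rounds up to a 16-byte multiple
      const f32 = new Float32Array(buffer);
      f32.set(p.color, 0);                  // offset 0: vec3<f32> (12 bytes, 16-byte aligned)
      f32[3] = p.intensity;                 // offset 12: f32 slots into the vec3 padding
      f32.set(p.resolution, 4);             // offset 16: vec2<f32>
      return buffer;
    }

    // device.queue.writeBuffer(uniformBuffer, 0,
    //   packParams({ color: [1, 0, 0], intensity: 2, resolution: [800, 600] }));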


Note that what Use.GPU does is much more polymorphic than that. It uses getters inside shaders to allow e.g. attributes like point color and size to be either uniform or per vertex, without having to change the shader source.

It will also autogenerate bind groups and descriptors, including optimized support for volatile bindings that e.g. change every frame (like a front/back buffer).

This is necessary if you want to make composable shaders which can adapt to a variety of use cases.


Thanks for making this, by the way.

I think the design of standard APIs will increasingly cater to engine developers and folks willing to pore over specs, and the folks running smaller scale operations will have a harder time leveraging new things without considerable personal investment— unless folks implement higher level wrapper libraries, that is.

I personally disagree with you on a bunch of things— this is the Internet after all— but you've been undoubtedly empowering graphics programmers for years, and I appreciate you.


Maybe one day you'll realize that being disagreeable on the internet is another service I've been freely providing for public benefit, with very little thanks and at great personal cost.


Slightly OT: I had never heard of WebGPU before. So, in theory, it will be feasible in a few years to run models like Stable Diffusion on my GPU through the browser, without fighting with conda, pip, and drivers? Or did I get the scope of this wrong?


You can already run ML models with GPU acceleration in WebGL using tensorflow.js and other libraries. WebGPU will make things better for sure, but I think the major obstacles to running large models like Stable Diffusion will not be solved by WebGPU.

WebGPU will not expose Nvidia-specific tensor cores, at least initially. But the main issues are with loading and processing gigabytes of data in browsers, which aren't addressed by WebGPU at all. You'll have difficulty downloading and storing all that data and quickly run into out of memory crashes.
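
For example, GPU-accelerated inference via the WebGL backend already looks roughly like this today (a sketch; the model URL is hypothetical, and a model of Stable Diffusion's size would hit the download and memory limits mentioned above long before it ran):

    import * as tf from '@tensorflow/tfjs';

    async function run() {
      await tf.setBackend('webgl');   // run kernels as WebGL shaders on the GPU
      await tf.ready();

      // Hypothetical hosted model; multi-GB weights are where this falls apart.
      const model = await tf.loadGraphModel('https://example.com/model/model.json');

      const input = tf.zeros([1, 224, 224, 3]);
      const output = model.predict(input) as tf.Tensor;
      console.log(await output.data());
    }

    run();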


In principle, yes. With WebGPU (but also WebGL2) we get access to compute shaders, which makes it feasible/easier to evaluate a model like Stable Diffusion.

The biggest issue I see is that those models (or rather their trained parameters) are usually pretty large (several GiB). So it'll take a while to set up in the browser before the evaluation can actually start. It'll also require a lot of bandwidth on both ends.

A lot of those things should already be doable with the fragment shaders we get from WebGL and a lot of hackery, like clever (ab)use of textures. So the fact that we're not seeing a lot of this is probably not because it's impossible right now...
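
For reference, a minimal WebGPU compute dispatch looks something like this (a sketch of the standard API; the shader just doubles values in a storage buffer, and readback is omitted):

    const adapter = await navigator.gpu.requestAdapter();
    const device = await adapter!.requestDevice();

    const module = device.createShaderModule({
      code: `
        @group(0) @binding(0) var<storage, read_write> data: array<f32>;
        @compute @workgroup_size(64)
        fn main(@builtin(global_invocation_id) id: vec3<u32>) {
          data[id.x] = data[id.x] * 2.0;  // stand-in for real model math
        }`,
    });

    const pipeline = device.createComputePipeline({
      layout: 'auto',
      compute: { module, entryPoint: 'main' },
    });

    const buffer = device.createBuffer({
      size: 1024 * 4,
      usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
    });

    const bindGroup = device.createBindGroup({
      layout: pipeline.getBindGroupLayout(0),
      entries: [{ binding: 0, resource: { buffer } }],
    });

    const encoder = device.createCommandEncoder();
    const pass = encoder.beginComputePass();
    pass.setPipeline(pipeline);
    pass.setBindGroup(0, bindGroup);
    pass.dispatchWorkgroups(1024 / 64);
    pass.end();
    device.queue.submit([encoder.finish()]);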


> The biggest issue I see is that those models (or rather their trained parameters) are usually pretty large (several GiB). So it'll take a while to set up in the browser before the evaluation can actually start. It'll also require a lot of bandwidth on both ends.

Might not be feasible due to memory constraints (I'm not sure), but browsers can load data from disk into memory without having to touch the network. So you could in theory ask the users to download the model separately, then ask them to select it from a file picker, and the browser can have access to it that way.
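
Something along these lines (a sketch with a hypothetical #model-file input; whether multi-gigabyte weights survive in a single ArrayBuffer is the open question):

    // <input type="file" id="model-file"> somewhere in the page
    const input = document.querySelector<HTMLInputElement>('#model-file')!;

    input.addEventListener('change', async () => {
      const file = input.files?.[0];
      if (!file) return;

      // Reads the locally selected weights into memory; no network involved.
      const weights = await file.arrayBuffer();
      console.log(`Loaded ${weights.byteLength} bytes of model weights`);
      // ...hand the buffer to whatever inference code consumes it.
    });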


Next step: browser JS VMs need to allow you to access gigs of RAM in memory; it's surprisingly locked down.


It'd be nice if the models could be loaded via WebTorrent[1], which would significantly reduce the bandwidth requirement for whoever releases them.

[1] https://github.com/webtorrent/webtorrent


The models are certainly available on torrents, though.


Right. Another option is running the model locally from a web page, served from the project source directory, which is still a lot easier than setting up a platform specific GPU accelerated ML dev environment.


WebGL2 doesn't support compute shaders.


It did for a short while as an extension, then Google decided to throw away Intel's work and only support compute in WebGPU.


It's a replacement for WebGL, but the key difference is that it supports compute shaders.


It also supports many more features in rendering which previously were unavailable or so badly supported you couldn't use them. It's basically like going from 2005 to 2015 in terms of capabilities.

There are still notable gaps though, like no bindless resources or raytracing. So don't expect e.g. Unreal Engine to run on WebGPU without some significant compromises.


Someone patched Chrome to enable raytracing with WebGPU: https://github.com/maierfelix/chromium-ray-tracing


WebGL2 also supports them! And now that Apple finally added support for it (like half a year ago?), it can be used fairly safely.


I believe WebGL compute shader development has been halted[1], in favor of making WebGPU happen. Though it's possible to run it in dev builds in some cases, I'm pretty sure there's no browser that has it on by default, and likely this won't happen.

Apple does support WebGL2, but compute shader support is not part of that core spec. The demos[2] certainly don't work in Safari in my quick test.

[1]: https://registry.khronos.org/webgl/specs/latest/2.0-compute/

[2]: https://9ballsyndrome.github.io/WebGL_Compute_shader/webgl-c...


Thanks, Google, for dropping them from Chrome; and we all know the Web is basically whatever Chrome does, so WebGPU it is.


Is it a replacement? It looks more like a framework that lives on top of webgl.


You shouldn't be downvoted, your question is reasonable even if it demonstrates a misunderstanding. So it's not a wrapper over WebGL, it's a more modern, lower-level GPU programming model. There are equivalent efforts in native environments to replace OpenGL - notably Vulkan and (Apple's) Metal.

As far as I understand, the approach allows far more flexibility, at the expense of higher complexity. It's less "stateful" than WebGL, which basically gives you a big class that manages everything OOP-style.
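
Roughly, the difference in programming model looks like this (a sketch; gl, device, and the buffer/pipeline/pass objects are assumed to already exist):

    // WebGL: one big state machine; each call mutates hidden global state.
    gl.useProgram(program);
    gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
    gl.drawArrays(gl.TRIANGLES, 0, 3);

    // WebGPU: state is baked into pipeline objects up front, then replayed
    // explicitly while recording a command buffer.
    const encoder = device.createCommandEncoder();
    const pass = encoder.beginRenderPass(renderPassDescriptor);
    pass.setPipeline(renderPipeline);       // shader + fixed-function state in one object
    pass.setVertexBuffer(0, vertexBuffer);
    pass.draw(3);
    pass.end();
    device.queue.submit([encoder.finish()]);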


It's the other way around: WebGPU is to Vulkan what WebGL is to OpenGL. It's less low-level than Vulkan, though.


WebGPU is to WebGL as Vulkan is to OpenGL, a lower level, low overhead graphics API.


If anyone is looking for something more production ready (that uses webGL), I highly recommend https://github.com/pmndrs/react-three-fiber

we use it to build https://flux.ai


For anyone wondering what the difference is between R3F and Use.GPU:

R3F is a react reconciler for three.js. You manipulate a three.js scene, which the classic non-reactive renderer then draws.

Use.GPU does not have a non-reactive model nor does it have a notion of a scene. Components just compose and expand to produce lambdas which make calls to the GPU.

It's basically React without a DOM, where components can _only_ render other React components.

The docs go into detail of how this is accomplished and what it means.


My experience is the opposite. I found react-fiber to be a leaky and, more importantly, unnecessary abstraction. It is very easy to embed https://threejs.org/ in any frontend framework and use it directly without any wrapper.


> https://usegpu.live/docs/guides-shaders

> Use.GPU has a powerful WGSL shader linker. It lets you compose shaders functionally, with closures, using syntactically correct WGSL. It provides a real module system with per-module scope.

> The shader linker is very intentionally a stand-alone library, with no dependencies on rest of the run-time. It has minimal API surface and small size.

> Every shader is built this way. Unlike most 3D frameworks, there is no difference between built-in shaders and your own. There is no string injection or code hook system, and no privileged code in shader land. Just lots of small shader modules, used à la carte.

I see this project is written in Typescript and Rust, so.. is this shader linker written in Rust?

Could I, in principle, reuse it for running everything in Rust, with no Typescript in the frontend?

What I actually want to do: use this with https://sycamore-rs.netlify.app/

That is,

> https://acko.net/blog/the-gpu-banana-stand/

> To recap, I built a clone of the core React run-time, called Live, and used it as the basis for a set of declarative and reactive components.

Replace Live with Sycamore


No, the only part that's Rust is the truetype font renderer, which uses ab_glyph. But even the conversion to SDFs is in Typescript.

Moving data in and out of WASM is still kind of slow, relatively speaking (until GC lands), and Rust's ergonomics are not a good fit for the API design which has a ton of optionals.

The shader linker uses a Lezer grammar rather than the official WGSL tree sitter grammar so the AST node structure is more optimized for quick unconditional consumption.


Wow, you have a custom WGSL parser? Very interesting, I may have a use for this...


Can't run the demos on my M1 Mac - I can't find the #enable-unsafe-webgpu flag in chrome://flags. Anyone know a workaround? Or is it just not available on Apple Silicon?

Edit: I didn't read it properly - the flag is only available on the Chrome dev channel [0] (and presumably also Canary). The demos work great on my M1 now.

[0] https://www.google.com/chrome/dev/?platform=mac&extra=devcha...


I've been waiting for something like this for a long time. I've been using GPU.js and this looks like a huge leap forward. Can't wait to dig in!


> WebGPU is only available for developers, locked behind a browser flag.

This isn't strictly true. There's an origin trial that enables you to use WebGPU in Chrome stable without flipping any flags, and even ship demos to real users today, on any supported platform. That's currently only Chrome on Windows and macOS, but more platforms are in progress.


Yeah, and it's still behind an explicitly labelled "Unsafe WebGPU" flag. Description has this warning:

> Warning: As GPU sandboxing isn't implemented yet for the WebGPU API, it is possible to read GPU data for other processes.

Their own wording says it's unsafe, and possible to read GPU data from other processes, but can run from a site that has the origin trial enabled.

Is that really the case? Or should they change their wording?


Okay, what are the privacy and security implications? If WebGPU goes GA, and a website requests it and gets approved (if it's going to be behind a permission at all), what would it be able to learn about my machine, and what might it theoretically be able to do beyond "normal" GPU compute?


Interesting!

I feel that instead of reimplementing the React <Component> tree and its hook system (what they call "Live"), they could have made a custom React renderer instead (like react-three-fiber, for example). Focus on the interesting bits (WebGPU), don't reinvent the wheel (React).


Seems like my integrated i7 laptop GPU can't run any of these examples due to incomplete Vulkan support for Haswell under Linux. Sad, since I doubt it'll be completed.

Maybe I'll have to retire this almost 10 year old laptop sometime soon, even though it still runs pretty well.


I read through the really long technical article about this from last week and it looks very interesting.

But the proof is in the pudding, as they say, so it's cool to see it's a real product that you can try.

Has anyone taken the time to experiment with this yet? What's your feedback?


This library is really inspiring to me in many respects. WebGPU is interesting; declarative programming on WebGPU is fantastic.


I run Firefox Nightly and turning on dom.webgpu.enabled didn't enable the demos, even after a browser restart :/ Version is 106.0a1 (2022-09-09) (64-bit), I'm running on NixOS.

"Error: Cannot get WebGPU adapter"


It was shown on caniuse.com that Firefox Nightly has optional WebGPU support, however it does not run the WebGPU triangle demos. Confused :(


Even if I enable WebGPU and change my user agent to Chrome, it won't let me through to the demos with Safari.

I find some demos work and others don't. I assume the feature is still early days on all browsers.


Is the view that Apple holds back WebGL to force people to write native apps still widely held?


I've no clue why, but it doesn't work even if I set the flag in the config.

Firefox Dev latest on an Intel Mac.


I think Firefox's WebGPU implementation is a little behind Chromium's at the moment.


Chrome Canary only I suppose.


I know this is hacker news, but I think the idea of hacking general purpose compute infrastructure on top of graphics/gaming hardware is starting to get out of hand.

When will we flip it around?


GPUs are already pretty generic. The main diff is that a GPU will execute N threads in lockstep, where N = 32 or 64, like a very wide and always-on SIMD.

The dedicated render pipeline still exists, and is still needed for things like Z buffering and interpolation, but you'd be surprised how much of what used to be fixed hardware is being emulated by the driver.


What's wrong with it?

It turns out that graphics hardware is perfect for certain kinds of non-graphical scientific computing. Dedicated GPGPU hardware already exists, but people don't have those at home and/or on their regular computers that they use.


What is wrong with it can be summed up by Henry Ford's quote: “If I would have asked people what they wanted, they would have said faster horses.”

(In this analogy horse=GPU, car/truck=general purpose compute hardware)


I really don't see the issue. GPU is specialised hardware for certain computations - faster and/or more energy-efficient. Your analogy is weird too; if a CPU is a car, then a GPU is a specialized car that can go 1000x faster under certain specialized circumstances.

That said, your original issue was with general purpose compute infrastructure on top of GPU, which can be applied to this analogy too; using the Thrust SSC to transport a container is probably not the best. Possible, but suboptimal.


It has already. Chips that are ostensibly for graphics contain more and more silicon dedicated to general-purpose workloads. Turns out doing it like this is usually cheaper than manufacturing dedicated parallel processors and GPUs separately, since the requirements have so much overlap.


How? And who is we? So long as the GPU remains closed and proprietary, we can't port a general purpose OS to it and have it take over the boot.


> When will we flip it around?

Do you mean software rendering? Or more like baking GPUs into CPUs (already done decades ago, see SoC or integrated graphics hardware).


Looks like they don't care about the compute pipeline.


They do. A very detailed article was posted not long ago:

https://acko.net/blog/the-gpu-banana-stand/


Page doesn’t load on mobile Safari.


It's mentioned at the top of the page that WebGPU isn't enabled by default in _any_ browser. The linked[1] caniuse page mentions this as well: while available in Safari, it has to be enabled in the developer config.

[1] - https://caniuse.com/webgpu


How was I supposed to see the page saying WebGPU isn't enabled by default if the page itself wasn't loading?

Seems back up now, might have been HN hug of death.


You can enable WebGPU in Settings > Safari under Experimental Features.


[flagged]


From the same school of thought which brought you "What the heck is Kiev?" as though that attitude would win any favors.


I'm unclear as to how to interpret this response. Did you mean 'Kyiv' and that I'm somehow anti-neologistic, or something?

- ed: damnit, I shoulda gone with 'anti neo-legoistic'.

- ed - ed: I genuinely meant my initial question - I'd not seen that word in that context, so wondered what it was. I thought it was perhaps some term I'd not encountered prior. I do know Americans call Lego pieces 'Legos', hence my follow-up question.


> Lego bricks/pieces?

Yes, or (more generically) "building blocks".



