
It’s not just the language. That code is impossible to directly translate to a pixel shader because GPUs only implement fixed-function blending. From the shader’s point of view, render target pixels (and depth values) are write-only in the graphics pipeline; they can only be read and combined with the shader’s output by fixed-function pieces of the GPU: blending, depth rejection, etc.
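
To make that concrete, here's roughly what a plain fragment shader looks like (untested GLSL sketch): the color output is declared `out` and is write-only, so any combination with the pixel already in the render target has to happen in the fixed-function blend unit, not in the shader.

    #version 450

    // The only thing the shader can do with the render target is emit a new
    // value; fragColor is write-only and the current pixel is not visible here.
    layout(location = 0) out vec4 fragColor;

    void main() {
        vec4 src = vec4(1.0, 0.0, 0.0, 0.5);
        // Any src/dst combination (e.g. src*a + dst*(1-a)) has to be configured
        // on the fixed-function blend unit, not computed in this shader.
        fragColor = src;
    }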

It’s technically possible to translate the code into a compute shader/CUDA/OpenCL/etc., but that’s going to be slow and hard to do due to concurrency issues. You can’t just load/blend/store without a guarantee that other threads won’t try to concurrently modify the same output pixel.
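
Something like this (untested GLSL compute sketch, assuming the destination is an r32ui image holding packed RGBA8; `uDst` and `shade` are made-up names) — the compare-and-swap retry loop is the part you can't avoid if two invocations can land on the same pixel:

    #version 450
    layout(local_size_x = 8, local_size_y = 8) in;
    layout(r32ui, binding = 0) uniform coherent uimage2D uDst; // hypothetical binding

    vec4 shade(ivec2 p) {            // stand-in for the actual per-pixel work
        return vec4(1.0, 0.0, 0.0, 0.5);
    }

    void main() {
        ivec2 p   = ivec2(gl_GlobalInvocationID.xy);
        vec4  src = shade(p);

        uint prev = imageLoad(uDst, p).r;
        uint cur;
        do {
            cur = prev;
            vec4 dst     = unpackUnorm4x8(cur);
            vec4 blended = src * src.a + dst * (1.0 - src.a); // classic "over" blend
            // Only store if nobody else wrote the pixel since our load; retry otherwise.
            prev = imageAtomicCompSwap(uDst, p, cur, packUnorm4x8(blended));
        } while (prev != cur);
    }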



Tilers (mostly mobile and Apple) generally expose the ability to read & write the framebuffer value pretty easily - see things like GL_EXT_shader_framebuffer_fetch or Vulkan's subpasses.
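
E.g. with GL_EXT_shader_framebuffer_fetch the color attachment can be declared `inout` and blending becomes an ordinary expression in the shader (rough ESSL sketch, the uniform name is made up):

    #version 300 es
    #extension GL_EXT_shader_framebuffer_fetch : require
    precision mediump float;

    // With framebuffer fetch the output is inout: reading it gives the current
    // render-target value, so the blend can be written right here.
    layout(location = 0) inout vec4 fragColor;

    uniform vec4 uSrcColor; // stand-in for the shaded source color

    void main() {
        vec4 dst  = fragColor;                                  // read current pixel
        fragColor = uSrcColor * uSrcColor.a + dst * (1.0 - uSrcColor.a);
    }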

For immediate-mode renderers (i.e. desktop cards), VK_EXT_fragment_shader_interlock seems available to address those "concurrency" issues, and DX12's rasterizer-ordered views (ROVs) seem to expose similar abilities, though the performance hit may be larger than on tiling architectures.
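
Rough sketch of the interlock path (untested; on the GLSL side the extension is spelled GL_ARB_fragment_shader_interlock, and the storage-image name is made up). Inside the begin/end pair, invocations that map to the same pixel are serialized, so the load-blend-store can't race with an overlapping fragment:

    #version 450
    #extension GL_ARB_fragment_shader_interlock : require

    layout(pixel_interlock_ordered) in;
    layout(rgba8, binding = 0) uniform coherent image2D uColorBuf; // render target as storage image

    layout(location = 0) in vec4 vSrcColor; // shaded color from the previous stage (assumed)

    void main() {
        ivec2 p = ivec2(gl_FragCoord.xy);

        beginInvocationInterlockARB();
        vec4 dst     = imageLoad(uColorBuf, p);
        vec4 blended = vSrcColor * vSrcColor.a + dst * (1.0 - vSrcColor.a);
        imageStore(uColorBuf, p, blended);
        endInvocationInterlockARB();
    }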

So you can certainly read-modify-write framebuffer values in pixel shaders using current hardware, which is what is needed for a fully shader-driven blending step.



