It’s not just the language. That code is impossible to translate directly to a pixel shader, because GPUs only implement fixed-function blending. Render target pixels (and depth values) are write-only in the graphics pipeline; they can only be read by fixed-function pieces of the GPU: blending, depth rejection, etc.
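To make that concrete, here's a minimal sketch of an ordinary fragment shader (names like v_color/fragColor are placeholders): it can only emit a *source* color, and any mixing with the existing render-target value is configured as fixed-function blend state that runs after the shader.

```glsl
#version 450
layout(location = 0) in vec4 v_color;
layout(location = 0) out vec4 fragColor;

void main() {
    // The shader can only produce a source color. The "over" blend with the
    // existing render-target value, src.a * src + (1 - src.a) * dst, is set up
    // as fixed-function state (glBlendFunc / VkPipelineColorBlendAttachmentState)
    // and applied after the shader; the destination value is never visible here.
    fragColor = v_color;
}
```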
It’s technically possible to translate the code into a compute shader/CUDA/OpenCL/etc., but that is going to be slow and hard to do, due to concurrency issues. You can’t just load/blend/store without a guarantee that other threads won’t concurrently modify the same output pixel.
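For example, a naive compute-shader translation (a sketch in GLSL; shadeFragment() is a placeholder for whatever produced the fragment color) has exactly that data race:

```glsl
#version 450
layout(local_size_x = 8, local_size_y = 8) in;
layout(binding = 0, rgba8) uniform image2D dst;

// Placeholder for whatever shading produced the fragment color.
vec4 shadeFragment(ivec2 p) {
    return vec4(0.5, 0.0, 0.0, 0.5);
}

void main() {
    ivec2 p = ivec2(gl_GlobalInvocationID.xy);
    vec4 src = shadeFragment(p);
    vec4 dstColor = imageLoad(dst, p);                      // load
    vec4 blended = src * src.a + dstColor * (1.0 - src.a);  // blend ("over")
    imageStore(dst, p, blended);                            // store
    // Race: if two overlapping primitives cover the same pixel, two
    // invocations can interleave between the load and the store and
    // silently drop one of the blends. Nothing orders them.
}
```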
Tilers (mostly mobile GPUs and Apple's) generally expose the ability to read and write the framebuffer value pretty easily - see things like GL_EXT_shader_framebuffer_fetch or Vulkan's subpasses.
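As a sketch of what that looks like with framebuffer fetch (GLSL ES; v_color is a placeholder input): the fragment output becomes readable, so you can blend in the shader itself.

```glsl
#version 300 es
#extension GL_EXT_shader_framebuffer_fetch : require
precision mediump float;

in vec4 v_color;
// With framebuffer fetch, the fragment output is declared inout and
// carries the current framebuffer value on entry to the shader.
inout vec4 fragColor;

void main() {
    // Programmable "over" blend, done entirely in the shader.
    fragColor = v_color * v_color.a + fragColor * (1.0 - v_color.a);
}
```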
For immediate-mode renderers (i.e. desktop cards), VK_EXT_fragment_shader_interlock is available to address those concurrency issues, and Direct3D 12's rasterizer-ordered views (ROVs) expose similar abilities. The performance hit may be larger than on tiling architectures, though.
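Here's a sketch of the interlock path using the OpenGL GLSL extension GL_ARB_fragment_shader_interlock (VK_EXT_fragment_shader_interlock exposes the same begin/end-interlock model in SPIR-V; the image binding and v_color are assumptions for illustration):

```glsl
#version 450
#extension GL_ARB_fragment_shader_interlock : require

// Overlapping fragments at the same pixel are serialized, in primitive
// order, between the begin/end interlock calls.
layout(pixel_interlock_ordered) in;
layout(binding = 0, rgba8) uniform coherent image2D colorBuffer;
layout(location = 0) in vec4 v_color;

void main() {
    ivec2 p = ivec2(gl_FragCoord.xy);
    beginInvocationInterlockARB();
    // Inside the critical section, load/blend/store is safe.
    vec4 dst = imageLoad(colorBuffer, p);
    vec4 blended = v_color * v_color.a + dst * (1.0 - v_color.a);
    imageStore(colorBuffer, p, blended);
    endInvocationInterlockARB();
}
```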
So you can certainly read-modify-write framebuffer values in pixel shaders on current hardware, which is what's needed for a fully shader-driven blending step.