Hacker News

That’s the dream of HW GPU designers everywhere. I’ve got a bunch of dead Cell and Larrabee chips lying around upstairs that’d like to disagree, though. Coverage rasterization is just one tiny piece of the GPU story; the really hard parts (software-wise) are things like varying interpolation, primitive synthesis (and varying synthesis), change-of-basis, and specific sampling and blending modes. It turns out that once you add in the HW to make “a few edge cases” faster, you end up with all of the HW you need for a complete GPU.
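To make “varying interpolation” concrete: it’s the per-pixel blending of vertex attributes (colors, texture coordinates, normals) across a triangle, which fixed-function hardware traditionally does via barycentric weights. A minimal sketch of that math (function names and the two-channel “color” are my own, purely for illustration):

```python
# Illustrative sketch of varying interpolation via barycentric weights.
# Names and data here are made up for the example, not from any real GPU API.

def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p within triangle (a, b, c)."""
    def edge(u, v, q):
        # Twice the signed area of triangle (u, v, q) -- the classic edge function.
        return (v[0] - u[0]) * (q[1] - u[1]) - (v[1] - u[1]) * (q[0] - u[0])
    area = edge(a, b, c)
    return (edge(b, c, p) / area, edge(c, a, p) / area, edge(a, b, p) / area)

def interpolate_varying(p, verts, varyings):
    """Blend one per-vertex attribute (e.g. a color) at pixel p."""
    w0, w1, w2 = barycentric(p, *verts)
    return tuple(w0 * v0 + w1 * v1 + w2 * v2
                 for v0, v1, v2 in zip(*varyings))

tri = ((0.0, 0.0), (4.0, 0.0), (0.0, 4.0))
colors = ((1.0, 0.0), (0.0, 1.0), (0.0, 0.0))  # two channels per vertex, for brevity
print(interpolate_varying((1.0, 1.0), tri, colors))  # -> (0.5, 0.25)
```

This is exactly the kind of per-fragment arithmetic that either lives in dedicated interpolator hardware or gets folded into the shader, as discussed below in the thread.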


Sure, there are things that will always benefit from some ASIC magic (I don't see texturing going away any time soon, for example). And we even get new specialized hardware, such as hardware RT. But advances in both performance and rendering approaches make some of the old fixed function unnecessary. For example, Apple already dropped most of the fixed function related to blending and multisampling (MSAA) — since they serialize fragments before shading, they don't get any race conditions. And from what I understand they don't have any interpolation hardware either; it's all done in the shader. Of course, they pay for all this convenience with the additional fixed-function complexity of the TBDR binner, but I think this illustrates how an algorithmic change can make certain things unnecessary.
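The TBDR binner mentioned above is the stage that sorts incoming triangles into screen tiles so each tile can later be shaded in isolation. A rough sketch of the idea (tile size, screen dimensions, and the bounding-box-only overlap test are all simplifying assumptions of mine, not how any shipping binner actually works):

```python
# Illustrative sketch of tile binning for a TBDR-style pipeline.
# Parameters (TILE, conservative bbox overlap) are assumptions for the example.

TILE = 32  # tile edge in pixels; real hardware tile sizes vary

def bin_triangles(tris, width, height):
    """Map tile coordinates (tx, ty) -> indices of triangles whose
    bounding box overlaps that tile (a conservative test)."""
    bins = {}
    for i, tri in enumerate(tris):
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        x0 = max(0, int(min(xs)) // TILE)
        x1 = min((width - 1) // TILE, int(max(xs)) // TILE)
        y0 = max(0, int(min(ys)) // TILE)
        y1 = min((height - 1) // TILE, int(max(ys)) // TILE)
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                bins.setdefault((tx, ty), []).append(i)
    return bins

tris = [((2, 2), (30, 2), (2, 30)),      # fits entirely in tile (0, 0)
        ((20, 20), (60, 20), (20, 60))]  # spans tiles (0,0) through (1,1)
print(bin_triangles(tris, 128, 128))
```

Because every fragment a tile will ever receive is known before shading starts, the sorted-by-tile ordering is also what lets fragments be serialized, which is why the blending and MSAA fixed function becomes unnecessary.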

Tomorrow's real-time rendering is all about RT, point clouds, microprimitives, and all that interesting stuff; the fixed-function rasterizer simply won't be as useful anymore.


What is "change-of-basis"? My Google-fu failed me.




