>Whether this will end up being better than really well optimized polygon based systems like Nanite+photogrammetry is also an open question
I think this is pretty much settled unless we encounter any fundamental new theory roadblocks on the path of scaling ML compute. Polygon based systems like Nanite took 40+ years to develop. With Moore's law finally out of the way and Huang's law replacing it for ML, hardware development is no longer the issue. Neural visual computing today is where polygons were in the 80s. I have no doubt that it will revolutionize the industry, if only because it is so much easier to work with for artists and designers in principle. As a near-term intermediate step we will probably see a lot of polygon renderers with neural generated stuff in between, like DLSS or just artificially generated models/textures. But the stuff we have today is like the Wright brothers' first flight compared to the moon landing. I think in 40 years we'll have comprehensive real time neural rendering engines. Possibly even rendering output directly to your visual cortex, if medical science can keep up.
That's only true today. And it's quite difficult for artists by comparison. I don't think people will bother with the complexities of polygon based graphics once they no longer have to.
Not really. Look at how many calculations a single pixel needs in modern PBR pipelines just from shaders. And we're not even talking about the actual scene logic. A super-realistic recreation of reality will probably need a kind of learned, streaming compression that neural networks are naturally suited for.
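To make that concrete, here's a rough sketch of what a single pixel already costs per light in a Cook-Torrance / GGX style direct-lighting term, before shadows, ambient occlusion, indirect lighting or any scene logic. The function and parameter names are just illustrative, not taken from any particular engine:

```python
import numpy as np

def pbr_direct(N, V, L, albedo, roughness, metallic):
    """One pixel, one light: standard Cook-Torrance specular + Lambert diffuse."""
    H = (V + L) / np.linalg.norm(V + L)
    NdotV = max(np.dot(N, V), 1e-4)
    NdotL = max(np.dot(N, L), 0.0)
    NdotH = max(np.dot(N, H), 0.0)
    HdotV = max(np.dot(H, V), 0.0)

    a = roughness * roughness
    # Normal distribution function (GGX / Trowbridge-Reitz)
    D = a * a / (np.pi * (NdotH * NdotH * (a * a - 1.0) + 1.0) ** 2)
    # Geometry term (Smith with Schlick-GGX, direct-lighting k)
    k = (roughness + 1.0) ** 2 / 8.0
    G = (NdotV / (NdotV * (1 - k) + k)) * (NdotL / (NdotL * (1 - k) + k))
    # Fresnel (Schlick), base reflectance blended by metalness
    F0 = 0.04 * (1.0 - metallic) + albedo * metallic
    F = F0 + (1.0 - F0) * (1.0 - HdotV) ** 5

    specular = D * G * F / (4.0 * NdotV * NdotL + 1e-4)
    diffuse = (1.0 - F) * (1.0 - metallic) * albedo / np.pi
    # Multiply by the light's radiance and sum this over every light hitting the pixel
    return (diffuse + specular) * NdotL
```

And that's one light, one sample, with none of the texture fetches, shadow map lookups or post-processing that surround it.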
They already are. But the future will probably not look like this if the current trend continues. It's just not efficient enough when you look at the whole design process.
As I said twice now already, efficiency is not just a question of rendering pixels. When you take the entire development lifecycle into account, there is a vast opportunity for improvement. This path is an obvious continuation of current trends we see today: Why spend time optimising when you can slap on DLSS? Why spend time adjusting countless lights when you can use real time GI? Why spend time making LODs when you can slap on Nanite? In the future people will ask "Why spend time modelling polygons at all when you can get them for free?"
Nobody will spend time modelling polygons. They will convert Gaussian splats to polygons automatically, and the application will rasterise the polygons. This is how it's already done; going back to ray marching NeRFs would be a step backwards and an incredible waste of performance. Polygons are here to stay for the next 20 years.
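To be concrete about what that conversion step looks like, here is a minimal sketch: splat the Gaussians into a density volume, then extract an isosurface as triangles. Real splat-to-mesh pipelines are considerably more sophisticated; `splats_to_mesh` and its parameters are made up for illustration, and the only real API used is scikit-image's `marching_cubes`.

```python
import numpy as np
from skimage.measure import marching_cubes  # standard isosurface extraction

def splats_to_mesh(means, scales, opacities, grid_res=128, iso=0.5):
    """means: (N,3) splat centres, scales: (N,3) per-axis std devs,
    opacities: (N,) in [0,1]. Assumes the scene fits in the unit cube."""
    lin = np.linspace(0.0, 1.0, grid_res)
    gx, gy, gz = np.meshgrid(lin, lin, lin, indexing="ij")
    grid = np.stack([gx, gy, gz], axis=-1)            # (R, R, R, 3)

    # Accumulate each splat's opacity-weighted Gaussian falloff into a volume
    # (axis-aligned only here, ignoring splat rotation for brevity).
    density = np.zeros((grid_res,) * 3)
    for mu, s, a in zip(means, scales, opacities):
        d2 = (((grid - mu) / s) ** 2).sum(axis=-1)
        density += a * np.exp(-0.5 * d2)

    # Extract the iso-surface as a triangle mesh the rasteriser can consume.
    verts, faces, _, _ = marching_cubes(density, level=iso)
    return verts / (grid_res - 1), faces              # back to unit-cube coords
```

The point being: the splats are an authoring/capture format, and what ships to the GPU is still triangles.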