Games want to be more photorealistic and the hardware to fake it is getting faster and cheaper.

I remember in 2008 when Intel demoed ray tracing[0] for Quake Wars. It ran on a quad-socket system (4 CPUs x 2.7GHz) at around 25 FPS, and even Intel admitted that ray tracing is just "brute forcing" the complex optical physics of a scene.

Between then and now, we've gotten much prettier-looking games that do NOT require ray tracing for their visual effects (and that run on far fewer CPU cycles), thanks to much more "clever" techniques and algorithms (DoF, bloom, subsurface scattering, PBR, etc.). If it weren't for the RTX graphics cards setting the precedent, there would probably be more research into making things prettier without having to "brute force" things.

Nowadays, you can drop a 100GB mesh into UE5 Nanite[1] and skip a lot of the work involved in making things pretty. You don't need to generate or even understand displacement maps if you can just add more polygons to the model. You don't need to billboard the mountain way in the distance if Unreal does it on the fly. Who cares if that means including a 250MB mesh in your assets, versus a 5MB HDRI.

So yes, producers have either become lazy or are simply unable to meet the demands of the hardware by coming up with better algorithms. That's unlikely to change until either A) hardware stops advancing and can't run the latest games, or B) games become so large that they're impossible to download.

[0]: https://www.gamedeveloper.com/programming/sponsored-feature-...

[1]: https://docs.unrealengine.com/5.0/en-US/nanite-virtualized-g...

[2]: https://twitter.com/mariobrothblog/status/163604089376436224...



> clever" techniques and algorithms

Lately I've become hyper-aware of games doing screen-space reflections and suddenly finding it distracting when I point my view down and the reflection of the scene at the horizon vanishes off the water/floor.
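For anyone wondering why that happens: SSR can only sample what's already in the current frame's color/depth buffers, so anything whose reflection source scrolls off-screen has nothing left to reflect. A toy sketch of the lookup in Python, with made-up names, not any engine's actual code:

    # Toy illustration of the screen-space reflection limitation.
    # The reflected color can only come from the on-screen color buffer; once the
    # reflected point projects outside the screen, there is nothing left to sample.

    def ssr_lookup(reflected_xy, color_buffer, width, height, fallback=(0, 0, 0)):
        """reflected_xy: screen coordinates the reflection ray lands on."""
        x, y = int(reflected_xy[0]), int(reflected_xy[1])
        if 0 <= x < width and 0 <= y < height:
            return color_buffer[y][x]   # reflection found in the frame
        return fallback                 # off-screen: the reflection "vanishes"

    # Tiny 4x3 "frame". Tilt the camera down and the horizon leaves the frame,
    # so its reflection on the water can no longer be sampled.
    screen = [[(30, 60, 90)] * 4 for _ in range(3)]
    print(ssr_lookup((2, 1), screen, 4, 3))    # on-screen  -> real color
    print(ssr_lookup((2, -5), screen, 4, 3))   # off-screen -> fallback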

I upgraded from a 1070 to a 3080 right when the 3080 came out, and I actually have yet to play an RTX game.

> bloom

I hate that this ever became a thing.

Bloom just makes the scene blurry and bright parts end up blown out.


> If it weren't for the RTX graphics cards setting the precedent, there would probably be more research into making things prettier without having to "brute force" things.

Not really. A lot of this research included making rudimentary versions of RTGI that could run on hardware of that age (see SVOGI). Of course you can bake everything and make amazing-looking levels that are completely static. But that doesn't work with open-world games, especially if they have different times of day. And it doesn't work in highly dynamic scenes. We basically hit the end of the road for scaling graphics without RT. And now we have hardware that can do real-time path-traced rendering of major AAA titles, and it looks miles better than anything smoke and mirrors could've ever delivered.


I'd rather have more RAM in the GPU and give up the ray tracing hardware.

The real problem in the industry is that GPUs are not getting cheaper. It cost about $250 for a 6GB NVidia 1060 in 2016, and it costs about $250 for a basic NVidia card today that can do slightly better. Meanwhile, people are buying $100 Walmart laptops and trying to play games on them.


You can buy an RX 6600, which is roughly 2x faster than a 1060 and has 8 GB of VRAM, for $200[1]. If we take inflation into account, the 1060 cost around $300 in today's money. So basically you are getting ~30% more RAM and 2x the performance for ~30% less money. In other words, you are getting performance about 20% better than a GTX 1080 for roughly a third of the money (the 1080 launched at $600), and even less once you adjust that launch price for inflation. And it uses less energy too, so the TCO difference is even bigger. And I'm not even talking about the expanded feature set.

[1] https://www.newegg.com/gigabyte-radeon-rx-6600-gv-r66eagle-8...


Yes, Radeon seems to be cheaper than NVidia right now. The cheapest board on NVidia's site is now $249.[1] It's an MSI RTX 3050 with 8GB of memory.

[1] https://store.nvidia.com/en-us/geforce/store/


I mean, you could have a bunch of other things in the GPU instead of those cores, but the reality is that ray tracing serves multiple target markets for GPUs very well, and actually meets a demand people have: better graphical quality. In contrast, while I'm jealous of my friend with a 16GB 4080, versus my paltry 10GB 3080, it does not realistically impact anything I do outside of occasional ML retraining tasks. Neither did my old 8GB card. Gaming workloads just generally don't need it, because people there are mostly bound by compute, not texture or RAM limits -- and solutions like super-resolution/upscaling instead improve performance and memory usage all at once.

> and it costs about $250 for a basic NVidia card today that can do slightly better

Eh, the 3050 is like 30% faster than a 1060 in every single benchmark, has 30% more VRAM, and costs like $280. I don't think 30% is "slightly better". The Arc A750 is also around the same price point and is nearly 80% better, but I admit Nvidia has the best software stack on the market at the moment, so the 3050 is a fairer comparison.


Blame CPU manufacturers. There isn’t a market for $50 graphics cards when Intel and AMD have integrated graphics that can handle low-end 3D games.

There’s still a gap between integrated and $250 cards, but that’s mostly filled by used last-generation cards.


> Between then and now, we've gotten much prettier-looking games that do NOT require ray tracing for their visual effects (and that run on far fewer CPU cycles), thanks to much more "clever" techniques and algorithms (DoF, bloom, subsurface scattering, PBR, etc.). If it weren't for the RTX graphics cards setting the precedent, there would probably be more research into making things prettier without having to "brute force" things.

Unreal Engine 5 has a new lighting system called "Lumen". It seems to be visually almost on the level of normal brute-force ray tracing, while not requiring any special ray tracing hardware and still running with decent performance on mid-range hardware. It's a smarter approach that tries to accumulate lighting information on textures over time instead of computing everything again for each frame.
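To give a rough idea of what "accumulate lighting information over time" means, here's a heavily simplified Python sketch of the general pattern (a toy, not Lumen's actual implementation): keep a persistent cache of lighting per surface cell and only re-light a small budget of cells each frame, blending new results into the old ones.

    # Toy sketch of amortized lighting caching (the general idea, NOT Lumen's code).
    # Instead of re-lighting every surface every frame, keep a persistent cache and
    # refresh only a small budget of entries per frame, blending new samples into
    # the cached value so the result converges over several frames.

    import random

    NUM_CELLS = 1024         # cached surface cells (think texels on surface "cards")
    BUDGET_PER_FRAME = 64    # how many cells we can afford to re-light each frame
    BLEND = 0.2              # how quickly fresh lighting replaces cached lighting

    cache = [0.0] * NUM_CELLS
    cursor = 0               # round-robin position

    def compute_lighting(cell_index):
        # Stand-in for an expensive lighting computation (e.g. tracing a few rays).
        return 1.0 + 0.1 * random.random()

    def update_cache():
        global cursor
        for _ in range(BUDGET_PER_FRAME):
            i = cursor % NUM_CELLS
            cache[i] = (1.0 - BLEND) * cache[i] + BLEND * compute_lighting(i)
            cursor += 1

    for frame in range(400):
        update_cache()       # shading reads `cache` instead of re-lighting everything

    print(sum(cache) / NUM_CELLS)   # converges toward ~1.05 despite the tiny per-frame budget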


There's a good breakdown from one of the Lumen devs for more detail on how it came about: https://knarkowicz.wordpress.com/2022/08/18/journey-to-lumen...


That's really interesting, although it goes way over my head. :)


> and even Intel admitted that ray tracing is just "brute forcing" the complex optical physics of a scene

Modern ray-traced games are much less "brute force" than what they were doing back in 2008. They use clever techniques and algorithms like temporal denoising and radiance caching to achieve high-quality ray tracing effects with a tiny number of secondary rays per pixel (the budget is often under one ray per pixel).

It's not really fair to call it "brute force" at all, just a different set of clever algorithms.
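To put the "under one ray per pixel" point in concrete terms, here's a minimal Python sketch of plain temporal accumulation, the simplest building block of those denoisers (real ones add motion reprojection, variance clamping, spatial filtering and more; the numbers below are made up):

    # Minimal sketch of temporal accumulation: blend each frame's very noisy estimate
    # into a running history so variance shrinks over time. This is why a budget of
    # roughly one (or fewer) stochastic rays per pixel can still converge to a clean
    # image when the history is reused across frames.

    import random

    TRUE_RADIANCE = 0.6   # the value we are trying to estimate for one pixel
    ALPHA = 0.1           # history blend factor (smaller = more history retained)

    def noisy_one_ray_estimate():
        # Stand-in for a single stochastic ray: correct on average, very noisy per frame.
        return TRUE_RADIANCE + random.uniform(-0.5, 0.5)

    history = noisy_one_ray_estimate()
    for frame in range(300):
        sample = noisy_one_ray_estimate()
        history = (1.0 - ALPHA) * history + ALPHA * sample   # exponential moving average

    # A single frame's estimate can be off by up to 0.5; the accumulated history is much closer.
    print(f"one frame: {noisy_one_ray_estimate():.3f}   accumulated: {history:.3f}")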


> Between then and now, we've gotten much prettier-looking games that do NOT require ray tracing for their visual effects (and that run on far fewer CPU cycles), thanks to much more "clever" techniques and algorithms (DoF, bloom, subsurface scattering, PBR, etc.). If it weren't for the RTX graphics cards setting the precedent, there would probably be more research into making things prettier without having to "brute force" things.

Despite being a graphics researcher, I’ve been a little skeptical in the past about ray tracing in games. Recently I’ve started hearing for the first time from some hard-core gamers that turning on ray tracing makes a “massive” difference in the quality of the experience, and that they’ll use it as long as the frame rate can stay above 30 fps most of the time.

That said, with all due respect, your speculation about clever non-ray-tracing techniques seems a little off-target from my perspective. There has been, and still is, a huge amount of research into these, and the entire field of techniques is hitting limits and failing to progress as fast as ray tracing. There are multiple reasons for this. One problem is that many of these one-off raster effects require their own memory & compute. Another is that they each require their own implementation and their own tuning, they impose constraints on the art and the artists, and they only work in narrow ranges of curated input. Shadow maps are a good example: they need a buffer of their own, and get too close to or too far from the shadows, or make the scene too big, and they look terrible. You have to spend precious art+dev time just to make them work, you have to spend precious memory on this one effect, it eats into your rendering budget, it only takes care of the shadows, its quality has a hard ceiling, and you still have 13 other effects to work on that have the same story.

With ray tracing, once you set up the memory for it, it can render all the effects at the same time. It’s the unified theory of rendering, and it doesn’t require per-effect memory or much per-effect tuning, unlike the grab bag of raster and screen-space effects. Ray tracing has its downsides, and it’s certainly cycle-hungry, but there is a good reason game devs are gravitating toward it: ray tracing is fundamentally easier and fundamentally looks better than the one-off raster effects.

Regarding ray tracing being ‘brute force’, keep in mind that if you’re rendering many instances, or have lots more geometry than pixels visible to the camera, ray tracing can be more efficient than raster. It usually isn’t in today’s games, because games have hard budgets that limit the amount of geometry - mostly due to raster algorithms! - but the trend of increasing geometric complexity is clear, and ray tracing is the likely path forward. Nanite is very interesting and has made waves - it’s being adopted and used for the same reason that use of ray tracing is growing: primarily because it makes game development easier. Maybe we’ll see these two things merge in the future; they are not mutually exclusive. Nanite is carefully designed to make sure you’re always rendering polygons that are about 1 pixel in size, plus or minus, and it has pushed scene complexity, but there are certainly open questions about whether this approach will continue to work as complexity grows.
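As a back-of-envelope illustration of the geometry-vs-pixels point (the constants below are made up; real costs depend heavily on overdraw, culling, BVH quality and memory behavior): rasterization cost grows with the number of triangles submitted, while tracing one ray per pixel through a BVH grows roughly with pixel count times the log of the triangle count, so at some geometric density the curves cross.

    # Back-of-envelope scaling comparison, illustrative only.
    import math

    PIXELS = 3840 * 2160                     # one 4K frame

    def raster_cost(triangles, per_tri=1.0):
        # Rasterization touches every submitted triangle (setup, clip, rasterize).
        return per_tri * triangles

    def raytrace_cost(triangles, per_step=25.0):
        # One primary ray per pixel, ~log2(triangles) BVH traversal steps per ray.
        return per_step * PIXELS * math.log2(max(triangles, 2))

    for tris in (1e6, 1e8, 1e10):
        r, rt = raster_cost(tris), raytrace_cost(tris)
        winner = "raster cheaper" if r < rt else "ray tracing cheaper"
        print(f"{tris:.0e} triangles: raster={r:.2e}  raytrace={rt:.2e}  ({winner})")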



