
Which of those images look real to you? To me, pretty much all of them look like rendered images, or have some aspect that gives away the fact that they are rendered.



We need a blind test where you evaluate a set of images on whether you think each one is real or rendered.


With a caveat: the real images should not be cherry-picked to look as close to CGI as possible; if anything, it should be the other way around. Real and rendered can look indistinguishable that way too, but we want real-looking CGI, not carefully arranged CGI-looking reality. The biggest giveaway is the simplicity and pristine sterility of rendered scenes: no mess, no clutter, no irregularity. Just look at the photos people haphazardly snap in the real world. Pick those for a blind study and CGI is nowhere near. Pick carefully lit and arranged artificial scenes with heavy post-processing, like studio shoots, hotel brochures, or car ads, and the difference will be much less obvious.


It seems like your point is not so much that we can't technically render photorealistic scenes, but that artists don't put in the work to recreate the incredible level of detail we have in reality.


This isn't a big distinction to me? Either way, it's still true that it's nigh impossible to generate a realistic render of an everyday scene. Whether that's because the environment modeling isn't there yet or the ray tracing isn't there yet doesn't make a big difference to me. Maybe the super detailed modeling will be the "last stand" prior to achieving general photorealistic renders, and the final breakthrough will be some kind of improved ML procedural generation algorithm for everyday lifelike objects.


Yes. But in principle, tech could be advanced enough that it doesn't require so much manual effort. Like, if someone takes the time and works long hours on modeling a worn book with all its wrinkles, curls, and crookedness, you can probably ray trace it realistically. But by that logic you could also say that artists can digitally paint a whole scene without any 3D engine or rendering tech.

The point is, currently it only works for simple scenes and needs tons of manual work otherwise. Reality is realistic effortlessly.


I love this idea. Quick question for you, is this scene real or rendered? https://imgur.com/a/MpKMo6j

If you think the answer is obvious, then CGI definitely has more work to do. If not ... ??


Looks real. The dirt on the computer, the curls of the books and magazines and many other small details look very real.

If this is CGI, then I'm impressed and want to see more from where this came from.


Alas, it's real.

Agreed with you that I'd be super impressed if anyone could render at that quality, but I haven't seen it yet.


In the version that is shrunk down to fit my browser, it kind of looks rendered, because you can't see the subtle details that another user mentioned. Dust, the curling of book pages, etc.

But in the full version[0], it's clearly a photo. It has been HDR'd, though, which can sometimes create shading that gives a somewhat rendered look.

[0]https://i.imgur.com/Wn2XgFg.jpg


The full version also has artifacts from the camera that ironically give it away as a real photo. At full resolution you see this characteristic noise plus noise-smoothing effect that cameras apply.


That effect would be much easier to fake with a digital filter than rendering the entire complicated scene from scratch, though. Adding noise and noise-smoothing is simple. So if that's your heuristic, then you're going to be fooled by digital renders that just incorporate that simple-to-implement effect.
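To illustrate how cheap that filter is, here's a minimal sketch (my own illustration, not anyone's actual camera pipeline): add Gaussian sensor noise to a clean render, then run a naive box-blur "noise reduction" pass over it, which is roughly the noise-plus-smoothing signature the parent describes.

```python
import numpy as np

def add_camera_artifacts(img, noise_sigma=0.02, kernel=3, seed=0):
    """Fake camera artifacts on a clean render (grayscale, values in [0, 1]):
    Gaussian sensor noise followed by a crude separable box blur standing in
    for in-camera denoising. Real pipelines are far more sophisticated."""
    rng = np.random.default_rng(seed)
    noisy = img + rng.normal(0.0, noise_sigma, img.shape)
    k = np.ones(kernel) / kernel
    # Blur rows, then columns (separable box filter).
    smoothed = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, noisy)
    smoothed = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, smoothed)
    return np.clip(smoothed, 0.0, 1.0)

render = np.full((32, 32), 0.5)  # stand-in for a clean rendered image
photo_like = add_camera_artifacts(render)
```

A few lines like this versus months of scene modeling: any heuristic based on sensor noise is trivially spoofable.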


Yeah, absolutely. It also shows though how you can easily accidentally bias a blind test. If an artist did manage to create a perfectly realistic render but forgot to add in one of the subtle unrealistic effects that cameras cause, people would be able to tell which images were photos.

(Although, maybe you could argue that if the challenge is to make photorealistic renders rather than realistic renders, nailing the camera artifacts is part of the challenge.)


Yeah, the goal has to be photorealistic renders, because what would the control data for the test be, if not photos? You can't test an image on a screen against "what real life looks like"; then the one on the screen is obviously the rendered one every time.


It's not real multiple-exposure HDR, it's just whatever default faux single-exposure digital HDR the Pixel phone does. If it were real HDR, the photo wouldn't be so blown out and over-exposed from the sunlight on the left side, particularly on the leaf.
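The distinction can be sketched in a few lines (a toy illustration under my own assumptions, not what any phone actually does): single-image "digital HDR" is just a tone curve applied after the sensor has already clipped, while true multi-exposure HDR estimates radiance from the exposures where a pixel didn't clip, so blown highlights can be recovered.

```python
import numpy as np

def reinhard_tonemap(lum):
    """Global Reinhard tone curve L / (1 + L): compresses the range,
    but cannot recover detail already clipped in a single exposure."""
    return lum / (1.0 + lum)

def fuse_exposures(exposures):
    """Toy multi-exposure fusion: for each pixel, average the radiance
    estimates (sensor value / exposure factor) from the exposures where
    that pixel is not clipped (sensor value < 0.99)."""
    estimates = [np.where(img < 0.99, img / ev, np.nan)
                 for ev, img in exposures]
    return np.nanmean(np.stack(estimates), axis=0)

# True scene radiance: one mid-tone pixel and one bright pixel.
scene = np.array([0.2, 3.0])
long_exp = np.clip(scene * 1.0, 0.0, 1.0)    # bright pixel clips to 1.0
short_exp = np.clip(scene * 0.25, 0.0, 1.0)  # bright pixel survives

fused = fuse_exposures([(1.0, long_exp), (0.25, short_exp)])
```

In the single long exposure the bright pixel is a flat 1.0 and no tone curve will bring its detail back, which matches the blown-out leaf in the photo; the fusion recovers the true radiance from the short exposure.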


Heh, I agree! I'd love to try something like that if it exists.


I think Instagram is actually helping this from the other direction. My brain processes the two Jeep pictures as real but with a heavy-handed Instagram filter applied.





