Racing for Realism (pixar.com)
396 points by mariuz on Dec 21, 2017 | 150 comments


Going to work at a competitor to Pixar in ‘96 was the first time I was exposed to real-life computer science problems that needed more compute power than could be imagined.

Between then and 2017, think about the staggering performance increases that became available, more so than in many fields, because of the embarrassingly parallel nature of many of the problems.

And yet there is still no end in sight to the compute power that could be leveraged.

I wonder what would change if we had say, 3 orders of magnitude performance improvements across the board starting tomorrow. What might be some of the first practical advancements to be exploited and seen?

Let’s assume the amount of compute power that could be leveraged effectively and practically for computer graphics is finite, however large.

How much less compute power would that be, compared to what it’s going to take to achieve general AI? Or even a Turing-effective chat bot?


That's an interesting question, with an interesting premise.

Will realism (graphics + physics simulation) take less compute than AI for sure?

If we look at the limit case, the reality I can see out my window has more atoms than are in my brain. So, in theory perhaps, human-equivalent AI will similarly require less compute than realistic graphics & physics simulation/animation.

OTOH, evolution took billions of years and presumably depends on all the animals and all the physics that has ever happened, so maybe inference will always be cheaper than graphics, and maybe AI training and evolution will always be more expensive?

I have no idea, but it's a fun thought experiment.

FWIW, I also worked for a Pixar competitor, and the renderfarm we had was indeed more compute than I had been exposed to before. That said, we didn't make a huge amount of effort to optimize the system; we only made sure really stupid things didn't cause overnight renders to take longer than required for dailies the next day. I'm absolutely certain that at least 1 order of magnitude is available to software render farms, if the primary focus shifts from production to optimization. I'm halfway sure that using today's and tomorrow's GPUs could get you at least another order of magnitude, but that comes with a second level of expensive engineering effort. In short, I think if you really really wanted it, and had the money & time to commit, you could probably have your three orders of magnitude right now.


> In short, I think if you really really wanted it, and had the money & time to commit, you could probably have your three orders of magnitude right now.

I'd imagine something along the lines of a datacenter, filled with racks stuffed full of these:

https://www.nvidia.com/en-us/data-center/dgx-1/

...but repurposed for graphics rendering instead of deep learning.

In fact, it wouldn't surprise me to find out something like this actually exists (though whether for graphics or deep learning, I don't know).

Also - I wouldn't be surprised to learn that a datacenter full of D-Wave machines exists somewhere (perhaps next door to the datacenter filled with DGX-1 systems - I mean, something has to make sense of the data output).

Pure speculation, of course; I'd suspect we'd know about it though if it were the case, unless it was done as a government black project...


I like Pixar's 3D animated movies with non-realistic rendering (especially those from the 1990s and 2000s).

Though, like many, I dislike the CGI movies that started with the first Transformers film (with blue-screen action) and have continued with the dozens of boring superhero movies of the current fad. Action movies of the 1980s, 1990s, and early 2000s were so great. Nowadays, thankfully, they use more real stunts again, though we still have this mass of boring superhero films and hundreds of TV series that no one has time to watch. All we want is new, high-value Hollywood movies.


Personally, I think one of the main problems with current use of CGI (specifically in non-animated movies) is color treatment. A comparison between the original, unaltered version of Jurassic Park (JP) and Jurassic World (JW) shows my point. There is an increased tendency to change hue and saturation in post-production in order to convey ideas or set moods, but this makes CGI look faker than it is, and the brain is able to identify it.

EDIT: I prefer the look of JP over JW, not because I think it has better CGI, but because the colors look closer to reality.


Modern color treatment in film totally drives me crazy, independent of what it does to CGI.

It seems like every film coming out of Hollywood these days is processed exactly the same way:

* Crank down the low end so half the frame is solid inky black.

* Pick one or two key colors, and jack up their saturation.

* Desaturate everything else.
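
As a rough illustration of that three-step grade, here's a toy sketch using NumPy and Pillow. The thresholds and key colors are made up for illustration, and "frame.png" is just a placeholder filename:

    import numpy as np
    from PIL import Image

    # Toy "blockbuster grade": crush the shadows, keep an orange/teal pair of
    # key colors, and desaturate everything else. All numbers are invented.
    img = np.asarray(Image.open("frame.png").convert("RGB"), dtype=float) / 255.0

    # 1. Crank down the low end so a big chunk of the frame goes inky black.
    crushed = np.clip((img - 0.15) / 0.85, 0.0, 1.0) ** 1.4

    # 2./3. Desaturate globally, then push shadows toward teal and highlights
    #       toward orange so only the two key colors keep any punch.
    luma = crushed @ np.array([0.2126, 0.7152, 0.0722])
    desat = 0.35 * crushed + 0.65 * luma[..., None]
    teal = np.array([0.0, 0.35, 0.40])
    orange = np.array([1.0, 0.55, 0.20])
    graded = desat + 0.25 * (1.0 - luma[..., None]) * teal + 0.25 * luma[..., None] * orange

    Image.fromarray((np.clip(graded, 0.0, 1.0) * 255).astype(np.uint8)).save("graded.png")

Even this crude version pushes a neutral still a long way toward the over-dramatic-poster look described below.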

In other words, make the entire movie look like an over-dramatic poster for it. Throw any sense of naturalism or unique look out the door. Once you notice it, you see it everywhere. Here are the top live-action movies of 2016:

Rogue One

Cyan and olive: http://www.ramascreen.com/wp-content/uploads/2016/04/Rogue-O...

Captain America: Civil War

Yellow and red: http://cdn3-www.superherohype.com/assets/uploads/gallery/civ...

Why they felt the need to make Cap's lip rosy red is beyond me. Even better is this shot:

http://www.theglobaldispatch.com/wp-content/uploads/2015/11/...

This is a daylight shot. Look at how the characters are all squinting. Look at the big white fluffy clouds that should be diffusing the light and softening shadows. And yet, magically, all of the shadows are jet black and almost all of the color is gone.

The Jungle Book

Green and red (to draw attention to Mowgli):

https://www.moviedeskback.com/wp-content/uploads/2016/02/The...

Deadpool

Another daylight shot with nonsensical shading:

http://turntherightcorner.com/wp-content/uploads/2015/12/Dea...

The light is clearly diffuse, if you look at the shadows on the ground, and yet there is this deep black everywhere. Deadpool took the look further by only picking one color (red, Deadpool's outfit color) for the entire film.

Lest you think this is just an issue with superhero movies, let's ignore those and look at the other top non-superhero movies of 2016:

Fantastic Beasts and Where to Find Them

More of the same:

http://harrypotterfanzone.com/wp-content/2016/09/fantastic-b...

It also suffers strongly from the "It's a period movie so we're going to sepia tone everything so hard it looks like you're watching the movie through an aquarium full of pee" effect.

Hidden Figures

https://www.themarysue.com/wp-content/uploads/2016/11/hidden...

More of the period movie sepia along with an improbably jacked up complementary cyan.

Star Trek Beyond

If the past is warm, the future must be cool:

http://www.ramascreen.com/wp-content/uploads/2015/12/Star-Tr...

Blacks, grays and neon cyan, just like Rogue One. Look how unnatural his skin tone is! They wanted to jack up yellow to complement their primary blue so much that the poor dude has jaundice:

http://cdn.wegotthiscovered.com/wp-content/uploads/2016/05/s...

La La Land

I'll give this one some credit for varying it up a bit. Also, dramatic spotlit stage lighting is a logical part of the movie's look. But it's still prey to the same cliched look in places:

http://henrymollman.com/content/images/2017/01/Screenshot-fr...

Ghostbusters

Ectoplasm green and spooky violet were the key colors for the whole film. Secondary colors tend to come across as magical or eerie to viewers since they're different from the safe and familiar primary colors. Of course, jacking up magenta doesn't do flattering things to skin:

http://www.ramascreen.com/wp-content/uploads/2016/03/Ghostbu...

Also, what color is Melissa McCarthy's coat? Is it actually black? Who fucking knows.

Central Intelligence

The traditional comedy look is brightly-lit and saturated, so Central Intelligence didn't fare quite as poorly as the above. Whenever they had a chance to dim the lights, though, they revert to the same cliched look as well:

https://i2.wp.com/doblu.com/wp-content/uploads/2016/09/centr...

The Legend of Tarzan

http://images3.static-bluray.com/reviews/14209_5.jpg

Jesus, we get it, colorist. They're in the wet jungle. We don't need to be physically assaulted with the colors blue and green to figure that out.

...You get the idea.

Go back and watch a movie from before 2000 and you'll quickly realize how much more pleasant to look at most of them were, and how much more variety there was in look. Compare Alien to The Graduate. But it seems like ever since the rise of digital coloring and superhero movies, Hollywood has decided every single fucking movie needs to look like a comic book page, whether it needs it or not.

One recent movie I really liked that bucked this trend was Her. It was heavily colorized, but in a way that stood out. In shots like these, even though there's a strong color cast and a lot of dark, there's still always some color and detail in the shadows:

https://i1.wp.com/turntherightcorner.com/wp-content/uploads/...

https://3kpnuxym9k04c8ilz2quku1czd-wpengine.netdna-ssl.com/w...


I think that part of the reason this is happening is the move to digital cinematography, which makes this kind of adjustment from raw video files much easier.

Also, moving to digital allows you to basically configure your own "film look" rather than relying on the filmstock that you're feeding into your cameras.

Digital also tends to look a bit more flat and desaturated than film in general. So to make up for that, they make use of strongly direct lighting to cast shadows all over the actors (it's really noticeable on the face) to give depth to the image.

This Star Trek example is a good one of this: http://www.ramascreen.com/wp-content/uploads/2015/12/Star-Tr...

There's a massive stage light sitting right off to the left of him.


You are correct in every aspect. Natural, untreated color is something I definitely miss from movies of the 90s. Digital processing technologies have made it easier for people working in cinema to color-treat the movie as a product. Even directors who don't shoot digitally fall into this: Christopher Nolan and Quentin Tarantino both fail to deliver a natural and coherent visual representation of color.

EDIT: Her was a beautiful experience in itself.


The Netflix show Lady Dynamite does a sort of gag with colorization to define moods of scenes. Different timelines of her story have their colors overdone on purpose, to comedic effect. It's pretty clever.


https://i.imgur.com/sr1HLwV.jpg

Comparison image for Jurassic World. From top to bottom: first trailer, Super Bowl ad, TV ad from a later date.


Interesting to note that the background changed significantly in each of the shots.


I wonder if color graders have very different taste than I do, or they're pushed by producers to make the image garish.


That's correct.

You can compare Snyder's work in Batman v Superman to Joss Whedon's last-minute changes in Justice League and how the CGI is actually worse off in a film that has a higher budget and lower runtime than the former.


Justice League is at Star Wars: Attack of the Clones' level of horrendous abuse of CGI.


I mean, complaining about the color realism of a film with gigantic monster dinosaurs (whose coloration we may never know to begin with) seems a bit silly. Movies are not documentaries. I'd buy in a bit more for things like the BBC's 'Planet Earth' needing color realism, but even then it's ok to dramatize things.

Every Frame a Painting goes over Michael Bay's work and includes some color content he uses and WHY: https://www.youtube.com/watch?v=2THVvshvq0Q


...but the whole point of the effects in those movies is to bring the idea to life, to make the unbelievable seem real and believable. If it fails to do that, it’s not silly to discuss how and why.


I thought movies were there to make money, at least ones like the recent spate of Jurassic movies. JP, yeah, realism to a degree; Jack Horner became a household name because of it. And it was somewhat realistic for the research at the time. But nearly all the dinos should be covered in furry, emu-like feathers now if we want realism. And shaggy, dirt-brown feathers aren't good looking and don't 'dazzle'. Maybe for some it's worth $10.50 to watch, but not for most.


When JP was filmed, the discovery of dinosaurs having feathers hadn't happened yet, and big efforts were made to provide accurate representations of what was known about the creatures. Back then dinosaurs were still popularly depicted as scaly reptiles, so the skin had to look reptile-like; personally, I think they succeeded, and an even bigger success was shown in The Lost World, years later.


The relationship between dinosaurs and birds has been observed for at least 150 years, although the idea that birds are direct descendants of dinosaurs only became the consensus in the 1990s.

Still, the idea that at least some dinosaurs might have had feathers predates Jurassic Park by decades. I distinctly remember learning about this possibility in my first grade classroom, which would have been 1988 at the latest.


You are correct. The idea of dinosaurs with feathers is older than the 90s; even Dr. Alan Grant suggests it when speaking about raptors. On the other hand, the fossil evidence didn't appear until the mid-90s.


The new trailer for JW just came out, no feathers: https://www.youtube.com/watch?v=vn9mMeWcgoM


Yes, and the original point was that JP did a better job than JW.


It's not silly when we're specifically discussing REALISM


But with dinos, we can't. We've no idea what their colors were, whether they even had color receptors in their eyes, nor how they moved. Fantasy is just as good as reality when it comes to T-Rex coloration and whether or not they had feathers.

BTW, I tried to do a quick Google search on the current research and mostly came up with mixed answers. If anyone from HN knows about dino feather coverings, please chime in!


> Fantasy is just as good as reality when it comes to T-Rex coloration

The perception of realism would still matter.

Just use test audiences to see what people believe looks the most real.


No, the studios want returns on their investment, at least enough to recoup costs, and the perception of realism, to an audience, is not realism. What an audience thinks is real is often not real. Audiences love to hear sounds in space movies, yet that is impossible. Audiences think that spies have techno baubles and cool stuff, but they mostly have budget-rate bureaucracy. Audiences think that we all live happily ever after, but we all know that's not reality. Movies aren't documentaries; they are entertainment you pay for.

Also, a realistic T-Rex might not even survive in our modern atmosphere. Those lungs are so large because O2 content was about half of today's (http://www.firstpost.com/tech/news-analysis/atmospheric-oxyg...), and the mix of dinos they have in JP is just a mess in terms of evolutionary reality.


This thread has really activated your almonds, huh?


I mean, yeah? The Jurassic series has a special place in my heart; the films, by and large, are great monster movies. But they are just that, monster movies.

Trying to say that the dinos are realistic is just crazy. Maybe, kinda, in the first one, they were, a little bit. Even then Jack Horner got a lot of flak from the community over his consultation on the films. But the evidence we have now points to much different creatures than what we thought of even a few years ago. They aren't monsters, just standard terrestrial vertebrates trying to make their way in a different world. We have so much to learn from them, about climate change, about physiology, about adaptation, etc. So trying to keep the monster and the known 'reality' separate is a big deal to me.


I think you are really missing the point here. The audience doesn't care whether the dinosaurs depicted are scientifically accurate; they only care whether they look like they could be accurate, and that is a big distinction. It is about matching the viewer's expectations, not reality. In JP the dinosaurs look like they could exist, whereas in JW they looked like CGI (the so-called 'uncanny valley' effect in CGI). That being said, of course JW succeeded from the perspective of the investors, since it made so much money. Still, that aspect of the film is completely orthogonal to the aesthetic aspects that people here are discussing.


"Fantasy is just as good as reality when it comes to T-Rex coloration".

Then let's go for plaid and gingham dinosaurs, like the pink elephants on parade!

It doesn't need to match reality, it needs to aid suspension of disbelief. If color choices make the audience say, "That looks so fake", it hurts the storytelling.


Nice, I like the wear and aging element it affords the renderings.

This makes me wonder if at some point realism will extend to movement, whereupon a studio begins to shoot (render) feature films involving digital "people" as costs undercut actors. Will they be able to cultivate favorite "actors", so that you see an "actor" in different and disparate roles? In other words, create interest in and a following for a constructed "actor".


Reminds of "The Congress" by Ari Folman. "Robin Wright is an aging actress with a reputation for being fickle and unreliable, so much so that nobody is willing to offer her roles. [...] Robin agrees to sell the film rights to her digital image to Miramount Studios in exchange for a hefty sum of money and the promise to never act again. After her body is digitally scanned, the studio will be able to make films starring her, using only computer-generated characters."

https://en.wikipedia.org/wiki/The_Congress_(2013_film)


https://en.wikipedia.org/wiki/Simone_(2002_film)

"When Nicola Anders, the star of out-of-favor director Viktor Taransky's new film, refuses to finish it, Taransky is forced to find a replacement. Contractual requirements totally prevent using her image in the film, so he must re-shoot. Instead, Viktor experiments with a new computer program he inherits from late acquaintance Hank Aleno which allows creation of a computer-generated woman which he can easily animate to play the film's central character. Viktor names his virtual actor "Simone", a name derived from the computer program's title, Simulation One."


Actually Happened (not a movie plot)

http://www.criticalcommons.org/Members/kellimarshall/clips/A...

"The late Fred Astaire 'dances with' a Dirt Devil vacuum cleaner. This controversial ad, okayed by Astaire's daughter but protested by his widow, inspired "the Astaire Bill," which was passed in 1999 to 'eliminate the exceptions and place the burden of proof on those using celebrity images, forcing them to show that their use is protected by the First Amendment.'"


I remember that commercial, and Audrey Hepburn's chocolate commercial. And Bill Clinton's role in Contact.

More recently there was also some news about actors having to double check their contracts to make sure they didn't sign away the rights to 3D scans of them being used outside of the film they were in.


> More recently there was also some news about actors having to double check their contracts to make sure they didn't sign away the rights to 3D scans of them being used outside of the film they were in.

Yikes! An actor works once and would never work again.


If that sort of thing becomes commonplace, that won't be an issue, because they'll never work at all, and actor will slowly but surely cease to exist as a profession at the high end. See the comments upthread about Hatsune Miku and such; Japan is much farther down this road than the US is.


Future contracts with the phrase "face and body royalties."

Creepy.


That reminds me of Novint (who made the Falcon haptic interface) inventing haptic copyright licenses and then buying exclusive rights from game companies for ~$40,000 each. http://www.hawkassociates.com/clients/press/full_news.cfm?cl...


Aki Ross from Final Fantasy: The Spirits Within (2001, https://en.wikipedia.org/wiki/Final_Fantasy:_The_Spirits_Wit...) was actually supposed to be Square's first CG actress for exactly this kind of thing.


Which turned out to be nowhere close to what audiences were expecting. I still remember when Toy Story was shown in 1995, created on Silicon Graphics machines: people said everything was so crazy good that, at the rate of Moore's law, realism in animation would finally be achievable in ten years' time. Ten years on, The Spirits Within had come and gone and we were still nowhere close. Fast forward another ten years to 2015: we are definitely better, but realism and the digital actor are still pretty much a fantasy. At least in that case the audience knew it was not a real actor. That is 20 years of Moore's law; now Moore's law is gone and we are slowing down. What other kind of innovation will lead us to realism?


But realism was achievable and was achieved not that long after; The Lord of the Rings is still a very good movie. You can start to see that the CGI is a bit dated, but for a long time Gollum's character was the benchmark; worse, the Hobbit movies couldn't even top it, despite being made over ten years later.


Gollum was definitely a benchmark, maybe even still. Then there's Avatar and Caesar from the Apes movies. But we've yet to get close with actual humans. What started with Final Fantasy persists: we still have that "uncanny valley" in things like Tron: Legacy and Rogue One.

Side note: I can't find an article but I believe Gollum was one of the first times deaf movie-goers could actually read a digital actor's lips.


I find it weird how producers still insist on shoe-horning CGI humans into movies even when the technology clearly isn't there yet.

In Star Wars: The Last Jedi, Snoke is entirely CGI for no discernible purpose, and it's uncanny valley hell every second he's on screen. It's all in the animation, of course; the rendering itself is great, but there's something really off about his movements and facial expressions. What's odd is that they could easily have produced this character using a real actor in prosthetics, with some subtle CGI augmentations.

Tarkin in Rogue One is even worse, like something out of a video-game cutscene. I don't understand why they couldn't just have used an actor.


Gollum, while very good, was never good enough to be said to be photorealistic or something. Your eye would not be fooled that easily.


The problem with "digital actor" is that, while we've got the "digital" part pretty well covered, that leaves the "actor" part which falls squarely under Moravec's paradox and probably (IMO) even requires consciousness to achieve.


They don't need consciousness, they need to quit ignoring the laws of physics when making things move.

They keep breaking the viewer's suspension of disbelief because they want to show their characters being so bad-ass that they defy the laws of physics. Instead of seeing people with super-human traits, they see really expensive cartoon characters.


If you look at movies along the decades everything is amplified. People have shifted their references. I get mental fatigue watching movies these days because everything is fast, quick, chopped, extreme. It's like a manga, without the childish part. High power doesn't mean high dynamic. For instance in Se7en, there's one single gun shot, the whole movie is tame but when that shot occurs the shift is so strong you feel it. It's a musical thing, what makes it is as much the notes than the silence between them. Balance.


Hah, I have the equal and opposite problem: I try to watch highly regarded old movies and find they're just incredibly slow and boring. (Certain kinds of movies work with a slow pace, but even supposed action movies seem to take forever to get to the point).


Some of them are too full, I must admit. But the other day I watched Three Days of the Condor and it was just right. All the President's Men too.

I still wonder if this is only youth neurons being drugged on all formats.


I’m pretty sure there are multiple gunshots in Se7en. E.g. when they chase the guy through the apartment building.



So I take it this is at least the second shot? Damn fuzzy memory.


> Moore's law is gone

For CPUs, yes. For GPUs, not as far as I am aware. And rendering is the traditional GPU task.


No, he is right. Moore's law in its original meaning is gone, and with GPUs we are just shifting the goalposts to pretend it still applies.


Moore's law in its original 1965 meaning was an observation that transistor counts doubled every year and a projection that this would continue for the next decade. Moore revised the projection downward in 1975 to every two years, which remarkably has held into the 2010s. But it is transistor count, not necessarily performance. A coworker calculated that performance was doubling in 18 months. https://www.kth.se/social/upload/507d1d3af276540519000002/Mo...

Transistor counts continue on an exponential growth curve, particularly in GPUs. It may be time for another downward revision of the rate (2.5 years? 3 years?), but we aren't getting as much single-threaded performance out of the increased transistor count. This produces a growing performance gap between the highly parallel GPU architecture and the more limited parallelism of traditional CPUs.
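
As a back-of-the-envelope check on those rates over the 1996-2017 span discussed upthread (the doubling periods below are just the usual rules of thumb, nothing more):

    # Compounded doubling over 21 years at the two commonly quoted rates.
    years = 2017 - 1996
    for label, period in [("transistor count, 2-year doubling", 2.0),
                          ("performance, 18-month doubling", 1.5)]:
        print(f"{label}: about {2 ** (years / period):,.0f}x")

That works out to roughly 1,400x and 16,000x respectively, which puts the "three orders of magnitude" asked about at the top of the thread within the range these rules of thumb would predict.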


Wasn't Moore's law about transistor count per unit of area? If that's the case, then even GPUs are way below the predictions now, because chips certainly have more transistors, but their surface area is also increasing.


Components per integrated circuit. The link above quotes Moore's 1965 statement.


I feel like motion and facial expressions will be a much bigger hurdle than raw graphics. The answer to when graphics is not good enough is to get closer and model the physical material more accurately and with more detail. Not necessarily easy but relatively straightforward. Then it's just optimizations like this article describes (bump mapping doesn't add something that 100x sampling couldn't get, but it does it 100x faster).

But body motion and facial expressions are entirely different. It's not like an optimization problem where you could just throw more computation at it and it could go away, it's just not possible to do perfectly at all. The best we've been able to do is track the motion of real humans and map that to a digital model; we can't generate it from scratch. We don't even have a general and convincing model of how motion should work. With graphics we've been through half a dozen models, and now here we are with PBR+extensions and the result is fantastic.

Believable body and facial expressions are going to be a much bigger hurdle than we've seen with graphics.


We do have a general and convincing model of how motion should work - we have video examples of "proper" motion, so we can optimize models with the explicit goal of creating motion that's indistinguishable from real motion with e.g. generative adversarial neural network models.

Things like https://grail.cs.washington.edu/projects/AudioToObama/ are just the start, we can generate facial expressions from scratch with some quality today, and we can work to make those generated expressions better and more realistic in future.


A sample is not a model. Taking a picture of a surface, however detailed, doesn't get you any closer to rendering it.

Neural networks are exciting here though. But even if it gets off the ground, it will still be at the "Phong shading" level model for some time.


A single sample is not sufficient for a model, but if you have enough samples to infer the whole distribution, then that is sufficient.

A thousand pictures of a particular surface may easily carry sufficient information to enable rendering a million different pictures that are not distinguishable from the real ones; a thousand motion capture sets of different people walking carry sufficient information to allow generating new motion sets that are different from the original samples, but not distinguishable from real people walking.
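
A toy version of that claim, assuming the captures can be summarized by a simple distribution (a real system would use something far richer than a Gaussian fit, e.g. the GAN approach mentioned upthread):

    import numpy as np

    # Fake "walk cycles": 1000 captures of a 60-frame signal that differ in
    # amplitude, phase, and noise, standing in for real motion-capture data.
    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 2.0 * np.pi, 60)
    amp = rng.normal(1.0, 0.1, size=(1000, 1))
    phase = rng.normal(0.0, 0.2, size=(1000, 1))
    captures = amp * np.sin(t + phase) + rng.normal(0.0, 0.02, size=(1000, 60))

    # Infer the distribution from the samples, then draw new cycles that are
    # novel (no capture is copied) but statistically like the originals.
    mu = captures.mean(axis=0)
    cov = np.cov(captures, rowvar=False)
    new_cycles = rng.multivariate_normal(mu, cov, size=5)
    print(new_cycles.shape)  # (5, 60)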


I used to think this, too, but recent games like Metal Gear Solid V: The Phantom Pain have changed my mind. Astounding graphics and facial expressions.

Curious what Kojima will do with Death Stranding:

https://gamerant.com/death-stranding-still-shooting-scenes-m...

https://www.rollingstone.com/glixel/features/kojima-death-st...


These are great, but it's still not generating facial expressions, just mapping real faces onto digital ones.


We've already seen various things in that direction: characters performing plays (e.g. Monsterpiece Theater), one show doing a pastiche of another (e.g. the Death Note episode of Absolutely Lovely Children), fictional bands continuing outside the show they came from (EARPHONES or egoist) or bands that are simultaneously fictional and real (e.g. the one in Love Live), free-floating virtual characters being pushed as "celebrities" outside a particular show (e.g. Kizuna Ai). Look at e.g. the Hatsune Miku cover of the Virtual Insanity music video.

Black Rock Shooter is arguably already an example of what you described: a character "played" by Hatsune Miku. (Similarly subsequent efforts like Mekakucity Actors).


You may already know about Hatsune Miku[0]. I imagine it's only a matter of time before we see this kind of thing in film too.

[0]: https://ec.crypton.co.jp/pages/prod/vocaloid/cv01_us


This can happen only if they solve the problem of facial expressions. The more realistic the characters look, the more we expect them to behave like humans. You have probably noticed how weird animated human characters are: it's like something is off, but you can't really tell what. I suppose it's the microexpressions that animators can't express.

When we start rendering digital "people" as actors, doesn't that mean that whoever is the puppeteer needs to be a good actor himself? This possibility poses some interesting questions. Can AI learn the script and, given some personality traits, behave like an actor? Etc.


That being said, microexpressions have gone a long way already. But indeed,

> [...] whoever is the puppeteer needs to be a good actor himself

Example: https://youtu.be/jSJr3dXZfcg?t=212


I suspect that AI could help in this regard, more precisely generative adversarial networks[1]: one network to improve realism, and the other to detect uncanniness.

1. https://en.wikipedia.org/wiki/Generative_adversarial_network


Isn't that already the case for background actors that are not in focus? There are a lot of examples on YouTube where background actors are either digitally cloned or created entirely digitally. This goes back at least a decade; it's only a matter of time until it moves into the spotlight more. Planet of the Apes is a recent example of pretty impressive CG characters where only the movements come from real actors, but of course these are not humans and still look CG-ish.


Could be: a lot of the 'branding' of the actor these days is about curating what films they show up in. Like, if Johnny Depp is in a movie, you know generally what kind of film it'll be. The AA (Artificial Actor, haha) would benefit from having an agent, much like real actors, to keep consumers' meta-dissonance down.

That said, actors swap between wildly disparate roles all the time and no one seems to care!


The advantage of digital actors is that you can break down the job into multiple roles.

voice acting; on-scene presence, body language and facial expressions (acting in the theater sense); stunts; physical appearance.


Check the movie S1m0ne :) http://www.imdb.com/title/tt0258153/


They can already do the next best thing, and have been doing so since at least the original Matrix.

Damn it, it has reached the point where ads can reuse long-dead actors.


There was an HN post some time ago about an ML tool that can put the face of an actress on top of the face of a pr0n actress.


It is not limited to any sex, you can just as well map male faces on male actors.

https://www.reddit.com/r/deepfakes/


It was all super impressive until the last photo, the sunset shot over the bridge; there seems to be something artificial about it. Possibly the blur of the car on the far side, but it's more than that. Just not sure what.


One problem is that if you don't live at the same longitude as where this shot is simulated, the colours are literally going to look wrong, giving you an overall feeling of "there's something off about this and I don't know what". It's the same reason why you can regret buying a beautiful jacket in Norway, only to discover it looks "completely different" when you get back to your homestead in Wisconsin.


Why would longitude affect an image like this? Can you elaborate?


They meant latitude.


As per the other comment, I meant latitude, not longitude.


but would we look at a photograph taken in another part of the world and suspect it of being computer generated?


You would if it were in an article about computer generated imagery.


That's true. I guess the context is key here!


It’s the trailer. Too much extra light on the rear surface to make it visible. Reproducing that shot with a camera would not produce such vivid clarity on that surface.


Which itself is a problem of global illumination. It's quite easy to mostly mask the lack of absolutely perfect ray tracing illumination by adding a basic ambient illumination, but it has this sort of odd effect.

The other problem is that the scene is too clean - there's no haze, or smoke which you'd expect from a highway shot.


Comparing it to three random pictures of roads at sunset:

https://pixabay.com/p-280863/?no_redirect

https://pxhere.com/en/photo/118396

https://pxhere.com/en/photo/1165757

The main difference appears to be that the Pixar picture is way blurrier; the lines and detail in reality are just way crisper. Perhaps the Pixar picture is supposed to be in motion, but motion blur should be reduced over distance, while even the background trees and sky are blurry.

The blacks in Pixar's image also don't really seem black enough for a sharp sun at sunset. You'd think at least the cables right in front of the sun would have higher contrast. On the other hand, some highlights don't seem to be as strong as you'd expect (the metal railing on the opposite side).

Overall, you kind of expect your eyes to hurt in a situation like this, but it's all very soft and gentle.


The foliage on the horizon looks like video-game flat sprites to me, with heavy blur to make up for lack of detail.

The rear of the truck is lit up in a way that is only achievable with flash, reflectors or other illumination tricks, but it is achievable.


Not just the car, the sides of the image are slightly blurred. The effect is a little similar to chromatic aberration, which tends to give an image an "artificial" feel.


It's probably a grab from the movie that has 1/24s motion blur effect. Note the car on the opposite side of the road.


I think the dynamic range is too compressed. In a real photo, you'd expect to see the difference in brightness between the clouds and the shadows be far greater.


The only part that really threw me off was the graphics on the race-car hauling truck. Other than that it just looked like an overstylized picture with some sort of filter on it.


Since the article doesn't go into a whole lot of technical detail about the shading technique, here's the pixar whitepaper describing it: https://graphics.pixar.com/library/BumpRoughness/paper.pdf


Maybe there will be artificial intelligence painters for CGI movies? What I mean is AI that learns from real photos of similar objects and paints the computer-generated images in a similar style. Deep learning AI can already colorize black-and-white images quite well.

Maybe the artificial CGI look can be seen as an art form in itself, and 100% realism is not the end goal for children's movies.


At a glance, there appears to be almost no rendering performance difference vs standard normal mapping. Seems like it would be feasible in realtime engines. Baked normal maps are commonplace now in leading game titles. AR/VR stereoscopic views would make anisotropic surface effects, diffraction, etc even more obvious for those micro details and really help sell the illusion. Very interested in seeing Cars 3 in 3D to test this hypothesis...


Valve paper on it from 2015: http://media.steampowered.com/apps/valve/2015/Alex_Vlachos_A...

Scroll to the part about normal maps.


It’s an additional texture fetch on the GPU; it’s virtually free if you discount storage costs. It’s been in games for years.


Additional texture fetch is by no means free but when the alternative is 100x SSAA you can get away with a lot.


I swear for about 3 minutes I thought the title was "Racism for Realism". I said to myself, of course, Pixar. Can't have a believable fairytale without it.


> So, problem solved, right? Not entirely. Though these microfacet approaches take surface orientation into account and give us realistic results while shading, the problem is that they don’t take into account the geometric microfacet details contributed by Bump or Displacement mapping which can end up inside a single pixel and get filtered out

Could someone explain this in more detail?


Check the bump and bump+roughness disk-platter images.

Basically, the scratches done via displacement mapping aren't filtered through surface roughness in the previous approach. The new approach applies the roughness filtering on top of the displacement map, so even displaced geometry participates in the roughness map. In a physically based rendering pipeline, it is essentially a swap between the bump and roughness filters.
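
The general idea can be sketched in a few lines: average the bump/displacement normals that fall inside the filter footprint, treat the shortening of that average as variance, and fold it into the roughness. This is the Toksvig/LEAN-style formulation rather than Pixar's exact math (their paper, linked elsewhere in the thread, derives its own filter), so treat it purely as a sketch:

    import numpy as np

    def filtered_roughness(normals, base_roughness):
        """Fold the spread of a footprint's bump normals into roughness."""
        n_avg = normals.mean(axis=0)
        r = np.linalg.norm(n_avg)              # shortens as normals disagree
        variance = (1.0 - r) / max(r, 1e-6)    # a common variance estimate
        return np.sqrt(base_roughness ** 2 + variance)

    # Toy footprint: scratch-like normals that would vanish inside one pixel.
    rng = np.random.default_rng(0)
    tilt = rng.normal(0.0, 0.15, size=(256, 2))
    normals = np.column_stack([tilt, np.ones(256)])
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    print(filtered_roughness(normals, base_roughness=0.05))

In effect, micro-detail that would otherwise be filtered out inside the pixel comes back as extra roughness instead of simply disappearing.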


That was a good read. I'm excited to see these gains trickle down into video game rendering. A 35% gain on normal mapping performance without any user intervention would be huge.

The next step is to explore a similar method with parallax mapping so that we can see some of these gains transfer to VR titles.


Parallax mapping will not benefit from this, since the limiting factor there is performing a raymarch on a heightfield to find an exact intersection. At every step along the ray you need to test whether you are inside or outside of the heightfield. This means offsetting the uv coordinate (the xy position inside the texture) by the ray vector, and then using that coordinate to sample the texture again, to check whether or not you penetrated the heightfield. The number of texture lookups quickly becomes the bottleneck, especially on large textures, since they incur cache misses. To give you an idea: for every parallaxed pixel on the screen the heightmap texture might be looked up several dozens of times. You don't get anywhere near subpixel accuracy before performance grinds to a halt. Parallax mapping is view dependent, so roughness mapping, even if somehow applicable, would need to be highly anisotropic for it to work, which means a huge storage cost.
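
A toy CPU-side version of that inner loop (fixed step count, no refinement pass, a random array standing in for the height texture), just to show where the lookup count comes from:

    import numpy as np

    rng = np.random.default_rng(1)
    heightmap = rng.random((512, 512))       # stand-in height texture in [0, 1]

    def parallax_march(uv, view_dir, steps=64):
        """March a view ray across the heightfield; count texture lookups."""
        uv = np.array(uv, dtype=float)
        step_uv = np.asarray(view_dir[:2], dtype=float) / steps
        ray_h, step_h, lookups = 1.0, 1.0 / steps, 0
        for _ in range(steps):
            uv += step_uv                    # offset the uv by the ray vector
            ray_h -= step_h
            lookups += 1                     # one texture fetch per step
            texel = heightmap[int(uv[1] * 511) % 512, int(uv[0] * 511) % 512]
            if ray_h <= texel:               # ray dipped below the heightfield
                break
        return uv, lookups

    hit_uv, n = parallax_march(uv=(0.25, 0.25), view_dir=(0.3, 0.1, -1.0))
    print(n, "texture lookups for a single pixel")

Even this crude version pays one texture read per step per pixel, which is why the lookup counts described above add up so quickly on large heightmaps.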


> for every parallaxed pixel on the screen the heightmap texture might be looked up several dozens of times

I didn't realize it was on the order of dozens. Just to clarify, we're talking pixels and not texels, right? This isn't dependent on the resolution of the map?

> roughness mapping, even if somehow applicable would need to be highly anisotropic for it to work

I didn't think about that, I guess you're right. How huge, exactly? Seems like something you could compress very well if you combine textures into a larger megatexture.


Why do we try to recreate reality instead of maintaining some amount of fantasy/imagination/unrealism?

When does it make more sense to simply photograph a car than to try and make a digital image of a car look as real as possible?


To your second question, it makes sense to photograph real cars when you want a movie of real cars doing normal & safe car things. If your car has eyes and a mouth and talks, then photography might not be an option.

For non-Pixar movies, if the cars are racing and crashing, then for safety it often makes more sense to go digital. Digital is very often much cheaper than photography as well. It's extremely expensive to take a movie crew out into a city to film real cars, and it can get more expensive depending on the venue and the crew size and the number of shots.

To your first question, why not is an equally valid question. Realism is a choice, and fantasy/unrealism is also a choice. People employ both choices all the time, so it's not one or the other, and Pixar's quest for realistic shading has no bearing on what other people choose. But in general, realism is good for believability, it makes things look tangible and can increase the viewer's emotional connection to a story. Sometimes non-photorealistic animation can break the viewer out of a story, and sometimes (depending on the story & the environment & details) non-photorealism can enhance a story. It all depends on what the story is, and how good it is.


If you're making a digital image of a car being crushed... only when it is, and will remain in future shots, cheaper to crush a physical car.

(Note that in some domains, such as video games, the cost of actually crushing the car is effectively infinite. In others, such as movies with animated talking cars, it may only be ridiculously high.)


Who says realism in particular details has to be contrary to the purposes of fantasy? You can still combine realistically expressed nuances in imaginative ways, or use the methods of realism to make "realistic" expressions of things that never existed before. If anything this elevates fantasy, opening up more possibilities to execute fantasy in a visually compelling way.

If you still want to do fantasy that's deliberately less detailed than reality, you can still do that. But it ought to be a conscious choice of individual creators. The degree of detail can be an aesthetic preference selected from a range of available options, rather than something you have to default to because we refused on principle to develop alternative methods.


> When does it make more sense to simply photograph a car than to try and make a digital image of a car look as real as possible?

If you're making a movie? Not as often as you might think.

With real cars you have to acquire them, transport them, close off streets to film them, in many shots rent out a helicopter to get a shot in, you're limited to certain times of day to get the right lighting...

Animation became part of film and (especially) TV quite early, and it wasn't about some stylized unrealistic aesthetic; it's just plain cheaper. And then of course you're going to want to get around the limitations of the form, and put your anthropomorphizing eyes onto objects that are as realistic as possible.


As for realism vs. fantasy, Disney did some great work improving physical realism as part of making the highly fantastical Wreck-It Ralph.[1] Once they had knobs set up that accurately tune the physical properties of their materials, the artists were able to push certain properties beyond realistic values without the result looking completely fake. Multiple real-time rendering systems, including Unreal Engine, cite this paper as a source for their new "physically based" material systems.

[1] https://disney-animation.s3.amazonaws.com/library/s2012_pbs_...


We wouldn't have VR/AR today unless we pushed our GPUs till they cried. We wouldn't even have deep neural networks today if we didn't push hyper-realism because most deep neural network training without today's GPUs would be too slow to be useful.

The truth of the matter is that just because it seems trivial in one domain doesn't mean it won't benefit other domains, the problem is that we can't see the future so who knows what is going to pay dividends.


Very interesting article. I remember being mesmerized by the old tracking pans in Cars and thinking how far CGI had come; reading this afterward provides an excellent background to that feeling.


It's crazy to think that someday soon we will have assembled actors.

People who don't actually exist, who appear to be human but are CGI, and whom the audience prefers over real, living humans.


This idea has been explored at least a few times in movies:

S1m0ne (2002) http://www.imdb.com/title/tt0258153/


Why is this page hijacking the scroll wheel? As far as I can tell there's literally no point in doing so.


Looks like it may be being done as some part of the parallax effect on the full-width images, but it certainly needn't be.


I’m wondering the same, the page is entirely unusable due to that.


MS Edge works OK.


Hyper realistic talking cars.


Now someone has to implement this for graphics card shaders in real time.


PBR has seen commonplace adoption in games since 2012. Roughness mapping has been used for years in real time engines.


It is coming.


Ehhh. I don't think modern computers could even render Toy Story in real time yet; maybe an approximation, but that's what real-time graphics are about: using tricks to make approximations, to make things look "good enough". Non-real-time graphics feel easier in some ways because you can use e.g. raycasting to get "free" realistic reflections and lighting and whatnot. Then you can focus more on what makes a material look the way it does, like they do in this article.


Kingdom Hearts 3 features an area based on Toy Story, so we can actually draw a direct comparison between the original movie and modern real-time rendering.

Digital Foundry compared them here: https://www.youtube.com/watch?v=tkDadVrBr1Y


I think the original Toy Story could be done in real time with modern graphics cards. The Quora answer below was talking about CPUs in 2011, and it seems that plenty of added time (relatively speaking) was due to network and disk infrastructure not having the same 1000x speedup.

A more subtle aspect to consider is that CPUs might have increased in raw computation by huge orders of magnitude, but memory latency means that the same programs from 20 years ago will run much faster, but not at the same multiple of the raw flops. This implies that to get the full effect of a cpu or gpu, the software would have to be rearchitected to an extent to get the full benefit of modern hardware.

Toy Story didn't use any ray tracing; Renderman at the time relied on shadow maps. Because shaders lacked global access to the scene, the local-illumination Reyes architecture should map extremely well to GPUs. I think it is possible that someone will do similar things with Vulkan or OpenGL compute buffers if they haven't already.


For raw compute cycles, I think you're right, but...

The one place where today's GPUs aren't as good as Toy Story is filtering & anti-aliasing. The texture filtering and pixel (camera) filtering in software renderers is still much higher quality, and more expensive, than what GPUs typically do. You could roll your own high-quality texture sampling in CUDA or OpenCL, but the texture sampling that comes with GPUs is not great compared to what Renderman does.

BTW, textures & texture sampling are a huge portion of the cost of the runtime on render farms. They comprise the majority of the data needed to render a frame. The entire architecture of a render farm is built around texture caching. Just getting textures into a GPU would also pose a significant speed problem.
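
To make the filtering-quality gap concrete, here's a toy comparison between a single bilinear tap (roughly what fixed-function GPU sampling gives you, mipmaps aside) and brute-force integration over the pixel's footprint in texture space, which is closer in spirit to the high-quality filters film renderers use. This is illustrative only, not any particular renderer's code:

    import numpy as np

    rng = np.random.default_rng(2)
    tex = rng.random((1024, 1024))           # stand-in single-channel texture

    def bilinear(tex, u, v):
        """One bilinear tap, the GPU-style cheap lookup."""
        x, y = u * (tex.shape[1] - 1), v * (tex.shape[0] - 1)
        x0, y0 = int(x), int(y)
        x1, y1 = min(x0 + 1, tex.shape[1] - 1), min(y0 + 1, tex.shape[0] - 1)
        fx, fy = x - x0, y - y0
        return ((1 - fx) * (1 - fy) * tex[y0, x0] + fx * (1 - fy) * tex[y0, x1] +
                (1 - fx) * fy * tex[y1, x0] + fx * fy * tex[y1, x1])

    def footprint_filter(tex, u, v, du, dv, taps=16):
        """Gaussian-weighted integration of the pixel's texture-space footprint."""
        offsets = np.linspace(-0.5, 0.5, taps)
        weights = np.exp(-4.0 * offsets ** 2)
        acc = wsum = 0.0
        for wi, oi in zip(weights, offsets):
            for wj, oj in zip(weights, offsets):
                acc += wi * wj * bilinear(tex, u + oi * du, v + oj * dv)
                wsum += wi * wj
        return acc / wsum

    print(bilinear(tex, 0.3, 0.7))
    print(footprint_filter(tex, 0.3, 0.7, du=0.01, dv=0.002))   # 256 taps for one sample

The footprint version burns 256 taps for a single shaded sample here; real renderers are much smarter about it, but the cost asymmetry is the point being made above.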


> The one place where today's GPUs aren't as good as Toy Story is filtering & anti-aliasing

This only makes sense if you are locked in to some texture filtering algorithm already, which isn't true. CPU renderers aren't doing anything with their texture filtering that can't be replicated on GPUs. Where the line should be drawn between using the GPU's native texture filtering and doing more thorough software filtering would be something to explore, but there is no reason why a single texture sample in the terms of a software renderer has to map to a single texture sample on the GPU.

> BTW, textures & texture sampling are a huge portion of the cost of the runtime on render farms. They comprise the majority of the data needed to render a frame.

I'm acutely aware of how much does or does not go into textures. Modern shaders can account for as much as half of rendering time, with tracing of rays accounting for the other half. This is the entire shader, not just textures and is an extreme example.

> The entire architecture of a render farm is built around texture caching.

This is not true at all. Render farm nodes are typically built with memory to CPU core ratios that match as the main priority.

> Just getting textures into a GPU would also pose a significant speed problem.

This is also not true. In 1995 an Onyx with a maximum of 32 _Sockets_ had a maximum of 2GB of memory. The bandwidth to PCIe 3.0 16x is about 16GB/s and plenty of cards already have 16GB of memory. The textures would also stay in memory for multiple frames, since most textures are not animated.


> I'm acutely aware of how much does or does not go into textures. Modern shaders can account for as much as half of rendering time, with tracing of rays accounting for the other half. This is the entire shader, not just textures and is an extreme example.

At least at VFX level (Pixar's slightly different, as they use a lot of procedural textures), Texture I/O time can be a significant amount of render time.

> This is not true at all. Render farm nodes are typically built with memory to CPU core ratios that match as the main priority.

I don't know what you mean by this (I assume that memory scales with cores?), but most render farms at high level have extremely expensive fast I/O caches very close to the render server nodes (usually Avere solutions) mainly just for the textures.

The raw source textures are normally of the order of hundreds of gigabytes and thus have to be out-of-core. Pulling them off disk, uncompressing them and filtering them (even tiled and pre-mipped) is extremely expensive.

> This is also not true. In 1995 an Onyx with a maximum of 32 _Sockets_ had a maximum of 2GB of memory. The bandwidth to PCIe 3.0 16x is about 16GB/s and plenty of cards already have 16GB of memory. The textures would also stay in memory for multiple frames, since most textures are not animated.

This is true. One of the reasons why GPU renderers still aren't being used at high-level VFX in general is precisely because of both memory limits (once you go out-of-core on a GPU, you might as well have stayed on the CPU) and due to PCI transfer costs of getting the stuff onto the GPU.

On top of that, almost all final rendering is still done on a per-frame basis, so for each frame, you start the renderer, give it the source scene/geo, it then loads the textures again and again for each different frame - precisely why fast Texture caches are needed.


> At least at VFX level (Pixar's slightly different, as they use a lot of procedural textures), Texture I/O time can be a significant amount of render time.

I was referring to visual effects.

> I don't know what you mean by this (I assume that memory scales with cores?), but most render farms at high level have extremely expensive fast I/O caches very close to the render server nodes (usually Avere solutions) mainly just for the textures.

I wouldn't say the SSDs on netapp appliances or putting hard drives in render nodes are 'architecting for texture caching'. These are important for disk IO all around. Still it's not relevant to rendering Toy Story in real time since it is clear that GPUs have substantially more memory than a packed SGI Onyx workstation in 1995.

> The raw source textures are normally of the order of hundreds of gigabytes and thus have to be out-of-core. Pulling them off disk, uncompressing them and filtering them (even tiled and pre-mipped) is extremely expensive.

I don't know if I would say 'normally', but in any event I don't think that was the case for Toy Story in 1995. Even so, the same out of core texture caching that PRman and other renderers use could be done from main memory to GPU memory, instead of hard disk to main memory.

> This is true. One of the reasons why GPU renderers still aren't being used at high-level VFX in general is precisely because of both memory limits (once you go out-of-core on a GPU, you might as well have stayed on the CPU) and due to PCI transfer costs of getting the stuff onto the GPU.

This was about the possibility of rendering the first Toy Story in real time on modern GPUs.

> On top of that, almost all final rendering is still done on a per-frame basis, so for each frame, you start the renderer, give it the source scene/geo, it then loads the textures again and again for each different frame - precisely why fast Texture caches are needed.

This is a matter of workflow, which makes perfect sense when renders take multiple hours per frame, but if trying to render in real time, the same pipeline wouldn't be reasonable or necessary.


> This only makes sense if you are locked in to some texture filtering algorithm already, which isn't true.

I said you could roll your own, didn't I? If you don't roll your own, you most definitely are locked in. All GPU libraries (OpenGL, CUDA, OpenCL, DirectX, Vulkan) only come with a limited set, none of which match the filtering quality that Renderman & other film renderers have.

If you do roll your own, your performance is going to suffer, and not by a little.

> I'm acutely aware of how much does or does not go into textures. Modern shaders can account for as much as half of rendering time, with tracing of rays accounting for the other half.

That ratio depends entirely on what you're doing. It's meaningless. That said, Pixar people have said 10:1 in the past (Toy Story time frame) for shading:rasterizing. You mention ray tracing, are you assuming ray tracing? Why? Toy Story wasn't ray traced.

> Render farm nodes are typically built with memory to CPU core ratios that match as the main priority.

That doesn't contradict what I said, at all.

> The textures would also stay in memory for multiple frames,

I doubt that was true for Toy Story, and it was not true for the films I worked on circa Toy Story. Texture usage was "out of core" at the time.

Texture & MIP tiles were loaded & purged on-demand into a RAM cache during a frame of render. Each renderfarm node also had a local disk cache for textures, to minimize network traffic. The amount of texture used during the render of the frame often (and I believe usually) exceeded the amount of RAM in our renderfarm nodes. You certainly can load textures on demand on a GPU, you just don't get any performance gain over a CPU when you do that.
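
A toy illustration of that load-and-purge behavior, just an LRU dictionary keyed by tile; the real systems also juggle mip levels, tiled texture formats, and the per-node disk cache described above, and the texture name in the example is made up:

    from collections import OrderedDict

    class TileCache:
        """On-demand texture tile cache with least-recently-used eviction."""
        def __init__(self, max_tiles, load_tile):
            self.max_tiles = max_tiles
            self.load_tile = load_tile       # e.g. read from the local disk cache
            self.tiles = OrderedDict()

        def get(self, key):
            if key in self.tiles:
                self.tiles.move_to_end(key)          # mark as recently used
            else:
                self.tiles[key] = self.load_tile(key)
                if len(self.tiles) > self.max_tiles:
                    self.tiles.popitem(last=False)   # purge the LRU tile
            return self.tiles[key]

    # Hypothetical usage: key = (texture name, mip level, tile x, tile y).
    cache = TileCache(max_tiles=1024, load_tile=lambda key: bytes(64 * 64))
    tile = cache.get(("wood_diffuse.tex", 3, 17, 42))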

Aside from Toy Story era assets, in-core GPU textures are not possible with today's film texture & geometry sizes. Film frames with good filtering can easily access terabytes of texture.

GPU core memory size & bandwidth is the single main determinant of whether studios can use GPUs for rendering today. Getting large (film sized, not game sized) amounts of geometry & textures onto a GPU is the main problem.


> You mention ray tracing, are you assuming ray tracing? Why? Toy Story wasn't ray traced.

I actually said that in my first post.

> I said you could roll your own, didn't I? If you don't roll your own, you most definitely are locked in

The point is that this is not something that is so computationally intensive that it would prevent it from running in real time, even when matching the filtering quality of PRman.

> > Render farm nodes are typically built with memory to CPU core ratios that match as the main priority.

> That doesn't contradict what I said, at all.

You said that render farms focus on texture caching, and I'm telling you that is not true. I've built large render farms and the topic never comes up. Textures aren't nearly the focal point you are making them out to be.

> I doubt that was true for Toy Story, and it was not true for the films I worked on circa Toy Story. Textures usage was "out of core" at the time.

If you were rendering in real time, there would be no reason to unload textures from GPU memory and then put them back into GPU memory, so this isn't relevant. Film is rendered one frame at a time; however, if the Reyes architecture were running in real time, the same workflow used for two-hour frames would not be practical, nor does it bear on whether real time would be possible here.

> You certainly can load textures on demand on a GPU, you just don't get any performance gain over a CPU when you do that.

That assumes that GPUs' only advantage is due to memory bandwidth, which isn't true, though that is irrelevant here, since there are plenty of GPUs with much more memory than the computers used to render Toy Story.

> Aside from Toy Story era assets, in-core GPU textures are not possible with today's film texture & geometry sizes. Film frames with good filtering can easily access terabytes of texture.

> GPU core memory size & bandwidth is the single main determinant of whether studios can use GPUs for rendering today. Getting large (film sized, not game sized) amounts of geometry & textures onto a GPU is the main problem.

I'm not sure why you suddenly focused on modern GPU rendering for film. This thread was about whether Toy Story could be rendered in real time on modern GPUs, not modern film assets.


Man, I don't know why you're in hyperbolic attack mode; I'm sorry if I said something that irritated you. I'd love to have a peaceful technical conversation about how to do it, rather than a "prove me wrong on every point" fight. You're probably right, it's probably possible to render Toy Story in real time. I sincerely hope you have happy holidays.

It is a fact that the texture filtering Toy Story used is more computationally intensive than the "high quality" anisotropic mip mapping you get by default on a GPU. I did speculate that getting PRman-level texturing, filtering, and anti-aliasing, combined with memory constraints, could compromise the ability to render Toy Story in real time. You disagree. That's fine, I could be wrong. But, if you don't mind, I'll wait to change my mind until after someone actually demonstrates it.

I've built render farms too, and renderers as well. My personal experience was that texture caching was a large factor in deciding the network layout, the hardware purchases, the software system supporting the render farm, and the software architecture of the renderer itself. You can tell me whatever you want, and keep saying I'm wrong, but it won't change my experience; all that means is that you might not have seen everything yet.

> If you were rendering in real time, there would be no reason to unload textures from GPU memory and then put the textures back into GPU memory, so this isn't relevant.

It is completely relevant, if you can't fit the textures into memory in the first place, which is precisely what was happening in my studio around the same time Toy Story was produced, and what I would speculate was also happening during production of Toy Story.

But, I don't know about Toy Story specifically, since I didn't work on it. I believe the production tree was smaller than 16GB, so perhaps it's entirely possible the entire thing could fit on a modern GPU. Still, this would mean that a good chunk of the software, the antialiased frame buffer, all animation and geometry data, all texture data -- all assets for the film -- would have to fit on the GPU in an accessible (potentially uncompressed) state. I'm somewhat skeptical, but since you're suggesting a theoretical re-write of the entire pipeline & renderer, then yes, it definitely might be possible.

> I'm not sure why you suddenly focused on modern GPU rendering for film.

I was just making a side note that memory is still (and always has been) the GPU rendering bottleneck for film assets. Threads can't evolve? I'm not allowed to discuss anything else besides the first point ever?

I think my side note is relevant because an implicit meta-question in this conversation is: what year's film assets are renderable in real time on the GPU, regardless of whether Toy Story is?

The GPU memory limits are, IMO, becoming less of a bottleneck over time. Rendering today's film assets is becoming more possible on a GPU, not less, so I think that even if Toy Story can't be rendered in real time today, it will happen pretty soon.


> Man, I don't know why you're in hyperbolic attack mode,

There isn't anything like that in my posts, just corrections along with pointing out irrelevancies, no need to be defensive.

> It is completely relevant, if you can't fit the textures into memory in the first place, which is precisely what was happening in my studio around the same time Toy Story was produced, and what I would speculate was also happening during production of Toy Story.

Yes, PRman has always had great texture caching, and as I mentioned earlier, a 32-socket SGI Onyx would max out at 2GB of memory. I think a fraction of that was much more common.

> Still, this would mean that a good chunk of the software, the antialiased frame buffer, all animation and geometry data, all texture data -- all assets for the film -- would have to fit on the GPU in an accessible (potentially uncompressed) state.

I think you mean all assets for a shot, not the whole film.


You're probably right:

https://www.quora.com/How-much-faster-would-it-be-to-render-...

Going by the vague dates in the article, it's probably ~1 min per frame to render Toy Story today.
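
Back-of-envelope, with figures that are assumptions on my part rather than from the linked article: if an average mid-90s frame took on the order of hours on a single workstation, and per-machine throughput has improved by roughly two to three orders of magnitude since, you land in the same ballpark:

    hours_per_frame_1995 = 7.0     # assumed average; per-frame times reportedly ranged from under an hour to tens of hours
    speedup_per_machine  = 300.0   # assumed ~2.5 orders of magnitude of single-machine improvement since 1995
    minutes_per_frame_now = hours_per_frame_1995 * 60.0 / speedup_per_machine
    print(round(minutes_per_frame_now, 1))   # -> 1.4, i.e. on the order of a minute per frame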


One thing about 3-D animated movies is that they can be re-rendered later with improved rendering tech without being reshot.

Disney should go ahead and re-release Toy Story with its latest rendering tech (at maybe 8K), to show off these kinds of rendering improvements.


My guess is that it's significantly more difficult than that—those movies are pure legacy code by now, and we all know how difficult it is to modernize legacy applications :).


You are correct. Toy Story 1 and 2 were resurrected around 2010 for 3D re-rendering. It took a team of 4 or 5 working for the better part of a year, IIRC.

A 3D movie like Toy Story isn't like a Maya file; it's hundreds of thousands of files in a gigantic filesystem, and many pieces of complicated software working together just to compose them and generate a mere description of the scene, let alone make an image out of it. Getting the software to a place where it could reproduce an image at all was a feat.

And the end result, if everything goes well, is just the same image rendered again from two angles instead of one. The images would not magically look like modern graphics, because for that, an artist would have to re-model the characters to be higher resolution (and then re-animate them to be up to snuff), re-surface them, re-light them... basically re-make the whole movie. Not a button push. :)


You would probably only improve the resolution and minor details; the assets (models, textures, shaders, etc.) created for a CG movie from the '90s certainly won't offer many of the details that CG movies nowadays have.


Re-rendering Toy Story with a global illumination lighting model would make a huge difference though, even if the geometry and textures remain the same.


It would look horrible, because the lighting was done assuming no global illumination.

Besides destroying artistic intent, the no-longer-missing indirect lighting effects would double up with all the "cheats" and clever techniques the artists had placed to make up for them.


Obviously all shader code and lighting would have to be redesigned. I wasn't trying to suggest that they could just multiply a GI render pass on top of the old shader output, or something like that.


It's not a matter of code (though that, too, would be an enormous job). It would clash with what the artists have done.

Here's a random image from TS1: [1]

Look at Buzz's butt-- he has some yellowish up-lighting. That's because an artist placed a yellow light underneath him to fake the indirect light from the bedspread. If you turned on GI, the bedspread would cast its own yellow light, and it would look wrong because now his butt is glowing.

Not to mention that ambient occlusion would likely darken lots of things that weren't supposed to be dark in the original composition. Really, it would look totally broken and destroy the artistry.

[1] https://cdn.vox-cdn.com/thumbor/kLXYT7zSJTVAiI_jflkF_mW7bPk=...
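
To put toy numbers on the double-counting (purely hypothetical values, just to illustrate the problem):

    direct      = 0.20   # direct key/fill contribution on Buzz's underside (made-up value)
    fake_bounce = 0.15   # the hand-placed yellow light standing in for the bedspread bounce
    gi_bounce   = 0.15   # what a GI solve would add for that same bounce

    original = direct + fake_bounce               # 0.35 -- the balance the artists signed off on
    naive_gi = direct + fake_bounce + gi_bounce   # 0.50 -- the cheat and the real bounce stack up
    # The indirect light gets counted twice, so the shot reads ~40% hotter than intended,
    # before you even get to what ambient occlusion would darken elsewhere.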


Better to put the effort into a new show, say TS4, that's intended from the beginning to take advantage of the new tech. ;-)


:)


As I said, lighting would have to be redesigned. We're not disagreeing.

I don't think it's something that Disney should ever do. I was just replying to the person who suggested that limitations in geometry and bitmap assets mean there's nothing that could be done to upgrade the look.


OP's point was that old movies could just be re-rendered and quality would improve dramatically, and I disagreed. Of course, if you put in a lot of work and rework the entire lighting model and other things, you can improve quality a lot, but that wasn't the point.


A tremendous amount of effort would still be needed to add detail to the models.


There are two major, obvious problems with that.

1) Toy Story was made a long time ago, 22 years ago in fact. As tikhonj stated, Toy Story would be pure legacy code by now, as so much has changed technically in 22 years. Getting it to work in modern software that could render it in 8K would be very difficult; it was already hard enough to re-render it in 2010, as tbabb's comment explains.

2) There's no point in upscaling in 8K. Yes, it would be in a very high resolution, but a lot of the textures were made in the 90s and are obviously not made for 8K. I would imagine that you would have to remake essentially all of the textures, otherwise they would probably look terrible in 8K.

Besides, there's no point in re-releasing it in 8K, because it wouldn't have all the detail that rendering in 8K normally comes with.


I think the assumption in the parent - which would also have been my guess - was that assets begin in higher resolution than required and get sampled down. Certainly used to be the case with games.


That was the plan with Babylon 5, except the digital assets were lost...



