I've been watching and rewatching these videos to try and see why people appear to be pessimistic about this. And... I don't really get it. Yes, I have seen better 3D humans. But... I don't think I can say that is true for any non-AAA title. This will be multiple orders of magnitude better than what I currently have for my hobbyist stuff. This is a huge boon for indie studios. I am super positive about this.
Yeah, this looks like a massive step forward for indie-game fidelity. With the price of mocap equipment falling as well, huge chunks of the "cinematic" game pipeline are becoming more and more accessible.
It's one reason I'm super interested in deep speech synthesis techniques: while we're probably not going to get emotively convincing performances soon, there'd be incredible value in a "style transfer" type effect that lets voice actors play more distinct characters too.
I don't see the pessimism either, and if it exists, it's because people are focusing on the wrong set of goals. MetaHuman was not created to bring the best digital-human 3D animation on the planet, but the fastest: a very high quality model done in minutes and hours instead of weeks, if not months.
It's getting close enough now that we might see "digital salons" in the near future, where you can have a model created of yourself, and be mo-capped while you try hair styles, tattoos, piercings, clothing, etc in real time.
All that's really left is bringing the practicality of creating an accurate "MetaHuman" from consumer-grade photographs in line with Kinect-style mo-cap.
The model doesn't need to be perfect (though I'm sure the uncanny valley is doubly uncanny when you're looking at yourself); it just needs to be accurate enough to be recognisable as yourself, and to capture the highly individual differences in body/facial structure, skin tone and complexion, and existing body modifications/scars/birthmarks/etc.
Best part is that with the rate GPUs are improving, this will all be achievable from home.
> though I'm sure the uncanny valley is doubly-uncanny when you're looking at yourself
A nice example of this is the face actor for Dina in the excellent game The Last of Us Part II, made by Naughty Dog. ND is imo a leader in many game things, including facial animation. The characters are really expressive in-game and the animations are really smooth.
Anyway, Cascina Caradonna ("Dina" in the game) recorded herself playing the game. This [0] is when she sees "herself" for the first time. (Spoilers? If you watch the video from the beginning there are spoilers for the first game, which is equally fantastic, but there are no real spoilers for Part II; it's from the very first minutes of the game.) Love the reaction.
I just didn't like it for the person they turned Ellie into. There were parts where I had to just turn off the game for a few days. A friend warned me I'd have this reaction. I actually tried to refund it before I even played it, but Sony wouldn't allow it (I had downloaded, but not started the game). In my mind TLOU II never happened, and Ellie and Joel are living a happy life in that western town.
Agree. Very happy to see AAA-level money being spent on games that aren't microtransaction-ridden (MTX) monstrosities. Those do have their place, since each has its audience, but it's easy to see the temptation of recurring revenue at a Fortnite level (and others).
This is just a Hollywood actor generator. I want a human generator that has an age slider that melts faces like aging does in real life. I want to control fat, folds, swollen skin, grease, bald patches, moles and skin conditions. Very few people have perfect teeth. In my dream human generator you should be able to remove, rotate and reposition every single tooth.
All the marketing material of MetaHumans focuses on the head. There is probably not as much control over the rest of the body. We are probably decades away from a true human generator.
Still, I kinda think we really are ... close. For example, I recall watching "The House with a Clock in Its Walls" and wondering if Cate Blanchett was entirely CGI or CGI-augmented in several scenes. Or see "Aquaman" and see how the mom and dad are CGI-aged. We can still tell the difference, but... it's getting closer.
Come to think of it, in 10 years we'll laugh at "old" films and how they used different actors for different ages of a protagonist. It will stick out like a sore thumb.
If you're manually adjusting teeth, you're not really in the realm of a generator anymore, imo. Fine grained control over stuff like that would be better served being done to the exported model.
Sliders for things like age and fat seem likely to be implemented here in the near future, I would think.
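FWIW, sliders like that are usually just morph targets (blendshapes) under the hood: the mesh stores a per-vertex offset for each target, and the slider weights blend those offsets onto the base mesh. A toy sketch in Python; the "age"/"fat" targets here are made up for illustration, nothing MetaHuman-specific:

    # Morph-target blending: final vertex = base + sum(weight_i * delta_i)
    base = [(0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]          # base mesh vertices
    targets = {                                         # per-vertex offsets
        "age": [(0.0, -0.1, 0.0), (0.0, -0.2, 0.05)],  # hypothetical target
        "fat": [(0.1, 0.0, 0.1), (0.2, 0.0, 0.1)],     # hypothetical target
    }

    def blend(base, targets, weights):
        return [tuple(v[a] + sum(w * targets[name][i][a]
                                 for name, w in weights.items())
                      for a in range(3))
                for i, v in enumerate(base)]

    # Slider values: 70% aged, 30% heavier.
    print(blend(base, targets, {"age": 0.7, "fat": 0.3}))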
> I want a human generator that has an age slider that melts faces like aging does in real life. I want to control fat, folds, swollen skin, grease, bald patches, moles and skin conditions.
Ugh. You have me in flashbacks to those RPG games where you can't start the game until you DECIDE all these characteristics of your player. I just want to start the game!
Skin's one of those things that is WAY more complicated than you think at first glance. It's translucent, soft, anisotropic, specular, layered, with tons of structured fine detail. It doesn't move right unless the fat, muscle, ligaments, tendons, and bones underneath are modeled right and controlled right. And it's one of the things that we innately spend huge amounts of mental processing power on analyzing. It's no wonder it's hard to render convincingly.
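To make the translucency point concrete: the classic cheap trick is "wrap" lighting, where the diffuse term is allowed to bleed past the shadow terminator the way light scattering through skin does. A minimal sketch in Python; real engines use full subsurface-scattering profiles, so treat this as illustration only:

    # Standard Lambert diffuse clamps hard at the terminator (plastic look).
    def lambert(n_dot_l):
        return max(n_dot_l, 0.0)

    # "Wrap" diffuse shifts and rescales the falloff, so light appears to
    # bleed through the skin near shadow edges.
    def skin_diffuse(n_dot_l, wrap=0.5):
        return max((n_dot_l + wrap) / (1.0 + wrap), 0.0)

    # Just past the terminator: Lambert is pitch black, wrap is still lit.
    print(lambert(-0.1))       # 0.0
    print(skin_diffuse(-0.1))  # ~0.27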
You're forgetting weathering on skin. Some parts get more sun, wind, rain, scars, or other marks; we build it up over a lifetime. Good painters knew how to convey these effects, but they're often lacking in shaders.
Yeah, people wearing too much concealer land right in the uncanny valley too. Like a "bad shader cosplay" kind of thing.
And it's one of those things I don't get (or, I do get it, but I disagree with it). I think imperfections and character are things to be celebrated, not hidden and removed.
I think the problem is that there are no fluids or oils on the skin. The lighting and texture are configured to mimic these secretions, but they aren't the secretions themselves, and we notice the lack of presence.
This and the eyes, yeah. Some of the models in the demo actually look photorealistic viewed as a still frame, but are obviously models the minute they move. No one seems to get natural movement right yet, either.
Another thing I've noticed with these demos: they do themselves no favours by making the lighting absolutely perfect, like in a photography studio. I think it would help to put them in a more naturalistic lighting setup, so that even though the end product "looks worse", it would come across as more real.
The lighting definitely helps. I've been playing around with Daz Studio recently and there are a few tricks I've found to make renders look more realistic.
I'm sure this is all basic knowledge for people who are more experienced, but HDRI lighting, adjusting depth of field to approximate how it would look through a "real" camera, and some of the tone mapping/curves adjustments you'd use in traditional photography or video color grading go a long way.
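For the tone-mapping bit: it's just a curve applied to the linear render output before display, compressing highlights the way film does. A tiny sketch of the classic Reinhard operator in Python (Daz and most renderers ship their own operators, so this is only illustrative):

    # Map linear HDR radiance into [0, 1): highlights roll off, not clip.
    def reinhard(x):
        return x / (1.0 + x)

    # Approximate display encoding (linear -> sRGB-ish).
    def gamma_encode(x, gamma=2.2):
        return x ** (1.0 / gamma)

    for radiance in (0.1, 1.0, 4.0, 16.0):
        print(radiance, round(gamma_encode(reinhard(radiance)), 3))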
Yeah teeth generally look terrible, even in fully pre-rendered big budget CG films.
It's like the ambient occlusion between the teeth is always way too high. So instead of there being subtle divisions between each tooth, they look dirty. Like actual dirt and grime.
I noticed the teeth issue also, and it seems all teeth are the same across characters, which draws attention. More variation in teeth settings would probably help this to not stand out (which contributes to the "uncanny valley" reaction, I think).
Skin, hair, and eyes are all too clean to be real. The head movement is as if someone is moving a plastic head with their hands out of view.
I think the movement of the skin and head lacks physics. When the head moves, or other parts of the face move, inertia should shift the skin elsewhere, and we don't see that here at all.
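That lag is usually faked with secondary motion: a spring-damper per region (or per vertex) that trails the driving bone, overshoots, and settles. A toy 1-D sketch in Python with invented constants, not what Epic actually ships:

    # Soft tissue trails the head via a damped spring (semi-implicit Euler).
    def simulate(head_positions, stiffness=60.0, damping=8.0, dt=1.0 / 60.0):
        x, v = head_positions[0], 0.0
        out = []
        for target in head_positions:
            v += (stiffness * (target - x) - damping * v) * dt
            x += v * dt
            out.append(x)  # skin lags the head, overshoots, then settles
        return out

    # Head snaps 1 unit sideways at frame 5 (60 fps).
    trace = simulate([0.0] * 5 + [1.0] * 55)
    print(max(trace))  # > 1.0: the overshoot our eyes expect and miss here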
Remember, you may consciously focus on the eyes, but the brain is taking in the whole image and, importantly, the context, so the surrounding skin plays a role too.
A bit off topic: does anyone else find Unreal Engine 4 just ridiculously complex? I've spent so many hours getting nowhere with it. It makes the work most of us do in web dev look stupid easy. Which is a shame, because UE4 seems to have a great future if you're into graphics. Its graphics stack is second to none among engines with SDKs available.
Eh, I suppose it's a matter of opinion, but no, I find it pretty consistently organized and the abstractions are pretty good.
Games are ridiculously complex, perhaps, and I think UE4 saves you from a lot of it while also giving low-level access if you need it. One could call it bloated, but it's bloated like Django is bloated, which is to say it has batteries and a lot of other stuff included.
Two things that were difficult for me to get used to in UE4:
1) Lots of nested menus. A class has multiple tabs, sometimes with nested levels of information, and it was easy to lose track of where you actually were.
2) It felt hard to "find the code" that actually made stuff happen.
But neither bothers me now. Just bite the bullet and watch a basic make-a-game tutorial and you'll get the gist.
Yeah I feel a bit the same learning Unity. Lots of stuff seems buried in menus, tabs, and panes. Though I also feel the same learning a new code base. Like learning where everything is in someone else’s filing cabinet or kitchen.
FWIW I feel like that's the best part of UE4's blueprint system. It makes the engine API very discoverable. Type signatures in code take a lot more mental energy to parse. Dragging out a blueprint pin provides a nicely organized, context sensitive dropdown of potential methods, which is usually more helpful than a tab completion pop up imo.
I was lucky that I got into it when it first went open: 4.03 was unusable, I think, and 4.05 was getting good. Anything over 4.10 became rock solid, and the complexity grew from there.
I agree; I imagine it would be very confusing for someone picking it up today. The cinematic stuff wasn't really in it at the start, and neither was terrain, really, or VR. Everything was simpler. In a lot of ways it's all still there, but there are layers of new stuff on top, and some things are completely new as "the new standard".
It's one of those "you had to be there" things, and it's tough to get into today. But given enough time, you'll find your way and selectively ignore the stuff you won't touch.
UE4 has great tooling, but its UI is really cluttered compared to other engines, and its UX is split across many panels.
I wouldn't say it's difficult to learn UE4, but I think it is overly complex, which can be a high barrier. I've used it and Unity for years. Opening Unity is generally less mental fatigue, like walking into an ordered office, whereas Unreal is like falling into someone's messy room.
Coming from the game industry, I find Unreal Engine easy to understand and React Native app code completely inscrutable. So it's really just perspective.
The only app framework I can grok properly is Flutter.
Relative to other game engines, particularly proprietary ones, no. Epic is dominating the engine licensing space in large part because their tooling is better than the alternatives.
It's not that complicated; it just leaves you on your own to find what you need. The documentation is diabolically awful the moment you step beyond Blueprints.
The faces here in the header look more "real" than the Unreal demo. It looks like a low-quality video recording of real humans. The expressions, the movement, everything looks so real.
They look _alive_, unlike the plastic heads of the MetaHumans.
EDIT: They only look real in the header banner, though. Talking to them in the online demo is not impressive at all.
They are completely different products. Soul Machines characters are 3D-scanned, while MetaHumans is a generator that can generate a practically infinite number of realistic humans. Also, I think MetaHuman characters are rendered far more realistically in terms of skin textures and materials; the Soul Machines textures seem to have baked-in lighting.
It seems to me like the animations are better, but the illusion of a "real human" is not as good because the textures and models themselves are lower resolution. Especially the close-up of the eyes.
Either way, both are still within the uncanny valley to my eyes. The animation of Andy Serkis reading Macbeth made me feel very uneasy.
I look forward to the day when there is no longer a market for porn with real people, and everything is generated in real time based on your preferences ... and no real humans have to be trafficked or suffer from social stigma after their porn careers.
As for what porn industry people will do in the future if the robots take their jobs ... I don't know ... whatever gladiators did when bloodsports were abolished I guess.
They tried to do this about 20 years ago with "synthespians". The idea was that you create a digital persona and have it play in several unrelated films, as if it were a real actor.
It was tried in the movie Final Fantasy with the synthespian "Aki Ross", who was the lead character, but it didn't go anywhere.
Could someone please combine this with a Lego Technic printer that's synchronized to the model it's built from? Lo-fi Cylon assembly would be pretty keen, especially if, when you updated the model, the assembler swapped out bricks.
And there was me thinking that the future was getting to wear a helmet all the time to avoid accidents and contagion while getting a constant HUD and brain machine interface. Disappointed. Neat tricks though.
I hope they have strong clauses about the usage of their models in porn. I created the 3D Avatar Store, now closed, back in 2010 with ML-based auto-generation of full body 3D characters. I am also good friends with the author of Poser. Both of us have had porn producers as major users of the technology. I was unable to acquire the financing I needed to take my working system to a wider audience due to potential investors continually desiring to go into porn.
My guy, this is Epic Games. They make Fortnite, one of the most popular pieces of media ever, with billions of dollars of revenue a year from selling avatar hats to hundreds of millions of users. You don't have to worry about their investors.
Yeah, I know who they are. I also know the immense size and power of the pornography industry, and the immense effort it takes to get stockholders and investors to ethically forgo hundreds of millions in potential revenue.
Not at all, but I am opposed to a generally available technology that enables ordinary people to create deepfake pornography from photos of friends, family, co-workers, anyone they can acquire a photograph of. I am opposed to such a system because I know how it will be abused and cause misery for hundreds of millions of unsuspecting women.
Sounds unlikely to me. We have artists, and yet there's no outpouring of hand-drawn porn featuring real people without their consent. I'm sure it will happen to some extent. But hundreds of millions of women? Doubtful.
And I don't think you realize the impact either. This feels like it falls squarely into the camp of worrying that people will trick self-driving cars into murdering people. Good news is most people aren't murderers. https://xkcd.com/1958/
Most people aren't interested in sharing slanderous deep fake porn of others online either. There are social consequences to trying that and getting caught. And if they do, metahuman porn certainly is not going to be fooling many people.
It does not require many women or men to be affected to provoke a mass negative reaction from the public. Just today a Hacker News post was about the author of curl receiving threats. Would you like to be the guy who enables POS people to embed others into porn, for profit? That was what every pool of investors ultimately wanted to do.
I really don't care if someone creates a service to create a custom porn simulator. I don't think society would care either. People have been free to imagine others in sexual encounters since the dawn of time. The human experience won't radically be changed by rendering it on a screen.
There's a possible risk in pretending these videos are real and trying to slut shame people, but I'd bet against those being common or successful.
Ask 2-3 women you consider very beautiful what they'd think of such a service, and then what they'd think of the people who created it. Ask them if they've ever had a stalker; many beautiful women have, and many attractive men have too. The issue of unwanted sexual advances is far more prevalent and dangerous than you imagine. If you honestly look into it, you may learn something.
I wrote and acquired a global patent on a VFX process for mass production of short-form video with selected actors automatically replaced with anyone. A bit more involved than face replacement; it was a film-VFX-derived process. My intent was personalized advertising. A portion of the process is the creation of a digital double of a person from as little as one photo. I had the pipeline working and was seeking partnerships.
Judging by how human beings involuntarily register arbitrary headlines as facts that then have to be dislodged, we will probably register impressions from these metahumans as real experiences as well.
The truth about narrative is that it is orthogonal to truth: it is just a matter of taking the initiative to establish impressions in the minds of others, which they will then use to frame truth as they experience it, filtering it to confirm existing impressions and beliefs. Between this and deepfakes, we could trivially create mass political movements based on subliminally registered associations. That sounds like 80s brainwashing hysteria, but they didn't anticipate simulated humans escaping the uncanny valley.
Impressive tech; it's just ironic and troubling that the only sane response is to essentially dissociate and treat everything as potentially part of a managed simulation. Sounds nuts, but it's really just ahead of the curve.