It's great when the footage was shot with an appropriate shutter angle. And terrible once you become familiar with the interpolation artifacts of artificially generated frames, because then you start to notice them everywhere, kind of like bad kerning.
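For reference, shutter angle is just exposure time expressed relative to the frame rate: exposure = (angle / 360) / fps. A quick sketch with the usual textbook values:

    # Exposure time implied by a shutter angle at a given frame rate.
    def exposure_time_s(shutter_angle_deg, fps):
        return (shutter_angle_deg / 360.0) / fps

    print(exposure_time_s(180, 24))  # ~1/48 s, the classic filmic amount of motion blur
    print(exposure_time_s(360, 24))  # ~1/24 s, smeary motion blur
    print(exposure_time_s(45, 24))   # ~1/192 s, choppy, staccato motion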
I don't understand this comment. Of course the person mastering/editing the movie will have to know what they are doing. They need to ensure it's done properly. AI image generation is just a technique in the toolbox to achieve that.
That's not always practical, and sometimes even impossible for extreme slow-motion shots. You aren't going to get cinema-level quality out of a 5000 fps camera. For the same reason, it's not a solution to say "just make every effect practical" instead of CGI.
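A big part of why: at 5000 fps each frame can be exposed for at most 1/5000 s, roughly a hundred times less light per frame than a 180° shutter at 24 fps, before you even get to sensor size, noise, or storage. Back-of-the-envelope:

    # Max exposure per frame: 180-degree shutter at 24 fps vs. a 5000 fps high-speed camera.
    cine_exposure_s = (180 / 360) / 24             # ~1/48 s
    highspeed_exposure_s = 1 / 5000                # hard ceiling at 5000 fps
    print(cine_exposure_s / highspeed_exposure_s)  # ~104x less light per frame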
Really high-quality AI-based offline interpolation methods do exist already, though the usual caveat still applies: the larger the motion between frames, the worse the result. Whether the quality is passable is a judgment call that has to be made either way.
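To get a feel for why large motion is the hard part, here's a crude non-AI baseline: estimate dense optical flow and warp the two neighbouring frames halfway toward each other. This is not what the offline AI tools actually do, just a sketch (using OpenCV's Farneback flow) that falls apart in exactly the same places: big displacements and occlusions.

    import cv2
    import numpy as np

    def interpolate_midframe(frame_a, frame_b):
        """Crude motion-compensated frame halfway between two BGR frames."""
        gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
        gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
        # Dense optical flow from A to B.
        flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = gray_a.shape
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        # Backward-warp each neighbour halfway toward the middle time
        # (approximating the flow at the destination pixel by the flow at that pixel).
        map_ax = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
        map_ay = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
        map_bx = (grid_x + 0.5 * flow[..., 0]).astype(np.float32)
        map_by = (grid_y + 0.5 * flow[..., 1]).astype(np.float32)
        warped_a = cv2.remap(frame_a, map_ax, map_ay, cv2.INTER_LINEAR)
        warped_b = cv2.remap(frame_b, map_bx, map_by, cv2.INTER_LINEAR)
        return cv2.addWeighted(warped_a, 0.5, warped_b, 0.5, 0)

The bigger the motion, the less that flow field (or a learned equivalent) can be trusted, and the ghosting and smearing where it's wrong is exactly the kind of artifact the first comment is talking about.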
It already is; modern video games and graphics hardware have a trick where they render a frame at a lower resolution, then use AI to upscale it to the full output resolution. Apparently AI upscaling is faster than rendering at the higher resolution.
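The back-of-the-envelope reason it pays off: shading cost scales roughly with pixel count, while the upscaling pass is close to a fixed cost per frame. With made-up but plausible frame times:

    # Rough pixel arithmetic behind "render low, upscale high".
    native_4k = 3840 * 2160            # ~8.3M pixels to shade per frame
    internal_1440p = 2560 * 1440       # ~3.7M pixels at the internal resolution
    print(native_4k / internal_1440p)  # 2.25x fewer pixels to shade

    # Hypothetical numbers: an 18 ms frame at native 4K becomes ~8 ms at 1440p,
    # so even a couple of milliseconds of upscaling leaves a big net win.
    print(18 / (native_4k / internal_1440p) + 2)   # ~10 ms total vs. 18 ms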
It is, and unfortunately, game devs and artists took advantage of it to get lazy instead of making things genuinely look better.
The way it goes is: use high-resolution models and textures everywhere, rely on the engine to render everything in real time (with techniques like ray tracing), realize that no reasonable GPU can run the thing at full resolution, use AI (like DLSS) to compensate, and when that is still not enough, use AI to generate extra frames.
The end result is often not that great; there are limits to how well AI can fill in the gaps, especially in real time. Frame generation is even worse, because it introduces lag: the generated frame doesn't take the latest player input into account.
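Rough numbers on that lag, assuming interpolation-style frame generation (the fake frame sits between two real frames, so the newer real frame has to be held back) and a made-up base frame rate:

    base_fps = 60
    frame_time_ms = 1000 / base_fps       # ~16.7 ms between real frames

    # The in-between frame can only be built once the *next* real frame exists,
    # so real frames reach the screen later -- somewhere around half to one
    # frame interval, before counting the generation pass itself.
    print(f"~{frame_time_ms / 2:.1f}-{frame_time_ms:.1f} ms of extra lag, "
          f"while the fps counter reads {base_fps * 2}")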
Video game optimization is an art form that goes beyond making code run fast. Artists and programmers are supposed to work together and spend the performance budget wisely by cutting the right corners. For example, in a combat scene players won't be looking at the background, so make it as simple as possible and save the detail for when the player doesn't have to focus on a bunch of monsters. It may even result in gameplay changes. AI can't replicate that (yet?)
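A toy version of that kind of hand-tuned budgeting, where gameplay state and not just distance decides how much detail something gets (names and thresholds are entirely made up for illustration):

    # Toy "cut the right corners" heuristic: background props lose detail during
    # combat, on the assumption that the player is watching the monsters instead.
    def pick_lod(distance_m, is_background_prop, in_combat):
        if in_combat and is_background_prop:
            return "low"        # nobody studies the scenery mid-fight
        if distance_m > 50:
            return "low"
        if distance_m > 15:
            return "medium"
        return "high"

    print(pick_lod(20, is_background_prop=True, in_combat=True))    # low
    print(pick_lod(20, is_background_prop=False, in_combat=True))   # medium

The point stands: those thresholds come from people who know what the player will actually be doing in that scene, which is exactly the context a generic upscaler or frame generator doesn't have.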
Why innovate on the hardware side when you can just use AI to add fake frames to make the numbers you advertise look impressive even while the quality of the graphics your cards output nosedives? Especially if you're nvidia and making plenty of money selling AI chips to non-gamers. It doesn't help that the video card industry isn't even close to competitive and there's no pressure to really innovate in the first place.
It's fine for video games to employ tricks that limit what they have to make look good on screen. Games like Silent Hill did a pretty good job by filling the world with heavy fog, or by giving you a flashlight and leaving everything outside its small, well-lit circle too dark to see. That actually added to the atmosphere and allowed them to make what little you could see look great (for the time).
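That fog trick is also cheap to express, and as a bonus it lets the engine skip drawing anything past the point where the fog wins. A classic exponential distance-fog falloff, with a made-up density value:

    import math

    # Exponential distance fog: 0.0 = fully visible, 1.0 = completely fogged out.
    def fog_factor(distance_m, density=0.15):
        return 1.0 - math.exp(-density * distance_m)

    for d in (2, 10, 30):
        print(d, round(fog_factor(d), 2))   # 2 m: 0.26, 10 m: 0.78, 30 m: 0.99

Anything far enough away to be essentially all fog never needs detailed geometry, which is the widely repeated story behind Silent Hill's fog hiding the PS1's short draw distance.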
Guessing at what players might be looking at and making everything else look like garbage is doomed to fail though. It punishes anyone who dares to look half an inch away from what the designers think you should be paying attention to, it's distracting even if you are trying to focus where they want you to, and anyone who wants to take a cool screenshot of the action gets screwed as well.