Something like this is used in some Discord servers. You can make a honeypot channel that bans anyone who posts in it, so if you do happen to get a spam bot that posts in every channel it effectively bans itself.
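If anyone wants to try it, a minimal version with discord.py might look something like this (an untested sketch, not production code; the channel ID and token are placeholders):

    import discord

    HONEYPOT_CHANNEL_ID = 0  # placeholder: your honeypot channel's ID

    intents = discord.Intents.default()
    client = discord.Client(intents=intents)

    @client.event
    async def on_message(message):
        # Ignore other bots and anything outside the honeypot channel
        if message.author.bot or message.channel.id != HONEYPOT_CHANNEL_ID:
            return
        await message.delete()
        await message.guild.ban(message.author, reason="Posted in honeypot channel")

    client.run("YOUR_BOT_TOKEN")  # placeholder token

You'd want to pin a "do not post here" notice in the channel and hide it from legitimate users' muscle memory, but the core logic really is that small.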
Most web forums I used to visit had something like that back in the day. Worked against primitive pre-LLM bots and presumably also against non-English-reading human spammers.
That was X-Men Origins: Wolverine. I was also thinking about that when I heard about this leak. It was the infamous Deadpool scene from it without the finished special effects; it's actually pretty interesting to see it that way.
I think it's something people keep rediscovering. It's a pretty fun programming problem that lets you explore lots of different domains at the same time (video processing, color theory, different coordinate systems for visualizing things) and you get a tangible "cool" piece of art at the end of your effort.
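The core of it is surprisingly little code, too. A rough sketch of the per-frame average-color version using OpenCV (the function name, sampling strategy, and dimensions here are mine, just to illustrate):

    import cv2
    import numpy as np

    def movie_barcode(video_path, out_path, width=1000, height=160):
        cap = cv2.VideoCapture(video_path)
        total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        step = max(total // width, 1)  # sample ~width frames across the film
        columns = []
        idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % step == 0:
                # average color of the frame becomes one barcode column
                columns.append(frame.reshape(-1, 3).mean(axis=0))
            idx += 1
        cap.release()
        strip = np.array(columns, dtype=np.uint8)[np.newaxis, :, :]
        cv2.imwrite(out_path, np.tile(strip, (height, 1, 1)))

    movie_barcode("film.mp4", "barcode.png")

From there you can swap the average for a dominant-color clustering, or work in HSV instead of BGR, which is where the color theory rabbit hole starts.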
I built one of these back in the day. Part of the fun was seeing how fast I could make the pipeline. Once I realized that FFmpeg could read arbitrary byte ranges directly from S3, I went full ham into throwing machines at the problem. I could crunch through a 4 hour movie in a few seconds by distributing the scene extraction over an army of Lambdas (while staying in the free tier!). Ditto for color extraction and presentation. Lots of fun was had.
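For anyone curious about the remote-seek trick, each worker looked roughly like this (a from-memory sketch, not the real code; bucket, key, and timing parameters are placeholders):

    import subprocess
    import boto3

    s3 = boto3.client("s3")

    def extract_segment(bucket, key, start, duration, out_path):
        # A presigned URL lets ffmpeg read straight from S3 over HTTP,
        # fetching only the byte ranges it needs for this segment
        url = s3.generate_presigned_url(
            "get_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=900
        )
        # -ss before -i makes ffmpeg seek within the remote file instead of
        # downloading it from the start; -c copy avoids re-encoding
        subprocess.run(
            ["ffmpeg", "-ss", str(start), "-i", url,
             "-t", str(duration), "-c", "copy", out_path],
            check=True,
        )

Each Lambda invocation got one (start, duration) slice, so the whole movie got chewed through in parallel.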
My intuition went for video compression artifact instead of AI modeling problem. There is even a moment directly before the cut that can be interpreted as the next key frame clearing up the face. To be honest, the whole video could have fooled me. There's definitely a skill to discerning these videos that can be trained just by watching more of them with a critical eye, so try to be kind to those who did not concern themselves with generative AI as much as you have.
Yeah, it's unfortunate that video compression already introduces artifacts into real videos, so minor genAI artifacts don't stand out.
It also took me a while to find any truly unambiguous signs of AI generation. For example, the reflection on the inside of the windows is wonky, but in real life warped glass can also produce weird reflections.
I finally found a dark rectangle inside the door window, which at first stays fixed like a sign on the glass. However, it then begins to move like part of the reflection, which really broke the illusion for me.
No one is looking at her face though, they're looking at the giant hello kitty train. And you were only looking at her face because you were told it's an AI-generated video. I agree with superfrank that extreme skepticism of everything seen online is going to have to be the default, unfortunately.
One thing that's not intuitive to spot but is actually completely wrong: in the second clip we're apparently inside the train, yet the train is still rolling under us.
Or, y'know, the camera's moving smoothly backwards through the train? Would be a bit of an odd choice (and high-effort to make it that smooth versus someone just carrying it) but not impossible by any means.
I always had this problem back in the day when they were in newspapers etc. I didn't really get what people were seeing, because to me it was all in reverse. I looked at these on smaller screens last night (phone and tablet) and I could see them! But just now I tried on my 27" workstation monitors and I got them reversed!
People have pointed out that these are "straight eye" rather than "cross eye" ones. So my theory is on a big screen these are too wide for my eyes or something. I can always go cross eyed (by looking at my nose), but I probably can't go "wide eyed".
The linked original article has an update saying they will refund the charge.
"Update May 9, 12:45 p.m. ET: After this story was published, Hertz informed The Drive that its Customer Care team would be "reaching out to Mr. Lee to apologize and will refund this erroneous charge.""