Hacker News | new | past | comments | ask | show | jobs | submit | starshadowx2's comments | login

Something like this is used in some Discord servers. You can make a honeypot channel that bans anyone who posts in it, so if you do happen to get a spam bot that posts in every channel it effectively bans itself.
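The trap rule itself is tiny. Here's a minimal, library-free sketch of it (in a real Discord bot you'd run this check inside a message-event handler, e.g. with a library like discord.py, and call the guild's ban method; the channel ID here is a placeholder):

```python
# Hypothetical ID of the honeypot channel (placeholder value).
HONEYPOT_CHANNEL_ID = 424242

def handle_message(channel_id: int, author: str, banned: set) -> bool:
    """Ban rule: anyone who posts in the trap channel gets banned.
    Returns True and records the ban if the post landed in the trap."""
    if channel_id == HONEYPOT_CHANNEL_ID:
        banned.add(author)
        return True
    return False

# A spam bot that posts in every channel walks into the trap;
# a user posting in a normal channel is untouched.
banned = set()
handle_message(HONEYPOT_CHANNEL_ID, "spam_bot_01", banned)
handle_message(555, "regular_user", banned)
```

The real bot would also need to exempt moderators and its own account, and the channel should be named/positioned so humans don't post in it by accident.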


Most web forums I used to visit had something like that back in the day. It worked against primitive pre-LLM bots and presumably also against non-English-reading human spammers.


There's a newer method using 'server onboarding': if you select a certain role when joining, it auto-bans you.


I'm getting a 404 from this.


That was X-Men Origins: Wolverine. I was also thinking about that when I heard about this leak. This was the infamous Deadpool scene from it without the finished special effects; it's actually pretty interesting to see it this way.

- https://www.youtube.com/watch?v=2R5ffysgVvA


You can inspect element on the achievements to see what their unlock text is.


This tumblr was posting these from 2011 to 2018 - moviebarcode.tumblr.com


I think it's something people keep rediscovering. It's a pretty fun programming problem that lets you explore lots of different domains at the same time (video processing, color theory, different coordinate systems for visualizing things) and you get a tangible "cool" piece of art at the end of your effort.

I built one of these back in the day. Part of the fun was seeing how fast I could make the pipeline. Once I realized that FFMPEG could read arbitrary byte ranges directly from S3, I went full ham into throwing machines at the problem. I could crunch through a 4 hour movie in a few seconds by distributing the scene extraction over an army of lambdas (while staying in the free tier!). Ditto for color extraction and presentation. Lots of fun was had.
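The core of a barcode pipeline like this can be sketched in a few lines: have ffmpeg shrink each sampled frame down to a single pixel (its scaler does the averaging for you) and emit the result as a raw RGB24 stream, then split that stream into one color per frame. The ffmpeg flags below are standard; the file path and sampling rate are placeholders:

```python
import subprocess

def split_rgb24(raw: bytes) -> list:
    """Split a raw RGB24 byte stream into (r, g, b) triples,
    one triple per sampled frame."""
    usable = len(raw) - len(raw) % 3
    return [tuple(raw[i:i + 3]) for i in range(0, usable, 3)]

def sample_frame_colors(path: str, fps: float = 1.0) -> list:
    """One averaged (r, g, b) per sampled frame, via ffmpeg.
    Requires ffmpeg on PATH; `path` is any video file."""
    cmd = [
        "ffmpeg", "-i", path,
        "-vf", f"fps={fps},scale=1:1",   # sample, then average to 1 pixel
        "-f", "rawvideo", "-pix_fmt", "rgb24", "-",
    ]
    raw = subprocess.run(cmd, capture_output=True, check=True).stdout
    return split_rgb24(raw)
```

Drawing each returned color as a 1-pixel-wide vertical stripe, left to right, gives the barcode. The parallel version mentioned above would shard the input by time range across workers and concatenate the per-shard color lists.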


The face of the girl on the left at the start in the first second should have been a giveaway.


My intuition went for a video compression artifact instead of an AI modeling problem. There is even a moment directly before the cut that can be interpreted as the next keyframe clearing up the face. To be honest, the whole video could have fooled me. There is definitely an aspect of discerning these videos that can be trained just by watching more of them with a critical eye, so try to be kind to those who have not concerned themselves with generative AI as much as you have.


Yeah, it's unfortunate that video compression already introduces artifacts into real videos, so minor genAI artifacts don't stand out.

It also took me a while to find any truly unambiguous signs of AI generation. For example, the reflection on the inside of the windows is wonky, but in real life warped glass can also produce weird reflections. I finally found a dark rectangle inside the door window, which at first stays fixed like a sign on the glass. However it then begins to move like part of the reflection, which really broke the illusion for me.


No one is looking at her face though, they're looking at the giant hello kitty train. And you were only looking at her face because you were told it's an AI-generated video. I agree with superfrank that extreme skepticism of everything seen online is going to have to be the default, unfortunately.


It's hard not to dismiss that as a compression artifact.


Just like all the obvious signs[1] the moon landings were faked.

[1]: https://web.archive.org/web/20120829004513/http://stuffucanu...


Just wanted to say I really enjoyed this!


One thing that's not intuitive to spot but actually completely wrong is that in the second clip we're apparently inside the train, yet the train is still rolling under us.


Or, y'know, the camera's moving smoothly backwards through the train? It would be a bit of an odd choice (and high-effort to make it that smooth versus someone just carrying it), but not impossible by any means.


Also, "HELLO KITTY" being backwards is odd; writing on trains doesn't normally come out like that, e.g. https://www.groupe-sncf.com/medias-publics/styles/crop_1_1/p...


All the text is mirrored. It's not unusual to do this to avoid copyright filters, which kind of adds to the suspicion.


The whole video was probably mirrored before being posted. Doesn't seem to be related to being AI generated.


> You should see a flat background with one large copy of the letter π floating a little above the background (closer to you).

To me these all look like they're reversed from what this says, like they're further away from me behind the flat part.


I always had this problem back in the day when they were in newspapers etc. I didn't really get what people were seeing, because to me it was all in reverse. I looked at these on smaller screens last night (phone and tablet) and I could see them! But just now I tried on my 27" workstation monitors and I got them reversed!

People have pointed out that these are "straight eye" rather than "cross eye" ones. So my theory is on a big screen these are too wide for my eyes or something. I can always go cross eyed (by looking at my nose), but I probably can't go "wide eyed".


You should focus behind it instead of looking cross-eyed.


I believe this depends on if you focus "beyond" or "in front" of the image


The linked original article has an update saying they will refund the charge.

"Update May 9, 12:45 p.m. ET: After this story was published, Hertz informed The Drive that its Customer Care team would be "reaching out to Mr. Lee to apologize and will refund this erroneous charge.""

https://www.thedrive.com/news/hertz-is-charging-tesla-model-...


But I thought they were "unable to provide an adjustment or refund since the service was provided and contract is closed."


It sounds like a typical Tier 1 support answer, meant to make you go away.


It doesn't really change the story. Giving back the money they took only after a journalist got involved is still slimy.

If a company doesn't offer a reason why it happened and how they are going to change policies, it's not a real resolution.


https://whispy.org/

Whispy seems promising; it's by the same dev as OldTwitter - https://github.com/dimdenGD/OldTwitter


I tried some old phone photos from my grandparents' rural Canadian farm, and it either got the province wrong or guessed Montana or North Dakota.

