
> They could be mistaken for real photograph or real work of (digital) art by a human. Especially by an algorithm.

Perhaps by a naive algorithm. I'm fairly sure it's easy to train a neural network to recognize current generation AI generated images, and that will probably remain true for quite a while.

Btw, if you want to create realistic images, it's fairly easy to produce guaranteed-pristine training data: just take a video camera and shoot some footage.

Perhaps there will even be a market for such pristine data.

Now, if you want to create art and train on human artists' output, that might get harder in the future.



> I'm fairly sure it's fairly easy to train a neural network to recognize current generation AI generated images.

Could you explain how this is done and why it's easy?


You get a corpus and do some supervised learning.

Why do I think it's easy: the goal of current generation AI image generation projects was just to produce images that look good to humans, not to be indistinguishable from real ones.

Even for a casual human observer, they are still relatively easy to spot. A trained machine that can pay more attention to detail should do even better.

In some sense, this is just the same idea as a GAN, except that the generator is fixed and we only train the discriminator.
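A minimal sketch of that fixed-generator, trainable-discriminator setup. Assumptions: images from the two corpora have been reduced to feature vectors (here synthetic data with a small distributional shift stands in for "real" vs "generated" features), and the discriminator is just logistic regression trained by gradient descent; a real detector would use a convnet on pixels, but the supervised-learning shape is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_corpus(n, shift):
    # Hypothetical stand-in for extracted image features; the "generated"
    # corpus differs by a small shift the discriminator can learn to detect.
    return rng.normal(loc=shift, scale=1.0, size=(n, 16))

# Labeled corpus: 0 = real footage, 1 = AI-generated.
X = np.vstack([make_corpus(500, 0.0), make_corpus(500, 0.5)])
y = np.concatenate([np.zeros(500), np.ones(500)])

# The generator is fixed (it produced X); we only train the discriminator.
w, b = np.zeros(16), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(generated)
    w -= 0.5 * (X.T @ (p - y)) / len(y)     # gradient step on weights
    b -= 0.5 * np.mean(p - y)               # gradient step on bias

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = np.mean((p > 0.5) == y)
```

On this toy data the discriminator separates the two corpora easily; the open question in the thread is how long real generators stay this distinguishable.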

With future systems, distinguishing them might be harder.



