
In my experience it doesn't require that much cherry-picking if you use a carefully crafted prompt. For example: "A professional photography of a software developer talking to a plastic duck on his desk, bright smooth lighting, f2.2, bokeh, Leica, corporate stock picture, highly detailed"

And this is the first picture I got: https://labs.openai.com/s/lSWOnxbHBYQAtli9CYlZGqcZ

It went a bit heavy on the depth of field, and I don't like the angle, but I could iterate a few times and get a good one.
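
If you'd rather script that iteration than click through the web UI, here's a minimal sketch, assuming the `openai` Python package and its Image.create endpoint (the n and size values are just my choices, not anything from this thread):

    # Minimal sketch: generate several candidates for one prompt and pick the best by eye.
    # Assumes the `openai` package and an OPENAI_API_KEY set in the environment.
    import openai

    prompt = ("A professional photography of a software developer talking to a "
              "plastic duck on his desk, bright smooth lighting, f2.2, bokeh, "
              "Leica, corporate stock picture, highly detailed")

    # Ask for a few variants at once instead of re-running the prompt by hand.
    response = openai.Image.create(prompt=prompt, n=4, size="1024x1024")

    for i, item in enumerate(response["data"]):
        print(f"candidate {i}: {item['url']}")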



Additionally, wherever it classically falls over (currently, realistic human faces), there will be second-pass models that both detect and replace the faces with realistic ones. People are already using models that alter eyes to look lifelike, with excellent results (many of the DALL-E 2 faces appear somewhat dead at the moment).
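
As a concrete example of that kind of second pass, here's a rough sketch using GFPGAN, one of the face-restoration models people use for this (the checkpoint filename and parameter values here are illustrative assumptions, not a recommendation from this thread):

    # Rough sketch: run a face-restoration pass over a generated image with GFPGAN.
    # Assumes `pip install gfpgan opencv-python` and a downloaded GFPGAN checkpoint.
    import cv2
    from gfpgan import GFPGANer

    restorer = GFPGANer(
        model_path="GFPGANv1.4.pth",  # pretrained weights, fetched separately
        upscale=1,                    # keep the original resolution
        arch="clean",
        channel_multiplier=2,
    )

    img = cv2.imread("dalle_output.png")
    # Detects faces, restores each one, and pastes them back into the full image.
    _, _, restored = restorer.enhance(img, has_aligned=False, paste_back=True)
    cv2.imwrite("dalle_output_fixed.png", restored)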


Even this image is just the illusion of a perfect photo; most of it is a blur (see the duck's face). I've had access for the past 4-5 days, and it fails badly whenever I try to create any unusual scene.

For the first few days after it was announced, I used to look closely even at real photos in search of generative artifacts. They are not so difficult to spot now, most of the time anyway.


NB: when you share links like that, nobody who doesn't have access can see the results


Sure they can, I just tried in incognito.


I didn't even need incognito.



