Yeah, I have a product that rewrites content under a persona.
I wanted to take the persona (a plain-English description of how someone communicates) and generate a headshot to go with it for the UI. So I asked ChatGPT to describe what the persona looks like, as if it were a witness talking to a sketch artist.
Instead of getting something I could feed into DALL·E, I got a lecture about stereotyping.
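For context, the pipeline I had in mind was roughly the sketch below. It assumes the current OpenAI Python SDK (openai>=1.0); the model names, prompts, and the example persona string are illustrative placeholders, not what the product actually uses.

    # Minimal sketch of the two-step pipeline, assuming the current OpenAI
    # Python SDK. Model names, prompts, and the example persona are
    # placeholders, not the product's actual values.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    persona = "Terse and formal; favors passive voice and precise timestamps."

    # Step 1: ask the chat model for a physical description of the persona,
    # framed as a witness briefing a sketch artist.
    chat = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": "Describe what this persona looks like, as if you were "
                       "talking to a sketch artist: " + persona,
        }],
    )
    description = chat.choices[0].message.content

    # Step 2: feed that description to DALL·E to render the UI headshot.
    image = client.images.generate(
        model="dall-e-3",
        prompt="Professional headshot. " + description,
        size="1024x1024",
        n=1,
    )
    print(image.data[0].url)  # URL of the generated headshot

In practice, step 1 is where it falls apart: the chat call comes back with a caveat about stereotyping instead of a usable description to pass to step 2.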
It’s perfectly happy to stereotype communication styles - but rendering a photo of that same stereotype is where it draws the line.
You can ask it to assume the persona of a well-educated police officer writing a police report for a judge, and it will gladly do so. The prose it produces is unlikely to carry an American Southern accent, even though there surely exists a well-educated police officer who speaks with one. But ask it to describe that same officer so you can draw a picture, and it’s a different story.
Yet if you put “police officer” into a search engine, there is a clear aesthetic that we (humans) associate with that stereotype: blue/black outfit, hat with a badge/logo, etc. That clear aesthetic is what I wanted in the headshot. Instead I got a lecture on inclusion - something important, but tangential to a headshot of a police officer.