Can you provide an example of how this bias might play out in a human-AI interaction?



The paper has several concrete examples:

- An AI correctly infers (simply by reading text) that a physicist is male and a nurse is female.

- An AI correctly infers the gender of humans with androgynous names.

- An AI infers insects are unpleasant and flowers are pleasant to humans.

- An AI also infers that African American names are more strongly associated with unpleasant words than European American names.
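
These are all word-embedding association tests: the paper scores a target word by its mean cosine similarity to one attribute word set minus its mean similarity to another (e.g. pleasant vs. unpleasant terms). A minimal sketch of that score, with made-up 3-d toy vectors standing in for the real pretrained embeddings the paper uses:

    # WEAT-style per-word association score.
    # The 3-d vectors here are invented for illustration; the paper
    # uses pretrained embeddings (e.g. GloVe) with hundreds of dims.
    import numpy as np

    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    def association(w, pleasant, unpleasant):
        # Mean similarity to the pleasant set minus mean similarity
        # to the unpleasant set: positive => "pleasant-leaning" word.
        return (np.mean([cos(w, a) for a in pleasant])
                - np.mean([cos(w, b) for b in unpleasant]))

    emb = {  # hypothetical toy embeddings
        "flower": np.array([0.9, 0.1, 0.0]),
        "insect": np.array([0.1, 0.9, 0.0]),
        "joy":    np.array([0.8, 0.2, 0.1]),
        "love":   np.array([0.9, 0.2, 0.0]),
        "filth":  np.array([0.2, 0.8, 0.1]),
        "rotten": np.array([0.1, 0.9, 0.1]),
    }
    pleasant   = [emb["joy"], emb["love"]]
    unpleasant = [emb["filth"], emb["rotten"]]

    print(association(emb["flower"], pleasant, unpleasant))  # positive
    print(association(emb["insect"], pleasant, unpleasant))  # negative

With real embeddings trained on ordinary web text, the same score reproduces the flower/insect and name associations the paper reports.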

[edit: to those who dislike this comment, can you tell me what you object to? Which of my concrete examples is not in the paper?]


It appears that the linked paper has examples.



