This precise thing is causing a funny problem in specialty areas. People are using e.g. Google Lens to identify plants, birds, and insects, and it sometimes returns wrong answers: say it sees a picture of a Summer Tanager and calls it a Cardinal. If people then post "Saw this Cardinal" and the model picks up that picture/post and incorporates it into its training set, it's just reinforcing the wrong identification.
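
A back-of-the-envelope way to see the loop compound: the toy Python simulation below (every number is hypothetical, nothing here is measured) feeds a model's own mislabels back into its next training set and tracks how the error rate drifts across generations.

    import random

    random.seed(0)

    error_rate = 0.05    # assumed: the model starts out mislabeling 5% of tanagers
    scraped_share = 0.5  # assumed: half of each new training set is scraped model output
    base_error = 0.05    # assumed: irreducible model error even on clean labels

    for generation in range(1, 6):
        n = 100_000
        wrong = 0
        for _ in range(n):
            if random.random() < scraped_share:
                # Scraped post: the label is whatever the current model said
                # ("Saw this Cardinal"), wrong with probability error_rate.
                wrong += random.random() < error_rate
        # The next model inherits the wrong labels in its training set,
        # on top of its own baseline error.
        error_rate = wrong / n + base_error
        print(f"generation {generation}: error rate {error_rate:.3f}")

With these made-up parameters the error rate roughly doubles before settling, because each generation trains on a mix of clean human labels and its predecessor's mistakes.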


That's not really a new problem, though. At one point someone got bad training data about an old Incan town, the misidentification spread, and nowadays we train new human models to call it Machu Picchu.


The difference between the name of an old Incan town and a modern plant-identification mistake is that the plant might be poisonous.

Made with GPT-3



Imagine when there is an AI monitoring content creation and keeping tabs on original sources...


Then that's a cardinal now.



