No, rubber ducks are still the ultimate rubber ducks, because they don't talk back with industrial-grade overconfident bullshit that misleads and confuses you.



Is there a body of evidence that suggests people get more misled and confused after using things like ChatGPT? It seems like a reasonable hypothesis, but my own experience doesn't necessarily support it. I've used the language model at character.ai for a bit and have found it to be clarifying in a sense. When the model spits out some overconfident misinformation, it's a great opportunity to argue with the bot about it in ways one could never argue with another person - certainly not a stranger, at least.

Perhaps I've been confused and misled so badly that I don't realize it. All I can really say is that it seems premature to assume people will be any more misled or confused by technologies like ChatGPT, when all they have to do now is get on the internet or flip on a TV to be personally targeted with misleading and confusing information. I think there's very real potential for the technology to give people a lever against misinformation, if it helps them understand and explore their own thoughts/thought processes.

I guess to me, fundamentally, it's a question of who has agency over using it, and to what end. I'd be much more comfortable once we can fit models like this on home computers and worry less about them suddenly trying to sell us sponsored products or convince us of some ideology because their creator was paid to do so.



