I guess a large part of it is the "rubber duck" thing. My thoughts can be pretty disorganized and hard to follow until I'm forced to articulate them. Finding out why ChatGPT is wrong is useful because it's a rubber duck I can interrogate, not just talk to.

It can be hard for me to figure out directly when my mental model of something is wrong. I'm sure it happens all the time, but often I'll think I know something until I feel compelled to prove it to someone, and then discover that I'm wrong.

That's happened a bunch of times with ChatGPT: I'll think it's wrong until I actually interrogate it, look up a credible source, and realize my understanding was incorrect.
