Wow, I’ve never thought about it that way, but you’re right! It really has trained me to be skeptical of what I’m being taught and to verify it against multiple sources. A bit time-consuming, of course, but generally a good way to go about educating yourself!
I genuinely think that arguing with it has been almost a secret weapon for me with my grad school work. I'll ask it a question about temporal logic or something, and it'll say something that sounds accurate but, once I check the traditional documentation, turns out to be wrong or misleading. Then I can fight with it, see whether it refines its answer into something correct, check that again, and so on. After a bunch of iterations I end up with a pretty good understanding of the topic.
I guess at some level this is almost what "prompt engineering" is (though I really hate that term), but I use it as a learning tool and I do think it's been really good at helping me cement concepts in my brain.
> I'll ask it a question about temporal logic or something, and it'll say something that sounds accurate but, once I check the traditional documentation, turns out to be wrong or misleading. Then I can fight with it, see whether it refines its answer into something correct, check that again, and so on. After a bunch of iterations I end up with a pretty good understanding of the topic.
Interesting, that's the basic process I follow myself when learning without ChatGPT. Comparing my mental representation of the thing I'm learning to existing literature/results, finding the disconnects between the two, reworking my understanding, wash rinse repeat.
I guess a large part of it is just kind of the "rubber duck" thing. My thoughts can be pretty disorganized and hard to follow until I'm forced to articulate them. Finding out why ChatGPT is wrong is useful because it's a rubber duck that I can interrogate, not just talk to.
It can be hard for me to figure out directly when my mental model of something is wrong. I'm sure it happens all the time, but usually I'll think I know something until I'm forced to prove it to someone, and only then find out that I'm wrong.
That's happened a bunch of times with ChatGPT: I think it's wrong until I actually interrogate it, look up a credible source, and realize that my own understanding was the incorrect one.