Unexpectedly, I kind of agree. I've found GPT to be a great tutor for things I'm trying to learn. It being somewhat unreliable / prone to confidently lying embeds a certain amount of useful skepticism and questioning of all the information, which in turn leads to an overall better understanding.
Fighting with the AI's wrongness out of spite is an unexpectedly good motivator.
Reminds me of a prof at uni whose slides always appeared to have been written 5 mins before the lecture started, resulting in students pointing out mistakes on every other slide. He defended himself by saying that you learn more if you aren't sure whether things are correct - which was right. Especially during a lecture, it's sometimes not that easy to figure out whether you truly understood something or just fooled yourself, when you know that what you're looking at is provably right. If you know everything can be wrong, you trick your mind into verifying it at a deeper level, and thus gain more understanding. It also creates a culture where you're allowed to question the prof. It led to many healthy arguments with him about why something is the way it is, often ending with him agreeing that his slides were wrong. He never corrected the underlying PPT.
I thought about doing that when I was adjuncting last year, but what stopped me was that these were introductory classes, so I was afraid I might pollute the minds of students who really haven't learned enough to question things yet.
Wow, I’ve never thought about that, but you’re right! It really has trained me to be skeptical of what I’m being taught and confirm the veracity of it with multiple sources. A bit time-consuming, of course, but generally a good way to go about educating yourself!
I genuinely think that arguing with it has been almost a secret weapon for me with my grad school work. I'll ask it a question about temporal logic or something, it'll say something that sounds accurate but is ultimately wrong or misleading after looking through traditional documentation, and I can fight with it, and see if it refines it to something correct, which I can then check again, etc. I keep doing this for a bunch of iterations and I end up with a pretty good understanding of the topic.
I guess at some level this is almost what "prompt engineering" is (though I really hate that term), but I use it as a learning tool and I do think it's been really good at helping me cement concepts in my brain.
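To give a flavor of the kind of trap I mean (a made-up illustration, not a transcript from an actual session): in linear temporal logic, "eventually always p" and "always eventually p" sound interchangeable, but only one implication between them is valid:

    ◇□p → □◇p   valid: if p eventually holds forever, then p holds infinitely often
    □◇p → ◇□p   invalid: p can hold infinitely often (p, ¬p, p, ¬p, ...)
                without ever holding from some point onward

An AI can assert the bogus direction just as confidently as the valid one, and working out why it fails against a counterexample trace is exactly the kind of fight that makes the concepts stick.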
> I'll ask it a question about temporal logic or something, it'll say something that sounds accurate but is ultimately wrong or misleading after looking through traditional documentation, and I can fight with it, and see if it refines it to something correct, which I can then check again, etc. I keep doing this for a bunch of iterations and I end up with a pretty good understanding of the topic.
Interesting, that's the basic process I follow myself when learning without ChatGPT: comparing my mental representation of the thing I'm learning to existing literature/results, finding the disconnects between the two, reworking my understanding, wash, rinse, repeat.
I guess a large part of it is just kind of the "rubber duck" thing. My thoughts can be pretty disorganized and hard to follow until I'm forced to articulate them. Finding out why ChatGPT is wrong is useful because it's a rubber duck that I can interrogate, not just talk to.
It can be hard for me to directly figure out when my mental model is wrong on something. I'm sure it happens all the time, but a lot of the time I will think I know something until I feel compelled to prove it to someone, and I'll often find out that I'm wrong.
That's actually happened a bunch of times with ChatGPT, where I think it's wrong until I actually interrogate it, look up a credible source, and realize that my understanding was incorrect.
I actually learn a lot from arguing with not just AIs but people, and it doesn't really matter if they're wrong or right. If they're right, it's an obvious learning experience for me; if they're wrong, it forces me to explain and understand _why_ they're wrong.
I completely agree with that, but the problem is finding a supply of people to argue with on niche subjects. I have occasionally argued with people on the Haskell IRC and the NixOS Matrix server about some stuff, but they're humans who selfishly have their own lives to live, so I can't argue with them infinitely, and since the topics I argue about are so specific, there just aren't a lot of people I can argue with even in the best of times.
ChatGPT (Gemini/Anthropic/etc.) has the advantage of never getting sick of arguing with me. I can go back and forth about any weird topic I want, for as long as I want, at any time of day, and keep learning until I'm bored of it.
Obviously it depends on the person but I really like it.
> I completely agree with that, but the problem is finding a supply of people to argue with on niche subjects.
Beyond just subject-wise, finding people who argue in good faith seems to be an issue too. There are people I'm friends with almost specifically because we're able to consistently have good-faith arguments about our strongly opposing views. It doesn't seem to be a common skill, but perhaps that has something to do with my sample set or my own behaviors in arguments.
I dunno, for more niche computer science or math subjects, I don't feel like people argue in bad faith most of the time. The people I've argued with on the Haskell IRC years ago genuinely believe in what they're saying, even if I don't agree with them (I have a lot of negative opinions on Haskell as a language).
Politically? Yeah, nearly impossible to find anyone who argues in good faith.
Politics and related stuff is what I had in mind, yeah. To a lesser extent technical topics as well. But I meant "good faith" in the sense of both believing what they're saying and approaching the argument open to the possibility of being wrong themselves, and/or treating you as capable of understanding their point. I've had arguments where the other person definitely believed what they were saying, but didn't think I was capable of understanding their point or being right myself, and approached the discussion accordingly.
Arguing is arguably one of humanity's superpowers, and the fact that we've yet to bring it to bear in any serious way gives me reason for optimism about sorting out the various major problems we've foolishly gotten ourselves into.
Yeah, and what I like is that I can get it to say things in "dumb language" instead of a bunch of scary math terms. It'll be confidently wrong, but in language that I can easily understand, forcing me to look things up, and kind of forcing me to learn the proper terminology and actually understand it.
Arcane language is actually kind of a pet peeve of mine in theoretical CS and mathematics. Sometimes it feels like academics really obfuscate relatively simple concepts by using a bunch of weird math terms. I don't think it's malicious, I just think there's value in having more approachable language and metaphors in the process of explaining things.