Sorry, my point isn’t clear: the risk is that you are being confidently led astray in ways you may not understand.
It’s like false memories of events that never occurred, except it’s false knowledge: you think you have learned something, but a non-trivial percentage of it, which you have no way of identifying, is flat out wrong.
For most people it’s not a “helpful B+ student”, it’s a teacher, and people are learning from it. But they are learning subtly wrong things, all day, every day.
Over time, the mind becomes polluted with plausible fictions across all types of subjects.
The internet is best when it spreads knowledge, but I think something else is happening here, and I think it’s quite dangerous.
Ah, thank you for clarifying. Yes, I agree with this. Maybe it's like a B+ student confidently teaching the world what it knows.
The news has an equivalent: the Gell-Mann amnesia effect, where people read a newspaper article on a topic they're an expert on and realise the journalists are idiots, then promptly forget they're idiots when they read the next article outside their expertise!
So yes, I agree that it's important to bear in mind that chatgpt will sometimes be confidently wrong.
But I counter with: usually, remarkably, it doesn't matter. The crepe recipe it gave me produced delicious crepes. If it was a bad recipe, I would have figured that out with my mouth pretty quickly. I asked it to brainstorm weird quirks for D&D characters to have, and some of the ideas it came up with were fabulous. For a question like that, there isn't really such a thing as right and wrong anyway. I was writing Rust code, and it clearly doesn't really understand borrowing. Some of the code it gives just doesn't compile.
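To make that concrete, here's a made-up example (not code chatgpt actually gave me) of the kind of pattern that looks plausible but the borrow checker rejects: holding a reference into a Vec while also mutating it.

    fn main() {
        let mut items = vec![1, 2, 3];
        // Immutable borrow of `items` starts here...
        let first = &items[0];
        // ...so this mutable borrow is rejected with error[E0502]:
        // cannot borrow `items` as mutable because it is also
        // borrowed as immutable.
        items.push(4);
        // The immutable borrow is still live because `first` is used here.
        println!("first = {first}");
    }

A human who knows Rust spots that shape instantly; in my experience chatgpt will happily emit something like it, and then apologise when you paste the compiler error back in.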
I'll let you in on a secret: I couldn't remember the name of the Gell-Mann amnesia effect when I went to write this comment. A few minutes ago I asked chatgpt what it was called. But after chatgpt told me, I googled it to make sure it got the name right, so I wouldn't look like an idiot.
I claim most questions I have in life are like that.
But there are certainly times when (1) it's difficult to know if an answer is correct or not, and (2) believing an incorrect answer has large, negative consequences. For example: computer security, building rocket ships, research papers, civil engineering, law, medicine. I really hope people aren't taking chatgpt's answers in those fields too seriously.
But for almost everything else, it simply doesn't matter that chatgpt is occasionally confidently wrong.
For example, if I ask it to write an email for me, I can proofread the email before sending it. The other day I asked it for scene suggestions in improv, and the suggestions were cheesy and bad. So I asked it again for better ones (less cheesy this time). I ask for CSS and the CSS doesn't quite work? I complain at it and it tries again. And so on. This is what chatgpt is good for today. It is insanely useful.