Hacker News

ChatGPT's knowledge doesn't run that deep. If ChatGPT can write a credible essay about a given subject it means your subject was rather generic to begin with.

I was playing a bit with ChatGPT and wondered how much actual knowledge could be stored in that model. So I asked ChatGPT whether it knew the song by Franz Schubert called "Am Feierabend" and, if so, whether it could tell me the subject/meaning of the song. "Certainly I know this song," responded ChatGPT, and then gave me -- with total confidence -- a completely wrong answer about the meaning of the song. In fact, I was a bit baffled by the authority with which it spouted this nonsense. A truly intelligent system would be aware of the limits of its knowledge, right?

So I think that with a minimum of creativity teachers can come up with questions that can easily stump ChatGPT.



> If ChatGPT can write a credible essay about a given subject it means your subject was rather generic to begin with.

ChatGPT created a pretty good essay for my daughter's home assignment. She had to write a fictive autobiography from the perspective of a 16th century noblewoman. Is that too generic? (Side note: ChatGPT did it in Hungarian.)

> A truly intelligent system would be aware about the limits its knowledge, right?

That's a question of definitions. If you ask me, ChatGPT is a truly intelligent system that is ridiculously unaware of its own limitations. That's not a contradiction per se; I've met very smart, high-functioning megalomaniacs.


> Is that too generic?

No, but as you stated, it's fictive... I can also tell you a lot of fictive science facts, or fictive president names, or anything fictive for that matter.

The goal is to use your _own_ imagination. Of course ChatGPT can align sentences in a semi-coherent manner; that's its whole purpose.


ChatGPT has no concept of trying to be correct with regard to world knowledge. This doesn't just apply to obscure things. For instance, when I ask it about mainstream books and TV shows, it frequently misattributes words or actions to the wrong character. Not only that, it will then proceed to explain why the character said it and how it reflects on the character.

It's not about awareness or limits of knowledge. From the point of view of a language model, it doesn't matter whether it was Todd or Walter White who killed Lydia, or whether it was Kinbote or Shade who invented a phrase. It only tries to generate a response to your input that reads as a plausible continuation.
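To make "plausible continuation" concrete, here's a toy sketch of the core step a language model performs: turn raw scores (logits) for candidate next words into probabilities and emit the most likely one. The vocabulary and scores below are invented for illustration; a real model learns billions of such weights. Note that nothing in this computation checks which answer is factually true.

```python
import math

# Hypothetical logits for the next word after a prompt like
# "Lydia was killed by". Scores are made up for illustration.
logits = {"Todd": 2.1, "Walter": 1.9, "Jesse": 0.4, "nobody": -1.0}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    m = max(scores.values())            # subtract max for numerical stability
    exps = {w: math.exp(s - m) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

probs = softmax(logits)
# Greedy decoding: pick the highest-probability continuation.
# Plausibility (high probability) is all that's optimized here;
# factual correctness never enters the calculation.
best = max(probs, key=probs.get)
```

Whether "Todd" or "Walter" comes out depends only on which score the training data happened to produce, which is why confident misattributions are unsurprising.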





