No, hallucination is a better term. It conveys the important fact -- "these chatbots will just confidently state things as facts even though they're completely made up" -- in a way that everyone understands. If you used the term "confabulation" you'd have to start by explaining what "confabulation" means any time you wanted to talk about a chatbot making something up.
"Confabulation" isn't even more accurate. The problem with hallucinations isn't a "gap in memory". The fundamental problem is that the chatbots are "plausible English text" generators(*), not intelligent agents. As such, no existing term is going to fit perfectly -- it neither hallucinates nor confabulates, it just generates probable token sequences(*) -- so we may as well use a word people know.
(*) I know it's slightly more complicated, especially with RLHF and stuff, but you know what I mean.
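To make "generates probable token sequences" concrete, here's a rough sketch of the core step -- using GPT-2 via the Hugging Face transformers library purely as an illustration; any causal LM works the same way, and the prompt is just an example. Nowhere in this does the model distinguish "true" from "made up"; it only ranks possible continuations by probability and picks one.

    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    # Illustration only; model choice and prompt are arbitrary.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The first person to walk on Mars was"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # scores for the next token
    probs = torch.softmax(logits, dim=-1)

    # Whatever gets picked here is asserted with the same confidence,
    # whether or not it corresponds to anything real.
    top = torch.topk(probs, k=5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode([int(idx)])!r:>12}  p={p.item():.3f}")

Repeating that step over and over is all the "confident assertion" there is.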