There are many kinds of wrong answers, and the difference lies in how the answer came to be. In the case of BS/hallucination there is no reason or logic behind the answer; with an LLM it is basically just generated text that wasn't produced by any real reasoning and wasn't based on facts.
You can of course argue whether it matters how a wrong answer came about, but there is a difference.
No, it’s more specific than just wrong.
Hallucination is when a model invents a piece of fictitious knowledge and then uses that knowledge to answer a question.