That makes sense, and I think your definition of hallucinations is a technically correct one. Going forward, I think your readers might appreciate you tracking "dumb errors" alongside (but separate from) hallucinations. They're a regular part of working with these systems, and they impose cognitive load on the user, so it's useful to know whether that load will rise, fall, or stay consistent with a new model release.