I think "mostly plausible-sounding" is, albeit simplified, OK for an analogy I guess. But the "word salad" part gives the impression it doesn't even look like real human text, which it kind of does at the surface. I think it's mostly "word salad" that makes it sound far off from the truth.
> Over the past year the chat bots have improved in many ways, but their written output has regressed to the mean: the average RLHF-driven preference.
How could you possibly judge such a diverse set of outputs? There are thousands of models, each of which can be steered/programmed with prompts and a lot of parameter-twiddling, so it's basically impossible to say "the chat bots" and give some one-size-fits-all judgement of all LLMs. I think your reply shows a bit of ignorance if that's all you've seen.
Oxford Dictionaries defines "word salad" as "a confused or unintelligible mixture of seemingly random words and phrases". True, I'm not a native speaker, but that's not typically the output I get from LLMs. Sometimes, though, some people's opinions on the internet do feel like word salad, but I guess that's hard to distinguish from bait too.
I meant what I said: chat bots, not models or APIs. Give it a try if you don't believe me. Try using the leading chat interfaces logged out, from a clean browser and a new IP.
Sure, I don't doubt that, but still, what do you think these chat bots are using? Or are you talking about ELIZA? If so, what you say now makes a lot of sense.