It's nonsensical: the celeb announces they're going to rehab and notes that it (?) is an issue affecting all women, at least as of earlier today (??); they also noted it wasn't drugs or alcohol this time, but a life (???).
Without instruction tuning, even a perfect language model produces output that is only as intelligible as random text drawn from its training set. And the training set probably contains a lot of spam and junk.
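To make that concrete, here's a minimal sketch (assuming the Hugging Face transformers API; "some-base-checkpoint" is a placeholder, not a real model name): a base model treats a prompt as text to continue, so the completion reflects whatever the training distribution looked like, spam included.

```python
# Sketch: sampling a continuation from a base (non-instruction-tuned) LM.
# "some-base-checkpoint" is a placeholder model name, not a real one.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("some-base-checkpoint")
model = AutoModelForCausalLM.from_pretrained("some-base-checkpoint")

prompt = "What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt")

# A base model sees the prompt as text to continue, not a question to
# answer, so it may emit more questions, forum chatter, or spam-like text.
output = model.generate(**inputs, do_sample=True, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```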
I've heard a number of people say (earlier in the thread) that the quantization and the default sampling parameters are way off. Honestly, even running a model of that size at all is the big achievement here; getting the accuracy to actually match the benchmark numbers is the big next step, I believe.
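For reference, these are the knobs people usually mean by "default sampling parameters". A minimal sketch using the Hugging Face pipeline API (the model name is again a placeholder, and the values are illustrative, not anyone's recommended defaults):

```python
# Sketch: the common sampling knobs. Values are illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="some-base-checkpoint")
out = generator(
    "Once upon a time",
    do_sample=True,
    temperature=0.7,         # < 1.0 sharpens the next-token distribution
    top_k=40,                # keep only the 40 most likely tokens
    top_p=0.9,               # nucleus sampling cutoff
    repetition_penalty=1.1,  # discourage the model from looping
    max_new_tokens=100,
)
print(out[0]["generated_text"])
```

Quantization is a separate, load-time choice; the sampling parameters above only shape how tokens are drawn from whatever model you've loaded.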
But if it is a trimmed version, it is wrong to call it LLaMA.