
Well... that's the whole point: it cannot make sense. It's stringing together words based on its dataset. There is zero sense-making, zero interpretation, zero understanding. Words, strung together, including when it says "no nonsense": often enough in its dataset, that's the series of words that best matches the "stop saying BS!" kind of prompt.
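
To make "statistically plausible word-stringing" concrete, here is a minimal sketch of autoregressive sampling over a toy hand-written bigram table (the table, tokens, and probabilities are invented for illustration; a real LLM computes these distributions with a learned neural network over subword tokens):

    import random

    # Toy next-token distribution P(next | previous). Purely illustrative;
    # an actual LLM derives these probabilities from billions of parameters.
    bigram = {
        "no":       {"nonsense": 0.9, "way": 0.1},
        "nonsense": {".": 1.0},
    }

    def sample_next(prev):
        dist = bigram[prev]
        return random.choices(list(dist), weights=list(dist.values()))[0]

    tokens = ["no"]
    while tokens[-1] in bigram:          # stop when no continuation is known
        tokens.append(sample_next(tokens[-1]))
    print(" ".join(tokens))              # e.g. "no nonsense ."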




do you ever get tired of pointing out that a large language model is a language model?

Update: I do that as well when explaining to my relatives why I don't care what ChatGPT thinks about $X, but then again, they're not on HN.


Worry not, pointing out improper use of language that benefits the biggest corporations on Earth, the ones destroying the planet, is kind of a hobby of mine.

stylistic preferences are pretty much the ONLY thing you could discuss (in the context of LLMs) that actually has anything to do with (natural) language in the first place; how is having preferences an "improper use of language"?

I'm not sure I follow. My point is that pretty much everybody without a degree in CS or IT assumes, thanks to Big AI marketing, that LLMs and GenAI tools think. This is reflected in the words they use. Such people do not say "the model parses my query and processes it via its neural-network-based architecture to give a statistically plausible answer given the context"; rather, they say "I had a chat with Claude and he said something useful", thus implying agency and a lot more.

two questions:

1. do you ever point out that you can't actually mine bitcoin with a pickaxe?

2. what made you think that the parent comment somehow implied that it "actually thinks"?


Excellent questions:

1. I did actually mine Bitcoin back in the day (back when it was still a cryptoanarchist dream, not yet co-opted by the finance industry and scammers, and not yet destroying the planet... so a while ago), so I had to explain that too, unfortunately. It does highlight a trend: again, people without technical expertise take marketing terms at face value.

2. they said "maybe just don't include nonsense in the answer?", which does imply they believe hallucinations are a side effect that can simply be engineered away.


1. my point is that "thinking" is easier to say than "a composition of parameterized nonlinear functions trained by stochastic gradient descent with reinforcement learning on top" (a sketch follows point 2). Misnomer or not, it's not even ambiguous here (unless we're talking chain-of-thought vs. an arbitrary single token)

2. OR they meant that it's violating the Gricean maxims (e.g., the maxim of quality: don't say what you believe to be false); why are you assuming everyone is stupid?
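
For the record, a minimal sketch of what that mouthful in point 1 denotes, i.e. a composition of parameterized nonlinear functions (a two-layer MLP forward pass; the shapes and random weights are invented for illustration, and in practice the parameters would be fitted by SGD rather than sampled):

    import numpy as np

    rng = np.random.default_rng(0)
    # Parameters; in a real model these are learned by stochastic gradient descent.
    W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
    W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

    def mlp(x):
        h = np.tanh(W1 @ x + b1)   # nonlinear function, parameterized by W1, b1
        return W2 @ h + b2         # composed with an affine map W2, b2

    print(mlp(np.array([1.0, 0.0, -1.0])))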


> why are you assuming everyone is stupid?

I never said that. Please never contact me again. Such simplifications just prevent having a proper discussion. I don't need this kind of toxicity.



