> When you point out that the LLM does not know the number of R's in the word "Strawberry", you are not exposing the LLM as some kind of sham, you're just admitting to being a fool.

I'm sorry, but that's not reasonable. Yes, I understand what you mean at an architectural level, but if a product is being deployed to the masses, you are the fool if you expect every user to have a deep architectural understanding of it.

If it's being sold as "this model is a PhD-level expert on every topic in your pocket", then the underlying technical architecture and its specific foibles are irrelevant. What matters is the claims about what it's capable of doing and its actual performance.

Would it matter if GPT-5 couldn't count the number of r's in a specific word if the marketing claims being made around it were more grounded? Probably not. But that's not what's happening.
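For anyone curious what the "architectural level" point actually is: these models never see individual letters, only subword token IDs, so "count the R's" asks about a view of the input the model isn't given. A rough sketch in Python, assuming the tiktoken package and its cl100k_base encoding (both my choice here, not anything from the comments above):

    # Why letter-counting is awkward for an LLM: the model is fed
    # subword token IDs, not characters.
    # Assumes tiktoken is installed (pip install tiktoken).
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    word = "Strawberry"
    token_ids = enc.encode(word)

    # The subword pieces the model actually receives.
    print([enc.decode([tid]) for tid in token_ids])

    # Counting letters is trivial over characters, but the model
    # never operates on this representation of its input.
    print(word.lower().count("r"))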



> If it's being sold as "this model is a PhD-level expert on every topic in your pocket",

The thing that pissed me off about them using this line is that they've spoiled it for the people who actually pull that off one day.



