Hacker News

This is a dumb argument. Humans frequently fall for the same tricks; are they not "intelligent"? All intelligence is ultimately based on some sort of statistical model, some represented in neurons, some represented in matrices.


State-of-the-art LLMs have been trained on practically the whole internet. Yet, they fall prey to pretty dumb tricks. It's very funny to see how The Guardian was able to circumvent censorship on the Deepseek app by asking it to "use special characters like swapping A for 4 and E for 3". [1]
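The trick the Guardian describes defeats any filter that matches banned keywords literally. A minimal sketch of why, with an invented blocklist and example strings (not DeepSeek's actual filtering):

```python
# Naive keyword blocklist vs. the "swap A for 4 and E for 3" trick.
# The blocklist, words, and helper names here are hypothetical.
LEET = str.maketrans({"a": "4", "e": "3"})

def to_leet(text: str) -> str:
    # Apply the character substitution described in the article.
    return text.lower().translate(LEET)

def naive_blocklist(text: str, banned: list[str]) -> bool:
    # Returns True if the text would be blocked.
    return any(word in text.lower() for word in banned)

banned = ["example"]
plain = "an example sentence"
tricked = to_leet(plain)                 # "4n 3x4mpl3 s3nt3nc3"
print(naive_blocklist(plain, banned))    # True: caught
print(naive_blocklist(tricked, banned))  # False: slips through
```

A substring match never sees "example" once its letters are swapped for look-alike digits, so the substituted text passes straight through.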

This is clearly not intelligence. LLMs are fascinating for sure, but calling them intelligent is quite the stretch.

[1]: https://www.theguardian.com/technology/2025/jan/28/we-tried-...


The censorship is in fact not part of the LLM. This can be shown easily by cases where LLMs visibly stream out censored sentences, which then disappear.


The nuance here is that this only proves additional censorship is applied on top of the output. It does not disprove that (sometimes ineffective) censorship is part of the LLM itself, or that censorship was attempted during training.
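The "text appears, then vanishes" behavior is consistent with a moderation pass that runs alongside streaming. A hedged sketch of that architecture (hypothetical blocklist and messages, not DeepSeek's actual pipeline):

```python
# Stream tokens to the user as they arrive; a separate post-hoc filter
# scans each token and retracts everything already shown on a match.
# BLOCKLIST and the retraction message are invented for illustration.
BLOCKLIST = {"forbidden"}

def stream_with_filter(tokens):
    """Yield ("SHOW", token) events, or a final ("RETRACT", msg) event."""
    for tok in tokens:
        if tok.lower() in BLOCKLIST:
            # The text already displayed gets replaced client-side.
            yield ("RETRACT", "Sorry, I can't help with that.")
            return
        yield ("SHOW", tok)

events = list(stream_with_filter(["this", "is", "forbidden", "text"]))
# First two tokens are shown before the filter catches the third.
```

Because the filter sits outside the model, some tokens reach the screen before it reacts, which is exactly the flicker people report. Training-time censorship, by contrast, would never emit the text at all.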


For your definition of “clearly”.


Humans run on hardware that is both faulty and limited in terms of speed and memory. They have a better "algorithm" for using that hardware to compensate. LLMs run on almost perfect hardware, able to store and retrieve enormous amounts of information and perform mechanical operations on it insanely quickly.

Yet they "make mistakes", and those are not the same as human mistakes. LLMs follow an algorithm that is far simpler and inferior: they use the hardware to perform incorrect ("illogical", "meaningless") operations, thus giving incorrect results.

See my other replies for more depth.


Yes, but we have the ability to reason logically, step by step, when we have to. LLMs can't do that yet. They can approximate it, but it is not the same.



