
I don't have a single reference that says outright "LLMs are doing the same kind of abstract thinking as humans do." Rather, the evidence is scattered across a thousand articles and evaluations, in which LLMs prove over and over that they excel at cognitive skills once exclusive to humans - or fail at them in amusingly humanlike ways.

But the closest thing is probably Anthropic's famous interpretability papers:

https://transformer-circuits.pub/2024/scaling-monosemanticit...

https://transformer-circuits.pub/2025/attribution-graphs/bio...

In these, Anthropic finds circuits in an LLM that correspond to high-level abstractions the model can recognize and use, and traces how those circuits connect to one another. Those connections form the foundation of associative abstract thinking.
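For a sense of the technique behind the monosemanticity paper: Anthropic trains sparse autoencoders on a model's internal activations, so that each learned feature tends to fire on one interpretable concept. Here's a minimal sketch of that architecture - the dimensions, weights, and data are all made up for illustration, and this is nothing like their production code:

```python
# Toy sparse autoencoder of the kind used in interpretability work to
# decompose model activations into (hopefully) interpretable features.
# All dimensions and weights here are arbitrary stand-ins.
import numpy as np

rng = np.random.default_rng(0)

d_model, d_features = 16, 64  # hypothetical activation dim, feature dict size

# Random weights stand in for trained ones.
W_enc = rng.normal(0, 0.1, (d_model, d_features))
b_enc = np.zeros(d_features)
W_dec = rng.normal(0, 0.1, (d_features, d_model))
b_dec = np.zeros(d_model)

def encode(x):
    # ReLU encoder: each nonzero entry is a "feature" firing on this activation.
    return np.maximum(0.0, x @ W_enc + b_enc)

def decode(f):
    # Linear decoder reconstructs the original activation from active features.
    return f @ W_dec + b_dec

x = rng.normal(size=d_model)   # a fake residual-stream activation vector
f = encode(x)                  # sparse feature activations
x_hat = decode(f)              # reconstruction

# Training minimizes reconstruction error plus an L1 penalty that pushes
# most features to zero, which is what makes them interpretable.
loss = np.sum((x - x_hat) ** 2) + 1e-3 * np.sum(np.abs(f))
```

The interesting part isn't the autoencoder itself - it's that, after training on real activations, individual features turn out to track concepts like "Golden Gate Bridge" or "deception", and the attribution-graphs paper then traces how such features feed into one another across layers.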


