Hacker News
svachalek | 6 days ago | on: Reasoning models don't always say what they think
This is an interesting paper: it postulates that an LLM's ability to perform tasks correlates mostly with the number of layers it has, and that reasoning creates virtual layers in the context space.
https://arxiv.org/abs/2412.02975