Havoc | 17 days ago | on: DeepSeek-R1
That's the nature of LLMs. They can't really think ahead to "know" whether reasoning is required, so if a model is tuned to spit out reasoning first, that's what it'll do.
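A minimal sketch of what this looks like in practice, assuming one of the public DeepSeek-R1 distill checkpoints and Hugging Face transformers (the model ID and the prompt are illustrative, not from the comment):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint: one of the R1 distill releases; swap in your own.
model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Decoding is strictly token by token: there is no step where the model
# decides up front that a question is too easy to need reasoning. A
# reasoning-tuned checkpoint therefore opens with its <think> block even
# for a trivial prompt.
messages = [{"role": "user", "content": "What is 2 + 2?"}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
out = model.generate(inputs, max_new_tokens=256)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
# Typical output begins with "<think> ... </think>" before the answer,
# regardless of how simple the question is.
```

The point is that "whether to reason" isn't a branch the model takes; it's baked into the distribution over the first tokens it emits.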