
LLMs have essentially no capability for internal thought. They can't produce the right answer without doing that thinking out loud in the output.

Of course, you can use thinking mode and then it'll just hide that part from you.



No, even in thinking mode it will be sycophantic and write huge essays as output.

It can work without that. I just have to prompt it five times, increasingly aggressively, and it'll output the correct answer without the fluff just fine.


They already do hide a lot from you when thinking; this person wants them to hide more instead of doing their "thinking" out loud in the response.



