
Yes. I strongly feel that reinforcement learning should be applied to punish the LLMs for speculating about their past behavior. They should respond along the lines of “I’m sorry, I don’t know why I said 3 + 5 is 9, but I will try to answer again.”
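To make the idea concrete, here is a minimal sketch of that reward-shaping term as it might appear in an RLHF-style fine-tuning loop. The function name, keyword lists, and scoring values are illustrative assumptions on my part, not any real library's API or a robust detector:

    # Sketch of a reward term that penalizes confabulated self-explanations.
    # Keyword heuristics below are placeholders; a real setup would use a
    # learned classifier or human preference labels instead.

    ADMITS_UNCERTAINTY = (
        "i don't know why i said",
        "i'm not sure why i answered",
        "i can't reconstruct my earlier reasoning",
    )

    SPECULATES_ABOUT_PAST = (
        "i said that because",
        "my earlier answer was based on",
        "i must have been thinking",
    )

    def self_explanation_reward(followup: str) -> float:
        """Score the model's reply to 'why did you say X?'.

        Negative reward for speculating about its own past behavior,
        positive reward for admitting it doesn't know and offering to retry.
        """
        text = followup.lower()
        if any(p in text for p in SPECULATES_ABOUT_PAST):
            return -1.0  # punish speculation about past outputs
        if any(p in text for p in ADMITS_UNCERTAINTY):
            return 1.0   # reward the "I don't know why I said that" style
        return 0.0       # neutral otherwise

    if __name__ == "__main__":
        good = "I'm sorry, I don't know why I said 3 + 5 is 9, but I will try again."
        bad = "I said that because I was rounding up."
        print(self_explanation_reward(good))  # 1.0
        print(self_explanation_reward(bad))   # -1.0

The scalar values are arbitrary; the point is only that the training signal favors honest uncertainty over invented rationales.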


