Hacker News

If it's still 2020, then yes. In 2025, post-training such as RLHF means these models do not just predict the next token; the reward function is far more involved than that.


Instruct models like ChatGPT are still token predictors. Instruction following is an emergent behavior from fine-tuning and reward modeling layered on top of the same core mechanism: autoregressive next-token prediction.
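To make the point concrete, here is a toy sketch of the autoregressive loop the comment describes. The bigram lookup table stands in for a real neural network; everything here (`BIGRAMS`, `predict_next`, `generate`) is a hypothetical illustration, not any actual model's code. RLHF changes how the predictor's weights are trained, but generation is still this loop: predict the next token, append it, repeat.

```python
# Hypothetical stand-in for a trained model: a bigram table mapping
# the last token to the most likely next token. In a real LLM this
# would be a transformer forward pass producing a distribution.
BIGRAMS = {
    "<s>": "the",
    "the": "cat",
    "cat": "sat",
    "sat": "</s>",
}

def predict_next(context):
    """Return the predicted next token given the tokens so far."""
    return BIGRAMS[context[-1]]

def generate(max_tokens=10):
    """Autoregressive generation: feed the growing sequence back in."""
    tokens = ["<s>"]
    for _ in range(max_tokens):
        nxt = predict_next(tokens)
        tokens.append(nxt)
        if nxt == "</s>":  # stop token ends generation
            break
    return tokens[1:-1]  # drop start/stop markers

print(generate())  # ['the', 'cat', 'sat']
```

Instruction tuning and reward modeling reshape which continuations the predictor favors; the sampling mechanism itself is unchanged.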



