
Think base models. LLMs are extremely good at predicting what comes next in human language, since next-token prediction is literally their only optimization goal. Why couldn't they also be good at predicting what comes next in other complex forecasting tasks?
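The next-token objective described above can be sketched in miniature with a bigram counter over a serialized sequence — a toy stand-in for a base model, not how an actual LLM is implemented (`train_bigram`, `predict_next`, and the sample series are all illustrative):

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    # Count next-token frequencies per token: the same next-token
    # objective a base LLM is trained on, reduced to bigram counts.
    counts = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, token):
    # Greedy decoding: emit the most frequent observed continuation.
    return counts[token].most_common(1)[0][0]

# A periodic "time series" serialized as text tokens, the way one
# might hand a numeric history to a base model as a prompt.
series = "1 2 3 1 2 3 1 2 3".split()
model = train_bigram(series)
print(predict_next(model, "3"))  # → 1 (continues the learned pattern)
```

The point is only that "forecasting" here is the same operation as language modeling: given the history so far, score candidate continuations.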

Of course, whatever can be calculated mathematically without inference will be. LLMs may, however, be able to interpret the results of those calculations better than humans or current stochastic evaluations can.


