Think base models. LLMs are extremely good at predicting the future of a text in human language, since next-token prediction is literally their only optimization goal. Why couldn't they also be good at predicting the future when applied to other complex forecasting tasks?
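To make the "base model as forecaster" framing concrete, here is a minimal sketch of one way to probe it: read the model's next-token distribution over candidate answers as a crude probability estimate. It assumes the Hugging Face `transformers` library, the small `gpt2` checkpoint, and a made-up rain question purely for illustration; none of these are from the original comment.

```python
# Minimal sketch: treat a base model's next-token distribution as a crude
# forecast. Assumes `transformers` + `torch` and the `gpt2` checkpoint;
# any non-instruction-tuned causal LM would work the same way.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Hypothetical forecasting prompt, used only to illustrate the idea.
prompt = "Q: Will it rain in London tomorrow? A:"

def continuation_probability(candidate: str) -> float:
    """Probability the model assigns to `candidate` as the continuation of `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + candidate, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probs of each token given everything before it.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    start = prompt_ids.shape[1] - 1  # position of the first candidate token
    idx = torch.arange(start, targets.shape[0])
    return log_probs[idx, targets[start:]].sum().exp().item()

# Normalise over two candidate answers to get a rough yes/no "forecast".
p_yes = continuation_probability(" Yes")
p_no = continuation_probability(" No")
print(f"P(rain) ~= {p_yes / (p_yes + p_no):.2f}")
```

A tiny base model will of course give a poorly calibrated number here; the point is only that the forecasting interface (question in, probability out) falls directly out of next-token prediction.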
Of course, whatever can be calculated mathematically without inference is going to be calculated that way. LLMs may, however, be able to interpret the results of those calculations better than humans or current stochastic evaluations do.