It's not exactly a secret that GPT is bad at processing math in inputs, which seems to have something to do with the token representation being poorly suited for it, for starters. But I thought we were talking more generally about functions mapping inputs to outputs - the "perfect understanding" function is the one that captures all the relationships between the entities in the input and can therefore always give a perfect answer to the extent that the training data contains it, while an approximated "partial understanding" function internally has a much simpler model of those entities and their relationships (which can still turn out to be good enough). Mapping inputs to outputs is an especially simple kind of model, but it's still a model and an approximation.
And yes, in your example it's of course simpler to just store the function itself - provided that you know in advance what it is, which, as a reminder, GPT does not. But when we're dealing with the f(question) = answer of chatbots, the function GPT ends up approximating is decidedly not simple.
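To make the "store it vs. approximate it" distinction concrete, here's a minimal sketch - the target function, the sample data, and the tiny linear model are all invented for illustration, not anything specific to GPT. If you know the rule up front, you just write it down; a learner that only sees input-output pairs has to fit some simpler internal model to them, and that model is an approximation whether or not it happens to be good enough:

```python
# Illustrative only: a "known" function vs. an approximation learned from samples.
import math
import random

# If you know the function in advance, storing it is trivial:
def true_f(x):
    return 3.0 * x + math.sin(5.0 * x)   # mostly linear, with small wiggles

# A learner only sees input-output pairs...
samples = [(x, true_f(x)) for x in (random.uniform(-5, 5) for _ in range(50))]

# ...and fits a simpler internal model to them (here: a line, via plain SGD).
w, b = 0.0, 0.0
lr = 0.01
for _ in range(2000):
    for x, y in samples:
        err = (w * x + b) - y     # prediction error on one example
        w -= lr * err * x         # nudge parameters to shrink the error
        b -= lr * err

print(f"learned w={w:.3f}, b={b:.3f}  (true slope is 3.0)")
# The learned line approximates true_f without ever containing it as code,
# and it stays simpler than the thing it approximates - "good enough" or not.
```

Nothing about this toy fit says anything about GPT itself; the point is just that "storing the function" and "approximating it from examples" are different operations, and only the second is available when you don't know the function in advance - let alone when the function is f(question) = answer.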