No, it does not: if you google this and restrict the time range to before 2021 (the training cutoff date), you will find the same answer. Without access to the training data it's impossible to tell what we're seeing.
It absolutely needed to know who the successor would be via training data.
But knowing that "The Queen of England died" also means that the head of state of Australia has changed implies that it has an internal representation of those relationships.
(Another way of seeing this is with multi-modal models, where the visual concepts and word concepts are related closely enough that it can map between the two.)
I see what you mean, and it's indeed quite likely that texts containing such hypothetical scenarios were included in the dataset.
Nonetheless, the implication is that the model was able to extract the conditional being expressed, recognize when that condition was in fact met (or at least asserted: "The Queen died."), and then apply the entailed conclusion.
To me that demonstrates reasoning capabilities, even if, for example, it memorized/encoded entire Quora threads in its weights (which seems unlikely).
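To make the logical structure concrete, here is a toy sketch in Python of the pattern being described: a stored relationship, an asserted condition, and the entailed consequence. It is purely illustrative (the dictionary and function names are made up for this example) and says nothing about how the model actually represents any of this internally.

```python
# Toy illustration of the inference pattern, not a model internal.
# The relationship "Australia's head of state is the UK monarch" acts as a
# stored conditional; asserting "the Queen died" triggers the entailed update.

facts = {
    "uk_monarch": "Elizabeth II",
    "uk_successor": "Charles III",  # known from training data
}

def head_of_state(country, facts):
    # Encoded relationship: Australia's head of state is whoever the UK monarch is.
    if country == "Australia":
        return facts["uk_monarch"]
    raise KeyError(country)

def assert_event(event, facts):
    # Modus ponens: the condition "the Queen died" is met (or at least asserted),
    # so apply the entailed consequence.
    if event == "the Queen died":
        facts["uk_monarch"] = facts["uk_successor"]

assert_event("the Queen died", facts)
print(head_of_state("Australia", facts))  # -> Charles III
```

The point of the sketch is only that answering the question requires chaining the conditional with the asserted fact, which is what the model appears to do.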
If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.