I don't see how your example demonstrates your hypothesis, though. Summing two numbers and giving the next number in the Fibonacci sequence would be expected from a deep and complex statistical modelling of the existing internet data.
Both of these examples show GPT not merely approximating outputs based on the training set (outputs which don't exist anywhere in the real world for these inputs), but understanding algorithms and being able to apply them. I don't believe our brains are doing anything different from that.
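To make that concrete, here's a rough sketch (in Python, with made-up seed values) of what "applying the algorithm" means for the Fibonacci case: the continuation of an arbitrary seed pair almost certainly never appears verbatim in any training corpus, so getting it right means running the recurrence, not retrieving memorized text.

```python
def next_terms(a: int, b: int, n: int) -> list[int]:
    """Return the next n terms of the sequence where each term is the
    sum of the previous two (the Fibonacci recurrence, arbitrary seeds)."""
    terms = []
    for _ in range(n):
        a, b = b, a + b
        terms.append(b)
    return terms

# Seeds chosen arbitrarily rather than the canonical 1, 1 -- this exact
# continuation is very unlikely to exist anywhere in the training data.
print(next_terms(7919, 104729, 5))
# -> [112648, 217377, 330025, 547402, 877427]
```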