I'm not talking about GPT-3; I'm discussing the theoretical question raised by the grandparent of my comment: How is predicting the output of a function fundamentally different from executing the code?
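To make the question concrete, here's a minimal sketch (hypothetical names, Python for brevity): a "predictor" that maps every input to the same output as the function itself is observationally indistinguishable from executing it.

    def execute(x):
        # Actually run the code.
        return x * x + 1

    # A "perfect predictor": a lookup table built in advance,
    # so no computation happens at prediction time.
    prediction_table = {x: x * x + 1 for x in range(10)}

    def predict(x):
        # Return the predicted output without running `execute`.
        return prediction_table[x]

    # Over the table's domain, the observable behavior is identical,
    # which is the crux of the question above.
    assert all(execute(x) == predict(x) for x in range(10))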
We call computers deterministic even though they don't perform the calculations we set them with perfect reliability. The probability that they'll be correct is very high, but it's not 1. So our requirement for something to count as deterministic is certainly not "perfectly a hundred percent of the time", as the parent to my comment suggested.