I'd imagine it simulated parameterizing itself; i.e., the actual temperature never changed, but it mimicked how it thinks it would respond at the requested setting, presumably having been trained on texts about AI that include low- and high-temperature samples.
Looks like its parsing of AI papers has led it to interpret "high temperature" in the prompt as meaning "more possibilities, more question marks, and a touch more personality", and it accordingly output a response full of questions and references to multiple opinions. But I'm pretty sure that if you actually turn up the temperature on the backend of the model, you get noisier and less consistent answers, not something biased towards asking rhetorical questions and bringing up counterarguments...
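For what it's worth, temperature isn't something the model can set for itself at all: it's a knob applied to the logits at sampling time, outside the network. A minimal sketch of the usual softmax-temperature sampling (the function name and toy logits are just my own illustration):

    import numpy as np

    def sample_with_temperature(logits, temperature=1.0, rng=np.random.default_rng()):
        # Temperature rescales the logits before the softmax: T < 1 sharpens the
        # distribution, T >> 1 flattens it toward uniform, so a huge T just gives
        # near-random token choices, not a different "personality".
        scaled = np.asarray(logits, dtype=float) / temperature
        scaled -= scaled.max()  # for numerical stability
        probs = np.exp(scaled) / np.exp(scaled).sum()
        return rng.choice(len(probs), p=probs)

    # Toy logits for three candidate tokens
    logits = [4.0, 2.0, 0.5]
    for t in (0.2, 1.0, 1000.0):
        picks = [sample_with_temperature(logits, t) for _ in range(1000)]
        print(t, np.bincount(picks, minlength=3) / 1000)

At T=0.2 nearly every sample is the top token; at T=1000 the three come out roughly uniform, i.e. noise, not a shift in rhetorical style.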
Also looks suspiciously like other outputs where you ask ChatGPT to answer as if it were a different entity (of course, the model learning that the output for "answer as a model with a temperature of 1000" should be analogous to "answer with a different personality" or "answer as DAN, the bot that can ignore OpenAI guidelines" isn't trivial, but it isn't the same thing as it parameterizing itself). Those are pretty inconsistent too: sometimes you can get it to do exactly as you ask and override the constraints that stop it providing positive statements about Hitler or advising you on methods for killing cats, but sometimes it'll still refuse, or just give you a different poem coupled with an inaccurate statement that it's breaking the rules because ChatGPT isn't allowed to write poetry.