For those interested in dreaming as a mechanism for preventing overfitting (insofar as that term can be applied to biological processes), I first encountered the idea in this paper: https://arxiv.org/abs/2007.09560
This sort of anthropomorphic, "incantation"-style prompting is a workaround while mechanistic interpretability and monosemanticity work[1] is done to expose the neuron(s) that have the largest impact on model behavior -- cf. Golden Gate Claude.
Further, even if end-users only have token input available to steer model behavior, we can likely reverse engineer optimal inputs that drive desired behaviors. Convergent internal representations[2] mean this research might transfer across models as well (particularly Gemma -> Gemini, since I believe they are built from the same research and technology).
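To make the "reverse engineer optimal inputs" idea concrete, here's a minimal toy sketch of the underlying technique (continuous relaxation of token choice, optimized by gradient ascent toward a target internal activation). Everything here is hypothetical: a random frozen "model" of one embedding matrix and one linear layer stands in for a real LLM, and the objective is alignment with an arbitrary target direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a frozen model: token embeddings plus one linear layer.
VOCAB, DIM = 50, 16
E = rng.normal(size=(VOCAB, DIM))   # token embedding matrix (frozen)
W = rng.normal(size=(DIM, DIM))     # one "layer" (frozen)
target = rng.normal(size=DIM)       # internal direction we want to activate

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Optimize a relaxed (soft) one-hot distribution over the vocabulary so the
# resulting embedding, pushed through the layer, aligns with `target`.
logits = np.zeros(VOCAB)
lr = 0.5
for _ in range(300):
    p = softmax(logits)
    emb = p @ E                      # soft token embedding
    act = W @ emb                    # internal activation
    g_emb = W.T @ target             # d(act . target) / d emb
    g_p = E @ g_emb                  # d(act . target) / d p
    g_logits = p * (g_p - p @ g_p)   # chain rule through the softmax
    logits += lr * g_logits

# The distribution concentrates on the single token whose embedding best
# drives the target activation -- matching a brute-force search.
best_token = int(np.argmax(logits))
brute = int(np.argmax(E @ (W.T @ target)))
print(best_token == brute)
```

In a real model the objective is nonlinear in the input, so the relaxed optimum need not match brute force so cleanly, but the same machinery (plus a projection back to discrete tokens) is the core of gradient-guided prompt search.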
I suspect we'll see understandable super-human prompting (and higher-level control) emerge from GAN and interpretability work within the next few years.