I'm not sure I agree that data manifolds are too rigid. Looking at score-based generative models and diffusion, we can see clear evidence of how flexible these representations are. We could say the same about statistical manifolds: the fact that the Fisher information is the natural metric tensor on the statistical manifold is what underlies many first- and second-order optimizers today.
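For concreteness, here's a minimal sketch of what I mean by the Fisher acting as the metric: a natural-gradient step on a toy unit-variance Gaussian, using an empirical Fisher estimate (all names and the toy model are mine, not from the article):

```python
import numpy as np

# Toy example: fit the mean of a unit-variance Gaussian to data.
# The Fisher information plays the role of the metric tensor that
# preconditions the gradient (trivial here, but the update pattern
# is the general natural-gradient one).

def grad_log_lik(theta, x):
    # d/dtheta log N(x | theta, 1) = (x - theta)
    return x - theta

def fisher(theta, xs):
    # Empirical Fisher: E[(d log p)^2] estimated from samples,
    # with a small jitter to keep it invertible.
    g = grad_log_lik(theta, xs)
    return np.mean(g * g) + 1e-8

def natural_gradient_step(theta, xs, lr=0.5):
    # Ordinary gradient of the negative log-likelihood...
    grad = -np.mean(grad_log_lik(theta, xs))
    # ...rescaled by the inverse Fisher metric.
    return theta - lr * grad / fisher(theta, xs)

xs = np.random.default_rng(0).normal(3.0, 1.0, size=1000)
theta = 0.0
for _ in range(20):
    theta = natural_gradient_step(theta, xs)
print(theta)  # converges toward the sample mean of xs
```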
The Banach fixed point theorem is used extensively for convergence proofs in reinforcement learning, but at the level of gradient descent for deep neural networks it's hard to apply, because most commonly used optimizers aren't guaranteed to converge to a unique fixed point.
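The RL case is easiest to see in the tabular setting, where the Bellman operator really is a gamma-contraction in the sup norm, so Banach's theorem gives a unique fixed point and geometric convergence of value iteration. A minimal sketch (toy two-state, one-action MDP, entirely made up):

```python
import numpy as np

gamma = 0.9
P = np.array([[0.8, 0.2],    # transition probabilities from each state
              [0.3, 0.7]])
R = np.array([1.0, 0.0])     # expected reward in each state

def bellman(V):
    # T V = R + gamma * P V is a gamma-contraction in the sup norm.
    return R + gamma * P @ V

V, V_prev = np.zeros(2), np.ones(2)
while np.max(np.abs(V - V_prev)) > 1e-10:
    V_prev, V = V, bellman(V)
print(V)  # the unique fixed point, V* = (I - gamma * P)^{-1} R
```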
The article seems to do the work of defining a Fisher information metric space and a contraction via the Stein score, which look like the hypotheses of the Banach fixed point theorem, but I'm not quite sure what conclusion we would get in this instance.
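If those hypotheses really hold (completeness of the space under the Fisher metric, and the score-driven map T being a contraction with modulus q < 1), the conclusion would just be the standard one:

```latex
% Banach fixed point theorem: if (X, d) is complete and
% T : X -> X satisfies d(Tp, Tp') <= q d(p, p') with q < 1, then
\exists!\, p^{*} \in X \ \text{with}\ T p^{*} = p^{*},
\qquad
d(p_n, p^{*}) \le \frac{q^{n}}{1-q}\, d(p_1, p_0)
\quad \text{where}\ p_{n+1} = T p_n .
```

So, as I read it, the payoff would be existence and uniqueness of the limiting distribution plus geometric convergence of the iterates in the Fisher metric, but I can't tell whether that's actually where the author is headed.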