
I'm not sure I agree about the data manifolds being too rigid. When we look at the quality of score-based generative models and diffusion models, we can see clear evidence of how flexible these representations are. We could say the same about statistical manifolds: the fact that the Fisher information is the canonical metric tensor on a statistical manifold is a fundamental piece of many first- and second-order optimizers today.
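
To make that last point concrete, here's a rough sketch of natural gradient descent, where the inverse Fisher information matrix acts as the metric preconditioning the ordinary gradient. The 1-D Gaussian model, the toy data, and the step size are my own illustrative choices, not anything from the article:

    # Natural gradient descent on a 1-D Gaussian N(mu, sigma^2),
    # using the closed-form Fisher matrix F = diag(1/sigma^2, 2/sigma^2)
    # as the metric tensor. All numbers are toy assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(loc=3.0, scale=2.0, size=1000)   # toy dataset

    mu, sigma = 0.0, 1.0   # initial parameters
    lr = 0.1               # step size

    for _ in range(200):
        # Euclidean gradient of the average negative log-likelihood
        g_mu = -(data - mu).mean() / sigma**2
        g_sigma = 1.0 / sigma - ((data - mu) ** 2).mean() / sigma**3

        # Fisher information matrix for (mu, sigma), and the natural
        # gradient F^{-1} g
        F = np.diag([1.0 / sigma**2, 2.0 / sigma**2])
        nat_grad = np.linalg.solve(F, np.array([g_mu, g_sigma]))

        mu -= lr * nat_grad[0]
        sigma -= lr * nat_grad[1]

    print(f"estimated mu = {mu:.3f}, sigma = {sigma:.3f}")

Preconditioning with F^{-1} is what makes the update invariant to how the distribution is parametrized, which is the sense in which the Fisher metric shows up in those optimizers.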



Would applying https://en.wikipedia.org/wiki/Banach_fixed-point_theorem yield interesting convergence (and uniqueness) guarantees?


The Banach fixed-point theorem is used extensively for convergence proofs in reinforcement learning, but when you operate at the level of gradient descent for deep neural networks it's difficult to apply, because most commonly used optimizers aren't guaranteed to converge to a unique fixed point.
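
For example (nothing article-specific, just the textbook RL case): the Bellman optimality operator is a gamma-contraction in the sup norm, so Banach gives a unique fixed point V* and geometric convergence of value iteration. The 2-state MDP below is made up for illustration:

    # Value iteration as a Banach fixed-point iteration: the Bellman
    # optimality operator T is a gamma-contraction in the sup norm,
    # so the iterates converge to the unique fixed point V*.
    import numpy as np

    gamma = 0.9
    # P[a, s, s'] : transition probabilities, 2 actions x 2 states (toy values)
    P = np.array([[[0.8, 0.2], [0.3, 0.7]],
                  [[0.1, 0.9], [0.6, 0.4]]])
    # R[a, s] : expected immediate reward (toy values)
    R = np.array([[1.0, 0.0],
                  [0.5, 2.0]])

    def bellman(V):
        # (T V)(s) = max_a [ R(a, s) + gamma * sum_s' P(a, s, s') V(s') ]
        return np.max(R + gamma * P @ V, axis=0)

    V = np.zeros(2)
    for i in range(1000):
        V_next = bellman(V)
        # sup-norm distance shrinks by at least a factor gamma each step
        if np.max(np.abs(V_next - V)) < 1e-10:
            break
        V = V_next

    print("fixed point V* ~", V, "after", i, "iterations")

SGD with a decaying learning rate, momentum, Adam, etc. don't give you an operator with that kind of uniform contraction property, which is why the same argument doesn't carry over directly to deep learning.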


The article seems to do the work of defining a Fisher information metric space and contractions with the Stein score, which looks like the hypotheses of the Banach fixed-point theorem, but I'm not quite sure what conclusion we would get in this instance.
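
If it helps, the conclusion Banach would buy you is existence and uniqueness of a fixed point plus geometric convergence of the iterates. Here's a toy one-dimensional illustration with a Stein score — for a Gaussian the score-following update is an affine contraction — but this is my own construction, not the article's operator:

    # Toy illustration: for N(m, s^2) the Stein score is
    # grad log p(x) = -(x - m) / s^2, and the update x -> x + eta * score(x)
    # is an affine contraction with Lipschitz constant |1 - eta/s^2| < 1
    # whenever 0 < eta < 2*s^2. Banach then gives a unique fixed point
    # (the mode m) and geometric convergence toward it.
    m, s2, eta = 3.0, 2.0, 0.5   # made-up parameters

    def step(x):
        score = -(x - m) / s2    # Stein score of N(m, s2)
        return x + eta * score

    x = 10.0
    for _ in range(50):
        # contraction: |step(x) - m| <= |1 - eta/s2| * |x - m|
        x = step(x)

    print(f"Lipschitz constant = {abs(1 - eta / s2):.2f}, iterate -> {x:.6f}, fixed point m = {m}")

So at best you'd get "this score-driven update has a unique stationary point and converges to it at a geometric rate", provided the article's operator really is a contraction under its Fisher metric.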


This would be a good introduction, though it won't cover the costs and benefits of a statistical space, IID assumptions, etc.:

https://youtu.be/q8gng_2gn70



