If there were a theory that led to directly useful results (for example, telling you the right hyperparameters for your data in a simple way, or giving you a new kind of regularization you could drop in to dramatically improve learning), then deep learning practitioners would love it. As it stands, such theories don't really exist.
This sets the bar way too high. You can absolutely have theories that lead to useful results even if they aren't as directly predictive as you describe. Take the theory of evolution as an obvious counterexample.