This is an odd way of putting it. I think it's better to say that, given some mostly uncontroversial assumptions, if one is willing to assign real-number degrees of belief to uncertain claims, then Bayesian statistical inference is the only way of reasoning about those claims that's compatible with classical propositional logic.
The willingness to assign real numbers to degrees of belief is itself the controversial assumption. Converted Bayesians tend to gloss over this fact. Many, as in a sibling comment, state that MLE is Bayesian statistics with a uniform prior, but this isn't true of most if not all frequentist inference, which is based on frequentist null hypothesis testing and confidence intervals, not MAP. Modeling uncertainty with uniform priors (or even more sophisticated non-informative priors a la Jaynes) is a recipe for paradoxes, and I know of no practical alternative proposal. I have no issue with Bayesian modeling in an ML context, with model selection and validation based on resampling methods, but IMO it doesn't live up to the foundational claims its proponents often make.
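To make the point concrete, here's a minimal sketch (a hypothetical Bernoulli example, not from either comment): the MLE does coincide with the MAP estimate under a uniform prior on p, but "uniform" is not parameterization-invariant, so putting a flat prior on the log-odds instead gives a different MAP estimate for the very same data. That's one of the paradoxes a flat prior invites.

```python
# Bernoulli data: k successes in n trials.
# MLE: argmax of the likelihood p^k * (1-p)^(n-k) is k/n.
def bernoulli_mle(k, n):
    return k / n

# MAP under a uniform Beta(1,1) prior on p:
# posterior is Beta(k+1, n-k+1), whose mode is (a-1)/(a+b-2) = k/n.
def bernoulli_map_uniform_p(k, n):
    a, b = k + 1, n - k + 1
    return (a - 1) / (a + b - 2)

# MAP under a uniform (improper) prior on the log-odds log(p/(1-p)):
# the induced prior on p is proportional to 1/(p*(1-p)), i.e. Beta(0,0),
# so the posterior is Beta(k, n-k) with mode (k-1)/(n-2).
def bernoulli_map_uniform_logodds(k, n):
    a, b = k, n - k
    return (a - 1) / (a + b - 2)

k, n = 7, 10
print(bernoulli_mle(k, n))                  # 0.7
print(bernoulli_map_uniform_p(k, n))        # 0.7  -- agrees with the MLE
print(bernoulli_map_uniform_logodds(k, n))  # 0.75 -- same data, different "flat" prior
```

So "MLE = Bayes with a flat prior" only holds for one particular choice of parameterization, and says nothing about confidence intervals or hypothesis tests.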