I would say that the TeX language was designed for the final user to add the "last mile", not for piling layers of macro substitution on top of something akin to lambda calculus. As amazing a feat of engineering as LaTeX is, it has abused the TeX language beyond its natural limits. The price paid in complexity for abstraction was high. But the TeX language itself is a tiny, elegant language.
Nice in theory; in practice you have LaTeX tooling with SyncTeX, autocompletion for commands, environments, and references, live math preview, proper syntax highlighting, jump-to-error-line, etc. Nothing like that is available for pandoc markdown AFAIK, except perhaps for Quarto, which may have its uses but is too slow for small/medium-sized documents, and its tooling is not that capable anyway. Besides, it adds yet another complex layer on top of an already way too layered stack.
A legitimate concern given that Typst is still maturing. But I have at least one thing to say in its favour: you can lock the version of the packages you import. The only reason LaTeX documents full of \usepackage calls are still reproducible ten years later is that the packages are in maintenance mode, not because of a well-thought-out, future-proof design.
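To make that concrete, here is a minimal sketch of what a pinned import looks like in Typst (the package name and version number are illustrative, not a recommendation of a specific release):

    // Resolves exactly version 0.3.1 of the cetz package from the
    // package index; a later release never changes this document.
    #import "@preview/cetz:0.3.1": canvas, draw

Since the version is part of the import spec itself, there is no equivalent of a \usepackage silently picking up whatever happens to be installed on the build machine.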
The willingness to assign real numbers to degrees of belief is the controversial assumption. Converted Bayesians tend to gloss over this fact. Many, as in a sibling comment, state that MLE is Bayesian statistics with a uniform prior, but this isn't true of most, if not all, frequentist inference, which is based on frequentist NHT and CIs, not on MAP. Modeling uncertainty with uniform priors (or even more sophisticated non-informative priors à la Jaynes) is a recipe for paradoxes, and there is no alternative practical proposal that I know of. I have no issue with Bayesian modeling in an ML context of model selection and validation based on resampling methods, but IMO it doesn't live up to the foundational claims its proponents often make.
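To spell out the only part of that claim that does hold (a standard textbook identity, nothing more): with a flat prior, the posterior mode coincides with the maximum-likelihood estimate,

    \hat{\theta}_{\mathrm{MAP}}
      = \arg\max_{\theta} p(\theta \mid x)
      = \arg\max_{\theta} p(x \mid \theta)\, p(\theta)
      = \arg\max_{\theta} p(x \mid \theta)   % when p(\theta) \propto 1
      = \hat{\theta}_{\mathrm{MLE}},

but a p-value or a confidence interval is a statement about the sampling distribution of a statistic under repeated data, not about a posterior over \theta, so no choice of prior turns NHT and CIs into MAP estimation.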
There are some time-traveling products that might help you fix that.