... So is it about not being able to observe short-lived particles directly, and having to work backwards from longer-lived interaction or decay products? Or about how the intermediate particles they have to calculate through also have empirically determined properties?
Most of that is measured corrections, not a theoretical model.
Entanglement is just a statistical effect in our measurements — we can’t say what is happening or why that occurs. We can calculate that effect because we’ve fitted models, but that’s it.
Similarly, to predict proton collisions, you have to add a bunch of corrective epicycles (“virtual quarks”) to get what we measure out of the basic theory. But adding such corrections is just curve fitting: adding terms in a basis until the output matches measurement. Again, we can’t say what is happening or why it occurs.
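For what it's worth, here is a minimal toy sketch of that generic pattern (my own illustration, nothing to do with an actual QCD calculation; the data and the polynomial basis are made up): keep adding terms in a chosen basis and the residual against "measurement" shrinks, but the fit by itself says nothing about mechanism.

```python
import numpy as np

# Toy illustration of "adding terms in a basis to match measurement":
# fit made-up noisy data with progressively more polynomial basis terms.
# The residual shrinks, but the fit itself carries no mechanism.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
measured = np.exp(-3.0 * x) + 0.01 * rng.standard_normal(x.size)  # stand-in "measurement"

for n_terms in (2, 4, 8):
    coeffs = np.polyfit(x, measured, deg=n_terms - 1)   # least-squares fit in a polynomial basis
    max_residual = np.max(np.abs(np.polyval(coeffs, x) - measured))
    print(f"{n_terms} basis terms -> max residual {max_residual:.4f}")
```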
We have great approximators that produce accurate and precise results, but we don’t have a model of what is happening or why; hence we don’t understand QM.
> Entanglement is just a statistical effect in our measurements — we can’t say what is happening or why that occurs. We can calculate that effect because we’ve fitted models, but that’s it.
Bell's theorem was a prediction from math before people found ways to measure and confirm it. A model based on fitting to observations would have happened in the other order.
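To make that concrete, here is a minimal sketch (my own illustration, assuming the standard CHSH setup and a toy sign-of-cosine hidden-variable model): any local hidden-variable model keeps the CHSH combination of correlations at or below 2, while the quantum formula E(x, y) = -cos(x - y) for a singlet state predicts 2*sqrt(2) ≈ 2.83. Both the bound and the prediction come out of the math; the experiments confirming the violation came afterwards.

```python
import numpy as np

# CHSH sketch: compare what a local hidden-variable (LHV) model can produce
# against the quantum prediction for a spin-1/2 singlet. Illustrative only;
# the LHV model here is a standard sign-of-cosine toy model, not a claim
# about any particular experiment.
rng = np.random.default_rng(1)

# Measurement angles (radians) that maximize the quantum CHSH value.
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

def lhv_correlation(x, y, n=1_000_000):
    """Each pair shares a random hidden angle lam; each side answers +/-1
    using only its own setting and lam (locality)."""
    lam = rng.uniform(0.0, 2 * np.pi, n)
    left = np.sign(np.cos(x - lam))
    right = -np.sign(np.cos(y - lam))  # anti-correlated source
    return float(np.mean(left * right))

def qm_correlation(x, y):
    """Quantum prediction for the singlet state: E(x, y) = -cos(x - y)."""
    return -np.cos(x - y)

def chsh(E):
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(f"LHV |S| ~ {abs(chsh(lhv_correlation)):.3f}  (local models are bounded by 2)")
print(f"QM  |S| = {abs(chsh(qm_correlation)):.3f}  (2*sqrt(2), the quantum prediction)")
```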
> A model based on fitting to observations would have happened in the other order.
We already had models saying that certain quantities are conserved in a system, and entanglement says the same holds for certain systems with multiple particles.
To repeat myself:
> Entanglement is just a statistical effect in our measurements — we can’t say what is happening or why that occurs.
Bell’s inequality is just a way to measure that correlation, i.e., the statistical effect, and I think it supports my point that the way to measure entanglement is via a statistical effect.
ER=EPR is an example of a model that tries to explain the what and why of entanglement.