I don't even know what you mean by "under-powered statistical training."
But about the harder sciences: when this ball got rolling, I attended a lecture by a statistician who explained that basically all preceding genetics results were likely wrong unless they showed something like 6-sigma significance. That's because H0 is so easy to reject when you base H0 and H1 on different measurements. The result is that every theory is true.
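To make the "every theory is true" point concrete, here's a toy simulation (my own sketch, not the lecture's analysis): run thousands of tests where H0 is actually true, and the conventional p < 0.05 threshold still hands you hundreds of "discoveries."

```python
import math
import random

random.seed(0)

def z_test_pvalue(sample, mu0=0.0, sigma=1.0):
    """Two-sided z-test p-value for the mean of a sample from N(mu, sigma)."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))

# 10,000 independent experiments where the null is TRUE (mean really is 0).
n_tests = 10_000
false_positives = sum(
    z_test_pvalue([random.gauss(0, 1) for _ in range(30)]) < 0.05
    for _ in range(n_tests)
)
print(false_positives)  # roughly 500: 5% of true nulls come out "significant"
```

That base rate of spurious hits is exactly why a field running enormous numbers of tests (like genome-wide studies) needs a far stricter threshold than 0.05 before a result means anything.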
Many scientists in non-mathematical fields are taught "how to write a paper" in undergrad, where p-values and statistical significance are presented completely devoid of context, essentially as just a step in the final analysis. Many scientists perpetrating p-hacking or data dredging thought this was how good science is done, and didn't understand the assumptions on which these metrics depend. This is something learning fixes, and ignorance makes worse.
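Data dredging is easy to demonstrate with a sketch (hypothetical setup: 20 unrelated "variables," none with any real effect). If you measure them all and report only the best p-value, you'll find a "significant" result in roughly 1 - 0.95^20 ≈ 64% of studies:

```python
import math
import random

random.seed(1)

def z_test_pvalue(sample):
    """Two-sided z-test p-value for a sample from N(mu, 1), H0: mu = 0."""
    n = len(sample)
    z = (sum(sample) / n) / (1 / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))

def dredge(n_variables=20, n=30):
    """'Measure' n_variables unrelated quantities, report only the best p-value."""
    return min(
        z_test_pvalue([random.gauss(0, 1) for _ in range(n)])
        for _ in range(n_variables)
    )

# Fraction of studies that find *something* at p < 0.05 despite no real effect.
hits = sum(dredge() < 0.05 for _ in range(1000)) / 1000
print(hits)  # roughly 0.64, i.e. 1 - 0.95**20
```

Someone taught p-values as a checkbox step can do this in complete good faith; the math only punishes you if you know it's there.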
It comes from the fact that soft sciences deal with vast numbers of variables, far too many to control. They actually get a ton of statistics training, because finding a signal in that much noise demands it.
It would be great if human beings were more amenable to rigorous experiment. Failing that, we at least need to understand what these things do and don't actually mean. It's either that or give up trying to study people entirely.
Statistics are not values you collect - they are the analysis you perform. In data with many confounding variables or many degrees of freedom, the only way to be honest about what the data shows is through statistics. Statistics is what allows you to filter signal from noise. Statistical significance is a very useful tool; you just have to be honest about what it represents and about the conditions under which it breaks down.
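And being honest about the conditions often has a mechanical fix. A standard one is the Bonferroni correction: if you run m tests, demand p < alpha/m instead of p < alpha. A quick sketch (simulated null data, same toy z-test as the situation above, not any particular field's protocol):

```python
import math
import random

random.seed(2)

def z_test_pvalue(sample):
    """Two-sided z-test p-value for a sample from N(mu, 1), H0: mu = 0."""
    n = len(sample)
    z = (sum(sample) / n) / (1 / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))

alpha, m = 0.05, 1000  # m independent tests, none with a real effect
pvals = [z_test_pvalue([random.gauss(0, 1) for _ in range(30)]) for _ in range(m)]

naive = sum(p < alpha for p in pvals)            # ~50 spurious "discoveries"
bonferroni = sum(p < alpha / m for p in pvals)   # family-wise control: ~0
print(naive, bonferroni)
```

Bonferroni is conservative (it sacrifices power to kill false positives), which is the trade-off behind demands like 6-sigma thresholds in fields that run huge numbers of tests.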