As a scientist I think you are addressing a very important problem with this book. I've taken two statistics classes, one graduate level, and even I am plagued with doubt as to whether the statistics I've used have all been applied and interpreted "correctly". That said, I think the recent spate of "a majority of science publications are wrong" stories is incredible hyperbole. Is it the raw data that is wrong (fabricated)? The main conclusions? One or two minor side points? What if the broad strokes are right but the statistics are sloppy?
People also need to realize that while the Discussion and Conclusion sections of publications may often read like statements of truth, they're usually just a huge lump of spinoff hypotheses in prose form. Despite my frequent frustrations with the ways science could be better, the overall arrow of progress points in the right direction. Science isn't a process whose goal is to ensure that 100% of what gets published is correct, but one whereby previous assertions can be refuted and corrected.
Edit:
To be more specific, I think the statement in your Introduction is overly critical: "The problem isn’t fraud but poor statistical education – poor enough that some scientists conclude that most published research findings are probably false". I would change it to say: "conclude that most published research findings contain (significant) errors", or something along those lines.
Ioannidis has drawn some criticism for the paper, and perhaps things aren't as bad as he makes them seem, but it is true that someone has suggested most findings are false.
If anything, Ioannidis's paper would hugely understate the problem, because he was only looking at the percentage of papers that couldn't be replicated. But just because a result can be replicated doesn't mean that the study is actually correct. In fact, the vast majority of wrong papers are likely very replicable, since most wrong papers are the result of bad methodology (or other process-related issues) rather than fudged data.
There is also a big divide between statistical rigor in "science" research and "medical" research. In their defense, I think it's just extremely difficult for most medical research studies to get the kind of randomization or sample size (N) needed for reliable statistics.
Also regarding John Ioannidis's essay (not paper):
First, he uses the blanket term "research" in his meta-analysis (or at least examples) but his work seems focused primarily on medical research studies. Second, I'm not sure he clearly defines what it means to be "False", or for "most" published research to be "false".
Let's say there is clearly a right and a wrong answer to a question, and up until yesterday, publications A, B and C had concluded the wrong answer. But someone releases a newer, more rigorous finding D that refutes A, B and C conclusively and chooses the correct answer. I wouldn't consider this particular field to be 75% wrong after the publication of D. (Though it could accurately have been described as close to 0% conclusive before D.) For any particular line of inquiry, an assessment of the quality of research in the area should be weighted strongly toward the best exemplar of this body of work, not its average.
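To make the counting argument concrete, here is a toy sketch (hypothetical publications A through D, matching the scenario above):

```python
# Toy illustration: three wrong publications (A, B, C) and one newer,
# rigorous, correct one (D). True = the publication's conclusion is right.
publications = {"A": False, "B": False, "C": False, "D": True}

# Naive per-paper average: 3 of 4 papers are wrong, i.e. "75% wrong".
naive_fraction_wrong = 1 - sum(publications.values()) / len(publications)

# Best-exemplar view: the field's current best answer (D) is correct,
# so on this question the field is effectively 0% wrong today.
field_has_correct_answer = any(publications.values())

print(f"Naive fraction wrong: {naive_fraction_wrong:.0%}")            # 75%
print(f"Best current answer is correct: {field_has_correct_answer}")  # True
```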
As a scientist, I think this is probably correct. In my experience, a great majority of publications draw improper statistical conclusions, and I believe many of these are wrong in substance.
"What if the broad strokes are right but the statistics are sloppy?"
Publishing statements as statements of truth when they are improperly or falsely backed up would be better described as politics than science.
I take the view that "a majority of science publications are wrong" is a purposefully misleading and sensationalistic take, even though it may be technically true. IMO, only the leading edge of known science should factor into such studies, and I think that is probably not "mostly wrong". After all, even if there has been only 1 rock-solid publication in favor of a round Earth, preceded by 99 publications in support of a flat Earth, I wouldn't call that field 99% wrong.
If, say, you find that your results are not statistically significant at a reasonable p-level, you don't get to claim your conclusion is right in 'broad strokes', merely has some 'significant errors' or a bit of 'sloppy statistics', but is broadly true, right?
Nope, instead it's not statistically significant, so there's no reason to think it wasn't just random chance that made it 'close' to statistically significant.
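To put a number on "close": here is a minimal simulation sketch (assuming Python with numpy and scipy; all figures are illustrative). Under a true null hypothesis, p-values are roughly uniformly distributed, so about 5% of pure-noise experiments land just short of the 0.05 cutoff.

```python
# Minimal sketch: how often does pure noise come "close" to significance?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 10_000
near_misses = 0

for _ in range(n_experiments):
    # Both groups drawn from the SAME distribution: the null is true.
    a = rng.normal(size=30)
    b = rng.normal(size=30)
    _, p = stats.ttest_ind(a, b)
    if 0.05 < p < 0.10:  # just missed the cutoff
        near_misses += 1

# Roughly 5% of null experiments fall in this band by chance alone, so a
# near-miss is not evidence that a conclusion is "broadly true".
print(f"Near-misses (0.05 < p < 0.10): {near_misses / n_experiments:.1%}")
```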
But if your results are statistically validated, but only because you used statistics incorrectly, you've crossed the line to broadly-true-but-with-sloppiness?
Nope.
There is room for qualitative research. I like qualitative research. But qualitative research has, rightly, a different sort of impact.
If you are claiming your research is quantitative, then you need to live and die by good statistics. That's the implicit promise of providing the statistics in the first place, otherwise why do statistical calculations at all?
What if someone claims their p=0.05 but their control group wasn't quite as representative as they assumed and their p is really something more like 0.1?
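A quick simulation sketches exactly that scenario (hypothetical numbers: no true effect, a control group whose mean is shifted by 0.15 standard deviations, 50 subjects per arm). The nominal 5% test ends up rejecting roughly twice as often as advertised:

```python
# Sketch: the treatment truly does nothing, but the control group is drawn
# from a slightly unrepresentative population (0.15 SD shift, a hypothetical
# amount of bias chosen for illustration).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_experiments = 10_000
false_positives = 0

for _ in range(n_experiments):
    treatment = rng.normal(loc=0.0, size=50)   # no real effect
    control = rng.normal(loc=-0.15, size=50)   # biased control sample
    _, p = stats.ttest_ind(treatment, control)
    if p < 0.05:
        false_positives += 1

# Expect roughly 10-12% here: the advertised p = 0.05 behaves like p ~ 0.1.
print(f"Actual false-positive rate: {false_positives / n_experiments:.1%}")
```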
Or what if one of the experiments in a paper is well-designed and well-executed and supports a hypothesis with very high certainty but some of the other experiments were sloppy or botched? Should the conclusions of the entire paper be labeled as "wrong"?
I find it pretty funny when critiques of statistical rigor in science arrive at language with words such as "mostly" and "wrong".
"That said, I think the recent spate of 'a majority of science publications are wrong' stories is incredible hyperbole. Is it the raw data that is wrong (fabricated)?"
Fabricated data is only a good working assumption for some (open access) journals and for papers (co-)authored exclusively by nationals of some countries. That's a lot of papers.
"The main conclusions? One or two minor side points? What if the broad strokes are right but the statistics are sloppy?"
If the main conclusions are right but the statistics are sloppy, the paper is true, not false.
My confidence in what Ioannidis published went up significantly on learning that epidemiology is mostly bullshit[0] and "Bayer halts nearly two-thirds of its target-validation projects because in-house experimental findings fail to match up with published literature claims, finds a first-of-a-kind analysis on data irreproducibility."[1]
I hope the author of the textbook does not listen to you.
To be accurate, the paper should then come to the conclusion that "a majority of published work is inconclusive and fails in the proper application of the statistics used to support its claims" instead of "a majority of science publications are wrong". The second is just pure sensationalist trolling.
I'm not arguing against the work itself, or against more rigorous application of statistics. I'm just arguing against sensationalistic and inflammatory language. Anyone who practices science in a particular field for any length of time will have a pretty good idea of what work is "good" and "bad". Certainly they are smart enough to ignore previous work that has been refuted and/or retracted, and it's not really fair for this previous work to contribute to assessments of what % of the field is "wrong".