I don't put any credence on this. As the authors say this study merely suggests a possible relationship that needs to be investigated in a proper RCT.
What I don't like
1. Passive observational studies only. Therefore a huge risk of confounders (e.g. people who generally live a healthy lifestyle).
2. The effect size is not very large, making it much harder to winnow out confounders.
3. Attempts to 'control' for confounders invariably use simple linear models that come nowhere near actually controlling for them.
4. They generally only control for known confounders. If there is an unknown confounder it will get through and mess up the results.
5. It is really hard to tell what is a confounder and what is a legitimate node in the causal chain. As a simplistic example consider 'controlling' for lung cancer in a study of whether smoking is bad for you. Of course this would take away much of the reason why smoking is bad for you. You need a proper causal model informed by data - something that is hard to do. See Judea Pearl "Causality".
6. Note the heterogeneity of results. This suggests that things are going on that are not captured by the studies. Note also that the types of mushrooms don't seem to matter - a strange thing if there were some magic factor in mushrooms, given how mushrooms vary greatly in nutrient content.
7. Misuse of the term "significant", which is usually shorthand for "statistically significant" - a very different thing from "practically significant", i.e. large enough to matter.
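Points 1, 3, 4 and 7 above can be illustrated with a toy simulation (entirely hypothetical data, nothing to do with the actual study): when an unmeasured "health-conscious lifestyle" factor drives both the exposure and the outcome, a linearly adjusted estimate comes out nonzero even though the true causal effect is exactly zero, and with a large cohort the spurious effect is also wildly "statistically significant".

```python
# Toy simulation (hypothetical data): an unmeasured confounder produces a
# "significant" association even after linear adjustment for a known confounder.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # large observational cohort

health_conscious = rng.normal(size=n)              # UNMEASURED confounder
exercise = health_conscious + rng.normal(size=n)   # measured, "controlled for"
mushrooms = health_conscious + rng.normal(size=n)  # exposure of interest
# Outcome depends on the confounder only -- mushrooms have NO true effect.
cancer_risk = -health_conscious + rng.normal(size=n)

# OLS of outcome on exposure + known confounder (intercept included).
X = np.column_stack([np.ones(n), mushrooms, exercise])
beta, *_ = np.linalg.lstsq(X, cancer_risk, rcond=None)

# Crude t-statistic for the mushroom coefficient.
resid = cancer_risk - X @ beta
sigma2 = resid @ resid / (n - 3)
cov = sigma2 * np.linalg.inv(X.T @ X)
t = beta[1] / np.sqrt(cov[1, 1])

print(f"adjusted mushroom coefficient: {beta[1]:.3f}")  # clearly negative, true effect is 0
print(f"t-statistic: {t:.1f}")  # enormous: spurious yet "statistically significant"
```

The adjusted coefficient converges to about -1/3 here despite the true effect being zero: adjusting for the measured confounder shrinks the bias but cannot remove it, and the large sample size only makes the wrong answer more "significant".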
I don't think it's realistic to expect an RCT for this. Usually an RCT setup for nutritional studies means "locking" people up and controlling every bite of food. But since the study in the article examines cancer risk (i.e. a long-term outcome), an RCT that measures who got cancer and who didn't would mean locking people up for years if not decades, which nobody would agree to.
That doesn't mean the study is to be trusted 100%, just that it's not always appropriate to say "BS, need RCT", since sometimes an RCT for studies like this isn't realistic.
It would have been more constructive if you had:
1. Suggested a way that a study like this could feasibly be executed as an RCT.
2. Directly addressed parts of the study that you thought were insufficiently modeled.
I think the reality is that it’s very difficult to do both of these things, but it doesn’t mean studies like this shouldn’t be done because they can yield otherwise unobtainable information. Like all studies, they need to be fully evaluated individually on their own merits.
And unfortunately, many nutrition studies are garbage as well.
From the link, first sentence:
> Next time you make a salad, you might want to consider adding mushrooms to it. That's because higher mushroom consumption is associated with a lower risk of cancer
Who writes garbage like this? They say it's "linked" but draw causal conclusions anyway.
"If you're the kind of person who adds mushrooms to their food, you might or might not have lower probability of getting cancer"?
It always drives me crazy, because I'm not sure if I'm that kind of person. Am I a person who doesn't smoke, or one who only abstains because it's unhealthy? Am I a person who doesn't eat butter, or one who just avoids it because it might be bad for me? I genuinely think I can't know.
"because this is not a properly designed study and there is so much chance that this is pure coincidence or related to a non-captured factor, we don't recommend you do anything about your diet. And you can skip reading this paper as well."
That's kind of the whole point - the study should explicitly state that the new findings (as of now) say nothing diagnostic about your personal chances of getting cancer, and do not support any dietary recommendation whatsoever.
It's an interesting finding that may motivate further, more specific studies that could result in something actionable. That has value for researchers in the field, but (as of now) it should not be used by the general public for analysing their diet.
Register the experimental design beforehand and commit to publishing null results. That's the logistical piece. The harder one is the incentive structure in academia, which rewards publishing new/novel research. There should be similar rewards for doing good replication work on important results (e.g. when everyone bases their research on a study that has never been replicated, which happens with alarming regularity in the social sciences and psychology). High-energy particle physics has this problem now too, to some extent, and they're tackling it in other ways (you only have one LHC, for example).
This should be possible. I think it is important for scientists to do both novel and replication studies.
One way to improve on the status quo would be to make students do both types of work (the point of a PhD is to demonstrate that you can do research by doing it, so why not do both kinds?). This could actually fit nicely into the program: first- and second-years tend to be inexperienced anyway, so they could start with some replication work, then switch to novel work halfway through the degree.