
It looks a bit like you’re quoting the conclusion of the study and saying there wasn’t much to conclude. But here is the actual conclusion, copied and pasted, in case people read your version and come away thinking it’s what the study actually concluded.

“This study suggests that combined MICT and HIIT has no effect on all cause mortality compared with recommended physical activity levels. However, we observed a lower all cause mortality trend after HIIT compared with controls and MICT.”

This is a scientific study, presumably posted because the poster believes we can read it and understand it. I’m not sure why it needs to have its conclusion rewritten and then some opinion-based advice added.



I think the useful takeaway for Norway is that there is no evidence that they need to change their national recommended guidelines for physical activity.


That quote leaves out important context. They did not actually observe lower all-cause mortality with HIIT compared with either the controls or MICT, because neither comparison reached statistical significance.


They did observe lower all-cause mortality, just as they say in their conclusion. What they could not do was safely conclude a causal relationship from that observation.

In particular, for HIIT vs MICT the result fell within a percentage point or two of the conventional 95% confidence threshold. Further investigation is clearly warranted, as it is quite a bit more likely than not that a causal relationship will eventually be established.

It is good practice for scientists not to draw firm conclusions, and for policy makers not to adjust policy, based on results that fail to meet that level of statistical significance. However, trying to shut down discussion and interpretation on that basis is neither helpful nor productive.
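
To make the "observed but not significant" distinction concrete, here is a minimal sketch with made-up numbers (the hazard ratio and confidence interval below are illustrative, not the study's actual estimates): a point estimate below 1 means lower observed mortality, while a 95% CI that crosses 1 means the result misses the conventional p < 0.05 cutoff.

    # Illustrative numbers only: this hazard ratio and CI are made up, not taken
    # from the study. The point: an HR below 1 is an observed reduction, but a
    # 95% CI that crosses 1 means p > 0.05.
    import math

    hr = 0.85                     # hypothetical hazard ratio, HIIT vs control
    ci_low, ci_high = 0.72, 1.01  # hypothetical 95% confidence interval

    # Recover the standard error of log(HR) from the CI width, then a two-sided p.
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
    z = math.log(hr) / se
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided p from the normal tail

    print(f"HR {hr} (95% CI {ci_low}-{ci_high}), p = {p:.3f}")
    # -> HR 0.85 (95% CI 0.72-1.01), p = 0.060
    # Lower observed mortality, yet not "statistically significant" at 0.05.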


Statistical significance is not binary. And, personally, I have more confidence in a p=0.07 study than a p=0.048 study.
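
For what it's worth, converting the two-sided p-values back to test statistics shows how thin the line between them is. A rough standard-library sketch, assuming a normal test statistic:

    # Significance is not binary: p = 0.048 and p = 0.07 correspond to nearly
    # identical test statistics under a normal approximation.
    from statistics import NormalDist

    def z_from_two_sided_p(p: float) -> float:
        # The |z| such that 2 * (1 - Phi(|z|)) == p
        return NormalDist().inv_cdf(1 - p / 2)

    for p in (0.048, 0.05, 0.07):
        print(f"p = {p:.3f}  ->  |z| = {z_from_two_sided_p(p):.2f}")
    # p = 0.048  ->  |z| = 1.98
    # p = 0.050  ->  |z| = 1.96
    # p = 0.070  ->  |z| = 1.81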


> And, personally, I have more confidence in a p=0.07 study than a p=0.048 study

Why on earth would that be?


There's a depressingly good chance that the p=0.048 was p-hacked while we can trust the p=0.07 to be what it says on the tin.
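
As a toy illustration of why a p-value just under 0.05 invites that suspicion, here is a sketch where the only "hacking" modeled is cherry-picking the best of several outcomes; the outcome counts and sample sizes are arbitrary choices, not drawn from any real study.

    # Toy simulation of one p-hacking pattern: measure several outcomes on data
    # where the null is true by construction, then report only the smallest p.
    # The counts below (5 outcomes, 50 subjects, 10,000 trials) are arbitrary.
    import random
    from statistics import NormalDist, mean, stdev

    random.seed(0)
    norm = NormalDist()

    def two_sided_p(sample):
        # One-sample z-test of "mean == 0"; the data really do have mean 0.
        z = mean(sample) / (stdev(sample) / len(sample) ** 0.5)
        return 2 * (1 - norm.cdf(abs(z)))

    n_trials, n_outcomes, n_subjects = 10_000, 5, 50
    honest = hacked = 0
    for _ in range(n_trials):
        pvals = [two_sided_p([random.gauss(0, 1) for _ in range(n_subjects)])
                 for _ in range(n_outcomes)]
        honest += pvals[0] < 0.05      # pre-registered single outcome
        hacked += min(pvals) < 0.05    # cherry-pick the best-looking outcome

    print(f"false positives, single outcome: {honest / n_trials:.3f}")
    print(f"false positives, best of {n_outcomes}: {hacked / n_trials:.3f}")
    # The single-outcome rate stays near 0.05; the best-of-5 rate is several
    # times higher, even though no real effect exists in the data.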


I was reading a scientific paper on the efficacy of face masks, based on a meta-study of other papers. My personal conclusion was a lot more uncertain than the conclusion that the paper came to.

Most of the evidence came from N95 respirators in hospital settings, while the "normal" medical mask evidence was a lot weaker. Yet their conclusion was to advise mask use for the general public (which is going to mean normal medical masks or cloth masks in the majority of cases) based mainly on N95 evidence.


Scientific studies are validly subject to inspection and comment. Just because the authors state something doesn't mean they've reached the correct conclusion, that the study was well designed, that there are not alternate explanations, etc. IOW, it's perfectly acceptable, and useful, to critique a scientific study. In fact, it's a cornerstone of the scientific method, practiced daily in research institutions.



