This document is both a political document and, to a lesser extent, a scientific one.
On the political front, the document encourages people to reject the idea that cancer predilection is stochastic and to support the continued study of cancer.
The scientific component attempts to provide an underpinning for the political goal. It uses a handful of epidemiological examples, such as the climbing incidence of colorectal cancer in Japan over a genetically insignificant timescale (which suggests an environmental factor at play).
The article fails to support its headline ('Most types of cancer not due to "bad luck"') through any scientific instrument. While its policy goal (the continued pursuit of the causes of cancer) is reasonable, nobody is saying that we now know that cancer is predominantly caused by bad luck so we should stop studying it (certainly not Vogelstein, whose career is built on the study of cancer).
Overall, it's not clear to me why this was published.
The cancer research and cancer epidemiology communities do not really talk to each other (as a cancer researcher for many years, I have never been to an epidemiology seminar despite working in a 'cancer center'). So it is really a fight for money and mindshare. Most cancer researchers lean towards drugs targeting molecular mechanisms of cancer; cancer epidemiologists, on the other hand, aim for political decisions that combat environmental factors. Where should most of the money go? What should be discussed in the media?
Vogelstein proposes a simple model that explains a lot of data. Sure, the implications are bleak, but the scientists I talked to were not really surprised that it finally got out.
Iodine. Have you looked into it? The abstracts over on nih.gov are pretty interesting - google for "iodine cancer nih" without quotes. I'm curious what a cancer researcher has to say. Mostly I'm interested in whether your peer group has even looked at this.
I think it will help most if I explain the incentives for scientists (at least in the US). From their first day in the lab, scientists are rewarded most for generating data and making discoveries. The more novel and paradigm-shifting the discovery, the better, as it opens the door to high-impact publications and future research money. But "extraordinary claims require extraordinary evidence", and the more unexpected the finding, the more difficult it is to convince your peers and get it published.
Somewhat idealistically, this has two consequences for a good lead scientist:
1) he will guide his lab to explore the unexplored
2) and subject the most novel and promising findings to the most stringent experimental verification.
So the answer to your direct question: yes, probably someone somewhere has tried to test the iodine-breast cancer hypothesis, but since it is not a major topic in breast cancer research, the experiments probably failed, were not published, and thus were never independently replicated.
> Overall, it's not clear to me why this was published.
The fifth paragraph explains why it was published:
“Concluding that ‘bad luck’ is the major cause of cancer would be misleading and may detract from efforts to identify the causes of the disease and effectively prevent it.”
It's fairly obvious that if a study tells people most cancers are caused by bad luck and not bad actions, there will be some subset of the population that rationalizes their bad actions through such a study. If the study is inaccurate it could then actually cause additional cancers through misinformation.
> "if a study tells people most cancers are caused by bad luck and not bad actions"
Something to take into consideration: as far as I remember, the original article talked about most "types" of cancers; it didn't refer to cancer cases. By volume, some cancers are common and very likely to have an environmental cause.
This shows a severe misunderstanding of statistics.
The original paper (here: http://www.sciencemag.org/content/347/6217/78.abstract -- the abstract is enough to deduce the conclusion) is a simple statistical analysis of mutation probability in certain cell types. It's basically saying that many cancers are not necessarily influenced by external factors and can be predicted by nothing more than the number of cell divisions and the probability of DNA mutation compounded.
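To make "compounded" concrete, here is a minimal sketch of that idea, assuming independent divisions and a fixed per-division mutation probability. The probability used below is made up for illustration and is not a value from the paper.

```python
# Toy model: probability of at least one oncogenic mutation after n divisions,
# assuming independent divisions and a fixed per-division probability p.
# The value of p is illustrative only, not a number from the paper.
p = 1e-9

def p_mutated(n_divisions, p=p):
    return 1 - (1 - p) ** n_divisions

for n in (10**6, 10**9, 10**12):
    print(f"{n:.0e} divisions -> risk {p_mutated(n):.4%}")
```

The point is just that risk climbs with the number of divisions even with no external factor anywhere in the model.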
That, in itself, is a fascinating scientific result. Arguing against the statistics here is probably not a good idea, unless there was a serious error in the math or an order of magnitude error in the probability of mutation. Certainly there are other possibilities of error in the data or methodology.
But the article in opposition (OP) is doing something odd that doesn't appear to be a scientific argument: it's railing against the language and the implications of the very concept of statistical analysis applied to the problem. It assumes that because we discovered that, perhaps, cancer has a certain probability of occurring independent of external factors, research will somehow slow, or we will throw up our hands and give up on prevention strategies and research.
I don't think that's the case. Rather than understanding the absolutely fascinating statistical analysis here, the OP article comes across as reactionary and unscientific.
Assuming the data is sound, the statistical analysis is profound. Statistics like these are also most often profoundly misunderstood: people have an incredible capacity for attribution bias and data disbelief. This simply underscores the need for better education in basic statistics, as well as in science across the board.
A specific critique raised in the press release from IARC is that the study has an
"emphasis on very rare cancers (e.g. osteosarcoma, medulloblastoma) that together make only a small contribution to the total cancer burden."
and that it
"excludes [...] common cancers for which incidence differs substantially between populations and over time."
So it sounds like the generalization hinted at in the abstract shows a bigger misunderstanding of statistics than any in the press release. It would be nice if the paper were not paywalled, so we could actually read it.
You're right. You might say "most cancers are caused by bad luck," and across the set of types of cancers... that might be the case. But if you were to say, "Most cases of cancer are caused by behavioral or environmental factors" you'd be saying something entirely different.
Yeah, when you look at total cancer burden and the associated epidemiology, you can explain something like 90% of cancers from environmental sources (which includes things like obesity). I don't have a reference on me at the moment, so feel free to disagree.
A 2005 Lancet paper [1] says that ~35% of cancer deaths can be attributed to a modifiable risk factor, and that's only risk factors likely to be causal, not known to be causal. If accurate, that would support the "most cancer is due to luck" claim. However, this is the first time that I've looked into it, and if the figure is higher than 50% I would be quite interested to see data supporting that.
The severe misunderstanding of statistics is in the original paper. I have a much more detailed post here, but basically: just because many types of cancer are stochastic in their risk doesn't mean that the incidence of cancer overall is stochastic. In fact, the most prevalent cancers are deterministic, as mentioned in the WHO release.
The Science link is basically a retraction from the journalist who wrote the original summary article. She quotes the study authors making excuses about not having enough time to explain the subtleties under press deadlines. I've never read anything quite like it.
It seems everyone agrees that Figure 1 of the Vogelstein paper, showing the striking correlation between Log(cancer incidence) and Log(# stem cell divisions) was really a beautiful result. But then things fell apart with the further analysis and (especially) interpretation.
The Altenberg paper on arXiv looks like exactly the sort of careful treatment that the original paper should have gotten in the review process. (MIT's Aaron Meyer raised some of these points in a blog post around the time of publication: http://www.ameyer.me/science/2015/01/02/vogel.html )
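For anyone curious what the Figure 1 result amounts to mechanically, here is a minimal sketch of a log-log Pearson correlation. The per-tissue numbers below are invented stand-ins, not the paper's data.

```python
import math

# Hypothetical per-tissue values (NOT the paper's data): lifetime stem-cell
# divisions and lifetime cancer incidence for five imaginary tissues.
divisions = [1e7, 1e9, 1e11, 1e12, 1e13]
incidence = [1e-5, 3e-4, 5e-3, 2e-2, 8e-2]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# The paper correlates the logs, not the raw values.
log_d = [math.log10(d) for d in divisions]
log_i = [math.log10(i) for i in incidence]
print(f"r = {pearson(log_d, log_i):.2f}")
```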
>The Science link is basically a retraction from the journalist who wrote the original summary article. She quotes the study authors making excuses about not having enough time to explain the subtleties under press deadlines. I've never read anything quite like it.
I don't agree with that characterization of the Science news article at all. It's a summary of the secondary reaction to the press coverage, but I can't construe it as a retraction of the original news coverage. For example, here's what David Spiegelhalter, whose entire research lately has been on accurately communicating risk, says in it:
>In this case, he felt, “the gist of the coverage is very reasonable—most cases of cancer are due to chance.”
I too feel that the coverage was quite accurate, perhaps the most accurate that I've seen of a science tidbit in quite some time. If one's entire research program is on how to minimize environmental causes of cancer, I can see how it would feel like one's research was being minimized entirely, and would require a vigorous defense. However, these "defenses", in particular Aaron Meyer's, seem to be fact-free. In contrast, the epidemiology world seems to agree almost exactly with the Vogelstein paper's estimate of 65% chance, 35% modifiable causes (though perhaps somewhat by chance, as their measurements are not exactly of the same thing).
>I don't agree with that characterization of the Science news article at all.
Here's the section of the news article that I feel reads like the journalist walking back the central claim in her original news article. It also seemed to me that she was attempting to pin the blame for the misunderstanding on Tomasetti (by quoting from her initial interview with him, and reporting that he had vetted her initial piece):
"...[W]as the “two-thirds” figure actually referring to a fraction of cancer cases? Tomasetti had explained to Science that “if you go to the American Cancer Society website and you check what are the causes of cancer, you will find a list of either inherited or environmental things. We are saying two-thirds is neither of them.” He also confirmed the news story's language describing the study before it was published. In a follow-up interview [...] Tomasetti clarified that the study argued that bad luck explained two-thirds of the variation in cancer rates in different tissues—a subtly different claim.
Despite the confusion among reporters, Tomasetti did not feel they had been careless[...] And, he believes, he did his best to convey his findings to nonexperts. “If given enough time, or space, I can explain the subtleties of any given scientific result to anyone really,” but there were only so many hours he could spend speaking with reporters on deadline. The material is complicated even for statistical gurus, he believes. He has been busy preparing a technical report with additional details, and Johns Hopkins also sent a follow-up explainer to journalists and posted it online."
I don't agree at all with your contention that the criticisms of the original paper are 'fact-free' and based on researchers feeling threatened. There are lots of problems with this paper; the fact that their conclusions may agree with data from other fields (or may even be right) doesn't change that.
I'm pretty confused by the term 'luck' in these papers. Surely 'random mutation' is exactly how cancer propagates in all cases? Risk factors coexist with 'luck'. The chance of suffering a cancerous mutation is like rolling a die, and risk factors are the equivalent of the number of dice being rolled. It's not a question of 'OR', it's a question of 'AND'.
E.g.: You're a smoker who develops lung cancer: you are 'unlucky' in the sense that you suffered a random mutation AND also 'responsible' because you smoke, which is a primary risk factor.
I haven't read the Vogelstein paper yet, but from the abstract it seems that when it's talking about 'luck', it's talking about errors in DNA replication. The rate of these errors is pretty constant in healthy cells (about 1 per duplication), so it's pretty much a matter of luck whether you acquire a mutation in a place that matters or a place that doesn't, and it doesn't really matter who you are or where you are. He seems to be contrasting this with other mechanisms that cause mutations, like chemicals, radiation, or viruses. Sure, there is 'luck' involved at the individual level once you've been exposed, but the point is that if you reduce exposure across the population, you reduce incidence rates. With DNA replication, there isn't anything you can do to reduce incidence rates.
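As a toy illustration of the 'place that matters' point, here is a sketch assuming one replication error per division, landing uniformly at random, with a made-up tiny fraction of sites where a hit actually matters:

```python
import random

random.seed(0)

# Both numbers below are invented, chosen only for scale.
DRIVER_FRACTION = 1e-6     # assumed fraction of sites where a hit matters
DIVISIONS = 10_000_000     # assumed lifetime divisions for some tissue

# One error per division; each has probability DRIVER_FRACTION of landing
# somewhere consequential. The run-to-run variation IS the 'luck'.
hits = sum(random.random() < DRIVER_FRACTION for _ in range(DIVISIONS))
print(f"consequential mutations in {DIVISIONS:,} divisions: {hits}")
# Expected value is DIVISIONS * DRIVER_FRACTION = 10, but any given
# 'person' may draw more or fewer.
```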
The question is: would the random mutation have occurred and have resulted in cancer anyway for most of those engaging in 'risky' behavior?
Suppose only breast cancer (in women), prostate cancer (in men) and lung cancer (in both genders) existed. Suppose further that breast and prostate cancer account for 60% of all cancers and that smoking doubles the chance of lung cancer, but doesn't influence the chances for breast or prostate cancer.
For any individual cancer patient, the chance that they were 'unlucky' would be much larger than the chance that their behavior was relevant to their cancer. This is true even for those who smoke: especially for those who get breast or prostate cancer, but even for lung cancer it would be 50/50.
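Here is the arithmetic of that toy world worked through as a minimal sketch (the 60% share and the 2x smoking risk are the hypothetical numbers stated above, not real epidemiology):

```python
# Relative cancer rates for a non-smoker, normalized to sum to 1:
# breast/prostate 60%, lung 40% (the hypothetical split stated above).
hormonal = 0.6               # unaffected by smoking
lung_baseline = 0.4          # doubled by smoking
lung_smoker = 2 * lung_baseline

# For a smoker who develops cancer:
total_smoker = hormonal + lung_smoker                  # 1.4
p_lung = lung_smoker / total_smoker                    # ~0.57: chance it's lung
p_smoking_mattered_given_lung = (lung_smoker - lung_baseline) / lung_smoker  # 0.5, the '50/50'
p_smoking_mattered = p_lung * p_smoking_mattered_given_lung

print(f"P(smoking was relevant | smoker gets cancer) = {p_smoking_mattered:.2f}")   # ~0.29
print(f"P(pure bad luck | smoker gets cancer)        = {1 - p_smoking_mattered:.2f}")  # ~0.71
```

So even for a smoker in this toy world, roughly 7 times out of 10 the cancer would have had nothing to do with their behavior.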
It bears pointing out some inaccuracy in language here, notably that 'types' and 'cases' are not the same thing.
Most 'types' of cancer are unequivocally about bad luck; nearly any cell line in your body can develop overproliferative mutations. There are hundreds of 'types' of cancer.
'Cases' and deaths from cancer are, globally and in the US, more environmentally influenced, because of the huge impact of smoking on lung and colorectal cancer incidence and mortality in particular.
Great advice. There is nothing we can do about our "cancer good/bad luck" but there is a lot we can do to control lifestyle issues.
This is just my opinion, but I think all of the following probably help: try to have much less stress in your life; plenty of exercise; prefer organic food if available; avoid deep fried food; go light on barbecuing, as tasty as it is; eat lots of and a wide variety of green vegetables.
Physician here. I think I can provide some clarification for what is going on here. I just read the article the WHO is addressing and this is a line from the final paragraph:
"Our analysis shows that stochastic effects associated with DNA replication contribute in a substantial way to human cancer incidence in the United States."
First, the terminology is important to understand. In medicine, "stochastic" stands in contrast to "deterministic"; both describe the occurrence of pathology based on a risk factor. In a "stochastic" model, if you receive 10 severe sunburns then your chances of getting a skin cancer go up over someone who only received 1 severe sunburn, but the cancer is by no means certain, nor will it necessarily be more severe... it is simply an increase in probability. In a "deterministic" model, if you receive 5 Grays of radiation then you will have less severe symptoms than someone who received 50 Grays, and for the most part your symptoms will be certain and predictable.
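A toy contrast of the two models, using the sunburn and radiation examples above (the rates and the threshold are invented for illustration, not clinical values):

```python
def stochastic_skin_cancer_risk(severe_sunburns, base=0.001, per_burn=0.002):
    """More sunburns -> higher *probability* of cancer, never a certainty."""
    return min(1.0, base + per_burn * severe_sunburns)

def deterministic_radiation_severity(dose_gray, threshold=1.0):
    """Above a threshold, more dose -> predictably more severe symptoms."""
    return max(0.0, dose_gray - threshold)

for burns in (1, 10):
    print(f"{burns} sunburns -> P(skin cancer) = {stochastic_skin_cancer_risk(burns):.3f}")
for gy in (5, 50):
    print(f"{gy} Gy -> severity score = {deterministic_radiation_severity(gy):.0f}")
```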
Now you need to have one element of cancer biology under your belt:
This basically says that as cells divide, errors are propagated through their progeny until there is a proverbial 'straw that breaks the camel's back' and the cell becomes cancerous.
In summary, that line from the final paragraph is basically saying that cells that divide more rapidly in the human body are more likely to become cancer. Well, yeah of course!
We have known this for a long time, but what this paper did is sort cancers into two groups: one that is primarily stochastic and one that is primarily deterministic. The result appears impressive because they have a lot of cancers on the stochastic side... stating that their incidence is better predicted by cell division than by deterministic, preventable factors.
But as the WHO correctly stated, they are incorrectly analyzing this data as described in this paragraph:
These include the emphasis on very rare cancers (e.g. osteosarcoma, medulloblastoma) that together make only a small contribution to the total cancer burden. The report also excludes, because of the lack of data, common cancers for which incidence differs substantially between populations and over time. The latter category includes some of the most frequent cancers worldwide, for example those of the stomach, cervix, and breast, each known to be associated with infections or lifestyle and environmental factors.
The cancers grouped into the stochastic side overall make up a tiny percentage of all cancers, and so by number/%/incidence the deterministic side would be enormous if graphed appropriately. For example, their most 'stochastic' cancer is pancreatic islet cell, which has only 2,500 cases/year in the US, vs. their 'lung (smokers)' cancer, which has >220,000 cases/year.
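Just using those two quoted figures, the weighting problem is easy to see:

```python
# Cases/year quoted above for the most 'stochastic' cancer vs a
# 'deterministic' one.
islet_cell = 2_500
lung_smokers = 220_000

total = islet_cell + lung_smokers
print(f"islet cell share of these cases:    {islet_cell / total:.1%}")    # ~1.1%
print(f"smokers' lung share of these cases: {lung_smokers / total:.1%}")  # ~98.9%
```

Counting by types, the two sides look balanced; counting by cases, the deterministic side dominates.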
I mean, they seem almost willfully blind to their incorrect conclusion. Environmental factors are HUGE, and the cancers that arise independently of them would not benefit from screening and environmental risk reduction--a fact we already know!
As a last note, the 'most' deterministic cancers--inherited types--have to be aggressively treated with screening, prophylactic measures, environmental exposure risk reduction, etc. You can read about a couple of the deterministic cancers they mention in this paper:
This is the key point. Tomasetti and Vogelstein (2015) arguably demonstrated that most types of cancer are primarily stochastic. But it's the primarily deterministic types of cancer that account for most morbidity and mortality.
The variation among geographic locations is interesting. On the other hand, the rise in the rate of colorectal cancer in Japan may simply be due to a rise in cancer screening. Do the Japanese die more often from colorectal cancer? For an example of why this is relevant: over 80% of men in their eighties may have prostate cancer, yet the lifetime risk of dying from it is only 3% for a 50-year-old man in the US (see http://ije.oxfordjournals.org/content/36/2/278.full ). So if we improve screening for prostate cancer we may find a lot more cases even though men's risk of dying from it is unchanged.
"A recent autopsy study reported on 314 African American and 211 Caucasian (sic) men aged 20–80 years who died of trauma in Detroit, USA. Microscopic evaluation in each case was based on 10–14 whole-mount step sections that were 2–3 mm thick.4 This study demonstrated a high prevalence of prostate cancer that increased progressively with advancing age and was similar at all ages in African American compared with Caucasian men (around 10%, 30%, 40%, 45%, 70% and 80% in the 3rd, 4th, 5th, 6th, 7th and 8th decades, respectively). When this ubiquity of microscopic prostate cancer is placed in the context of lifetime risks of clinical or fatal prostate cancer (about 10% and 3%, respectively for a man aged 50 years in the USA),5 these data indicate that local or distal progression of early cancer is far from inevitable within a man's lifetime. Put another way, only a minority of prostate tumours are highly aggressive and life-threatening, while the majority are slow-growing and indolent."
A piece of data that appears to corroborate this article was recently on HN. It concerned a Greek island where people have unusually long lives[0], and followed a man who was diagnosed in the US by 9 doctors with terminal cancer... only to outlive them all on his island, where it is believed several lifestyle factors play important roles in longevity (and in cancer prevention/recovery for this man).
> For many cancers, the authors argue for a greater focus on the early detection of the disease rather than on prevention of its occurrence. If misinterpreted, this position could have serious negative consequences from both cancer research and public health perspectives.
> In principle ... nearly half of all cancer cases worldwide can be prevented
I am not sure what to take from this article. Lung cancer prevention is already in progress. At the same time, prostate cancer has even higher mortality (for non-smokers), and preventive checks are not even covered by most insurance companies.
I believe that it's something elderly men commonly die with rather than of (especially allied with medication to suppress the tumour) but it's not a universal truth either.
There is a great difference between the incidence of prostate cancer and the death rate from it. Prostate cancer is currently diagnosed in 15 to 20% of men during their lifetime, but the lifetime risk of death from prostate cancer is only 3%. Source: European prostate cancer guideline (page 45): http://www.uroweb.org/gls/pdf/09%20Prostate%20Cancer_LRLV2.p...
> Tumour grade is clearly significant, with very low survival rates for grade 3 tumours. Although the 10-year cancer-specific rate is equally good (87%) for grade 1 and 2 tumours, the latter have a significantly higher progression rate, with 42% of patients with these tumours developing metastases (Table 8.4). Patients with grade 1, 2 and 3 tumours had 10-year cancer-specific survival rates of 91%, 90% and 74%, respectively, correlating with data from the pooled analysis (49) (LE: 3).
Whether one dies 'with it' or 'because of it', this cancer might progress into metastases, and that brings a higher mortality rate.
You can become old with localized prostate cancer. You have to make the choice if you want to live with some uncertainty (the cancer could progress into metastases) or to get treatment now. The downside of treatments, like surgery and radiation, is the risk of becoming impotent and/or incontinent.
The council recommendation on cancer screening for the European Union accepted in 2003 states that PSA testing for prostate cancer, though promising, does not meet the criteria of having proved to decrease the cancer-specific mortality, or well known and acceptable benefits and risks, as well as cost-effectiveness. Therefore, prostate cancer screening is not recommended. The statement emphasizes the importance of the randomized trials and specifically cites the ERSPC in this respect.
This position is consistent with the recommendations of an expert panel organized by the World Health Organization (WHO) and the International Cancer Union, which stated that sufficient evidence showing the benefits of prostate cancer screening in terms of mortality reduction is still to emerge. Therefore, offering screening as part of health care policy can not be recommended without further evidence.
Similar conclusions about withholding screening due to lack of evidence have also been reached in assessments by the U.S. Preventive Services Task Force and the U.S. National Cancer Institute.
Nevertheless, screening does take place even if it is not part of the policy. This is done on the basis of judgment and the responsibility of individual physicians and their patients, who may in some circumstances, regard the possibility of benefit as more important than lack of demonstrated effectiveness.
The PSA test, which can give an early indication of prostate cancer, is available to you if you want to be tested. However, experts disagree on how useful the PSA test is. This is why there is a lot of research and why there is no national screening programme for prostate cancer in the United Kingdom (UK).
You need to:
> screen 1410 men
> then take a biopsy from 340 of them
> then diagnose 82 men with prostate cancer
> to save 1 man from dying from prostate cancer
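Expressed as rates, using only the numbers quoted above:

```python
# The screening funnel quoted above (numbers from the comment, not re-derived).
screened, biopsied, diagnosed, deaths_prevented = 1410, 340, 82, 1

print(f"biopsy rate among screened:     {biopsied / screened:.1%}")           # ~24.1%
print(f"diagnosis rate among screened:  {diagnosed / screened:.1%}")          # ~5.8%
print(f"deaths prevented per diagnosis: {deaths_prevented / diagnosed:.1%}")  # ~1.2%
```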
This official statement is rather odd, as others have pointed out. Anyway, the biggest problem with the original paper is that it leaves out breast and prostate cancer, the two most common cancers. Maybe these would support their hypothesis, but they leave them out and they don't really explain why. It's probably because we aren't sure what the true incidence is, due to ascertainment bias from increased screening. Whenever I read a broad-ranging conclusion in life sciences research, I remind myself to stick <Except when it isn't> on the end (e.g. Most cancer is due to bad luck - except when it isn't.)
The paper also technically sets out to explain why cancer incidence varies between different tissues. An analogy is correlating average temperature with distance from the equator. It is clear that most of the difference in the mean temperature of Ecuador compared to Iceland is explained by distance from the equator. You can create categories like number of days with rain, or number of days with a temperature > 30 degrees C, and each of these values will be highly correlated with distance from the equator. But it does not follow that this strong correlation means there are hardly any environmental influences on the temperature in St Petersburg tomorrow, and that we can fire all the meteorologists.
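A quick simulation (with invented numbers) makes the analogy concrete: latitude can 'explain' nearly all of the between-place variation in mean temperature while weather still dominates what any one place experiences tomorrow.

```python
import random, statistics

random.seed(1)

# 50 imaginary cities: mean temperature is mostly latitude plus a little noise.
latitudes = [random.uniform(0, 70) for _ in range(50)]
mean_temps = [30 - 0.5 * lat + random.gauss(0, 1) for lat in latitudes]

# Between-place variation: almost entirely 'explained' by latitude.
r = statistics.correlation(latitudes, mean_temps)   # Python 3.10+
print(f"corr(latitude, mean temp) = {r:.2f}")       # close to -1

# Within-place variation: tomorrow's temperature is dominated by weather.
tomorrow = mean_temps[0] + random.gauss(0, 8)
print(f"city 0: mean {mean_temps[0]:.1f} C, tomorrow {tomorrow:.1f} C")
```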
(Disclaimer for what follows: I do life sciences research, so I may have an overly pristine view about physics.)
The really interesting thing about all this, I think, is the collision of maths/physics/engineering and life sciences! Vogelstein wrote this paper with a mathematician. They basically did a back-of-the-envelope calculation, an approach that is much favoured by engineers but completely alien to life science researchers. These simple calculations are useful because in the physics/engineering paradigm, abstractions are incredibly powerful and non-local. For example, many important physical laws that apply across many orders of magnitude can be derived from thinking about falling apples or billiard balls. Paul Dirac predicted the positron by simply exploring other possible solutions to an equation! To me, this is an extraordinary and winning moment for quantum physics, where the theory is so powerful that it can drag us screaming towards completely unintuitive and otherwise inaccessible conclusions. To a biologist, this kind of thing is ridiculous and alien. The same abstractive power is almost completely absent, and both experimental and theoretical models are extremely limited.
There are many reasons for this. You could argue that biology is not amenable to the often time-invariant abstractions so useful in the other sciences. Or perhaps biologists aren't trained to think that way. In any case, it is interesting to watch when an engineer encounters biology, and this paper is an example. I am still not sure whether we need more engineers in biology or not.
I think it is abundantly clear though that breakthroughs in biology don't come from professors having epiphanies while walking amongst the pine cones on a cloudy autumn day. Human intuition alone is failing to get us very far in biology. We need a paradigm that can deal efficiently with uncertainty, incomplete yet massive data, noise, simulation of highly parallel processes on long time scales, causality in networks of highly correlated actors etc - I hope that some future melding of computer science, biology and staggering computational resources will give us more useful ways to investigate life.
It's not as simple as that. Most "indecisiveness" that appears to come from science is from different studies being taken out of context, or poor media reporting. Most wild claims you see aren't from the researchers themselves.
See Steven Novella's recent blog post on this subject.
I'm not sure that matters. It's ok to eat cholesterol now, right? All these "mistakes" give science a black eye. We've already got a climate-change trustworthiness problem, and for some reason people don't trust vaccines, mostly educated people. As a nation, we refused to fund both the Hubble and the Supercollider, which would have been bigger than CERN.
It's probably time that we stopped making excuses and thought of a better way to explain the "science" to people.
The position of "science" has been to eat a balanced meal and exercise for as long as I can remember. You won't see a medical journal recommending fad diets. Steve covers this fairly well in the article I linked.
But still, public outreach could be greatly improved. It's unfortunate that many public doctors or educators on the subject are shills like Dr. Oz.
You could start by providing the links to the science! Why should I trust what you're saying? There's always someone on the Internet making claims. People have raging debates and few people actually provide real data.
Fair enough. :) Though in my case, showing that something "doesn't exist" isn't really possible. But there's plenty out there recommending balanced meals and exercise.
> It's ok to eat cholesterol now, right? All these "mistakes" give science a black eye.
You're confusing central planning and science. Governmental politics and advisory bodies are a form of central planning, and in that realm mistakes are frowned upon.
Science is a protocol that, if duly practiced, will evolve our understanding towards the truth. But the path is not straight. And science does not give any way for determining whether we have found the truth at the present moment.
That's why in proper science everything is in constant flux. For some people that's terrifying, so they prefer central planning and static advisory from above.
"Proper science" is practically practiced by no scientist outside of Math and possibly Physics and Chemistry. Biology, medicine and nutrition, for example, are not "proper sciences".
E.g. the lipid hypothesis (cholesterol causes heart problems) does not have, and never had, scientific support (in the "proper science" sense), and has had data against it for about 30 years, but it's very hard to find any scientist who would admit it. Similarly, the hypothesis that cholesterol intake makes a significant difference to serum levels was never proved, and was essentially disproved many times - and yet it is taken as an axiom by most "scientists".
So, the fact that some "proper scientific practice" would have shown differently is of no practical consequence.
I think it's more of a problem of misaligned incentives rather than the method itself. There's a lot of room for politics, ass-covering, etc. Studies that fail to replicate something are shelved, even though they're useful information. Studies that indicate a negative, ditto. Studies are "nudged" to show something so the author can show that it wasn't a complete waste of time, etc.
They're understandable, but they're bad for science.
The author makes a few references to the idea of "scientific consensus", which I find curious. It's a phrase that I tend to cast aside as politically motivated. Politicians in the climate change debate like to point to "scientific consensus" as a way of saying, "look, all these intelligent people whose work we personally don't understand agree, so they must be correct."
But one tenet of scientific thought is that truth and factuality are not measures of an individual's or society's belief, but rather, they are discovered by experiments whose results can be replicated.
> People distrust science when it conflicts with their valued beliefs, or when science suggests a solution or intervention that conflicts with their beliefs. People are happy to trust science when it does not conflict with their ideology or narrative.
I agree, but those same people generally rationalize those views to themselves in the way Adams noted: by pointing at reasons to distrust science or scientists. They get caught up in issues like "climategate", they assert that results about medical science are distorted by financial pressures, etc.
In short, the rationalizations of individuals who reject scientific results often point to the scientists behind the results, their interests and motivations, instead of the results themselves. This isn't exclusively true, but I think the tie in to the idea of "scientific consensus" is this:
Perhaps science would be better served by a decreased emphasis on the individuals carrying out scientific research and increased emphasis on the results themselves.
> But one tenet of scientific thought is that truth and factuality are not measures of an individual's or society's belief, but rather, they are discovered by experiments whose results can be replicated.
Discovery and replication can be complex affairs though: was the experiment competently executed, was it interpreted the right way, did any attempt at replication properly replicate the conditions of the original, is there theory to go along with the experiment and how convincing and coherent is that theory, and so on and so on. And if there are points of disagreement, good luck trying to figure out who is right without any domain knowledge.
So, yes, ultimately, as a non-scientist in a lot of cases your best bet is to look at what the scientific consensus is, rather than trying to be a scientist yourself.
This is reasonable -- I admit, I accept as true many results that I don't fully understand precisely because I don't have time to learn everything about every field.
But I'm not sure that this kind of leniency with respect to evidence for beliefs is incompatible with the idea that we should focus on the results over the individuals.
> Perhaps science would be better served by a decreased emphasis on the individuals carrying out scientific research and increased emphasis on the results themselves.
No thanks.
One of the worst aspects of modern-day science right now is that the media et al. seem to think it is okay to allow anyone to present counterarguments regardless of their scientific background. This causes problems for the layman, who has no ability to even comprehend, let alone compare, results from different individuals. And it is laymen who ultimately determine what science gets funded and what happens to the results of the science, e.g. political action.
The results of one study could mean anything and I agree it should not be emphasized to the public.
However, the term "scientific consensus" makes truth sound like a popularity contest and worse - a popularity contest by people who claim they are better than you. This appeals to the authoritarian nature of people, not their critical nature. It is basically telling the public they aren't allowed to think. It's not only a jerk thing to do, it discourages scientific literacy outside of scientific professionals.
Another problem with "scientific consensus" is that, due to its political influence and nebulous definition, it is ripe for abuse. If there is no rigorous methodology to determine it in an accurate and replicable manner, then it can be whatever someone says it is.
I agree that we need some sort of way of determining scientific validity by taking into account many tests, theories and perspectives, but "scientific consensus" (whatever that is) is not it.
I agree that armchair "expert opinion" is a significant problem at this time (see Harry Collins, "Are we all scientific experts now?"). However, I'm not sure I see how a social emphasis on evidence over authority would invite more of this.
Collins, for instance, traces this problem to misplaced feelings of expertise. He believes we should "elevate science to a special position in our society." I mean to suggest that if we focus less on authority ("x says this, x is an expert, so this is true") and more on evidence ("experiment y is evidence that z"), then this kind of layperson interjection would be excluded from social discourse.
Your use of an imagined ignorant politician wielding the phrase 'scientific consensus' in the affirmative as a disagreeable act seems odd to me. It is generally the 'skeptic' ignorantly asserting 'there is no consensus' which brings focus to the concept in that debate:
"The skeptic attitude to consensus usually starts with “there is no consensus”. That’s wrong, and they usually retreat from it to “but consensus science is meaningless”, and/or “consensus has nothing to do with science”. The latter is largely true but irrelevant. The existence of the consensus doesn’t do a lot to determine what science is done; it doesn’t prevent contrary lines being explored. But the consensus view does come into the tricky interface between science and policy, and science and the media." [1]
> they assert that results about medical science are distorted by financial pressures, etc.
I'm not sure what you mean here? Much science conducted is so distorted by financial pressure that it isn't scientific. Consider how an entire area of research in the pharmaceutical industry should probably just be binned [2].
Considering this, it is more essential than ever in considering the individuals carrying out the research. As long as we have weak and corrupted regulatory systems there will be a continued distrust of any industry-sponsored research, and in many areas that's the research that dominates.
Until there's a global change in how research is funded and conducted there seems little hope of trusting that the output is very scientific.
The hallmark of science is predictive power. Experiments, replication, peer review are all nice but optional.
When you have a general enough theory, you don't need to re-do the exact same experiment. You may not need to experiment at all and get by on mere observation. (Just call it a "natural experiment" when pressed.)
I don't think this is correct. Willingness to accept theories based on our limited understanding of the underlying facts and our observations has led to widespread, incorrect belief before.
Descartes and Galileo accepted atomism because, based on their limited understanding of chemistry, it made sense. It wasn't until experiments like the cathode ray tube and gold foil experiments revealed more elementary particles that people were forced to revise their beliefs.
Philosophers of science like Karl Popper like to focus on falsifiability over induction precisely because we tend to become complacent in beliefs that don't tell the whole story and otherwise obscure more interesting, less intuitive underlying phenomena.
I feel like, if we had it your way, we might still be atomists.