In most fields of study you can't really perform double-blind experiments. We know that smoking is linked to cancer through decades of correlational studies and careful analysis of confounding factors, for example.
The article discusses how the study looked at different universities during the same time period, some of which had access to Facebook and some of which didn't, and found that the universities with access showed an increase in mental health issues over that period. There could still be confounders, sure (or the sample size could be too small, etc.), but at first glance that's not an unreasonable approach, as it tries to isolate the variable "facebook yes/no".
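The design described is essentially a difference-in-differences comparison. A minimal sketch of the idea, with numbers invented purely for illustration (none are from the study):

```python
# Difference-in-differences sketch with made-up illustrative numbers.
# Units: share of students reporting mental health issues.

# Universities that got Facebook (treated) vs. those that didn't (control)
treated_before, treated_after = 0.20, 0.26
control_before, control_after = 0.21, 0.23

# Trend within each group over the same period
treated_change = treated_after - treated_before
control_change = control_after - control_before

# DiD estimate: the extra change in the treated group, assuming both
# groups would otherwise have trended the same ("parallel trends" --
# the assumption the whole design stands or falls on)
did = treated_change - control_change
print(f"DiD estimate: {did:+.2f}")  # +0.04
```

The point of subtracting the control group's trend is to net out whatever was happening to student mental health everywhere over that period, leaving only the treatment-correlated difference.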
That said, if you haven't read the article, I'm not sure why you even felt the need to comment? This is exactly the same kind of shallow dismissal I was calling out.
I made it a point not to read it, because virtually all social science papers are like this. It's really not worth my time, and that's why the "shallow" dismissals should be the default response.
Going back to your specific comments. Clearly the universities were not randomly assigned to treatment and control. And the actual number of independent samples is extremely unlikely to give statistically significant results for the single-digit percentage effect shown. And no matter what they do, for something as complex as mental health, listing out all the confounding factors is hopeless - unlike lung cancer, where you are literally sucking tar into your lungs and the sample sizes and effects are huge. It's a useful observational study, but it is ridiculous to call it proof.
> We know that smoking is linked to cancer through decades of correlational studies and careful analysis of confounding factors, for example.
Yes, it took decades, because there was no proper control set. There are workarounds like the backdoor and front-door criteria, but yeah - it will take decades of work and looking inside the "black box".
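For what it's worth, the backdoor adjustment amounts to reweighting outcomes by the confounder's distribution rather than naively conditioning on the exposure alone. A toy sketch, with a single binary confounder and numbers invented for illustration:

```python
# Backdoor adjustment sketch:
#   P(Y=1 | do(X=1)) = sum over z of P(Y=1 | X=1, Z=z) * P(Z=z)
# Z is a hypothetical binary confounder (say, "high-stress environment")
# assumed to satisfy the backdoor criterion. All numbers are made up.

p_z = {0: 0.7, 1: 0.3}                 # P(Z=z) in the whole population
p_y_given_x1_z = {0: 0.10, 1: 0.40}    # P(Y=1 | X=1, Z=z)

# Average over the population distribution of Z, not the distribution
# of Z among the exposed -- that's what blocks the backdoor path.
p_y_do_x1 = sum(p_y_given_x1_z[z] * p_z[z] for z in p_z)
print(f"P(Y=1 | do(X=1)) = {p_y_do_x1:.2f}")  # 0.19
```

The hard part in practice is the part no formula fixes: knowing that your measured Z actually closes every backdoor path, which for something like mental health is exactly the "listing all confounders is hopeless" problem raised above.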
Proofs are for mathematics, not for science. (I share your distaste for science journalism that throws big words like "prove" around without much care, but that's probably not something you can fault the study authors for.)
This is evidence in favour of a theory. It is to be understood within a larger body of evidence. Eventually, hopefully, there is enough evidence in one direction or another that we may draw more or less definitive conclusions.
> I made it a point to not read it, because virtually all social science papers are like these. It's really not worth my time
Nobody is forcing you to read this study, but somehow you seem to assume that your shallow dismissals (to which you are of course entitled privately) are worth anyone's time.
> but at a first glance, that's not an unreasonable approach, as it tries to isolate the variable "facebook yes/no".
I agree it's not unreasonable, but you have to account for the fact that back then, most of the colleges that had it were top tier/high stress/highly selective colleges. Facebook started at Harvard, then went to Yale and Princeton, and then on to basically most of the US News top 50.
The smoking comparison is very apt I think. People and institutions persistently pointing out that correlation isn't causation is a big part of why it took decades for the link between smoking and cancer to become commonly accepted after it was well established.
Some were surely acting in their own personal financial interests, but I'm also certain that a lot of it was more nuanced and personal. People need to think of themselves as, for the most part, good people who do mostly good things. Knowingly contributing to something that makes life much worse for many people doesn't align with that, and they will need to deny it. I know if you polled Philip Morris employees about cancer in the late 60s, after the link was confirmed, you'd hear a lot about correlation and uncertainty.
HN isn't a random slice of the population. A lot of us here work in this domain or on similar products. There are certainly people in this comment section who directly worked on the core facebook product being discussed. They need to think of themselves as good still, too.
Smoking-to-lung-cancer has a very direct delivery mechanism: inhaling tar into the lungs. The effect sizes and sample sizes are big. The hypothesized mechanism here - unfavorable social comparison - is far more tenuous, and the sample size here is the number of universities, not the number of students.
These two are vastly different situations.
To give an example: establishing a causal effect of nicotine specifically on lung cancer is still an open question, even as the causal effect of smoking on cancer is very clear.