It's an interesting article, but it seems they're extrapolating a lot from very little signal.
Look at the y-axis on the graphs. The average value for "relationship satisfaction" looks to be around 79%, but the variation over time appears to range only from a low of about 77% to a high of about 83% (eyeballing it).
Basically +/- 3 percentage points.
There's barely any signal there. A swing of a couple of points up or down might indicate a real (if small) effect on a large enough sample, but it's basically nothing in the big picture.
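To put rough numbers on that, here's a quick simulation sketch (all values are assumptions loosely matching the eyeballed figures above, including the spread of individual ratings): with survey-panel sample sizes, even a two-point shift in the mean comes out as "statistically significant" while the standardized effect stays tiny.

    # Back-of-envelope simulation: a ~2-point shift in mean "satisfaction"
    # on a 0-100 scale is easily "significant" at large N, yet the effect
    # size is tiny. The mean (~79) matches the eyeballed graph; the
    # individual-level SD of 15 is purely an assumption for illustration.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 50_000          # respondents per wave, as in a big survey panel
    sd = 15.0           # assumed spread of individual ratings

    low_wave = rng.normal(79, sd, n)   # a "low" point on the curve
    high_wave = rng.normal(81, sd, n)  # a "high" point, 2 points up

    t, p = stats.ttest_ind(low_wave, high_wave)
    d = (high_wave.mean() - low_wave.mean()) / sd  # rough Cohen's d

    print(f"p = {p:.1e}, Cohen's d = {d:.2f}")
    # p is essentially zero ("significant"), but d ~ 0.13 is a small effect.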
If you've ever participated in a social science study in your undergrad years, you know these stats are complete BS. People just try to get it over with as soon as possible and click through the surveys as fast as they can. The ratings are incredibly subjective, and there's basically no objective measurement. The study authors go in with preconceived notions of what the results should look like, based on the existing literature.
It reminds me of a famous racial bias study taught in "Leadership" courses in which the authors found that people had slower reaction times to pictures of black people and came to the conclusion that this meant we must have some innate bias against black people. Zero attempt to control for the image brightness, contrast, or any other potential explanatory factors.
> It reminds me of a famous racial bias study taught in "Leadership" courses in which the authors found that people had slower reaction times to pictures of black people and came to the conclusion that this meant we must have some innate bias against black people. Zero attempt to control for the image brightness, contrast, or any other potential explanatory factors.
Look, the IAT (Implicit Association Test), which you're presumably talking about, has a lot of problems, but that ain't one of them. Pretty much all of the things you've brought up have been examined in multiple independent studies.
It doesn't seem to replicate massively well in terms of behavioural impacts, but to suggest that social scientists don't control for obvious things is just false.
That being said, the US approach to recruiting social science participants is insane and should be destroyed.