> 2011: Daryl Bem publishes his article, “Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect,” in a top journal in psychology. Not too many people thought Bem had discovered ESP, but there was a general impression that his work was basically solid, and thus this was presented as a concern for psychology research.
> In retrospect, Bem’s paper had huge, obvious multiple comparisons problems—the editor and his four reviewers just didn’t know what to look for—but back in 2011 we weren’t so good at noticing this sort of thing.
I was a postdoc in a Psychology department when this was going on, and "obvious multiple comparisons problems" isn't a good characterization. Any competent psychology researcher in 2011 (a) understood multiple comparisons and looked for them as a matter of course, and (b) knew there was something wrong with Bem's paper (see the editorial disclaimer).
Here is the main takedown of it: https://dl.dropboxusercontent.com/u/1018886/Bem6.pdf
That is some pretty advanced statistics, not just "correct for multiple comparisons".
What was going on then, and continues now, is that psychology and social science in general are coming around to the realization that the tools of the past 50 years are flawed, and that to correct them, researchers need to become better statisticians. But it isn't a matter of "take Stats 101, noobs": these are people who have been doing statistical analysis routinely for years. I think there is anxiety that to really do things right you need to _primarily be_ a statistician.
So there is some defensiveness in the social sciences about this, certainly not helped by the fact that every jackass on the internet who's taken an undergrad math class thinks they know better.
In the end I quit my psych research job to be a software engineer since all the stats hurt my head and I needed something less quantitative to do.
I too am coming from psychology and looking to make the transition into a career in tech, and I would be very interested to hear more about your experience making that switch. But first I would like to offer my own perspective, having just gotten out of school.
I agree that there is absolutely a need for a transition to more advanced statistical methods in the field. In cognitive psychology at least, you are starting to see growing interest in adopting Bayesian techniques and moving away from null hypothesis significance testing. But unless you come across it on your own, the difference between Bayesian and frequentist statistics is unlikely to be mentioned until the graduate level. I believe Bayesian methods may alleviate some of these issues. For instance, one of the studies we were starting up toward the end of my time at the lab had an interesting property: using Bayesian methods, it could be expected to do what amounts to corroborating, or supporting, the null hypothesis. If Bayesian methods start seeing wider adoption, I have to wonder how carefully people will think about their choice of priors, but it's a step in the right direction.
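To make the "supporting the null" point concrete, here is a minimal sketch (not the parent study's actual analysis; the data and prior scale are made up) of a Bayes factor for a point null versus a diffuse alternative on a normal mean with known variance. Unlike a p-value, BF01 > 1 quantifies evidence *in favor of* the null:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical data: 40 difference scores with known sd = 1 (illustrative only).
rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=40)

n, sigma, tau = len(x), 1.0, 1.0   # tau: prior sd on the mean under H1 (assumed)
xbar = x.mean()
se = sigma / np.sqrt(n)

# Marginal density of the sample mean under each hypothesis:
#   H0: mu = 0            -> xbar ~ N(0, se^2)
#   H1: mu ~ N(0, tau^2)  -> xbar ~ N(0, tau^2 + se^2)
m0 = norm.pdf(xbar, loc=0.0, scale=se)
m1 = norm.pdf(xbar, loc=0.0, scale=np.sqrt(tau**2 + se**2))

bf01 = m0 / m1  # > 1 means the data favor the null over the alternative
print(f"mean = {xbar:.3f}, BF01 = {bf01:.2f}")
```

The choice of `tau` is exactly the "choice of priors" worry: a very wide prior on the alternative makes it easier to find evidence for the null.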
With regard to the statistics being taught: quite a few of the K300 (Statistics for Psychology) courses are now taught using R, but my own K300 course emphasized doing the calculations by hand and didn't allow calculators. An interesting point brought up on a podcast I was listening to [1] was that we still teach statistical methods in the order they were developed, not the order that makes the most sense. I could see how teaching ANOVA as a special case of the linear model might be less hand-wavy (a quick sketch of that idea is below). It was an interesting podcast; the professor being interviewed advocates a technique called structural equation modeling, which I would love to find time to read up on.
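As an illustration of the "ANOVA is just a linear model" point, here is a small sketch with made-up data showing that the one-way ANOVA F statistic is exactly the F test comparing a dummy-coded regression against an intercept-only model:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
groups = [rng.normal(loc=m, scale=1.0, size=20) for m in (0.0, 0.3, 0.8)]

# Classic one-way ANOVA
f_anova, p = f_oneway(*groups)

# The same test as a linear model: regress y on dummy-coded group membership
y = np.concatenate(groups)
g = np.repeat(np.arange(3), 20)
X = np.column_stack([np.ones_like(y)] + [(g == k).astype(float) for k in (1, 2)])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
rss1 = np.sum((y - X @ beta) ** 2)      # residual SS of the full model
rss0 = np.sum((y - y.mean()) ** 2)      # residual SS of the intercept-only model
df1, df2 = 2, len(y) - 3                # (k - 1, n - k) for k = 3 groups
f_lm = ((rss0 - rss1) / df1) / (rss1 / df2)

print(f"F from ANOVA: {f_anova:.4f}, F from linear model: {f_lm:.4f}")
```

The two F values agree to floating-point precision, which is the whole pedagogical point: the ANOVA table is a model comparison in disguise.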
However, the lab I was with specializes in building mathematical models of category learning and has a relatively strong quantitative focus, so I can't say how well this extends to other subfields or other universities. I no longer have the paper, but a while ago I saw a survey of psychology departments that, I believe, found the number of methods courses required in graduate programs was declining and that fewer universities have researchers who specialize specifically in methodology. The paper made an interesting point: when you specialize in methodology, you may play an important role in increasing the quality of everyone else's work, but if your work specifically targets researchers, you're unlikely to see the same kind of funding or high-profile journal publications as people in more applied areas. We can't all be like Tversky, but hopefully we'll start seeing some of that return, especially since it seems to have declined around the time psychophysics lost prominence.
I guess part of the apprehension about increasing the complexity of statistical methods may come from (1) worry about decreasing the accessibility of one's work, and (2) the risk that, without a strong understanding of the math, you push complexity somewhere you aren't equipped to deal with it. With regard to (1), I know that one of our frequent collaborators had developed a quantum dynamics model of decision making that was showing impressive results characterizing the data, but I do not envy the amount of effort I'm sure he has to put into explaining the math in his papers. (2) might be addressed through more interdisciplinary collaboration, but I think you need support for both the development of methodology and its adoption.
If you don't mind me asking, what area were you working in? And how did you go about transitioning to becoming a software engineer? I started out programming by doing simulations like Conway's Game of Life in a course that taught programming for cognitive scientists, and along the way I kind of fell in love with it. When I decided to do an honors thesis, I got the chance to do far more programming than is expected in an undergraduate psychology degree: using an OWL ontology in the planning stage to help with the experiment's implementation, debugging the experiment, doing analyses, and writing a visualization program to simplify recovering the orientation of multidimensional scaling solutions. Coming out of my undergrad, a career writing software, whether as a developer or an engineer, looks preferable to going back to school right now, but Python junior-dev positions are proving tricky to find in Indy.
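For anyone curious about the MDS orientation issue: an MDS solution is only identified up to rotation, reflection, and translation, so one common way to line it up with a reference configuration is Procrustes alignment. This is only a sketch of that general idea, not the actual visualization program described above:

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(2)
reference = rng.normal(size=(10, 2))   # e.g., a theoretically expected configuration

# Pretend the MDS solver returned the same configuration, arbitrarily rotated
theta = np.pi / 3
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
mds_solution = reference @ rotation.T

# Procrustes finds the rotation/reflection/scaling that best aligns the two
ref_std, aligned, disparity = procrustes(reference, mds_solution)
print(f"disparity after alignment: {disparity:.2e}")  # near zero: orientation recovered
```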