
In lieu of just linking to studies, I'll link to a great Slate Star Codex post: https://slatestarcodex.com/2014/04/28/the-control-group-is-o...



Naively, this would seem to be a yes to psi. Bem got a strong signal, and since Scott doesn't believe in psi, he concludes instead that science as it currently works is deeply flawed.


That is absolutely not my takeaway from that article.


What is your takeaway? At the end, Scott says he doesn't believe in psi despite the results, because Coyne says psi is crap. That doesn't sound like checking your priors at the door.

At any rate, contra your original claim, it appears a psi effect has been extensively documented in the scientific literature.


Should we "check our priors at the door", though? When neutrinos were "observed" to travel faster than light, the appropriate conclusion wasn't "well, I guess we were fundamentally wrong about just about everything we've found in the last century"; it was something more like "okay, but can I take a closer look at your data?". Set the priors aside and the foundations of physics come into question. I think the same holds for psi: if you abandon your priors and draw conclusions from whichever way some data swings, you are committed to a universe very different from anything previously attested.
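To put rough numbers on the Bayesian point (every figure below is a made-up illustration, including the prior): even if a positive psi study is far more likely in a world with psi than in one with merely flawed methodology, a strong enough prior leaves the posterior tiny:

    # Toy Bayes calculation; every number is an illustrative assumption.
    prior_psi = 1e-6        # prior probability that psi is real
    p_data_if_psi = 0.9     # chance of a positive result if psi is real
    p_data_if_not = 0.05    # chance of a false positive (flawed methods, p-hacking, ...)

    posterior = (p_data_if_psi * prior_psi) / (
        p_data_if_psi * prior_psi + p_data_if_not * (1 - prior_psi)
    )
    print(posterior)  # ~1.8e-05: the data moved us, but "no psi" still dominates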

I think Scott's analysis of "my priors against this are so strong that evidence in support of it looks more like evidence against the validity of the methodology" broadly matches mine.

Forgive me if I am misreading you, here.


Yes, that is my takeaway. Psi is heavily evidenced by the scientific literature and is dismissed because it goes against the priors of the scientific establishment, not because there is anything inherently faulty with the research, at least compared to the standard scientific milieu.


Section V is where he explains why he doesn't believe the results.

"I looked through his list of ninety studies for all the ones that were both exact replications and had been peer-reviewed (with one caveat to be mentioned later). I found only seven...Three find large positive effects, two find approximate zero effects, and two find large negative effects. Without doing any calculatin’, this seems pretty darned close to chance for me.

...

That is my best guess at what happened here – a bunch of poor-quality, peer-unreviewed studies that weren’t as exact replications as we would like to believe, all subject to mysterious experimenter effects."
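A quick way to see why that split reads as chance: drop the two null results and treat each remaining study's direction as a fair coin flip (a deliberately crude assumption):

    # One-sided sign test: how surprising are 3 positives out of 5 signed
    # results if each study's direction is a coin flip under the null?
    from math import comb

    p = sum(comb(5, k) for k in range(3, 6)) / 2**5
    print(p)  # 0.5 -- a 3-2 split is exactly what a null effect predicts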


Why would an even proportion of positive to negative results be considered chance? If half the entrants won the Powerball and half did not, I would not call that chance.


Because "large" here is still very tiny. "Large" in the sense of statistical significance isn't the same as actually large. Equating these "positive results" to "winning the Powerball" is not a fair comparison: if there were Powerball-level results here, we wouldn't need further studies to confirm the effect exists.
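To illustrate with made-up numbers in the ballpark of Bem-style hit rates: a 51% hit rate on a 50%-chance task is a minuscule effect, yet with enough trials it produces an impressive-looking p-value:

    # Tiny effect + big sample = small p-value; "significant" is not "large".
    # Illustrative numbers assuming a Bem-style binary guessing task.
    from math import erfc, sqrt

    n, hit_rate = 100_000, 0.51               # chance predicts 0.50
    z = (hit_rate - 0.5) / sqrt(0.25 / n)     # normal approximation to the binomial
    p = 0.5 * erfc(z / sqrt(2))               # one-sided p-value
    print(z, p)  # z ~ 6.3, p ~ 1e-10 -- yet it's one extra hit per hundred guesses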


Radin claims that in his meta-analyses he gets Powerball-level results, and in fact much beyond that.

Either these researchers are quite incompetent and/or deceitful, or they have found something we currently cannot explain.
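For what it's worth, odds like that are mechanically easy to generate. A rough sketch of Stouffer's method (illustrative numbers only, not Radin's actual data) shows how many individually unimpressive studies combine into an astronomical z-score; the catch is that a small uniform bias compounds in exactly the same way:

    # Stouffer's method: combined z = sum(z_i) / sqrt(N).
    # Illustrative numbers only -- not Radin's actual data.
    from math import erfc, sqrt

    n_studies = 90
    z_each = 1.0                                        # unimpressive on its own
    z_combined = n_studies * z_each / sqrt(n_studies)   # sqrt(90) ~ 9.5
    p = 0.5 * erfc(z_combined / sqrt(2))                # one-sided tail probability
    print(z_combined, p)  # z ~ 9.5, p ~ 1e-21: "Powerball-level" odds, but a
    # per-study bias of z = 1 (selective reporting, optional stopping) would
    # produce exactly the same number with no psi at all.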



