
Are you familiar with the critiques of Radin's works? They're plentiful.



I found this Nature article, which argues that the p values are inflated because Radin fails to include hypothetical negative tests in his file-drawer analysis. It seems plausible, but I don't fully understand it yet, so I will need to think about it further. That being said, I would still have trouble accounting for all the various pieces of evidence in his book. E.g., the anticipation graphs are pretty remarkable.

https://www.nature.com/articles/39784
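My rough reading of the file-drawer point, as a toy example I made up (not the article's actual calculation): if a literature of true-null studies only gets its positive results published, combining the published ones via Stouffer's method looks wildly significant.

    import numpy as np

    rng = np.random.default_rng(1)
    z = rng.standard_normal(200)                          # 200 studies of a true-null effect
    published = z[z > 0]                                  # the file drawer hides the rest
    combined = published.sum() / np.sqrt(len(published))  # Stouffer's combined z
    print(len(published), combined)                       # ~100 studies, combined z around 8

So a proper file-drawer correction has to estimate how many hidden negative studies it would take to cancel the combined result, which is apparently where the disagreement is.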


I did a Google search and picked one of Radin's articles: https://www.researchgate.net/publication/254203087_Evidence_...

My first thought was bad synchronization, but the anticipation window was 3 seconds, which is large enough to rule that out.

The second possibility is the filtering process. It's odd, and filtering is a common source of error. The interesting part is on pages 21-22:

> Low consistency responders

> Given that there was better evidence for presentiment from the consistent responders, one wonders how the inconsistent responders performed?

> Examination of the raw data revealed that in most cases, the inconsistent responders were so labeled because one or two of their calm trials had exceptionally large within-trial variances (i.e., variance of the physiological measure from the moment of the stimulus to the end of the cool-down period). Because of this observation, as a post-hoc test we examined the mean presponse correlation for SCL for each inconsistent responder, after removing the one calm trial with the most extreme within-trial variance.

> Table 9 shows that the effect of removing this single trial from each of the total of 49 inconsistent responders (across both experiments), indicated as I* in the Table, dramatically changed the overall results.

> All data from all subjects combined resulted in a nonsignificant zpre-r = 0.03, whereas removing the single highest-variance calm trial from each of the inconsistent subjects (leaving 98.5% of all data) resulted in a combined significant zpre-r = 2.99 for SCL.

So the problem is that they filter out the trials where someone made a random involuntary move just before seeing a calm image, but they do not filter out the trials where someone made a random involuntary move just before seeing an emotional picture.


Very interesting. It is suspicious, but I also wonder whether removing 1.5% of the data is enough to produce a statistically significant effect. In other words, filtering is definitely happening, but is it enough to account for the effect?

Also, he has a number of such studies. Are they all suspect due to filtering, or are some filter-free?


If you remove 1.5% of the data at random, it will almost surely not produce a statistically significant change. If you cherry-pick which 1.5% to remove, it can.

To simplify the example, I'll assume that each person saw the same number of calm and emotional photos, and that half of the photos were in each class. (In the paper the number varied from person to person and the ratio was approximately 2 to 1; the conclusion is similar but harder to write out.)

They use an analog sensor, but they are essentially counting how many times a person reacted before seeing an image. A reaction could be a premonition, a sneeze, or anything else.

In the part I quoted, they selected the people who reacted exactly once before a calm photo. This selection says nothing about how many times each of those people reacted before an emotional photo. When they compare the data, the difference is not statistically significant, so those people also reacted approximately once before an emotional photo.

This is somewhat of a coincidence, with no strict theoretical reason behind it, but if you assume some sensible distribution for the chance of reacting randomly before a photo and allow some hand-waving, it is not very surprising: if people have no psi abilities, they should react approximately the same number of times before each set.

So now you have a bunch of people who reacted exactly once before a calm photo and approximately once before an emotional photo. We all agree that this is obviously not proof that they have some premonition.

Then they remove that 1.5% of the data, which is precisely the reactions before calm photos, and keep all the reactions before emotional photos.

And now you have a bunch of people who never reacted before a calm photo and reacted approximately once before an emotional photo. So there is a clear difference in the reactions before the two kinds of photos, and they misinterpret this as evidence of premonition.

They actually use an analog sensor, so there is more noise involved, which makes everything fuzzier. If the noise level had been high enough it could have overshadowed the bad cut they made in the data, but it was not that high.
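To convince myself, here is a quick simulation sketch (my own toy model, not their actual analysis pipeline). Under the null, pre-stimulus reactions before calm and emotional photos have the same distribution, and removing only the single most extreme calm trial per person manufactures a "significant" difference:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_subjects = 49            # the 49 "inconsistent responders" in the paper
    n_calm, n_emo = 30, 15     # illustrative per-subject counts, roughly 2:1

    def reactions(n):
        # mostly small baseline noise plus an occasional large random reaction
        return rng.exponential(0.2, n) + rng.binomial(1, 0.05, n) * rng.exponential(3.0, n)

    diff_raw, diff_cut = [], []
    for _ in range(n_subjects):
        calm, emo = reactions(n_calm), reactions(n_emo)
        diff_raw.append(emo.mean() - calm.mean())
        # the asymmetric cut: drop only the single most extreme calm trial
        calm_cut = np.delete(calm, np.argmax(calm))
        diff_cut.append(emo.mean() - calm_cut.mean())

    # across subjects: is the mean (emotional - calm) pre-stimulus level > 0?
    print(stats.ttest_1samp(diff_raw, 0))   # not significant: the data are null
    print(stats.ttest_1samp(diff_cut, 0))   # typically "significant" after the cut

The removed trials are only a couple of percent of the simulated data (the paper's cut was 1.5%), but because they are chosen to be exactly the largest calm reactions, the bias is systematic across all 49 subjects instead of averaging out.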

> Also, he has a number of such studies. Are they all suspect due to filtering, or are some filter-free?

I don't have time to read every study he published, but if you link one (with full text) I'll try to see if I can find an error.


Thank you, I will dig through his studies and see if I can find one that does not use filtering.





