In general these are valid remarks. But we're not talking about random clueless people deciding to play statistics; the research is obviously conducted by people who know how this kind of stuff (statistics) works. This reminds me of laypeople trying to poke holes in theories of physics (and I'm not talking about quacks and lunatics, just well-intentioned people who think they have a better explanation based on rudimentary high school or even undergraduate science and math). It's such a charming, cute naiveté. Do they really think their simple ideas never occurred to a single professional physicist, who immediately saw they were wrong for a number of reasons?
So, do you really think the researchers in this case hadn't thought of this? I mean yeah, it's OK to doubt and inquire, but come on, this is so rudimentary...
> If the questions are all yes/no questions
The article says (or at least implies) that the subjects give probabilities of "yes" to each question.
EDIT: Oh, and to add, the article clearly says they're not just making shit up and randomly coming up with probabilities. They collect open source info on events and base their estimates on that. So it's not just blind guessing.
As to whether the researchers know what they're doing, see my original comment and the very first thing I said: "I need to know more". It's not charming, cute naiveté; I really do think that the article tells me nothing about the methodology.
Well, I didn't mean your comment was a case of naiveté. TBH, the analogy with my layman physics example isn't perfect, but it reminded me of a similar thing: people who are, or should be, aware that they have only superficial info on something acting as though what they know is all there is to it. The point is, we can't possibly come up with a meaningful critique of this research based on the given article, and I think the most reasonable course of action in this particular case is to assume the researchers know what they're doing (considering who's involved).
Hm, I wouldn't say normal people, but subpopulations of normal people (i.e. not individuals but groups). It's an important distinction in this case. But it depends on how CIA analysts' estimates are aggregated, if at all. In fact, it would be extremely interesting if the CIA normally got its predictions by averaging multiple analysts' estimates. That would mean normal people with open-source intel are beating trained analysts with confidential intel! In that case, we could say normal people beat CIA analysts. But to me this article implies analysts' estimates are not taken as averages. Any CIA analysts here to set the record straight? :)
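To make the aggregation point concrete, here's a minimal toy sketch (Python, purely hypothetical numbers, not anything taken from the article): each person gives a probability of "yes" to a question, the group forecast is just the mean of those probabilities, and a Brier-style squared error is one common way to compare a forecast against the actual outcome.

    # Toy sketch: aggregating yes/no probability forecasts by averaging.
    # All numbers below are made up for illustration.

    def average_forecast(probabilities):
        """Group forecast = mean of individual probabilities of 'yes'."""
        return sum(probabilities) / len(probabilities)

    # Hypothetical individual probabilities of "yes" from a group of forecasters
    group_estimates = [0.60, 0.72, 0.55, 0.80, 0.65]

    # Hypothetical single analyst's probability of "yes"
    lone_estimate = 0.40

    group_forecast = average_forecast(group_estimates)  # 0.664

    # If the event actually happens (outcome = 1), a Brier-style squared error
    # shows which forecast was closer; lower is better.
    outcome = 1
    print("group forecast:", group_forecast, "error:", (group_forecast - outcome) ** 2)
    print("lone forecast: ", lone_estimate, "error:", (lone_estimate - outcome) ** 2)

Whether the CIA's in-house numbers are produced like the "group" line or like the "lone analyst" line is exactly the detail the article doesn't tell us.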