Take their example question - 'Will North Korea launch a new multi-stage missile before May 1, 2014?' That's a yes/no question, but none of the participants knows the answer. They are just trying to forecast the future based on the balance of probabilities. So if NK does launch a missile, the people who answered 'no' were wrong, but the reasoning that led them to pick 'no' could still have been sound. So giving feedback to these people and telling them that they were wrong does not magically improve their predictions.
Perhaps a simpler example could be used. 'Will the next roll of this die score 3?' Well, it's obviously more likely that some other result will happen. But if the 3 does come up, you can't say that all the people who said 'no' are worse at predictions...
Even trickier - it sounds like the participants are giving probabilities. So if you guessed some possibility was 42% likely and then it did in fact happen - are you right or wrong? I don't think there is an answer to that question.
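There actually is a standard answer to this in the forecasting literature (not from the article itself): you don't score a single probabilistic forecast as right or wrong, you score it with something like the Brier score - the squared error between your stated probability and the 0/1 outcome - and then average over many forecasts. A minimal sketch:

```python
def brier_score(p, outcome):
    """Squared error between a forecast probability p (0..1)
    and the actual outcome (1 = it happened, 0 = it didn't)."""
    return (p - outcome) ** 2

# A single 42% forecast gets a score either way, but neither score
# alone tells you if the forecaster is any good:
print(brier_score(0.42, 1))  # 0.3364 (event happened)
print(brier_score(0.42, 0))  # 0.1764 (event didn't happen)
```

Lower is better, and only the average over a large set of forecasts is meaningful - which is exactly the poker point made below.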
Simple math applied to a large enough set of predictions will let you know whether or not your probabilities are accurate. It is the same as playing poker - every hand you are essentially betting on probabilities multiple times (okay, right now I think I have a 50% chance of winning this hand, the pot is X, and I have to put in Y to keep playing, so I'm going to continue). In the short term it is very hard to know whether or not you are putting money in when you "should" because of variance. In the long term, the variance disappears and you can see that, for example, you are winning an estimated 2 bets per 100 hands you play, which means you are "more correct" than the people you are playing against at estimating the odds. It's a little more complicated than that, of course, because you are also using bets as weapons to drive better hands out through bluffing and semi-bluffing, etc., but the general point holds: with a large enough sample of actual results compared against your stated probabilities, your probabilities will hold up or they won't.
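You can see the variance disappear with a toy simulation (hypothetical numbers, just to illustrate the point): a calibrated forecaster who says 50% for a genuinely 50% event ends up with a reliably better average Brier score than an overconfident one who says 90% for the same event, once the sample gets big enough.

```python
import random

random.seed(0)

def avg_squared_error(true_p, stated_p, n):
    """Average squared error of a forecaster who always states stated_p
    for events that actually occur with probability true_p."""
    total = 0.0
    for _ in range(n):
        outcome = 1 if random.random() < true_p else 0
        total += (stated_p - outcome) ** 2
    return total / n

# Over 10 events, luck dominates and either forecaster might look better.
# Over 100,000 events, the calibrated one wins essentially every time:
print(avg_squared_error(0.5, 0.5, 100_000))  # converges to ~0.25
print(avg_squared_error(0.5, 0.9, 100_000))  # converges to ~0.41
```

The expected scores (0.25 vs 0.41) follow directly from the algebra, so in the long run the ordering is guaranteed - same as the 2 bets per 100 hands showing through the noise.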
Works great for poker, where you play lots of hands. Works less great for the situation in the article. Just how many North Korean missile launches do you need a person to predict before you know they are lucky or not?
Doesn't matter, as long as they predict a large enough sample of events overall. It just requires the assumption that they will be roughly as accurate on every type of event, which probably isn't true, but it's close enough for statistics.
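Concretely, pooling works by bucketing forecasts by stated probability across all event types and checking the observed frequency in each bucket - a calibration table. A sketch (the pooling-across-event-types assumption is mine, as stated above):

```python
from collections import defaultdict

def calibration_table(forecasts):
    """forecasts: list of (stated_probability, outcome) pairs, pooled
    across any mix of event types (missile launches, elections, ...).
    Groups forecasts into buckets by stated probability (rounded to
    one decimal) and reports the observed frequency in each bucket."""
    buckets = defaultdict(list)
    for p, outcome in forecasts:
        buckets[round(p, 1)].append(outcome)
    return {p: sum(v) / len(v) for p, v in sorted(buckets.items())}

# For a well-calibrated forecaster, the "70%" bucket should show events
# happening about 70% of the time, regardless of what the events were.
print(calibration_table([(0.7, 1), (0.7, 1), (0.7, 0), (0.3, 0), (0.3, 1)]))
```

So even if no single question (like one missile launch) repeats, the forecaster's probability buckets still accumulate enough outcomes to check.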