Unlikely. Taleb seems to like how his argument has been summarised.
The crux of the issue Taleb has with Silver is that showing probabilities without making a decision/declaration is cowardly. When asked to make a prediction, Silver says one thing and then follows it up with "but don't be surprised if the outcome is completely random." And that's the point: the fact that he covers his ass is the problem. Election forecasting is hard, so don't pretend you know something you don't if you won't stake anything on your predictions. Oh, so it could be anything and you don't want to be held accountable? Then don't say a damn thing. Basically, Silver never wants to be held accountable for his models but always has uncertainty covering his ass as a cop-out. So I'm with Taleb on this one: either Silver makes a claim and sticks with it, or he shouldn't show probabilities and pretend he knows things.
If asked about the outcome of some uncertain future event (say, an election or some sports game), saying "team A has a 20% chance of winning" is making a prediction.
Making a decision/declaration in a case where the outcome isn't (yet) certain doesn't mean "being held accountable"; it means you're either stupid, a liar, or a stupid liar. If you want to call the result of a game before it's certain, then you shouldn't say a damn thing, because anything you say is a lie if it falsely implies that the result is certain.
If reality is uncertain, then any statement or prediction of certainty is by definition wrong. Some predictions carry more certainty than others, and you can stake things on them (sports betting is a great example - if you think there's a 20% chance of winning but others think it's 10% or 30%, then there's an opportunity), but it's ridiculous to require certainty where certainty shouldn't be expected.
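To put a number on that betting example: suppose (hypothetically) you believe a team has a 20% chance of winning while the bookmaker's odds imply only 10%. A rough expected-value sketch in Python, with made-up decimal odds:

    # Hypothetical numbers, not from the thread: you believe p = 0.20,
    # while the market's decimal odds of 10.0 imply a probability of 1/10 = 0.10.
    my_prob = 0.20
    decimal_odds = 10.0

    # A 1-unit bet wins (odds - 1) with probability p and loses 1 otherwise.
    ev = my_prob * (decimal_odds - 1) - (1 - my_prob)
    print(f"EV per unit staked: {ev:+.2f}")      # +1.00 -> a profitable edge

    # If the market instead implies 30% (decimal odds ~3.33), the same bet loses:
    ev_bad = my_prob * (3.33 - 1) - (1 - my_prob)
    print(f"EV per unit staked: {ev_bad:+.2f}")  # -0.33 -> no opportunity on this side

If the market implied exactly your 20% (odds of 5.0), the expected value would be zero; the "opportunity" exists only where your probability and the market's diverge.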
This is not what the disagreement is about. It's got nothing to do with uncertainty in the predictions, and the debate is not about saying for certain what is going to happen. We know that distributions have means and variances, so one can't say what WILL happen. The point Taleb is making is that Nate presents likelihoods and then throws his hands up and says, whoa there, I'm not saying what is going to happen, just look at these likelihoods and draw your own conclusions. When the public naively looks at them and the model appears right, he gets praise for being a predictive genius; when the less likely outcome occurs, he covers his arse with "well, it was part of the 10% chance, so I'm still right". One is approaching stats from a real-world perspective where actions have consequences; the other wants the praise that comes with being seen as a predictive so-and-so but never the accountability for his predictions. Put it another way: Nate's "predictions" are only worth as much as the money/reputation he stakes on them. So far that's zero, by his own admission.
I think this is a mischaracterization of Nate's own words. He certainly stakes his own reputation on the accuracy of his forecast. They spend a lot of time looking back and seeing how accurate the model was for the election as a whole.
There are two kinds of things Nate is saying here, and they are related but distinct:
1. Educating readers/listeners about how 20% chances happen all the time
2. Internal analysis and external reporting of how often his 20% predictions were right. If 538 predicts a group of 100 congressmen to all have a 20% chance of being elected and then 21 do get elected, the model did well and Nate's work is worth money to ABC and his reputation is improved (or maintained). If instead 33 of those congressmen get elected, the opposite result happens.
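For what it's worth, a back-of-the-envelope check of that 21-versus-33 contrast (assuming, purely for illustration, that the 100 races are independent - which, as noted elsewhere in this thread, real races aren't):

    from math import sqrt

    n, p = 100, 0.20
    mean = n * p                 # expected winners: 20
    sd = sqrt(n * p * (1 - p))   # standard deviation: 4.0

    for observed in (21, 33):
        z = (observed - mean) / sd
        print(f"{observed} winners -> z = {z:+.2f}")
    # 21 winners -> z = +0.25  (entirely consistent with the 20% forecasts)
    # 33 winners -> z = +3.25  (strong evidence the 20% figure was too low)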
I disagree. Probabilities can be useful even if they're not 0% or 100%. For example, weather forecasts often give the probability of rain. My plans will be different if the probability is 20% vs 80%. My plans don't tend to vary based on who wins the election, but I imagine that information is very useful to some people.
And if your data is telling you to be 80% sure, I don't think you should fudge it and claim 100%. Report the uncertainty that you actually have.
With the weatherman, I could keep track of when it rains and when it doesn't. If it rains significantly more or less than 80% of the time when they say there's an 80% chance of rain, I can say that their model is bad.
I can't think of a similar way to evaluate Silver's election forecasting model. The races very clearly aren't independent probabilities, and his model changes significantly from cycle to cycle. Was his model good in 2012, when every state went the way it said was most likely? Was it bad when it said Hillary had a 71.4% chance of winning?
They don't just predict the top-line presidential result; they forecast every race, in every state, for president, House, and Senate. Non-independence is accounted for in the model, so they have no qualms about you judging them by the calibration of their predictions, i.e. you want roughly 60% of their "60%" forecasts to be right, and 40% to be wrong. If all of the races they predict at 60/40 go to the more likely candidate, they themselves consider this a failure: https://fivethirtyeight.com/features/how-fivethirtyeights-20...
> I can't think of a similar way to evaluate Silver's election forecasting model
You bucket every prediction, look at the outcome, and then confirm whether the favoured outcomes in the 8th decile actually occurred 70% to 80% of the time.
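A minimal sketch of that bucketing, with hypothetical (probability, outcome) pairs standing in for a real archive of forecasts:

    from collections import defaultdict

    # Hypothetical archive: (forecast probability of the favoured outcome,
    # whether that outcome actually occurred).
    forecasts = [(0.72, True), (0.75, True), (0.78, False), (0.71, True),
                 (0.55, True), (0.52, False), (0.91, True), (0.65, False)]

    buckets = defaultdict(list)
    for prob, happened in forecasts:
        buckets[int(prob * 10)].append(happened)  # 0.70-0.79 -> the "8th decile"

    for decile in sorted(buckets):
        hits = buckets[decile]
        rate = sum(hits) / len(hits)
        print(f"{decile * 10}-{decile * 10 + 9}% bucket: "
              f"{rate:.0%} occurred (n={len(hits)})")
    # A well-calibrated model's 70-79% bucket should land at roughly 70-80%.

With FiveThirtyEight's actual race-level forecasts in place of the toy list, this is the kind of calibration check the article linked above describes.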
IMHO, what Nate Silver does is not prediction of the outcome of future events, such that when he doesn't get it right he has failed, but rather an analysis of existing data in order to provide a detailed description of our own uncertainty. And by seeing how this uncertainty evolves as more data comes in or different events happen, we get a better understanding of the underlying process.
When reporting on the run-up to an election, taking great care to avoid having one's figures misconstrued as suggesting that the outcome is settled, when it is not, is the responsible thing to do. For Silver to not say a damn thing would leave all the interminable speculation in the mouths of those with even less knowledge of what might happen, and those with an agenda.
Absolutely disagree. People have a tendency to think forecasting is simply making a binary prediction: something will happen or it won't. That simply is not the case. It's more than that. It's quantifying uncertainty. In actuality, reducing a prediction to a simple "X will happen/won't happen" statement throws away information. I can predict the sun will rise tomorrow, and can also predict that the Denver Nuggets will beat the Dallas Mavericks tonight, but the probabilities associated with those predictions are vastly different.
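To make "throws away information" concrete: under a proper scoring rule like the Brier score (my example, not anything from the thread), rounding an honest probability up to certainty scores worse on average. A small sketch with made-up numbers:

    def expected_brier(true_p, reported_p):
        # Expected squared error when the event truly occurs with
        # probability true_p but the forecaster reports reported_p.
        return true_p * (1 - reported_p) ** 2 + (1 - true_p) * reported_p ** 2

    # An event that genuinely happens 60% of the time (say, the Nuggets winning):
    print(expected_brier(0.60, 0.60))  # 0.24 - honest probability
    print(expected_brier(0.60, 1.00))  # 0.40 - rounded up to "they will win"

    # The sunrise, by contrast, can honestly be reported as a near-certainty:
    print(expected_brier(0.999999, 1.0))  # ~0.000001

A flat "X will happen" treats both predictions identically; the probability is what distinguishes them.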
The quandary that FiveThirtyEight and Silver are in is that the general public is stupendously bad with probabilities. Prior to the 2016 election, Silver was personally attacked, in some cases by major media organizations like the Huffington Post, and accused of tipping the scales towards Trump (for some inexplicable reason), since their model was giving him approximately a 1 in 4 chance of winning. Then, in the months following the election, the narrative somehow switched, and the fact that Trump won despite only being given a 1 in 4 chance by FiveThirtyEight meant that Silver was now a hack and the model was wrong.
So now, fast forward to this year's election, Silver is doing his best to drill into people's minds that, yes, their model showed a Democratic win in the House as the most likely outcome, but that absolutely does not mean the model says it is a certainty, or that other outcomes would be unusual. It isn't "covering his ass", it's educating the public on how probabilistic statements work.