> Giving something like a 95-98% chance to Hillary was arguably a fundamental failure.
Was it? What's the likelihood of a candidate who was polling as well as Clinton losing? 15%, as The Upshot at the NY Times had it? ~7%, as in Sam Wang's model? Even 7% is around 1 in 14, not something shockingly improbable.
People often act like Silver's prediction was good because it gave Trump a higher probability of winning than many of the others, but that's not how probability works. If you say there's a 1/6 chance of rolling a die and getting a 2, and I say there's a 5/6 chance of rolling a 2, and we roll and get a 2, that doesn't mean that I'm correct. I don't think we've had enough rolls of elections resembling 2016 to really have a good grasp of where the percentages should be.
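To make the dice point concrete, here's a minimal sketch (my own illustration, with hypothetical forecaster names): with a single roll that happens to come up 2, the 5/6 forecast looks "right", but averaged over many rolls a proper scoring rule like the Brier score clearly favors the 1/6 forecast. The trouble with elections is that we only ever get the single roll.

    import random

    # Two forecasters assign different probabilities to rolling a 2 on a fair die.
    # One observed roll can't tell them apart, but a scoring rule over many rolls can.
    random.seed(0)

    forecasts = {"forecaster_A": 1 / 6, "forecaster_B": 5 / 6}  # P(roll == 2)

    def brier(p, outcome):
        # Squared error between forecast probability and the 0/1 outcome; lower is better.
        return (p - outcome) ** 2

    rolls = [random.randint(1, 6) for _ in range(10_000)]
    outcomes = [1 if r == 2 else 0 for r in rolls]

    for name, p in forecasts.items():
        avg = sum(brier(p, o) for o in outcomes) / len(outcomes)
        print(f"{name}: mean Brier score = {avg:.3f}")

    # On one lucky roll of a 2, forecaster_B scores better; over thousands of rolls,
    # forecaster_A's 1/6 is clearly the better-calibrated forecast.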
In general I question the value of assigning probabilities to election outcomes. This is especially true when you look at the probabilities a few months earlier - for instance, 538 had Clinton going from 49.9% on July 30, 2016 to 88.1% on October 18, 2016. Look at the probabilities they gave during the recent Democratic primaries, and they're also very bouncy. These probabilities lead people to believe that there's a much better understanding of the state of the race than what actually exists, to the point where I'd argue it edges up against pseudoscience.
Election forecasting is mostly about trying to quantify the current state of play based on imperfect signals. There is theoretically a "right answer" well before the final tally, but without being able to look inside people's heads en masse you can only guess at it. Still, this is conceptually different from forecasting the behavior of a system that's actually subject to randomness or [semi-]chaotic instability, where the given uncertainties will correspond at least partly with actual nondeterminism sitting between the current state of the system and the answer.
Therefore I think it's fair to say that the weight an election forecast assigns to the actual winner is a direct indicator of the accuracy of its model. We aren't trying to guess at how a set of dice are weighted, knowing they'll only be thrown once—we're trying to get as close as we can to knowing who is going to vote and who they are going to vote for, and (absent some large disaster or upheaval) a misforecast will be largely attributable to systematic errors in our methodology.
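A rough sketch of what that quantification looks like (my own illustration, not from the thread): if you score the final forecasts on the single realized outcome, the ordering under standard rules like log loss or the Brier score is just the ordering of the weight each model put on the actual winner. The Trump probabilities below are approximate, taken from the numbers mentioned above (Upshot ~15%, Wang ~7%) plus 538's widely reported final figure of roughly 29%; treat them as illustrative.

    import math

    # Approximate final P(Trump wins) from each model; illustrative values only.
    final_trump_prob = {"538": 0.29, "Upshot": 0.15, "Wang": 0.07}

    for model, p in final_trump_prob.items():
        log_loss = -math.log(p)      # penalty for the outcome that actually occurred
        brier = (p - 1.0) ** 2       # squared error against outcome = 1
        print(f"{model}: log loss = {log_loss:.2f}, Brier = {brier:.2f}")

    # Under either rule, the ranking simply tracks how much weight each model
    # placed on the eventual winner, which is the point being argued above.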