I don't agree, I think he speaks and writes with nuance and intelligence. What is your issue with his Bayesian statistical approach? He went to UofC and LSE, so he clearly had top-notch training.
His 2016 model, and he himself, were much more predictive of the Trump EC win; he repeatedly stated it was a possible outcome, something the vast majority of other forecasters completely missed.
Nate Silver was merely wrong, compared to everyone else, who was spectacularly wrong. I'm not sure how that counts as a ringing endorsement.
These polls also never factor in things like "social acceptability of admitting that one voted for an unpopular candidate" or "groupthink among media organizations which aligns to their side of the aisle." The entire polling fiasco should be interpreted as showing the limitations of quantitative data, as opposed to qualitative data.
> Our final forecast, issued early Tuesday evening, had Trump with a 29 percent chance of winning the Electoral College. By comparison, other models tracked by The New York Times put Trump's odds at: 15 percent, 8 percent, 2 percent and less than 1 percent. And betting markets put Trump's chances at just 18 percent at midnight on Tuesday, when Dixville Notch, New Hampshire, cast its votes.
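For what it's worth, you can make the "merely wrong vs. spectacularly wrong" distinction concrete with a quick Brier score on this one event. This is back-of-envelope only (a single outcome, not a proper evaluation over many forecasts), and the model labels other than FiveThirtyEight are placeholders, since the quote gives only the numbers:

```python
# Brier score = (forecast - outcome)^2; lower is better.
# Outcome = 1, since Trump did win the Electoral College.
# Labels besides FiveThirtyEight are placeholders; the source
# doesn't say which NYT-tracked model is which.
forecasts = {
    "FiveThirtyEight": 0.29,
    "NYT-tracked model 1": 0.15,
    "NYT-tracked model 2": 0.08,
    "NYT-tracked model 3": 0.02,
    "NYT-tracked model 4": 0.01,  # "less than 1 percent", rounded up
}
for name, p in forecasts.items():
    print(f"{name}: Brier = {(1 - p) ** 2:.3f}")
```

By that measure 538 scores about 0.50 while the 1-percent model scores about 0.98, i.e. close to the worst possible score of 1.0. Wrong either way, but not equally wrong.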
Consider this at the individual pollster level. Every four years you get a new batch of pundit-driven ideas about what's "really" driving the voters. Suppose you add a question that's meant to magically reveal voter preferences that are hidden by shame. What do the results of that question mean for the bottom line? Well, you're probably going to need multiple election cycles to find out. And by the time you do, the pundits have a new pile of bullshit for you to implement.
Now take it up a level. You've got a bunch of pollsters of varying predictive quality. Let them figure out what works best and include an estimate of their quality in how you process their results. I have no idea if pollster X's new question about cheese preferences is predictive, and honestly neither do they until a couple more cycles pass. All you can do is weight by past performance.
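Here's a minimal sketch of that idea. The pollster names and numbers are made up, and inverse-of-historical-error is just one naive weighting choice, not how 538 or anyone else actually does it:

```python
# Hypothetical pollsters and numbers, purely for illustration.
# Naive weighting: inverse of each pollster's historical average
# error in percentage points (smaller past error = larger weight).
past_error = {"Pollster A": 2.0, "Pollster B": 4.5, "Pollster C": 3.0}
latest = {"Pollster A": 48.0, "Pollster B": 51.0, "Pollster C": 49.5}

weights = {p: 1.0 / e for p, e in past_error.items()}
total = sum(weights.values())
estimate = sum(latest[p] * w for p, w in weights.items()) / total
print(f"quality-weighted estimate: {estimate:.1f}%")  # ~49.1%
```

The point is that the weighting only uses track record, not the pollsters' pet theories: whatever Pollster A is doing right shows up in their past error, whether or not their cheese question had anything to do with it.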
> social acceptability of admitting that one voted for an unpopular candidate
Not sure why you are getting downvotes. This phenomenon has been academically researched and documented under the name “preference falsification”, with many past examples. If anyone is interested, the 1995 book “Private Truths, Public Lies” covers this research.
No doubt they would try, but their prediction can at best be as good as predicting the preference without asking for it, which makes it profiling rather than polling. And, as OP suggests, they failed at this spectacularly.
IMO, LSE is a negative signal of someone's mathematical/statistical skill. I once interviewed someone who was doing a PhD in math at LSE and couldn't program or solve simple math/probability questions.
It's quite obvious why: if you want to study math in London, you go to Imperial. LSE probably isn't even a second choice.