"One promising approach, known as “dropout”, runs a problem through a neural network algorithm multiple times, each with slightly different settings. We can then ask to what extent the resulting answers agree -- a proxy for how uncertain the network is about its choice."
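The technique the quote describes is essentially Monte Carlo dropout: keep dropout active at inference, run several stochastic forward passes, and treat disagreement across passes as an uncertainty proxy. A minimal numpy sketch, with a toy untrained network and made-up weights standing in for a real model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network; weights are random stand-ins for a trained model.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 3))

def forward(x, drop_rate=0.5):
    """One stochastic forward pass: dropout stays ON at inference time."""
    h = np.maximum(x @ W1, 0.0)            # ReLU hidden layer
    mask = rng.random(h.shape) > drop_rate
    h = h * mask / (1.0 - drop_rate)       # inverted-dropout scaling
    logits = h @ W2
    e = np.exp(logits - logits.max())
    return e / e.sum()                     # softmax probabilities

x = rng.normal(size=4)
# "Run the problem through the network multiple times" and check agreement.
passes = np.stack([forward(x) for _ in range(100)])
mean_probs = passes.mean(axis=0)           # ensemble-style prediction
disagreement = passes.std(axis=0)          # per-class spread = uncertainty proxy

print("mean probs:", mean_probs)
print("per-class std:", disagreement)
```

High std across passes for the predicted class suggests the network is uncertain about that input; near-zero std suggests the prediction is stable under perturbation.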
I use probability-calibrated classification models all the time; calibration isn't that hard to do and works most of the time. The thing is, you have to actually do it, and most people don't.
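One common way to do this is Platt scaling: fit a sigmoid on held-out scores so the outputs behave like probabilities. A minimal numpy sketch with synthetic scores and labels (the data and hyperparameters here are illustrative, not from any particular model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical held-out set: raw (uncalibrated) scores and true binary labels.
scores = rng.normal(size=500)
labels = (scores + rng.normal(scale=1.5, size=500) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Platt scaling: fit p = sigmoid(a * score + b) by gradient descent on log loss.
a, b = 1.0, 0.0
for _ in range(2000):
    p = sigmoid(a * scores + b)
    grad = p - labels                      # d(log loss)/d(logit)
    a -= 0.01 * (grad * scores).mean()
    b -= 0.01 * grad.mean()

calibrated = sigmoid(a * scores + b)
# Sanity check: mean predicted probability should track the base rate.
print("base rate:", labels.mean(), "mean calibrated prob:", calibrated.mean())
```

In practice you'd fit this on a held-out calibration split (or use something like scikit-learn's CalibratedClassifierCV) rather than on the training scores.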
Isn't this just the good old consensus algorithm? And things get tricky once you consider a real-world deployment, or a distributed system in which some of the nodes trying to reach consensus might be malicious; cf.
https://news.ycombinator.com/item?id=39086775