
I wish that reputable reporting on this kind of topic would state the accuracy alongside the claims of what the system can reportedly do. I remember visiting a machine learning poster session where many of the posters reported results with accuracy as low as 30%.

If a program is able to predict this once or twice, it's not a miracle. If it's able to do so with 60% accuracy, I'd raise some eyebrows. But I'd say it's only a turning point when it can beat the false positive rates of human doctors. Without an accuracy score, this news is absolutely meaningless.



Accuracy is well understood to be a terrible, misleading metric and any reputable report would strive to avoid emphasizing its importance.

For those curious, their test-set AUC appears to be 0.77.
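
To ground that number: AUC is computed from the model's raw scores, and an AUC of 0.77 means a randomly chosen positive case outranks a randomly chosen negative case about 77% of the time. A minimal sketch with scikit-learn, using made-up labels and scores (not the paper's data):

    from sklearn.metrics import roc_auc_score

    # Hypothetical labels (1 = positive case) and model scores.
    y_true  = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
    y_score = [0.90, 0.50, 0.22, 0.10, 0.20, 0.40, 0.60, 0.15, 0.25, 0.35]

    # AUC = probability a random positive is scored above a random negative.
    print(roc_auc_score(y_true, y_score))  # ~0.76 for these invented numbers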


Maybe I’m misunderstanding, but a test that says nothing when it isn’t sure, yet gives you a true positive when it is highly confident, would have a lot of value.
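
In ML terms that's selective prediction: act only when the score clears a high bar, abstain otherwise. A toy sketch (the threshold, scores, and labels here are invented, not from the article):

    # Toy selective-prediction sketch: act only on high-confidence scores.
    THRESHOLD = 0.95  # invented cutoff; in practice tuned for a target precision

    def triage(score: float) -> str:
        if score >= THRESHOLD:
            return "flag for follow-up"  # high-confidence positive
        return "no opinion"              # abstain rather than guess

    for s in (0.12, 0.51, 0.97):
        print(f"{s:.2f} -> {triage(s)}")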


As an aside, this was also the only strategy that seemed to scale in intrusion detection.


Even with an accuracy score, it’s meaningless. Accuracy isn’t the right metric.


If you don't mind elaborating, please do; I at least am interested to hear more.


Consider it this way.

If your AI algo scanned the general population and said "no risk of esophageal cancer" 100% of the time, it would still be 99.5% accurate, because only about 0.5% of people actually have the disease.

It's highly accurate. And useless.

To get a good idea of how useful it is, you need to know its false negative rate and false positive rate as well as its accuracy.
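
To make that concrete, a toy calculation (the 0.5% prevalence matches the 99.5% figure above; the population size is arbitrary):

    # Toy population: 0.5% prevalence, classifier that always predicts negative.
    n = 200_000
    positives = int(n * 0.005)   # 1,000 actual cases
    negatives = n - positives    # 199,000 healthy

    tp, fp = 0, 0                # it never predicts positive
    tn, fn = negatives, positives

    accuracy = (tp + tn) / n
    fnr = fn / (fn + tp)         # share of actual cases it misses
    fpr = fp / (fp + tn)

    print(f"accuracy={accuracy:.3f}  FNR={fnr:.0%}  FPR={fpr:.0%}")
    # accuracy=0.995  FNR=100%  FPR=0%  -- highly "accurate" and useless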




