I wish that reputable reporting on this kind of topic would start putting accuracy figures next to the claims of what it can reportedly do. I remember visiting a machine learning poster session where many of the posters reported results with accuracy as low as 30%.
If a program is able to predict this once or twice, it's not a miracle. If it's able to do so with 60% accuracy, I'd raise some eyebrows. But I'd say it's only a turning point when it's able to beat the false positive rates of human doctors. Without an accuracy score, this news is absolutely meaningless.
Maybe I’m misunderstanding, but if you could have a test that doesn’t say anything if it isn’t sure, but if it is sure with very high confidence gives you a true positive, that has a lot of value.
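To make that concrete, here's a minimal toy sketch (made-up scores and labels, not real diagnostic data) of a classifier that abstains unless its confidence clears a high threshold. The useful number is then the precision among the calls it actually makes, not overall accuracy:

```python
# "Abstain unless confident": only emit a positive call when the score
# clears a high threshold; say nothing otherwise. Toy data, illustrative only.

THRESHOLD = 0.95  # hypothetical confidence cutoff

# (score, true_label) pairs -- fabricated for illustration
cases = [
    (0.99, 1), (0.97, 1), (0.96, 1), (0.90, 1),
    (0.40, 0), (0.55, 0), (0.96, 0), (0.10, 0),
]

answered = [(s, y) for s, y in cases if s >= THRESHOLD]
abstained = len(cases) - len(answered)

true_pos = sum(1 for s, y in answered if y == 1)
precision = true_pos / len(answered)  # correctness among confident calls only

print(f"answered {len(answered)}/{len(cases)}, abstained on {abstained}")
print(f"precision among confident calls: {precision:.2f}")
```

Even here the confident calls aren't perfect (one false positive slips past the threshold), which is exactly why comparing against human doctors' false positive rates, as the comment above suggests, is the right bar.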