Hope I'm remembering it correctly: a story from a couple of years ago about students from some Polish university (was it PUT?) working on such a system. It was meant to be only an assist to the doctor, because legally that was all it could ever be...
Anyway, the students loaded the system with rules from literature, interviews, etc., and the testing started. Soon it was clear that there was a mismatch between what the system suggested and what the doctors diagnosed. Not always, but more often than expected. The rules were updated and things got a bit better, but still weren't there. After some back and forth, the doctors were finally asked to talk aloud about what they were doing while examining the patients, and it became clear that the doctors were using additional criteria not mentioned in the interviews (even when directly asked about that stuff). And even then it was not enough to explain the differences! The doctors were simply using additional rules that they themselves were not fully aware they were using.
Sadly I don't know what happened to that project or any more details, but the implications of this story always make me think... even highly trained individuals using a very strict and well-defined decision process end up with results they can't fully explain! What if we could make this hidden expertise explicit, to better train future doctors, or just to check whether it's even valid?
I heard about it from people working on that project; it was an hour-long presentation, but this single anecdote is all I remember from it after 9(?) years...