I think he was describing the fact that they already operate within a decision framework they understand. Implicit in the results received from a particular test is the fact that some observation was made that suggested ordering that test in the first place.
If they get results from a test without the compelling observation, they're operating outside their well-established statistical framework, and they can't confidently evaluate how meaningful the test results are.
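That's essentially the base-rate point from Bayes' theorem: the positive predictive value of a test depends heavily on the pre-test probability, which is exactly what the "compelling observation" supplies. A minimal sketch of that, using hypothetical sensitivity and specificity numbers chosen only for illustration:

```python
def positive_predictive_value(prior, sensitivity, specificity):
    """P(condition | positive test), via Bayes' theorem."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

SENS, SPEC = 0.9, 0.95  # hypothetical test characteristics

# Test ordered because of a compelling observation: pre-test
# probability is high, so a positive result is strong evidence.
print(positive_predictive_value(prior=0.30, sensitivity=SENS, specificity=SPEC))  # ~0.89

# Same test run without that observation (e.g., broad screening):
# pre-test probability is low, so most positives are false positives.
print(positive_predictive_value(prior=0.01, sensitivity=SENS, specificity=SPEC))  # ~0.15
```

Same test, same result, wildly different meaning depending on why it was ordered.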
To me, this doesn't mean the extra information is bad or unhelpful; it's just that they aren't yet calibrated to use it properly.

I've heard this sentiment from medical professionals before, and this was my conclusion.
That makes sense. Explainability would be a big issue/requirement with any attempted automated decision framework. I don't know if I would want my doctor to just order tests based on the output of some app without understanding why they're ordering them.