I'm not quite sure what you're saying. Doctors don't observe probabilities or natural frequencies. Either way, odds are good that this is information someone is communicating to them, not the result of their personal experience.
If I may rephrase (and steelman) the parent's point:
Reality does not neatly format itself for easy plug-and-chug into your formulas. To appropriately respond to reality, you must be good at recognizing when there's a mapping to a well-tested formula, technique, or phenomenon.
Therefore, if you require problems to be phrased so that this mapping is already done for you, that means you're not good at that domain; blaming the phrasing of the problem is missing the point.
Indeed, 90% of the mental work lies in recognizing such isomorphisms, not in cranking through the algorithm once it's recognized, and this is a hard skill to teach. (Schools that teach how to attack word problems have to rely on crude word-matching to decide, e.g., whether you want to subtract vs. add vs. divide.)
> The prescription put forward is simple. Essentially, we should all be using natural frequencies to express and think about uncertain events. Conditional probabilities are used in the first of the following statements; natural frequencies in the second (both are quoted from the book):
> The probability that one of these women [asymptomatic, aged 40 to 50, from a particular region, participating in mammography screening] has breast cancer is 0.8 percent. If a woman has breast cancer, the probability is 90 percent that she will have a positive mammogram. If a woman does not have breast cancer, the probability is 7 percent that she will still have a positive mammogram.
> Imagine a woman who has a positive mammogram. What is the probability that she actually has breast cancer?
> Eight out of every 1,000 women have breast cancer. Of these 8 women with breast cancer, 7 will have a positive mammogram. Of the remaining 992 women who don't have breast cancer, some 70 will still have a positive mammogram. Imagine a sample of women who have positive mammograms in screening. How many of these women actually have breast cancer?
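For concreteness, here's the same arithmetic both framings describe, as a minimal Python sketch (the rates are the book's; the natural-frequency counts are rounded):

```python
# Bayes' rule with the book's numbers: P(cancer) = 0.008,
# P(positive | cancer) = 0.90, P(positive | no cancer) = 0.07.
prior, sensitivity, false_pos_rate = 0.008, 0.90, 0.07

p_positive = prior * sensitivity + (1 - prior) * false_pos_rate
print(f"P(cancer | positive) = {prior * sensitivity / p_positive:.1%}")  # 9.4%

# The natural-frequency framing is the same ratio, done with counts:
# of 1000 women, ~7 true positives vs ~70 false positives.
print(f"7 / (7 + 70) = {7 / (7 + 70):.1%}")  # 9.1%
```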
Doctors observe the result of the test and know the basic probabilities (say, 99% test accuracy and 1% of the population having the disease). The problem is that they [often] draw incorrect conclusions from those observations (99% test accuracy and you tested positive? Well then you likely - 99% - have the disease, right?).
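A quick sketch of that trap, assuming "99% accuracy" means both 99% sensitivity and 99% specificity (my reading of the hypothetical):

```python
def posterior(prior, sensitivity, specificity):
    """P(disease | positive test) via Bayes' rule."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# A "99% accurate" test at 1% prevalence: the intuitive answer is 99%,
# but a positive result is actually a coin flip.
print(posterior(prior=0.01, sensitivity=0.99, specificity=0.99))  # 0.5
```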
The question framed as 'your one patient tested positive' is more immediately relevant, I'd think. The correspondence with actual practice is obvious. The question framed as 'out of 10000 ...' could be remembered as a quirk of statistics, but not actually recalled when someone tests positive for cancer.
Of course, doctors do not randomly assign tests to patients. Their prior that a patient has a disease is a lot higher than the background frequency of it occurring.
Getting them to estimate their prior would be interesting.
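For illustration, here's how much the answer moves with that estimated prior, reusing the same hypothetical 99%/99% test (all numbers made up for the sketch):

```python
def posterior(prior, sensitivity=0.99, specificity=0.99):
    """P(disease | positive test) via Bayes' rule."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# The prior stands in for the doctor's clinical suspicion
# rather than the raw background frequency.
for prior in (0.001, 0.01, 0.1, 0.3):
    print(f"prior {prior:6.1%} -> posterior {posterior(prior):5.1%}")
# prior   0.1% -> posterior  9.0%
# prior   1.0% -> posterior 50.0%
# prior  10.0% -> posterior 91.7%
# prior  30.0% -> posterior 97.7%
```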
Except when they do. For example, when 100% of men above a certain age are screened for prostate cancer, or 100% of women above a certain age are screened for breast cancer. Both cases spawned major public health campaigns to encourage screening, followed years later by recommendations AGAINST 100% screening, based on the high rate of false positives and the unnecessary treatment they trigger.
Other cases that come to mind:
-- doctors who offer "full body scans" as a part of an executive physical; you're pretty much guaranteed to turn up something that is 2 sigma away from the population norm, somewhere in the body, on such a scan
-- spinal x-rays for back pain. Doctors almost always find something abnormal, and use that to justify the back pain and treat aggressively. But we don't really have a good prior: if we x-rayed 1000 people off the street, how often would we find similar abnormalities?
It depends. Some tests are applied without prior suspicion, so you are dealing with exactly the background frequency. With others, the disease in question is so rare that false positives will dominate even if the doctor has serious suspicions. The second case is the reason for the "think horses, not zebras" aphorism.
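To put rough numbers on the zebra case (all hypothetical: a 1-in-100,000 disease and the same 99%/99% test as above):

```python
def posterior(prior, sensitivity=0.99, specificity=0.99):
    """P(disease | positive test) via Bayes' rule."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Even granting the doctor a suspicion 100x the base rate,
# false positives still swamp the true ones.
print(f"{posterior(1 / 100_000):.2%}")  # 0.10%
print(f"{posterior(1 / 1_000):.2%}")    # 9.02%
```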
Doing a few explicit Bayesian calculations can help one internalize just how important that often-forgotten P(A) factor is (the prior in P(A|B) = P(B|A) P(A) / P(B)).
The same applies, by the way, to antiterrorism security theater - many support it just because they have no intuition (or idea) about base rates. (With a hypothetical detector that is 99% accurate and a base rate of one terrorist per million travelers, you get on the order of 10,000 false alarms for every real hit.)
I think GP's point is that in the case of interacting with an individual patient, the Bayesian conception of probability as a quantification of degree of belief is actually more intuitive than the frequentist conception of probability as a relative frequency of outcomes under repeated hypothetical experimentation.