Why Doctors Reject Tools That Make Their Jobs Easier (scientificamerican.com)
199 points by dsr12 on Oct 15, 2018 | 126 comments



An alternative headline is "Tool Makers Make Tools That Don't Help Doctors". I know a ton of colleagues who flinch every time someone mentions that this'll just hook into the EMR system, because they know everything will be broken and tedious for years before another vendor comes along.

At a conference I attended recently, I'm nigh positive "Decision Support Tool" was a dirty word.


I agree. This article is off the mark. On the contrary, doctors want tools that will assist them in providing better patient care outcomes. In all my career, I've never met an anesthesiologist, trauma surgeon, or hospitalist who has ever expressed a concern that technology will make them obsolete. If anything, almost universally these individuals are overworked and are thrilled to hear about anything that might save them a little time or make their job easier.

That being said, doctors do not want tools that are clunky, tools that force them to change their existing workflows, tools that run slow, can't pull in relevant data in real-time, rely on fragile HL7 interfaces, etc. Doctors want tools that work for them. Doctors do not want to be fighting or struggling with tools that are supposed to help them. While that may seem obvious, it's a very difficult bar for most software makers to meet -- particularly in the inpatient and ED space.

I can understand the author's perspective -- she's a new resident at Yale New Haven Hospital and previously ran a medical software start-up that closed up shop relatively quickly. It would be convenient to rationalize the failure of her start-up as being due to doctors' fear of new tech. While it may have been a fear of new tech that prevented her start-up from succeeding, it was not a fear of being made obsolete -- it was a fear that her product wouldn't do the job well.


> doctors do not want tools that are clunky

And yet every EMR system I've ever seen is comically bad. Why the medical providers haven't revolted over being forced to use those things is beyond me.


They really are comically bad (Cerner, Epic, Meditech, all of them), but they are effective data silos, which is a step up from having that data siloed in filing cabinets.

People outside healthcare really don't understand the sheer amount of time your physician spends trawling through the medical record in the regular course of doing his or her job. You can think of the patient chart as a shared "My Documents" folder on your computer. In modern multi-disciplinary care, you have many different parties taking care of the same patient and working in shifts. In order to get up to speed with what's happening for a particular patient, you need to open up each of the recent files in "My Documents" for that patient and review it. Lab results, progress notes, schedules, etc. Then repeat a couple dozen times for all the other patients you're caring for that day. Endless clicking. Constantly flipping between windows. Burn out.

That's not too much of an exaggeration of what the real EMR experience is like today.


These experiences also exacerbate the perception I referred to before, which can basically be summarized as "every previous attempt at making this better ended up being completely terrible". A whole slew of people want to improve things, but a lot of the time they don't take that responsibility seriously enough -- or know, fully, what that entails -- and when they bail, that leaves the next would-be "disruptor" at a disadvantage.


Right, and it turns out that the devil you know is infinitely more attractive than the next half-finished buggy piece of shit that someone assures you will revolutionise your workflow.


My doctor's office just switched to the latest MyChart system and the app on Android has a pretty modern UX, decent usability and shows all my upcoming appointments, lab results, bill statements, etc. No complaints on my end; having access to my records right after my physician enters them into their computer is nice compared to requesting packets of physical paper and then having to store them securely away from prying eyes...


MyChart is really slick in my opinion. Very usable Android interface and everything you mentioned. I was definitely impressed with the patient experience. Sometimes with blood tests it can be a bit of information overload, but that is why you have a doctor explain what matters. It's also nice to compare against previous tests and see things improve in a simple format.


Because it’s the same situation as with any enterprise software - the buyers are not the users, and they are evaluated on how well they did their job based on a whole host of criteria that have absolutely nothing to do with satisfied users.


Buyers? I want the makers trained as users.

Everything I've optimized really well came after a week or so of using the old, bad solution.

Programs made for me, to automate some part of my job? Invariably break some part of my workflow that promptly gets labeled an edge case.


Same problem. Makers are not paid to think like users. Makers are paid to satisfy whoever's actually paying.


If makers don't have a fundamental understanding of how the users will utilize it, the makers are making a bad product. Every developer should do customer service and a sales call on a regular basis.


> If makers don't have a fundamental understanding of how the users will utilize it, the makers are making a bad product.

Well, exactly. That's half of the reason why enterprise, medical, restaurant and other similar software generally sucks.

> Every developer should do customer service and a sales call on a regular basis.

In B2C, sure. In B2B for the examples I mentioned above, a "sales call" is where the problem starts. On such a call, you'll get the opinions of people paying for software, not of those using it.


In either case the developer should get first-hand experience of the pain points. Developers tend to be isolated and build things that don't really fit the actual need. Ask the potential customer and they will come. (Maybe, but at least the developer will have a better idea of what to build.)


It's really strange how much punishment doctors seem to receive and how much they're willing to take. The hours alone would be laughable in any other industry.


The hours are, to some extent, imposed by the various medical governing bodies, either directly or by artificially restricting the supply of doctors. These bodies are composed largely of doctors, so the problem is to some extent self-imposed. In the UK, doctors are not governed by the working time directive because the UK negotiated an opt-out, so hospital doctors work ridiculous hours. As far as I can tell this problem does not exist to the same degree elsewhere in Europe.


>In the UK, doctors are not governed by the working time directive because the UK negotiated an opt-out, so hospital doctors work ridiculous hours.

In the UK, doctors are treated much worse than they are in the USA.


In Poland it is also very common for doctors to work crazy hours.


Most consumer software is tragically bad. For example, the automatic cat feeder I have has probably the second-worst user interface I've ever seen.

The worst was a thermostat.

Heaven help you if you ever lose the manual.


Because of two main things.

First: you're dealing with humans, so you have to leave a lot of things possible, which is how you easily get a shitty UI.

Second: everything is a legal horror. Certification, certification, certification.

PS: and third. Most medical software is not chosen by its users but by management.


It's my belief that it's impossible to build the killer EMR, or at least that nobody has come close.

There are many reasons for it, just as making software to manage restaurants has proven an impossible task. The use cases are operationally extremely different in every instance.


I think the common thing between restaurant POS systems and EMR is that they aren't optimized for the user, they're optimized for management.


One of the problems I see here, and would love to find a way to solve, is that software is best developed iteratively. I see a lot of software that has some cool ideas; it just needs some tweaks to get rid of ridiculously annoying flaws, or needs to put some more control in the user's hands.

Unfortunately, and inevitably, a little tweak in medical software may as well be a tweak on a satellite in space.

Like I said, changes need to be prevented from making the software bad, but the difficulty of making changes also makes the software bad. I don't know what the solution is, but I would like to see this aspect addressed more explicitly.


Safety first, full stop, no exceptions.

Until correctness can be done easily in accessible programming languages with a decent developer pool, iteration in this space, along with other life-critical spaces such as trains and planes (and wouldn't automobiles be nice too?), is sadly a long, manual process compared to web development.

Edit: to expand a bit on correctness, think Coq but with the ease of Java or TypeScript. Even Rust's safety isn't yet formally proven; that's not a knock on Rust, but a demonstration of just how hard it is to do.
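
As a toy illustration of what "machine-checked correctness" means in practice, here's a hypothetical dose-clamping example in Lean 4 (made up for this comment, nothing like real medical software):

    -- Hypothetical sketch: a clamped dose provably never exceeds the maximum.
    def clampDose (maxDose dose : Nat) : Nat :=
      if dose ≤ maxDose then dose else maxDose

    -- The proof is checked by the compiler; no test suite needed for this one property.
    theorem clamp_le_max (maxDose dose : Nat) :
        clampDose maxDose dose ≤ maxDose := by
      unfold clampDose
      split
      · assumption
      · exact Nat.le_refl maxDose

Getting even a guarantee that trivial requires a very different skill set from day-to-day application programming, which is exactly the gap I mean.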


Well that's totally true, but even then I have no idea what the legal mechanism for requiring a formally verified language in the context of a medical device would look like, or even for decision support software like the grandparent was discussing.

Even if such a thing existed, it would require a very rich set of constraints on the program to be able to encode something like "won't make the wrong decision when a patient has a pulmonary embolism and these three other conditions" and I think that's what doctors would want.


Formal verification is a barrier to quick iteration (parent comment) not just for medical software, but anything "mission critical" where the mission is "worst case scenario, don't kill someone". Hence my point about train switching and avionics. Legal mechanisms are beside the point, I think.

As for rich constraints, that does bring up another point- how do you quickly iterate on a system full of bad data or systems of incompatible data? To get Watson to a degree of certainty, so much data manipulation and review happened by hand that it was a wasted effort.

This isn't to say that I'm in love with what I've seen of the software my doctors use (or the puckering feeling seeing how outdated the OS tends to be). I'd be a whole lot less happy, though, if their software had the reliability of say, imgur or reddit.


> Safety first, full stop, no exceptions.

And here you have cutting-edge hospitals running 2012 versions of modern software, and paying millions of dollars for the privilege.

The direction has to be exactly the opposite, by giving the patient increasingly more control over his own health.


Having seen several talks from some patient advocates recently, I'm skeptical that's a good idea.


I think health services are among the things most misunderstood by the general population, above everything else.


Patients are the worst at their own health, and often deluded about their ability to manage it. Gwyneth Paltrow and homeopaths don’t rake in money because people make great decisions. When you “empower” patients past their competence you get Steve Jobs in the ground. If he had just been another schmuck, he’d probably still be alive.


Health services are in service of the patients, not of your lay opinion of what health should be and how it should be imparted.


And being in service of the patient often requires not doing what the patient wants, because the patient is not a medical professional and doesn't know shit.


Most people taking homeopathic medicine want their symptoms to get better. It’s not as though they have some different definition of a “good health outcome,” they’re just misled about the effects of what they’re buying.


Placebos work. Sad but true.


   Coq but with the ease of 
   Java or Typescript
Don't hold your breath. There is currently no technology on the horizon that holds a promise to bring us even close to what you ask for.

Note also that Coq/Isabelle/Agda and their successors will still suffer from a variant of the oracle problem: where should the specifications come from?

Finally, note that virtually no aviation software is formally verified with interactive theorem proving as of 2018. The extreme levels of reliability in this space have always been achieved with other software engineering methods, including rigorous testing.


I was thinking more that that would be a (or the) barrier to rapid iteration in medical software, not that it is likely to be achievable. Given the rigorous testing required in its place, we don't get web-dev-style agile iteration in medical software.


I don't think safety issues are preventing anything here. You don't have to iterate on live systems or real patients. You can test and develop on mockups and simulated scenarios taken straight from real life. But you need to be able to get regular feedback from actual end-users.


This is not unique to software. Physical objects also benefit from iterations in development. That doesn't mean you have to update things you've already sold. The iteration happens when people buy the next version. This used to be how software was sold too.


I believe GP's point is that you won't even make the first version of the software good if you aren't allowed to iterate on prototypes, with feedback from actual end-users.


The first version doesn't have to be good. It just has to be plausibly better than not having that software or device.


Sure. But most of the time, the first version is worse than having no software, by virtue of interfering with the existing workflow.

(This is what you get when you design for those who pay for your software, and not for those who'll use it.)


Also, for seasoned professionals you have to factor in malpractice and insurance coverage as contributing to the inertia.


I've found these are mostly given as cover. Those are important considerations, but: A) most CDS is considered educational, and therefore is not supposed to replace an expert's opinion; B) insurance coverage is sorta BS because most health systems are moving away from pure fee-for-service revenue models. Besides, I feel like you should maybe do your job as a doctor and ensure the best care for your patients rather than hyper-optimizing your own reimbursement.


Nevertheless there is concern about any data that goes into an EMR. There is always concern about what happens in the worst case scenario where something very bad happens to a patient and their whole record is examined in a courtroom after the fact. If, say, a machine learning model predicts that thing X will happen to the patient and it wasn't acted on, a hypothetical lawsuit may go worse for the provider than if the predicted risk of thing X weren't part of the record.

This doesn't mean that the doctors bury their heads in the sand, but it is a part of the reason providers are resistant to adopting these tools. Relying on their own knowledge and intuition may not be better but it won't hurt them as evidence in a hypothetical malpractice suit. The decision support tool has to be sufficiently better to outweigh this.

I hold no opinion on the validity of this line of thinking, but I can say that it is part of the calculus, at least for some.


Some very good points. In the practice of medicine, and in particular in the litigious US, you can't ask questions you won't act on. You are liable for the responses, so you might as well not ask.

Stupid af.


And of course "just hook into the EMR" is fraught with problems as it is, what with HL7 quirks and proprietary APIs and homegrown interfaces and all that jazz.

Even "better" when there's no concept of "the EMR". My foray into healthcare IT was at a hospital/clinic district that used no less than 4 EMR systems (one new and one legacy for the main two hospitals, plus one new and one legacy for all the satellite clinics). That's not including the specialized systems for ER, Oncology, Obstetrics/Gynecology...

All for the pursuit of Meaningless Use.


Yes, it's a mess. And it's because none of them work very well for the specific use cases. For all the things said about it, EPIC is still the best out there.


Indeed it is, though that ain't really saying much. EPIC was one of the ones in the "new" category for us (for the satellite clinics specifically; the actual hospitals instead went with CPSI - aka "COBOL, Perl, and Server Issues"). It was definitely the least bad of all the EMRs and EMR-like systems we used.


The article talks about thermometers being rejected by some doctors who thought fevers were more qualitative... they're not wrong. Different people have different average body temperatures, so a fever for one person might not be a fever for another person [1].

Obviously it's not smart for a doctor to reject the idea of knowing internal body temperature, at least as a data point, but those doctors weren't COMPLETELY off base.

[1] https://www.webmd.com/first-aid/normal-body-temperature


Wouldn’t it be useful to have a tool like a thermometer to establish a baseline value?


if they've already got the fever then it's a bit late for that


Seems like it'd make sense to go into your doctor at some point when you're perfectly healthy and let them measure a bunch of baseline values to put in your medical file.


It's like my bird's vet said once: If you start measuring, you're gonna find stuff. The more you dig, the more you find. Does it mean your bird is sick? Eh look at him, does he look like a sick bird?

My girlfriend can be very ... concerned. We spent $1000 in vet bills before a real avian expert was like "Guys, he's fine. Look how happy he is. He just loves the attention."

Apparently one of his tests for vet interns is to tell them "This bird is sick. Figure it out". Then he gives them a perfectly healthy bird. Their bias always makes them find something. Caused by their tests half the time because they're stressing the bird.


> Apparently one of his tests for vet interns is to tell them "This bird is sick. Figure it out". Then he gives them a perfectly healthy bird. Their bias always makes them find something.

Seems like the "bias" in that case is likely due to an authority figure intentionally misleading them in the context of a "test", where one naturally assumes they're not being deceived by the very premise of said test? If an actual bird owner came in as a client and said the same (or if the test explicitly told them to assume this is the situation), the interns might very well still realize the client is wrong.


I think the lesson he’s teaching is that examination should be holistic, not just focus on specific tests. Different tests are appropriate in different situations and they often give conflicting results.

I’m no vet but if you have to dig hard for signs of illness, the patient likely isn’t sick. Our bird had increased uric acid. Could be anything. It was a little high.

If he was actually sick we wouldn’t be guessing whether the number is high. It’d be 10x. Waaay out of bounds. A clear signal.

It’s like in product design. Your conversion rate improved 0.05% after you ran an experiment for 2 days. Is that a signal or noise? Eh, probably just noise. Observe longer.

The problem with comfortably saying the client is wrong is that they’re the authority figure. They have years of data, you have 30 minutes with the patient.

And yes I have had vets casually chat with me for 2 hours when they did spot something suspicious in the bird’s behavior and wanted to observe longer to see if it’s a pattern.


I'm not a vet, but if I were and someone said "This bird is sick." I hope the first question I'd ask is "Why do you think so?" Not to doubt the person but to figure out what the presenting symptoms were.


It is not some abstract, general someone. It is, very concretely, a more experienced professional and teacher whose judgement you trust more than your own. And no, unless you go well out of your way to be approachable, or if they have some reason to be afraid of you, no matter how small, they will not ask more. Because most interns know they are beginners and don't know yet.


"Diagnostic stewardship" is currently a hot topic in infection control and antimicrobial resistance work for exactly this reason. If we look hard enough, we'll find something to give you antibiotics for, even if you don't need them.


I've heard similar sentiments around looking at API monitoring graphs when you're not investigating a specific problem. Weird stuff happens all the time, and most of the time it's no big deal.


I have similar sentiments towards the health of humans.


Body temperature for an individual patient can vary significantly based on time of day, activity level, diet, menstrual cycle, etc. A single point measurement taken once every two years doesn't really give a useful baseline and might even be misleading.


Within the parent comment and beyond, it seems people are putting emphasis on a change in temperature but do not state a deviation from the average 98.2°. Even if we did have it, though, I don’t think having a longitudinal look at someone’s vital signs (VS) would be misleading in any sense. It establishes a baseline history of the person, and deviations will be put in context with the additional charting notes from that visit.

While I do agree that a single point is not enough information, nobody is looking at body temperature alone when diagnosing a patient— unless they’re reaching internal temperatures below 95° or above ~100.9° for hypothermia or hyperthermia respectively. With each diagnosis there are /x/ number of signs and symptoms that go along with it, so it is crucial that we gather as much information on all the VS as we can, no matter how minuscule the data may seem.


That might be why they take my temperature whenever I go for a regular checkup, or non-cold/flu/fever related concerns?

Procedures aren't the same between clinics, though.


The industry is annoyingly conservative, but whizz-kids pretending to have a better opinion about how to cure <insert disease> based on isolated, mechanistic, non-standard observations of a few markers are dangerous.


Yeah - I work a lot in healthcare associated infections, and a good 80% of the stuff I read about "We're from Silicon Valley and we're here to help..." just ends with me going "They're going to kill people."


I've got second-hand experience with this. Though not a lethal situation (thank the heavens!), the mindset was dead on. A grad student in the BioEng lab was doing her PhD on a fall-detection sensor for the elderly. Kinda like what the new Apple Watch is supposed to do, but, you know, something that actually works. The lab was working with a device manufacturer that was looking to branch into medical devices as a new market and was partnered with the lab to do the initial PoC. Well, one day she came into the lab to discover that the company had shut down and remote-bricked the devices overnight. They were still in early testing, so no people were harmed, which was really lucky. But if they had been under clinical trial already, there was a real possibility that real people out there would have been harmed severely (a 'bad' geriatric fall can become lethal).

All emails to the company were unanswered. All phone calls went to voicemail. Nothing could be done. She was 5 years into her PhD on this and had to restart the whole project (she wasn't much of a coder, more a clinical trials person).

In medicine, the SV mindset of 'move fast, break things' means that granny is breaking her hip and isn't going to last much longer. You CANNOT 'break things' in medicine. However, if you do get though clinical trials, hoooo baby! You essentially have a monopoly and will be rolling in it. MedTech investing is a lot like SV's VC investing, just a lot more careful.


What most people don't realize is that, depending on the country, those press-for-emergency buttons for the elderly are not considered an emergency call. Including the advertisements you see on buses targeting people fearing a heart attack. Meaning, it is not guaranteed that the message will go through. It is an app like any other, with the same reliability. That was fortunately the point where my former boss declined a contract.


> Kinda like what the new Apple watch is supposed to do, but, you know, something that actually works.

Does the Apple watch not work?

> But if they had been under clinical trial already, there was a real possibility that real people out there would have been harmed severely (a 'bad' geriatric fall can become lethal).

How would that be different if they didn't have a device? They would have died anyway, right?


> How would that be different if they didn't have a device? They would have died anyway, right?

Maybe. If they think that the device will help them in their hour of need, then they may be taking chances they normally would not have. To have something remote brick and then not tell the people that it was bricked is a gigantic ethical violation.

> Does the Apple watch not work?

Not to a medically relevant level:

"Apple Watch cannot detect all falls. The more physically active you are, the more likely you are to trigger Fall Detection due to high impact activity that can appear to be a fall.”

https://support.apple.com/en-us/HT208944


> Apple Watch cannot detect all falls…

If we pick apart disclaimers, I think we will find that nothing works. /gentle_sarcasm


I think this is something that isn't as present in our profession as it should be. If you are an electrical engineer, it is clear and present that your products might kill someone if you screw up. The same goes for a mechanical engineer. And it will be your responsibility. If you screw up and should have known better, you might even go to jail for negligence.

We on the other hand have the Therac-25.

We claim to be on the level of the engineering fields. We should also act that way.


Agreed. The Order of the Engineer is a good place to start and bringing that tradition down into comp-eng/sci would be a nice start.

https://en.wikipedia.org/wiki/Order_of_the_Engineer


Interesting, I had not heard of that. What should we computer programmers (flexible, always trying new things, without enough tried-and-true practices) think of real engineers, and what should they think of us?


Real engineers should probably look down on us from a great height.



Therac-25 was thirty years ago, and it is still the go-to example. That made me wonder. If you compare, for example, https://en.wikipedia.org/wiki/List_of_bridge_failures#2000–p... or https://en.wikipedia.org/wiki/List_of_structural_failures_an... with https://en.wikipedia.org/wiki/List_of_software_bugs, software doesn’t look that deadly.


People generally don't attribute failures to software bugs if there is anything else that contributes to failure. The list in particular is missing the loads of accidents caused by bad UI; incidents of "I was just following the GPS's/autopilot's directions" that result in death are missing (such as the KAL 007 airliner incident).


The Therac-25 is the go-to example because it is the best documented and least-contested case. Also, software usually isn't in ultimate control, most systems have additional (mechanical) safety controls.

Mars Climate Orbiter, 1999: http://articles.latimes.com/1999/oct/01/news/mn-17288. Undetected metric conversion error (though unclear if this was a manual process or software).

Toyota unintended acceleration, 2010: https://www.edmunds.com/car-safety/for-toyota-owners-uninten.... Toyota maintains that it was not caused by software error.

Now let's list some cases where the software did have ultimate control:

Schiaparelli EDM lander, 2016: https://newatlas.com/esa-schiaparelli-mars-crash-inquiry/496.... Faulty decision-making in the automated descent system due to input saturation.

Tesla auto-pilot crash, 2016: https://www.theguardian.com/technology/2016/jun/30/tesla-aut.... Faulty decision-making due to incorrect image analysis. In this case, the additional safety controls (the driver himself) failed too.

Uber car crash, 2018: https://money.cnn.com/2018/03/20/news/companies/self-driving.... Faulty decision-making due to incorrect image analysis, though the investigation is still ongoing, so no definite cause determined (AFAIK).


Software doesn't have many opportunities to be deadly, and when said opportunities do exist, this is frequently in regulated fields where safeties are mandated by law (interlocks on a laser or a welding robot arm etc).


I'd imagine a lot of places where software is part of a safety critical system also come with physical fail safes. An example being burst plates on pressure vessels, if the software that controls pressurization spasms and causes an over pressure the physical fail safe will prevent a catastrophic accident.


Software engineering is more like witch-doctoring than civil engineering. It is not remotely price-efficient to prove the correctness of programs.

If you expect your doctor's software to be perfect, be prepared to pay for 2 software engineers salaries per doctor.


I agree that we act recklessly sometimes, but I hope I'm not alone among software engineers in believing that what this finding from the committee describes was just plain stupidity that no engineer worth his or her salt should have committed:

"AECL had never tested the Therac-25 with the combination of software and hardware until it was assembled at the hospital."


> From the thermometer’s invention onward, physicians have feared—incorrectly—that new technology would make their jobs obsolete

Is that true, or are doctors incredibly risk-averse since they have seen what can happen when something goes wrong, and that is death?


I've also read that physicians can be overconfident in their own abilities:

https://www.ncbi.nlm.nih.gov/pubmed/18440350

> … argue that physicians in general underappreciate the likelihood that their diagnoses are wrong and that this tendency to overconfidence…

More on the combination of algorithms and humans:

https://en.wikipedia.org/wiki/Paul_E._Meehl#Clinical_versus_...

> Meta-analyses comparing clinical and mechanical prediction efficiency have supported Meehl's (1954) conclusion that mechanical data combination and prediction outperforms clinical combination and prediction.


As someone who has had several CT scans (and paid for CT scans), the idea that a CT scan is better than a physician feeling your abdomen is absolutely ridiculous. Pumping contrast agents into everyone who has pain is a very bad idea. Moreover, enforcing decisions by algorithmic rules is problematic, especially considering who might be making those decisions.


Can't resist providing at least some response here.

1) Having CT scans, and paying for them, does not really objectively lead to the conclusion that followed. Let's feel that tumor inside your lungs.

2) CT scans can also work without contrast agents. In addition, they typically do not register everybody for a CT scan nor pump them full of contrast agents. There is a process. In the US some hospitals are trigger-happy, as they get paid per case; blame the system, not the technology. If anything, an algorithm will fix that nasty human behaviour.

3) Having biased humans enforce decisions is not always a guarantee for success either. Every human sees only a fraction of the total amount of cases an algorithm processes within seconds. There are several fields where AI already outperforms elaborate test panels of MDs. Though it is hard to introduce these algorithms for the same reasons Tesla is having issues. Who is responsible when a mistake is made?

3.1) You would be amazed how often MDs do not agree when the same problem is put in front of them. 50/50 and 60/40 splits are very common. AI is typically more in the 80/20 or 90/10 range, which is a huge improvement.

Now, all of this does not mean we do not need MDs anymore. An important aspect often neglected due to time constraints is the interaction of the patient with the doctor. With algorithms saving time, more of it could go to the patient. That's a win.


Plus radiation. Every CT scan increases your odds of cancer.

Also, mammography and even colonoscopies have been proved, for most of the population, to do more harm than good. Cochrane is full of meta-studies about it.

The medical industry is very shady.


> Every CT scan increases your odds of cancer.

There is no evidence for that statement. More specifically, there is no evidence that a single radiation dose below 100mSv is harmful at all, but plenty of evidence (Taiwanese radioactive apartment buildings, nuclear navy worker study) that it isn't. Muller made it up for political reasons.


There is evidence:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3660619/

> Title: Cancer risk in 680 000 people exposed to computed tomography scans in childhood or adolescence: data linkage study of 11 million Australians

> Conclusions: The increased incidence of cancer after CT scan exposure in this cohort was mostly due to irradiation. Because the cancer excess was still continuing at the end of follow-up, the eventual lifetime risk from CT scans cannot yet be determined. Radiation doses from contemporary CT scans are likely to be lower than those in 1985-2005, but some increase in cancer risk is still likely from current scans. Future CT scans should be limited to situations where there is a definite clinical indication, with every scan optimised to provide a diagnostic CT image at the lowest possible radiation dose.

And about "a single radiation dose": as soon as you get a CT, the chances that you will have only a single one in your life are greatly reduced, because you just had that one. So it is still better if the count remains at zero; otherwise your precondition is easily invalidated.


The problem with that study is that "people who take a CT scan" is not exactly an unbiased sample of the general population.

Now compare this to

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2477708/

> only a single one in your life

"A single dose" as in "a discrete event". Another single dose the next month is (probably) harmless again. Cells react to radiation with repair mechanisms, and once that activity subsides, the event is over.

Radiation exposure isn't linearly cumulative. The argument that it is was made before we even knew the structure of DNA! Today, we know better.


I also recommend at least the "Conclusion" section of this document, selected as an example, not as the one definitive document: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3611719/ It is a good read overall too. You make it sound as if it does not matter. Apparently that is not the general medical opinion.

I also don't see what the problem with the selection of people is supposed to be. Those selected are more likely to not be able to repair DNA damage? I think this particular selection makes no difference for the purpose.

Overall, OP said "there is no evidence" and it seems that yes, there is. What you think of that evidence is not the question, OP had said there isn't any. When I look at the actual recommendations it seems that most medical people don't think so, after all, the recommendation still is to limit the radiation exposure, not just for the frequently exposed (radiation workers) but also for those one-time patients.

Even on a per-event basis reducing the amounts of radiation was and is a major design goal for the devices. Does not look like those who are involved in all of this think there is no problem.


> Overall, OP said "there is no evidence" and it seems that yes, there is.

This is evidence for a correlation between the number of CT scans and cancer incidence. To jump to the conclusion that the cancer is caused by the radiation from the CT requires a leap of faith.

The funny thing is, if an epidemiological study shows that low dose ionizing radiation is beneficial (radioactive apartment buildings, nuclear navy workers), it's dismissed by a completely ad-hoc "healthy worker effect" or "healthy student effect". But in a study of people who received a CT scan, where you should expect a "sick people effect" (healthy people don't get CT scans), you "don't see a problem".


This study is solely focused on childhood and adolescence CT scans. Any radiation dosage you take during childhood has magnified effects.


Thanks for pointing that out, but, okay, children and adolescents count too?

OP responded to a specific comment, I responded to OPs comment. I don't understand your point in that given context. I'd think showing one study - I didn't bother to look any further - that shows a risk was sufficient.

Since even adults have plenty of still dividing cells left I see it as reasonable to assume that adults are at risk too, even if that will likely be lower.

I also recommend at least the "Conclusion" section of this document, selected as an example, not as the one definitive document: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3611719/ It is a good read overall too.


I don't remember the exact number, but when it comes to cancer risk increase, getting a regular 100 mSv x-ray as a kid counts as something like 10 x-rays of an adult done one after another (i.e. a single dose). That's because any mutations that happen will stay with you for the rest of your life, plus propagate to a lot more cells in total as your body is still growing. So while it is an important topic for some specialties, it would be wrong to make broad statements based on this.


There is no "broad statement": There only is a specific response to a specific comment. I responded to "there is no evidence" - I don't have to show existence of 100% knowledge in the response, only that "no (zero) evidence" is not true. That is not a "broad statement", especially since I myself did not make one, only pointed to studies for the subject. Those studies don't conclude with "no evidence found".


>>Plus radiation. Every CT scan increases your odds of cancer.

If the doctor makes money from your CT scan you are absolutely right to question the need. Conflict of interest and all. Sure, you increase the chances of cancer, but that has to be weighed against what can happen if you don't do the CT scan.


You also need to balance what happens if you do have the scan too.

Over-testing leads to over-diagnosis, and that can be harmful.


Very few modern insurance plans pay per procedure and that will be done away with entirely as time goes on. Typically physicians are paid a flat rate per patient or a flat rate per diagnosis with a complexity multiplier. This creates an incentive to NOT perform imaging unless it's necessary.


In what country? The US is still mostly pay-per-procedure. There are codes for procedures and for diagnoses and they both get factored into the bill.


Take a look at DRGs. In the US they were first used for payments in the Medicare system but they have expanded outside that program over the years:

https://en.wikipedia.org/wiki/Diagnosis-related_group


Part of the issue is that the doctor is on the hook if the tool fouls up.

Consequently, having an extremely skeptical viewpoint on tools is perfectly rational.

I can also tell you from discussions with doctors that one of the problems is that the intersection of GUI programmer, competent engineer, and medical domain knowledge is either a null set or a single person. (For example: EEG analysis seems to be a natural fit for ML/AI--enormous amounts of data with events only sporadically scattered in it--yet there is nobody capable of handling the intersection of talents required.)


It reminds me of Moneyball; in it, the author points out that the statisticians were frustrated by their inability to get traction within the MLB, but their pitch essentially boiled down to "you guys don't listen to statistics, you should listen to these new ones we just developed," which left unsaid that the statisticians made up new quantifiers because the old ones were ineffective. The MLB had the lived experience of those statistics being ineffective, so they knew Bill James et al were right about that, but the idea that the answer was more numbers that didn't make a lot of intuitive sense was a hard sell.

I would also add that my perception from working at a major east coast hospital has actually been that hospital IT clamps down on new tools more than anyone because of HIPAA requirements, etc, that the doctors ignore/don't care about as much as they should. It's a complicated, layered system.


> intersection of GUI programmer, competent engineer, and medical domain knowledge is either a null set or a single person

Ah interesting.

My neighbor's son is a doctor who needed better software. He trained himself to become a GUI programmer. The resulting company is very successful. (As reported by his mother, so I'm allowing for some parental bias.)


There are a bunch of ML/EEG devices on the market; it is a commodity technology by now. Just google "fda machine learning eeg".


The number one thing you can do to get doctors to use your new tool is find a way so that they don't have to enter patient data again.

The fear they have isn't that they'll be replaced on the job. The fear they have is that they will be required by some policy to enter yet another copy of the same data into yet another "time saving" system.


Healthcare industry insiders joke:

Any theory of the current state of medicine that involves a cardio-thoracic surgeon feeling like they are not completely irreplaceable, one-in-a-trillion geniuses/minor deities, put here on this planet to spare us lesser mortals (as scheduling allows) seems... improbable.


"They thoughtlessly order tests and thoughtlessly obey the results."

Hacker News, come on. You're better than this.

Taking it from the top: The obvious take is that the new tools this is referring to are EMRs and things like Watson. Will return to this in a moment.

Subjective and objective data both play a role in medicine. The eye of an experienced person can often see in a blink what would be missed by someone looking only at numbers in a chart. Gestalt, or the fast system of Kahneman, is invaluable when time is a serious concern. But no one starts out that way. The slower, methodical plod of consciously using Bayesian thinking is how the art is learned. Hear hoofbeats, think horses, not zebras... trying to weigh all available data and attempting to chart a course that gives patients the best outcomes at the most reasonable costs. Nowadays additional hoops must be jumped through: laws constrain, institutions have policies that must be followed, and most of all care is dictated by what is allowed by the insurance company. Rather than an invisible hand, this is an invisible supervisor robbing much autonomy and initiative from a sense of worthwhile work. Furthermore, the ever-present fear of litigation pushes towards a course with more testing than might be suggested by treatment and diagnosis alone: how would this course be defended if things go wrong, as they will for a certain number? All of these things individually stood to reason, but we as a society must keep in mind the cumulative weight of it all. Emergent phenomena aren't just a thing of programs and physics; they're a thing of human systems like healthcare.

Back to the article. A happy picture is painted of modern CT scans, yet it neglects the downsides. In 1980 the average per capita dose of radiation was 3.0 mSv, with 0.5 coming from medical imaging. It is now 5.5 mSv and rising, with medical imaging alone exceeding 3.0 mSv. Medical imaging is now a larger source of ionizing radiation than all other sources combined, with particularly high risks for those in utero or pediatrics. Like any other test or treatment, there is a risk/reward ratio. As technology improves, it is more likely to be adopted, not because earlier physicians were anti-technology Luddites, but because the improved technology changed that risk/reward ratio. We are more likely to use imaging with less exposure, or better yet use a modality without that risk.

Back to the Bayesian part of thinking... testing isn't perfect. I'd love to see a test that is 100% sensitive and 100% specific. But there are inevitably false positives and false negatives. Tools and tests need to be used in an appropriate situation. For example: I have a test that is 99% sensitive. Great! It'll catch someone with the disease, 99% of the time. So I can thoughtlessly order tests and thoughtlessly obey the results, right? Wrong. What happens if you use it to test for a rare disease that only 0.1% of the population will have? It depends on how specific the test is. How many false positives does it let in? If I test it on 1,000 folks indiscriminately, I'll end up with a basket of folks, only one of whom actually has the disease. How many false positives got treated (and possibly harmed by that treatment)? Mammograms (which have fallen a little out of favor in younger demographics without risk factors like the BRCAs) work this way, necessitating imaging and invasive biopsies that, upon further collection of data and review, seem not worthwhile for those under 40 and of questionable value under 50.
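
To put rough numbers on that 1,000-person example (a quick sketch; the 95% specificity figure is an assumed number purely for illustration, since specificity isn't specified above):

    # Rough sketch of positive predictive value for a rare disease.
    prevalence = 0.001    # 1 in 1,000 actually has the disease
    sensitivity = 0.99    # P(positive test | disease)
    specificity = 0.95    # P(negative test | no disease) -- assumed for illustration

    population = 1000
    sick = population * prevalence                 # 1 person
    healthy = population - sick                    # 999 people

    true_positives = sick * sensitivity            # ~1
    false_positives = healthy * (1 - specificity)  # ~50

    ppv = true_positives / (true_positives + false_positives)
    print(f"chance a positive result is real: {ppv:.1%}")  # about 2%

So even a test that sounds excellent hands you roughly fifty false positives for every true positive at that prevalence, which is the whole point about using tests in the appropriate situation.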

Tools are great! They need to be used appropriately though. Things have a cost, not just financial but physical and temporal. Indiscriminate use of tests and tools is the last thing anyone should want.

Nothing to add in a world of advancing technology? Bah. Most would love for its promises to come to fruition. EMR for example. We were promised time savings, with cross-talk between systems for better availability of data and improved patient safety. Mostly what has happened is administrators now have data used to push docs to see more and more patients (and spend less and less time with any one of them), all the while the paperwork stacks up. Somehow the paperwork never quite seemed to go away.

Maybe doctors don't reject tools that make their jobs easier. The article is full of tools that were eventually adopted, after all. I can point to many in development that have their ardent advocates, like point-of-care ultrasound among many others. Maybe they don't like tools that were sold as making their jobs easier but mostly don't, and instead benefit insurance companies and conglomerate administrators.


> Mostly what has happened is administrators now have data used to push docs to see more and more patients (and spend less and less time with any one of them), all the while the paperwork stacks up. Somehow the paperwork never quite seemed to go away.

Under capitalism, old companies (like hospitals) don't really tend to adapt in response to market forces by actually changing anything as drastic as the shape/relative scale of their internal bureaucracy.

It looks like that happens from a 10,000ft view, but what's really happening is that old companies are just dying, having been outcompeted by new small companies that "grew up in" the market environment where the changes were "the new normal." And then, eventually, the new, small companies acquire the big old dying companies for their brand value—so the resulting merged company has the appearance of the big old company having managed to turn over a new leaf.

When a company is only slightly relatively unfit (due to e.g. serving a market with inelastic demand, like medical care), it can take decades for their relative unfitness to deplete their resources to the point that they'd seek to be acquired. The current heavily-bureaucratic hospitals might be actively dying right now—it'll just take them another 50 years to become all-the-way dead.


> Too many doctors have resigned that they have nothing to add in a world of advanced technology. They thoughtlessly order tests and thoughtlessly obey the results.

Thoughtlessly obeying is undoubtedly a sign of incompetence; a doctor must always use judgement based on the patient's condition and not rely on a bunch of numbers, a doctor told me in my teens. However, ordering tech-based tests is not thoughtless all the time, at least not with competent doctors. I mean, in this age of self-diagnosing based on googling, a doctor not ordering such tests would be seen as incompetent, and even ignorant. Would a doctor risk his reputation, and possibly livelihood, just to prove a point, even when his spidey sense tells him there is nothing severe about the patient's condition, when the patient insists on it, directly or indirectly? Only a House would do so. The writer is too eager to generalize for some reason.


Flashy article title clickbait.

Doctors are not fools. If a tool truly made their job easier, it would not be rejected out of hand.

Job security for physicians is rarely the issue, there are plenty of sick people.


Surgical safety checklists were shown to lower 30-day post-operative mortality by 22%:

https://www.cardiovascularbusiness.com/topics/coronary-inter...

You'd imagine something as basic as checklists would be implemented as standard procedure in modern medicine -- yet this study was in 2013, not 1963. Most likely the majority of surgeons are still dragging their feet on this. It's a very conservative field with lots of big egos.


Checklists make patient outcomes better; they don't make surgeons' work easier.


I guess killing the patients as soon as they cut them open would be easier for the doctor :), but their job is to heal patients. Anything that improves outcomes is good. The OP might have been suggesting that instead of new tools, we just use the old checklist system.


There's also a trust issue. Doctors, who like everyone can be opinionated, had to be convinced one by one that checklists actually work and are worth messing up their work habits for - unlike the many other things people are trying to sell hospitals.


I don't completely agree. Medicine has a reputation for being extremely conservative with technology.

It's understandable, because you don't want to mess around with people's health, but I also see doctors largely acknowledge that their field is more conservative than it should be.

It's apparently also different based on specialization. Ophthalmologists apparently have a reputation for being early adopters.


And early thermometers weren't as practical as they are today; someone going to the doctor for a cold probably doesn't want to be sent to the hospital across town to wait in line for a 25-minute temperature reading when the doctor's hand is usually "good enough":

The original thermometers were a foot long, available only in academic hospitals, and took twenty minutes to get a reading.

And it's probably the same with CT scans -- I have a stomach ache, I don't want to come back tomorrow to be scheduled for a $2000 CT scan just in case it's appendicitis. Now if I take the medicine and it's not better in a day or two, maybe then I will want that CT scan.


>> Now if I take the medicine and it's not better in a day or two, maybe then I will want that CT scan.

That's what most doctors in my country would do. Are you saying that US doctors will send you to CT right away, just to be sure?


I went to the ER complaining of stomach pains. They heard my description and immediately put me into a room. A few hours later, I had a CT scan confirming that I had appendicitis. I had surgery that same day and was out of the hospital 2 days later.

Had they sent me home, medicine or no, things could have gone a lot worse.

I'm glad they do the CT scan here and not just send people away until they get worse.


I had a similar experience. I show up at the ER and they do their thing. Doctor says, "the symptoms indicate appendicitis, but the tests (which are mostly just poking you in the side and seeing how bad it hurts) are inconclusive. Normal procedure is to send you home and see how it goes, but I'd really feel more comfortable if we took a CT scan to be sure". At this point I'm in so much pain if he said he wanted to do a voodoo ceremony, I'd have been all in. CT scan reveals I had a weirdly shaped appendix, hidden behind a couple of folds of intestine, in a slightly different area than they are usually in. So the poking test didn't work cause they were poking around in the wrong area. If they'd sent me home I'd have come back with a burst appendix, which is potentially fatal. Ironically, just as they were wheeling me into the operating room, a woman came into the ER with an actual burst appendix so they wheeled me out and took her first.

I had a dog who started having a runny, bloody nose all the time. 4 x-rays, 2 nasal endoscope procedures and several thousand dollars later, and they still couldn't figure out what was wrong. In desperation I took him to a really good emergency/surgery vet clinic; they did a CT scan and it revealed a large tumor growing on the nasal cavity side of the soft palate. Confirmed with an endoscope going into the mouth and back up into the sinuses; they couldn't see it from the front. Super frustrating because it was too late to do anything at this point.

I think using CT scans in the context of a checkup, to look for problems, is bad, because as discussed elsewhere on this thread, if you go looking for something, you'll find something. But if you already know there is a problem, they're the best tool we have for looking inside the body without cutting.


That's exactly what should happen -- you came in with symptoms that seemed more serious than just a stomachache, so you were scheduled for a CT.

It's not like you came in with gas pains and the doctor had to send you for a CT just in case it turned out to be something more serious.


That seems to be what the article is implying should happen: just as doctors of the past should have embraced expensive and inconvenient thermometers sooner, modern doctors should embrace modern diagnostic tools:

Modern CT scans, for example, perform better than even the best surgeons’ palpation of a painful abdomen in detecting appendicitis. As CT scans become cheaper, faster, and dose less radiation, they will become even more accurate.

Though defensive medicine definitely happens in the USA and probably other countries as well:

https://en.wikipedia.org/wiki/Defensive_medicine


>...Job security for physicians is rarely the issue, there are plenty of sick people.

Yea you would think so... Unfortunately, in medicine, just because a computer program is better than a doctor at diagnosing a patient is no guarantee it will be used. The classic example here was the MYCIN expert system developed in the 1970s. MYCIN was shown to outperform infectious disease experts by 1979 in blind testing:

>... Eight independent evaluators with special expertise in the management of meningitis compared MYCIN's choice of antimicrobials with the choices of nine human prescribers for ten test cases of meningitis. MYCIN received an acceptability rating of 65% by the evaluators; the corresponding ratings for acceptability of the regimen prescribed by the five faculty specialists ranged from 42.5% to 62.5%. The system never failed to cover a treatable pathogen while demonstrating efficiency in minimizing the number of antimicrobials prescribed.

https://jamanetwork.com/journals/jama/article-abstract/36660...

https://en.wikipedia.org/wiki/Mycin


There are good reasons why MYCIN and similar systems were never accepted, mainly the fact that diagnosing an infectious disease is pretty useless on its own. If the doctor has already narrowed it down to an infectious disease, telling which one it is is trivial. The hard part is getting there.


>... diagnosing an infectious disease is pretty useless on its own.

I don't think that was what was being measured.

The evaluation was a comparison of:

>...MYCIN's choice of antimicrobials with the choices of nine human prescribers for ten test cases of meningitis.

In that evaluation:

>...MYCIN received an acceptability rating of 65% by the evaluators; the corresponding ratings for acceptability of the regimen prescribed by the five faculty specialists ranged from 42.5% to 62.5%.

The fact that the acceptability ratings of the five faculty specialists ranged from 42.5% to 62.5% implies that this isn't a trivial problem.


As a med student, I can say that using shiny new tech in medicine has become a dick-measuring contest. There is absolutely no fear of new tech, just professionals rejecting useless, unoriginal products that don't actually make our jobs easier. Like the data fad that will just make us manually re-enter data into a new data silo for the benefit of other companies who will take no responsibility for the stability required in the field. I absolutely resent some of the interfaces we have to use, but at least the CT/MRI reading software doesn't crash. Anything new and shiny does, though.



