The featured comment is great, for those who missed it:
I am a physician who did a computer science degree before medical school. I frequently use the Therac-25 incident as an example of why we need dual experts who are trained in both fields. I must add two small points to this fantastic summary.
1. The shadow of the Therac-25 extends far beyond those who remember it. In my opinion, this incident set medical informatics back 20 years. Throughout the 80s and 90s there was just a feeling in medicine that computers were dangerous, even if individual physicians didn't know why. This is why, when I was a resident in 2002-2006, we were still writing all of our orders and notes on paper. It wasn't until the US federal government slammed down the hammer in the mid-2000s and said no payment unless you adopt electronic health records that computers made real inroads into clinical medicine.
2. The medical profession, and the government agencies that regulate it, are accustomed to risk and have systems to manage it. The problem is that classical medicine is tuned to "continuous risks." If the risk of 100 mg of aspirin is "1 risk unit" and the risk of 200 mg of aspirin is "2 risk units," then the risk of 150 mg of aspirin is very likely to be between 1 and 2, and it definitely won't be 1,000,000. The mechanisms we use to regulate medicine, with dosing trials, pharmacokinetic studies, and so forth, are based on this assumption that both benefit and harm are continuous functions of prescribed dose, and the physician's job is to find the sweet spot between them.
When you let a computer handle a treatment you are exposed to a completely different kind of risk. Computers are inherently binary machines that we sometimes use to simulate continuous functions. Because computers are binary, there is a potential for corner cases that expose erratic, and as this case shows, potentially fatal behavior. This is not new to computer science, but it is very foreign to medicine. Because of this, medicine has a built-in blind spot in evaluating computer technology.
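The contrast the parent describes can be sketched in a few lines. This is a hypothetical illustration, not the actual Therac-25 code; the wraparound is loosely modeled on the one-byte counter overflow described in the published investigation, and both function names are made up for this example.

```python
# Hypothetical sketch: a continuous dose-risk model behaves predictably
# between measured points, while a binary corner case -- here an 8-bit
# counter wrapping to zero -- jumps without warning.

def risk_continuous(dose_mg: float) -> float:
    """Toy linear model: 100 mg -> 1 risk unit, 200 mg -> 2 risk units."""
    return dose_mg / 100.0

def recheck_required(setup_passes: int) -> bool:
    """Toy one-byte flag: nonzero means 'verify the setup before firing'.
    On the 256th pass the byte wraps to 0 and the check is silently skipped."""
    flag = setup_passes % 256  # 8-bit wraparound
    return flag != 0

# Interpolation is safe for the continuous model...
assert 1.0 < risk_continuous(150) < 2.0
# ...but the binary flag falls off a cliff at an arbitrary corner case.
assert recheck_required(255) and not recheck_required(256)
```

The second function is the "blind spot": no dosing trial between 200 and 300 passes would ever predict that pass 256 behaves differently from pass 255.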
I suspect that a large proportion of the ways abstract planning fails is due to discontinuous jumps, foreseen or unforeseen. These may manifest in computer programs, government policy, etc.
Continuity of risk, change, incentives, etc. lends itself to far easier analysis and confidence in outcomes. And higher degrees of continuity, along with lower rates of change, only make that analysis easier. Of course it's a trade-off: a flat line is the easiest thing to analyze, but also the least useful.
In many ways I view the core enterprise of planning as an exercise in trying to smooth out discontinuous jumps (and their analogues in higher-order derivatives) to the best of one's ability, especially when they exist naturally. For example, your system's objective response may be continuous, but its interpretation by humans is discontinuous; how are you going to compensate to regain as much continuity as possible?
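One standard way to "smooth out" a jump is to replace a hard threshold with a gradual transition. A minimal sketch, with made-up function names and an arbitrary threshold, just to make the idea concrete:

```python
import math

def step(score: float, threshold: float) -> float:
    """Discontinuous interpretation: a hair below the threshold reads as failure."""
    return 1.0 if score >= threshold else 0.0

def smoothed(score: float, threshold: float, k: float = 10.0) -> float:
    """Logistic approximation of the same threshold; larger k -> sharper step."""
    return 1.0 / (1.0 + math.exp(-k * (score - threshold)))

# The step jumps from 0 to 1 across an infinitesimal change in score...
assert step(0.499, 0.5) == 0.0 and step(0.5, 0.5) == 1.0
# ...while the smoothed version changes gradually around the threshold.
assert 0.45 < smoothed(0.499, 0.5) < 0.5
```

The design choice is the parameter `k`: it lets you dial in how much of the original discontinuity you keep versus how much analyzability you gain.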
It's short-sighted not to see that the rapid forced migration of medical records to computers is almost exactly the same problem playing out. It's less direct and dramatic, but it casts a much wider net and brings far more short- and long-term problems (including the software/systems trust issue mentioned).
There are soooo many problems with electronic records that it's hard to even summarize. But the biggest few, in my opinion:
1) The software influences the medical workflow and becomes a major distraction to visits. What was completely analog and free-form is now binned, discrete, and made more complex.
2) Desired outcome affects what and how things are recorded, instead of vice-versa. Staff learn what they have to do to get the orders or prescription or billing output that they need, which often doesn't line up with the actual diagnosis.
3) It reduces productivity, causes physician burnout, and puts most small practices out of business. (Any practice not large enough to self-host records and hire full-time IT/software staff was basically strong-armed into selling out to a hospital or other large organization.)
4) It tracks both efficiency and patient satisfaction in the same system as the records and billing, which leads to some pretty perverse incentives on multiple levels. (As much heat as the drug companies have taken over the opioid crisis, I'd argue doctors worried about dinging their patient-satisfaction scores were just as responsible, by being afraid to tell too many patients "no.")
It goes on... Just a huge cluster of poorly thought out unintended consequences.
And that's just the medical side of things. The technical, financial, and legal aspects all have similar issues.
Half of this is a good thing. I want my doctor to follow the proper process and checklist every time, no matter what. There are many one-in-a-million cases that have the same symptoms as the common thing they see daily. The process is how you catch and treat them. The doctor's office is no place for creativity until everything else has been ruled out.
That doesn't mean there isn't room to improve the user interface. However, doctors are in the wrong to be so technology-backwards.
A bit of both. The medical system has long known that doctors are too much "cowboy" and don't follow useful process. Many doctors resisted hand washing in the late 1800s.
User interaction studies have made great progress. A lot of software ignores that. Likewise we know a lot of ways to write high quality software that are ignored in your typical web app.
However, we also know that doctors are human and fail often. While computer systems do fail, those failures are much easier to fix once and for all.
> What was completely analog and free-form is now binned, discrete, and made more complex.
This sentence makes zero sense. Limiting choices inherently makes things LESS complex. That's why we use frameworks for decision making and risk assessment, rather than just doing everything "analog and free form"; it's the same reason we do "structured training" for complex and difficult tasks, rather than just let people try to figure them out. It's just a completely wrong and backwards statement.
> Limiting choices inherently makes things LESS complex.
"Do you want me to kick you in the face, or the groin?" is a substantially more complex choice than "Do you want me to kick you in the face, or the groin, or not at all?"
Sometimes, limited choices require you to shoehorn something into one of those available choices when it's not actually appropriate. Such is the case with medical record systems at times.
If I limited your choice to answering questions in binary yes/no, do you really think that makes things less complex than a free form & lucid description of an issue/procedure? Perhaps if you are communicating exclusively with a machine..!
I think in this case the problem wasn't because of some inherent characteristics of the software.
If this were a completely analog, mechanical tool, the problem could still have existed. For example, you could have made a mistake using various knobs and switches.
And in this case the solution was to physically design the device so that it cannot be put into an incorrect state. Then a software error did not matter much, as nothing the software did could put the beam and the metal shield into an incorrect combination.
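The same principle applies in software: make the incorrect state unrepresentable. A hypothetical sketch of the idea (the mode names and `Machine` class are illustrative, not from any real device):

```python
# Power level and shield position are bundled into a single mode, so
# "high power with the shield out" cannot even be requested.

from enum import Enum

class Mode(Enum):
    ELECTRON_LOW_SHIELD_OUT = ("low", "out")
    XRAY_HIGH_SHIELD_IN = ("high", "in")

class Machine:
    def __init__(self) -> None:
        self.mode = Mode.ELECTRON_LOW_SHIELD_OUT

    def set_mode(self, mode: Mode) -> None:
        # The only public knob: power and shield always change together.
        self.mode = mode

    @property
    def high_power(self) -> bool:
        return self.mode.value[0] == "high"

    @property
    def shield_in(self) -> bool:
        return self.mode.value[1] == "in"

# The dangerous combination is unreachable for every expressible mode.
m = Machine()
for mode in Mode:
    m.set_mode(mode)
    assert not (m.high_power and not m.shield_in)
```

The point is that the safety invariant is enforced by the shape of the interface, not by a runtime check that a race condition or overflow could skip.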
Using physical safeties is a common strategy. For example, my Instant Pot has multiple physical safeties built in.
For example:
* even if the controlling software or a sensor fails, there is an overpressure valve that will not let the pressure rise above a certain value.
* there is a single piece of metal that both blocks gases escaping from the pot AND blocks the ability to open the pot. If the piece of metal is not in place, there is a hole in the pot and pressure cannot rise. If the piece of metal is in place so the hole is closed, it blocks the possibility of opening the pot and creating an explosion.
* there is a specially designed guide that makes it impossible to only partly close the device.
* the device is designed so that the weakest element keeping pressure is the seal. If the pressure rises too far, instead of the pot blowing up, the seal will deform, be blown off, and let the steam escape in a more or less controlled manner.
* there is a bimetal safety device that will turn off the heater if the temperature rises too much,
and so on.
You see, none of these features relies on software. There are software safeties, but they are redundant, in that the device does not rely solely on them.
If the company producing the Instant Pot can show this kind of safety-conscious design, I am sure it can also be applied to dangerous medical devices.