This is slightly tangential, but I just realized in reading that article that the JAMA paper on AI besting physicians with regard to empathetic responses was based on ratings of empathy provided by physicians? Not patients themselves, or providers actually trained in empathy as a core skill, like psychologists?
There's some irony there and it makes me wonder about the study.
I guess more to the point of the article, it's maybe not surprising AI rollout is a hot mess given all the problems with healthcare at multiple levels. Like the pandemic, it just becomes a reflection of, and a stress test for, all the problems in the system as a whole.
It gets worse. The human answers were scraped from Reddit replies by verified doctors on r/AskDocs.
It's an unfair test. Doctors don't normally provide care directly to patients based on a few-paragraph written description of their situation, and GPT was trained on Reddit.
I don't have access to the paper so I can't tell if they ensured their human answers are from before the GPT cutoff.
With general tech literacy among doctors quite poor, hospital IT staff underfunded, and outdated EMRs with vendor lock-in, this outcome is not surprising. If one considers linear regression to be basic AI, tools such as the Wells criteria are already widely used. Like the article states, you really have to convince the doctors it's useful.
One thing I did not see specifically mentioned was the potential economic motivation for hospitals to rely on AI to do the doctor's job instead of hiring more or better doctors. That is to say, hospitals will make business-level decisions to replace talent with AI tools to save on the bottom line, regardless of whether or not it benefits patients.
There is far too much sensationalization of much of this tech. The continued aggrandizement of it as a better-than-human solution for things will encourage premature adoption to the detriment of others. It's irresponsible.
Healthcare systems always struggle with implementing anything.
I'd say that US hospitals are a hot mess, and AI (or any new technology) adoption in the healthcare system is also automatically a hot mess. So AI adoption in US hospitals would be a hot mess inside a dumpster fire inside whatever the next thing up is.