I remember working in the operating room space. I noticed some sites had cameras built into their lights. I asked if they wanted to be able to capture images
timestamped to entries in the case record. Everyone was adamant they did not want that, as it would open them up to too much subjective liability.
We have a hard time capturing enough information to be useful and understand things. I didn't realize it before now, but AI is going to make that much, much worse. In aerospace we had yearly training from our insurance company's lawyers on how to record information into our systems, format emails, etc. It was interesting getting to interface with high-powered Lloyd's lawyers, but also surreal/kinda wrong.
You probably mean well. Something something road something something good intentions.
Meta (not the company) AI product idea from this: an AI to skim your reports and make sure there is nothing that a 'legal AI' could reconstruct into a lawsuit narrative. It scans all medical records before saving and recommends verbiage adjustments. (You're welcome, dev at Oracle reading this who needs to respond to a 'Larry needs medical software value propositions for AI' email.) Begun the AI wars have. Why stop there with my above examples? Siemens needs a legal AI for PLM at the least, as manufacturing engineers' defect notes could definitely be interpreted in legally sketchy ways, so they should be auto scanned/rejected/massaged. One weird note on one part that went through MRB/NCR and ended up in a crashed plane could be trouble.
Thanks for your comment! Medical chronologies are already very common across personal injury law and other legal practice types. The problem is that paralegals are spending days, if not weeks, combing through thousands of medical records and entering them into an Excel spreadsheet. This automates most of that task.
These cases often bring in medical records experts when one side disputes the scope or completeness of their access. That’s where I’d expect AI to debut.
Actually, Siemens Healthineers could do three things:
1. Style transfer from one standard to another. Would a particular history have resulted in this FHIR R5 data when it was originally written under FHIR R4? (A minimal sketch of this check follows the list.)
2. The ability to incorporate feedback for de-identification. In court, that must be explained as an accidental parallel construction rather than an oversight flouting GDPR; or worse, a hallucination substituting in training data. Siemens could build a product that proves as much.
3. Automatically cache pre-filled chronologies. This is easier than the other two, and what I’d expect lands on someone’s desk. By pre-forming the (usually expensive) paralegal material, a doctor or administrator can preview the legal case they’re up against. Alternatively, a plaintiff can claim that a hospital or doctor was aware of the risk of a pre-existing condition. Siemens mostly speaks in risk.
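For the first idea, a minimal sketch of what such a consistency check might look like. The MedicationRequest medication[x] change is a real R4-to-R5 difference, but the mapping here is illustrative and nowhere near a complete migration; the function names are hypothetical.

    # Hypothetical sketch: could this R5 resource have been produced from
    # this R4 original? Only the MedicationRequest medication[x] change is
    # modeled; a real R4->R5 migration covers far more.

    def r4_to_r5_medication(r4: dict) -> dict:
        """Project the R4 medication[x] choice onto the R5 CodeableReference shape."""
        r5 = {k: v for k, v in r4.items()
              if k not in ("medicationCodeableConcept", "medicationReference")}
        if "medicationCodeableConcept" in r4:
            r5["medication"] = {"concept": r4["medicationCodeableConcept"]}
        elif "medicationReference" in r4:
            r5["medication"] = {"reference": r4["medicationReference"]}
        return r5

    def consistent(r4: dict, r5: dict) -> bool:
        """Would this R4 history have resulted in this R5 data?"""
        return r4_to_r5_medication(r4) == r5

    r4 = {"resourceType": "MedicationRequest",
          "medicationCodeableConcept": {"text": "warfarin 5 mg"}}
    r5 = {"resourceType": "MedicationRequest",
          "medication": {"concept": {"text": "warfarin 5 mg"}}}
    assert consistent(r4, r5)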
Then again, if it's your own law firm, the one representing you, and doing so lets them build your case at half the cost in research hours... or lets them figure out whether they can take your case when it might not otherwise have been worth their time to look? I don't know much about how these things work, but I could see people I know consenting to such a thing.
Today, both sides of a lawsuit already have some level of access to medical records; how far that access extends could be a judge’s decision.
And when it comes to medical records for people unrelated to the lawsuit, using de-identified cases is not a violation of HIPAA. The question is: can we use AI on the full cases and de-identify afterward? And is using AI on already de-identified cases even workable, given that the de-identification process can mess with the chronology? (A toy illustration of that chronology problem is below.)
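To make the chronology concern concrete: HIPAA Safe Harbor strips dates down to the year, which destroys event ordering within a year. One common mitigation (my assumption, not something from this thread) is shifting all of a patient's dates by a single random offset, which hides the real dates but preserves intervals. A minimal sketch:

    import random
    from datetime import date, timedelta

    def safe_harbor_year(d: date) -> int:
        # Safe Harbor keeps only the year: events in the same year lose order.
        return d.year

    def shift_dates(events, seed):
        # One consistent per-patient offset preserves relative chronology.
        offset = timedelta(days=random.Random(seed).randint(-365, 365))
        return [(label, d + offset) for label, d in events]

    history = [("admitted", date(2023, 3, 1)),
               ("MRI", date(2023, 3, 4)),
               ("discharged", date(2023, 3, 9))]

    print([safe_harbor_year(d) for _, d in history])  # [2023, 2023, 2023] -- order gone
    print(shift_dates(history, seed=42))              # 3- and 5-day intervals intact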
I guess the profit is where the hazards lie! Skimming your landing page, it sounds like you're making meaningful efforts to compensate for the aspects of LLMs that horrify us around here: it sounds like you're primarily providing direct references to the "needles" in the case-record haystack rather than synthesized "insights", and serving the legal professionals themselves with a specific, mundane research task of theirs rather than playing lawyer for consumers or purporting to produce polished results.
Better you than me, but the very best of luck to you, and congratulations on your launch.
I get why you’d want to distinguish yourself from competition that relies heavily on RAG, but “chatting” is putting it mildly.
I’d use RAG with prompts like “what was billed but did not produce a record?”. Or rehydrating the context of, for example, the hospital’s own model for predicted drug interactions; I could see it being lucrative if that model produced those results without traceability. (A toy version of the billing check is sketched below.)
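For concreteness, here's a naive, non-RAG version of that first check; the field names and keyword matching are hypothetical stand-ins for what retrieval over the full record would actually do:

    # Minimal sketch of the "billed but no record" check. The billing and
    # note structures are invented for illustration.

    billing = [
        {"cpt": "70551", "desc": "MRI brain w/o contrast", "date": "2023-03-04"},
        {"cpt": "99213", "desc": "Office visit", "date": "2023-03-09"},
    ]
    notes = [
        {"date": "2023-03-04", "text": "MRI brain performed, no acute findings."},
    ]

    def documented(charge: dict, notes: list) -> bool:
        # Naive match: same date and a keyword from the charge description.
        # A RAG pipeline would replace this with retrieval over the full record.
        keyword = charge["desc"].split()[0].lower()
        return any(n["date"] == charge["date"] and keyword in n["text"].lower()
                   for n in notes)

    orphans = [c for c in billing if not documented(c, notes)]
    print(orphans)  # the 99213 visit was billed but produced no note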
It's a productivity AI agent meant for workers who were already dealing with medical records and creating medical chronologies manually, primarily in the legal space.
Specifically, because you will require vast human health records to train your model, and that model will interact with my health records, and I trust you just about as far as I can throw you as a steward of my or the public’s data. You will intentionally or accidentally expose me to risk, with no meaningful punishment.
Now as a person, of course I’m sure you are kind and responsible and we’d have a lovely lunch (and I mean that). It sounds like a fascinating problem to solve. As a group though, acting within a regulatory regime that doesn’t value privacy at all - excepting that one law from 1996 with more holes than my socks - you just can’t be trusted.
Would you claim personal responsibility for any downside risks your product introduces, in a “like for like” manner with respect to the actual damage caused? Like if a doctor relying on your product caused a death?