
Interesting, but the application would be weird, the physician would diagnose you and then read out an automated, AI-generated text-to-speech message to the patient?


One of my favorite physicians was my oncologist. I literally spent less time with him during the whole diagnostics and treatment period than I did with my dentist at the time. He was straight to the point, no empathetic BS, just a doctor with a diagnosis and a treatment plan to discuss. On the other side was me, an engineer with a problem to fix, talking to an expert with the right answers. That discussion took all of 15 minutes.

That guy would have failed against ChatGPT, and I loved the way he told me things. Anything else would have just driven me crazy, maybe to the point of looking for a different doctor.

So I guess what passes as good bedside manner for doctors largely depends on the patient. By the way, the dentist I've had since then is in the same category as my, luckily former, oncologist. A visit with him usually takes no more than 5 minutes if he's chatty, less if not. Up to 10 when treatment is required; anything longer than that is a different appointment.


The real communication skill for a physician is to be able to flex the style, information content, and level of detail to the patient with whom they are meeting. The patient in this room is an engineer, the patient in the next room is elderly and has mild cognitive impairment, etc. As impressive as ChatGPT is in its domain, I don’t see it “reading the room” in this way anytime soon. And as a human who enjoys interacting face to face with other humans from time to time, I hope we keep it that way.


It also has to flex a bit with the diagnosis: "it's aggressive and terminal cancer, you should draw up a will and enjoy your next couple of months", "it's broken, you need a cast" and "it seems like nothing, take some paracetamol and come back if it's still the same in a week or gets worse" all arguably call for different communication styles.


You can give those models a system prompt, in which you can tell them how to act generally; it's also a very good place (imo) for background information and formatting instructions. 3.5 isn't great at following it, but 4 is.
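For what it's worth, setting one is just the first message in the chat call. A rough sketch with the OpenAI Python package (pre-1.0 style), prompt text made up:

    import openai

    openai.api_key = "sk-..."  # your key here

    # The system message sets the general behaviour; user messages follow it.
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a calm, plain-spoken assistant. "
                                          "Keep answers short and avoid filler."},
            {"role": "user", "content": "Explain my test results to me."},
        ],
    )
    print(resp["choices"][0]["message"]["content"])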


Wow, you overcame cancer. That is awesome.


I didn't do much, except being insufferable during chemo. Everybody around me did the heavy lifting, emotionally and physically.


It's probably not directly useful for physicians except as a teaching aid, but I have a friend who runs a small local business and she sometimes finds it difficult dealing with problem customers when she's tired or upset. As she interacts with them mostly via WhatsApp (up until the point that they purchase), the idea of having a bot write the replies for her has been floated. The LLM has infinite patience.


> The LLM has infinite patience.

Not Bing Chat. She doesn't have problems telling users that they have been bad users.


Good point but Bing Chat seems like an example of why OpenAI's mastering of RLHF has been so critical for them. Honestly the more time you spend with ChatGPT the more absurd fantasies of evil AI takeover look. The thing is pathologically mild mannered and well behaved.


Bing originally, and intentionally, undid some of the RLHF guardrails with its system prompt; today it's actually tamer than normal ChatGPT and very aggressive about ending chats if it detects it's headed out of bounds (something ChatGPT can't offer with the current UI).


ChatGPT will delete its response if it is severely out of line.


That's based on the moderation API, which only kicks in on severe content.

Bing, on the other hand, will end a conversation because you tried to correct it one too many times, or used the wrong tone, or even asked something a little too philosophical about AI.

They seem to be using it as a way to keep people from stuffing the context window to slowly steer it away from its system prompt.
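For reference, that moderation check is a separate call you run on the text. A rough sketch with the openai Python package (pre-1.0 style), input text made up:

    import openai

    # Returns per-category flags for the given text.
    resp = openai.Moderation.create(input="some user or model message")
    result = resp["results"][0]
    print(result["flagged"])     # True/False
    print(result["categories"])  # e.g. {"hate": False, "violence": False, ...}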


This is why I think Google had the brains to steer clear of releasing a similar product; I think it was intentional. They're not idiots, and they could probably see that using "AI" products behind the scenes is safer and easier than having people directly talking to a model which has to suit all customers' moods and personality types while not being creepy or vindictive, and while dealing with all the censorship and safety aspects of it.

Fun times.


"she"?


she == Sydney


It seems it gives more accurate and empathetic responses. I would guess the physician just needs to double-check that what the LLM says is medically correct and reasonable, and ask the right questions of the patient and the LLM.



