
I'm a psychiatrist, and (like most people) I'm both intrigued and terrified about what AI will bring.

I'm not terrified about losing my job (we're spread so thin that I don't think it will be an issue for competent psychiatrists in my lifetime), but I'm terrified that the mentally ill will be marginalized and that a crappy but cost-effective approach with a high margin of error may rise to prominence.

The diagnostic categories, as mentioned by just_steve, are indeed imprecise, and the RDoC (Research Domain Criteria) approach (https://www.nimh.nih.gov/research/research-funded-by-nimh/rd...) would be a much better fit for this type of research.

Lastly, here is my dream for a good use of AI in psychiatry: with so much psychiatric care happening remotely, I would love to have a real-time dashboard/HUD with measures of disorganized speech patterns, affective intensity, etc., that I could use to supplement the information available via video link. It would be nice to have some additional data to make up for the trade-off of not being in the room. Perhaps one day, with some kind of AR, it could even happen in the same room. It would be hard to make it not distracting, but done well it could add a lot. And if the system followed the patient when they moved, it would be far more useful, since it would be trained on their own personal variation. It would be really nice to catch the early onset of episodes of mania or psychosis through subtle changes from baseline, and many of the public-sector patients I work with bounce around too often for anyone to get to know them well enough to catch those subtle changes.
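To make "measures of disorganized speech" a little more concrete, here is a toy sketch of one signal such a dashboard might compute from a live session transcript: the average TF-IDF similarity between consecutive utterances, as a crude proxy for tangential speech. This is purely illustrative, not a validated clinical measure; the function name and example utterances are made up, and using scikit-learn this way is just one possible approach.

    # Toy sketch of a "tangentiality" proxy for a live-session dashboard.
    # Purely illustrative -- not a validated clinical measure.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def coherence_score(utterances):
        """Mean cosine similarity between consecutive utterances.

        Lower scores mean consecutive utterances share less vocabulary,
        which is one very rough proxy for disorganized/tangential speech.
        """
        if len(utterances) < 2:
            return 1.0
        # Vectorize all utterances in one shared TF-IDF space.
        vectors = TfidfVectorizer().fit_transform(utterances)
        sims = [
            cosine_similarity(vectors[i], vectors[i + 1])[0, 0]
            for i in range(len(utterances) - 1)
        ]
        return sum(sims) / len(sims)

    # A coherent exchange scores higher than a tangential one.
    print(coherence_score([
        "I have been sleeping badly this week.",
        "The bad sleeping started after I changed shifts at work.",
    ]))
    print(coherence_score([
        "I have been sleeping badly this week.",
        "The radio said the river was full of copper wire.",
    ]))

A real system would work on streaming speech-to-text output and would need per-patient baselines, which is exactly why a model that follows the patient would be so much more useful than a one-size-fits-all threshold.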




> a crappy but cost-effective approach with a high margin of error may rise to prominence.

What, like antidepressants?


Ba-zing. I was thinking the same thing. It's incredible how much of a hammer mental health practitioners think they have. It's borderline criminal how many patients have been turned into nails.


> but I'm terrified that the mentally ill will be marginalized and that a crappy but cost-effective approach with a high margin of error may rise to prominence.

I understand the fear, but is there any value to be gained by vastly expanding access to mental health diagnostics, even flawed ones?

Not being cheeky, genuinely interested. One hears about a mental health crisis in the US quite often, so could this actually be a net-positive tool?


Personally, I think acknowledging that mental health is a real thing, that it sometimes goes awry, and that it requires real treatment is still something more people need to be aware of and accepting of.

I think we've made progress in addressing postpartum depression and first-responder PTSD. But there remain many stigmas around, and a lack of awareness of, mental health, mental injury, and moral injury, which in North American society lead to substance and screen addictions, as well as anti-social behaviours.

So diagnostics would be helpful, but treatment and societal perspectives need to improve as well.


There's no small irony in one of the companies involved being named "Unite AI". Humans are predatory animals in packs, and make no mistake: this tech will be used by some humans to hunt weaker humans.


What do you think of tools like this one? https://www.aifredhealth.com/


I can't really tell from the website, but maybe? The site seems focused on depression, and at least in my practice I'm not sure that would add much for me.


> a crappy but cost-effective approach with a high margin of error may rise to prominence.

The tech startup that eventually creates that will call this "efficiency". This type of 'solution' is exactly what capitalism creates.

> I would love to have a real-time dashboard/HUD with measures of disorganized speech patterns or affective intensity

> It would be hard to make it not distracting

Not only would it be distracting, it could also bias you in unexpected ways.

> I would love to have a real-time dashboard/HUD with measures of disorganized speech patterns or affective intensity,

I already don't trust a lot of the mental health industry because of the very bad experiences[1] I've had in the past. The easiest/fastest way to guarantee I never visit a psychiatrist again is to start using that kind of "AI" tech without first showing me the source code. "Magic" hidden algorithms are already a problem in other medical situations like pacemakers[2] and CPAP[3] devices.

> make up for the trade-off of not being in the room

Maybe what you need isn't some sort of "AI" or other tech buzzword. It sounds like you need better communication technology that doesn't lose as much information.

--

On the more general topic of "AI in psychiatry", I strongly encourage you to play the visual novel Eliza[4] by Zachtronics. It's about exactly your fear of a cheap, high-error-rate system, with an additional twist: the same system also optimizes the therapist's role into a "gig economy" job.

[1] a brief description of one of those experiences: https://news.ycombinator.com/item?id=26035775

[2] https://www.youtube.com/watch?v=k2FNqXhr4c8

[3] https://www.vice.com/en/article/xwjd4w/im-possibly-alive-bec...

[4] https://www.zachtronics.com/eliza/



