I expect any LLM, even a fine-tuned one, will run into the problem of user-steered conversations drifting ever further from whatever discourse the original LLM deployers consider appropriate.
Actual therapy requires engaging with more "unsafe" topics than everyday conversation. There has to be room to discuss explicit content or problematic viewpoints. A good therapist also can't just reject delusional thinking outright ("I'm sorry, but as an LLM..."); they have to make the patient feel heard while (eventually) guiding them toward healthier thinking. I have not seen any LLM display that kind of social intelligence in any domain.