I'd argue chatbots give zero actual attention since they're not human (other than in the irrelevant technical sense). Saying they can is a bit like saying a character in a book or an imaginary friend can.
It will probably take a few years for the general public to fully appreciate what that means.
Assuming we are comparing ChatGPT to an in-person therapist, there's a whole universe of extra signals ChatGPT is not privy to: tone of voice, cadence of speech, time taken to think, reformulated responses, body language.
These are all CRUCIAL data points that trained professionals take cues from. An AI could be trained on these too, but I don't think we're close to that yet AFAIK as an outsider.
People in need of therapy can be (and probably are) unreliable narrators, and a therapist's job is to manage that long-range context, drawing on specialist training.
> don't think we're close to that yet AFAIK as an outsider.
I was gonna say: Wait until LLMs start vectorizing to sentiment, inflection and other "non-content" information, and matching that to labeled points, somehow ...
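For what it's worth, the "matching to labeled points" part is basically a nearest-neighbour lookup once the features exist. A minimal sketch, assuming some upstream model has already extracted prosody features into a vector (every label and feature name here is made up for illustration):

    import numpy as np

    # Hypothetical labeled reference points in "prosody space":
    # [pitch variance, speech rate, pause length, energy]
    LABELED_POINTS = {
        "calm":     np.array([0.2, 0.9, 0.1, 0.3]),
        "agitated": np.array([0.9, 0.2, 0.8, 0.7]),
        "hesitant": np.array([0.3, 0.4, 0.9, 0.2]),
    }

    def classify_prosody(features):
        # Cosine similarity against each labeled point; return the best match.
        def cos(a, b):
            return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return max(LABELED_POINTS, key=lambda lbl: cos(features, LABELED_POINTS[lbl]))

    print(classify_prosody(np.array([0.8, 0.3, 0.7, 0.6])))  # -> "agitated"

The hard part, of course, is extracting those feature vectors reliably from real-world audio and video in the first place.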
I am curious how this will work in the wild. I believe the capability will exist, but things like body language and facial expressions can be really subtle, and even if it's possible in principle, I think run-of-the-mill consumer hardware won't be high-fidelity enough and will bring in too much noise.
This reminds me of the story of how McDonald's abandoned automated drive-thru voice input: speech recognition has been a "solved problem" for a long time now, but in the wild there were too many uncontrolled variables...
EDIT: I recently had issues trying to biometrically verify my face for a service, and after 20-30 failed attempts to get my face recognised I was locked out of the service. Sensor-related services are still a bit of a murky world.