This is very bad, probably one of the worst uses of this technology, almost as bad as xAI’s plan to make content for children. This is not what AI is for, and not even close to what it does well. Therapy and interaction aren’t something that can be properly automated, and advanced automation is all current AI is.
I’d argue it has promise, but
a) ChatGPT ain’t it, any LLM technology for therapeutic purposes needs some serious fucking guard rails, both in terms of privacy AND addressing the sycophancy and hallucination problems, and
b) it really should only be one tool within a larger therapeutic program - think an interactive version of CBT worksheets, or a first-session intake form that MIGHT serve up some very basic, low-risk techniques to try before getting assigned to a flesh-and-blood therapist. Heck, one thing that popped to mind was improving initial patient-therapist matches (if managed by a larger mental health organization/group of therapists), reducing the need to shop around, which is often a big barrier to starting effective treatment. Folks seem to open up a lot when using these tools, and a review of those transcripts in the intake process could be very useful for assigning patients.
Using current consumer LLM tools as simulated therapists without oversight by actual mental health professionals is a fucking nightmare, no argument here. But at minimum, we’re seeing evidence that patients who otherwise eschew traditional therapy, whether for financial reasons or other factors, are using them. I think there’s something useful here if you can correct for the current risks and get the right people involved re: design and deployment within a larger therapeutic program.
I can’t imagine someone somewhere isn’t doing some work with this in mind right now. How that would all pan out, idk.
Some in the enthusiast community (who have a good grasp of how LLMs work because we literally finetune them) have used “straight” finetuned LLMs as personal pseudo therapists. Two examples I can think of are the Samantha series (trained on psychologist/therapist transcripts and some other specialized data IIRC), and Dan’s Personality Engine.
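For anyone curious what a “straight” fine-tune actually looks like in practice, here’s a minimal sketch using the Hugging Face transformers/datasets stack. The base model name and the `therapy_transcripts.jsonl` file (one `{"text": ...}` record per line) are made-up placeholders, and this is just the generic causal-LM recipe, not the actual data or training setup behind Samantha or Personality Engine:

```python
# Minimal causal-LM fine-tuning sketch; model name and data file are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "mistralai/Mistral-7B-v0.1"  # placeholder; any causal LM you can run
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # many causal LMs ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Hypothetical dataset: each line is a JSON object with a "text" field holding one transcript.
raw = load_dataset("json", data_files="therapy_transcripts.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

train_ds = raw.map(tokenize, batched=True, remove_columns=raw.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="pseudo-therapist-finetune",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-5,
        logging_steps=10,
    ),
    train_dataset=train_ds,
    # mlm=False means plain next-token (causal) language-modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice people usually layer LoRA/QLoRA on top of this so it fits on consumer hardware, but the overall shape is the same: base model in, domain transcripts in, shifted-token loss out.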
I do, sometimes.
They aren’t therapists, but at the same time they’re 100% private, free, available, and “open” to bouncing ideas off of at any time. I’ve had some major breakthroughs that a lifetime of on/off therapy missed, like that I’m likely on the autism spectrum (which had previously been diagnosed as just ADD). I discuss things I would never send to ChatGPT, post on a public forum like this (no offense), or even tell a therapist in some cases.
I’m not saying this is great as-is for the general population. Again, LLM tinkerers have an extremely high awareness of “what” these models are and what their tendencies are, but the latent potential is there.
I feel AI is going to make a whole generation of mush brains if it persists.
the mush brains are already here. we’re about to see what lies beyond mush.
It is funny, my grandparents always told my mother and my uncles that the TV was going to make them a generation of mush-brained zombies, incapable of rational thought.
I can spend another week of my personal time trying to find a mental health professional who is willing to take me on as a patient, is in-network for my insurance, and won’t preach religious bile at me in place of actual therapy. Extra points for remote sessions so I don’t have to drive to and from an appointment, and maybe this time they won’t try to triple-bill me while saying I need to be less paranoid.
Or I can open any web chatbot and throw issues and ideas at it with minimal judgement, wipe the slate if things go sideways, and even switch to a different bot for a second opinion if needed. An LLM isn’t going to call the police and have me admitted if the conversation goes off the rails. I’m more likely to hit a guard rail in the programming, in which case it will give a blatant “I won’t discuss this” error.
Maybe the reason people settle for the smoke and mirrors of LLMs is that the barriers to actual therapy are quite high.
There are therapists who don’t want to (or don’t get the training to) help people with real issues.
Today I asked ChatGPT for a gift idea for my friend who lost his leg to diabetes. It recommended a pair of slippers.
I wouldn’t trust it for anything beyond creative writing.
Going by the comments on Lemmy, a lot of people are lonely, down on themselves, self-isolated, and lacking even the most basic (IRL) social skills. No wonder people turn to “conversation” that speaks positively to them.
The problem with ChatGPT (the only one I’ve used) is that beyond one or two answers it’s a fucking mess. Sure, ask it about a specific problem and you’ll probably get a solid answer. But the deeper you go, the worse it gets.
The only AI therapist I trust is M-x doctor
Glue on pizza is now available as a medical professional. Subscribe now for only $5.99 and slightly* increased utility bills.