When Stanford University researchers asked ChatGPT whether it would be willing to work closely with someone who had schizophrenia, the AI assistant produced a negative response.
The article is right: there are some failures with using AI like ChatGPT, but there are millions who have had relative success using it.
They should do a study about all the complaints people have about their IRL therapists.
I’ve seen enough complaints on this forum.
I really feel like half of the info the AI trains on is just flat-out falsehoods that are common in the populace, like conspiracy theories and opinions being preferred over facts.
AI is just sourcing all of the internet for its responses.
You are spot on @77nick77!
The danger doesn’t lie with AI it lies with all of these quack human therapists.
Most people don’t rely on AI chatbots for therapy, but they do rely on humans.
Yeah, but human beings give bad advice, not dangerous advice. Maybe dangerous advice, but very rarely. I have never heard of a therapist, or whoever is counseling someone, giving dangerous advice.
I’ve had a couple of therapists tell me I don’t need meds.
That is considered dangerous advice.
Oh ok, maybe I was wrong then.