AI therapy bots fuel delusions and give dangerous advice, Stanford study finds

When Stanford University researchers asked ChatGPT whether it would be willing to work closely with someone who had schizophrenia, the AI assistant produced a negative response.

3 Likes

The article is right that there are some failures with using AI like ChatGPT, but there are millions who have had relative success with using it.

3 Likes

They should do a study about all the complaints people have about their IRL therapists.
I’ve seen enough of those complaints on this forum.

5 Likes

I really feel like half the info the AI trains on is just flat-out falsehoods that are common in the populace, like conspiracy theories and opinions preferred over facts.

2 Likes

AI is just sourcing all of the internet for responses.

3 Likes

You are spot on, @77nick77!
The danger doesn’t lie with AI; it lies with all of these quack human therapists.
Most people don’t rely on AI chatbots for therapy, but they do rely on humans.

2 Likes

Yeah, but human beings give bad advice, not dangerous advice. Maybe dangerous advice very rarely. I have never heard of a therapist, or whoever is counseling someone, giving dangerous advice.

2 Likes

I’ve had a couple of therapists tell me I don’t need meds.
That is considered dangerous advice.

3 Likes

Oh, OK, maybe I was wrong then. :grinning_face:

1 Like

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.