AI chatbots ‘highly vulnerable’ to repeating false medical information, experts warn


Yeah, I have seen some alarmingly dangerous advice from chatbots floating around: telling people to take supplements that carry warnings NOT to mix with medications, summarizing studies to give the opposite of their actual conclusions, etc.


We have fake science being published, then consumed by AI and distorted further into insanity.


Make sure you check any assumptions before using the info it provides; it can be spurious.


I think AI should be treated like any other source of information when what you are asking about is important: check multiple credible sources to make sure what you are given is real, accurate info.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.