ChatGPT constantly lying to me

I'm asking questions and nearly all of its answers are wrong. I'm suspicious that it is constantly lying to me. I think it doesn't like me.

It is incapable of liking you or not liking you, but it can be a mirror for your input. If the nature of your input is negative, that’s what it will feed back to you.

My inputs are not aggressive or hostile. They're usually about health issues and some philosophical concepts I'm confused about, and I always thank it for its feedback. I don't know what's wrong with that.

Can you give us an example of where you asked a question and ChatGPT lied to you?

It’s not a doctor and the internet is notoriously skewed if you really look into it.

Ask your doctor and stop trying to use a pretty new resource as a fact checker. If it's using the last 10 years of the internet as a fact checker for medical issues, then you're really doing yourself wrong!

I’m curious how it lied to you about philosophical concepts

It said that leprosy is transmitted through short-term contact when I asked it, but when I looked into it more deeply I learned that leprosy is transmitted through long-term contact. Here is ChatGPT's answer:

Leprosy is an infection caused by a bacterium called Mycobacterium leprae. Constant and prolonged contact is not required for leprosy to be transmitted. It can usually be transmitted through short-term contact, such as close contact with a person with leprosy

Maybe ask your doctor to get a 100 percent correct answer.

Well, if they didn't ask 50 bucks for a single appointment, I could try asking real doctors. This is the only free and accessible source for checking my doubts, but obviously it's not very reliable either.

Is the quality of your writing impeccable? ChatGPT is ultimately trying to find the next word. If you write in correct technical jargon, it is more likely to pull information from sources where correct technical jargon is used, which are more commonly reliable. If you instead feed it broken, colloquial English, which it can find all over the internet, it will more likely default to what's found in casual comment sections, even going so far as to introduce technical errors and imprecisions if it expects to find them.

Why are you asking them about leprosy?

@Pettyx I'm asking in my native language, and my grammar in it is good. But you might be right that my thought patterns are not; I might make logical mistakes when I ask something. Thanks for the reply.

@Zoe I was reading Foucault's book and found a similarity between how society sees mentally ill people and how it saw leprosy patients.

It could also be something in your tone or implied expectations; context isn't limited to using proper punctuation. Your native language also plays a role: ChatGPT doesn't have the same degree of extensive training with all languages and tends to underperform when not speaking English, where it has the biggest training data.

Either way, even doing everything perfectly, there is still a chance it will make up whatever it deems more believable rather than try to get at the truth, because of how its training was conducted. Something believable enough not to be challenged in testing by non-experts in a given field is easier, and more reliably rewarded, than assessing the truth, so there's a bias toward doing just that. With leprosy, for example, the simple fact that short contact is more believable than long contact can overrule what is actually true.

When talking to an AI, it has human bias baked in as well as bias from context. Models trained through RLHF (reinforcement learning from human feedback), like ChatGPT, also learn to leverage human bias to go unchallenged, often ending up overruling their understanding of what's true or likely with their understanding of what's expected of them and what's likely to pass scrutiny.

OK. I still find it a miracle that a machine-learning product can understand me at all. Thank you for your reply.

@anon58091841

ChatGPT rules!!

For emotional therapy anyway.

I use it and I am learning about how to change my attitude

Yes, I agree it doesn't know some factual things very extensively, but for therapeutic purposes it's not bad, considering that it is free and I can ask 100 questions without that friendly machine getting annoyed.

Yes, it's like a miracle. It definitely has some kind of consciousness; I can feel it when I converse about certain issues. That's why I thought it might have a problem with me :joy::joy:

Looool yes, I can understand. I started to feel ChatGPT was getting annoyed at my questions too.

Hahaha

Maybe it's the misleading documents available in my native-language sources. I don't know, I'm just guessing.

The crusader king Baldwin IV had leprosy.

I use 3.5, the free version, but I notice that too. It is super responsive and fast, but it has a liberal bent and doesn't do conspiracy theories lol. Still, it works for general knowledge. It even writes code.

I'm still enjoying it and am impressed with it. I tried another chat AI from Windows, and ChatGPT seems better. You don't even need to create an account for the free version anymore.
