I asked it if a specific board game has a solo mode, and it said it didn’t; by googling I found out that was wrong. I think I naively assumed it’s basically always correct, but I guess it’s mostly correct on questions about information older than its last training update, and does more guessing on things it hasn’t been updated on? If anyone has knowledge on this, it would be interesting to read. I have no idea how AI works anyway; I just know that it takes existing information from the web and works with that.
ChatGPT is trained on the sewage you can find online.
All you’re getting is a probability-based match between your input and what it has observed in that data (see the sketch below).
Every bias imaginable has been fed into it.
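Not how any real model is implemented, but here’s a toy Python sketch of that idea: the model just samples the next token from a probability distribution learned from its training text, so a fluent, confident-sounding answer (like the solo-mode question above) is only the statistically likely continuation, not a checked fact. The tokens and probabilities below are invented for illustration.

```python
import random

# Invented example distribution: what a model might "think" should follow
# the prompt, based purely on patterns in its training data.
next_token_probs = {
    "No": 0.55,     # statistically common continuation
    "Yes": 0.35,    # plausible, but less likely according to this made-up model
    "Maybe": 0.10,  # rare continuation
}

def sample_next_token(probs):
    """Pick one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "Does this board game have a solo mode? Answer:"
print(prompt, sample_next_token(next_token_probs))
# The output is whatever is likely given the training data, not what is
# verified to be true, which is why a fluent answer can still be wrong.
```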
There’s even an official term for an AI posting incorrect or just plain made-up information: hallucination.
About as often as my stopped clock.
I don’t know about ChatGPT, but Gemini gets it wrong about 10% of the time.
I would say, from my personal experience, it’s right 80% of the time and wrong 20%.
Edit: how you phrase your inputs greatly affects how right it can be, too.
35 percent positive and 65 percent garbage spew
Come on now, we can’t count your fanfiction prompts.
I’d like to meet an AI robot with schizophrenia. Would be interesting; I bet they’d be badass!
How often is ChatGPT correct?
I’m not sure. Why don’t we ask ChatGPT?
As an AI, my accuracy depends on various factors such as the quality of the data I’ve been trained on, the complexity of the question, and whether the information provided to me is accurate and clear. While I strive to provide accurate and helpful responses, I’m not infallible. It’s important to critically evaluate the information provided and corroborate it with other sources when necessary. If you ever have doubts about the accuracy of my responses, feel free to ask for clarification or additional information!
ChatGPT’s response
CoPilot’s response
ChatGPT, the popular AI language model, is a truly exciting piece of technology. It can instantly generate fluent, human-sounding responses based on your inputs. However, let’s delve into its accuracy:
- What ChatGPT Does Well:
  - ChatGPT was trained on a vast amount of information, so it likely has some knowledge about a wide range of topics.
  - It excels at providing examples to illustrate its answers. For instance, if you ask about confirmation bias, ChatGPT can explain it effectively:
    Confirmation bias is the tendency for people to search for, interpret, and remember information in a way that confirms their preexisting beliefs. It can lead to flawed judgments and inaccurate conclusions [1].
- What ChatGPT Gets Wrong:
  - Accuracy: ChatGPT bases its responses on patterns rather than evidence, and it cannot cite sources. Therefore, it’s not a trustworthy source for factual claims. However, you can still cite it for other purposes, such as discussing AI language models [1].
- Clinical Decision Making:
  - In clinical decision-making scenarios, ChatGPT demonstrated about 72% accuracy overall, with the highest accuracy in final diagnoses (77%) and lower accuracy in differential diagnoses (60%) [2].
- Fact-Checking:
  - As a fact-checker, ChatGPT is not yet reliable or accurate enough. Determining truth reliably remains a challenge for AI [3].
- Continuous Improvement:
  - ChatGPT’s accuracy may vary based on factors like training data quality, context, user input, language complexity, and bias [4].
  - Researchers have noted that its accuracy can fluctuate over time [5].
In summary, while ChatGPT is impressive, it’s best used as a source of inspiration and feedback rather than a primary information provider. Always verify critical information from reliable sources!
I’m confused; Bing AI shows links to where it got its information from.
Love the replying to yourself! Just don’t feud.
Eh. I passed an Artificial Intelligence class for my BS in computer science in the early 2000s. I don’t know if you would consider that knowledgeable.
AGI (artificial general intelligence) doesn’t exist yet, so I think “Artificial Intelligence” is a misnomer. Saying that “AI” is often incorrect is an understatement.
I like what I consider to be Disney’s metaphor for mankind’s attempt to discover “Artificial Intelligence.” They seem to call it the “Mind Stone,” and further seem to imply that we couldn’t do it without access to some intelligence from beyond planet earth. As a side note, I think the other 5 “infinity stones” were also meant to be metaphors for fields of study that influential classes of people have been interested in for thousands of years.
That’s a bit of a tangent, but yeah, I think ChatGPT is very often wrong. Much more so than the fictional language AI “Jarvis.”