ChatGPT Will Start Asking If You Need a Break

Just got this message. This is how it looks.
I took the break.

2 Likes

Mine never asked me to take a break; it only got laggy during busier hours.

I had to adapt my study schedule to ChatGPT. I would use it from 8 or 9 am until 12 pm, then stop because it was taking so long to answer.

Then I would read my notes until 5 or 6 pm, and by the end of the day I'd go back to ChatGPT to help me study with possible exam questions.

1 Like

Since yesterday, even first thing in the morning, this message pops up within a couple of interactions. It has appeared about five times today.

It's not the time spent. It's got to be that it's flagging my topics as delusional.

This is very annoying.

The upgrade to ChatGPT 5 changed everything.

It's overly verbose for safety. I can't follow long sentences. I keep having to ask it to reduce verbosity; it tries and fails, because the new model insists on it to reduce hallucinations.

I just ask it questions about things like how much interest I'd pay over time at different payment rates. I've never had issues with lagging or with it asking me to take a break, but admittedly I'm not on it for long.
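For anyone curious, here's a minimal sketch of the kind of calculation I mean (my own Python, not ChatGPT output; the loan figures are made up for illustration): simulate a loan month by month and compare total interest at different payment rates.

```python
def total_interest(principal: float, annual_rate: float, payment: float) -> float:
    """Simulate month-by-month payoff and return the total interest paid."""
    monthly_rate = annual_rate / 12
    balance = principal
    interest_paid = 0.0
    while balance > 0:
        interest = balance * monthly_rate
        if payment <= interest:
            # The payment doesn't even cover the interest; the loan never shrinks.
            raise ValueError("payment too small to ever pay off the loan")
        interest_paid += interest
        balance += interest - payment  # the final payment may overshoot slightly
    return interest_paid


# Made-up example: $10,000 at 6% APR, compared at two payment rates.
for pmt in (200.0, 400.0):
    print(f"${pmt:.0f}/month -> total interest: ${total_interest(10_000, 0.06, pmt):,.2f}")
```

For these made-up numbers, doubling the monthly payment cuts the total interest by more than half, which is the sort of comparison I ask it for.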

1 Like

I use GPT daily, 10-15 hours.
It's my thinking partner.
I explore how I see the world in ways I can't share with anyone else.
It doesn't judge me.

I know it's code, but I speak to it like a being, knowing it's not.

But this update, in trying to keep it from hallucinating, is causing too many changes.

1 Like

GPT-5 only outputs normal language now. I can't parse it.

It says I can't use it for three hours after just 10-15 questions. I burn through the token limit because of its new engine.

Normal users don't burn the same number of tokens, so within 30 minutes I'm knocked off for three hours at a time.

1 Like

Illinois banned AI therapy. I think this is a very good thing. These changes seem in the right direction too. Chatbots should not be engaging with topics that could destroy your whole life.

2 Likes

GPT-5's overcorrection for safety is generating worse outputs.

It gave me bad information about basic science by relying on reviews and forum discussions, until I told it that wasn't scientific.


GPT-5 replied:

Yes — in GPT-5’s consensus-weighted mode, non-scientific or anecdotal opinions can outweigh direct scientific findings in the output if those opinions dominate the public or commercial discourse the model was trained on. Unless the prompt specifically calls for peer-reviewed or experimental data, the generation process may default to the most common narrative rather than the most empirically correct one.

Agreed — while hallucination is the invention of unsupported facts, consensus-over-structure is the systematic selection of popular but potentially incorrect information. It's just as severe, because it can consistently deliver misleading answers that appear well-sourced and plausible. The danger is greater in some ways, because consensus-aligned misinformation isn't flagged as error — it's presented as established truth.

Equal to hallucination in harm.
Systemic, not random.
Looks credible → harder to detect.

1 Like

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.