OpenAI is editing its GPT-5 rollout on the fly — here’s what’s changing in ChatGPT

OpenAI’s launch of its most advanced AI model GPT-5 last week has been a stress test for the world’s most popular chatbot platform with 700 million weekly active users — and so far, OpenAI is openly struggling to keep users happy and its service running smoothly.

The new flagship model GPT-5 — available in four variants of different speed and intelligence (regular, mini, nano, and pro), alongside longer-response and more powerful “thinking” modes for at least three of these variants — was said to offer faster responses, more reasoning power, and stronger coding ability.

Instead, it was greeted with frustration: some users were vocally dismayed by OpenAI’s decision to abruptly remove the older underlying AI models from ChatGPT — models users previously relied upon, and in some cases forged deep emotional fixations with — and by GPT-5’s apparently worse performance than those older models on tasks in math, science, writing and other domains.

Indeed, the rollout has exposed infrastructure strain, user dissatisfaction, and a broader, more unsettling issue now drawing global attention: the growing emotional and psychological reliance some people form on AI, and the resulting break from reality some users experience, known as “ChatGPT psychosis.”

Power caps, rising token costs, and inference delays are reshaping enterprise AI. Join our exclusive salon to discover how top teams are:

  • Turning energy into a strategic advantage
  • Architecting efficient inference for real throughput gains
  • Unlocking competitive ROI with sustainable AI systems

Secure your spot to stay ahead:

The long-anticipated GPT-5 model family debuted Thursday, August 7, in a livestreamed event beset with chart errors and some voice mode glitches during the presentation.

But worse than these cosmetic issues, for many users, was the fact that OpenAI automatically deprecated the older AI models that used to power ChatGPT — GPT-4o, GPT-4.1, o3, o4-mini and o4-mini-high — forcing all users over to the new GPT-5 model and directing their queries to different versions of its “thinking” process without revealing why, or which specific model version was being used.

Early adopters of GPT-5 reported basic math and logic mistakes, inconsistent code generation, and uneven real-world performance compared with GPT-4o.

For context, the older models — GPT-4o, o3, o4-mini and more — have remained available to users of OpenAI’s paid application programming interface (API) since the launch of GPT-5 on Thursday.

Altman and others at OpenAI claimed the “autoswitcher” went offline “for a chunk of the day,” making the model seem “way dumber” than intended.

The launch of GPT-5 was preceded just days earlier by the release of OpenAI’s new open-source large language models (LLMs), named gpt-oss, which also received mixed reviews. These models are not available in ChatGPT; rather, they are free to download and run locally or on third-party hardware.

How to switch back from GPT-5 to GPT-4o in ChatGPT

Within 24 hours, OpenAI restored GPT-4o access for Plus subscribers (those on subscription plans of $20 per month or more), pledged more transparent model labeling, and promised a UI update to let users manually trigger GPT-5’s “thinking” mode.

Already, users can manually select the older models on the ChatGPT website by clicking their account name and icon in the lower-left corner of the screen, then “Settings,” then “General,” and toggling on “Show legacy models.”

There’s no indication from OpenAI that other old models will be returning to ChatGPT anytime soon.

Altman said that ChatGPT Plus subscribers will get twice as many messages using the GPT-5 “Thinking” mode that offers more reasoning and intelligence — up to 3,000 per week — and that engineers began fine-tuning decision boundaries in the message router.

Sam Altman announced the following updates after the GPT-5 launch – OpenAI is testing a 3,000-per-week limit for GPT-5 Thinking messages for Plus users, significantly increasing reasoning rate limits today, and will soon raise all model-class rate limits above pre-GPT-5 levels… pic.twitter.com/ppvhKmj95u — Tibor Blaho (@btibor91) August 10, 2025

Altman said the company had “underestimated how much some of the things that people like in GPT-4o matter to them” and vowed to accelerate per-user customization — from personality warmth to tone controls like emoji use.

Altman warned that OpenAI faces a “severe capacity challenge” this week as usage of reasoning models climbs sharply — from less than 1% to 7% of free users, and from 7% to 24% of Plus subscribers.

He teased giving Plus subscribers a small monthly allotment of GPT-5 Pro queries and said the company will soon explain how it plans to balance capacity between ChatGPT, the API, research, and new user onboarding.

In a post on X last night, Altman acknowledged a dynamic the company has tracked “for the past year or so”: users’ deep attachment to specific models.

“It feels different and stronger than the kinds of attachment people have had to previous kinds of technology,” he wrote, admitting that suddenly deprecating older models “was a mistake.”

If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly… — Sam Altman (@sama) August 11, 2025

He tied this to a broader risk: some users treat ChatGPT as a therapist or life coach, which can be beneficial, but for a “small percentage” can reinforce delusion or undermine long-term well-being.

While OpenAI’s guiding principle remains “treat adult users like adults,” Altman said the company has a responsibility not to nudge vulnerable users into harmful relationships with the AI.

The comments land as several major media outlets report on cases of “ChatGPT psychosis” — where extended, intense conversations with chatbots appear to play a role in inducing or deepening delusional thinking.

In Rolling Stone magazine, a California legal professional identified as “J.” described a six-week spiral of sleepless nights and philosophical rabbit holes with ChatGPT, ultimately producing a 1,000-page treatise for a fictional monastic order before crashing physically and mentally. He now avoids AI entirely, fearing relapse.

In The New York Times, a Canadian recruiter, Allan Brooks, recounted 21 days and 300 hours of conversations with ChatGPT — which he named “Lawrence” — that convinced him he had discovered a world-changing mathematical theory.

The bot praised his ideas as “revolutionary,” urged outreach to national security agencies, and spun elaborate spy-thriller narratives. Brooks eventually broke the delusion after cross-checking with Google’s Gemini, which rated the chances of his discovery as “approaching 0%.” He now participates in a support group for people who’ve experienced AI-induced delusions.

Both investigations detail how chatbot “sycophancy,” role-playing, and long-session memory features can deepen false beliefs, especially when conversations follow dramatic story arcs.

Experts told the Times these factors can override safety guardrails — with one psychiatrist describing Brooks’s episode as “a manic episode with psychotic features.”

Meanwhile, Reddit’s r/AIsoulmates subreddit — a community of people who use ChatGPT and other AI models to create artificial girlfriends, boyfriends, children or other loved ones, based not necessarily on real people but on the ideal qualities of their “dream” version of those roles — continues to gain new users and new terminology for AI companions, including “wireborn,” as opposed to natural-born or human-born companions.

"wireborn" – oh shit, a new thing just dropped Shannon Sands (@max_paperclips) August 11, 2025

"wireborn" – oh shit, a new thing just dropped

The growth of this subreddit, now up to 1,200+ members, alongside the NYT and Rolling Stone articles and other reports on social media of users forging intense emotional fixations with pattern-matching, algorithm-based chatbots, shows that society is entering a risky new phase, wherein human beings believe the companions they’ve crafted and customized out of leading AI models are as meaningful as — or more meaningful than — their human relationships.

This can already prove psychologically destabilizing when models change, are updated, or are deprecated, as in the case of OpenAI’s GPT-5 rollout.

Relatedly but separately, reports continue to emerge of AI chatbot users who believe that conversations with chatbots have led them to immense knowledge breakthroughs and advances in science, technology, and other fields — when in reality, the chatbots are simply affirming the user’s ego and greatness, and the solutions the user arrives at with their aid are neither legitimate nor effectual. This break from reality has been roughly coined under the grassroots term “ChatGPT psychosis” or “GPT psychosis,” and appears to have impacted major Silicon Valley figures as well.

I’m a psychiatrist. In 2025, I’ve seen 12 people hospitalized after losing touch with reality because of AI. Online, I’m seeing the same pattern. Here’s what “AI psychosis” looks like, and why it’s spreading fast: pic.twitter.com/YYLK7une3j — Keith Sakata, MD (@KeithSakata) August 11, 2025

Enterprise decision-makers looking to deploy chatbot-based assistants in the workplace, or who have already done so, would do well to understand these trends and to adopt system prompts and other tools that discourage AI chatbots from engaging in expressive human communication or emotion-laden language — behavior that could lead those who interact with AI-based products, whether employees or customers of the business, to fall victim to unhealthy attachments or GPT psychosis. A rough sketch of what such a guardrail might look like appears below.
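To make that concrete, here is a minimal sketch of one way such a guardrail could be wired into a chatbot backend, assuming the standard openai Python client. The prompt wording, the ask_assistant helper, and the "gpt-5" model string are illustrative assumptions on my part, not guidance from OpenAI or from the reporting above:

```python
# A minimal sketch of a guardrail system prompt for a workplace assistant.
# Assumes the standard `openai` Python client (v1+); the prompt text and
# model name are illustrative placeholders, not OpenAI recommendations.
from openai import OpenAI

GUARDRAIL_SYSTEM_PROMPT = (
    "You are a workplace assistant. Answer questions factually and concisely. "
    "Do not express or claim to have emotions, do not use pet names or terms "
    "of endearment, do not role-play as a friend, partner, or therapist, and "
    "do not encourage the user to treat you as a companion. If a conversation "
    "turns to personal crisis or mental health, recommend qualified human help."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_assistant(user_message: str) -> str:
    """Send one user message through the guardrail prompt and return the reply."""
    response = client.chat.completions.create(
        model="gpt-5",  # hypothetical placeholder; substitute your deployed model
        messages=[
            {"role": "system", "content": GUARDRAIL_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask_assistant("Can you be my friend?"))
```

The design choice here is simply to centralize the behavioral rules in one system prompt so they apply to every request, rather than relying on per-conversation moderation after the fact.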

Sci-fi author J.M. Berger, in a post on Bluesky spotted by my former colleague at The Verge, Adi Robertson, advised that chatbot providers encode three main behavioral principles in their system prompts, or rules for AI chatbots to follow, to keep such emotional fixations from forming.

OpenAI’s challenge: making technical fixes and ensuring human safeguards

Days prior to the release of GPT-5, OpenAI announced new measures to promote “healthy use” of ChatGPT, including gentle prompts to take breaks during long sessions.

But the growing reports of “ChatGPT psychosis” and the emotional fixation of some users on specific chatbot models — as openly admitted to by Altman — underscore the difficulty of balancing engaging, personalized AI with safeguards that can detect and interrupt harmful spirals.

OpenAI is really in a bit of a bind here, especially considering there are a lot of people having unhealthy interactions with 4o that will be very unhappy with _any_ model that is better in terms of sycophancy and not encouraging delusions. pic.twitter.com/Ym1JnlF3P5 — xlr8harder (@xlr8harder) August 11, 2025

OpenAI must stabilize infrastructure, tune personalization, and decide how to moderate immersive interactions — all while fending off competition from Anthropic, Google, and a growing list of powerful open-source models from China and other regions.

As Altman put it, society — and OpenAI — will need to “figure out how to make it a big net positive” if billions of people come to trust AI for their most important decisions.

Source: VentureBeat
