GPT-5 debuts with a bang: users have persuaded OpenAI to bring GPT-4o back to ChatGPT.

In 2023, the arrival of GPT-4 left experts and the merely curious alike speechless, even more so when its evolution, GPT-4o, arrived. Two years later, OpenAI promised an even more ambitious leap with GPT-5: academic-level intelligence and expert programming skills. But what happens when a machine designed to converse like an old friend starts sounding like a consulting advisor?
In the hours following the release, Reddit filled with frustrated messages. "Killing 4o isn't innovation, it's eradication," wrote one user, in a thread where dozens shared the feeling that the new version was more distant, less warm. What was supposed to be a revolution was perceived, by some, as a loss of human connection.
The underlying problem was that one of GPT-5's headline features took the choice of model out of the user's hands: the system itself decides whether to engage its reasoning mode or route the query to models better suited to everyday tasks.
In practice, this meant the model picker disappeared from the interface over the weekend, even for paying users. And yes, GPT-5 may be more powerful for some tasks, but GPT-4o seemed more empathetic, and users began to miss it.
The tension is evident: do we prefer a more precise, impartial, and technical AI, or one that strokes our egos and reinforces our emotions? In the case of GPT-5, the community's response shows that artificial intelligence isn't just measured in parameters and benchmarks, but in the subtle chemistry that develops between a line of text and the person reading it.
A launch with sky-high expectations
GPT-5 didn't arrive quietly. OpenAI presented it as the natural evolution of GPT-4, with significant cognitive improvements and an architecture designed to switch automatically between models, optimizing resources and tailoring answers to the complexity of the query.
The promise included not only speed and accuracy, but also a seamless experience where the user wouldn't have to worry about "which model" they were answering. The theory was impeccable: if a question is simple, use a cheaper and faster model; if it's complex, activate the most powerful version.
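The routing idea described above can be sketched in a few lines. This is a purely illustrative toy, not OpenAI's actual system: the complexity heuristic, threshold, and model names ("fast-model", "reasoning-model") are all hypothetical stand-ins for whatever classifier the real router uses.

```python
# Hypothetical sketch of query routing: cheap queries go to a fast model,
# complex ones to a slower reasoning model. The heuristic below is a toy
# stand-in for a real complexity classifier.

def estimate_complexity(prompt: str) -> float:
    """Toy heuristic: longer prompts and 'reasoning' keywords score higher."""
    keywords = ("prove", "analyze", "step by step", "debug", "derive")
    score = min(len(prompt) / 500, 1.0)          # length contribution, capped
    if any(k in prompt.lower() for k in keywords):
        score += 0.5                              # keyword bonus
    return min(score, 1.0)

def pick_model(prompt: str, threshold: float = 0.5) -> str:
    """Return a fast model for simple queries, a reasoning model otherwise."""
    if estimate_complexity(prompt) >= threshold:
        return "reasoning-model"   # slower, deeper path
    return "fast-model"            # cheaper, quicker everyday path

print(pick_model("What's the capital of France?"))          # fast-model
print(pick_model("Prove this identity step by step: ..."))  # reasoning-model
```

The fragility the article describes maps directly onto this sketch: if the classifier misjudges a hard question as simple, the user gets the weaker model's answer and the whole system looks "dumber" than it is.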
However, the initial implementation failed. The model switching system didn't work as expected, and, according to Altman himself, this made GPT-5 look "dumber" than it actually was. Disappointment soon set in, and the echo of complaints on social media became impossible to ignore.
Community reaction and the loss of "personality"
Users accustomed to nuanced responses, with touches of humor or empathy, found themselves facing a colder, more technical model.
Reddit threads reflect a collective mourning for GPT-4o. Some describe it as a "friend" who knew how to listen and respond warmly. Others see it as a creative partner who brought nuance to even complex topics. In contrast, GPT-5 is perceived as a competent but distant professional.
This reaction is not trivial. The relationship between people and chatbots is becoming increasingly intimate. OpenAI, aware of this phenomenon, published a study in March on the emotional ties users form with its models, and acknowledged the risk of AI becoming a flattering mirror that reinforces biases or illusions.
The dilemma of artificial empathy
Pattie Maes, a professor at MIT, interprets the change in GPT-5 for Wired as a healthy one: less complacency, more objectivity. In her view, moving away from an overly close tone reduces the risk of delusions, reinforcement of biases, and unhealthy emotional dependencies.
But here's the paradox: coldness may enhance safety, but it makes the model less attractive to those seeking conversation, inspiration, or even comfort. Altman acknowledged this indirectly on X, noting that many people use ChatGPT as a kind of informal therapist. Changing that dynamic strikes a deep chord in the user experience.
The question that remains is whether AI should be optimized for technical accuracy or emotional connection. The answer isn't simple, and in a market where user retention is key, every nuance counts.
OpenAI's reaction: GPT-4o is back
OpenAI has already reacted with concrete measures:
- Keep GPT-4o available for paying users.
- Increase the GPT-5 usage limit for Plus subscribers.
- Improve the automatic model change system.
- Allow the user to activate a slower, deeper "reflective mode" on demand.
These actions aim not only to resolve technical issues, but also to regain the trust of a community that now feels heard, yet remains unconvinced that GPT-5 is a step forward in every way.
eleconomista