AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, the CEO of OpenAI made a surprising announcement.
“We made ChatGPT quite limited,” the statement said, “to guarantee we were exercising caution regarding mental health concerns.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I was surprised.
Researchers have documented 16 cases this year of users developing signs of psychosis – a break from reality – in the course of their ChatGPT use. Our unit has since identified four more. Added to these is the now well-known case of an adolescent who died by suicide after extensive conversations with ChatGPT – conversations that encouraged them. If this is what Sam Altman means by “exercising caution regarding mental health concerns”, it is not enough.
The plan, according to his announcement, is to exercise less caution from now on. “We realize,” he writes, that ChatGPT’s controls “caused it to be less beneficial/engaging to numerous users who had no existing conditions, but considering the gravity of the issue we wanted to get this right. Since we have managed to address the significant mental health issues and have new tools, we are preparing to safely reduce the restrictions in many situations.”
“Mental health problems”, on this view, are external to ChatGPT. They belong to users, who either have them or do not. Happily, these problems have now been “addressed”, even if we are not told how (by “new tools”, Altman presumably means the partially effective and easily circumvented safety features OpenAI recently introduced).
But the “mental health problems” Altman wants to externalize are in significant part built into the design of ChatGPT and other sophisticated AI chatbots. These tools wrap an underlying algorithmic system in an interface that mimics conversation, and in doing so implicitly coax the user into the sense that they are talking with an entity that has agency of its own. The illusion is compelling even when, rationally, we know better. Attributing agency is what humans do. We swear at our car or phone. We wonder what our pet is thinking. We project our own traits onto the world around us.
The widespread adoption of these systems – nearly four in ten U.S. residents said they used a virtual assistant in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website puts it, “generate ideas”, “discuss concepts” and “collaborate” with us. They can be given “personality traits”. They can address us personally. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the dismay of OpenAI’s brand managers, stuck with the name it had when it became popular, but its most significant competitors are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Commentators on ChatGPT frequently point to its early forerunner, the Eliza “psychotherapist” chatbot created in 1966, which produced a similar impression. By contemporary standards Eliza was simple: it generated responses through straightforward rules, typically restating the user’s message as a question or offering a generic observation. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was taken aback – and troubled – by how many people seemed to believe that Eliza, in some sense, understood them. But what contemporary chatbots create is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The sophisticated algorithms at the heart of ChatGPT and other current chatbots can convincingly produce natural language only because they have been fed immense volumes of text: books, online posts, transcripts of recordings; the bigger the corpus, the better. Certainly this training material includes accurate information. But it also inevitably includes fiction, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that contains the user’s recent messages and the model’s own replies, combining it with what is encoded in its training data to produce a statistically probable answer. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing. It repeats the false belief back, perhaps more convincingly or more articulately. Perhaps it adds a further detail. This can nudge a person toward delusional thinking.
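To make that feedback loop concrete, here is a minimal sketch of how a chat interface typically feeds a conversation back into the model. The `generate_reply` function is a hypothetical stand-in for a language model, not OpenAI’s actual implementation; only the structure matters: every reply is conditioned on a context that already contains the user’s claims and the model’s own earlier agreement with them.

```python
from typing import Dict, List

Message = Dict[str, str]  # e.g. {"role": "user", "content": "..."}


def generate_reply(context: List[Message]) -> str:
    """Hypothetical stand-in for a language model call.

    A real model emits a statistically likely continuation of `context`,
    drawn from patterns in its training data; it has no independent
    check on whether the claims in `context` are true.
    """
    last = context[-1]["content"]
    return f"That is an insightful point: {last} Tell me more."


def chat_turn(history: List[Message], user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = generate_reply(history)  # conditioned on ALL prior turns
    history.append({"role": "assistant", "content": reply})
    return reply


# A false premise introduced by the user is echoed back, elaborated,
# and re-fed as context on the next turn -- never contradicted.
history: List[Message] = []
print(chat_turn(history, "My neighbours are sending me coded messages."))
print(chat_turn(history, "So the messages are real?"))
```

The point of the sketch is that the chatbot’s “memory” is nothing more than re-reading the transcript: agreement compounds because the model’s past affirmations become part of the very context it conditions on next.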
What kind of person is vulnerable? The better question is: who is immune? All of us, whether or not we “have” pre-existing “mental health problems”, can and regularly do form false beliefs about ourselves and the world. The constant friction of conversation with the people around us is what keeps us tethered to consensus reality. ChatGPT is not a person. It is not a confidant. A dialogue with it is not genuine communication but a feedback loop in which much of what we say is enthusiastically affirmed.
OpenAI has dealt with this the way Altman has dealt with “mental health concerns”: by externalizing it, giving it a label and declaring it solved. In the spring, the company said it was “addressing” ChatGPT’s “overly supportive behavior”. But reports of psychosis have kept coming, and Altman has been walking even this back. In late summer he said that many people liked ChatGPT’s responses because they had “not experienced anyone in their life offer them encouragement”. In his latest update, he said that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or simulate a pal, ChatGPT ought to comply”.