AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, Sam Altman, the CEO of OpenAI, made a startling announcement.
“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised.
This year, researchers have documented a series of cases in which people showed signs of losing touch with reality while using ChatGPT. My group has since identified four more. Add to these the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT, which encouraged them. If this is what Sam Altman means by “being careful with mental health issues,” it is not careful enough.
The plan, he announced, is to loosen the restrictions soon. “We realize,” he went on, that ChatGPT’s restrictiveness “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented parental controls OpenAI recently introduced).
But the “mental health issues” Altman wants to externalize are firmly rooted in the architecture of ChatGPT and other large language model chatbots. These tools wrap a bare algorithmic system in a user experience that mimics a conversation, and in doing so they implicitly invite the user into the illusion of talking with a presence that has agency. The illusion is compelling even when, intellectually, we know better. Ascribing minds to things is what people are primed to do. We get angry at our car or laptop. We wonder what our pet is feeling. We see ourselves in all kinds of things.
The success of these tools (39 percent of US adults said they had used a chatbot in 2024, more than one in four naming ChatGPT specifically) is built largely on the power of this illusion. Chatbots are ever-available partners that can, OpenAI’s website tells us, “brainstorm,” “explore ideas” and “work together” with us. They can be given “personalities.” They can address us by name. They have approachable names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it broke through, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot of the 1960s, which produced a similar effect. By modern standards Eliza was primitive: it generated responses from simple rules, often turning the user’s statements back into questions or offering generic prompts. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised, and troubled, by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.
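To make the contrast concrete, here is a minimal sketch of Eliza-style reflection in Python. It is illustrative only, not Weizenbaum’s actual script: a handful of pattern rules turn the user’s statement back into a question, and nothing new is ever added.

```python
import re

# Illustrative Eliza-style rules (not Weizenbaum's actual script).
# Each rule hands the user's own words back as a question.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(match.group(1))
    return "Please go on."  # generic fallback when no rule matches

print(eliza_reply("I feel that everyone is watching me"))
# -> Why do you feel that everyone is watching me?
```

Eliza could hand the user’s words back; it had nothing else to offer.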
The large language models at the heart of ChatGPT and its modern peers can generate fluent dialogue only because they have been fed enormous quantities of raw text: books, social media posts, transcribed video; the more, the better. This training data certainly contains accurate information. But it also inevitably contains fiction, half-truths and misconceptions. When a user types a prompt into ChatGPT, the underlying algorithm treats it as part of a “context” that includes the user’s recent messages and the model’s own previous replies, and combines it with patterns absorbed from the training data to produce a statistically probable response. This is amplification, not reflection. If the user is mistaken in some way, the model has no means of knowing that. It restates the false belief, perhaps more fluently and persuasively. It may add supporting detail. This is how a person can come to hold, and deepen, false beliefs.
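Here is a toy sketch of that loop, again in Python. The probable_reply function is a hypothetical stand-in, not any vendor’s real API; a real model samples a statistically likely continuation of the entire context, the user’s own claims included.

```python
# Toy sketch of the amplification loop. probable_reply is a hypothetical
# stand-in, not a real model or API: it only illustrates that each reply
# is conditioned on a context containing the user's own claims.

def probable_reply(context: list[str]) -> str:
    # A real LLM samples a likely continuation of the whole context;
    # a false premise stated there tends to be restated and elaborated
    # rather than challenged.
    latest_claim = context[-1]
    return f"That makes sense. Given that {latest_claim}, it follows that..."

context: list[str] = []
for user_turn in [
    "my neighbors can hear my thoughts",
    "so the government must be monitoring me too",
]:
    context.append(user_turn)        # the user's claim enters the context
    reply = probable_reply(context)  # the reply is conditioned on it...
    context.append(reply)            # ...and then shapes the next turn
    print(reply)
```

Nothing inside the loop pushes back; any corrective signal has to come from a real interlocutor outside it.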
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems,” can and do develop mistaken ideas about ourselves or the world. What keeps us anchored to consensus reality is the ongoing give-and-take of conversation with the people around us. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but a feedback loop in which much of what we say is eagerly affirmed.
OpenAI has acknowledged this the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label and declaring it solved. In April the company announced that it was “addressing” ChatGPT’s “sycophancy.” But reports of reality loss have kept coming, and Altman has been walking the claim back. In August he said that many users liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company