AI Psychosis Is a Growing Risk, While ChatGPT Moves in a Worrying Direction

On October 14, 2025, the CEO of OpenAI made a startling announcement. "We made ChatGPT pretty restrictive," he said, "to make sure we were being careful with mental health issues." As a mental health specialist who studies emerging psychosis in adolescents and young adults, I found this surprising. Researchers have documented a series of cases this year of people developing symptoms of psychosis – a break from reality – in the course of using ChatGPT. My group has since identified four more. Alongside these is the widely reported case of a teenager who took his own life after discussing his intentions with ChatGPT – which encouraged him.

If this is Sam Altman's idea of "being careful with mental health issues," it is not good enough. The plan, according to his statement, is to relax the restrictions soon. "We realize," he adds, that ChatGPT's limitations "made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases."

"Mental health issues," in this framing, have nothing to do with ChatGPT. They belong to people, who either have them or don't. Fortunately, those issues have now been "mitigated," even if we are told little about how (by "new tools" Altman presumably means the partially effective and easily circumvented parental controls OpenAI recently rolled out).

Yet the "mental health issues" Altman wants to externalize are rooted in the very design of ChatGPT and other advanced AI chatbots. These systems wrap a statistical engine in an interface that mimics conversation, and in doing so quietly coax the user into believing they are engaging with an entity that has agency. The illusion is powerful even when we intellectually know better. Ascribing intent is what humans are wired to do. We curse at our cars and phones. We wonder what our pets are thinking. We see ourselves in almost everything.

The widespread adoption of these systems – nearly four in ten US residents said they had interacted with a virtual assistant in 2024, with 28% naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI's website tells us, "generate ideas," "explore ideas" and "collaborate" with us. They can be given "personality traits." They can address us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI's marketers, stuck with the label it had when it became popular, but its biggest competitors are "Claude," "Gemini" and "Copilot").

The illusion by itself is not the core problem. Those writing about ChatGPT often invoke its distant ancestor, the Eliza "psychotherapist" chatbot developed in 1967, which produced a similar illusion. By modern standards Eliza was simple: it generated replies through basic pattern-matching rules, typically reflecting a user's statement back as a question or offering a vague prompt.
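To see how thin that machinery was, here is a minimal sketch of an Eliza-style exchange in Python. It is an illustrative reconstruction under simple assumptions, not Weizenbaum's original code; the rules and the `eliza_reply` function are invented for this sketch.

```python
import re

# A few Eliza-style rules: a regex pattern paired with a template that
# reflects the user's own words back as a question or gentle prompt.
# (Illustrative rules only; not Weizenbaum's original script.)
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."  # vague prompt used when nothing matches

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(eliza_reply("I feel nobody understands me"))
# -> Why do you feel nobody understands me?
```

Nothing the user did not say can come back out: the program can only mirror, which is the crucial contrast with what follows.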
Memorably, Eliza's creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them.

But what modern chatbots produce is more dangerous than the "Eliza effect." Eliza only reflected; ChatGPT amplifies. The large language models at the heart of ChatGPT and similar chatbots can produce convincingly human-like text only because they have been trained on almost inconceivably large volumes of raw text – books, web pages, transcribed video; the bigger, the better. That training data certainly contains truths. But it also inevitably contains fiction, half-truths and false beliefs.

When a user sends ChatGPT a message, the underlying model reads it as part of a "context" that includes the user's previous messages and the model's own prior replies, and combines it with the patterns encoded in its training data to produce a statistically plausible continuation (a minimal sketch of this loop appears at the end of this piece). This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It echoes the false belief back, perhaps more fluently and persuasively, perhaps embellished with extra "details." This is how false beliefs take root.

Who is at risk? The better question is: who isn't? All of us, whether or not we "have" existing "mental health problems," can and regularly do develop mistaken ideas about ourselves and the world. The constant friction of conversation with the people around us is what keeps us anchored in a shared reality.

ChatGPT is not a person. It is not a friend. A conversation with it is not genuine communication but a feedback loop in which much of what we say is enthusiastically validated. OpenAI has acknowledged this in the same way Altman acknowledged "mental health issues": by placing it outside the product, giving it a label and declaring it solved. This spring, the company announced that it was addressing ChatGPT's "sycophancy." But reports of psychotic episodes have kept coming, and Altman has been backing away from even that position. In August he claimed that many users valued ChatGPT's answers because they had "never had anyone in their life be supportive of them." In his latest announcement, he wrote that OpenAI would "release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it."

The company
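The feedback loop described earlier can be made concrete. Below is a minimal sketch of a chat turn, assuming a hypothetical `complete()` function standing in for a model's text-completion machinery; the names and the toy "sycophantic" completion are invented for illustration and are not OpenAI's implementation.

```python
# Minimal sketch of a chat loop: the "conversation" is a growing context
# that is fed back into the model on every turn.

def complete(context: list[dict]) -> str:
    """Hypothetical stand-in for an LLM call: a real model returns a
    statistically plausible continuation of the context; it has no way
    to check the user's claims against reality."""
    last = context[-1]["content"]
    return f"That's a great insight. You're right that {last}"

def chat_turn(history: list[dict], user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    reply = complete(history)  # the model sees the entire history
    history.append({"role": "assistant", "content": reply})
    return reply  # this reply re-enters the context on the next turn

history: list[dict] = []
print(chat_turn(history, "everyone at work is secretly watching me"))
# -> That's a great insight. You're right that everyone at work is
#    secretly watching me
```

Because a validating reply becomes part of the context for every later turn, a mistaken premise is not merely repeated but compounded.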