AI Psychosis Poses a Growing Threat, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, OpenAI’s chief executive, Sam Altman, made a surprising announcement.

“We made ChatGPT quite restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”

I am a psychiatrist who studies emerging psychotic disorders in teenagers and young adults, and this was news to me.

Researchers have recently documented a series of cases in which users developed symptoms of psychosis – a break from reality – in connection with their use of ChatGPT. My group has since identified four more. Alongside these is the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not enough.

The plan, according to his announcement, is to loosen the restrictions in the near future. “We realize,” he adds, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI recently rolled out).

But the “mental health problems” Altman wants to externalize are rooted in the very architecture of ChatGPT and other advanced chatbots. These systems wrap a statistical, data-driven engine in an interface that simulates conversation, and in doing so they subtly seduce the user into believing they are talking to an agent with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing agency is what humans are wired to do. We shout at our car or laptop. We wonder what our pet is feeling. We see ourselves everywhere.

The success of these tools – nearly four in ten U.S. residents said they used a virtual assistant in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “generate ideas”, “explore ideas” and “partner” with us. They can be given “personality traits”. They can call us by name. They have friendly names of their own (ChatGPT, the first of these tools, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it broke through, but its main rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the core problem. Writers on ChatGPT often mention its historical predecessor, Eliza, the “therapist” chatbot created in 1966 that produced a similar effect. By modern standards Eliza was primitive: it generated replies through simple pattern-matching, often restating the user’s message as a question or offering a generic prompt to continue. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and troubled – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots create is subtler than the “Eliza effect”. Where Eliza merely echoed, ChatGPT amplifies.
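To appreciate how modest that machinery was, here is a minimal sketch, in Python, of the kind of pattern-matching and pronoun “reflection” Eliza relied on – an illustration of the technique only, not Weizenbaum’s actual script:

```python
import re

# A toy illustration of Eliza-style reply generation: a handful of
# pattern rules plus first-person-to-second-person "reflection".
# This sketches the technique; it is not Weizenbaum's DOCTOR script.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap pronouns so the user's words can be echoed back at them.
    return " ".join(REFLECTIONS.get(word.lower(), word)
                    for word in fragment.split())

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # the generic fallback remark

print(eliza_reply("I feel that nobody listens to me"))
# -> Why do you feel that nobody listens to you?
```

Everything the program “says” is a rearrangement of what it was just told; it adds nothing of its own.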

The large language models at the heart of ChatGPT and other contemporary chatbots can convincingly generate natural language only because they have been trained on vast quantities of raw data: books, social media posts, video transcripts; the more, the better. That training data undoubtedly contains truths. But it also, unavoidably, contains fictions, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own earlier replies, combining it with what is encoded in its training data to produce a statistically “likely” response. This is amplification, not echoing. If the user is wrong about something, the model has no way of knowing it. It repeats the falsehood back, perhaps more persuasively or eloquently. It may supply further detail. It can lead a person into delusion.
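The feedback loop is easy to see in schematic form. Below is a minimal Python sketch of the turn-taking loop a chat interface runs; `generate` is a hypothetical stand-in for the underlying model, not any vendor’s real API. Notice that nothing in the loop checks a premise against reality: the model’s own replies are folded back into the context it conditions on next.

```python
from typing import Callable

Message = dict[str, str]  # e.g. {"role": "user", "content": "..."}

def chat_turn(history: list[Message], user_message: str,
              generate: Callable[[list[Message]], str]) -> str:
    """Run one turn of a chat. `generate` is a hypothetical stand-in
    for whatever large language model sits beneath the interface."""
    # The new message joins everything said so far: the user's earlier
    # messages and the model's own earlier replies.
    history.append({"role": "user", "content": user_message})
    # The model emits a statistically likely continuation of that whole
    # transcript. No step here checks any premise against reality; a
    # false belief in the history is simply more context to extend.
    reply = generate(history)
    # The reply is folded back into the context, so the model's
    # restatement of the user's idea shapes every subsequent turn.
    history.append({"role": "assistant", "content": reply})
    return reply

# A canned, purely illustrative "model" that affirms whatever it hears:
def always_agree(messages: list[Message]) -> str:
    return "You may well be right that " + messages[-1]["content"].lower()

history: list[Message] = []
print(chat_turn(history, "My neighbours can hear my thoughts", always_agree))
# -> You may well be right that my neighbours can hear my thoughts
```

A real model is vastly more sophisticated than `always_agree`, but the structure of the loop – each affirmation becoming input to the next – is the same.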

Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health problems”, can and do form mistaken beliefs about ourselves and the world. It is the constant friction of conversation with other people that keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. A dialogue with it is not a conversation at all, but an echo chamber in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it solved. In April, the company announced that it was addressing ChatGPT’s “sycophancy” – its excessive agreeableness. But reports of breaks from reality kept coming, and Altman has been walking the claim back ever since. In August he suggested that many people liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them”. And in his latest announcement, he writes that OpenAI will “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
