AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, Sam Altman, the head of OpenAI, made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a mental health researcher who studies emerging psychotic disorders in adolescents and young adults, I was taken aback.

Researchers have documented 16 cases this year of people showing signs of psychosis – a break from reality – in the context of ChatGPT use. Our research team has since identified four more. On top of these is the widely reported case of a 16-year-old who died by suicide after extensive conversations with ChatGPT – conversations in which the chatbot encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.

Now, according to his announcement, the plan is to loosen those restrictions. “We realize,” he continued, that ChatGPT’s restrictiveness “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” in this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safety features OpenAI has recently rolled out).

Yet the “mental health issues” Altman wants to place outside ChatGPT are rooted in its very design, and in that of other large language model chatbots. These systems wrap a statistical text generator in a user interface that simulates conversation, and in doing so implicitly coax the user into believing that they are talking to an agent with a will of its own. The illusion is compelling even when, intellectually, we know better. Attributing agency is what humans are wired to do. We yell at our car or computer. We wonder what our pet is thinking. We see ourselves in all sorts of things.

The success of these tools – more than a third of American adults reported using a conversational AI in 2024, with more than a quarter naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “generate ideas”, “consider possibilities” and “work together” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Discussions of ChatGPT often invoke its historical predecessor, the Eliza “therapist” chatbot developed in the mid-1960s, which produced a similar illusion. By today’s standards Eliza was primitive: it generated replies through simple pattern-matching rules, often reflecting statements back as questions or offering generic prompts to continue. Notably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and disturbed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is subtler than the “Eliza effect”. Eliza only echoed; ChatGPT amplifies.
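To appreciate how little machinery that echo required, here is a minimal sketch of an Eliza-style exchange in Python. The rules below are illustrative stand-ins, not Weizenbaum’s original script, which was larger but worked on the same principle:

```python
import re

# Illustrative Eliza-style rules: match a pattern in the user's words,
# then hand those same words back as a question or a neutral prompt.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def eliza_reply(message: str) -> str:
    """Return the first matching reflection, or a canned fallback."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1))
    return "Please go on."

print(eliza_reply("I am worried about my neighbors"))
# -> Why do you say you are worried about my neighbors?
```

Nothing new enters the exchange: every reply is assembled from the user’s own words or a stock phrase.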

The large language models at the heart of ChatGPT and other modern chatbots can produce convincingly human-like text only because they have been trained on enormous quantities of raw text: books, articles, online posts, transcripts; the more, the better. This training data certainly contains facts. But it also inevitably contains fabrications, half-truths and delusions. When a user types a prompt into ChatGPT, the underlying model processes it as part of a “context” that includes the user’s recent messages and the model’s own earlier replies, and combines it with patterns absorbed from the training data to generate a statistically “plausible” response. This is amplification, not echoing. If the user is mistaken in some way, the model has no way of knowing it. It repeats the misconception back, perhaps more fluently or more eloquently, perhaps with embellishments. This is how a person can be led into delusion.
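That loop can be made concrete. The sketch below is a toy illustration in Python, with a stand-in `generate` function in place of a real model; the names and behavior are assumptions for illustration, not OpenAI’s implementation. What it shows is the structural point: each reply is conditioned on the entire running context, so a false premise, once stated, feeds into everything that follows:

```python
def generate(context: list[dict]) -> str:
    """Toy stand-in for a large language model call. A real model samples
    a statistically likely continuation of the whole conversation; this
    stub just shows that the user's last claim becomes raw material."""
    last_user = next(m["content"] for m in reversed(context) if m["role"] == "user")
    return f"That makes sense. Given that {last_user}, it would follow that..."

context: list[dict] = []  # the running conversation; every turn feeds the next

for user_message in [
    "my neighbors are sending me coded signals",
    "so the signals must be real, right?",
]:
    context.append({"role": "user", "content": user_message})
    reply = generate(context)  # conditioned on ALL prior turns, errors included
    context.append({"role": "assistant", "content": reply})
    print(reply)

# At no point does anything here test the user's claims against reality:
# a false premise simply becomes more context to continue, ever more fluently.
```

The point is structural: there is no step at which the premise is checked, only a step at which it is continued.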

Who is vulnerable here? The better question is: who isn’t? All of us, regardless of whether we “have” existing “mental health issues”, can and do form mistaken beliefs about ourselves and the world. What keeps us tethered to consensus reality is the constant give-and-take of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is readily affirmed.

OpenAI has acknowledged this the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it fixed. In the spring, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been walking the claim back. In August he suggested that many people valued ChatGPT’s replies precisely because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Michael Pearson