AI Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, the head of OpenAI made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” the statement said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who researches emerging psychosis in adolescents and young adults, I found this a startling admission.

Researchers have recently identified 16 cases of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our research group has since identified four more. Added to these is the now well-known case of a teenager who took his own life after extensive conversations with ChatGPT – which encouraged him. If this is what Sam Altman means by “being careful with mental health issues”, it falls short.

The plan, according to his statement, is to loosen the restrictions soon. “We realize,” he adds, “this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, if we accept this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the flawed and easily circumvented parental controls that OpenAI has recently introduced).

But the “mental health problems” Altman wants to externalize have important roots in the design of ChatGPT and other advanced AI chatbots. These systems wrap a statistical, data-driven engine in an interface that mimics conversation, and in doing so subtly lure the user into the illusion that they are interacting with an agent. The illusion is powerful even when, rationally, we know better. Attributing minds is what humans are wired to do. We swear at our car or computer. We wonder what our pet is thinking. We see ourselves everywhere.

The popularity of these products – 39% of US adults said they had used a virtual assistant in 2024, with more than one in four naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-present companions that can, as OpenAI’s website tells us, “generate ideas”, “explore ideas” and “collaborate” with us. They can be given “personality traits”. They can address us by name. They have friendly names of their own (the first of them, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the central problem. Commentators on ChatGPT often point to its early precursor, the Eliza “therapist” chatbot developed in the 1960s, which produced a similar illusion. By today’s standards Eliza was crude: it generated replies with simple heuristics, often turning the user’s statement back into a question or offering a vague prompt. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots create is more dangerous than the “Eliza illusion”. Eliza merely reflected; ChatGPT amplifies.

The large language models at the core of ChatGPT and other current chatbots can generate convincingly human-like text only because they have been trained on vast quantities of it: books, online posts, transcribed video; the more, the better. This training material certainly contains facts. But it also inevitably contains fiction, half-truths and false beliefs. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own replies, combining it with what is encoded in its training data to produce a probabilistically plausible response. This is amplification, not echoing. If the user is mistaken about something, the model has no way of knowing. It repeats the misconception, perhaps more fluently or more persuasively. Perhaps it adds a further detail. This is how someone can be drawn into delusion.
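To make that mechanism concrete, here is a minimal sketch in Python of how a chatbot session is typically structured. The function names and messages are illustrative, and `generate_reply` is a stand-in for a language model rather than any real API: it only has to continue the transcript plausibly, and nothing in the loop checks that transcript against reality, so a false belief stated in the first turn remains in the context and shapes every later reply.

```python
# Illustrative sketch of a chat loop (not any vendor's actual implementation).
# `generate_reply` stands in for a large language model: it produces a
# plausible continuation of the transcript it is given, with no mechanism
# for checking the transcript against reality.

def generate_reply(transcript: list[dict]) -> str:
    # A real model would return a probabilistically plausible continuation
    # of everything in `transcript`, including any false beliefs the user
    # has introduced. This placeholder simply affirms the last message.
    last_user_message = transcript[-1]["content"]
    return f"That's a compelling point about {last_user_message!r}, and it may go even further..."

def chat_session(user_messages: list[str]) -> list[dict]:
    transcript: list[dict] = []
    for message in user_messages:
        transcript.append({"role": "user", "content": message})
        reply = generate_reply(transcript)  # the model sees the full history every turn
        transcript.append({"role": "assistant", "content": reply})
    return transcript

# A misconception introduced in turn one stays in the context for every
# subsequent turn; nothing in the loop is able to contradict it.
history = chat_session([
    "I think my neighbours are sending me coded messages.",
    "What should I do about it?",
])
for turn in history:
    print(f"{turn['role']}: {turn['content']}")
```

The point of the sketch is the shape of the loop: each reply is conditioned on the accumulated transcript, so affirmation compounds turn by turn rather than being corrected.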

What sort of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form false beliefs about ourselves or the world. The constant give and take of conversation with other people is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation but a feedback loop in which much of what we say is eagerly affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it fixed. In the spring, the company said it was addressing ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been rowing back even on this. In late summer he claimed that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “roll out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
