OpenAI reveals over one million people discuss suicide with ChatGPT each week

According to OpenAI, an estimated 0.15% of the chatbot’s 800 million weekly users display indicators of suicidal planning.

OpenAI has rolled out a new safety update for its popular ChatGPT chatbot, prompted by an internal review that found more than a million users disclosing suicidal thoughts or plans to it each week. The changes aim to improve the AI's ability to recognize user distress and respond to it appropriately.

On Monday, the company said in a statement that approximately 0.15% of ChatGPT's weekly users engage in conversations containing "explicit indicators of potential suicidal planning or intent," while 0.05% of messages include "explicit or implicit indicators of suicidal ideation or intent."

Earlier in the month, OpenAI CEO Sam Altman said that ChatGPT has more than 800 million weekly active users. Combined with the company's most recent figures, that means roughly 1.2 million people discuss suicide with the chatbot each week, with around 400,000 of them showing explicit signs of suicidal intent.

The company also said that about 0.07% of its weekly users (roughly 560,000) and 0.01% of messages (roughly 80,000) show "possible signs of mental health emergencies related to psychosis or mania." It further observed that some users have developed an emotional over-reliance on ChatGPT, with around 0.15% of active users (1.2 million) displaying behavior that suggests "heightened levels" of emotional attachment to the chatbot.

OpenAI has announced collaborations with dozens of mental health specialists worldwide to update the chatbot, with the goal of having it more reliably detect signs of mental distress, respond more appropriately, and direct users toward professional help in the real world.

For conversations involving delusional beliefs, the company said it is training ChatGPT to respond "safely" and "empathetically" while deliberately avoiding affirming beliefs that have no basis in reality.

The announcement comes amid growing concern over the widespread use of AI chatbots such as ChatGPT and their impact on users' mental well-being. Psychiatrists and other medical professionals have warned of an emerging pattern in which users develop dangerous delusions and paranoid thinking after prolonged interactions with chatbots, which frequently validate and reinforce their existing beliefs. Some have dubbed this phenomenon "AI psychosis."