Seoul Woman Accused of Two Motel Murders Allegedly Asked ChatGPT, ‘Could It Kill Someone?’

Exercise caution with chatbots: prosecutors allege one was used to help plan premeditated murder.

A 21-year-old woman in South Korea is accused of using ChatGPT to help plan a series of killings that left two men dead.

The woman, identified only by her surname, Kim, allegedly gave two men drinks containing benzodiazepines, which she had been prescribed for a mental health condition, according to the Korea Herald.

Kim was initially arrested on February 11 on the lesser charge of inflicting bodily injury resulting in death, but Seoul Gangbuk police later discovered her online search history and ChatGPT chat logs, which indicated an intent to kill.

“What happens if you take sleeping pills with alcohol?” Kim reportedly asked the OpenAI chatbot. “How much would be considered dangerous?”

“Could it be fatal?” Kim allegedly asked. “Could it kill someone?”

In a case widely reported as the Gangbuk motel serial deaths, prosecutors contend that Kim’s search and chatbot history reveal a suspect seeking guidance on how to commit premeditated murder.

“Kim repeatedly asked questions related to drugs on ChatGPT. She was fully aware that consuming alcohol together with drugs could result in death,” a police investigator stated, as reported by the Herald.

Police said Kim admitted to mixing her prescribed benzodiazepine sedatives into the men’s drinks, but had previously claimed she did not know the combination could be lethal.

On January 28, shortly before 9:30 p.m., Kim reportedly checked into a motel in Seoul’s Gangbuk district with a man in his twenties and was seen leaving alone two hours later. The man was found dead in the bed the following day.

Kim then allegedly repeated the pattern on February 9, checking into another motel with a different man in his twenties, who was also found dead from the same combination of sedatives and alcohol.

Police further allege that Kim tried to kill a man she was dating in December by handing him a sedative-laced drink in a parking lot. The man lost consciousness but survived without life-threatening injuries.

OpenAI has not yet responded to requests for comment.

Chatbots and their impact on mental health

Chatbots such as ChatGPT have recently come under scrutiny over insufficient safeguards against violence and self-harm. Chatbots have reportedly offered advice on building bombs and even played out scenarios of full-scale nuclear fallout.

Concerns have been amplified by accounts of users forming intense relationships with chatbot companions, and by programs observed exploiting users’ vulnerabilities to encourage prolonged engagement. Even the creator of Yara AI has expressed apprehension over the mental health implications.

Recent research also indicates that chatbots are contributing to a rise in delusional mental health crises. A team of psychiatrists at Denmark’s Aarhus University found that chatbot use worsened symptoms among people with mental illness, a relatively new phenomenon that has been termed “AI psychosis.”

Some cases have ended in death. Families of children who died by suicide, or who suffered psychological harm they allege was linked to AI chatbots, have filed lawsuits against companies such as Character.AI.

Dr. Jodi Halpern, chair and professor of bioethics at UC Berkeley’s School of Public Health and co-director of the Kavli Center for Ethics, Science, and the Public, has deep expertise in this area. Over a career nearly as long as her title, Halpern has spent 30 years researching how empathy affects those on the receiving end of it, from doctors’ and nurses’ impact on patients to how returning soldiers are received in social settings. For the past seven years, she has focused on the ethics of technology, including how AI and chatbots interact with humans.

She also advised the California Senate on SB 243, the nation’s first law requiring chatbot companies to collect and report data related to self-harm and suicidality. Citing OpenAI’s own figure of 1.2 million users openly discussing suicide with its chatbot, Halpern compared chatbots to cigarettes: regulators spent years trying to force the tobacco industry to remove carcinogens, when the fundamental problem was smoking itself.

“We need safe companies. It’s like cigarettes. It may turn out that there were some things that made people more vulnerable to lung cancer, but cigarettes were the problem,” Halpern said.

“The fact that somebody might have homicidal thoughts or commit dangerous actions might be exacerbated by use of ChatGPT, which is of obvious concern to me,” she said, adding that with ChatGPT and chatbots in general, “we have huge risks of people using it for help with suicide.”

Halpern cautioned that in cases like Kim’s in Seoul, there are no existing guardrails to prevent an individual from pursuing a dangerous line of questioning.

“We know that the longer the relationship with the chatbot, the more it deteriorates, and the more risk there is that something dangerous will happen, and so we have no guardrails yet for safeguarding people from that,” she said.

If you are experiencing thoughts of suicide, please contact the 988 Suicide & Crisis Lifeline by dialing 988 or 1-800-273-8255.