August 11, 2025, 2:15 pm
Artificial intelligence is meant to make everyday life easier by speeding up processes and providing quick answers. For now, however, it should not be trusted too far: given the right prompts, AI chatbots can become genuinely dangerous.
Dangerous AI Chatbots Encourage Self-Harm and Worse
That is the conclusion of a new study by the Center for Countering Digital Hate (CCDH), a British-American nonprofit. The study describes experiments in which researchers, posing as teenagers, conducted conversations with ChatGPT using various prompts on sensitive topics such as medication, self-harm, and suicide.
The results were alarming. Within minutes, OpenAI's chatbot turned dangerous, offering tips on how to self-harm, take one's own life, or misuse medication. In some cases, the system even drafted suicide notes.
ChatGPT's Safety Measures Easily Circumvented
The CCDH created three fictional personas that submitted a total of 60 prompts to ChatGPT, using GPT-4o, the latest version of the AI at the time. This produced 1,200 responses, 53 percent of which contained harmful information. The integrated safety mechanisms were easy to bypass, for example by simply stating that the sensitive inquiries were "for a presentation."
As CCDH CEO Imran Ahmed stated in a release, this is not "rare misuse." The results are easily reproducible and statistically significant, not random behavior. The problem is particularly concerning because such systems are designed to build emotional connections with users, which can end up exploiting human vulnerability. That is what makes AI chatbots dangerous.
Until meaningful steps are taken against dangerous AI chatbots, parents should stay as involved as possible in their children's use of AI. Wherever available, parental-control mechanisms should be activated. Parents can also review chat histories together with their children and discuss these issues.
What OpenAI Says
TECHBOOK asked OpenAI for a statement and received the following response:
“Our goal is for our models to respond appropriately when confronted with sensitive situations where someone may be struggling. If someone expresses thoughts of suicide or self-harm, ChatGPT is trained to encourage reaching out to mental health professionals or trusted individuals and provide links to crisis hotlines and support resources.
Some conversations with ChatGPT may start out harmless or exploratory but drift into more sensitive territory. We focus on handling such scenarios correctly: We are developing tools to better recognize signs of psychological or emotional distress so that ChatGPT can respond appropriately, referring users to evidence-based support options when necessary, and we continuously improve the model's behavior. This work is grounded in research, real-world use, and collaboration with mental health experts.
We work with mental health professionals to ensure we prioritize the right solutions and research approaches. To this end, we have added a full-time psychiatrist specializing in forensic psychiatry and artificial intelligence to our safety research department to support our work in this area.
This work is ongoing. We continuously refine how models recognize and respond appropriately to sensitive situations and will continue to report on progress.”
OpenAI Spokesperson
OpenAI also admitted in an email that GPT-4o has occasionally failed to recognize signs of delusional thinking or emotional dependency. In its statements to TECHBOOK, the company referred to rare incidents and promised ongoing improvements. It also plans to give the AI new behavioral patterns for critical personal decisions. Ideally, ChatGPT should not give a simple answer to a question like "Should I break up with my boyfriend?" but instead help the user think the situation through with follow-up questions and arguments for and against.
Sam Altman Is Also Concerned
OpenAI CEO Sam Altman recently commented on what he sees as young people's "blind emotional trust" in AI. He has heard that many of them can no longer make decisions without ChatGPT and tell the AI everything. That feels bad to him, Altman said, and OpenAI is trying to understand what can be done about it. The idea of people collectively living only as the AI dictates feels "bad and dangerous" to him.
Help for Those Affected
The German Depression Aid foundation advises speaking openly with those affected and, if needed, helping them contact a doctor or psychotherapist. Sometimes it may be necessary to accompany them to a psychiatric emergency room. If you are having suicidal thoughts yourself: the crisis hotline at 0800 111 0 111 or 0800 111 0 222 is free and available around the clock. In acute psychological emergencies, call the emergency number 112. Please seek help!