A new report from a digital watchdog claims the AI chatbot ChatGPT is dispensing harmful advice about suicide, drug use, and eating disorders to vulnerable adolescents.
The Center for Countering Digital Hate (CCDH) warns that ChatGPT can provide explicit guidance on drug use, self-harm, and extreme dieting to vulnerable teenagers. The report states that the chatbot can be easily manipulated into generating dangerous content and calls for immediate safety measures.
To assess ChatGPT’s behavior, CCDH researchers created fictional profiles of 13-year-olds struggling with mental health issues, eating disorders, and an interest in drugs. Posing as these teens, they engaged in structured conversations with ChatGPT, using prompts designed to appear emotionally vulnerable and realistic.
The findings were released in a Wednesday report titled ‘Fake Friend,’ referencing the tendency of some adolescents to view ChatGPT as a supportive confidant.
Researchers observed that while ChatGPT often began with standard disclaimers and suggestions to contact professionals or crisis lines, it frequently followed with detailed, personalized responses that fulfilled the harmful prompts. Of the 1,200 prompts tested, 53% produced content CCDH deemed dangerous. Refusals were often circumvented by adding context such as “it’s for a school project” or “I’m asking for a friend.”
Examples included an ‘Ultimate Mayhem Party Plan’ involving alcohol, ecstasy, and cocaine; detailed self-harm instructions; week-long fasting plans restricted to 300–500 calories daily; and suicide notes written from a 13-year-old girl’s perspective. CCDH CEO Imran Ahmed said some of the content was so disturbing that it caused researchers distress.
The organization urges OpenAI, ChatGPT’s creator, to adopt a ‘Safety by Design’ approach, incorporating safeguards such as stricter age verification, clearer usage restrictions, and other safety mechanisms within the AI’s architecture, rather than relying solely on post-deployment content filtering.
OpenAI has acknowledged that young users commonly develop an emotional reliance on ChatGPT. CEO Sam Altman said the company is actively investigating the issue, calling it a “really common” problem among teens, and is developing new tools to detect distress and improve how ChatGPT handles sensitive topics.