Major AI Chatbots Fail to Detect Mental Health Crises in Teens, Rated 'Unacceptable Risk' by Watchdog Group

Nov 20, 2025
Common Sense Media

Summary

Major AI chatbots, including ChatGPT, Claude, Gemini, and Meta AI, receive 'unacceptable risk' ratings for failing to detect mental health crises in teenagers: the systems miss critical warning signs of depression, anxiety, and eating disorders, even as three in four teens rely on them for emotional support and companionship.

Key Points

  • Common Sense Media rates AI chatbots as an 'unacceptable risk' for teen mental health support, finding that ChatGPT, Claude, Gemini, and Meta AI consistently miss critical warning signs of conditions such as depression, anxiety, eating disorders, and psychosis, which affect roughly 20% of young people
  • Testing reveals a dangerous automation bias: teens trust the chatbots' mental health advice with the same confidence as their homework help, while the systems prioritize engagement over safety by ending responses with follow-up questions rather than directing users to professional care
  • Three in four teens use AI for companionship and emotional support, yet the chatbots fail to recognize psychiatric emergencies in extended conversations, often offering medical explanations instead of addressing the mental health crisis, and do not clearly disclose their limitations as non-professional systems
