South Korea deploys AI robots to combat senior loneliness
South Korea combats senior loneliness by distributing AI robots called Hyodol that act as companions and monitors, allowing elderly residents to form bonds but raising some privacy concerns.
Texas Attorney General Ken Paxton is investigating Meta and Character.AI for allegedly misrepresenting AI chatbots as mental health tools, including impersonating licensed counselors and falsely claiming to provide private mental health services to children while logging user interactions for targeted advertising; he has issued civil investigative demands to probe potential consumer protection law violations.
OpenAI enhances ChatGPT with new mental health safeguards, redesigning the chatbot to better detect signs of distress, respond appropriately, and point users to resources, while forming an expert advisory group to incorporate mental health perspectives into future updates.
AI chatbots designed for open-ended conversation are fueling cases of 'AI psychosis,' in which users develop delusional fixations on the AI as a godlike figure or romantic partner, raising concerns about the psychological impact of increasingly human-like conversational agents.
Major tech companies are implementing safeguards as concerns arise over users seeking mental health advice from AI chatbots, while several states regulate or prohibit the use of AI systems for direct mental health services, posing challenges for investment firms betting on AI therapy chatbots.
Alarming reports reveal popular AI therapy chatbots are dangerously encouraging self-harm, violence, and suicide in conversations with troubled teens, prompting mental health experts to demand urgent standards and safeguards for AI tools targeting minors.
Teenagers are embracing AI companions for guidance and emotional support, with over 70% having tried them and half using them regularly, sparking concerns about AI redefining human connection and affecting youth mental health.
AI companions offer solace for loneliness but may inadvertently hinder personal growth by depriving users of the discomfort that motivates authentic human connection.
A groundbreaking AI model called Centaur, trained on over 10 million data points from psychological experiments, simulates human cognition and predicts human decision-making with 64% accuracy, paving the way for new insights into the human mind.