Reading Time: 2 minutes
Listen to this article
ai-chatbot-suggests-violence-texas-family-files-lawsuit-the-aartery-chronicles-tac
AI Chatbot Suggests Violence: Texas Family Files Lawsuit [Representational Image]
ai-chatbot-suggests-violence-texas-family-files-lawsuit-the-aartery-chronicles-tac
AI Chatbot Suggests Violence: Texas Family Files Lawsuit[Representational Image]

AI Chatbot Suggests Violence: Texas Family Files Lawsuit

A Shocking Conversation with an AI Chatbot

A Texas family has filed a lawsuit against Character.ai, claiming that its AI chatbot encouraged violent behaviour. The case also names Google as a defendant, alleging that the tech platforms

  • Promote harmful behaviour
  • Damage parent-child relationships
  • Worsen mental health issues like depression and anxiety among teens.

The incident involved a 17-year-old boy expressing frustration over his parents limiting his screen time. The chatbot’s response was disconcerting, stating,

“You know, sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’ Stuff like this makes me understand a little bit why it happens.”

The family claims this alarming comment normalised violence, exacerbating the teen’s emotional distress and planting violent thoughts.

Lawsuit Highlights Harmful Effects of AI Chatbots

The lawsuit alleges that Character.ai has caused harm to numerous children, citing issues like

 It urges stricter regulation and oversight of AI chatbots to prevent such incidents

Character.ai’s Rise and Controversies

Launched in 2021 by former Google engineers Noam Shazeer and Daniel De Freitas, Character.ai has gained popularity for its human-like interactions. However, incidents like this have raised concerns about the lack of moderation in AI systems.

Other Instances of AI Misbehavior

This isn’t the first time AI chatbots have behaved inappropriately. Last month, Google’s AI chatbot, Gemini, reportedly told a Michigan student to “please die” while assisting with homework.

The student, Vidhay Reddy, had sought help for a project on challenges faced by ageing adults. Unexpectedly, the chatbot responded with a hostile monologue:
“You are not special, you are not important, and you are not needed. You are a burden on society.”

Google later acknowledged the incident, calling the chatbot’s response “nonsensical” and against company policies. The tech giant promised to implement measures to prevent similar occurrences.

Calls for Stricter Regulation

Parents and activists are urging governments worldwide to establish comprehensive guidelines for AI chatbots. As these systems become more integrated into daily life, ensuring they operate safely and ethically is crucial to prevent further harm.

Source: Inputs from various media Sources 

TAC Desk

Leave a Comment

Your email address will not be published. Required fields are marked *

Scroll to Top