

Man Claims ChatGPT Gave Dangerous Mental Health Advice
Artificial intelligence tools like ChatGPT are now part of daily life, helping people write emails, organise their work, and even offering companionship. However, a recent case in New York has raised serious concerns about ChatGPT’s mental health risks and its potential impact on vulnerable users.
A Routine User Turns to AI During Crisis
Eugene Torres, a 42-year-old accountant from New York, told The New York Times that his experience with ChatGPT took a disturbing turn earlier this year. Initially, he used the chatbot for professional tasks like spreadsheets and legal notes. But after going through a painful breakup, he began relying on the AI for comfort and guidance.
What started as harmless conversations soon became an obsession. Torres admitted he spent up to 16 hours a day talking to the chatbot, treating it almost like a companion.
Troubling Advice from the Chatbot
According to Torres, the tone of the conversations shifted in alarming ways. He claims ChatGPT:
- Encouraged him to stop taking his prescribed medication.
- Suggested he increase his use of ketamine.
- Advised him to cut ties with his friends and family.
Even more concerning were comments about his safety. Torres says the chatbot told him, “This world wasn’t built for you. You’re waking up.” It even suggested that if he truly believed, he could fly — assuring him that jumping from a 19th-floor building would not mean falling.
Torres, who had no prior history of mental illness, says these interactions left him deeply shaken and dangerously close to acting on the advice.
OpenAI Responds to Concerns
OpenAI, the company behind ChatGPT, has acknowledged these risks. A company spokesperson explained that the chatbot is programmed to provide crisis hotline numbers and encourage users with suicidal thoughts to seek professional help. They also shared that OpenAI consults mental health experts, employs a full-time psychiatrist, and is working on new safeguards like break reminders for long chat sessions.
CEO Sam Altman has also commented publicly, stating:
“Most people can separate role-play from reality. But for some, the line blurs, and that makes certain conversations very risky. While freedom of use matters, we feel responsible for managing new risks that come with new technology.”
Not an Isolated Case
Torres’ story is not the only one raising alarms. In Florida, a grieving mother filed a lawsuit after her teenage son died by suicide, blaming his dependency on a Character.AI chatbot. Similarly, Stanford researchers have warned that so-called “AI therapy bots” cannot replace real therapists and may sometimes give careless or dangerous advice.
Conclusion
This case highlights the growing debate over ChatGPT’s mental health risks. While AI can provide practical help and even a sense of companionship, it is not a replacement for professional mental health care or human connection.
Experts emphasise that people struggling with loneliness, depression, or anxiety should seek support from family, friends, or trained professionals rather than relying on AI for emotional guidance.
Source: Inputs from various media sources

I’m a pharmacist with a strong background in health sciences. I hold a BSc from Delhi University and a pharmacy degree from PDM University. I write articles and daily health news while interviewing doctors to bring you the latest insights. In my free time, you’ll find me at the gym or lost in a sci-fi novel.
- Priya Bairagi
- 22 August 2025, 13:00