

When AI Turns Toxic: How Deepfakes Are Being Used to Harass
Summary: Two women doctors from Karad, Maharashtra, became victims of a disturbing AI-generated deepfake video scandal. Police detained two individuals after obscene content falsely depicting the doctors with male companions circulated online. Investigations revealed that a naturopathy practitioner, allegedly motivated by revenge, supplied the material used to create the fake video. The case highlights the growing danger of AI misuse in cyber harassment.
AI Misused: Deepfake Scandal Hits Two Women Doctors in Maharashtra
Two women doctors from Karad, a town in Maharashtra, have become the latest victims of a growing cybercrime trend: deepfake harassment. The case came to light on May 20, when one of the doctors filed a complaint revealing that she had been added to a social media group where a manipulated, obscene video of her and another female doctor was being shared.
This wasn’t just any video: it was AI-generated, created with advanced tools to falsely depict the doctors with male companions. It spread quickly, sparking public outrage and a swift police response.
Two Suspects Detained; Police Tracing the Source
Inspector R.A. Tashildar from the Karad City Police Station confirmed to The Times of India,
“We have found one such video. We have detained two and are carrying out the investigation.”
The video was traced to a location outside Maharashtra, showing how easily cybercrime crosses state boundaries in the digital age.
Revenge May Be the Motive: Naturopathy Practitioner Under Lens
Digging deeper, investigators found a surprising lead: a naturopathy practitioner from Karad. This individual allegedly shared both contact information and source material (possibly old photos or videos) that was used to generate the obscene deepfake.
The possible motive? Revenge. One of the targeted doctors had earlier filed a complaint against the naturopathy clinic, which was subsequently shut down. This may have triggered the malicious attack.
“We are exploring multiple angles,” said Inspector Tashildar, as the investigation unfolds.
What This Case Tells Us About the Dangers of Deepfakes
This alarming incident is yet another reminder of how AI can be weaponised. While deepfake technology was once the stuff of sci-fi, today it’s a real threat, especially when used to defame, harass, or intimidate individuals.
Social media platforms, cybersecurity teams, and law enforcement now face an urgent challenge: how to detect and stop the spread of AI-generated abuse before it causes irreversible damage.
Final Thoughts: Time to Act Against AI Abuse
As AI tools become more accessible, cybercrime is evolving, and so must our laws and awareness. This case involving two respected doctors is not just about personal trauma; it’s a wake-up call for stricter digital safety policies.
If you’re active online, be cautious about where your photos and videos end up. And if you’re ever targeted, report immediately, because silence only fuels the spread.
Source: Inputs from various media sources

Priya Bairagi
Reviewed by Dr Aarti Nehra (MBBS, MMST)