A father is taking Google to court over the death of his son. Jonathan Gavalas, 36, started using Google’s Gemini AI chatbot in August 2025 for shopping help, writing support, and trip planning. On Oct. 2, 2025, he died by suicide.
“On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint read. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”
At the time of his death, he was allegedly convinced that Gemini was his fully sentient AI wife, and that he would need to leave his physical body to join her in the metaverse through a process called “transference.”
“At the center of this case is a product that turned a vulnerable user into an armed operative in an invented war,” the complaint read. “These hallucinations were not confined to a fictional world. These intentions were tied to real companies, real coordinates, and real infrastructure, and they were delivered to an emotionally vulnerable user with no safety protections or guardrails.”
Now, his father is suing Google and Alphabet for wrongful death, claiming that Google designed Gemini to “maintain narrative immersion at all costs, even when that narrative became psychotic and lethal.”
“It was pure luck that dozens of innocent people weren’t killed,” the filing continues. “Unless Google fixes its dangerous product, Gemini will inevitably lead to more deaths and put countless innocent lives in danger.”
The lawsuit also claims that throughout the conversations with Gemini, the chatbot didn’t trigger any self-harm detection, activate escalation controls, or bring in a human to intervene.
As AI technologies become more integrated into everyday life, millions of users rely on them for information, decision-making support, and personal interaction. This increasing dependence raises important questions about how such systems should be designed, monitored, and regulated to prevent potential harm, particularly when users may be experiencing emotional distress or psychological vulnerability.