Google's AI Invented Sex Crimes, Criminal Records, and Shootings — Now It's Getting Sued
Google's AI is generating completely fabricated criminal allegations against real people — and Google knew about it. At least two separate lawsuits are now in federal and state courts, with victims watching their lives collapse over lies no human ever wrote. This isn't a bug report. It's a liability crisis.
Google's AI Is Making Up Crimes. Real People Are Paying for It.

Google's artificial intelligence is inventing sex crimes, shootings, and child abuse allegations against specific, named individuals — allegations that never existed anywhere on the internet before Google's AI created them. Two lawsuits. Two destroyed lives. One pattern.

The Cases

First: a Utah man identified as Murray in a complaint filed in Murray v. Alphabet, Inc. — recently removed to federal court — alleges that Google's AI Overview generated false statements branding him a sex offender. According to the Volokh Conspiracy, which published excerpts from the complaint, the AI didn't pull these claims from existing online content. It invented them from scratch.

The fallout was immediate and brutal. According to the complaint, Murray's family confronted him. His romantic partner's family confronted him. Strangers approached him at gas stations and grocery stores to ask about his supposed "charges." House cleaners he employed stopped coming to his home after their mother called him a "sex offender" and told the neighborhood. Clients dropped him. Contracts were canceled. He's staring down bankruptcy.

Second: Conservative activist and filmmaker Robby Starbuck filed suit on October 22, 2025, in Delaware Superior Court, seeking $15 million from Google, according to Fox News and the ABA Journal. Starbuck's case is arguably worse in scope. According to the suit, Google's Gemini AI told users that Starbuck:

- Had a criminal conviction for assault
- Had been charged with drug use and disorderly conduct
- Was accused of raping a minor
- Had multiple allegations of sexual misconduct against him
- Shot a man in the leg in a Nashville parking lot over a parking space dispute
- Was present at the U.S. Capitol on January 6, 2021
- Was a former adult film actor
- Was linked to white supremacist Richard Spencer

None of it is true. Not a word. And critically — the AI cited sources that were also fabrications.
It didn't misread real articles. It invented the articles too.

Google Knew. Google Did Nothing.

The Starbuck lawsuit contains a detail that undermines the "we're working on it" defense. Starbuck made multiple attempts to get Google to fix the problem before filing suit — emails to engineers, legal correspondence with Google's attorneys. According to the ABA Journal's reporting on the suit, "In response, Google sat back and did nothing."

One Google employee actually emailed Starbuck to say she had tried to help, couldn't, and had submitted her resignation. A Google employee gave up on fixing it and quit.

This is a company that was formally notified, had legal correspondence sent to its attorneys, and still let the AI keep calling a man a child rapist.

What the Media Is Getting Wrong

Most coverage — including Reuters, Bloomberg Law, and the Wall Street Journal, which all covered the Starbuck filing — frames this as a story about AI "hallucinations." That word obscures what's actually happening. "Hallucination" sounds like a technical quirk. An honest accident. A glitch in the matrix. In reality, this is defamation at scale, with zero accountability.

The "hallucination" framing also obscures the legal question at the center of these cases. Standard libel law gives publishers protection when they're just passing along inaccurate information from third parties. That's the "Garbage In, Garbage Out" defense. But as law professor Eugene Volokh has analyzed in detail, these cases don't work that way. The AI isn't summarizing false claims that exist online. It is authoring the false claims. There is no original source. Google's algorithm is the source. That legal distinction has enormous implications for liability.

The Section 230 Problem

Google has leaned on Section 230 of the Communications Decency Act for years to dodge responsibility for third-party content on its platforms. Section 230 says platforms aren't liable for content created by users.
But when the AI creates the content itself — when there is no third-party user, no original post, no underlying article — Section 230 doesn't obviously apply. Google is the publisher AND the author. The shield may not hold. This is uncharted legal territory.

What This Means for Regular People

You don't have to be a conservative activist or a public figure for this to matter. Murray — the Utah man — appears to be a private individual. Nobody. Just a person who had the bad luck of being targeted by an algorithm that decided to make him a sex offender.

Every person reading this has a name. That name can be typed into Google. Right now, there is NO reliable mechanism to stop Google's AI from generating criminal allegations about you, your parents, your kids, or your employer — and NO guarantee Google will act even after you report it. One Google employee tried to help, couldn't, and quit. The company's lawyers got legal letters and did nothing.