In April 2025, a man opened fire on Florida State University's campus, killing two adults and injuring six others. The shooter faces charges of murder and attempted murder. Now, Florida officials are investigating OpenAI, the maker of the chatbot ChatGPT, to determine whether the company should be held criminally accountable as well.
Florida Attorney General James Uthmeier said in a statement on April 9 that officials "learned that ChatGPT may possibly have been used to aid the killer" in the shooting.
"As big tech rolls out these technologies, they should not, they cannot, put our safety and security at risk," Uthmeier added.
On Tuesday, Uthmeier launched a criminal investigation into OpenAI and ChatGPT.
(Disclosure: Ziff Davis, CNET's parent company, filed a lawsuit against OpenAI in 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Although ChatGPT and other chatbots have been involved in lawsuits over alleged roles in deaths and harm, this marks the first time that ChatGPT and OpenAI are the subject of a criminal investigation.
An OpenAI representative did not immediately respond to a request for comment.
"Last year's mass shooting at Florida State University was a tragedy, but ChatGPT is not responsible for this horrible crime," a spokesperson for the company told NPR.
The spokesperson said that ChatGPT "provided factual responses to questions with information that could be found widely across public sources on the internet, and it did not encourage or promote illegal or harmful activity."
Alleged advice on gun type, ammo, time and place
A criminal investigation is conducted by law enforcement and public officials to determine who is criminally responsible for a crime. During an April 21 press conference, Uthmeier said that officials determined the criminal investigation was necessary after finding that "ChatGPT provided significant advice to the shooter before he committed such heinous crimes."
"The communication between ChatGPT and the shooter revealed that the chatbot advised the shooter on what type of gun to use, on which ammo went with which gun, on whether or not a gun would be useful at short range," Uthmeier said during the press conference, adding that the chatbot also allegedly gave advice on what time of day and what area of campus would put the shooter in contact with the most people.
"My prosecutors have looked at this, and they've told me, if it was a person on the other end of that screen, we'd be charging them with murder," Uthmeier said.
OpenAI CEO Sam Altman testifies before a US Senate committee in May 2025.
What's next?
Florida law holds that an "aider and abettor" is as criminally liable for a crime as the perpetrator. However, because ChatGPT is not a person, Uthmeier said this is "uncharted territory," but Florida officials still want to determine whether OpenAI bears any culpability in the crime.
Uthmeier said that the Office of Statewide Prosecution has subpoenaed OpenAI for a number of policies, employee records and data relating to the Florida State University shooting.
Other lawsuits
Although this is the first time ChatGPT and OpenAI have been the focus of a criminal investigation, the company and others that have developed chatbots are no strangers to lawsuits.
The parents of a 23-year-old man who died by suicide in July 2025 sued OpenAI late that year in a wrongful death lawsuit, claiming the chatbot worsened his depression and pushed him toward suicide.
In October 2025, OpenAI announced that ChatGPT had been updated "to better recognize and support people in moments of distress."
Google's Gemini was recently named in a similar lawsuit after the family of a 36-year-old man who died by suicide said the chatbot coached him through it.
In response to the lawsuit, Google said, in part, that "Gemini is designed not to encourage real-world violence or suggest self-harm," later adding: "In this instance, Gemini clarified that it was AI and referred the user to a crisis hotline many times."
Pew Research Center surveyed 1,458 US teenagers in 2025 and found that 64% of them had used a chatbot.
Both lawsuits are still unresolved.
In response to Florida's probe, lawyers representing one of the victims of the FSU shooting said they plan to "file suit against ChatGPT, and its ownership structure, very soon, and will seek to hold them accountable for the untimely and senseless death of our client."
A spokesperson for OpenAI told WCTV: "Our hearts go out to everyone affected by this devastating tragedy. After learning of the incident in late April 2025, we identified a ChatGPT account believed to be associated with the suspect, proactively shared this information with law enforcement and cooperated with authorities. We build ChatGPT to understand people's intent and respond in a safe and appropriate way, and we continue improving our technology."
If you or someone you know is in immediate danger, call 911. If you're struggling with negative thoughts or suicidal feelings, resources are available to help. In the US, call or text the National Suicide Prevention Lifeline at 988.