OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as the death or serious injury of 100 or more people, or at least $1 billion in property damage.
The effort appears to mark a shift in OpenAI's legislative strategy. Until now, OpenAI has largely played defense, opposing bills that would have made AI labs liable for their technology's harms. Several AI policy experts tell WIRED that SB 3444, which could set a new standard for the industry, is a more extreme measure than bills OpenAI has supported in the past.
The bill, SB 3444, would shield frontier AI developers from liability for "critical harms" caused by their frontier models, so long as they did not intentionally or recklessly cause such an incident and have published safety, security, and transparency reports on their website. It defines a frontier model as any AI model trained using more than $100 million in computational costs, a threshold that likely covers America's largest AI labs, including OpenAI, Google, xAI, Anthropic, and Meta.
"We support approaches like this because they focus on what matters most: reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses, small and large, of Illinois," said OpenAI spokesperson Jamie Radice in an emailed statement. "They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards."
Under its definition of critical harms, the bill lists several common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. If an AI model engages in conduct on its own that, if committed by a human, would constitute a criminal offense and leads to these extreme outcomes, that would also count as a critical harm. If an AI model were to commit any of these actions under SB 3444, the AI lab behind the model could not be held liable, so long as the incident wasn't intentional and the lab had published its reports.
Federal and state legislatures in the US have yet to pass any laws specifically determining whether AI model developers, like OpenAI, could be liable for these types of harm caused by their technology. But as AI labs continue to release more powerful AI models that raise novel safety and cybersecurity challenges, such as Anthropic's Claude Mythos, these questions feel increasingly pressing.
In her testimony supporting SB 3444, Caitlin Niedermeyer, a member of OpenAI's Global Affairs team, also argued in favor of a federal framework for AI regulation. Niedermeyer struck a message consistent with the Trump administration's crackdown on state AI safety legislation, claiming it's important to avoid "a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety." That is also in line with the broader view in Silicon Valley in recent years, which has often held that it's paramount for AI legislation not to hamper America's position in the global AI race. While SB 3444 is itself a state-level safety regulation, Niedermeyer argued that such laws can be effective if they "reinforce a path toward harmonization with federal programs."
"At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation," Niedermeyer said.
Scott Wisor, policy director for the Safe AI project, tells WIRED he believes the bill has a slim chance of passing, given Illinois' reputation for aggressively regulating technology. "We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There's no reason existing AI companies should be facing diminished liability," Wisor says.

