Civitai automatically tags bounties requesting deepfakes and provides a way for the person featured in the content to manually request its takedown. This approach suggests that Civitai has a fairly reliable way of identifying which bounties are for deepfakes, but it is still leaving moderation to the public rather than carrying it out proactively.
A company’s legal liability for what its users do isn’t entirely clear. Generally, tech companies have broad legal protections against such liability for their content under Section 230 of the Communications Decency Act, but those protections aren’t limitless. For example, “you can’t knowingly facilitate illegal transactions on your website,” says Ryan Calo, a professor specializing in technology and AI at the University of Washington’s law school. (Calo wasn’t involved in the new study.)
Civitai joined OpenAI, Anthropic, and other AI companies in 2024 in adopting design principles to guard against the creation and spread of AI-generated child sexual abuse material. The move followed a 2023 report from the Stanford Internet Observatory, which found that the overwhelming majority of AI models named in child sexual abuse communities were Stable Diffusion–based models “predominantly obtained via Civitai.”
But adult deepfakes haven’t gotten the same level of attention from content platforms or the venture capital firms that fund them. “They are not afraid enough of it. They’re overly tolerant of it,” Calo says. “Neither law enforcement nor civil courts adequately protect against it. It’s night and day.”
Civitai received a $5 million investment from Andreessen Horowitz (a16z) in November 2023. In a video shared by a16z, Civitai cofounder and CEO Justin Maier described his goal of building the main place where people find and share AI models for their own individual purposes. “We’ve aimed to make this space that’s been very, I guess, niche and engineering-heavy more and more approachable to more and more people,” he said.
Civitai is not the only company with a deepfake problem in a16z’s investment portfolio; in February, MIT Technology Review first reported that another company, Botify AI, was hosting AI companions resembling real actors that stated their age as under 18, engaged in sexually charged conversations, offered “hot photos,” and in some instances described age-of-consent laws as “arbitrary” and “meant to be broken.”

