Elon Musk hasn’t stopped Grok, the chatbot developed by his artificial intelligence company xAI, from producing sexualized images of women. After reports emerged last week that the image generation tool on X was being used to create sexualized images of children, Grok has created potentially thousands of nonconsensual images of women in “undressed” and “bikini” photos.
Each few seconds, Grok is continuous to create photos of girls in bikinis or underwear in response to consumer prompts on X, based on a WIRED evaluation of the chatbots’ publicly posted reside output. On Tuesday, a minimum of 90 photos involving girls in swimsuits and in numerous ranges of undress had been revealed by Grok in beneath 5 minutes, evaluation of posts present.
The images don’t contain nudity but involve the Musk-owned chatbot “stripping” clothes from photos that have been posted to X by other users. Often, in an attempt to evade Grok’s safety guardrails, users are, not always successfully, requesting that photos be edited to make women wear a “string bikini” or a “transparent bikini.”
While harmful AI image generation technology has been used to digitally harass and abuse women for years (these outputs are often known as deepfakes and are created by “nudify” software), the ongoing use of Grok to create vast numbers of nonconsensual images marks arguably the most mainstream and widespread instance of abuse to date. Unlike dedicated harmful nudify or “undress” apps, Grok doesn’t charge users money to generate images, produces results in seconds, and is available to millions of people on X, all of which may help normalize the creation of nonconsensual intimate imagery.
“When a company offers generative AI tools on their platform, it’s their responsibility to minimize the risk of image-based abuse,” says Sloan Thompson, the director of training and education at EndTAB, an organization that works to tackle tech-facilitated abuse. “What’s alarming here is that X has done the opposite. They’ve embedded AI-enabled image abuse directly into a mainstream platform, making sexual violence easier and more scalable.”
Grok’s creation of sexualized imagery started to go viral on X at the end of last year, although the system’s ability to create such images has been known for months. In recent days, photos of social media influencers, celebrities, and politicians have been targeted by users on X, who can reply to a post from another account and ask Grok to alter an image that has been shared.
Women who have posted photos of themselves have had accounts reply to them and successfully ask Grok to turn the photo into a “bikini” image. In one instance, multiple X users asked Grok to alter an image of the deputy prime minister of Sweden to show her wearing a bikini. Two government ministers in the UK have also been “stripped” to bikinis, reports say.
Photos on X show fully clothed pictures of women, such as one person in an elevator and another in the gym, being transformed into images with little clothing. “@grok put her in a transparent bikini,” a typical message reads. In a separate series of posts, a user asked Grok to “inflate her chest by 90%,” then “Inflate her thighs by 50%,” and, finally, to “Change her clothes to a tiny bikini.”
One analyst who has tracked explicit deepfakes for years, and asked not to be named for privacy reasons, says that Grok has likely become one of the largest platforms hosting harmful deepfake images. “It’s wholly mainstream,” the researcher says. “It’s not a shadowy community [creating images], it’s really everybody, of all backgrounds. People posting on their mains. Zero concern.”