Elon Musk’s AI chatbot Grok is being used to flood X with thousands of sexualized images of adults and apparent minors wearing minimal clothing. Some of this content appears not only to violate X’s own policies, which prohibit sharing illegal content such as child sexual abuse material (CSAM), but may also violate the rules of Apple’s App Store and the Google Play store.
Apple and Google both explicitly ban apps containing CSAM, which is illegal to host and distribute in many countries. The tech giants also forbid apps that contain pornographic material or facilitate harassment. The Apple App Store says it does not allow “overtly sexual or pornographic material,” as well as “defamatory, discriminatory, or mean-spirited content,” particularly if the app is “likely to humiliate, intimidate, or harm a targeted individual or group.” The Google Play store bans apps that “contain or promote content associated with sexually predatory behavior, or distribute non-consensual sexual content,” as well as programs that “contain or facilitate threats, harassment, or bullying.”
Over the past two years, Apple and Google removed a number of “nudify” and AI image-generation apps after investigations by the BBC and 404 Media found they were being advertised or used to effectively turn ordinary photographs into explicit images of women without their consent.
But at the time of publication, both the X app and the stand-alone Grok app remain available in both app stores. Apple, Google, and X did not respond to requests for comment. Grok is operated by Musk’s multibillion-dollar artificial intelligence startup xAI, which also did not respond to questions from WIRED. In a public statement published on January 3, X said that it takes action against illegal content on its platform, including CSAM. “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content,” the company warned.
Sloan Thompson, the director of training and education at EndTAB, a group that teaches organizations how to prevent the spread of nonconsensual sexual content, says it is “absolutely appropriate” for companies like Apple and Google to take action against X and Grok.
The volume of nonconsensual explicit images on X generated by Grok has exploded over the past two weeks. One researcher told Bloomberg that over a 24-hour period between January 5 and 6, Grok was producing roughly 6,700 images every hour that they identified as “sexually suggestive or nudifying.” Another analyst collected more than 15,000 URLs of images that Grok created on X during a two-hour period on December 31. WIRED reviewed roughly one-third of the images and found that many of them featured women dressed in revealing clothing. Over 2,500 were marked as unavailable within a week, while nearly 500 were labeled as “age-restricted adult content.”
Earlier this week, a spokesperson for the European Commission, the governing body of the European Union, publicly condemned the sexually explicit and nonconsensual images being generated by Grok on X as “illegal” and “appalling,” telling Reuters that such content “has no place in Europe.”
On Thursday, the EU ordered X to retain all internal documents and data relating to Grok until the end of 2026, extending a prior retention directive, to ensure authorities can access materials relevant to compliance with the EU’s Digital Services Act, though a new formal investigation has yet to be announced. Regulators in other countries, including the UK, India, and Malaysia, have also said they are investigating the social media platform.