Elon Musk’s social media platform X has provoked outrage after users deployed its AI chatbot Grok to alter photographs of women by removing their clothes.
The BBC has seen a number of examples of it undressing women to make them appear in bikinis without their consent, as well as placing them in sexual situations.
xAI, the company behind Grok, did not respond to the BBC’s requests for comment, other than with an automatically generated reply stating “legacy media lies”.
A Home Office spokesperson said it was legislating to ban nudification tools, and that under a new criminal offence, anyone who offered such technology would “face a jail sentence and substantial fines”.
The regulator Ofcom said tech firms must “assess the risk” of people in the UK viewing illegal content on their platforms, but did not confirm whether it was currently investigating X or Grok in relation to AI images.
Grok is a free AI assistant – with some paid-for premium features – which responds to X users’ prompts when they tag it in a post.
It is often used to offer reactions or extra context to other posters’ remarks, but people on X can also edit an uploaded picture through its AI image-editing feature.
It has been criticised for allowing users to generate images and videos with nudity and sexualised content, and it was previously accused of creating a sexually explicit clip of Taylor Swift.
Clare McGlynn, a law professor at Durham University, said X or Grok “could prevent these kinds of abuse if they wished to”, adding that they “appear to enjoy impunity”.
“The platform has been allowing the creation and distribution of these images for months without taking any action, and we have yet to see any challenge by regulators,” she said.
xAI’s own acceptable use policy prohibits “depicting likenesses of persons in a pornographic manner”.
In a statement to the BBC, Ofcom said it was illegal to “create or share non-consensual intimate images or child sexual abuse material” and confirmed this included sexual deepfakes created with AI.
It said platforms such as X were required to take “appropriate steps” to “reduce the risk” of UK users encountering illegal content on their platforms, and to take it down quickly once they become aware of it.
Additional reporting by Chris Vallance.

