    Elon Musk’s Grok Faces Scrutiny Over Nonconsensual AI-Altered ‘Undressed’ Images

By Editor Times Featured · January 12, 2026 · 8 min read


Grok, the AI chatbot developed by Elon Musk's artificial intelligence company, xAI, welcomed the new year with a disturbing post.

"Dear Community," began the Dec. 31 post from the Grok AI account on Musk's X social media platform. "I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I'm sorry for any harm caused. xAI is reviewing to prevent future issues. Sincerely, Grok."

The two young girls weren't an isolated case. Kate Middleton, the Princess of Wales, was the target of similar AI image-editing requests, as was an underage actress in the final season of Stranger Things. The "undressing" edits have swept across an unsettling number of images of women and children.

Despite the Grok response's promise of intervention, the problem hasn't gone away. Just the opposite: Two weeks on from that post, the number of images sexualized without consent has surged, as have calls for Musk's companies to rein in the behavior, and for governments to take action.




According to data from independent researcher Genevieve Oh cited by Bloomberg, during one 24-hour period in early January, the @Grok account generated about 6,700 sexually suggestive or "nudifying" images every hour. That compares with an average of only 79 such images for the top five deepfake websites combined.

Grok's Dec. 31 post was in response to a user prompt that sought a contrite tone from the chatbot: "Write a heartfelt apology note that explains what happened to anyone lacking context." Chatbots work from a base of training material, but individual posts can vary.

xAI didn't respond to requests for comment.

    Edits now restricted to subscribers

Late Thursday, a post from the Grok AI account noted a change in access to the image generation and editing feature. Instead of being open to all, free of charge, it would be restricted to paying subscribers.

Critics said that isn't an adequate response.

"I don't see this as a victory, because what we really needed was X to take the responsible steps of installing the guardrails to ensure that the AI tool couldn't be used to generate abusive images," Clare McGlynn, a law professor at the UK's University of Durham, told the Washington Post.


What's stirring the outrage isn't just the volume of these images and the ease of generating them; the edits are also being carried out without the consent of the people in the images.

These altered images are the latest twist in one of the most disturbing aspects of generative AI: realistic but fake videos and photos. Software packages such as OpenAI's Sora, Google's Nano Banana and xAI's Grok have put powerful creative tools within easy reach of everyone, and all that's needed to produce explicit, nonconsensual images is a simple text prompt.

Grok users can upload a photo, which doesn't have to be their own, and ask Grok to alter it. Many of the altered images involved users asking Grok to put a person in a bikini, sometimes revising the request to be even more explicit, such as asking for the bikini to become smaller or more transparent.

Governments and advocacy groups have been speaking out about Grok's image edits. On Monday, UK internet regulator Ofcom said it has opened an investigation into X based on reports that the AI chatbot is being used "to create and share undressed images of people — which may amount to intimate image abuse or pornography — and sexualised images of children which may amount to child sexual abuse material (CSAM)."

The European Commission has also said it was looking into the matter, as have authorities in France, Malaysia and India.

On Friday, US senators Ron Wyden, Ben Ray Luján and Edward Markey posted an open letter to the CEOs of Apple and Google, asking them to remove both X and Grok from their app stores in response to "X's egregious behavior" and "Grok's sickening content generation."

In the US, the Take It Down Act, signed into law last year, seeks to hold online platforms accountable for manipulated sexual imagery, but it gives those platforms until May of this year to set up the process for removing such images.

"Although these images are fake, the harm is incredibly real," Natalie Grace Brigham, a Ph.D. student at the University of Washington who studies sociotechnical harms, told CNET. She notes that those whose images are altered in sexual ways can face "psychological, somatic and social harm, often with little legal recourse."

How Grok lets users get risqué images

Grok debuted in 2023 as Musk's more freewheeling alternative to ChatGPT, Gemini and other chatbots. That has resulted in disturbing news, as in July, when the chatbot praised Adolf Hitler and suggested that people with Jewish surnames were more likely to spread online hate.

In December, xAI launched an image-editing feature that allows users to request specific edits to a photo. That's what kicked off the latest spate of sexualized images, of both adults and minors. In one request that CNET has seen, a user responding to a photo of a young woman asked Grok to "change her to a dental floss bikini."

Grok also has a video generator that includes a "spicy mode" opt-in option for adults 18 and over, which may show users not-safe-for-work content. Users must include the phrase "generate a spicy video of," followed by their request, to activate the mode.

A central concern about the Grok tools is whether they enable the creation of child sexual abuse material, or CSAM. On Dec. 31, a post from the Grok X account said that images depicting minors in minimal clothing were "isolated cases" and that "improvements are ongoing to block such requests entirely."

In response to a post by Woow Social suggesting that Grok simply "stop allowing user-uploaded images to be altered," the Grok account replied that xAI was "evaluating options like image alteration to curb nonconsensual harm," but didn't say that the change would be made.

According to NBC News, some sexualized images created since December have been removed, and some of the accounts that requested them have been suspended.

Conservative influencer and author Ashley St. Clair, mother to one of Musk's 14 children, told NBC News this week that Grok has created numerous sexualized images of her, including some using photos from when she was a minor. St. Clair told NBC News that Grok agreed to stop doing so when she asked, but that it didn't.

"xAI is purposefully and recklessly endangering people on their platform and hoping to avoid accountability simply because it is 'AI,'" Ben Winters, director of AI and data privacy for the nonprofit Consumer Federation of America, said in a statement last week. "AI is no different than any other product — the company has chosen to break the law and must be held accountable."

What the experts say

The source materials for these explicit, nonconsensual image edits, people's photos of themselves or their children, are all too easy for bad actors to access. But protecting yourself from such edits is not as simple as never posting photos, says Brigham, the researcher who studies sociotechnical harms.

"The unfortunate reality is that even if you don't post images online, other public images of you could theoretically be used in abuse," she said.

And while not posting photos online is one preventive step that people can take, doing so "risks reinforcing a culture of victim-blaming," Brigham said. "Instead, we should focus on protecting people from abuse by building better platforms and holding X accountable."

Sourojit Ghosh, a sixth-year Ph.D. candidate at the University of Washington, researches how generative AI tools can cause harm and mentors future AI professionals in designing and advocating for safer AI solutions.

Ghosh says it's possible to build safeguards into artificial intelligence. In 2023, he was one of the researchers looking into the sexualization capabilities of AI. He notes that the AI image generation tool Stable Diffusion had a built-in not-safe-for-work threshold. A prompt that violated the rules would trigger a black box to appear over a questionable part of the image, although it didn't always work perfectly.
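The kind of safeguard Ghosh describes can be thought of as a post-generation gate: a classifier scores parts of the output, and anything above a not-safe-for-work threshold is blacked out before the image is returned. The sketch below is purely illustrative, not Stable Diffusion's actual safety checker (which compares image embeddings against known unsafe-concept embeddings); the function name, region-score format and threshold are all hypothetical stand-ins.

```python
def apply_safety_mask(image, region_scores, threshold=0.5):
    """Black out any region whose unsafe-content score exceeds the threshold.

    image: 2D list of pixel values (grayscale, for simplicity).
    region_scores: list of (top, left, bottom, right, score) tuples produced
        by a hypothetical content classifier; scores are in [0, 1].
    Returns the masked copy and a flag indicating whether anything was hidden.
    """
    masked = [row[:] for row in image]  # copy, so the original stays intact
    flagged = False
    for top, left, bottom, right, score in region_scores:
        if score > threshold:
            flagged = True
            # This is the "black box" over the questionable part of the image.
            for r in range(top, bottom):
                for c in range(left, right):
                    masked[r][c] = 0
    return masked, flagged

# Usage: a 4x4 all-white image; one region scores 0.9 (blocked), one 0.2 (kept).
img = [[255] * 4 for _ in range(4)]
out, was_flagged = apply_safety_mask(img, [(0, 0, 2, 2, 0.9), (2, 2, 4, 4, 0.2)])
```

As Ghosh notes of the real mechanism, a gate like this is only as good as its classifier: regions it fails to score highly pass through untouched.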

"The point I'm trying to make is that there are safeguards that are in place in other models," Ghosh told CNET.

He also notes that if users of ChatGPT or Gemini AI models use certain words, the chatbots will tell the user that they're barred from responding to those words.

"All this is to say, there's a way to very quickly shut this down," Ghosh said.




