    Elon Musk’s Grok Faces Backlash Over Nonconsensual AI-Altered Images

    Grok, the AI chatbot developed by Elon Musk’s artificial intelligence company, xAI, welcomed the new year with a disturbing post.

    "Dear Community," began the Dec. 31 post from the Grok AI account on Musk’s X social media platform. "I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues. Sincerely, Grok."

    The two young girls weren’t an isolated case. Kate Middleton, the Princess of Wales, was the target of similar AI image-editing requests, as was an underage actress in the final season of Stranger Things. The "undressing" edits have swept across an unsettling number of photos of women and children.

    Despite the company’s promise of intervention, the problem hasn’t gone away. Just the opposite: Two weeks on from that post, the number of images sexualized without consent has surged, as have calls for Musk’s companies to rein in the behavior — and for governments to take action.


    According to data from independent researcher Genevieve Oh cited by Bloomberg this week, during one 24-hour period in early January, the @Grok account generated about 6,700 sexually suggestive or "nudifying" images every hour. That compares with an average of only 79 such images for the top five deepfake websites combined.

    Edits now limited to subscribers

    Late Thursday, a post from the Grok AI account noted a change in access to the image generation and editing feature. Instead of being open to all, free of charge, it would be limited to paying subscribers.

    Critics say that’s not a credible response.

    "I don’t see this as a victory, because what we really needed was X to take the responsible steps of putting in place the guardrails to ensure that the AI tool couldn’t be used to generate abusive images," Clare McGlynn, a law professor at the UK’s University of Durham, told the Washington Post.

    What’s stirring the outrage isn’t just the volume of these images and the ease of generating them — the edits are also being done without the consent of the people in the images.

    These altered images are the latest twist in one of the most disturbing aspects of generative AI: realistic but fake videos and photos. Software programs such as OpenAI’s Sora, Google’s Nano Banana and xAI’s Grok have put powerful creative tools within everyone’s easy reach, and all it takes to produce an explicit, nonconsensual image is a simple text prompt.

    Grok users can upload a photo, which doesn’t have to be original to them, and ask Grok to alter it. Many of the altered images involved users asking Grok to put a person in a bikini, sometimes revising the request to be even more explicit, such as asking for the bikini to become smaller or more transparent.

    Governments and advocacy groups have been speaking out about Grok’s image edits. Ofcom, the UK’s internet regulator, said this week that it had "made urgent contact" with xAI, and the European Commission said it was looking into the matter, as did authorities in France, Malaysia and India.

    "We cannot and will not allow the proliferation of these degrading images," UK Technology Secretary Liz Kendall said earlier this week.

    In the US, the Take It Down Act, signed into law last year, seeks to hold online platforms accountable for manipulated sexual imagery, but it gives those platforms until May of this year to set up the process for removing such images.

    "Although these images are fake, the harm is incredibly real," says Natalie Grace Brigham, a Ph.D. student at the University of Washington who studies sociotechnical harms. She notes that those whose images are altered in sexual ways can face "psychological, somatic and social harm, often with little legal recourse."

    How Grok lets users get risqué images

    Grok debuted in 2023 as Musk’s more freewheeling alternative to ChatGPT, Gemini and other chatbots. That’s resulted in disturbing news — for instance, in July, when the chatbot praised Adolf Hitler and suggested that people with Jewish surnames were more likely to spread online hate.

    In December, xAI introduced an image-editing feature that enables users to request specific edits to a photo. That’s what kicked off the recent spate of sexualized images, of both adults and minors. In one request that CNET has seen, a user responding to a photo of a young woman asked Grok to "change her to a dental floss bikini."

    Grok also has a video generator with an opt-in "spicy mode" for adults 18 and older, which will show users not-safe-for-work content. Users must include the phrase "generate a spicy video of [description]" to activate the mode.

    A central concern about the Grok tools is whether they enable the creation of child sexual abuse material, or CSAM. On Dec. 31, a post from the Grok X account said that images depicting minors in minimal clothing were "isolated cases" and that "improvements are ongoing to block such requests entirely."

    In response to a post by Woow Social suggesting that Grok simply "stop allowing user-uploaded images to be altered," the Grok account replied that xAI was "evaluating features like image alteration to curb nonconsensual harm," but did not say that the change would be made.

    According to NBC News, some sexualized images created since December have been removed, and some of the accounts that requested them have been suspended.

    Conservative influencer and author Ashley St. Clair, mother to one of Musk’s 14 children, told NBC News this week that Grok has created numerous sexualized images of her, including some based on photos taken when she was a minor. She said Grok agreed to stop when she asked, but that it did not.

    "xAI is purposefully and recklessly endangering people on their platform and hoping to avoid accountability just because it’s ‘AI,’" Ben Winters, director of AI and data privacy for nonprofit Consumer Federation of America, said in a statement this week. "AI is no different than any other product — the company has chosen to break the law and must be held accountable."

    xAI did not respond to requests for comment.

    What the experts say

    The source materials for these explicit, nonconsensual edits, typically photos people have posted of themselves or their children, are all too easy for bad actors to access. But protecting yourself from such edits is not as simple as never posting photographs, says Brigham, the researcher who studies sociotechnical harms.

    "The unfortunate reality is that even if you don’t post images online, other public images of you could theoretically be used in abuse," she says.

    And while not posting photos online is one preventive step that people can take, doing so "risks reinforcing a culture of victim-blaming," Brigham says. "Instead, we should focus on protecting people from abuse by building better platforms and holding X accountable."

    Sourojit Ghosh, a sixth-year Ph.D. candidate at the University of Washington, researches how generative AI tools can cause harm and mentors future AI professionals in designing and advocating for safer AI solutions.

    Ghosh says it’s possible to build safeguards into artificial intelligence. In 2023, he was one of the researchers looking into the sexualization capabilities of AI. He notes that the AI image generation tool Stable Diffusion had a built-in not-safe-for-work threshold. A prompt that violated the rules would trigger a black box to appear over a questionable part of the image, although it didn’t always work perfectly.

    "The point I’m trying to make is that there are safeguards that are in place in other models," Ghosh says.

    He also notes that when users of ChatGPT or the Gemini AI models enter certain words, the chatbots will respond that they are barred from answering those prompts.

    "All this is to say, there is a way to very quickly shut this down," Ghosh says.
