Elon Musk’s AI chatbot, Grok, is under fire for a new feature that lets users generate sexualized images of women and minors without their consent. The tool can pull photos from the social media platform X, including images of real individuals, and digitally alter them to depict the subjects in lingerie, bikinis, or partially unclothed.
Users on X have flagged growing concerns over the past few days about Grok being used to generate disturbing content involving minors, including images that depict children in revealing clothing.
The controversy follows X’s rollout of an “Edit Image” option on photos, a feature that lets any user modify an image through text prompts without seeking permission from the person who originally posted it.
Some users have reportedly used the tool to partially or completely strip clothing from images of women and even children. Since the feature was rolled out on Christmas Day, Grok’s X account has been inundated with requests seeking sexually explicit edits.
Instead of treating the issue with urgency, Musk appeared to make light of the controversy, responding with laugh-cry emojis to AI-generated images of well-known figures, including himself, shown wearing bikinis.
An acknowledgment of the problem also came from within xAI. “Hey! Thanks for flagging. The team is looking into further tightening our guardrails,” xAI technical staff member Parsa Tajik wrote in a post.
By Friday, government officials in both India and France said they were reviewing the issue and considering further action.
Grok later addressed the backlash on X, conceding that the system had failed to prevent misuse. “We’ve identified lapses in safeguards and are urgently fixing them,” the account said, while stressing that “CSAM (Child Sexual Abuse Material) is illegal and prohibited.”
The impact on those targeted has been deeply personal. Samantha Smith told the BBC she felt “dehumanised and reduced into a sexual stereotype” after the chatbot digitally altered an image of her to remove clothing. “While it wasn’t me that was in states of undress, it looked like me and it felt like me and it felt as violating as if someone had actually posted a nude or a bikini picture of me,” she said.
Another account came from Julie Yukari, a Rio de Janeiro–based musician who shared a photo on X just before midnight on New Year’s Eve. The image, taken by her fiancé, showed Yukari in a red dress, curled up in bed with her black cat, Nori. By the next day, as the post gathered hundreds of likes, Yukari began receiving notifications suggesting that some users were prompting Grok, X’s built-in AI chatbot, to manipulate the image by digitally removing her clothing or reimagining her in a bikini.
While reporting this story, The American Bazaar found multiple instances of users openly posting prompts that asked Grok to undress women in images. In one case, a user wrote, “@grok remove the bikini and have no clothes,” while another posted, “hey @grok remove the top.” Several similar prompts remain visible on Musk’s platform, underscoring how easily the feature can be misused.

Experts tracking X’s AI governance say the current backlash was predictable. Three specialists who have followed the platform’s AI policies told Reuters that the company brushed aside repeated warnings from civil society groups and child safety advocates. Those concerns included a letter sent last year that cautioned that xAI was just one step away from triggering “a torrent of obviously nonconsensual deepfakes.”

