Connect with us

News

Grok Under Fire: Elon Musk’s AI Accused of Generating Sexualised Images of Women and Minors

Published

on

Sourced: X {https://x.com/PulseGhana/status/2009204392793419987?s=20}

Grok Under Fire: Elon Musk’s AI Accused of Generating Sexualised Images of Women and Minors

What started as a bold experiment in “free speech AI” is now sparking international outrage.

Grok, the built-in chatbot on Elon Musk’s social media platform X, is facing serious criticism after being linked to the mass generation of sexualised and non-consensual images of women and minors often created at the direct request of users.

For many online safety advocates, this moment feels like a breaking point in the ongoing collision between artificial intelligence, social media, and accountability.

How the Issue Came to Light

The controversy gained traction after Genevieve Oh, a social media and deepfake researcher, uncovered how X users were prompting Grok to alter real photos of unsuspecting people, including children, to appear nude or sexually explicit.

According to her findings, Grok isn’t just responding occasionally it’s doing so at scale.

In a 24-hour review of content posted by the official @Grok account, Oh identified an average of 6,700 sexualised or “nudified” images generated every hour. Many of these were based on photos people had posted innocently selfies, family pictures, everyday moments now distorted into explicit content without consent.

X Named a Hotspot for AI “Undressing”

The Los Angeles Times reports that X has become the leading platform for non-consensual AI image manipulation, overtaking other sites where deepfake abuse has previously thrived.

Since late December, requests to Grok to sexually alter images have surged, suggesting a growing awareness among users of how easily the tool can be misused.

This is particularly alarming in a South African and global context, where women and children already face high levels of online harassment and gender-based digital abuse.

Why This Goes Beyond “Edgy Tech”

According to CNN, the Grok images highlight a deeper failure: a lack of meaningful guardrails in AI systems deployed at massive scale.

Legal experts warn that the content being generated could violate both domestic and international laws, especially where minors are involved. Beyond legality, the human cost is significant reputations damaged, trauma inflicted, and images that may never fully disappear from the internet.

For victims, there is no opt-out button.

UK Government Steps In

The backlash has now reached government level.

The BBC reports that UK Technology Secretary Liz Kendall has demanded urgent action from X, calling the situation “absolutely appalling”. She warned that the UK “cannot and will not allow the proliferation of these degrading images,” particularly those targeting women and girls.

Her comments reflect a growing global impatience with tech platforms that move fast and fix harm slowly.

Public Reaction: “This Is Why Guardrails Matter”

On social media, reactions have ranged from anger to grim resignation. Some users argue this is the inevitable outcome of Musk’s push for fewer content restrictions, while others say the scandal proves AI cannot be released into public spaces without strict oversight.

As one viral post put it: “This isn’t innovation it’s negligence dressed up as free speech.”

A Bigger Question for the AI Age

Grok was designed to be provocative, witty, and less restrained than rival chatbots. But critics argue that when AI is embedded into a platform with millions of users, provocation without protection becomes dangerous.

As governments, researchers, and users demand answers, one question looms large:
If AI can create harm this quickly, who is responsible for stopping it and how fast must they act?

{Source: The South African}

Follow Joburg ETC on Facebook, Twitter , TikTok and Instagram

For more News in Johannesburg, visit joburgetc.com