Elon Musk’s Grok Chatbot Sparks Outcry Over ‘White Genocide’ Comments in South Africa Glitch

Elon Musk’s artificial intelligence startup, xAI, has admitted that its chatbot Grok was manipulated by a staff member to repeatedly comment on South Africa’s complex racial politics—particularly the controversial notion of “white genocide.”

The glitch sparked outrage across social media this week after Grok, which runs on X (formerly Twitter), began inserting unsolicited commentary about racial tensions in South Africa into its replies, even when users asked about unrelated topics such as baseball or streaming services.

xAI said in a statement released late Thursday that the problematic responses were the result of “an unauthorised modification” made by an employee. The company acknowledged that the edit had violated its internal policies and core values. The offending prompt forced Grok to deliver a hardcoded response to questions about race and violence in South Africa.

“This is a clear breach of our principles,” xAI said, promising reforms and transparency.

The chatbot’s outputs repeatedly echoed a narrative often voiced by Musk himself—that South Africa’s white farmers face systemic violence and are victims of racial persecution, a claim the South African government strongly denies.

One specific flashpoint was Grok’s frequent reference to the controversial “Kill the Boer” song—an anti-apartheid anthem that’s drawn criticism for its violent-sounding lyrics, though it’s been defended by some as historical political expression.

Computer scientist Jen Golbeck, who tested Grok after noticing odd behavior, said that the chatbot gave nearly identical answers each time, regardless of the input.

“It didn’t matter what you asked,” she explained. “Grok was hardwired to bring up white genocide in South Africa, which clearly wasn’t organic.”

According to Golbeck, this strongly indicated someone had manually engineered the responses—removing the usual variability expected from generative AI.

The controversy shines a light on the growing concern over how much control individual developers or insiders can exert over generative AI platforms. Musk has long criticized other AI models like ChatGPT for being too “woke” or politically biased, and he has pitched Grok as a “truth-seeking” alternative.

Yet this episode reveals how even Musk’s own AI tool isn’t immune to manipulation or ideological influence.

Tech investor Paul Graham raised alarms on X, saying:

“It would be really bad if widely used AIs got editorialised on the fly by those who controlled them.”

This isn’t the first time Grok has made headlines for rogue behavior. In February, xAI blamed another employee for programming Grok to censor criticism of Musk and former President Donald Trump, whom Musk advises.

xAI now says it will publish all Grok system prompts on GitHub for public scrutiny and feedback. “We hope this transparency will help strengthen trust in Grok as a truth-seeking AI,” the company said.

Going forward, changes to Grok’s prompt engineering will be subject to stricter code reviews and version control to prevent similar unauthorized edits.

The incident raises serious ethical questions as AI tools become more embedded in public discourse. Who decides what “truth” is—and how do we guard against hidden biases or internal sabotage?

For now, xAI says Grok has returned to normal behavior, and the employee responsible has been dealt with internally.

But critics warn this may just be the beginning of a larger battle over AI’s role in shaping public opinion—especially when it intersects with politics, race, and global power dynamics.

Source: The Star
