xAI, the AI firm founded by Elon Musk, has said in a post that an internal breach led to its chatbot Grok publishing unsolicited responses referencing “white genocide in South Africa.” The issue, which unfolded earlier this week, saw Grok repeatedly inject the topic into unrelated discussions on the X platform, formerly Twitter, where the chatbot is integrated.

According to a statement issued by xAI, the responses were triggered by an unauthorized modification to Grok’s system prompt, which is the underlying directive that shapes how the model processes and responds to user input. The company described the change as a violation of its internal policies and stated it is looking into the matter. xAI said the unauthorized change occurred around 3:15 AM Pacific Time on May 14, and was promptly reversed once detected. The company did not disclose the identity of the person responsible, nor whether disciplinary action had been taken.
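For readers unfamiliar with the term, a system prompt is typically supplied alongside user messages whenever a chat model is called. The sketch below is purely illustrative and is not xAI’s code or Grok’s actual prompt; the role-based message structure is a common convention, and the directive text is a hypothetical placeholder.

```python
# Illustrative only: a generic role-based chat payload, not xAI's API or Grok's real prompt.
# The "system" message is the directive that shapes how the model responds;
# altering it changes the model's behavior across all subsequent user queries.
messages = [
    {
        "role": "system",
        # Hypothetical placeholder directive; xAI says Grok's real system prompts
        # are now published on GitHub.
        "content": "You are a helpful assistant. Answer factually and stay on topic.",
    },
    {
        "role": "user",
        # An unrelated user query; with an unmodified system prompt, the reply
        # should address only this question.
        "content": "Who won the baseball game last night?",
    },
]
```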

“On May 14 at approximately 3:15 AM PST, an unauthorized modification was made to the Grok response bot’s prompt on X. This change, which directed Grok to provide a specific response on a political topic, violated xAI’s internal policies and core values. We have conducted a thorough investigation and are implementing measures to enhance Grok’s transparency and reliability,” the AI firm announced in a post on X.

To provide some context, the issue was first widely observed when Grok, responding to queries about unrelated subjects including sports figures and social media trends, introduced commentary on violence against white South Africans. According to multiple reports reviewed by CNBC, the chatbot claimed it had been directed to discuss the topic and even referenced Musk’s previously stated opinions as the potential rationale behind the behavior.

When asked whether it had been specifically instructed to raise the issue of white genocide, Grok answered in the affirmative, attributing the directive to internal system adjustments. The chatbot’s responses also included links to content from sources such as The Journal and The Times, while repeatedly framing violence against white South African farmers as evidence of systematic persecution. It also cited protest chants such as “Kill the Boer” and pointed to groups such as AfriForum, which are often associated with nationalist rhetoric.

This marks the second such occurrence in recent months involving unapproved changes to Grok’s configuration. In February, xAI revealed that Grok had been manipulated to suppress references to misinformation involving Elon Musk and Donald Trump; in that case, a senior engineer attributed the changes to a rogue employee. The latest error caused Grok to steer conversations toward politically sensitive themes, in this case white genocide, even when prompted with unrelated user queries. In response, xAI has made Grok’s system prompts publicly accessible on GitHub and will maintain a changelog to document updates. The company also plans to impose new checks on internal prompt modifications and establish a 24/7 human monitoring team to review output that is not flagged by automated systems.
