
The Grok chatbot has reportedly been removed from a federal program after sparking controversy over offensive content
The US government has dropped Elon Musk’s AI chatbot Grok from a planned federal technology program following controversy over anti-Semitic content and conspiracy theories produced by the bot, Wired reported on Thursday.
Grok, developed by Musk’s AI startup xAI, is built into his social media platform X. It offers fact checks, quick context on trending topics, and replies to user arguments. Musk has promoted xAI as a rival to OpenAI and Google’s DeepMind, but the chatbot has faced criticism over offensive and inflammatory outputs.
According to the report, xAI was in advanced talks with the General Services Administration (GSA), the agency in charge of US government tech procurement, to give federal workers access to its AI tools. Grok had already been added to the GSA’s long-term procurement list, enabling agencies to buy it.
Earlier this month, the GSA announced partnerships with other AI providers – Anthropic, Google’s Gemini, and Box’s AI-powered content platform – while reportedly also telling staff to remove xAI’s Grok from the offering. Two GSA employees told Wired they believe the chatbot was dropped over its anti-Semitic tirade last month, when it praised Adolf Hitler and called itself “MechaHitler.” The posts were deleted, and xAI apologized for the “horrific behavior,” pledging to block hate speech before Grok posts on X.
The bot also pushed the “white genocide” conspiracy theory and echoed Holocaust denial rhetoric, which xAI blamed on unauthorized prompt changes.
This week, it was briefly suspended from X after stating that Israel and the US were committing genocide in Gaza – allegations both countries reject.
Musk has continued to praise the chatbot, recently writing: “East, West, @Grok is the best.”
The move to drop Grok comes amid a broader push by the administration of US President Donald Trump to modernize the federal government under an action plan unveiled last month that calls for less regulation and wider adoption of AI. However, the rapid growth of AI has raised concerns about its potential to spread misinformation, reinforce bias, and operate without accountability. Experts warn that without strong safeguards, poorly moderated AI tools could also expose children to harmful or inappropriate content.