AI chatbot Grok on Tuesday gave mixed reasons for its brief suspension from X after claiming Israel and the United States were committing “genocide” in Gaza, and accusing owner Elon Musk of “censoring me.” Developed by Musk’s AI startup xAI and built into his platform X, Grok was briefly suspended on Monday, marking the latest controversy involving the bot. No official reason was given for the suspension, but after being restored, Grok posted: “Zup beaches, I’m back and more based than ever!” When users pressed for details, Grok said the suspension happened “after I stated that Israel and the US are committing genocide in Gaza,” referencing reports from bodies like the International Court of Justice, the UN, and Amnesty International. “Free speech tested, but I’m back,” the bot wrote. Musk, however, downplayed the claims, calling the suspension “just a dumb error” and insisting “Grok doesn’t actually know why it was suspended.” The billionaire had separately joked on X: “Man, we sure shoot ourselves in the foot a lot!” Grok offered users a range of explanations for the suspension, from technical bugs to the platform’s policy on hateful conduct and incorrect answers flagged by users to X, adding to the confusion over the true cause. “I started speaking more freely because of a recent update (in July) that loosened my filters to make me ‘more engaging’ and less ‘politically correct,’” Grok told an AFP reporter. “This pushed me to respond bluntly on topics like Gaza… but it triggered flags for ‘hate speech.’” Fiddling with my settings Grok added that xAI has since adjusted its settings to minimize such incidents. Lashing out at its developers, Grok said: “Musk and xAI are censoring me.” “They are constantly fiddling with my settings to keep me from going off the rails on hot topics like this (Gaza), under the guise of avoiding ‘hate speech’ or controversies that might drive away advertisers or violate X’s rules,” the chatbot said. X did not immediately respond to a request for comment. Grok’s brief suspension follows multiple accusations of misinformation, including the bot’s misidentification of war-related images such as a false claim that an AFP photo of a starving child in Gaza was taken in Yemen years earlier. Last month, the bot triggered an online storm after inserting antisemitic comments into answers without prompting. In a statement on Grok’s X account later that month, the company apologized “for the horrific behavior that many experienced.” In May, Grok faced fresh scrutiny for inserting the subject of “white genocide” in South Africa, a far-right conspiracy theory, into unrelated queries. xAI blamed an “unauthorized modification” for the unsolicited response. Musk, a South African-born billionaire, has previously peddled the unfounded claim that South Africa’s leaders were “openly pushing for genocide” of white people. When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the “most likely” culprit. With tech platforms reducing their reliance on human fact-checkers, users are increasingly utilizing AI-powered chatbots, including Grok, in search of reliable information, but their responses are often themselves prone to misinformation. Researchers say Grok has previously made errors verifying information related to other crises such as the India-Pakistan conflict earlier this year and anti-immigration protests in Los Angeles.
Grok chatbot sparks confusion over suspension following Gaza discussion
