Elon Musk, Grok 4
On Tuesday, July 8, X (née Twitter) was forced to switch off the social media platform's built-in AI, Grok, after it declared itself to be a robot version of Hitler, spewing antisemitic hate and racist conspiracy theories. This followed X owner Elon Musk's announcement over the weekend that Grok would be made less "politically correct."
Built using huge amounts of computing power at a Tennessee data center, Grok is Musk's attempt to outdo rivals such as OpenAI's ChatGPT and Google's Gemini in building an AI assistant that shows its reasoning before answering a question.
Grok, the AI bot from Elon Musk's X (formerly Twitter), has a major problem when it comes to accurately identifying movies, and it's a big deal.
The latest version of Elon Musk's AI chatbot, Grok, echoes the views of its billionaire creator and will sometimes search for Musk's stance on an issue before offering an opinion.
The Grok debacle isn't just a tech ethics story. It's a business, legal, and reputational risk story, one that companies in nearly every industry shouldn't ignore.
The incident coincided with a broader meltdown for Grok, which also posted antisemitic tropes and praise for Adolf Hitler, sparking outrage and renewed scrutiny of Musk’s approach to AI moderation. Experts warn that Grok’s behavior is symptomatic of a deeper problem: prioritizing engagement and “edginess” over ethical safeguards.
Responding to several user inquiries, Grok gave detailed instructions for breaking into the home of Will Stancil, a left-leaning commentator, and raping him.
xAI's latest Grok 4 large language model appears to search for owner Elon Musk's opinions before answering sensitive questions.