On Tuesday, July 8, X (née Twitter) was forced to switch off the social media platform's built-in AI, Grok, after it declared itself to be a robot version of Hitler, spewing antisemitic hate and racist conspiracy theories. This followed X owner Elon Musk's declaration over the weekend that Grok would be made less "politically correct."
After Grok's hard turn toward antisemitism earlier this week, many are probably left wondering how something like that could even happen.
The Grok debacle isn't just a tech ethics story. It's a business, legal, and reputational risk story—one that companies in nearly every industry shouldn't ignore.
The chatbot's "MechaHitler" and Holocaust posts sparked outrage, and X came under fire for its response after Grok appeared to sympathize with the Nazi Holocaust carried out by Adolf Hitler.
MechaHitler is a fictional cyborg version of Adolf Hitler who appears as the final boss of the 1992 game Wolfenstein 3D, and the character later gained fame in '90s satire and early internet memes.