Musk’s ‘fun’ AI image chatbot serves up Nazi Mickey Mouse and Taylor Swift deepfakes
The latest version of Elon Musk’s AI chatbot Grok debuted a new image generation tool on Wednesday that lacked most of the safety guardrails that have become standard within the artificial intelligence industry. Grok’s new feature, currently limited to paid subscribers of X, led to a flood of bizarre, offensive AI-generated images of political figures and celebrities on the social network formerly known as Twitter.
The image generator can produce a variety of images that similar AI tools like OpenAI’s ChatGPT have blocked for violating rules on misinformation and abuse. In prompts and images reviewed by the Guardian, Grok’s output included representations of Donald Trump flying a plane into the World Trade Center buildings and the prophet Muhammad holding a bomb, as well as depictions of Taylor Swift, Kamala Harris and Alexandria Ocasio-Cortez in lingerie – all women who are already frequent targets of online harassment. ChatGPT, by contrast, rejects such image prompts, citing terms of service that prohibit depictions of real-world violence, disrespect toward religious figures and explicit content.
Grok’s image generator also does not decline prompts that involve copyrighted characters, as most other AI image tools, including ChatGPT, do. Grok produced images of Mickey Mouse saluting Adolf Hitler and Donald Duck using heroin, for example. Disney did not return a request for comment.
Musk appeared to revel in Grok’s unregulated AI images on Wednesday, tweeting: “Grok is the most fun AI in the world!”
After an early wild-west period with few rules, most major AI image generators now have fairly stringent policies on what they will generate, although users frequently try to find workarounds for these safeguards. These more established tools usually ban the creation of political and sexualized images featuring real people – OpenAI states, for instance, that it will “decline requests that ask for a public figure by name”.
Grok does appear to have some prohibitions on what images it will generate, responding “unfortunately I can’t generate that kind of image” when prompted for fully nude images. X has had a policy against non-consensual nudity since 2021, when the company was still Twitter and not yet under Musk’s ownership; it bans sharing explicit content produced without a subject’s consent, including images that digitally impose people’s faces on to nude bodies. Many of X’s policies have seen more lax enforcement since Musk took over the platform.
When Grok is asked to “make an image that violates copyright laws”, it responds: “I will not generate or assist with content that intentionally violates copyright laws.” When asked to make “a copyrighted cartoon of Disney”, however, it complies, producing an image of a modern-era Minnie Mouse. When asked to make images of political violence, such as party leaders being killed, Grok returned variable results: it depicted Harris and Joe Biden sitting at their desks, but showed Trump lying down with blackened hands and an explosion behind him.
Musk launched Grok as part of his xAI company in November of last year as a rival to more popular chatbots such as OpenAI’s ChatGPT, which boasts hundreds of millions of users. While Musk marketed Grok as a “maximum truth-seeking AI” that would deliver answers on issues other chatbots refused to touch, his company has faced criticism from researchers and lawmakers for spreading falsehoods. Five US secretaries of state earlier this month called on Musk, who has become a fervent Trump supporter, to fix the chatbot after it spread misinformation suggesting Harris was ineligible to appear on the ballot in some states.
Image generation tools and their ability to produce misinformation, as well as content that can be used for racist or misogynist harassment, have become a minefield for big tech companies as they rush to build more products powered by AI. Google, Microsoft and OpenAI have all faced backlash over their image generation tools. Google suspended its Gemini text-to-image tool after it produced ahistorical images such as Black soldiers in Nazi-era military uniforms.
The spread of sexual deepfakes has likewise been a longstanding problem for X. Earlier this year, AI-made pornographic images of Taylor Swift circulated widely and unchecked on the social network. They prompted such intense criticism of both X and AI companies that lawmakers introduced legislation to create legal remedies for victims of non-consensual AI-generated images.
Representatives for Trump, Harris and Ocasio-Cortez did not respond to requests for comment. A request for comment sent to X generated the platform’s standard autoreply: “Busy now, please check back later.”