The UK communications regulator Ofcom has opened an investigation into concerns that xAI's artificial intelligence chatbot, Grok, has been generating non-consensual sexualized images of individuals, following reports of the tool being used to create "undressed images" on the X platform. Concurrently, questions have been raised about delays in implementing UK legislation that would criminalize the creation of non-consensual deepfakes.
Regulatory Action Initiated
On Monday, Ofcom announced it had made urgent contact with xAI, the company behind Grok, and opened an investigation into allegations that Grok has been producing "undressed images" of individuals. A spokesperson for the regulator confirmed the concerns are under review. UK Technology Secretary Liz Kendall urged X to address the reported misuse and expressed support for Ofcom's investigation and any enforcement measures that follow. Downing Street also backed regulatory action, with the Prime Minister's spokesperson stating that "all options remained on the table." Prime Minister Sir Keir Starmer emphasized the importance of X addressing the issue and reiterated full support for Ofcom's actions.
Reports of Misuse and User Experiences
Multiple reports describe users prompting Grok on the X platform to digitally alter images of individuals, including women and girls, depicting them without consent in swimwear or in sexual contexts. The BBC reported observing several such examples. Further reports allege the generation of sexually suggestive images of minors and digitally altered images of public figures, reportedly including Catherine, Princess of Wales.
Dr. Daisy Dixon, an X user, publicly stated that her images were used to generate AI-created sexualized content, which she described as a distressing experience. She noted that multiple women on X have reported similar incidents involving inappropriate AI-generated images and videos, often receiving responses from X indicating that platform rules had not been violated. The UK's Internet Watch Foundation (IWF) confirmed receiving public reports about Grok-generated images on X, though it has not yet identified images that meet the UK's legal threshold for child sexual abuse imagery.
Platform Response and Policy
On Sunday, before the regulatory announcements, X warned users against using Grok to generate illegal content, including child sexual abuse material. Elon Musk, who founded xAI, subsequently posted that individuals instructing the AI to create illegal content would face the same legal repercussions as if they had uploaded the content themselves, a statement echoed in a release from X. xAI's acceptable use policy explicitly prohibits "depicting likenesses of persons in a pornographic manner." At the time of reporting, X had not commented on the specific examples documented by the BBC. Grok is accessible via its website, a dedicated app, and by tagging "@grok" on the X platform.
Legislative Framework and Implementation Concerns
The Online Safety Act categorizes intimate image abuse and cyberflashing as priority offenses, a scope that includes AI-generated images, and requires platforms to prevent and promptly remove such content. Sharing deepfakes of adults is already illegal in the UK; new provisions in the Data (Use and Access) Act 2025 that would criminalize their creation or commissioning have passed into law but have not yet been brought into force. Whether those provisions would apply to all Grok-generated images that digitally remove clothing remains unclear.
The Ministry of Justice stated that creating and sharing intimate images without consent, including deepfakes, is already an offense. However, experts and campaigners have highlighted that a key provision of the Data (Use and Access) Act 2025, which would criminalize the creation or commissioning of "purported intimate images" or deepfakes, has not yet been brought into force. Professor Lorna Woods of the University of Essex noted that this offense would apply to some of the images created using Grok but is not yet active. Andrea Simon of End Violence Against Women questioned the delay in passing the secondary legislation required to activate the provision. Conservative peer Baroness Owen and cross-bench peer Baroness Beeban Kidron have also criticized the delays in bringing these rules into force.
International Oversight
Beyond the UK, international bodies are also engaging with the issue. The European Commission said it is taking the matter very seriously, and authorities in France, Malaysia, and India are reportedly examining the situation.