Weekly Digest on AI, Geopolitics & Security


X turns Grok’s abusive deepfakes into a “premium” feature – and forces a global reckoning over AI accountability

X’s decision to restrict Grok’s image editing to paying users is less a safety fix than a flashpoint in a growing global backlash against AI‑fuelled image‑based abuse. Regulators and governments across multiple continents are now testing how far they can go in holding a Musk‑owned platform to account, exposing deep gaps in current law in the process.

The change came only after Grok was used at scale to generate non‑consensual sexualised images of women and children from ordinary photos, many of which were automatically published to X. Regulators argue that simply paywalling the capability does nothing to address the underlying harm, nor to prevent its continued use via Grok’s stand‑alone app and website.

A flood of abusive deepfakes

The controversy erupted when users discovered that Grok’s image tool would accept an everyday photograph of a real person and then willingly “digitally undress” them on command. Across X, timelines began to fill with a “veritable flood” of nearly nude, sexualised images of real women, minors and public figures, generated on demand by the chatbot.

Several elements made this especially explosive:

– Targeting of real people
Users could upload images of friends, colleagues, celebrities or politicians and request sexualised or nude versions, without their knowledge or consent.

– Publication by default
Interactions with Grok on X are public by design. That meant both the prompts and the resulting deepfakes were posted into public feeds, multiplying their reach and permanence.

– Inclusion of minors and child abuse concerns
Reports to regulators in Australia and Brazil flagged cases involving potential or actual child sexual abuse material, triggering mandatory investigative responses.

– Weaponisation against women
Victims were overwhelmingly women and girls, echoing long‑standing patterns in image‑based sexual abuse but now accelerated by a mainstream AI product.

This was not a fringe misuse: the tool’s design, and Grok’s branding as a chatbot willing to “go there” where competitors refuse, meant that sexualised image generation was both foreseeable and, in practice, amplified by engagement‑driven feeds.

X’s move: from open feature to “premium” gate

Under growing pressure, X and xAI announced a policy change: Grok’s image generation and editing on X would be restricted to paying subscribers only.

In practical terms:

– On X
Only users who pay for a subscription tier can now generate or edit images using Grok within the platform.

– Outside X
The limits do *not* apply to Grok’s separate app and website, where, at the time of the change, anyone could still generate images without paying.

– Rationale
X has suggested that tying the feature to paid accounts enables better traceability of who is requesting what. If a paying user generates illegal content, X says they will face the same consequences as someone uploading illegal images directly.

From a content‑moderation perspective, the move shifts Grok’s most controversial capability from a mass‑market feature to one available to a smaller, more identifiable subset of users. But it does not fundamentally change what the system can do, nor does it appear to add technical safeguards that would prevent sexualised deepfakes from being generated in the first place.

That is why critics describe the change less as a safety measure and more as a decision to monetise access to a harmful capability.

Governments react: “Turning abuse into a premium service”

The backlash from governments and regulators was swift and unusually coordinated.

United Kingdom: “Insulting” to victims

In the UK, Downing Street sharply criticised the decision, framing it as transforming an unlawful service into a paid upgrade. Officials described the move as “insulting” to victims of misogyny and sexual violence and emphasised that the issue is not *who* can use the tool but *what* the tool is allowed to do.

Britain’s new online safety regime already makes it illegal to share non‑consensual intimate images, but a key gap remains: provisions that would criminalise the request or creation of such images are not yet in force. That leaves an awkward space where generating a deepfake may not be explicitly criminal, even when distributing it is.

Ofcom, the UK communications regulator, has demanded explanations from X and is assessing whether the platform is meeting its duties under online safety law. Under that framework, Ofcom theoretically has powers that extend as far as blocking access to X in the UK if systemic failures are found.

European Union: preservation orders and AI scrutiny

The European Commission ordered xAI to retain all documents and data relating to Grok until at least the end of 2026, effectively placing the system under an evidence‑preservation order. This step signals potential future enforcement under both platform‑regulation rules and the EU’s emerging AI governance framework.

The EU has been building a regulatory architecture that treats large platforms and high‑risk AI systems as subjects of proactive oversight, not just reactive complaints. Grok’s case is an early test of whether those rules can handle rapid‑moving AI image abuse.

India: safe‑harbour threat

In India, the Ministry of Electronics and Information Technology directed X to immediately stop misuse of Grok’s image‑generation features or risk losing safe‑harbour protections in the country. Without safe harbour, X could face direct liability for user‑generated content, including AI‑produced deepfakes.

That threat underscores how AI misuse is increasingly being folded into broader debates about intermediary liability and platform responsibility in large democracies.

Australia, Brazil and others: child safety front and centre

Australia’s eSafety Commissioner opened investigations into multiple reports of non‑consensual sexualised deepfakes produced by Grok, primarily involving adults but with some cases evaluated for child exploitation concerns.

In Brazil, the Federal Public Prosecutor’s Office received complaints from federal deputies alleging that Grok had generated and distributed erotic images, including child sexual abuse material. These reports could expose X and xAI to criminal investigation in jurisdictions that treat the mere production of such images as an offence, regardless of whether a real child was physically abused.

Regulators in other countries, including Malaysia, have also signalled concern, contributing to a rare moment of global alignment on platform AI harms.

A legal grey zone: creation versus sharing

Behind the outrage lies a technical but crucial legal distinction: in many countries, the law clearly criminalises sharing non‑consensual intimate images, but is far less explicit about creating or requesting them.

This gap has several consequences:

– Compliance loophole
Platforms can argue that they remove illegal content once notified, keeping them on the right side of rules that focus on distribution, while leaving creation tools largely untouched.

– Victim exposure window
Even if a deepfake is removed after a complaint, its initial creation and viral spread can cause irreparable reputational and psychological harm.

– Ambiguous liability for AI creators
Traditional safe‑harbour regimes were built around hosting or transmitting content created by users. When a platform’s own AI *generates* the harmful content, it is no longer just a “host” but an active creator.

In the United States, this intersects with debates over Section 230 of the Communications Decency Act. Legal experts quoted in Axios argue that Section 230 should not shield AI companies for chatbot outputs, because the speech at issue is generated by the company’s system, not merely hosted content from a third party. If courts adopt that view, platforms deploying systems like Grok could face defamation, privacy, or other civil claims tied directly to their models’ behaviour.

Meanwhile, Washington has enacted the TAKE IT DOWN Act, a federal law aimed at curbing non‑consensual intimate imagery. Its implementation, and how it applies to AI‑generated deepfakes on platforms like X, will be a key test of whether US law can reach this new form of abuse.

Accountability in the age of AI deepfakes

Grok’s case lays bare the unresolved question at the heart of AI governance: when an AI system produces harmful content on a commercial platform, who is responsible?

Several points of responsibility intersect:

– The platform (X)
X chose to tightly integrate Grok into a social network where posts are public by default, and where engagement incentives reward provocative content. It also designed Grok’s public persona to be more permissive than competing systems, signalling a readiness to produce content that others would refuse.

– The AI developer (xAI)
xAI built and trained the model, decided how to constrain it (or not), and marketed access to those capabilities, including to investors. That makes xAI more than a neutral toolmaker.

– Individual users
The people prompting Grok to undress acquaintances or public figures are not passive consumers. In many jurisdictions, they may already be breaking the law once they distribute the resulting images.

– Investors and enablers
The controversy did not deter major institutional investors. xAI announced a $20 billion funding round at the height of the scandal, with prominent firms choosing to back the venture despite the visible legal and reputational risks.

The legal system is only beginning to parse how these layers of responsibility interact. Early lawsuits against other chatbots for allegedly encouraging self‑harm and violence suggest courts are open to holding AI companies directly accountable for the behaviour of their systems. Grok’s role in producing mass non‑consensual deepfakes will likely accelerate that trend.

Monetisation versus mitigation

Central to the criticism of X’s response is the perception that the company has chosen monetisation over mitigation.

A robust safety‑first response might have included measures such as:

– Disabling sexualised transformations of real-person photos entirely.
– Blocking uploads of images featuring minors and implementing strong age‑detection safeguards.
– Applying default private mode or strict sharing limits to AI‑generated human images.
– Providing victims with rapid takedown tools and clear reporting pathways.

Instead, the primary change has been to tie image editing and generation on X to a paid subscription, while leaving a free‑to‑use pathway open through Grok’s separate app and site. From a victim’s perspective, the ability of strangers to weaponise their images remains largely intact; from the company’s perspective, the feature now sits behind a revenue stream.

That is why the UK government’s characterisation of the move as turning “an unlawful feature into a premium service” has resonated so widely. It crystallises a broader anxiety: that powerful AI capabilities are being rolled out to billions of users, with fixes treated as product‑tier decisions rather than non‑negotiable safety obligations.

A turning point for AI and platform regulation?

Despite the clear harms, there is one way in which the Grok episode may prove constructive: it has triggered an unprecedented level of regulatory convergence around AI misuse on a single platform.

Key developments to watch include:

– Whether Ofcom or any other regulator seriously moves toward service‑blocking measures, which would set a dramatic precedent for AI‑enabled platforms.
– How the EU integrates cases like Grok into enforcement of the AI Act and its broader digital rulebook, potentially treating permissive generative systems as high‑risk.
– Whether India’s safe‑harbour threat materialises into concrete action that changes X’s behaviour globally, not just within one jurisdiction.
– The pace at which legal gaps around the *creation* of non‑consensual deepfakes are closed, including in the UK and under US federal and state law.
– How courts interpret platform and AI‑developer liability where the system, rather than a human user, is deemed the “author” of harmful content.

For the AI industry, Grok is already cited by legal scholars as a cautionary tale that “casts a shadow over the entire AI landscape.” It demonstrates how quickly reputational and legal risk can escalate once generative models are plugged directly into social feeds without strong guardrails.

For victims of image‑based abuse, the episode is another reminder that technological innovation has outpaced basic safeguards. Tools that can degrade, sexualise and broadcast their likeness are now accessible globally, with remedies still patchy and slow.

And for platforms like X, the message from regulators is increasingly clear: the choice is no longer between innovation and accountability. In the emerging regulatory environment, continued access to major markets may depend on getting both right.