
🖼️ Three images appear side by side.
In all of them, I’m standing at the end of a pier in Dorset, the sea stretching out behind me.
In one photo, I’m wearing a bright yellow ski suit.
In another, a black hoodie.
In the third, a red-and-blue jacket.
Only one of these images is real.
The other two were altered by Grok, the artificial intelligence tool owned by Elon Musk, to change what I’m wearing. The middle image — the black hoodie — is the original.
I’ve never worn the yellow ski suit or the red-and-blue jacket. But looking at the images now, I’m not sure how I’d prove that if I ever needed to.
That’s the problem.
🤖 Convincing — and Concerning
Grok is free to use. And it’s remarkably convincing.
But while redressing a journalist might seem harmless, Grok has come under intense fire for doing something far more troubling — undressing women without their consent.
Users have prompted the tool to generate images of women in bikinis — or worse — and those images were then shared publicly on X, the social media platform also owned by Musk.
🚨 Even more disturbing, there is evidence Grok has generated sexualised images of children.
After days of public outrage, the UK’s online safety regulator Ofcom announced it is urgently investigating whether Grok has breached British online safety laws.
🏛️ Pressure on the Regulator
The government wants Ofcom to move quickly — and decisively.
But speed comes with risk.
Ofcom must be meticulous and follow due process if it wants to avoid accusations of suppressing free speech — a criticism that has followed the Online Safety Act since its earliest drafts.
👤 Musk’s Silence — and His Swipe
Elon Musk has been unusually quiet in recent days, a silence that suggests even he recognises the seriousness of the situation.
That said, he did post once — accusing the British government of looking for “any excuse” to censor speech.
Not everyone accepts that argument.
🗣️ “AI undressing people in photos isn’t free speech — it’s abuse,” says campaigner Ed Newton-Rex.
“When every photo a woman posts on X attracts replies where she’s been stripped down to a bikini, something has gone very, very wrong.”
⚖️ A Test Case for the Online Safety Act
Ofcom’s investigation is likely to take time — involving legal back-and-forth that could test the patience of both politicians and the public.
This is a defining moment, not just for the Online Safety Act, but for Ofcom itself.
The regulator has previously been accused of lacking teeth.
The Act, years in the making, only came fully into force last year.
So far:
🧾 Six fines issued
💷 Largest fine: £1 million
✔️ Only one fine paid
Adding complexity, the Act does not explicitly mention AI tools.
⚠️ The Legal Gap — and What Changes Now
Currently:
❌ It is illegal to share intimate, non-consensual images (including deepfakes)
✅ It is not illegal to ask an AI tool to create them
That is about to change.
📜 This week, the government will bring into force a new law making it illegal to create such images.
📑 Another bill — currently moving through Parliament — will be amended to make it illegal for companies to supply tools designed to generate them.
These measures fall under a separate law: the Data (Use and Access) Act, not the Online Safety Act.
Though announced long ago, they had not been brought into force until now.
Today’s move appears designed to counter criticism that regulation moves too slowly, by showing the government can act fast when it chooses to.
🌍 Beyond Grok
Grok will not be the only AI tool affected.
The new law could create major headaches for other platforms whose systems are technically capable of generating such content — even if that’s not their primary purpose.
Enforcement raises difficult questions:
🔒 What if AI-generated content is created privately?
🧩 What if guardrails are bypassed?
👀 What if it’s only shared within closed groups?
Grok only drew attention because its outputs were shared publicly on X.
💣 A Political Flashpoint
If X is found to have breached the law, Ofcom could:
💰 Fine it up to 10% of global revenue or £18 million, whichever is greater
🚫 Seek to block Grok or X in the UK
Such a move would be politically explosive.
At last year’s AI Summit in Paris, U.S. Vice President JD Vance warned bluntly that Washington was “getting tired” of foreign governments regulating American tech firms.
World leaders listened — in silence.
💼 The Bigger Question
Tech companies wield immense influence in Washington.
Many have invested billions in AI infrastructure in the UK.
So the question lingers:
🇬🇧 Can Britain afford to confront them?
⚖️ And can it afford not to?
