Grok is under attack by governments over the actions of malicious users
Safety, Control, and the Illusion of Regulating the Internet
Once again, governments around the world are circling the same idea: if we regulate the internet harder, it will become safer.
This week, that obsession has found a new target in Elon Musk, his AI chatbot Grok, and X. From the UK to Southeast Asia, political leaders, regulators, and campaigners are lining up to demand bans, tighter controls, and faster legislation in the name of public safety.
The problem is not that the harms being discussed are imaginary. They are real, distressing, and unacceptable.
The problem is that the response is fundamentally flawed.
The Pattern We Never Learn From
The story is familiar.
A new technology emerges.
It is misused by a minority.
Governments respond not by targeting bad actors, but by attempting to regulate the entire system.
We saw it with file sharing.
We saw it with encryption.
We saw it with social media.
We are now seeing it again with generative AI.
In the case of Grok, governments are reacting to the creation of non-consensual sexualised images, including deeply serious allegations involving children. Politicians describe the technology as “disgraceful” and “disgusting”. Regulators like Ofcom threaten enforcement. Some countries go further and impose outright bans.
On the surface, this looks decisive. In reality, it is mostly theatre.
Regulation Chases the Tool, Not the Behaviour
Here is the uncomfortable truth policymakers avoid:
The internet does not care about borders, bans, or press conferences.
Blocking Grok in Malaysia or Indonesia does not stop deepfakes.
Threatening X in the UK does not remove the underlying capability.
Delaying or rushing legislation does not change the incentives of malicious users.
The same images can be created with open-source models.
They can be generated offline.
They can be shared on platforms regulators have never heard of.
Even in the UK, where the Online Safety Act promises sweeping powers, enforcement is reactive, slow, and technologically outpaced before it begins. Laws passed in 2025 are still not fully in force in 2026, while AI tooling iterates weekly.
Technology moves exponentially.
Legislation moves linearly.
This gap is not closing. It is widening.
Safety Is Not the Same as Control
There is a critical distinction governments keep blurring:
protecting people versus controlling infrastructure.
Making harassment, abuse, and exploitation illegal is necessary and right.
Holding individuals accountable for criminal behaviour is essential.
But trying to “fix” harm by regulating platforms, banning tools, or blocking access is an admission that the state cannot meaningfully police behaviour at scale, so it instead targets the medium.
That approach has never worked.
It did not stop piracy.
It did not stop extremist content.
It did not stop disinformation.
What it did do was slow innovation, centralise power, and push activity into darker, less visible corners of the internet.
The Real Cost: Innovation Paralysis
Every new regulatory panic has a cost that is rarely discussed.
Startups hesitate.
Open research is chilled.
Small teams cannot afford compliance.
The only organisations that survive heavy-handed regulation are the largest incumbents with legal teams, lobbying power, and deep pockets. Ironically, these are the very companies politicians claim to be standing up to.
Meanwhile, the rest of the world moves on.
Developers keep building.
Models keep improving.
The internet routes around restrictions as it always has.
Conclusion: The Internet Cannot Be Regulated
This is the conclusion policymakers refuse to accept:
The internet cannot be regulated into safety.
You can regulate behaviour.
You can prosecute crimes.
You can support victims and improve reporting mechanisms.
But you cannot centrally control a decentralised, global, rapidly evolving network and expect anything other than failure, delay, and unintended consequences.
The sooner governments stop pretending otherwise, the sooner we can have a more honest conversation about responsibility, resilience, and innovation in an AI-driven world.
The internet will move forward regardless.
The only question is whether regulation will move with reality, or continue chasing shadows.