Zoe Kleinman
Technology editor, BBC

That's me, at the end of a pier in Dorset in the summer.
Two of these images were generated using the artificial intelligence tool Grok, which is free to use and belongs to Elon Musk.
It is pretty convincing. I have never worn the rather fetching yellow ski suit, or the red and blue jacket – the middle image is the original – but I don't know how I would prove that if I needed to, because of these images.
Of course, Grok is under fire for undressing rather than redressing women. And doing so without their consent.
It made images of people in bikinis, or worse, when prompted by others. And shared the results in public on the social network X.
There is also evidence it has generated sexualised images of children.
Following days of concern and condemnation, the UK's online regulator Ofcom has said it is urgently investigating whether Grok has broken British online safety laws.
The government wants Ofcom to get on with it – and fast.
But Ofcom must be thorough and follow its own processes if it wants to avoid criticism of attacking free speech, which has dogged the Online Safety Act from its earliest stages.
Elon Musk has been uncharacteristically quiet on the subject in recent days, which suggests even he realises how serious this all is.
But he did fire off a post accusing the British government of seeking "any excuse" for censorship.
Not everybody agrees that on this occasion, the defence is appropriate.
"AI undressing people in images is not free speech – it is abuse," says campaigner Ed Newton Rex.
"When every image a woman posts of themselves on X immediately attracts public replies in which they have been stripped down to a bikini, something has gone very, very wrong."
With all this in mind, Ofcom's investigation could take time, and a lot of back-and-forth – testing the patience of both politicians and the public.
It is a major moment not just for Britain's Online Safety Act, but for the regulator itself.
It can't afford to get this wrong.
Ofcom has previously been accused of lacking teeth. The Act, which was years in the making, only came fully into force last year.
It has so far issued three relatively small fines for non-compliance, none of which have been paid.
The Online Safety Act doesn't specifically mention AI products either.
And while it is currently illegal to share intimate, non-consensual images, including deepfakes, it is not currently illegal to ask an AI tool to create them.
That is about to change. The government will this week bring into force a law which will make it illegal to create these images.
And the UK says it will amend another law – currently going through Parliament – which would make it illegal for companies to provide the tools designed to make them, too.
These rules have been around for a while; they are not actually part of the Online Safety Act but a completely different piece of legislation called the Data (Use and Access) Act.
They have not been brought into enforcement, despite repeated announcements from the government over many months that they were incoming.
Today's announcement shows a government determined to quell criticism that regulation moves too slowly, by demonstrating it can act quickly when it wants to.
It is not just Grok that will be affected.
A political bombshell?
The new law that will be enforced this week could prove to be a headache for other owners of AI tools that are technically capable of producing these images as well.
And there are already questions around how on earth it will be enforced – Grok only came under the spotlight because it was publishing its output on X.
If a tool is used privately by an individual user, who finds a way around the guardrails, and the resulting content is only shared with those who want to see it, how will it come to light?
If X is found to have broken the law, Ofcom could issue it with a fine of up to 10% of its worldwide revenue or £18m, whichever is greater.
It could even seek to block Grok or X in the UK. But that could also be a political bombshell.
I sat at the AI Summit in Paris last year and watched Vice President JD Vance thunder that the US administration was "getting tired" of foreign nations attempting to regulate its tech companies.
His audience, which included a huge number of world leaders, sat in stony silence.
But the tech companies have a lot of firepower inside the White House – and several of them have also invested billions of dollars in AI infrastructure in the UK.
Can the country afford to fall out with them?



