Hours after the Bondi terrorist attack, while many Australians slept, a lie was generated and laundered by artificial intelligence.
The only bright spot from Sunday’s atrocity targeting Jewish Australians, which left 15 dead and 29 injured, was the heroics of bystander Ahmed al-Ahmed, who was filmed fearlessly tackling and disarming one of the alleged gunmen.
But in the early hours of Monday morning, an alternative narrative emerged: the story of the Muslim Syrian-born immigrant risking his life to subdue the shooter was wrong. The “real” identity of the hero was a 43-year-old Australian IT professional called “Edward Crabtree”.
The source of this false claim was what purported to be a news website. Everything suggested this site, “The Daily”, was untrustworthy. The domain, www.thedailyaus.world — similar to the real Australian news outlet The Daily Aus — was registered on Sunday. It had just one other article. None of its writers existed anywhere else.
The article, too, had all the hallmarks of a set-up. It cited fake quotes from figures like Prime Minister Anthony Albanese, it incorrectly identified the ex-NSW Police commissioner Karen Webb as still in the role and at a press conference, and it described events that didn’t happen (in the article’s telling, the bystander “pinned the man to the ground until other bystanders rushed in to help restrain him, and police arrived within minutes”).
Two AI text detectors run by Crikey over the article identified it as likely written by AI.
A screenshot of the hoax website Grok initially relied upon.
The article was posted on Elon Musk-owned X as early as 9.46pm on Sunday night, less than three hours after the attack began.
ASPI analyst Nathan Ruser documented the fake narrative on X at 12.20am, calling it “AI-generated disinformation falsely claiming the hero was a Sydney-born local called ‘Edward Crabtree’, with a complete AI-generated backstory”.
But by then, it had already spread across the platform, jumped over to other parts of the internet, and was used to undermine the real heroics of al-Ahmed.
Then X’s chatbot, Grok, joined in too. At almost exactly the same time as Ruser’s post — 12.25am — @grok first replied about Crabtree to a user who had prompted the AI chatbot in a now-deleted post. (For those not familiar, Musk’s AI chatbot Grok is integrated into X, allowing any user to ask the chatbot a question, or to get it to respond to content on the website, simply by tagging it.)
“Edward Crabtree is a 43-year-old IT professional and senior solutions architect from Sydney, Australia. On Dec. 14, 2025, he heroically disarmed a gunman during a mass shooting at Bondi Beach, tackling the attacker, wrestling away his rifle despite being shot twice, and pinning him until police arrived,” the bot declared.
Over the next hour or so, Grok responded to a number of people to declare that Crabtree was the hero responsible. Then, it began to waver, hedging that the bystander was either Crabtree or, citing “sources”, that the hero was in fact al-Ahmed. Then, Grok declared the Edward Crabtree story was “AI-generated fake news” that came from “a newly created website spreading misinformation”.
But it would still sometimes double down on its lies. As recently as 4am on Monday, hours after Grok had acknowledged it was wrong, the bot continued to spread the lie as if it were real. (This wasn’t the only incorrect fact that Grok spread about the attack. It also incorrectly claimed that footage of al-Ahmed was actually repurposed old footage, again undermining al-Ahmed’s heroics.)
AI chatbots producing incorrect information is nothing new. Neither is the idea that they can be deliberately seeded with disinformation; research suggests Russia is actively pumping out propaganda that is being absorbed by mainstream chatbots.
But what’s different — and worrying — about Grok on the Bondi terrorist attack is the near-instant creation of a closed loop of AI misinformation. It appears that AI was used to generate the lie, which was then absorbed by AI, before being instantly regurgitated in a breaking news situation.
What we saw was an instant AI ouroboros — the snake eating its own tail — which was then picked up and used to beat others over the head. X users repeatedly prompted Grok’s Edward Crabtree reply to contradict viral posts about al-Ahmed’s actions.
In response to one viral post about al-Ahmed, one user replied, “He’s not the one they say he is. He’s an IT professional, his name is Edward Crabtree, not Ahmed.”
Then, they responded to their own post, “@grok who is edward crabtree?”, to prompt the AI bot to tell them that he was the heroic bystander. Grok’s answers never link to its source of information, and rarely even name it. Audiences are authoritatively told something as if it were an undeniable fact.
I don’t think Grok’s lies circulated particularly far. It seemed like, most of the time, people were calling in Grok to reinforce their own beliefs. Even among the misinformation swirling around the Bondi terrorist attack, Grok was far from the biggest player.
But it’s worth thinking about in the context of Musk’s mission to dismantle trust in traditional institutions in favour of the things that he owns and controls.
Under him, Twitter’s blue check mark turned from proof of someone’s identity and significance into proof on X.com that you have A$13 a month and a promise that your posts will be prioritised.
Grok is supposed to be “maximally truth-seeking”, but Musk has also promised to put his finger on the scale after Grok gave answers he didn’t like. His latest venture, Grokipedia, is a largely AI-generated online encyclopedia that predominantly copies from Wikipedia, with some differences that can be attributed to its use of neo-Nazi forums as a source.
All of this — 280 characters as the base unit of truth, news only existing if it’s posted to X, an algorithmic feed serving its users content based on an inscrutable recipe of politics-pushing and engagement-hacking, and an AI chatbot that definitionally doesn’t “know” anything but speaks as if it is undeniable — is pushing us towards a world where the truth isn’t witnessed, only relayed to us by the warped voices of others.
Zooming out from Musk, this is also a glimpse of a new form of information warfare with AI as its target. Bad actors will race to poison a handful of products that are increasingly the central source of news and information for hundreds of millions of people. We already see people fill news vacuums with misinformation, except now the payoff is having your version of the world laundered by a trusted AI companion. And what better tool to assist with this than generative AI, a technology that can instantly produce content that is excellent at imitating truth?
What comes later is predictable: the complete automation of this process so that it happens without human intervention at all. It’s not only conceivable that someone could train AI to detect interesting breaking news events, spin up a false counter-narrative, generate content promoting that view, and seed it out to the world via social media bots — it’s possible right now.
(I’m not exaggerating. It took me five minutes to set up a commercial AI chatbot to review the world’s news, pick an event, create a contradictory account, and generate an entire news website with multiple articles written about it. Give me a few dollars and a few extra minutes, and I could register a domain for the website and push it out via bought social media accounts.)
When the world’s incentives are set up to prioritise engagement and extremes above truth, and to encourage counter-narratives for the portion of the world that defaults to believing the opposite of whatever they’re told, people will take advantage. And now, robots will too.
- This story first appeared on Crikey. You can read the original here.
