Wired and Business Insider recently pulled a number of articles penned by a mysterious freelancer named Margaux Blanchard, after discovering they had almost certainly been generated by AI and filled with fabricated characters and scenes.
That's right: what looked like polished magazine features turned out to be digital mirages.
Suspicions first arose when "Blanchard" pitched a story about a secretive Colorado town called Gravemont.
A quick Google search showed the town didn't exist. She also bypassed standard payment systems, demanded payment by check or PayPal, and couldn't prove her identity.
Beyond Wired and Business Insider, other outlets like Cone Magazine, SFGate, and Naked Politics also published, then swiftly deleted, her bylines.
Inside Wired, there's a bit of rueful awe. A pitch about virtual weddings in Minecraft seemed so vividly Wired-esque that it sailed through editorial filters, until deeper digging revealed there was no "Jessica Hu" or digital officiant.
It's less "gotcha moment" and more "whoopsie-daisy": "If anyone should be able to catch an AI scammer," Wired admitted, "it's us."
These embarrassments aren't isolated. Tech publisher CNET faced similar backlash when its AI-written personal finance stories turned out to be error-riddled dumpster fires, prompting a newsroom union revolt demanding transparency.
It's easy to mistake slick AI copy for genuine content, until you try to verify the details.
All this raises big questions: how did AI-generated pitches fool sharp-eyed editors? Even AI-detection tools failed to sniff them out. It shows that these systems can produce real-sounding stories with zero accountability, a scary gap in journalism's defenses.
My take? This is the digital equivalent of a Trojan horse sitting in your editorial inbox. Readers, editors, and tech platforms need to team up on stronger verification routines, and maybe a little healthy skepticism isn't such a bad thing after all.

