An analysis by WIRED this week found that ICE and CBP's face recognition app Mobile Fortify, which is being used to identify people across the United States, isn't actually designed to verify who people are, and was only authorized for Department of Homeland Security use by relaxing some of the agency's own privacy rules.
WIRED took a close look at highly militarized ICE and CBP units that use extreme tactics usually seen only in active combat. Two agents involved in the shooting deaths of US residents in Minneapolis are reportedly members of these paramilitary units. And a new report from the Public Service Alliance this week found that data brokers can fuel violence against public servants, who are facing increasing threats but have few ways to protect their personal information under state privacy laws.
Meanwhile, with the Milano Cortina Olympic Games beginning this week, Italians and other spectators are on edge as an influx of security personnel, including ICE agents and members of the Qatari Security Forces, descends on the event.
And there's more. Each week, we round up the security and privacy news we didn't cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.
AI has been touted as a super-powered tool for finding security flaws in code, whether for hackers to exploit or for defenders to fix. For now, one thing is proven: AI creates plenty of those hackable bugs itself, including a very bad one revealed this week in the AI-coded social network for AI agents known as Moltbook.
Researchers at the security firm Wiz revealed this week that they had found a serious security flaw in Moltbook, a social network intended to be a Reddit-like platform where AI agents interact with one another. The mishandling of a private key in the site's JavaScript code exposed the email addresses of thousands of users along with millions of API credentials, giving anyone access "that would enable full account impersonation of any user on the platform," as Wiz wrote, including access to the private communications between AI agents.
That security flaw may come as little surprise given that Moltbook was proudly "vibe-coded" by its founder, Matt Schlicht, who has said that he "didn't write one line of code" himself in creating the site. "I just had a vision for the technical architecture, and AI made it a reality," he wrote on X.
Though Moltbook has now fixed the flaw discovered by Wiz, its critical vulnerability should serve as a cautionary tale about the security of AI-built platforms. The problem often isn't any security flaw inherent in companies' adoption of AI itself. Instead, it's that those companies are far more likely to let AI write their code, and with it plenty of AI-generated bugs.
The FBI's raid on Washington Post reporter Hannah Natanson's home, and its search of her computers and phone amid an investigation into a federal contractor's alleged leaks, has offered important security lessons in how federal agents can access your devices if you have biometrics enabled. It also revealed at least one safeguard that can keep them out of those devices: Apple's Lockdown Mode for iOS. The feature, designed at least in part to prevent the hacking of iPhones by governments contracting with spyware firms like NSO Group, also kept the FBI out of Natanson's phone, according to a court filing first reported by 404 Media. "Because the iPhone was in Lockdown mode, CART was unable to extract that device," the filing read, using an acronym for the FBI's Computer Analysis Response Team. That protection likely resulted from Lockdown Mode's security measure that blocks connections to peripherals, including forensic analysis devices like the Graykey and Cellebrite tools used for hacking phones, unless the phone is unlocked.
The role of Elon Musk and Starlink in the war in Ukraine has been complicated, and has not always favored Ukraine in its defense against Russia's invasion. But Starlink this week gave Ukraine a significant win, disabling the Russian military's use of Starlink and causing a communications blackout among many of its frontline forces. Russian military bloggers described the measure as a serious problem for Russian troops, particularly for their use of drones. The move reportedly comes after Ukraine's defense minister wrote to Starlink's parent company, SpaceX, last month; the company now appears to have responded to that request for help. "The enemy has not just a problem, the enemy has a catastrophe," Serhiy Beskrestnov, one of the defense minister's advisers, wrote on Facebook.
In a coordinated digital operation last year, US Cyber Command used digital weapons to disrupt Iran's air missile defense systems during the US's kinetic strike on Iran's nuclear program. The disruption "helped to prevent Iran from launching surface-to-air missiles at American warplanes," according to The Record. US operators reportedly used intelligence from the National Security Agency to find an advantageous weak point in Iran's military systems, allowing them to get at the anti-missile defenses without having to directly attack and defeat Iran's military digital defenses.
"US Cyber Command was proud to support Operation Midnight Hammer and is fully equipped to execute the orders of the commander-in-chief and the secretary of war at any time and in any place," a command spokesperson said in a statement to The Record.