Last Thursday, Senators Elizabeth Warren and Eric Schmitt introduced a bill aimed at stirring up more competition for Pentagon contracts awarded in AI and cloud computing. Amazon, Microsoft, Google, and Oracle currently dominate those contracts. "The way that the big get bigger in AI is by sucking up everyone else's data and using it to train and expand their own systems," Warren told the Washington Post.
The new bill would "require a competitive award process" for contracts, which would ban the use of "no-bid" awards by the Pentagon to companies for cloud services or AI foundation models. (The lawmakers' move came a day after OpenAI announced that its technology would be deployed on the battlefield for the first time in a partnership with Anduril, completing a year-long reversal of its policy against working with the military.)
While Big Tech is being hit with antitrust investigations, including the ongoing lawsuit against Google over its dominance in search and a new investigation opened into Microsoft, regulators are also accusing AI companies of, well, just straight-up lying.
On Tuesday, the Federal Trade Commission took action against the smart-camera company IntelliVision, saying that the company makes false claims about its facial recognition technology. IntelliVision has promoted its AI models, which are used in both home and commercial security camera systems, as operating without gender or racial bias and as being trained on millions of images, two claims the FTC says are false. (The company could not substantiate the bias claim, and the system was trained on only 100,000 images, the FTC says.)
A week earlier, the FTC made similar claims of deceit against the security giant Evolv, which sells AI-powered security scanning products to stadiums, K-12 schools, and hospitals. Evolv advertises its systems as offering better security than simple metal detectors, saying they use AI to accurately screen for weapons, knives, and other threats while ignoring harmless items. The FTC alleges that Evolv has inflated its accuracy claims, and that its systems failed in consequential cases, such as a 2022 incident in which they did not detect a seven-inch knife that was ultimately used to stab a student.
These add to the complaints the FTC made back in September against a number of AI companies, including one that sold a tool to generate fake product reviews and one selling "AI lawyer" services.