“Something like over 70% of [Anthropic’s] pull requests are actually Claude Code written,” Krieger told me. As for what these engineers are doing with the extra time, Krieger said they’re orchestrating the Claude codebase and, of course, attending meetings. “It really becomes apparent how much else is in the software engineering role,” he noted.
The pair fiddled with Voss water bottles and answered an array of questions from the press about an upcoming compute cluster with Amazon (Amodei says “parts of that cluster are already being used for research”) and the displacement of workers due to AI (“I don’t think you can offload your company strategy to something like that,” Krieger said).
We’d been told by spokespeople that we weren’t allowed to ask questions about policy and regulation, but Amodei offered some unprompted insight into his views on a controversial provision in President Trump’s megabill that would ban state-level AI regulation for 10 years: “If you’re driving the car, it’s one thing to say ‘we don’t have to drive with the steering wheel now.’ It’s another thing to say ‘we’ll rip out the steering wheel, and we can’t put it back in for 10 years,’” Amodei said.
What does Amodei think about the most? He says the race to the bottom, where safety measures are cut in order to compete in the AI race.
“The whole puzzle of running Anthropic is that we somehow have to find a way to do both,” Amodei said, meaning the company has to compete and deploy AI safely. “You might have heard this stereotype that, ‘Oh, the companies that are the safest, they take the longest to do the safety testing. They’re the slowest.’ That isn’t what we found at all.”