Produced by Pause AI, a global activist group that co-organized the protest, it ended with this plea to the reader: “Pause AI until we know what the hell Step 2 is.”
In the South Park episode “Gnomes,” which first aired in 1998, Kenny, Kyle, Cartman, and Stan discover a group of gnomes that sneak out at night to steal underpants from dressers. Why? The gnomes present their pitch deck. “Phase 1: Collect underpants. Phase 2: ? Phase 3: Profit.”
The gnomes’ business plan has since become one of the great internet memes, used to satirize everything from startup strategies to policy proposals. Memelord in chief Elon Musk once invoked it in a talk about how he planned to fund a mission to Mars. Right now, it captures the state of AI. Companies have built the tech (Step 1) and promised transformation (Step 3). How they get there is still a huge question mark.
As far as Pause AI is concerned, Step 2 must involve some kind of regulation. But exactly what it will call for and who will enforce it are up for debate.
AI boosters, on the other hand, are convinced that Step 3 is salvation and tend to gloss over the middle bit. They see us racing toward sunny uplands on the back of an “economically transformative technology,” as OpenAI’s chief scientist, Jakub Pachocki, put it to me a few weeks ago. They know where they want to go, roughly: it’s hazy up there and still some way off. And everyone’s taking a different route. Will they all make it? Will anyone?
For every big claim about the future, there’s a more sober assessment of how the rubber meets the road, one that quells the hype. Consider two recent studies. One, from Anthropic, predicted what kinds of jobs are going to be most affected by LLMs. (A takeaway: managers, architects, and people in the media should prepare for change; groundskeepers, construction workers, and people in hospitality, not so much.) But those predictions are really just guesses, based on what sorts of tasks LLMs seem to be good at rather than how they actually perform in the workplace.
Another study, put out in February by researchers at Mercor, an AI hiring startup, tested several AI agents powered by top-tier models from OpenAI, Anthropic, and Google DeepMind on 480 office tasks frequently carried out by human bankers, consultants, and lawyers. Every agent they tested failed to complete most of its tasks.
Why is there such wide disagreement? There are a number of factors. For a start, it’s important to consider who is making the claims (and why). Anthropic has skin in the game. What’s more, the people telling us that something big is about to happen have reached that conclusion largely on the basis of how fast AI coding tools are improving. But not all tasks can be hacked with coding. Other studies have found that LLMs are bad at making strategic judgment calls, for example.