Compared with conventional psychological models, which use simple math equations, Centaur did a much better job of predicting behavior. Accurate predictions of how humans respond in psychology experiments are valuable in and of themselves: scientists could, for example, use Centaur to pilot their experiments on a computer before recruiting, and paying, human participants. In their paper, however, the researchers propose that Centaur could be more than just a prediction machine. By interrogating the mechanisms that allow Centaur to effectively replicate human behavior, they argue, scientists could develop new theories about the inner workings of the mind.
But some psychologists doubt whether Centaur can tell us much about the mind at all. Sure, it's better than conventional psychological models at predicting how humans behave, but it also has a billion times more parameters. And just because a model behaves like a human on the outside doesn't mean that it functions like one on the inside. Olivia Guest, an assistant professor of computational cognitive science at Radboud University in the Netherlands, compares Centaur to a calculator, which can effectively predict the response a math whiz will give when asked to add two numbers. "I don't know what you would learn about human addition by studying a calculator," she says.
Even if Centaur does capture something important about human psychology, scientists may struggle to extract any insight from the model's millions of neurons. Though AI researchers are working hard to figure out how large language models work, they have barely managed to crack open the black box. Understanding an enormous neural-network model of the human mind may not prove much easier than understanding the mind itself.
One alternative approach is to go small. The second of the two Nature studies focuses on minuscule neural networks, some containing only a single neuron, that can nonetheless predict behavior in mice, rats, monkeys, and even humans. Because the networks are so small, it's possible to track the activity of each individual neuron and use that data to figure out how the network is producing its behavioral predictions. And while there's no guarantee that these models function like the brains they were trained to mimic, they can, at the very least, generate testable hypotheses about human and animal cognition.
There's a cost to comprehensibility. Unlike Centaur, which was trained to mimic human behavior across dozens of different tasks, each tiny network can only predict behavior in one specific task. One network, for example, is specialized for making predictions about how people choose among different slot machines. "If the behavior is really complex, you need a large network," says Marcelo Mattar, an assistant professor of psychology and neural science at New York University who led the tiny-network study and also contributed to Centaur. "The compromise, of course, is that now understanding it is very, very difficult."
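To make the idea concrete, here is a minimal toy sketch of what a one-unit behavioral model might look like: a single leaky accumulator that integrates reward evidence and is read out through a sigmoid to predict a subject's next choice in a two-armed bandit (slot-machine) task. Everything in this example is an assumption for illustration: the simulated "subject," the parameter ranges, and the crude random-search fit are not Mattar's actual method or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-armed bandit data: a simulated "subject" follows a simple
# reward-following (Q-learning) rule, standing in for real behavioral data.
T = 2000
probs = np.array([0.8, 0.2])       # reward probability of each arm (illustrative)
choices = np.zeros(T, dtype=int)
rewards = np.zeros(T)
q = np.zeros(2)                    # the simulated subject's value estimates
for t in range(T):
    p_choose_1 = 1 / (1 + np.exp(-5 * (q[1] - q[0])))
    choices[t] = rng.random() < p_choose_1
    rewards[t] = rng.random() < probs[choices[t]]
    q[choices[t]] += 0.1 * (rewards[t] - q[choices[t]])

# One-unit "network": a single leaky accumulator h integrates signed reward
# evidence; a sigmoid readout gives P(choose arm 1 on the next trial).
# Parameters: decay a, input weight w, readout gain b.
def nll(params):
    a, w, b = params
    h, loss = 0.0, 0.0
    for t in range(T - 1):
        signed = rewards[t] if choices[t] == 1 else -rewards[t]
        h = a * h + w * signed
        p1 = 1 / (1 + np.exp(np.clip(-b * h, -30, 30)))
        p = p1 if choices[t + 1] == 1 else 1 - p1
        loss -= np.log(max(p, 1e-12))
    return loss

# Crude random search in place of gradient descent, to keep this dependency-free.
best, best_loss = None, np.inf
for _ in range(300):
    cand = rng.uniform([0.5, 0.0, 0.1], [1.0, 2.0, 5.0])
    cand_loss = nll(cand)
    if cand_loss < best_loss:
        best, best_loss = cand, cand_loss

chance = -(T - 1) * np.log(0.5)    # log-likelihood of always guessing 50/50
print(f"fitted NLL {best_loss:.1f} vs chance {chance:.1f}")
```

Because the whole model is one scalar state `h` and three parameters, you can inspect exactly what it has learned, which is the comprehensibility the researchers are trading prediction power for.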
This trade-off between prediction and understanding is a key feature of neural-network-driven science. (I also happen to be writing a book about it.) Studies like Mattar's are making some progress toward closing that gap: as tiny as his networks are, they can predict behavior more accurately than traditional psychological models. So is the research into LLM interpretability happening at places like Anthropic. For now, however, our understanding of complex systems, from humans to climate systems to proteins, is lagging farther and farther behind our ability to make predictions about them.
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

