The wrapper reckoning
Darren Mowry, the Google VP who leads its global startup group across Cloud, DeepMind, and Alphabet, issued a blunt warning: two classes of AI startups face extinction. The first is the "LLM wrapper," a company that places a product layer on top of an existing large language model and counts on that back-end model to do all the real work. The second is the "AI aggregator," a company that bundles multiple LLMs behind a single API or routing layer. "The industry doesn't have a lot of patience for that anymore," Mowry said.
Weeks later, Google and Accel's Atoms accelerator reviewed roughly 4,000 AI startup applications and rejected about 70% of them as shallow wrappers. The startups that made the cut shared a common trait: they were building proprietary models for specific verticals, using the right AI technique for the problem at hand rather than outsourcing all intelligence to a general-purpose LLM.
This isn't a minor correction. It signals a fundamental shift in how the industry thinks about AI architecture. And it points toward a future that isn't just "post-LLM-wrapper" but something far more interesting: diverse in the AI technologies it leverages, and distributed in how those technologies are composed.
A decade of specialized breakthroughs
To understand why, it helps to zoom out. If you've been paying attention to AI over the past decade, you've witnessed something remarkable. Not one revolution, but a series of them, each driven by a different technology conquering a different class of problem.
In the early 2010s, convolutional neural networks transformed computer vision. Suddenly, machines could recognize faces, read medical scans, and interpret the visual world with superhuman accuracy. CNNs didn't solve everything. They solved vision, and they solved it spectacularly.
Then came deep reinforcement learning. In 2013, DeepMind trained an agent to play Atari games from raw pixels, learning strategy from nothing but trial, error, and reward. In 2016, AlphaGo defeated world champion Lee Sedol at Go, a game with more possible positions than atoms in the universe. DRL didn't replace CNNs. It opened an entirely new frontier: machines that could learn to make decisions in complex, dynamic environments.
And now, large language models. GPT, Claude, and their successors have made machines extraordinary at understanding and generating language, reasoning across domains, summarizing vast amounts of information, and interacting with humans in natural conversation. The impact has been staggering, and rightly so.
But here's what gets lost in the excitement: each of these breakthroughs was a specialized tool that excelled at a particular class of problem. CNNs excel at perception. LLMs excel at language and reasoning. And reinforcement learning, especially temporal difference learning, excels at sequential decision-making under uncertainty.
This is exactly the lesson that the wrapper reckoning is teaching the startup ecosystem the hard way.
The right tool for each task
Today's AI conversation is dominated by LLMs, and for good reason. They're incredibly versatile and accessible. But versatility shouldn't be confused with universality. An LLM is not a decision-making engine. It can reason about decisions. It can generate options. It can explain trade-offs beautifully. But controlling a process, making a sequence of choices over time, in a stochastic environment, where feedback is delayed by weeks or months: that's a fundamentally different problem.
There was some early excitement around Decision Transformers, which tried to reframe reinforcement learning as a sequence modeling problem that could leverage transformer architectures. It was an elegant idea. But in practice, it hasn't displaced temporal difference learning for real-world control tasks. When the problem is genuinely sequential and dynamic, TD learning remains the proven approach.
Consider the precedents. DeepMind used deep reinforcement learning to optimize Google's datacenter cooling systems, cutting the energy used for cooling by up to 40%. Not by writing better reports about energy, but by continuously making real-time control decisions in a complex physical system. In autonomous driving, the perception layer uses CNNs to see the road, but the planning and control layer, the part that decides when to brake, accelerate, or change lanes, relies on reinforcement learning. Perception and control are different problems. They deserve different tools.
The same logic applies to sales. Writing a better email is a language problem, and LLMs are excellent for it. Enriching a lead list is a data retrieval problem. But understanding the dynamics of a pipeline, modeling how deals evolve over time, and learning which patterns lead to wins and losses? That's a control and optimization problem. And it calls for temporal difference learning.
To make this more concrete: consider how a deal progresses through a B2B sales pipeline. At every stage, a rep faces a sequence of decisions. When to follow up. Which stakeholder to engage next. Whether to offer a discount or hold firm on pricing. Each choice affects what happens downstream, and the outcome (closed-won or closed-lost) may not materialize for months. The state space is high-dimensional (deal size, stakeholder engagement levels, competitive pressure, timing), the transitions are stochastic, and the reward signal is sparse and delayed. This is a textbook reinforcement learning problem. An LLM can draft the follow-up email, but deciding whether, when, and to whom to send it is a different challenge entirely.
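To see what "textbook reinforcement learning problem" means in practice, here is a deliberately tiny sketch: a toy pipeline with three stages and three actions, trained with tabular Q-learning (a TD method). Every stage name, action, transition probability, and reward here is invented for illustration; a real system would learn from thousands of logged deal trajectories, not a hand-written simulator.

```python
import random

# Toy B2B pipeline as a Markov decision process (all numbers invented).
# P[stage][action] -> list of (next_stage, probability).
STAGES_TERMINAL = {"won", "lost"}
ACTIONS = ["follow_up", "engage_stakeholder", "discount"]
TRANSITIONS = {
    "prospect": {
        "follow_up":          [("evaluation", 0.5), ("prospect", 0.4), ("lost", 0.1)],
        "engage_stakeholder": [("evaluation", 0.6), ("prospect", 0.3), ("lost", 0.1)],
        "discount":           [("evaluation", 0.4), ("lost", 0.6)],  # too early: backfires
    },
    "evaluation": {
        "follow_up":          [("negotiation", 0.3), ("evaluation", 0.5), ("lost", 0.2)],
        "engage_stakeholder": [("negotiation", 0.5), ("evaluation", 0.4), ("lost", 0.1)],
        "discount":           [("negotiation", 0.4), ("lost", 0.6)],
    },
    "negotiation": {
        "follow_up":          [("won", 0.3), ("negotiation", 0.4), ("lost", 0.3)],
        "engage_stakeholder": [("won", 0.4), ("negotiation", 0.4), ("lost", 0.2)],
        "discount":           [("won", 0.8), ("lost", 0.2)],  # late discounts close deals
    },
}

def step(state, action, rng):
    """Sample the next stage; reward is sparse: 1.0 only on closed-won."""
    outcomes = TRANSITIONS[state][action]
    nxt = rng.choices([s for s, _ in outcomes], weights=[p for _, p in outcomes])[0]
    return nxt, (1.0 if nxt == "won" else 0.0)

def q_learn(episodes=30000, gamma=0.95, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = {s: {a: 0.0 for a in ACTIONS} for s in TRANSITIONS}
    visits = {s: {a: 0 for a in ACTIONS} for s in TRANSITIONS}
    for _ in range(episodes):
        s = "prospect"
        while s not in STAGES_TERMINAL:
            # Epsilon-greedy exploration.
            a = rng.choice(ACTIONS) if rng.random() < eps else max(Q[s], key=Q[s].get)
            s2, r = step(s, a, rng)
            bootstrap = 0.0 if s2 in STAGES_TERMINAL else max(Q[s2].values())
            visits[s][a] += 1
            alpha = 1.0 / visits[s][a]  # decaying step size for convergence
            Q[s][a] += alpha * (r + gamma * bootstrap - Q[s][a])  # TD update
            s = s2
    return Q

Q = q_learn()
policy = {s: max(Q[s], key=Q[s].get) for s in TRANSITIONS}
```

In this toy setup the learned policy tends to avoid early discounting and save it for the negotiation stage, a pattern it discovers purely from delayed, sparse win/loss signals over many simulated deals. That credit assignment over time is exactly what next-token prediction does not give you.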
This is the distinction that the wrapper model completely misses. A startup that wraps an LLM around a sales workflow can help reps write better emails. It cannot learn, over thousands of deal outcomes, that a specific sequence of stakeholder engagement in enterprise healthcare deals leads to a 3x improvement in close rates. That requires a fundamentally different kind of intelligence.
From monolithic models to diverse agent networks

This insight points toward something much bigger than which model to use for which task. It points toward a new architecture for enterprise AI altogether, and it explains why Mowry's warning resonated so widely.
The current paradigm is essentially monolithic: one large model, asked to do everything. Chat with customers. Write documents. Analyze data. Make recommendations. It's as if the entire software industry had tried to build every application as a single program.
But we've learned this lesson before. In the early 2000s, the software industry moved from monolithic applications to service-oriented architecture, or SOA. Instead of one giant codebase trying to do everything, you built networks of small, specialized services, each doing one thing exceptionally well. Each service had a well-defined interface and a clear set of capabilities. An orchestration layer composed them into complex workflows. The result was more robust, more scalable, and more adaptable than anything a monolith could achieve.
AI is heading in the same direction. The future isn't one model to rule them all. It's millions of specialized agents, each trained to do one thing with precision. An agent that understands deal momentum in enterprise SaaS. An agent that detects buying committee dynamics. An agent that models pricing sensitivity in mid-market deals. Each one small, focused, and very good at its job. And critically, each one built on the AI paradigm that actually fits the problem it solves, not shoehorned into a transformer because that's what's fashionable.
These agents don't work in isolation. They form networks. They communicate. And making them work together requires two distinct capabilities that are easy to conflate but fundamentally different.
The first is reasoning and decomposition. This is where LLMs shine. Given a complex goal, say, "assess the health of this enterprise deal," an LLM can break that down into sub-tasks: analyze stakeholder engagement, evaluate pricing dynamics, compare against historical patterns for this deal type. It understands intent, it decomposes problems, and it can synthesize the results into coherent insight.
The second is orchestration, and this is something else entirely. A single agent might require the outputs of several models before it can act: a momentum signal from one model, a stakeholder map from another, market context from a third. Managing that execution flow, handling dependencies, routing outputs to the right inputs, coordinating timing, is an infrastructure problem, not a reasoning problem. It requires a dedicated orchestration layer that sits between the LLM's strategic direction and the agents' execution.
Think of it through the SOA parallel: the LLM is like the business logic that decides what needs to happen. The orchestration layer is the middleware that makes sure it actually happens, that the right services are called in the right order with the right data. And the agents are the services themselves, each with a well-defined capability registered in what amounts to a directory of skills.
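The SOA parallel can be made concrete in a few dozen lines. The sketch below, with entirely hypothetical agent names and deal data, shows the three layers: a registry of agents (the directory of skills), stub functions standing in for the specialized models, and an orchestrator that resolves dependencies and routes each agent's output to the inputs that need it. The stubs return canned values; in a real system each would wrap a trained model.

```python
from graphlib import TopologicalSorter

# 1. The "directory of skills": each agent registers a name, the inputs
#    it needs, and a callable that produces its output.
REGISTRY = {}

def register(name, needs=()):
    def wrap(fn):
        REGISTRY[name] = {"needs": tuple(needs), "fn": fn}
        return fn
    return wrap

@register("momentum", needs=["deal"])
def momentum(deal):
    # Stand-in for a TD-learning model scoring deal momentum.
    return {"score": 0.7 if deal["stage"] == "negotiation" else 0.4}

@register("stakeholders", needs=["deal"])
def stakeholders(deal):
    # Stand-in for a model mapping the buying committee.
    return {"engaged": deal["contacts"], "missing": ["economic_buyer"]}

@register("assessment", needs=["momentum", "stakeholders"])
def assessment(momentum, stakeholders):
    # Stand-in for the LLM synthesis step: combine agent outputs.
    healthy = momentum["score"] > 0.5 and not stakeholders["missing"]
    return {"healthy": healthy}

# 2. The orchestration layer: build the dependency graph, run agents in
#    topological order, and route outputs to downstream inputs.
def orchestrate(goals, inputs):
    graph = {}
    def add(name):
        if name in graph or name in inputs:
            return
        graph[name] = set(REGISTRY[name]["needs"])
        for dep in REGISTRY[name]["needs"]:
            add(dep)
    for name in goals:
        add(name)
    results = dict(inputs)
    for name in TopologicalSorter(graph).static_order():
        if name in results:
            continue  # raw inputs need no agent call
        agent = REGISTRY[name]
        results[name] = agent["fn"](**{d: results[d] for d in agent["needs"]})
    return results

deal = {"stage": "negotiation", "contacts": ["champion", "cfo"]}
out = orchestrate(["assessment"], {"deal": deal})
```

Note the division of labor: deciding that "assess deal health" decomposes into momentum plus stakeholder analysis is the LLM's reasoning job; everything `orchestrate` does (dependency resolution, ordering, routing) is plumbing that a language model should not be asked to improvise at runtime.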
This is what it means to say that the future of AI for sales is diverse and distributed. Diverse in the technologies it leverages: LLMs for reasoning, TD learning for control, specialized models for domain-specific tasks. And distributed in its architecture: not one brain, but a coordinated network of agents, orchestrated to work together and composed into something far more powerful than any single model could be.
The agentic enterprise

Extend this vision beyond sales, and you begin to see the shape of something transformative: the agentic enterprise.
But here's what makes this truly powerful: humans and agents don't operate in separate lanes. They work hand in hand. An agent exploring a vast state space might discover a pattern no human would have seen, a counterintuitive sequence of engagement that dramatically improves close rates in a particular segment. And a human's intuition, a hunch about a new market, a feeling that a deal isn't what it looks like on paper, can redirect agents toward unexplored territory that no algorithm would have prioritized on its own.
This is where real disruption comes from. Not from agents alone, and not from humans alone, but from the loop between them. Agents expand what it is possible to observe and optimize. Humans bring context, judgment, and the kind of lateral thinking that no state-space exploration can fully replicate. Each makes the other better. The breakthroughs happen at the interface.
In the agentic enterprise, the competitive advantage isn't AI or people. It's the quality of the collaboration between them.
What this means for practitioners
If you're building AI into a sales organization (or any complex business process), the practical takeaway is this: resist the temptation to treat your LLM as a universal solver. The wrapper reckoning is not just a venture capital trend. It reflects a genuine technical reality.
Is it a language problem? Use an LLM. Drafting outreach, summarizing call transcripts, extracting key terms from contracts: these are tasks where transformers excel and where you'll get excellent results with today's models.
Is it a perception or classification problem? Consider the right model architecture for the signal type. Detecting sentiment in voice recordings, classifying inbound leads by intent, reading and structuring documents: each of these may call for a specialized model rather than a general-purpose one.
Is it a sequential decision problem? This is where most teams reach for an LLM and get mediocre results. Deciding which deals to prioritize, when to escalate, how to allocate a rep's time across a portfolio of opportunities: these are control problems with delayed rewards and stochastic dynamics. Temporal difference learning, not next-token prediction, is the right framework.
Then ask the harder architectural question: how do these specialized agents compose? What orchestration layer manages the flow of information between them? How do you build a system where an LLM decomposes the goal, an orchestrator coordinates the execution, and a set of focused agents each handle their piece?
This isn't a trivial engineering problem. But it's the direction the field is moving, and the startups that survived the Atoms accelerator filter are proof that the market is already selecting for it. Organizations that start building toward this architecture now will have a significant advantage as the ecosystem of specialized AI agents matures.
The future of AI for sales isn't one model doing everything. It's the right technology for each problem, the right agent for each task, and a network architecture that composes them into something far more powerful than any single model could be.
The future is diverse and distributed. One human, millions of agents.
Nicolas Maquaire is the co-founder and CEO of Dynamiks.ai. Based in San Francisco and Paris, he previously founded EntropySoft, which was acquired by Salesforce.

