Scaling has perhaps been the most important word in Large Language Models (LLMs) since the release of ChatGPT. ChatGPT became so successful largely because of the scaled pre-training OpenAI did, which made it a powerful language model.
Following that, frontier LLM labs started scaling post-training, with supervised fine-tuning and RLHF, where models got increasingly better at following instructions and performing complex tasks.
And just when we thought LLMs were about to plateau, we started doing inference-time scaling with the release of reasoning models, where spending thinking tokens gave huge improvements in output quality.
I now argue we should continue this trend with a new scaling paradigm: usage-based scaling, where you scale how much you are using LLMs:
- Run more coding agents in parallel
- Always have a deep research running on a topic of interest
- Run information-fetching workflows
If you're not firing off an agent before going to lunch, or before going to sleep, you're wasting time.
In this article, I'll discuss why scaling LLM usage can lead to increased productivity, especially when working as a programmer. Additionally, I'll discuss specific strategies you can use to scale your LLM usage, both personally and in the companies you work for. I'll keep the article high-level, aiming to inspire you to maximally utilize AI to your advantage.
Why you should scale LLM usage
We have already seen scaling be incredibly powerful before with:
- pre-training
- post-training
- inference-time scaling
The reason for this is that, as it turns out, the more computing power you spend on something, the better the output quality you achieve. This, of course, assumes you are able to spend the compute effectively. For example, for pre-training, being able to scale compute relies on:
- Large enough models (enough weights to train)
- Enough data to train on
If you scale compute without these two components, you won't see improvements. However, if you scale all three, you get amazing results, like the frontier LLMs we're seeing now, for example with the release of Gemini 3.
I thus argue you should look to scale your own LLM usage as much as possible. This could, for example, mean firing off multiple agents to code in parallel, or starting Gemini deep research on a topic you're interested in.
Of course, the usage must still be of value. There's no point in starting a coding agent on some obscure task you have no need for. Rather, you should start a coding agent on:
- A Linear issue you never felt you had time to sit down and do yourself
- A quick feature that was requested in the last sales call
- Some UI improvements you know today's coding agents handle easily

In a world with an abundance of resources, we should look to maximize our use of them.
My main point here is that the threshold for taking on a task has dropped significantly since the release of LLMs. Previously, when you got a bug report, you had to sit down for two hours in deep focus, thinking about how to solve it.
However, today that's no longer the case. Instead, you can go into Cursor, paste in the bug report, and ask Claude Sonnet 4.5 to try to fix it. You can then come back 10 minutes later, check whether the problem is fixed, and create the pull request.
How many tokens can you spend while still doing something useful with them?
How to scale LLM usage
I've talked about why you should scale LLM usage by running more coding agents, deep research agents, and other AI agents. However, it can be hard to imagine exactly which agents you should fire off. Thus, in this section, I'll discuss specific agents you can launch to scale your LLM usage.
Parallel coding agents
Parallel coding agents are one of the simplest ways to scale LLM usage for any programmer. Instead of working on only one problem at a time, you start two or more agents at the same time, using Cursor agents, Claude Code, or any other agentic coding tool. This is typically made very easy by using Git worktrees.
For example, I typically have one main task or project that I'm working on, sitting in Cursor and programming. However, sometimes a bug report comes in, and I automatically route it to Claude Code to have it search for why the problem is happening and fix it if possible. Sometimes this works out of the box; sometimes I have to help it a bit.
However, the cost of starting this bug-fixing agent is super low (I can literally just copy the Linear issue into Cursor, which can read the issue using the Linear MCP). Similarly, I also have a script automatically researching relevant customers, which runs in the background.
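The worktree setup that makes this parallelism cheap only takes one git command per agent. As a rough sketch of the idea (the `agent/<task>` branch naming and sibling-directory layout here are my own conventions, not anything Cursor or Claude Code requires):

```python
def worktree_commands(tasks, base_branch="main"):
    """Build the git commands that give each coding agent an isolated checkout.

    Each worktree is a sibling directory sharing the same underlying .git
    store, so agents can edit files in parallel without clobbering each
    other's uncommitted changes in a single checkout.
    """
    commands = []
    for task in tasks:
        branch = f"agent/{task}"  # hypothetical naming convention
        commands.append(f"git worktree add ../{task} -b {branch} {base_branch}")
    return commands

# One worktree per task: print the commands to run from the main repo.
for cmd in worktree_commands(["fix-login-bug", "ui-polish"]):
    print(cmd)
```

After running these, you would open each directory in a separate agent session; `git worktree remove ../<task>` cleans up once the branch is merged.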
Deep research
Deep research is a feature you can use with any of the frontier model providers, like Google Gemini, OpenAI ChatGPT, and Anthropic's Claude. I prefer Gemini 3 deep research, though there are many other solid deep research tools out there.
Whenever I'm interested in learning more about a topic, finding information, or anything similar, I fire off a deep research agent with Gemini.
For example, I was interested in finding some customers given a specific ICP. I quickly pasted the ICP information into Gemini, gave it some contextual information, and had it start researching, so that it could run while I worked on my main programming project.
After 20 minutes, I had a brief report from Gemini, which turned out to contain plenty of useful information.
Creating workflows with n8n
Another way to scale LLM usage is to create workflows with n8n or a similar workflow-building tool. With n8n, you can build workflows that, for example, read Slack messages and perform some action based on them.
You could, for instance, have a workflow that reads a bug-report channel on Slack and automatically starts a Claude Code agent for each bug report. Or you could create a workflow that aggregates information from lots of different sources and presents it to you in an easily readable format. There are essentially unlimited opportunities with workflow-building tools.
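n8n itself is configured visually, but the branching logic of such a workflow can be sketched in plain code. A minimal illustration of the routing idea (the channel names and action labels are made up for this example; a real n8n workflow would express the same branches as visual nodes):

```python
def route_slack_message(message):
    """Toy dispatcher: map an incoming Slack message to an automation.

    Channel names and actions are hypothetical, for illustration only.
    """
    channel = message.get("channel", "")
    text = message.get("text", "")
    if channel == "bug-reports":
        # in the real workflow, this branch would kick off a coding agent
        return ("start_coding_agent", text)
    if channel == "research-requests":
        # this branch could launch a deep research run instead
        return ("start_deep_research", text)
    # everything else is left alone
    return ("ignore", text)

action, payload = route_slack_message(
    {"channel": "bug-reports", "text": "Login fails on Safari"}
)
print(action)  # start_coding_agent
```

The value of a tool like n8n is that it also handles the triggers, credentials, and retries around this logic, so you only have to define the branches.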
More
There are many other strategies you can use to scale your LLM usage. I've only listed the first few that come to mind from my own work with LLMs. I recommend always keeping in mind what you can automate using AI, and how you can leverage it to become more effective. How to scale LLM usage will vary widely across companies, job titles, and many other factors.
Conclusion
In this article, I've discussed how to scale your LLM usage to become a more effective engineer. We've seen scaling work incredibly well in the past, and it's highly likely we'll see increasingly powerful results by scaling our own usage of LLMs. That could mean firing off more coding agents in parallel, or running deep research agents while eating lunch. In general, I believe that by increasing our LLM usage, we can become increasingly productive.
👉 Find me on socials:
📚 Get my free Vision Language Models ebook
💻 My webinar on Vision Language Models
🧑💻 Get in touch
✍️ Medium

