models capable of automating a wide range of tasks, such as research and coding. However, you often work with an LLM, complete a task, and the next time you interact with the LLM, you start from scratch.
This is a major drawback when working with LLMs. We waste a lot of time simply repeating instructions, such as the desired code formatting or how to perform tasks according to our preferences.
This is where agents.md files come in: a way to apply continual learning to LLMs, where the LLM learns your patterns and behaviours by storing generalizable information in a separate file. This file is then read every time you start a new task, preventing the cold-start problem and helping you avoid repeating instructions.
In this article, I'll give a high-level overview of how I achieve continual learning with LLMs by regularly updating the agents.md file.
Why do we need continual learning?
Starting with a fresh agent context takes time. The agent needs to pick up on your preferences, and you have to spend extra time interacting with it, getting it to do exactly what you want.
For example:
- Telling the agent to use Python 3.13 syntax instead of 3.12
- Instructing the agent to always use return types on functions
- Ensuring the agent never uses the Any type
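Rather than repeating these instructions each session, you can encode them once in agents.md. A minimal sketch of what such a section might look like (the exact rules and wording here are just illustrative):

```markdown
## Code style

- Use Python 3.13 syntax; do not fall back to 3.12 idioms
- Always annotate function return types
- Never use the `Any` type; prefer precise types
```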
I often had to explicitly tell the agent to use Python 3.13 syntax rather than 3.12, probably because 3.12 syntax is more prevalent in its training data.
The whole point of using AI agents is to be fast. You don't want to spend time repeating instructions about which Python version to use, or that the agent should never use the Any type.
Furthermore, the AI agent sometimes spends extra time figuring out information you already have available, for example:
- The name of your documents table
- The names of your CloudWatch log groups
- The prefixes for your S3 buckets
If the agent doesn't know the name of your documents table, it has to:
- List all tables
- Find a table that sounds like the documents table (there could be several candidates)
- Either query the table to verify, or ask the user

This takes a lot of time, and it's something we can easily prevent by adding the documents table name, CloudWatch log groups, and S3 bucket prefixes to agents.md.
Thus, the main reason we need continual learning is that repeating instructions is frustrating and time-consuming, and when working with AI agents, we want to be as effective as possible.
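For example, an infrastructure section in agents.md can capture these details up front, so the agent never has to rediscover them. The resource names below are hypothetical placeholders, not real resources:

```markdown
## Infrastructure

- Documents table (DynamoDB): `prod-documents`
- CloudWatch log group: `/aws/lambda/document-ingest`
- S3 bucket prefix: `s3://example-docs/uploads/`
```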
How to apply continual learning
There are two main ways I approach continual learning, both involving heavy use of the agents.md file, which you should have in every repository you work on:
- Whenever the agent makes a mistake, I tell it how to correct the error, and to remember the correction for later in the agents.md file
- After each thread with the agent, I use the prompt below. This ensures that anything I told the agent throughout the thread, or information it discovered along the way, is saved for later use. This makes later interactions far more effective.
Generalize the learnings from this thread, and remember them for later.
Anything that could be useful to know in a later interaction,
when doing similar things. Store it in agents.md
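After running this prompt, the agent distills the thread into generalized notes and appends them to agents.md. A hypothetical result (the entries below are invented purely for illustration) might look like:

```markdown
## Learnings

- The documents API returns paginated results; always pass `limit` and `cursor`
- Tests live in `tests/` and are run with `pytest -q`
```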
Applying these two simple principles gets you 80% of the way to continual learning with LLMs and makes you a far more effective engineer.
The most important point is to always keep the agentic memory in agents.md in mind. Whenever the agent does something you don't like, remember to store the correction in agents.md.
You might think you're risking bloating the agents.md file, which would make the agent both slower and more costly. However, this isn't really the case. LLMs are remarkably good at condensing information down into a file. Furthermore, even if your agents.md file runs to thousands of words, it's not really a problem, with regard to either context length or cost.
The context window of frontier LLMs is hundreds of thousands of tokens, so that's no issue at all. As for cost, you'll probably see the cost of using the LLM go down, because the agent spends fewer tokens figuring out information that is already present in agents.md.
Heavy use of agents.md for agentic memory will both make LLM usage faster and reduce cost
Some additional tips
I'd also like to share a few more tips that are useful when dealing with agentic memory.
The first tip is that when interacting with Claude Code, you can write to the agent's memory by starting a message with "#", followed by what to remember. For example, type this into the terminal:
# Always use Python 3.13 syntax, avoid 3.12 syntax
You'll then get a choice, as shown in the image below. You can save it to the user memory, which stores the information for all your interactions with Claude Code, regardless of the code repository. This is useful for generic preferences, like always adding return types to functions.
The second and third options save it to the current folder you're in, or to the root folder of your project. These are useful for storing folder-specific information, for example when a folder describes a single service, or for storing information about the code repository in general.

Furthermore, different coding agents use different memory files:
- Claude Code uses CLAUDE.md
- Warp uses WARP.md
- Cursor uses .cursorrules
However, most agents also read agents.md, which is why I recommend storing information in that file: you keep access to your agentic memory no matter which coding agent you're using. Claude Code may be the best agent today, but another coding agent could be on top tomorrow.
AGI and continual learning
I'd also like to add a note on AGI and continual learning. True continual learning is often said to be one of the last obstacles to achieving AGI.
Currently, LLMs essentially fake continual learning by storing what they learn in files they read later on (such as agents.md). Ideally, however, LLMs would continually update their model weights whenever they learn new information, essentially the way humans build instincts.
Unfortunately, true continual learning hasn't been achieved yet, but it's likely a capability we'll see more of in the coming years.
Conclusion
In this article, I've discussed how to become a far more effective engineer by utilizing agents.md for continual learning. With this, your agent will pick up on your habits, the mistakes you make, the information you frequently use, and many other useful pieces of knowledge. This in turn makes later interactions with your agent far more effective. I believe heavy use of the agents.md file is essential to becoming a great engineer, and it's something you should continually strive for.
👉 My Free Sources
🚀 10x Your Engineering with LLMs (Free 3-Day Email Course)
📚 Get my free Vision Language Models ebook
💻 My webinar on Vision Language Models
👉 Find me on socials:
🧑💻 Get in touch
✍️ Medium

