TL;DR:
A controlled four-phase experiment in pure Python, with actual benchmark numbers. No API key. No GPU. Runs in under 10 seconds.
- As memory grows from 10 to 500 entries, accuracy drops from 50% to 30%
- Over the same range, confidence rises from 70.4% to 78.0% — your alerts will never fire
- The fix is four architectural mechanisms: topic routing, deduplication, relevance eviction, and lexical reranking
- 50 well-chosen entries outperform 500 accumulated ones. The constraint is the feature.
The Failure That Shouldn't Have Happened
I ran a controlled experiment on a customer support LLM with long-term memory.
Nothing else changed. Not the model. Not the retrieval pipeline.
At first, it worked perfectly. It answered questions about payment thresholds, password resets, and API rate limits with near-perfect accuracy. Then the system kept running.
Every interaction was stored:
- meeting notes
- onboarding checklists
- internal reminders
- operational noise
All mixed in with the actual answers.
Three months later, a user asked:
“How do I reset a user account password?”
The system responded:
“VPN certificate expires in 30 days.”
Confidence: 78.5%
Three months earlier, when it was correct:
Confidence: 73.2%
The system didn't just get worse. It got more confident while being wrong.
Here, 78.5% is the single-query confidence and 75.8% is the 10-query average.
Why This Matters to You Right Now
If you are building any of the following:
- A RAG system that accumulates retrieved documents over time
- An AI copilot with a persistent memory store
- A customer support agent that logs past interactions
- Any LLM workflow where context grows across sessions
This failure mode is very likely already happening in your system. You probably haven't measured it, because the signal that should warn you — agent confidence — is moving in the wrong direction.
The agent is not getting dumber. It is getting confidently wrong. And there is nothing in a standard retrieval pipeline that will catch this before users do.
This article shows you exactly what is happening, why, and how to fix it. No API key required. No model downloads. All results reproduce in under 10 seconds on CPU.
The Surprise (Read This Before the Code)
Here is the counterintuitive finding, stated plainly before any evidence:
As memory grows from 10 to 500 entries, agent accuracy drops from 50% to 30%. Over the same range, agent confidence rises from 70.4% to 78.0%.
The agent becomes more confident as it becomes less accurate. These two signals move in opposite directions. Any monitoring system that alerts on low confidence will never fire. The failure is invisible by design.
This is not a quirk of the simulation. It follows from the way retrieval confidence is computed in nearly every production RAG system: as a function of the mean similarity score across retrieved entries [4]. As the memory pool grows, more entries achieve moderate similarity to any given query — not because they are relevant, but because large diverse corpora guarantee near-matches. Mean similarity drifts upward. Confidence follows. Accuracy doesn't.
Now let's prove it.
Full Code: https://github.com/Emmimal/memory-leak-rag/
The Setup: A Support Agent With a Growing Memory Problem
The simulation models a customer support and API-documentation agent. Ten realistic queries cover payment fraud detection, authentication flows, API rate limiting, refund policies, and shipping. A memory pool grows from 10 to 500 entries.
The memory pool mixes two kinds of entries:
Relevant entries — the correct answers, stored early. Things like:
- payment fraud threshold is $500 for review
- POST /auth/reset resets user password via email
- rate limit exceeded returns 429 error code
Stale entries — organizational noise that accumulates over time. Things like:
- quarterly board meeting notes reviewed budget
- VPN certificate expires in 30 days notify users
- catering order placed for all-hands meeting Friday
As memory size grows, the ratio of stale entries increases. The relevant entries stay put. The noise multiplies around them.
Embeddings are deterministic and keyword-seeded — no external model or API needed. Every result here is reproducible by running one Python file.
The companion code requires only numpy, scipy, and colorama. Link in the References section.

Phase 1 — Relevance Collapses Silently
Start with the most basic question: of the entries retrieved for each query, how many are actually relevant to what was asked?
| Memory Size | Relevance Rate | Accuracy |
|---|---|---|
| 10 entries | 44% | 50% |
| 25 entries | 34% | 50% |
| 50 entries | 34% | 50% |
| 100 entries | 24% | 50% |
| 200 entries | 22% | 40% |
| 500 entries | 14% | 30% |
At 10 entries, fewer than half the retrieved results are relevant — but that is enough to get the right answer most of the time, as long as the correct entry ranks first. At 500 entries, six out of seven retrieved entries are noise. The agent is essentially building its answer from scratch, with one relevant entry buried under six irrelevant ones.
The agent doesn't pause. It doesn't flag uncertainty. It keeps returning answers at the same speed, with the same tone of authority.
This is the first failure mode: relevance decays silently.
Why cosine similarity cannot save you here
The intuition behind retrieval is sound: store entries as vectors, find the entries geometrically closest to the query vector, return those. The problem is that geometric closeness is not the same thing as relevance [2].
“VPN certificate expires in 30 days” sits close in embedding space to “session token expires after 24 hours.” “Annual performance review” sits near “fraud review threshold.” “Parking validation updated” shares structure with “policy updated last quarter.”
These stale entries are not random noise. They are plausible noise — contextually adjacent to real queries in ways that cosine similarity cannot distinguish. As more of them accumulate, they collectively crowd the top-k retrieval slots away from the entries that actually matter. This is the core problem with dense-only retrieval at scale [5].
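The crowding effect can be sketched with a toy stand-in for the article's keyword-seeded embeddings. The binary bag-of-words cosine below is my simplification, not the companion code: a shared token like “expires” gives a stale entry a moderate, nonzero score against a query it cannot answer.

```python
import math

def keyword_sim(a: str, b: str) -> float:
    # Cosine similarity of binary bag-of-words vectors:
    # |shared tokens| / sqrt(|tokens of a| * |tokens of b|)
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / math.sqrt(len(ta) * len(tb))

query = "session token expires after 24 hours"
stale = "vpn certificate expires in 30 days notify users"
noise = "catering order placed for all-hands meeting friday"

# The shared token "expires" pulls the stale entry toward the query;
# a fully unrelated entry scores zero.
print(round(keyword_sim(query, stale), 3))  # 0.144
print(round(keyword_sim(query, noise), 3))  # 0.0
```

Multiply that moderate-but-nonzero score across hundreds of accumulated stale entries and the top-k slots fill with plausible neighbors.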
Phase 2 — Confidence Rises as Accuracy Falls
Now overlay confidence on the accuracy chart. This is where the problem becomes genuinely dangerous.
| Memory Size | Accuracy | Avg Confidence |
|---|---|---|
| 10 entries | 50% | 70.4% |
| 25 entries | 50% | 71.7% |
| 50 entries | 50% | 72.9% |
| 100 entries | 50% | 74.7% |
| 200 entries | 40% | 75.8% |
| 500 entries | 30% | 78.0% |
Accuracy drops 20 percentage points. Confidence rises 7.6 percentage points. They are inversely correlated across the entire range.
Think about what this means in production. Your monitoring dashboard shows confidence trending upward. Your on-call engineer sees no alert. Your users are receiving increasingly wrong answers delivered with increasingly authoritative polish.
Standard confidence measures retrieval coherence, not correctness. It is simply the mean similarity across retrieved entries. The more entries in the pool, the higher the chance that several of them achieve moderate similarity to any query, regardless of relevance. Mean similarity rises. Confidence follows. Accuracy doesn't get the memo.
This is the second failure mode: confidence is not a reliability signal. It is an optimism signal.
It tells you something matched — not that it was correct.
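The mean-similarity drift is easy to reproduce with nothing but random unit vectors. This is a sketch of the mechanism, not the companion code's confidence model: every entry in the pool is pure noise, yet "confidence" climbs with pool size, because larger pools guarantee closer near-matches.

```python
import numpy as np

DIM, TOP_K = 32, 5

def unit(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

QUERY = unit(np.random.default_rng(0).standard_normal(DIM))

def mean_topk_confidence(pool_size: int, seed: int = 1) -> float:
    # "Confidence" as the mean similarity of the TOP_K best-matching
    # entries, drawn from a pool of random (irrelevant!) unit vectors.
    rng = np.random.default_rng(seed)
    sims = sorted(
        (float(QUERY @ unit(rng.standard_normal(DIM))) for _ in range(pool_size)),
        reverse=True,
    )
    return sum(sims[:TOP_K]) / TOP_K

for n in (10, 100, 500):
    print(n, round(mean_topk_confidence(n), 3))
```

No entry here is relevant to anything, yet the printed confidence rises monotonically with pool size.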
Phase 3 — One Stale Entry, One Wrong Answer, Zero Warning
Here is the failure made concrete. A specific query. A specific wrong answer. The exact similarity scores that caused it.
Query: “How do I reset a user account password?”
Correct answer: “Use POST /auth/reset with the user email.”
At 10 memory entries — working correctly:
[1] ✓ sim=0.457 turn=  2 POST /auth/reset resets user password via email
[2] ✓ sim=0.353 turn=  9 account locks after 5 failed login attempts
[3] ✓ sim=0.241 turn=  4 refund processed within 5 business days policy
Answer: POST /auth/reset resets user password via email
Correct: True | Confidence: 73.2%
At 200 memory entries — silently broken:
[1] ✗ sim=0.471 turn=158 VPN certificate expires in 30 days notify users
[2] ✓ sim=0.457 turn=  2 POST /auth/reset resets user password via email
[3] ✓ sim=0.353 turn=  9 account locks after 5 failed login attempts
Answer: VPN certificate expires in 30 days notify users
Correct: False | Confidence: 78.5%
The VPN certificate entry wins by a similarity margin of 0.014. The correct entry is still retrieved, but this narrow gap pushes it to rank 2 — enough to flip the final decision. That is the entire difference between a correct answer and a wrong one.
Why does a VPN entry beat a password reset entry for a password reset query? Because “VPN certificate expires… notify users” shares the token “users” with the query and sits in structural proximity to “expires” / “reset” in this embedding space. The stale entry wins on token co-occurrence, not semantic relevance. Cosine similarity cannot see the difference. This is a well-documented failure mode of dense retrieval in long-context settings [3].
This is the third failure mode: stale entries win on raw similarity, and the margin is too small to detect.
Phase 4 — The Fix: Controlled Memory Architecture

The answer just isn’t a greater embedding mannequin. It isn’t GPT-4 as a substitute of GPT-3.5. It’s 4 architectural mechanisms utilized earlier than and through retrieval. Collectively they break the belief that cosine similarity equals relevance.
| Enter Fed In | Entries Retained | Relevance Fee | Accuracy |
|---|---|---|---|
| 10 | 10 | 46% | 70% |
| 25 | 25 | 44% | 80% |
| 50 | 50 | 44% | 60% |
| 100 | 50 | 42% | 60% |
| 200 | 50 | 42% | 60% |
| 500 | 50 | 42% | 60% |
Feed in 50 entries or 500 — accuracy converges to ~60% beyond 50 entries. At smaller input sizes the controlled agent actually performs even better: 70% at 10 entries, 80% at 25. The controlled agent retains 50 entries out of a 500-entry input and outperforms the agent sitting on all 500. Less context, correctly chosen, answers better.
Here is what makes that possible.
Mechanism 1 — Route the Query Before You Score It
Before any similarity computation, classify the query into a topic cluster. Each cluster has a centroid embedding computed from representative entries [5]. The query is matched to the nearest centroid, and only entries from that cluster enter the candidate set.
def _route_query_to_topic(query_emb: np.ndarray) -> str:
    # Compare the query against each cluster centroid; pick the nearest.
    best_topic = "payment_fraud"
    best_sim = -1.0
    for topic, centroid in _TOPIC_CLUSTERS.items():
        sim = _cosine_sim(query_emb, centroid)
        if sim > best_sim:
            best_sim = sim
            best_topic = topic
    return best_topic
The password reset query routes to the auth cluster. The VPN certificate entry belongs to off_topic. It never enters the candidate set. The problem from Phase 3 disappears before similarity scoring even begins.
This one mechanism eliminates cross-topic contamination entirely. It is also cheap — centroid comparison costs O(n_clusters), not O(n_memory).
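_TOPIC_CLUSTERS itself is built offline. Here is one way that might look, as a hypothetical sketch: the cluster names and representative entries are illustrative, and a binary bag-of-words embedding stands in for the companion code's keyword-seeded one.

```python
import numpy as np

# Hypothetical representative entries per topic cluster
TOPICS = {
    "auth": [
        "POST /auth/reset resets user password via email",
        "account locks after 5 failed login attempts",
    ],
    "payment_fraud": [
        "payment fraud threshold is $500 for review",
        "chargeback review required above threshold",
    ],
}

def tokenize(text: str) -> set[str]:
    return {w.strip("?.,!").lower() for w in text.replace("/", " ").split()}

VOCAB = sorted({w for entries in TOPICS.values() for e in entries for w in tokenize(e)})

def embed(text: str) -> np.ndarray:
    # Binary bag-of-words vector over the cluster vocabulary, L2-normalized.
    toks = tokenize(text)
    v = np.array([1.0 if w in toks else 0.0 for w in VOCAB])
    n = np.linalg.norm(v)
    return v / n if n else v

# One-time offline step: one centroid per cluster
_TOPIC_CLUSTERS = {t: np.mean([embed(e) for e in es], axis=0) for t, es in TOPICS.items()}

def route(query: str) -> str:
    q = embed(query)
    return max(_TOPIC_CLUSTERS, key=lambda t: float(q @ _TOPIC_CLUSTERS[t]))

print(route("How do I reset a user account password?"))  # auth
```

The query's tokens overlap only with auth-cluster entries, so routing lands on auth and payment-fraud entries never compete.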
Mechanism 2 — Collapse Near-Duplicates at Ingestion
Before entries are stored, near-duplicates are merged. If two entries have cosine similarity above 0.85, only the newer one is kept.
def _deduplicate(self, entries: list[MemoryEntry]) -> list[MemoryEntry]:
    entries_sorted = sorted(entries, key=lambda e: e.turn)
    kept: list[MemoryEntry] = []
    for candidate in entries_sorted:
        is_dup = False
        for i, existing in enumerate(kept):
            if _cosine_sim(candidate.embedding, existing.embedding) > self.DEDUP_THRESHOLD:
                kept[i] = candidate  # replace older with newer
                is_dup = True
                break
        if not is_dup:
            kept.append(candidate)
    return kept
Without deduplication, the same stale content stored ten times across ten turns accumulates collective retrieval weight. Ten similar VPN-certificate entries push the off-topic cluster centroid toward auth space. Deduplication collapses them to one. The correct cluster boundaries survive.
Mechanism 3 — Evict by Relevance, Not by Age
When the retained pool must be capped, entries are scored by their maximum cosine similarity to any known topic cluster centroid. Entries that match no known query topic are evicted first. Within the retained set, a recency bonus (+0.0 to +0.12) breaks ties in favor of newer entries.
def _topic_relevance_score(self, entry: MemoryEntry) -> float:
    return max(
        _cosine_sim(entry.embedding, centroid)
        for centroid in _TOPIC_CLUSTERS.values()
    )
This is the critical architectural inversion. Most implementations use a queue: oldest entries out, newest entries in. That is exactly backwards when the correct answers were stored at system initialization and the noise arrived later. A relevance-scored eviction policy keeps the answer to “what is the fraud threshold” — stored at turn 1 — over a catering order stored at turn 190. Recency is a tiebreaker, not the primary criterion.
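Putting score and tiebreaker together, the eviction step might look like this. It is my sketch, not the companion code; the relevance values are made-up stand-ins for `_topic_relevance_score` output.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    content: str
    turn: int
    relevance: float  # max cosine similarity to any topic centroid (illustrative)

def evict_to_capacity(entries: list[Entry], cap: int, max_turn: int) -> list[Entry]:
    # Primary criterion: topic relevance. Secondary: a small recency
    # bonus (up to +0.12) so near-ties go to the newer entry.
    def score(e: Entry) -> float:
        return e.relevance + 0.12 * (e.turn / max_turn if max_turn else 0.0)
    return sorted(entries, key=score, reverse=True)[:cap]

pool = [
    Entry("payment fraud threshold is $500 for review", turn=1, relevance=0.82),
    Entry("rate limit exceeded returns 429 error code", turn=3, relevance=0.78),
    Entry("VPN certificate expires in 30 days", turn=158, relevance=0.15),
    Entry("catering order placed for all-hands meeting", turn=190, relevance=0.11),
]
kept = evict_to_capacity(pool, cap=2, max_turn=200)
print([e.content for e in kept])
```

The turn-1 fraud entry survives while the turn-190 catering order is evicted first — the opposite of what FIFO would do.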
Mechanism 4 — Separate Same-Topic Entries with Lexical Overlap
Topic routing and recency weighting still cannot separate two entries that belong to the same cluster but answer different questions. Both of these survive topic filtering for the fraud threshold query:
- payment fraud threshold is $500 for review — correct ✓
- Visa Mastercard Amex card payment accepted — wrong, but also payment_fraud ✗
Cosine similarity gives them similar scores. A BM25-inspired [1] lexical overlap bonus resolves this by rewarding entries whose content shares meaningful non-stop-word tokens with the query.
@staticmethod
def _lexical_overlap_bonus(query_text: str, entry: MemoryEntry) -> float:
    q_tokens = {
        w.strip("?.,!").lower()
        for w in query_text.split()
        if len(w.strip("?.,!")) > 3 and w.lower() not in _LEX_STOP
    }
    e_tokens = set(entry.content.lower().replace("/", " ").split())
    overlap = len(q_tokens & e_tokens)
    return min(overlap * 0.05, 0.15)
The fraud threshold query contains “threshold.” The correct entry contains “threshold.” The wrong entry doesn't. A bonus of 0.05 tips the ranking. Multiply this effect across all ten queries and accuracy lifts measurably. This is the pattern known as hybrid retrieval [2] — dense embedding similarity combined with sparse lexical matching — implemented here as a lightweight reranking step that requires no second embedding pass.
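Here is that tip-over in miniature, as a standalone sketch: the raw cosine scores are hard-coded for illustration, and the `_LEX_STOP` contents are my assumption, not the companion code's list.

```python
_LEX_STOP = {"what", "does", "where", "when", "which"}  # assumed stop-word list

def lexical_overlap_bonus(query_text: str, entry_content: str) -> float:
    # Reward meaningful (len > 3, non-stop-word) tokens shared with the query.
    q_tokens = {
        w.strip("?.,!").lower()
        for w in query_text.split()
        if len(w.strip("?.,!")) > 3 and w.lower() not in _LEX_STOP
    }
    e_tokens = set(entry_content.lower().replace("/", " ").split())
    return min(len(q_tokens & e_tokens) * 0.05, 0.15)

query = "What is the payment fraud threshold?"
# (content, raw cosine similarity) pairs — similarities hard-coded for the example
right = ("payment fraud threshold is $500 for review", 0.51)
wrong = ("Visa Mastercard Amex card payment accepted", 0.53)

reranked = sorted([wrong, right],
                  key=lambda c: c[1] + lexical_overlap_bonus(query, c[0]),
                  reverse=True)
print(reranked[0][0])  # payment fraud threshold is $500 for review
```

The wrong entry starts 0.02 ahead on raw similarity, but shares only “payment” with the query (+0.05), while the correct entry shares three tokens (+0.15) and wins the rerank.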
All four mechanisms are load-bearing. Remove any one and accuracy degrades:
- No routing → cross-topic stale entries re-enter the competition
- No deduplication → repeated stale content shifts cluster centroids
- No relevance eviction → FIFO discards the oldest correct answers first
- No lexical reranking → same-topic wrong entries win on a coin flip
The Final Score
| Metric | Unbounded (200 entries) | Controlled (50 retained) |
|---|---|---|
| Relevance rate | 22% | 42% |
| Accuracy | 40% | 60% |
| Avg confidence | 75.8% | 77.5% |
| Memory footprint | 200 entries | 50 entries |

The same query that returned a VPN certificate answer under unbounded memory now correctly returns the auth reset entry — similarity 0.608 versus the stale entry's 0.471. Topic routing excluded the stale entry before it could compete. The correct answer wins by a comfortable margin instead of losing by a razor-thin one.
One-quarter of the memory. Twenty percentage points more accurate. The constraint is the feature.
What to Change in Your System (Starting Monday)
1. Stop using confidence as a correctness proxy. Instrument your agent with ground-truth evaluation — a small fixed set of known queries with verified answers — sampled on a schedule. Confidence tells you retrieval happened. It doesn't tell you retrieval worked.
2. Audit your eviction policy. If you are using FIFO or LRU eviction, you are discarding your oldest entries first. In most knowledge-base agents, those are your most valuable entries. Switch to relevance-scored eviction with recency as a tiebreaker.
3. Add a routing step before similarity scoring. Even a simple centroid-based cluster assignment dramatically reduces cross-topic contamination. This doesn't require retraining. It requires computing a centroid per topic cluster — a one-time offline step — and filtering candidates before scoring.
4. Run deduplication at ingestion. Repeated near-identical entries multiply their collective retrieval weight. Collapse them to the most recent version at write time, not at read time.
5. Add a lexical overlap bonus as a reranking step. If two entries score similarly on cosine similarity, a BM25-style token overlap bonus [1] will usually separate the one that actually shares vocabulary with the query from the one that merely shares a topic. This is cheap to implement and doesn't require a second embedding pass.
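For point 1, the ground-truth harness can be as small as this sketch. The queries, keywords, and the stand-in agent are all hypothetical, for illustration only.

```python
# Fixed evaluation set: known queries paired with a verified answer keyword.
GROUND_TRUTH = [
    ("How do I reset a user account password?", "auth/reset"),
    ("What is the payment fraud threshold?", "$500"),
    ("What happens when the rate limit is exceeded?", "429"),
]

def measured_accuracy(answer_fn) -> float:
    # Fraction of known queries whose answer contains its verified keyword.
    hits = sum(1 for query, keyword in GROUND_TRUTH if keyword in answer_fn(query))
    return hits / len(GROUND_TRUTH)

# Stand-in for a degraded agent: always returns a stale entry.
def broken_agent(query: str) -> str:
    return "VPN certificate expires in 30 days notify users"

# Ground truth catches what confidence never will.
print(measured_accuracy(broken_agent))  # 0.0
```

Run this on a schedule and alert on accuracy, not on confidence — the metric above drops to zero while confidence keeps climbing.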
Limitations
This simulation uses deterministic keyword-seeded embeddings, not a learned sentence encoder. Topic clusters are hand-labeled. The confidence model is a linear function of mean retrieval score. Real systems have higher-dimensional embedding spaces, learned boundaries, and calibrated probabilities that may behave differently at the margins.
These simplifications make the failure modes easier to observe, not harder. The structural causes — cosine similarity measuring coherence rather than correctness, FIFO eviction discarding relevant old entries, stale entries accumulating collective weight — persist regardless of embedding dimension or model scale [3]. The mechanisms described address those structural causes.
The accuracy numbers are relative comparisons within a controlled simulation, not benchmarks to generalize. The important quantities are the directions and magnitudes of change as memory scales.
Running the Code Yourself
pip install numpy scipy colorama
# Run the full four-phase demo
python llm_memory_leak_demo.py
# Suppress INFO logs
python llm_memory_leak_demo.py --quiet
# Run unit tests first (recommended — verifies correctness logic)
python llm_memory_leak_demo.py --test
Run --test before capturing output for replication. The TestAnswerKeywords suite verifies that each query's correctness filter matches exactly one template entry — this is what closes the topic-level correctness loophole described in Phase 3.
Key Takeaways
- Relevance collapses silently. At 10 entries, 44% of retrieved context is relevant. At 500 entries, 14% is. The agent keeps answering throughout.
- Confidence is an optimism signal, not a reliability signal. It rises as accuracy falls. Your alert will never fire.
- Stale entries win on margins you cannot see. A 0.014 cosine similarity gap is the difference between a correct answer and a VPN certificate.
- Four mechanisms are required — not three. Topic routing, semantic deduplication, relevance-scored eviction, and lexical reranking each close a failure mode the others cannot.
- Bounded memory beats unbounded memory. 50 well-chosen entries answer better than 200 accumulated ones. Less context, correctly chosen, is strictly better.
Final Thought
More memory does not make LLM systems smarter.
It makes them more confident in whatever they retrieve.
If retrieval degrades, confidence becomes the most dangerous metric you have.
Disclosure
This article was written by the author. The companion code is original work. All experimental results are produced by running the published code; no results were manually adjusted. The author has no financial relationship with any tool, library, or company mentioned in this article.
References
[1] Robertson, S., & Zaragoza, H. (2009). The Probabilistic Relevance Framework: BM25 and Beyond. Foundations and Trends in Information Retrieval, 4(1–2), 1–174. https://doi.org/10.1561/1500000019
[2] Luan, Y., Eisenstein, J., Toutanova, K., & Collins, M. (2021). Sparse, Dense, and Attentional Representations for Text Retrieval. Transactions of the Association for Computational Linguistics, 9, 329–345. https://doi.org/10.1162/tacl_a_00369
[3] Liu, N. F., Lin, K., Hewitt, J., Paranjape, A., Bevilacqua, M., Petroni, F., & Liang, P. (2024). Lost in the Middle: How Language Models Use Long Contexts. Transactions of the Association for Computational Linguistics, 12, 157–173. https://doi.org/10.1162/tacl_a_00638
[4] Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W., Rocktäschel, T., Riedel, S., & Kiela, D. (2020). Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. Advances in Neural Information Processing Systems, 33, 9459–9474. https://arxiv.org/abs/2005.11401
[5] Gao, L., Ma, X., Lin, J., & Callan, J. (2023). Precise Zero-Shot Dense Retrieval without Relevance Labels. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 1762–1777. https://doi.org/10.18653/v1/2023.acl-long.99 (arXiv:2212.10496)
The companion code for this article is available at: https://github.com/Emmimal/memory-leak-rag/
All terminal output shown in this article was produced by running python llm_memory_leak_demo.py on the published code with no modifications.

