
    Chunk Size as an Experimental Variable in RAG Systems

    By Editor Times Featured | December 31, 2025


    User: "What does the green highlighting mean in this document?"
    RAG system: "Green highlighted text is interpreted as configuration settings."

    Exchanges like this one illustrate the kinds of answers we expect today from Retrieval-Augmented Generation (RAG) systems.

    Over the past few years, RAG has become one of the central architectural building blocks for knowledge-based language models: instead of relying solely on the knowledge stored in the model, RAG systems combine language models with external document sources.

    The term was introduced by Lewis et al. and describes an approach that is widely used to reduce hallucinations, improve the traceability of answers, and enable language models to work with proprietary data.

    I wanted to understand why a system selects one specific answer instead of a very similar alternative. This decision is often made at the retrieval stage, long before an LLM comes into play.

    For this reason, in this article I ran three experiments to investigate how different chunk sizes (80, 220, and 500 characters) influence retrieval behavior.

    Table of Contents
    1 – Why Chunk Size Is More Than Just a Parameter
    2 – How Does Chunk Size Influence the Stability of Retrieval Results in Small RAG Systems?
    3 – Minimal RAG System Without Output Generation
    4 – Three Experiments: Chunk Size as a Variable
    5 – Final Thoughts

    1 – Why Chunk Size Is More Than Just a Parameter

    In a typical RAG pipeline, documents are first split into smaller text segments, embedded into vectors, and stored in an index. When a query is issued, semantically similar text segments are retrieved and then processed into an answer. This final step is usually carried out together with a language model.

    Typical components of a RAG system include:

    • Document preprocessing
    • Chunking
    • Embedding
    • Vector index
    • Retrieval logic
    • Optional: generation of the output

    In this article, I focus on the retrieval step. This step depends on several parameters:

    • Choice of the embedding model:
      The embedding model determines how text is converted into numerical vectors. Different models capture meaning at different levels of granularity and are trained on different objectives. For example, lightweight sentence-transformer models are often sufficient for semantic search, while larger models may capture more nuance but come with higher computational cost.
    • Distance or similarity metric:
      The distance or similarity metric defines how the closeness between two vectors is measured. Common choices include cosine similarity, dot product, or Euclidean distance. For normalized embeddings, cosine similarity is typically used.
    • Number of retrieved results (Top-k):
      The number of retrieved results specifies how many text segments are returned by the retrieval step. A small Top-k can miss relevant context, while a large Top-k increases recall but may introduce noise.
    • Overlap between text segments:
      Overlap defines how much text is shared between consecutive chunks. It is typically used to avoid losing important information at chunk boundaries. A small overlap reduces redundancy but risks cutting explanations in half, while a larger overlap increases robustness at the cost of storing and processing more similar chunks.
    • Chunk size:
      Describes the size of the text pieces that are extracted from a document and stored as individual vectors. Depending on the implementation, chunk size can be defined based on characters, words, or tokens. The size determines how much context a single vector represents (see the chunking sketch after this list).
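    The chunking logic itself takes only a few lines. Here is a minimal sketch of a character-based splitter with overlap; the function name chunk_text and the exact boundary handling are my assumptions, not necessarily the repository's implementation:

    def chunk_text(text: str, chunk_size: int, overlap: int) -> list[str]:
        """Split text into fixed-size character windows that share `overlap` characters."""
        if overlap >= chunk_size:
            raise ValueError("overlap must be smaller than chunk_size")
        step = chunk_size - overlap  # how far the window advances per chunk
        chunks = []
        for start in range(0, len(text), step):
            piece = text[start:start + chunk_size]
            if piece.strip():  # drop whitespace-only tails
                chunks.append(piece)
        return chunks

    With chunk_size=80 and overlap=10, for example, the window advances 70 characters at a time, so every chunk repeats the last 10 characters of its predecessor.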

    Small chunks contain very little context and are highly specific. Large chunks include more surrounding information, but at a much coarser level. As a result, chunk size determines which parts of the meaning are actually compared when a query is matched against a chunk.

    Chunk size implicitly reflects assumptions about how much context is required to capture meaning, how strongly information may be fragmented, and how clearly semantic similarity can be measured.

    With this article, I wanted to explore exactly this through a small RAG system experiment and asked myself:

    How do different chunk sizes affect retrieval behavior?

    The focus is not on a system intended for production use. Instead, I wanted to learn how different chunk sizes affect the retrieval results.

    2 – How Does Chunk Size Influence the Stability of Retrieval Results in Small RAG Systems?

    I therefore asked myself the following questions:

    • How does chunk size change retrieval results in a small, controlled RAG system?
    • Which text segments make it to the top of the ranking when the queries are identical but the chunk sizes differ?

    To investigate this, I deliberately defined a simple setup in which all conditions (except chunk size) remain the same:

    • Three Markdown documents as the knowledge base
    • Three identical, fixed questions
    • The same embedding model for vectorizing the texts

    The text used in the three Markdown files is based on the documentation of a real application called OneLatex. To keep the experiment focused on retrieval behavior, the content was slightly simplified and reduced to the core explanations relevant to the questions.

    The three questions I used were:

    "Q1: What's the fundamental benefit of separating content material creation from formatting in OneLatex?"
    "Q2: How does OneLatex interpret textual content highlighted in inexperienced in OneNote?"
    "Q3: How does OneLatex interpret textual content highlighted in yellow in OneNote?"

    In addition, I deliberately omitted an LLM for output generation.

    The reason for this is simple: I did not want an LLM to turn incomplete or poorly matched text segments into a coherent answer. This makes it much clearer what actually happens in the retrieval step, how the retrieval parameters interact, and what role the sentence transformer plays.

    3 – Minimal RAG System Without Output Generation

    For the experiments, I therefore used a small RAG system with the following components: Markdown documents as the knowledge base, a simple chunking logic with overlap, a sentence transformer model to generate embeddings, and a ranking of text segments using cosine similarity.

    As the embedding model, I used all-MiniLM-L6-v2 from the Sentence-Transformers library. This model is lightweight and therefore well suited for running locally on a personal laptop (I ran it locally on my Lenovo laptop with 64 GB of RAM). The similarity between a query and a text segment is calculated using cosine similarity. Because the vectors are normalized, the dot product can be compared directly.
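    As a minimal sketch of this scoring step with the Sentence-Transformers API (the two example chunks are illustrative stand-ins for the Markdown content, not the actual documents):

    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    # Illustrative chunks; the real knowledge base is the three Markdown files.
    chunks = [
        "Green highlighted text is interpreted as configuration settings.",
        "Yellow highlighted text is interpreted differently from green text.",
    ]
    query = "How does OneLatex interpret text highlighted in green in OneNote?"

    # normalize_embeddings=True returns unit-length vectors, so the dot
    # product below is exactly the cosine similarity.
    chunk_vecs = model.encode(chunks, normalize_embeddings=True)
    query_vec = model.encode([query], normalize_embeddings=True)[0]

    scores = chunk_vecs @ query_vec  # one cosine score per chunk
    print(scores)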

    I deliberately kept the system small and therefore did not include any chat history, memory or agent logic, or LLM-based answer generation.

    As an "answer," the system simply returns the highest-ranked text segment. This makes it much clearer which content is actually identified as relevant by the retrieval step.
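    Expressed as code, the retrieval step then reduces to sorting the cosine scores and returning the best segments. The helper below is hypothetical; the repository may structure this differently:

    import numpy as np

    def retrieve_top_k(query_vec: np.ndarray, chunk_vecs: np.ndarray,
                       chunks: list[str], top_k: int = 3) -> list[tuple[float, str]]:
        """Rank all chunks against the query and return the top_k (score, chunk) pairs."""
        scores = chunk_vecs @ query_vec           # cosine similarity on normalized vectors
        order = np.argsort(scores)[::-1][:top_k]  # indices sorted from best to worst
        return [(float(scores[i]), chunks[i]) for i in order]

    # The system's "answer" is simply the text of the Top-1 result:
    # best_score, answer = retrieve_top_k(query_vec, chunk_vecs, chunks)[0]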

    The full code for the mini RAG system can be found in my GitHub repository:

    → 🤓 Find the full code in the GitHub Repo 🤓 ←

    4 – Three Experiments: Chunk Size as a Variable

    For the analysis, I ran the three commands below via the command line:

    # Experiment 1 - Baseline
    python main.py --chunk-size 220 --overlap 40 --top-k 3
    
    # Experiment 2 - Small chunk size
    python main.py --chunk-size 80 --overlap 10 --top-k 3
    
    # Experiment 3 - Large chunk size
    python main.py --chunk-size 500 --overlap 50 --top-k 3
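    The three flags suggest a small argparse interface, roughly like the following sketch (the defaults mirror the baseline run; the actual main.py may differ):

    import argparse

    parser = argparse.ArgumentParser(description="Mini RAG retrieval experiment")
    parser.add_argument("--chunk-size", type=int, default=220, help="characters per chunk")
    parser.add_argument("--overlap", type=int, default=40, help="characters shared by consecutive chunks")
    parser.add_argument("--top-k", type=int, default=3, help="number of segments to retrieve")
    args = parser.parse_args()

    # argparse maps the dashes to underscores:
    # args.chunk_size, args.overlap, args.top_k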

    The setup from Section 3 remains exactly the same: the same three documents, the same three questions, and the same embedding model.

    Chunk size defines the number of characters per text segment. In addition, I used an overlap in each experiment to reduce information loss at chunk boundaries. For each experiment, I computed the semantic similarity scores between the query and all chunks and ranked the highest-scoring segments.

    Small Chunks (80 Characters) – Loss of Context

    With very small chunks (chunk-size 80), a strong fragmentation of the content becomes apparent: individual text segments often contain only sentence fragments or isolated statements without sufficient context. Explanations are split across several chunks, so that individual segments contain only parts of the original content.

    Formally, the retrieval still works correctly: semantically similar fragments are found and ranked highly.

    However, when we look at the actual content, we see that the results are hardly usable:

    Shows the result of the retrieval experiment with chunk size 80.
    Screenshot taken by the author.

    The returned chunks are thematically related, but they do not provide a self-contained answer. The system roughly recognizes what the topic is about, but it breaks the content down so strongly that the individual results do not say much on their own.

    Medium Chunks (220 Characters) – Apparent Stability

    With the medium chunks (chunk-size 220), the results already improved noticeably. Most of the returned text segments contained complete explanations and were plausible in terms of content. At first glance, the retrieval seemed stable and reliable: it usually returned exactly the information one would expect.

    However, a concrete problem became apparent when distinguishing between green and yellow highlighted text. Regardless of whether I asked about the meaning of the green or the yellow highlighting, the system returned the chunk about the yellow highlighting as the top result in both cases. The correct chunk was present, but it was not selected as Top-1.

    Shows the results of the retrieval experiment with chunk size 220.
    Screenshot taken by the author.

    The reason lies in the very similar similarity scores of the two top results:

    • Score for Top-1: 0.873
    • Score for Top-2: 0.774

    The system can hardly distinguish between the two candidates semantically and ultimately selects the chunk with the slightly higher score.

    The problem? It does not match the question in terms of content and is simply wrong.

    For us as humans, this is very easy to recognize. For a sentence transformer like all-MiniLM-L6-v2, it seems to be a challenge.

    What matters here is this: if we only look at the Top-1 result, this error remains invisible. Only by comparing the scores can we see that the system is uncertain in this situation. Since it is forced to make a clear decision in our setup, it returns the Top-1 chunk as the answer.

    Large Chunks (500 Characters) – Robust Contexts

    With the larger chunks (chunk-size 500), the text segments contain much more coherent context. There is also hardly any fragmentation anymore: explanations are no longer split across several chunks.

    And indeed, the error in distinguishing between green and yellow no longer occurs. The questions about green and yellow highlighting are now correctly distinguished, and the respective matching chunk is clearly ranked as the top result. We can also see that the similarity scores of the relevant chunks are now more clearly separated.

    Shows the result of the retrieval experiment with chunk size 500.
    Screenshot taken by the author.

    This makes the ranking more stable and easier to understand. The downside of this setting, however, is the coarser granularity: individual chunks contain more information and are less finely tailored to specific aspects.

    In our setup with three Markdown files, where the content is already thematically well separated, this downside hardly plays a role. With differently structured documentation, such as long continuous texts with several topics per section, an overly large chunk size could lead to irrelevant information being retrieved along with relevant content.


    On my Substack Data Science Espresso, I share practical guides and bite-sized updates from the world of Data Science, Python, AI, Machine Learning, and Tech, made for curious minds like yours.

    Take a look and subscribe on Medium or on Substack if you want to stay in the loop.


    5 – Final Thoughts

    The results of the three very simple experiments can be traced back to how retrieval works. Each chunk is represented as a vector, and its proximity to the query is calculated using cosine similarity. The resulting score indicates how similar the question and the text segment are in the semantic space.

    What is important here is that the score is not a measure of correctness. It is a measure of relative comparison across the available chunks for a given question in a single run.

    When several segments are semantically very similar, even minimal differences in the scores can determine which chunk is returned as Top-1. One example of this was the incorrect distinction between green and yellow at the medium chunk size.

    One possible extension would be to allow the system to explicitly signal uncertainty. If the scores of the Top-1 and Top-2 chunks are very close, the system could return an "I don't know" or "I'm not sure" response instead of forcing a decision.
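    A minimal sketch of such an abstention check, operating on the ranked (score, chunk) pairs from the retrieval step (the threshold of 0.1 is illustrative and would need tuning):

    def answer_or_abstain(ranked: list[tuple[float, str]], min_gap: float = 0.1) -> str:
        """Return the Top-1 chunk, or abstain when Top-1 and Top-2 score almost equally."""
        if len(ranked) >= 2 and ranked[0][0] - ranked[1][0] < min_gap:
            return "I'm not sure: the two best matches score almost identically."
        return ranked[0][1]

    With the scores from the medium-chunk experiment (0.873 vs. 0.774), this check would abstain instead of returning the wrong yellow chunk.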

    Based on this small RAG system experiment, it is not really possible to derive a "best chunk size" conclusion.

    But what we can observe instead is the following:

    • Small chunks lead to high variance: Retrieval reacts very precisely to individual words but quickly loses the overall context.
    • Medium-sized chunks: Appear stable at first glance, but can create dangerous ambiguities when several candidates are scored almost equally.
    • Large chunks: Provide more robust context and clearer rankings, but they are coarser and less precisely tailored.

    Chunk size therefore determines how sharply retrieval can distinguish between similar pieces of content.

    In this small setup, this did not play a major role. However, when we think about larger RAG systems in production environments, this kind of retrieval instability could become a real problem: as the number of documents grows, the number of semantically similar chunks increases as well. This means that many situations with very small score differences are likely to occur. I could imagine that such effects are often masked by downstream language models, when an LLM turns incomplete or only partially matching text segments into plausible answers.

