
    Optimizing Vector Search: Why You Should Flatten Structured Data 

By Editor Times Featured | January 29, 2026 | 8 min read


When integrating structured data into a RAG system, engineers often default to embedding raw JSON into a vector database. The reality, however, is that this intuitive approach leads to dramatically poor performance. Modern embedding models are based on the BERT architecture, which is essentially the encoder part of a Transformer, and are trained on huge unstructured text datasets with the primary goal of capturing semantic meaning. They can provide incredible retrieval performance on natural language, but nothing in that training prepares them for structural syntax. Consequently, even though embedding JSON may seem like an intuitively simple and elegant solution, using a generic embedding model on JSON objects yields results far from peak performance.

    Deep dive

    Tokenization

The first step is tokenization, which takes the text and splits it into tokens, which are generally sub-parts of words. Modern embedding models use Byte-Pair Encoding (BPE) or WordPiece tokenization algorithms. These algorithms are optimized for natural language, breaking words into common sub-components. When a tokenizer encounters raw JSON, it struggles with the high frequency of non-alphanumeric characters. For example, "usd": 10, is not viewed as a key-value pair; instead, it is fragmented into:

    • The structural characters: quotes ("), colon (:), and comma (,)
    • The tokens usd and 10

This creates a low signal-to-noise ratio. In natural language, almost every word contributes to the semantic "signal", whereas in JSON (and other structured formats) a large share of tokens is "wasted" on structural syntax that carries zero semantic value.
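To get a rough feel for this noise, we can compare the share of purely structural characters in a serialized object versus its flattened description. This is a character-level proxy, not a real BPE tokenizer, which splits text differently; the function and example values here are illustrative only:

```python
import json
import re


def structural_ratio(text):
    """Share of characters that are JSON syntax (braces, brackets,
    quotes, colons, commas) rather than letters, digits, or spaces."""
    structural = len(re.findall(r'[{}\[\]":,]', text))
    return structural / len(text)


raw = json.dumps({"price": {"usd": 10, "eur": 9}})
flat = "The price is 10 US dollars or 9 euros"

# Roughly 40%+ of the raw JSON string is structural syntax,
# while the flattened sentence contains none at all.
print(structural_ratio(raw))
print(structural_ratio(flat))
```

Real tokenizers often make this worse, since each quote, brace, and colon tends to become its own token.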

Attention calculation

The core strength of Transformers lies in the attention mechanism, which allows the model to weigh the importance of tokens relative to one another.

In the sentence The price is 10 US dollars or 9 euros, attention can easily link the value 10 to the concept price, because these relationships are well represented in the model's pre-training data and the model has seen this linguistic pattern millions of times. However, in the raw JSON:

    "price": {
      "usd": 10,
      "eur": 9
    }

the model encounters structural syntax it was not primarily optimized to "read". Without the linguistic connectors, the resulting vector fails to capture the true intent of the data, because the relationships between the keys and the values are obscured by the format itself.

Mean Pooling

The final step in producing a single embedding representation of the document is mean pooling. Mathematically, the final embedding E is the centroid of all n token vectors e1, e2, …, en in the document:

E = (e1 + e2 + … + en) / n

Mean pooling calculation: converting a sequence of n token embeddings into a single vector representation by averaging their values. Image by author.

This is where the JSON tokens become a mathematical liability. If 25% of the tokens in the document are structural markers (braces, quotes, colons), the final vector is heavily influenced by the "meaning" of punctuation. As a result, the vector is effectively pulled away from its true semantic center in the vector space by these noise tokens. When a user submits a natural-language query, the distance between the "clean" query vector and the "noisy" JSON vector increases, directly hurting the retrieval metrics.
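This pull can be illustrated with a toy calculation. The 2-D vectors below are hypothetical stand-ins for token embeddings, not real model outputs; the point is only that averaging in vectors for syntax tokens drags the pooled centroid away from the semantic direction:

```python
def mean_pool(vectors):
    """Average a list of equally sized token vectors into one embedding."""
    n = len(vectors)
    dims = len(vectors[0])
    return [sum(v[d] for v in vectors) / n for d in range(dims)]


# Hypothetical 2-D token vectors: two "content" tokens pointing one way,
# two "syntax" tokens sitting in a different region of the space.
content = [[1.0, 0.0], [0.9, 0.1]]
syntax = [[0.0, 1.0], [0.1, 0.9]]

clean = mean_pool(content)            # stays close to the semantic direction
noisy = mean_pool(content + syntax)   # dragged halfway toward the syntax tokens

print(clean, noisy)
```

With half the tokens being syntax, the pooled vector lands midway between the two clusters, which is exactly the drift that widens the distance to a clean natural-language query vector.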

    Flatten it

So now that we know about the JSON limitations, we need to figure out how to resolve them. The most general and straightforward approach is to flatten the JSON and convert it into natural language.

Let's consider a typical product object:

    {
     "skuId": "123",
     "description": "This is a test product used for demonstration purposes",
     "quantity": 5,
     "price": {
      "usd": 10,
      "eur": 9
     },
     "availableDiscounts": ["1", "2", "3"],
     "giftCardAvailable": "true",
     "category": "demo product"
     ...
    }

This is a simple object with some attributes like description, and so on. Let's apply the tokenizer to it and see how it looks:

Tokenization of raw JSON. Notice the high density of distinct tokens for syntax (braces, quotes, colons) that contribute noise rather than meaning. Screenshot by author using the OpenAI Tokenizer.

Now, let's convert it into text to make the embedding model's job easier. To do that, we can define a template and substitute the JSON values into it. For example, this template could be used to describe the product:

    Product with SKU {skuId} belongs to the category "{category}"
    Description: {description}
    It has a quantity of {quantity} available
    The price is {price.usd} US dollars or {price.eur} euros
    Available discount ids include {availableDiscounts as comma-separated list}
    Gift cards are {giftCardAvailable ? "available" : "not available"} for this product

So the final result will look like:

    Product with SKU 123 belongs to the category "demo product"
    Description: This is a test product used for demonstration purposes
    It has a quantity of 5 available
    The price is 10 US dollars or 9 euros
    Available discount ids include 1, 2, and 3
    Gift cards are available for this product

And apply the tokenizer to it:

Tokenization of the flattened text. The resulting sequence is shorter (14% fewer tokens) and composed primarily of semantically meaningful words. Screenshot by author using the OpenAI Tokenizer.

Not only does it have 14% fewer tokens now, but it is also a much clearer form that preserves the semantic meaning and required context.
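The template substitution above can be sketched as a small Python helper. The field names mirror the sample product object; this is an illustrative sketch, not library code:

```python
def join_ids(ids):
    """Comma-separated list with a final 'and', e.g. '1, 2, and 3'."""
    if len(ids) == 1:
        return ids[0]
    return ", ".join(ids[:-1]) + ", and " + ids[-1]


def flatten(product):
    """Render a product dict using the natural-language template above."""
    gift = "available" if product["giftCardAvailable"] == "true" else "not available"
    return (
        f'Product with SKU {product["skuId"]} belongs to the category "{product["category"]}"\n'
        f'Description: {product["description"]}\n'
        f'It has a quantity of {product["quantity"]} available\n'
        f'The price is {product["price"]["usd"]} US dollars or {product["price"]["eur"]} euros\n'
        f'Available discount ids include {join_ids(product["availableDiscounts"])}\n'
        f'Gift cards are {gift} for this product'
    )


product = {
    "skuId": "123",
    "description": "This is a test product used for demonstration purposes",
    "quantity": 5,
    "price": {"usd": 10, "eur": 9},
    "availableDiscounts": ["1", "2", "3"],
    "giftCardAvailable": "true",
    "category": "demo product",
}
print(flatten(product))
```

In practice, a template like this lives next to the schema it describes, so any field added to the object forces a conscious decision about how (or whether) it should appear in the embedded text.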

Let's measure the results

Note: full, reproducible code for this experiment is available in the Google Colab notebook [1].

Now let's try to measure retrieval performance for both options. We will focus on the standard retrieval metrics Recall@k, Precision@k, and MRR to keep it simple, and will use a generic embedding model (all-MiniLM-L6-v2) and the Amazon ESCI dataset with a random sample of 5,000 queries and 3,809 relevant products.
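These metrics are standard, but for clarity here is a minimal sketch of how each is computed per query, assuming a ranked list of retrieved ids and a set of relevant ids (toy data, not the experiment's):

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved ids that are relevant."""
    top = retrieved[:k]
    return sum(1 for doc in top if doc in relevant) / k


def recall_at_k(retrieved, relevant, k):
    """Fraction of all relevant ids that appear in the top-k results."""
    top = retrieved[:k]
    return sum(1 for doc in top if doc in relevant) / len(relevant)


def mrr(retrieved, relevant):
    """Reciprocal rank of the first relevant result (0 if none found)."""
    for rank, doc in enumerate(retrieved, start=1):
        if doc in relevant:
            return 1.0 / rank
    return 0.0


retrieved = ["p3", "p1", "p9"]   # ranked search results for one query
relevant = {"p1", "p2"}          # ground-truth relevant products

print(precision_at_k(retrieved, relevant, 3))  # 1 hit out of 3 returned
print(recall_at_k(retrieved, relevant, 3))     # 1 of 2 relevant found
print(mrr(retrieved, relevant))                # first hit at rank 2
```

The reported numbers are these per-query values averaged over all 5,000 queries.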

all-MiniLM-L6-v2 is a popular choice: it is small (22.7M parameters) but provides fast and accurate results, making it a good fit for this experiment.

For the dataset, a version of Amazon ESCI is used, specifically milistu/amazon-esci-data, which is available on Hugging Face and contains a collection of Amazon products and search queries.

The flattening function used for text conversion is:

    def flatten_product(product):
        return (
            f"Product {product['product_title']} from brand {product['product_brand']}"
            f" and product id {product['product_id']}"
            f" and description {product['product_description']}"
        )

A sample of the raw JSON data is:

    {
      "product_id": "B07NKPWJMG",
      "title": "RoWood 3D Puzzles for Adults, Wooden Mechanical Gear Kits for Teens Kids Age 14+",
      "description": "

    Specifications
    Model Number: Rowood Treasure Box LK502
    Average build time: 5 hours
    Total Pieces: 123
    Model weight: 0.69 kg
    Box weight: 0.74 kg
    Assembled size: 100*124*85 mm
    Box size: 320*235*39 mm
    Certificates: EN71,-1,-2,-3,ASTMF963
    Recommended Age Range: 14+
    Contents
    Plywood sheets
    Metal spring
    Illustrated instructions
    Accessories
    MADE FOR ASSEMBLY
    -Follow the instructions provided in the booklet and assemble the 3D puzzle with some exciting and engaging fun. Feel the pride of self-creation getting this lovely wooden work like a pro.
    GLORIFY YOUR LIVING SPACE
    -Revive the enigmatic charm and cheer your parties and get-togethers with an experience that is unique and interesting.
    ", "brand": "RoWood", "color": "Treasure Box" }

For the vector search, two FAISS indexes are created: one for the flattened text and one for the JSON-formatted text. Both indexes are flat, which means they compute distances against every stored entry instead of using an Approximate Nearest Neighbour (ANN) index. This is important to ensure that the retrieval metrics are not affected by ANN approximation error.

    import faiss

    # Dimensionality of all-MiniLM-L6-v2 embeddings
    D = 384
    index_json = faiss.IndexFlatIP(D)     # exact inner-product index for JSON text
    index_flatten = faiss.IndexFlatIP(D)  # exact inner-product index for flattened text
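A flat inner-product index is simply exhaustive scoring. To make "flat" concrete, the same exact search can be sketched without FAISS; the 3-D vectors below are toy stand-ins for real embeddings (with L2-normalized embeddings, inner product equals cosine similarity):

```python
def exact_ip_search(index_vectors, query, top_k=2):
    """Brute-force inner-product search, the behavior of a flat index:
    score every stored vector against the query and return the best top_k."""
    scores = [
        (i, sum(q * v for q, v in zip(query, vec)))
        for i, vec in enumerate(index_vectors)
    ]
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return scores[:top_k]


# Toy normalized 3-D "embeddings" standing in for model outputs.
vectors = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.6, 0.8, 0.0]]
query = [0.8, 0.6, 0.0]

print(exact_ip_search(vectors, query))  # highest inner product ranked first
```

Because every entry is scored, the ranking is exact, so any metric difference between the two indexes comes from the embeddings alone.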

To reduce the dataset, a random sample of 5,000 queries was chosen, and all corresponding products were embedded and added to the indexes. The collected metrics are as follows:

Comparing the two indexing methods using the all-MiniLM-L6-v2 embedding model on the Amazon ESCI dataset. The flattened approach consistently yields higher scores across all key retrieval metrics (Precision@10, Recall@10, and MRR). Image by author.

And the performance change of the flattened version is:

Converting the structured JSON to natural-language text resulted in significant gains, including a 19.1% increase in Recall@10 and a 27.2% increase in MRR (Mean Reciprocal Rank), confirming the superior semantic representation of the flattened data. Image by author.

The analysis confirms that embedding raw structured data into a generic vector space is a suboptimal approach, and that adding a simple preprocessing step of flattening structured data consistently delivers significant improvements in retrieval metrics (boosting recall@k and precision@k by about 20%). The main takeaway for engineers building RAG systems is that effective data preparation is critical for achieving peak performance of a semantic retrieval/RAG system.

    References

    [1] Full experiment code: https://colab.research.google.com/drive/1dTgt6xwmA6CeIKE38lf2cZVahaJNbQB1?usp=sharing
    [2] Model: all-MiniLM-L6-v2, https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2
    [3] Amazon ESCI dataset. Specific version used: https://huggingface.co/datasets/milistu/amazon-esci-data. The original dataset is available at https://www.amazon.science/code-and-datasets/shopping-queries-dataset-a-large-scale-esci-benchmark-for-improving-product-search
    [4] FAISS: https://ai.meta.com/tools/faiss/


