    KV Cache Is Eating Your VRAM. Here’s How Google Fixed It With TurboQuant.

By Editor Times Featured · April 19, 2026 · 12 min read


If you have spent any time with Transformers, you already know attention is the brain of the entire operation. It is what lets the model figure out which tokens are talking to each other, and that one mechanism is responsible for almost everything impressive LLMs do.

Attention works with three components: Query (Q), Key (K), and Value (V) [1]. The dot product between Q and K tells the model how much each token should focus on the others, and that is essentially the core of what attention does.

Now, calling attention the "brain" also means it comes with a cost. During inference, every time a new token is predicted, the K and V matrices are recalculated for all the previous tokens too. So if 90 tokens are already there and the model is predicting the 91st, it goes back and recomputes K and V for all 90. Isn't this repetition a waste?

The KV cache changed this. The idea is simple: instead of recomputing, just store the K and V matrices in VRAM and reuse them during inference. Sounds easy, right? That is probably why every major LLM out there has adopted it; the drop in latency is hard to argue with.
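To make the reuse concrete, here is a minimal sketch (toy NumPy projections; all names and shapes are illustrative assumptions, not any real model's code) of per-token decoding where each token's K and V are computed exactly once and then served from the cache:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64                          # head dimension (illustrative)
W_k = rng.normal(size=(d, d))   # toy key projection
W_v = rng.normal(size=(d, d))   # toy value projection

k_cache, v_cache = [], []       # the KV cache: one entry per past token

def step(x_t):
    """Process one new token embedding: compute its K/V once, reuse the rest."""
    k_cache.append(x_t @ W_k)   # only the NEW token's K is computed...
    v_cache.append(x_t @ W_v)   # ...and its V
    K = np.stack(k_cache)       # (seq_len, d) — past rows come from cache
    V = np.stack(v_cache)
    q_t = x_t                   # toy query (identity projection, for brevity)
    scores = q_t @ K.T / np.sqrt(d)
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()
    return attn @ V             # attention output for the new token

for _ in range(5):
    out = step(rng.normal(size=d))
print(len(k_cache))  # 5 — each past token's K/V was computed once, never again
```

Without the two `append` lines, `step` would have to re-project every past token on every call, which is exactly the repeated work the cache eliminates.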

    Although KV cache got here as a silver lining for LLMs, it introduced up extra challenges. It launched further reminiscence overhead. This won’t be a giant situation for SLMs, however mega-LLMs with billions of parameters now turned harder to load on machines. Roughly 20-30% further VRAM is consumed by the KV cache alone. The larger limitation is that this overhead shouldn’t be static, it retains rising. This could develop as much as the mannequin measurement itself with lengthy contexts or extra concurrent customers, since every person will get their very own KV cache. To resolve this many researchers launched completely different approaches like Grouped-Question Consideration (GQA) [2], PagedAttention (VLLM) [3], Quantization (to 4-bit or 8-bit). Nevertheless, all of those helped with the reminiscence overhead situation however accuracy needed to be compromised for that. There was no answer to each compress them and retain authentic accuracy. Then got here TurboQuant from Google, which surprisingly manages to do each. The authors additionally show that this answer sits on the theoretical optimum, the absolute best for this class of drawback.

TurboQuant comes in two stages: PolarQuant and Residual Correction [4].

PolarQuant (Stage 1): compresses the K and V matrices.

Residual Correction (Stage 2): corrects the quantization error left after PolarQuant, recovering lost information.

Applying both sequentially is what makes it different from traditional quantization. Here is a visual breakdown:

Traditional quantization reconstructs the vector. TurboQuant reconstructs what attention actually needs. Image by Author.

That should give you a clear picture of the TurboQuant pipeline and how it differs from the traditional quantization we talked about. Before we dive into each stage, let us answer another important question: since we are talking about reducing memory overhead, what exactly does TurboQuant store in the cache, and how much less memory does it actually take up? Let us look at that visually below:

Storage efficiency vs. accuracy: comparing the architectural differences between standard INT8/INT4 compression and TurboQuant's residual-based storage pipeline, with an example of the compression and accuracy each offers. Image by Author.

You might not fully grasp what Idx, QJL, and ε mean just yet, but they will become clear as we unpack the pipeline step by step. For now, the table above gives you the essential idea: it shows exactly what TurboQuant stores compared with traditional quantization.

The key takeaway? Although both methods achieve nearly identical compression rates (the extra ε scalar is negligible once you spread it across the vector dimensions), TurboQuant keeps accuracy on par with the original full-precision model. In fact, the official paper reports that TurboQuant delivers more than 4.5-5x KV cache compression, that is, an effective 3.5-2.5 bits per channel, with near-zero accuracy loss in practice. That is quite phenomenal.

Now let's walk through the actual step-by-step flow of TurboQuant, the exact sequence we previewed in the diagram earlier.

    Stage 1 (PolarQuant):

This stage involves two major operations: rotation and Lloyd-Max quantization.

But why rotation in the first place? The major flaw of traditional quantization is how badly it handles outliers. To make this concrete, let's assume we have a 4-dimensional key vector for a token: [0.125, 0.103, 0.220, 6.030] (outliers like this are actually quite common in attention keys). If we quantize it traditionally, the quantizer has to stretch its limited levels to cover that huge 6.030 spike. The result? Something like [0, 0, 0, 1]; almost all the information is lost.

Rotating the vector resolves this issue. This "spinning" of the vector in high-dimensional space (y = R·x, where R is a random orthogonal rotation matrix) removes the spike and spreads its energy across the other coordinates, making the vector distribution smooth (isotropic). The individual values change, but the overall magnitude stays the same. After rotation, the same example vector might look more balanced, something like [1.42, -0.85, 2.31, 0.97].
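A quick way to convince yourself of both claims (spike spread out, magnitude preserved) is to rotate the spiky vector from the text with a random orthogonal matrix. The QR-decomposition construction below is one common way to get such a matrix; the actual rotation TurboQuant uses may differ, so treat this as a sketch:

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.array([0.125, 0.103, 0.220, 6.030])  # the spiky key vector from the text

# A random orthogonal rotation: QR-decompose a Gaussian matrix.
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))
y = Q @ x

print(np.linalg.norm(x), np.linalg.norm(y))  # equal: rotation preserves magnitude
print(y)  # the 6.030 spike's energy is now spread across all four coordinates
```

The exact rotated values depend on the random seed; what is invariant is the L2 norm, which is precisely why the vector can later be rotated back without loss.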

From spikes to spheres: randomized rotation eliminates "spiky" outlier dimensions, spreading their energy across dimensions to achieve an isotropic (uniform) distribution. Image by Author.

For high-dimensional vectors, this smoothed distribution brings us close to a Gaussian (in practice, the rotated vector is uniformly distributed on the unit sphere, as expected from the central limit theorem). As a result, each coordinate's share of the vector's energy follows a Beta distribution:

\frac{x_i^2}{\sum_{j=1}^{d} x_j^2} \sim \mathrm{Beta}\!\left(\frac{1}{2}, \frac{d-1}{2}\right)

where d is the head dimension.
Tip (skip if you're not into the math details): This is connected to a classic property in multivariate statistics: if X₁, X₂, …, X_d ~ N(0, 1) are independent and identically distributed (i.i.d.), then X_i² follows a chi-squared distribution, and there is a theorem which states that:

\text{If } U \sim \chi^2(\nu_1) \text{ and } V \sim \chi^2(\nu_2), \text{ then } \frac{U}{U+V} \sim \mathrm{Beta}\!\left(\frac{\nu_1}{2}, \frac{\nu_2}{2}\right)
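A short simulation can sanity-check this property: for isotropic Gaussian vectors, each coordinate's energy share should behave like Beta(1/2, (d−1)/2), whose mean is (1/2)/(d/2) = 1/d:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64
X = rng.normal(size=(100_000, d))        # rows: i.i.d. isotropic Gaussian vectors
ratio = X[:, 0]**2 / (X**2).sum(axis=1)  # x_1^2 / ||x||^2 for each row

# Beta(1/2, (d-1)/2) has mean a/(a+b) = (1/2)/(d/2) = 1/d
print(ratio.mean(), 1 / d)               # both ≈ 0.0156 for d = 64
```

This is exactly the distribution the precomputed codebooks below are fitted to.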

Rotation has brought us to the point where we know what the distribution of the coordinates looks like. Now comes the second major operation in Stage 1: Lloyd-Max quantization.

The whole idea behind Lloyd-Max [5, 6] is to place the quantization levels (centroids) in exactly the right spots so that the mean squared error is minimized. It is basically smart clustering for 1-D data. Let's simplify it with an example, taking the same rotated vector as above: [1.42, -0.85, 2.31, 0.97]. Suppose we are doing 1-bit quantization here.

• The number of centroids or levels here is 2^bits = 2^1 = 2.
• Let us take initial random levels [0.5, 1.5]; their midpoint or boundary is (0.5 + 1.5)/2 = 1, so the quantized values become [1.5, 0.5, 1.5, 0.5] (all values below 1 map to 0.5 and those above 1 map to 1.5). That is the idea of quantization, right? But what we notice is that there is a lot of error here, i.e., the MSE is very high.
• Thus we have to find optimal levels such that the MSE is minimal and the values are best represented around them.
• This is done by Lloyd-Max quantization: since the quantized values are now [1.5, 0.5, 1.5, 0.5], we get two clusters:
  -0.85, 0.97 → 0.5-level cluster,
  1.42, 2.31 → 1.5-level cluster.
  Taking their means, the 0.5-level cluster mean is ≈ 0.06 and the 1.5-level cluster mean is ≈ 1.86.
  So the levels are updated from [0.5, 1.5] to [0.06, 1.86], and the new boundary is (0.06 + 1.86)/2 ≈ 0.96; now values below 0.96 map to the 0.06 level and values above 0.96 map to 1.86. This keeps iterating until we reach a point where the MSE stops improving.
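The update step above fits in a few lines of NumPy. This reproduces exactly the first Lloyd-Max iteration from the walkthrough; in practice the assign-then-average step is repeated until the levels stop moving:

```python
import numpy as np

values = np.array([1.42, -0.85, 2.31, 0.97])  # the rotated vector from the text
levels = np.array([0.5, 1.5])                  # initial guess

# One Lloyd-Max update: assign each value to its nearest level,
# then move each level to the mean of its cluster.
idx = np.abs(values[:, None] - levels[None, :]).argmin(axis=1)
levels = np.array([values[idx == j].mean() for j in range(2)])
print(levels)  # [0.06  1.865] — the updated levels from the walkthrough
```

The assignment step is the "boundary" comparison from the text in disguise: picking the nearest level is the same as comparing against the midpoint between levels.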

Tip: There is a classic statistical reason this works: the value that minimizes the squared error for any group of points is simply their mean.

But wait, running this iterative process on every new vector during inference would be way too slow, right? Here is where the rotation pays off again. Because every coordinate now follows the same known distribution (the Beta we saw earlier), we don't need to compute a fresh Lloyd-Max codebook for every new piece of data. Instead, the optimal codebook depends only on two fixed parameters: the head dimension (d) and the number of bits (b). We compute it once, offline, and reuse it forever. A snippet of this codebook is shown below:

Precomputed Lloyd-Max codebooks for different bit-widths and head dimensions. The distribution of each coordinate is always Beta(1/2, (d−1)/2). Image by Author.

The quantized values are not stored as floats, but as indexes (idx) into the levels. Example: if there were 8 levels, the indexed (idx) form would be 0, 1, 2, 3, 4, 5, 6, 7, so each value needs only 3 bits of storage.

Note: In TurboQuant's Stage 1 (PolarQuant), the actual stored index (idx) uses b−1 bits per dimension (codebook size = 2^(b−1)), not b bits. The extra bit per dimension comes from the QJL residual correction in Stage 2 (the same was noted in the storage comparison diagram above; hopefully it is clear now). The table above shows the general Lloyd-Max setup; TurboQuant cleverly splits the bit budget to leave room for that correction.

These indexes are kept in the cache until the token is evicted. Dequantization happens on the fly whenever that token's K is needed for attention: idx is looked up in the codebook to retrieve the float value for each index, and this matrix is then multiplied by the transpose of the original rotation matrix to get K̂ back in the original space. This completes the first stage.
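Here is a rough sketch of that on-the-fly dequantization. The codebook values below are made up for illustration (the real Lloyd-Max levels depend on d and b as described above), but the two steps, lookup then inverse rotation, are the ones the text describes:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
codebook = np.array([-1.2, -0.4, 0.4, 1.2])   # hypothetical 2-bit Lloyd-Max levels
R, _ = np.linalg.qr(rng.normal(size=(d, d)))  # rotation used at quantization time

idx = np.array([3, 0, 3, 2], dtype=np.uint8)  # what the cache actually stores

# Step 1: look up the float level for each stored index.
k_rotated = codebook[idx]                     # back to (approximate) rotated floats
# Step 2: undo the rotation (R is orthogonal, so its inverse is its transpose).
k_hat = R.T @ k_rotated                       # K̂ in the original key space
print(k_hat.shape)  # (4,)
```

Note that only `idx` (tiny integers) lives in VRAM; the codebook is shared across all tokens, and the rotation is undone only at the moment attention needs the key.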

Subsequently, we can finally extract the residuals:

ε = original K matrix − K̂ matrix [dequantized]

    Stage 2 (Residual Correction):

Now that we have the residuals, the most intriguing part of TurboQuant follows.

Traditional quantization did not even look at the residuals. TurboQuant, however, does not discard them. Instead it asks a clever question: whatever information was lost during Stage 1 compression, can we extract its essential traits rather than storing it fully? Think of it as asking simple yes/no questions about the residual: is this dimension leaning positive or negative? The answers to these yes/no questions are what Stage 2 preserves.

To do this, a random projection matrix S of shape (d, d) is multiplied with the residual vector. The signs of the resulting values, either +1 or −1, are what actually get stored:

sign(ε(seq_length, d) · S(d, d))

These sign projections are known as the Quantized Johnson-Lindenstrauss (QJL) transform [7].

Note: The randomness of S is not arbitrary; the Johnson-Lindenstrauss lemma guarantees that random projections preserve inner-product structure with high probability.

But signs alone only capture direction, not magnitude. So alongside QJL, the L2 norm of the residual (‖ε‖₂) is also stored as a single scalar per vector. This scalar restores the magnitude during reconstruction.

During dequantization, the stored sign bits are multiplied back with the transposed S, then scaled by (√(π/2))/d and the stored norm ‖ε‖₂. The authors show that without this scaling factor, the sign-based estimate of the inner product is biased; this correction is what makes it unbiased. The exact formula is shown below:

\tilde{\mathbf{K}}_{\mathrm{QJL}} = \frac{\sqrt{\pi/2}}{d} \times \|\epsilon\|_2 \times \mathbf{S}^\top \times \mathrm{QJL}
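The whole of Stage 2 fits in a few lines. Everything stored per vector is one sign bit per dimension plus one scalar norm; S here is a plain Gaussian matrix, a simplifying assumption for the sketch:

```python
import numpy as np

rng = np.random.default_rng(7)
d = 64
eps = rng.normal(size=d)      # residual left after Stage 1 (toy example)
S = rng.normal(size=(d, d))   # random projection matrix

# What Stage 2 stores: d sign bits plus a single scalar norm.
signs = np.sign(eps @ S)      # the QJL sign bits (+1 / -1)
norm = np.linalg.norm(eps)    # ‖ε‖₂

# Reconstruction with the √(π/2)/d scaling that makes inner-product
# estimates unbiased (the formula above).
eps_hat = (np.sqrt(np.pi / 2) / d) * norm * (S @ signs)

# Any single estimate is noisy, but dot products against queries are
# preserved on average, which is what attention actually consumes.
q = rng.normal(size=d)
print(np.dot(q, eps), np.dot(q, eps_hat))
```

The per-vector estimate wobbles, but averaged over the randomness of S the estimator is unbiased, which is exactly the guarantee the scaling factor buys.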

Finally, the two parts from both stages are added together to get:

K̃ = K̂ + K̃_QJL

Some closing observations:

• The full TurboQuant pipeline summed up: Stage 1 handles the bulk compression; Stage 2 hunts down what was lost and adds it back.
• So what actually sits in the cache for each token is three things: idx, the QJL sign bits, and the scalar norm ‖ε‖₂. That is the full compressed representation.
• The authors formally prove that this two-stage design reaches the theoretical optimum, meaning no method operating within the same bit budget can do better at preserving attention dot products.

    Conclusion:

At the end of the day, TurboQuant works because it stops obsessing over perfect vector reconstruction and instead focuses on what the attention mechanism actually needs to see. Rather than fighting the VRAM "tax" with more complex calibration, it uses a cleaner mathematical pipeline to get the job done.

As we keep pushing for longer context windows, the KV cache bottleneck isn't going away. But as this framework shows, we don't necessarily need more hardware; we just need to be more intentional about how we handle the data we already have.

With the introduction of TurboQuant, is the chapter of KV cache memory management finally closed? Or is this just the foundation for something even more powerful?

Note: This breakdown represents my current understanding of the TurboQuant pipeline. Any errors in interpretation are entirely my own, and I encourage readers to refer to the original research for the full mathematical proofs.

    References:

    [1] Vaswani, A., et al. (2017). Attention Is All You Need. Advances in Neural Information Processing Systems (NeurIPS 2017).

    [2] Ainslie, J., et al. (2023). GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints. EMNLP 2023.

    [3] Kwon, W., et al. (2023). Efficient Memory Management for Large Language Model Serving with PagedAttention. SOSP 2023.

    [4] Zandieh, A., et al. (2025). TurboQuant: Online Vector Quantization with Near-optimal Distortion Rate. arXiv:2504.19874.

    [5] Lloyd, S. P. (1982). Least Squares Quantization in PCM. IEEE Transactions on Information Theory, 28(2), 129–137.

    [6] Max, J. (1960). Quantizing for Minimum Distortion. IRE Transactions on Information Theory, 6(1), 7–12.

    [7] Zandieh, A., et al. (2024). QJL: 1-Bit Quantized JL Transform for KV Cache Quantization with Zero Overhead. AAAI 2025.


