    Context Payload Optimization for ICL-Based Tabular Foundation Models

    By Editor Times Featured · April 21, 2026 · 17 Mins Read


    The past couple of years have seen a surge of investment in open-source and commercial tabular foundation models built around in-context learning (ICL). In 2025, for example, the software giant SAP released the SAP-RPT-1 suite of models, targeting ERP-centric tasks in areas such as financial planning, sales and procurement order processing, and supply chain management. Unlike traditional supervised machine learning – where models are trained and fine-tuned for specific tasks – ICL allows a single, generically pretrained model to adapt on the fly using relatively small amounts of task-specific data supplied in the context payload, which acts as a kind of ephemeral training set.

    While the shift to ICL eliminates the need for costly (re)training of task-specific tabular models, it introduces an important accuracy-latency trade-off at inference time, especially for centrally hosted models like SAP-RPT-1. On the one hand, the time required to send the context payload to the model server, and for the model to interpret and learn from that payload, directly contributes to overall response latency. Smaller payloads can reduce latency. On the other hand, the model may need to infer complex schemas and data distributions from heterogeneous contextual data that potentially contains outliers, missing values, and long-tail patterns. Accurate predictions typically depend on large, well-curated context payloads. In practice, this means finding ways to distill the context payload to reduce response time without degrading the model's predictive performance. Secondary trade-offs involve factors such as model service throughput, response stability, and the monetary cost of model usage. All these challenges make context payload optimization a central architectural concern in ICL-based workflows.

    In the following sections, we'll examine the inference-time trade-offs entailed by ICL-based tabular foundation models in more detail, outline practical strategies for optimizing context payloads, and demonstrate the use of KNN-based context prefiltering as a payload optimization technique with an end-to-end example in Python.

    Inference-Time Trade-Offs

    An effective way to analyze the inference-time trade-offs of ICL-based tabular foundation models is to apply the so-called “iron triangle” framework discussed in this previous article. There, we showed how customers and users of AI systems must navigate the inherent tensions between response quality, inference cost, and latency – an inference-time analog of the classic, design-time “triple constraint” in project management. Crucially, improving any one of these dimensions typically puts pressure on the others: higher-quality responses tend to be more computationally intensive, which increases both latency and cost; reducing latency often requires sacrificing quality or paying more for faster hardware; and lowering cost usually means accepting slower or lower-quality AI responses.

    We encounter this same triangular tension in the context of ICL-based tabular foundation models. The primary trade-off is the need to balance response quality (measured in terms of precision, recall, etc.) against latency. Consider a real-time fraud detection system deployed at ATMs: both precision and speed are crucial, yet they pull the system in different directions when it comes to constructing the context payload. Larger, richer payloads give the AI model more examples from which to infer the underlying schema, recognize rare and long-tail patterns, and thus deliver higher-quality predictions. At the same time, each additional row or feature increases the volume of data that must be sent to the model server and interpreted during inference, which can introduce a measurable overhead to the end-to-end response time. In real-time applications, even a small increase in payload size can noticeably degrade system responsiveness and, ultimately, hurt user experience.

    Moreover, a number of related, secondary trade-offs emerge in practice. A larger context payload not only slows down the inference but also consumes more tokens. Under token-based billing, this creates a tension between response latency and the monetary cost of model usage for customers, which becomes especially salient for centrally hosted models like SAP-RPT-1. A larger payload can also increase the compute time per request, creating a latency-throughput trade-off that may force the AI system's development team to make tough scaling decisions. There is also a potential quality-stability trade-off: increasing the volume and variety of the context data can improve predictive accuracy but may reduce determinism by introducing noise and making outputs more sensitive to small variations in the data. Finally, more sophisticated payload selection methods such as KNN-based retrieval can improve prediction quality but also increase payload construction time, adding to the overall latency.
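    To get a feel for the latency-cost dimension, a simple back-of-envelope estimate can relate payload size to token count and per-request price. The per-1k-token price and characters-per-token ratio below are purely illustrative assumptions, not SAP-RPT-1's actual billing terms:

```python
import json

def estimate_payload_cost(rows, price_per_1k_tokens=0.01, chars_per_token=4):
    """Rough payload cost estimate: serialized JSON length -> token count -> price.
    Both pricing parameters are illustrative assumptions, not real billing terms."""
    n_chars = len(json.dumps({"rows": rows}))
    n_tokens = n_chars / chars_per_token
    return n_tokens, n_tokens / 1000 * price_per_1k_tokens

# Example: 500 context rows of a small synthetic schema
rows = [{"id": i, "amount": 100 + i, "status": "open"} for i in range(500)]
tokens, cost = estimate_payload_cost(rows)
print(f"~{tokens:.0f} tokens, ~${cost:.4f} per request")
```

    Doubling the number of context rows roughly doubles both the transfer volume and the token bill, which is why payload distillation pays off twice.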

    Context Payload Optimization Strategies

    Generally, strategies to optimize the context payload span two orthogonal dimensions: the method and the moment of optimization. The method of optimization determines how exactly the payload is curated, i.e., the specific filtering, clustering, or embedding techniques used to compress the rows in the raw context. The moment of optimization concerns when and where the optimization is carried out, e.g., whether it is precomputed offline or derived on the fly at inference time, and whether this is done by the client or the model service. Choosing a particular moment for constructing the optimized payload can have significant consequences for inference latency and maintainability. The method and moment of payload optimization should be aligned with the scope, budget, latency threshold, and quality requirements of a given AI use case.

    Methods of Optimization

    We can broadly distinguish between task-agnostic and task-aware methods of payload optimization. Task-agnostic methods rely on techniques such as random sampling and recency-based sampling, which do not require knowledge of the specific prediction task or the semantic structure of the data. Random sampling is easy to implement, fast, and unbiased, making it a useful baseline or fallback strategy. However, it may inadvertently discard rows that capture rare yet important patterns crucial for model performance. Recency-based sampling assumes that timestamps are recorded in the data, and retrieves the most recent rows, which can be valuable for data distributions that are time-bound (e.g., seasonal) or susceptible to temporal drift. However, recency-based sampling ignores the broader structure of the dataset and may overweight short-term noise. Overall, task-agnostic methods offer simplicity and speed but provide limited control over the representativeness and relevance of the resulting payload.
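    As a minimal sketch, both task-agnostic strategies fit in a few lines of pandas (the DataFrame and its timestamp column below are made up for illustration):

```python
import pandas as pd
import numpy as np

def random_sample_context(df, max_rows, seed=0):
    # Unbiased baseline: uniform sample without replacement
    return df.sample(n=min(max_rows, len(df)), random_state=seed)

def recency_sample_context(df, max_rows, ts_col="timestamp"):
    # Keep the most recent rows; assumes a timestamp column is recorded
    return df.sort_values(ts_col, ascending=False).head(max_rows)

# Synthetic history: one row per hour
df = pd.DataFrame({
    "timestamp": pd.date_range("2026-01-01", periods=1000, freq="h"),
    "value": np.arange(1000),
})
print(len(random_sample_context(df, 200)))                      # 200
print(recency_sample_context(df, 200)["timestamp"].max())       # latest timestamp kept
```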

    By contrast, task-aware methods can incorporate information about the prediction task, the query rows, and the underlying data distribution to select the most relevant rows for the context payload. A common approach is K-nearest neighbors (KNN) sampling, which identifies rows in the historical data that are similar to the query rows. This can yield highly relevant contextual data and strong empirical performance, but it requires distance metrics (e.g., cosine) and auxiliary models to vectorize or embed the data, and can thus be computationally expensive at scale. Another class of methods uses clustering algorithms (e.g., K-means, hierarchical clustering, DBSCAN) to draw representative samples from clusters pertaining to the query rows. This can ensure sufficient coverage of diverse patterns in the data while avoiding redundancy, though it typically requires offline computation of clusters and periodic re-computation to ensure that the clusters remain up to date.
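    A clustering-based selection step might look like the following sketch, which draws a roughly equal number of rows from each K-means cluster to cover diverse patterns without redundancy (the cluster count and per-cluster budget are illustrative choices):

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

def cluster_sample_context(df, feature_cols, max_rows, n_clusters=8, seed=0):
    """Sample a similar number of rows from each K-means cluster (sketch)."""
    X = df[feature_cols].to_numpy(dtype=float)
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(X)
    per_cluster = max(1, max_rows // n_clusters)
    parts = [
        df[labels == c].sample(n=min(per_cluster, int((labels == c).sum())), random_state=seed)
        for c in range(n_clusters)
    ]
    return pd.concat(parts).head(max_rows)

# Synthetic numeric history
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(1000, 3)), columns=["f1", "f2", "f3"])
ctx = cluster_sample_context(df, ["f1", "f2", "f3"], max_rows=160)
print(len(ctx))  # at most 160 rows, spread across clusters
```

    In practice the clusters would be computed offline and only the per-cluster sampling would run at inference time.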

    More sophisticated task-aware methods are also possible. For example, the raw context and query rows can be embedded in a low-dimensional vector space – encoded in the request, and decoded in the response of the foundation model API; this amounts to a form of lossy compression that sacrifices some accuracy for the latency and cost benefits of a smaller payload. Retrieval-augmented generation (RAG) techniques can further enrich the payload with domain-specific grounding to boost response relevance.
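    As a generic illustration of such lossy compression – not a feature of any particular model API – one could one-hot encode the categorical rows and project them into a lower-dimensional space before transmission; whether a given provider accepts such embeddings is an assumption here:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

# Synthetic categorical context rows
gen = np.random.default_rng(0)
df = pd.DataFrame({
    "status": gen.choice(["open", "closed", "pending"], 500),
    "region": gen.choice(["EU", "US", "APAC"], 500),
})

X = pd.get_dummies(df).to_numpy(dtype=float)   # 500 x 6 one-hot matrix
Z = PCA(n_components=3).fit_transform(X)       # 500 x 3 projection, half the columns
print(X.shape, Z.shape)
```

    The projection halves the number of columns to transmit at the price of some reconstruction error on the receiving side.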

    In sum, task-aware methods generally produce higher-quality context payloads but come with greater engineering and computational overhead.

    Moments of Optimization

    One key moment-related decision is whether some of the payload optimization steps can be pre-computed offline (i.e., the “when”). For example, a curated, “golden” dataset can be pre-computed from historical data, optimized for informational density, and enriched with metadata (e.g., cluster IDs, hashtags, etc.). Relevant rows can then be selected from this leaner, golden dataset to quickly assemble and send the context payload at inference time. Golden datasets are well-suited for stable schemas and repetitive tasks (e.g., auto-completion of common sales orders in the ERP domain), but their curation and maintenance can create additional overhead for the development team. In contrast, on-the-fly optimization derives the payload at inference time based on the current query rows and available historical data. This approach is more adaptive but can increase the compute cost and latency of each inference call. On-the-fly optimization also does not necessarily reduce the development team's overhead – the savings from not maintaining a golden dataset may be offset by the prompt engineering effort required to optimize the context payload dynamically.
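    A minimal offline “golden dataset” step might look like the following sketch, which condenses historical rows into a small, cluster-annotated sample that could later be queried by cluster ID at inference time (cluster and sample sizes are illustrative):

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

def build_golden_dataset(df, feature_cols, n_clusters=16, per_cluster=10, seed=0):
    """Offline step: condense history into a small sample enriched with
    cluster-ID metadata (illustrative sketch)."""
    X = df[feature_cols].to_numpy(dtype=float)
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(X)
    golden = df.assign(cluster_id=labels)
    parts = [
        g.sample(n=min(per_cluster, len(g)), random_state=seed)
        for _, g in golden.groupby("cluster_id")
    ]
    return pd.concat(parts).reset_index(drop=True)

# Synthetic history condensed offline into a golden dataset
rng = np.random.default_rng(0)
history = pd.DataFrame(rng.normal(size=(2000, 4)), columns=list("abcd"))
golden = build_golden_dataset(history, list("abcd"))
print(len(golden), "rows across", golden["cluster_id"].nunique(), "clusters")
```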

    Another moment-related decision concerns whether the optimization happens on the client or the service side (i.e., the “where”). Client-side optimization gives the consuming application full control, allowing bespoke preprocessing, local caching, and easier debugging. But it also makes each client responsible for implementing and maintaining its own optimization logic – an effort that may be duplicated across applications and teams. Client-side processing also requires sufficient compute resources, which may be onerous for applications running on resource-constrained IoT or edge devices. Service-side optimization, by contrast, benefits from economies of scale: with sufficient usage across clients, the AI service provider can justify more sophisticated algorithms and higher-end hardware than any single client would deploy on its own. The provider can also leverage deep, model-specific expertise and visibility into how the model performs across multiple client environments – compounding over time – to develop a more refined and harmonized strategy. Service-side processing also simplifies governance, since software updates, privacy controls, audit logging, and compliance checks can be enforced uniformly. Downsides include reduced transparency for clients, higher load on the provider's infrastructure, and the ongoing cost to the AI service provider of developing and maintaining the optimization logic.

    Of course, ICL-based tabular AI workflows can also adopt a hybrid strategy that combines the strengths of different options. One useful pattern consists of coarse client-side filtering to reduce the payload to a manageable size (e.g., selecting the top-K nearest neighbors or applying other simple heuristics), paired with fine-grained service-side pruning that uses model-aware signals to refine the final context before inference. Hybrid approaches can strike a good balance between transparency, flexibility, governance, and performance.
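    The hybrid pattern can be sketched as a two-stage pipeline; here the “server-side” pruning stage is simulated locally by greedily dropping near-duplicate candidates (the distance threshold and budgets are illustrative):

```python
import numpy as np
from sklearn.metrics import pairwise_distances

def coarse_filter(X_context, x_query, k=100):
    # Client side: cheap top-k nearest rows to the query
    d = pairwise_distances(x_query.reshape(1, -1), X_context)[0]
    return np.argsort(d)[:k]

def fine_prune(X, candidate_idx, min_dist=0.5):
    # "Server side" (simulated): greedily drop near-duplicate candidates
    kept = []
    for i in candidate_idx:
        if all(np.linalg.norm(X[i] - X[j]) >= min_dist for j in kept):
            kept.append(i)
    return kept

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
q = rng.normal(size=5)
cand = coarse_filter(X, q, k=100)
final = fine_prune(X, cand)
print(len(cand), "->", len(final))  # pruning only ever shrinks the candidate set
```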

    Hands-On Demo: KNN-Based Context Prefiltering

    In the following example Python code, we'll use the Solar Flare dataset and the playground version of the SAP-RPT-1 model. See this article for an introduction to the model API.

    Setup

    First, install the required third-party packages using the requirements.txt file:

    pandas
    numpy
    requests
    scikit-learn
    ucimlrepo

    Next, create a file called demo.py and add the following import statements:

    import pandas as pd
    import numpy as np
    import time
    import json
    import requests
    import sys
    import os
    from datetime import datetime
    from sklearn.preprocessing import LabelEncoder
    from sklearn.metrics import pairwise_distances
    from ucimlrepo import fetch_ucirepo

    Add these configuration parameters:

    EXPERIMENT_ORDER = ["without_prefiltering", "with_prefiltering"]
    
    API_URL = "https://rpt.cloud.sap/api/predict"
    ACCESS_TOKEN_PATH = "access_token.json"  # File containing your API token
    
    with open(ACCESS_TOKEN_PATH, "r") as f:
        token = json.load(f)["access_token"]
    
    n_test_rows = 20  # Number of query rows to use
    mask_proportion = 0.3  # Proportion of column values to mask (simulating a prediction scenario)
    max_masked_columns = 4  # Playground model limitation
    random_seed = 3  # Ensure reproducibility
    rng = np.random.default_rng(random_seed)  # Create a random number generator
    
    ctx_max_rows = 600  # Max rows allowed in context window

    Add this code to enable output logging:

    class Tee(object):
        """A easy stdout tee: Prints to console and writes to a log file."""
        def __init__(self, logfile_path):
            self.terminal = sys.stdout
            self.log = open(logfile_path, "a", encoding="utf-8")
    
        def write(self, message):
            self.terminal.write(message)
            self.log.write(message)
    
        def flush(self):
            self.terminal.flush()
            self.log.flush()
    
    script_dir = os.path.dirname(os.path.abspath(__file__))
    
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    
    log_filename = f"log_knn_seed{random_seed}_{"".be part of([x[0] for x in EXPERIMENT_ORDER])}_{timestamp}.log"
    
    log_path = os.path.join(script_dir, log_filename)
    
    sys.stdout = Tee(log_path)
    
    print(f"Logging enabled. Output is being written to: {log_path}n")

    Next, we'll add helper functions for diagnostics, constructing the SAP-RPT-1 model payload, calling the model, and exporting the prediction results to a CSV file.

    An example function for computing feature statistics of the dataset:

    def compute_feature_stats(df, random_seed):
        """
        Computes cardinality and the HHI concentration metric for each feature.
        Saves results to: feature_stats_knn_seed_.csv
        """
        stats = []

        for col in df.columns:
            if col == "id":
                continue

            cardinality = df[col].nunique()

            # Normalized value counts
            vc = df[col].value_counts(normalize=True)

            # Herfindahl-Hirschman Index
            # HHI = 1.0 implies perfectly concentrated (only one value appears)
            # HHI = 0.01 implies a very uniform distribution
            # Higher HHI implies greater feature concentration
            hhi = float((vc ** 2).sum())

            # Dominant category proportion (share of the most common feature value)
            max_prop = float(vc.max())

            stats.append({
                "feature": col,
                "cardinality": cardinality,
                "hhi": hhi,
                "max_proportion": max_prop
            })

        stats_df = pd.DataFrame(stats)

        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        filename = f"feature_stats_knn_seed{random_seed}_{timestamp}.csv"

        stats_df.to_csv(filename, index=False)
        print(f"Saved feature stats to {filename}\n")

    Functions for constructing the SAP-RPT-1 model payload by simulating a prediction scenario, and for safely calling the model API:

    def mask_row_values(row, allowed_mask_columns, p, rng):
        row = row.copy()
        mask_candidates = [c for c in allowed_mask_columns if rng.random() < p]
        for c in mask_candidates:
            row[c] = "[PREDICT]"
        return row


    def build_payload(df, index_column="id"):
        return {"rows": df.to_dict(orient="records"), "index_column": index_column}


    def safe_call_rpt1(payload, token):
        headers = {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}"
        }

        try:
            response = requests.post(API_URL, json=payload, headers=headers)

            try:
                response_json = response.json()
            except ValueError:
                print("\nNon-JSON response from RPT-1:")
                print(response.text)
                return False, {"error": "Non-JSON response"}

            if "error" in response_json:
                print("\nRPT-1 API returned an error:")
                print(json.dumps(response_json, indent=2))
                return False, response_json

            if "aiApiResponsePayload" not in response_json:
                print("\nMissing aiApiResponsePayload:")
                print(json.dumps(response_json, indent=2))
                return False, response_json

            payload = response_json["aiApiResponsePayload"]

            if "predictions" not in payload:
                print("\nMissing predictions in aiApiResponsePayload:")
                print(json.dumps(response_json, indent=2))
                return False, response_json

            return True, response_json

        except requests.exceptions.RequestException as e:
            print("\nHTTP request failed:")
            print(str(e))
            return False, {"error": str(e)}

    Functions for prediction post-processing:

    def flatten_predictions(pred_list):
        flat = {}
        for entry in pred_list:
            row = {}
            for key, value in entry.items():
                if key == "id":
                    row["id"] = str(value)
                else:
                    if isinstance(value, list) and len(value) > 0:
                        row[key] = value[0].get("prediction")
                    else:
                        row[key] = None
            flat[row["id"]] = row
        return pd.DataFrame(flat.values()).set_index("id")


    def evaluate_accuracy(pred_df, true_df, masked_df):
        correct = 0
        total = 0
        for idx in masked_df.index:
            for col in masked_df.columns:
                # Do not count predictions for unmasked columns
                if masked_df.loc[idx, col] == "[PREDICT]":
                    total += 1
                    if pred_df.loc[idx, col] == true_df.loc[idx, col]:
                        correct += 1
        return correct, total, correct / total if total > 0 else np.nan


    def export_predictions_dynamic(true_rows, masked_rows, pred_df, filename):
        """
        Export a NaN-free CSV where:
          - masked columns get model predictions
          - unmasked columns keep their true values
          - pred_df is aligned to true_rows by id
        """

        # Ensure pred_df uses the same string ids as true_rows
        pred_df = pred_df.copy()
        pred_df.index = pred_df.index.astype(str)

        # Reindex pred_df to match true_rows
        pred_df = pred_df.reindex(true_rows.index)

        # Start with the true rows
        merged = true_rows.reset_index().copy()

        # Align the mask by id
        masked_by_id = masked_rows.copy()

        # Add prediction columns dynamically
        for col in pred_df.columns:
            pred_col = f"pred_{col}"

            # Start with the true values
            merged[pred_col] = merged[col]

            # Overwrite only where masked
            mask = (masked_by_id[col] == "[PREDICT]").to_numpy()
            merged.loc[mask, pred_col] = pred_df.loc[mask, col].to_numpy()

        # Save the CSV
        merged.to_csv(
            filename,
            index=False,
            encoding="utf-8",
            quoting=1
        )

        print(f"Saved results to {filename}\n")

    Next, load and prepare the Solar Flare dataset:

    solar_flare_data = fetch_ucirepo(id=89)
    
    df = pd.concat([solar_flare_data.data.features, solar_flare_data.data.targets], axis=1)
    
    df.columns = [
        "zurich_class",
        "spot_size",
        "spot_dist",
        "activity",
        "evolution",
        "prev24_fac",
        "hist_complex",
        "region_complex",
        "area",
        "area_largest_spot",
        "c_class",
        "m_class",
        "x_class",
    ]
    
    if "id" not in df.columns:
        df["id"] = df.index.astype(str)
    
    # Convert numeric codes to words to force categorical behavior
    replacement_map = {"0": "zero", "1": "one", "2": "two", "3": "three"}
    for col in df.columns:
        if col != "id":
            df[col] = df[col].astype(str)
            df[col] = df[col].replace(replacement_map)

    Save the feature statistics:

    compute_feature_stats(df, random_seed)

    Now add code to simulate the prediction scenario. First, split the Solar Flare dataset into context and query/test rows:

    df_test_rows = df.sample(n=n_test_rows, random_state=random_seed)

    df_context_full = df.drop(df_test_rows.index).reset_index(drop=True)

    df_test_rows = df_test_rows.reset_index(drop=True)

    Then randomly mask some columns in the query/test rows:

    all_columns = [c for c in df.columns if c != "id"]
    
    allowed_mask_columns = rng.choice(all_columns, size=max_masked_columns, replace=False)
    
    df_test_rows_masked = df_test_rows.apply(
        lambda row: mask_row_values(row, allowed_mask_columns, mask_proportion, rng),
        axis=1
    )
    
    df_test_rows_masked["id"] = df_test_rows["id"]

    Prefiltering Logic

    Add the following code to derive an optimized set of context rows (df_context_prefiltered) on the fly using KNN-based prefiltering:

    start_prefilter = time.time()

    n_test = df_test_rows.shape[0]
    budget_per_row = max(1, (ctx_max_rows - n_test) // n_test)

    print(f"Context max rows: {ctx_max_rows}")
    print(f"Number of test rows: {n_test}")
    print(f"KNN budget per test row: {budget_per_row}\n")

    # Encode using LabelEncoder (more sophisticated vectorizers and embedding models can be used in practice)
    encoders = {}
    df_context_enc = df_context_full.copy()
    df_test_enc = df_test_rows.copy()

    for col in df_context_full.columns:
        if col == "id":
            continue
        le = LabelEncoder()
        df_context_enc[col] = le.fit_transform(df_context_full[col].astype(str))
        df_test_enc[col] = le.transform(df_test_rows[col].astype(str))
        encoders[col] = le

    X_context = df_context_enc.drop(columns=["id"]).to_numpy()
    X_test = df_test_enc.drop(columns=["id"]).to_numpy()

    selected_indices = []
    for x_test in X_test:
        dists = pairwise_distances([x_test], X_context)[0]
        nearest = np.argsort(dists)[:budget_per_row]
        selected_indices.extend(nearest)

    df_context_prefiltered = (
        df_context_full.iloc[selected_indices]
        .drop_duplicates()
        .reset_index(drop=True)
    )

    end_prefilter = time.time()
    prefilter_time = end_prefilter - start_prefilter

    print(f"Prefiltering time: {prefilter_time:.3f} seconds")
    print(
        f"Prefiltered rows: {len(df_context_prefiltered)} "
        f"({100 * len(df_context_prefiltered) / len(df_context_full):.2f}% of full context)\n"
    )

    Running Experiments

    Add the following functions to call the model with and without context optimization (i.e., KNN-based prefiltering).

    def run_without_prefiltering():
        print("=== CASE 1: NO PREFILTERING ===")

        start = time.time()

        df_context_without_prefiltering = pd.concat(
            [df_context_full, df_test_rows_masked], ignore_index=True
        )

        payload = build_payload(df_context_without_prefiltering)

        success, response = safe_call_rpt1(payload, token)

        end = time.time()

        inference_time = end - start
        print(f"Case 1 inference time: {inference_time:.3f} seconds")

        acc = np.nan
        if success:
            pred_df = flatten_predictions(response["aiApiResponsePayload"]["predictions"])
            pred_df = pred_df.astype(str)

            true_rows = df_test_rows.set_index("id")
            masked_rows = df_test_rows_masked.set_index("id")

            correct, total, acc = evaluate_accuracy(pred_df, true_rows, masked_rows)
            print(f"Case 1 accuracy: {correct}/{total} = {acc:.3f}\n")

            # Use the helper for a NaN-free export
            timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
            filename = f"results_knn_seed{random_seed}_c_{timestamp}.csv"
            export_predictions_dynamic(true_rows, masked_rows, pred_df, filename)

        else:
            print("Skipping accuracy evaluation.\n")

        return inference_time, acc


    def run_with_prefiltering():
        print("=== CASE 2: KNN-BASED PREFILTERING ===")

        start = time.time()

        df_context_with_prefiltering = pd.concat(
            [df_context_prefiltered, df_test_rows_masked], ignore_index=True
        )

        payload = build_payload(df_context_with_prefiltering)

        success, response = safe_call_rpt1(payload, token)

        end = time.time()

        inference_time = end - start
        print(f"Case 2 inference time (RPT-1 call): {inference_time:.3f} seconds")

        acc = np.nan
        if success:
            pred_df = flatten_predictions(response["aiApiResponsePayload"]["predictions"])
            pred_df = pred_df.astype(str)

            true_rows = df_test_rows.set_index("id")
            masked_rows = df_test_rows_masked.set_index("id")

            correct, total, acc = evaluate_accuracy(pred_df, true_rows, masked_rows)
            print(f"Case 2 accuracy: {correct}/{total} = {acc:.3f}\n")

            # Use the helper for a NaN-free export
            timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
            filename = f"results_knn_seed{random_seed}_t_{timestamp}.csv"
            export_predictions_dynamic(true_rows, masked_rows, pred_df, filename)

        else:
            print("Skipping accuracy evaluation.\n")

        return inference_time, acc

    Finally, run the experiments and print/log the results:

    def run_experiments(order):
        results = {}
        for exp in order:
            if exp == "without_prefiltering":
                results["without_prefiltering"] = run_without_prefiltering()
            elif exp == "with_prefiltering":
                results["with_prefiltering"] = run_with_prefiltering()
            else:
                print(f"Unknown experiment type: {exp}")
        return results

    print("=== RUNNING EXPERIMENTS ===\n")
    results = run_experiments(EXPERIMENT_ORDER)

    print("\n=== FINAL RESULTS ===")
    print(results)

    Note that the first call to the model API may take noticeably longer because the service needs to warm up. This can involve loading the model into memory, initializing runtime kernels, and establishing network connections. Subsequent calls reuse the initialized state and thus tend to run faster. Changing the order of experiments will shift which one absorbs the initial warm-up cost. To see this in action, try changing the order of experiments in the EXPERIMENT_ORDER configuration parameter (e.g., running the experiment with prefiltering before the one without prefiltering).
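    The warm-up effect can be illustrated locally with a stand-in for the remote service, where the first call pays a one-off initialization cost (the sleep durations are arbitrary placeholders for model loading and steady-state inference):

```python
import time

def time_calls(fn, n=3):
    """Time n successive calls; the first often absorbs one-off warm-up cost."""
    timings = []
    for _ in range(n):
        t0 = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - t0)
    return timings

# Simulated service with one-time warm-up (stand-in for a real model endpoint)
state = {"warm": False}
def fake_inference():
    if not state["warm"]:
        time.sleep(0.2)   # one-off model load / connection setup
        state["warm"] = True
    time.sleep(0.02)      # steady-state inference

t = time_calls(fake_inference)
print([f"{x:.3f}s" for x in t])  # first call noticeably slower than the rest
```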

    The Wrap

    As ICL-based tabular foundation models become more widely adopted, the locus of optimization will shift from traditional supervised model training to context payload construction. The quality, cost, and latency characteristics of an ICL-based system depend less on how the foundation model was trained and far more on how effectively the context payload is leveraged at inference time. This shift will likely push organizations toward repeatable, reusable patterns for managing context payloads. Just as the industry eventually standardized around feature stores, data pipelines, and prompt-engineering conventions, we can expect a similar consolidation of best practices for context payload design. Over time, these patterns may become part of the shared vocabulary for development teams working with ICL-based tabular foundation models, elevating context optimization to a first-class architectural concern.


