
    Why LLMs Aren’t a One-Size-Fits-All Solution for Enterprises

By Editor Times Featured · November 18, 2025 · 11 min read


Enterprises are racing to adopt LLMs, but often for tasks they aren't well suited to. In fact, according to recent MIT research, 95% of GenAI pilots fail: they deliver zero return.

An area that has been overlooked in the GenAI storm is structured data, not only from an adoption standpoint but also on the technological front. In reality, there is a goldmine of potential value to be extracted from structured data, particularly in the form of predictions.

In this piece, I'll go over what LLMs can and can't do, what value you can get from AI operating over your structured data, especially for predictive modeling, and the industry approaches in use today, including one that I developed with my team.

Why LLMs aren't optimized for enterprise data and workflows

While large language models have completely transformed text and communication, they fall short at making predictions from the structured, relational data that moves the needle and drives real business outcomes: customer lifecycle management, sales optimization, ads and marketing, recommendations, fraud detection, and supply chain optimization.

Enterprise data, the data enterprises are grounded in, is inherently structured. It typically resides in tables, databases, and workflows, where meaning is derived from relationships across entities such as customers, transactions, and supply chains. In other words, this is all relational data.

LLMs took the world by storm and played a key role in advancing AI. That said, they were designed to work with unstructured data and aren't naturally suited to reasoning over rows, columns, or joins. Consequently, they struggle to capture the depth and complexity within relational data. Another challenge is that relational data changes in real time, while LLMs are typically trained on static snapshots of text. They also treat numbers and quantities as tokens in a sequence, rather than "understanding" them mathematically. In practice, this means an LLM is optimized to predict the next most likely token, which it does extremely well, but not to verify whether a calculation is correct. So whether the model outputs 3 or 200 when the true answer is 2, the penalty the model receives is the same.
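The point about the penalty can be made concrete. A minimal sketch, using a hypothetical next-token distribution and standard cross-entropy (not any particular model's internals): the loss depends only on the probability assigned to the correct token, not on how numerically far off the wrong tokens are.

```python
import math

def token_nll(predicted_dist, true_token):
    """Negative log-likelihood: the penalty an LM receives for a target token."""
    return -math.log(predicted_dist[true_token])

# Hypothetical next-token distribution; the true answer is the token "2".
# The remaining probability mass sits on the wrong tokens "3" and "200".
dist = {"2": 0.1, "3": 0.6, "200": 0.3}

# The loss is identical whether the model's favored wrong answer was off
# by 1 ("3") or by 198 ("200"); only the mass on "2" matters.
loss = token_nll(dist, "2")
print(round(loss, 3))  # 2.303
```

Swapping the probabilities of "3" and "200" leaves the loss unchanged, which is exactly why the objective has no notion of numeric closeness.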

LLMs are capable of multi-step reasoning through chain-of-thought inference, but they can face reliability challenges in certain cases. Because they can hallucinate, and do so confidently, might I add, even a small chance of error can compound across the steps of a multi-step workflow. This lowers the overall probability of a correct result, and in business processes such as approving a loan or predicting supply shortages, a single small mistake can be catastrophic.
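The compounding is simple arithmetic: if each step succeeds independently with probability p, an n-step chain succeeds with probability p to the power n. A quick illustration with made-up numbers:

```python
# If each step of a workflow is independently correct with probability p,
# an n-step chain is correct with probability p**n.
p_step = 0.95
steps = 20
p_workflow = p_step ** steps
print(round(p_workflow, 3))  # 0.358
```

A step that is "95% reliable" yields a roughly 36% reliable 20-step workflow, which is why per-step error rates that sound acceptable in isolation are not acceptable in chained enterprise processes.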

Because of all this, enterprises today rely on traditional machine learning pipelines that take months to build and maintain, limiting the measurable impact of AI on revenue. When you want to apply AI to this kind of tabular data, you are essentially teleported back thirty years and need humans to painstakingly engineer features and build bespoke models from scratch. For every single task individually! This approach is slow, expensive, doesn't scale, and maintaining such models is a nightmare.

How we built our Relational Foundation Model

My career has revolved around AI and machine learning over graph-structured data. Early on, I recognized that data points don't exist in isolation; rather, they are part of a graph connected to other pieces of data. I applied this view to my work on online social networks and information virality, working with data from Facebook, Twitter, LinkedIn, Reddit, and others.

This insight led me to help pioneer Graph Neural Networks at Stanford, a framework that allows machines to learn from the relationships between entities rather than just the entities themselves. I applied this while serving as Chief Scientist at Pinterest, where an algorithm known as PinSage transformed how users experience Pinterest. That work later evolved into Graph Transformers, which bring Transformer architecture capabilities to graph-structured data. This allows models to capture both local connections and long-range dependencies within complex networks.

As my research advanced, I watched computer vision transformed by convolutional networks and language reshaped by LLMs. But I realized the predictions businesses depend on from structured relational data were still waiting for their breakthrough, limited by machine learning techniques that hadn't changed in over twenty years. Decades!

The culmination of this research and foresight led my team and me to create the first Relational Foundation Model (RFM) for enterprise data. Its purpose is to enable machines to reason directly over structured data, to understand how entities such as customers, transactions, and products connect. By knowing the relationships between these entities, we can then enable users to make accurate predictions from those specific relationships and patterns.

Key capabilities of Relational Foundation Models. Image by author

Unlike LLMs, RFMs are designed for structured relational data. RFMs are pretrained on a large number of (synthetic) datasets as well as a large number of tasks over structured business data. Like LLMs, RFMs can simply be prompted to produce instant responses to a wide variety of predictive tasks over a given database, all without task-specific or database-specific training.

We wanted a system that could learn directly from how real databases are structured, without all the usual manual setup. To make that possible, we treated each database like a graph: tables became node types, rows became nodes, and foreign keys linked everything together. This way, the model could actually "see" how things like customers, transactions, and products connect and change over time.
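That mapping can be sketched in a few lines. The table and column names below are illustrative, not our actual schema: each table contributes a node type, each row a node, and each foreign key column an edge linking two node types.

```python
# Hypothetical rows from three tables linked by foreign keys.
customers = [{"customer_id": 1}, {"customer_id": 2}]
transactions = [
    {"txn_id": 10, "customer_id": 1, "product_id": 100},
    {"txn_id": 11, "customer_id": 2, "product_id": 100},
]
products = [{"product_id": 100}]

# Rows become typed nodes: (node_type, primary_key).
nodes = (
    [("customer", r["customer_id"]) for r in customers]
    + [("transaction", r["txn_id"]) for r in transactions]
    + [("product", r["product_id"]) for r in products]
)

# Each foreign key column yields one edge type between two node types.
edges = []
for r in transactions:
    edges.append((("transaction", r["txn_id"]), ("customer", r["customer_id"])))
    edges.append((("transaction", r["txn_id"]), ("product", r["product_id"])))

print(len(nodes), len(edges))  # 5 4
```

On this tiny graph, both transactions connect to the same product node, which is precisely the kind of cross-table structure a row-by-row view hides.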

At the heart of it, the model combines a column encoder with a relational graph transformer. Every cell in a table is turned into a small numerical embedding based on what type of data it holds, whether it's a number, a category, or a timestamp. The Transformer then looks across the graph to pull context from related tables, which helps the model adapt to new database schemas and data types.
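A toy stand-in for the per-type cell encoding idea: the real encoder uses learned embeddings, but the hand-written features below illustrate how cells of different types can all land in a fixed-size numeric vector.

```python
import math
import datetime

def encode_cell(value):
    """Toy per-type cell featurizer (illustrative, not the real encoder)."""
    # Numbers: log scaling keeps magnitudes comparable across columns.
    if isinstance(value, (int, float)) and not isinstance(value, bool):
        return [math.log1p(abs(value)), 1.0 if value >= 0 else 0.0]
    # Timestamps: cyclic day-of-year features preserve seasonality.
    if isinstance(value, datetime.date):
        angle = 2 * math.pi * value.timetuple().tm_yday / 365.0
        return [math.sin(angle), math.cos(angle)]
    # Categories: a hashed scalar stands in for a learned embedding lookup.
    return [float(hash(str(value)) % 97) / 97.0, 0.5]

print(len(encode_cell(42.0)))  # 2
```

Whatever the cell holds, the output has the same dimensionality, so downstream attention layers can treat all cells uniformly.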

For users to specify which predictions they'd like to make, we built a simple interface called Predictive Query Language (PQL). It lets users describe what they want to predict, and the model takes care of the rest. The model pulls the right data, learns from past examples, and reasons through an answer. Because it uses in-context learning, it doesn't need to be retrained for every task, either! We do have an option for fine-tuning, but that is for very specialized tasks.

    Overview of architecture. Image by author

But this is only one approach. Across the industry, several other strategies are being explored:

Industry approaches

1. Internal foundation models

Companies like Netflix are building their own large-scale foundation models for recommendations. As described in their blog, the goal is to move away from dozens of specialized models toward a single centralized model that learns member preferences across the platform. The analogy to LLMs is clear: just as a sentence is represented as a sequence of words, a user is represented as the sequence of movies they interacted with. This enables innovations that support long-term personalization by processing massive interaction histories.
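The sequence framing is easy to sketch: a user's interaction history yields next-item prediction examples, exactly as a sentence yields next-word examples. The movie IDs below are made up, and this is the generic autoregressive setup, not Netflix's actual pipeline.

```python
# One hypothetical user's watch history, ordered in time.
history = ["movie_a", "movie_b", "movie_c", "movie_d"]

# Each prefix of the history becomes the context for predicting the next
# interaction, mirroring next-token prediction in language modeling.
examples = [(history[:i], history[i]) for i in range(1, len(history))]
for context, target in examples:
    print(context, "->", target)
# 3 training pairs from a 4-item history
```

A model trained on millions of such sequences learns member preferences from the ordering itself, with no hand-built features per user.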

The benefits of owning such a model include control, differentiation, and the ability to tailor architectures to domain-specific needs (e.g., sparse attention for latency, metadata-driven embeddings for cold start). On the flip side, these models are extremely costly to train and maintain, requiring massive amounts of data, compute, and engineering resources. Moreover, they are trained on a single dataset (e.g., Netflix user behavior) for a single task (e.g., recommendations).

2. Automating model development with AutoML or Data Science agents

Platforms like DataRobot and SageMaker Autopilot have pushed forward the idea of automating parts of the machine learning pipeline. They help teams move faster by handling pieces like feature engineering, model selection, and training. This makes it easier to experiment, reduce repetitive work, and expand access to machine learning beyond just highly specialized teams. In a similar vein, Data Scientist agents are emerging, where the idea is that the agent performs all of the classical steps and iterates over them: data cleaning, feature engineering, model building, model evaluation, and finally model deployment. While a truly innovative feat, the jury is still out on whether this approach will be effective in the long run.

3. Using graph databases for connected data

Companies like Neo4j and TigerGraph have advanced the use of graph databases to better capture how data points are connected. This has been especially impactful in areas like fraud detection, cybersecurity, and supply chain management, places where the relationships between entities often matter more than the entities themselves. By modeling data as networks rather than isolated rows in a table, graph systems have opened up new ways of reasoning about complex, real-world problems.
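A tiny example of why the relationships matter: two accounts that each look normal in isolation become suspicious once a shared device links them. The IDs are invented, and a real graph database would express this as a graph query rather than Python, but the logic is the same.

```python
from collections import defaultdict

# Hypothetical login events: (account, device) pairs.
logins = [
    ("acct_1", "device_x"),
    ("acct_2", "device_x"),
    ("acct_3", "device_y"),
]

# Group accounts by shared device: an edge in the account-device graph.
accounts_by_device = defaultdict(set)
for acct, device in logins:
    accounts_by_device[device].add(acct)

# Devices touched by more than one account flag a potential fraud ring.
rings = {d: sorted(a) for d, a in accounts_by_device.items() if len(a) > 1}
print(rings)  # {'device_x': ['acct_1', 'acct_2']}
```

Row-by-row, acct_1 and acct_2 are unremarkable; only the connection between them carries the signal.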

Lessons learned

When we set out to build our technology, our goal was simple: develop neural network architectures that could learn directly from raw data. This approach mirrors the current AI (literal) revolution, which is fueled by neural networks that learn directly from pixels in an image or words in a document.

Practically speaking, our vision for the product also entailed a person simply connecting to the data and making a prediction. That led us to the ambitious goal of creating a pretrained foundation model designed for enterprise data from the ground up (as explained above), removing the need to manually create features, training datasets, and custom task-specific models. An ambitious task indeed.

When building our Relational Foundation Model, we developed new transformer architectures that attend over a set of interconnected tables, i.e., a database schema. This required extending the classical LLM attention mechanism, which attends over a linear sequence of tokens, to an attention mechanism that attends over a graph of data. Critically, the attention mechanism had to generalize across different database structures as well as across different kinds of tables, wide or narrow, with varied column types and meanings.
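One standard way to restrict attention to a graph, shown here as a minimal NumPy illustration rather than our production architecture, is to mask the scores of unconnected node pairs to negative infinity before the softmax, so attention flows only along graph edges.

```python
import numpy as np

def graph_attention(scores, adj):
    """Softmax attention restricted to graph neighbors: non-edges get
    -inf scores and therefore exactly zero attention weight."""
    masked = np.where(adj, scores, -np.inf)
    e = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# 3 nodes; node 0 connects to itself and node 1, but not node 2.
adj = np.array([[1, 1, 0],
                [1, 1, 1],
                [0, 1, 1]], dtype=bool)
scores = np.ones((3, 3))  # uniform raw scores, to isolate the mask's effect

w = graph_attention(scores, adj)
print(w[0])  # node 0 splits attention between itself and node 1 only
```

The same masking idea works for any adjacency pattern, which is what lets a single attention mechanism follow whatever foreign-key structure a given schema defines.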

Another challenge was inventing a new training scheme, because predicting the next token isn't the right objective. Instead, we generated many synthetic databases and predictive tasks mimicking challenges like fraud detection, time series forecasting, supply chain optimization, risk profiling, credit scoring, personalized recommendations, customer churn prediction, and sales lead scoring.

In the end, this resulted in a pretrained Relational Foundation Model that can be prompted to solve business tasks, whether it's financial versus insurance fraud or medical versus credit risk scoring.

    Conclusion 

Machine learning is here to stay, and as the field evolves, it is our responsibility as data scientists to spark more thoughtful and candid discourse about the true capabilities of our technology: what it's good at, and where it falls short.

We all know how transformative LLMs were, and continue to be, but too often they're implemented hastily before considering internal goals or needs. As technologists, we should encourage executives to take a closer look at their proprietary data, which anchors their company's uniqueness, and take the time to thoughtfully determine which technologies will best capitalize on that data to advance their business objectives.

In this piece, we went over LLM capabilities, the value that lies within the (often) overlooked side of structured data, and industry solutions for applying AI over structured data, including my own solution and the lessons learned from building it.

Thanks for reading.


References:

[1] R. Ying, R. He, K. Chen, P. Eksombatchai, W. L. Hamilton and J. Leskovec, Graph Convolutional Neural Networks for Web-Scale Recommender Systems (2018), KDD 2018.

Author bio:

Dr. Jure Leskovec is the Chief Scientist and Co-Founder of Kumo, a leading predictive AI company. He is a Computer Science professor at Stanford, where he has been teaching for more than 15 years. Jure co-created Graph Neural Networks and has devoted his career to advancing how AI learns from connected data. He previously served as Chief Scientist at Pinterest and performed award-winning research at Yahoo and Microsoft.

Image by Jeff Cable


