    Overfitting vs. Underfitting: Making Sense of the Bias-Variance Trade-Off

By Editor Times Featured · November 22, 2025 · 5 Mins Read


Building machine learning models is a bit like cooking: too little seasoning and the dish is bland, too much and it's overpowering. The goal? That perfect balance: just enough complexity to capture the flavour of the data, but not so much that it becomes overwhelming.

In this post, we'll dive into two of the most common pitfalls in model development: overfitting and underfitting. Whether you're training your first model or tuning your hundredth, keeping these concepts in check is key to building models that actually work in the real world.

Overfitting

What is overfitting?

Overfitting is a common problem in data science models. It happens when the model learns the training data too well, meaning it picks up patterns and noise specific to the training data. As a result, it is not able to predict well on unseen data.
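As a minimal illustration (not from the original post; the sine function and noise level are arbitrary choices), fitting a high-degree polynomial to a handful of noisy points drives the training error to nearly zero while the error on fresh data from the same source stays large:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # Noisy samples from a simple underlying function
    x = np.linspace(0, 1, n)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)
    return x, y

x_train, y_train = make_data(10)
x_test, y_test = make_data(50)

def train_test_mse(deg):
    # Fit a polynomial of the given degree on the training set only
    coefs = np.polyfit(x_train, y_train, deg)
    pred_train = np.polyval(coefs, x_train)
    pred_test = np.polyval(coefs, x_test)
    return (np.mean((y_train - pred_train) ** 2),
            np.mean((y_test - pred_test) ** 2))

train_simple, test_simple = train_test_mse(deg=1)    # too rigid
train_complex, test_complex = train_test_mse(deg=9)  # interpolates all 10 points

# The flexible model "wins" on training data but generalises far worse
print(train_complex, test_complex)
```

The degree-9 fit passes through every training point (including the noise), which is exactly the memorisation behaviour described above.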

Why is overfitting a problem?

1. Poor performance: The model is not able to generalise well. The patterns it has detected during training do not apply to the rest of the data. You get the impression that the model is working great based on training errors, when in fact the test or real-world errors are far worse.
2. Predictions with high variance: Model performance is unstable and the predictions are not reliable. Small changes to the data cause high variance in the predictions being made.
3. Training a complex and expensive model: Training and running a complex model in production is an expensive, resource-intensive job. If a simpler model performs just as well, it is more efficient to use it instead.
4. Risk of losing business trust: Data scientists who are overly optimistic when experimenting with new models may overpromise results to business stakeholders. If overfitting is discovered only after the model has been presented, it can significantly harm credibility and make it difficult to regain trust in the model's reliability.

How to identify overfitting

1. Cross-validation: During cross-validation, the input data is split into multiple folds (sets of training and testing data). Different folds of the input data should give similar testing error results. A large gap in performance across folds may indicate model instability or data leakage, both of which can be symptoms of overfitting.
2. Keep track of the training, testing and generalisation errors. The error once the model is deployed (generalisation error) should not deviate greatly from the errors you already know. If you want to go the extra mile, consider implementing a monitoring alert that fires when the deployed model's performance deviates significantly from the validation-set error.
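The cross-validation check in point 1 can be sketched as a plain k-fold loop (illustrative only; the fold count, the line-fit model, and the synthetic data are my choices, not the post's):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 100)
y = 3 * x + rng.normal(0, 0.2, 100)

def kfold_errors(x, y, k=5, deg=1):
    """Fit on k-1 folds, measure test MSE on the held-out fold."""
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    errors = []
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        coefs = np.polyfit(x[train], y[train], deg)
        pred = np.polyval(coefs, x[fold])
        errors.append(np.mean((y[fold] - pred) ** 2))
    return errors

errors = kfold_errors(x, y)
# Similar per-fold errors suggest a stable model;
# a large spread across folds is a warning sign
print([round(e, 3) for e in errors])
```

In practice you would compare the spread of these per-fold errors (e.g. max/min ratio or standard deviation) against a tolerance that makes sense for your problem.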

How to mitigate/prevent overfitting

1. Remove features: Too many features may "guide" the model too much, resulting in a model that is not able to generalise well.
2. Increase training data: With more examples to learn from, the model generalises better and is less sensitive to outliers and noise.
3. Increase regularisation: Regularisation techniques penalise inflated coefficients, protecting the model from fitting the data too closely.
4. Adjust hyper-parameters: Hyper-parameters tuned too aggressively to the training data can result in a model that is not able to generalise well.
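Point 3 can be seen directly in ridge regression, whose closed form adds a penalty term α to ordinary least squares; larger α shrinks the coefficients toward zero (a hypothetical sketch with made-up data, not from the post):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 5
X = rng.normal(size=(n, p))
true_w = np.array([2.0, -1.0, 0.5, 0.0, 3.0])
y = X @ true_w + rng.normal(0, 0.5, n)

def ridge(X, y, alpha):
    """Closed-form ridge regression: w = (X'X + alpha*I)^-1 X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T @ y)

w_weak = ridge(X, y, alpha=0.01)    # barely regularised
w_strong = ridge(X, y, alpha=100.0) # heavily regularised

# Stronger regularisation pulls the coefficient vector toward zero
print(np.linalg.norm(w_weak), np.linalg.norm(w_strong))
```

The shrunken coefficients are what keeps a heavily regularised model from chasing noise in the training data.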

Underfitting

What is underfitting?

Underfitting happens when the model or its features are too simplistic to capture the underlying data well. It also results in poor predictions on unseen data.

Why is underfitting problematic?

1. Poor performance: The model performs poorly on training data, and therefore also on test and real-world data.
2. Predictions with high bias: The model is incapable of making reliable predictions.

How to identify underfitting

1. Training and test errors will both be poor.
2. Generalisation error will be high, and likely close to the training error.

How to fix underfitting

1. Enhance features: Introduce new features, or add more sophisticated ones (e.g. interaction effects, polynomial terms, seasonality terms) that can capture more complex patterns in the underlying data.
2. Increase training data: With more examples to learn from, the model generalises better and is less sensitive to outliers and noise.
3. Reduce regularisation strength: When the applied regularisation is too strong, the coefficients are shrunk too uniformly and the model does not prioritise any feature, preventing it from learning important patterns.
4. Adjust hyper-parameters: An intrinsically complex model with poor hyper-parameters may not be able to capture all the complexity. Paying more attention to tuning them can pay off (e.g. add more trees to a random forest).
5. If none of the above fixes the underlying issue, it may be worth discarding the model and replacing it with one that is able to capture more complex patterns in the data.
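Point 1 in action: a straight line underfits quadratic data, while adding an x² feature lets the very same least-squares solver capture the curvature (an illustrative sketch; the data and features are my own):

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(-2, 2, 80)
y = x ** 2 + rng.normal(0, 0.2, 80)  # clearly non-linear ground truth

def fit_mse(features):
    """Least-squares fit on the given design matrix; returns training MSE."""
    coefs, *_ = np.linalg.lstsq(features, y, rcond=None)
    pred = features @ coefs
    return np.mean((y - pred) ** 2)

ones = np.ones_like(x)
mse_linear = fit_mse(np.column_stack([ones, x]))        # underfits
mse_poly = fit_mse(np.column_stack([ones, x, x ** 2]))  # captures the curve

print(mse_linear, mse_poly)
```

The model class did not change at all, only the features did, which is why feature engineering is usually the first remedy to try.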

Summary

Machine learning isn't magic; it's a balancing act between too much and too little. Overfit your model, and it becomes a perfectionist that can't handle new situations. Underfit it, and it misses the point entirely.

The best models live in the sweet spot: generalising well, learning enough, but not too much. By understanding and managing overfitting and underfitting, you're not just improving metrics, you're building trust, reducing risk, and creating solutions that last beyond the training set.



