    The Machine Learning “Advent Calendar” Day 12: Logistic Regression in Excel

    By Editor Times Featured · December 13, 2025 · 8 Mins Read


    Today's model is Logistic Regression.

    If you already know this model, here is a question for you:

    Is Logistic Regression a regressor or a classifier?

    Well, this question is exactly like asking: is a tomato a fruit or a vegetable?

    From a botanist's point of view, a tomato is a fruit, because botanists look at structure: seeds, flowers, plant biology.

    From a cook's point of view, a tomato is a vegetable, because cooks look at taste, at how it is used in a recipe, at whether it goes in a salad or a dessert.

    The same object, two valid answers, because the point of view is different.

    Logistic Regression is exactly like that.

    • From the statistical / GLM perspective, it is a regression. The concept of "classification" does not even exist in this framework; there are gamma regression, logistic regression, Poisson regression…
    • From the machine learning perspective, it is used for classification. So it is a classifier.

    We will come back to this later.

    For now, one thing is certain:

    Logistic Regression is very well suited when the target variable is binary, and usually y is coded as 0 or 1.

    However…

    What is a classifier for a weight-based model?

    So, y can be 0 or 1.

    0 and 1 are numbers, right?

    So we can simply treat y as continuous!

    Yes, y = a x + b, with y = 0 or 1.

    Why not?

    Now, you may ask: why this question, and why now? Why was it not asked before?

    Well, for distance-based and tree-based models, a categorical y is truly categorical.

    When y is categorical, like red, blue, green, or simply 0 and 1:

    • In K-NN, you classify by counting the neighbors of each class.
    • In centroid models, you compare with the centroid of each class.
    • In a decision tree, you compute class proportions at each node.

    In all these models:

    Class labels are not numbers.
    They are categories.
    The algorithms never treat them as values.

    So classification is natural and immediate.

    But for weight-based models, things work differently.

    In a weight-based model, we always compute something like:

    y = a x + b

    or, later, a more complex function with coefficients.

    This means:

    The model works with numbers everywhere.

    So here is the key idea:

    If the model does regression, then this same model can be used for binary classification.

    Yes, we can use linear regression for binary classification!

    Since binary labels are 0 and 1, they are already numeric.

    And in this special case, we can apply Ordinary Least Squares (OLS) directly on y = 0 and y = 1.

    The model will fit a line, and we can use the same closed-form formula, as we will see below.

    Logistic Regression in Excel – all images by author

    We can run the same gradient descent, and it will work perfectly:

    Then, to obtain the final class prediction, we simply choose a threshold.
    It is usually 0.5 (or 50 percent), but depending on how strict you want to be, you can pick another value.

    • If the predicted y ≥ 0.5, predict class 1
    • Otherwise, class 0

    This is a classifier.
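    As a minimal sketch of this idea (in Python rather than Excel, with hypothetical data and illustrative names `ols_fit` and `classify`), we can fit OLS directly on a 0/1 target and threshold the output at 0.5:

```python
def ols_fit(xs, ys):
    """Closed-form OLS: slope a = cov(x, y) / var(x), intercept b = mean(y) - a * mean(x)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical data: class 0 on the left of x = 9, class 1 on the right
xs = [1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12, 13, 14, 15, 16, 17]
ys = [0] * 8 + [1] * 8

a, b = ols_fit(xs, ys)

def classify(x):
    # Threshold the regression output at 0.5
    return 1 if a * x + b >= 0.5 else 0

# The decision frontier is the x where a * x + b = 0.5
frontier = (0.5 - b) / a
print(round(frontier, 6))  # 9.0 with this symmetric data
```

    The line itself is an ordinary regression fit; only the final thresholding step turns it into a classifier.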

    And since the model produces a numeric output, we can even identify the point where y = 0.5.

    This value of x defines the decision frontier.

    In the previous example, this happens at x = 9.
    At this threshold, we already saw one misclassification.

    But a problem appears as soon as we introduce a point with a large value of x.

    For example, suppose we add a point with x = 50 and y = 1.

    Because linear regression tries to fit a straight line through all the data, this single large value of x pulls the line upward.
    The decision frontier shifts from x = 9 to roughly x = 12.

    And now, with this new boundary, we end up with two misclassifications.

    This illustrates the main issue:

    A linear regression used as a classifier is extremely sensitive to extreme values of x. The decision frontier moves dramatically, and the classification becomes unstable.
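    This instability is easy to reproduce in a sketch: refit OLS after adding one extreme point (x = 50, y = 1) and compare the frontiers. The data here is hypothetical, so the exact shift differs from the article's figures, but the frontier moves to the right in the same way:

```python
def ols_fit(xs, ys):
    # Closed-form OLS: slope = cov(x, y) / var(x), intercept from the means
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical data, frontier initially at x = 9
xs = [1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12, 13, 14, 15, 16, 17]
ys = [0] * 8 + [1] * 8

a1, b1 = ols_fit(xs, ys)
frontier_before = (0.5 - b1) / a1

# One extreme point pulls the whole line and drags the frontier with it
a2, b2 = ols_fit(xs + [50], ys + [1])
frontier_after = (0.5 - b2) / a2

print(round(frontier_before, 2), round(frontier_after, 2))
```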

    This is one of the reasons we need a model that does not keep behaving linearly forever: a model that stays between 0 and 1, even when x becomes very large.

    And this is exactly what the logistic function gives us.

    How Logistic Regression works

    We start with a x + b, just like in linear regression.

    Then we apply a function called the sigmoid, or logistic function.

    As we can see in the screenshot below, the value of p is then between 0 and 1, which is exactly what we want.

    • p(x) is the predicted probability that y = 1
    • 1 − p(x) is the predicted probability that y = 0

    For classification, we can simply say:

    • If p(x) ≥ 0.5, predict class 1
    • Otherwise, predict class 0
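    A minimal sketch of the sigmoid on top of a x + b (the coefficients below are illustrative, not fitted; they place the frontier at x = 9):

```python
import math

def sigmoid(z):
    # Maps any real number into the open interval (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

a, b = 0.5, -4.5   # hypothetical coefficients, frontier at x = 9

def predict_proba(x):
    return sigmoid(a * x + b)

def predict_class(x):
    return 1 if predict_proba(x) >= 0.5 else 0

print(predict_proba(9))    # 0.5: exactly on the frontier
print(predict_proba(50))   # close to 1, but bounded, unlike the straight line
```

    Even an extreme x = 50 produces a probability just under 1, so the model no longer runs off linearly.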

    From likelihood to log-loss

    Now, OLS Linear Regression tries to minimize the MSE (Mean Squared Error).

    Logistic regression for a binary target uses the Bernoulli likelihood. For each observation i:

    • If yᵢ = 1, the likelihood of the data point is pᵢ
    • If yᵢ = 0, the likelihood of the data point is 1 − pᵢ

    For the whole dataset, the likelihood is the product over all i. In practice, we take the logarithm, which turns the product into a sum.

    In the GLM perspective, we try to maximize this log-likelihood.

    In the machine learning perspective, we define the loss as the negative log-likelihood and we minimize it. This gives the usual log-loss.

    And the two are equivalent. We will not do the derivation here.
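    The equivalence is easy to check numerically. A sketch on hypothetical labels y and predicted probabilities p, computing the Bernoulli likelihood and the log-loss side by side:

```python
import math

# Hypothetical labels and predicted probabilities
y = [1, 0, 1, 1]
p = [0.9, 0.2, 0.7, 0.6]

# Likelihood of the dataset: product of p_i where y_i = 1 and (1 - p_i) where y_i = 0
likelihood = 1.0
for yi, pi in zip(y, p):
    likelihood *= pi if yi == 1 else (1 - pi)

# Log-loss: the average negative log-likelihood
log_loss = -sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
                for yi, pi in zip(y, p)) / len(y)

# Minimizing the log-loss is the same as maximizing the likelihood
print(log_loss, -math.log(likelihood) / len(y))  # same value, up to rounding
```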

    Gradient Descent for Logistic Regression

    Principle

    Just as we did for Linear Regression, we can also use Gradient Descent here. The idea is always the same:

    1. Start from some initial values of a and b.
    2. Compute the loss and its gradient (the derivatives) with respect to a and b.
    3. Move a and b a little bit in the direction that reduces the loss.
    4. Repeat.

    Nothing mysterious.
    Just the same mechanical process as before.

    Step 1. Gradient Calculation

    For logistic regression, the gradients of the average log-loss follow a very simple structure:

    ∂L/∂a = (1/n) Σ (pᵢ − yᵢ) xᵢ and ∂L/∂b = (1/n) Σ (pᵢ − yᵢ)

    The gradient with respect to b is simply the average residual.

    We just give the result here, in the form that we can implement in Excel. As you can see, it is quite simple in the end, even if the log-loss formula can look complex at first glance.

    Excel can compute these two quantities with simple SUMPRODUCT formulas.
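    The same two quantities can be sketched in Python on hypothetical data, with a central finite difference as a sanity check that they really are the slopes of the log-loss:

```python
import math

# Hypothetical data and starting coefficients
xs = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
ys = [0, 0, 0, 1, 1, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def log_loss(a, b):
    return -sum(y * math.log(sigmoid(a * x + b)) + (1 - y) * math.log(1.0 - sigmoid(a * x + b))
                for x, y in zip(xs, ys)) / len(xs)

def gradients(a, b):
    residuals = [sigmoid(a * x + b) - y for x, y in zip(xs, ys)]
    grad_a = sum(r * x for r, x in zip(residuals, xs)) / len(xs)  # like SUMPRODUCT(residuals, xs)/n
    grad_b = sum(residuals) / len(xs)                             # the average residual
    return grad_a, grad_b

a, b = 0.1, -0.5
grad_a, grad_b = gradients(a, b)

# Numerical check with a central finite difference
eps = 1e-6
num_a = (log_loss(a + eps, b) - log_loss(a - eps, b)) / (2 * eps)
num_b = (log_loss(a, b + eps) - log_loss(a, b - eps)) / (2 * eps)
print(grad_a, num_a)
print(grad_b, num_b)
```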

    Step 2. Parameter Update

    Once the gradients are known, we update the parameters: a ← a − η ∂L/∂a and b ← b − η ∂L/∂b, where η is the learning rate.

    This update step is repeated at each iteration.
    And iteration after iteration, the loss goes down, and the parameters converge to their optimal values.
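    Put together, the whole loop fits in a few lines. A sketch on hypothetical data, with a hypothetical learning rate, mirroring what the Excel iteration table does row by row:

```python
import math

# Hypothetical data: class 0 on the left, class 1 on the right
xs = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
ys = [0, 0, 0, 1, 1, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def log_loss(a, b):
    return -sum(y * math.log(sigmoid(a * x + b)) + (1 - y) * math.log(1.0 - sigmoid(a * x + b))
                for x, y in zip(xs, ys)) / len(xs)

a, b = 0.0, 0.0
lr = 0.1   # learning rate (eta), a hypothetical choice
loss_start = log_loss(a, b)

for _ in range(2000):
    residuals = [sigmoid(a * x + b) - y for x, y in zip(xs, ys)]
    a -= lr * sum(r * x for r, x in zip(residuals, xs)) / len(xs)
    b -= lr * sum(residuals) / len(xs)

loss_end = log_loss(a, b)
print(loss_start, loss_end)  # the loss decreases iteration after iteration
```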

    We now have the complete picture.
    You have seen the model, the loss, the gradients, and the parameter updates.
    And with the detailed view of each iteration in Excel, you can actually play with the model: change a value, watch the curve move, and see the loss decrease step by step.

    It is surprisingly satisfying to watch how everything fits together so clearly.

    What about multiclass classification?

    For distance-based and tree-based models:

    No issue at all.
    They naturally handle multiple classes because they never interpret the labels as numbers.

    But for weight-based models?

    Here we hit a problem.

    If we write numbers for the classes: 1, 2, 3, etc.

    Then the model will interpret these numbers as real numeric values.
    This leads to problems:

    • the model thinks class 3 is "bigger" than class 1
    • the midpoint between class 1 and class 3 is class 2
    • distances between classes become meaningful

    But none of this is true in classification.

    So:

    For weight-based models, we cannot simply use y = 1, 2, 3 for multiclass classification.

    This encoding is incorrect.

    We will see later how to fix this.

    Conclusion

    Starting from a simple binary dataset, we saw how a weight-based model can act as a classifier, why linear regression quickly reaches its limits, and how the logistic function solves these problems by keeping predictions between 0 and 1.

    Then, by expressing the model through the likelihood and the log-loss, we obtained a formulation that is both mathematically sound and easy to implement.
    And once everything is laid out in Excel, the whole learning process becomes visible: the probabilities, the loss, the gradients, the updates, and finally the convergence of the parameters.

    With the detailed iteration table, you can actually see how the model improves step by step.
    You can change a value, adjust the learning rate, or add a point, and immediately observe how the curve and the loss react.
    That is the real value of doing machine learning in a spreadsheet: nothing is hidden, and every calculation is transparent.

    By building logistic regression this way, you not only understand the model, you also understand how it is trained.
    And this intuition will stay with you as we move on to more advanced models later in the Advent Calendar.


