
    Linear Regression Is Actually a Projection Problem (Part 2: From Projections to Predictions)

By Editor Times Featured · April 2, 2026 · 19 min read


We assume that linear regression is about fitting a line to data.

But mathematically, that's not what it's doing.

It's finding the closest possible vector to your target within the space spanned by the features.

To see this, we have to change how we look at our data.


In Part 1, we built a basic idea of what a vector is and explored the concepts of dot products and projections.

Now, let's apply those concepts to solve a linear regression problem.

We have this data.

Image by Author

The Usual Way: Feature Space

When we try to understand linear regression, we usually start with a scatter plot of the independent variable against the dependent variable.

Each point on this plot represents a single row of data. We then try to fit a line through these points, with the goal of minimizing the sum of squared residuals.

To solve this mathematically, we write down the cost function and apply differentiation to find the exact formulas for the slope and intercept.

As we already discussed in my earlier multiple linear regression (MLR) blog, this is the standard way to understand the problem.

This is what we call feature space.

Image by Author

After doing all that work, we get a value for the slope and a value for the intercept. Here we need to note one thing.

Let's say ŷᵢ is the predicted value at a certain point. We have the slope and intercept values, and now, given our data, we need to predict the price.

If ŷᵢ is the predicted price for House 1, we calculate it using

\[
\hat{y}_1 = \beta_0 + \beta_1 \cdot \text{size}_1
\]

What have we done here? We have a size value, and we're scaling it by a certain number, which we call the slope (β₁), to get as close to the actual price as possible.

We also add an intercept (β₀) as a base value.
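To make this concrete, here is a minimal NumPy sketch (the code and library choice are my addition; the article itself shows no code) applying these calculus-derived formulas to the toy data used throughout this series, sizes (1, 2, 3) and prices (4, 8, 9):

```python
import numpy as np

# Toy data from the article: house sizes and prices
size = np.array([1.0, 2.0, 3.0])
price = np.array([4.0, 8.0, 9.0])

# Calculus-derived least-squares formulas for simple linear regression
slope = np.sum((size - size.mean()) * (price - price.mean())) / np.sum((size - size.mean()) ** 2)
intercept = price.mean() - slope * size.mean()

print(intercept, slope)          # 2.0 2.5
print(intercept + slope * size)  # [4.5 7.  9.5]
```

Keep these numbers in mind; the projection view developed below lands on exactly the same slope and intercept.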

Let's keep this point in mind and move on to the next perspective.


    A Shift in Perspective

Let's look at our data again.

Now, instead of treating Price and Size as axes, let's treat each house as an axis.

We have three houses, which means we can treat House A as the X-axis, House B as the Y-axis, and House C as the Z-axis.

Then we simply plot our points.

Image by Author

When we treated the size and price columns as axes, we got three points, where each point represented the size and price of a single house.

However, when we treat each house as an axis, we get two points in a three-dimensional space.

One point represents the sizes of all three houses, and the other represents the prices of all three houses.

This is what we call the column space, and this is where linear regression really happens.


From Points to Directions

Now let's connect our two points to the origin, and we can call them vectors.

Image by Author

Okay, let's slow down and look at what we've done and why we did it.

Instead of the usual scatter plot where size and price are the axes (feature space), we treated each house as an axis and plotted the points (column space).

We are now saying that linear regression happens in this column space.

You might be thinking: wait, we learn and understand linear regression using the usual scatter plot, where we minimize the residuals to find a best-fit line.

Yes, that's correct! But in feature space, linear regression is solved using calculus. We get the formulas for the slope and intercept using partial differentiation.

If you remember my earlier blog on MLR, we derived the formulas for the slopes and intercept when we had two features and a target variable.

You can see how messy it was to derive those formulas using calculus. Now imagine having 50 or 100 features; it becomes unmanageable.

By switching to column space, we change the lens through which we view regression.

We look at our data as vectors and use the concept of projections. The geometry stays exactly the same whether we have 2 features or 2,000 features.

So, if calculus gets that messy, what's the real benefit of this unchanging geometry? Let's discuss exactly what happens in column space.


Why This Perspective Matters

Now that we have an idea of what feature space and column space are, let's focus on the plot.

We have two points, where one represents the sizes and the other represents the prices of the houses.

Why did we connect them to the origin and treat them as vectors?

Because, as we already discussed, in linear regression we're finding a number (which we call the slope or weight) to scale our independent variable.

We want to scale the Size so it gets as close to the Price as possible, minimizing the residual.

You can't visually scale a free-floating point; you can only scale something that has a length and a direction.

By connecting the points to the origin, they become vectors. Now they have both magnitude and direction, and we already know that we can scale vectors.


Image by Author

Okay, we established that we treat these columns as vectors because we can scale them, but there's something even more important to learn here.

Let's look at our two vectors: the Size vector and the Price vector.

First, the Size vector (1, 2, 3) points in a very specific direction based on the pattern of its numbers.

From this vector, we can tell that House 2 is twice as big as House 1, and House 3 is three times as big.

There's a specific 1:2:3 ratio, which forces the Size vector to point in one exact direction.

Now, the Price vector points in a slightly different direction than the Size vector, based on its own numbers.

The direction of an arrow simply shows us the natural, underlying pattern of a feature across all our houses.

If our prices were exactly (2, 4, 6), our Price vector would lie in exactly the same direction as our Size vector. That would mean size is a perfect, direct predictor of price.

Image by Author

But in real life, this is rarely the case. The price of a house doesn't depend only on its size; many other factors affect it, which is why the Price vector points slightly away.

The angle between the two vectors (1, 2, 3) and (4, 8, 9) represents the real-world noise.
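That noise angle is easy to measure with the dot product from Part 1. A short NumPy sketch (my addition, not the article's code):

```python
import numpy as np

size = np.array([1.0, 2.0, 3.0])
price = np.array([4.0, 8.0, 9.0])

# cos(theta) = (size . price) / (|size| * |price|)
cos_theta = size @ price / (np.linalg.norm(size) * np.linalg.norm(price))
angle_deg = np.degrees(np.arccos(cos_theta))

print(round(cos_theta, 3))  # 0.99 -- almost, but not perfectly, aligned
print(round(angle_deg, 1))  # the few degrees of "real-world noise"
```

A cosine of exactly 1 would mean the prices were a perfect multiple of the sizes, like the (2, 4, 6) case above.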


    The Geometry Behind Regression

Image by Author

Now we use the concept of projections that we learned in Part 1.

Let's treat our Price vector (4, 8, 9) as a destination we want to reach. However, we have only one direction we can travel: the path of our Size vector (1, 2, 3).

If we travel along the direction of the Size vector, we can't perfectly reach our destination, because it points in a different direction.

But we can travel to a specific point on our path that gets us as close to the destination as possible.

The shortest path dropping from our destination down to that exact point makes a perfect 90-degree angle.

In Part 1, we discussed this concept using the 'highway and home' analogy.

We're applying the exact same concept here. The only difference is that in Part 1 we were in a 2D space, and here we're in a 3D space.

I referred to the feature as a 'way' or a 'highway' because we have only one direction to travel.

This distinction between a 'way' and a 'direction' will become much clearer later, once we add more directions!


A Simple Way to See This

We can already see that this is the exact same concept as vector projection.

We derived a formula for this in Part 1. So, why wait?

Let's just apply the formula, right?

No. Not yet.

There's something important we need to understand first.

In Part 1, we were dealing with a 2D space, so we used the highway and home analogy. But here, we're in a 3D space.

To understand it better, let's use a new analogy.

Think of this 3D space as a physical room. There's a lightbulb hovering in the room at the coordinates (4, 8, 9).

The path from the origin to that bulb is our Price vector, which we call the target vector.

We want to reach that bulb, but our movements are restricted.

We can only walk along the direction of our Size vector (1, 2, 3), moving either forward or backward.

Based on what we learned in Part 1, you might say, 'Let's just apply the projection formula to find the closest point on our path to the bulb.'

And you would be right. That's the absolute closest we can get to the bulb in that direction.
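As a sketch of that single-direction projection (code mine, not from the article): the coefficient (size · price) / (size · size) says how far to walk along the Size path, and the leftover gap is perpendicular to that path:

```python
import numpy as np

size = np.array([1.0, 2.0, 3.0])
price = np.array([4.0, 8.0, 9.0])

# Projection coefficient from Part 1: how far to walk along the Size direction
coeff = (size @ price) / (size @ size)  # 47/14, about 3.357
closest = coeff * size                  # nearest reachable point on the Size line
residual = price - closest

print(np.round(closest, 3))
print(size @ residual)  # ~0: the leftover gap is perpendicular to the path
```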


Why Do We Need a Base Value?

But before we move forward, we should note one more thing.

We already said that we're finding a single number (a slope) to scale our Size vector so we can get as close to the Price vector as possible. We can capture this with a simple equation:

Price = β₁ × Size

But what if the size is zero? Whatever the value of β₁ is, we get a predicted price of zero.

But is this right? We're saying that if the size of a house is 0 square feet, the price of the house is 0 dollars.

This isn't correct, because there should be a base value for every house. Why?

Because even when there is no physical building, there's still a price for the empty plot of land it sits on. The final price of the house depends heavily on this base plot value.

We call this base value β₀. In traditional algebra, we already know it as the intercept, the term that shifts a line up and down.

So, how do we add a base value in our 3D room? We do it by adding a Base vector.


    Combining Instructions

GIF by Author

Now we've added a base vector (1, 1, 1), but what does this base vector actually do?

From the plot above, we can see that by adding a base vector, we have one more direction to move in this space.

We can move in both the direction of the Size vector and the direction of the Base vector.

Don't get confused by thinking of them as "ways"; they're directions, and this will become clear once we reach a point by moving along both of them.

Without the base vector, our base value was zero. We started with a base value of zero for every house. Now that we have a base vector, let's first move along it.

For example, let's take 3 steps in the direction of the Base vector. By doing so, we reach the point (3, 3, 3). We're currently at (3, 3, 3), and we want to get as close as possible to our Price vector.

This means the base value of every house is 3 dollars, and our new starting point is (3, 3, 3).

Next, let's take 2 steps in the direction of our Size vector (1, 2, 3). This means computing 2 × (1, 2, 3) = (2, 4, 6).

Therefore, from (3, 3, 3), we move 2 units along the House A axis, 4 units along the House B axis, and 6 units along the House C axis.

Basically, we're adding vectors here, and the order doesn't matter.

Whether we move along the base vector first or the size vector first, we reach exactly the same point. We moved along the base vector first only to make the idea easier to follow!
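A quick NumPy sketch (my illustration, not the article's code) confirms that the two routes land on the same point:

```python
import numpy as np

base = np.array([1.0, 1.0, 1.0])
size = np.array([1.0, 2.0, 3.0])

# Base direction first, then Size direction...
route_a = 3 * base + 2 * size
# ...or Size direction first, then Base direction
route_b = 2 * size + 3 * base

print(route_a)                           # [5. 7. 9.]
print(np.array_equal(route_a, route_b))  # True -- vector addition commutes
```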


The Space of All Possible Predictions

In this way, we use both directions to get as close as possible to our Price vector. In the example above, we scaled the Base vector by 3, which means β₀ = 3, and we scaled the Size vector by 2, which means β₁ = 2.

From this, we can see that we need the best combination of β₀ and β₁, which tells us how many steps to travel along the base vector and how many along the size vector to reach the point closest to our Price vector.

Now, if we try all the different combinations of β₀ and β₁, we get an infinite number of points. Let's see what that looks like.

GIF by Author

We can see that all the points formed by the different combinations of β₀ and β₁ along the directions of the Base vector and Size vector form a flat 2D plane in our 3D space.

Now, we have to find the point on that plane which is nearest to our Price vector.

We already know how to get to that point. As we discussed in Part 1, we find the shortest path using the concept of geometric projection.


Now we have to find the exact point on the plane nearest to the Price vector.

We already covered this in Part 1 using our 'home and highway' analogy, where the shortest path from the highway to the home formed a 90-degree angle with the highway.

There, we moved in a single dimension, but here we're moving on a 2D plane. However, the rule stays the same.

The shortest distance between the tip of our Price vector and a point on the plane occurs where the path between them forms a perfect 90-degree angle with the plane.

GIF by Author

From a Point to a Vector

Before we dive into the math, let's clarify exactly what is happening so it's easy to follow.

Until now, we've been talking about finding the specific point on our plane that's closest to the tip of our target Price vector. But what do we actually mean by this?

To reach that point, we have to travel across our plane.

We do this by moving along our two available directions, our Base and Size vectors, and scaling them.

When you scale and add two vectors together, the result is always another vector!

If we draw a straight line from the origin to that exact point on the plane, we create what is called the Prediction vector.

Moving along this single Prediction vector gets us to the exact same destination as taking those scaled steps along the Base and Size directions.

Vector Subtraction

Now we have two vectors.

We want to know the exact difference between them. In linear algebra, we find this difference using vector subtraction.

When we subtract our Prediction from our Target, the result is our Residual vector, also called the Error vector.

This means that the dotted purple line is not just a measurement of distance. It's a vector itself!

When we work in feature space, we try to minimize the sum of squared residuals. Here, by finding the point on the plane closest to the Price vector, we're indirectly looking for where the physical length of the residual vector is smallest!
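We can check that perpendicularity numerically. In the sketch below (my addition; `np.linalg.lstsq` is a standard NumPy routine that finds the least-squares combination for us), the residual comes out orthogonal to both the Base and Size columns:

```python
import numpy as np

# Columns: the Base vector (1,1,1) and the Size vector (1,2,3)
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([4.0, 8.0, 9.0])  # the Price (target) vector

beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # best (intercept, slope)
residual = y - X @ beta                       # target minus prediction

# Dotting the residual with each column gives (numerically) zero
print(np.round(X.T @ residual, 10))  # [0. 0.]
```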


    Linear Regression Is a Projection

Now let's do the math.

Let's start by representing everything in matrix form.

\[
X =
\begin{bmatrix}
1 & 1 \\
1 & 2 \\
1 & 3
\end{bmatrix}
\quad
y =
\begin{bmatrix}
4 \\ 8 \\ 9
\end{bmatrix}
\quad
\beta =
\begin{bmatrix}
\beta_0 \\ \beta_1
\end{bmatrix}
\]

Here, the columns of \(X\) represent the base and size directions, and we are trying to combine them to reach \(y\):

\[
\hat{y} = X\beta
= \beta_0
\begin{bmatrix}
1 \\ 1 \\ 1
\end{bmatrix}
+ \beta_1
\begin{bmatrix}
1 \\ 2 \\ 3
\end{bmatrix}
\]

Every prediction is just a combination of these two directions. The error vector is the gap between where we want to be and where we actually land:

\[
e = y - X\beta
\]

For this gap to be the shortest possible, it must be perfectly perpendicular to the plane formed by the columns of \(X\):

\[
X^T e = 0
\]

Now we substitute \(e\) into this condition:

\[
X^T (y - X\beta) = 0
\]

\[
X^T y - X^T X \beta = 0
\]

\[
X^T X \beta = X^T y
\]

By simplifying, we get the Normal Equation:

\[
\beta = (X^T X)^{-1} X^T y
\]

Now we compute each part step by step.

\[
X^T =
\begin{bmatrix}
1 & 1 & 1 \\
1 & 2 & 3
\end{bmatrix}
\quad
X^T X =
\begin{bmatrix}
3 & 6 \\
6 & 14
\end{bmatrix}
\quad
X^T y =
\begin{bmatrix}
21 \\ 47
\end{bmatrix}
\]

Computing the inverse of \(X^T X\):

\[
(X^T X)^{-1}
= \frac{1}{3 \times 14 - 6 \times 6}
\begin{bmatrix}
14 & -6 \\
-6 & 3
\end{bmatrix}
= \frac{1}{6}
\begin{bmatrix}
14 & -6 \\
-6 & 3
\end{bmatrix}
\]

Now multiply this with \(X^T y\):

\[
\beta
= \frac{1}{6}
\begin{bmatrix}
14 & -6 \\
-6 & 3
\end{bmatrix}
\begin{bmatrix}
21 \\ 47
\end{bmatrix}
= \frac{1}{6}
\begin{bmatrix}
14 \cdot 21 - 6 \cdot 47 \\
-6 \cdot 21 + 3 \cdot 47
\end{bmatrix}
= \frac{1}{6}
\begin{bmatrix}
12 \\ 15
\end{bmatrix}
=
\begin{bmatrix}
2 \\ 2.5
\end{bmatrix}
\]

With these values, we can finally compute the exact point on the plane:

\[
\hat{y}
= 2
\begin{bmatrix}
1 \\ 1 \\ 1
\end{bmatrix}
+ 2.5
\begin{bmatrix}
1 \\ 2 \\ 3
\end{bmatrix}
=
\begin{bmatrix}
4.5 \\ 7.0 \\ 9.5
\end{bmatrix}
\]

And this point is the closest possible point on the plane to our target.

We got the point (4.5, 7.0, 9.5). This is our prediction.

This point is the closest to the tip of the Price vector, and to reach it, we move 2 steps along the base vector, which is our intercept, and 2.5 steps along the size vector, which is our slope.
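The whole hand computation can be replayed in a few lines of NumPy (a sketch I'm adding; for larger problems you'd prefer `np.linalg.solve` or `lstsq` over forming an explicit inverse):

```python
import numpy as np

X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([4.0, 8.0, 9.0])

# Normal Equation: beta = (X^T X)^{-1} X^T y
beta = np.linalg.inv(X.T @ X) @ X.T @ y
y_hat = X @ beta  # the projection of y onto the plane

print(np.round(beta, 6))   # [2.  2.5]
print(np.round(y_hat, 6))  # [4.5 7.  9.5]
```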


What Changed Was the Perspective

Let's recap what we've done in this blog. We didn't follow the usual method for solving the linear regression problem, the calculus method, where we differentiate the loss function to get the equations for the slope and intercept.

Instead, we chose another method to solve the linear regression problem: the method of vectors and projections.

We started with a Price vector, and we needed to build a model that predicts the price of a house based on its size.

In terms of vectors, that meant we initially had only one direction to move in to predict the price of the house.

Then, we also added the Base vector, realizing there should be a baseline starting value.

Now we had two directions, and the question was: how close can we get to the tip of the Price vector by moving in these two directions?

We aren't just fitting a line; we're working within a space.

In feature space: we minimize error.

In column space: we drop perpendiculars.

By using different combinations of the slope and intercept, we got an infinite number of points that form a plane.

The closest point, which we needed to find, lies somewhere on that plane, and we found it using the concept of projections and the dot product.

Through that geometry, we found the perfect point and derived the Normal Equation!

You may ask, "Don't we get this normal equation using calculus as well?" You're exactly right! That's the calculus view, but here we took the geometric linear algebra view to really understand the geometry behind the math.
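And indeed, a generic least-squares fitter lands on the same coefficients. A quick check with `np.polyfit` (my choice of tool; any OLS routine would do):

```python
import numpy as np

size = np.array([1.0, 2.0, 3.0])
price = np.array([4.0, 8.0, 9.0])

# polyfit minimizes squared error internally, yet returns
# exactly the projection coefficients we derived geometrically
slope, intercept = np.polyfit(size, price, 1)
print(round(intercept, 6), round(slope, 6))  # 2.0 2.5
```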

Linear regression is not just optimization.

    It’s projection.


I hope you learned something from this blog!

If you think something is missing or could be improved, feel free to leave a comment.

If you haven't read Part 1 yet, you can read it here. It covers the basic geometric intuition behind vectors and projections.

Thanks for reading!


