    A Bird’s Eye View of Linear Algebra: The Basics



This chapter is part of the in-progress ebook on linear algebra, "A Bird's Eye View of Linear Algebra". The ebook places a particular emphasis on AI applications and the way they leverage linear algebra.

Linear algebra is a fundamental discipline underlying anything one can do with math. From physics to machine learning to probability theory (e.g., Markov chains), you name it. No matter what you're doing, linear algebra is always lurking under the covers, ready to spring at you as soon as things go multi-dimensional. In my experience (and I've heard this from others), this was the source of a big shock between high school and university. In high school (India), I was exposed to some very basic linear algebra (mainly determinants and matrix multiplication). Then, in university-level engineering education, every subject suddenly seems to assume proficiency in concepts like eigenvalues, Jacobians and so on, as if you were supposed to be born with the knowledge.

This chapter is meant to provide a high-level overview of the important concepts in this discipline and the obvious applications that build on them.

    The AI revolution

Almost any information can be embedded in a vector space: images, video, language, speech, biometric information and whatever else you can imagine. And all the applications of machine learning and artificial intelligence (like the recent chatbots, text-to-image models, etc.) work on top of these vector embeddings. Since linear algebra is the science of dealing with high-dimensional vector spaces, it is an indispensable building block.

Complex concepts from our real world, like images, text, speech, etc., can be embedded in high-dimensional vector spaces. The higher the dimensionality of the vector space, the more complex the information it can encode. Image created using Midjourney.

Many of the techniques involve taking input vectors from one space and mapping them to vectors in some other space.

But why the focus on "linear" when most interesting functions are non-linear? It's because the problem of making our models high-dimensional and that of making them non-linear (general enough to capture all kinds of complex relationships) turn out to be orthogonal to each other. Many neural network architectures work by stacking linear layers with simple one-dimensional non-linearities in between them. And there is a theorem that says this kind of architecture can model any function.
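To make this concrete, here is a minimal sketch (assuming NumPy; the layer sizes and random weights are made up for illustration) of a tiny two-layer network: all the multi-dimensional work is done by linear maps (matrix multiplications), and the only non-linearity is a simple element-wise function applied between them.

```python
import numpy as np

def relu(x):
    # A simple element-wise (one-dimensional) non-linearity.
    return np.maximum(0.0, x)

def tiny_mlp(x, W1, b1, W2, b2):
    """Two linear layers with an element-wise non-linearity in between."""
    h = relu(W1 @ x + b1)   # linear map R^3 -> R^4, then non-linearity
    return W2 @ h + b2      # linear map R^4 -> R^2

rng = np.random.default_rng(0)
x = rng.standard_normal(3)                                   # a 3-dimensional input
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal(2)
print(tiny_mlp(x, W1, b1, W2, b2))                           # a 2-dimensional output
```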

Since the way we manipulate high-dimensional vectors is primarily matrix multiplication, it isn't a stretch to call it the bedrock of the modern AI revolution.

I) Vector spaces

As mentioned in the previous section, linear algebra inevitably crops up when things go multi-dimensional. We start off with a scalar, which is just a number of some kind. For this article, we'll be considering real and complex numbers for these scalars. In general, a scalar can be any object for which the basic operations of addition, subtraction, multiplication and division are defined (abstracted as a "field"). Now, we want a framework to describe collections of such numbers (adding dimensions). These collections are called "vector spaces". We'll be considering the cases where the elements of the vector space are either real or complex numbers (the former being a special case of the latter). The resulting vector spaces are called "real vector spaces" and "complex vector spaces" respectively.

The ideas in linear algebra apply to these "vector spaces". The most common example is your floor, your desk or the computer screen you're reading this on. These are all two-dimensional vector spaces, since every point on your desk can be specified by two numbers (the x and y coordinates as shown below). This space is denoted by R² since two real numbers specify it.

We can generalize R² in various ways. First, we can add dimensions: the space we live in is three-dimensional (R³). Or, we can curve it: the surface of a sphere like the Earth, for example (denoted S²), is still two-dimensional, but unlike R² (which is flat), it is curved. So far, these spaces have all basically been arrays of numbers. But the idea of a vector space is more general. It's a collection of objects for which the following ideas are well defined:

    1. Addition of any two of the objects.
2. Multiplication of the objects by a scalar (a real number).

Not only that, but the objects should be "closed" under these operations. This means that if you apply these two operations to the objects of the vector space, you should get objects of the same kind (you shouldn't leave the vector space). For example, the set of integers isn't a vector space because multiplication by a scalar (a real number) can give us something that isn't an integer (3*2.5 = 7.5, which isn't an integer).

One of the ways to express the objects of a vector space is with vectors. Vectors require an arbitrary "basis". An example of a basis is the compass system with the directions North, South, East and West. Any direction (like "southwest") can be expressed in terms of these. Those are "direction vectors", but we can also have "position vectors", for which we need an origin and a coordinate system intersecting at that origin. The latitude and longitude system for referencing every place on the surface of the Earth is an example. The latitude and longitude pair is one way to identify your house. But there are infinitely many other ways. Another culture might draw the latitude and longitude lines at a slightly different angle from the standard. And so they would come up with different numbers for your house. But that doesn't change the physical location of the house itself. The house exists as an object in the vector space, and these different ways of expressing its location are called "bases". Choosing one basis allows you to assign a pair of numbers to the house, and choosing another allows you to assign a different set of numbers that is equally valid.

A vector space where every position is organized and neatly mapped to a set of numbers. Image created using Midjourney.
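As a small sketch of this idea (the point and the bases below are made up for illustration), the same point in R² gets different coordinates depending on which basis we express it in, while the point itself never moves:

```python
import numpy as np

# The "house": a fixed point in R^2, written in the standard basis.
house = np.array([2.0, 1.0])

# Basis A: the standard axes. Basis B: the same axes rotated by 30 degrees
# (the basis vectors are the columns of each matrix).
basis_a = np.eye(2)
theta = np.deg2rad(30)
basis_b = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])

# Coordinates of the same point in each basis (solve basis @ coords = house).
coords_a = np.linalg.solve(basis_a, house)
coords_b = np.linalg.solve(basis_b, house)
print(coords_a)  # [2. 1.]
print(coords_b)  # different numbers, same physical point

# Reconstructing the point from either basis gives back the same location.
assert np.allclose(basis_b @ coords_b, house)
```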

Vector spaces can also be infinite-dimensional. For instance, in miniature 12 of [2], the entire set of real numbers is viewed as an infinite-dimensional vector space.

    II) Linear maps

Now that we know what a vector space is, let's take it to the next level and talk about two vector spaces. Since vector spaces are simply collections of objects, we can think of a mapping that takes an object from one of the spaces and maps it to an object from the other. An example of this is recent AI programs like Midjourney, where you enter a text prompt and they return an image matching it. The text you enter is first converted to a vector. Then, that vector is converted to another vector in the image space via such a "mapping".

Let V and W be vector spaces (either both real or both complex vector spaces). A function f: V -> W is said to be a "linear map" if for any two vectors u, v ∈ V and any scalar c (a real or complex number, depending on whether we are working with real or complex vector spaces) the following two conditions are satisfied:

$$f(u+v) = f(u) + f(v) \tag{1}$$
$$f(c \cdot v) = c \cdot f(v) \tag{2}$$

Combining the above two properties, we get the following result about a linear combination of n vectors.

$$f(c_1 u_1 + c_2 u_2 + \dots + c_n u_n) = c_1 f(u_1) + c_2 f(u_2) + \dots + c_n f(u_n)$$

And now we can see where the name "linear map" comes from. If we pass a linear combination of n vectors to the linear map f (the left-hand side of the equation above), that is equivalent to taking the same linear combination of the images of the individual vectors under f. We can apply the linear map first and then the linear combination, or the linear combination first and then the linear map. The two are equivalent.

In high school, we learn about linear equations. In two-dimensional space, such an equation is represented by f(x) = m.x + c. Here, m and c are the parameters of the equation. Note that this function is not a linear map when c ≠ 0: f(u+v) = m.(u+v) + c while f(u) + f(v) = m.(u+v) + 2c, so equation (1) fails, and equation (2) fails for a similar reason. If we set f(x) = m.x instead, then this is a linear map, since it satisfies both equations.
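A quick numerical sketch of these two conditions (the values of m, c, u, v and s below are arbitrary):

```python
import numpy as np

m, c = 2.0, 3.0
u, v, s = 1.5, -0.5, 4.0

f_affine = lambda x: m * x + c   # the "high school" linear equation
f_linear = lambda x: m * x       # a genuine linear map

# Condition (1): f(u + v) == f(u) + f(v)
print(np.isclose(f_linear(u + v), f_linear(u) + f_linear(v)))  # True
print(np.isclose(f_affine(u + v), f_affine(u) + f_affine(v)))  # False (off by c)

# Condition (2): f(s.v) == s.f(v)
print(np.isclose(f_linear(s * v), s * f_linear(v)))            # True
print(np.isclose(f_affine(s * v), s * f_affine(v)))            # False
```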

A linear map takes objects from one vector space and maps them to objects in another vector space, kind of like a portal between worlds. Of course, there can be many such "maps" or "portals". A linear map has to satisfy additional properties: if you pass it a linear combination of vectors from the first space, it shouldn't matter whether you apply the linear map first or the linear combination first. Image created using Midjourney.

    III) Matrices

In section I, we introduced the concept of a basis for a vector space. Given a basis for the first vector space (V) and the dimensionality of the second (W), every linear map can be expressed as a matrix (for details, see [1]). A matrix is just a collection of vectors. These vectors can be arranged in columns, giving us a 2-d grid of numbers as shown below.

A matrix as a collection of vectors arranged in columns. Image by author.

Matrices are the objects people first think of in the context of linear algebra, and for good reason: most of the time spent practising linear algebra is spent dealing with matrices. But it is important to remember that there are (in general) infinitely many matrices that can represent a given linear map, depending on the basis we choose for the first space, V. The linear map is hence a more general concept than the particular matrix one happens to be using to represent it.

How do matrices help us carry out the linear map they represent (from one vector to the other)? By multiplying the matrix with the first vector. The result is the second vector, and the mapping is complete (from first to second).

In detail, we take the dot product (sum-product) of the first vector, v_1, with the first row of the matrix, and this yields the first entry of the resulting vector, v_2; then the dot product of v_1 with the second row of the matrix gives the second entry of v_2, and so on. This process is demonstrated below for a matrix with 2 rows and 3 columns. The first vector, v_1, is three-dimensional and the second vector, v_2, is two-dimensional.

How matrix multiplication with a vector works. Image by author.

Note that the underlying linear map behind a matrix with this dimensionality (2x3) will always take a three-dimensional vector, v_1, and map it to a two-dimensional vector, v_2.

A linear transformation that takes vectors in three-dimensional space and maps them to two-dimensional space. Image created with Midjourney.

In general, an (nxm) matrix will map an m-dimensional vector to an n-dimensional one.
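Here is a small sketch of that row-by-row dot product, using an arbitrary (2x3) matrix (the numbers are made up for illustration):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])      # a (2 x 3) matrix
v1 = np.array([1.0, 0.0, -1.0])      # a 3-dimensional vector

# Each entry of v2 is the dot product of v1 with one row of A.
v2_manual = np.array([A[0] @ v1, A[1] @ v1])
v2 = A @ v1                          # the same thing via matrix multiplication

print(v2)                            # a 2-dimensional vector
assert np.allclose(v2, v2_manual)
```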

    III-A) Properties of matrices

Let's cover some properties of matrices that will allow us to identify properties of the linear maps they represent.

    Rank

An important property of matrices and their corresponding linear maps is the rank. We can talk about this in terms of a collection of vectors, since that's all a matrix is. Say we have a vector, v1=[1,0,0]. The first element of the vector is the coordinate along the x-axis, the second the coordinate along the y-axis and the third the coordinate along the z-axis. These three axes form a basis (one of many) of three-dimensional space, R³, meaning that any vector in this space can be expressed as a linear combination of these three vectors.

A single vector in three-dimensional space. Image by author.

We can multiply this vector by a scalar, s. This gives us s.[1,0,0] = [s,0,0]. As we vary the value of s, we can get any point along the x-axis. But that's about it. Say we add another vector to our collection, v2=[3.5,0,0]. Now, what vectors can we make with linear combinations of these two? We get to multiply the first one by any scalar, s_1, and the second by any scalar, s_2. This gives us:

$$s_1 [1,0,0] + s_2 [3.5,0,0] = [s_1 + 3.5 s_2, 0, 0] = [s', 0, 0]$$

Here, s' is just another scalar. So, we can still reach only points on the x-axis, even with linear combinations of both of these vectors. The second vector didn't "expand our reach" at all. The set of points we can reach with linear combinations of the two is exactly the same as the set we can reach with the first alone. So even though we have two vectors, the rank of this collection of vectors is 1, since the space they span is one-dimensional. If, on the other hand, the second vector were v2=[0,1,0], then you could reach any point on the x-y plane with these two vectors. So the space spanned would be two-dimensional and the rank of this collection would be 2. If the second vector were v2=[2.1,1.5,0.8], we would still span a two-dimensional space with v1 and v2 (though that space would now be different from the x-y plane; it would be some other 2-d plane). And the two vectors would still have a rank of 2. If the rank of a collection of vectors is the same as the number of vectors (meaning they can collectively span a space of dimensionality as high as the number of vectors), then they are called "linearly independent".
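A minimal sketch of these cases, using NumPy's matrix_rank on the vectors above:

```python
import numpy as np

v1 = np.array([1.0, 0.0, 0.0])

# Stack each pair of vectors as the rows of a matrix and ask for its rank.
print(np.linalg.matrix_rank(np.vstack([v1, [3.5, 0.0, 0.0]])))  # 1: still just the x-axis
print(np.linalg.matrix_rank(np.vstack([v1, [0.0, 1.0, 0.0]])))  # 2: the x-y plane
print(np.linalg.matrix_rank(np.vstack([v1, [2.1, 1.5, 0.8]])))  # 2: some other 2-d plane
```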

If the vectors that make up the matrix span an m-dimensional space, then the rank of the matrix is m. But a matrix can be viewed as a collection of vectors in two ways. Since it's simply a two-dimensional grid of numbers, we can either consider all the columns as the group of vectors or consider all the rows as the group, as shown below. Here, we have a (3x4) matrix (three rows and four columns). It can be viewed either as a collection of 4 column vectors (each three-dimensional) or as 3 row vectors (each four-dimensional).

A matrix can be thought of as a collection of row vectors or as a collection of column vectors. Image by author.

Full row rank means all the row vectors are linearly independent. Full column rank means all the column vectors are linearly independent.

When the matrix is a square matrix, it turns out that the row rank and column rank will always be the same (the result in fact holds for any matrix). This is not obvious at all, and a proof is given in the math.stackexchange post [3]. This means that for a square matrix, we can speak simply in terms of the rank and don't need to bother specifying "row rank" or "column rank".

The linear transformation corresponding to a (3x3) matrix that has a rank of 2 will map everything in 3-D space to a lower, 2-d space, much like the (2x3) matrix we encountered in the last section.

A light source forming shadows of points in 3-D space onto a 2-d floor or wall is one linear transformation that maps 3-D vectors to 2-d vectors. Image created with Midjourney.
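As a small sketch, a rank-2 (3x3) matrix such as the projection onto the x-y plane (an assumed example, in the spirit of the shadow analogy) flattens every 3-D vector into a 2-d subspace:

```python
import numpy as np

# Projection onto the x-y plane: a (3 x 3) matrix of rank 2.
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])

print(np.linalg.matrix_rank(P))   # 2

v = np.array([1.0, 2.0, 3.0])
print(P @ v)                      # [1. 2. 0.]: the "shadow" of v on the x-y plane
```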

Notions closely related to the rank of square matrices are the determinant and invertibility.

    Determinants

The determinant of a square matrix is its "measure" in a sense. Let me explain by going back to thinking of a matrix as a collection of vectors. Let's start with just one vector. The way to "measure" it is obvious: its length. And since we're dealing only with square matrices, the only way to have one vector is for it to be one-dimensional, which is basically just a scalar. Things get interesting when we go from one dimension to two. Now we're in two-dimensional space, so the notion of "measure" is no longer length but has graduated to area. And with two vectors in that two-dimensional space, it is the area of the parallelogram they form. If the two vectors are parallel to each other (e.g., both lie on the x-axis), in other words they are not linearly independent, then the area of the parallelogram between them becomes zero. The determinant of the matrix formed by them will be zero, and the matrix will not have full rank.

Two vectors forming a parallelogram between them. The area of the parallelogram is the determinant of the matrix formed by these two vectors. Image by author.

Taking it one dimension higher, we get three-dimensional space. And to construct a square matrix (3x3), we now need three vectors. And since the notion of "measure" in three-dimensional space is volume, the determinant of a (3x3) matrix becomes the volume contained between the vectors that make it up.

In three-dimensional space, three vectors are needed to create a 3x3 matrix. The determinant of the matrix is the volume contained between these vectors. Image by Midjourney.

And this can be extended to spaces of any dimensionality.

Notice that we spoke about the area or the volume contained between the vectors. We didn't specify whether these were the vectors composing the rows of the square matrix or those composing its columns. And the somewhat surprising thing is that we don't need to specify this, because it doesn't matter either way. Whether we take the vectors forming the rows and measure the volume between them, or the vectors forming the columns, we get the same answer. This is proven in the math.stackexchange post [4].
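A quick numerical sketch of both claims (the parallelogram area in 2-D and the fact that a matrix and its transpose have the same determinant), with made-up vectors:

```python
import numpy as np

# Two vectors in 2-D, stacked as the rows of a square matrix.
a = np.array([3.0, 0.0])
b = np.array([1.0, 2.0])
M = np.vstack([a, b])

# |det| is the area of the parallelogram spanned by a and b.
print(abs(np.linalg.det(M)))                                # 6.0

# Rows vs. columns: det(M) == det(M^T), so the "measure" is the same either way.
print(np.isclose(np.linalg.det(M), np.linalg.det(M.T)))     # True

# Parallel vectors: zero area, zero determinant, and the rank drops.
M_parallel = np.vstack([a, 2.0 * a])
print(np.linalg.det(M_parallel))                            # 0.0
print(np.linalg.matrix_rank(M_parallel))                    # 1
```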

There are several other properties of linear maps and their corresponding matrices that are invaluable for understanding them and extracting value from them. We'll be delving into invertibility, eigenvalues, diagonalizability and the different transformations one can perform in the coming articles (check back here for links).


If you liked this story, buy me a coffee :) https://www.buymeacoffee.com/w045tn0iqw

    References

    [1] Linear map: https://en.wikipedia.org/wiki/Linear_map

    [2] Matousek’s miniatures: https://kam.mff.cuni.cz/~matousek/stml-53-matousek-1.pdf

[3] math.stackexchange post proving that the row rank and column rank are the same: https://math.stackexchange.com/questions/332908/looking-for-an-intuitive-explanation-why-the-row-rank-is-equal-to-the-column-ran

[4] math.stackexchange post proving that the determinants of a matrix and its transpose are the same: https://math.stackexchange.com/a/636198/155881


