    Understanding Matrices | Part 4: Matrix Inverse

    By Editor Times Featured · August 31, 2025 · 20 Mins Read


    In the first three parts of this series [1], [2], and [3], we have seen:

    • the interpretation of multiplying a matrix by a vector,
    • the physical meaning of matrix-matrix multiplication,
    • the behavior of several special-type matrices, and
    • the visualization of the matrix transpose.

    In this story, I want to share my perspective on what lies beneath matrix inversion, why different formulas related to inversion are the way they are, and finally, why calculating the inverse can be done much more easily for matrices of several special types.

    Here are the definitions that I use throughout the stories of this series:

    • Matrices are denoted with uppercase (like ‘A’, ‘B’), while vectors and scalars are denoted with lowercase (like ‘x’, ‘y’ or ‘m’, ‘n’).
    • |x| – is the length of vector ‘x’,
    • AT – is the transpose of matrix ‘A’,
    • B-1 – is the inverse of matrix ‘B’.

    Definition of the inverse matrix

    From part 1 of this series – “matrix-vector multiplication” [1], we recall that a matrix “A”, when multiplied by a vector ‘x’ as “y = Ax”, can be treated as a transformation of the input vector ‘x’ into the output vector ‘y’. If so, then the inverse matrix A-1 should do the reverse transformation – it should transform vector ‘y’ back to ‘x’:

    \begin{equation*}
    x = A^{-1}y
    \end{equation*}

    Substituting “y = Ax” there gives us:

    \begin{equation*}
    x = A^{-1}y = A^{-1}(Ax) = (A^{-1}A)x
    \end{equation*}

    which means that the product of the original matrix and its inverse – A-1A, should be a matrix that applies no transformation at all to any input vector ‘x’. In other words:

    \begin{equation*}
    A^{-1}A = E
    \end{equation*}

    where “E” is the identity matrix.

    Concatenating the X-diagrams of A-1 and A yields the identity matrix E.

    The first question that may arise here is: is it always possible to reverse the effect of a certain matrix “A”? The answer is – it is possible only if no 2 different input vectors x1 and x2 are transformed by “A” into the same output vector ‘y’. In other words, the inverse matrix A-1 exists only if for every output vector ‘y’ there exists exactly one input vector ‘x’ which is transformed by “A” into it:

    \begin{equation*}
    y = Ax
    \end{equation*}

    Case 1: Several input vectors ‘x’ (red dots) are transformed into the same output vector ‘y’ (light blue dots). We can’t design an inverse matrix in this case, because for a certain vector ‘y’ the product “x = A-1y” would be ambiguous.
    Case 2: Every input vector ‘x’ (red dots) is transformed into a different output vector ‘y’ (light blue dots). The inverse matrix, which will do the reverse transformation “x = A-1y”, does exist.

    In this series, I don’t want to dive too deep into the formal part of definitions and proofs. Instead, I want to go over several cases where it is actually possible to invert the given matrix “A”, and we will see how the inverse matrix A-1 is calculated for each of those cases.
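    To make the definition concrete, here is a minimal numpy sketch (the matrix values are arbitrary, chosen only for illustration) showing that multiplying by A-1 undoes the transformation of A, and that A-1A is the identity matrix:

```python
import numpy as np

# An invertible 2x2 matrix "A" (values chosen arbitrarily for illustration)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

A_inv = np.linalg.inv(A)

x = np.array([4.0, -2.0])
y = A @ x            # transform x into y
x_back = A_inv @ y   # the inverse transformation restores x

print(np.allclose(A_inv @ A, np.eye(2)))  # True: A^{-1}A is the identity matrix E
print(np.allclose(x_back, x))             # True: x is recovered
```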


    Inverting chains of matrices

    An important formula related to the matrix inverse is:

    \begin{equation*}
    (AB)^{-1} = B^{-1}A^{-1}
    \end{equation*}

    which states that the inverse of a product of matrices is equal to the product of the inverse matrices, but in the reverse order. Let’s understand why the order of the matrices is reversed.

    What is the physical meaning of the inverse (AB)-1? It should be a matrix that undoes the effect of the matrix (AB). So if:

    \begin{equation*}
    y = (AB)x,
    \end{equation*}

    then we should have:

    \begin{equation*}
    x = (AB)^{-1}y.
    \end{equation*}

    Now, the transformation “y = (AB)x” goes in 2 steps: first, we do:

    \begin{equation*}
    Bx = t,
    \end{equation*}

    which gives an intermediate vector ‘t’, and then that ‘t’ is multiplied by “A”:

    \begin{equation*}
    y = At = A(Bx).
    \end{equation*}

    During the calculation of “y = (AB)x”, the input vector ‘x’ is first transformed by matrix “B”, producing an intermediate vector “t = Bx”, which is then transformed by matrix “A”, producing the final vector “y = A(Bx) = At”.

    So the matrix “A” acted on the vector after it was already acted on by “B”. In this case, to undo such a sequential effect, we should first undo the effect of “A” by multiplying ‘y’ by A-1, which gives us:

    \begin{equation*}
    A^{-1}y = A^{-1}(ABx) = (A^{-1}A)Bx = EBx = Bx = t,
    \end{equation*}

    … the intermediate vector ‘t’ produced a bit above.

    The product “A-1(AB)x = (A-1A)Bx = EBx = Bx = t”.
    Note that the vector ‘t’ participates here twice.

    Then, after getting back the intermediate vector ‘t’, to restore ‘x’ we should also undo the effect of matrix “B”. And that is done by multiplying ‘t’ by B-1:

    \begin{equation*}
    B^{-1}t = B^{-1}(Bx) = (B^{-1}B)x = Ex = x,
    \end{equation*}

    or, writing it all in an expanded way:

    \begin{equation*}
    x = B^{-1}(A^{-1}A)Bx = (B^{-1}A^{-1})(AB)x,
    \end{equation*}

    which explicitly shows that to undo the effect of the matrix (AB) we should use (B-1A-1).

    The product “(B-1A-1)(AB)x = B-1(A-1A)Bx = B-1EBx = B-1Bx = Ex = x”.
    Note that both vectors ‘x’ and ‘t’ participate here twice.

    This is why, in the inverse of a product of matrices, their order is reversed:

    \begin{equation*}
    (AB)^{-1} = B^{-1}A^{-1}
    \end{equation*}

    The same principle applies when we have more matrices in a chain, like:

    \begin{equation*}
    (ABC)^{-1} = C^{-1}B^{-1}A^{-1}
    \end{equation*}
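    This reversal rule is easy to check numerically. A small numpy sketch (using random matrices, which are invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)

# Three random square matrices (almost surely invertible)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))

# Inverse of a product equals the product of inverses in reverse order
lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)
print(np.allclose(lhs, rhs))  # True

# The same holds for longer chains: (ABC)^{-1} = C^{-1}B^{-1}A^{-1}
print(np.allclose(np.linalg.inv(A @ B @ C),
                  np.linalg.inv(C) @ np.linalg.inv(B) @ np.linalg.inv(A)))  # True
```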


    Inversion of several special matrices

    Now, with this notion of what lies beneath matrix inversion, let’s see how matrices of several special types are inverted.

    Inverse of a cyclic-shift matrix

    A cyclic-shift matrix is a matrix “V” which, when multiplied by an input vector ‘x’, produces an output vector “y = Vx” where all values of ‘x’ are cyclically shifted by some ‘k’ positions. To achieve that, the cyclic-shift matrix “V” has 2 lines of ‘1’s running parallel to its main diagonal, while all its other cells are ‘0’s.

    \begin{equation*}
    \begin{pmatrix}
    y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5
    \end{pmatrix}
    = y = Vx =
    \begin{bmatrix}
    0 & 0 & 1 & 0 & 0 \\
    0 & 0 & 0 & 1 & 0 \\
    0 & 0 & 0 & 0 & 1 \\
    1 & 0 & 0 & 0 & 0 \\
    0 & 1 & 0 & 0 & 0
    \end{bmatrix}
    \begin{pmatrix}
    x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5
    \end{pmatrix}
    =
    \begin{pmatrix}
    x_3 \\ x_4 \\ x_5 \\ x_1 \\ x_2
    \end{pmatrix}
    \end{equation*}

    The X-diagram of the presented 5×5 cyclic-shift matrix “V”. When applied to an input vector ‘x’, it cyclically shifts all its values up by 2 positions, producing the output vector ‘y’.

    Now, how should we undo the transformation of the cyclic-shift matrix “V”? Obviously, we should apply another cyclic-shift matrix V-1, which now cyclically shifts all the values of ‘y’ downwards by ‘k’ positions (remember, “V” was shifting all the values of ‘x’ upwards).

    \begin{equation*}
    \begin{pmatrix}
    x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5
    \end{pmatrix}
    = x = V^{-1}Vx =
    \begin{bmatrix}
    0 & 0 & 0 & 1 & 0 \\
    0 & 0 & 0 & 0 & 1 \\
    1 & 0 & 0 & 0 & 0 \\
    0 & 1 & 0 & 0 & 0 \\
    0 & 0 & 1 & 0 & 0
    \end{bmatrix}
    \begin{bmatrix}
    0 & 0 & 1 & 0 & 0 \\
    0 & 0 & 0 & 1 & 0 \\
    0 & 0 & 0 & 0 & 1 \\
    1 & 0 & 0 & 0 & 0 \\
    0 & 1 & 0 & 0 & 0
    \end{bmatrix}
    \begin{pmatrix}
    x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5
    \end{pmatrix}
    = V^{-1}y
    \end{equation*}

    The X-diagram of the product of two cyclic-shift matrices V-1V shows that every input value xi of vector ‘x’ ends up at the same position after being transformed as V-1Vx. For instance, the path of the value x4 is highlighted.

    This is why the inverse of a cyclic-shift matrix is another cyclic-shift matrix:

    \begin{equation*}
    V_1^{-1} = V_2
    \end{equation*}

    More than that, we can note that the X-diagram of V-1 is actually the horizontal flip of the X-diagram of “V”. And from the previous part of this series – “transpose of a matrix” [3], we recall that the horizontal flip of an X-diagram corresponds to the transpose of the matrix. This is why the inverse of a cyclic-shift matrix is equal to its transpose:

    \begin{equation*}
    V^{-1} = V^T
    \end{equation*}
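    A short numpy sketch of this fact, building the 5×5 shift-up-by-2 matrix from the example above:

```python
import numpy as np

n, k = 5, 2

# Cyclic-shift matrix V that shifts a vector's values up by k positions:
# row i of V picks out element (i + k) mod n of the input vector
V = np.zeros((n, n))
for i in range(n):
    V[i, (i + k) % n] = 1

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(V @ x)                               # [3. 4. 5. 1. 2.]
print(np.allclose(np.linalg.inv(V), V.T))  # True: V^{-1} = V^T
```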

    Inverse of an exchange matrix

    An exchange matrix, usually denoted by “J”, is a matrix which, when multiplied by an input vector ‘x’, produces an output vector ‘y’ containing all the values of ‘x’ but in reverse order. To achieve that, “J” has ‘1’s on its anti-diagonal, while all its other cells are ‘0’s.

    \begin{equation*}
    \begin{pmatrix}
    y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5
    \end{pmatrix}
    = y = Jx =
    \begin{bmatrix}
    0 & 0 & 0 & 0 & 1 \\
    0 & 0 & 0 & 1 & 0 \\
    0 & 0 & 1 & 0 & 0 \\
    0 & 1 & 0 & 0 & 0 \\
    1 & 0 & 0 & 0 & 0
    \end{bmatrix}
    \begin{pmatrix}
    x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5
    \end{pmatrix}
    =
    \begin{pmatrix}
    x_5 \\ x_4 \\ x_3 \\ x_2 \\ x_1
    \end{pmatrix}
    \end{equation*}

    The X-diagram of the exchange matrix “J” shows that all ‘n’ arrows (corresponding to the ‘n’ cells of the matrix with ‘1’s) simply flip the content of the input vector ‘x’. So the k’th-from-top value of ‘x’ becomes the k’th-from-bottom value of the output vector ‘y’.

    Obviously, to undo such a transformation, we should apply yet another exchange matrix.

    \begin{equation*}
    \begin{pmatrix}
    x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5
    \end{pmatrix}
    = x = J^{-1}Jx =
    \begin{bmatrix}
    0 & 0 & 0 & 0 & 1 \\
    0 & 0 & 0 & 1 & 0 \\
    0 & 0 & 1 & 0 & 0 \\
    0 & 1 & 0 & 0 & 0 \\
    1 & 0 & 0 & 0 & 0
    \end{bmatrix}
    \begin{bmatrix}
    0 & 0 & 0 & 0 & 1 \\
    0 & 0 & 0 & 1 & 0 \\
    0 & 0 & 1 & 0 & 0 \\
    0 & 1 & 0 & 0 & 0 \\
    1 & 0 & 0 & 0 & 0
    \end{bmatrix}
    \begin{pmatrix}
    x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5
    \end{pmatrix}
    = J^{-1}y
    \end{equation*}

    After 2 exchange matrices “JJ” are applied sequentially to the input vector ‘x’, every k’th-from-the-top value returns to its original position, so the whole vector ‘x’ comes back to its original state. For instance, the path of the value “x2” is highlighted.

    This is why the inverse of an exchange matrix is the exchange matrix itself:

    \begin{equation*}
    J^{-1} = J
    \end{equation*}
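    This self-inverse property is easy to verify with numpy (the vector values are arbitrary):

```python
import numpy as np

n = 5
# Exchange matrix J: ones on the anti-diagonal (flip the identity left-right)
J = np.fliplr(np.eye(n))

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(J @ x)                           # [5. 4. 3. 2. 1.]
print(np.allclose(J @ J, np.eye(n)))   # True: J is its own inverse
```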

    Inverse of a permutation matrix

    A permutation matrix is a matrix “P” which, when multiplied by an input vector ‘x’, rearranges its values into a different order. To achieve that, an n×n permutation matrix “P” has ‘n’ 1(s), arranged in such a way that no two 1(s) appear in the same row or the same column. All other cells of “P” are 0(s).

    \begin{equation*}
    \begin{pmatrix}
    y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5
    \end{pmatrix}
    = y = Px =
    \begin{bmatrix}
    0 & 0 & 1 & 0 & 0 \\
    1 & 0 & 0 & 0 & 0 \\
    0 & 0 & 0 & 1 & 0 \\
    0 & 0 & 0 & 0 & 1 \\
    0 & 1 & 0 & 0 & 0
    \end{bmatrix}
    \begin{pmatrix}
    x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5
    \end{pmatrix}
    =
    \begin{pmatrix}
    x_3 \\ x_1 \\ x_4 \\ x_5 \\ x_2
    \end{pmatrix}
    \end{equation*}

    The X-diagram of the presented permutation matrix “P” shows that all ‘n’ input values xi are rearranged when producing the output vector ‘y’.

    Now, what kind of matrix should the inverse of a permutation matrix be? In other words, how do we undo the transformation of a permutation matrix “P”? Obviously, we need to do another rearrangement, one that acts in the reverse order. So, for example, if the input value x3 was moved by “P” to the output value y1, then in the inverse permutation matrix P-1, the input value y1 should be moved back to the output value x3. This means that when drawing the X-diagrams of the permutation matrices “P-1” and “P”, one will be the reflection of the other.

    The X-diagram of the product matrix P-1P. We see that the input value ‘x2’ is placed by “P” at the intermediate value ‘y5’, and later is placed back by P-1 at the original position of ‘x2’. The same holds for every other input value ‘xi’.

    Similarly to the case of the exchange matrix, in the case of a permutation matrix we can visually note that the X-diagrams of “P” and P-1 differ only by a horizontal flip. That is why the inverse of any permutation matrix “P” is equal to its transpose:

    \begin{equation*}
    P^{-1} = P^T
    \end{equation*}

    \begin{equation*}
    \begin{pmatrix}
    x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5
    \end{pmatrix}
    = x = P^{-1}Px =
    \begin{bmatrix}
    0 & 1 & 0 & 0 & 0 \\
    0 & 0 & 0 & 0 & 1 \\
    1 & 0 & 0 & 0 & 0 \\
    0 & 0 & 1 & 0 & 0 \\
    0 & 0 & 0 & 1 & 0
    \end{bmatrix}
    \begin{bmatrix}
    0 & 0 & 1 & 0 & 0 \\
    1 & 0 & 0 & 0 & 0 \\
    0 & 0 & 0 & 1 & 0 \\
    0 & 0 & 0 & 0 & 1 \\
    0 & 1 & 0 & 0 & 0
    \end{bmatrix}
    \begin{pmatrix}
    x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5
    \end{pmatrix}
    = P^{-1}y
    \end{equation*}
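    A numpy check using the 5×5 permutation matrix from the example above:

```python
import numpy as np

# The 5x5 permutation matrix "P" from the example: y = (x3, x1, x4, x5, x2)
P = np.array([
    [0, 0, 1, 0, 0],
    [1, 0, 0, 0, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 0, 1],
    [0, 1, 0, 0, 0],
], dtype=float)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(P @ x)                               # [3. 1. 4. 5. 2.]
print(np.allclose(np.linalg.inv(P), P.T))  # True: P^{-1} = P^T
print(np.allclose(P.T @ (P @ x), x))       # True: the transpose undoes the rearrangement
```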

    Inverse of a rotation matrix

    A rotation matrix on the 2D plane is a matrix “R” which, when multiplied by a vector (x1, x2), rotates the point “x = (x1, x2)” counter-clockwise by a certain angle “θ” around the origin. Its formula is:

    \begin{equation*}
    \begin{pmatrix}
    y_1 \\ y_2
    \end{pmatrix}
    = y = Rx =
    \begin{bmatrix}
    \cos(\theta) & -\sin(\theta) \\
    \sin(\theta) & \phantom{+}\cos(\theta)
    \end{bmatrix}
    \begin{pmatrix}
    x_1 \\ x_2
    \end{pmatrix}
    \end{equation*}

    A rotation matrix acts on any point by rotating it by the angle “θ”, while preserving its distance from the origin. The original points are shown in red, while the rotated points are the blue ones.

    Now, what should the inverse of a rotation matrix be? How can we undo the rotation produced by a matrix “R”? Obviously, it should be another rotation matrix, this time with the angle “-θ” (or “360°-θ”):

    \begin{equation*}
    R^{-1} =
    \begin{bmatrix}
    \cos(-\theta) & -\sin(-\theta) \\
    \sin(-\theta) & \phantom{+}\cos(-\theta)
    \end{bmatrix}
    =
    \begin{bmatrix}
    \phantom{+}\cos(\theta) & \sin(\theta) \\
    -\sin(\theta) & \cos(\theta)
    \end{bmatrix}
    = R^T
    \end{equation*}

    This is why the inverse of a rotation matrix is another rotation matrix. We also see that the inverse R-1 is equal to the transpose of the original matrix “R”.
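    A quick numpy confirmation, using an arbitrary angle of 30° for illustration:

```python
import numpy as np

theta = np.deg2rad(30)  # an arbitrary rotation angle for illustration
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = np.array([1.0, 0.0])
y = R @ x                                  # x rotated counter-clockwise by 30 degrees

print(np.allclose(np.linalg.inv(R), R.T))  # True: R^{-1} = R^T
print(np.allclose(R.T @ y, x))             # True: rotating back restores x
print(np.isclose(np.linalg.norm(y), np.linalg.norm(x)))  # True: distance from origin preserved
```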

    Inverse of a triangular matrix

    An upper-triangular matrix is a sq. matrix that has zeros under its diagonal. Due to that, in its X-diagram, there are not any arrows directed downwards:

    A 3×3 upper-triangular matrix and its X-diagram.

    The horizontal arrows correspond to cells of the diagonal, whereas the arrows which are directed upwards correspond to the cells above the diagonal.

    Equally, the lower-triangular matrix is outlined, which has zeroes above its principal diagonal. On this article, we are going to focus solely on upper-triangular matrices, as for lower-triangular ones, inversion is carried out in a similar manner.

    For simplicity, let’s at first tackle inverting a 2×2-sized upper-triangular matrix ‘A‘.

    The two×2-sized upper-triangular matrix.

    As soon as ‘A‘ is multiplied by an enter vector ‘x‘, the end result vector “y = Ax” has the next kind:

    \begin{equation*}
    y =
    \begin{pmatrix}
    y_1 \\ y_2
    \end{pmatrix}
    =
    \begin{bmatrix}
    a_{1,1} & a_{1,2} \\
    0 & a_{2,2}
    \end{bmatrix}
    \begin{pmatrix}
    x_1 \\ x_2
    \end{pmatrix}
    =
    \begin{pmatrix}
    a_{1,1}x_1 + a_{1,2}x_2 \\
    a_{2,2}x_2
    \end{pmatrix}
    \end{equation*}

    Now, when calculating the inverse matrix A-1, we want it to act in the reverse order:

    Given the values (y1, y2), the matrix A-1 should restore the original values (x1, x2).

    How should we restore (x1, x2) from (y1, y2)? The first and simplest step is to restore x2 using only y2, because y2 was originally affected only by x2. We don’t need the value of y1 for that:

    To restore ‘x2’, we need only the value of ‘y2’.

    Next, how should we restore x1? This time we can’t use y1 alone, because the value “y1 = a1,1x1 + a1,2x2” is a mixture of x1 and x2. But we can restore x1 by using both y1 and y2 properly: y2 helps to filter out the effect of x2, so the pure value of x1 can be recovered:

    To restore ‘x1’, we need the values of both ‘y1’ and ‘y2’.

    We see now that the inverse A-1 of the upper-triangular matrix “A” is also an upper-triangular matrix.

    What about triangular matrices of larger sizes? Let’s take a 3×3 matrix this time and find its inverse analytically.

    X-diagram of a 3×3 upper-triangular matrix ‘A’.

    The values of the output vector ‘y’ are now obtained from ‘x’ in the following way:

    \begin{equation*}
    y =
    \begin{pmatrix}
    y_1 \\ y_2 \\ y_3
    \end{pmatrix}
    = Ax =
    \begin{bmatrix}
    a_{1,1} & a_{1,2} & a_{1,3} \\
    0 & a_{2,2} & a_{2,3} \\
    0 & 0 & a_{3,3}
    \end{bmatrix}
    \begin{pmatrix}
    x_1 \\ x_2 \\ x_3
    \end{pmatrix}
    =
    \begin{pmatrix}
    a_{1,1}x_1 + a_{1,2}x_2 + a_{1,3}x_3 \\
    a_{2,2}x_2 + a_{2,3}x_3 \\
    a_{3,3}x_3
    \end{pmatrix}
    \end{equation*}

    As we are interested in constructing the inverse matrix A-1, our goal is to find (x1, x2, x3), given the values of (y1, y2, y3):

    \begin{equation*}
    \begin{pmatrix}
    x_1 \\ x_2 \\ x_3
    \end{pmatrix}
    = A^{-1}y =
    \begin{bmatrix}
    \text{?} & \text{?} & \text{?} \\
    \text{?} & \text{?} & \text{?} \\
    \text{?} & \text{?} & \text{?}
    \end{bmatrix}
    \begin{pmatrix}
    y_1 \\ y_2 \\ y_3
    \end{pmatrix}
    \end{equation*}

    In other words, we must solve the system of linear equations mentioned above.

    Doing that will first restore the value of x3 as:

    \begin{equation*}
    y_3 = a_{3,3}x_3, \hspace{1cm} x_3 = \frac{1}{a_{3,3}} y_3
    \end{equation*}

    which clarifies the cells of the last row of A-1:

    \begin{equation*}
    \begin{pmatrix}
    x_1 \\ x_2 \\ x_3
    \end{pmatrix}
    = A^{-1}y =
    \begin{bmatrix}
    \text{?} & \text{?} & \text{?} \\
    \text{?} & \text{?} & \text{?} \\
    0 & 0 & \frac{1}{a_{3,3}}
    \end{bmatrix}
    \begin{pmatrix}
    y_1 \\ y_2 \\ y_3
    \end{pmatrix}
    \end{equation*}

    Having x3 figured out, we can bring all its occurrences to the left side of the system:

    \begin{equation*}
    \begin{pmatrix}
    y_1 - a_{1,3}x_3 \\
    y_2 - a_{2,3}x_3 \\
    y_3 - a_{3,3}x_3
    \end{pmatrix}
    =
    \begin{pmatrix}
    a_{1,1}x_1 + a_{1,2}x_2 \\
    a_{2,2}x_2 \\
    0
    \end{pmatrix}
    \end{equation*}

    which allows us to calculate x2 as:

    \begin{equation*}
    y_2 - a_{2,3}x_3 = a_{2,2}x_2, \hspace{1cm}
    x_2 = \frac{y_2 - a_{2,3}x_3}{a_{2,2}} = \frac{y_2 - (a_{2,3}/a_{3,3})y_3}{a_{2,2}}
    \end{equation*}

    This already clarifies the cells of the second row of A-1:

    \begin{equation*}
    \begin{pmatrix}
    x_1 \\ x_2 \\ x_3
    \end{pmatrix}
    = A^{-1}y =
    \begin{bmatrix}
    \text{?} & \text{?} & \text{?} \\[0.2cm]
    0 & \frac{1}{a_{2,2}} & -\frac{a_{2,3}}{a_{2,2}a_{3,3}} \\[0.2cm]
    0 & 0 & \frac{1}{a_{3,3}}
    \end{bmatrix}
    \begin{pmatrix}
    y_1 \\ y_2 \\ y_3
    \end{pmatrix}
    \end{equation*}

    Finally, having the values of x3 and x2 figured out, we can do the same trick of moving x2 to the left side of the system:

    \begin{equation*}
    \begin{pmatrix}
    y_1 - a_{1,3}x_3 - a_{1,2}x_2 \\
    y_2 - a_{2,3}x_3 - a_{2,2}x_2 \\
    y_3 - a_{3,3}x_3
    \end{pmatrix}
    =
    \begin{pmatrix}
    a_{1,1}x_1 \\
    0 \\
    0
    \end{pmatrix}
    \end{equation*}

    from which x1 is derived as:

    \begin{equation*}
    \begin{aligned}
    & y_1 - a_{1,3}x_3 - a_{1,2}x_2 = a_{1,1}x_1, \\
    & x_1
    = \frac{y_1 - a_{1,3}x_3 - a_{1,2}x_2}{a_{1,1}}
    = \frac{y_1 - (a_{1,3}/a_{3,3})y_3 - a_{1,2}\frac{y_2 - (a_{2,3}/a_{3,3})y_3}{a_{2,2}}}{a_{1,1}}
    \end{aligned}
    \end{equation*}

    so the first row of the matrix A-1 is also clarified:

    \begin{equation*}
    \begin{pmatrix}
    x_1 \\ x_2 \\ x_3
    \end{pmatrix}
    = A^{-1}y =
    \begin{bmatrix}
    \frac{1}{a_{1,1}} & -\frac{a_{1,2}}{a_{1,1}a_{2,2}} & \frac{a_{1,2}a_{2,3} - a_{1,3}a_{2,2}}{a_{1,1}a_{2,2}a_{3,3}} \\[0.2cm]
    0 & \frac{1}{a_{2,2}} & -\frac{a_{2,3}}{a_{2,2}a_{3,3}} \\[0.2cm]
    0 & 0 & \frac{1}{a_{3,3}}
    \end{bmatrix}
    \begin{pmatrix}
    y_1 \\ y_2 \\ y_3
    \end{pmatrix}
    \end{equation*}

    After deriving A-1 analytically, we can see that it is also an upper-triangular matrix.

    Paying attention to the sequence of actions we used here to calculate A-1, we can now say for sure that the inverse of any upper-triangular matrix ‘A’ is also an upper-triangular matrix:

    The inverse of a 3×3 upper-triangular matrix ‘A’ is also an upper-triangular matrix.

    A similar argument shows that the inverse of a lower-triangular matrix is another lower-triangular matrix.
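    The restoration order used above – x3 first, then x2, then x1 – is exactly back substitution. Here is a minimal numpy sketch of it (the function name and the matrix values are my own, for illustration):

```python
import numpy as np

def upper_triangular_solve(A, y):
    """Restore x from y = Ax for upper-triangular A,
    working from the last row of A upwards (back substitution)."""
    n = A.shape[0]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # subtract the influence of the already-restored values x_{i+1..n}
        x[i] = (y[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# An example 3x3 upper-triangular matrix (values chosen arbitrarily)
A = np.array([[2.0, 1.0, 3.0],
              [0.0, 4.0, 5.0],
              [0.0, 0.0, 6.0]])

x = np.array([1.0, 2.0, 3.0])
y = A @ x
print(np.allclose(upper_triangular_solve(A, y), x))  # True: x is restored
# The inverse of an upper-triangular matrix is upper-triangular too
print(np.allclose(np.triu(np.linalg.inv(A)), np.linalg.inv(A)))  # True
```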


    A numerical example of inverting a chain of matrices

    Let’s take another look at why, during the inversion of a chain of matrices, their order is reversed. Recall the formula:

    \begin{equation*}
    (AB)^{-1} = B^{-1}A^{-1}
    \end{equation*}

    This time, for both ‘A’ and ‘B’, we will take matrices of certain types. The first matrix “A = V” will be a cyclic-shift matrix:

    The matrix ‘V’ performs a cyclic shift of the values of the input vector ‘x’ by 1 position upwards.

    Let’s recall here that to restore the input vector ‘x’, the inverse V-1 should do the opposite – cyclically shift the values of the argument vector ‘y’ downwards:

    Concatenating V-1V results in the unchanged input vector ‘x’.

    The second matrix “B = S” will be a diagonal matrix with different values on its main diagonal:

    The 4×4 matrix ‘S’ doubles only the first 2 values of the input vector ‘x’.

    The inverse S-1 of such a scale matrix, to restore the original vector ‘x’, must halve only the first 2 values of its argument vector ‘y’:

    Concatenating S-1S results in the unchanged input vector ‘x’.

    Now, what kind of behavior will the product matrix “VS” have? When calculating “y = VSx”, it will double only the first 2 values of the input vector ‘x’, and then cyclically shift the entire result upwards.

    The product matrix “V*S” doubles only the first 2 values of the input vector ‘x’, and cyclically shifts the result by 1 position upwards.

    We already know that once the output vector “y = VSx” is calculated, to reverse the effect of the product matrix “VS” and restore the input vector ‘x’, we should do:

    \begin{equation*}
    x = (VS)^{-1}y = S^{-1}V^{-1}y
    \end{equation*}

    In other words, the order of the matrices ‘V’ and ‘S’ should be reversed during inversion:

    The inverse of the product matrix “VS” is equal to “S-1V-1”. All values of the input vector ‘x’ on the right side are restored on the left side.

    And what will happen if we try to invert the effect of “VS” in an improper way, without reversing the order of the matrices, assuming that V-1S-1 is what should be used for it?

    Trying to invert the matrix “VS” using V-1S-1 will not result in the identity matrix “E”.

    We see that the original vector (x1, x2, x3, x4) from the right side is not restored on the left side now. Instead, we have the vector (2x1, x2, 0.5x3, x4) there. One reason for this is that the value x3 should not be halved on its path, but it actually gets halved: at the moment when matrix S-1 is applied, x3 appears at the second position from the top, which halves it. The same goes for the path of the value x1. All this results in an altered vector on the left side.
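    The same experiment in numpy, with the 4×4 matrices described above and an arbitrary input vector:

```python
import numpy as np

# V: 4x4 cyclic shift up by 1 position; S: doubles only the first 2 values
V = np.zeros((4, 4))
for i in range(4):
    V[i, (i + 1) % 4] = 1
S = np.diag([2.0, 2.0, 1.0, 1.0])

V_inv, S_inv = np.linalg.inv(V), np.linalg.inv(S)
x = np.array([1.0, 2.0, 3.0, 4.0])
y = V @ S @ x

# Correct order: (VS)^{-1} = S^{-1}V^{-1} restores x
print((S_inv @ V_inv @ y).tolist())  # [1.0, 2.0, 3.0, 4.0]
# Wrong order: V^{-1}S^{-1} gives the altered vector (2x1, x2, 0.5x3, x4)
print((V_inv @ S_inv @ y).tolist())  # [2.0, 2.0, 1.5, 4.0]
```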


    Conclusion

    In this story, we have looked at the matrix inversion operation A-1 as something that undoes the transformation of the given matrix “A”. We have seen why inverting a chain of matrices like (ABC)-1 actually reverses the order of multiplication, resulting in C-1B-1A-1. Also, we got a visual perspective on why inverting several special types of matrices results in another matrix of the same type.

    Thanks for reading!

    This is probably the last part of my “Understanding Matrices” series. I hope you enjoyed reading all 4 parts! If so, feel free to follow me on LinkedIn, as hopefully other articles will be coming soon, and I will post updates there!


    My gratitude to:
    – Asya Papyan, for the precise design of all the illustrations used ( behance.net/asyapapyan ).
    – Roza Galstyan, for a careful review of the draft and helpful suggestions ( linkedin.com/in/roza-galstyan-a54a8b352/ ).

    If you enjoyed reading this story, feel free to connect with me on LinkedIn ( linkedin.com/in/tigran-hayrapetyan-cs/ ).

    All images used, unless otherwise noted, are designed by request of the author.


    References:

    [1] – Understanding matrices | Part 1: Matrix-Vector Multiplication

    [2] – Understanding matrices | Part 2: Matrix-Matrix Multiplication

    [3] – Understanding matrices | Part 3: Matrix Transpose


