Minor (linear algebra)

This article is about a concept in linear algebra. For the unrelated concept of "minor" in graph theory, see Minor (graph theory).

In linear algebra, a minor of a matrix A is the determinant of some smaller square matrix, cut down from A by removing one or more of its rows and columns. Minors obtained by removing just one row and one column from square matrices (first minors) are required for calculating matrix cofactors, which in turn are useful for computing both the determinant and inverse of square matrices.

Definition and illustration

First minors

If A is a square matrix, then the minor of the entry in the i-th row and j-th column (also called the (i,j) minor, or a first minor[1]) is the determinant of the submatrix formed by deleting the i-th row and j-th column. This number is often denoted Mi,j. The (i,j) cofactor is obtained by multiplying the minor by (-1)^{i+j}.

To illustrate these definitions, consider the following 3 × 3 matrix:

\begin{bmatrix}
\,\,\,1 & 4 & 7 \\
\,\,\,3 & 0 & 5 \\
-1 & 9 & \!11 \\
\end{bmatrix}

To compute the minor M_{23} and the cofactor C_{23}, we find the determinant of the above matrix with row 2 and column 3 removed.

 M_{23} = \det \begin{bmatrix}
\,\,1 & 4 & \Box\, \\
\,\Box & \Box & \Box\, \\
-1 & 9 & \Box\, \\
\end{bmatrix}= \det \begin{bmatrix}
\,\,\,1 & 4\, \\
-1 & 9\, \\
\end{bmatrix} = (9-(-4)) = 13

So the cofactor of the (2,3) entry is

\ C_{23} = (-1)^{2+3}(M_{23}) = -13.
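
For illustration, the same computation can be carried out numerically. The following is a minimal sketch assuming NumPy is available; the helper first_minor is invented here purely for this example, and the indices in the code are 0-based.

    import numpy as np

    A = np.array([[ 1, 4,  7],
                  [ 3, 0,  5],
                  [-1, 9, 11]])

    def first_minor(A, i, j):
        # illustrative helper, not a library function:
        # delete row i and column j (0-based), then take the determinant of what remains
        sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
        return np.linalg.det(sub)

    M23 = first_minor(A, 1, 2)       # row 2, column 3 in the 1-based notation above
    C23 = (-1) ** (2 + 3) * M23
    print(M23, C23)                  # 13.0 -13.0, up to floating-point rounding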

General definition

Let A be an m × n matrix and k an integer with 0 < k ≤ m and k ≤ n. A k × k minor of A, also called a minor determinant of order k of A or, if m = n, the (n − k)th minor determinant of A (the word "determinant" is often omitted, and the word "order" is sometimes replaced by "degree"), is the determinant of a k × k matrix obtained from A by deleting m − k rows and n − k columns. Sometimes the term is used to refer to the k × k matrix obtained from A as above (by deleting m − k rows and n − k columns), but this matrix should be referred to as a (square) submatrix of A, leaving the term "minor" to refer to the determinant of this matrix. For a matrix A as above, there are a total of {m \choose k} \cdot {n \choose k} minors of size k × k. The minor of order zero is often defined to be 1. For a square matrix, the zeroth minor determinant is just the determinant of the matrix.[2][3]

Let  1 \leq i_1 < i_2 < \ldots < i_k \leq m and  1 \leq j_1 < j_2 < \ldots < j_k \leq n be ordered sequences of indices (in natural order, as is always assumed when talking about minors unless otherwise stated); call them  I and  J , respectively. The minor  \det \left( (A_{i_p, j_q})_{p,q = 1, \ldots, k} \right)  corresponding to these choices of indices is denoted  \det_{I,J} A ,  [A]_{I,J} ,  M_{I,J} ,  M_{i_1, i_2, \ldots, i_k, j_1, j_2, \ldots, j_k} , or  M_{(i),(j)} (where the  (i) denotes the sequence of indices  I , etc.), depending on the source. Two conventions are in use in the literature: by the minor associated to the ordered sequences of indices I and J, some authors[4] mean the determinant of the matrix that is formed as above, by taking the elements of the original matrix from the rows whose indices are in I and the columns whose indices are in J, whereas some other authors mean the determinant of the matrix formed from the original matrix by deleting the rows in I and the columns in J.[5] Which convention is used should always be checked from the source in question. In this article, we use the inclusive definition of choosing the elements from the rows of I and the columns of J. The exception is the first minor, or the (i,j)-minor, described above; in that case, the exclusive notation  M_{i,j} = \det \left( \left( A_{p,q} \right)_{p \neq i, q \neq j} \right) is standard everywhere in the literature and is used in this article as well.
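
The two conventions can be made concrete in code. Below is a minimal sketch assuming NumPy; the names minor_inclusive and minor_exclusive are invented here for the comparison. On complementary index sets the two notions give the same number.

    import numpy as np

    def minor_inclusive(A, I, J):
        # illustrative helper: determinant of the submatrix built from
        # the rows in I and the columns in J
        return np.linalg.det(np.asarray(A)[np.ix_(I, J)])

    def minor_exclusive(A, I, J):
        # illustrative helper: determinant of the submatrix left after
        # deleting the rows in I and the columns in J
        A = np.asarray(A)
        rows = [r for r in range(A.shape[0]) if r not in I]
        cols = [c for c in range(A.shape[1]) if c not in J]
        return np.linalg.det(A[np.ix_(rows, cols)])

    A = np.arange(1., 17.).reshape(4, 4)
    assert np.isclose(minor_inclusive(A, (0, 1), (2, 3)),
                      minor_exclusive(A, (2, 3), (0, 1)))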

Complement

The complement, B_{ijk\ldots,pqr\ldots}, of a minor, M_{ijk\ldots,pqr\ldots}, of a square matrix, A, is formed by the determinant of the matrix A from which all the rows (ijk...) and columns (pqr...) associated with M_{ijk\ldots,pqr\ldots} have been removed. The complement of the first minor of an element a_{ij} is merely that element.[6]
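
In code, the complement is simply the determinant of whatever remains after the deletion. A small sketch assuming NumPy, reusing the 3 × 3 example from above; the helper complement is invented for this illustration.

    import numpy as np

    A = np.array([[ 1, 4,  7],
                  [ 3, 0,  5],
                  [-1, 9, 11]])

    def complement(A, rows, cols):
        # illustrative helper: determinant of the submatrix left after
        # deleting the given rows and columns
        keep_r = [r for r in range(A.shape[0]) if r not in rows]
        keep_c = [c for c in range(A.shape[1]) if c not in cols]
        return np.linalg.det(A[np.ix_(keep_r, keep_c)])

    # the minor built from the first two rows and columns has, as its
    # complement, the single remaining entry a_33 = 11
    print(complement(A, (0, 1), (0, 1)))   # 11.0, up to floating-point rounding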

Applications of minors and cofactors

Cofactor expansion of the determinant

Main article: Laplace expansion

The cofactors feature prominently in Laplace's formula for the expansion of determinants, which is a method of computing larger determinants in terms of smaller ones. Given the n\times n matrix \mathbf A = (a_{ij}), the determinant of A, denoted det(A), can be written as the sum of the cofactors of any row or column of the matrix multiplied by the entries that generated them. In other words, the cofactor expansion along the jth column gives:

\ \det(\mathbf A) = a_{1j}C_{1j} + a_{2j}C_{2j} + a_{3j}C_{3j} + \cdots + a_{nj}C_{nj} = \sum_{i=1}^{n} a_{ij} C_{ij}

The cofactor expansion along the ith row gives:

\ \det(\mathbf A) = a_{i1}C_{i1} + a_{i2}C_{i2} + a_{i3}C_{i3} + \cdots + a_{in}C_{in} = \sum_{j=1}^{n} a_{ij} C_{ij}
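
As a concrete rendering of these formulas, the following sketch (assuming NumPy) computes a determinant by recursive cofactor expansion along the first row. It runs in factorial time and is meant purely as an illustration, not as a practical algorithm.

    import numpy as np

    def det_by_cofactors(A):
        # illustrative helper: Laplace expansion along the first row,
        # det(A) = sum_j a_{1j} C_{1j}
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        if n == 1:
            return A[0, 0]
        total = 0.0
        for j in range(n):
            sub = np.delete(np.delete(A, 0, axis=0), j, axis=1)   # minor M_{1j}
            total += (-1) ** j * A[0, j] * det_by_cofactors(sub)  # cofactor sign, 0-based
        return total

    A = [[1, 4, 7], [3, 0, 5], [-1, 9, 11]]
    assert np.isclose(det_by_cofactors(A), np.linalg.det(np.asarray(A, dtype=float)))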

Inverse of a matrix

Main article: Cramer's rule

One can write down the inverse of an invertible matrix by computing its cofactors and using Cramer's rule, as follows. The matrix formed by all of the cofactors of a square matrix A is called the cofactor matrix (also called the matrix of cofactors, or the comatrix):

\mathbf C=\begin{bmatrix}
    C_{11}  & C_{12} & \cdots &   C_{1n}   \\
    C_{21}  & C_{22} & \cdots &   C_{2n}   \\
  \vdots & \vdots & \ddots & \vdots \\ 
    C_{n1}  & C_{n2} & \cdots &  C_{nn}
\end{bmatrix}

Then the inverse of A is the transpose of the cofactor matrix times the reciprocal of the determinant of A:

\mathbf A^{-1} = \frac{1}{\operatorname{det}(\mathbf A)} \mathbf C^\mathsf{T}.

The transpose of the cofactor matrix is called the adjugate matrix (also called the classical adjoint) of A.
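
The construction translates directly into code. The following sketch assumes NumPy; the helper inverse_via_adjugate is named here only for the example, and the method is numerically far less efficient than standard elimination, so it illustrates the formula rather than providing a practical routine.

    import numpy as np

    def inverse_via_adjugate(A):
        # illustrative helper: build the cofactor matrix C entry by entry,
        # then A^{-1} = C^T / det(A)
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        C = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
                C[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
        return C.T / np.linalg.det(A)

    A = np.array([[1., 4., 7.], [3., 0., 5.], [-1., 9., 11.]])
    assert np.allclose(inverse_via_adjugate(A), np.linalg.inv(A))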

The above formula can be generalized as follows: let  1 \leq i_1 < i_2 < \ldots < i_k \leq n and  1 \leq j_1 < j_2 < \ldots < j_k \leq n be ordered sequences (in natural order) of indices (here A is an  n \times n matrix). Then

[\mathbf A^{-1}]_{I,J} = \pm\frac{[\mathbf A]_{J',I'}}{\det \mathbf A},

where I', J' denote the ordered sequences of indices (in natural order of magnitude, as above) complementary to I, J, so that every index 1,\ldots,n appears exactly once in either  I or  I' , but not in both (and similarly for  J and  J' ), and  [\mathbf A]_{I,J} denotes the determinant of the submatrix of A formed by choosing the rows with index in I and the columns with index in J. Also,  [\mathbf A]_{I,J} = \det \left( (A_{i_p, j_q})_{p,q = 1, \ldots, k} \right) . A simple proof can be given using the wedge product. Indeed,

[\mathbf A^{-1}]_{I,J}(e_1\wedge\ldots \wedge e_n) = \pm(\mathbf A^{-1}e_{j_1})\wedge \ldots \wedge(\mathbf A^{-1}e_{j_k})\wedge e_{i'_1}\wedge\ldots \wedge e_{i'_{n-k}},

where e_1,\ldots,e_n are the basis vectors. Acting by \mathbf A on both sides, one gets

[\mathbf A^{-1}]_{I,J}\det \mathbf A (e_1\wedge\ldots \wedge e_n) = \pm (e_{j_1})\wedge \ldots \wedge(e_{j_k})\wedge (\mathbf A e_{i'_1})\wedge\ldots \wedge (\mathbf A e_{i'_{n-k}})=\pm [\mathbf A]_{J',I'}(e_1\wedge\ldots \wedge e_n).

The sign can be worked out to be (-1)^{ \sum_{s=1}^{k} i_s - \sum_{s=1}^{k} j_s}, so the sign is determined by the sums of the elements of I and J.
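
The identity, including the sign, is easy to check numerically. The following sketch assumes NumPy; the index sets are written 0-based, which leaves the parity of the exponent unchanged, so the same sign formula applies.

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 5, 2
    A = rng.standard_normal((n, n))
    Ainv = np.linalg.inv(A)

    I, J = (0, 2), (1, 4)                         # 0-based index sets of size k
    Ic = [r for r in range(n) if r not in I]      # complementary indices I'
    Jc = [c for c in range(n) if c not in J]      # complementary indices J'

    sign = (-1) ** (sum(I) + sum(J))              # same parity as the 1-based formula
    lhs = np.linalg.det(Ainv[np.ix_(I, J)])
    rhs = sign * np.linalg.det(A[np.ix_(Jc, Ic)]) / np.linalg.det(A)
    assert np.isclose(lhs, rhs)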

Other applications

If A is an m × n matrix with real entries (or entries from any other field) and rank r, then there exists at least one non-zero r × r minor of A, while all larger minors are zero.
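
This characterization is straightforward to test numerically. A small sketch assuming NumPy; since the arithmetic is in floating point, exact zero tests are replaced by a tolerance.

    import numpy as np
    from itertools import combinations

    def minors(M, k):
        # illustrative helper: all k x k minors of M, one for each
        # choice of k rows and k columns
        m, n = M.shape
        return [np.linalg.det(M[np.ix_(r, c)])
                for r in combinations(range(m), k)
                for c in combinations(range(n), k)]

    # a 3 x 4 matrix of rank 2: the third row is the sum of the first two
    A = np.array([[1., 2., 3., 4.],
                  [0., 1., 1., 0.],
                  [1., 3., 4., 4.]])
    assert any(abs(d) > 1e-9 for d in minors(A, 2))   # some 2 x 2 minor is non-zero
    assert all(abs(d) < 1e-9 for d in minors(A, 3))   # every 3 x 3 minor vanishes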

We will use the following notation for minors: if A is an m × n matrix, I is a subset of {1,...,m} with k elements, and J is a subset of {1,...,n} with k elements, then we write [A]_{I,J} for the k × k minor of A that corresponds to the rows with index in I and the columns with index in J.

Both the formula for ordinary matrix multiplication and the Cauchy-Binet formula for the determinant of the product of two matrices are special cases of the following general statement about the minors of a product of two matrices. Suppose that A is an m × n matrix, B is an n × p matrix, I is a subset of {1,...,m} with k elements and J is a subset of {1,...,p} with k elements. Then

[\mathbf{AB}]_{I,J} = \sum_{K} [\mathbf{A}]_{I,K} [\mathbf{B}]_{K,J}\,

where the sum extends over all subsets K of {1,...,n} with k elements. This formula is a straightforward extension of the Cauchy-Binet formula.
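
The statement can be verified numerically for random matrices. A minimal sketch assuming NumPy; the index sets I, J, and K are written 0-based.

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(1)
    m, n, p, k = 3, 4, 3, 2
    A = rng.standard_normal((m, n))
    B = rng.standard_normal((n, p))
    I, J = (0, 2), (1, 2)                 # row and column index sets, each of size k

    lhs = np.linalg.det((A @ B)[np.ix_(I, J)])
    rhs = sum(np.linalg.det(A[np.ix_(I, K)]) * np.linalg.det(B[np.ix_(K, J)])
              for K in combinations(range(n), k))
    assert np.isclose(lhs, rhs)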

Multilinear algebra approach

A more systematic, algebraic treatment of the minor concept is given in multilinear algebra, using the wedge product: the k-minors of a matrix are the entries in the kth exterior power map.

If the columns of a matrix are wedged together k at a time, the k × k minors appear as the components of the resulting k-vectors. For example, the 2 × 2 minors of the matrix

\begin{pmatrix}
1 & 4 \\
3 & \!\!-1 \\
2 & 1 \\
\end{pmatrix}

are −13 (from the first two rows), −7 (from the first and last rows), and 5 (from the last two rows). Now consider the wedge product

(\mathbf{e}_1 + 3\mathbf{e}_2 +2\mathbf{e}_3)\wedge(4\mathbf{e}_1-\mathbf{e}_2+\mathbf{e}_3)

where the two expressions correspond to the two columns of our matrix. Using the properties of the wedge product, namely that it is bilinear and

\mathbf{e}_i\wedge \mathbf{e}_i = 0

and

\mathbf{e}_i\wedge \mathbf{e}_j = - \mathbf{e}_j\wedge \mathbf{e}_i,

we can simplify this expression to

 -13 \mathbf{e}_1\wedge \mathbf{e}_2 -7 \mathbf{e}_1\wedge \mathbf{e}_3 +5 \mathbf{e}_2\wedge \mathbf{e}_3

where the coefficients agree with the minors computed earlier.
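
The correspondence can also be checked numerically, since each coefficient is a 2 × 2 minor taken from a pair of rows. A minimal sketch assuming NumPy:

    import numpy as np
    from itertools import combinations

    A = np.array([[1., 4.],
                  [3., -1.],
                  [2., 1.]])

    # the coefficient of e_i ^ e_j (i < j) in the wedge of the two columns
    # is the 2 x 2 minor taken from rows i and j
    for i, j in combinations(range(3), 2):
        print((i + 1, j + 1), np.linalg.det(A[np.ix_((i, j), (0, 1))]))
    # prints (1, 2) -13.0, (1, 3) -7.0, (2, 3) 5.0, up to rounding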

A remark about different notations

In some books,[9] the term adjunct is used instead of cofactor. Moreover, it is denoted A_{ij} and defined in the same way as the cofactor:

\mathbf{A}_{ij} = (-1)^{i+j} \mathbf{M}_{ij}

Using this notation, the inverse matrix is written as follows:

\mathbf{A}^{-1} = \frac{1}{\det(A)}\begin{bmatrix}
    A_{11}  & A_{21} & \cdots &   A_{n1}   \\
    A_{12}  & A_{22} & \cdots &   A_{n2}   \\
  \vdots & \vdots & \ddots & \vdots \\ 
    A_{1n}  & A_{2n} & \cdots &  A_{nn}
\end{bmatrix}

Keep in mind that adjunct is not the same as adjugate or adjoint. In modern terminology, the "adjoint" of a matrix most often refers to the corresponding adjoint operator.

References

  1. Burnside, William Snow & Panton, Arthur William (1886). Theory of Equations: with an Introduction to the Theory of Binary Algebraic Forms.
  2. Hohn, Franz E. (1973). Elementary Matrix Algebra (3rd ed.). The Macmillan Company. ISBN 978-0-02-355950-1.
  3. Minor. Encyclopedia of Mathematics. http://www.encyclopediaofmath.org/index.php?title=Minor&oldid=30176
  4. Shafarevich, Igor R. & Remizov, Alexey O. (2013). Linear Algebra and Geometry. Springer-Verlag Berlin Heidelberg. ISBN 978-3-642-30993-9.
  5. Hohn, Franz E. (1973). Elementary Matrix Algebra (3rd ed.). The Macmillan Company. ISBN 978-0-02-355950-1.
  6. Jeffreys, Bertha (1999). Methods of Mathematical Physics. Cambridge University Press. p. 135. ISBN 0-521-66402-0.
  9. Gantmacher, Felix (1953). Theory of Matrices (1st ed., originally in Russian). Moscow: State Publishing House of Technical and Theoretical Literature. p. 491.
