Multilinear map

In linear algebra, a multilinear map is a function of several variables that is linear separately in each variable. More precisely, a multilinear map is a function

f\colon V_1 \times \cdots \times V_n \to W\text{,}

where V_1,\ldots,V_n and W are vector spaces (or modules over a commutative ring), with the following property: for each i, if all of the variables but v_i are held constant, then f(v_1,\ldots,v_n) is a linear function of v_i.[1]

A multilinear map of one variable is a linear map, and of two variables is a bilinear map. More generally, a multilinear map of k variables is called a k-linear map. If the codomain of a multilinear map is the field of scalars, it is called a multilinear form. Multilinear maps and multilinear forms are fundamental objects of study in multilinear algebra.
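For instance, the standard scalar product on R^3 is a bilinear map. The following is a minimal numerical sketch (in Python with NumPy; the sample vectors and names are illustrative, not part of the article) checking linearity in each argument separately:

import numpy as np

def f(x, y):
    # The dot product on R^3 is bilinear: linear in x and in y separately.
    return float(np.dot(x, y))

rng = np.random.default_rng(0)
x, x2, y = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
c = 2.5

# Linearity in the first slot with the second held fixed.
assert np.isclose(f(c * x + x2, y), c * f(x, y) + f(x2, y))
# Linearity in the second slot with the first held fixed.
assert np.isclose(f(y, c * x + x2), c * f(y, x) + f(y, x2))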

If all variables belong to the same space, one can consider symmetric, antisymmetric and alternating k-linear maps. The latter two coincide if the underlying ring (or field) has characteristic different from two; otherwise the former two coincide.

Examples

Coordinate representation

Let

f\colon V_1 \times \cdots \times V_n \to W\text{,}

be a multilinear map between finite-dimensional vector spaces, where V_i has dimension d_i, and W has dimension d. If we choose a basis \{\textbf{e}_{i1},\ldots,\textbf{e}_{id_i}\} for each V_i and a basis \{\textbf{b}_1,\ldots,\textbf{b}_d\} for W (using bold for vectors), then we can define a collection of scalars A_{j_1\cdots j_n}^k by

f(\textbf{e}_{1j_1},\ldots,\textbf{e}_{nj_n}) = A_{j_1\cdots j_n}^1\,\textbf{b}_1 + \cdots +  A_{j_1\cdots j_n}^d\,\textbf{b}_d.

Then the scalars \{A_{j_1\cdots j_n}^k \mid 1\leq j_i\leq d_i, 1 \leq k \leq d\} completely determine the multilinear function f. In particular, if

\textbf{v}_i = \sum_{j=1}^{d_i} v_{ij} \textbf{e}_{ij}

for 1 \leq i \leq n, then

f(\textbf{v}_1,\ldots,\textbf{v}_n) = \sum_{j_1=1}^{d_1} \cdots \sum_{j_n=1}^{d_n} \sum_{k=1}^{d} A_{j_1\cdots j_n}^k v_{1j_1}\cdots v_{nj_n} \textbf{b}_k.
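For illustration, the following short numerical sketch (in Python with NumPy; the coefficient array A, the chosen dimensions and the function names are assumptions made for the example) shows how such a coefficient array determines the map and yields multilinearity:

import numpy as np

# A hypothetical bilinear map f : R^2 x R^3 -> R^2 stored via its coefficients
# A[j1, j2, k], the k-th coordinate of f(e_{1 j1}, e_{2 j2}) in the basis (b_k).
rng = np.random.default_rng(1)
A = rng.normal(size=(2, 3, 2))

def f(v1, v2):
    # Nested sum over j1, j2 and k from the coordinate formula above.
    return np.einsum('jlk,j,l->k', A, v1, v2)

v1, w1, v2 = rng.normal(size=2), rng.normal(size=2), rng.normal(size=3)
# Linearity in the first slot follows directly from the formula.
assert np.allclose(f(3.0 * v1 + w1, v2), 3.0 * f(v1, v2) + f(w1, v2))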

Example

Consider a trilinear function

f\colon R^2 \times R^2 \times R^2 \to R,

where V_i = R^2 and d_i = 2 for i = 1, 2, 3, and W = R, d = 1.

A basis for each V_i is \{\textbf{e}_{i1},\textbf{e}_{i2}\} = \{\textbf{e}_{1}, \textbf{e}_{2}\} = \{(1,0), (0,1)\}. Let

f(\textbf{e}_{1i},\textbf{e}_{2j},\textbf{e}_{3k}) = f(\textbf{e}_{i},\textbf{e}_{j},\textbf{e}_{k}) = A_{ijk},

where i,j,k \in \{1,2\}. In other words, the scalar A_{ijk} is the value of f at one of the eight possible ordered triples of basis vectors (since there are two choices for each of the three V_i), namely:


(\textbf{e}_1, \textbf{e}_1, \textbf{e}_1),
(\textbf{e}_1, \textbf{e}_1, \textbf{e}_2),
(\textbf{e}_1, \textbf{e}_2, \textbf{e}_1),
(\textbf{e}_1, \textbf{e}_2, \textbf{e}_2),
(\textbf{e}_2, \textbf{e}_1, \textbf{e}_1),
(\textbf{e}_2, \textbf{e}_1, \textbf{e}_2),
(\textbf{e}_2, \textbf{e}_2, \textbf{e}_1),
(\textbf{e}_2, \textbf{e}_2, \textbf{e}_2).

Each vector \textbf{v}_i \in V_i = R^2 can be expressed as a linear combination of the basis vectors

\textbf{v}_i = \sum_{j=1}^{2} v_{ij} \textbf{e}_{ij} = v_{i1} \textbf{e}_1 + v_{i2} \textbf{e}_2 = v_{i1} (1, 0) + v_{i2} (0, 1).

The function value at an arbitrary collection of three vectors \textbf{v}_i \in R^2 can be expressed as

f(\textbf{v}_1,\textbf{v}_2, \textbf{v}_3) = \sum_{i=1}^{2} \sum_{j=1}^{2} \sum_{k=1}^{2} A_{i j k} v_{1i} v_{2j} v_{3k}.

Or, in expanded form, writing \textbf{v}_1 = (a,b), \textbf{v}_2 = (c,d) and \textbf{v}_3 = (e,g):

\begin{align}
f((a,b),(c,d),(e,g)) = {} & ace\, f(\textbf{e}_1, \textbf{e}_1, \textbf{e}_1) + acg\, f(\textbf{e}_1, \textbf{e}_1, \textbf{e}_2) \\
& + ade\, f(\textbf{e}_1, \textbf{e}_2, \textbf{e}_1) + adg\, f(\textbf{e}_1, \textbf{e}_2, \textbf{e}_2) \\
& + bce\, f(\textbf{e}_2, \textbf{e}_1, \textbf{e}_1) + bcg\, f(\textbf{e}_2, \textbf{e}_1, \textbf{e}_2) \\
& + bde\, f(\textbf{e}_2, \textbf{e}_2, \textbf{e}_1) + bdg\, f(\textbf{e}_2, \textbf{e}_2, \textbf{e}_2).
\end{align}
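A brief numerical sketch of this example (in Python with NumPy; the particular coefficient values and variable names are placeholders, not part of the article) comparing the triple-sum formula with the expanded form:

import numpy as np

# Placeholder coefficients: A[i, j, k] plays the role of A_{(i+1)(j+1)(k+1)}.
A = np.arange(1.0, 9.0).reshape(2, 2, 2)

def f(v1, v2, v3):
    # Triple sum  sum_{i,j,k} A_{ijk} v_{1i} v_{2j} v_{3k}.
    return np.einsum('ijk,i,j,k->', A, v1, v2, v3)

a, b, c, d, e, g = 1.0, 2.0, 3.0, 4.0, 5.0, 6.0
expanded = (a*c*e*A[0,0,0] + a*c*g*A[0,0,1] + a*d*e*A[0,1,0] + a*d*g*A[0,1,1]
            + b*c*e*A[1,0,0] + b*c*g*A[1,0,1] + b*d*e*A[1,1,0] + b*d*g*A[1,1,1])
assert np.isclose(f(np.array([a, b]), np.array([c, d]), np.array([e, g])), expanded)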

Relation to tensor products

There is a natural one-to-one correspondence between multilinear maps

f\colon V_1 \times \cdots \times V_n \to W\text{,}

and linear maps

F\colon V_1 \otimes \cdots \otimes V_n \to W\text{,}

where V_1 \otimes \cdots \otimes V_n denotes the tensor product of V_1,\ldots,V_n. The relation between the functions f and F is given by the formula

F(v_1\otimes \cdots \otimes v_n) = f(v_1,\ldots,v_n).
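For finite-dimensional spaces this correspondence can be made concrete by identifying the tensor product with arrays. A minimal sketch (in Python with NumPy, assuming a bilinear map given by a hypothetical coefficient matrix A) checking F(v_1 ⊗ v_2) = f(v_1, v_2):

import numpy as np

# Identify R^2 (x) R^3 with R^{2*3} via the outer product, so the bilinear map f
# given by coefficients A corresponds to the linear map F with matrix A flattened.
rng = np.random.default_rng(2)
A = rng.normal(size=(2, 3))

def f(v1, v2):
    return np.einsum('jl,j,l->', A, v1, v2)

def F(t):
    # t is a 2x3 array representing an element of the tensor product.
    return float(A.reshape(-1) @ t.reshape(-1))

v1, v2 = rng.normal(size=2), rng.normal(size=3)
assert np.isclose(F(np.outer(v1, v2)), f(v1, v2))  # F(v1 (x) v2) = f(v1, v2)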

Multilinear functions on n×n matrices

One can consider multilinear functions on an n×n matrix over a commutative ring K with identity as functions of the rows (or, equivalently, of the columns) of the matrix. Let A be such a matrix and let a_i, 1 ≤ i ≤ n, denote its rows. Then a multilinear function D can be written as

D(A) = D(a_{1},\ldots,a_{n}),

satisfying

D(a_{1},\ldots,c a_{i} + a_{i}',\ldots,a_{n}) = c D(a_{1},\ldots,a_{i},\ldots,a_{n}) + D(a_{1},\ldots,a_{i}',\ldots,a_{n}).

If we let \hat{e}_j represent the jth row of the identity matrix, we can express each row ai as the sum

a_{i} = \sum_{j=1}^n A(i,j)\hat{e}_{j}.

Using the multilinearity of D we rewrite D(A) as


D(A) = D\left(\sum_{j=1}^n A(1,j)\hat{e}_{j}, a_2, \ldots, a_n\right)
       = \sum_{j=1}^n A(1,j) D(\hat{e}_{j},a_2,\ldots,a_n).

Continuing this substitution for each a_i, we get, for 1 ≤ k_i ≤ n,


D(A) = \sum_{1\le k_i\le n} A(1,k_{1})A(2,k_{2})\dots A(n,k_{n}) D(\hat{e}_{k_{1}},\dots,\hat{e}_{k_{n}}),

where, since in our case 1 ≤ i ≤ n,


 \sum_{1\le k_i \le n} = \sum_{1\le k_1 \le n} \ldots \sum_{1\le k_i \le n} \ldots \sum_{1\le k_n \le n}

is a series of nested summations.

Therefore, D(A) is uniquely determined by how D operates on \hat{e}_{k_{1}},\dots,\hat{e}_{k_{n}}.
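As a sketch of this expansion (in Python with NumPy; the helper names expand and alternating_basis_value are illustrative, not part of the article), evaluating D(A) from its values on tuples of basis rows, and specializing to an alternating D with D(I) = 1, recovers the determinant:

import numpy as np
from itertools import product

def expand(A, D_basis):
    # D(A) = sum over all index tuples (k_1, ..., k_n) of
    #        A(1, k_1) * ... * A(n, k_n) * D(e_{k_1}, ..., e_{k_n})  (0-based indices here).
    n = A.shape[0]
    total = 0.0
    for ks in product(range(n), repeat=n):
        coeff = 1.0
        for row, k in enumerate(ks):
            coeff *= A[row, k]
        total += coeff * D_basis(ks)
    return total

def alternating_basis_value(ks):
    # For an alternating D with D(I) = 1: zero on repeated rows, otherwise the
    # sign of the permutation (computed by counting inversions).
    if len(set(ks)) < len(ks):
        return 0.0
    inversions = sum(1 for i in range(len(ks))
                     for j in range(i + 1, len(ks)) if ks[i] > ks[j])
    return -1.0 if inversions % 2 else 1.0

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 4.0],
              [5.0, 0.0, 6.0]])
assert np.isclose(expand(A, alternating_basis_value), np.linalg.det(A))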

Example

In the case of 2×2 matrices we get


D(A) = A_{1,1}A_{2,1}D(\hat{e}_1,\hat{e}_1) + A_{1,1}A_{2,2}D(\hat{e}_1,\hat{e}_2) + A_{1,2}A_{2,1}D(\hat{e}_2,\hat{e}_1) + A_{1,2}A_{2,2}D(\hat{e}_2,\hat{e}_2),

where \hat{e}_1 = [1,0] and \hat{e}_2 = [0,1]. If we restrict D to be an alternating function, then D(\hat{e}_1,\hat{e}_1) = D(\hat{e}_2,\hat{e}_2) = 0 and D(\hat{e}_2,\hat{e}_1) = -D(\hat{e}_1,\hat{e}_2) = -D(I). Letting D(I) = 1, we get the determinant function on 2×2 matrices:


D(A) = A_{1,1}A_{2,2} - A_{1,2}A_{2,1}.
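A quick numerical check of this formula (a Python/NumPy sketch with an arbitrary sample matrix):

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
D = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
assert np.isclose(D, np.linalg.det(A))  # both give -2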

References

  1. Lang, Serge (2002). Algebra (3rd ed.). Springer.