Moore matrix

In linear algebra, a Moore matrix, introduced by E. H. Moore (1896), is a matrix defined over a finite field. When it is a square matrix its determinant is called a Moore determinant (this is unrelated to the Moore determinant of a quaternionic Hermitian matrix). Successive columns of a Moore matrix are obtained by applying successive powers of the Frobenius automorphism (raising each entry to the power q) to the first column, so it is an m × n matrix

M=\begin{bmatrix}
\alpha_1 & \alpha_1^q & \dots & \alpha_1^{q^{n-1}}\\
\alpha_2 & \alpha_2^q & \dots & \alpha_2^{q^{n-1}}\\
\alpha_3 & \alpha_3^q & \dots & \alpha_3^{q^{n-1}}\\
\vdots & \vdots & \ddots &\vdots \\
\alpha_m & \alpha_m^q & \dots & \alpha_m^{q^{n-1}}\\
\end{bmatrix}

or

M_{i,j} = \alpha_i^{q^{j-1}}

for all indices i and j. (Some authors use the transpose of the above matrix.)
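
For concreteness, here is a minimal sketch of this construction in Python. It assumes the base field is GF(3), so q = 3 and the Frobenius map is x ↦ x^3, represents the entries α_i as elements of the quadratic extension GF(9) = GF(3)[i]/(i^2 + 1), and uses illustrative sample values; none of these choices come from the text above.

# A minimal sketch of the Moore matrix construction, assuming base field GF(3)
# (so q = 3 and Frobenius is x -> x^3) and entries in GF(9) = GF(3)[i]/(i^2 + 1).
# The sample alphas below are illustrative choices, not values from the article.

P = 3                                    # base field GF(3), so q = 3

def mul(x, y):
    """Multiply two GF(9) elements a + b*i, stored as pairs (a, b), using i^2 = -1."""
    a, b = x
    c, d = y
    return ((a * c - b * d) % P, (a * d + b * c) % P)

def power(x, e):
    """x**e in GF(9) by repeated squaring."""
    result, base = (1, 0), x
    while e:
        if e & 1:
            result = mul(result, base)
        base = mul(base, base)
        e >>= 1
    return result

def moore_matrix(alphas, q, n):
    """Return the m x n Moore matrix with entries M[i][j] = alpha_i^(q^j), j = 0..n-1."""
    return [[power(a, q ** j) for j in range(n)] for a in alphas]

# Three elements of GF(9), written as pairs (a, b) meaning a + b*i.
alphas = [(1, 0), (0, 1), (1, 1)]
for row in moore_matrix(alphas, q=P, n=2):
    print(row)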

The Moore determinant of a square Moore matrix (so m = n) can be expressed as:

\det(M) = \prod_{\mathbf{c}} \left( c_1\alpha_1 + \cdots + c_n\alpha_n \right),

where c = (c_1, \dots, c_n) runs over a complete set of direction vectors with entries in the finite field \mathbb{F}_q of order q, made specific by requiring the last non-zero entry to equal 1, i.e.

\det(M) = \prod_{1 \le i \le n} \prod_{c_1, \dots, c_{i-1} \in \mathbb{F}_q} \left( c_1\alpha_1 + \cdots + c_{i-1}\alpha_{i-1} + \alpha_i \right).

In particular, the Moore determinant vanishes if and only if the entries of the left-hand column are linearly dependent over the finite field of order q, so it is analogous to the Wronskian of several functions.
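
The following self-contained Python sketch checks both the product formula and the vanishing criterion for a 2 × 2 Moore matrix, under the same assumptions as the earlier sketch: base field GF(3), entries in GF(9) = GF(3)[i]/(i^2 + 1), and illustrative sample values not taken from the article.

# A small self-contained check of the product formula and the vanishing
# criterion for a 2 x 2 Moore matrix.  Assumptions: base field GF(3) (q = 3),
# entries in GF(9) = GF(3)[i]/(i^2 + 1), sample alphas chosen for illustration.

P = 3                                             # base field GF(3), q = 3

def add(x, y):
    return ((x[0] + y[0]) % P, (x[1] + y[1]) % P)

def sub(x, y):
    return ((x[0] - y[0]) % P, (x[1] - y[1]) % P)

def mul(x, y):
    a, b = x
    c, d = y
    return ((a * c - b * d) % P, (a * d + b * c) % P)

def power(x, e):
    r, b = (1, 0), x
    while e:
        if e & 1:
            r = mul(r, b)
        b = mul(b, b)
        e >>= 1
    return r

def scale(c, x):
    """Multiply a GF(9) element by a base-field scalar c in GF(3)."""
    return ((c * x[0]) % P, (c * x[1]) % P)

def moore_det_2x2(a1, a2, q):
    """det of [[a1, a1^q], [a2, a2^q]] = a1*a2^q - a2*a1^q."""
    return sub(mul(a1, power(a2, q)), mul(a2, power(a1, q)))

def product_formula_2x2(a1, a2, q):
    """alpha_1 * prod over c in GF(q) of (c*alpha_1 + alpha_2)."""
    result = a1
    for c in range(q):
        result = mul(result, add(scale(c, a1), a2))
    return result

one, i = (1, 0), (0, 1)

# alpha_1 = 1 and alpha_2 = i are independent over GF(3): the determinant is
# nonzero and the two expressions agree.
print(moore_det_2x2(one, i, P), product_formula_2x2(one, i, P))   # (0, 1) (0, 1)

# alpha_1 = 1 and alpha_2 = 2 both lie in the base field, hence are dependent
# over GF(3): the determinant vanishes.
print(moore_det_2x2(one, (2, 0), P))                               # (0, 0)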

Dickson used the Moore determinant in finding the modular invariants of the general linear group over a finite field.

