Moment matrix

In mathematics, a moment matrix is a special symmetric square matrix whose rows and columns are indexed by monomials. The entries of the matrix depend only on the product of the indexing monomials (cf. Hankel matrix).
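
For instance, in one variable, if the rows and columns are indexed by the monomials 1, x, x^{2} and m_{j} denotes the moment associated with x^{j}, the moment matrix is

M = \begin{bmatrix} m_{0} & m_{1} & m_{2} \\ m_{1} & m_{2} & m_{3} \\ m_{2} & m_{3} & m_{4} \end{bmatrix}

in which the entry in the row indexed by x^{i} and the column indexed by x^{j} depends only on the product x^{i+j}, giving the Hankel structure.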

Moment matrices play an important role in polynomial optimization, since positive semidefinite moment matrices correspond to polynomials that are sums of squares, and in econometrics.[1]

Definition

A multiple linear regression model can be written as

y = \beta_{0} + \beta_{1} x_{1} + \beta_{2} x_{2} + \dots + \beta_{k} x_{k} + u

where y is the explained variable, x_{1}, x_{2}, \dots, x_{k} are the explanatory variables, u is the error term, and \beta_{0}, \beta_{1}, \dots, \beta_{k} are unknown coefficients to be estimated. Given observations \left\{ y_{i}, x_{i1}, x_{i2}, \dots, x_{ik} \right\}_{i=1}^{n}, we have a system of n linear equations that can be expressed in matrix notation.[2]

\begin{bmatrix} y_{1} \\ y_{2} \\ \vdots \\ y_{n} \end{bmatrix} = \begin{bmatrix} 1 & x_{11} & x_{12} & \dots & x_{1k} \\ 1 & x_{21} & x_{22} & \dots & x_{2k} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n1} & x_{n2} & \dots & x_{nk} \\ \end{bmatrix} \begin{bmatrix} \beta_{0} \\ \beta_{1} \\ \vdots \\ \beta_{k} \end{bmatrix} + \begin{bmatrix} u_{1} \\ u_{2} \\ \vdots \\ u_{n} \end{bmatrix}

or

\mathbf{y} = \mathbf{X} \boldsymbol{\beta} + \mathbf{u}

where \mathbf{y} and \mathbf{u} are each a vector of dimension n \times 1, \mathbf{X} is a matrix of order n \times (k+1), and \boldsymbol{\beta} is a vector of dimension (k+1) \times 1. Under the Gauss–Markov assumptions, the best linear unbiased estimator of \boldsymbol{\beta} is the linear least squares estimator \mathbf{b} = \left( \mathbf{X}^{\mathsf{T}} \mathbf{X} \right)^{-1} \mathbf{X}^{\mathsf{T}} \mathbf{y}, involving the two moment matrices \mathbf{X}^{\mathsf{T}} \mathbf{X} and \mathbf{X}^{\mathsf{T}} \mathbf{y} defined as

\mathbf{X}^{\mathsf{T}} \mathbf{X} = \begin{bmatrix} n & \sum x_{i1} & \sum x_{i2} & \dots & \sum x_{ik} \\ \sum x_{i1} & \sum x_{i1}^{2} & \sum x_{i1} x_{i2} & \dots & \sum x_{i1} x_{ik} \\ \sum x_{i2} & \sum x_{i1} x_{i2} & \sum x_{i2}^{2} & \dots & \sum x_{i2} x_{ik} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \sum x_{ik} & \sum x_{i1} x_{ik} & \sum x_{i2} x_{ik} & \dots & \sum x_{ik}^{2} \end{bmatrix}

and

\mathbf{X}^{\mathsf{T}} \mathbf{y} = \begin{bmatrix} \sum y_{i} \\ \sum x_{i1} y_{i} \\ \vdots \\ \sum x_{ik} y_{i} \end{bmatrix}

where \mathbf{X}^{\mathsf{T}} \mathbf{X} is a square matrix of dimension (k+1) \times (k+1), and \mathbf{X}^{\mathsf{T}} \mathbf{y} is a vector of dimension (k+1) \times 1.
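
The computation can be illustrated with a short numerical sketch in Python with NumPy; the simulated data and variable names (such as beta_true) below are assumptions chosen for the example, not part of the sources. The sketch forms the moment matrices \mathbf{X}^{\mathsf{T}} \mathbf{X} and \mathbf{X}^{\mathsf{T}} \mathbf{y} and then solves the normal equations \mathbf{X}^{\mathsf{T}} \mathbf{X} \mathbf{b} = \mathbf{X}^{\mathsf{T}} \mathbf{y} for \mathbf{b}, which avoids explicitly inverting \mathbf{X}^{\mathsf{T}} \mathbf{X}.

    import numpy as np

    # Illustrative example only: simulate a small data set (these values are assumptions).
    rng = np.random.default_rng(0)
    n, k = 50, 2                                  # n observations, k explanatory variables
    beta_true = np.array([1.0, 2.0, -0.5])        # beta_0, beta_1, beta_2

    # Design matrix of order n x (k+1): a column of ones followed by the explanatory variables.
    X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
    u = rng.normal(scale=0.1, size=n)             # error term
    y = X @ beta_true + u                         # explained variable

    XtX = X.T @ X                                 # (k+1) x (k+1) moment matrix
    Xty = X.T @ y                                 # (k+1) x 1 moment vector

    # Least squares estimate b, obtained by solving the normal equations.
    b = np.linalg.solve(XtX, Xty)
    print(b)                                      # close to beta_true for small noise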

References

  1. Goldberger, Arthur S. (1964). "Classical Linear Regression". Econometric Theory. New York: John Wiley & Sons. pp. 156–212. ISBN 0-471-31101-4.
  2. Huang, David S. (1970). Regression and Econometric Methods. New York: John Wiley & Sons. pp. 52–65. ISBN 0-471-41754-8.
