Multiple correlation

In statistics, the coefficient of multiple correlation is a measure of how well a given variable can be predicted using a linear function of a set of other variables. It is the correlation between the variable's values and the best predictions that can be computed linearly from the predictive variables.[1]

The coefficient of multiple correlation takes values between 0 and 1; a higher value indicates a better predictability of the dependent variable from the independent variables, with a value of 1 indicating that the predictions are exactly correct and a value of 0 indicating that no linear combination of the independent variables is a better predictor than is the fixed mean of the dependent variable.[2]

The coefficient of multiple correlation is computed as the square root of the coefficient of determination, but under the particular assumptions that an intercept is included and that the best possible linear predictors are used, whereas the coefficient of determination is defined for more general cases, including those of nonlinear prediction and those in which the predicted values have not been derived from a model-fitting procedure.

Definition

The coefficient of multiple correlation, denoted R, is a scalar that is defined as the Pearson correlation coefficient between the predicted and the actual values of the dependent variable in a linear regression model that includes an intercept.
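For illustration, the following minimal sketch (using NumPy on hypothetical synthetic data; the variable names and coefficients are assumptions for this example only) fits an ordinary least squares regression with an intercept and computes R as the Pearson correlation between the observed and fitted values of the dependent variable.

import numpy as np

# Hypothetical synthetic data: two predictors and one dependent variable.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.5 * x1 - 0.5 * x2 + rng.normal(scale=0.8, size=n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

# R: Pearson correlation between the actual and predicted values.
R = np.corrcoef(y, y_hat)[0, 1]
print(R)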

Computation

The square of the coefficient of multiple correlation can be computed using the vector \mathbf{c} = {(r_{x_1 y}, r_{x_2 y},\dots,r_{x_N y})}^\top of correlations r_{x_n y} between the predictor variables x_n (independent variables) and the target variable y (dependent variable), and the correlation matrix R_{xx} of inter-correlations between predictor variables. It is given by

R^2 = \mathbf{c}^\top R_{xx}^{-1}\, \mathbf{c},

where \mathbf{c}^\top is the transpose of \mathbf{c}, and R_{xx}^{-1} is the inverse of the matrix

R_{xx} = \left(\begin{array}{cccc}
    r_{x_1 x_1} & r_{x_1 x_2} & \dots  & r_{x_1 x_N} \\
    r_{x_2 x_1} & \ddots      &        & \vdots \\
    \vdots      &             & \ddots &  \\
    r_{x_N x_1} & \dots       &        & r_{x_N x_N}
\end{array}\right).

If all the predictor variables are uncorrelated, the matrix R_{xx} is the identity matrix and R^2 simply equals \mathbf{c}^\top\, \mathbf{c}, the sum of the squared correlations with the dependent variable. If the predictor variables are correlated among themselves, the inverse of the correlation matrix R_{xx} accounts for this.
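The matrix formula can be checked numerically. The following sketch (again on hypothetical synthetic data, here with deliberately correlated predictors) builds the correlation vector c and the inter-correlation matrix R_xx and evaluates R^2 = c^T R_xx^{-1} c.

import numpy as np

# Hypothetical data with correlated predictors.
rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(scale=0.8, size=n)   # x2 correlated with x1
y = 2.0 * x1 + 1.0 * x2 + rng.normal(size=n)
X = np.column_stack([x1, x2])

# c: correlations of each predictor with the dependent variable y.
c = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])

# R_xx: matrix of inter-correlations among the predictors.
R_xx = np.corrcoef(X, rowvar=False)

# R^2 = c^T R_xx^{-1} c, solved without forming the explicit inverse.
R_squared = c @ np.linalg.solve(R_xx, c)
print(R_squared)

Up to floating-point error, the same value is obtained by squaring the correlation between y and its fitted values from an intercept-including least squares regression on x1 and x2.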

The squared coefficient of multiple correlation can also be computed as the fraction of variance of the dependent variable that is explained by the independent variables, which in turn is 1 minus the unexplained fraction. The unexplained fraction can be computed as the sum of squared residuals (that is, the sum of the squares of the prediction errors) divided by the sum of the squared deviations of the values of the dependent variable from its expected value.
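This variance-decomposition route can be written as a small helper (a sketch, assuming y and y_hat are the observed and fitted values from an intercept-including fit such as the one above):

import numpy as np

def r_squared(y, y_hat):
    # Unexplained fraction: sum of squared residuals divided by the
    # sum of squared deviations of y from its mean.
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

For an ordinary least squares fit that includes an intercept, the square root of this quantity coincides with the correlation-based R of the definition above.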

Properties

With more than two variables being related to each other, the value of the coefficient of multiple correlation depends on the choice of dependent variable: a regression of y on x and z will in general have a different R than will a regression of z on x and y. For example, suppose that in a particular sample the variable z is uncorrelated with both x and y, while x and y are linearly related to each other. Then a regression of z on y and x will yield an R of zero, while a regression of y on x and z will yield a strictly positive R. This follows because the correlation of y with the best predictor based on x and z is always at least as large as the correlation of y with the best predictor based on x alone, and in this case, where z provides no explanatory power, the two are exactly equal.
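The following sketch illustrates this asymmetry numerically (hypothetical data constructed so that z is generated independently of x and y, while x and y are linearly related):

import numpy as np

rng = np.random.default_rng(2)
n = 10_000
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(scale=0.6, size=n)   # x and y linearly related
z = rng.normal(size=n)                        # z generated independently of both

def multiple_R(target, predictors):
    # Pearson correlation between the target and its OLS fit (with intercept).
    X = np.column_stack([np.ones(len(target))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.corrcoef(target, X @ beta)[0, 1]

print(multiple_R(z, [x, y]))   # close to zero, up to sampling noise
print(multiple_R(y, [x, z]))   # clearly positive

In the first regression the sample R is near zero (it would be exactly zero only if z were exactly uncorrelated with x and y in the sample), while in the second it is strictly positive.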
