DFFITS

DFFITS is a diagnostic meant to show how influential a point is in a statistical regression. It was proposed in 1980 by Belsley, Kuh, and Welsch.[1] It is defined as the Studentized DFFIT, where DFFIT is the change in the predicted value for a point obtained when that point is left out of the regression; Studentization is achieved by dividing by the estimated standard deviation of the fit at that point:

\text{DFFITS} = {\widehat{y_i} - \widehat{y_{i(i)}} \over s_{(i)} \sqrt{h_{ii}}}

where \widehat{y_i} and \widehat{y_{i(i)}} are the predictions for point i with and without point i included in the regression, s_{(i)} is the standard error estimated without the point in question, and h_{ii} is the leverage for the point.
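As a concrete illustration of this definition (not taken from the cited sources), the following minimal NumPy sketch computes DFFITS by explicitly refitting the regression with each point left out; the function name dffits and the assumption that the design matrix X already contains an intercept column are illustrative choices.

    import numpy as np

    def dffits(X, y):
        """DFFITS by explicit leave-one-out refits (illustrative sketch).

        X : (n, p) design matrix, assumed to already include an intercept column.
        y : (n,) response vector.
        """
        n, p = X.shape
        h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)    # leverages h_ii
        yhat = X @ np.linalg.lstsq(X, y, rcond=None)[0]  # full-sample fitted values
        out = np.empty(n)
        for i in range(n):
            keep = np.arange(n) != i
            beta_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
            yhat_ii = X[i] @ beta_i                      # prediction for point i, with point i excluded
            resid = y[keep] - X[keep] @ beta_i
            s_i = np.sqrt(resid @ resid / (n - 1 - p))   # s_(i): deletion standard error
            out[i] = (yhat[i] - yhat_ii) / (s_i * np.sqrt(h[i]))
        return out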

DFFITS is very similar to the externally Studentized residual, and is in fact equal to the latter times \sqrt{h_{ii}/(1-h_{ii})}.[2]
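Combining this identity with the standard deletion formula for s_{(i)} (which follows from ordinary least-squares algebra but is not spelled out above) gives an equivalent computation that avoids refitting the model; the sketch below, with the illustrative name dffits_via_studentized, assumes the same design-matrix conventions as the sketch above.

    import numpy as np

    def dffits_via_studentized(X, y):
        """DFFITS as (externally Studentized residual) * sqrt(h_ii / (1 - h_ii)).

        Uses the standard deletion identity
            (n - p - 1) * s_(i)^2 = (n - p) * s^2 - e_i^2 / (1 - h_ii),
        so no leave-one-out refits are required.
        """
        n, p = X.shape
        h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)          # leverages h_ii
        e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]        # raw residuals
        s2 = e @ e / (n - p)                                    # full-sample MSE
        s2_i = ((n - p) * s2 - e**2 / (1 - h)) / (n - p - 1)    # deletion variances s_(i)^2
        t = e / np.sqrt(s2_i * (1 - h))                         # externally Studentized residuals
        return t * np.sqrt(h / (1 - h))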

When the errors are Gaussian, the externally Studentized residual follows a Student's t distribution (with degrees of freedom equal to the number of residual degrees of freedom minus one), so the DFFITS value for a particular point is distributed as that same Student's t variate multiplied by the leverage factor \sqrt{h_{ii}/(1-h_{ii})} for that point. Thus, for low-leverage points DFFITS is expected to be small, whereas as the leverage approaches 1 the distribution of the DFFITS value widens without bound.

For a perfectly balanced experimental design (such as a factorial design or balanced partial factorial design), the leverage for each point is p/n, the number of parameters divided by the number of points. This means that the DFFITS values will be distributed (in the Gaussian case) as \sqrt{p \over n-p} \approx \sqrt{p \over n} times a t variate. Therefore, the authors suggest investigating those points with |DFFITS| greater than 2\sqrt{p \over n}.
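A rule-of-thumb check in this spirit might look like the following sketch; flag_influential is an illustrative name, and the cutoff is simply the 2\sqrt{p \over n} value suggested above.

    import numpy as np

    def flag_influential(dffits_values, n_params, n_obs):
        """Indices of points whose |DFFITS| exceeds the 2*sqrt(p/n) rule of thumb."""
        cutoff = 2 * np.sqrt(n_params / n_obs)
        return np.flatnonzero(np.abs(dffits_values) > cutoff)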

Although the raw values resulting from the equations are different, Cook's distance and DFFITS are conceptually identical and there is a closed-form formula to convert one value to the other.[3]
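One common statement of that conversion, with p parameters and with s^2 and s_{(i)}^2 the full-sample and leave-one-out estimates of the error variance, is the following; the exact form should be checked against the cited reference:

D_i = {\text{DFFITS}_i^2 \over p} \cdot {s_{(i)}^2 \over s^2}

where D_i is Cook's distance for point i.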

Development

Before measures such as DFFITS were introduced, the possibility of outliers in a dataset was typically assessed with histograms and scatterplots prior to running a linear regression. Both methods are subjective, and they give little indication of how much leverage each potential outlier exerts on the fitted results. This shortcoming led to a variety of quantitative influence measures, including DFFIT and DFBETA.

References

  1. Belsley, David A.; Kuh, Edwin; Welsch, Roy E. (1980). Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. Wiley Series in Probability and Mathematical Statistics. New York: John Wiley & Sons. pp. 11–16. ISBN 0-471-05856-4.
  2. Montgomery, Douglas C.; Peck, Elizabeth A.; Vining, G. Geoffrey (2012). Introduction to Linear Regression Analysis (5th ed.). Wiley. p. 218. ISBN 978-0-470-54281-1. Retrieved 22 February 2013. "Thus, DFFITS_i is the value of R-student multiplied by the leverage of the ith observation, [h_ii/(1 − h_ii)]^{1/2}."
  3. Cohen, Jacob; Cohen, Patricia; West, Stephen G.; Aiken, Leona S. (2003). Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences. ISBN 0-8058-2223-2.