Šidák correction

In statistics, the Šidák correction, or Dunn–Šidák correction, is a method used to counteract the problem of multiple comparisons. It is a simple method for controlling the familywise error rate: it is probabilistically exact when the individual tests are independent of each other, conservative under positive dependence, and liberal under negative dependence. It is credited to a 1967 paper [1] by the statistician and probabilist Zbyněk Šidák.[2]

Usage

Given a family of m hypothesis tests and a desired familywise significance level \alpha, the Šidák correction rejects an individual null hypothesis when its p-value falls below \alpha_1 = 1 - (1 - \alpha)^{1/m}. For example, to test m = 20 hypotheses at a familywise level of \alpha = 0.05, each individual test is carried out at the level 1 - (1 - 0.05)^{1/20} ≈ 0.00256.
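
A minimal Python sketch (not from the original article; the helper name sidak_alpha is hypothetical, and NumPy is assumed to be available) computes this per-test threshold and applies it to a vector of p-values:

  import numpy as np

  def sidak_alpha(alpha, m):
      """Per-test significance level under the Šidák correction."""
      return 1.0 - (1.0 - alpha) ** (1.0 / m)

  # Example: 20 tests at a familywise significance level of 0.05.
  alpha_family = 0.05
  p_values = np.array([0.001, 0.002, 0.03, 0.2, 0.5] + [0.8] * 15)

  threshold = sidak_alpha(alpha_family, len(p_values))
  print(f"per-test threshold: {threshold:.5f}")                 # about 0.00256
  print("rejected tests:", np.where(p_values < threshold)[0])   # [0 1]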

Proof

The Šidák correction is derived by assuming that the individual tests are independent. Let the significance threshold for each test be \alpha_1. The probability that at least one of the m tests is significant under this threshold is one minus the probability that none of them are significant. Since the tests are assumed independent, the probability that none of them are significant is the product of the probabilities that each of them is not significant, namely (1 - \alpha_1)^m, so the probability that at least one is significant is 1 - (1 - \alpha_1)^m. Our intention is for this probability to equal \alpha, the significance level for the entire series of tests. Solving 1 - (1 - \alpha_1)^m = \alpha for \alpha_1 yields \alpha_1 = 1 - (1 - \alpha)^{1/m}.
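
As an illustrative check of this derivation (not part of the original proof; it assumes independent tests whose p-values are uniform on [0, 1] under the null hypothesis), a short Monte Carlo simulation in Python shows the familywise error rate landing close to \alpha:

  import numpy as np

  rng = np.random.default_rng(0)

  m = 10          # number of independent tests, all null hypotheses true
  alpha = 0.05    # target familywise error rate
  alpha_1 = 1.0 - (1.0 - alpha) ** (1.0 / m)   # Šidák per-test threshold

  # Under the null, each p-value is uniform on [0, 1]; the m tests are independent.
  n_sim = 200_000
  p = rng.uniform(size=(n_sim, m))

  # A familywise error occurs whenever at least one test is (falsely) significant.
  fwer = np.mean((p < alpha_1).any(axis=1))
  print(f"estimated familywise error rate: {fwer:.4f}   (target: {alpha})")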

Šidák correction for t-test

References

  1. Šidák, Z. K. (1967). "Rectangular Confidence Regions for the Means of Multivariate Normal Distributions". Journal of the American Statistical Association 62 (318): 626–633. doi:10.1080/01621459.1967.10482935.
  2. Seidler, J.; Vondráček, J.; Saxl, I. (2000). "The life and work of Zbyněk Šidák (1933–1999)". Applications of Mathematics 45 (5): 321. doi:10.1023/A:1022238410461.
