One-way analysis of variance

In statistics, one-way analysis of variance (abbreviated one-way ANOVA) is a technique used to compare means of three or more samples (using the F distribution). This technique can be used only for numerical data.[1]

The ANOVA tests the null hypothesis that samples in two or more groups are drawn from populations with the same mean values. To do this, two estimates are made of the population variance. These estimates rely on various assumptions (see below). The ANOVA produces an F-statistic, the ratio of the variance calculated among the means to the variance within the samples. If the group means are drawn from populations with the same mean values, the variance between the group means should be lower than the variance of the samples, following the central limit theorem. A higher ratio therefore implies that the samples were drawn from populations with different mean values.[1]

Typically, however, the one-way ANOVA is used to test for differences among at least three groups, since the two-group case can be covered by a t-test (Gosset, 1908). When there are only two means to compare, the t-test and the F-test are equivalent; the relation between ANOVA and t is given by F = t^2. An extension of one-way ANOVA is two-way analysis of variance, which examines the influence of two different categorical independent variables on one dependent variable.
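
The equivalence is easy to verify numerically. A minimal sketch, assuming SciPy is available (stats.ttest_ind and stats.f_oneway are SciPy's standard two-sample t-test and one-way ANOVA routines; the data are simulated for illustration):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    a = rng.normal(5.0, 2.0, size=20)   # simulated sample, group 1
    b = rng.normal(6.0, 2.0, size=25)   # simulated sample, group 2

    t, p_t = stats.ttest_ind(a, b)      # two-sample t-test (pooled variance)
    f, p_f = stats.f_oneway(a, b)       # one-way ANOVA on the same two groups

    print(np.isclose(f, t**2))          # True: F = t^2
    print(np.isclose(p_f, p_t))         # True: the p-values coincide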

Assumptions

The results of a one-way ANOVA can be considered reliable as long as the following assumptions are met:

The response-variable residuals are normally distributed (or approximately so).
The variances of the populations are equal.
The responses for a given group are independent and identically distributed normal random variables.

ANOVA is a relatively robust procedure with respect to violations of the normality assumption.[2] If the data are ordinal, a non-parametric alternative such as the Kruskal–Wallis one-way analysis of variance should be used instead; a sketch follows.
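
A minimal sketch of that alternative, assuming SciPy is available; the ordinal ratings below are invented for illustration:

    from scipy import stats

    # Invented ordinal ratings (e.g., a 1-5 scale) for three groups.
    group1 = [3, 4, 2, 5, 4, 3]
    group2 = [2, 1, 3, 2, 2, 1]
    group3 = [5, 4, 5, 3, 4, 5]

    # Kruskal-Wallis H test: compares mean ranks rather than means.
    h, p = stats.kruskal(group1, group2, group3)
    print(h, p)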

The case of fixed effects, fully randomized experiment, unbalanced data

The model

The normal linear model describes treatment groups with probability distributions which are identically bell-shaped (normal) curves with different means. Thus fitting the models requires only the means of each treatment group and a variance calculation (an average variance within the treatment groups is used). Calculations of the means and the variance are performed as part of the hypothesis test.

The commonly used normal linear models for a completely randomized experiment are:[3]

y_{i,j}=\mu_j+\varepsilon_{i,j} (the means model)

or

y_{i,j}=\mu+\tau_j+\varepsilon_{i,j} (the effects model)

where

i=1,\dotsc,I is an index over experimental units
j=1,\dotsc,J is an index over treatment groups
I_j is the number of experimental units in the jth treatment group
I = \sum_j I_j is the total number of experimental units
y_{i,j} are observations
\mu_j is the mean of the observations for the jth treatment group
\mu is the grand mean of the observations
\tau_j is the jth treatment effect, a deviation from the grand mean
\sum\tau_j=0
\mu_j=\mu+\tau_j
\varepsilon_{i,j} \sim N(0, \sigma^2) are normally distributed zero-mean random errors.

The index i over the experimental units can be interpreted several ways. In some experiments, the same experimental unit is subject to a range of treatments; i may point to a particular unit. In others, each treatment group has a distinct set of experimental units; i may simply be an index into the jth list.
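
To make the notation concrete, here is a minimal simulation of the effects model; the grand mean, treatment effects, group sizes, and error variance are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(1)
    mu = 10.0                          # grand mean
    tau = np.array([-1.0, 0.5, 0.5])   # treatment effects; they sum to zero
    sizes = [8, 12, 10]                # unbalanced group sizes I_j
    sigma = 2.0                        # error standard deviation

    # y_{ij} = mu + tau_j + eps_{ij}, with eps_{ij} ~ N(0, sigma^2)
    groups = [mu + t + rng.normal(0.0, sigma, size=n)
              for t, n in zip(tau, sizes)]

    for j, y in enumerate(groups, start=1):
        print(j, len(y), y.mean())     # each sample mean estimates mu + tau_j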

The data and statistical summaries of the data

One form of organizing experimental observations y_{ij} is with groups in columns:

ANOVA data organization, unbalanced, single factor

Lists of group observations (one column per treatment group):

         Group 1   Group 2   Group 3   ...   Group J
    1    y_{11}    y_{12}    y_{13}    ...   y_{1J}
    2    y_{21}    y_{22}    y_{23}    ...   y_{2J}
    3    y_{31}    y_{32}    y_{33}    ...   y_{3J}
    ...
    i    y_{i1}    y_{i2}    y_{i3}    ...   y_{iJ}

Group summary statistics (one set per group j):

    # Observed    I_j
    Sum           \sum_i y_{ij}
    Sum Sq        \sum_i y_{ij}^2
    Mean          m_j
    Variance      s_j^2

Grand summary statistics:

    # Observed    I = \sum_j I_j
    Sum           \sum_j \sum_i y_{ij}
    Sum Sq        \sum_j \sum_i y_{ij}^2
    Mean          m
    Variance      s^2

Comparing model to summaries: \mu = m and \mu_j = m_j. The grand mean and grand variance are computed from the grand sums, not from group means and variances.
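
Continuing the simulated example above (the groups list is carried over from that sketch), the group and grand summary statistics can be computed directly:

    counts = [len(y) for y in groups]              # I_j per group
    sums = [y.sum() for y in groups]               # \sum_i y_{ij} per group
    sumsqs = [(y**2).sum() for y in groups]        # \sum_i y_{ij}^2 per group
    means = [y.mean() for y in groups]             # m_j
    variances = [y.var(ddof=1) for y in groups]    # s_j^2

    I = sum(counts)                                # grand number observed
    grand_sum = sum(sums)                          # grand sum
    grand_sumsq = sum(sumsqs)                      # grand sum of squares

    # The grand mean and grand variance come from the grand sums:
    m = grand_sum / I
    s2 = (grand_sumsq - grand_sum**2 / I) / (I - 1)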

The hypothesis test

Given the summary statistics, the calculations of the hypothesis test are shown in tabular form below. While each sum of squares is shown in two equivalent forms (explanatory and computational), only one form is required to display results.

ANOVA table for fixed model, single factor, fully randomized experiment

Treatments
    Explanatory SS:[4]    \sum_{Treatments} I_j (m_j - m)^2
    Computational SS:[5]  \sum_j \frac{(\sum_i y_{ij})^2}{I_j} - \frac{(\sum_j \sum_i y_{ij})^2}{I}
    Degrees of freedom:   DF_{Treatment} = J - 1
    Mean square:          MS_{Treatment} = \frac{SS_{Treatment}}{DF_{Treatment}}
    F:                    \frac{MS_{Treatment}}{MS_{Error}}

Error
    Explanatory SS:       \sum_{Treatments} (I_j - 1) s_j^2
    Computational SS:     \sum_j \sum_i y_{ij}^2 - \sum_j \frac{(\sum_i y_{ij})^2}{I_j}
    Degrees of freedom:   DF_{Error} = I - J
    Mean square:          MS_{Error} = \frac{SS_{Error}}{DF_{Error}}

Total
    Explanatory SS:       \sum_{Observations} (y_{ij} - m)^2
    Computational SS:     \sum_j \sum_i y_{ij}^2 - \frac{(\sum_j \sum_i y_{ij})^2}{I}
    Degrees of freedom:   DF_{Total} = I - 1

MS_{Error} is the estimate of variance corresponding to \sigma^2 of the model.
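
Putting the computational formulas to work (again carrying over groups and the summary quantities from the sketches above), the table reduces to a few lines of arithmetic, and the result can be checked against SciPy's stats.f_oneway:

    from scipy import stats

    J = len(groups)                                  # number of treatment groups

    ss_total = grand_sumsq - grand_sum**2 / I
    ss_treat = sum(s**2 / n for s, n in zip(sums, counts)) - grand_sum**2 / I
    ss_error = ss_total - ss_treat                   # = \sum_j (I_j - 1) s_j^2

    df_treat, df_error = J - 1, I - J
    ms_treat = ss_treat / df_treat
    ms_error = ss_error / df_error                   # estimates sigma^2 of the model
    F = ms_treat / ms_error
    p = stats.f.sf(F, df_treat, df_error)            # upper-tail F probability

    print(F, p)
    print(stats.f_oneway(*groups))                   # agrees with F and p above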

Analysis summary

The core ANOVA analysis consists of a series of calculations. The data is collected in tabular form. Then

Each treatment group is summarized by the number of experimental units, two sums (the sum and the sum of squares of the observations), a mean, and a variance.
The treatment-group summaries are combined to give grand totals for the number of units and the sums, and the grand mean and grand variance are computed from the grand sums.
The treatment means and the grand mean are used in the model.
The three DFs and SSs are calculated from the summaries; the MSs follow, and their ratio determines F.
Finally, F is compared with the F distribution on (J - 1, I - J) degrees of freedom to yield a significance probability.

If the experiment is balanced, all of the I_j terms are equal so the SS equations simplify.
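
For example, if every group has the same size I_j = n (so that I = nJ), the treatment sum of squares reduces to

SS_{Treatment} = n \sum_j (m_j - m)^2 = \frac{1}{n} \sum_j \Big(\sum_i y_{ij}\Big)^2 - \frac{(\sum_j \sum_i y_{ij})^2}{nJ}

and the other rows of the table simplify in the same way.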

In a more complex experiment, where the experimental units (or environmental effects) are not homogeneous, row statistics are also used in the analysis. The model includes terms dependent on i. Determining the extra terms reduces the number of degrees of freedom available.

Notes

  1. Howell, David (2002). Statistical Methods for Psychology. Duxbury. pp. 324–325. ISBN 0-534-37770-X.
  2. Kirk, RE (1995). Experimental Design: Procedures For The Behavioral Sciences (3 ed.). Pacific Grove, CA, USA: Brooks/Cole.
  3. Montgomery, Douglas C. (2001). Design and Analysis of Experiments (5th ed.). New York: Wiley. p. Section 3-2. ISBN 9780471316497.
  4. Moore, David S.; McCabe, George P. (2003). Introduction to the Practice of Statistics (4th ed.). W H Freeman & Co. p. 764. ISBN 0716796570.
  5. Winkler, Robert L.; Hays, William L. (1975). Statistics: Probability, Inference, and Decision (2nd ed.). New York: Holt, Rinehart and Winston. p. 761.