In statistics, the Anderson–Darling test, named after Theodore Wilbur Anderson (1918–2016) and Donald A. Darling (1915–2014), who invented it in 1952,[1] is a statistical test of whether there is evidence that a given sample of data did not arise from a given probability distribution. In its basic form, the test assumes that there are no parameters to be estimated in the distribution being tested, in which case the test and its set of critical values are distribution-free. However, the test is most often used in contexts where a family of distributions is being tested, in which case the parameters of that family need to be estimated, and account must be taken of this in adjusting either the test statistic or its critical values.
When applied to testing if a normal distribution adequately describes a set of data, it is one of the most powerful statistical tools for detecting most departures from normality.
In addition to its use as a test of fit for distributions, it can be used in parameter estimation as the basis of a form of minimum distance estimation.
K-sample Anderson–Darling tests are available for testing whether several collections of observations can be modelled as coming from a single population, where the distribution function does not have to be specified.
The single-sample test
Basic test statistic
The Anderson–Darling test assesses whether a sample comes from a specified distribution. It makes use of the fact that, when given a hypothesized underlying distribution, and assuming the data do arise from this distribution, the data can be transformed to a uniform distribution. The transformed sample data can then be tested for uniformity with a distance test (Shapiro 1980). The test statistic for assessing whether the ordered data \(Y_1 < Y_2 < \cdots < Y_n\) come from a distribution with cumulative distribution function (CDF) \(F\) is

\[ A^2 = -n - S, \]

where

\[ S = \sum_{k=1}^{n} \frac{2k-1}{n} \left[\ln F(Y_k) + \ln\left(1 - F(Y_{n+1-k})\right)\right]. \]
The test statistic can then be compared against the critical values of the theoretical distribution. Note that in this case no parameters are estimated in relation to the distribution function F.
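As a concrete illustration, here is a minimal sketch (not from the source) of the basic statistic in Python, for the case of a fully specified CDF with no estimated parameters; the function name anderson_darling_statistic and the cdf callable are placeholders of our own choosing:

```python
import numpy as np

def anderson_darling_statistic(x, cdf):
    """Compute A^2 = -n - S for a sample x and a fully specified CDF."""
    # Transform the data to (0, 1) via the hypothesized CDF and order the
    # values; the CDF is monotone, so this matches sorting the data first.
    u = np.sort(cdf(np.asarray(x, dtype=float)))
    n = len(u)
    k = np.arange(1, n + 1)
    # S = sum_{k=1}^{n} (2k - 1)/n * [ln F(Y_k) + ln(1 - F(Y_{n+1-k}))]
    s = np.sum((2 * k - 1) / n * (np.log(u) + np.log1p(-u[::-1])))
    return -n - s

# Example: 100 draws tested against a standard uniform hypothesis
rng = np.random.default_rng(0)
a2 = anderson_darling_statistic(rng.uniform(size=100), lambda t: t)
```

The resulting value of \(A^2\) would then be compared against tabulated critical values for the fully specified case.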
Tests for families of distributions
Essentially the same test statistic can be used in the test of fit of a family of distributions, but then it must be compared against critical values appropriate to that family of theoretical distributions, which also depend on the method used for parameter estimation.
Test for normality
In comparisons of power, Stephens (1974) found \(A^2\) to be one of the best empirical distribution function statistics for detecting most departures from normality.[2] The only statistic that came close was the Cramér–von Mises test statistic. The test may be used with sample sizes as small as n ≤ 25. Very large samples may lead to rejection of the assumption of normality on the basis of only slight imperfections, although industrial data with sample sizes of 200 and more have passed the Anderson–Darling test. [citation needed]
The following procedure tests whether a variable X is normally distributed.

1) The observations \(X_i\), for \(i = 1, \ldots, n\), are sorted from low to high.

2) The mean \(\bar{X}\) and standard deviation \(s\) are calculated from the sample.

3) The values \(X_i\) are standardized to create new values \(Y_i\) as

\[ Y_i = \frac{X_i - \bar{X}}{s}. \]

4) With the standard normal CDF \(\Phi\), \(A^2\) is calculated using

\[ A^2 = -n - \frac{1}{n} \sum_{i=1}^{n} (2i-1)\left[\ln \Phi(Y_i) + \ln\left(1 - \Phi(Y_{n+1-i})\right)\right]. \]

An alternative expression, in which only a single observation is dealt with at each step of the summation, is

\[ A^2 = -n - \frac{1}{n} \sum_{i=1}^{n} \left[(2i-1)\ln \Phi(Y_i) + \left(2(n-i)+1\right)\ln\left(1 - \Phi(Y_i)\right)\right]. \]

5) \(A^{*2}\), an approximate adjustment for sample size, is calculated using

\[ A^{*2} = A^2\left(1 + \frac{4}{n} - \frac{25}{n^2}\right). \]

6) If \(A^{*2}\) exceeds 0.751, then the hypothesis of normality is rejected for a 5%-level test.
Note 1: If \(s = 0\), or if any \(\Phi(Y_i)\) equals 0 or 1, then \(A^2\) cannot be calculated and is undefined.
Note 2: Other common critical values for \(A^{*2}\) are 0.632 for a 10% level, 0.870 for a 2.5% level, and 1.029 for a 1% level. The above adjustment formula and critical values are taken from Shorack & Wellner (1986, p. 239). Care is required in comparisons across different sources, as often the specific adjustment formula is not stated.
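The six steps above translate directly into code. The following is a sketch rather than a canonical implementation: the function name anderson_darling_normal is our own, and the adjustment and the 0.751 cut-off are the Shorack & Wellner values quoted above.

```python
import numpy as np
from scipy.stats import norm

def anderson_darling_normal(x):
    """Anderson-Darling normality test following steps 1-6 above.

    Returns the adjusted statistic A*^2; normality is rejected at the
    5% level if it exceeds 0.751 (Shorack & Wellner, 1986).
    """
    x = np.sort(np.asarray(x, dtype=float))    # step 1: order the data
    n = len(x)
    mean, s = x.mean(), x.std(ddof=1)          # step 2: sample mean and s
    if s == 0:
        raise ValueError("A^2 is undefined when s = 0 (Note 1)")
    y = (x - mean) / s                         # step 3: standardize
    phi = norm.cdf(y)                          # step 4: standard normal CDF
    if np.any(phi == 0) or np.any(phi == 1):
        raise ValueError("A^2 is undefined when any Phi(Y_i) is 0 or 1 (Note 1)")
    i = np.arange(1, n + 1)
    a2 = -n - np.mean((2 * i - 1) * (np.log(phi) + np.log(1 - phi[::-1])))
    return a2 * (1 + 4 / n - 25 / n ** 2)      # step 5: sample-size adjustment

# Step 6: compare against the 5% critical value
rng = np.random.default_rng(1)
print(anderson_darling_normal(rng.normal(size=100)) > 0.751)       # usually False
print(anderson_darling_normal(rng.exponential(size=100)) > 0.751)  # usually True
```

SciPy's scipy.stats.anderson provides a packaged version of this test, returning the statistic together with critical values at several significance levels.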
Tests for other distributions
Above, it was assumed that the variable \(X\) was being tested for normal distribution. Any other family of distributions can be tested, but the test for each family is implemented by using a different modification of the basic test statistic, which is then referred to critical values specific to that family of distributions. Tests for the (two-parameter) log-normal distribution can be implemented by transforming the data using a logarithm and then applying the above test for normality, as sketched below. Details of the required modifications to the test statistic and of the critical values for the normal and exponential distributions have been published by Pearson & Hartley (1972, Table 54). Details for these distributions, with the addition of the Gumbel distribution, are also given by Shorack & Wellner (1986, p. 239). Details for the logistic distribution are given by Stephens (1979). A test for the (two-parameter) Weibull distribution can be obtained by making use of the fact that the logarithm of a Weibull variate has a Gumbel distribution.
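Following the log-transform remark above, the log-normal case can be sketched by reusing the anderson_darling_normal function from the previous section (our placeholder name, under the same assumptions):

```python
import numpy as np

def anderson_darling_lognormal(x):
    """Test fit of a two-parameter log-normal by testing log(x) for normality."""
    x = np.asarray(x, dtype=float)
    if np.any(x <= 0):
        raise ValueError("log-normal data must be strictly positive")
    # This is exactly the normality test applied to the transformed data,
    # so the same adjustment and 0.751 critical value apply.
    return anderson_darling_normal(np.log(x))
```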
Non-parametric k-sample tests
Scholz and Stephens (1987) discuss a test, based on the Anderson–Darling measure of agreement between distributions, for whether a number of random samples with possibly different sample sizes may have arisen from the same distribution, where this distribution is unspecified.
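For practical use, SciPy implements a version of this k-sample test as scipy.stats.anderson_ksamp. A brief illustration, with simulated data of our own purely for demonstration:

```python
import numpy as np
from scipy.stats import anderson_ksamp

rng = np.random.default_rng(2)
# Three samples of different sizes drawn from the same distribution
samples = [rng.normal(size=50), rng.normal(size=60), rng.normal(size=70)]
result = anderson_ksamp(samples)
print(result.statistic, result.critical_values, result.significance_level)
```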
See also
- Kolmogorov–Smirnov test
- Kuiper's test
- Shapiro–Wilk test
- Smirnov–Cramér–von-Mises test
- Jarque–Bera test
- Goodness of fit
References
- ↑ Anderson, T.W., Darling, D.A. (1952). Asymptotic theory of certain "goodness-of-fit" criteria based on stochastic processes. Annals of Mathematical Statistics 23: 193–212.
- ↑ Stephens, M.A. (1974). EDF statistics for goodness of fit and some comparisons. Journal of the American Statistical Association 69: 730–737.
- Corder, G.W., Foreman, D.I. (2009). Nonparametric Statistics for Non-Statisticians: A Step-by-Step Approach. Wiley. ISBN 9780470454619.
- Pearson, E.S., Hartley, H.O. (eds) (1972). Biometrika Tables for Statisticians, Volume II. Cambridge University Press. ISBN 0-521-06937-8.
- Shapiro, S.S. (1980). How to test normality and other distributional assumptions. In: The ASQC Basic References in Quality Control: Statistical Techniques 3, pp. 1–78.
- Shorack, G.R., Wellner, J.A. (1986). Empirical Processes with Applications to Statistics. Wiley. ISBN 0-471-86725-X.
- Stephens, M.A. (1979). Test of fit for the logistic distribution based on the empirical distribution function. Biometrika 66(3): 591–595.
- Scholz, F.W., Stephens, M.A. (1987). K-sample Anderson–Darling tests. Journal of the American Statistical Association 82: 918–924.