The mean difference is a measure of statistical dispersion equal to the average absolute difference of two independent values drawn from a probability distribution. A related statistic is the relative mean difference, which is the mean difference divided by the arithmetic mean. An important relationship is that the relative mean difference is equal to twice the Gini coefficient, which is defined in terms of the Lorenz curve.
The mean difference is also known as the absolute mean difference and the Gini mean difference, and is sometimes denoted by Δ or MD. The mean deviation, despite the similar name, is a different measure of dispersion.
Calculation
For a population of size n, with a sequence of values yi, i = 1 to n:
- MD = (1/n²) Σi Σj |yi − yj|, with both sums running from 1 to n.
For a discrete probability function f(y), where yi, i = 1 to n, are the values with nonzero probabilities:
- MD = Σi Σj f(yi) f(yj) |yi − yj|.
For a probability density function f(x):
- MD = ∫∫ f(x) f(y) |x − y| dx dy, with both integrals taken over the whole real line.
For a cumulative distribution function F(x) with inverse x(F):
- MD = ∫∫ |x(F1) − x(F2)| dF1 dF2, with both integrals taken from 0 to 1.
The inverse x(F) may not exist because the cumulative distribution function has jump discontinuities or intervals of constant values. However, the previous formula can still apply by generalizing the definition of x(F):
- x(F1) = inf {y : F(y) ≥ F1}.
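The double-sum forms translate directly into code. The following is a minimal Python sketch of the first two formulas; the function names are illustrative, not from any standard library:

```python
def mean_difference(values):
    """Population MD: average of |yi - yj| over all n^2 ordered pairs
    (the i = j terms contribute zero)."""
    n = len(values)
    return sum(abs(a - b) for a in values for b in values) / n**2

def mean_difference_discrete(values, probs):
    """MD of a discrete distribution: sum of f(yi) f(yj) |yi - yj|
    over all pairs of support points."""
    return sum(p * q * abs(a - b)
               for a, p in zip(values, probs)
               for b, q in zip(values, probs))
```

For example, mean_difference_discrete([1, 2], [0.5, 0.5]) returns 0.5, the MD of a fair coin scored 1 or 2.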
Relative mean difference
When the probability distribution has a finite and nonzero arithmetic mean, the relative mean difference, sometimes denoted by ∇ or RMD, is defined by
- RMD = MD / (arithmetic mean).
The relative mean difference quantifies the mean difference in comparison to the size of the mean and is a dimensionless quantity. The relative mean difference is equal to twice the Gini coefficient which is defined in terms of the Lorenz curve. This gives complementary perspectives to both the relative mean difference and the Gini coefficient, including alternative ways of calculating their values.
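Continuing the sketch above, the RMD and the Gini coefficient can be computed together (relative_mean_difference is an illustrative name; it reuses mean_difference from the previous sketch):

```python
def relative_mean_difference(values):
    """RMD: mean difference divided by the arithmetic mean.
    Requires a nonzero mean."""
    m = sum(values) / len(values)
    return mean_difference(values) / m

incomes = [1, 2, 2, 5, 10]
rmd = relative_mean_difference(incomes)
gini = rmd / 2  # the Gini coefficient is half the RMD
```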
Properties
The mean difference is invariant to translations and negation, and varies proportionally to positive scaling. That is to say, if X is a random variable and c is a constant:
- MD(X + c) = MD(X),
- MD(-X) = MD(X), and
- MD(c X) = |c| MD(X).
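These identities are exact, so they can be checked numerically on arbitrary data (a sketch reusing mean_difference from above; the tolerances only absorb floating-point rounding):

```python
import random

xs = [random.gauss(0, 1) for _ in range(200)]
c = 3.7
assert abs(mean_difference([x + c for x in xs]) - mean_difference(xs)) < 1e-9
assert abs(mean_difference([-x for x in xs]) - mean_difference(xs)) < 1e-9
assert abs(mean_difference([c * x for x in xs]) - abs(c) * mean_difference(xs)) < 1e-9
```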
The relative mean difference is invariant to positive scaling, commutes with negation, and varies under translation in proportion to the ratio of the original and translated arithmetic means. That is to say, if X is a random variable and c is a constant:
- RMD(X + c) = RMD(X) · mean(X) / (mean(X) + c) = RMD(X) / (1 + c/mean(X)) for c ≠ −mean(X),
- RMD(-X) = −RMD(X), and
- RMD(c X) = RMD(X) for c > 0.
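The translation rule in particular is easy to get wrong, so a quick numerical check helps (again reusing the illustrative relative_mean_difference from above):

```python
import random

ys = [random.uniform(1, 10) for _ in range(200)]  # positive values, positive mean
m = sum(ys) / len(ys)
c = 2.5
shifted = relative_mean_difference([y + c for y in ys])
assert abs(shifted - relative_mean_difference(ys) * m / (m + c)) < 1e-9
assert abs(relative_mean_difference([3 * y for y in ys]) - relative_mean_difference(ys)) < 1e-9
assert abs(relative_mean_difference([-y for y in ys]) + relative_mean_difference(ys)) < 1e-9
```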
If a random variable has a positive mean, then its relative mean difference will always be greater than or equal to zero. If, additionally, the random variable can only take values that are greater than or equal to zero, then its relative mean difference will be less than 2.
Compared to standard deviation
Both the standard deviation and the mean difference measure dispersion, that is, how spread out the values of a population or of a probability distribution are. The mean difference is not defined in terms of a specific measure of central tendency, whereas the standard deviation is defined in terms of deviations from the arithmetic mean. Because the standard deviation squares its differences, it gives more weight to larger differences and less weight to smaller differences than the mean difference does. When the arithmetic mean is finite, the mean difference is also finite, even when the standard deviation is infinite. See the examples for some specific comparisons.
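That the mean difference can be finite while the standard deviation is not can be seen numerically with a heavy-tailed distribution whose mean is finite but whose variance is infinite, such as a Pareto distribution with shape k = 1.5 (a sketch reusing mean_difference from above; random.paretovariate draws from a Pareto distribution with xm = 1):

```python
import random
import statistics

random.seed(0)
# Pareto, k = 1.5, xm = 1: mean = k/(k-1) = 3 is finite, variance is infinite.
xs = [random.paretovariate(1.5) for _ in range(2000)]
print(statistics.pstdev(xs))   # sample SD: keeps growing as n grows
print(mean_difference(xs))     # sample MD: settles near 2k/((k-1)(2k-1)) = 3
```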
Sample estimators
For a random sample S from a random variable X, consisting of n values yi, the statistic
- MD(S) = Σi Σj |yi − yj| / (n(n − 1)), with both sums running from 1 to n,
is a consistent and unbiased estimator of MD(X).
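A sketch of this estimator under the convention above (md_sample is an illustrative name):

```python
def md_sample(values):
    """Unbiased sample estimator of MD(X). The i = j terms vanish, so
    dividing the full double sum by n(n - 1) averages |yi - yj| over
    ordered pairs of distinct indices."""
    n = len(values)
    return sum(abs(a - b) for a in values for b in values) / (n * (n - 1))
```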
The statistic:
- RMD(S) = MD(S) / mean(S), where mean(S) is the sample arithmetic mean,
is a consistent estimator of RMD(X), but is not, in general, unbiased.
Confidence intervals for RMD(X) can be calculated using bootstrap sampling techniques.
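One common choice is the percentile bootstrap: resample the data with replacement many times, recompute RMD on each resample, and read off empirical quantiles. A sketch, assuming relative_mean_difference from above (bootstrap_rmd_ci is an illustrative name):

```python
import random

def bootstrap_rmd_ci(sample, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for RMD(X)."""
    n = len(sample)
    stats = sorted(
        relative_mean_difference([random.choice(sample) for _ in range(n)])
        for _ in range(n_boot)
    )
    return (stats[int(n_boot * alpha / 2)],
            stats[int(n_boot * (1 - alpha / 2)) - 1])
```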
There does not exist, in general, an unbiased estimator for RMD(X), in part because of the difficulty of finding an unbiased estimate of the reciprocal of the mean. For example, suppose the sample is known to be drawn from a random variable X(p) for an unknown p, where X(p) − 1 has a Bernoulli distribution, so that Pr(X(p) = 1) = 1 − p and Pr(X(p) = 2) = p. Then
- RMD(X(p)) = 2p(1 − p)/(1 + p).
But the expected value of any estimator R(S) of RMD(X(p)) will be of the form
- E(R(S)) = r0 + r1 p + r2 p² + … + rn p^n,
where the ri are constants that do not depend on p. Since 2p(1 − p)/(1 + p) is not a polynomial in p, E(R(S)) can never equal RMD(X(p)) for all p between 0 and 1.
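The gap between the two forms can be made visible symbolically: the power series of 2p(1 − p)/(1 + p) never terminates, so no polynomial of fixed degree can match it. A sketch using sympy:

```python
import sympy as sp

p = sp.symbols('p')
rmd = 2 * p * (1 - p) / (1 + p)
# Non-terminating series => rmd is not a polynomial in p, so no
# degree-n polynomial E(R(S)) can equal it for all p in (0, 1).
print(sp.series(rmd, p, 0, 6))  # 2*p - 4*p**2 + 4*p**3 - 4*p**4 + ...
```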
Examples
Distribution | Parameters | Mean | Standard Deviation | Mean Difference | Relative Mean Difference |
---|---|---|---|---|---|
Continuous uniform distribution | a = 0; b = 1 | 1/2 = 0.5 | 1/√12 ≈ 0.2887 | 1/3 ≈ 0.3333 | 2/3 ≈ 0.6667 |
Normal distribution | μ = 1; σ = 1 | 1 | 1 | 2/√π ≈ 1.1284 | 2/√π ≈ 1.1284 |
Exponential distribution | λ = 1 | 1 | 1 | 1 | 1 |
Pareto distribution | k > 1; xm = 1 | k/(k − 1) | √(k/(k − 2))/(k − 1) (for k > 2) | 2k/((k − 1)(2k − 1)) | 2/(2k − 1) |
Gamma distribution | k; θ | kθ | θ√k | kθ (2 − 4 I0.5(k + 1, k)) † | 2 − 4 I0.5(k + 1, k) † |
Gamma distribution | k = 1; θ = 1 | 1 | 1 | 1 | 1 |
Gamma distribution | k = 2; θ = 1 | 2 | √2 ≈ 1.4142 | 3/2 = 1.5 | 3/4 = 0.75 |
Gamma distribution | k = 3; θ = 1 | 3 | √3 ≈ 1.7321 | 15/8 = 1.875 | 5/8 = 0.625 |
Gamma distribution | k = 4; θ = 1 | 4 | 2 | 35/16 = 2.1875 | 35/64 = 0.546875 |
Bernoulli distribution | 0 ≤ p ≤ 1 | p | √(p(1 − p)) | 2p(1 − p) | 2(1 − p) for p > 0 |
- † Iz(x, y) is the regularized incomplete beta function.
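The table entries can be spot-checked by Monte Carlo, since MD is simply E|X − Y| for two independent draws; for instance, for the normal row (a self-contained sketch):

```python
import math
import random

random.seed(1)
pairs = 200_000
est = sum(abs(random.gauss(1, 1) - random.gauss(1, 1))
          for _ in range(pairs)) / pairs
print(est, 2 / math.sqrt(math.pi))  # both close to 1.1284
```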
See also
- Gini coefficient
- Lorenz curve
- Standard deviation
- Mean deviation
- Estimator
- Coefficient of variation
- Corrado Gini
This page uses Creative Commons Licensed content from Wikipedia.