In probability theory (and especially gambling), the expected value (or mathematical expectation) of a random variable is the sum, over each possible outcome of the experiment, of the probability of that outcome multiplied by its payoff ("value"). Thus, it represents the average amount one "expects" to win per bet if bets with identical odds are repeated many times. Note that the value itself may not be expected in the general sense; it may be unlikely or even impossible. A game or situation in which the expected value for the player is zero (no net gain or loss) is called a "fair game."
For example, an American roulette wheel has 38 equally likely outcomes. A bet placed on a single number pays 35-to-1 (this means that you are paid 35 times your bet and your bet is returned, so you get 36 times your bet). So the expected value of the profit resulting from a $1 bet on a single number is, considering all 38 possible outcomes:

$$E[\text{profit}] = \left(-\$1 \cdot \frac{37}{38}\right) + \left(\$35 \cdot \frac{1}{38}\right) = -\$\frac{2}{38},$$

which is about −$0.0526. Therefore one expects, on average, to lose over five cents for every dollar bet.
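A minimal Python sketch of this calculation (illustrative only; the variable names are invented here) computes the expected value exactly and also estimates it by repeating the bet many times:

```python
import random

# Exact expected profit of a $1 single-number bet on American roulette:
# win $35 with probability 1/38, lose $1 with probability 37/38.
exact_ev = 35 * (1 / 38) + (-1) * (37 / 38)
print(f"Exact expected profit: {exact_ev:.4f}")   # about -0.0526

# Monte Carlo estimate: repeat the bet many times and average the profit.
trials = 1_000_000
total = sum(35 if random.randrange(38) == 0 else -1 for _ in range(trials))
print(f"Simulated average profit: {total / trials:.4f}")
```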
Mathematical definition
In general, if $X$ is a random variable defined on a probability space $(\Omega, \mathcal{F}, P)$, then the expected value of $X$ (denoted $E[X]$ or sometimes $\langle X \rangle$ or $\bar{X}$) is defined as

$$E[X] = \int_\Omega X \, dP,$$
where the Lebesgue integral is employed. Note that not all random variables have an expected value, since the integral may not exist (e.g., the Cauchy distribution). Two variables with the same probability distribution will have the same expected value, if it is defined.
If $X$ is a discrete random variable with values $x_1$, $x_2$, ... and corresponding probabilities $p_1$, $p_2$, ... which add up to 1, then $E[X]$ can be computed as the sum or series

$$E[X] = \sum_i p_i x_i,$$
as in the gambling example mentioned above.
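For instance, the following short sketch applies this sum to a fair six-sided die (an example chosen here for familiarity, not taken from the original text):

```python
# E[X] = sum_i p_i * x_i for a fair six-sided die.
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6
expected = sum(p * x for p, x in zip(probs, values))
print(expected)  # 3.5 -- a value the die itself can never show
```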
If the probability distribution of $X$ admits a probability density function $f(x)$, then the expected value can be computed as

$$E[X] = \int_{-\infty}^{\infty} x f(x)\, dx.$$
It follows directly from the discrete case definition that if $X$ is a constant random variable, i.e. $X = b$ for some fixed real number $b$, then the expected value of $X$ is also $b$.
The expected value of an arbitrary function of $X$, $g(X)$, with respect to the probability density function $f(x)$ is given by

$$E[g(X)] = \int_{-\infty}^{\infty} g(x) f(x)\, dx.$$
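As an illustration, the sketch below evaluates this integral numerically for a standard normal density using scipy.integrate.quad (the choice of distribution is illustrative):

```python
import math
from scipy.integrate import quad

# Standard normal density f(x); E[g(X)] = integral of g(x) * f(x) dx.
f = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# E[X] should be 0 and E[X^2] should be 1 for the standard normal.
ex, _ = quad(lambda x: x * f(x), -math.inf, math.inf)
ex2, _ = quad(lambda x: x * x * f(x), -math.inf, math.inf)
print(ex, ex2)  # approximately 0.0 and 1.0
```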
Properties
Linearity
The expected value operator (or expectation operator) is linear in the sense that

$$E[aX + bY] = a\,E[X] + b\,E[Y]$$

for any two random variables $X$ and $Y$ (which need to be defined on the same probability space) and any real numbers $a$ and $b$.
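A short Monte Carlo sketch illustrating linearity (the distributions and coefficients below are arbitrary choices for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(2.0, size=100_000)   # E[X] = 2
y = rng.normal(5.0, 1.0, size=100_000)   # E[Y] = 5
a, b = 3.0, -1.0

# Sample means of aX + bY and a*mean(X) + b*mean(Y) should agree closely.
print(np.mean(a * x + b * y))            # about 3*2 + (-1)*5 = 1
print(a * np.mean(x) + b * np.mean(y))
```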
Iterated expectation
For any two random variables $X$, $Y$ one may define the conditional expectation:

$$E[X|Y](y) = E[X \mid Y=y] = \sum_x x \cdot P(X=x \mid Y=y).$$

Then the expectation of $E[X|Y]$ satisfies

$$E\big[E[X|Y]\big] = \sum_y E[X \mid Y=y] \cdot P(Y=y) = \sum_y \left( \sum_x x \cdot P(X=x \mid Y=y) \right) P(Y=y) = E[X].$$

Hence, the following equation holds:

$$E[X] = E\big[E[X|Y]\big].$$

The right-hand side of this equation is referred to as the iterated expectation. This proposition is treated in the law of total expectation.
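A small numerical check of the iterated expectation, using a hypothetical joint distribution made up for this example:

```python
# Joint pmf P(X=x, Y=y) for a small discrete example (numbers invented here).
joint = {(0, 0): 0.1, (1, 0): 0.3, (0, 1): 0.4, (1, 1): 0.2}

# Direct expectation E[X].
ex = sum(x * p for (x, y), p in joint.items())

# Iterated expectation: E[X] = sum_y E[X | Y=y] * P(Y=y).
ex_iterated = 0.0
for y0 in {y for (_, y) in joint}:
    p_y = sum(p for (x, y), p in joint.items() if y == y0)
    e_x_given_y = sum(x * p for (x, y), p in joint.items() if y == y0) / p_y
    ex_iterated += e_x_given_y * p_y

print(ex, ex_iterated)  # both 0.5
```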
Inequality
If a random variable X is always less than or equal to another random variable Y, the expectation of X is less than or equal to that of Y:
If $X \leq Y$, then $E[X] \leq E[Y]$.

In particular, since $X \leq |X|$ and $-X \leq |X|$, the absolute value of the expectation of a random variable is less than or equal to the expectation of its absolute value:

$$|E[X]| \leq E[|X|].$$
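A quick sample-based illustration (the normal distribution here is an arbitrary choice):

```python
import numpy as np

x = np.random.default_rng(1).normal(-0.3, 2.0, size=100_000)
print(abs(np.mean(x)))      # |E[X]| -- about 0.3
print(np.mean(np.abs(x)))   # E[|X|] -- larger, about 1.6
```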
Representation
The following formula holds for any nonnegative real-valued random variable $X$ (such that $E[X^\alpha] < \infty$) and positive real number $\alpha$:

$$E[X^\alpha] = \alpha \int_0^\infty t^{\alpha - 1} P(X > t)\, dt.$$
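The formula can be checked numerically; the sketch below uses an Exponential(1) variable, for which $P(X > t) = e^{-t}$ and $E[X^2] = 2$ (the distribution is an illustrative choice):

```python
import math
from scipy.integrate import quad

# X ~ Exponential(1): tail P(X > t) = exp(-t); with alpha = 2, E[X^2] = 2.
alpha = 2.0
tail = lambda t: math.exp(-t)
rhs, _ = quad(lambda t: alpha * t ** (alpha - 1) * tail(t), 0, math.inf)
print(rhs)  # approximately 2.0
```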
Non-multiplicativity
In general, the expected value operator is not multiplicative, i.e. $E[XY]$ is not necessarily equal to $E[X]\,E[Y]$, except when $X$ and $Y$ are independent or uncorrelated. This lack of multiplicativity gives rise to the study of covariance and correlation.
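A sketch illustrating the failure of multiplicativity for two correlated variables (constructed here purely for the example):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=100_000)
y = x + rng.normal(size=100_000)   # Y is correlated with X

# E[XY] differs from E[X]E[Y]; the gap is the covariance Cov(X, Y).
print(np.mean(x * y))                            # about 1.0
print(np.mean(x) * np.mean(y))                   # about 0.0
print(np.mean(x * y) - np.mean(x) * np.mean(y))  # sample covariance, about 1.0
```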
Functional non-invariance
In general, the expectation operator and functions of random variables do not commute; that is,

$$E[g(X)] = \int_\Omega g(X)\, dP \neq g(E[X]),$$

except as noted above.
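For instance, with $g(x) = x^2$ and a standard normal $X$ (an illustrative choice), $E[g(X)] = 1$ while $g(E[X]) = 0$:

```python
import numpy as np

x = np.random.default_rng(3).normal(0.0, 1.0, size=100_000)
g = np.square

print(np.mean(g(x)))   # E[X^2] is about 1
print(g(np.mean(x)))   # (E[X])^2 is about 0
```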
Uses and applications of the expected value
The expected values of the powers of $X$ are called the moments of $X$; the moments about the mean of $X$ are expected values of powers of $X - E[X]$. The moments of some random variables can be used to specify their distributions, via their moment generating functions.
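An illustrative sketch estimating a few raw and central moments from a sample (the normal distribution is an arbitrary choice):

```python
import numpy as np

x = np.random.default_rng(4).normal(1.0, 2.0, size=100_000)

# Raw moments E[X^k] and central moments E[(X - E[X])^k], via sample means.
for k in (1, 2, 3):
    print(k, np.mean(x ** k), np.mean((x - np.mean(x)) ** k))
# The second central moment is the variance (about 4 here).
```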
To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes the arithmetic mean of the results. This estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of the residuals (the sum of the squared differences between the observations and the estimate). The law of large numbers demonstrates that (under fairly mild conditions) as the size of the sample gets larger, the variance of this estimate gets smaller.
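A short sketch illustrating this shrinking spread (the exponential distribution and sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
# Sample means of increasingly large samples from Exponential(scale=2), E[X] = 2.
for n in (10, 1_000, 100_000):
    print(n, rng.exponential(2.0, size=n).mean())
# The estimates cluster more tightly around 2 as n grows.
```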
In classical mechanics, the center of mass is an analogous concept to expectation. For example, suppose $X$ is a discrete random variable with values $x_i$ and corresponding probabilities $p_i$. Now consider a weightless rod on which are placed weights, at locations $x_i$ along the rod and having masses $p_i$ (whose sum is one). The point at which the rod balances (its center of gravity) is $E[X] = \sum_i x_i p_i$. (Note, however, that the center of mass is not in general the same as the center of gravity.)
Expectation of matrices
If $X$ is an $m \times n$ matrix, then the expected value of the matrix is the $m \times n$ matrix of expected values of its entries:

$$E[X] = \begin{pmatrix} E[x_{1,1}] & E[x_{1,2}] & \cdots & E[x_{1,n}] \\ E[x_{2,1}] & E[x_{2,2}] & \cdots & E[x_{2,n}] \\ \vdots & \vdots & \ddots & \vdots \\ E[x_{m,1}] & E[x_{m,2}] & \cdots & E[x_{m,n}] \end{pmatrix}.$$
This property is utilized in covariance matrices.
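A sketch of entrywise matrix expectation estimated from samples, followed by a covariance matrix computed with numpy (all parameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
# 10,000 draws of a random 2x2 matrix whose entries have known means.
means = np.array([[1.0, 2.0], [3.0, 4.0]])
samples = means + rng.normal(size=(10_000, 2, 2))

# Entrywise expectation: averaging over the sample axis recovers `means`.
print(samples.mean(axis=0))

# The same idea underlies covariance matrices, e.g. np.cov on row-variables.
data = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 2]], size=10_000)
print(np.cov(data.T))  # approximately [[1, 0.5], [0.5, 2]]
```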
See also
- An inequality on location and scale parameters
- Conditional expectation
- Expectation–maximization algorithm
- Expected number
- The general term expectation.
This page uses Creative Commons Licensed content from Wikipedia (view authors).