- This article is about the mathematical definition of risk in statistical decision theory. For a more general discussion of concepts and definitions of risk, see the main article Risk.
In decision theory and estimation theory, the risk function R of a decision rule δ is the expected value of a loss function L:

$$R(\theta, \delta) = \mathbb{E}_\theta\big[ L\big(\theta, \delta(X)\big) \big] = \int_{\mathcal{X}} L\big(\theta, \delta(x)\big)\, dP_\theta(x),$$

where
- $\theta$ is a fixed but possibly unknown state of nature;
- $X$ is a vector of observations stochastically drawn from a population;
- $\mathbb{E}_\theta$ is the expectation over all population values of $X$;
- $P_\theta$ is a probability measure over the event space of $X$, parametrized by $\theta$; and
- the integral is evaluated over the entire support $\mathcal{X}$ of $X$.
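When the expectation above has no closed form, the risk can be approximated by averaging the loss over repeated draws of $X$. The Python sketch below does this by simple Monte Carlo; the Gaussian sampling model, the sample-median decision rule, and the absolute-error loss are illustrative assumptions, not part of the definition above.

```python
# Monte Carlo approximation of the risk R(theta, delta) = E_theta[ L(theta, delta(X)) ].
# Illustrative assumptions: X is an i.i.d. Normal(theta, 1) sample of size n,
# the decision rule delta is the sample median, and L is absolute-error loss.
import numpy as np

def risk(theta, delta, loss, n=25, reps=100_000, rng=None):
    """Average the loss L(theta, delta(X)) over `reps` simulated data sets X."""
    rng = np.random.default_rng(rng)
    losses = np.empty(reps)
    for i in range(reps):
        x = rng.normal(loc=theta, scale=1.0, size=n)   # draw X ~ P_theta
        losses[i] = loss(theta, delta(x))              # L(theta, delta(X))
    return losses.mean()

sample_median = np.median                              # decision rule delta(X)
abs_loss = lambda theta, a: abs(theta - a)             # absolute-error loss
print(risk(theta=0.0, delta=sample_median, loss=abs_loss))   # approx. R(0, median)
```

Any decision rule and loss can be plugged in; increasing `reps` reduces the Monte Carlo error at the usual $1/\sqrt{\text{reps}}$ rate.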
Examples
- For a scalar parameter $\theta$, a decision function $\delta(X)$ whose output $\hat\theta = \delta(X)$ is an estimate of $\theta$, and a quadratic loss function $L(\theta, \hat\theta) = (\theta - \hat\theta)^2$, the risk function becomes the mean squared error of the estimate,
$$R(\theta, \hat\theta) = \mathbb{E}_\theta\big[(\theta - \hat\theta)^2\big].$$
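For instance, if $X_1, \dots, X_n$ are i.i.d. Normal($\theta$, $\sigma^2$) and $\delta(X)$ is the sample mean, the mean squared error is $\sigma^2/n$. The short simulation below checks this numerically; the Gaussian model, $\sigma$, and $n$ are assumptions made only for this example.

```python
# Empirical check that the quadratic-loss risk of the sample mean equals its MSE.
# Assumed model for illustration: X_1..X_n i.i.d. Normal(theta, sigma^2),
# for which the theoretical MSE is sigma^2 / n.
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, n, reps = 2.0, 3.0, 50, 100_000

samples = rng.normal(theta, sigma, size=(reps, n))
estimates = samples.mean(axis=1)                 # delta(X) = sample mean
empirical_mse = np.mean((estimates - theta) ** 2)

print(empirical_mse, sigma**2 / n)               # the two values should agree closely
```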
- In density estimation, the unknown parameter is the probability density $f$ itself. The loss function is typically chosen to be a norm in an appropriate function space. For example, for the $L^2$ norm, $L(f, \hat f) = \| f - \hat f \|_2^2$, the risk function becomes the mean integrated squared error,
$$R(f, \hat f) = \mathbb{E}\, \| f - \hat f \|_2^2 = \mathbb{E} \int \big( f(x) - \hat f(x) \big)^2 \, dx.$$
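As a concrete sketch of this case, the code below approximates the MISE of a kernel density estimate of a standard normal density by Monte Carlo; the true density, sample size, estimator (SciPy's Gaussian KDE), and integration grid are all assumptions chosen for illustration.

```python
# Monte Carlo approximation of the mean integrated squared error (MISE) of a
# kernel density estimate.  Assumptions for illustration: the true density f is
# standard normal, the estimator is scipy's Gaussian KDE, and the L2 integral is
# approximated by the trapezoidal rule on a fixed grid.
import numpy as np
from scipy.stats import norm, gaussian_kde
from scipy.integrate import trapezoid

rng = np.random.default_rng(1)
n, reps = 200, 500
grid = np.linspace(-5, 5, 1001)
true_density = norm.pdf(grid)

ise = np.empty(reps)
for i in range(reps):
    x = rng.standard_normal(n)              # sample from the true density f
    fhat = gaussian_kde(x)(grid)            # kernel density estimate on the grid
    ise[i] = trapezoid((fhat - true_density) ** 2, grid)   # integrated squared error

print(ise.mean())                           # Monte Carlo estimate of the MISE
```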
This page uses Creative Commons Licensed content from Wikipedia (view authors).