Statistical theory defines a statistic as a function of a sample, where the function itself does not depend on the sample's unknown distribution: it can be computed entirely from the observed data, without knowledge of any unknown parameters.
In the calculation of the arithmetic mean, for example, the algorithm consists of summing all the data values and dividing this sum by the number of data items. Thus the arithmetic mean is a statistic.
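The algorithm above can be sketched in a few lines of Python; the function name is illustrative, not standard:

```python
def arithmetic_mean(sample):
    """Sum all the data values and divide the sum by the number of items.

    The function depends only on the observed sample, not on any
    unknown parameter of the underlying distribution, so its value
    is a statistic.
    """
    return sum(sample) / len(sample)

arithmetic_mean([2, 4, 6, 8])  # 5.0
```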
Other examples of statistics include
- Sample median
- Sample variance and sample standard deviation
- Sample quantiles besides the median, e.g., quartiles and percentiles
- t-statistics, chi-squared statistics
- Order statistics
- Sample moments, including kurtosis and skewness
- Various functionals of the empirical distribution function
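Several of the statistics listed above are available in Python's standard-library `statistics` module; the sample values below are arbitrary illustrative data:

```python
import statistics

sample = [4.0, 7.0, 1.0, 9.0, 5.0, 3.0]

# Sample median: middle value (or midpoint of the two middle values).
med = statistics.median(sample)

# Sample variance and standard deviation (n - 1 denominator).
var = statistics.variance(sample)
sd = statistics.stdev(sample)

# Sample quartiles: cut points dividing the sorted data into four parts
# (statistics.quantiles requires Python 3.8+).
quartiles = statistics.quantiles(sample, n=4)
```

Each of these is a function of the sample alone, so each is a statistic in the sense defined above.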
Strictly speaking, the popular use of "statistic" to mean a single measurement, or datum, is correct, since the function applied can be taken to be the identity function. In practice, however, a statistician would usually not call an individual measurement a statistic, reserving the term instead for quantities such as the mean of several such measurements.
Statisticians often contemplate a parameterized family of probability distributions, any member of which could be the distribution of some measurable aspect of each member of a population, from which a sample is drawn randomly. For example, the parameter may be the average height of 25-year-old men in North America. The heights of the members of a sample of 100 such men are measured; the average of those 100 numbers is a statistic. The average of the heights of all members of the population is not a statistic unless that has somehow also been ascertained. The average height of all (in the sense of genetically possible) 25-year-old North American men is a parameter and not a statistic.
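The distinction can be illustrated with a small simulation; the population here is synthetic (hypothetical Gaussian heights, not real data):

```python
import random

random.seed(0)

# Hypothetical population: 100,000 simulated heights in centimetres.
population = [random.gauss(177.0, 7.0) for _ in range(100_000)]

# The population average is a parameter: a fixed number, usually unknown
# in practice because the whole population is never measured.
parameter = sum(population) / len(population)

# A random sample of 100 heights, as in the example above.
sample = random.sample(population, 100)

# The sample average is a statistic: computable entirely from the data,
# and different from sample to sample.
statistic = sum(sample) / len(sample)
```

Drawing a second sample would generally yield a different value of the statistic, while the parameter stays fixed.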
Important potential properties of statistics are completeness, sufficiency and unbiasedness.
|This page uses Creative Commons Licensed content from Wikipedia (view authors).|