Probability theory is the mathematical study of phenomena characterized by randomness or uncertainty.

More precisely, probability is used to model situations in which an experiment, repeated under the same circumstances, can produce different results (the classic examples being rolling a die or tossing a coin). Mathematicians think of probabilities as numbers in the closed interval from 0 to 1 assigned to "events" whose occurrence or failure to occur is random. Probabilities ${\displaystyle P(A)}$ are assigned to events ${\displaystyle A}$ according to the probability axioms.

The probability that an event ${\displaystyle A}$ occurs given the known occurrence of an event ${\displaystyle B}$ is the conditional probability of ${\displaystyle A}$ given ${\displaystyle B}$; its numerical value is ${\displaystyle P(A\cap B)/P(B)}$ (as long as ${\displaystyle P(B)}$ is nonzero). If the conditional probability of ${\displaystyle A}$ given ${\displaystyle B}$ is the same as the ("unconditional") probability of ${\displaystyle A}$, then ${\displaystyle A}$ and ${\displaystyle B}$ are said to be independent events. That this relation between ${\displaystyle A}$ and ${\displaystyle B}$ is symmetric can be seen more readily by noting that independence is equivalent to ${\displaystyle P(A\cap B)=P(A)P(B)}$.
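These two definitions can be checked concretely on a finite sample space. The following is a minimal Python sketch using two fair dice; the particular events chosen are illustrative only and not part of the text above:

```python
from fractions import Fraction

# Sample space: all ordered outcomes of rolling two fair dice.
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def prob(event):
    """P of an event (given as a predicate) under the uniform measure on omega."""
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

A = lambda w: w[0] == 6          # first die shows 6
B = lambda w: w[0] + w[1] >= 10  # the sum is at least 10

# Conditional probability: P(A | B) = P(A and B) / P(B), with P(B) nonzero.
p_A_given_B = prob(lambda w: A(w) and B(w)) / prob(B)
print(p_A_given_B)   # 1/2

# Independence: A and C (second die even) satisfy P(A and C) = P(A) * P(C).
C = lambda w: w[1] % 2 == 0
print(prob(lambda w: A(w) and C(w)) == prob(A) * prob(C))  # True
```

Note that A and B above are *not* independent: conditioning on a sum of at least 10 raises the probability of a 6 on the first die from 1/6 to 1/2.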

Two crucial concepts in the theory of probability are those of a random variable and of the probability distribution of a random variable; see those articles for more information.

## A somewhat more abstract view of probability

Mathematicians usually take probability theory to be the study of probability spaces and random variables — an approach introduced by Kolmogorov in the 1930s. A probability space is a triple ${\displaystyle (\Omega, \mathcal F, P)}$, where

• ${\displaystyle \Omega}$ is a non-empty set, sometimes called the "sample space," each of whose members is thought of as a potential outcome of a random experiment. For example, if 100 voters are to be drawn randomly from among all voters in California and asked whom they will vote for governor, then the set of all sequences of 100 Californian voters would be the sample space Ω.
• ${\displaystyle \mathcal{F}}$ is a σ-algebra of subsets of ${\displaystyle \Omega}$; its members are called "events." For example, the set of all sequences of 100 Californian voters in which at least 60 will vote for Schwarzenegger is identified with the "event" that at least 60 of the 100 chosen voters will so vote. To say that ${\displaystyle \mathcal{F}}$ is a σ-algebra implies by definition that it contains ${\displaystyle \Omega}$, that the complement of any event is an event, and that the union of any (finite or countably infinite) sequence of events is an event. So for this example ${\displaystyle \mathcal{F}}$ contains: (1) the set of all sequences of 100 voters where at least 60 vote for Schwarzenegger; (2) the set of all sequences of 100 voters where fewer than 60 vote for Schwarzenegger (the complement of (1)); (3) the sample space Ω as above; and (4) the empty set.
• ${\displaystyle P}$ is a probability measure on ${\displaystyle \mathcal{F}}$, i.e., a measure such that ${\displaystyle P(\Omega)=1}$.
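For a finite sample space, the triple can be written out directly. Here is a minimal Python sketch; it uses a toy poll of 3 voters standing in for the 100-voter example (the names and sizes are illustrative assumptions, not from the text):

```python
from fractions import Fraction
from itertools import product

# Omega: all sequences of 3 votes, each 'S' (for) or 'O' (other).
omega = [''.join(t) for t in product('SO', repeat=3)]

# F: for a finite Omega the full power set serves as the sigma-algebra;
# here we only name the two events we need, both of which lie in it.
event_at_least_2_S = frozenset(w for w in omega if w.count('S') >= 2)
event_fewer_than_2_S = frozenset(omega) - event_at_least_2_S  # complement

# P: the uniform probability measure, defined on events (subsets of omega).
def P(event):
    return Fraction(len(event), len(omega))

print(P(frozenset(omega)))                              # 1: P(Omega) = 1
print(P(event_at_least_2_S) + P(event_fewer_than_2_S))  # 1: an event and its complement
```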

It is important to note that ${\displaystyle P}$ is a function defined on ${\displaystyle \mathcal{F}}$ and not on ${\displaystyle \Omega}$, and that ${\displaystyle \mathcal{F}}$ is often not the complete power set ${\displaystyle \mathbb {P} (\Omega )}$. Not every set of outcomes is an event.

If ${\displaystyle \Omega}$ is denumerable, we almost always define ${\displaystyle \mathcal{F}}$ as the power set of ${\displaystyle \Omega}$, i.e. ${\displaystyle {\mathcal {F}}=\mathbb {P} (\Omega )}$, which is trivially a σ-algebra and the largest one we can construct on ${\displaystyle \Omega}$. In a discrete space we can therefore omit ${\displaystyle \mathcal{F}}$ and just write ${\displaystyle (\Omega , P)}$ to define the probability space. If, on the other hand, ${\displaystyle \Omega}$ is non-denumerable and we use ${\displaystyle {\mathcal {F}}=\mathbb {P} (\Omega )}$, we run into trouble defining the probability measure ${\displaystyle P}$, because ${\displaystyle \mathcal{F}}$ is too "large": there will often be sets to which no unique measure can consistently be assigned, giving rise to problems like the Banach–Tarski paradox. So we have to use a smaller σ-algebra ${\displaystyle \mathcal{F}}$, e.g. the Borel algebra of ${\displaystyle \Omega}$, which is the smallest σ-algebra that makes all open sets measurable.
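In the finite case the σ-algebra axioms (contains Ω, closed under complement, closed under union) can be verified mechanically. Below is a small hedged sketch; `is_sigma_algebra` is a helper invented for this illustration, checking only pairwise (hence all finite) unions, which suffices when Ω is finite:

```python
from itertools import chain, combinations

def is_sigma_algebra(F, omega):
    """Check the finite-case sigma-algebra axioms for a family F of
    frozensets over a finite set omega: contains omega, closed under
    complement, and closed under (pairwise, hence finite) union."""
    F = set(F)
    omega = frozenset(omega)
    if omega not in F:
        return False
    if any(omega - A not in F for A in F):
        return False
    return all(A | B in F for A in F for B in F)

omega = frozenset({1, 2, 3, 4})
power_set = {frozenset(c) for c in chain.from_iterable(
    combinations(omega, r) for r in range(len(omega) + 1))}

print(is_sigma_algebra(power_set, omega))                      # True: the largest one
print(is_sigma_algebra({frozenset(), omega}, omega))           # True: the trivial one
print(is_sigma_algebra({frozenset(), frozenset({1})}, omega))  # False: omega missing
```

The power set is the largest σ-algebra on Ω and the trivial family {∅, Ω} is the smallest; every σ-algebra on Ω sits between the two.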

A random variable ${\displaystyle X}$ is a measurable function on ${\displaystyle \Omega}$. For example, the number of voters who will vote for Schwarzenegger in the aforementioned sample of 100 is a random variable.

If ${\displaystyle X}$ is any random variable, the notation ${\displaystyle P(X\geq 60)}$ is shorthand for ${\displaystyle P(\{\omega \in \Omega \mid X(\omega )\geq 60\})}$, assuming that "${\displaystyle X\geq 60}$" is an "event."

For an algebraic alternative to Kolmogorov's approach, see algebra of random variables.

## Philosophy of application of probability

There are different ways to interpret probability. Frequentists assign probabilities only to events that are random, i.e., to random variables that are outcomes of actual or theoretical experiments. Bayesians, on the other hand, assign probabilities to propositions that are uncertain, according either to subjective degrees of belief in their truth or to logically justifiable degrees of belief in their truth. Among statisticians and philosophers, many more distinctions are drawn beyond this subjective/objective divide; see the article on interpretations of probability in the Stanford Encyclopedia of Philosophy.

A Bayesian may assign a probability to the proposition that 'there was life on Mars a billion years ago', since that is uncertain, whereas a frequentist would not assign probabilities to statements at all. A frequentist is, strictly speaking, unable to interpret such uses of the probability concept, even though 'probability' is often used in this way in colloquial speech. Frequentists assign probabilities only to outcomes of well-defined random experiments, that is, where there is a defined sample space as in the theory section above. For another illustration of the differences, see the two envelopes problem.


## Bibliography

• Pierre-Simon de Laplace (1812) Analytical Theory of Probability
The first major treatise blending calculus with probability theory; originally published in French as Théorie Analytique des Probabilités.
• Andrei Nikolaevich Kolmogorov (1950) Foundations of the Theory of Probability
The modern measure-theoretic foundation of probability theory; the original German version (Grundbegriffe der Wahrscheinlichkeitsrechnung) appeared in 1933.
• Harold Jeffreys (1939) The Theory of Probability
An empiricist, Bayesian approach to the foundations of probability theory.
• Edward Nelson (1987) Radically Elementary Probability Theory
Discrete foundations of probability theory, based on nonstandard analysis and internal set theory. Downloadable at http://www.math.princeton.edu/~nelson/books.html
• Patrick Billingsley (1979) Probability and Measure
A standard graduate text on measure-theoretic probability, John Wiley and Sons, New York.
• Henk Tijms (2004) Understanding Probability
A lively introduction to probability theory for the beginner, Cambridge University Press.