The topic of this article is distinct from the topics of Library and information science and Information technology.


Information theory is a field of mathematics that considers three fundamental questions:

  • Lossless data compression: How much can data be compressed so that another person can recover an identical copy of the uncompressed data?
  • Lossy data compression: How much can data be compressed so that another person can recover an approximate copy of the uncompressed data?
  • Channel capacity: How quickly can data be communicated to someone else through a noisy medium?

These somewhat abstract questions are answered quite rigorously by using mathematics introduced by Claude Shannon in 1948. His paper spawned the field of information theory, and the results have been crucial to the success of the Voyager missions to deep space, the invention of the CD, the feasibility of mobile phones, analysis of the code used by DNA, and numerous other fields.

Overview

Information theory is the mathematical theory of data communication and storage, generally considered to have been founded in 1948 by Claude E. Shannon. The central paradigm of classic information theory is the engineering problem of the transmission of information over a noisy channel. The most fundamental results of this theory are Shannon's source coding theorem, which establishes that on average the number of bits needed to represent the result of an uncertain event is given by the entropy; and Shannon's noisy-channel coding theorem, which states that reliable communication is possible over noisy channels provided that the rate of communication is below a certain threshold called the channel capacity. The channel capacity is achieved with appropriate encoding and decoding systems.

Information theory is closely associated with a collection of pure and applied disciplines that have been carried out under a variety of banners in different parts of the world over the past half century or more: adaptive systems, anticipatory systems, artificial intelligence, complex systems, complexity science, cybernetics, informatics, machine learning, along with systems sciences of many descriptions. Information theory is a broad and deep mathematical theory, with equally broad and deep applications, chief among them coding theory.

Coding theory is concerned with finding explicit methods, called codes, of increasing the efficiency and fidelity of data communication over a noisy channel up to near the limit that Shannon proved is the maximum possible. These codes can be roughly subdivided into data compression and error-correction codes. It took many years to find the good codes whose existence Shannon proved. A third class of codes are cryptographic ciphers; concepts from coding theory and information theory are much used in cryptography and cryptanalysis; see the article on deciban for an interesting historical application.

Information theory is also used in information retrieval, intelligence gathering, gambling, statistics, and even musical composition.

Mathematical theory of information

For a more thorough discussion of these basic equations, see Information entropy.

The abstract idea of what "information" really is must be made more concrete so that mathematicians can analyze it.

Self-information

Shannon defined a measure of information content called the self-information or surprisal of a message m:

$I(m) = \log\left(\frac{1}{p(m)}\right) = -\log p(m)$

where $p(m) = \Pr(M = m)$ is the probability that message m is chosen from all possible choices in the message space $M$.

This equation causes messages with lower probabilities to contribute more to the overall value of I(m). In other words, infrequently occurring messages are more valuable. (This is a consequence of the property of logarithms that $-\log p(m)$ is very large when $p(m)$ is near 0 for unlikely messages and very small when $p(m)$ is near 1 for almost certain messages.)

For example, if John says "See you later, honey" to his wife every morning before leaving for the office, that message holds little "content" or "value". But if he shouts "Get lost" at his wife one morning, then that message holds much more value or content (because, presumably, the probability of him choosing that message is very low).
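To make the definition concrete, here is a minimal Python sketch (the probabilities are invented for illustration and are not from the original article) that computes self-information in bits:

    import math

    def self_information(p, base=2):
        """Self-information (surprisal) of a message with probability p, in bits for base 2."""
        return -math.log(p, base)

    # A near-certain message carries little information; an unlikely one carries a lot.
    print(self_information(0.99))   # ~0.0145 bits ("See you later, honey")
    print(self_information(0.001))  # ~9.97 bits   ("Get lost")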

Entropy

The entropy of a discrete message space $M$ is a measure of the amount of uncertainty one has about which message will be chosen. It is defined as the average self-information of a message $m$ from that message space:

$H(M) = \mathbb{E}[I(m)] = \sum_{m \in M} p(m)\, I(m) = -\sum_{m \in M} p(m) \log p(m)$

The logarithm in the formula is usually taken to base 2, and entropy is measured in bits. An important property of entropy is that it is maximized when all the messages in the message space are equiprobable. In this case $H(M) = \log |M|$.
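The maximization property is easy to see numerically. A small Python sketch, with an invented four-message distribution, compares a uniform distribution against a skewed one:

    import math

    def entropy(probs, base=2):
        """Shannon entropy of a discrete distribution, in bits for base 2."""
        return -sum(p * math.log(p, base) for p in probs if p > 0)

    uniform = [0.25, 0.25, 0.25, 0.25]   # equiprobable messages
    skewed  = [0.97, 0.01, 0.01, 0.01]   # one message dominates

    print(entropy(uniform))  # 2.0 bits = log2(4), the maximum for 4 messages
    print(entropy(skewed))   # ~0.24 bits, far less uncertainty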

Joint entropy

The joint entropy of two discrete random variables $X$ and $Y$ is defined as the entropy of the joint distribution of $X$ and $Y$:

$H(X, Y) = \mathbb{E}_{X,Y}[-\log p(x, y)] = -\sum_{x, y} p(x, y) \log p(x, y)$

If $X$ and $Y$ are independent, then the joint entropy is simply the sum of their individual entropies.

(Note: The joint entropy is not to be confused with the cross entropy, despite similar notation.)

Conditional entropy (equivocation)

Given a particular value $y$ of the random variable $Y$, the conditional entropy of $X$ given $Y = y$ is defined as:

$H(X \mid Y = y) = -\sum_{x} p(x \mid y) \log p(x \mid y)$

where $p(x \mid y) = p(x, y) / p(y)$ is the conditional probability of $x$ given $y$.

The conditional entropy of $X$ given $Y$, also called the equivocation of $X$ about $Y$, is then obtained by averaging over $Y$:

$H(X \mid Y) = \sum_{y} p(y)\, H(X \mid Y = y) = -\sum_{x, y} p(x, y) \log p(x \mid y)$

A basic property of the conditional entropy is that:

$H(X \mid Y) = H(X, Y) - H(Y)$
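The chain-rule property above can be checked numerically. The following Python sketch, using a made-up joint distribution purely for illustration, verifies that $H(X \mid Y) = H(X, Y) - H(Y)$:

    import math

    def H(probs):
        """Entropy in bits of a list of probabilities."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # A small joint distribution p(x, y) over X in {0,1} and Y in {0,1} (illustrative numbers).
    p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

    H_XY = H(p_xy.values())                                   # joint entropy H(X, Y)
    p_y = {y: sum(p for (x2, y2), p in p_xy.items() if y2 == y) for y in (0, 1)}
    H_Y = H(p_y.values())                                     # marginal entropy H(Y)

    # Conditional entropy computed directly from its definition.
    H_X_given_Y = -sum(p * math.log2(p / p_y[y]) for (x, y), p in p_xy.items())

    print(round(H_X_given_Y, 6) == round(H_XY - H_Y, 6))      # True: H(X|Y) = H(X,Y) - H(Y)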

Mutual information (transinformation)

It turns out that one of the most useful and important measures of information is the mutual information, or transinformation. This is a measure of how much information can be obtained about one random variable by observing another. The transinformation of $X$ relative to $Y$ (which represents conceptually the average amount of information about $X$ that can be gained by observing $Y$) is given by:

$I(X; Y) = \sum_{x, y} p(x, y) \log \frac{p(x, y)}{p(x)\, p(y)}$

A basic property of the transinformation is that:

$I(X; Y) = H(X) - H(X \mid Y)$

Mutual information is symmetric:

$I(X; Y) = I(Y; X) = H(X) + H(Y) - H(X, Y)$

Mutual information is closely related to the log-likelihood ratio test in the context of contingency tables and the multinomial distribution, and to Pearson's χ² test: mutual information can be considered a statistic for assessing independence between a pair of variables, and it has a well-specified asymptotic distribution. Mutual information can also be expressed as a Kullback-Leibler divergence, measuring the difference between the actual joint distribution and the product of the marginal distributions:

$I(X; Y) = D_{\mathrm{KL}}\big(p(x, y) \,\|\, p(x)\, p(y)\big)$
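Continuing with the same made-up joint distribution as in the conditional-entropy sketch, this Python fragment computes the mutual information directly as the Kullback-Leibler divergence above and checks it against the identity $I(X; Y) = H(X) + H(Y) - H(X, Y)$:

    import math

    def H(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Same illustrative joint distribution as above.
    p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
    p_x = {x: sum(p for (x2, y2), p in p_xy.items() if x2 == x) for x in (0, 1)}
    p_y = {y: sum(p for (x2, y2), p in p_xy.items() if y2 == y) for y in (0, 1)}

    # Mutual information as the KL divergence between p(x,y) and p(x)p(y).
    I_xy = sum(p * math.log2(p / (p_x[x] * p_y[y])) for (x, y), p in p_xy.items())

    # It agrees with the entropy identity I(X;Y) = H(X) + H(Y) - H(X,Y).
    print(round(I_xy, 6) == round(H(p_x.values()) + H(p_y.values()) - H(p_xy.values()), 6))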

Continuous equivalents of entropy

See main article: Differential entropy.

Shannon information is appropriate for measuring uncertainty over a discrete space. Its basic measures have been extended by analogy to continuous spaces: the sums are replaced with integrals, and probability density functions are used in place of probability mass functions. By analogy with the discrete case, the differential entropy, joint entropy, conditional entropy, and mutual information can be defined as follows:

$h(X) = -\int f(x) \log f(x) \, dx$

$h(X, Y) = -\iint f(x, y) \log f(x, y) \, dx \, dy$

$h(X \mid Y) = -\iint f(x, y) \log f(x \mid y) \, dx \, dy$

$I(X; Y) = \iint f(x, y) \log \frac{f(x, y)}{f(x)\, f(y)} \, dx \, dy$

where $f(x, y)$ is the joint density function, $f(x)$ and $f(y)$ are the marginal densities, and $f(x \mid y)$ is the conditional density.
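As one standard check of these continuous definitions, the differential entropy of a zero-mean Gaussian with standard deviation $\sigma$ has the known closed form $\tfrac{1}{2} \log_2(2 \pi e \sigma^2)$ bits. The Python sketch below (purely illustrative; the integration limits and step count are arbitrary choices) recovers that value by numerically integrating the definition of $h(X)$:

    import math

    def gaussian_pdf(x, sigma):
        return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

    def differential_entropy_gaussian(sigma, lo=-40.0, hi=40.0, n=100_000):
        """Numerically integrate -f(x) log2 f(x) dx for a zero-mean Gaussian (midpoint rule)."""
        dx = (hi - lo) / n
        total = 0.0
        for i in range(n):
            x = lo + (i + 0.5) * dx
            f = gaussian_pdf(x, sigma)
            if f > 0:
                total -= f * math.log2(f) * dx
        return total

    sigma = 2.0
    closed_form = 0.5 * math.log2(2 * math.pi * math.e * sigma ** 2)  # known Gaussian result
    print(differential_entropy_gaussian(sigma), closed_form)          # both ~3.047 bits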

Channel capacity

Let us return for the time being to our consideration of the communications process over a discrete channel. At this time it will be helpful to have a simple model of the process:

                        o---------o
                        |  Noise  |
                        o---------o
                             |
                             V
o-------------o    X    o---------o    Y    o----------o
| Transmitter |-------->| Channel |-------->| Receiver |
o-------------o         o---------o         o----------o

Here X represents the space of messages transmitted, and Y the space of messages received during a unit time over our channel. Let $p(y \mid x)$ be the conditional probability distribution function of Y given X. We will consider $p(y \mid x)$ to be an inherent fixed property of our communications channel (representing the nature of the noise of our channel). Then the joint distribution of X and Y is completely determined by our channel and by our choice of $f(x)$, the marginal distribution of messages we choose to send over the channel. Under these constraints, we would like to maximize the amount of information, or the signal, we can communicate over the channel. The appropriate measure for this is the transinformation, and this maximum transinformation is called the channel capacity and is given by:

$C = \max_{f} I(X; Y)$
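As a concrete, textbook-standard instance of this maximization (not discussed further in this article), the binary symmetric channel with crossover probability p has capacity $C = 1 - H_2(p)$, attained by a uniform input distribution. A short Python sketch:

    import math

    def H2(p):
        """Binary entropy function, in bits."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def bsc_capacity(p):
        """Capacity of a binary symmetric channel with crossover probability p.
        The maximizing input distribution is uniform, giving C = 1 - H2(p)."""
        return 1.0 - H2(p)

    for p in (0.0, 0.01, 0.11, 0.5):
        print(p, bsc_capacity(p))   # 1.0, ~0.919, ~0.5, 0.0 bits per channel use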

Source theory

Any process that generates successive messages can be considered a source of information. Sources can be classified in order of increasing generality as memoryless, ergodic, stationary, and stochastic (with each class strictly containing the previous one). The term "memoryless" as used here has a slightly different meaning than it normally does in probability theory. Here a memoryless source is defined as one that generates successive messages independently of one another and with a fixed probability distribution. (However, the position of the first occurrence of a particular message or symbol in a sequence generated by such a source is indeed a memoryless random variable in the probabilist's sense.) The other terms have fairly standard definitions and are well studied in their own right outside information theory.

Rate

The rate of a source of information is (in the most general case)

$r = \lim_{n \to \infty} H(X_n \mid X_{n-1}, X_{n-2}, \ldots, X_1),$

the expected, or average, conditional entropy per message (i.e. per unit time) given all the previous messages generated. It is common in information theory to speak of the "rate" or "entropy" of a language. This is appropriate, for example, when the source of information is English prose. The rate of a memoryless source is simply $H(X)$, since by definition there is no interdependence of the successive messages of a memoryless source. The rate of a source of information is related to its redundancy and how well it can be compressed.

Fundamental theorem

See main article: Noisy channel coding theorem.

Statement (noisy-channel coding theorem)

1. For every discrete memoryless channel, the channel capacity

$C = \max_{P_X} I(X; Y)$

has the following property. For any ε > 0 and R < C, for large enough N, there exists a code of length N and rate ≥ R and a decoding algorithm, such that the maximal probability of block error is ≤ ε.

2. If a probability of bit error pb is acceptable, rates up to R(pb) are achievable, where

$R(p_b) = \frac{C}{1 - H_2(p_b)}$

and $H_2$ is the binary entropy function.

3. For any pb, rates greater than R(pb) are not achievable.

(MacKay (2003), p. 162; cf Gallager (1968), ch.5; Cover and Thomas (1991), p. 198; Shannon (1948) thm. 11)
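The theorem is an existence result and does not exhibit good codes. As a deliberately naive contrast, the following Python sketch (an assumed toy example, not taken from the sources cited above) simulates a rate-1/3 repetition code on a binary symmetric channel with crossover probability 0.1, whose capacity is about 0.53 bits per use; the repetition code buys a lower bit-error rate only by sacrificing most of the rate that the theorem says is achievable:

    import random

    def bsc(bits, p, rng):
        """Pass bits through a binary symmetric channel with crossover probability p."""
        return [b ^ (rng.random() < p) for b in bits]

    def repeat3_encode(bits):
        return [b for b in bits for _ in range(3)]

    def repeat3_decode(bits):
        return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

    rng = random.Random(0)
    p = 0.1
    message = [rng.randint(0, 1) for _ in range(100_000)]

    uncoded_errors = sum(a != b for a, b in zip(message, bsc(message, p, rng)))
    coded_errors = sum(a != b for a, b in
                       zip(message, repeat3_decode(bsc(repeat3_encode(message), p, rng))))

    print(uncoded_errors / len(message))  # ~0.10 bit-error rate at rate 1
    print(coded_errors / len(message))    # ~0.028 bit-error rate, but only at rate 1/3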

Channel capacity of particular model channels

  • A continuous-time analog communications channel subject to Gaussian noise — see Shannon-Hartley theorem.

Related concepts

Measure theory

Here is an interesting and illuminating connection between information theory and measure theory:

If to arbitrary discrete random variables X and Y we associate the existence of sets $\tilde{X}$ and $\tilde{Y}$, somehow representing the information borne by X and Y, respectively, such that:

  • $\mu(\tilde{X} \cap \tilde{Y}) = 0$ whenever X and Y are independent, and
  • $\tilde{X} = \tilde{Y}$ whenever X and Y are such that either one is completely determined by the other (i.e. by a bijection);

where $\mu$ is a measure over these sets, and we set:

$H(X) = \mu(\tilde{X}),$
$H(Y) = \mu(\tilde{Y}),$
$H(X, Y) = \mu(\tilde{X} \cup \tilde{Y}),$
$H(X \mid Y) = \mu(\tilde{X} \setminus \tilde{Y}),$
$I(X; Y) = \mu(\tilde{X} \cap \tilde{Y});$

we find that Shannon's "measure" of information content satisfies all the postulates and basic properties of a formal measure over sets. This can be a handy mnemonic device in some situations. Certain extensions to the definitions of Shannon's basic measures of information are necessary to deal with the σ-algebra generated by the sets that would be associated to three or more arbitrary random variables. (See Reza pp. 106-108 for an informal but rather complete discussion.) Namely, $H(X, Y, Z)$ needs to be defined in the obvious way as the entropy of a joint distribution, and an extended transinformation $I(X; Y; Z)$ defined in a suitable manner (left as an exercise for the ambitious reader) so that we can set:

$H(X, Y, Z) = \mu(\tilde{X} \cup \tilde{Y} \cup \tilde{Z}),$
$I(X; Y; Z) = \mu(\tilde{X} \cap \tilde{Y} \cap \tilde{Z});$

in order to define the (signed) measure over the whole σ-algebra. (It is interesting to note that the mutual information of three or more random variables can be negative as well as positive: let X and Y be two independent fair coin flips, and let Z be their exclusive or. Then $I(X; Y; Z) = -1$ bit.)
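The XOR example can be verified numerically. This Python sketch (an illustration assumed here, using the inclusion-exclusion expression for the triple mutual information that the signed measure $\mu$ suggests) reproduces the value of -1 bit:

    import math
    from itertools import product

    def H(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Joint distribution of (X, Y, Z) where X, Y are independent fair coins and Z = X xor Y.
    joint = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}

    def marginal_entropy(indices):
        """Entropy of the marginal distribution over the given coordinate indices."""
        marg = {}
        for outcome, p in joint.items():
            key = tuple(outcome[i] for i in indices)
            marg[key] = marg.get(key, 0.0) + p
        return H(marg.values())

    # Inclusion-exclusion form of the triple mutual information I(X;Y;Z).
    I_xyz = (marginal_entropy((0,)) + marginal_entropy((1,)) + marginal_entropy((2,))
             - marginal_entropy((0, 1)) - marginal_entropy((0, 2)) - marginal_entropy((1, 2))
             + marginal_entropy((0, 1, 2)))

    print(I_xyz)  # -1.0 bit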

This connection is important for two reasons: first, it reiterates and clarifies the fundamental properties of these basic concepts of information theory, and second, it justifies, in a certain formal sense, the practice of calling Shannon's entropy a "measure" of information.

Kolmogorov complexity

A. N. Kolmogorov introduced an alternative information measure that is based on the length of the shortest algorithm to produce a message, called the Kolmogorov complexity. The practical usefulness of the Kolmogorov complexity, however, is somewhat limited by two issues:

  • Due to the halting problem, it is in general not possible to actually calculate the Kolmogorov complexity of a given message.
  • Due to an arbitrary choice of programming language involved, the Kolmogorov complexity is only defined up to an arbitrary additive constant.

These limitations tend to restrict the usefulness of the Kolmogorov complexity to proving asymptotic bounds, which is really more the domain of complexity theory. Nevertheless it is in a certain sense the "best" possible measure of the information content of a message, and it has the advantage of being independent of any prior probability distribution on the messages.

Applications

Coding theory

Coding theory is the most important and direct application of information theory. It can be subdivided into data compression theory and error correction theory. Using a statistical description for data, information theory quantifies the number of bits needed to describe the data. There are two formulations for the compression problem — in lossless data compression the data must be reconstructed exactly, whereas lossy data compression examines how many bits are needed to reconstruct the data to within a specified fidelity level. This fidelity level is measured by a function called a distortion function. In information theory this is called rate distortion theory. Both lossless and lossy source codes produce bits at the output which can be used as the inputs to the channel codes mentioned above.

The idea is to first compress the data, i.e. remove as much of its redundancy as possible, and then add just the right kind of redundancy (i.e. error correction) needed to transmit the data efficiently and faithfully across a noisy channel.
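A hedged Python sketch of this compress-then-protect pipeline follows; zlib stands in for the source code and a naive repetition code for the channel code, both chosen only for brevity rather than as what practical systems use:

    import random
    import zlib

    def repeat3(bits):
        return [b for b in bits for _ in range(3)]

    def majority3(bits):
        return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

    def bsc(bits, p, rng):
        return [b ^ (rng.random() < p) for b in bits]

    rng = random.Random(1)
    text = b"the quick brown fox jumps over the lazy dog " * 50

    # Step 1: source coding -- squeeze out redundancy with a general-purpose compressor.
    compressed = zlib.compress(text)
    bits = [(byte >> k) & 1 for byte in compressed for k in range(8)]

    # Step 2: channel coding -- add controlled redundancy before the noisy channel.
    received = majority3(bsc(repeat3(bits), p=0.01, rng=rng))

    recovered = bytes(sum(b << k for k, b in enumerate(received[i:i + 8]))
                      for i in range(0, len(received), 8))
    if recovered == compressed:
        print("recovered exactly; decompressed text matches:", zlib.decompress(recovered) == text)
    else:
        print("residual bit errors survived the repetition code")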

This division of coding theory into compression and transmission is justified by the information transmission theorems, or source-channel separation theorems, which establish the use of bits as the universal currency for information in many contexts. However, these theorems only hold in the situation where one transmitting user wishes to communicate to one receiving user. In scenarios with more than one transmitter (the multiple-access channel), more than one receiver (the broadcast channel), intermediary "helpers" (the relay channel), or more general networks, compression followed by transmission may no longer be optimal. Network information theory refers to these multi-agent communication models.


Detection and estimation theory

Further information: Detection theory and Estimation theory

Gambling

Information theory is also important in gambling and (with some ethical reservations) investing. An important but simple relation exists between the amount of side information a gambler obtains and the expected exponential growth of his capital (Kelly). The so-called equation of "ill-gotten gains" can be expressed in logarithmic form as

$\log_2 K_t = \log_2 K_0 + \sum_{i=1}^{t} H_i$

for an optimal betting strategy, where $K_0$ is the initial capital, $K_t$ is the capital after the tth bet, and $H_i$ is the amount of side information obtained concerning the ith bet (in particular, the mutual information relative to the outcome of each bettable event). This equation applies in the absence of any transaction costs or minimum bets. When these constraints apply (as they invariably do in real life), another important gambling concept comes into play: the gambler (or unscrupulous investor) must face a certain probability of ultimate ruin. Note that even food, clothing, and shelter can be considered fixed transaction costs and thus contribute to the gambler's probability of ultimate ruin. That is why food is so cheap at casinos.
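A hedged Python sketch of the growth relation, using an assumed toy setup of even-odds bets on a fair coin with a noisy tip of accuracy q as the side information, shows the empirical doubling rate matching the mutual information carried by the tip:

    import math
    import random

    def H2(p):
        return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    rng = random.Random(42)
    q = 0.75                      # the tip predicts the fair-coin outcome with accuracy q
    side_info = 1 - H2(q)         # mutual information between tip and outcome, bits per bet
    kelly_fraction = 2 * q - 1    # optimal fraction of capital to bet at even odds

    n_bets = 10_000
    log2_capital = 0.0            # track log2 of capital to avoid floating-point overflow
    for _ in range(n_bets):
        outcome = rng.random() < 0.5                       # fair coin
        tip = outcome if rng.random() < q else (not outcome)
        win = (tip == outcome)                             # always bet on what the tip says
        log2_capital += math.log2(1 + kelly_fraction) if win else math.log2(1 - kelly_fraction)

    print(log2_capital / n_bets)  # empirical growth rate per bet, in bits
    print(side_info)              # ~0.189 bits: the value predicted by the equation above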

This equation was the first application of Shannon's theory of information outside its prevailing paradigm of data communications (Pierce). No one knows how much lucre has been gained by the use of this notorious equation since its discovery a half century ago.

The ill-gotten gains equation actually underlies much if not all of mathematical finance, although certainly, when there is money to be made, and eyebrows not to be raised, extreme discretion is employed in its use.




History

The decisive event which established the subject of information theory, and brought it to immediate worldwide attention, was the publication of Claude E. Shannon's (1916–2001) classic paper "A Mathematical Theory of Communication" in the Bell System Technical Journal in July and October of 1948.

In this revolutionary and groundbreaking paper, work which Shannon had substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process, which underlies information theory; and with it the ideas of the information entropy and redundancy of a source, and their relevance through the source coding theorem; the mutual information, and the channel capacity of a noisy channel, as underwritten by the promise of perfect loss-free communication given by the noisy-channel coding theorem; the practical result of the Shannon-Hartley law for the channel capacity of a Gaussian channel; and of course the bit, a new common currency of information.

Before 1948

Quantitative ideas of information

The most direct antecedents of Shannon's work were two papers published in the 1920s by Harry Nyquist and Ralph Hartley, who were both still very much research leaders at Bell Labs when Shannon arrived there in the early 1940s.

Nyquist's 1924 paper, "Certain Factors Affecting Telegraph Speed", is mostly concerned with some detailed engineering aspects of telegraph signals. But a more theoretical section discusses quantifying "intelligence" and the "line speed" at which it can be transmitted by a communication system, giving the relation

$W = K \log m$

where W is the speed of transmission of intelligence, m is the number of different voltage levels to choose from at each time step, and K is a constant.

Hartley's 1928 paper, called simply "Transmission of Information", went further by introducing the word information, and making explicit the idea that information in this context was a measurable quantity, reflecting only that the receiver was able to distinguish that one sequence of symbols had been sent rather than any other -- quite regardless of any associated meaning or other psychological or semantic aspect the symbols might represent. This amount of information he quantified as

$H = \log S^n = n \log S$

where S was the number of possible symbols, and n the number of symbols in a transmission. The natural unit of information was therefore the decimal digit, much later renamed the hartley in his honour as a unit, scale, or measure of information. The Hartley information, H0, is also still very much used as a quantity for the logarithm of the total number of possibilities.

A similar unit of log10 probability, the ban, and its derived unit the deciban (one tenth of a ban), were introduced by Alan Turing in 1940 as part of the statistical analysis of the breaking of the German second world war Enigma cyphers. The decibannage represented the reduction in (the logarithm of) the total number of possibilities (similar to the change in the Hartley information); and also the log-likelihood ratio (or change in the weight of evidence) that could be inferred for one hypothesis over another from a set of observations. The expected change in the weight of evidence is equivalent to what was later called the Kullback discrimination information.

But underlying this notion was still the idea of equal a priori probabilities, rather than the information content of events of unequal probability; nor was there yet any underlying picture of the questions involved in communicating such varied outcomes.

Shannon's work drew directly on these earlier publications by Nyquist and Hartley. At the beginning of his paper, Shannon asserted that "The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point."

References

The classic paper

  • Claude E. Shannon, "A Mathematical Theory of Communication," Bell System Technical Journal, Vol. 27, pp. 379-423 and 623-656, July and October 1948

Other journal articles

  • R.V.L. Hartley, "Transmission of Information," Bell System Technical Journal, July 1928
  • J. L. Kelly, Jr., "New Interpretation of Information Rate," Bell System Technical Journal, Vol. 35, July 1956, pp. 917-26
  • R. Landauer, "Information is Physical," Proc. Workshop on Physics and Computation PhysComp'92 (IEEE Comp. Sci. Press, Los Alamitos, 1993) pp. 1-4
  • R. Landauer, "Irreversibility and Heat Generation in the Computing Process" IBM J. Res. Develop. Vol. 5, No. 3, 1961

Textbooks on information theory

  • Claude E. Shannon, Warren Weaver. The Mathematical Theory of Communication. Univ of Illinois Press, 1963. ISBN 0252725484
  • Robert B. Ash. Information Theory. New York: Dover 1990. ISBN 0486665216
  • Thomas M. Cover, Joy A. Thomas. Elements of Information Theory, 2nd Edition. New York: Wiley-Interscience, 2006. ISBN 0471241954
  • Stanford Goldman. Information Theory. Mineola, N.Y.: Dover 2005 ISBN 0486442713
  • Fazlollah M. Reza. An Introduction to Information Theory. New York: Dover 1994. ISBN 0486682102
  • David J. C. MacKay. Information Theory, Inference, and Learning Algorithms Cambridge: Cambridge University Press, 2003. ISBN 0521642981

Other books

  • James Bamford, The Puzzle Palace, Penguin Books, 1983. ISBN 0140067485
  • Leon Brillouin, Science and Information Theory, Mineola, N.Y.: Dover, [1956, 1962] 2004. ISBN 0486439186
  • W. B. Johnson and J. Lindenstrauss, editors, Handbook of the Geometry of Banach Spaces, Vol. 1. Amsterdam: Elsevier 2001. ISBN 0444828427
  • A. I. Khinchin, Mathematical Foundations of Information Theory, New York: Dover, 1957. ISBN 0486604349
  • H. S. Leff and A. F. Rex, Editors, Maxwell's Demon: Entropy, Information, Computing, Princeton University Press, Princeton, NJ (1990). ISBN 069108727X

See also

  • List of important publications

External links

  • Gibbs, M., "Quantum Information Theory", Eprint
  • Schneider, T., "Information Theory Primer", Eprint


This page uses Creative Commons Licensed content from Wikipedia (view authors).