
Hebbian learning is a hypothesis for how neuronal connections are reinforced in mammalian brains; it is also a technique for weight selection in artificial neural networks.

The idea is named after Donald Hebb, who presented it in his 1949 book The Organization of Behavior (and thereby inspired research into neural networks). His idea specified how much the strength of a connection between two neurons should be altered according to how they fire at a given time. Hebb's original principle was essentially that if one neuron repeatedly stimulates another, then the strength of the connection between the two neurons should be increased.

See also Hebbian theory.

Principles of Hebbian learning

From the point of view of artificial neurons and artificial neural networks, Hebb's principle can be described as a method of determining how to alter the weights between model neurons. The weight between two neurons increases if the two neurons activate simultaneously, and is reduced if they activate separately. Nodes which tend to be either both positive or both negative at the same time develop strong positive weights, while those which tend to be opposite develop strong negative weights. The principle is sometimes stated more simply as "neurons that fire together, wire together."
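To make the rule concrete, here is a minimal sketch in Python (not from the original article; the learning rate eta, the bipolar +1/-1 activations, and all names are illustrative assumptions):

```python
import numpy as np

def hebbian_update(w, x, eta=0.1):
    """Apply one Hebbian step: strengthen weights between co-active units.

    w   : (n, n) weight matrix; w[i, j] is the connection from unit j to unit i
    x   : (n,) activation vector with bipolar values (+1 or -1)
    eta : learning rate (an illustrative choice, not part of Hebb's principle)
    """
    # The outer product is positive where two units agree in sign and negative
    # where they differ, so co-active units are strengthened and opposites weakened.
    return w + eta * np.outer(x, x)

w = np.zeros((3, 3))
w = hebbian_update(w, np.array([1.0, 1.0, -1.0]))
print(w)  # units 0 and 1 gain a positive mutual weight; both gain negative weights to unit 2
```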

This original principle is perhaps the simplest form of weight selection. While this means it can be coded into a computer program and used to update network weights relatively easily, it also limits the range of applications of Hebbian learning. Today, the term Hebbian learning generally refers to some form of mathematical abstraction of the original principle proposed by Hebb. In this sense, Hebbian learning involves adjusting the weights between learning nodes so that each weight better represents the relationship between the nodes. As such, many learning methods can be considered somewhat Hebbian in nature.

The following is one formulaic description of Hebbian learning (note that many other descriptions are possible):

\[ w_{ij} = x_i x_j, \]

where $w_{ij}$ is the weight of the connection from neuron $j$ to neuron $i$ and $x_i$ is the input for neuron $i$. Note that this is pattern learning (weights are updated after every training example). In a Hopfield network, connections $w_{ij}$ are set to zero if $i = j$ (no reflexive connections allowed). With binary neurons (activations of either 0 or 1), connections would be set to 1 if the connected neurons have the same activation for a pattern.
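As a sketch of this pattern-by-pattern rule (assuming bipolar +1/-1 activations, for which $x_i x_j = 1$ exactly when the two neurons have the same activation; the function name is illustrative):

```python
import numpy as np

def pattern_weights(x):
    """Weights from a single training pattern: w_ij = x_i * x_j.

    x : (n,) vector of bipolar activations (+1 or -1)
    """
    w = np.outer(x, x)      # w_ij = x_i * x_j; equals 1 when activations match
    np.fill_diagonal(w, 0)  # Hopfield convention: no reflexive connections (w_ii = 0)
    return w

print(pattern_weights(np.array([1.0, -1.0, 1.0])))
```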

Another formulaic description is:

\[ w_{ij} = \frac{1}{n} \sum_{k=1}^{p} x_i^k x_j^k, \]

where $w_{ij}$ is the weight of the connection from neuron $j$ to neuron $i$, $n$ is the dimension of the input vector, $p$ is the number of training patterns, and $x_i^k$ is the $k$th input for neuron $i$. This is learning by epoch (weights are updated after all the training examples are presented). Again, in a Hopfield network, connections $w_{ij}$ are set to zero if $i = j$ (no reflexive connections).
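A sketch of this epoch rule under the same assumptions (bipolar patterns; names are illustrative):

```python
import numpy as np

def epoch_weights(X):
    """Learning by epoch: w_ij = (1/n) * sum over k of x_i^k * x_j^k.

    X : (p, n) array with one bipolar (+1/-1) training pattern per row
    """
    p, n = X.shape
    w = X.T @ X / n          # (X.T @ X)[i, j] = sum over k of x_i^k * x_j^k
    np.fill_diagonal(w, 0)   # Hopfield convention: no reflexive connections
    return w

# Store two 4-unit patterns; units that co-activate across patterns get positive weights.
patterns = np.array([[1, 1, -1, -1],
                     [1, -1, 1, -1]], dtype=float)
print(epoch_weights(patterns))
```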

A variation of Hebbian learning that takes into account phenomena such as blocking, along with many other neural learning phenomena, is the mathematical model of Harry Klopf, formerly of the Air Force Office of Scientific Research and presently with Wright-Patterson Air Force Base. Klopf's model has been regarded as a considerably more accurate model of Hebbian learning because it reproduces so many biological phenomena; it is also simple to implement.

Hebbian learning in biological systems

Work in the laboratory of Eric Kandel has provided evidence for the involvement of Hebbian learning mechanisms at synapses in the marine invertebrate Aplysia californica.

Experiments on Hebbian synapse-modification mechanisms at the central nervous system synapses of vertebrates are much more difficult to control than experiments with the relatively simple peripheral nervous system synapses studied in marine invertebrates. Much of the work on long-lasting synaptic changes between vertebrate neurons (such as long-term potentiation) involves non-physiological experimental stimulation of brain cells. However, some of the physiologically relevant synapse-modification mechanisms that have been studied in vertebrate brains do appear to be examples of Hebbian processes. One such study reviews results from experiments indicating that long-lasting changes in synaptic strength can be induced by physiologically relevant synaptic activity, working through both Hebbian and non-Hebbian mechanisms.


This page uses Creative Commons Licensed content from Wikipedia.