Hebbian Theory
1. Hebbian theory
Hebbian theory is a neuroscientific theory claiming that an increase in synaptic efficacy arises from a presynaptic cell's repeated and persistent stimulation of a postsynaptic cell.
The theory is also called Hebb's rule, Hebb's postulate, and cell assembly theory.
The theory is often summarized as "Cells that fire together wire together."[2] However, Hebb emphasized that cell A needs to "take part in firing" cell B, and such causality can occur only if cell A fires just before, not at the same time as, cell B.
The theory attempts to explain associative or Hebbian learning, in which simultaneous activation of cells leads to pronounced increases in synaptic strength between those cells.
1.1. Hebbian engrams and cell assembly theory
1.2. Principles
From the point of view of artificial neurons and artificial neural networks, Hebb's principle can be described as a method of determining how to alter the weights between model neurons.
The weight between two neurons increases if the two neurons activate simultaneously, and reduces if they activate separately.
An update rule of this kind is used, for example, in the Hopfield network, whose connection weights are set by a Hebbian prescription over the stored patterns.
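The following is a minimal sketch of the weight-update principle described above: the connection between two model neurons is strengthened when they are active together. The network size, learning rate, and function names are illustrative assumptions, not part of the original text.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_outputs = 4, 3
learning_rate = 0.01
# Small random initial weights between presynaptic and postsynaptic model neurons.
weights = rng.normal(scale=0.1, size=(n_outputs, n_inputs))

def hebbian_step(weights, x, learning_rate):
    """Strengthen w_ij when presynaptic x_j and postsynaptic y_i are co-active."""
    y = weights @ x                            # postsynaptic activations
    weights += learning_rate * np.outer(y, x)  # delta w_ij = eta * y_i * x_j
    return weights

x = rng.normal(size=n_inputs)                  # one presynaptic activity pattern
weights = hebbian_step(weights, x, learning_rate)
```

Because the update is a product of pre- and postsynaptic activity, co-active neurons strengthen their connection, while a pairing in which one side is silent leaves the weight essentially unchanged.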
1.3. Relationship to unsupervised learning, stability, and generalization
This simple form of Hebb's rule is intrinsically unstable: in any network with a dominant signal, the synaptic weights will increase or decrease exponentially.
Intuitively, this is because whenever the presynaptic neuron excites the postsynaptic neuron, the weight between them is reinforced, causing an even stronger excitation in the future, and so forth, in a self-reinforcing way. One may think a solution is to limit the firing rate of the postsynaptic neuron by adding a non-linear, saturating response function $f$, but in fact, it can be shown that for any neuron model, Hebb's rule is unstable.[6]
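A brief numerical sketch of this runaway behaviour, assuming a simple linear neuron $y = \mathbf{w}\cdot\mathbf{x}$ and the plain update $\Delta\mathbf{w} = \eta\, y\, \mathbf{x}$; the input distribution and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
eta = 0.05
w = rng.normal(scale=0.1, size=2)

for step in range(200):
    # Correlated two-dimensional input: one direction dominates the statistics.
    x = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]])
    y = w @ x                # postsynaptic response
    w += eta * y * x         # plain Hebbian update, no normalization
    # Each update reinforces the very response that produced it,
    # so the weight norm grows roughly exponentially.

print(np.linalg.norm(w))     # large and still growing
```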
Because $\mathbf{c}^{*}$ is the eigenvector corresponding to the largest eigenvalue of the correlation matrix of the inputs $x_i$, this corresponds exactly to computing the first principal component of the input.
This mechanism can be extended to performing a full PCA (principal component analysis) of the input by adding further postsynaptic neurons, provided the postsynaptic neurons are prevented from all picking up the same principal component, for example by adding lateral inhibition in the postsynaptic layer.
We have thus connected Hebbian learning to PCA, which is an elementary form of unsupervised learning, in the sense that the network can pick up useful statistical aspects of the input, and "describe" them in a distilled way in its output.[8]
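As an illustration of this connection, the sketch below uses Oja's rule, a standard normalized variant of Hebbian learning (the excerpt above does not specify which normalization is meant, so this choice is an assumption): the weight vector stays bounded and converges, up to sign, to the leading eigenvector of the input correlation matrix, i.e. the first principal component. The data distribution and learning rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
cov = np.array([[3.0, 1.0],
                [1.0, 1.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=5000)

eta = 0.01
w = rng.normal(scale=0.1, size=2)
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)           # Oja's rule: Hebbian term minus a decay
                                         # that keeps the weight norm near 1

leading = np.linalg.eigh(cov)[1][:, -1]  # true first principal direction
print(w / np.linalg.norm(w), leading)    # should agree up to sign
```

The subtracted term $\eta\, y^{2}\mathbf{w}$ is what removes the runaway growth of the plain rule while preserving its correlation-seeking behaviour, which is why the learned weight vector ends up "describing" the dominant statistical direction of the input.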
1.4. Exceptions
One of the best-documented of these exceptions concerns how synaptic modification may occur not only between the activated neurons A and B, but also in neighboring neurons.
The compound most commonly identified as fulfilling this retrograde transmitter role is nitric oxide, which, due to its high solubility and diffusibility, often exerts effects on nearby neurons.[11] This type of diffuse synaptic modification, known as volume learning, counters, or at least supplements, the traditional Hebbian model.[12]