
Date: [2020-11-15 Sun]

Hopfield network


1. Hopfield network

A Hopfield network (or Ising model of a neural network, or Ising–Lenz–Little model) is a form of recurrent artificial neural network popularized by John Hopfield in 1982, but described earlier by Little in 1974, based on Ernst Ising's work with Wilhelm Lenz.[1][2]

Hopfield networks also provide a model for understanding human memory.[3][4]

1.1. Origins

1.2. Structure

1.3. Updating

1.3.1. Neurons "attract or repel each other" in state space

1.4. Working principles of discrete and continuous Hopfield networks

1.5. Energy

1.6. Hopfield network in optimization

1.7. Initialization and running

1.8. Training

1.8.1. Learning rules

1.8.2. Hebbian learning rule for Hopfield networks

1.8.3. The Storkey learning rule

1.9. Spurious patterns

1.10. Capacity

The network capacity of the Hopfield model is determined by the number of neurons and the connections within a given network.

Furthermore, it was shown that the ratio of recallable vectors to nodes is about 0.138 (approximately 138 vectors can be recalled from storage for every 1000 nodes) (Hertz et al., 1991). Therefore, many recall errors will occur if one tries to store a large number of vectors. When the Hopfield model does not recall the right pattern, it is possible that an intrusion has taken place: semantically related items tend to confuse the individual, and the wrong pattern is recalled.

Perfect recall and a higher capacity, >0.14 patterns per neuron, can be loaded into the network using the Storkey learning method, and also with ETAM.
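
To make the Storkey method concrete, here is a minimal sketch of one weight update in Python with NumPy. The function name and the vectorized form are assumptions of this sketch; the update itself follows the commonly cited Storkey (1997) rule, in which each new pattern increments the weights through a local field that excludes the two units being connected.

#+begin_src python
import numpy as np

def storkey_update(W, xi):
    """One Storkey-rule increment of the weight matrix W for pattern xi.

    W  : (n, n) symmetric weight matrix with zero diagonal
    xi : (n,) bipolar pattern of +1/-1 values

    Local field: h_ij = sum over k != i, j of w_ik * xi_k
    Update:      dw_ij = (xi_i*xi_j - xi_i*h_ji - h_ij*xi_j) / n
    """
    n = len(xi)
    s = W @ xi                                        # sum_k w_ik xi_k, including k = i, j
    H = s[:, None] - (np.diag(W) * xi)[:, None] - W * xi[None, :]
    W = W + (np.outer(xi, xi) - xi[:, None] * H.T - H * xi[None, :]) / n
    np.fill_diagonal(W, 0.0)                          # keep w_ii = 0, as is conventional
    return W
#+end_src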

The storage capacity can be given as $C \cong \frac{n}{2\log_{2}n}$, where $n$ is the number of neurons in the net, or approximately $C \approx 0.15n$.[18]
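
As a quick worked comparison of the two estimates (the numbers below are simply these formulas evaluated, not figures from the cited sources):

#+begin_src python
import math

# Evaluate the two capacity estimates quoted above for a few network sizes n.
for n in (100, 1000, 10000):
    c_log = n / (2 * math.log2(n))   # C ≅ n / (2 log2 n)
    c_lin = 0.138 * n                # ≈ 0.138 patterns per neuron (Hertz et al., 1991)
    print(f"n = {n:>6}:  n/(2 log2 n) ≈ {c_log:7.1f}   0.138 n ≈ {c_lin:7.1f}")
#+end_src

Roughly speaking, the two figures differ because they answer different questions: the logarithmic bound concerns essentially error-free retrieval of every stored pattern, while the ~0.14n figure tolerates a small fraction of bit errors in the recalled patterns.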

1.11. Human memory

The Hopfield model accounts for associative memory through the incorporation of memory vectors.

It is important to note that Hopfield's network model uses the same learning rule as Hebb's (1949) learning rule, which proposed that learning occurs through the strengthening of weights whenever the connected units are active together.
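
A minimal sketch of that rule in the Hopfield setting (the function name and conventions here are assumptions of the sketch): connections between units that are co-active across the stored patterns are strengthened via a sum of outer products.

#+begin_src python
import numpy as np

def hebbian_weights(patterns):
    """Hopfield weight matrix from bipolar patterns via the Hebbian rule.

    patterns : (p, n) array of +1/-1 values, one stored pattern per row.
    w_ij = (1/n) * sum over mu of xi_i^mu * xi_j^mu
    """
    p, n = patterns.shape
    W = patterns.T @ patterns / n    # sum of outer products, scaled by 1/n
    np.fill_diagonal(W, 0.0)         # no self-connections
    return W
#+end_src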

Rizzuto and Kahana (2001) were able to show that the neural network model can account for repetition on recall accuracy by incorporating a probabilistic-learning algorithm. During the retrieval process, no learning occurs. As a result, the weights of the network remain fixed, showing that the model is able to switch from a learning stage to a recall stage. By adding contextual drift they were able to show the rapid forgetting that occurs in a Hopfield model during a cued-recall task. The entire network contributes to the change in the activation of any single node.
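
Below is a minimal cued-recall sketch consistent with that description; the names and details are illustrative assumptions, and hebbian_weights is repeated from the sketch above so the snippet runs standalone. The weights built during learning stay fixed, and retrieval only updates unit states asynchronously until the network settles.

#+begin_src python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian weight matrix, repeated from the sketch above."""
    W = patterns.T @ patterns / patterns.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, max_sweeps=100, seed=0):
    """Cued recall with fixed weights: asynchronous updates to a fixed point.

    No learning occurs here -- W is never modified, mirroring the switch
    from a learning stage to a recall stage described above.
    """
    rng = np.random.default_rng(seed)
    s = probe.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(s)):        # random asynchronous order
            new = 1 if W[i] @ s >= 0 else -1     # sign of the local field
            if new != s[i]:
                s[i], changed = new, True
        if not changed:                          # fixed point reached
            break
    return s

# Illustrative use: store three random patterns, then cue with a noisy copy.
rng = np.random.default_rng(1)
patterns = rng.choice([-1, 1], size=(3, 50))
W = hebbian_weights(patterns)
noisy = patterns[0] * rng.choice([1, -1], size=50, p=[0.9, 0.1])
print(np.array_equal(recall(W, noisy), patterns[0]))   # often True at this low load
#+end_src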


References

