Hopfield Network (HN)
- Neural Network Zoo | Fjodor van Veen
- Hopfield Networks | Chris Nicholson, A.I. Wiki (Pathmind)
- Recurrent Neural Network (RNN) variants: Attention Mechanism, Transformer, Generative Pre-trained Transformer (GPT), GAN, BERT
A Hopfield Network is a form of recurrent artificial neural network popularized by John Hopfield in 1982, but described earlier by Little in 1974. Hopfield networks serve as content-addressable ("associative") memory systems with binary threshold nodes. They are guaranteed to converge to a local minimum, and may therefore converge to a false pattern (a wrong local minimum) rather than the stored pattern (the expected local minimum). Wikipedia
In a Hopfield Network (HN), the weight from one node to another and the weight back from the latter to the former are the same (symmetric). The HN is fully connected, so every neuron's output is an input to all the other neurons. Another feature is that nodes update in a binary way. Together these properties give Hopfield nets their defining guarantee: they always converge to an attractor (a stable state). oba2311
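The convergence guarantee can be made concrete with the network's energy function, which single-node updates can never increase. A standard formulation (notation assumed here, not drawn from the sources above), with states $s_i \in \{-1, +1\}$, symmetric weights $w_{ij} = w_{ji}$, no self-connections ($w_{ii} = 0$), and thresholds $\theta_i$:

```latex
E = -\frac{1}{2}\sum_{i \neq j} w_{ij}\, s_i s_j + \sum_i \theta_i s_i,
\qquad
s_i \leftarrow \operatorname{sgn}\Big(\sum_j w_{ij}\, s_j - \theta_i\Big)
```

Each asynchronous update changes $E$ by a non-positive amount, and $E$ is bounded below, so the dynamics must eventually settle in a local minimum of $E$: the attractor.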
Every neuron is connected to every other neuron; it is a completely entangled plate of spaghetti, as all the nodes function as everything. Each node is input before training, then hidden during training, and output afterwards.

The networks are trained by setting the value of the neurons to the desired pattern, after which the weights can be computed. The weights do not change after this. Once trained for one or more patterns, the network will always converge to one of the learned patterns, because the network is only stable in those states. Note that it does not always converge to the desired state (it's not a magic black box, sadly). It stabilizes in part due to the total "energy" or "temperature" of the network being reduced incrementally during training.

Each neuron has an activation threshold which scales with this temperature; if the sum of its inputs surpasses the threshold, the neuron takes one of two states (usually -1 or 1, sometimes 0 or 1). Updating the network can be done synchronously or, more commonly, one by one. If updated one by one, a fair random sequence is created to organize the order in which cells update (fair random meaning all n options occur exactly once every n items). This makes it possible to tell when the network is stable (done converging): once every cell has been updated and none of them changed, the network is stable (annealed).

These networks are often called associative memory because they converge to the stored state most similar to the input; just as humans, seeing half a table, can imagine the other half, this network will converge to a table if presented with half noise and half a table. Hopfield, John J. "Neural networks and physical systems with emergent collective computational abilities." Proceedings of the National Academy of Sciences 79.8 (1982): 2554-2558.
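As a concrete illustration of the mechanics described above, here is a minimal Python sketch. The Hebbian outer-product weight rule and the per-sweep random update order are standard choices assumed for illustration, not taken from the sources cited here:

```python
import numpy as np

class HopfieldNetwork:
    def __init__(self, n):
        self.n = n
        self.w = np.zeros((n, n))

    def train(self, patterns):
        # Set the neurons to each desired pattern (+/-1) and accumulate the
        # outer product; weights are symmetric and never change afterwards.
        for p in patterns:
            self.w += np.outer(p, p)
        np.fill_diagonal(self.w, 0)  # no self-connections
        self.w /= len(patterns)

    def recall(self, state, max_sweeps=100):
        s = state.copy()
        for _ in range(max_sweeps):
            changed = False
            # "Fair random" asynchronous updating: every cell exactly once per sweep.
            for i in np.random.permutation(self.n):
                new = 1 if self.w[i] @ s >= 0 else -1
                if new != s[i]:
                    s[i] = new
                    changed = True
            if not changed:  # a full sweep with no flips: the network is stable
                return s
        return s

# Store a pattern, then present a noisy version ("half noise, half a table"):
net = HopfieldNetwork(8)
table = np.array([1, -1, 1, -1, 1, -1, 1, -1])
net.train([table])
noisy = table.copy()
noisy[:4] = np.random.choice([-1, 1], size=4)  # corrupt half the pattern
# Usually settles on `table`; note the negated pattern is also a stored attractor.
print(net.recall(noisy))
```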
John J. Hopfield
Polariton
In physics, polaritons /pəˈlærɪtɒnz, poʊ-/ are quasiparticles resulting from strong coupling of electromagnetic waves with an electric or magnetic dipole-carrying excitation. They are an expression of the common quantum phenomenon known as level repulsion, also known as the avoided-crossing principle. Polaritons describe the crossing of the dispersion of light with any interacting resonance. In this sense, polaritons can also be thought of as the new normal modes of a given material or structure, arising from the strong coupling of the bare modes, which are the photon and the dipolar oscillation. The polariton is a bosonic quasiparticle and should not be confused with the polaron (a fermionic one), which is an electron plus an attached phonon cloud. Polariton | Wikipedia
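The "new normal modes arising from strong coupling of the bare modes" can be illustrated with the textbook two-coupled-mode picture (a sketch; the symbols are assumptions for illustration, not from the article): a photon mode of energy $E_{ph}$ and a dipolar excitation of energy $E_{ex}$, coupled with strength $g$:

```latex
H = \begin{pmatrix} E_{ph} & g \\ g & E_{ex} \end{pmatrix},
\qquad
E_{\pm} = \frac{E_{ph} + E_{ex}}{2}
\pm \sqrt{\left(\frac{E_{ph} - E_{ex}}{2}\right)^{2} + g^{2}}
```

At resonance ($E_{ph} = E_{ex}$) the two branches $E_\pm$ are split by $2g$ instead of crossing; this is the level repulsion (avoided crossing) referred to above, and the corresponding eigenvectors are the upper and lower polaritons.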
Kinetic Proofreading
Kinetic proofreading (or kinetic amplification) is a mechanism for error correction in biochemical reactions, proposed independently by John Hopfield (1974) and Jacques Ninio (1975). Kinetic proofreading allows enzymes to discriminate between two possible reaction pathways leading to correct or incorrect products with an accuracy higher than what one would predict based on the difference in the activation energy between these two pathways. Increased specificity is obtained by introducing an irreversible step exiting the pathway, with reaction intermediates leading to incorrect products more likely to prematurely exit the pathway than reaction intermediates leading to the correct product. If the exit step is fast relative to the next step in the pathway, the specificity can be increased by a factor of up to the ratio between the two exit rate constants. (If the next step is fast relative to the exit step, specificity will not be increased because there will not be enough time for exit to occur.) This can be repeated more than once to increase specificity further. Kinetic proofreading | Wikipedia
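The "factor of up to the ratio between the two exit rate constants" can be made concrete with a minimal two-pathway model (the rate symbols are assumptions for illustration, not from the article): let each intermediate either proceed forward at rate $k_f$ or exit the pathway irreversibly, at rate $k_c$ for the correct intermediate and $k_w > k_c$ for the incorrect one.

```latex
% Probability that an intermediate survives the proofreading step:
P_{\text{correct}} = \frac{k_f}{k_f + k_c},
\qquad
P_{\text{wrong}} = \frac{k_f}{k_f + k_w}

% Extra discrimination gained from this one step:
\frac{P_{\text{correct}}}{P_{\text{wrong}}}
= \frac{k_f + k_w}{k_f + k_c}
\;\xrightarrow{\;k_f \,\ll\, k_c,\, k_w\;}\; \frac{k_w}{k_c}
```

When exit is fast relative to the forward step ($k_f \ll k_c, k_w$), the gain approaches $k_w/k_c$, the ratio of the exit rate constants; in the opposite limit both survival probabilities approach 1 and no specificity is gained, matching the parenthetical caveat above. Chaining $n$ such irreversible steps multiplies the gain, up to $(k_w/k_c)^n$.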