Reexamining Hebbian Learning
One of the fundamental ways that neurons compute is thought to be a form of learning called Hebbian learning, in which cells that "fire together, wire together." Other learning mechanisms, such as back-propagation, have proven useful in neural network simulations, but are often considered less biologically plausible (although the evidence for some form of error-driven learning is accumulating). Yet given a few elaborations to the classic view of Hebbian learning, this simple rule can explain a wide variety of cognitive phenomena. These elaborations are the focus of McClelland's chapter in the new volume of the Attention and Performance series, summarized below.
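For readers who think in code, the classic rule can be written in a couple of lines. This Python sketch uses a simple rate-based formulation and made-up names, just to fix ideas; it is not the formulation used in the chapter.

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01):
    """Classic Hebbian rule: cells that fire together, wire together.

    w    : weight matrix, shape (n_post, n_pre)
    pre  : activity of sending ("presynaptic") units, shape (n_pre,)
    post : activity of receiving ("postsynaptic") units, shape (n_post,)
    """
    # Each weight grows in proportion to the coactivity of its two units.
    return w + lr * np.outer(post, pre)
```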
McClelland begins his discussion with long-term potentiation, or LTP, in which the synaptic efficacy of "sending" neurons increases if the "receiving" neuron itself fires. In other words, the receiving neuron becomes more sensitive, or potentiated, to its input. Recent work has also established the importance of the precise timing of this input: LTP is strongest when sending neurons fire just before the receiving neuron. However, there is also "heterosynaptic long-term depression," in which sending neurons that did not fire have their synaptic efficacy decreased. And then there is "vanilla" long-term depression, in which relatively weak activity in the receiving neuron actually results in a decrease, rather than an increase, in synaptic efficacy. Together, these phenomena describe a slightly more complicated, "non-monotonic" Hebbian learning curve: cells that fire together wire together, those that fire just before the others become even more strongly wired together, and if the receiving cell does not fire (or fires only weakly), the sending cells "unwire." (More accurate, but definitely not as catchy.)
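As a rough illustration, here is a minimal Python sketch of such a non-monotonic update rule. The threshold, scaling parameter, and function name are my own assumptions, and spike timing is ignored entirely; the point is only to show how LTP, "vanilla" LTD, and heterosynaptic LTD could coexist in one rule, not to reproduce McClelland's formulation.

```python
import numpy as np

def nonmonotonic_hebbian_update(w, pre, post, lr=0.01,
                                post_threshold=0.5, hetero_scale=0.5):
    """Hypothetical rule combining LTP with both forms of LTD.

    w    : weight matrix, shape (n_post, n_pre)
    pre  : activity of sending units, in [0, 1], shape (n_pre,)
    post : activity of receiving units, in [0, 1], shape (n_post,)
    """
    pre = np.asarray(pre, dtype=float)
    post = np.asarray(post, dtype=float)

    # Non-monotonic postsynaptic term: strong receiver activity -> positive
    # (LTP); weak receiver activity -> negative ("vanilla" LTD).
    post_term = post - post_threshold

    # Active senders are strengthened or weakened according to post_term;
    # silent senders lose efficacy whenever the receiver fires strongly
    # (heterosynaptic LTD).
    dw = lr * np.outer(post_term, pre)
    dw -= lr * hetero_scale * np.outer(np.maximum(post_term, 0.0), 1.0 - pre)
    return w + dw
```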
McClelland next points out that the Hebbian learning rule, as frequently implemented, often seems incapable of learning certain types of problems; however, this perception can be traced to a few characteristics of Hebbian algorithms - some of which accurately characterize human behavior, even if they don't make the ideal learning algorithm for non-linear classifiers in AI applications.
For example, McClelland considers the phenomenon of dystonia. Dystonia occurs when people who repetitively use the same muscle pairings (such as guitarists gripping a pick for hours on end) find that their muscles "enter into a state of chronic activation," perceived as a cramp. This could easily be explained as a result of Hebbian learning, in which actions performed at the same time become progressively more associated, until one has difficulty activating one muscle to the exclusion of the others with which it was repeatedly paired.
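A toy simulation (my own illustration, not from the chapter) makes the point: if two muscle-command units are repeatedly co-activated and the lateral weights between them are updated Hebbianly, then commanding one muscle alone eventually drives the other as well.

```python
import numpy as np

w = np.zeros((2, 2))   # lateral coupling between two muscle-command units
lr = 0.05

# Hours of practice in which both muscles are always used together.
for _ in range(100):
    activity = np.array([1.0, 1.0])
    w += lr * np.outer(activity, activity)
    np.fill_diagonal(w, 0.0)   # ignore self-connections

# Later, attempting to activate muscle 0 on its own:
command = np.array([1.0, 0.0])
effective_drive = command + w @ command
print(effective_drive)   # muscle 1 now receives strong input despite no command
```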
McClelland also considers the case of phonological confusion in Japanese speakers with English as a second language; for this population, the English sounds /r/ and /l/ are notoriously difficult to distinguish. McClelland hypothesized that this difficulty arises from the fact that English /r/ and /l/ sounds actually correspond to the same phoneme in Japanese, and that every time an English speaker made either an /r/ or an /l/ sound, Japanese speakers would experience the activation of a single "r & l phoneme combination" representation. Through Hebbian learning, this would lead to /r/ and /l/ becoming further intertwined based on mere exposure alone.
Based on this reasoning, McClelland was able to design a procedure that could train Japanese speakers to perceptually discriminate /r/ and /l/ sounds - all without their ever getting feedback on whether they were correctly guessing if a given sound was an /r/ or an /l/. The procedure was essentially the following: Japanese speakers began by listening to highly exaggerated /r/ and /l/ sounds, and classifying them as either "r's" or "l's." After they got several consecutive discriminations correct (but were never informed of this), the sounds were covertly replaced with slightly more similar /r/ and /l/ sounds.
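Here is one way that adaptive procedure could be sketched in Python. The functions present_sound and get_response are hypothetical placeholders, and the step size and "several consecutive correct" criterion are assumptions; the structural features taken from the description above are that the listener is never told whether a guess was right, and that the contrast is covertly reduced only after a run of correct responses.

```python
import random

def run_session(present_sound, get_response, n_trials=200,
                start_exaggeration=1.0, step=0.1, streak_needed=3):
    """Train /r/ vs /l/ discrimination without ever giving feedback.

    present_sound(label, exaggeration) -> plays an /r/ or /l/ token whose
        acoustic contrast is exaggerated by the given amount.
    get_response() -> the listener's guess, "r" or "l".
    """
    exaggeration = start_exaggeration   # 1.0 = maximally exaggerated contrast
    streak = 0
    for _ in range(n_trials):
        label = random.choice(["r", "l"])
        present_sound(label, exaggeration)
        guess = get_response()
        # The listener is never told whether this guess was correct.
        if guess == label:
            streak += 1
            if streak >= streak_needed:
                # Covertly make the two categories slightly harder to tell apart.
                exaggeration = max(0.0, exaggeration - step)
                streak = 0
        else:
            streak = 0
    return exaggeration
```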
This training procedure is thought to work as follows: the exaggerated sounds activate distinct percepts by virtue of their exaggeration, rather than the single "r & l phoneme combination" percept normally activated by a typical English pronunciation of /r/ and /l/. Repeatedly pairing /r/ and /l/ sounds with their respective distinct percepts then strengthens the mappings between these representations through Hebbian mechanisms.
This may be one of the only examples in which instruction is not paired with feedback yet is nonetheless completely successful. But even if there are other examples, this finding underscores just how pervasive Hebbian mechanisms may be in the neural computations underlying our everyday experiences.
Related Posts:
Neural Network Models of the Hippocampus
3 Comments:
Hey Pie - Many of the top researchers in this field would agree with you that Hebbian learning underlies the "phonological tuning" that occurs in infants. For example, the youngest infants will show through various behaviors that they can discriminate the phonemes used in many human languages - but slightly older infants will show sensitivity only to those phonemes used in their native language.
I like your site. I have not read everything yet but I intend to. Thanks for the effort.
Thanks Heidi!