Tempotron

The Tempotron is a supervised synaptic learning algorithm that is applied when information is encoded in spatiotemporal spiking patterns. It extends the perceptron, which does not incorporate a spike-timing framework.

It is the general consensus that spike timing-dependent plasticity (STDP) plays a crucial role in the development of synaptic efficacy for many different kinds of neurons.[1] Consequently, a large variety of STDP rules has been developed, one of which is the tempotron.

Algorithm

Assuming a leaky integrate-and-fire model, the membrane potential V(t) of the neuron can be described by

 V(t)= \sum _i \omega _i\sum _{t_i}K(t-t_i) +V_{rest},

where t_i denotes the spike times of the i-th afferent synapse with synaptic efficacy \omega _i, and V_{rest} is the resting potential. K(t-t_i) describes the postsynaptic potential (PSP) elicited by each incoming spike:

K(t-t_i) = \begin{cases} V_0[\exp(-(t-t_i)/\tau)-\exp(-(t-t_i)/\tau_s)] & t\geq t_i \\ 0 & t< t_i  \end{cases}

with parameters \tau and \tau_s denoting the decay time constants of the membrane integration and of the synaptic currents, respectively. The factor V_0 normalizes the PSP kernels. When the potential crosses the firing threshold V_{th}, the potential is reset to its resting value by shunting all subsequent incoming spikes.
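
The following Python sketch illustrates these subthreshold dynamics. The parameter values (\tau = 15 ms, \tau_s = \tau/4), the threshold, and the function names are illustrative assumptions rather than prescriptions from the original paper, and the threshold-triggered shunting reset is omitted for brevity:

    import numpy as np

    # Illustrative parameters (assumptions, not necessarily the paper's values)
    tau, tau_s = 15.0, 15.0 / 4.0    # membrane and synaptic decay constants (ms)
    V_rest, V_th = 0.0, 1.0          # resting potential and firing threshold

    # V_0 normalizes the PSP kernel K so that its peak value equals 1
    t_peak = (tau * tau_s / (tau - tau_s)) * np.log(tau / tau_s)
    V_0 = 1.0 / (np.exp(-t_peak / tau) - np.exp(-t_peak / tau_s))

    def K(dt):
        """PSP kernel K(t - t_i); zero before the presynaptic spike (dt < 0)."""
        dt = np.asarray(dt, dtype=float)
        return np.where(dt >= 0.0,
                        V_0 * (np.exp(-dt / tau) - np.exp(-dt / tau_s)),
                        0.0)

    def potential(t, spike_times, w):
        """Membrane potential V(t). spike_times[i] lists the spike times of
        afferent i and w[i] is its synaptic efficacy; the shunting reset after
        a threshold crossing is not modeled here."""
        return V_rest + sum(w_i * K(t - np.asarray(t_i)).sum()
                            for w_i, t_i in zip(w, spike_times))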

Next, a binary classification of the input patterns is needed (\circ refers to a pattern which should elicit at least one postsynaptic action potential, and \bullet refers to a pattern which should elicit no response). Initially, the neuron does not know which pattern belongs to which class and has to learn the classification iteratively, similar to the perceptron. The tempotron learns its task by adapting the synaptic efficacies \omega _i. If a \circ pattern is presented and the postsynaptic neuron did not spike, all synaptic efficacies are increased by \Delta \omega _i, whereas a \bullet pattern followed by a postsynaptic response leads to a decrease of the synaptic efficacies by \Delta \omega _i, with [2]

\Delta \omega _i=\lambda \sum _{t_i<t_{max}}K(t_{max}-t_i).

Here t_{max} denotes the time at which the postsynaptic potential V(t) reaches its maximal value, and \lambda is a constant that sets the maximum size of the synaptic update, playing the role of a learning rate.
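
A minimal sketch of one such update step, building on the potential and kernel functions above; the time grid used to locate t_{max} numerically, the trial duration, and the learning rate are hypothetical choices, and the shunting reset is again ignored:

    def tempotron_update(spike_times, w, should_fire, lr=1e-3,
                         t_grid=np.arange(0.0, 500.0, 0.1)):
        """One tempotron learning step (sketch). should_fire is True for a
        circle pattern and False for a bullet pattern; t_grid is a hypothetical
        evaluation grid over the trial used to find t_max numerically."""
        w = np.asarray(w, dtype=float)
        V = np.array([potential(t, spike_times, w) for t in t_grid])
        fired = bool(np.any(V >= V_th))
        if fired == should_fire:             # correct response: no weight change
            return w
        t_max = t_grid[np.argmax(V)]         # time of the maximal postsynaptic potential
        sign = 1.0 if should_fire else -1.0  # increase on a miss, decrease on a false alarm
        dw = np.array([lr * K(t_max - np.asarray(t_i)).sum()
                       for t_i in spike_times])
        return w + sign * dw

The restriction of the sum to spikes with t_i < t_{max} is handled implicitly, because the kernel K vanishes for spikes arriving after t_{max}.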

It should be mentioned that the tempotron is a special case of a model studied in an earlier work, which dealt with continuous inputs.[3]

Sources

  1. Caporale, N., & Dan, Y. (2008). Spike timing-dependent plasticity: a Hebbian learning rule. Annual Review of Neuroscience, 31, 25-46.
  2. Gütig, R., & Sompolinsky, H. (2006). The tempotron: a neuron that learns spike timing-based decisions. Nature Neuroscience, 9(3), 420-428.
  3. Zador, A. M., & Pearlmutter, B. A. (1996). VC dimension of an integrate-and-fire neuron model. Neural Computation, 8, 611-624.