Neural gas

Not to be confused with Nerve gas.


Neural gas is an artificial neural network, inspired by the self-organizing map and introduced in 1991 by Thomas Martinetz and Klaus Schulten.[1] The neural gas is a simple algorithm for finding optimal data representations based on feature vectors. The algorithm was named "neural gas" because of the dynamics of the feature vectors during the adaptation process, which distribute themselves like a gas within the data space. It is applied where data compression or vector quantization is an issue, for example in speech recognition,[2] image processing[3] or pattern recognition. As a robustly converging alternative to k-means clustering, it is also used for cluster analysis.[4]

Algorithm

Let P(x) be a probability distribution over data vectors x, and let wi, i = 1, ..., N, be a finite set of feature vectors.

At each time step t, a data vector x randomly drawn from P is presented. The feature vectors are then ranked by their distance to x: i0 denotes the index of the closest feature vector, i1 the index of the second closest, and so on, with iN-1 the index of the feature vector most distant to x. Each feature vector wik, k = 0, ..., N-1, is then adapted according to

 w_{i_k}^{t+1} = w_{i_k}^{t} + \varepsilon\cdot  e^{-k/\lambda}\cdot (x-w_{i_k}^{t})

with ε as the adaptation step size and λ as the so-called neighborhood range. ε and λ are reduced with increasing t. After sufficiently many adaptation steps the feature vectors cover the data space with minimum representation error.[5]
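As an illustration, a minimal sketch of this procedure in Python might look as follows. The function name train_neural_gas, the default numbers of units and steps, and the exponential annealing schedules for ε and λ are example choices for this sketch, not prescribed by the original formulation:

    import numpy as np

    def train_neural_gas(data, n_units=20, n_steps=10000,
                         eps_start=0.5, eps_end=0.005,
                         lam_start=10.0, lam_end=0.01,
                         rng=None):
        """Fit neural gas feature vectors to a data set (one sample per row)."""
        rng = np.random.default_rng() if rng is None else rng
        data = np.asarray(data, dtype=float)

        # Initialize the feature vectors on randomly chosen data points.
        w = data[rng.choice(len(data), size=n_units, replace=False)].copy()

        for t in range(n_steps):
            # Anneal step size eps and neighborhood range lam with increasing t
            # (here: exponential decay, an example schedule).
            frac = t / n_steps
            eps = eps_start * (eps_end / eps_start) ** frac
            lam = lam_start * (lam_end / lam_start) ** frac

            # Present one data vector drawn at random from the data set.
            x = data[rng.integers(len(data))]

            # Determine the distance order: ranks[i] = k means w[i] is the
            # (k+1)-th closest feature vector to x.
            order = np.argsort(np.linalg.norm(w - x, axis=1))
            ranks = np.empty(n_units, dtype=int)
            ranks[order] = np.arange(n_units)

            # Adaptation step: every feature vector moves toward x, with a
            # step size decaying exponentially in its distance rank.
            w += eps * np.exp(-ranks / lam)[:, None] * (x - w)

        return w

Calling train_neural_gas(X) on a data matrix X with one sample per row would then return the N adapted feature vectors.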

The adaptation step of the neural gas can be interpreted as gradient descent on a cost function. Because not only the closest feature vector but all feature vectors are adapted, with a step size that decreases with increasing distance rank, the algorithm converges much more robustly than (online) k-means clustering. The neural gas model neither deletes existing nodes nor creates new ones.
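A sketch of one commonly stated form of this cost function, where ki(x, w) denotes the distance rank of wi with respect to x, e^{-k/\lambda} is the neighborhood function from the update rule, and C(λ) is a normalization constant, is

 E(w) = \frac{1}{2 C(\lambda)} \sum_{i=1}^{N} \int P(x)\, e^{-k_i(x,w)/\lambda}\, \left(x - w_i\right)^2 \, dx ,

whose stochastic gradient with respect to wi reproduces the adaptation rule above.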

References

  1. Thomas Martinetz and Klaus Schulten (1991). "A "neural gas" network learns topologies" (PDF). Artificial Neural Networks. Elsevier. pp. 397–402.
  2. F. Curatelli and O. Mayora-Iberra (2000). "Competitive learning methods for efficient Vector Quantizations in a speech recognition environment". In Osvaldo Cairó, L. Enrique Sucar, Francisco J. Cantú-Ortiz. MICAI 2000: Advances in artificial intelligence : Mexican International Conference on Artificial Intelligence, Acapulco, Mexico, April 2000 : proceedings. Springer. p. 109. ISBN 978-3-540-67354-5.
  3. Angelopoulou, Anastassia and Psarrou, Alexandra and Garcia Rodriguez, Jose and Revett, Kenneth (2005). "Automatic landmarking of 2D medical shapes using the growing neural gas network". In Yanxi Liu, Tianzi Jiang, Changshui Zhang. Computer vision for biomedical image applications: first international workshop, CVBIA 2005, Beijing, China, October 21, 2005 : proceedings. Springer. p. 210. doi:10.1007/11569541_22. ISBN 978-3-540-29411-5.
  4. Fernando Canales and Max Chacon (2007). "Modification of the growing neural gas algorithm for cluster analysis". In Luis Rueda, Domingo Mery, Josef Kittler, International Association for Pattern Recognition. Progress in pattern recognition, image analysis and applications: 12th Iberoamerican Congress on Pattern Recognition, CIARP 2007, Viña del Mar-Valparaiso, Chile, November 13–16, 2007 ; proceedings. Springer. pp. 684–693. doi:10.1007/978-3-540-76725-1_71. ISBN 978-3-540-76724-4.
  5. http://wwwold.ini.rub.de/VDM/research/gsn/JavaPaper/img187.gif
