Attractor Memory with Self-Organizing Input
2006 (English). In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2006, pp. 265-280. Conference paper (refereed).
We propose a neural-network-based autoassociative memory system for unsupervised learning. This system is intended as an example of how a general information processing architecture, similar to that of neocortex, could be organized. The network's units are arranged into two separate groups called populations: an input population and a hidden population. The units in the input population form receptive fields that sparsely project onto the units of the hidden population. Competitive learning is used to train these forward projections. The hidden population implements an attractor memory. A back projection from the hidden to the input population is trained with a Hebbian learning rule. This system is capable of processing correlated and densely coded patterns, which regular attractor neural networks handle poorly. The system shows good performance on a number of typical attractor neural network tasks such as pattern completion, noise reduction, and prototype extraction.
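The three-stage architecture described in the abstract can be illustrated with a minimal sketch. All sizes, learning rates, and update rules here are illustrative assumptions (winner-take-all competitive learning, a top-K sparse hidden code, a Hopfield-style attractor, and an outer-product Hebbian back projection), not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID, K = 16, 8, 2  # hypothetical sizes; K = active hidden units per code

# --- Forward projection (input -> hidden), trained with competitive learning ---
W_fwd = rng.random((N_HID, N_IN))
W_fwd /= np.linalg.norm(W_fwd, axis=1, keepdims=True)

def train_forward(x, lr=0.1):
    # Winner-take-all: move the best-matching hidden unit's weights toward x.
    winner = np.argmax(W_fwd @ x)
    W_fwd[winner] += lr * (x - W_fwd[winner])
    W_fwd[winner] /= np.linalg.norm(W_fwd[winner])

def encode(x):
    # Sparse hidden code: the K hidden units with the largest forward drive.
    h = np.full(N_HID, -1.0)
    h[np.argsort(W_fwd @ x)[-K:]] = 1.0
    return h

# --- Attractor memory over the sparse hidden codes (Hopfield-style) ---
W_att = np.zeros((N_HID, N_HID))

def store(h):
    global W_att
    W_att += np.outer(h, h)
    np.fill_diagonal(W_att, 0.0)

def recall(h, steps=5):
    # Iterate the attractor dynamics to settle into a stored hidden code.
    for _ in range(steps):
        h = np.sign(W_att @ h)
        h[h == 0] = 1.0
    return h

# --- Back projection (hidden -> input), trained with a Hebbian rule ---
W_back = np.zeros((N_IN, N_HID))

def train_back(x, h, lr=0.1):
    global W_back
    W_back += lr * np.outer(x, h)

# Train on two dense binary patterns, then store their hidden codes.
patterns = [rng.integers(0, 2, N_IN).astype(float) for _ in range(2)]
for x in patterns:
    for _ in range(20):
        train_forward(x)
for x in patterns:
    h = encode(x)
    store(h)
    train_back(x, h)

# Pattern completion: encode, settle in the attractor, project back.
h0 = encode(patterns[0])
x_rec = (W_back @ recall(h0) > 0).astype(float)
print(x_rec)
```

The point of the sketch is the division of labor: the attractor operates on sparse, decorrelated hidden codes produced by competitive learning, which is why the system can cope with dense, correlated inputs that a plain attractor network cannot store directly.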
Place, publisher, year, edition, pages
2006. pp. 265-280.
Series
Lecture Notes in Computer Science, ISSN 0302-9743
Keywords
Computer architecture, Data processing, Learning systems, Neural networks, Noise abatement, Software prototyping
Identifiers
URN: urn:nbn:se:kth:diva-6241
DOI: 10.1007/11613022_22
ISI: 000235807500022
Scopus ID: 2-s2.0-33744937897
ISBN: 3540312536
ISBN: 978-354031253-6
OAI: oai:DiVA.org:kth-6241
DiVA: diva2:10893
Conference
2nd International Workshop on Biologically Inspired Approaches to Advanced Information Technology (BioADIT 2006), Osaka, Japan, 26-27 January 2006
QC 20150714. Available: 2006-10-09. Created: 2006-10-09. Last updated: 2015-07-14. Bibliographically approved.