An Attractor Memory Model of Neocortex
KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
2006 (English) Doctoral thesis, comprehensive summary (Other scientific)
Abstract [en]

This thesis presents an abstract model of the mammalian neocortex. The model was constructed by taking a top-down view of the cortex, where it is assumed that the cortex, to a first approximation, works as a system with attractor dynamics. The model deals with the processing of static inputs from the biological-mapping, algorithmic, and physical-implementation perspectives, but it does not consider the temporal aspects of these inputs. The purpose of the model is twofold: first, it is an abstract model of the cortex and as such can be used to evaluate hypotheses about cortical function and structure; second, it forms the basis of a general information processing system that may be implemented in computers. The characteristics of this model are studied both analytically and by simulation experiments, and we also discuss its parallel implementation on cluster computers as well as in digital hardware.

The basic design of the model is based on a thorough literature study of the anatomy and physiology of the mammalian cortex. We review both the layered and columnar structure of cortex as well as the long- and short-range connectivity between neurons. Characteristics of cortex that define its computational complexity, such as the time-scales of the cellular processes that transport ions in and out of neurons and give rise to electric signals, are also investigated. In particular, we study the size of cortex in terms of neuron and synapse numbers in five mammals: mouse, rat, cat, macaque, and human. The cortical model is implemented with a connectionist type of network in which the functional units correspond to cortical minicolumns, and these are in turn grouped into hypercolumn modules. The learning rules used in the model are local in space and time, which makes them biologically plausible and also allows for efficient parallel implementation. We study the implemented model both as a single- and a multi-layered network. Instances of the model with sizes up to that of a rat-cortex equivalent are implemented and run on cluster computers at 23% of real time. We demonstrate on tasks involving image data that the cortical model can be used for meaningful computations such as noise reduction, pattern completion, prototype extraction, hierarchical clustering, classification, and content-addressable memory, and we show that even the largest cortex-equivalent instances of the model can perform these types of computations. Important characteristics of the model are that it is insensitive to limited errors in the computational hardware and to noise in the input data. Furthermore, it can learn from examples and is self-organizing to some extent. The proposed model contributes to the quest to understand the cortex, and it is also a first step towards a brain-inspired computing system that can be implemented in the molecular-scale computers of tomorrow.
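
To make the modular structure concrete, the sketch below (not taken from the thesis) shows a toy attractor network whose units stand for minicolumns grouped into hypercolumns; it stores patterns with a plain Hebbian outer-product rule (a stand-in for the BCPNN rule used in the thesis) and recalls them with winner-take-all competition inside each hypercolumn. All sizes and the learning rule are illustrative assumptions.

```python
import numpy as np

# Toy attractor network: units stand for minicolumns, grouped into
# hypercolumns; one unit per hypercolumn is active in a stored pattern.
# Sizes and the learning rule are illustrative, not from the thesis.
H, M = 10, 8                     # hypercolumns, minicolumns per hypercolumn
N = H * M                        # total number of units

rng = np.random.default_rng(0)

def random_pattern():
    """Sparse binary pattern: one active minicolumn per hypercolumn."""
    x = np.zeros(N)
    for h in range(H):
        x[h * M + rng.integers(M)] = 1.0
    return x

patterns = [random_pattern() for _ in range(5)]

# Hebbian outer-product storage (a simple stand-in for BCPNN learning).
W = sum(np.outer(p, p) for p in patterns)
np.fill_diagonal(W, 0.0)

def recall(x, steps=10):
    """Iterate the support values with winner-take-all per hypercolumn."""
    for _ in range(steps):
        s = W @ x
        x = np.zeros(N)
        for h in range(H):
            x[h * M + np.argmax(s[h * M:(h + 1) * M])] = 1.0
    return x

# Pattern completion: corrupt the first hypercolumn and recall.
noisy = patterns[0].copy()
noisy[:M] = 0.0
noisy[rng.integers(M)] = 1.0
print("recovered stored pattern:", bool(np.array_equal(recall(noisy), patterns[0])))
```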

The main contributions of this thesis are: (i) A review of the size, modularization, and computational structure of the mammalian neocortex. (ii) An abstract generic connectionist network model of the mammalian cortex. (iii) A framework for a brain-inspired self-organizing information processing system. (iv) Theoretical work on the properties of the model when used as an autoassociative memory. (v) Theoretical insights on the anatomy and physiology of the cortex. (vi) Efficient implementation techniques and simulations of cortical sized instances. (vii) A fixed-point arithmetic implementation of the model that can be used in digital hardware.

Place, publisher, year, edition, pages
Stockholm: KTH, 2006. ix, 148 p.
Series
Trita-CSC-A, ISSN 1653-5723; 2006:14
Keyword [en]
Attractor Neural Networks, Cerebral Cortex, Neocortex, Brain Like Computing, Hypercolumns, Minicolumns, BCPNN, Parallel Computers, Autoassociative Memory
National Category
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-4136, ISBN: 91-7178-461-6 (print), OAI: oai:DiVA.org:kth-4136, DiVA: diva2:10896
Public defence
2006-10-26, F2, Lindstedtsvägen 28, Stockholm, 10:15
Note
QC 20100903. Available from: 2006-10-09. Created: 2006-10-09. Last updated: 2010-09-03. Bibliographically approved.
List of papers
1. Towards Cortex Sized Artificial Neural Systems
2007 (English) In: Neural Networks, ISSN 0893-6080, E-ISSN 1879-2782, Vol. 20, no. 1, pp. 48-61. Article in journal (Refereed) Published
Abstract [en]

We propose, implement, and discuss an abstract model of the mammalian neocortex. This model is instantiated with a sparse, recurrently connected neural network that has spiking leaky integrator units and continuous Hebbian learning. First we study the structure, modularization, and size of neocortex, and then we describe a generic computational model of the cortical circuitry. A characterizing feature of the model is that it is based on the modularization of neocortex into hypercolumns and minicolumns. Both a floating- and a fixed-point arithmetic implementation of the model are presented along with simulation results. We conclude that an implementation on a cluster computer is bounded by computation rather than communication. Mouse and rat cortex sized versions of our model execute at 44% and 23% of real time, respectively. Further, an instance of the model with 1.6 × 10^6 units and 2 × 10^11 connections performed noise reduction and pattern completion. These implementations represent the current frontier of large-scale abstract neural network simulations in terms of network size and running speed.
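
As a rough illustration of the unit model named above, here is a minimal sketch of a spiking leaky integrator unit with exponentially decaying pre- and postsynaptic traces driving a continuous Hebbian weight update. The time constants, threshold, and learning rate are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a spiking leaky integrator unit with exponentially
# decaying pre-/postsynaptic traces driving a continuous Hebbian update.
# Time constants, threshold, and learning rate are illustrative values.
dt = 1.0            # time step (ms)
tau_m = 10.0        # membrane/support time constant (ms)
tau_z = 100.0       # learning-trace time constant (ms)
theta = 0.5         # firing threshold
alpha = 0.01        # learning rate

def step(s, z_pre, z_post, w, x_in):
    """One Euler step for the support s, the traces, and a single weight."""
    s += (dt / tau_m) * (x_in - s)            # leaky integration of the input
    out = 1.0 if s > theta else 0.0           # threshold "spike"
    z_pre += (dt / tau_z) * (x_in - z_pre)    # presynaptic trace
    z_post += (dt / tau_z) * (out - z_post)   # postsynaptic trace
    w += alpha * z_pre * z_post               # continuous Hebbian weight change
    return s, z_pre, z_post, w, out

# Drive the unit with a constant input and watch the weight grow slowly.
s = z_pre = z_post = w = 0.0
for _ in range(1000):
    s, z_pre, z_post, w, out = step(s, z_pre, z_post, w, x_in=1.0)
print(round(s, 3), round(w, 3))
```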

Keyword
cerebral cortex, neocortex, attractor neural networks, cortical model, large scale implementation, cluster computers, hypercolumns, fixed-point arithmetic
National Category
Computer Science
Identifiers
urn:nbn:se:kth:diva-6236 (URN), 10.1016/j.neunet.2006.05.029 (DOI), 000243841900004 (ISI), 16860539 (PubMedID), 2-s2.0-33845642736 (Scopus ID)
Note
QC 20100902. Available from: 2006-10-09. Created: 2006-10-09. Last updated: 2017-12-14. Bibliographically approved.
2. A Neural Network with Hypercolumns
2002 (English) Conference paper, Published paper (Refereed)
Series
LNCS 2415
National Category
Vehicle Engineering
Identifiers
urn:nbn:se:kth:diva-6237 (URN)
Conference
In Proc. International Conference on Artificial Neural Networks - ICANN’02
Note

QC 20100902

Available from: 2006-10-09. Created: 2006-10-09. Last updated: 2016-05-27. Bibliographically approved.
3. Attractor neural networks with patchy connectivity
2006 (English) In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 69, no. 7-9, pp. 627-633. Article in journal (Refereed) Published
Abstract [en]

The neurons in the mammalian visual cortex are arranged in columnar structures, and the synaptic contacts of the pyramidal neurons in layer II/III are clustered into patches that are sparsely distributed over the surrounding cortical surface. Here, we use an attractor neural-network model of the cortical circuitry to investigate the effects of patchy connectivity, both on the properties of the network and on the attractor dynamics. An analysis of the network shows that the signal-to-noise ratio of the synaptic potential sums is improved by the patchy connectivity, which results in a higher storage capacity. This analysis is performed for both the Hopfield and Willshaw learning rules, and the results are confirmed by simulation experiments.
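
The following sketch shows one way to combine a Hopfield-style learning rule with a sparse connectivity mask, in the spirit of the diluted networks analyzed in the paper. The mask here is random rather than patchy (clustered), and all sizes are illustrative assumptions.

```python
import numpy as np

# Hopfield-style attractor network with a sparse connectivity mask.
# In the paper the mask is patchy (clustered); here a random mask with
# the same density stands in. Sizes are illustrative.
rng = np.random.default_rng(1)
N, P, density = 200, 5, 0.25         # units, stored patterns, connection density

patterns = rng.choice([-1.0, 1.0], size=(P, N))

mask = rng.random((N, N)) < density
mask = np.triu(mask, 1)
mask = mask | mask.T                 # symmetric mask, empty diagonal

W = (patterns.T @ patterns) / N      # Hopfield outer-product rule
W = W * mask                         # keep only the masked connections

def recall(x, steps=20):
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1.0
    return x

probe = patterns[0].copy()
flip = rng.choice(N, size=20, replace=False)
probe[flip] *= -1.0                  # corrupt 10% of the bits
print("overlap before:", float(probe @ patterns[0]) / N,
      "after:", float(recall(probe) @ patterns[0]) / N)
```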

Keyword
attractor neural network, patchy connectivity, clustered connections, neocortex, small world network, hypercolumn
National Category
Neurosciences
Identifiers
urn:nbn:se:kth:diva-6311 (URN), 10.1016/j.neucom.2005.12.002 (DOI), 000235797000002 (ISI), 2-s2.0-32644438075 (Scopus ID)
Note
QC 20100831. Conference: 13th European Symposium on Artificial Neural Networks (ESANN), Brugge, Belgium, April 2005. Available from: 2006-11-01. Created: 2006-11-01. Last updated: 2017-12-14. Bibliographically approved.
4. Clustering of stored memories in an attractor network with local competition
2006 (English) In: International Journal of Neural Systems, ISSN 0129-0657, E-ISSN 1793-6462, Vol. 16, no. 6, pp. 393-403. Article in journal (Refereed) Published
Abstract [en]

In this paper we study an attractor network with units that compete locally for activation, and we prove that a reduced version of it has fixpoint dynamics. We analyze, and complement with simulation experiments, the local characteristics of the network's attractors with respect to a parameter controlling the intensity of the local competition. We find that the attractors are hierarchically clustered when this parameter is changed.
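
A minimal way to picture the local competition is a softmax within one hypercolumn whose gain plays the role of the competition-intensity parameter: low gain gives graded activity, high gain approaches winner-take-all. This is an illustrative sketch, not the network analyzed in the paper.

```python
import numpy as np

# Local competition within one hypercolumn: a softmax whose gain G plays
# the role of the competition-intensity parameter. Large G approaches
# winner-take-all; small G gives soft, graded activity. Illustrative only.
def local_competition(support, G):
    e = np.exp(G * (support - support.max()))   # shift by max for stability
    return e / e.sum()

support = np.array([1.0, 0.9, 0.2, 0.1])        # support values of 4 minicolumn units
for G in (1.0, 5.0, 50.0):
    print(G, np.round(local_competition(support, G), 3))
```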

Keyword
Attractor neural network, Hierarchical clustering, Hypercolumn, Local competition, Memory clustering
National Category
Bioinformatics (Computational Biology)
Identifiers
urn:nbn:se:kth:diva-6239 (URN), 10.1142/S0129065706000809 (DOI), 000243806000001 (ISI), 17285686 (PubMedID), 2-s2.0-33846695887 (Scopus ID)
Note
QC 20100902. Available from: 2006-10-09. Created: 2006-10-09. Last updated: 2017-12-14. Bibliographically approved.
5. Implementing Plastic Weights in Neural Networks using Low Precision Arithmetic
2009 (English) In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 72, no. 4-6, pp. 968-972. Article in journal (Refereed) Published
Abstract [en]

In this letter, we develop a fixed-point arithmetic, low-precision implementation of an exponentially weighted moving average (EWMA) that is used in a neural network with plastic weights. We analyze the proposed design both analytically and experimentally, and we also evaluate its performance in an attractor neural network application. The EWMA in the proposed design has a constant relative truncation error, which is important for avoiding round-off errors in applications with slowly decaying processes, e.g. connectionist networks. We conclude that the proposed design offers greatly improved memory and computational efficiency compared to a naive implementation of the EWMA's difference equation, and that it is well suited for implementation in digital hardware.
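
The EWMA referred to above satisfies the difference equation y[n] = y[n-1] + a(x[n] - y[n-1]). The sketch below contrasts a floating-point update with a naive truncating fixed-point update, illustrating the round-off problem the paper addresses; it does not reproduce the paper's constant-relative-error design, and the word length is an arbitrary choice.

```python
# EWMA difference equation:  y[n] = y[n-1] + a * (x[n] - y[n-1]),  0 < a < 1.
# Below, a floating-point update is compared with a naive truncating
# fixed-point update. This illustrates the round-off problem the paper
# addresses; it is NOT the constant-relative-error design proposed there.
FRAC_BITS = 8                        # fractional bits (arbitrary choice)
SCALE = 1 << FRAC_BITS

def ewma_float(y, x, a):
    return y + a * (x - y)

def ewma_fixed(y_q, x_q, a_q):
    """All arguments are integers scaled by 2**FRAC_BITS."""
    return y_q + ((a_q * (x_q - y_q)) >> FRAC_BITS)   # truncating shift

# With a small smoothing factor, the fixed-point estimate stalls as soon as
# the increment truncates to zero, while the float estimate keeps converging.
a = 1.0 / 64
y_f, y_q = 0.0, 0
for _ in range(500):
    y_f = ewma_float(y_f, 1.0, a)
    y_q = ewma_fixed(y_q, SCALE, int(a * SCALE))
print("float:", round(y_f, 4), "fixed:", y_q / SCALE)
```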

Keyword
Exponentially weighted moving average, Fixed-point arithmetic, Leaky integrator, Low precision variables, Neural networks, Plastic weights
National Category
Computer Science
Identifiers
urn:nbn:se:kth:diva-6240 (URN), 10.1016/j.neucom.2008.04.007 (DOI), 000263372000030 (ISI), 2-s2.0-58149468833 (Scopus ID)
Conference
2nd International Work-Conference on the Interplay Between Natural and Artificial Computation, La Manga del Mar Menor, Spain, June 18-21, 2007
Note

QC 20100917. Updated from Submitted to Published (20100917)

QC 20150915

Available from: 2006-10-09 Created: 2006-10-09 Last updated: 2017-12-14Bibliographically approved
6. Attractor Memory with Self-Organizing Input
2006 (English) In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2006, pp. 265-280. Conference paper, Published paper (Refereed)
Abstract [en]

We propose a neural network based autoassociative memory system for unsupervised learning. This system is intended to be an example of how a general information processing architecture, similar to that of neocortex, could be organized. The neural network has its units arranged into two separate groups called populations, one input and one hidden population. The units in the input population form receptive fields that sparsely project onto the units of the hidden population. Competitive learning is used to train these forward projections. The hidden population implements an attractor memory. A back projection from the hidden to the input population is trained with a Hebbian learning rule. This system is capable of processing correlated and densely coded patterns, which regular attractor neural networks are very poor at. The system shows good performance on a number of typical attractor neural network tasks such as pattern completion, noise reduction, and prototype extraction.
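
The two-population layout described above can be sketched as follows: a forward projection trained with simple winner-take-all competitive learning and a back projection trained with a Hebbian-style running-average rule. The hidden population's attractor dynamics are omitted, and all sizes and learning rates are illustrative assumptions.

```python
import numpy as np

# Sketch of the two-population architecture: an input population projects
# forward to a hidden population through weights trained with winner-take-all
# competitive learning, and a back projection is trained with a Hebbian-style
# running-average rule. Sizes and rates are illustrative.
rng = np.random.default_rng(0)
n_in, n_hid, lr = 64, 16, 0.1

W_fwd = rng.random((n_hid, n_in))        # input -> hidden (forward projection)
W_back = np.zeros((n_in, n_hid))         # hidden -> input (back projection)

def train(x):
    # Competitive learning: the best-matching hidden unit moves toward x.
    winner = int(np.argmax(W_fwd @ x))
    W_fwd[winner] += lr * (x - W_fwd[winner])
    # Back projection: running average of the inputs that activate this
    # hidden unit (a simple Hebbian-style rule between winner and input).
    W_back[:, winner] += lr * (x - W_back[:, winner])
    h = np.zeros(n_hid)
    h[winner] = 1.0
    return h

def reconstruct(h):
    return W_back @ h

# Train on random dense patterns, then reconstruct one from its hidden code.
data = rng.random((200, n_in))
for x in data:
    train(x)
h = train(data[0])
print("reconstruction MSE:", round(float(np.mean((reconstruct(h) - data[0]) ** 2)), 4))
```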

Series
Lecture Notes in Computer Science, ISSN 0302-9743
Keyword
Computer architecture, Data processing, Learning systems, Neural networks, Noise abatement, Software prototyping
National Category
Computer Science
Identifiers
urn:nbn:se:kth:diva-6241 (URN), 10.1007/11613022_22 (DOI), 000235807500022 (ISI), 2-s2.0-33744937897 (Scopus ID), 3540312536 (ISBN), 978-354031253-6 (ISBN)
Conference
2nd International Workshop on Biologically Inspired Approaches to Advanced Information Technology, BioADIT 2006, Osaka, Japan, 26-27 January 2006
Note

QC 20150714

Available from: 2006-10-09. Created: 2006-10-09. Last updated: 2015-07-14. Bibliographically approved.
7. A Hierarchical Brain Inspired Computing System
2006 (English) Conference paper, Oral presentation only (Refereed)
National Category
Computer Science
Identifiers
urn:nbn:se:kth:diva-6242 (URN)
Conference
International Symposium on Nonlinear Theory and its Applications – NOLTA’06, Sep. 11-14, Bologna, Italy
Note

QC 20100903

Available from: 2006-10-09. Created: 2006-10-09. Last updated: 2015-09-21. Bibliographically approved.
8. Imposing Biological Constraints onto an Abstract Neocortical Attractor Network Model
2007 (English) In: Neural Computation, ISSN 0899-7667, E-ISSN 1530-888X, Vol. 19, no. 7, pp. 1871-1896. Article in journal (Refereed) Published
Abstract [en]

In this letter, we study an abstract model of neocortex based on its modularization into mini- and hypercolumns. We discuss a full-scale instance of this model and connect its network properties to the underlying biological properties of neurons in cortex. In particular, we discuss how the biological constraints put on the network determine the network's performance in terms of storage capacity. We show that a network instantiating the model scales well given the biologically constrained parameters on activity and connectivity, which makes this network interesting also as an engineered system. In this model, the minicolumns are grouped into hypercolumns that can be active or quiescent, and the model predicts that only a few percent of the hypercolumns should be active at any one time. With this model, we show that at least 20 to 30 pyramidal neurons should be aggregated into a minicolumn and at least 50 to 60 minicolumns should be grouped into a hypercolumn in order to achieve high storage capacity.
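
As a back-of-the-envelope illustration of how the mini/hypercolumn grouping translates into network size, the snippet below combines the per-column lower bounds quoted in the abstract with hypothetical totals; the number of hypercolumns and the connectivity fraction are made-up example values, not figures from the paper.

```python
# Back-of-the-envelope sketch of how the mini/hypercolumn grouping sets the
# network size. The per-column figures are the lower bounds quoted in the
# abstract; the number of hypercolumns and the connectivity fraction are
# hypothetical example values, not figures from the paper.
neurons_per_minicolumn = 25        # abstract: at least 20-30
minicolumns_per_hypercolumn = 55   # abstract: at least 50-60
n_hypercolumns = 1_000             # hypothetical example value
connectivity = 0.01                # hypothetical fraction of all possible
                                   # minicolumn-to-minicolumn connections

n_minicolumns = n_hypercolumns * minicolumns_per_hypercolumn
n_neurons = n_minicolumns * neurons_per_minicolumn
n_connections = connectivity * n_minicolumns ** 2

print(f"minicolumn units: {n_minicolumns:,}")
print(f"neurons:          {n_neurons:,}")
print(f"unit connections: {n_connections:,.0f}")
```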

Keyword
cortical associative memory, motor cortex, neural-networks, visual-cortex, autoassociative memory, synaptic connectivity, pyramidal neurons, cerebral-cortex, short-range, dynamics
National Category
Computer Science
Identifiers
urn:nbn:se:kth:diva-6243 (URN), 10.1162/neco.2007.19.7.1871 (DOI), 000246886200007 (ISI), 17521282 (PubMedID), 2-s2.0-34447262844 (Scopus ID)
Note
QC 20100903. Available from: 2006-10-09. Created: 2006-10-09. Last updated: 2017-12-14. Bibliographically approved.

Open Access in DiVA

fulltext (PDF, 1702 kB)
File information
File name: FULLTEXT01.pdf, File size: 1702 kB, Type: fulltext, Mimetype: application/pdf

Search in DiVA

By author/editor
Johansson, Christopher
By organisation
Numerical Analysis and Computer Science, NADA
Computer Science
