Imposing Biological Constraints onto an Abstract Neocortical Attractor Network Model
KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
2007 (English) In: Neural Computation, ISSN 0899-7667, E-ISSN 1530-888X, Vol. 19, no 7, p. 1871-1896. Article in journal (Refereed) Published
Abstract [en]

In this letter, we study an abstract model of neocortex based on its modularization into mini- and hypercolumns. We discuss a full-scale instance of this model and connect its network properties to the underlying biological properties of neurons in cortex. In particular, we discuss how the biological constraints placed on the network determine its performance in terms of storage capacity. We show that a network instantiating the model scales well given the biologically constrained parameters on activity and connectivity, which makes this network interesting as an engineered system as well. In this model, the minicolumns are grouped into hypercolumns that can be active or quiescent, and the model predicts that only a few percent of the hypercolumns should be active at any one time. With this model, we show that at least 20 to 30 pyramidal neurons should be aggregated into a minicolumn and at least 50 to 60 minicolumns should be grouped into a hypercolumn in order to achieve high storage capacity.
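The hypercolumn-modular structure described in the abstract can be illustrated with a toy associative memory: patterns activate a few hypercolumns, each active hypercolumn has exactly one active minicolumn, and recall applies winner-take-all within each hypercolumn. The network sizes, the clipped Hebbian (Willshaw-style) storage rule, and the recall dynamics below are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

H, M = 100, 50          # hypercolumns, minicolumns per hypercolumn (toy sizes)
A = 5                   # active hypercolumns per pattern (a few percent of H)
N = H * M               # total units (minicolumns)
P = 30                  # number of stored patterns

def random_pattern():
    """One active minicolumn in each of A randomly chosen hypercolumns."""
    x = np.zeros(N)
    for h in rng.choice(H, size=A, replace=False):
        x[h * M + rng.integers(M)] = 1.0
    return x

patterns = [random_pattern() for _ in range(P)]

# Clipped Hebbian outer-product weights (Willshaw-style, an assumption here)
W = np.clip(sum(np.outer(p, p) for p in patterns), 0, 1)
np.fill_diagonal(W, 0.0)

def recall(cue, steps=3):
    """Iterate: support = W @ state, winner-take-all within each hypercolumn,
    then keep only the A most strongly supported hypercolumns active."""
    s = cue.copy()
    for _ in range(steps):
        support = W @ s
        nxt = np.zeros(N)
        strengths = np.zeros(H)
        for h in range(H):
            seg = support[h * M:(h + 1) * M]
            strengths[h] = seg.max()
            if seg.max() > 0:
                nxt[h * M + seg.argmax()] = 1.0
        for h in np.argsort(strengths)[:-A]:    # quiesce all but the top A
            nxt[h * M:(h + 1) * M] = 0.0
        s = nxt
    return s

# Cue with a degraded version of a stored pattern: drop 2 of its 5 active units
target = patterns[0]
cue = target.copy()
cue[np.flatnonzero(cue)[:2]] = 0.0
restored = recall(cue)
```

With this sparse activity (5 of 100 hypercolumns), cross-talk between stored patterns is small and the degraded cue is completed back to the stored pattern.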

Place, publisher, year, edition, pages
2007. Vol. 19, no 7, p. 1871-1896.
Keyword [en]
cortical associative memory, motor cortex, neural-networks, visual-cortex, autoassociative memory, synaptic connectivity, pyramidal neurons, cerebral-cortex, short-range, dynamics
National Category
Computer Science
URN: urn:nbn:se:kth:diva-6243
DOI: 10.1162/neco.2007.19.7.1871
ISI: 000246886200007
PubMedID: 17521282
ScopusID: 2-s2.0-34447262844
OAI: diva2:10895
QC 20100903. Available from: 2006-10-09. Created: 2006-10-09. Last updated: 2011-12-29. Bibliographically approved.
In thesis
1. An Attractor Memory Model of Neocortex
2006 (English)Doctoral thesis, comprehensive summary (Other scientific)
Abstract [en]

This thesis presents an abstract model of the mammalian neocortex. The model was constructed by taking a top-down view on the cortex, where it is assumed that cortex to a first approximation works as a system with attractor dynamics. The model deals with the processing of static inputs from the perspectives of biological mapping, algorithmic description, and physical implementation, but it does not consider the temporal aspects of these inputs. The purpose of the model is twofold: Firstly, it is an abstract model of the cortex and as such it can be used to evaluate hypotheses about cortical function and structure. Secondly, it forms the basis of a general information processing system that may be implemented in computers. The characteristics of this model are studied both analytically and by simulation experiments, and we also discuss its parallel implementation on cluster computers as well as in digital hardware.

The basic design of the model is based on a thorough literature study of the mammalian cortex's anatomy and physiology. We review both the layered and columnar structure of cortex and also the long- and short-range connectivity between neurons. Characteristics of cortex that define its computational complexity, such as the time scales of cellular processes that transport ions in and out of neurons and give rise to electric signals, are also investigated. In particular, we study the size of cortex in terms of neuron and synapse numbers in five mammals: mouse, rat, cat, macaque, and human. The cortical model is implemented with a connectionist type of network where the functional units correspond to cortical minicolumns, and these are in turn grouped into hypercolumn modules. The learning rules used in the model are local in space and time, which makes them biologically plausible and also allows for efficient parallel implementation. We study the implemented model both as a single- and multi-layered network. Instances of the model with sizes up to that of a rat-cortex equivalent are implemented and run on cluster computers at 23% of real time. We demonstrate on tasks involving image data that the cortical model can be used for meaningful computations such as noise reduction, pattern completion, prototype extraction, hierarchical clustering, classification, and content-addressable memory, and we show that even the largest cortex-equivalent instances of the model can perform these types of computations. Important characteristics of the model are that it is insensitive to limited errors in the computational hardware and to noise in the input data. Furthermore, it can learn from examples and is self-organizing to some extent. The proposed model contributes to the quest to understand the cortex, and it is also a first step towards a brain-inspired computing system that can be implemented in the molecular-scale computers of tomorrow.
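The locality of the learning rules mentioned above can be illustrated with a BCPNN-style (Bayesian Confidence Propagation Neural Network) weight estimate, where each weight is a log-odds ratio computed from activation and co-activation rates. BCPNN appears among the thesis keywords; the smoothing constant and the exact counting estimator below are assumptions of this sketch.

```python
import numpy as np

def bcpnn_weights(patterns, eps=1e-6):
    """BCPNN-style log-odds weights from binary activity patterns.

    The rule is local in space and time: w_ij depends only on the
    co-activation count of units i and j, and the bias b_i only on
    unit i's own activation rate.
    """
    X = np.asarray(patterns, dtype=float)        # shape (n_patterns, n_units)
    P = X.shape[0]
    p_i = (X.sum(axis=0) + eps) / (P + eps)      # unit activation rates
    p_ij = (X.T @ X + eps) / (P + eps)           # pairwise co-activation rates
    W = np.log(p_ij / np.outer(p_i, p_i))        # w_ij = log(p_ij / (p_i * p_j))
    b = np.log(p_i)                              # bias b_i = log(p_i)
    return W, b

# Toy check: units that always fire together get a positive weight,
# units that never co-occur get a strongly negative one.
pats = np.array([[1, 1, 0],
                 [1, 1, 0],
                 [0, 0, 1]])
W, b = bcpnn_weights(pats)
```

Because every quantity is a running count over the unit's own activity and its pairwise co-activity, the rule parallelizes naturally: each weight can be updated independently from purely local information.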

The main contributions of this thesis are: (i) A review of the size, modularization, and computational structure of the mammalian neocortex. (ii) An abstract generic connectionist network model of the mammalian cortex. (iii) A framework for a brain-inspired self-organizing information processing system. (iv) Theoretical work on the properties of the model when used as an autoassociative memory. (v) Theoretical insights into the anatomy and physiology of the cortex. (vi) Efficient implementation techniques and simulations of cortical-sized instances. (vii) A fixed-point arithmetic implementation of the model that can be used in digital hardware.
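Contribution (vii), a fixed-point arithmetic implementation, rests on quantizing real-valued weights to scaled integers so that the support sums can be computed with integer hardware. The Q-format choice (8 fractional bits) and the function names below are illustrative assumptions, not the thesis's actual format.

```python
import numpy as np

FRAC_BITS = 8                       # 8 fractional bits (assumed Q-format)
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    """Quantize floats to fixed-point integers, rounding to nearest."""
    return np.round(np.asarray(x) * SCALE).astype(np.int32)

def fixed_dot(w_fx, s_fx):
    """Integer dot product. The raw product carries 2 * FRAC_BITS
    fractional bits, so shift right once to return to the base format."""
    return int(np.dot(w_fx.astype(np.int64), s_fx.astype(np.int64)) >> FRAC_BITS)

w = np.array([0.5, -0.25, 1.0])     # weights onto one unit
s = np.array([1.0, 1.0, 0.5])       # presynaptic activity
support_fx = fixed_dot(to_fixed(w), to_fixed(s))
print(support_fx / SCALE)           # prints 0.75, matching the float dot product
```

Since only integer multiplies, adds, and shifts are needed, the same computation maps directly onto digital hardware without a floating-point unit.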

Place, publisher, year, edition, pages
Stockholm: KTH, 2006. ix, 148 p.
Series
Trita-CSC-A, ISSN 1653-5723 ; 2006:14
Keyword [en]
Attractor Neural Networks, Cerebral Cortex, Neocortex, Brain Like Computing, Hypercolumns, Minicolumns, BCPNN, Parallel Computers, Autoassociative Memory
National Category
Computer Science
urn:nbn:se:kth:diva-4136 (URN)
91-7178-461-6 (ISBN)
Public defence
2006-10-26, F2, Lindstedtsvägen 28, Stockholm, 10:15
QC 20100903. Available from: 2006-10-09. Created: 2006-10-09. Last updated: 2010-09-03. Bibliographically approved.

Open Access in DiVA

No full text

By author/editor
Johansson, Christopher; Lansner, Anders