Attractor neural networks with patchy connectivity
KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. ORCID iD: 0000-0002-2358-7815
2006 (English). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 69, no 7-9, 627-633 p. Article in journal (Refereed). Published.
Abstract [en]

The neurons in the mammalian visual cortex are arranged in columnar structures, and the synaptic contacts of the pyramidal neurons in layer II/III are clustered into patches that are sparsely distributed over the surrounding cortical surface. Here, we use an attractor neural-network model of the cortical circuitry to investigate the effects of patchy connectivity, both on the properties of the network and on the attractor dynamics. An analysis of the network shows that the signal-to-noise ratio of the synaptic potential sums is improved by the patchy connectivity, which results in a higher storage capacity. This analysis is performed for both the Hopfield and Willshaw learning rules, and the results are confirmed by simulation experiments.
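
The signal-to-noise comparison can be made concrete in a small simulation. The sketch below is not the paper's model: it is a minimal Willshaw-style associative net on a 2-D sheet, in which each unit draws its inputs either uniformly at random or from a handful of local patches, and the signal-to-noise ratio of the dendritic sums is then measured under each wiring scheme. All parameters (N, K, P, a, patch count and width) are illustrative assumptions, and the outcome in this toy setting depends on how the stored patterns are laid out over the sheet; the point is the measurement machinery, not a reproduction of the paper's result.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, P, a = 2500, 250, 60, 0.05   # units, inputs/unit, patterns, activity level
side = int(np.sqrt(N))             # units laid out on a 50x50 sheet
coords = np.stack(np.divmod(np.arange(N), side), axis=1)

def connectivity(patchy):
    """Boolean mask C[i, j]: unit i receives a synapse from unit j."""
    C = np.zeros((N, N), dtype=bool)
    for i in range(N):
        if patchy:
            # sample sources near a handful of randomly placed patch centres
            centres = coords[rng.integers(0, N, size=5)]
            d = np.linalg.norm(coords[None] - centres[:, None], axis=2).min(0)
            p = np.exp(-d / 3.0)   # patch width ~3 grid units (arbitrary)
        else:
            p = np.ones(N)         # uniform random dilution
        p[i] = 0.0                 # no self-connection
        C[i, rng.choice(N, size=K, replace=False, p=p / p.sum())] = True
    return C

# sparse binary patterns and Willshaw (clipped Hebbian) weights
patterns = np.zeros((P, N), dtype=bool)
for mu in range(P):
    patterns[mu, rng.choice(N, size=int(a * N), replace=False)] = True
W = np.zeros((N, N), dtype=bool)
for x in patterns:
    W |= np.outer(x, x)

def snr(C):
    """Separation of dendritic sums: should-be-on vs should-be-off units."""
    WC = (W & C).astype(np.int32)
    vals = []
    for x in patterns:
        h = WC @ x.astype(np.int32)
        vals.append((h[x].mean() - h[~x].mean()) / (h[~x].std() + 1e-12))
    return float(np.mean(vals))

for patchy in (False, True):
    print("patchy" if patchy else "uniform", round(snr(connectivity(patchy)), 2))
```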

Place, publisher, year, edition, pages
2006. Vol. 69, no 7-9, 627-633 p.
Keyword [en]
attractor neural network, patchy connectivity, clustered connections, neocortex, small world network, hypercolumn
National Category
Neurosciences
Identifiers
URN: urn:nbn:se:kth:diva-6311
DOI: 10.1016/j.neucom.2005.12.002
ISI: 000235797000002
Scopus ID: 2-s2.0-32644438075
OAI: oai:DiVA.org:kth-6311
DiVA: diva2:10990
Note
QC 20100831. Conference: 13th European Symposium on Artificial Neural Networks (ESANN), Brugge, Belgium, April 2005. Available from: 2006-11-01. Created: 2006-11-01. Last updated: 2017-12-14. Bibliographically approved.
In thesis
1. Aspects of memory and representation in cortical computation
2006 (English). Doctoral thesis, comprehensive summary (Other scientific).
Abstract [sv] (translated)

This computer science thesis proposes models for how certain computational tasks may be carried out by the cerebral cortex. The starting points are, on the one hand, known facts about how an area of the cortex is structured and functions and, on the other, established model classes in computational neuroscience, such as attractor memories and systems for sparse coding. A neural network that produces an efficient sparse code, in the binary sense, for sensory, particularly visual, input is presented. I show that this network, when trained on natural images, reproduces certain properties (receptive fields) of neurons in layer IV of the primary visual cortex, and that the codes it produces are suitable for storage in associative memory models. Further, I show how a simple autoassociative memory can be modified to function as a general sequence-learning system by equipping it with synaptic dynamics. I investigate how an abstract attractor memory system can be implemented in a detailed model based on data about the cerebral cortex. This model can then be analysed with tools that simulate experiments performable on a real cortex. The hypothesis that the cerebral cortex to a considerable extent functions as an attractor memory is examined and is shown to lead to predictions about its connectivity structure. I also discuss methodological aspects of computational neuroscience today.

Abstract [en]

In this thesis I take a modular approach to cortical function. I investigate how the cerebral cortex may realise a number of basic computational tasks, within the framework of its generic architecture. I present novel mechanisms for certain assumed computational capabilities of the cerebral cortex, building on the established notions of attractor memory and sparse coding. A sparse binary coding network for generating efficient representations of sensory input is presented. It is demonstrated that this network model closely reproduces the simple-cell receptive field shapes seen in the primary visual cortex and that its representations are efficient with respect to storage in associative memory. I show how an autoassociative memory, augmented with dynamical synapses, can function as a general sequence learning network. I demonstrate how an abstract attractor memory system may be realised at the microcircuit level, and how it may be analysed using tools similar to those used experimentally. I outline some predictions from the hypothesis that the macroscopic connectivity of the cortex is optimised for attractor memory function. I also discuss methodological aspects of modelling in computational neuroscience.
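
As a rough illustration of the sequence-learning idea, the sketch below swaps the thesis's dynamical synapses for a classic stand-in mechanism: slow-trace hetero-association in the style of Sompolinsky and Kanter, where a low-pass-filtered copy of the activity drives transitions from each stored pattern to its successor. The parameters (N, P, lam, tau) are illustrative choices, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 400, 6                            # units, length of the stored sequence
xi = rng.choice([-1.0, 1.0], size=(P, N))

J  = xi.T @ xi / N                       # symmetric term: stabilises each pattern
Jf = xi[1:].T @ xi[:-1] / N              # forward term: maps pattern mu to mu+1

S = xi[0].copy()                         # current state, start at the first pattern
trace = S.copy()                         # slow activity trace (stand-in for the
lam, tau = 1.6, 8.0                      # thesis's synaptic dynamics)

for t in range(120):
    h = J @ S + lam * (Jf @ trace)       # fast recurrent input + delayed forward push
    S = np.where(h >= 0, 1.0, -1.0)
    trace += (S - trace) / tau           # low-pass filter; its lag paces transitions
    if t % 10 == 0:
        overlaps = xi @ S / N            # which pattern the network currently visits
        print(t, int(np.argmax(overlaps)), round(float(overlaps.max()), 2))
```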

Place, publisher, year, edition, pages
Stockholm: KTH, 2006. xiv, 99 p.
Series
Trita-NA, ISSN 0348-2952 ; 2006:17
Keyword
cerebral cortex, neural networks, attractor memory, sequence learning, biological vision, generative models, serial order, computational neuroscience, dynamical synapses
National Category
Computer Science
Identifiers
urn:nbn:se:kth:diva-4161 (URN)
91-7178-478-0 (ISBN)
Public defence
2006-11-13, F3, KTH, Lindstedtsvägen 26, Stockholm, 14:15
Opponent
Supervisors
Note
QC 20100916. Available from: 2006-11-01. Created: 2006-11-01. Last updated: 2010-09-16. Bibliographically approved.
2. An Attractor Memory Model of Neocortex
2006 (English). Doctoral thesis, comprehensive summary (Other scientific).
Abstract [en]

This thesis presents an abstract model of the mammalian neocortex. The model was constructed by taking a top-down view of the cortex, assuming that the cortex, to a first approximation, works as a system with attractor dynamics. The model deals with the processing of static inputs from the perspectives of biological mapping, algorithm, and physical implementation, but it does not consider the temporal aspects of these inputs. The purpose of the model is twofold: firstly, it is an abstract model of the cortex and as such can be used to evaluate hypotheses about cortical function and structure; secondly, it forms the basis of a general information-processing system that may be implemented in computers. The characteristics of this model are studied both analytically and by simulation experiments, and we also discuss its parallel implementation on cluster computers as well as in digital hardware.

The basic design of the model is based on a thorough literature study of the anatomy and physiology of the mammalian cortex. We review both the layered and columnar structure of the cortex and the long- and short-range connectivity between neurons. Characteristics of the cortex that define its computational complexity, such as the time scales of the cellular processes that transport ions in and out of neurons and give rise to electric signals, are also investigated. In particular, we study the size of the cortex in terms of neuron and synapse numbers in five mammals: mouse, rat, cat, macaque, and human. The cortical model is implemented with a connectionist type of network in which the functional units correspond to cortical minicolumns, and these are in turn grouped into hypercolumn modules. The learning rules used in the model are local in space and time, which makes them biologically plausible and also allows for efficient parallel implementation. We study the implemented model both as a single- and a multi-layered network. Instances of the model with sizes up to that of a rat-cortex equivalent are implemented and run on cluster computers at 23% of real time. We demonstrate on tasks involving image data that the cortical model can be used for meaningful computations such as noise reduction, pattern completion, prototype extraction, hierarchical clustering, classification, and content-addressable memory, and we show that even the largest cortex-equivalent instances of the model can perform these types of computations. Important characteristics of the model are that it is insensitive to limited errors in the computational hardware and to noise in the input data. Furthermore, it can learn from examples and is self-organizing to some extent. The proposed model contributes to the quest to understand the cortex, and it is also a first step towards a brain-inspired computing system that can be implemented in the molecular-scale computers of tomorrow.

The main contributions of this thesis are: (i) A review of the size, modularization, and computational structure of the mammalian neocortex. (ii) An abstract generic connectionist network model of the mammalian cortex. (iii) A framework for a brain-inspired self-organizing information processing system. (iv) Theoretical work on the properties of the model when used as an autoassociative memory. (v) Theoretical insights into the anatomy and physiology of the cortex. (vi) Efficient implementation techniques and simulations of cortical-sized instances. (vii) A fixed-point arithmetic implementation of the model that can be used in digital hardware.
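
The minicolumn/hypercolumn organisation described above lends itself to a compact sketch. In the toy network below, units stand for minicolumns grouped into hypercolumn modules, with a winner-take-all constraint keeping exactly one minicolumn active per hypercolumn; a plain Hebbian outer-product rule stands in for the BCPNN learning rule used in the model, and all sizes (H, M, P) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
H, M, P = 10, 8, 15            # hypercolumns, minicolumns each, stored patterns
N = H * M

# patterns: exactly one active minicolumn per hypercolumn (one-hot blocks)
patterns = np.zeros((P, N))
for mu in range(P):
    for h in range(H):
        patterns[mu, h * M + rng.integers(M)] = 1.0

# plain Hebbian outer-product weights (a stand-in for the BCPNN rule)
W = patterns.T @ patterns / P
np.fill_diagonal(W, 0.0)

def recall(state, steps=10):
    s = state.copy()
    for _ in range(steps):
        inp = W @ s
        s = np.zeros(N)
        for h in range(H):     # winner-take-all within each hypercolumn module
            block = slice(h * M, (h + 1) * M)
            s[h * M + int(np.argmax(inp[block]))] = 1.0
    return s

# cue: the first pattern with three hypercolumns scrambled
cue = patterns[0].copy()
for h in range(3):
    cue[h * M:(h + 1) * M] = 0.0
    cue[h * M + rng.integers(M)] = 1.0
print("hypercolumns correct:", int(patterns[0] @ recall(cue)), "of", H)
```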

Place, publisher, year, edition, pages
Stockholm: KTH, 2006. ix, 148 p.
Series
Trita-CSC-A, ISSN 1653-5723 ; 2006:14
Keyword
Attractor Neural Networks, Cerebral Cortex, Neocortex, Brain Like Computing, Hypercolumns, Minicolumns, BCPNN, Parallel Computers, Autoassociative Memory
National Category
Computer Science
Identifiers
urn:nbn:se:kth:diva-4136 (URN)
91-7178-461-6 (ISBN)
Public defence
2006-10-26, F2, Lindstedtsvägen 28, Stockholm, 10:15
Opponent
Supervisors
Note
QC 20100903. Available from: 2006-10-09. Created: 2006-10-09. Last updated: 2010-09-03. Bibliographically approved.
3. Some computational aspects of attractor memory
2005 (English). Licentiate thesis, comprehensive summary (Other scientific).
Abstract [en]

In this thesis I present novel mechanisms for certain computational capabilities of the cerebral cortex, building on the established notion of attractor memory. A sparse binary coding network for generating efficient representations of sensory input is presented. It is demonstrated that this network model closely reproduces the receptive field shapes seen in the primary visual cortex and that its representations are efficient with respect to storage in associative memory. I show how an autoassociative memory, augmented with dynamical synapses, can function as a general sequence learning network. I demonstrate how an abstract attractor memory system may be realized at the microcircuit level, and how it may be analyzed using tools similar to those used experimentally. I demonstrate some predictions from the hypothesis that the macroscopic connectivity of the cortex is optimized for attractor memory function. I also discuss methodological aspects of modelling in computational neuroscience.
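
A caricature of the sparse binary coding idea: each input is encoded by the k best-matching units, and those winners are moved toward the input. This is ordinary competitive Hebbian learning, not the thesis's model, and the receptive fields reported there require training on natural image patches; the random synthetic data below is only a stand-in to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(3)
D, U, k = 64, 32, 3            # input dim (8x8 "patch"), code units, active bits

# synthetic stand-in data: sparse mixtures of a hidden random dictionary
basis = rng.normal(size=(16, D))
data = np.array([basis[rng.choice(16, 2, replace=False)].sum(0)
                 for _ in range(2000)])
data /= np.linalg.norm(data, axis=1, keepdims=True)

W = rng.normal(size=(U, D))
W /= np.linalg.norm(W, axis=1, keepdims=True)

def encode(x):
    """Binary code: switch on the k units whose weight vectors best match x."""
    c = np.zeros(U)
    c[np.argsort(W @ x)[-k:]] = 1.0
    return c

# competitive Hebbian training: winners move toward the current input
lr = 0.05
for x in data:
    for i in np.flatnonzero(encode(x)):
        W[i] += lr * (x - W[i])
        W[i] /= np.linalg.norm(W[i])   # keep weight vectors unit length

codes = np.array([encode(x) for x in data])
print("mean code sparsity:", codes.mean())   # k/U by construction
```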

Place, publisher, year, edition, pages
Stockholm: KTH, 2005. viii, 76 p.
Series
Trita-NA, ISSN 0348-2952 ; 0509
Keyword
Computer science (Datalogi), attractor memory, cerebral cortex, neural networks
National Category
Computer Science
Identifiers
urn:nbn:se:kth:diva-249 (URN)
91-7283-983-X (ISBN)
Presentation
2005-03-15, Sal E32, KTH, Lindstedtsvägen 3, Stockholm, 07:00
Opponent
Supervisors
Note
QC 20101220. Available from: 2005-05-31. Created: 2005-05-31. Last updated: 2010-12-20. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text
Scopus

Search in DiVA

By author/editor
Johansson, Christopher; Rehn, Martin; Lansner, Anders
By organisation
Numerical Analysis and Computer Science, NADA; Computational Biology, CB