Aspects of memory and representation in cortical computation
KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
2006 (English). Doctoral thesis, comprehensive summary (Other scientific).
Abstract [sv]

This thesis in computer science proposes models of how certain computational tasks may be carried out by the cerebral cortex. The starting points are, on the one hand, known facts about how a cortical area is built up and functions and, on the other, established model classes in computational neuroscience, such as attractor memories and systems for sparse coding. A neural network that produces an efficient sparse code, in a binary sense, for sensory, in particular visual, input is presented. I show that this network, when trained on natural images, reproduces certain properties (receptive fields) of nerve cells in layer IV of the primary visual cortex, and that the codes it produces are suitable for storage in associative memory models. Further, I show how a simple autoassociative memory can be modified to function as a general sequence learning system by equipping it with synaptic dynamics. I investigate how an abstract attractor memory system can be implemented in a detailed model based on data about the cerebral cortex. This model can then be analysed with tools that simulate experiments which can be performed on a real cortex. The hypothesis that the cerebral cortex to a considerable extent functions as an attractor memory is examined and is found to lead to predictions about its connectivity structure. I also discuss methodological aspects of computational neuroscience today.

Abstract [en]

In this thesis I take a modular approach to cortical function. I investigate how the cerebral cortex may realise a number of basic computational tasks within the framework of its generic architecture. I present novel mechanisms for certain assumed computational capabilities of the cerebral cortex, building on the established notions of attractor memory and sparse coding. A sparse binary coding network for generating efficient representations of sensory input is presented. It is demonstrated that this network model closely reproduces the simple-cell receptive field shapes seen in the primary visual cortex and that its representations are efficient with respect to storage in associative memory. I show how an autoassociative memory, augmented with dynamical synapses, can function as a general sequence learning network. I demonstrate how an abstract attractor memory system may be realised at the microcircuit level, and how it may be analysed using tools similar to those used experimentally. I outline some predictions from the hypothesis that the macroscopic connectivity of the cortex is optimised for attractor memory function. I also discuss methodological aspects of modelling in computational neuroscience.

Place, publisher, year, edition, pages
Stockholm: KTH, 2006. xiv, 99 p.
Series
Trita-NA, ISSN 0348-2952; 2006:17
Keyword [en]
cerebral cortex, neural networks, attractor memory, sequence learning, biological vision, generative models, serial order, computational neuroscience, dynamical synapses
National Category
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-4161
ISBN: 91-7178-478-0 (print)
OAI: oai:DiVA.org:kth-4161
DiVA: diva2:10991
Public defence
2006-11-13, F3, KTH, Lindstedtsvägen 26, Stockholm, 14:15
Note
QC 20100916. Available from: 2006-11-01. Created: 2006-11-01. Last updated: 2010-09-16. Bibliographically approved.
List of papers
1. A network that uses few active neurones to code visual input predicts the diverse shapes of cortical receptive fields
2007 (English). In: Journal of Computational Neuroscience, ISSN 0929-5313, E-ISSN 1573-6873, Vol. 22, no 2, 135-146 p. Article in journal (Refereed). Published.
Abstract [en]

Computational models of primary visual cortex have demonstrated that principles of efficient coding and neuronal sparseness can explain the emergence of neurones with localised oriented receptive fields. Yet, existing models have failed to predict the diverse shapes of receptive fields that occur in nature. The existing models used a particular "soft" form of sparseness that limits average neuronal activity. Here we study models of efficient coding in a broader context by comparing soft and "hard" forms of neuronal sparseness. As a result of our analyses, we propose a novel network model for visual cortex. The model forms efficient visual representations in which the number of active neurones, rather than mean neuronal activity, is limited. This form of hard sparseness also economises cortical resources like synaptic memory and metabolic energy. Furthermore, our model accurately predicts the distribution of receptive field shapes found in the primary visual cortex of cat and monkey.
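
As a rough illustration of the soft/hard distinction the abstract draws, the following sketch contrasts an L1-style shrinkage (limiting average activity) with a k-winners-take-all rule (limiting the number of active units). It is a toy construction on random data, not the paper's network; all names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_sparse(a, lam=0.5):
    """'Soft' sparseness: shrink activities toward zero (L1-style),
    limiting mean activity but leaving many units weakly active."""
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

def hard_sparse(a, k=5):
    """'Hard' sparseness: keep only the k most active units,
    limiting the number of active neurones directly."""
    out = np.zeros_like(a)
    top = np.argsort(np.abs(a))[-k:]
    out[top] = a[top]
    return out

a = rng.normal(size=100)
print("soft code, active units:", np.count_nonzero(soft_sparse(a)))
print("hard code, active units:", np.count_nonzero(hard_sparse(a)))
```

The hard variant fixes the active count exactly, which is what makes the resulting codes directly compatible with associative memories that assume a fixed pattern sparseness.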

Keyword
Biological vision, Sparse coding, Receptive field learning
National Category
Computer Science
Identifiers
urn:nbn:se:kth:diva-6304 (URN); 10.1007/s10827-006-0003-9 (DOI); 000244296700003 (ISI); 17053994 (PubMedID); 2-s2.0-33847100046 (Scopus ID)
Note
QC 20100916. Available from: 2006-11-01. Created: 2006-11-01. Last updated: 2017-12-14. Bibliographically approved.
2. Recognition of handwritten digits using sparse codes generated by local feature extraction methods
2006 (English). In: ESANN'2006: 14th European Symposium on Artificial Neural Networks, 2006, 161-166 p. Conference paper (Refereed). Published.
Abstract [en]

We investigate when sparse coding of sensory inputs can improve performance in a classification task. For this purpose, we use a standard data set, the MNIST database of handwritten digits. We systematically study combinations of sparse coding methods and neural classifiers in a two-layer network. We find that processing the image data into a sparse code can indeed improve the classification performance, compared to directly classifying the images. Further, increasing the level of sparseness leads to even better performance, up to a point where the reduction of redundancy in the codes is offset by loss of information.
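
The two-layer pipeline the abstract describes (a sparse encoder feeding a classifier) can be sketched as below. To stay self-contained, this sketch substitutes synthetic two-class data for MNIST, and a fixed random feature layer with k-winners-take-all for the paper's sparse coding methods; `k_wta_code` and all parameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def k_wta_code(x, W, k=20):
    """Binary sparse code: project the input through feature matrix W,
    then keep only the k strongest responses (k-winners-take-all)."""
    r = W @ x
    code = np.zeros_like(r)
    code[np.argsort(r)[-k:]] = 1.0
    return code

# Synthetic stand-in for image data: two noisy class prototypes.
d, n_features = 64, 256
prototypes = rng.normal(size=(2, d))
X = np.vstack([p + 0.5 * rng.normal(size=(50, d)) for p in prototypes])
y = np.repeat([0, 1], 50)

W = rng.normal(size=(n_features, d))              # fixed random feature layer
codes = np.array([k_wta_code(x, W) for x in X])

# Second layer: a nearest-centroid classifier on the sparse codes.
centroids = np.array([codes[y == c].mean(axis=0) for c in (0, 1)])
dists = ((codes[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
print("training accuracy on sparse codes:", (dists.argmin(axis=1) == y).mean())
```

Varying `k` in such a setup is the knob the abstract refers to: sparser codes remove redundancy until too much stimulus information is discarded.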

National Category
Computer Science
Identifiers
urn:nbn:se:kth:diva-6305 (URN); 2-930307-06-4 (ISBN)
Conference
ESANN'2006 - European Symposium on Artificial Neural Networks. Bruges, Belgium. 26-28 April 2006
Note
QC 20100916. Available from: 2006-11-01. Created: 2006-11-01. Last updated: 2011-12-20. Bibliographically approved.
3. Tonically driven and self-sustaining activity in the lamprey hemicord: when can they co-exist?
2007 (English). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 70, no 10-12, 1882-1886 p. Article in journal (Refereed). Published.
Abstract [en]

In lamprey hemisegmental preparations, two types of rhythmic activity are found: slower, tonically driven activity that varies with the external drive, and faster, more stereotypic activity that arises after a transient electrical stimulus. We present a simple conceptual model in which a bistable excitable system can exhibit the two states. We then show that a neuronal network model can display the desired characteristics, given that synaptic dynamics (facilitation and saturation) are included. The model behaviour and its dependence on key parameters are illustrated. We discuss the relevance of our model to the lamprey locomotor system.
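
A minimal sketch of the conceptual point, under strong simplifying assumptions: a single rate unit with recurrent excitation can be bistable, following a tonic drive in its low state while a brief stimulus switches it into a self-sustaining high state. This stands in for the conceptual model only; the paper's network model with facilitating and saturating synapses is not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

def f(x):
    """Steep sigmoidal rate function (gain 4, threshold 2)."""
    return 1.0 / (1.0 + np.exp(-4.0 * (x - 2.0)))

def simulate(I_drive, pulse_at=None, T=200.0, dt=0.1, w=4.0, tau=1.0):
    """Euler-integrate one recurrently excited rate unit. With strong
    recurrence w the unit is bistable: a brief extra pulse switches it
    from the low, drive-following state to a high, self-sustaining state."""
    r, trace = 0.0, []
    for step in range(int(T / dt)):
        t = step * dt
        I = I_drive + (5.0 if pulse_at is not None and pulse_at <= t < pulse_at + 2.0 else 0.0)
        r += dt / tau * (-r + f(w * r + I))
        trace.append(r)
    return np.array(trace)

low = simulate(I_drive=0.5)                  # tonic drive only: stays low
high = simulate(I_drive=0.5, pulse_at=50.0)  # transient pulse: self-sustained
print("final rate, drive only: %.3f" % low[-1])
print("final rate, after pulse: %.3f" % high[-1])
```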

Keyword
Dynamical systems; Lamprey; Locomotion; Recurrent excitation
National Category
Computer Science
Identifiers
urn:nbn:se:kth:diva-6306 (URN); 10.1016/j.neucom.2006.10.055 (DOI); 000247215300055 (ISI); 2-s2.0-34247515220 (Scopus ID)
Note
QC 20100715. Updated from In press to Published 20100715. Available from: 2006-11-01. Created: 2006-11-01. Last updated: 2017-12-14. Bibliographically approved.
4. Sequence memory with dynamical synapses
2004 (English). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 58-60, 271-278 p. Article in journal (Refereed). Published.
Abstract [en]

We present an attractor model of cortical memory capable of sequence learning. The network incorporates a dynamical synapse model and is trained using a Hebbian learning rule that operates by redistribution of synaptic efficacy. Depending on parameters, it performs either sequential or unordered recall. The model reproduces data from free recall experiments in humans. Memory capacity scales with network size, storing sequences at about 0.18 bits per synapse.
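
The paper's mechanism rests on attractor states destabilised by dynamical (depressing) synapses. As a stripped-down stand-in, the sketch below steps through a stored sequence using purely heteroassociative Hebbian weights, each pattern mapped to its successor; network size, sequence length, and the synchronous update rule are illustrative choices, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)
N, L = 200, 5
patterns = rng.choice([-1.0, 1.0], size=(L, N))   # random +/-1 patterns

# Heteroassociative Hebbian weights: each pattern points to its successor.
W = sum(np.outer(patterns[(m + 1) % L], patterns[m]) for m in range(L)) / N

state = patterns[0].copy()
for step in range(L):
    overlap = patterns @ state / N                # similarity to each stored pattern
    print("step", step, "-> closest stored pattern:", int(np.argmax(overlap)))
    state = np.sign(W @ state)                    # synchronous update advances the sequence
```

In the paper's formulation the transitions instead emerge from symmetric attractors whose stability decays as their synapses depress, which is what allows the same network to switch between sequential and unordered recall.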

Keyword
sequence learning, free recall, dynamical synapses, synaptic depression, attractor memory
National Category
Computer Science
Identifiers
urn:nbn:se:kth:diva-6307 (URN); 10.1016/j.neucom.2004.01.055 (DOI); 000222245900043 (ISI); 2-s2.0-2542437082 (Scopus ID)
Note
QC 20100916. 12th Annual Computational Neuroscience Meeting (CNS 03). Alicante, Spain. July 5-9, 2003. Available from: 2006-11-01. Created: 2006-11-01. Last updated: 2017-12-14. Bibliographically approved.
5. Storing and restoring visual input with collaborative rank coding and associative memory
2006 (English). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 69, no 10-12, 1219-1223 p. Article in journal (Refereed). Published.
Abstract [en]

Associative memory in cortical circuits has been held to be a major mechanism for content-addressable memory. Hebbian synapses implement associative memory efficiently when storing sparse binary activity patterns. However, in models of sensory processing, representations are graded rather than binary. It has thus been an open question how sensory computation could exploit cortical associative memory. Here we propose a way in which sensory processing could benefit from memory in cortical circuitry. We describe a new collaborative method of rank coding for converting graded stimuli, such as natural images, into sequences of synchronous spike volleys. Such sequences of sparse binary patterns can be efficiently processed in associative memory of the Willshaw type. We evaluate the storage capacity and noise tolerance of the proposed system and demonstrate its use in cleanup and fill-in for noisy or occluded visual input.
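
A toy version of the described pipeline, under simplifying assumptions: a rank code turns a graded vector into a sequence of equally sized binary volleys (here the volleys partition the units, so the stored patterns are disjoint, which is a simplification), and a Willshaw-type clipped-Hebbian memory cleans up a degraded volley. Function names and sizes are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
N, V = 64, 4                                  # units, volleys per stimulus

def rank_code(values):
    """Rank coding: each unit fires in the volley given by the rank of
    its graded value, strongest first; each volley is a sparse binary pattern."""
    order = np.argsort(values)[::-1]
    per = N // V
    volleys = np.zeros((V, N))
    for v in range(V):
        volleys[v, order[v * per:(v + 1) * per]] = 1.0
    return volleys

def willshaw_store(W, p):
    """Willshaw storage: clipped (binary) Hebbian outer product."""
    return np.maximum(W, np.outer(p, p))

def willshaw_recall(W, cue, k):
    """Recall by thresholding the dendritic sums, keeping the k largest."""
    out = np.zeros_like(cue)
    out[np.argsort(W @ cue)[-k:]] = 1.0
    return out

stimulus = rng.random(N)                      # a graded 'image'
volleys = rank_code(stimulus)
W = np.zeros((N, N))
for v in volleys:
    W = willshaw_store(W, v)

noisy = volleys[0] * (rng.random(N) > 0.3)    # delete ~30% of the first volley
restored = willshaw_recall(W, noisy, k=N // V)
print("recovered bits of original volley:", int(restored @ volleys[0]), "of", N // V)
```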

Keyword
sensory coding, attractor memory, rank coding, sequence memory, data compression
National Category
Computer Science
Identifiers
urn:nbn:se:kth:diva-6308 (URN); 10.1016/j.neucom.2005.12.080 (DOI); 000237873900050 (ISI); 2-s2.0-33646509186 (Scopus ID)
Note
QC 20100916. Conference: 14th Annual Computational Neuroscience Meeting (CNS 05). Madison, WI. July 17-21, 2005. Available from: 2006-11-01. Created: 2006-11-01. Last updated: 2017-12-14. Bibliographically approved.
6. Attractor dynamics in a modular network model of the cerebral cortex
2006 (English). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 69, no 10-12, 1155-1159 p. Article in journal (Refereed). Published.
Abstract [en]

Computational models of cortical associative memory often take a top-down approach. We have previously described such an abstract model with a hypercolumnar structure. Here we explore a similar, biophysically detailed but subsampled network model of neocortex. We study how the neurodynamics and associative memory properties of this biophysical model relate to the abstract model as well as to experimental data. The resulting network exhibits attractor dynamics, including pattern completion and pattern rivalry. It reproduces several features of experimentally observed local UP states, as well as oscillatory behavior on the gamma and theta time scales observed in the cerebral cortex.

Keyword
attractor memory, cortex, gamma rhythm, conductance-based models, UP states
National Category
Physiology
Identifiers
urn:nbn:se:kth:diva-6309 (URN); 10.1016/j.neucom.2005.12.065 (DOI); 000237873900035 (ISI); 2-s2.0-33646110257 (Scopus ID)
Note
QC 20100831. Cited references: 17. Conference: 14th Annual Computational Neuroscience Meeting (CNS 05). Madison, WI. July 17-21, 2005. Available from: 2006-11-01. Created: 2006-11-01. Last updated: 2017-12-14. Bibliographically approved.
7. Attractor dynamics in a modular network model of neocortex
2006 (English). In: Network: Computation in Neural Systems, ISSN 0954-898X, E-ISSN 1361-6536, Vol. 17, no 3, 253-276 p. Article in journal (Refereed). Published.
Abstract [en]

Starting from the hypothesis that the mammalian neocortex to a first approximation functions as an associative memory of the attractor network type, we formulate a quantitative computational model of neocortical layers 2/3. The model employs biophysically detailed multi-compartmental model neurons with conductance-based synapses and includes pyramidal cells and two types of inhibitory interneurons, i.e., regular spiking non-pyramidal cells and basket cells. The simulated network has a minicolumnar as well as a hypercolumnar modular structure, and we propose that minicolumns rather than single cells are the basic computational units in neocortex. The minicolumns are represented in full scale, and synaptic input to the different types of model neurons is carefully matched to reproduce experimentally measured values and to allow a quantitative reproduction of single cell recordings. Several key phenomena seen experimentally in vitro and in vivo appear as emergent features of this model. It exhibits robust and fast attractor dynamics with pattern completion and pattern rivalry, and it suggests an explanation for the so-called attentional blink phenomenon. During assembly dynamics, the model faithfully reproduces several features of local UP states, as they have been experimentally observed in vitro, as well as oscillatory behavior similar to that observed in the neocortex.
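
At the abstract level, the minicolumn/hypercolumn modularity is often caricatured as winner-take-all competition within each hypercolumn, a common simplification of the normalising inhibition that the detailed model implements with conductance-based basket cells. The sketch below shows that abstraction only; it is not the biophysical model, and the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
H, M = 8, 10                       # hypercolumns, minicolumns per hypercolumn

def hypercolumn_wta(support):
    """Winner-take-all within each hypercolumn: exactly one minicolumn
    per hypercolumn stays active, a stand-in for normalising inhibition."""
    s = support.reshape(H, M)
    active = np.zeros_like(s)
    active[np.arange(H), s.argmax(axis=1)] = 1.0
    return active.ravel()

state = hypercolumn_wta(rng.normal(size=H * M))
print("active minicolumns per hypercolumn:", state.reshape(H, M).sum(axis=1))
```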

Keyword
cortex, UP State, attentional blink, attractor dynamics, synchronization
National Category
Neurosciences
Identifiers
urn:nbn:se:kth:diva-6310 (URN); 10.1080/09548980600774619 (DOI); 000244140900003 (ISI); 2-s2.0-33845421947 (Scopus ID)
Note
QC 20150729. Available from: 2006-11-01. Created: 2006-11-01. Last updated: 2017-12-14. Bibliographically approved.
8. Attractor neural networks with patchy connectivity
2006 (English). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 69, no 7-9, 627-633 p. Article in journal (Refereed). Published.
Abstract [en]

The neurons in the mammalian visual cortex are arranged in columnar structures, and the synaptic contacts of the pyramidal neurons in layer II/III are clustered into patches that are sparsely distributed over the surrounding cortical surface. Here, we use an attractor neural-network model of the cortical circuitry to investigate the effects of patchy connectivity on both the properties of the network and its attractor dynamics. An analysis of the network shows that the signal-to-noise ratio of the synaptic potential sums is improved by the patchy connectivity, which results in a higher storage capacity. This analysis is performed for both the Hopfield and Willshaw learning rules, and the results are confirmed by simulation experiments.
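
The quantity this analysis rests on can be illustrated as follows: in a diluted Hopfield-style network, one measures the signal-to-noise ratio of the synaptic potential sums when stored patterns are presented. The sketch below sets up only the uniform-dilution baseline and the SNR measurement itself (the paper's point is that patchy, clustered dilution improves this ratio); all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
N, P, c = 400, 20, 0.25            # units, stored patterns, connection density

patterns = rng.choice([-1.0, 1.0], size=(P, N))
mask = rng.random((N, N)) < c                    # uniformly diluted connectivity
W = (patterns.T @ patterns) * mask / (c * N)     # Hebbian weights on the diluted graph

# Potential sums when each stored pattern is presented: h[mu, i] = sum_j W[i, j] p[mu, j].
h = patterns @ W.T
signal = np.mean(h * patterns)                   # component aligned with the cued pattern
noise = np.std(h - signal * patterns)            # residual crosstalk and dilution noise
print("signal-to-noise ratio of potential sums: %.2f" % (signal / noise))
```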

Keyword
attractor neural network, patchy connectivity, clustered connections, neocortex, small world network, hypercolumn
National Category
Neurosciences
Identifiers
urn:nbn:se:kth:diva-6311 (URN); 10.1016/j.neucom.2005.12.002 (DOI); 000235797000002 (ISI); 2-s2.0-32644438075 (Scopus ID)
Note
QC 20100831. Conference: 13th European Symposium on Artificial Neural Networks (ESANN). Bruges, Belgium. April 2005. Available from: 2006-11-01. Created: 2006-11-01. Last updated: 2017-12-14. Bibliographically approved.

Open Access in DiVA

fulltext: FULLTEXT01.pdf (7965 kB, application/pdf), 260 downloads

Search in DiVA

By author/editor
Rehn, Martin
By organisation
Numerical Analysis and Computer Science, NADA
Computer Science
