A network that uses few active neurones to code visual input predicts the diverse shapes of cortical receptive fields
KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
2007 (English). In: Journal of Computational Neuroscience, ISSN 0929-5313, E-ISSN 1573-6873, Vol. 22, no. 2, pp. 135-146. Article in journal (Refereed). Published.
Abstract [en]

Computational models of primary visual cortex have demonstrated that principles of efficient coding and neuronal sparseness can explain the emergence of neurones with localised oriented receptive fields. Yet, existing models have failed to predict the diverse shapes of receptive fields that occur in nature. The existing models used a particular "soft" form of sparseness that limits average neuronal activity. Here we study models of efficient coding in a broader context by comparing soft and "hard" forms of neuronal sparseness. As a result of our analyses, we propose a novel network model for visual cortex. The model forms efficient visual representations in which the number of active neurones, rather than mean neuronal activity, is limited. This form of hard sparseness also economises cortical resources like synaptic memory and metabolic energy. Furthermore, our model accurately predicts the distribution of receptive field shapes found in the primary visual cortex of cat and monkey.
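The abstract's central contrast, soft sparseness (penalising mean neuronal activity) versus hard sparseness (capping the number of active neurones), can be illustrated with generic textbook stand-ins. The sketch below uses L1-penalised sparse coding (ISTA) for the soft case and greedy matching pursuit for the hard case; the dictionary `W`, penalty `lam`, and budget `k` are illustrative choices, not the paper's actual network or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_soft(x, W, lam=0.1, steps=200, lr=0.05):
    """Soft sparseness: penalise mean activity with an L1 term
    (ISTA-style sparse coding; a generic formulation, not the
    paper's model)."""
    a = np.zeros(W.shape[1])
    for _ in range(steps):
        grad = W.T @ (W @ a - x)                 # reconstruction gradient
        a = a - lr * grad
        a = np.sign(a) * np.maximum(np.abs(a) - lr * lam, 0.0)  # soft threshold
    return a

def encode_hard(x, W, k=3):
    """Hard sparseness: at most k neurones may be active, chosen
    greedily by matching pursuit (again a generic stand-in)."""
    a = np.zeros(W.shape[1])
    r = x.copy()
    for _ in range(k):
        i = np.argmax(np.abs(W.T @ r))           # best-matching unit
        a[i] += W[:, i] @ r                      # activate it
        r = x - W @ a                            # update residual
    return a

# toy dictionary: 16 unit-norm random features over 8-d inputs
W = rng.normal(size=(8, 16))
W /= np.linalg.norm(W, axis=0)
x = rng.normal(size=8)

a_soft = encode_soft(x, W)
a_hard = encode_hard(x, W)
print("soft: active units =", int(np.sum(np.abs(a_soft) > 1e-6)))
print("hard: active units =", int(np.sum(np.abs(a_hard) > 1e-6)))
```

With a hard budget of `k = 3`, at most three coefficients can be non-zero by construction, whereas the L1 penalty only pushes activity magnitudes down on average.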

Place, publisher, year, edition, pages
2007. Vol. 22, no. 2, pp. 135-146.
Keyword [en]
Biological vision, Sparse coding, Receptive field learning
National Category
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-6304
DOI: 10.1007/s10827-006-0003-9
ISI: 000244296700003
PubMedID: 17053994
Scopus ID: 2-s2.0-33847100046
OAI: oai:DiVA.org:kth-6304
DiVA: diva2:10983
Note
QC 20100916. Available from: 2006-11-01. Created: 2006-11-01. Last updated: 2017-12-14. Bibliographically approved.
In thesis
1. Aspects of memory and representation in cortical computation
2006 (English). Doctoral thesis, comprehensive summary (Other scientific).
Abstract [sv]

This thesis in computer science proposes models for how certain computational tasks may be carried out by the cerebral cortex. The starting points are, on the one hand, known facts about how a cortical area is structured and functions and, on the other, established model classes in computational neuroscience, such as attractor memories and systems for sparse coding. A neural network that produces an efficient sparse code, in the binary sense, for sensory, particularly visual, input is presented. I show that this network, when trained on natural images, reproduces certain properties (receptive fields) of neurons in layer IV of the primary visual cortex, and that the codes it produces are suitable for storage in associative memory models. Further, I show how a simple autoassociative memory can be modified to function as a general sequence learning system by equipping it with synapse dynamics. I examine how an abstract attractor memory system can be implemented in a detailed model based on data about the cerebral cortex. This model can then be analysed with tools that simulate experiments that could be performed on a real cortex. The hypothesis that the cerebral cortex functions to a considerable extent as an attractor memory is examined and is shown to lead to predictions about its connectivity structure. I also discuss methodological aspects of computational neuroscience today.

Abstract [en]

In this thesis I take a modular approach to cortical function. I investigate how the cerebral cortex may realise a number of basic computational tasks, within the framework of its generic architecture. I present novel mechanisms for certain assumed computational capabilities of the cerebral cortex, building on the established notions of attractor memory and sparse coding. A sparse binary coding network for generating efficient representations of sensory input is presented. It is demonstrated that this network model well reproduces the simple cell receptive field shapes seen in the primary visual cortex and that its representations are efficient with respect to storage in associative memory. I show how an autoassociative memory, augmented with dynamical synapses, can function as a general sequence learning network. I demonstrate how an abstract attractor memory system may be realised on the microcircuit level -- and how it may be analysed using tools similar to those used experimentally. I outline some predictions from the hypothesis that the macroscopic connectivity of the cortex is optimised for attractor memory function. I also discuss methodological aspects of modelling in computational neuroscience.
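The thesis abstract builds on the established notion of attractor memory. As a hedged illustration of that notion only (a classic Hopfield network with a Hebbian learning rule, not the thesis's detailed cortical microcircuit model or its dynamical-synapse extension), the sketch below stores a few binary patterns and drives a corrupted probe back toward a stored attractor; the sizes `N`, `P`, and the noise level are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(1)

N, P = 64, 4                          # 64 binary units, 4 stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian weight matrix (textbook Hopfield rule), self-connections removed
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(s, steps=10):
    """Synchronous sign updates move the state toward a stored attractor."""
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1                 # break ties deterministically
    return s

# corrupt a stored pattern by flipping 8 of its 64 bits, then recall
probe = patterns[0].copy()
flip = rng.choice(N, size=8, replace=False)
probe[flip] *= -1
out = recall(probe)
print("overlap with stored pattern:", int(out @ patterns[0]), "/", N)
```

Well below the network's storage capacity (here 4 patterns in 64 units), stored patterns are stable fixed points and moderately corrupted probes are typically completed back to them, which is the associative-memory behaviour the thesis takes as a building block.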

Place, publisher, year, edition, pages
Stockholm: KTH, 2006. xiv, 99 p.
Series
Trita-NA, ISSN 0348-2952 ; 2006:17
Keyword
cerebral cortex, neural networks, attractor memory, sequence learning, biological vision, generative models, serial order, computational neuroscience, dynamical synapses
National Category
Computer Science
Identifiers
urn:nbn:se:kth:diva-4161 (URN)
91-7178-478-0 (ISBN)
Public defence
2006-11-13, F3, KTH, Lindstedtsvägen 26, Stockholm, 14:15
Opponent
Supervisors
Note
QC 20100916. Available from: 2006-11-01. Created: 2006-11-01. Last updated: 2010-09-16. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text
PubMed
Scopus

Search in DiVA

By author/editor
Rehn, Martin; Sommer, Friedrich T.
By organisation
Computational Biology, CB
In the same journal
Journal of Computational Neuroscience
Computer Science

Search outside of DiVA

Google
Google Scholar
