Attractor dynamics in a modular network model of neocortex
KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB. ORCID iD: 0000-0002-2358-7815
2006 (English) In: Network: Computation in Neural Systems, ISSN 0954-898X, E-ISSN 1361-6536, Vol. 17, no. 3, p. 253-276. Journal article (Peer reviewed) Published
Abstract [en]

Starting from the hypothesis that the mammalian neocortex to a first approximation functions as an associative memory of the attractor network type, we formulate a quantitative computational model of neocortical layers 2/3. The model employs biophysically detailed multi-compartmental model neurons with conductance based synapses and includes pyramidal cells and two types of inhibitory interneurons, i.e., regular spiking non-pyramidal cells and basket cells. The simulated network has a minicolumnar as well as a hypercolumnar modular structure and we propose that minicolumns rather than single cells are the basic computational units in neocortex. The minicolumns are represented in full scale and synaptic input to the different types of model neurons is carefully matched to reproduce experimentally measured values and to allow a quantitative reproduction of single cell recordings. Several key phenomena seen experimentally in vitro and in vivo appear as emergent features of this model. It exhibits robust and fast attractor dynamics with pattern completion and pattern rivalry, and it suggests an explanation for the so-called attentional blink phenomenon. During assembly dynamics, the model faithfully reproduces several features of local UP states, as they have been experimentally observed in vitro, as well as oscillatory behavior similar to that observed in the neocortex.
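The attractor behaviour central to the abstract, pattern completion from a degraded cue, can be illustrated far below the paper's level of biophysical detail with a minimal Hopfield-style associative memory. This sketch is an assumption-laden stand-in: the paper uses multi-compartmental conductance-based neurons, while all names and parameters here are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 100      # stand-in for minicolumns, the proposed computational units
n_patterns = 5     # memory load kept well below capacity

# Store random +/-1 patterns with the Hebbian outer-product rule.
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))
W = (patterns.T @ patterns) / n_units
np.fill_diagonal(W, 0)  # no self-connections

def recall(cue, steps=10):
    """Iterate the network dynamics until the state settles in an attractor."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

# Pattern completion: corrupt 20% of a stored pattern and let the
# dynamics restore it.
target = patterns[0]
cue = target.copy()
flip = rng.choice(n_units, size=20, replace=False)
cue[flip] *= -1
recovered = recall(cue)
print(np.mean(recovered == target))  # fraction of units matching the stored pattern
```

At this low memory load the corrupted cue falls inside the target's basin of attraction, so the overlap printed at the end is at or near 1.0.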

Place, publisher, year, edition, pages
2006. Vol. 17, no. 3, p. 253-276
Keywords [en]
cortex, UP State, attentional blink, attractor dynamics, synchronization
Identifiers
URN: urn:nbn:se:kth:diva-6310
DOI: 10.1080/09548980600774619
ISI: 000244140900003
Scopus ID: 2-s2.0-33845421947
OAI: oai:DiVA.org:kth-6310
DiVA, id: diva2:10989
Note

QC 20150729

Available from: 2006-11-01 Created: 2006-11-01 Last updated: 2018-01-13 Bibliographically approved
Part of thesis
1. Aspects of memory and representation in cortical computation
2006 (English) Doctoral thesis, comprising papers (Other academic)
Abstract [sv]

This thesis in computer science proposes models for how certain computational tasks may be carried out by the cerebral cortex. The starting points are, on the one hand, established facts about how a cortical area is structured and functions and, on the other, established model classes in computational neuroscience, such as attractor memories and systems for sparse coding. A neural network that produces an efficient sparse code, in the binary sense, for sensory and in particular visual input is presented. I show that this network, when trained on natural images, reproduces certain properties (receptive fields) of neurons in layer IV of the primary visual cortex, and that the codes it produces are suitable for storage in associative memory models. Further, I show how a simple autoassociative memory can be modified to function as a general sequence-learning system by equipping it with synaptic dynamics. I investigate how an abstract attractor memory system can be implemented in a detailed model based on data about the cerebral cortex. This model can then be analysed with tools that simulate experiments that can be performed on a real cortex. The hypothesis that the cortex to a considerable extent functions as an attractor memory is investigated and is found to lead to predictions about its connectivity structure. I also discuss methodological aspects of computational neuroscience today.

Abstract [en]

In this thesis I take a modular approach to cortical function. I investigate how the cerebral cortex may realise a number of basic computational tasks, within the framework of its generic architecture. I present novel mechanisms for certain assumed computational capabilities of the cerebral cortex, building on the established notions of attractor memory and sparse coding. A sparse binary coding network for generating efficient representations of sensory input is presented. It is demonstrated that this network model closely reproduces the simple cell receptive field shapes seen in the primary visual cortex and that its representations are efficient with respect to storage in associative memory. I show how an autoassociative memory, augmented with dynamical synapses, can function as a general sequence learning network. I demonstrate how an abstract attractor memory system may be realised on the microcircuit level, and how it may be analysed using tools similar to those used experimentally. I outline some predictions from the hypothesis that the macroscopic connectivity of the cortex is optimised for attractor memory function. I also discuss methodological aspects of modelling in computational neuroscience.

Place, publisher, year, edition, pages
Stockholm: KTH, 2006. p. xiv, 99
Series
Trita-NA, ISSN 0348-2952 ; 2006:17
Keywords
cerebral cortex, neural networks, attractor memory, sequence learning, biological vision, generative models, serial order, computational neuroscience, dynamical synapses
Identifiers
URN: urn:nbn:se:kth:diva-4161
ISBN: 91-7178-478-0
Public defence
2006-11-13, F3, KTH, Lindstedtsvägen 26, Stockholm, 14:15
Note
QC 20100916. Available from: 2006-11-01 Created: 2006-11-01 Last updated: 2018-01-13 Bibliographically approved
2. Large-scale simulation of neuronal systems
2009 (English) Doctoral thesis, comprising papers (Other academic)
Abstract [en]

Biologically detailed computational models of large-scale neuronal networks have now become feasible due to the development of increasingly powerful massively parallel supercomputers. We report here on the methodology involved in simulating very large neuronal networks. Using conductance-based multicompartmental model neurons based on the Hodgkin-Huxley formalism, we simulate a neuronal network model of layers II/III of the neocortex. These simulations, the largest of this type ever performed, were made on the Blue Gene/L supercomputer and comprised up to 8 million neurons and 4 billion synapses. Such model sizes correspond to the cortex of a small mammal. After a series of optimization steps, performance measurements show linear scaling behavior both on the Blue Gene/L supercomputer and on a more conventional cluster computer. Results from the simulation of a model based on a more abstract formalism, and of considerably larger size, also show linear scaling behavior on both computer architectures.
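The model sizes quoted above, 8 million neurons and 4 billion synapses, imply some simple back-of-envelope resource figures. The bytes-per-synapse value below is an assumption chosen for illustration, not a figure taken from the thesis:

```python
# Rough resource estimate for the reported model size.
neurons = 8_000_000
synapses = 4_000_000_000

synapses_per_neuron = synapses / neurons   # average connectivity per cell
bytes_per_synapse = 16                     # assumed: weight, delay, target index
mem_gib = synapses * bytes_per_synapse / 2**30

print(f"{synapses_per_neuron:.0f} synapses per neuron")   # 500
print(f"{mem_gib:.1f} GiB of synapse state")              # 59.6 GiB
```

Synapse state on this scale dwarfs neuron state, which is why distributing and load-balancing the synapses across nodes dominates the optimization effort in simulations of this kind.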

Place, publisher, year, edition, pages
Stockholm: KTH, 2009. p. xii, 65
Series
Trita-CSC-A, ISSN 1653-5723 ; 2009:06
Identifiers
URN: urn:nbn:se:kth:diva-10616
ISBN: 978-91-7415-323-1
Public defence
2009-06-09, Sal F2, KTH, Lindstedtsvägen 26, Stockholm, 10:00 (English)
Note

QC 20100722

Available from: 2009-06-03 Created: 2009-06-03 Last updated: 2018-01-13 Bibliographically approved
3. Some computational aspects of attractor memory
2005 (English) Licentiate thesis, comprising papers (Other academic)
Abstract [en]

In this thesis I present novel mechanisms for certain computational capabilities of the cerebral cortex, building on the established notion of attractor memory. A sparse binary coding network for generating efficient representations of sensory input is presented. It is demonstrated that this network model closely reproduces receptive field shapes seen in primary visual cortex and that its representations are efficient with respect to storage in associative memory. I show how an autoassociative memory, augmented with dynamical synapses, can function as a general sequence learning network. I demonstrate how an abstract attractor memory system may be realized on the microcircuit level, and how it may be analyzed using tools similar to those used experimentally. I demonstrate some predictions from the hypothesis that the macroscopic connectivity of the cortex is optimized for attractor memory function. I also discuss methodological aspects of modelling in computational neuroscience.
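The sequence-learning idea in the abstract, an associative memory whose dynamics step from one stored item to the next, can be sketched with a simple asymmetric Hebbian rule. The thesis achieves this with dynamical synapses in an autoassociative network; the heteroassociative stand-in below is an assumption for illustration only.

```python
import numpy as np

# Asymmetric Hebbian rule: W links each pattern to its successor, so
# recalling one item drives the network state toward the next item.
rng = np.random.default_rng(1)
n = 64
seq = rng.choice([-1, 1], size=(4, n))  # a sequence of 4 random patterns

W = sum(np.outer(seq[t + 1], seq[t]) for t in range(3)) / n

# Start from the first item and let the dynamics replay the sequence.
state = seq[0].copy()
for t in range(3):
    state = np.sign(W @ state)
    state[state == 0] = 1
    print(np.mean(state == seq[t + 1]))  # overlap with the next stored item
```

With only four patterns in 64 units the crosstalk between stored transitions is negligible, so each update step lands on (or very near) the next item in the sequence.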

Place, publisher, year, edition, pages
Stockholm: KTH, 2005. p. viii, 76
Series
Trita-NA, ISSN 0348-2952 ; 0509
Keywords
Computer Science, attractor memory, cerebral cortex, neural networks
Identifiers
URN: urn:nbn:se:kth:diva-249
ISBN: 91-7283-983-X
Presentation
2005-03-15, Sal E32, KTH, Lindstedtsvägen 3, Stockholm, 07:00
Note
QC 20101220. Available from: 2005-05-31 Created: 2005-05-31 Last updated: 2018-01-11 Bibliographically approved

Open Access in DiVA

Full text not available in DiVA

Other links

Publisher's full text | Scopus

By author/editor

Lundqvist, Mikael; Rehn, Martin; Djurfeldt, Mikael; Lansner, Anders

In the same journal

Network
