  • 1.
    Auffarth, Benjamin
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Kaplan, Bernhard
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Lansner, Anders
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Map formation in the olfactory bulb by axon guidance of olfactory neurons (2011). In: Frontiers in Systems Neuroscience, ISSN 1662-5137, E-ISSN 1662-5137, Vol. 5, no. 0. Article in journal (Refereed)
    Abstract [en]

    The organization of representations in the brain has been observed to locally reflect subspaces of inputs that are relevant to behavioral or perceptual feature combinations, such as in areas receptive to lower- and higher-order features in the visual system. The early olfactory system has developed highly plastic mechanisms, and convergent evidence indicates that projections from primary neurons converge onto the glomerular level of the olfactory bulb (OB) to form a code composed of continuous spatial zones that are differentially active for particular physico-chemical feature combinations, some of which are known to trigger behavioral responses. In a model study of the early human olfactory system, we derive a glomerular organization based on a set of real-world, biologically relevant stimuli, a distribution of receptors that each respond to a set of odorants with similar ranges of molecular properties, and a mechanism of axon guidance based on activity. Apart from demonstrating activity-dependent glomeruli formation and reproducing the relationship of glomerular recruitment with concentration, we show that glomerular responses reflect similarities of human odor category perceptions and, further, that a spatial code provides a better correlation than a distributed population code. These results are consistent with evidence of functional compartmentalization in the OB and could suggest a function for the bulb in encoding of perceptual dimensions.
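    The activity-dependent convergence onto spatially ordered zones that the abstract describes can be illustrated, very loosely, with a textbook 1-D self-organizing map: units on a sheet compete for each input, and the winner and its neighbors move toward it, yielding a spatially ordered code. This is a generic caricature under that assumption, not the paper's axon-guidance model; all names and parameters are illustrative.

```python
import numpy as np

def som_1d(samples, n_units=20, epochs=50, lr=0.5, sigma=3.0, seed=0):
    """Toy 1-D self-organizing map: units self-organize so that nearby
    units on the sheet come to respond to similar inputs."""
    rng = np.random.default_rng(seed)
    dim = samples.shape[1]
    w = rng.random((n_units, dim))          # afferent weights per map unit
    pos = np.arange(n_units)                # positions on the 1-D sheet
    for t in range(epochs):
        decay = 1.0 - t / epochs            # shrink learning rate and radius
        for x in rng.permutation(samples):
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # most active unit
            h = np.exp(-((pos - bmu) ** 2) / (2 * (sigma * decay + 1e-3) ** 2))
            w += (lr * decay) * h[:, None] * (x - w)      # neighbors follow
    return w
```

    With 1-D inputs the trained weights spread over the input range and end up roughly ordered along the sheet, a minimal analogue of a continuous spatial zone per feature range.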

  • 2.
    Benjaminsson, Simon
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Fransson, Peter
    Department of Clinical Neuroscience, Karolinska Institute.
    Lansner, Anders
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    A Novel Model-Free Data Analysis Technique Based on Clustering in a Mutual Information Space: Application to Resting-State fMRI (2010). In: Frontiers in Systems Neuroscience, ISSN 1662-5137, E-ISSN 1662-5137, Vol. 4, pp. 34:1-34:8. Article in journal (Refereed)
    Abstract [en]

    Non-parametric, data-driven analysis techniques can be used to study datasets with few assumptions about the data and the underlying experiment. Variations of independent component analysis (ICA) have been the methods most commonly used on fMRI data, e.g., in finding resting-state networks thought to reflect the connectivity of the brain. Here we present a novel data analysis technique and demonstrate it on resting-state fMRI data. It is a generic method with few underlying assumptions about the data. The results are built from the statistical relations between all input voxels, resulting in a whole-brain analysis on the voxel level. It has good scalability properties, and the parallel implementation is capable of handling large datasets and databases. From the mutual information between the activities of the voxels over time, a distance matrix is created for all voxels in the input space. Multidimensional scaling is used to put the voxels in a lower-dimensional space reflecting the dependency relations based on the distance matrix. By performing clustering in this space we can find the strong statistical regularities in the data, which for the resting-state data turn out to be the resting-state networks. The decomposition is performed in the last step of the algorithm and is computationally simple. This opens up rapid analysis and visualization of the data on different spatial levels, as well as automatic selection of a suitable number of decomposition components.
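    The pipeline the abstract describes — pairwise mutual information between voxel time courses, a distance matrix, multidimensional scaling, then clustering in the embedded space — can be prototyped with numpy alone. This is a rough sketch with hypothetical function names (histogram-based MI, classical MDS), not the authors' parallel implementation.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    # Histogram-based estimate of mutual information between two signals.
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def mi_space_embedding(data, n_dims=2, bins=8):
    """data: (n_voxels, n_timepoints) -> low-dimensional coordinates.
    Clustering (e.g. k-means) would then be run on these coordinates."""
    n = data.shape[0]
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # High mutual information -> small distance.
            d[i, j] = d[j, i] = 1.0 / (1.0 + mutual_information(data[i], data[j], bins))
    # Classical MDS: double-center the squared distances, then eigendecompose.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (d ** 2) @ J
    w, v = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_dims]
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0))
```

    The quadratic loop over voxel pairs is what makes the whole-brain case demand the parallel, scalable implementation the abstract mentions.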

  • 3.
    Benjaminsson, Simon
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Herman, Pawel
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Lansner, Anders
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Odour discrimination and mixture segmentation in a holistic model of the mammalian olfactory system. Manuscript (preprint) (Other academic)
  • 4.
    Benjaminsson, Simon
    et al.
    KTH, School of Computer Science and Communication (CSC).
    Herman, Pawel
    KTH, School of Computer Science and Communication (CSC).
    Lansner, Anders
    KTH, School of Computer Science and Communication (CSC).
    Performance of a computational model of the mammalian olfactory system (2016). In: Neuromorphic Olfaction, CRC Press, 2016, pp. 173-211. Book chapter, part of an anthology (Other academic)
  • 5.
    Benjaminsson, Simon
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Lansner, Anders
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Adaptive sensor drift counteraction by a modular neural network (2011). In: Chemical Senses, ISSN 0379-864X, Vol. 36, no. 1, pp. E41-E41. Article in journal (Other academic)
  • 6.
    Benjaminsson, Simon
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Lansner, Anders
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Adaptive sensor drift counteraction by a modular neural network (2010). In: Neuroscience Research, ISSN 0168-0102, E-ISSN 1872-8111, Vol. 68, pp. E212-E212. Article in journal (Other academic)
    Abstract [en]

    The response properties of sensors such as electronic noses vary over time due to internal or environmental factors. Recalibration is often costly or technically infeasible, which is why algorithms have been developed to address the sensor drift problem at the data processing level. These fall into two categories: pre-processing approaches, such as component correction [1], try to extract the direction and amount of drift in the training data and remove the drift component during operation; adaptive algorithms, such as the self-organizing map [2], try to counteract the drift during runtime by adjusting the network to the incoming data.

    We have previously suggested a modular neural network architecture as a model of cortical layer 4 [3]. Here we show that it can handle the sensor drift problem in chemosensor data quite well. It creates a distributed and redundant code suitable for a noisy and drifting environment. A feature extraction layer governed by competitive learning allows the network to adapt during runtime. In addition, training data can be used to predict the underlying drift and further improve network performance. Hence, we attempt to combine the two aforementioned methodological categories into one network model.

    The capabilities of the proposed network are demonstrated on surrogate data as well as real-world data collected from an electronic nose.
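    The component-correction idea referenced above ([1]) can be caricatured in a few lines: estimate the dominant drift direction from time-ordered calibration samples of a reference gas, then project that component out of incoming samples. A minimal numpy sketch under those assumptions; the names are illustrative and this is not the published algorithm.

```python
import numpy as np

def fit_drift_direction(calibration):
    """calibration: (n_samples, n_sensors) responses to a fixed reference
    gas, ordered in time. The first principal component of these samples
    approximates the drift direction."""
    centered = calibration - calibration.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]  # unit vector along the dominant drift component

def correct(sample, drift_dir):
    # Remove the sample's projection onto the drift direction.
    return sample - np.dot(sample, drift_dir) * drift_dir
```

    An adaptive method would instead keep updating its representation during operation, which is the behavior the modular network above combines with this kind of training-data-based drift prediction.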

  • 7.
    Benjaminsson, Simon
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Lansner, Anders
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Extreme scaling of brain simulation on JUGENE (2011). Report (Other academic)
  • 8.
    Benjaminsson, Simon
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Lansner, Anders
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Nexa: A scalable neural simulator with integrated analysis (2012). In: Network, ISSN 0954-898X, E-ISSN 1361-6536, Vol. 23, no. 4, pp. 254-271. Article in journal (Refereed)
    Abstract [en]

    Large-scale neural simulations encompass challenges in simulator design, data handling and understanding of simulation output. As the computational power of supercomputers and the size of network models increase, these challenges become even more pronounced. Here we introduce the experimental scalable neural simulator Nexa, for parallel simulation of large-scale neural network models at a high level of biological abstraction and for exploration of the simulation methods involved. It includes firing-rate models and capabilities to build networks using machine-learning-inspired methods, e.g. for self-organization of network architecture and for structural plasticity. We show scalability up to the size of the largest machines currently available for a number of model scenarios. We further demonstrate simulator integration with online analysis and real-time visualization as scalable solutions for the data handling challenges.

  • 9.
    Benjaminsson, Simon
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Silverstein, David
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Herman, Pawel
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Melis, Paul
    Visualization Group, SARA, Amsterdam, The Netherlands.
    Slavnić, Vladimir
    Scientific Computing Laboratory, Institute of Physics Belgrade, University of Belgrade.
    Spasojević, Marko
    Scientific Computing Laboratory, Institute of Physics Belgrade, University of Belgrade.
    Alexiev, Kiril
    Department of Mathematical Methods for Sensor Information Processing, Institute of Information and Communication Technologies, Bulgaria.
    Lansner, Anders
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Visualization of Output from Large-Scale Brain Simulations (2012). Report (Other academic)
    Abstract [en]

    This project concerned the development of tools for visualizing output from brain simulations performed on supercomputers. The project had two main parts: 1) creating visualizations using large-scale simulation output from existing neural simulation codes, and 2) extending some of the existing codes to allow interactive runtime (in-situ) visualization. In 1), simulation data was converted to HDF5 format and split over multiple files, and visualization pipelines were created for different types of visualizations, e.g. voltage and calcium. In 2), using the VisIt visualization application and its libsim library, simulation code was instrumented so that VisIt could access simulation data directly. The instrumented code was tested on different clusters, where control of the simulation was demonstrated and in-situ visualization of neural unit and population data was achieved.

  • 10.
    Berthet, Pierre
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Hällgren Kotaleski, Jeanette
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Lansner, Anders
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Action selection performance of a reconfigurable Basal Ganglia inspired model with Hebbian-Bayesian Go-NoGo connectivity (2012). In: Frontiers in Behavioral Neuroscience, ISSN 1662-5153, E-ISSN 1662-5153, Vol. 6, p. 65. Article in journal (Refereed)
    Abstract [en]

    Several studies have shown a strong involvement of the basal ganglia (BG) in action selection and dopamine-dependent learning. The dopaminergic signal to striatum, the input stage of the BG, has been commonly described as coding a reward prediction error (RPE), i.e. the difference between the predicted and actual reward. The RPE has been hypothesized to be critical in the modulation of synaptic plasticity in cortico-striatal synapses in the direct and indirect pathways. We developed an abstract computational model of the BG, with a dual pathway structure functionally corresponding to the direct and indirect pathways, and compared its behaviour to biological data as well as other reinforcement learning models. The computations in our model are inspired by Bayesian inference, and the synaptic plasticity changes depend on a three-factor Hebbian-Bayesian learning rule based on co-activation of pre- and post-synaptic units and on the value of the RPE. The model builds on a modified Actor-Critic architecture and implements the direct (Go) and indirect (NoGo) pathways, as well as the reward prediction (RP) system, acting in a complementary fashion. We investigated the performance of the model when different configurations of the Go, NoGo and RP systems were utilized, e.g. using only the Go, NoGo, or RP system, or combinations of those. Learning performance was investigated in several types of learning paradigms, such as learning-relearning, successive learning, stochastic learning, reversal learning and a two-choice task. The RPE and the activity of the model during learning were similar to monkey electrophysiological and behavioural data. Our results, however, show that there is no unique best way to configure this BG model to handle all the tested learning paradigms well. We thus suggest that an agent might dynamically configure its action selection mode, possibly depending on task characteristics and also on how much time is available.
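    The central mechanism the abstract describes — an RPE gating plasticity in complementary Go and NoGo pathways — can be caricatured with a toy two-armed-bandit learner. All names, parameters and update rules here are illustrative simplifications, not the paper's Hebbian-Bayesian rule.

```python
import numpy as np

def run_bandit(p_reward, n_trials=2000, lr=0.1, seed=0):
    """Toy Go/NoGo learner on a multi-armed bandit.
    A positive RPE strengthens Go weights for the chosen action,
    a negative RPE strengthens NoGo weights (cf. direct/indirect pathways)."""
    rng = np.random.default_rng(seed)
    n_actions = len(p_reward)
    go = np.zeros(n_actions)
    nogo = np.zeros(n_actions)
    value = np.zeros(n_actions)              # reward prediction (critic)
    for _ in range(n_trials):
        net = go - nogo                      # net pathway drive per action
        p = np.exp(net - net.max())
        p /= p.sum()                         # softmax action selection
        a = rng.choice(n_actions, p=p)
        r = float(rng.random() < p_reward[a])
        rpe = r - value[a]                   # reward prediction error
        value[a] += lr * rpe
        if rpe > 0:
            go[a] += lr * rpe                # D1-like: potentiate Go
        else:
            nogo[a] -= lr * rpe              # D2-like: potentiate NoGo
    return go, nogo
```

    Running this on an asymmetric schedule such as run_bandit([0.9, 0.1]) should leave the net Go - NoGo drive of the richer arm dominant; freezing either pathway changes which reward schedules the learner copes with, which is the kind of configuration question the study investigates.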

  • 11.
    Berthet, Pierre
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Lansner, Anders
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Optogenetic Stimulation in a Computational Model of the Basal Ganglia Biases Action Selection and Reward Prediction Error (2014). In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 9, no. 3, p. e90578. Article in journal (Refereed)
    Abstract [en]

    Optogenetic stimulation of specific types of medium spiny neurons (MSNs) in the striatum has been shown to bias the choices of mice in a two-choice task. This shift depends on the localisation and intensity of the stimulation, but also on the recent reward history. We have implemented a way to simulate this increased activity produced by the optical flash in our computational model of the basal ganglia (BG). This abstract model features the direct and indirect pathways commonly described in biology, and a reward prediction pathway (RP). The framework is similar to Actor-Critic methods and to the ventral/dorsal distinction in the striatum. We thus investigated the impact on selection caused by added stimulation in each of the three pathways. We were able to reproduce in our model the bias in action selection observed in mice. Our results also showed that biasing the reward prediction is sufficient to create a modification in action selection. However, we had to increase the percentage of trials with stimulation relative to that in the experiments in order to impact the selection. We found that increasing only the reward prediction had a different effect depending on whether the stimulation in RP was action dependent (only for a specific action) or not. We further looked at the evolution of the change in the weights depending on the stage of learning within a block. A bias in RP impacts the plasticity differently depending on that stage, but also on the outcome. It remains to experimentally test how the dopaminergic neurons are affected by specific stimulations of neurons in the striatum and to relate the data to predictions of our model.

  • 12.
    Berthet, Pierre
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB. Stockholm Univ, Sweden; Karolinska Inst, Sweden.
    Lindahl, Mikael
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB. Karolinska Inst, Sweden.
    Tully, Philip J.
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB. Karolinska Inst, Sweden; Univ Edinburgh, Scotland.
    Hellgren-Kotaleski, Jeanette
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB. Karolinska Inst, Sweden.
    Lansner, Anders
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB. Stockholm Univ, Sweden; Karolinska Inst, Sweden.
    Functional Relevance of Different Basal Ganglia Pathways Investigated in a Spiking Model with Reward Dependent Plasticity (2016). In: Frontiers in Neural Circuits, ISSN 1662-5110, E-ISSN 1662-5110, Vol. 10, article id 53. Article in journal (Refereed)
    Abstract [en]

    The brain enables animals to behaviorally adapt in order to survive in a complex and dynamic environment, but how reward-oriented behaviors are achieved and computed by its underlying neural circuitry is an open question. To address this concern, we have developed a spiking model of the basal ganglia (BG) that learns to dis-inhibit the action leading to a reward despite ongoing changes in the reward schedule. The architecture of the network features the two pathways commonly described in BG, the direct (denoted D1) and the indirect (denoted D2) pathway, as well as a loop involving striatum and the dopaminergic system. The activity of these dopaminergic neurons conveys the reward prediction error (RPE), which determines the magnitude of synaptic plasticity within the different pathways. All plastic connections implement a versatile four-factor learning rule derived from Bayesian inference that depends upon pre- and post-synaptic activity, receptor type, and dopamine level. Synaptic weight updates occur in the D1 or D2 pathways depending on the sign of the RPE, and an efference copy informs upstream nuclei about the action selected. We demonstrate successful performance of the system in a multiple-choice learning task with a transiently changing reward schedule. We simulate lesioning of the various pathways and show that a condition without the D2 pathway fares worse than one without D1. Additionally, we simulate the degeneration observed in Parkinson's disease (PD) by decreasing the number of dopaminergic neurons during learning. The results suggest that the D1 pathway impairment in PD might have been overlooked. Furthermore, an analysis of the alterations in the synaptic weights shows that using the absolute reward value instead of the RPE leads to a larger change in D1.

  • 13. Bjorklund, A.
    et al.
    Lansner, Anders
    KTH, Former Departments, Numerical Analysis and Computer Science, NADA.
    Grill, V. E.
    Glucose-induced Ca2+ (i) abnormalities in human pancreatic islets - Important role of overstimulation (2000). In: Diabetes, ISSN 0012-1797, E-ISSN 1939-327X, Vol. 49, no. 11, pp. 1840-1848. Article in journal (Refereed)
    Abstract [en]

    Chronic hyperglycemia desensitizes beta-cells to glucose. To further define the mechanisms behind desensitization and the role of overstimulation, we tested human pancreatic islets for the effects of long-term elevated glucose levels on cytoplasmic free Ca2+ concentration ([Ca2+](i)) and its relationship to overstimulation. Islets were cultured for 48 h with 5.5 or 27 mmol/l glucose. Culture with 27 mmol/l glucose obliterated postculture insulin responses to 27 mmol/l glucose. This desensitization was specific for glucose versus arginine. Desensitization was accompanied by three major [Ca2+](i) abnormalities: 1) elevated basal [Ca2+](i), 2) loss of a glucose-induced rise in [Ca2+](i), and 3) perturbations of oscillatory activity with a decrease in glucose-induced slow oscillations (0.2-0.5 min(-1)). Coculture with 0.3 mmol/l diazoxide was performed to probe the role of overstimulation. Neither glucose nor diazoxide affected islet glucose utilization or oxidation. Coculture with diazoxide and 27 mmol/l glucose significantly (P < 0.05) restored postculture insulin responses to glucose, lowered basal [Ca2+](i), and normalized glucose-induced oscillatory activity. However, diazoxide completely failed to revive an increase in [Ca2+](i) during postculture glucose stimulation. In conclusion, desensitization of glucose-induced insulin secretion in human pancreatic islets is induced in parallel with major glucose-specific [Ca2+](i) abnormalities. Overstimulation is an important but not exclusive factor behind [Ca2+](i) abnormalities.

  • 14. Brette, Romain
    et al.
    Rudolph, Michelle
    Carnevale, Ted
    Hines, Michael
    Beeman, David
    Bower, James M.
    Diesmann, Markus
    Morrison, Abigail
    Goodman, Philip H.
    Harris, Frederick C., Jr.
    Zirpe, Milind
    Natschlaeger, Thomas
    Pecevski, Dejan
    Ermentrout, Bard
    Djurfeldt, Mikael
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Lansner, Anders
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Rochel, Olivier
    Vieville, Thierry
    Muller, Eilif
    Davison, Andrew P.
    El Boustani, Sami
    Destexhe, Alain
    Simulation of networks of spiking neurons: A review of tools and strategies (2007). In: Journal of Computational Neuroscience, ISSN 0929-5313, E-ISSN 1573-6873, Vol. 23, no. 3, pp. 349-398. Review article (Refereed)
    Abstract [en]

    We review different aspects of the simulation of spiking neural networks. We start by reviewing the different types of simulation strategies and algorithms that are currently implemented. We next review the precision of those simulation strategies, in particular in cases where plasticity depends on the exact timing of the spikes. We then give an overview of the simulators and simulation environments presently available (restricted to those that are freely available, open source and documented). For each simulation tool, its advantages and pitfalls are reviewed, with the aim of allowing the reader to identify which simulator is appropriate for a given task. Finally, we provide a series of benchmark simulations of different types of networks of spiking neurons, including Hodgkin-Huxley-type and integrate-and-fire models, interacting through current-based or conductance-based synapses, using clock-driven or event-driven integration strategies. The same set of models is implemented on the different simulators, and the codes are made available. The ultimate goal of this review is to provide a resource that facilitates identifying the appropriate integration strategy and simulation tool for a given modeling problem related to spiking neural networks.

  • 15. Bruederle, Daniel
    et al.
    Petrovici, Mihai A.
    Vogginger, Bernhard
    Ehrlich, Matthias
    Pfeil, Thomas
    Millner, Sebastian
    Gruebl, Andreas
    Wendt, Karsten
    Mueller, Eric
    Schwartz, Marc-Olivier
    de Oliveira, Dan Husmann
    Jeltsch, Sebastian
    Fieres, Johannes
    Schilling, Moritz
    Mueller, Paul
    Breitwieser, Oliver
    Petkov, Venelin
    Muller, Lyle
    Davison, Andrew P.
    Krishnamurthy, Pradeep
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Kremkow, Jens
    Lundqvist, Mikael
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Muller, Eilif
    Partzsch, Johannes
    Scholze, Stefan
    Zuehl, Lukas
    Mayr, Christian
    Destexhe, Alain
    Diesmann, Markus
    Potjans, Tobias C.
    Lansner, Anders
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Schueffny, Rene
    Schemmel, Johannes
    Meier, Karlheinz
    A comprehensive workflow for general-purpose neural modeling with highly configurable neuromorphic hardware systems (2011). In: Biological Cybernetics, ISSN 0340-1200, E-ISSN 1432-0770, Vol. 104, no. 4-5, pp. 263-296. Article in journal (Refereed)
    Abstract [en]

    In this article, we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim at the establishment of this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: The integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations and analyzes the differences. The integration of these components into one hardware-software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for the maturity of the model-to-hardware mapping software. The functionality and flexibility of the latter is proven with a variety of experimental results.

  • 16.
    Chrysanthidis, Nikolaos
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology (CST). Aristotle University of Thessaloniki, Faculty of Engineering, School of Electrical and Computer Engineering, 54124, Thessaloniki, Greece.
    Fiebig, Florian
    KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology (CST). Institute for Adaptive and Neural Computation, Edinburgh University, EH8 9AB Edinburgh, Scotland.
    Lansner, Anders
    KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology (CST). Department of Numerical Analysis and Computer Science, Stockholm University, 10691 Stockholm, Sweden.
    Introducing double bouquet cells into a modular cortical associative memory model. Manuscript (preprint) (Other academic)
    Abstract [en]

    We present an electrophysiological model of double bouquet cells and integrate them into an established cortical columnar microcircuit model that has previously been used as a spiking attractor model for memory. Learning in that model relies on a Bayesian-Hebbian learning rule to condition recurrent connectivity between pyramidal cells. We here demonstrate that the inclusion of a biophysically plausible double bouquet cell model resolves earlier concerns about learning rules that simultaneously learn excitation and inhibition and might thus violate Dale's Principle. We show that the learning ability and resulting effective connectivity between functional columns of previous network models are preserved when pyramidal synapses onto double bouquet cells are plastic under the same Hebbian-Bayesian learning rule. The proposed architecture draws on experimental evidence on double bouquet cells and effectively solves the problem of duplexed learning of inhibition and excitation by replacing recurrent inhibition between pyramidal cells in functional columns of different stimulus selectivity with a plastic disynaptic pathway. We thus show that the resulting change to the microcircuit architecture improves the model's biological plausibility without otherwise impacting the model's spiking activity, basic operation, and learning abilities.

  • 17.
    Cürüklü, Baran
    et al.
    Mälardalen University.
    Lansner, Anders
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Quantitative Assessment of the Local and Long-Range Horizontal Connections within the Striate Cortex (2003). Conference paper (Other academic)
  • 18.
    Cürüklü, Baran
    et al.
    Department of Computer Science and Engineering, Mälardalen University.
    Lansner, Anders
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    A model of the summation pools within the layer 4 (area 17) (2005). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 65, pp. 167-172. Article in journal (Refereed)
    Abstract [en]

    We propose a developmental model of the summation pools within layer 4. The model is based on the modular structure of the neocortex and captures some of the known properties of layer 4. Connections between the orientation minicolumns are developed during exposure to visual input. Excitatory local connections are dense and biased towards the iso-orientation domain. Excitatory long-range connections are sparse and target all orientation domains equally. Inhibition is local. The summation pools are elongated along the orientation axis. These summation pools can facilitate weak and poorly tuned LGN input, and can explain improved visibility as an effect of stimulus enlargement.

  • 19. De Schutter, E.
    et al.
    Ekeberg, Örjan
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Hellgren Kotaleski, Jeanette
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Achard, P.
    Lansner, Anders
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Biophysically detailed modelling of microcircuits and beyond (2005). In: TINS - Trends in Neurosciences, ISSN 0166-2236, E-ISSN 1878-108X, Vol. 28, no. 10, pp. 562-569. Review article (Refereed)
    Abstract [en]

    Realistic bottom-up modelling has been seminal to understanding which properties of microcircuits control their dynamic behaviour, such as the locomotor rhythms generated by central pattern generators. In this article of the TINS Microcircuits Special Feature, we review recent modelling work on the leech-heartbeat and lamprey-swimming pattern generators as examples. Top-down mathematical modelling also has an important role in analyzing microcircuit properties but it has not always been easy to reconcile results from the two modelling approaches. Most realistic microcircuit models are relatively simple and need to be made more detailed to represent complex processes more accurately. We review methods to add neuromechanical feedback, biochemical pathways or full dendritic morphologies to microcircuit models. Finally, we consider the advantages and challenges of full-scale simulation of networks of microcircuits.

  • 20.
    Djurfeldt, Mikael
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Ekeberg, Örjan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Lansner, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Large-scale modeling - a tool for conquering the complexity of the brain2008Ingår i: Frontiers in Neuroinformatics, ISSN 1662-5196, E-ISSN 1662-5196, Vol. 2, s. 1-4Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    Is there any hope of achieving a thorough understanding of higher functions such as perception, memory, thought and emotion, or is the stunning complexity of the brain a barrier which will limit such efforts for the foreseeable future? In this perspective we discuss methods to handle complexity and approaches to model building, and point to detailed large-scale models as a new contribution to the toolbox of the computational neuroscientist. We elucidate some aspects which distinguish large-scale models and some of the technological challenges which they entail.

  • 21.
    Djurfeldt, Mikael
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Parallelldatorcentrum, PDC.
    Johansson, Christopher
    KTH, Skolan för datavetenskap och kommunikation (CSC), Numerisk Analys och Datalogi, NADA.
    Ekeberg, Örjan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Rehn, Martin
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Lundqvist, Mikael
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Lansner, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Massively parallel simulation of brain-scale neuronal network models2005Rapport (Övrigt vetenskapligt)
  • 22.
    Djurfeldt, Mikael
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Lansner, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    1st INCF Workshop on Large-scale Modeling of the Nervous System2007Rapport (Övrigt vetenskapligt)
  • 23.
    Djurfeldt, Mikael
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Lansner, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Memory capacity in a model of cortical layers II/III2008Konferensbidrag (Refereegranskat)
  • 24.
    Djurfeldt, Mikael
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Lundqvist, Mikael
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Johansson, Christopher
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Rehn, Martin
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Ekeberg, Örjan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Lansner, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Brain-scale simulation of the neocortex on the IBM Blue Gene/L  supercomputer2008Ingår i: IBM Journal of Research and Development, ISSN 0018-8646, E-ISSN 2151-8556, Vol. 52, nr 1-2, s. 31-41Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    Biologically detailed large-scale models of the brain can now be simulated thanks to increasingly powerful massively parallel supercomputers. We present an overview, for the general technical reader, of a neuronal network model of layers II/III of the neocortex built with biophysical model neurons. These simulations, carried out on an IBM Blue Gene/L supercomputer, comprise up to 22 million neurons and 11 billion synapses, which makes them the largest simulations of this type ever performed. Such model sizes correspond to the cortex of a small mammal. The SPLIT library, used for these simulations, runs on single-processor as well as massively parallel machines. Performance measurements show good scaling behavior on the Blue Gene/L supercomputer up to 8,192 processors. Several key phenomena seen in the living brain appear as emergent phenomena in the simulations. We discuss the role of this kind of model in neuroscience and note that full-scale models may be necessary to preserve natural dynamics. We also discuss the need for software tools for the specification of models as well as for analysis and visualization of output data. Combining models that range from abstract connectionist type to biophysically detailed will help us unravel the basic principles underlying neocortical function.

  • 25.
    Djurfeldt, Mikael
    et al.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Sandberg, Anders
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Ekeberg, Örjan
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Lansner, Anders
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    See-A framework for simulation of biologically detailed and artificial neural networks and systems1999Ingår i: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 26-27, s. 997-1003Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    See is a software framework for simulation of biologically detailed and artificial neural networks and systems. It includes a general purpose scripting language, based on Scheme, which also can be used interactively, while the basic framework is written in C++. Models can be built on the Scheme level from 'simulation objects', each representing a population of neurons, a projection, etc. The simulator provides a flexible and efficient protocol for data transfer between such objects. See contains a user interface to the parallelized, platform-independent library SPLIT intended for biologically detailed modeling of large-scale networks and is easy to extend with new user code, both on the C++ and Scheme levels.

  • 26.
    Ekeberg, Örjan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsvetenskap och beräkningsteknik (CST).
    Fransén, Erik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsvetenskap och beräkningsteknik (CST).
    Hellgren Kotaleski, Jeanette
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsvetenskap och beräkningsteknik (CST).
    Herman, Pawel
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsvetenskap och beräkningsteknik (CST).
    Kumar, Arvind
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsvetenskap och beräkningsteknik (CST).
    Lansner, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsvetenskap och beräkningsteknik (CST).
    Lindeberg, Tony
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsvetenskap och beräkningsteknik (CST).
    Computational Brain Science at CST, CSC, KTH2016Övrigt (Övrigt vetenskapligt)
    Abstract [en]

    Mission and Vision - Computational Brain Science Lab at CST, CSC, KTH

    The scientific mission of the Computational Brain Science Lab at CSC is to be at the forefront of mathematical modelling, quantitative analysis and mechanistic understanding of brain function. We perform research on (i) computational modelling of biological brain function and on (ii) developing theory, algorithms and software for building computer systems that can perform brain-like functions. Our research answers scientific questions and develops methods in these fields. We integrate results from our science-driven brain research into our work on brain-like algorithms and likewise use theoretical results about artificial brain-like functions as hypotheses for biological brain research.

    Our research on biological brain function includes sensory perception (vision, hearing, olfaction, pain), cognition (action selection, memory, learning) and motor control at different levels of biological detail (molecular, cellular, network) and mathematical/functional description. Methods development for investigating biological brain function and its dynamics as well as dysfunction comprises biomechanical simulation engines for locomotion and voice, machine learning methods for analysing functional brain images, craniofacial morphology and neuronal multi-scale simulations. Projects are conducted in close collaborations with Karolinska Institutet and Karolinska Hospital in Sweden as well as other laboratories in Europe, U.S., Japan and India.

    Our research on brain-like computing concerns methods development for perceptual systems that extract information from sensory signals (images, video and audio), analysis of functional brain images and EEG data, learning for autonomous agents as well as development of computational architectures (both software and hardware) for neural information processing. Our brain-inspired approach to computing also applies more generically to other computer science problems such as pattern recognition, data analysis and intelligent systems. Recent industrial collaborations include analysis of patient brain data with MentisCura and the startup company 13 Lab bought by Facebook.

    Our long term vision is to contribute to (i) deeper understanding of the computational mechanisms underlying biological brain function and (ii) better theories, methods and algorithms for perceptual and intelligent systems that perform artificial brain-like functions by (iii) performing interdisciplinary and cross-fertilizing research on both biological and artificial brain-like functions. 

    On one hand, biological brains provide existence proofs for guiding our research on artificial perceptual and intelligent systems. On the other hand, applying Richard Feynman’s famous statement ”What I cannot create I do not understand” to brain science implies that we can only claim to fully understand the computational mechanisms underlying biological brain function if we can build and implement corresponding computational mechanisms on a computerized system that performs similar brain-like functions.

  • 27.
    Ekeberg, Örjan
    et al.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Grillner, Sten
    Karolinska Institutet.
    Lansner, Anders
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    The Neural Control of Fish Swimming studied through Numerical Simulations1995Ingår i: Adaptive Behavior, ISSN 1059-7123, E-ISSN 1741-2633, Vol. 3, nr 4, s. 363-384Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    The neuronal generation of vertebrate locomotion has been extensively studied in the lamprey. Computer simulations of this system have been carried out with different aims and with different techniques. In this article, we review some of these simulations, particularly those leading toward models that describe the interaction that occurs between the neuronal system and its mechanical environment during swimming. Here we extend these models, enabling two new experiments to be conducted. The first one addresses the role of sensory feedback by exposing the neuromechanical system to unexpected perturbations. The second one tests the validity of an earlier proposed hypothesis for the neural generation of three-dimensional (3D) steering by coupling this central pattern generator to a mechanical 3D simulation.

  • 28.
    Eriksson, David
    et al.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Fransén, Erik
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Zilberter, Y.
    Lansner, Anders
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Effects of short-term synaptic plasticity in a local microcircuit on cell firing2003Ingår i: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 52-54, s. 7-12Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    Effects of short-term synaptic plasticity on cell firing properties in a microcircuit formed by a reciprocally connected pyramidal cell and FSN interneuron in layer 2/3 of neocortex were analyzed in a biophysical model. Induction of synaptic depression by backpropagating dendritic action potentials was replicated, as well as the resulting time dependent depression of IPSP amplitudes. Results indicate that the effect of the depression becomes significant above 30 Hz input frequency. The magnitude of the effect depends on the time constant of the dendritic calcium regulating the depression. The frequency range depends on the time constant of the IPSP.
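
    The depression mechanism summarized above lends itself to a compact phenomenological description. The sketch below is not the paper's biophysical, calcium-dependent model; it is a generic resource-depletion scheme in the style of Tsodyks-Markram, and the recovery time constant tau_rec and release fraction U are illustrative assumptions.

```python
import math

# Generic short-term synaptic depression: each presynaptic spike consumes a
# fraction U of the available resources r, which recover with time constant
# tau_rec. Values are illustrative, not fitted to the FSN-pyramidal pair.
def depressed_amplitudes(spike_times_ms, tau_rec=800.0, U=0.5, a0=1.0):
    """Return the postsynaptic amplitude evoked by each spike in the train."""
    r, last_t, amps = 1.0, None, []
    for t in spike_times_ms:
        if last_t is not None:
            r = 1.0 - (1.0 - r) * math.exp(-(t - last_t) / tau_rec)  # recovery
        amps.append(a0 * U * r)  # amplitude scales with available resources
        r -= U * r               # resources consumed by this spike
        last_t = t
    return amps

# Depression is much stronger at 50 Hz than at 10 Hz, in line with the
# abstract's observation that the effect becomes significant above ~30 Hz.
low = depressed_amplitudes([i * 100.0 for i in range(5)])   # 10 Hz train
high = depressed_amplitudes([i * 20.0 for i in range(5)])   # 50 Hz train
```

    In this scheme the steady-state amplitude falls with input rate because the inter-spike interval leaves less time for recovery, which is the qualitative frequency dependence the abstract describes.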

  • 29. Eriksson, Johan
    et al.
    Vogel, Edward K.
    Lansner, Anders B.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB. Department of Numerical Analysis and Computer Science, Stockholm University, Sweden.
    Bergstrom, Fredrik
    Nyberg, Lars
    Neurocognitive Architecture of Working Memory2015Ingår i: Neuron, ISSN 0896-6273, E-ISSN 1097-4199, Vol. 88, nr 1, s. 33-46Artikel, forskningsöversikt (Refereegranskat)
    Abstract [en]

    A crucial role for working memory in temporary information processing and guidance of complex behavior has been recognized for many decades. There is emerging consensus that working-memory maintenance results from the interactions among long-term memory representations and basic processes, including attention, that are instantiated as reentrant loops between frontal and posterior cortical areas, as well as sub-cortical structures. The nature of such interactions can account for capacity limitations, lifespan changes, and restricted transfer after working-memory training. Recent data and models indicate that working memory may also be based on synaptic plasticity and that working memory can operate on non-consciously perceived information.

  • 30.
    Farahini, Nasim
    et al.
    KTH, Skolan för informations- och kommunikationsteknik (ICT), Elektroniksystem.
    Hemani, Ahmed
    KTH, Skolan för informations- och kommunikationsteknik (ICT), Elektroniksystem.
    Lansner, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Clermidy, F.
    Svensson, C.
    A scalable custom simulation machine for the Bayesian Confidence Propagation Neural Network model of the brain2014Ingår i: 2014 19th Asia and South Pacific Design Automation Conference (ASP-DAC), IEEE , 2014, s. 578-585Konferensbidrag (Refereegranskat)
    Abstract [en]

    A multi-chip custom digital supercomputer called eBrain for simulating the Bayesian Confidence Propagation Neural Network (BCPNN) model of the human brain has been proposed. It uses Hybrid Memory Cube (HMC) 3D-stacked DRAM memories for storing synaptic weights, integrated with a custom designed logic chip that implements the BCPNN model. In the 22 nm node, eBrain executes BCPNN in real time at 740 TFlop/s, accessing 30 TB of synaptic weights with a bandwidth of 112 TB/s while consuming less than 6 kW of power in the typical case. This efficiency is three orders of magnitude better than general purpose supercomputers in the same technology node.
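
    At its core, the BCPNN model that eBrain executes derives weights and biases from activation statistics. The batch counting version below is only a sketch of that rule; the hardware implements an online spiking variant with exponentially decaying probability traces, and the eps floor for never-co-active units is an illustrative choice.

```python
import math

# BCPNN computes weights from co-activation probabilities:
#   w_ij = log( p_ij / (p_i * p_j) ),   bias_j = log p_j
# Batch/counting sketch; eps avoids log(0) for never-(co-)active units.
def bcpnn_weights(patterns, eps=1e-6):
    """patterns: list of equal-length binary tuples. Returns (weights, biases)."""
    n = len(patterns[0])
    m = float(len(patterns))
    p = [max(sum(x[i] for x in patterns) / m, eps) for i in range(n)]
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            pij = max(sum(x[i] * x[j] for x in patterns) / m, eps)
            w[i][j] = math.log(pij / (p[i] * p[j]))
    bias = [math.log(pi) for pi in p]
    return w, bias

# Units 0 and 1 always co-activate, so w[0][1] is positive; unit 2 never
# fires with them, so w[0][2] is strongly negative.
w, b = bcpnn_weights([(1, 1, 0), (1, 1, 0), (0, 0, 1)])
```
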

  • 31.
    Fiebig, Florian
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Beräkningsvetenskap och beräkningsteknik (CST).
    Herman, Pawel
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Beräkningsvetenskap och beräkningsteknik (CST).
    Lansner, Anders
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Beräkningsvetenskap och beräkningsteknik (CST). Department of Mathematics, Stockholm University, 10691 Stockholm, Swed.
    An Indexing Theory for Working Memory based on Fast Hebbian PlasticityManuskript (preprint) (Övrigt vetenskapligt)
    Abstract [en]

    Working memory (WM) is a key component of human memory and cognitive function. Computational models have been used to uncover the underlying neural mechanisms. However, these studies have mostly focused on the short-term memory aspects of WM and neglected the equally important role of interactions between short- and long-term memory (STM, LTM). Here, we concentrate on these interactions within the framework of our new computational model of WM, which accounts for three cortical patches in macaque brain, corresponding to networks in prefrontal cortex (PFC) together with parieto-temporal cortical areas. In particular, we propose a cortical indexing theory that explains how PFC could associate, maintain and update multi-modal LTM representations. Our simulation results demonstrate how simultaneous, brief multi-modal memory cues could build a temporary joint memory representation linked via an "index" in the prefrontal cortex by means of fast Hebbian synaptic plasticity. The latter can then activate spontaneously and thereby reactivate the associated long-term representations. Cueing one long-term memory item rapidly pattern-completes the associated un-cued item via prefrontal cortex. The STM network updates flexibly as new stimuli arrive thereby gradually over-writing older representations. In a wider context, this WM model suggests a novel explanation for "variable binding", a long-standing and fundamental phenomenon in cognitive neuroscience, which is still poorly understood in terms of detailed neural mechanisms.

  • 32.
    Fiebig, Florian
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsvetenskap och beräkningsteknik (CST). Edinburgh University, UK.
    Lansner, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsvetenskap och beräkningsteknik (CST). Stockholm University, Sweden.
    A Spiking Working Memory Model Based on Hebbian Short-Term Potentiation2017Ingår i: Journal of Neuroscience, ISSN 0270-6474, Vol. 37, nr 1, s. 83-96Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    A dominant theory of working memory (WM), referred to as the persistent activity hypothesis, holds that recurrently connected neural networks, presumably located in the prefrontal cortex, encode and maintain WM memory items through sustained elevated activity. Reexamination of experimental data has shown that prefrontal cortex activity in single units during delay periods is much more variable than predicted by such a theory and associated computational models. Alternative models of WM maintenance based on synaptic plasticity, such as short-term nonassociative (non-Hebbian) synaptic facilitation, have been suggested but cannot account for encoding of novel associations. Here we test the hypothesis that a recently identified fast-expressing form of Hebbian synaptic plasticity (associative short-term potentiation) is a possible mechanism for WM encoding and maintenance. Our simulations using a spiking neural network model of cortex reproduce a range of cognitive memory effects in the classical multi-item WM task of encoding and immediate free recall of word lists. Memory reactivation in the model occurs in discrete oscillatory bursts rather than as sustained activity. We relate dynamic network activity as well as key synaptic characteristics to electrophysiological measurements. Our findings support the hypothesis that fast Hebbian short-term potentiation is a key WM mechanism.
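
    The key ingredient, Hebbian potentiation that expresses quickly and decays over tens of seconds, can be caricatured at the rate level. The sketch below is not the paper's spiking network: binary units, one-step recall and the decay constant tau are all illustrative simplifications.

```python
import math

# Fast Hebbian encoding, slow passive decay, and recall by one synchronous
# thresholded update. All parameters are illustrative.
def encode(w, pattern, lr=1.0):
    """Potentiate synapses between co-active units (fast Hebbian step)."""
    n = len(pattern)
    for i in range(n):
        for j in range(n):
            if i != j and pattern[i] and pattern[j]:
                w[i][j] += lr
    return w

def decay(w, dt, tau=10.0):
    """Short-term potentiation relaxes back toward baseline."""
    f = math.exp(-dt / tau)
    return [[wij * f for wij in row] for row in w]

def recall(w, cue, theta=0.5):
    """One synchronous thresholded update from a partial cue."""
    n = len(cue)
    return [1 if sum(w[i][j] * cue[j] for j in range(n)) > theta else 0
            for i in range(n)]

n = 6
w = [[0.0] * n for _ in range(n)]
item = [1, 1, 1, 0, 0, 0]
cue = [1, 0, 1, 0, 0, 0]
w = encode(w, item)
after_encode = recall(w, cue)   # the partial cue completes the stored item
w = decay(w, dt=60.0)
after_delay = recall(w, cue)    # after a long delay the trace has faded
```

    A single exposure suffices to store a novel association, which is what distinguishes this associative mechanism from non-Hebbian facilitation.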

  • 33.
    Fiebig, Florian
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Lansner, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Memory consolidation from seconds to weeks: a three-stage neural network model with autonomous reinstatement dynamics2014Ingår i: Frontiers in Computational Neuroscience, ISSN 1662-5188, E-ISSN 1662-5188, Vol. 8, s. 64-Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    Declarative long-term memories are not created in an instant. The gradual stabilization and temporally shifting dependence of acquired declarative memories on different brain regions - called systems consolidation - can be tracked in time by lesion experiments. The observation of temporally graded retrograde amnesia (RA) following hippocampal lesions points to a gradual transfer of memory from hippocampus to neocortical long-term memory. Spontaneous reactivations of hippocampal memories, as observed in place cell reactivations during slow-wave sleep, are supposed to drive neocortical reinstatements and facilitate this process. We propose a functional neural network implementation of these ideas and furthermore suggest an extended three-stage framework that includes the prefrontal cortex (PFC). It bridges the temporal chasm between working memory percepts on the scale of seconds and consolidated long-term memory on the scale of weeks or months. We show that our three-stage model can autonomously produce the necessary stochastic reactivation dynamics for successful episodic memory consolidation. The resulting learning system is shown to exhibit classical memory effects seen in experimental studies, such as retrograde and anterograde amnesia (AA) after simulated hippocampal lesioning; furthermore, the model reproduces peculiar biological findings on memory modulation, such as retrograde facilitation of memory after suppressed acquisition of new long-term memories - similar to the effects of benzodiazepines on memory.

  • 34.
    Fiebig, Florian
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB. Stockholm Univ, Sweden.
    Lansner, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB. Stockholm University, Sweden.
    Memory Consolidation from Seconds to Weeks Through Autonomous Reinstatement Dynamics in a Three-Stage Neural Network Model2015Ingår i: ADVANCES IN COGNITIVE NEURODYNAMICS (IV), SPRINGER , 2015, s. 47-53Konferensbidrag (Refereegranskat)
    Abstract [en]

    Long-term memories for facts and events are not created in an instant. Rather, memories stabilize gradually over time and involve various brain regions. The shifting dependence of acquired declarative memories on different brain regions - called systems consolidation - can be tracked in time by lesion experiments and has led to the development of the Complementary Learning Systems framework, which focuses on hippocampal-cortical interaction. Observations of temporally graded retrograde amnesia following hippocampal lesions, point to a gradual transfer from hippocampus to cortical long-term memory. Spontaneous reactivations of hippocampal memories, as observed in place cell reactivations during slow-wave-sleep, are supposed to drive cortical reinstatements and facilitate this process. We propose a functional neural network implementation of these ideas and furthermore suggest an extended three-stage framework that also includes the prefrontal cortex and bridges the temporal chasm between working memory percepts on the scale of seconds and consolidated long-term memory on the scale of weeks or months. We show that our three-stage model can autonomously produce the necessary stochastic reactivation dynamics for successful episodic memory consolidation. The resulting learning system is shown to exhibit classical memory effects seen in experimental studies, such as retrograde and anterograde amnesia after simulated hippocampal lesioning.

  • 35. Fonollosa, J.
    et al.
    Gutierrez-Galvez, A.
    Lansner, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Martinez, D.
    Rospars, J. P.
    Beccherelli, R.
    Perera, A.
    Pearce, T.
    Vershure, P.
    Persaud, K.
    Marco, S.
    Biologically inspired computation for chemical sensing2011Ingår i: Procedia Comput. Sci., 2011, s. 226-227Konferensbidrag (Refereegranskat)
    Abstract [en]

    In this paper, we present how the achievements related to the NEUROCHEM project (FP7, Bio-ICT, Grant number 216916) have increased the understanding of the olfactory system and helped to develop novel computing architectures and models for chemical sensing. We present the developed computational models of the olfactory pathway of vertebrates and insects to capture the mechanisms that underlie their chemical information processing abilities. To mimic the biological olfactory epithelium, a large-scale chemical sensor array has been developed. We also present a robot that demonstrates the chemical search task as a direct application of the computing paradigms extracted.

  • 36.
    Fransén, Erik
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Numerisk Analys och Datalogi, NADA.
    Kozlov, Alexander
    KTH, Skolan för datavetenskap och kommunikation (CSC), Numerisk Analys och Datalogi, NADA.
    Xie, Yuecong
    KTH, Skolan för datavetenskap och kommunikation (CSC), Numerisk Analys och Datalogi, NADA.
    Christensen, C.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Numerisk Analys och Datalogi, NADA.
    Djurfeldt, Mikael
    KTH, Skolan för datavetenskap och kommunikation (CSC), Numerisk Analys och Datalogi, NADA.
    Ekeberg, Örjan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Numerisk Analys och Datalogi, NADA.
    Lansner, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Numerisk Analys och Datalogi, NADA.
    Evaluation of model scalability in parallel neural simulators2005Konferensbidrag (Refereegranskat)
    Abstract [en]

    A long-standing belief in neuroscience has been that the brain, and specifically the neocortex, obtains its computational power by massive parallelism. Albeit conceptually appealing, this notion that effective processing requires large networks has not been possible to test in detailed simulations. In one project, we intend to study the generation of theta activity in the entorhinal-hippocampal system. Several simulation studies indicate that frequency and synchronization of the oscillation generated may depend on density of connectivity and/or geometry of connections. In a second project, we are studying how a model of early visual processing scales towards realistic sizes. To effectively evaluate the model, it must be scaled up to sizes where the processing demands from the given input are sufficiently high, and where the network is sufficiently large to process this information.

    We have in preliminary studies tested two parallel simulators. One is an MPI-enabled version of pGENESIS from the University of Sunderland, UK. The other is SPLIT, software produced in our own laboratory. Both have been tested on an Itanium2 cluster. Tests include varying the number of processors and scaling the number of neurons/compartments or the number of synapses. In these simulations, the average spike frequency in the network is also varied. The aim is to identify the main bottlenecks. For instance, we foresee the need to parallelize the construction/layout of synapses.

  • 37.
    Fransén, Erik
    et al.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Lansner, Anders
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    A model of cortical associative memory based on a horizontal network of connected columns1998Ingår i: Network, ISSN 0954-898X, E-ISSN 1361-6536, Vol. 9, nr 2, s. 235-264Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    An attractor network model of cortical associative memory functions has been constructed and simulated. By replacing the single cell as the functional unit by multiple cells in cortical columns connected by long-range fibers, the model is improved in terms of correspondence with cortical connectivity. The connectivity is improved, since the original dense and symmetric connectivity of a standard recurrent network becomes sparse and asymmetric at the cell-to-cell level. Our simulations show that this kind of network, with model neurons of the Hodgkin-Huxley type arranged in columns, can operate as an associative memory in much the same way as previous models having simpler connectivity. The network shows attractor-like behaviour and performs the standard assembly operations despite differences in the dynamics introduced by the more detailed cell model and network structure. Furthermore, the model has become sufficiently detailed to allow evaluation against electrophysiological and anatomical observations. For instance, cell activities comply with experimental findings and reaction times are within biological and psychological ranges. By introducing a scaling model we demonstrate that a network approaching experimentally reported neuron numbers and synaptic distributions also could work like the model studied here.
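
    The paper's central structural move, replacing each unit of the recurrent network with a column of cells joined by sparse long-range fibers, can be illustrated with a toy construction: a dense, symmetric column-level weight matrix is realized as a sparse, asymmetric cell-to-cell matrix. The column count, cells per column and fiber probability below are arbitrary choices, not the paper's numbers.

```python
import random

random.seed(1)
n_cols, cells_per_col, p_fiber = 4, 5, 0.3

# Dense, symmetric column-level connectivity (e.g. from Hebbian learning).
W_col = [[0.0 if i == j else 1.0 for j in range(n_cols)] for i in range(n_cols)]

# Realize each column-to-column connection by independently drawn sparse fibers.
n = n_cols * cells_per_col
W_cell = [[0.0] * n for _ in range(n)]
for ci in range(n_cols):
    for cj in range(n_cols):
        if W_col[ci][cj] == 0.0:
            continue
        for a in range(cells_per_col):
            for b in range(cells_per_col):
                if random.random() < p_fiber:
                    W_cell[ci * cells_per_col + a][cj * cells_per_col + b] = 1.0

# The cell-level matrix is sparse and asymmetric even though the
# column-level matrix is dense and symmetric.
asym_pairs = sum(1 for i in range(n) for j in range(n) if W_cell[i][j] != W_cell[j][i])
density = sum(1 for row in W_cell for x in row if x != 0.0) / float(n * n)
```
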

  • 38. Grillner, Sten
    et al.
    Kozlov, Alexander
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Dario, Paolo
    Stefanini, Cesare
    Menciassi, Arianna
    Lansner, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Hellgren Kotaleski, Jeanette
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Modeling a vertebrate motor system: pattern generation, steering and control of body orientation2007Ingår i: Progress in Brain Research, ISSN 0079-6123, E-ISSN 1875-7855, Vol. 165, s. 221-234Artikel, forskningsöversikt (Refereegranskat)
    Abstract [en]

    The lamprey is one of the few vertebrates in which the neural control system for goal-directed locomotion including steering and control of body orientation is well described at a cellular level. In this report we review the modeling of the central pattern-generating network, which has been carried out based on detailed experimentation. In the same way the modeling of the control system for steering and control of body orientation is reviewed, including neuromechanical simulations and robotic devices.

  • 39.
    Hellgren Kotaleski, Jeanette
    et al.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Grillner, S
    Lansner, Anders
    KTH, Tidigare Institutioner                               , Numerisk analys och datalogi, NADA.
    Computer simulation of the segmental neural network generating locomotion in lamprey by using populations of network interneurons1992Ingår i: Biological Cybernetics, ISSN 0340-1200, E-ISSN 1432-0770, Vol. 68, s. 1-13Artikel i tidskrift (Övrigt vetenskapligt)
  • 40.
    Hellgren Kotaleski, Jeanette
    et al.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Grillner, Sten
    Lansner, Anders
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Neural mechanisms potentially contributing to the intersegmental phase lag in lamprey I.: Segmental oscillations dependent on reciprocal inhibition1999Ingår i: Biological Cybernetics, ISSN 0340-1200, E-ISSN 1432-0770, Vol. 81, nr 4, s. 317-330Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    Factors contributing to the production of a phase lag along chains of oscillatory networks consisting of Hodgkin-Huxley type neurons are analyzed by means of simulations. Simplified network configurations are explored consisting of the basic building blocks of the spinal central pattern generator (CPG) generating swimming in the lamprey. It consists of reciprocally coupled crossed inhibitory C interneurons and ipsilateral excitatory E interneurons that activate C neurons and other E neurons. Oscillatory activity in the model network can, in the simplest case, be produced by a pair of reciprocally coupled C interneurons oscillating through an escape mechanism. Different levels of tonic excitation drive the network over a wide burst frequency range. In this type of network, powerful frequency-regulating factors are the effective inhibition produced by the active side, in combination with the tendency of the inactive side to escape from the inhibition. These two mechanisms can be affected by several factors, e.g. spike frequency adaptation (calcium-dependent K+ channels), N-methyl-D-aspartate membrane properties as well as presence of low-voltage activated calcium channels. A rostrocaudal phase lag can be produced either by extending the contralateral inhibitory projections or the ipsilateral excitatory projections relatively more in the caudal than the rostral direction, since both an increased inhibition and a phasic excitation slow down the receiving network. The phase lag becomes decreased if the length of the intersegmental projections is increased or if the projections are extended symmetrically in both the rostral and the caudal directions. The simulations indicate that the conditions in the ends of an oscillator chain may significantly affect sign, magnitude and constancy of the phase lag.
Also, with short and relatively weak intersegmental connections, the network remains robust against perturbations as well as intrinsic frequency differences along the chain. The phase lag (percentage of cycle duration) increases, however, with burst frequency also when the coupling strength is comparatively weak. The results are discussed and compared with previous "phase pulling" models as well as relaxation oscillators.
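The escape-mechanism oscillation described in this abstract, where the inactive side escapes from the inhibition exerted by the active side, can be illustrated with a deliberately simplified rate model rather than the paper's Hodgkin-Huxley network. The sketch below uses two mutually inhibiting leaky units with a slow adaptation variable standing in for the calcium-dependent K+ channels; all parameter values are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Toy half-center oscillator: two mutually inhibiting rate units with slow
# adaptation standing in for calcium-dependent K+ channels. This sketches
# the escape mechanism only; all parameter values are illustrative.
def simulate_half_center(drive=1.0, w_inh=2.0, g_adapt=2.0,
                         tau=10.0, tau_adapt=100.0, dt=0.1, steps=20000):
    """Euler-integrate the two units; returns an array of shape (steps, 2)."""
    x = np.array([0.6, 0.1])      # firing rates of the two sides
    a = np.zeros(2)               # slow adaptation variables
    trace = np.empty((steps, 2))
    for i in range(steps):
        # net input: tonic drive minus crossed inhibition and own adaptation
        inp = drive - w_inh * x[::-1] - g_adapt * a
        x = x + dt * (-x + np.maximum(inp, 0.0)) / tau
        a = a + dt * (x - a) / tau_adapt
        trace[i] = x
    return trace

trace = simulate_half_center()
```

With these numbers the switch happens for the reason the abstract names: the active unit's rate sags as its adaptation builds, while the silent side's adaptation decays, until the silent side's net input turns positive and it escapes.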

  • 41.
    Hellgren Kotaleski, Jeanette
    et al.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Lansner, Anders
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Grillner, Sten
    Neural mechanisms potentially contributing to the intersegmental phase lag in lamprey II.: Hemisegmental oscillations produced by mutually coupled excitatory neurons (1999). In: Biological Cybernetics, ISSN 0340-1200, E-ISSN 1432-0770, Vol. 81, no. 4, pp. 299-315. Article in journal (Refereed)
    Abstract [en]

    Most previous models of the spinal central pattern generator (CPG) underlying locomotion in the lamprey have relied on reciprocal inhibition between the left and right sides for oscillations to be produced. Here, we have explored the consequences of using self-oscillatory hemisegments. Within a single hemisegment, the oscillations are produced by a network of recurrently coupled excitatory neurons (E neurons) that by themselves are not oscillatory but that, when coupled together through N-methyl-D-aspartate (NMDA) and α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA)/kainate transmission, can produce oscillations. The bursting mechanism relies on intracellular accumulation of calcium that activates Ca2+-dependent K+ channels. The intracellular calcium is modeled by two different intracellular calcium pools, one of which represents the calcium entry following the action potential, the Ca-AP pool, while the other represents the calcium inflow through the NMDA channels, the Ca-NMDA pool. The Ca2+-dependent K+ currents activated by these two calcium pools are referred to as K-CaAP and K-CaNMDA, respectively, and their relative conductances are modulated and increase with the background activation of the network. When changing the background stimulation, the bursting activity in this network can be made to cover a frequency range of 0.5-5.5 Hz with reasonable burst proportions if the adaptation is modulated with the activity. When a chain of such hemisegments is coupled together, a phase lag along the chain can be produced. The local oscillations as well as the phase lag depend on the axonal conduction delay as well as on the types of excitatory coupling that are assumed, i.e. AMPA/kainate and/or NMDA. When the caudal excitatory projections extend further than the rostral ones, and are assumed to be of approximately equal strength, this kind of network is capable of reproducing several experimental observations, such as those occurring during strychnine blockade of the left-right reciprocal inhibition. Addition of reciprocally coupled inhibitory neurons to such a network gives rise to antiphasic activity between the left and right sides, but not necessarily to any change of frequency if the burst proportion of the hemisegmental bursts is well below 50%. Prolongation of the C neuron projections in the rostrocaudal direction restricts the phase lag produced by the purely excitatory hemisegmental network by locking together the interburst intervals at different levels of the spinal cord.
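The two-pool calcium bookkeeping described above (a Ca-AP pool incremented at each action potential, a Ca-NMDA pool driven by NMDA-channel activation, each feeding its own Ca2+-dependent K+ conductance) can be sketched as first-order kinetics. Function names and parameter values below are illustrative assumptions, not the model's fitted values:

```python
# First-order sketch of the two-pool calcium bookkeeping: a Ca-AP pool
# incremented at each spike and a Ca-NMDA pool driven by NMDA activation,
# each decaying with its own time constant and contributing to the total
# Ca2+-dependent K+ conductance. All values are illustrative assumptions.
def step_calcium(ca_ap, ca_nmda, spiked, s_nmda, dt=1.0,
                 tau_ap=50.0, tau_nmda=300.0, inc_ap=1.0, k_in=0.1):
    """Advance both pools by one time step (milliseconds)."""
    ca_ap = ca_ap * (1.0 - dt / tau_ap) + (inc_ap if spiked else 0.0)
    ca_nmda = ca_nmda * (1.0 - dt / tau_nmda) + dt * k_in * s_nmda
    return ca_ap, ca_nmda

def k_ca_conductance(ca_ap, ca_nmda, g_kca_ap=1.0, g_kca_nmda=0.5):
    """Total adaptation conductance: K-CaAP plus K-CaNMDA contributions."""
    return g_kca_ap * ca_ap + g_kca_nmda * ca_nmda

# Drive with a 100 Hz spike burst for 100 ms, then let both pools decay.
ca_ap, ca_nmda = 0.0, 0.0
for t in range(100):
    ca_ap, ca_nmda = step_calcium(ca_ap, ca_nmda, spiked=(t % 10 == 0), s_nmda=1.0)
g_peak = k_ca_conductance(ca_ap, ca_nmda)
for t in range(1000):
    ca_ap, ca_nmda = step_calcium(ca_ap, ca_nmda, spiked=False, s_nmda=0.0)
g_late = k_ca_conductance(ca_ap, ca_nmda)
```

The slower Ca-NMDA pool outlasts the Ca-AP pool after the burst ends, which is the kind of separation of adaptation time scales the abstract exploits for burst termination.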

  • 42.
    Hellgren Kotaleski, Jeanette
    et al.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Lansner, Anders
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Grillner, Sten
    Production of phase lag in chains of neural networks oscillating through an escape mechanism (1998). In: Proceedings of the sixth annual conference on Computational neuroscience: trends in research, 1998, pp. 65-70. Conference paper (Other academic)
  • 43.
    Herman, Pawel Andrzej
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB. Stockholm University.
    Lundqvist, Mikael
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB. Stockholm University.
    Lansner, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB. Stockholm University.
    Nested theta to gamma oscillations and precise spatiotemporal firing during memory retrieval in a simulated attractor network (2013). In: Brain Research, ISSN 0006-8993, E-ISSN 1872-6240, Vol. 1536, no. SI, pp. 68-87. Article in journal (Refereed)
    Abstract [en]

    Nested oscillations, where the phase of the underlying slow rhythm modulates the power of faster oscillations, have recently attracted considerable research attention, as the increased phase-coupling of cross-frequency oscillations has been shown to relate to memory processes. Here we investigate the hypothesis that reactivations of memory patterns, induced by either external stimuli or internal dynamics, are manifested as distributed cell assemblies oscillating at gamma-like frequencies with lifetimes on a theta scale. For this purpose, we study the spatiotemporal oscillatory dynamics of a previously developed meso-scale attractor network model as a correlate of its memory function. The focus is on a hierarchical nested organization of neural oscillations in the delta/theta (2-5 Hz) and gamma (25-35 Hz) frequency bands, and in some conditions even in the lower alpha band (8-12 Hz), which emerge in the synthesized field potentials during attractor memory retrieval. We also examine the spiking behavior of the network in close relation to the oscillations. Despite highly irregular firing during memory retrieval and random connectivity within each cell assembly, we observe precise spatiotemporal firing patterns that repeat across memory activations at a rate higher than expected from random firing. In contrast to earlier studies aimed at modeling neural oscillations, our attractor memory network allows us to elaborate on the functional context of emerging rhythms and discuss their relevance. We provide support for the hypothesis that the dynamics of coherent delta/theta oscillations constitute an important aspect of the formation and replay of neuronal assemblies. This article is part of a Special Issue entitled Neural Coding 2012.
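The nested theta-gamma organization reported above is commonly quantified with a phase-amplitude coupling measure. The sketch below, which is not taken from the paper, builds a synthetic field potential whose gamma amplitude follows the theta phase and scores it with a Tort-style modulation index, using only NumPy:

```python
import numpy as np

def fft_bandpass(x, fs, lo, hi):
    """Zero-phase band-pass by FFT bin masking (adequate for this synthetic demo)."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spec, len(x))

def analytic_signal(x):
    """Hilbert-transform-style analytic signal using only NumPy."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[1:n // 2] = 2.0
        h[n // 2] = 1.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

def modulation_index(sig, fs, phase_band=(2.0, 6.0), amp_band=(25.0, 35.0), nbins=18):
    """Tort-style MI: KL divergence of the phase-binned gamma envelope
    from a uniform distribution, normalized by log(nbins)."""
    phase = np.angle(analytic_signal(fft_bandpass(sig, fs, *phase_band)))
    env = np.abs(analytic_signal(fft_bandpass(sig, fs, *amp_band)))
    bins = np.clip(((phase + np.pi) / (2 * np.pi) * nbins).astype(int), 0, nbins - 1)
    mean_amp = np.array([env[bins == b].mean() for b in range(nbins)])
    p = mean_amp / mean_amp.sum()
    return (np.log(nbins) + np.sum(p * np.log(p))) / np.log(nbins)

# Synthetic signals: in `coupled`, gamma amplitude tracks the theta phase.
fs, dur = 1000.0, 10.0
t = np.arange(int(fs * dur)) / fs
theta = np.sin(2 * np.pi * 4.0 * t)
coupled = theta + 0.3 * (1.0 + 0.8 * theta) * np.sin(2 * np.pi * 30.0 * t)
uncoupled = theta + 0.3 * np.sin(2 * np.pi * 30.0 * t)
```

The index is near zero when the gamma envelope is flat across theta phases and grows with the depth of the nesting, which is the property that makes it a convenient correlate of the cross-frequency coupling the abstract describes.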

  • 44.
    Herman, Pawel
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Lansner, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Odor recognition framework for evaluating olfactory codes (2011). Conference paper (Other academic)
  • 45.
    Herman, Pawel
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Lundqvist, Mikael
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Lansner, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Oscillations in a simulated meso-scale memory network: origin and function of theta to gamma rhythms. Article in journal (Other academic)
  • 46.
    Huss, Mikael
    et al.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Hess, Dietmar
    d'Incamps, Boris Lamotte
    El Manira, Abdeljabbar
    Lansner, Anders
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Hellgren Kotaleski, Jeanette
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Role of A-current in lamprey locomotor network neurons (2003). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 52-54, pp. 295-300. Article in journal (Refereed)
    Abstract [en]

    A compartmental model of lamprey central pattern generator neurons was built in order to examine the effects of a fast, transient, high-voltage-activated potassium current (A-current) found experimentally. The model consisted of a soma, a compartment corresponding to the axon initial segment, and a dendritic tree. The simulation showed that the A-current was necessary for repetitive spiking in the single neuron following current injection. The functional role of adding an A-current was also examined in a network model. In this model, the A-current stabilizes the swimming rhythm by making the burst cycle duration and the number of spikes per burst less variable. All these effects are also seen experimentally.
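A fast, transient potassium current of the kind examined here is conventionally written in Hodgkin-Huxley form as I_A = g_A * a^3 * b * (V - E_K), with fast activation a and slower inactivation b. The sketch below simulates such a current under a voltage-clamp step; the Boltzmann midpoints, slopes, and time constants are generic textbook-style assumptions, not the fitted lamprey parameters:

```python
import math

# Hodgkin-Huxley-style sketch of a fast transient (A-type) K+ current under
# a voltage-clamp step. Kinetic parameters are generic assumptions.
def boltzmann(v, v_half, k):
    """Steady-state activation/inactivation curve (k < 0 for inactivation)."""
    return 1.0 / (1.0 + math.exp(-(v - v_half) / k))

def simulate_a_current(v_hold=-70.0, v_step=-20.0, gbar=10.0, e_k=-80.0,
                       tau_a=1.0, tau_b=10.0, dt=0.01, t_end=50.0):
    """Return the I_A time course after stepping the clamp from v_hold to v_step."""
    a = boltzmann(v_hold, -45.0, 10.0)   # activation starts near closed
    b = boltzmann(v_hold, -70.0, -6.0)   # inactivation partly available
    trace = []
    for _ in range(int(t_end / dt)):
        a += dt * (boltzmann(v_step, -45.0, 10.0) - a) / tau_a
        b += dt * (boltzmann(v_step, -70.0, -6.0) - b) / tau_b
        trace.append(gbar * a ** 3 * b * (v_step - e_k))
    return trace

trace = simulate_a_current()
```

Because activation is an order of magnitude faster than inactivation, the current rises sharply and then collapses, giving the transient outward pulse that lets an A-current shape spike repolarization and repetitive firing.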

  • 47.
    Huss, Mikael
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Numerisk Analys och Datalogi, NADA. KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Lansner, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Wallén, Peter
    Department of Neuroscience, Nobel Institute for Neurophysiology, Karolinska Institutet.
    El Manira, Abdeljabbar
    Department of Neuroscience, Nobel Institute for Neurophysiology, Karolinska Institutet.
    Grillner, Sten
    Department of Neuroscience, Nobel Institute for Neurophysiology, Karolinska Institutet.
    Hellgren Kotaleski, Jeanette
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Roles of ionic currents in lamprey CPG neurons: a modeling study (2007). In: Journal of Neurophysiology, ISSN 0022-3077, E-ISSN 1522-1598, Vol. 97, no. 4, pp. 2696-2711. Article in journal (Refereed)
    Abstract [en]

    The spinal network underlying locomotion in the lamprey consists of a core network of glutamatergic and glycinergic interneurons, previously studied experimentally and through mathematical modeling. We present a new and more detailed computational model of lamprey locomotor network neurons, based primarily on detailed electrophysiological measurements and incorporating new experimental findings. The model uses a Hodgkin-Huxley-like formalism and consists of 86 membrane compartments containing 12 types of ion currents. One of the goals was to introduce a fast, transient potassium current (K-t) and two sodium-dependent potassium currents, one faster (K-NaF) and one slower (K-NaS), into the model. Not only has the model lent support to the interpretation of experimental results, but it has also provided predictions for further experimental analysis of single network neurons. For example, K-t was shown to be one critical factor for controlling action potential duration. In addition, the model has proved helpful in investigating the possible influence of the slow afterhyperpolarization on repetitive firing during ongoing activation. In particular, the balance between the simulated slow sodium-dependent and calcium-dependent potassium currents has been explored, as well as the possible involvement of dendritic conductances.

  • 48.
    Huss, Mikael
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Numerisk Analys och Datalogi, NADA. KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Lansner, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Numerisk Analys och Datalogi, NADA.
    Wallén, Peter
    El Manira, Abdeljabbar
    Grillner, Sten
    Kotaleski, Jeanette Hellgren
    KTH, Skolan för datavetenskap och kommunikation (CSC), Numerisk Analys och Datalogi, NADA.
    Functional roles of ionic currents in lamprey CPG neurons: a model study. Manuscript (preprint) (Other academic)
  • 49. Iatropoulos, G.
    et al.
    Herman, Pawel
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Beräkningsvetenskap och beräkningsteknik (CST).
    Lansner, Anders
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Beräkningsvetenskap och beräkningsteknik (CST).
    Karlgren, Jussi
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Teoretisk datalogi, TCS. Gavagai, Slussplan 9, Stockholm, Sweden.
    Larsson, M.
    Olofsson, J. K.
    The language of smell: Connecting linguistic and psychophysical properties of odor descriptors (2018). In: Cognition, ISSN 0010-0277, E-ISSN 1873-7838, Vol. 178, pp. 37-49. Article in journal (Refereed)
    Abstract [en]

    The olfactory sense is a particularly challenging domain for cognitive science investigations of perception, memory, and language. Although many studies show that odors often are difficult to describe verbally, little is known about the associations between olfactory percepts and the words that describe them. Quantitative models of how odor experiences are described in natural language are therefore needed to understand how odors are perceived and communicated. In this study, we develop a computational method to characterize the olfaction-related semantic content of words in a large text corpus of internet sites in English. We introduce two new metrics: olfactory association index (OAI, how strongly a word is associated with olfaction) and olfactory specificity index (OSI, how specific a word is in its description of odors). We validate the OAI and OSI metrics using psychophysical datasets by showing that terms with high OAI have high ratings of perceived olfactory association and are used to describe highly familiar odors. In contrast, terms with high OSI have high inter-individual consistency in how they are applied to odors. Finally, we analyze Dravnieks's (1985) dataset of odor ratings in terms of OAI and OSI. This analysis reveals that terms that are used broadly (applied often but with moderate ratings) tend to be olfaction-unrelated and abstract (e.g., “heavy” or “light”; low OAI and low OSI) while descriptors that are used selectively (applied seldom but with high ratings) tend to be olfaction-related (e.g., “vanilla” or “licorice”; high OAI). Thus, OAI and OSI provide behaviorally meaningful information about olfactory language. These statistical tools are useful for future studies of olfactory perception and cognition, and might help integrate research on odor perception, neuroimaging, and corpus-based linguistic models of semantic organization.
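To make the flavor of the OAI metric concrete, the toy sketch below scores a word's olfactory association as the cosine similarity between its semantic vector and the centroid of a small set of odor-related seed words. This is a schematic stand-in: the vectors, the seed set, and even this exact formula are invented for illustration and are not the paper's corpus-derived method:

```python
import math

# Toy OAI-style score (NOT the paper's formula or data): olfactory
# association approximated as cosine similarity between a word's semantic
# vector and the centroid of odor-related seed words. All vectors invented.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def oai_like(word_vec, seed_vecs):
    """Similarity of word_vec to the centroid of the olfactory seed set."""
    centroid = [sum(dim) / len(seed_vecs) for dim in zip(*seed_vecs)]
    return cosine(word_vec, centroid)

# Toy 3-d "embeddings"; axes loosely read as (smell, taste, abstract).
seeds = [[0.9, 0.3, 0.1], [0.8, 0.5, 0.0]]        # odor-related seed words
score_vanilla = oai_like([0.7, 0.6, 0.1], seeds)  # concrete odor descriptor
score_heavy = oai_like([0.1, 0.1, 0.9], seeds)    # abstract descriptor
```

In this toy space the concrete descriptor scores higher than the abstract one, mirroring the paper's contrast between high-OAI terms like "vanilla" and low-OAI terms like "heavy".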

  • 50. Iatropoulos, Georgios
    et al.
    Olofsson, Jonas
    Herman, Pawel
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsvetenskap och beräkningsteknik (CST).
    Lansner, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsvetenskap och beräkningsteknik (CST).
    Larsson, Maria
    Analysis of Statistics and Semantic Relations of Odor-Describing Words in Written Olfactory Versus Non-Olfactory Contexts (2017). In: Chemical Senses, ISSN 0379-864X, E-ISSN 1464-3553, Vol. 42, no. 2, pp. E34-E35. Article in journal (Refereed)