1 - 44 of 44
  • 1. Berg, H.
    et al.
    Olsson, R.
    Lindblad, Thomas
    KTH, School of Engineering Sciences (SCI), Physics, Particle and Astroparticle Physics.
    Chilo, J.
    Automatic design of pulse coupled neurons for image segmentation. 2008. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 71, no. 10-12, p. 1980-1993. Article in journal (Refereed).
    Abstract [en]

    Automatic Design of Algorithms through Evolution (ADATE) is a program synthesis system that creates recursive programs in a functional language, with automatic invention of recursive help functions and self-adaptive optimization of numerical values. We implement a neuron in a pulse coupled neural network (PCNN) as a recursive function in the ADATE language and then use ADATE to automatically evolve better PCNN neurons for image segmentation. Our technique is generally applicable for the automatic improvement of most image processing algorithms and neural computing methods. It may be used either to improve a given implementation in general or to tailor that implementation to a specific problem, which for image segmentation can be, for example, road following for autonomous vehicles or infrared image segmentation for heat-seeking missiles that must distinguish the heat source of the target from flares.
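
For readers unfamiliar with the baseline neuron model, here is a minimal Python sketch of a standard PCNN iteration of the kind such work starts from. The 3x3 linking kernel, all coefficients, and the iteration count are illustrative assumptions and are not taken from the paper, whose evolved neurons differ from this textbook form.

```python
import numpy as np
from scipy.signal import convolve2d

def pcnn_pass(image, beta=0.2, a_f=0.1, a_l=1.0, a_t=0.5,
              v_f=0.5, v_l=0.2, v_t=20.0, steps=10):
    """Standard pulse coupled neural network pass over a grayscale image.

    Returns the binary pulse map of the final iteration, usable as a crude
    segmentation mask. All parameter values are illustrative.
    """
    image = np.asarray(image, dtype=float)   # expected to be scaled to [0, 1]
    F = np.zeros_like(image)                 # feeding compartment
    L = np.zeros_like(image)                 # linking compartment
    T = np.ones_like(image)                  # dynamic threshold
    Y = np.zeros_like(image)                 # pulse output
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    for _ in range(steps):
        W = convolve2d(Y, kernel, mode="same")   # coupling from neighbouring pulses
        F = np.exp(-a_f) * F + v_f * W + image
        L = np.exp(-a_l) * L + v_l * W
        U = F * (1.0 + beta * L)                 # internal activity
        Y = (U > T).astype(float)                # fire where activity exceeds threshold
        T = np.exp(-a_t) * T + v_t * Y           # threshold jumps after a pulse, then decays
    return Y
```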

  • 2.
    Cartling, Bo
    KTH, School of Engineering Sciences (SCI), Theoretical Physics, Statistical Physics.
    On the implicit acquisition of a context-free grammar by a simple recurrent neural network. 2008. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 71, no. 7-9, p. 1527-1537. Article in journal (Refereed).
    Abstract [en]

    The performance of a simple recurrent neural network on the implicit acquisition of a context-free grammar is re-examined and found to be significantly higher than previously reported by Elman. This result is obtained although the previous work employed a multilayer extension of the basic form of simple recurrent network and restricted the complexity of training and test corpora. The high performance is traced to a well-organized internal representation of the grammatical elements, as probed by a principal-component analysis of the hidden-layer activities. From the next-symbol-prediction performance on sentences not present in the training corpus, a capacity of generalization is demonstrated.

  • 3.
    Cürüklü, Baran
    et al.
    Department of Computer Science and Engineering, Mälardalen University.
    Lansner, Anders
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    A model of the summation pools within the layer 4 (area 17). 2005. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 65, p. 167-172. Article in journal (Refereed).
    Abstract [en]

    We propose a developmental model of the summation pools within the layer 4. The model is based on the modular structure of the neocortex and captures some of the known properties of layer 4. Connections between the orientation minicolumns are developed during exposure to visual input. Excitatory local connections are dense and biased towards the iso-orientation domain. Excitatory long-range connections are sparse and target all orientation domains equally. Inhibition is local. The summation pools are elongated along the orientation axis. These summation pools can facilitate weak and poorly tuned LGN input and explain improved visibility as an effect of enlargement of a stimulus.

  • 4.
    Djurfeldt, Mikael
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Ekeberg, Örjan
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Graybiel, Ann M.
    Brain and Cognitive Sciences, MIT, Cambridge, MA, U.S.A.
    Cortex-basal ganglia interaction and attractor states. 2001. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 38-40, p. 573-579. Article in journal (Refereed).
    Abstract [en]

    We propose a set of hypotheses about how the basal ganglia contribute to information processing in cortical networks and how the cortex and basal ganglia interact during learning and behavior. We introduce a computational model at the level of a system of networks. We suggest that the basal ganglia control cortical activity by pushing a local cortical network into a new attractor state, thereby selecting certain attractors over others. The ideas of temporal difference learning and convergence of corticostriatal fibers from multiple cortical areas within the striatum are combined in a modular learning system capable of acquiring behavior with sequential structure.

  • 5.
    Djurfeldt, Mikael
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Sandberg, Anders
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Ekeberg, Örjan
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Lansner, Anders
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    See - A framework for simulation of biologically detailed and artificial neural networks and systems. 1999. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 26-27, p. 997-1003. Article in journal (Refereed).
    Abstract [en]

    See is a software framework for simulation of biologically detailed and artificial neural networks and systems. It includes a general purpose scripting language, based on Scheme, which can also be used interactively, while the basic framework is written in C++. Models can be built on the Scheme level from 'simulation objects', each representing a population of neurons, a projection, etc. The simulator provides a flexible and efficient protocol for data transfer between such objects. See contains a user interface to the parallelized, platform-independent library SPLIT, intended for biologically detailed modeling of large-scale networks, and is easy to extend with new user code on both the C++ and Scheme levels.

  • 6.
    Eriksson, David
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Fransén, Erik
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Zilberter, Y.
    Lansner, Anders
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Effects of short-term synaptic plasticity in a local microcircuit on cell firing. 2003. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 52-54, p. 7-12. Article in journal (Refereed).
    Abstract [en]

    Effects of short-term synaptic plasticity on cell firing properties in a microcircuit formed by a reciprocally connected pyramidal cell and FSN interneuron in layer 2/3 of neocortex were analyzed in a biophysical model. Induction of synaptic depression by backpropagating dendritic action potentials was replicated, as well as the resulting time dependent depression of IPSP amplitudes. Results indicate that the effect of the depression becomes significant above 30 Hz input frequency. The magnitude of the effect depends on the time constant of the dendritic calcium regulating the depression. The frequency range depends on the time constant of the IPSP.

  • 7.
    Fransén, Erik
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    A synapse which can switch from inhibitory to excitatory and back. 2005. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 65, p. 39-45. Article in journal (Refereed).
    Abstract [en]

    Co-release of transmitters has recently been observed at synapse terminals and can even be a combination such as glutamate and GABA. A second recent experimental finding is a short-term synaptic plasticity which depends on postsynaptic depolarization releasing a dendritic transmitter that affects presynaptic release probability. In this work we investigate the functional consequences for a synapse that has both co-release and conditioning depression. If the GABA component is initially larger than the glutamate component, the synapse has an inhibitory net effect. However, if the postsynaptic cell is conditioned, the GABA component will be suppressed, yielding an excitatory synapse.

  • 8.
    Fransén, Erik
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Coexistence of synchronized oscillatory and desynchronized rate activity in cortical networks. 2003. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 52-54, p. 763-769. Article in journal (Refereed).
    Abstract [en]

    The basis of MRI and PET experiments is the finding that neuronal cell firing levels are modulated in a task dependent manner. Results from EEG and MEG experiments on the other hand point to the importance of synchrony, e.g. the peak frequency may depend on the difficulty of the task. In most models only one of these activity modes of firing is desirable or possible to produce. In this work we show how a cortical microcircuit can produce either synchronized or desynchronized firing, and how this solves problems of present day rate and synchronization models.

  • 9.
    Fransén, Erik
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Neural response profile design: Reducing epileptogenic activity by modifying neuron responses to synchronized input using novel potassium channels obtained by parameter search optimization. 2007. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 70, no. 10-12, p. 1630-1634. Article in journal (Refereed).
    Abstract [en]

    Neurons obtain their dynamical electrical characteristics from a set of ion channels. These properties may not only affect the function of the neuron and the local network it forms part of, but may also eventually affect behavior. We were interested in studying whether epileptogenic activity could be reduced by adding an ion channel. In this work, we used computational search techniques to optimize ion channel properties with the goal of modifying neural response characteristics. Our results show that this type of parameter search is possible and reasonably efficient. Successful searches were generated using the direct method PRAXIS, and by systematic searches in low-dimensional sub-spaces. We also report on unsuccessful searches using a simplex-type method, a gradient-based method, and attempts to reduce goal function evaluation time. Importantly, using this search strategy, our study has shown that it is possible to change a neuron's characteristics selectively with regard to the response to the degree of synchronicity in synaptic input.

  • 10.
    Fransén, Erik
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Alonso, A. A.
    Hasselmo, M. E.
    Entorhinal neuronal activity during delayed matching tasks may depend upon muscarinic-induced non-specific cation current I(CANM). 2001. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 38, p. 601-606. Article in journal (Refereed).
    Abstract [en]

    Biophysical compartmental models of stellate, pyramidal-like and interneurons in layer II of the rat entorhinal cortex were used to explore cellular and synaptic components involved in neuronal responses to stimuli in a delayed match to sample (DMS) task. Simulations demonstrate that the muscarinic receptor-induced non-specific cation current, I(CANM), could contribute to these phenomena. Facilitation of I(CANM) by calcium influx during spikes induced by the sample stimulus can cause enhanced responses for matches as well as delay activity. In a network, lateral inhibition can produce match suppression, and in conjunction with stimulus selective/non-selective cells produce non-match enhancement and suppression.

  • 11.
    Hallén, Kristofer
    et al.
    Dept. of Physics/Msrmt. Technology, Linköping University.
    Huss, Mikael
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Kettunen, Petronella
    Nobel Institute for Neurophysiology, Department of Neuroscience, Karolinska Institutet.
    El Manira, Abdeljabbar
    Nobel Institute for Neurophysiology, Department of Neuroscience, Karolinska Institutet.
    Hellgren Kotaleski, Jeanette
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    mGluR-mediated calcium oscillations in the lamprey: a computational model. 2004. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 58-60, p. 431-435. Article in journal (Refereed).
    Abstract [en]

    Slow Ca2+ oscillations caused by release from intracellular stores have been observed in neurons in the lamprey spinal cord. These oscillations are triggered by activation of metabotropic glutamate receptors on the cell surface. The pathway leading from receptor activation to the inositol triphosphate-mediated release of Ca2+ from the endoplasmic reticulum has been modelled in order to facilitate further understanding of the nature of these oscillations. The model generates Ca2+ oscillations with a frequency range of 0.01-0.09 Hz. A prediction of the model is that the frequency will increase with a stronger extracellular glutamate signal.

  • 12. Hellgren Kotaleski, Jeanette
    et al.
    Blackwell, K. T.
    Sensitivity to interstimulus interval due to calcium interactions in the Purkinje cell spines. 2002. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 44, p. 13-18. Article in journal (Refereed).
    Abstract [en]

    Pairing specific LTD (PSD) is produced by paired parallel fiber (PF) and climbing fiber (CF) stimulation and requires Ca2+ elevation. CF or PF activation causes a Ca2+ increase through voltage dependent channels and IP3-induced Ca2+ release, respectively. We developed a model of Ca2+ dynamics in Purkinje cell spines to investigate why paired PF-CF activation is necessary for PSD. Simulations show a supralinear increase of the Ca2+ signal if the CF input occurs in a restricted time interval following the PF input. Ca2+ buffers significantly contribute to this phenomenon. This mechanism may be involved in the requirement of temporal specificity in classical conditioning.

  • 13. Hellgren Kotaleski, Jeanette
    et al.
    Krieger, P.
    Simulation of metabotropic glutamate receptor induced effects in the lamprey CPG. 2000. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 32, p. 433-439. Article in journal (Refereed).
    Abstract [en]

    Metabotropic glutamate receptor (mGluR) activation modulates the lamprey spinal locomotor network. Potentiation of the NMDA receptor current is observed, suggesting that one possible mechanism by which mGluRs regulate locomotion is through such an interaction. The present study investigates this possibility. The behavior of NMDA-induced oscillations is explored when different contributing factors of the NMDA receptor current are potentiated. The effects on the duration of the depolarized phase, as well as on the hyperpolarized interval, depend on both the level of activation and the holding potential of the cells. From these effects at the cell level, the network effects are predicted.

  • 14.
    Hellgren Kotaleski, Jeanette
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Plenz, D.
    Blackwell, K. T.
    The role of background synaptic noise in striatal fast spiking interneurons. 2005. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 65, p. 727-732. Article in journal (Refereed).
    Abstract [en]

    Striatal fast spiking (FS) interneurons provide inhibition to each other as well as to medium spiny projection (SP) neurons. They exhibit up-states synchronously with SP neurons, and receive GABAergic and AMPA synaptic input during both up- and down-states. The synaptic input during down-states can be considered noise and might affect detection of up-states. We investigate what role this background noise might play for up-state firing in a 127-compartment FS model neuron. The model has Na, KDr and KA conductances, and is activated through AMPA and GABA synapses. The model's response to current injection and synaptic inputs resembled experimental data. We show that intermediate levels of noise neither facilitate nor degrade the ability of the FS neuron model to detect up-states.

  • 15.
    Hellgren Kotaleski, Jeanette
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Tegner, J
    Lansner, A
    Grillner, S
    Control of burst proportion and frequency range by drive-dependent modulation of adaptation. 1999. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 26-27, p. 185-191. Article in journal (Refereed).
    Abstract [en]

    Factors controlling burst proportion in oscillatory networks are analyzed. This question is motivated by the lamprey swimming motor pattern which, independently of burst frequency, is characterized by a constant burst proportion. We investigate the effect of active modulation of the relative influence of a slower and a faster adaptation controlling the depolarized phase. Using Morris-Lecar oscillators, NMDA-dependent oscillations or a network of mutually excitatory neurons, it is shown that the burst proportion can be controlled by increasing what corresponds to adaptation. Oscillations occur over an extended range of background stimulation values, leading to a higher maximal frequency.

  • 16.
    Hjorth, Johannes
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Elias, Alex Hanna
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Hellgren Kotaleski, Jeanette
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    The significance of gap junction location in striatal fast spiking interneurons. 2007. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 70, no. 10-12, p. 1887-1891. Article in journal (Refereed).
    Abstract [en]

    Fast spiking (FS) interneurons in the striatum are hypothesised to control spike timing in the numerous medium spiny (MS) projection neurons by inhibiting or delaying firing in the MS neurons. The FS neurons are connected to each other through electrical gap junctions. This might synchronise the FS neurons, leading to increased influence on target neurons. Here, we explore the possible difference between proximal and distal gap junction locations. Somatic and distal dendritic gap junctions with equal effective coupling coefficients, as defined for steady-state somatic inputs, showed significantly different effective coupling coefficients with transient inputs. However, the ability to synchronise spiking in pairwise coupled FS neurons, which received synaptic inputs as during striatal up-state periods, was as effective with distal gap junctions as with proximal ones. Proximal gap junctions, however, caused synchronisation within a more precise time window.

  • 17.
    Huang, Jin
    et al.
    KTH, School of Electrical Engineering (EES), Information Science and Engineering.
    Xiao, Ming
    KTH, School of Electrical Engineering (EES), Information Science and Engineering.
    State of the art on road traffic sensing and learning based on mobile user network log data. 2018. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 278, p. 110-118. Article in journal (Refereed).
    Abstract [en]

    With the improvement of storage and big data processing technology, mobile operators are able to extract and store a large amount of mobile network generated user behavior data in order to develop various intelligent applications. One interesting application based on these data is traffic sensing, which uses techniques for learning human mobility patterns from updated location information in network interaction log data. Mobile networks, in which a huge amount of frequently updated location information of mobile users is tracked, can provide complete coverage for estimating traffic conditions on roads and highways. This paper studies potential challenges and opportunities in intelligent traffic sensing from the data science point of view with mobile network generated data. Firstly, we classify the data resources available in the commercial radio network according to different taxonomy criteria. Then we outline the broken-down problems that fit in the framework of traffic sensing based on mobile user network log data. We study the existing data processing and learning algorithms for extracting traffic condition information from a large amount of mobile network log data. Finally, we make suggestions on potential future work for traffic sensing on data from mobile networks. We believe the techniques and insights provided here will inspire the research community in data science to develop machine learning models for traffic sensing on the widely collected mobile user behavior data.

  • 18.
    Huss, Mikael
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Cangiano, Lorenzo
    Department of Neuroscience, Karolinska Institutet.
    Hellgren Kotaleski, Jeanette
    Department of Neuroscience, Karolinska Institutet.
    Modelling self-sustained rhythmic activity in lamprey hemisegmental networks. 2006. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 69, no. 10-12, p. 1097-1102. Article in journal (Refereed).
    Abstract [en]

    Recent studies of the lamprey spinal cord have shown that hemisegmental preparations can display rhythmic activity in response to a constant input drive. This activity is believed to be generated by a network of recurrently connected excitatory interneurons. A recent study found and characterized self-sustaining rhythmic activity (locomotor bouts) after brief electrical stimulation of hemisegmental preparations. The mechanisms behind the bouts are still unclear. We have developed a computational model of the hemisegmental network. The model addresses the possible involvement of NMDA, AMPA, acetylcholine, and metabotropic glutamate receptors as well as axonal delays in locomotor bouts.

  • 19.
    Huss, Mikael
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Hess, Dietmar
    d'Incamps, Boris Lamotte
    El Manira, Abdeljabbar
    Lansner, Anders
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Hellgren Kotaleski, Jeanette
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Role of A-current in lamprey locomotor network neurons. 2003. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 52-54, p. 295-300. Article in journal (Refereed).
    Abstract [en]

    A compartmental model of lamprey central pattern generator neurons was built in order to examine the effects of a fast, transient, high-voltage-activated potassium current (A-current) found experimentally. The model consisted of a soma, a compartment corresponding to the axon initial segment, and a dendritic tree. The simulation showed that the A-current was necessary for repetitive spiking in the single neuron following current injection. The functional role of adding an A-current was also examined in a network model. In this model, the A-current stabilizes the swimming rhythm by making the burst cycle duration and the number of spikes per burst less variable. All these effects are also seen experimentally.

  • 20.
    Huss, Mikael
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Rehn, Martin
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Tonically driven and self-sustaining activity in the lamprey hemicord: when can they co-exist? 2007. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 70, no. 10-12, p. 1882-1886. Article in journal (Refereed).
    Abstract [en]

    In lamprey hemisegmental preparations, two types of rhythmic activity are found: slower tonically driven activity which varies according to the external drive, and faster, more stereotypic activity that arises after a transient electrical stimulus. We present a simple conceptual model where a bistable excitable system can exhibit the two states. We then show that a neuronal network model can display the desired characteristics, given that synaptic dynamics (facilitation and saturation) are included. The model behaviour and its dependence on key parameters are illustrated. We discuss the relevance of our model to the lamprey locomotor system.

  • 21.
    Johansson, Christopher
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Lansner, Anders
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Implementing Plastic Weights in Neural Networks using Low Precision Arithmetic. 2009. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 72, no. 4-6, p. 968-972. Article in journal (Refereed).
    Abstract [en]

    In this letter, we develop a fixed-point arithmetic, low precision, implementation of an exponentially weighted moving average (EWMA) that is used in a neural network with plastic weights. We analyze the proposed design both analytically and experimentally, and we also evaluate its performance in the application of an attractor neural network. The EWMA in the proposed design has a constant relative truncation error, which is important for avoiding round-off errors in applications with slowly decaying processes, e.g. connectionist networks. We conclude that the proposed design offers greatly improved memory and computational efficiency compared to a naive implementation of the EWMA's difference equation, and that it is well suited for implementation in digital hardware.
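
As a rough illustration of the arithmetic involved (not the paper's constant-relative-truncation-error design), here is a minimal fixed-point EWMA in Python. The Q16.16 format and the smoothing factor are assumptions made for the example only.

```python
FRAC_BITS = 16              # Q16.16 fixed-point representation (assumed for illustration)
ONE = 1 << FRAC_BITS

def to_fixed(x: float) -> int:
    """Convert a float to the fixed-point integer representation."""
    return int(round(x * ONE))

def ewma_step(avg: int, sample: int, lam: int) -> int:
    """One update of avg <- (1 - lam) * avg + lam * sample, entirely in integers.

    The right shift truncates; the paper's design specifically controls this
    truncation error, which a naive version like this one does not.
    """
    return ((ONE - lam) * avg + lam * sample) >> FRAC_BITS

# Usage: a slowly decaying trace of a binary activity signal.
lam = to_fixed(0.01)
trace = to_fixed(0.0)
for spike in (1, 0, 0, 1, 1):
    trace = ewma_step(trace, to_fixed(spike), lam)
print(trace / ONE)          # back to floating point for inspection
```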

  • 22.
    Johansson, Christopher
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Rehn, Martin
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Lansner, Anders
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Attractor neural networks with patchy connectivity. 2006. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 69, no. 7-9, p. 627-633. Article in journal (Refereed).
    Abstract [en]

    The neurons in the mammalian visual cortex are arranged in columnar structures, and the synaptic contacts of the pyramidal neurons in layer II/III are clustered into patches that are sparsely distributed over the surrounding cortical surface. Here, we use an attractor neural-network model of the cortical circuitry and investigate the effects of patchy connectivity, both on the properties of the network and on the attractor dynamics. An analysis of the network shows that the signal-to-noise ratio of the synaptic potential sums is improved by the patchy connectivity, which results in a higher storage capacity. This analysis is performed for both the Hopfield and Willshaw learning rules, and the results are confirmed by simulation experiments.
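
For reference, a minimal Python sketch of the Willshaw (clipped Hebbian) storage and recall rules mentioned in the abstract, without the hypercolumnar or patchy connectivity structure of the model; the winner count k is an assumption of the example.

```python
import numpy as np

def willshaw_store(patterns):
    """Binary weight matrix from sparse binary patterns (one pattern per row)."""
    n = patterns.shape[1]
    W = np.zeros((n, n), dtype=np.uint8)
    for p in patterns:
        W |= np.outer(p, p).astype(np.uint8)   # clipped Hebbian update: weights saturate at 1
    np.fill_diagonal(W, 0)
    return W

def willshaw_recall(W, cue, k):
    """Activate the k units with the largest synaptic potential sums."""
    sums = W.astype(int) @ cue
    out = np.zeros_like(cue)
    out[np.argsort(sums)[-k:]] = 1
    return out
```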

  • 23. Kozlov, A. K.
    et al.
    Aurell, Erik
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Orlovsky, G. N.
    Deliagina, T. G.
    Zelenin, P. V.
    Hellgren Kotaleski, Jeanette
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Grillner, S.
    Modeling control of roll-plane body orientation in lamprey. 2000. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 32, p. 871-877. Article in journal (Refereed).
    Abstract [en]

    A phenomenological model of the mechanism of stabilization of the dorsal-side-up orientation in the lamprey is suggested. The mathematical modeling is based on experimental results from investigations of postural control in lampreys using combined in vivo and robotics approaches. The dynamics of the model agree qualitatively with the experiments. It is shown by computer simulations that postural correction commands from one or several reticulospinal neurons provide information which may be sufficient for stabilization of body orientation in the lamprey.

  • 24. Kozlov, A. K.
    et al.
    Fagerstedt, P.
    Ullen, F.
    Aurell, Erik
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Turning behavior in lamprey in response to descending unilateral commands: Experiments and modeling. 2001. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 38, p. 1373-1378. Article in journal (Refereed).
    Abstract [en]

    Steering maneuvers in vertebrates are characterized by asymmetric modulation of the cycle duration and the intensity of the symmetric rhythmic locomotor activity. In the lamprey in vitro model system, turns can be evoked by electrical skin stimuli applied to one side of the head, which give rise to descending unilateral excitatory commands. Turns are observed as increased activity on one side of the spinal cord, followed by a rebound on the other. We investigated the generation of turns in single-segment models of the lamprey locomotor spinal network, and were able to reproduce all main experimental results. Sufficient mechanisms to explain changes in the locomotor rhythm, including rebound, are asymmetric activation of crossing inhibitory neurons, accompanied by a calcium influx in these neurons.

  • 25.
    Kozlov, Alexander
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Hellgren Kotaleski, Jeanette
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Aurell, Erik
    KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Grillner, S
    Lansner, A
    Modeling of plasticity of the synaptic connections in the lamprey spinal CPG - consequences for network behavior. 2000. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 32-33, p. 441-446. Article in journal (Refereed).
    Abstract [en]

    Consequences of synaptic plasticity in the lamprey spinal CPG are analyzed. This is motivated by the experimentally observed effects that substance P and 5-hydroxytryptamine (5-HT) have on inhibitory and excitatory synaptic transmission. The effects can be a change in the amplitude of the postsynaptic potentials as well as induction of an activity-dependent facilitation or depression during repetitive activation. Simulations show that network level effects (i.e. swimming frequency) of substance P and 5-HT can to a substantial degree be explained by their effects on the plasticity of the synaptic transmission.

  • 26.
    Kozlov, Alexander
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Hellgren Kotaleski, Jeanette
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Aurell, Erik
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Grillner, S.
    Lansner, Anders
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Modeling of plasticity of the synaptic connections in the lamprey spinal CPG - consequences for network behavior. 2000. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 32, p. 441-446. Article in journal (Refereed).
    Abstract [en]

    Consequences of synaptic plasticity in the lamprey spinal CPG are analyzed. This is motivated by the experimentally observed effects that substance P and 5-hydroxytryptamine (5-HT) have on inhibitory and excitatory synaptic transmission. The effects can be a change in the amplitude of the postsynaptic potentials as well as induction of an activity-dependent facilitation or depression during repetitive activation. Simulations show that network level effects (i.e. swimming frequency) of substance P and 5-HT can to a substantial degree be explained by their effects on the plasticity of the synaptic transmission.

  • 27.
    Kozlov, Alexander
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Lansner, Anders
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Grillner, S.
    Burst dynamics under mixed NMDA and AMPA drive in the models of the lamprey spinal CPG. 2003. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 52-54, p. 65-71. Article in journal (Refereed).
    Abstract [en]

    The spinal CPG of the lamprey is modeled using a chain of nonlinear oscillators. Each oscillator represents a small neuron population capable of bursting under mixed NMDA and AMPA drive. Parameters of the oscillator are derived from detailed conductance-based neuron models. Analysis and simulations of dynamics of a single oscillator, a chain of locally coupled excitatory oscillators and a chain of two pairs of excitatory and inhibitory oscillators in each segment are done. The roles of asymmetric couplings and additional rostral drive for generation of a traveling wave with one cycle per chain length in a realistic frequency range are studied.

  • 28. Kremkow, Jens
    et al.
    Kumar, Arvind
    Rotter, Stefan
    Aertsen, Ad
    Emergence of population synchrony in a layered network of the cat visual cortex. 2007. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 70, no. 10-12, p. 2069-2073. Article in journal (Refereed).
    Abstract [en]

    Recently, a quantitative wiring diagram for the local neuronal network of cat visual cortex was described [T. Binzegger, R.J. Douglas, K.A.C. Martin, A quantitative map of the circuit of cat primary visual cortex, J. Neurosci. 24(39) (2004) 8441-8453], giving the first complete estimate of synaptic connectivity among various types of neurons in different cortical layers. Here we numerically studied the activity dynamics of the resulting heterogeneous layered network of spiking integrate-and-fire neurons, connected with conductance-based synapses. The layered network exhibited, among other states, an interesting asynchronous activity with intermittent population-wide synchronizations. These population bursts (PB) were initiated by a network hot spot and then spread into the other parts of the network. The cause of these PBs is the correlation-amplifying nature of recurrent connections, which becomes significant in densely coupled networks. The hot spot was located in layer 2/3, the part of the network with the highest number of excitatory recurrent connections. We conclude that in structured networks, regions with a high degree of recurrence and many outgoing fibres may be a source of population-wide synchronization.

  • 29.
    Llorens, Vicente Charcos
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Fransén, Erik
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Intrinsic desynchronization properties of neurons containing dendritic rapidly activating K-currents. 2004. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 58, p. 137-143. Article in journal (Refereed).
    Abstract [en]

    In this work, we investigate the role of the potassium A-current (K-A) in linking network synchrony to cellular excitability and firing frequency. We present an analysis of the notion of synchrony and describe its conceptual and modeling implications. At full synchronization, K-A enables control over the timing, or even suppression, of spikes. For completely desynchronized activity, we show how K-A affects fast changes in the amplitude of the summed EPSPs as well as the amount of depolarization caused by the input. Simulations at intermediate levels of synchrony suggest that activity resulting from the interaction between cellular excitability and network synchrony could be altered through K-A modulation.

  • 30.
    Lundqvist, Mikael
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Rehn, Martin
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Lansner, Anders
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Attractor dynamics in a modular network model of the cerebral cortex. 2006. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 69, no. 10-12, p. 1155-1159. Article in journal (Refereed).
    Abstract [en]

    Computational models of cortical associative memory often take a top-down approach. We have previously described such an abstract model with a hypercolumnar structure. Here we explore a similar, biophysically detailed but subsampled network model of neocortex. We study how the neurodynamics and associative memory properties of this biophysical model relate to the abstract model as well as to experimental data. The resulting network exhibits attractor dynamics: pattern completion and pattern rivalry. It reproduces several features of experimentally observed local UP states, as well as oscillatory behavior on the gamma and theta time scales observed in the cerebral cortex.

  • 31. Ma, Zhanyu
    et al.
    Chien, Jen-Tzung
    Tan, Zheng-Hua
    Song, Yi-Zhe
    Taghia, Jalil
    Xiao, Ming
    KTH, School of Electrical Engineering and Computer Science (EECS), Information Science and Engineering.
    Recent advances in machine learning for non-Gaussian data processing. 2018. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 278, p. 1-3. Article in journal (Refereed).
  • 32. Meier, Ralph
    et al.
    Kumar, Arvind
    Institute of Biology III, Albert-Ludwigs-University, Germany.
    Schulze-Bonhage, Andreas
    Aertsen, Ad
    Comparison of dynamical states of random networks with human EEG. 2007. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 70, no. 10-12, p. 1843-1847. Article in journal (Refereed).
    Abstract [en]

    Existing models of EEG have mainly focused on relations to network dynamics characterized by firing rates [L. de Arcangelis, H.J. Herrmann, C. Perrone-Capano, Activity-dependent brain model explaining EEG spectra, arXiv:q-bio.NC/0411043 v1, 23 Nov 2004; D.T. Liley, D.M. Alexander, J.J. Wright, M.D. Aldous, Alpha rhythm emerges from large-scale networks of realistically coupled multicompartmental model cortical neurons, Network 10(1) (1999) 79-92; O. David, J.K. Friston, A neural mass model for MEG/EEG: coupling and neuronal dynamics, NeuroImage 20 (2003) 1743-1755]. Generally, these models assume that there exists a linear mapping between network firing rates and EEG states. However, firing rate is only one of several descriptors of network activity states. Other relevant descriptors are synchrony and irregularity of firing patterns [N. Brunel, Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons, J. Comput. Neurosci. 8(3) (2000) 183-208]. To develop a better understanding of the EEG, we need to relate these state descriptors to EEG states. Here, we try to go beyond the firing rate based approaches described in [D.T. Liley et al., Network 10(1) (1999) 79-92; O. David, J.K. Friston, NeuroImage 20 (2003) 1743-1755] and relate synchronicity and irregularity in the network to EEG states. We show that the transformation between network activity and EEG can be approximately mediated by a linear kernel with the shape of an α- or γ-function, allowing a comparison between EEG states and network activity space. We find that the simulated EEG generated from asynchronous irregular network activity is closely related to the human EEG recorded in the awake state, evaluated using power spectral density characteristics.
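
As an illustration of the kernel idea described above (not the authors' fitted kernel), a minimal Python sketch that maps a binned population spike count to a surrogate EEG by convolution with an α-function; the time constant, bin width, and kernel length are assumptions.

```python
import numpy as np

def alpha_kernel(tau_ms=10.0, dt_ms=1.0, length_ms=200.0):
    """Alpha-function kernel k(t) = (t / tau) * exp(1 - t / tau), peak normalised to 1."""
    t = np.arange(0.0, length_ms, dt_ms)
    return (t / tau_ms) * np.exp(1.0 - t / tau_ms)

def surrogate_eeg(pop_spike_count, tau_ms=10.0, dt_ms=1.0):
    """Linear mapping from network activity (spikes per time bin) to a surrogate EEG trace."""
    k = alpha_kernel(tau_ms, dt_ms)
    return np.convolve(pop_spike_count, k)[: len(pop_spike_count)]
```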

  • 33. Qu, Hong
    et al.
    Xing, Ke
    Takacs, Alexander
    KTH, School of Computer Science and Communication (CSC).
    An improved genetic algorithm with co-evolutionary strategy for global path planning of multiple mobile robots. 2013. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 120, p. 509-517. Article in journal (Refereed).
    Abstract [en]

    This paper presents a Co-evolutionary Improved Genetic Algorithm (CIGA) for global path planning of multiple mobile robots, which employs a co-evolution mechanism together with an improved genetic algorithm (GA). The improved GA uses an effective and accurate fitness function, improves the genetic operators of conventional genetic algorithms and proposes a new genetic modification operator. Moreover, the improved GA, compared with conventional GAs, is better at avoiding the problem of local optima and has an accelerated convergence rate. The use of a co-evolution mechanism takes full account of the cooperation between populations, which avoids collisions between mobile robots and is conducive to each mobile robot obtaining an optimal or near-optimal collision-free path. Simulations are carried out to demonstrate the efficiency of the improved GA and the effectiveness of CIGA.
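
The following is a bare-bones generational GA loop in Python, shown only to make the terminology concrete; the fitness function, path representation, operators, and the co-evolution between robot populations that CIGA adds are all left abstract, and every parameter value is an assumption.

```python
import random

def genetic_search(fitness, random_individual, crossover, mutate,
                   pop_size=50, generations=200, p_mut=0.1):
    """Plain generational GA; the caller supplies problem-specific operators."""
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        elite = ranked[: pop_size // 5]                 # keep the best 20 % unchanged
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)              # pick two elite parents
            child = crossover(a, b)
            if random.random() < p_mut:
                child = mutate(child)
            children.append(child)
        population = elite + children
    return max(population, key=fitness)
```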

  • 34.
    Raicevic, Peter
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Parallel reinforcement learning using multiple reward signals. 2006. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 69, no. 16-18, p. 2171-2179. Article in journal (Refereed).
    Abstract [en]

    Reinforcement learning is a quite general learning paradigm that can be used to solve a large set of problems. For complex problems it has been shown that by using task decomposition it may be possible for the system to learn faster. One common approach is to construct systems with multiple modules, where each module learns a sub-task. We present a parallel learning method for agents with an actor–critic architecture based on artificial neural networks. The agents have multiple modules, where the modules can learn in parallel to further increase learning speed. Each module solves a sub-problem and receives its own separate reward signal with all modules trained concurrently. We use the method on a grid world navigation task and show that parallel learning can significantly reduce learning time.
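
To make the idea of per-module reward signals concrete, here is a minimal tabular sketch in Python of parallel TD(0) critics, one per sub-task; the paper uses neural-network function approximation and an actor, which are omitted here, and the learning-rate and discount values are assumptions.

```python
import numpy as np

class ParallelCritics:
    """One tabular TD(0) critic per module, each trained on its own reward signal."""

    def __init__(self, n_states, n_modules, alpha=0.1, gamma=0.95):
        self.V = np.zeros((n_modules, n_states))   # one value table per module
        self.alpha = alpha
        self.gamma = gamma

    def update(self, state, next_state, rewards, done=False):
        """rewards: one scalar per module for the observed transition."""
        bootstrap = 0.0 if done else self.gamma
        for m, r in enumerate(rewards):
            td_error = r + bootstrap * self.V[m, next_state] - self.V[m, state]
            self.V[m, state] += self.alpha * td_error   # all modules learn concurrently
        return self.V[:, state]
```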

  • 35.
    Rehn, Martin
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Lansner, Anders
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Sequence memory with dynamical synapses. 2004. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 58-60, p. 271-278. Article in journal (Refereed).
    Abstract [en]

    We present an attractor model of cortical memory, capable of sequence learning. The network incorporates a dynamical synapse model and is trained using a Hebbian learning rule that operates by redistribution of synaptic efficacy. It performs sequential recall or unordered recall depending on parameters. The model reproduces data from free recall experiments in humans. Memory capacity scales with network size, storing sequences at about 0.18 bits per synapse.

  • 36.
    Rehn, Martin
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Sommer, Friedrich T.
    Storing and restoring visual input with collaborative rank coding and associative memory. 2006. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 69, no. 10-12, p. 1219-1223. Article in journal (Refereed).
    Abstract [en]

    Associative memory in cortical circuits has been held as a major mechanism for content-addressable memory. Hebbian synapses implement associative memory efficiently when storing sparse binary activity patterns. However, in models of sensory processing, representations are graded and not binary. Thus, it has been an unresolved question how sensory computation could exploit cortical associative memory. Here we propose a way in which sensory processing could benefit from memory in cortical circuitry. We describe a new collaborative method of rank coding for converting graded stimuli, such as natural images, into sequences of synchronous spike volleys. Such sequences of sparse binary patterns can be efficiently processed in associative memory of the Willshaw type. We evaluate the storage capacity and noise tolerance of the proposed system and demonstrate its use in cleanup and fill-in for noisy or occluded visual input.

  • 37. Samuelsson, Ebba
    et al.
    Hellgren Kotaleski, Jeanette
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Exploring GABAergic and dopaminergic effects in a minimal model of a medium spiny projection neuron. 2007. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 70, no. 10-12, p. 1615-1618. Article in journal (Refereed).
    Abstract [en]

    Striatum is the input stage of the basal ganglia, a collection of nuclei in the midbrain. The basal ganglia are involved in cognitive and motor behaviour, including reward-dependent learning. The reward system in the brain is heavily linked to the dopaminergic system, and many striatal neurons react in a reward-dependent manner. This study explores a minimal model of a striatal medium spiny (MS) projection neuron displaying dopamine-induced bistability. MS neurons mostly fluctuate between two states, a hyperpolarised down-state and a depolarised up-state. MS neurons are only active in the up-state and therefore spiking requires the transition from the down-state. For high dopamine levels in the model, the appearance of a bifurcation results in more distinct state transitions. GABAergic input from local fast-spiking interneurons to MS neurons results in a small depolarisation, but far from causing a transition to the up-state by itself. We investigate if a GABAergic PSP could facilitate the transition to the up-state elicited by glutamatergic input. The model predicts that GABAergic input to MS neurons might facilitate and speed up the transition to the up-state. The prerequisite for this is that the GABAergic enhancement starts slightly before the glutamatergic increase which causes the up-state transition.

  • 38. Sandberg, A.
    et al.
    Lansner, Anders
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Synaptic depression as an intrinsic driver of reinstatement dynamics in an attractor network. 2002. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 44, p. 615-622. Article in journal (Refereed).
    Abstract [en]

    Long-term memory consolidation is commonly assumed to occur through the reinstatement of previous activation states in the cortex by the action of the medial temporal lobe (MTL) memory system. In order to produce a sequence of reinstated patterns the MTL system either needs to be externally cued, or have an intrinsic dynamics enabling it to present earlier learned patterns. We show that the Hebbian depression of synaptic connections between the neurons in an attractor network can produce such a dynamics that retains information about relative attractor strength.

  • 39. Sandberg, A.
    et al.
    Lansner, Anders
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Petersson, K. M.
    Selective enhancement of recall through plasticity modulation in an autoassociative memory. 2001. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 38, p. 867-873. Article in journal (Refereed).
    Abstract [en]

    The strength of a memory trace is modulated by a variety of factors such as arousal, attention, context, type of processing during encoding, salience and novelty of the experience. Some of these factors can be modeled as a variable plasticity level in the memory system, controlled by arousal or relevance-estimating systems. We demonstrate that a Bayesian confidence propagation neural network with learning time constant modulated in this way exhibits enhanced recall of an item tagged as salient. Proactive and retroactive inhibition of other items is also demonstrated as well as an inverted U-shape response to overall plasticity.

  • 40.
    Sandberg, Anders
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Fransén, Erik
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    An autocatalytic model of STDP timing from slow calcium-dependent signals. 2005. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 65-66, p. 603-608. Article in journal (Refereed).
    Abstract [en]

    Data of spike timing-dependent plasticity (STDP) show a sharp temporal transition between potentiation and depression despite a relatively slow time course of calcium concentration. We show how autocatalytic amplification of initial concentration differences can enable a high degree of temporal selectivity and produce the sharp STDP weight change curve despite having a relatively slow time constant. This simple model is robust to parameter changes, noise and details of the model. The model correctly predicts the location of the maximum and minimum for STDP at +/- 10ms from coincidence.

  • 41.
    Sandberg, Anders
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Lansner, Anders
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Petersson, K. M.
    Ekeberg, Örjan
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    A palimpsest memory based on an incremental Bayesian learning rule. 2000. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 32, p. 987-994. Article in journal (Refereed).
    Abstract [en]

    Capacity limited memory systems need to gradually forget old information in order to avoid catastrophic forgetting where all stored information is lost. This can be achieved by allowing new information to overwrite old, as in the so-called palimpsest memory. This paper describes a new such learning rule employed in an attractor neural network. The network does not exhibit catastrophic forgetting, has a capacity dependent on the learning time constant and exhibits recency effects in retrieval.
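
A minimal sketch of an incremental Bayesian (BCPNN-style) learning step of the kind the abstract describes, assuming exponentially decaying running averages and a small regularising epsilon; the time constant and epsilon values are illustrative, and the attractor network dynamics built on these weights are not shown.

```python
import numpy as np

def bcpnn_step(p_i, p_j, p_ij, x_pre, x_post, tau=100.0, dt=1.0, eps=1e-3):
    """One incremental update of the running probability estimates and weights.

    x_pre, x_post: current activity vectors with values in [0, 1].
    The learning time constant tau sets how quickly old patterns are forgotten,
    which is what gives the memory its palimpsest character.
    """
    k = dt / tau
    p_i = p_i + k * (x_pre - p_i)                        # presynaptic unit marginals
    p_j = p_j + k * (x_post - p_j)                       # postsynaptic unit marginals
    p_ij = p_ij + k * (np.outer(x_pre, x_post) - p_ij)   # pairwise co-activation estimate
    W = np.log((p_ij + eps ** 2) / (np.outer(p_i, p_j) + eps ** 2))
    bias = np.log(p_j + eps)
    return p_i, p_j, p_ij, W, bias
```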

  • 42. Tegnér, J
    et al.
    Hellgren Kotaleski, Jeanette
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    The synaptic NMDA component desynchronizes neural bursters. 1999. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 26-27, p. 557-563. Article in journal (Refereed).
    Abstract [en]

    The influence of excitatory and inhibitory coupling on synchronization depends on the temporal dynamics of the synapse. Slow excitation is desynchronizing whereas fast excitation tends to synchronize neuronal firing. Excitation via glutamatergic synapses, however, activates both ionotropic AMPA/kainate and NMDA receptors. Here we analyze the role of the synaptic NMDA component. We show that slowly bursting neurons desynchronize when connected by symmetrical NMDA synapses whereas they tend to synchronize when coupled with symmetrical AMPA/kainate synapses. This suggests that the effect on synchronization of an excitatory synapse also depends on the relative proportion of NMDA and AMPA/kainate synapses.

  • 43. Wahlgren, N.
    et al.
    Lansner, Anders
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Biological evaluation of a Hebbian-Bayesian learning rule. 2001. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 38, p. 433-438. Article in journal (Refereed).
    Abstract [en]

    A correlation based Hebbian-Bayesian learning rule formulated on theoretical, probabilistic grounds has been extended to an incremental version running in continuous time and with spiking units. This learning rule has, however, not previously been evaluated in any detail with regard to biological plausibility and ability to mimic synaptic long-term potentiation and depression. It is demonstrated here that this learning rule indeed captures several fundamental aspects of Hebbian spike-timing dependent synaptic plasticity. A slightly modified version of the model gives a quantitative fit to data.

  • 44. Yang, K. H.
    et al.
    Hellgren Kotaleski, Jeanette
    Blackwell, K. T.
    The role of protein kinase C in the biochemical pathways of classical conditioning. 2001. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 38, p. 79-85. Article in journal (Refereed).
    Abstract [en]

    Evidence suggests that protein kinase C (PKC) is required for long term memory storage such as classical conditioning. Stimulation of parallel fibers (PF) and climbing fibers (CF) of the cerebellum leads to production of the second messengers diacylglycerol, arachidonic acid, and calcium which are activators of PKC. A model is developed that describes the cascade of biochemical reactions in response to PF and CF stimulation and leading to PKC activation. Model simulations are used to evaluate the temporal specificity of PKC activation and the sensitivity of PKC activation to the interstimulus interval (ISI) of classical conditioning. Simulations at different ISI show that if PF stimulation precedes CF stimulation, PKC activation is elevated.
