Action selection performance of a reconfigurable Basal Ganglia inspired model with Hebbian-Bayesian Go-NoGo connectivity
KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.ORCID iD: 0000-0002-0550-0739
KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
2012 (English). In: Frontiers in Behavioral Neuroscience, ISSN 1662-5153, Vol. 6, article 65. Article in journal (Refereed), Published
Abstract [en]

Several studies have shown a strong involvement of the basal ganglia (BG) in action selection and dopamine-dependent learning. The dopaminergic signal to the striatum, the input stage of the BG, has commonly been described as coding a reward prediction error (RPE), i.e., the difference between the predicted and actual reward. The RPE has been hypothesized to be critical in the modulation of synaptic plasticity at cortico-striatal synapses in the direct and indirect pathways. We developed an abstract computational model of the BG, with a dual-pathway structure functionally corresponding to the direct and indirect pathways, and compared its behaviour to biological data as well as to other reinforcement learning models. The computations in our model are inspired by Bayesian inference, and the synaptic plasticity changes depend on a three-factor Hebbian-Bayesian learning rule based on the co-activation of pre- and post-synaptic units and on the value of the RPE. The model builds on a modified Actor-Critic architecture and implements the direct (Go) and indirect (NoGo) pathways, as well as a reward prediction (RP) system, acting in a complementary fashion. We investigated the performance of the model when different configurations of the Go, NoGo, and RP systems were utilized, e.g., using only the Go, NoGo, or RP system, or combinations of those. Learning performance was investigated in several types of learning paradigms, such as learning-relearning, successive learning, stochastic learning, reversal learning, and a two-choice task. The RPE and the activity of the model during learning were similar to monkey electrophysiological and behavioural data. Our results, however, show that there is no single best way to configure this BG model to handle all the tested learning paradigms well. We thus suggest that an agent might dynamically configure its action selection mode, possibly depending on task characteristics and also on how much time is available.
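The abstract describes a dual-pathway architecture in which cortico-striatal Go and NoGo weights are updated by a plasticity rule gated by the RPE from a reward prediction (critic) system. The following is a minimal actor-critic sketch of that general idea only — it is not the paper's Hebbian-Bayesian (BCPNN) implementation, and the softmax action selection, the mirrored Go/NoGo updates, the learning rate, and the two-choice reward probabilities are all simplifying assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 1, 2
w_go = np.zeros((n_states, n_actions))    # direct pathway: promotes actions
w_nogo = np.zeros((n_states, n_actions))  # indirect pathway: suppresses actions
v = np.zeros(n_states)                    # reward prediction (critic)

alpha = 0.1                  # learning rate (hypothetical value)
reward_probs = [0.8, 0.2]    # stochastic two-choice task (hypothetical)

def select_action(s):
    """Softmax over the net pathway drive (Go minus NoGo)."""
    net = w_go[s] - w_nogo[s]
    p = np.exp(net) / np.exp(net).sum()
    return rng.choice(n_actions, p=p)

for trial in range(2000):
    s = 0
    a = select_action(s)
    r = float(rng.random() < reward_probs[a])  # stochastic binary reward
    rpe = r - v[s]             # reward prediction error (dopamine-like signal)
    v[s] += alpha * rpe        # critic update toward the actual reward
    # Three-factor update: pre/post co-activation (state s, action a) gated by RPE.
    w_go[s, a] += alpha * rpe      # positive RPE strengthens Go
    w_nogo[s, a] -= alpha * rpe    # positive RPE weakens NoGo, and vice versa
```

After training, the net drive `w_go - w_nogo` favours the more frequently rewarded action, while the critic `v` settles near the expected reward under the learned policy.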

Place, publisher, year, edition, pages
2012. Vol. 6, article 65.
Keyword [en]
Basal ganglia, Bayesian inference, BCPNN, Behaviour selection, Direct-indirect pathway, Dopamine, Hebbian-Bayesian plasticity, Reinforcement learning, article, basal ganglion, Bayes theorem, brain function, controlled study, dopaminergic transmission, learning, mathematical model, mental task, motor control, nerve cell plasticity, probability, reinforcement, statistical analysis
National Category
Bioinformatics (Computational Biology)
URN: urn:nbn:se:kth:diva-105249
DOI: 10.3389/fnbeh.2012.00065
ISI: 000310727200001
ScopusID: 2-s2.0-84866713557
OAI: diva2:570603
EU, European Research Council, 237955 201716; Swedish Research Council; Swedish e-Science Research Center


Available from: 2012-11-20. Created: 2012-11-19. Last updated: 2013-04-08. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text
Scopus

Search in DiVA

By author/editor
Berthet, Pierre; Hällgren Kotaleski, Jeanette; Lansner, Anders
By organisation
Computational Biology, CB
In the same journal
Frontiers in Behavioral Neuroscience
Bioinformatics (Computational Biology)

