Action selection performance of a reconfigurable Basal Ganglia inspired model with Hebbian-Bayesian Go-NoGo connectivity
2012 (English). In: Frontiers in Behavioral Neuroscience, ISSN 1662-5153, Vol. 6, p. 65. Article in journal (Refereed). Published.
Several studies have shown a strong involvement of the basal ganglia (BG) in action selection and dopamine-dependent learning. The dopaminergic signal to the striatum, the input stage of the BG, has commonly been described as coding a reward prediction error (RPE), i.e., the difference between the predicted and actual reward. The RPE has been hypothesized to be critical in modulating synaptic plasticity at cortico-striatal synapses in the direct and indirect pathways. We developed an abstract computational model of the BG with a dual pathway structure functionally corresponding to the direct and indirect pathways, and compared its behaviour to biological data as well as to other reinforcement learning models. The computations in our model are inspired by Bayesian inference, and the synaptic plasticity changes depend on a three-factor Hebbian-Bayesian learning rule based on co-activation of pre- and post-synaptic units and on the value of the RPE. The model builds on a modified Actor-Critic architecture and implements the direct (Go) and the indirect (NoGo) pathway, as well as the reward prediction (RP) system, acting in a complementary fashion. We investigated the performance of the model when different configurations of the Go, NoGo and RP systems were utilized, e.g. using only the Go, NoGo, or RP system, or combinations of those. Learning performance was investigated in several types of learning paradigms, such as learning-relearning, successive learning, stochastic learning, reversal learning and a two-choice task. The RPE and the activity of the model during learning were similar to monkey electrophysiological and behavioural data. Our results, however, show that there is no unique best way to configure this BG model so that it handles all the tested learning paradigms well. We thus suggest that an agent might dynamically configure its action selection mode, possibly depending on task characteristics and also on how much time is available.
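The abstract's core mechanism, an RPE-gated three-factor update acting in opposite directions on the Go and NoGo pathways within an Actor-Critic loop, can be sketched in a few lines. This is a minimal illustrative toy, not the paper's actual Hebbian-Bayesian (BCPNN) rule: the weight matrices, the learning rate, and the sign-gating of the RPE are simplifying assumptions made here for clarity.

```python
import numpy as np

n_states, n_actions = 4, 2
# Hypothetical weight matrices for the Go (direct) and NoGo (indirect) pathways
w_go = np.zeros((n_states, n_actions))
w_nogo = np.zeros((n_states, n_actions))
v = np.zeros(n_states)  # reward prediction (critic / RP system)
alpha = 0.1             # illustrative learning rate

def select_action(state):
    # Net evidence for each action: Go facilitates, NoGo suppresses
    return int(np.argmax(w_go[state] - w_nogo[state]))

def update(state, action, reward):
    rpe = reward - v[state]   # reward prediction error: actual minus predicted
    v[state] += alpha * rpe   # critic update
    post = np.zeros(n_actions)
    post[action] = 1.0        # post-synaptic (chosen-action) activity
    # Three-factor rule: pre/post co-activation gated by the RPE sign;
    # positive RPE strengthens Go, negative RPE strengthens NoGo.
    w_go[state] += alpha * max(rpe, 0.0) * post
    w_nogo[state] += alpha * max(-rpe, 0.0) * post
    return rpe
```

In this toy, repeated rewarded trials drive the critic's prediction toward the obtained reward, shrinking the RPE and thereby slowing further weight change, which mirrors the qualitative behaviour the abstract describes.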
Place, publisher, year, edition, pages: 2012, Vol. 6, p. 65.
Keywords: Basal ganglia, Bayesian inference, BCPNN, Behaviour selection, Direct-indirect pathway, Dopamine, Hebbian-Bayesian plasticity, Reinforcement learning, article, basal ganglion, Bayes theorem, brain function, controlled study, dopaminergic transmission, learning, mathematical model, mental task, motor control, nerve cell plasticity, probability, reinforcement, statistical analysis
Bioinformatics (Computational Biology)
Identifiers: URN: urn:nbn:se:kth:diva-105249; DOI: 10.3389/fnbeh.2012.00065; ISI: 000310727200001; ScopusID: 2-s2.0-84866713557; OAI: oai:DiVA.org:kth-105249; DiVA: diva2:570603
Funder: EU, European Research Council, 237955 201716; Swedish Research Council; Swedish e-Science Research Center
QC 20121120. Available from: 2012-11-20. Created: 2012-11-19. Last updated: 2013-04-08. Bibliographically approved.