Variance-Aware Regret Bounds for Undiscounted Reinforcement Learning in MDPs
Talebi Mazraeh Shahi, Mohammad Sadegh. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control). ORCID iD: 0000-0002-1934-7421
Maillard, Odalric-Ambrym. INRIA Lille – Nord Europe, Villeneuve d’Ascq, France. ORCID iD: 0000-0001-7935-7026
2018 (English). In: Proceedings of the 29th International Conference on Algorithmic Learning Theory, ALT 2018, ML Research Press, 2018, p. 770-805. Conference paper, Published paper (Refereed)
Abstract [en]

The problem of reinforcement learning in an unknown and discrete Markov Decision Process (MDP) under the average-reward criterion is considered, when the learner interacts with the system in a single stream of observations, starting from an initial state and without any reset. We revisit the minimax lower bound for this problem, making the local variance of the bias function appear in place of the diameter of the MDP. Furthermore, we provide a novel analysis of the KL-UCRL algorithm, establishing a high-probability regret bound for ergodic MDPs scaling as $\tilde{O}\big(\sqrt{S \sum_{s,a} \mathbb{V}^{\star}_{s,a}\, T}\big)$, where $S$ denotes the number of states and $\mathbb{V}^{\star}_{s,a}$ is the variance of the bias function with respect to the next-state distribution following action $a$ in state $s$. The resulting bound improves upon the best previously known regret bound $\tilde{O}(D S \sqrt{A T})$ for that algorithm, where $A$ and $D$ respectively denote the maximum number of actions (per state) and the diameter of the MDP. We finally compare the leading terms of the two bounds in some benchmark MDPs, indicating that the derived bound can provide an order-of-magnitude improvement in some cases. Our analysis leverages novel variations of the transportation lemma combined with Kullback-Leibler concentration inequalities, which we believe to be of independent interest.
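
For readability, the quantities and bounds mentioned in the abstract can be restated in display form. The symbols $h^{\star}$ (bias function) and $p(\cdot \mid s,a)$ (next-state distribution) are notation introduced here for the quantities the abstract describes in words, and $\tilde{O}(\cdot)$ hides polylogarithmic factors:
\[
\mathbb{V}^{\star}_{s,a} \;:=\; \operatorname{Var}_{X \sim p(\cdot \mid s,a)}\!\big[h^{\star}(X)\big],
\qquad
\tilde{O}\Big(\sqrt{S \textstyle\sum_{s,a} \mathbb{V}^{\star}_{s,a}\, T}\Big)
\;\;\text{versus}\;\;
\tilde{O}\big(D S \sqrt{A T}\big).
\]
Informally, since the bias function of a communicating MDP has span at most $D$, each $\mathbb{V}^{\star}_{s,a} \le D^{2}/4$, so the variance-aware bound is never worse than $\tfrac{1}{2} D S \sqrt{A T}$ up to logarithmic factors, and can be substantially smaller when the local variances are small.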

Place, publisher, year, edition, pages
ML Research Press, 2018, p. 770-805
Keywords [en]
Bellman Optimality, Concentration Inequalities, Markov Decision Processes, Regret Minimization, Undiscounted Reinforcement Learning
National Category
Control Engineering
Identifiers
URN: urn:nbn:se:kth:diva-350411
Scopus ID: 2-s2.0-85160280324
OAI: oai:DiVA.org:kth-350411
DiVA, id: diva2:1884001
Conference
29th International Conference on Algorithmic Learning Theory, ALT 2018, Lanzarote, Spain, April 7-9, 2018
Note

QC 20240712

Available from: 2024-07-12 Created: 2024-07-12 Last updated: 2024-07-12. Bibliographically approved

Open Access in DiVA

No full text in DiVA

Scopus

Authority records

Talebi Mazraeh Shahi, Mohammad Sadegh

Search in DiVA

By author/editor
Talebi Mazraeh Shahi, Mohammad Sadegh; Maillard, Odalric-Ambrym
By organisation
Decision and Control Systems (Automatic Control)
Control Engineering
