2018 (English). In: Advances in Neural Information Processing Systems 31 (NIPS 2018) / [ed] Bengio, S.; Wallach, H.; Larochelle, H.; Grauman, K.; Cesa-Bianchi, N.; Garnett, R. Neural Information Processing Systems (NIPS), 2018, Vol. 31. Conference paper, Published paper (Refereed)
Abstract [en]
We address reinforcement learning problems with finite state and action spaces where the underlying MDP has some known structure that could be potentially exploited to minimize the exploration rates of suboptimal (state, action) pairs. For any arbitrary structure, we derive problem-specific regret lower bounds satisfied by any learning algorithm. These lower bounds are made explicit for unstructured MDPs and for those whose transition probabilities and average reward functions are Lipschitz continuous w.r.t. the state and action. For Lipschitz MDPs, the bounds are shown not to scale with the sizes S and A of the state and action spaces, i.e., they are smaller than c log T where T is the time horizon and the constant c only depends on the Lipschitz structure, the span of the bias function, and the minimal action sub-optimality gap. This contrasts with unstructured MDPs where the regret lower bound typically scales as SA log T. We devise DEL (Directed Exploration Learning), an algorithm that matches our regret lower bounds. We further simplify the algorithm for Lipschitz MDPs, and show that the simplified version is still able to efficiently exploit the structure.
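As a hedged restatement of the scaling claims above (the notation here is illustrative and not taken from the paper): writing R^\pi(T) for the cumulative regret of an algorithm \pi, \delta_{\min} for the minimal action sub-optimality gap, and H for the span of the bias function, the abstract asserts roughly that

  \liminf_{T\to\infty} \frac{R^\pi(T)}{\log T} \ \text{scales with}\ SA \quad \text{(unstructured MDPs)},
  \liminf_{T\to\infty} \frac{R^\pi(T)}{\log T} \ \le\ c(\text{Lipschitz structure}, H, \delta_{\min}) \quad \text{(Lipschitz MDPs)},

so under the Lipschitz structure the logarithmic-regret constant is independent of the sizes S and A of the state and action spaces.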
Place, publisher, year, edition, pages
Neural Information Processing Systems (NIPS), 2018
Series
Advances in Neural Information Processing Systems, ISSN 1049-5258 ; 31
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-249916 (URN)
000461852003043 (ISI)
2-s2.0-85064810437 (Scopus ID)
Conference
32nd Conference on Neural Information Processing Systems (NIPS), December 2-8, 2018, Montreal, Canada
Note
QC 20190425. QC 20191023
Available from: 2019-04-25. Created: 2019-04-25. Last updated: 2023-10-03. Bibliographically approved