Distributional Reachability for Markov Decision Processes: Theory and Applications
2024 (English). In: IEEE Transactions on Automatic Control, ISSN 0018-9286, E-ISSN 1558-2523, Vol. 69, no. 7, pp. 4598-4613. Article in journal (Refereed). Published.
Abstract [en]
We study distributional reachability for finite Markov decision processes (MDPs) from a control-theoretic perspective. Unlike standard probabilistic reachability notions, which are defined over MDP states or trajectories, in this paper reachability is formulated over the space of probability distributions. We propose two set-valued maps for the forward and backward distributional reachability problems: the forward map collects all state distributions that can be reached from a set of initial distributions, while the backward map collects all state distributions that can reach a set of final distributions. We show that there exists a maximal invariant set under the forward map, and that this set is the region to which the state distributions eventually always belong, regardless of the initial state distribution and policy. The backward map provides an alternative way to solve a class of important problems for MDPs: the study of controlled invariance, the characterization of the domain of attraction, and reach-avoid problems. Three case studies illustrate the effectiveness of our approach.
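To illustrate the forward map described in the abstract, the following is a minimal sketch (a hypothetical illustration, not the paper's algorithm): for a small finite MDP, it enumerates the deterministic Markov policies and computes the one-step images of a given state distribution; randomized policies would yield the convex hull of these images. The transition tensor `P` and the helper `one_step_forward` are assumptions made up for this example.

```python
import itertools
import numpy as np

# Hypothetical 2-state, 2-action MDP.
# P[a][s, s'] = Pr(next state = s' | current state = s, action = a).
P = np.array([
    [[0.9, 0.1],   # action 0
     [0.2, 0.8]],
    [[0.5, 0.5],   # action 1
     [0.7, 0.3]],
])
n_actions, n_states, _ = P.shape

def one_step_forward(d):
    """Return the distributions reachable in one step from d under all
    deterministic Markov policies (row vectors d' = d @ P_pi)."""
    images = []
    for policy in itertools.product(range(n_actions), repeat=n_states):
        # Row s of the closed-loop matrix is the transition row of
        # state s under the action the policy picks there.
        P_pi = np.stack([P[policy[s], s] for s in range(n_states)])
        images.append(d @ P_pi)
    return images

d0 = np.array([1.0, 0.0])  # initial state distribution
for img in one_step_forward(d0):
    print(np.round(img, 3))
```

Iterating this map and taking unions over steps gives a finite-horizon approximation of the forward reachable set of distributions; the backward map would be sketched analogously by asking which distributions map into a given target set.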
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024. Vol. 69, no. 7, pp. 4598-4613
Keywords [en]
Aerospace electronics, Computational modeling, distributional reachability, Markov decision processes, Markov processes, Probabilistic logic, probabilistic reachability, Probability distribution, reach-avoid problems, Safety, set invariance, Trajectory
National Category
Control Engineering
Identifiers
URN: urn:nbn:se:kth:diva-350164
DOI: 10.1109/TAC.2023.3341282
ISI: 001259639500010
Scopus ID: 2-s2.0-85179798691
OAI: oai:DiVA.org:kth-350164
DiVA id: diva2:1883386
Note
QC 20240710
2024-07-10, 2024-07-10, 2024-07-22. Bibliographically approved