2025 (English) In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468. Article in journal (Refereed). Epub ahead of print.
Abstract [en]
A differential dynamic programming (DDP)-based framework for inverse reinforcement learning (IRL) is introduced to recover the parameters of the cost function, system dynamics, and constraints from demonstrations. Unlike existing work, where DDP is usually used for the inner forward problem, the proposed framework uses it to efficiently compute the gradient required in the outer inverse problem with equality and inequality constraints. The equivalence between the proposed method and existing methods based on Pontryagin's Maximum Principle (PMP) is established. More importantly, building on this DDP-based IRL with an open-loop loss function, a closed-loop IRL framework is presented, in which a loss function is proposed to capture the closed-loop nature of demonstrations; it is shown to outperform the commonly used open-loop loss function. The closed-loop IRL framework is shown to reduce to a constrained inverse optimal control problem under certain assumptions. Under these assumptions and a rank condition, it is proven that the learning parameters can be recovered from the demonstration data. The proposed framework is extensively evaluated on four numerical robot examples and one real-world quadrotor system; the experiments validate the theoretical results and illustrate the practical relevance of the approach.
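As a rough illustration of the bilevel structure the abstract describes (an inner forward optimal-control problem nested inside an outer inverse problem fit to demonstrations), the sketch below recovers an unknown cost weight from a synthetic demonstration using an open-loop trajectory loss. All quantities here (scalar LQR dynamics, weights, horizon, grid search) are invented for illustration; this is not the paper's DDP-based algorithm, which computes the outer gradient via DDP and handles equality and inequality constraints.

```python
import numpy as np

# Hypothetical scalar system x_{t+1} = A x_t + B u_t with known
# control weight R; the state weight q is the parameter to recover.
A, B, R, T, x0 = 1.0, 1.0, 1.0, 20, 1.0

def rollout(q):
    """Inner forward problem: finite-horizon LQR with state weight q.
    Backward Riccati recursion for the gains, then a forward rollout."""
    P, gains = q, []          # terminal weight chosen as q (a toy choice)
    for _ in range(T):
        K = (B * P * A) / (R + B * P * B)
        P = q + A * P * (A - B * K)
        gains.append(K)
    gains.reverse()           # recursion ran backward in time
    xs = [x0]
    for K in gains:
        xs.append((A - B * K) * xs[-1])
    return np.array(xs)

q_true = 2.0
demo = rollout(q_true)        # synthetic "demonstration"

def open_loop_loss(q):
    # Open-loop loss: squared distance between the replanned and
    # demonstrated state trajectories.
    return float(np.sum((rollout(q) - demo) ** 2))

# Outer inverse problem, solved here by a 1-D grid search; the paper
# instead differentiates this outer loss efficiently via DDP.
qs = np.linspace(0.1, 10.0, 2000)
q_est = qs[np.argmin([open_loop_loss(q) for q in qs])]
print(f"recovered q = {q_est:.3f} (true q = {q_true})")
```

The closed-loop loss proposed in the paper would instead compare feedback behavior rather than a single replanned open-loop trajectory; this toy only shows why the recovery problem is well posed when the loss has a unique minimizer at the true parameter.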
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
Constrained Optimal Control, Differential Dynamical Programming, Inverse Optimal Control, Inverse Problems, Inverse Reinforcement Learning
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-372627 (URN)
10.1109/TRO.2025.3623769 (DOI)
2-s2.0-105019984764 (Scopus ID)
Note
QC 20251111
2025-11-11 Bibliographically approved