A Differential Dynamic Programming Framework for Inverse Reinforcement Learning
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control). Tongji University, Department of Control Science and Engineering, College of Electronics and Information Engineering, Shanghai, China, 201804; Ministry of Education, Shanghai Institute of Intelligent Science and Technology, National Key Laboratory of Autonomous Intelligent Unmanned Systems, and Frontiers Science Center for Intelligent Autonomous Systems, Beijing, China, 100816; Nanyang Technological University, 50 Nanyang Avenue, School of Electrical and Electronic Engineering, Singapore.
Nanyang Technological University, 50 Nanyang Avenue, School of Electrical and Electronic Engineering, Singapore.
Arizona State University, School for Engineering of Matter, Transport, and Energy, USA.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control). KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Digital futures. ORCID iD: 0000-0001-9940-5929
2025 (English). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

A differential dynamic programming (DDP)-based framework for inverse reinforcement learning (IRL) is introduced to recover the parameters of the cost function, system dynamics, and constraints from demonstrations. Unlike existing work, where DDP is typically used for the inner forward problem, the proposed framework uses it to efficiently compute the gradient required in the outer inverse problem with equality and inequality constraints. The equivalence between the proposed method and existing methods based on Pontryagin's Maximum Principle (PMP) is established. More importantly, building on this DDP-based IRL with an open-loop loss function, a closed-loop IRL framework is presented. In this framework, a loss function is proposed to capture the closed-loop nature of demonstrations; it is shown to outperform the commonly used open-loop loss function. We show that the closed-loop IRL framework reduces to a constrained inverse optimal control problem under certain assumptions. Under these assumptions and a rank condition, it is proven that the learning parameters can be recovered from the demonstration data. The proposed framework is extensively evaluated on four numerical robot examples and one real-world quadrotor system. The experiments validate the theoretical results and illustrate the practical relevance of the approach.
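The bilevel structure described in the abstract (an inner forward optimal-control solve nested inside an outer parameter search that minimizes an open-loop loss on demonstrated controls) can be illustrated with a deliberately minimal sketch. This is not the paper's algorithm: it uses a scalar linear-quadratic problem, for which the DDP backward pass reduces to a Riccati recursion, and it replaces the paper's DDP-based gradient computation with a plain finite-difference gradient. The system parameters, horizon, initial state, and learning rate are all illustrative choices.

```python
import numpy as np

A, B, R, T = 1.0, 0.5, 1.0, 20     # scalar dynamics x_{t+1} = A x_t + B u_t
x0 = 2.0                           # initial state shared by all rollouts

def forward_rollout(q):
    """Inner forward problem: finite-horizon LQR for state-cost weight q.

    The backward Riccati recursion is the linear-quadratic special case
    of a DDP backward pass; the forward pass then rolls out the optimal
    control sequence from x0.
    """
    P = q                          # terminal value-function weight
    K = np.zeros(T)
    for t in reversed(range(T)):   # backward pass
        K[t] = (B * P * A) / (R + B * P * B)
        P = q + A * P * (A - B * K[t])
    x, us = x0, np.zeros(T)
    for t in range(T):             # forward pass
        us[t] = -K[t] * x
        x = A * x + B * us[t]
    return us

q_true = 3.0                       # hidden cost weight behind the "demonstration"
u_demo = forward_rollout(q_true)

def loss(q):
    """Open-loop loss: squared error against the demonstrated controls."""
    return float(np.sum((forward_rollout(q) - u_demo) ** 2))

# Outer inverse problem: descend the loss with a finite-difference gradient
# (the paper computes this gradient efficiently via DDP instead).
q_hat, eps, lr = 0.5, 1e-5, 0.5
for _ in range(2000):
    grad = (loss(q_hat + eps) - loss(q_hat - eps)) / (2 * eps)
    q_hat -= lr * grad

print(f"recovered q = {q_hat:.3f} (true q = {q_true})")
```

Because each outer iteration re-solves the full forward problem, the cost of the inner solve dominates; this is what motivates reusing the DDP machinery to obtain the outer gradient efficiently.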

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025.
Keywords [en]
Constrained Optimal Control, Differential Dynamic Programming, Inverse Optimal Control, Inverse Problems, Inverse Reinforcement Learning
National Category
Control Engineering
Identifiers
URN: urn:nbn:se:kth:diva-372627
DOI: 10.1109/TRO.2025.3623769
Scopus ID: 2-s2.0-105019984764
OAI: oai:DiVA.org:kth-372627
DiVA, id: diva2:2012916
Note

QC 20251111

Available from: 2025-11-11. Created: 2025-11-11. Last updated: 2025-11-11. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Cao, Kun; Johansson, Karl H.
