Lambda-Policy Iteration with Randomization for Contractive Models with Infinite Policies: Well-Posedness and Convergence
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control). ORCID iD: 0000-0002-1857-2301
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control). ORCID iD: 0000-0001-9940-5929
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control). ORCID iD: 0000-0002-3672-5316
2020 (English). In: Proceedings of the 2nd Conference on Learning for Dynamics and Control, L4DC 2020, ML Research Press, 2020, p. 540-549. Conference paper, Published paper (Refereed)
Abstract [en]

Abstract dynamic programming models are used to analyze λ-policy iteration with randomization algorithms. In particular, contractive models with infinite policies are considered, and it is shown that well-posedness of the λ-operator plays a central role in the algorithm. The operator is known to be well-posed for problems with finite states, but our analysis shows that it is also well-defined for the contractive models with infinite states studied here. Similarly, the algorithm we analyze is known to converge for problems with finite policies, but we identify the conditions required to guarantee convergence with probability one when the policy space is infinite, regardless of the number of states. Guided by the analysis, we exemplify a data-driven approximate implementation of the algorithm for estimating the optimal costs of constrained linear and nonlinear control problems. Numerical results indicate the potential of this method in practice.
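
For readers unfamiliar with the algorithm family, the following is a minimal, illustrative Python sketch of λ-policy iteration with randomization on a finite-state, finite-action Markov decision process. It is not the paper's implementation: the function name, the mixing probability p, the discount factor, and the truncation of the λ-operator's infinite series are assumptions made here for illustration, and the finite setting below does not capture the infinite-policy analysis that is the paper's contribution.

import numpy as np

# Illustrative sketch, not the paper's implementation: lambda-policy iteration
# with randomization on a finite-state, finite-action MDP. The paper's analysis
# covers contractive abstract DP models with infinite policies; here the
# lambda-operator's infinite series is truncated and all names are hypothetical.
def lambda_pi_randomized(P, g, alpha=0.9, lam=0.5, p=0.1, iters=200, terms=50, seed=0):
    """P[a]: (n x n) transition matrix of action a; g[a]: length-n stage cost."""
    rng = np.random.default_rng(seed)
    n, num_actions = P[0].shape[0], len(P)
    J = np.zeros(n)
    mu = np.zeros(n, dtype=int)
    for _ in range(iters):
        # Policy improvement: mu attains the minimum in the Bellman operator T.
        Q = np.stack([g[a] + alpha * P[a] @ J for a in range(num_actions)])
        mu = Q.argmin(axis=0)
        if rng.random() < p:
            # With probability p, take a plain value-iteration step J <- T J.
            J = Q.min(axis=0)
        else:
            # Otherwise apply the multistep operator of the current policy mu,
            # T_mu^(lambda) J = (1 - lam) * sum_{l >= 0} lam^l T_mu^{l+1} J,
            # truncated to a finite number of terms.
            P_mu = np.stack([P[mu[i]][i] for i in range(n)])
            g_mu = np.array([g[mu[i]][i] for i in range(n)])
            acc, TJ = np.zeros(n), J
            for l in range(terms):
                TJ = g_mu + alpha * P_mu @ TJ  # one more application of T_mu
                acc += (1.0 - lam) * lam ** l * TJ
            J = acc
    return J, mu

The mixing probability p is a stand-in for the randomization named in the title: per the abstract, interleaving such steps with the λ-operator updates is what underlies the convergence-with-probability-one guarantee when the policy space is infinite.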

Place, publisher, year, edition, pages
ML Research Press, 2020, p. 540-549.
Keywords [en]
approximate dynamic programming, reinforcement learning, λ-policy iteration
National Category
Control Engineering; Computational Mathematics; Probability Theory and Statistics
Identifiers
URN: urn:nbn:se:kth:diva-338628
Scopus ID: 2-s2.0-85161123035
OAI: oai:DiVA.org:kth-338628
DiVA id: diva2:1809158
Conference
2nd Annual Conference on Learning for Dynamics and Control, L4DC 2020, Berkeley, United States of America, June 10-11, 2020
Note

QC 20231102

Available from: 2023-11-02. Created: 2023-11-02. Last updated: 2023-11-02. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Scopus
Paper

Authority records

Li, Yuchao; Johansson, Karl H.; Mårtensson, Jonas
