KTH Publications (DiVA)
Stability-Guided Reinforcement Learning Control for Power Converters: A Lyapunov Approach
KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electric Power and Energy Systems. ORCID iD: 0000-0002-9406-5600
KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electric Power and Energy Systems. ORCID iD: 0000-0002-2793-9048
2025 (English). In: IEEE Transactions on Industrial Electronics, ISSN 0278-0046, E-ISSN 1557-9948, Vol. 72, no. 7, p. 7553-7562. Article in journal (Refereed). Published.
Abstract [en]

Reinforcement learning (RL) has gained popularity in power electronics owing to its self-learning nature and its ability to handle nonlinearities. When properly configured, an RL agent can autonomously learn the optimal control policy by interacting with the converter system. In particular, similar to conventional finite-control-set model predictive control (FCS-MPC), the RL agent can learn the optimal switching strategy for the power converter and achieve the desired control performance. However, because the RL controller alters the closed-loop dynamics, ensuring and assessing system stability becomes challenging. To address this, the article proposes formulating a Lyapunov function that guides the agent toward an optimal control policy which delivers the desired control performance while ensuring closed-loop stability. Additionally, the practical stability region of the system is quantified by deriving a compact set characterizing the convergence of the voltage control error. Finally, the proposed Lyapunov-guided RL controller is validated on a practical experimental setup, and both simulation and experimental results confirm the effectiveness of the method.
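The Lyapunov-guided reward idea summarized in the abstract can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's actual formulation: the weight matrix `P`, the penalty coefficient `lam`, and the two-element state layout are all invented here. The sketch evaluates a quadratic Lyapunov candidate V(x) = xᵀPx before and after a control step, and the reward penalizes both the voltage tracking error and any increase in V, steering the agent toward stabilizing switching policies:

```python
import numpy as np

# Hypothetical Lyapunov-guided reward shaping for an RL converter controller.
# V(x) = x^T P x is a quadratic Lyapunov candidate; P, lam, and the state
# layout x = [voltage error state, current state] are illustrative assumptions.

P = np.array([[2.0, 0.0],
              [0.0, 1.0]])  # positive-definite weight matrix (assumed)

def lyapunov(x):
    """Quadratic Lyapunov candidate V(x) = x^T P x."""
    return float(x @ P @ x)

def reward(x, x_next, v_ref, lam=10.0):
    """Penalize voltage tracking error plus any increase in V over the step."""
    error = abs(x_next[0] - v_ref)        # voltage tracking error at next step
    dV = lyapunov(x_next) - lyapunov(x)   # Lyapunov difference V(x_{k+1}) - V(x_k)
    return -error - lam * max(0.0, dV)    # penalize only Lyapunov increases

# A switching action that shrinks V earns a higher reward than one that grows it:
x0 = np.array([1.0, 0.5])
x_good = np.array([0.5, 0.2])   # moves toward the origin: V decreases
x_bad = np.array([1.5, 0.8])    # moves away: V increases, heavily penalized
print(reward(x0, x_good, v_ref=0.0) > reward(x0, x_bad, v_ref=0.0))  # True
```

In this shaping, a policy that keeps V strictly decreasing never incurs the `lam` penalty, so reward maximization and the Lyapunov decrease condition point in the same direction; the paper's derived compact set would then bound where the voltage error ultimately converges.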

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025. Vol. 72, no. 7, p. 7553-7562
Keywords [en]
Closed-loop stability, Lyapunov function, optimal switching strategy, power converter, reinforcement learning (RL)
National Category
Control Engineering; Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:kth:diva-367214
DOI: 10.1109/TIE.2024.3522491
ISI: 001389652500001
Scopus ID: 2-s2.0-85214300494
OAI: oai:DiVA.org:kth-367214
DiVA, id: diva2:1984360
Funder
StandUp
Note

QC 20250715

Available from: 2025-07-15. Created: 2025-07-15. Last updated: 2026-03-03.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Wan, Yihao
Xu, Qianwen

Search in DiVA

By author/editor
Wan, Yihao
Xu, Qianwen
By organisation
Electric Power and Energy Systems
In the same journal
IEEE Transactions on Industrial Electronics
Control Engineering
Other Electrical Engineering, Electronic Engineering, Information Engineering
