Coordinated Control of FACTS Setpoints Using Reinforcement Learning
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. Hitachi Energy Sweden AB, 721 82 Västerås, Sweden. ORCID iD: 0000-0002-3138-9915
2025 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

With the increasing electrification and integration of renewables, power system operators face severe control challenges, including voltage stability, faster dynamics, and congestion management. Potential solutions encompass more advanced control systems and accurate measurements. One encouraging mitigation strategy is coordinated control of Flexible AC Transmission Systems (FACTS) setpoints, which can substantially improve voltage and power flow control. However, due to model-based optimization challenges related to, e.g., imperfect models and uncertainty, fixed setpoints are often used in practice. Promising alternatives are data-driven control methods based on, for example, reinforcement learning (RL). Motivated by these challenges, the accumulation of high-quality data, and the advancements in RL, this thesis explores RL-based coordinated control of FACTS setpoints. With a focus on safety, four problem settings are investigated on the IEEE 14-bus and IEEE 57-bus systems, addressing limited pre-training, model errors, few measurements, and datasets for pre-training. First, we propose WMAP, a model-based RL algorithm that learns and uses a compressed dynamics model to optimize voltage and current setpoints. WMAP includes a mechanism to mitigate poor performance on out-of-distribution data, and it is shown to outperform model-free RL and an infrequently updated expert policy. Second, when power system model errors are present, safe RL is demonstrated to outperform classical model-based optimization in terms of constraint satisfaction. Third, RL is shown to exceed the performance of fixed setpoints using a few measurements, provided it has a complete, albeit simple, constraint signal. Finally, RL that leverages datasets for offline pre-training is demonstrated to outperform both the original policy that generated the dataset and an RL agent trained from scratch.
Overall, these four works contribute to an advancement in the field towards a more adaptable and sustainable power system.    
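The common problem setting across the four works can be pictured as a sequential decision problem: an agent observes measurements, chooses FACTS setpoints, and is rewarded for keeping voltages close to nominal. The toy environment below is purely illustrative (the class name, linear dynamics, and cost are assumptions for the sketch, not taken from the thesis):

```python
import numpy as np

# Illustrative sketch of FACTS setpoint control framed as an MDP.
# ToyFactsEnv, its linear response, and the deviation cost are all
# assumptions for this example, not the thesis's simulation setup.
class ToyFactsEnv:
    def __init__(self, n_buses=5, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_buses = n_buses
        self.voltages = np.ones(n_buses)

    def reset(self):
        # A random load disturbance pushes voltages away from 1.0 p.u.
        self.voltages = 1.0 + 0.05 * self.rng.standard_normal(self.n_buses)
        return self.voltages.copy()

    def step(self, setpoint):
        # Crude linear response: setpoints pull voltages toward themselves.
        self.voltages += 0.5 * (setpoint - self.voltages)
        reward = -np.abs(self.voltages - 1.0).sum()  # voltage-deviation cost
        return self.voltages.copy(), reward

env = ToyFactsEnv()
obs = env.reset()
obs, r = env.step(np.ones(env.n_buses))  # apply nominal setpoints
```

An RL agent would learn a policy mapping `obs` to setpoints that maximizes the accumulated reward, subject to the safety considerations each paper addresses.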

Abstract [sv]

With the increasing electrification and integration of renewable energy, grid operators face major control challenges. These challenges include voltage stability, faster dynamics, and congestion management. Potential solutions include more advanced control systems and accurate measurements. One promising strategy to partially address these problems is coordinated control of Flexible AC Transmission Systems (FACTS) setpoints, which can considerably improve voltage and power flow control. In practice, however, fixed setpoints are often used, owing to optimization difficulties related to, for example, uncertainty and model errors. A promising alternative is data-driven methods based on, for example, reinforcement learning (RL). Against the background of these challenges, the availability of high-quality data, and the advances in RL, this thesis investigates RL-based coordinated control of FACTS setpoints. With a focus on safety, four problem settings are studied on the IEEE 14-bus and 57-bus systems, addressing limited pre-training, model errors, few measurements, and the use of datasets for pre-training. First, we propose WMAP, a model-based RL algorithm that learns and uses a compressed dynamics model to optimize voltage and current setpoints. WMAP includes a mechanism to mitigate degraded performance on out-of-distribution data. WMAP is shown to outperform model-free RL and an infrequently updated expert policy. Second, when model errors are present in the power system, we show that safe RL achieves better constraint satisfaction than classical model-based optimization. Third, we show that RL can outperform fixed setpoints using only a few measurements, provided it has access to a complete, albeit simple, constraint signal. Finally, we show that RL that uses datasets for offline pre-training can outperform both the original policy that generated the dataset and an RL agent trained from scratch. Overall, these four works contribute to advancing the field toward a more adaptable and sustainable power system.

Place, publisher, year, edition, pages
Stockholm, Sweden: KTH Royal Institute of Technology, 2025, p. xviii, 103
Series
TRITA-EECS-AVL ; 2025:80
Keywords [en]
Decision support systems, Flexible AC Transmission Systems (FACTS), power system control, reinforcement learning
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-369519
ISBN: 978-91-8106-387-5 (print)
OAI: oai:DiVA.org:kth-369519
DiVA, id: diva2:1996061
Public defence
2025-10-08, https://kth-se.zoom.us/j/65901664759, F3 (Flodis), Lindstedtsvägen 26 & 28, Stockholm, 13:00 (English)
Funder
Swedish Foundation for Strategic Research, ID19-0058
Note

QC 20250908

Available from: 2025-09-08 Created: 2025-09-08 Last updated: 2025-10-13 Bibliographically approved
List of papers
1. A World Model Based Reinforcement Learning Architecture for Autonomous Power System Control
2021 (English) In: 2021 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm), IEEE, 2021, p. 364-370. Conference paper, Published paper (Refereed)
Abstract [en]

Renewable generation is leading to rapidly shifting power flows, and it is anticipated that traditional power system control may soon be inadequate to cope with these fluctuations. Traditional control includes human-in-the-loop control schemes, while more autonomous control methods can be categorized as Wide-Area Monitoring, Protection and Control (WAMPAC) systems. Within this latter group of more advanced systems, reinforcement learning (RL) is a potential candidate to facilitate power system control in the face of these new challenges. In this paper we demonstrate how a model-based reinforcement learning (MBRL) algorithm, which learns and uses an internal model of the world, can be used for autonomous power system control. The proposed RL agent, called the World Model for Autonomous Power System Control (WMAP), includes a safety shield to minimize the risk of poor decisions at high uncertainty. The shield can be configured to permit WMAP to take actions on the condition that WMAP asks for guidance, e.g. from a human operator, when in doubt. As an alternative, WMAP could be run in full decision support mode, which would require the operator to take all active decisions. A case study is performed on an IEEE 14-bus system where WMAP is set up to control setpoints of two FACTS devices to emulate grid stability improvements. Results show that improved grid stability is achieved using WMAP while staying within voltage limits. Furthermore, a disastrous situation is avoided when WMAP asks for help in a test scenario that it had not been trained for.
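One common way to implement an "ask for guidance when in doubt" shield is to treat disagreement among an ensemble of learned dynamics models as an uncertainty signal and defer when it is high. The sketch below illustrates that generic idea only; the function name, the ensemble, and the threshold rule are assumptions, not WMAP's actual mechanism:

```python
import numpy as np

# Illustrative uncertainty shield (an assumption for this sketch, not the
# paper's implementation): an ensemble of dynamics models predicts the next
# state, and high disagreement is treated as out-of-distribution, triggering
# a deferral to the operator instead of acting autonomously.
def shielded_action(models, state, action, threshold):
    preds = np.stack([m(state, action) for m in models])
    disagreement = preds.std(axis=0).max()  # ensemble spread as uncertainty
    if disagreement > threshold:
        return None  # defer: ask the operator for guidance
    return action    # confident: apply the proposed setpoint

# Two toy "models" that agree closely on in-distribution inputs.
models = [lambda s, a: s + 0.10 * a,
          lambda s, a: s + 0.11 * a]
state, action = np.zeros(3), np.ones(3)
proposed = shielded_action(models, state, action, threshold=0.1)
# Low disagreement here, so the proposed setpoint is applied unchanged.
```

With strongly disagreeing models the same call returns `None`, which a supervisory layer could map to "full decision support mode".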

Place, publisher, year, edition, pages
IEEE, 2021
Keywords
Decision support systems, Flexible AC Transmission Systems (FACTS), learning, power system control, smart grids
National Category
Control Engineering Computer Sciences
Identifiers
urn:nbn:se:kth:diva-308920 (URN) 10.1109/smartgridcomm51999.2021.9632332 (DOI) 2-s2.0-85123908499 (Scopus ID)
Conference
2021 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids, SmartGridComm 2021, Aachen, 25-28 October 2021
Note

QC 20220223

Part of conference proceeding: ISBN 978-166541502-6

Available from: 2022-02-15 Created: 2022-02-15 Last updated: 2025-09-08 Bibliographically approved
2. Safe Reinforcement Learning to Improve FACTS Setpoint Control in Presence of Model Errors
2025 (English) In: IEEE Transactions on Industry Applications, ISSN 0093-9994, E-ISSN 1939-9367. Article in journal (Refereed). Epub ahead of print
Abstract [en]

There is limited application of closed-loop control using model-based approaches in wide-area monitoring, protection, and control. Challenges that impede model-based approaches include engineering complexity, convergence issues, and model errors. In particular, given the rapid growth of distributed generation and renewables in the grid, maintaining an updated model free of errors is challenging. As an alternative to model-based approaches, data-driven control architectures based on reinforcement learning (RL) have shown great promise. In this work, we confront the safety concerns of data-driven approaches by studying safe RL to improve voltage and power flow control. For both a model-free RL agent and a model-based RL agent, the accumulated constraint violation is investigated in a case study on the IEEE 14-bus and IEEE 57-bus systems. To evaluate performance, the agents are compared against a model-based approach subject to errors. Our findings suggest that RL could be considered for optimizing voltage and current setpoints in systems where topological model errors are present.
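A widely used ingredient in safe RL, and one way to make sense of "accumulated constraint violation" as a metric, is a Lagrangian penalty: the reward is reduced by a multiplier times the violation, and the multiplier is raised by dual ascent whenever violations occur. This is a generic sketch of that ingredient (the paper does not specify its safe-RL method here, so all details below are assumptions):

```python
# Generic Lagrangian safe-RL update (an assumption for this sketch, not
# necessarily the paper's method): the penalty multiplier grows via dual
# ascent whenever the constraint is violated, and never goes negative.
def lagrangian_update(lmbda, violation, lr=0.1):
    return max(0.0, lmbda + lr * violation)

lmbda = 0.0
total_violation = 0.0  # the "accumulated constraint violation" metric
for v in [0.2, 0.1, 0.0, 0.0]:  # toy per-step voltage-limit violations
    total_violation += v
    lmbda = lagrangian_update(lmbda, v)

# At each step the agent would then optimize the penalized reward:
#   r_penalized = r - lmbda * violation
```

As the multiplier grows, constraint-violating setpoints become increasingly unattractive to the agent, which is what drives the comparison against a model-based baseline under model errors.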

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
Decision support systems, Flexible AC Transmission Systems (FACTS), power system control, reinforcement learning
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-365876 (URN) 10.1109/tia.2025.3569502 (DOI) 2-s2.0-105005212319 (Scopus ID)
Funder
Swedish Foundation for Strategic Research, ID19-0058
Note

QC 20250701

Available from: 2025-07-01 Created: 2025-07-01 Last updated: 2025-09-08 Bibliographically approved
3. Reinforcement Learning for FACTS Setpoint Control with Limited Information
2024 (English) In: IEEE PES Innovative Smart Grid Technologies Europe, ISGT EUROPE 2024, Institute of Electrical and Electronics Engineers (IEEE), 2024. Conference paper, Published paper (Refereed)
Abstract [en]

Coordinated control of Flexible AC Transmission Systems (FACTS) reference setpoints is often absent in real systems. Despite the power quality gains demonstrated in studies, this absence can partly be attributed to challenges with model-based control. As promising alternative control methods, data-driven approaches based on reinforcement learning (RL) have been considered. In this work, we study the potential gains in power quality using RL, recognizing the increasing number of installed Phasor Measurement Units, which provide limited but reliable information. We demonstrate on the IEEE 14-bus and IEEE 57-bus systems that, by adding a few measurements per FACTS device and a constraint violation signal, an RL scheme may significantly improve power quality compared to a baseline of fixed setpoints. To evaluate robustness, several configurations are simulated, and for larger systems we identify unobserved constraint violations as the main risk and propose a potential path for new research.
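The limited-information setting can be illustrated with a minimal reward design: the agent sees only a few local PMU measurements per FACTS device plus a single flag indicating whether any constraint is violated somewhere in the system. The function below is a hypothetical sketch of that idea; the names, the penalty weight, and the exact cost are assumptions, not the paper's reward:

```python
# Hypothetical reward under limited observability (an illustration, not the
# paper's design): local voltage deviations are penalized continuously, and
# a single system-wide constraint-violation flag adds a large fixed penalty.
def reward(local_voltages, violation_flag, penalty=10.0):
    deviation = sum(abs(v - 1.0) for v in local_voltages)  # p.u. deviation
    return -deviation - (penalty if violation_flag else 0.0)

r_ok = reward([1.01, 0.99], violation_flag=False)
r_bad = reward([1.01, 0.99], violation_flag=True)
# The flag makes otherwise-identical local measurements much less rewarding,
# which is how a "complete, albeit simple, constraint signal" can steer the
# agent despite few measurements.
```

The failure mode the paper flags for larger systems maps directly onto this sketch: if a violation never raises the flag (is unobserved), the agent has no incentive to avoid it.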

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
Decision support systems, Flexible AC Transmission Systems (FACTS), power system control, reinforcement learning
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-361442 (URN) 10.1109/ISGTEUROPE62998.2024.10863003 (DOI) 001451133800008 () 2-s2.0-86000010404 (Scopus ID)
Conference
2024 IEEE PES Innovative Smart Grid Technologies Europe Conference, ISGT EUROPE 2024, Dubrovnik, Croatia, 14-17 October 2024
Note

Part of ISBN 9789531842976

QC 20250319

Available from: 2025-03-19 Created: 2025-03-19 Last updated: 2025-09-08 Bibliographically approved
4. Offline to Online Reinforcement Learning for Optimizing FACTS Setpoints
2025 (English) In: Sustainable Energy, Grids and Networks, E-ISSN 2352-4677, Vol. 43, article id 101826. Article in journal (Refereed). Published
Abstract [en]

With the growing electrification and integration of renewables, network operators face unprecedented challenges. Coordinated control of Flexible AC Transmission Systems (FACTS) setpoints using real-time optimization techniques has been proposed to substantially improve voltage and power flow control. However, optimizing the setpoints of several FACTS devices is rarely done in practice. In part, this can be attributed to the challenges with model-based methods. As alternative control methods, data-driven methods based on reinforcement learning (RL) have shown great promise. However, RL has its own challenges, including data requirements and safety during learning. Motivated by the increasing collection of data, we study an RL-based optimization of FACTS setpoints and how datasets can be leveraged for pre-training to improve safety. We demonstrate on the IEEE 14-bus and IEEE 57-bus systems that an offline-to-online RL algorithm can significantly reduce voltage deviations and constraint violations. The performance is compared against an RL agent learning from scratch and the original control policy that generated the dataset. Moreover, our analysis shows that dataset coverage and the amount of pre-training updates affect performance considerably. Finally, to identify the gap to an optimal policy, the proposed approach is benchmarked against an optimal controller with perfect information.
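The offline-to-online pattern can be sketched with a generic tabular Q-learning stand-in (an assumption for illustration; the paper's algorithm is not specified here): value estimates are first fitted on a fixed dataset of transitions logged by the existing controller, and online learning then continues from that warm start instead of from scratch:

```python
# Generic tabular Q-learning stand-in for the offline-to-online idea
# (illustrative only, not the paper's algorithm). States and actions are
# small integers; Q maps (state, action) pairs to value estimates.
def q_update(Q, s, a, r, s2, actions=(0, 1), alpha=0.5, gamma=0.9):
    best_next = max(Q.get((s2, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

# Offline phase: replay a logged dataset of (s, a, r, s') transitions
# generated by the original control policy.
dataset = [(0, 1, 1.0, 1), (1, 0, 0.0, 0), (0, 1, 1.0, 1)]
Q = {}
for s, a, r, s2 in dataset:
    q_update(Q, s, a, r, s2)

# Online phase: keep calling q_update on fresh transitions, starting from
# the pre-trained Q rather than an empty table, so early online actions
# are already informed and hence safer.
```

The paper's findings about dataset coverage and the number of pre-training updates correspond, in this sketch, to which (s, a) pairs appear in `dataset` and how many replay passes are made before going online.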

Place, publisher, year, edition, pages
Elsevier BV, 2025
Keywords
Decision support systems, Flexible AC Transmission Systems (FACTS), power system control, reinforcement learning
National Category
Computer Sciences
Research subject
Computer Science; Electrical Engineering
Identifiers
urn:nbn:se:kth:diva-365883 (URN) 10.1016/j.segan.2025.101826 (DOI) 001550489000016 () 2-s2.0-105012372136 (Scopus ID)
Conference
Bulk Power System Dynamics and Control - XII, June 2025, Sorrento, Italy
Funder
Swedish Foundation for Strategic Research, ID19-0058
Note

QC 20250916

Available from: 2025-07-01 Created: 2025-07-01 Last updated: 2025-09-16 Bibliographically approved

Open Access in DiVA

comprehensive summary (4666 kB), 165 downloads
File information
File name: FULLTEXT01.pdf. File size: 4666 kB. Checksum (SHA-512):
79c98efe050e30636fb5b25e8593fb0f5121ebcd1ed832b969948e725a764b7b0fc89c68c708dd18edf68cad90f16fd4f40a9dbcdd80cb28fb4c8d393f8856e2
Type: fulltext. Mimetype: application/pdf
errata (194 kB), 10 downloads
File information
File name: ERRATA01.pdf. File size: 194 kB. Checksum (SHA-512):
a5b3bf0de546d4e07890093ede1045604b2c20ee2a1b156c2e13f943491049ec14a46eb76b8afe8b73ca447791ec4b686d179b4efb6ebb1d6b01116a4bdb8ba8
Type: errata. Mimetype: application/pdf

Authority records

Tarle, Magnus

