Mitigation of Extreme Events in the Distribution Grid Through Control of Flexible Resources
Weiss, Xavier. KTH, School of Electrical Engineering and Computer Science (EECS), Electric Power and Energy Systems. ORCID iD: 0000-0002-6745-4918
Rolander, Arvid. KTH, School of Electrical Engineering and Computer Science (EECS), Electric Power and Energy Systems. ORCID iD: 0000-0002-5380-5289
Kazmi, Hussain. KU Leuven, Belgium. ORCID iD: 0000-0002-7765-8068
Nordström, Lars. KTH, School of Electrical Engineering and Computer Science (EECS), Electric Power and Energy Systems. ORCID iD: 0000-0003-3014-5609
Hilber, Patrik. KTH, School of Electrical Engineering and Computer Science (EECS), Electric Power and Energy Systems.
2026 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 14, p. 31520-31536. Article in journal (Refereed). Published.
Abstract [en]

Despite ongoing efforts to decarbonize society, climate change continues to result in more extreme events that can reduce the electrical grid’s ability to reliably supply power to end users. At the same time, across the distribution grid, Distributed Energy Resources (DERs), such as renewable generation, energy storage systems, and flexible resources, offer the possibility of operating the grid in novel ways. Distribution System Operators (DSOs) could employ these DERs to improve their ability to mitigate extreme events. This work therefore demonstrates how DERs, in the form of flexible loads, can be controlled by a Deep Reinforcement Learning (DRL) agent to minimize Energy Not Supplied (ENS) in the immediate aftermath of an extreme event. To obtain near-optimal performance on unseen scenarios, an enhanced Implicit Quantile Network (IQN+) architecture is proposed, trained, and evaluated on a modified CIGRE MV benchmark grid. The resulting IQN+ agent outperforms a passive baseline policy, a trained Rainbow DQN policy, and a single-timestep Optimal Power Flow (OPF)-based policy on the test set. Sensitivity analysis reveals that the location and quantity of available DERs also affect the efficacy of the IQN+ agent, with the agent preferentially acting on loads that yield greater average episode durations. These results highlight the potential for DRL to rapidly provide decision support to operators by suggesting remedial actions that mitigate the impact of extreme events.
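
The full text is not available in DiVA, so the sketch below is illustrative only and is not the authors' code. It assumes the standard CIGRE MV benchmark grid as shipped with pandapower and a conventional IQN quantile head in PyTorch (cosine embedding of sampled quantile fractions, per Dabney et al., 2018); the paper's "IQN+" enhancements, state and action definitions, and reward design are not described in this record and are not reproduced here. All layer sizes and names are assumptions.

    import math
    import torch
    import torch.nn as nn
    import pandapower.networks as pn

    # The paper uses a *modified* CIGRE MV grid; pandapower ships the
    # standard benchmark, which is a plausible starting point.
    net = pn.create_cigre_network_mv(with_der="pv_wind")

    class IQNHead(nn.Module):
        """Conventional IQN head: cosine-embed sampled quantile fractions
        tau and mix them multiplicatively into the state features
        (Dabney et al., 2018). Sizes are illustrative, not the paper's."""
        def __init__(self, feat_dim: int, n_actions: int, n_cos: int = 64):
            super().__init__()
            self.register_buffer("pi_i", torch.arange(1, n_cos + 1).float() * math.pi)
            self.phi = nn.Linear(n_cos, feat_dim)
            self.out = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                     nn.Linear(256, n_actions))

        def forward(self, state_feats: torch.Tensor, n_tau: int = 32):
            tau = torch.rand(state_feats.shape[0], n_tau, 1,
                             device=state_feats.device)   # sampled quantile fractions
            cos = torch.cos(tau * self.pi_i)              # (B, n_tau, n_cos)
            phi = torch.relu(self.phi(cos))               # (B, n_tau, feat_dim)
            mixed = state_feats.unsqueeze(1) * phi        # broadcast over tau
            return self.out(mixed), tau                   # per-quantile Q-values

    # Usage: averaging per-quantile outputs over tau recovers action values
    # for greedy load-control decisions.
    feats = torch.randn(8, 128)                           # batch of encoded grid states
    q_tau, tau = IQNHead(feat_dim=128, n_actions=10)(feats)
    q_mean = q_tau.mean(dim=1)                            # (8, 10) action values

Training would minimize the quantile Huber loss between quantile pairs sampled from the online and target networks, as in standard IQN; Rainbow DQN components (e.g., prioritized replay, n-step returns) are a natural point of comparison given the baselines named in the abstract.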

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2026. Vol. 14, p. 31520-31536.
Keywords [en]
Decision Support, Deep Reinforcement Learning, Demand Response, Resilience
National Category
Computer Sciences; Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:kth:diva-377873
DOI: 10.1109/ACCESS.2026.3666213
Scopus ID: 2-s2.0-105030831498
OAI: oai:DiVA.org:kth-377873
DiVA, id: diva2:2044455
Note

QC 20260309

Available from: 2026-03-09. Created: 2026-03-09. Last updated: 2026-03-09. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Weiss, Xavier; Rolander, Arvid; Nordström, Lars; Hilber, Patrik
