We investigate an optimal distributed energy dispatch strategy for networked Microgrids (MGs), considering uncertainties of distributed energy resources, the impact of energy storage, and privacy. The energy dispatch problem is formulated as a Partially Observable Markov Decision Process (POMDP) and is solved using the Deep Deterministic Policy Gradient (DDPG) method. To reduce the communication load and protect privacy, a federated reinforcement learning (FRL) framework is proposed, in which each MG trains model parameters on its own local data and transmits only model weights to the global server. As a result, each MG obtains a global model that generalizes well across various cases. The proposed method is communication-efficient, privacy-preserving, and scalable. Numerical simulations on real-world datasets demonstrate the effectiveness of the proposed FRL method.
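The aggregation step of the FRL framework described above can be sketched as follows. This is a minimal illustrative example, assuming a federated-averaging style server: the function names (`local_update`, `fed_avg`) and the toy gradient step are hypothetical placeholders, not the paper's actual DDPG training loop.

```python
from typing import List

def local_update(weights: List[float], grads: List[float], lr: float = 0.1) -> List[float]:
    """One local training step on an MG's private data (toy gradient descent);
    raw data never leaves the MG, only the updated weights do."""
    return [w - lr * g for w, g in zip(weights, grads)]

def fed_avg(client_weights: List[List[float]]) -> List[float]:
    """Server-side aggregation: element-wise mean of the MGs' weight vectors."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

# Example round: three MGs start from the same global model, train locally
# on their own (private) gradients, then the server averages the weights.
global_model = [0.5, -0.2]
local_grads = [[0.1, 0.0], [0.3, -0.2], [0.2, 0.2]]
clients = [local_update(global_model, g) for g in local_grads]
new_global = fed_avg(clients)
print(new_global)
```

Because only weight vectors cross the network, the communication cost per round is independent of the size of each MG's local dataset, which is what makes the scheme communication-efficient and privacy-preserving.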
Part of ISBN 9781665464413