2024 (English) In: Proceedings of IEEE/IFIP Network Operations and Management Symposium 2024, NOMS 2024, Institute of Electrical and Electronics Engineers (IEEE), 2024. Conference paper, Published paper (Refereed)
Abstract [en]
Dynamic resource allocation for network services is pivotal for achieving end-to-end management objectives. Previous research has demonstrated that Reinforcement Learning (RL) is a promising approach to resource allocation in networks, allowing near-optimal control policies to be obtained for non-trivial system configurations. Current RL approaches, however, have the drawback that a change in the system or the management objective necessitates expensive retraining of the RL agent. To tackle this challenge, practical solutions including offline retraining, transfer learning, and model-based rollout have been proposed. In this work, we study these methods and present comparative results that shed light on their respective performance and benefits. Our study finds that rollout achieves faster adaptation than transfer learning, yet its effectiveness highly depends on the accuracy of the system model.
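The model-based rollout method mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a generic setting in which a simulable system model, a base policy, and a value estimate are available, and the rollout policy scores each candidate action by simulating the model forward while following the base policy thereafter. All names (`rollout_action`, `model`, `base_policy`, `value_fn`) are hypothetical.

```python
def rollout_action(state, actions, model, base_policy, value_fn,
                   gamma=0.99, horizon=5):
    """Pick the action with the best simulated return under the model.

    model(s, a)    -> (next_state, reward)   # simulated system dynamics
    base_policy(s) -> action                 # policy followed after step 1
    value_fn(s)    -> float                  # bootstrap value estimate
    """
    best_action, best_return = None, float("-inf")
    for a in actions:
        s, ret, discount = state, 0.0, 1.0
        next_a = a
        for _ in range(horizon):
            s, r = model(s, next_a)      # simulated transition and reward
            ret += discount * r
            discount *= gamma
            next_a = base_policy(s)      # follow base policy after first step
        ret += discount * value_fn(s)    # bootstrap beyond the horizon
        if ret > best_return:
            best_action, best_return = a, ret
    return best_action
```

Because every decision is made by fresh simulations rather than by retrained weights, a rollout agent adapts as soon as the model or objective changes, which matches the abstract's finding that its effectiveness hinges on how accurately `model` reflects the real system.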
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
Istio, Kubernetes, Performance management, policy adaptation, reinforcement learning, rollout, service mesh
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-351010 (URN), 10.1109/NOMS59830.2024.10575398 (DOI), 001270140300103 (), 2-s2.0-85198375028 (Scopus ID)
Conference
2024 IEEE/IFIP Network Operations and Management Symposium, NOMS 2024, Seoul, Korea, May 6 2024 - May 10 2024
Note
Part of ISBN 9798350327939
2024-07-24, 2024-07-24, 2024-09-27. Bibliographically approved