Comparing Transfer Learning and Rollout for Policy Adaptation in a Changing Network Environment
2024 (English). In: Proceedings of IEEE/IFIP Network Operations and Management Symposium 2024, NOMS 2024, Institute of Electrical and Electronics Engineers (IEEE), 2024. Conference paper, published paper (Refereed)
Abstract [en]
Dynamic resource allocation for network services is pivotal for achieving end-to-end management objectives. Previous research has demonstrated that Reinforcement Learning (RL) is a promising approach to resource allocation in networks, making it possible to obtain near-optimal control policies for non-trivial system configurations. However, current RL approaches have the drawback that a change in the system or the management objective necessitates expensive retraining of the RL agent. To tackle this challenge, practical solutions including offline retraining, transfer learning, and model-based rollout have been proposed. In this work, we study these methods and present comparative results that shed light on their respective performance and benefits. Our study finds that rollout achieves faster adaptation than transfer learning, yet its effectiveness depends heavily on the accuracy of the system model.
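To make the comparison concrete, here is a minimal sketch of model-based rollout in Python, assuming a generic discrete-action setting. The names `system_model(state, action)` (an approximate simulator returning a next state and reward) and `base_policy(state)` (the pre-trained policy being adapted) are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of model-based rollout: one-step lookahead over the
# candidate actions, with the continuation value estimated by following
# the pre-trained base policy inside the (possibly inaccurate) model.
# All names here are hypothetical, not taken from the paper's code.

def rollout_action(state, actions, system_model, base_policy,
                   horizon=3, gamma=0.99):
    """Return the action with the best simulated lookahead value.

    system_model(state, action) -> (next_state, reward)  # approximate model
    base_policy(state) -> action                          # pre-trained policy
    """
    best_action, best_value = None, float("-inf")
    for action in actions:
        # Simulate one step of the candidate action in the system model.
        next_state, reward = system_model(state, action)
        # Roll out the base policy in the model for a few more steps
        # to estimate the discounted continuation value.
        value, discount, s = reward, gamma, next_state
        for _ in range(horizon):
            a = base_policy(s)
            s, r = system_model(s, a)
            value += discount * r
            discount *= gamma
        if value > best_value:
            best_action, best_value = action, value
    return best_action
```

Because the improved action is computed online against the current model, adapting to a new objective or system only requires updating the model and reward, not retraining the policy weights; conversely, any error in `system_model` propagates directly into the lookahead values, which is consistent with the finding above that rollout adapts faster but is sensitive to model accuracy.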
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024.
Keywords [en]
Istio, Kubernetes, performance management, policy adaptation, reinforcement learning, rollout, service mesh
Identifiers
URN: urn:nbn:se:kth:diva-351010
DOI: 10.1109/NOMS59830.2024.10575398
ISI: 001270140300103
Scopus ID: 2-s2.0-85198375028
OAI: oai:DiVA.org:kth-351010
DiVA, id: diva2:1885685
Conference
2024 IEEE/IFIP Network Operations and Management Symposium, NOMS 2024, Seoul, Korea, May 6-10, 2024
Note
Part of ISBN 9798350327939
Available from: 2024-07-24. Created: 2024-07-24. Last updated: 2024-09-27. Bibliographically approved.