Congruent Learning for Self-Regulated Federated Learning in 6G
2024 (English) In: IEEE Transactions on Machine Learning in Communications and Networking, ISSN 2831-316X, Vol. 2, p. 129-149. Article in journal (Refereed) Published
Abstract [en]
Future 6G networks are expected to be AI-native, with distributed machine learning functionalities responsible for improving and automating a variety of network- and service-management tasks. To enable a privacy-preserving approach to distributed learning, federated learning (FL) has become prevalent in the communication-and-networking domain. For efficient network management, however, FL must be automated and require minimal hyperparameter tuning. An outstanding challenge on the way to automated FL is handling overfitting. Existing techniques tackle overfitting via regularization heuristics that rely on hyperparameter tuning and thus presume the availability of representative validation data. In dynamic and heterogeneous network environments, this assumption is limiting. Even when validation data can be assumed to exist, hyperparameter tuning incurs added communication and compute overhead that grows prohibitively as the federation scales. Here, we propose congruent federated learning (CFL), a self-regulated method of learning that is robust to overfitting and achieves this robustness without reliance on hyperparameter tuning. CFL employs a self-taught regularization mechanism that keeps local models from overfitting to their local data. This is enabled by introducing congruent activation functions, a class of similarity-promoting activation functions that discourage local models from differing excessively from the global (federated) model. Across four networking use cases on several tasks, reflecting different profiles of data heterogeneity and limited data availability, CFL is shown to greatly reduce overfitting and, in nearly all cases, to improve performance, with a relative gain of about 21% averaged across all use cases.
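The abstract describes congruent activation functions only at a high level: activations that promote similarity between the local and global models, damping updates that would drive the local model away from the federated one. The sketch below is a hypothetical illustration of that idea, not the paper's actual formulation; the names `congruence` and `congruent_relu`, and the cosine-similarity gating, are assumptions chosen for simplicity.

```python
import numpy as np

def congruence(w_local, w_global):
    # Cosine similarity between the flattened local and global weight
    # vectors: 1.0 when the models coincide, smaller as they diverge.
    num = float(np.dot(w_local.ravel(), w_global.ravel()))
    den = float(np.linalg.norm(w_local) * np.linalg.norm(w_global)) + 1e-12
    return num / den

def congruent_relu(z, w_local, w_global):
    # Hypothetical similarity-gated ReLU: the further the local weights
    # drift from the global weights, the more the activation output is
    # damped, shrinking the gradients that would push the local model
    # still further from the global model.
    gate = max(congruence(w_local, w_global), 0.0)
    return np.maximum(z, 0.0) * gate
```

In this toy version, the gate acts as a built-in regularizer with no tunable coefficient, which is the property the abstract attributes to CFL: regularization strength emerges from the local-global discrepancy itself rather than from a hand-tuned hyperparameter.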
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024. Vol. 2, p. 129-149
Keywords [en]
Federated learning, Data models, Tuning, Computational modeling, Automation, 6G mobile communication, Predictive models, Collaborative intelligence, distributed and federated machine learning for efficient network performance, scalability and complexity of machine learning in networks
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-364696
DOI: 10.1109/TMLCN.2023.3347680
ISI: 001487791100001
Scopus ID: 2-s2.0-105027920054
OAI: oai:DiVA.org:kth-364696
DiVA, id: diva2:1979994
Note
QC 20260128
Available from: 2025-07-01 Created: 2025-07-01 Last updated: 2026-01-28 Bibliographically approved