In this paper, we focus on distributed learning over peer-to-peer networks. In particular, we address the challenge of expensive communications (which arises, e.g., when training neural networks) by proposing a novel local training algorithm, LT-ADMM. We extend the distributed ADMM by enabling the agents to perform multiple local gradient steps per communication round (local training). We present a preliminary convergence analysis of the algorithm under a graph regularity assumption, and show that the use of local training does not compromise the accuracy of the learned model. We compare the algorithm with the state of the art on a classification task and in different set-ups. The results are very promising, showing strong performance of LT-ADMM and paving the way for important future theoretical developments.
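To illustrate the idea of local training within a decentralized ADMM scheme, the following is a minimal sketch, not the paper's exact LT-ADMM: a generic consensus-ADMM loop in which each agent replaces the exact primal subproblem with a few local gradient steps between communication rounds. The parameters rho, alpha, local_steps, and rounds, as well as the function names, are illustrative assumptions.

```python
# A minimal sketch (assumed, not the paper's exact LT-ADMM): decentralized consensus
# ADMM where each agent's subproblem is solved inexactly by a few local gradient steps.
import numpy as np

def lt_admm_sketch(grads, x0, neighbors, rho=1.0, alpha=0.1, local_steps=5, rounds=100):
    """grads[i](x): gradient of agent i's local loss; neighbors[i]: list of agent i's peers."""
    n = len(grads)
    x = [x0.copy() for _ in range(n)]           # local models
    y = [np.zeros_like(x0) for _ in range(n)]   # dual variables

    for _ in range(rounds):
        # States exchanged with neighbors in this communication round.
        x_prev = [xi.copy() for xi in x]
        for i in range(n):
            # Local training: several gradient steps on the local augmented
            # Lagrangian, replacing the exact ADMM x-minimization.
            for _ in range(local_steps):
                penalty = sum(x[i] - 0.5 * (x_prev[i] + x_prev[j]) for j in neighbors[i])
                g = grads[i](x[i]) + y[i] + 2.0 * rho * penalty
                x[i] = x[i] - alpha * g
        # Dual ascent step after the communication round.
        x_new = [xi.copy() for xi in x]
        for i in range(n):
            y[i] = y[i] + rho * sum(x_new[i] - x_new[j] for j in neighbors[i])
    return x
```

In this sketch, increasing local_steps reduces the number of communication rounds needed per unit of optimization progress, which is the motivation for local training when communication is expensive.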