Distributed Variance Consensus with Application to Personalized Learning
DIEE, University of Cagliari, 09123 Cagliari, Italy.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control). KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Digital futures. ORCID iD: 0000-0002-5634-8802
DIEE, University of Cagliari, 09123 Cagliari, Italy.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control). KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Digital futures. ORCID iD: 0000-0001-9940-5929
2025 (English). In: IFAC-PapersOnLine, ISSN 2405-8963, Vol. 59, no. 4, p. 37-42. Article in journal (Refereed). Published.
Abstract [en]

This paper addresses the problem of computing the sample variance of datasets scattered across a network of interconnected agents. A general procedure is outlined that allows the agents to reach consensus on the variance of their aggregated local data; it involves two cascaded (dynamic) average consensus protocols. Our implementation of the procedure exploits the distributed ADMM, yielding a distributed protocol that requires neither the sharing of local, private data nor coordination by a central authority; the algorithm is proven to converge with a linear rate and zero steady-state error. The proposed distributed variance estimation scheme is then leveraged to tune personalization in "personalized learning", where agents aim to train local models tailored to their own data while still benefiting from cooperation with other agents to enhance the models' generalization power. The degree to which an agent tailors its local model should depend on the diversity of the local datasets, and we propose to use the estimated variance to tune this personalization. Numerical simulations test the proposed approach on a classification task of handwritten digits drawn from the EMNIST dataset, showing that variance-tuned personalization outperforms non-personalized training.
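
The two cascaded averages behind the procedure can be made concrete with a toy example: the agents first agree on the global mean of their data, then on the average squared deviation from that mean, which together give the variance of the pooled dataset. The sketch below is an illustration only, assuming equal local sample sizes, a fixed doubly stochastic mixing matrix on a five-agent ring, and plain synchronous consensus iterations in place of the distributed ADMM protocol described in the paper; all variable names and the network topology are illustrative.

# Minimal sketch (not the paper's ADMM implementation): pooled variance
# via two cascaded average-consensus stages.
# Assumptions: equal local sample sizes, doubly stochastic mixing matrix W,
# simple synchronous iterations x <- W x instead of distributed ADMM.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_samples = 5, 20
data = [rng.normal(loc=i, scale=1.0, size=n_samples) for i in range(n_agents)]

# Ring network with symmetric, doubly stochastic mixing weights.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    j = (i + 1) % n_agents
    W[i, j] = W[j, i] = 1.0 / 3.0
W += np.diag(1.0 - W.sum(axis=1))

def consensus(values, iters=200):
    # Repeated mixing drives every entry to the network-wide average.
    x = np.array(values, dtype=float)
    for _ in range(iters):
        x = W @ x
    return x

# Stage 1: consensus on the global mean (average of local means, equal counts).
global_mean = consensus([d.mean() for d in data])

# Stage 2: consensus on the average squared deviation from that global mean.
local_sq_dev = [np.mean((d - m) ** 2) for d, m in zip(data, global_mean)]
global_var = consensus(local_sq_dev)

pooled = np.concatenate(data)
print("consensus estimate   :", global_var[0])
print("centralized variance :", pooled.var())

In this toy setting the two stages recover the centralized (population) variance of the pooled data without any agent revealing its raw samples; the paper's ADMM-based protocol instead tracks these averages dynamically and comes with linear-rate convergence and zero steady-state error guarantees.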

Place, publisher, year, edition, pages
Elsevier BV, 2025. Vol. 59, no. 4, p. 37-42.
National Category
Control Engineering
Identifiers
URN: urn:nbn:se:kth:diva-370394
DOI: 10.1016/j.ifacol.2025.07.041
Scopus ID: 2-s2.0-105012471379
OAI: oai:DiVA.org:kth-370394
DiVA, id: diva2:2000818
Note

QC 20250925

Available from: 2025-09-25. Created: 2025-09-25. Last updated: 2025-09-25. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Bastianello, Nicola; Johansson, Karl H.
