Online Distributed Learning with Quantized Finite-Time Coordination
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Decision and Control Systems (Automatic Control); Digital Futures. ORCID iD: 0000-0002-5634-8802
Boston University, Division of Systems Engineering, Department of Electrical and Computer Engineering, Boston, MA, US
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Decision and Control Systems (Automatic Control); Digital Futures. ORCID iD: 0000-0001-9940-5929
2023 (English). In: 2023 62nd IEEE Conference on Decision and Control, CDC 2023, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 5026-5032. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we consider online distributed learning problems. Online distributed learning refers to the process of training learning models on distributed data sources. In our setting, a set of agents needs to cooperatively train a learning model from streaming data. Unlike federated learning, the proposed approach does not rely on a central server but only on peer-to-peer communications among the agents. This approach is often used in scenarios where data cannot be moved to a centralized location due to privacy, security, or cost reasons. To overcome the absence of a central server, we propose a distributed algorithm that relies on a quantized, finite-time coordination protocol to aggregate the locally trained models. Furthermore, our algorithm allows for the use of stochastic gradients during local training. Stochastic gradients are computed using a randomly sampled subset of the local training data, which makes the proposed algorithm more efficient and scalable than traditional gradient descent. We analyze the performance of the proposed algorithm in terms of the mean distance from the online solution. Finally, we present numerical results for a logistic regression task.
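
To make the abstract's scheme concrete, below is a minimal Python sketch of the kind of algorithm it describes: each agent takes a local stochastic-gradient step on a mini-batch of its streaming data, then the agents aggregate their models over peer-to-peer links using quantized messages. Everything here is an illustrative assumption rather than the paper's method: the uniform quantizer, the Metropolis-style weight matrix, the round count, and the names `quantize`, `quantized_coordination`, and `sgd_step` are all hypothetical. In particular, the paper's protocol reaches coordination in finite time, which the fixed-round gossip loop below only approximates.

```python
import numpy as np

def quantize(x, step=0.05):
    # Uniform quantizer: a hypothetical stand-in for the paper's scheme.
    return step * np.round(x / step)

def quantized_coordination(models, W, rounds=15, step=0.05):
    # Peer-to-peer aggregation of the agents' models. A fixed number of
    # gossip rounds with quantized messages; only an approximation of the
    # exact finite-time coordination protocol used in the paper.
    x = np.asarray(models, dtype=float)        # shape: (n_agents, dim)
    for _ in range(rounds):
        x = W @ quantize(x, step)              # exchange quantized models
    return x

def sgd_step(w, X, y, lr=0.1, batch=8, rng=None):
    # One stochastic-gradient step for logistic regression, computed on a
    # randomly sampled mini-batch of the agent's local streaming data.
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(y), size=min(batch, len(y)), replace=False)
    p = 1.0 / (1.0 + np.exp(-(X[idx] @ w)))    # predicted probabilities
    grad = X[idx].T @ (p - y[idx]) / len(idx)
    return w - lr * grad

# Toy online loop over a ring of 4 agents (assumed topology).
rng = np.random.default_rng(0)
n_agents, dim = 4, 3
# Doubly stochastic weights for a ring graph: averaging with such a W
# preserves the network-wide mean of the models.
W = np.array([[0.5 , 0.25, 0.  , 0.25],
              [0.25, 0.5 , 0.25, 0.  ],
              [0.  , 0.25, 0.5 , 0.25],
              [0.25, 0.  , 0.25, 0.5 ]])
w = [np.zeros(dim) for _ in range(n_agents)]
truth = np.array([1.0, -2.0, 0.5])             # synthetic ground truth
for t in range(50):                            # each t: new streaming batch
    X = [rng.normal(size=(20, dim)) for _ in range(n_agents)]
    y = [(Xi @ truth > 0).astype(float) for Xi in X]
    w = [sgd_step(w[i], X[i], y[i], rng=rng) for i in range(n_agents)]
    w = list(quantized_coordination(w, W))     # aggregate after local step
```

The sketch alternates a local training step with a network-wide aggregation step, the basic structure the abstract describes; the paper's analysis concerns how quantization and stochastic gradients affect the mean distance from the online solution, which this toy example does not reproduce.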

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 5026-5032
Series
Proceedings of the IEEE Conference on Decision and Control, ISSN 0743-1546
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-343735
DOI: 10.1109/CDC49753.2023.10383824
ISI: 001166433804028
Scopus ID: 2-s2.0-85184823934
OAI: oai:DiVA.org:kth-343735
DiVA id: diva2:1839930
Conference
62nd IEEE Conference on Decision and Control, CDC 2023, Singapore, December 13-15, 2023
Note

QC 20240222

Part of ISBN 979-8-3503-0124-3

Available from: 2024-02-22. Created: 2024-02-22. Last updated: 2024-03-26. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Bastianello, Nicola; Johansson, Karl H.
