Fast Server Learning Rate Tuning for Coded Federated Dropout
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
KTH, School of Electrical Engineering and Computer Science (EECS).
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. ORCID iD: 0000-0002-9675-9729
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. ORCID iD: 0000-0002-1256-1070
2023 (English). In: FL 2022: Trustworthy Federated Learning / [ed] Goebel, R.; Yu, H.; Faltings, B.; Fan, L.; Xiong, Z., Springer Nature, 2023, Vol. 13448, p. 84-99. Conference paper, Published paper (Refereed)
Abstract [en]

In Federated Learning (FL), clients with low computational power train a common machine learning model by exchanging parameter updates instead of transmitting potentially private data. Federated Dropout (FD) is a technique that improves the communication efficiency of an FL session by selecting a subset of model parameters to be updated in each training round. However, compared to standard FL, FD produces considerably lower accuracy and requires a longer convergence time. In this chapter, we leverage coding theory to enhance FD by allowing a different sub-model to be used at each client. We also show that by carefully tuning the server learning rate hyper-parameter, we can achieve a higher training speed while also reaching up to the same final accuracy as the no-dropout case. Evaluations on the EMNIST dataset show that our mechanism achieves 99.6% of the final accuracy of the no-dropout case while requiring 2.43x less bandwidth to achieve this level of accuracy.
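
Illustrative sketch (not the authors' implementation): the toy Python below shows the general idea of federated averaging where each client trains only a randomly masked sub-model and the server scales the aggregated update by a server learning rate. All names and constants (eta_s, mask_fraction, num_clients, the stand-in gradients) are assumptions for illustration, not values from the paper.

    # Toy federated averaging with per-client dropout masks and a server learning rate.
    import numpy as np

    rng = np.random.default_rng(0)
    dim = 10                       # toy model size
    global_w = np.zeros(dim)       # global model parameters
    num_clients = 4                # illustrative assumption
    mask_fraction = 0.5            # fraction of parameters each client updates
    eta_s = 1.0                    # server learning rate (the tuned hyper-parameter)

    def local_update(w, mask):
        """Toy client step: the client only changes the unmasked coordinates."""
        grad = rng.normal(size=w.shape)   # stand-in for a real local gradient
        return -0.1 * grad * mask         # masked coordinates stay untouched

    for rnd in range(3):                  # a few federated rounds
        deltas, masks = [], []
        for _ in range(num_clients):
            # Each client receives its own sub-model: a random mask over parameters.
            mask = (rng.random(dim) < mask_fraction).astype(float)
            deltas.append(local_update(global_w, mask))
            masks.append(mask)
        # Average only over clients that actually updated each coordinate,
        # then scale the aggregated update by the server learning rate.
        coverage = np.maximum(np.sum(masks, axis=0), 1.0)
        avg_delta = np.sum(deltas, axis=0) / coverage
        global_w += eta_s * avg_delta
        print(f"round {rnd}: ||w|| = {np.linalg.norm(global_w):.4f}")

In this sketch, raising or lowering eta_s changes how aggressively the sparse, coverage-normalized client updates move the global model, which is the knob the paper's title refers to as the server learning rate.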

Place, publisher, year, edition, pages
Springer Nature, 2023. Vol. 13448, p. 84-99
Series
Lecture Notes in Artificial Intelligence, ISSN 2945-9133
Keywords [en]
Federated Learning, Hyper-parameter tuning, Coding Theory
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-330513
DOI: 10.1007/978-3-031-28996-5_7
ISI: 000999818400007
Scopus ID: 2-s2.0-85152560522
OAI: oai:DiVA.org:kth-330513
DiVA, id: diva2:1777954
Conference
1st International Workshop on Trustworthy Federated Learning (FL), July 23, 2022, Vienna, Austria
Note

QC 20230630

Available from: 2023-06-30. Created: 2023-06-30. Last updated: 2023-06-30. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Verardo, Giacomo; Barreira, Daniel; Chiesa, Marco; Kostic, Dejan; Maguire Jr., Gerald Q.
