Near-Optimal Resilient Aggregation Rules for Distributed Learning Using 1-Center and 1-Mean Clustering with Outliers
College of Computer Science, Sichuan University.
School of Statistics and Data Science, Nankai University.
College of Computer Science, Sichuan University.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Decision and Control Systems (Automatic Control). ORCID iD: 0000-0002-0819-5303
2024 (English). In: Proceedings of the 38th AAAI Conference on Artificial Intelligence, Association for the Advancement of Artificial Intelligence (AAAI), 2024, Vol. 38, p. 16469-16477. Conference paper, Published paper (Refereed)
Abstract [en]

Byzantine machine learning has garnered considerable attention in light of the unpredictable faults that can occur in large-scale distributed learning systems. The key to securing resilience against Byzantine machines in distributed learning is the resilient aggregation mechanism. Although abundant resilient aggregation rules have been proposed, they are designed in an ad-hoc manner, imposing extra barriers on comparing, analyzing, and improving the rules across performance criteria. This paper studies near-optimal aggregation rules using clustering in the presence of outliers. Our outlier-robust clustering approach utilizes geometric properties of the update vectors provided by workers. Our analysis shows that constant approximations to the 1-center and 1-mean clustering problems with outliers provide near-optimal resilient aggregators for metric-based criteria, which have been proven crucial in the homogeneous and heterogeneous cases, respectively. In addition, we discuss two contradicting types of attacks under which no single aggregation rule is guaranteed to improve upon the naive average. Based on this discussion, we propose a two-phase resilient aggregation framework. We run experiments for image classification using a non-convex loss function. The proposed algorithms outperform previously known aggregation rules by a large margin with both homogeneous and heterogeneous data distributions among non-faulty workers. Code and appendix are available at https://github.com/jerry907/AAAI24-RASHB.
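The 1-mean-with-outliers idea behind such aggregators can be sketched as follows: given n worker update vectors of which at most f may be Byzantine, greedily discard the f vectors farthest from the running mean and average the remainder. This is only an illustrative sketch of the general approach; the function name and the greedy discarding scheme are our own, not the paper's exact algorithm.

```python
import numpy as np

def one_mean_with_outliers(updates, f):
    """Illustrative resilient aggregator: approximate 1-mean clustering
    with f outliers by greedily discarding the f update vectors farthest
    from the current mean, then averaging the remaining vectors.
    NOTE: a sketch of the idea only, not the paper's exact algorithm."""
    pts = np.asarray(updates, dtype=float)
    keep = np.arange(len(pts))
    for _ in range(f):
        center = pts[keep].mean(axis=0)          # current 1-mean estimate
        dists = np.linalg.norm(pts[keep] - center, axis=1)
        keep = np.delete(keep, np.argmax(dists))  # drop the farthest point
    return pts[keep].mean(axis=0)
```

For example, with four honest 2-D updates near the unit square and one large Byzantine update at (100, 100), setting f = 1 removes the adversarial vector and the aggregate reduces to the honest average.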

Place, publisher, year, edition, pages
Association for the Advancement of Artificial Intelligence (AAAI), 2024. Vol. 38, p. 16469-16477
Series
Proceedings of the AAAI Conference on Artificial Intelligence, ISSN 2159-5399
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-345728
DOI: 10.1609/aaai.v38i15.29584
ISI: 001239314700027
Scopus ID: 2-s2.0-85189519760
OAI: oai:DiVA.org:kth-345728
DiVA id: diva2:1852504
Conference
38th AAAI Conference on Artificial Intelligence, AAAI 2024, Feb 20-27, 2024, Vancouver, Canada
Note

QC 20240424

Available from: 2024-04-18. Created: 2024-04-18. Last updated: 2024-09-05. Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Liu, Changxin
