kth.se | Publications KTH
Distributed zeroth-order optimization: Convergence rates that match centralized counterpart
School of Automation, Nanjing University of Science and Technology, Nanjing 210094, Jiangsu, P.R. China.
State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou 310058, Zhejiang, P.R. China.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Decision and Control Systems (Automatic Control). ORCID iD: 0000-0002-4679-4673
Australian Centre for Field Robotics, Sydney Institute for Robotics and Intelligent Systems, The University of Sydney, Sydney, NSW 2006, Australia.
2024 (English). In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 159, article id 111328. Article in journal (refereed), published.
Abstract [en]

Zeroth-order optimization has become increasingly important in complex optimization and machine learning when cost functions cannot be described in closed analytical form. The key idea of zeroth-order optimization is that a learner builds gradient estimates from queries sent to the cost function; traditional gradient descent algorithms can then be executed with the gradients replaced by these estimates. For optimization over large-scale multi-agent systems with decentralized data and costs, zeroth-order optimization can still be used to develop scalable and distributed algorithms. In this paper, we aim to understand how performance transitions from centralized to distributed zeroth-order algorithms in terms of convergence rates, focusing on multi-agent systems with time-varying communication networks. We establish a series of convergence rates for distributed zeroth-order subgradient algorithms under both one-point and two-point zeroth-order oracles. Apart from the additional node-to-node communication cost due to the distributed nature of the algorithms, the established convergence rates are shown to match their centralized counterparts. We also propose a multi-stage distributed zeroth-order algorithm that better utilizes the learning rates, reduces the computational complexity, and attains even faster convergence rates for compact decision sets.
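As an illustration of the idea the abstract describes (and not the paper's exact algorithm), a standard two-point zeroth-order oracle estimates the gradient of f at x from two function queries along a random direction, and plain gradient descent is then run with this estimate in place of the true gradient. The quadratic objective and all parameter values below are hypothetical, chosen only to keep the sketch self-contained:

```python
import math
import random

# Two-point zeroth-order gradient estimate (standard form, illustrative):
#   g(x) = (d / (2*delta)) * (f(x + delta*u) - f(x - delta*u)) * u,
# where u is drawn uniformly from the unit sphere in R^d.
# Since E[u u^T] = I/d, g(x) is (up to O(delta) smoothing error)
# an unbiased estimate of the gradient of f at x.
def zo_gradient(f, x, delta=1e-4):
    d = len(x)
    u = [random.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(ui * ui for ui in u))
    u = [ui / norm for ui in u]  # uniform direction on the unit sphere
    x_plus = [xi + delta * ui for xi, ui in zip(x, u)]
    x_minus = [xi - delta * ui for xi, ui in zip(x, u)]
    scale = d * (f(x_plus) - f(x_minus)) / (2.0 * delta)
    return [scale * ui for ui in u]

# Gradient descent with the true gradient replaced by the ZO estimate.
def zo_gradient_descent(f, x0, step=0.1, iters=2000):
    x = list(x0)
    for _ in range(iters):
        g = zo_gradient(f, x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Toy example: quadratic with minimizer at (1, -2).
random.seed(0)
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
x_star = zo_gradient_descent(f, [5.0, 5.0])
```

The distributed setting studied in the paper additionally averages iterates over a (time-varying) communication network between such zeroth-order updates; the one-point oracle variant uses a single noisy query per estimate instead of two.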

Place, publisher, year, edition, pages
Elsevier BV, 2024. Vol. 159, article id 111328.
Keywords [en]
Distributed optimization, Multi-stage optimization algorithm, Optimal convergence rate, Zeroth-order optimization
National Category
Computational Mathematics
Identifiers
URN: urn:nbn:se:kth:diva-339481
DOI: 10.1016/j.automatica.2023.111328
ISI: 001101835000001
Scopus ID: 2-s2.0-85175044513
OAI: oai:DiVA.org:kth-339481
DiVA, id: diva2:1811449
Note

QC 20231113

Available from: 2023-11-13. Created: 2023-11-13. Last updated: 2025-12-05. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus

Authority records

Proutiere, Alexandre

