Linear Convergence for Distributed Optimization without Strong Convexity
2020 (English). In: Proceedings of the IEEE Conference on Decision and Control, Institute of Electrical and Electronics Engineers Inc., 2020, pp. 3643-3648. Conference paper, Published paper (Refereed)
Abstract [en]
This paper considers the distributed optimization problem of minimizing a global cost function, formed as a sum of local smooth cost functions, using only local information exchange. Various distributed optimization algorithms have been proposed for solving such problems. A standard condition for proving linear convergence of existing distributed algorithms is strong convexity of the cost functions. However, strong convexity may not hold in many practical applications, such as least squares and logistic regression. In this paper, we propose a distributed primal-dual gradient descent algorithm and establish its linear convergence under the condition that the global cost function satisfies the Polyak-Łojasiewicz condition. This condition is weaker than strong convexity, and the global minimizer is not necessarily unique. The theoretical result is illustrated by numerical simulations.
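For reference, the problem class and the Polyak-Łojasiewicz (PL) condition mentioned in the abstract can be written in the following standard form; the symbols n, f_i, f^*, and mu are generic notation used here for illustration, not taken verbatim from the paper:

  \min_{x \in \mathbb{R}^d} \; f(x) = \sum_{i=1}^{n} f_i(x),
  \qquad
  \tfrac{1}{2}\,\lVert \nabla f(x) \rVert^2 \;\ge\; \mu \bigl(f(x) - f^{*}\bigr)
  \quad \forall x \in \mathbb{R}^d, \ \mu > 0.

Unregularized least squares with a rank-deficient data matrix satisfies the PL condition without being strongly convex, which is why a linear-rate guarantee under PL covers cases such as those named in the abstract.

The following is a minimal sketch of a generic consensus-based primal-dual gradient method for this problem class, written in Python/NumPy. The ring graph, the step sizes alpha and beta, and the least-squares local costs are illustrative assumptions; this is not the specific algorithm or step-size rule analyzed in the paper.

  # Sketch of a generic consensus-based primal-dual gradient method
  # for min_x sum_i f_i(x), illustrated with local least-squares costs.
  # All quantities below (graph, step sizes, data) are illustrative assumptions.
  import numpy as np

  rng = np.random.default_rng(0)
  n_agents, dim = 5, 3

  # Local data: f_i(x) = 0.5 * ||A_i x - b_i||^2
  A = [rng.standard_normal((4, dim)) for _ in range(n_agents)]
  b = [rng.standard_normal(4) for _ in range(n_agents)]

  def grad(i, x):
      # Gradient of agent i's local least-squares cost.
      return A[i].T @ (A[i] @ x - b[i])

  # Graph Laplacian of a ring: each agent exchanges iterates with two neighbors only.
  L = 2.0 * np.eye(n_agents)
  for i in range(n_agents):
      L[i, (i + 1) % n_agents] -= 1.0
      L[i, (i - 1) % n_agents] -= 1.0

  alpha, beta = 0.02, 0.02           # illustrative step sizes
  X = np.zeros((n_agents, dim))      # row i: primal iterate of agent i
  V = np.zeros((n_agents, dim))      # row i: dual iterate of agent i

  for _ in range(3000):
      G = np.vstack([grad(i, X[i]) for i in range(n_agents)])
      V = V + beta * (L @ X)              # dual ascent on the consensus constraint L X = 0
      X = X - alpha * (G + L @ X + V)     # primal descent on the augmented Lagrangian

  x_bar = X.mean(axis=0)
  print("consensus error ||L X||:", np.linalg.norm(L @ X))
  print("global gradient norm at average iterate:",
        np.linalg.norm(sum(grad(i, x_bar) for i in range(n_agents))))

The dual variables enforce agreement through the graph Laplacian, so each agent only needs its neighbors' current iterates at every step; this locality is the sense in which the method uses "local information exchange."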
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2020, pp. 3643-3648
Keywords [en]
Gradient methods, Logistic regression, Optimization, Distributed optimization, Global minimizers, Gradient descent algorithms, Least squares, Linear convergence, Local information, Standard conditions, Strong convexity, Cost functions
National Category
Control Engineering
Identifiers
URN: urn:nbn:se:kth:diva-301199
DOI: 10.1109/CDC42340.2020.9304381
ISI: 000717663402147
Scopus ID: 2-s2.0-85099876574
OAI: oai:DiVA.org:kth-301199
DiVA id: diva2:1591769
Conference
59th IEEE Conference on Decision and Control, CDC 2020, 14 December 2020 through 18 December 2020
Funder
Swedish Research Council
Note
QC 20210907
Available from: 2021-09-07. Created: 2021-09-07. Last updated: 2023-04-05. Bibliographically approved