kth.se Publications

Publications (10 of 12)
Wu, X., Sou, K. C. & Lu, J. (2023). A Fenchel dual gradient method enabling regularization for nonsmooth distributed optimization over time-varying networks. Optimization Methods and Software, 38(4), 813-836
A Fenchel dual gradient method enabling regularization for nonsmooth distributed optimization over time-varying networks
2023 (English) In: Optimization Methods and Software, ISSN 1055-6788, E-ISSN 1029-4937, Vol. 38, no. 4, p. 813-836. Article in journal (Refereed). Published.
Abstract [en]

In this paper, we develop a regularized Fenchel dual gradient method (RFDGM), which allows nodes in a time-varying undirected network to find a common decision, in a fully distributed fashion, for minimizing the sum of their local objective functions subject to their local constraints. Unlike most existing distributed optimization algorithms that also cope with time-varying networks, RFDGM is able to handle problems with general convex objective functions and distinct local constraints, and still offers non-asymptotic convergence guarantees. Specifically, under a standard network connectivity condition, we show that RFDGM is guaranteed to reach ϵ-accuracy in both optimality and feasibility within (Formula presented.) iterations. Such iteration complexity can be improved to (Formula presented.) if the local objective functions are strongly convex but not necessarily differentiable. Finally, simulation results demonstrate the competitive performance of RFDGM in practice.

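The abstract only sketches the method; as a rough illustration of the Fenchel-dual idea behind methods of this family (not the paper's actual RFDGM updates or its time-varying-network machinery), the toy below runs regularized dual gradient ascent on a two-node consensus problem. The quadratic objectives, the regularization weight eps, and the step-size are all hypothetical choices.

```python
import numpy as np

# Toy: min f1(x) + f2(x) with fi(x) = 0.5*a_i*(x - c_i)^2, solved by
# duplicating x per node and dualizing the consensus constraint x1 = x2.
a = np.array([1.0, 2.0])
c = np.array([0.0, 3.0])

lam = 0.0     # dual variable for x1 - x2 = 0
eps = 1e-3    # small regularization added to each local subproblem
step = 0.3    # dual step-size (hypothetical choice)

for _ in range(2000):
    # Local minimizations of the regularized Lagrangian (closed form
    # here because the objectives are quadratic):
    x1 = (a[0] * c[0] - lam) / (a[0] + eps)
    x2 = (a[1] * c[1] + lam) / (a[1] + eps)
    # Dual gradient ascent on the consensus violation:
    lam += step * (x1 - x2)

print(x1, x2)  # both approach argmin of f1 + f2, which is x = 2
```
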
Place, publisher, year, edition, pages
Informa UK Limited, 2023
Keywords
decentralized optimization, distributed optimization, dual optimization method
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-338476 (URN)
10.1080/10556788.2023.2189713 (DOI)
000959495300001 (ISI)
2-s2.0-85152010605 (Scopus ID)
Note

QC 20231115

Available from: 2023-11-15 Created: 2023-11-15 Last updated: 2023-11-15. Bibliographically approved.
Wu, X., Liu, C., Magnússon, S. & Johansson, M. (2023). Delay-agnostic Asynchronous Coordinate Update Algorithm. In: Proceedings of the 40th International Conference on Machine Learning, ICML 2023. Paper presented at the 40th International Conference on Machine Learning, ICML 2023, Honolulu, United States of America, July 23-29, 2023 (pp. 37582-37606). ML Research Press
Delay-agnostic Asynchronous Coordinate Update Algorithm
2023 (English) In: Proceedings of the 40th International Conference on Machine Learning, ICML 2023, ML Research Press, 2023, p. 37582-37606. Conference paper, Published paper (Refereed).
Abstract [en]

We propose a delay-agnostic asynchronous coordinate update algorithm (DEGAS) for computing operator fixed points, with applications to asynchronous optimization. DEGAS includes novel asynchronous variants of ADMM and block-coordinate descent as special cases. We prove that DEGAS converges with both bounded and unbounded delays under delay-free parameter conditions. We also show, both theoretically and experimentally, that DEGAS adapts well to the actual delays. The effectiveness of DEGAS is demonstrated by numerical experiments on classification problems.

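As a minimal sketch of what an asynchronous coordinate update on stale reads looks like (in the spirit of the abstract, but not DEGAS's exact update or parameter conditions), the following simulates delayed coordinate-wise Krasnosel'skii-Mann iterations on a linear contraction. The operator, the delay model, and the step 0.5 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed-point target: T(x) = A x + b with ||A|| < 1, so T is a
# contraction with unique fixed point x* = (I - A)^{-1} b.
n = 5
G = rng.standard_normal((n, n))
A = 0.45 * G / np.linalg.norm(G, 2)   # enforce ||A|| < 1
b = rng.standard_normal(n)
x_star = np.linalg.solve(np.eye(n) - A, b)

x = np.zeros(n)
history = [x.copy()]                  # past iterates, to simulate stale reads
max_delay = 10

for _ in range(5000):
    i = rng.integers(n)                            # coordinate updated this step
    d = rng.integers(min(len(history), max_delay))
    x_read = history[-1 - d]                       # possibly outdated copy of x
    # Krasnosel'skii-Mann style coordinate update on the stale read:
    x[i] += 0.5 * ((A @ x_read + b)[i] - x_read[i])
    history.append(x.copy())

print(np.linalg.norm(x - x_star))     # small: converges despite the delays
```
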
Place, publisher, year, edition, pages
ML Research Press, 2023
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-350006 (URN)
2-s2.0-85174395838 (Scopus ID)
Conference
40th International Conference on Machine Learning, ICML 2023, Honolulu, United States of America, July 23-29, 2023
Note

QC 20240705

Available from: 2024-07-05 Created: 2024-07-05 Last updated: 2024-07-05. Bibliographically approved.
Wu, X., Liu, C., Magnússon, S. & Johansson, M. (2023). Delay-agnostic Asynchronous Distributed Optimization. In: 2023 62nd IEEE Conference on Decision and Control (CDC). Paper presented at the 62nd IEEE Conference on Decision and Control (CDC), December 13-15, 2023, Singapore (pp. 1082-1087). Institute of Electrical and Electronics Engineers (IEEE)
Delay-agnostic Asynchronous Distributed Optimization
2023 (English) In: 2023 62nd IEEE Conference on Decision and Control (CDC), Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 1082-1087. Conference paper, Published paper (Refereed).
Abstract [en]

Existing asynchronous distributed optimization algorithms often use diminishing step-sizes that cause slow practical convergence, or fixed step-sizes that depend on an assumed upper bound of delays. Not only is such a delay bound hard to obtain in advance, but it is also large and therefore results in unnecessarily slow convergence. This paper develops asynchronous versions of two distributed algorithms, DGD and DGD-ATC, for solving consensus optimization problems over undirected networks. In contrast to alternatives, our algorithms can converge to the fixed point set of their synchronous counterparts using step-sizes that are independent of the delays. We establish convergence guarantees under both partial and total asynchrony. The practical performance of our algorithms is demonstrated by numerical experiments.

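A serial simulation of the kind of delay-agnostic update the abstract describes might look as follows: a node occasionally wakes up and applies a DGD step using possibly outdated values, with a step-size chosen without any delay bound. The ring graph, the weights, and the step-size are hypothetical; the paper's DGD-ATC variant and its exact convergence conditions are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Consensus optimization: n nodes, f_i(x) = 0.5*(x - c_i)^2, over a
# ring graph with a doubly stochastic mixing matrix W.
n = 4
c = rng.standard_normal(n)
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros(n)           # current local decisions
mailbox = [x.copy()]      # stale copies that neighbors may read
gamma = 0.05              # step-size chosen with no delay bound in mind

for _ in range(3000):
    i = rng.integers(n)                        # node i wakes up
    d = rng.integers(min(len(mailbox), 8))     # its view is d steps old
    x_old = mailbox[-1 - d]
    # DGD step on possibly outdated information:
    x[i] = W[i] @ x_old - gamma * (x_old[i] - c[i])
    mailbox.append(x.copy())

print(x, c.mean())  # entries settle near mean(c), within an O(gamma) spread
```

As the abstract notes, the iterates approach the fixed-point set of the synchronous method, not the exact optimum; shrinking gamma tightens the gap.
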
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE Conference on Decision and Control, ISSN 0743-1546
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-344706 (URN)
10.1109/CDC49753.2023.10383647 (DOI)
001166433800128 (ISI)
2-s2.0-85180319795 (Scopus ID)
Conference
62nd IEEE Conference on Decision and Control (CDC), December 13-15, 2023, Singapore
Note

QC 20240326

Part of ISBN 979-8-3503-0124-3

Available from: 2024-03-26 Created: 2024-03-26 Last updated: 2024-10-11. Bibliographically approved.
Wu, X., Magnússon, S. & Johansson, M. (2023). Distributed safe resource allocation using barrier functions. Automatica, 153, Article ID 111051.
Distributed safe resource allocation using barrier functions
2023 (English) In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 153, article id 111051. Article in journal (Refereed). Published.
Abstract [en]

Resource allocation plays a central role in networked systems such as smart grids, communication networks, and urban transportation systems. In these systems, many constraints have physical meaning and infeasible allocations can result in a system breakdown. Hence, algorithms with asymptotic feasibility guarantees can be insufficient since they cannot ensure that an implementable solution is found in finite time. This paper proposes a distributed feasible method (DFM) for resource allocation based on barrier functions. In DFM, every iterate is feasible and safe to implement since it does not violate the physical constraints. We prove that, under mild conditions, DFM converges to an arbitrarily small neighborhood of the optimum. Numerical experiments demonstrate the competitive performance of DFM.

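To make the feasible-method idea concrete, here is a centralized toy (DFM itself is distributed): a log-barrier keeps every iterate strictly inside the inequality constraints, and each step is projected onto the budget constraint so feasibility holds at all times. The problem data, barrier weight mu, and step-size are illustrative assumptions, not the paper's DFM updates.

```python
import numpy as np

# Toy resource allocation: min sum_i 0.5*(x_i - c_i)^2 subject to
# sum(x) = b and x > 0. The log-barrier keeps iterates strictly
# positive; projecting steps onto {d : sum(d) = 0} preserves the budget.
c = np.array([1.0, 2.0, 4.0])
b = 3.0

mu = 0.1                          # barrier weight (hypothetical)
step = 0.05
x = np.full(len(c), b / len(c))   # feasible start: positive, sums to b

for _ in range(5000):
    g = (x - c) - mu / x           # gradient of objective + barrier
    d = g - g.mean()               # project onto the budget hyperplane
    t = step
    while np.any(x - t * d <= 0):  # backtrack to stay strictly positive
        t *= 0.5
    x = x - t * d

# Every iterate was feasible; the true optimum is (0, 0.5, 2.5), and
# shrinking mu moves the barrier solution toward it.
print(x, x.sum())
```
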
Place, publisher, year, edition, pages
Elsevier BV, 2023
Keywords
barrier function, distributed optimization, feasible method, safe resource allocation
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-330904 (URN)
10.1016/j.automatica.2023.111051 (DOI)
2-s2.0-85153293753 (Scopus ID)
Note

QC 20230705

Available from: 2023-07-05 Created: 2023-07-05 Last updated: 2024-12-03. Bibliographically approved.
Liu, C., Wu, X., Yi, X., Shi, Y. & Johansson, K. H. (2023). Rate analysis of dual averaging for nonconvex distributed optimization. In: IFAC-PapersOnLine. Paper presented at the 22nd IFAC World Congress, Yokohama, Japan, July 9-14, 2023 (pp. 5209-5214). Elsevier BV, 56
Rate analysis of dual averaging for nonconvex distributed optimization
2023 (English) In: IFAC-PapersOnLine, Elsevier BV, 2023, Vol. 56, p. 5209-5214. Conference paper, Published paper (Refereed).
Abstract [en]

This work studies nonconvex distributed constrained optimization over stochastic communication networks. We revisit the distributed dual averaging algorithm, which is known to converge for convex problems. We start from the centralized case, where the change between two consecutive updates is taken as the suboptimality measure. We validate the use of such a measure by showing that it is closely related to stationarity, which equips us with a handle to study the convergence of dual averaging in nonconvex optimization. We prove that the squared norm of this suboptimality measure converges at rate O(1/t). Then, for the distributed setup, we show convergence to a stationary point at rate O(1/t). Finally, a numerical example is given to illustrate our theoretical results.

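The consecutive-update measure is easiest to see in the centralized case the abstract starts from. Below is standard weighted dual averaging on a box-constrained toy problem, tracking ||x_{t+1} - x_t|| as the suboptimality measure; the problem and the sqrt(t) weighting are textbook choices and are not taken from the paper's experiments.

```python
import numpy as np

# Centralized weighted dual averaging on min 0.5*||x - c||^2 over the
# box X = [-1, 1]^2, tracking the consecutive-update change.
c = np.array([2.0, -1.5])

def grad(x):
    return x - c

z = np.zeros(2)                # running sum of gradients
x = np.zeros(2)
for t in range(1, 2001):
    z = z + grad(x)
    beta = np.sqrt(t)          # standard dual-averaging weighting
    x_new = np.clip(-z / beta, -1.0, 1.0)   # prox step for the box
    diff = np.linalg.norm(x_new - x)        # the suboptimality measure
    x = x_new

print(x, diff)  # x -> (1, -1), the constrained optimum; diff -> 0
```
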
Place, publisher, year, edition, pages
Elsevier BV, 2023
Series
IFAC-PapersOnLine, ISSN 2405-8963; 56
Keywords
distributed constrained optimization, dual averaging, multi-agent consensus, nonconvex optimization, stochastic networks
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-343686 (URN)
10.1016/j.ifacol.2023.10.117 (DOI)
2-s2.0-85184963049 (Scopus ID)
Conference
22nd IFAC World Congress, Yokohama, Japan, July 9-14, 2023
Note

QC 20240223

Part of ISBN 9781713872344

Available from: 2024-02-22 Created: 2024-02-22 Last updated: 2024-02-23. Bibliographically approved.
Berglund, E., Khirirat, S., Wu, X., Magnússon, S. & Johansson, M. (2023). Revisiting the Curvature-aided IAG: Improved Theory and Reduced Complexity. In: IFAC-PapersOnLine. Paper presented at the 22nd IFAC World Congress, Yokohama, Japan, July 9-14, 2023 (pp. 5221-5226). Elsevier BV, 56
Revisiting the Curvature-aided IAG: Improved Theory and Reduced Complexity
2023 (English) In: IFAC-PapersOnLine, Elsevier BV, 2023, Vol. 56, p. 5221-5226. Conference paper, Published paper (Refereed).
Abstract [en]

The curvature-aided IAG (CIAG) algorithm is an efficient asynchronous optimization method that accelerates IAG using a delay compensation technique. However, existing step-size rules for CIAG are conservative and hard to implement, and the Hessian computation in CIAG is often computationally expensive. To alleviate these issues, we first provide an easy-to-implement and less conservative step-size rule for CIAG. Next, we propose a modified CIAG algorithm that reduces the computational complexity by approximating the Hessian with a constant matrix. Convergence results are derived for each algorithm on both convex and strongly convex problems, and numerical experiments on logistic regression demonstrate their effectiveness in practice.

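The delay-compensation mechanism is easiest to see on quadratics, where the Hessian correction of a stale gradient is exact. The sketch below implements that mechanism with cyclic component updates; it illustrates the curvature-aiding idea only, not the paper's CIAG step-size rule, and the reduced-complexity variant mentioned in the abstract would replace each A[j] in the compensation by a single constant matrix.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sum of m quadratics f_j(x) = 0.5*x'A_j x - b_j'x. A stale gradient of
# f_j, evaluated at x_tau, is corrected by A_j @ (x - x_tau); for
# quadratics this correction recovers the gradient at x exactly.
m, n = 8, 3
A = []
for _ in range(m):
    v = rng.standard_normal((n, 1))
    A.append(np.eye(n) + v @ v.T)          # positive definite components
b = rng.standard_normal((m, n))
x_star = np.linalg.solve(sum(A), sum(b))

x = np.zeros(n)
last_x = [x.copy() for _ in range(m)]      # stale evaluation points
last_g = [A[j] @ x - b[j] for j in range(m)]
step = 1.0 / np.linalg.norm(sum(A), 2)     # 1/L step-size

for k in range(400):
    # Curvature-aided aggregate gradient (delay compensation):
    g = sum(last_g[j] + A[j] @ (x - last_x[j]) for j in range(m))
    x = x - step * g
    i = k % m                              # refresh one component per step
    last_x[i], last_g[i] = x.copy(), A[i] @ x - b[i]

print(np.linalg.norm(x - x_star))          # tiny: compensation is exact here
```
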
Place, publisher, year, edition, pages
Elsevier BV, 2023
Keywords
asynchronous optimization, delay compensation, incremental aggregated gradient
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-343698 (URN)
10.1016/j.ifacol.2023.10.119 (DOI)
2-s2.0-85184959428 (Scopus ID)
Conference
22nd IFAC World Congress, Yokohama, Japan, July 9-14, 2023
Note

QC 20240222

Part of ISBN 9781713872344

Available from: 2024-02-22 Created: 2024-02-22 Last updated: 2024-02-22. Bibliographically approved.
Wu, X. & Lu, J. (2022). A Unifying Approximate Method of Multipliers for Distributed Composite Optimization. IEEE Transactions on Automatic Control, 1-1
A Unifying Approximate Method of Multipliers for Distributed Composite Optimization
2022 (English) In: IEEE Transactions on Automatic Control, ISSN 0018-9286, E-ISSN 1558-2523, p. 1-1. Article in journal (Refereed). Published.
Abstract [en]

This paper investigates solving convex composite optimization on an undirected network, where each node, privately endowed with a smooth component function and a nonsmooth one, is required to minimize the sum of all the component functions throughout the network. To address such a problem, a general Approximate Method of Multipliers (AMM) is developed, which approximates the Method of Multipliers by virtue of a surrogate function with numerous options. We then design the possibly nonseparable, time-varying surrogate function in various ways, leading to different distributed realizations of AMM. We demonstrate that AMM generalizes more than ten state-of-the-art distributed optimization algorithms, and that certain specific designs of its surrogate function yield a variety of algorithms that are new to the literature. Furthermore, we show that AMM achieves an O(1/k) rate of convergence to optimality, and that the convergence rate becomes linear when the problem is locally restricted strongly convex and smooth. These convergence rates provide new or stronger guarantees for many prior methods that can be viewed as specializations of AMM.

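For orientation, the baseline that AMM approximates is the classical Method of Multipliers: alternate an exact augmented-Lagrangian minimization with a multiplier ascent step. The toy below shows only that baseline on a small equality-constrained problem; per the abstract, AMM replaces the exact x-minimization with a surrogate that admits distributed computation. All problem data here are hypothetical.

```python
import numpy as np

# Classical Method of Multipliers on min 0.5*||x - c||^2 s.t. B x = d.
# AMM replaces the exact x-minimization below with a surrogate.
c = np.array([3.0, 1.0])
B = np.array([[1.0, 1.0]])
d = np.array([1.0])

rho = 1.0                      # penalty parameter
v = np.zeros(1)                # multiplier estimate
I = np.eye(2)

for _ in range(100):
    # Exact minimization of the augmented Lagrangian
    # 0.5*||x-c||^2 + v'(Bx - d) + (rho/2)*||Bx - d||^2:
    x = np.linalg.solve(I + rho * B.T @ B, c - B.T @ v + rho * B.T @ d)
    # Multiplier (dual ascent) step:
    v = v + rho * (B @ x - d)

print(x, B @ x - d)  # x -> (1.5, -0.5), the KKT point; residual -> 0
```
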
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
Keywords
approximation algorithms, convergence, convex composite optimization, convex functions, design methodology, distributed algorithms, distributed optimization, linear programming, the Method of Multipliers, topology, approximation theory, distributed computer systems, functions, parallel algorithms, approximate methods, composite optimization
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-324956 (URN)
10.1109/TAC.2022.3173171 (DOI)
000965642400001 (ISI)
2-s2.0-85132539331 (Scopus ID)
Note

QC 20230508

Available from: 2023-03-28 Created: 2023-03-28 Last updated: 2023-08-03. Bibliographically approved.
Wu, X., Magnússon, S., Feyzmahdavian, H. R. & Johansson, M. (2022). Delay-adaptive Step-sizes for Asynchronous Learning. In: Proceedings of the 39th International Conference on Machine Learning, ICML 2022. Paper presented at the 39th International Conference on Machine Learning, ICML 2022, Baltimore, United States of America, July 17-23, 2022 (pp. 24093-24113). ML Research Press
Delay-adaptive Step-sizes for Asynchronous Learning
2022 (English) In: Proceedings of the 39th International Conference on Machine Learning, ICML 2022, ML Research Press, 2022, p. 24093-24113. Conference paper, Published paper (Refereed).
Abstract [en]

In scalable machine learning systems, model training is often parallelized over multiple nodes that run without tight synchronization. Most analysis results for the related asynchronous algorithms use an upper bound on the information delays in the system to determine learning rates. Not only are such bounds hard to obtain in advance, but they also result in unnecessarily slow convergence. In this paper, we show that it is possible to use learning rates that depend on the actual time-varying delays in the system. We develop general convergence results for delay-adaptive asynchronous iterations and specialize these to proximal incremental gradient descent and block-coordinate descent algorithms. For each of these methods, we demonstrate how delays can be measured on-line, present delay-adaptive step-size policies, and illustrate their theoretical and practical advantages over the state-of-the-art.

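A minimal serial simulation of the idea: measure the delay of each applied gradient and shrink the step-size accordingly, instead of sizing steps by a worst-case delay bound. The 1/(tau+1) scaling below is an illustrative policy in the spirit of the paper, not its exact step-size rule, and the toy problem is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Gradient iterations where each applied gradient is tau steps old and
# the step-size is scaled by the *measured* delay, not a delay bound.
def grad(x):                   # f(x) = 0.5*||x||^2
    return x

x = np.array([5.0, -3.0])
history = [x.copy()]
gamma0 = 0.9

for _ in range(2000):
    tau = int(rng.integers(0, 20))          # delay, measurable on-line
    tau = min(tau, len(history) - 1)
    g = grad(history[-1 - tau])             # stale gradient
    x = x - (gamma0 / (tau + 1)) * g        # delay-adaptive step-size
    history.append(x.copy())

print(np.linalg.norm(x))  # -> 0, with no delay bound used anywhere
```
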
Place, publisher, year, edition, pages
ML Research Press, 2022
Series
Proceedings of Machine Learning Research, ISSN 2640-3498
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-333429 (URN)
000900130205013 (ISI)
2-s2.0-85146974810 (Scopus ID)
Conference
39th International Conference on Machine Learning, ICML 2022, Baltimore, United States of America, July 17-23, 2022
Note

QC 20230801

Available from: 2023-08-01 Created: 2023-08-01 Last updated: 2023-08-14. Bibliographically approved.
Wu, X., Wang, H. & Lu, J. (2022). Distributed Optimization with Coupling Constraints. IEEE Transactions on Automatic Control, 1-1
Distributed Optimization with Coupling Constraints
2022 (English) In: IEEE Transactions on Automatic Control, ISSN 0018-9286, E-ISSN 1558-2523, p. 1-1. Article in journal (Refereed). Published.
Abstract [en]

In this paper, we investigate distributed convex optimization with both inequality and equality constraints, where the objective function can be a general nonsmooth convex function and all the constraints can be both sparsely and densely coupling. By strategically integrating ideas from primal-dual, proximal, and virtual-queue optimization methods, we develop a novel distributed algorithm, referred to as IPLUX, to address the problem over a connected, undirected graph. We show that IPLUX achieves an O(1/k) rate of convergence in terms of both optimality and feasibility, which is stronger than the convergence results of alternative methods and eliminates the standard assumption on the compactness of the feasible region. Finally, IPLUX exhibits faster convergence and higher efficiency than several state-of-the-art methods in simulations.

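One ingredient named in the abstract, the virtual queue, can be shown in isolation: a queue accumulates coupling-constraint violation and enters the primal step like a multiplier. The centralized toy below illustrates only this mechanism, not IPLUX's distributed primal-dual/proximal updates; the data and step-size are assumptions.

```python
import numpy as np

# Virtual-queue handling of a coupling constraint sum(x) <= b for
# min sum_i (x_i - c_i)^2. The queue Q accumulates violation and acts
# like a multiplier in the primal step.
c = np.array([2.0, 1.0, 0.5])
b = 2.0

x = np.zeros(3)
Q = 0.0
alpha = 0.2

for _ in range(4000):
    viol = x.sum() - b                   # coupling-constraint violation
    Q = max(0.0, Q + alpha * viol)       # virtual-queue update
    x = x - alpha * (2.0 * (x - c) + Q)  # primal gradient step

print(x, x.sum(), Q)  # x -> (1.5, 0.5, 0), sum -> b, Q -> multiplier 1.0
```
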
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
Keywords
constrained optimization, convergence, convex functions, couplings, distributed algorithms, distributed optimization, linear programming, optimization, primal-dual method, proximal algorithm, transforms, convex optimization, mathematical transformations, undirected graphs, virtual addresses, coupling constraints, inequality constraints
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-323805 (URN)
10.1109/TAC.2022.3169955 (DOI)
000966034800001 (ISI)
2-s2.0-85129641062 (Scopus ID)
Note

QC 20230508

Available from: 2023-02-13 Created: 2023-02-13 Last updated: 2023-05-08. Bibliographically approved.
Wu, X., Magnússon, S., Feyzmahdavian, H. R. & Johansson, M. (2022). Optimal convergence rates of totally asynchronous optimization. In: 2022 IEEE 61st Conference on Decision and Control (CDC). Paper presented at the IEEE 61st Conference on Decision and Control (CDC), December 6-9, 2022, Cancún, Mexico (pp. 6484-6490). Institute of Electrical and Electronics Engineers (IEEE)
Optimal convergence rates of totally asynchronous optimization
2022 (English) In: 2022 IEEE 61st Conference on Decision and Control (CDC), Institute of Electrical and Electronics Engineers (IEEE), 2022, p. 6484-6490. Conference paper, Published paper (Refereed).
Abstract [en]

Asynchronous optimization algorithms are at the core of modern machine learning and resource allocation systems. However, most convergence results consider bounded information delays and several important algorithms lack guarantees when they operate under total asynchrony. In this paper, we derive explicit convergence rates for the proximal incremental aggregated gradient (PIAG) and the asynchronous block-coordinate descent (Async-BCD) methods under a specific model of total asynchrony, and show that the derived rates are order-optimal. The convergence bounds provide an insightful understanding of how the growth rate of the delays deteriorates the convergence times of the algorithms. Our theoretical findings are demonstrated by a numerical example.

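PIAG, one of the two methods analyzed, is easy to sketch: keep a table of component gradients evaluated at stale points and take proximal steps on their aggregate. The toy below uses cyclic (bounded-delay) staleness and illustrative parameters; the paper's total-asynchrony model with growing delays is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

# PIAG on min sum_i f_i(x) + ||x||_1 with quadratic f_i: keep a table
# of component gradients at stale points, step on their aggregate, and
# apply the prox of the nonsmooth term.
m, n = 5, 3
A = [(1.0 + i) * np.eye(n) for i in range(m)]   # f_i = 0.5*x'A_i x - b_i'x
b = rng.standard_normal((m, n))

def prox_l1(z, t):                      # prox of t*||.||_1 (soft-threshold)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

x = np.zeros(n)
last_g = [A[i] @ x - b[i] for i in range(m)]    # stale gradient table
gamma = 0.01                                    # small step for stale sums

for k in range(3000):
    i = k % m                                   # refresh one component
    last_g[i] = A[i] @ x - b[i]
    g = sum(last_g)                             # aggregated (stale) gradient
    x = prox_l1(x - gamma * g, gamma)           # proximal step

print(x)  # sparse solution of the l1-regularized quadratic
```
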
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
Series
IEEE Conference on Decision and Control, ISSN 0743-1546
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-326407 (URN)
10.1109/CDC51059.2022.9993168 (DOI)
000948128105072 (ISI)
2-s2.0-85147025723 (Scopus ID)
Conference
IEEE 61st Conference on Decision and Control (CDC), December 6-9, 2022, Cancún, Mexico
Note

QC 20230502

Available from: 2023-05-02 Created: 2023-05-02 Last updated: 2023-05-02. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0001-7409-9611