Publications (10 of 39)
Wang, B., Xu, L., Yi, X., Jia, Y. & Yang, T. (2024). Semiglobal Suboptimal Output Regulation for Heterogeneous Multi-Agent Systems With Input Saturation via Adaptive Dynamic Programming. IEEE Transactions on Neural Networks and Learning Systems, 35(3), 3242-3250
2024 (English). In: IEEE Transactions on Neural Networks and Learning Systems, ISSN 2162-237X, E-ISSN 2162-2388, Vol. 35, no. 3, p. 3242-3250. Article in journal (Refereed). Published.
Abstract [en]

This article considers the semiglobal cooperative suboptimal output regulation problem for heterogeneous multi-agent systems with unknown agent dynamics in the presence of input saturation. To solve the problem, we develop distributed suboptimal control strategies from two perspectives: model-based and data-driven. For the model-based case, we design a suboptimal control strategy by using the low-gain technique and output regulation theory. When the agents' dynamics are unknown, we instead design a data-driven algorithm to solve the problem. We show that the proposed control strategies ensure that each agent's output asymptotically tracks the reference signal and rejects disturbances while guaranteeing closed-loop stability. The theoretical results are illustrated by a numerical simulation example.
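As background for the low-gain technique mentioned above, the classical semiglobal construction (standard in the literature; the paper's exact design may differ) solves an ε-parameterized algebraic Riccati equation and applies the resulting feedback under saturation:

```latex
A^{\top}P + PA - PBB^{\top}P + \varepsilon I = 0, \qquad \varepsilon > 0,
\qquad u = \operatorname{sat}\!\bigl(-B^{\top}P(\varepsilon)\,x\bigr).
```

For any prescribed bounded set of initial states, choosing ε small enough keeps the feedback within the saturation limits, which is what makes the result semiglobal.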

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
Adaptive dynamic programming (ADP), Adaptive control systems, Control strategies, Decentralized control, Directed graphs, Dynamic programming, Eigenvalues and eigenfunctions, input saturation, Multi-agent systems, output regulation, Regulation, Regulators, System dynamics
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-326670 (URN); 10.1109/TNNLS.2022.3191673 (DOI); 000833063900001 (ISI); 35895655 (PubMedID); 2-s2.0-85135755750 (Scopus ID)
Note

QC 20250519

Available from: 2023-05-10. Created: 2023-05-10. Last updated: 2025-05-19. Bibliographically approved.
He, X., Yi, X., Zhao, Y., Johansson, K. H. & Gupta, V. (2023). Asymptotic Analysis of Federated Learning Under Event-Triggered Communication. IEEE Transactions on Signal Processing, 71, 2654-2667
2023 (English). In: IEEE Transactions on Signal Processing, ISSN 1053-587X, E-ISSN 1941-0476, Vol. 71, p. 2654-2667. Article in journal (Refereed). Published.
Abstract [en]

Federated learning (FL) is a collaborative machine learning (ML) paradigm based on persistent communication between a central server and multiple edge devices. However, frequent communication of large ML models can consume considerable communication resources, which is especially costly for wireless network nodes. In this paper, we develop a communication-efficient protocol to reduce the number of communication instances in each round while maintaining the convergence rate and asymptotic distribution properties. First, we propose a novel communication-efficient FL algorithm that utilizes an event-triggered communication mechanism, where each edge device updates the model by using stochastic gradient descent with locally sampled data and the central server aggregates all local models from the devices by model averaging. Communication is reduced since an edge device or the central server transmits its updated model only when the difference between the current model and the last communicated model is larger than a threshold. The thresholds of the devices and the server are not necessarily coordinated, and the thresholds and step sizes are not constrained to specific forms. Under mild conditions on the loss functions, step sizes, and thresholds, we establish asymptotic results for the proposed algorithm in three respects: convergence in expectation, almost-sure convergence, and the asymptotic distribution of the estimation error. In addition, we show that by fine-tuning the parameters, the proposed event-triggered FL algorithm can reach the same convergence rate as state-of-the-art results for existing time-driven FL. We also establish asymptotic efficiency of the estimation error in the sense of the central limit theorem. Numerical simulations for linear regression and image classification problems from the literature are provided to show the effectiveness of the developed results.
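As a rough illustration of the triggering rule described in this abstract (a minimal sketch; the variable names, the Euclidean norm, and the fixed threshold are illustrative assumptions, not the paper's exact protocol):

```python
import numpy as np

def maybe_transmit(current_model, last_sent_model, threshold):
    """Event-triggered rule: transmit only when the model has drifted far
    enough from the last communicated copy."""
    if np.linalg.norm(current_model - last_sent_model) > threshold:
        return current_model.copy(), True   # send; peers update their copy
    return last_sent_model, False           # stay silent; peers keep the old copy

def local_sgd_step(model, grad_fn, batch, step_size):
    """One local SGD update on a device; grad_fn is an assumed callable
    returning a stochastic gradient of the local loss."""
    return model - step_size * grad_fn(model, batch)
```

The paper additionally allows the device and server thresholds to vary over rounds without coordination; a constant threshold is used here only for brevity.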

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Federated learning, asymptotic convergence, event-triggered, stochastic gradient descent, distributed optimization
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-335203 (URN); 10.1109/TSP.2023.3295734 (DOI); 001049952000001 (ISI); 2-s2.0-85165250225 (Scopus ID)
Note

QC 20230914

Available from: 2023-09-14. Created: 2023-09-14. Last updated: 2023-09-14. Bibliographically approved.
Yi, X., Zhang, S., Yang, T., Chai, T. & Johansson, K. H. (2023). Communication Compression for Distributed Nonconvex Optimization. IEEE Transactions on Automatic Control, 68(9), 5477-5492
2023 (English). In: IEEE Transactions on Automatic Control, ISSN 0018-9286, E-ISSN 1558-2523, Vol. 68, no. 9, p. 5477-5492. Article in journal (Refereed). Published.
Abstract [en]

This paper considers distributed nonconvex optimization with the cost functions distributed over agents. Noting that information compression is a key tool to reduce the heavy communication load of distributed algorithms, as agents iteratively communicate with neighbors, we propose three distributed primal–dual algorithms with compressed communication. The first two algorithms are applicable to a general class of compressors with bounded relative compression error, and the third algorithm is suitable for two general classes of compressors with bounded absolute compression error. We show that the proposed distributed algorithms with compressed communication have convergence properties comparable to those of state-of-the-art algorithms with exact communication. Specifically, we show that they can find first-order stationary points at a sublinear rate in the total number of iterations T when each local cost function is smooth, and find global optima at a linear convergence rate under the additional condition that the global cost function satisfies the Polyak–Łojasiewicz condition. Numerical simulations are provided to illustrate the effectiveness of the theoretical results.
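A standard member of the bounded-relative-error compressor class targeted by the first two algorithms is top-k sparsification (shown here as a generic illustration; the paper covers general compressor classes, not necessarily this particular operator):

```python
import numpy as np

def top_k(x, k):
    """Keep the k largest-magnitude entries of x and zero the rest.
    For x in R^d this satisfies ||top_k(x) - x||^2 <= (1 - k/d) * ||x||^2,
    i.e., a bounded relative compression error."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]   # indices of the k largest magnitudes
    out[idx] = x[idx]
    return out

# Example: transmit a 3-sparse version of a 10-dimensional local update
x = np.random.randn(10)
compressed = top_k(x, k=3)
```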

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Communication compression, distributed optimization, linear convergence, nonconvex optimization, Polyak–Łojasiewicz condition
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-335765 (URN); 10.1109/TAC.2022.3225515 (DOI); 001059698200021 (ISI); 2-s2.0-85144046697 (Scopus ID)
Note

QC 20230911

Available from: 2023-09-11. Created: 2023-09-11. Last updated: 2023-10-25. Bibliographically approved.
Zhang, K., Xu, L., Yi, X., Ding, Z., Johansson, K. H., Chai, T. & Yang, T. (2023). Predefined-time distributed multiobjective optimization for network resource allocation. Science China Information Sciences, 66(7), Article ID 170204.
2023 (English). In: Science China Information Sciences, ISSN 1674-733X, E-ISSN 1869-1919, Vol. 66, no. 7, article id 170204. Article in journal (Refereed). Published.
Abstract [en]

We consider the multiobjective optimization problem for resource allocation in a multiagent network, where each agent has multiple conflicting local objective functions. The goal is to find compromise solutions that minimize all local objective functions subject to resource constraints as much as possible, i.e., Pareto optima. To this end, we first reformulate the multiobjective optimization problem into a single-objective distributed optimization problem by using the weighted L-p preference index, where the weighting factors of all local objective functions are obtained from the optimization procedure so that the optimizer of the latter is the desired Pareto optimum of the former. Next, we propose novel predefined-time algorithms to solve the reformulated problem by using time-based generators. We show that the reformulated problem is solved within a predefined time if the local objective functions are strongly convex and smooth. Moreover, the settling time can be arbitrarily preset since it does not depend on the initial values or the design parameters. Finally, numerical simulations are presented to illustrate the effectiveness of the proposed algorithms.
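For orientation, a weighted L-p preference index of the kind used above typically scalarizes the m conflicting objectives as follows (a generic compromise-programming form under the stated assumptions; the paper's exact index and weight selection may differ):

```latex
\min_{x \in \mathcal{X}} \;\Bigl(\sum_{i=1}^{m} w_i\,\bigl|f_i(x) - f_i^{\star}\bigr|^{p}\Bigr)^{1/p},
\qquad w_i \ge 0, \quad \sum_{i=1}^{m} w_i = 1,
```

where f_i^⋆ denotes the individual minimum of the i-th objective and X encodes the resource constraints; minimizers of such scalarizations are Pareto optimal for suitable weights.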

Place, publisher, year, edition, pages
Springer Nature, 2023
Keywords
distributed optimization, multiobjective optimization, predefined-time algorithms, time-based generators, weighted L-p preference index
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-332221 (URN); 10.1007/s11432-022-3791-8 (DOI); 001020878900001 (ISI); 2-s2.0-85163729886 (Scopus ID)
Note

QC 20230722

Available from: 2023-07-22. Created: 2023-07-22. Last updated: 2023-07-22. Bibliographically approved.
Liu, C., Wu, X., Yi, X., Shi, Y. & Johansson, K. H. (2023). Rate analysis of dual averaging for nonconvex distributed optimization. In: IFAC-PapersOnLine. Paper presented at the 22nd IFAC World Congress, Yokohama, Japan, Jul 9-14, 2023 (pp. 5209-5214). Elsevier BV, 56
2023 (English). In: IFAC-PapersOnLine, Elsevier BV, 2023, Vol. 56, p. 5209-5214. Conference paper, Published paper (Refereed).
Abstract [en]

This work studies nonconvex distributed constrained optimization over stochastic communication networks. We revisit the distributed dual averaging algorithm, which is known to converge for convex problems. We start from the centralized case, for which the change between two consecutive updates is taken as the suboptimality measure. We validate the use of such a measure by showing that it is closely related to stationarity. This equips us with a handle to study the convergence of dual averaging in nonconvex optimization. We prove that the squared norm of this suboptimality measure converges at rate O(1/t). Then, for the distributed setup, we show convergence to a stationary point at rate O(1/t). Finally, a numerical example is given to illustrate our theoretical results.
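A minimal centralized sketch of dual averaging together with the consecutive-update suboptimality measure discussed above (Euclidean prox over a box constraint; the problem data and step schedule are illustrative assumptions):

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^d."""
    return np.clip(x, lo, hi)

def dual_averaging(grad_fn, x0, T):
    """Centralized dual averaging with prox 0.5*||x||^2 over a box,
    recording the measure ||x_{t+1} - x_t||^2 at each iteration."""
    x, z, gaps = x0.copy(), np.zeros_like(x0), []
    for t in range(T):
        z = z + grad_fn(x)                         # accumulate (sub)gradients
        x_next = project_box(-z / np.sqrt(t + 1))  # prox step with growing scaling
        gaps.append(float(np.sum((x_next - x) ** 2)))
        x = x_next
    return x, gaps

# Illustrative smooth problem: f(x) = 0.5 * ||x - c||^2 with minimizer c
c = np.array([0.3, -0.7])
x_final, gaps = dual_averaging(lambda x: x - c, np.zeros(2), T=200)
```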

Place, publisher, year, edition, pages
Elsevier BV, 2023
Series
IFAC-PapersOnLine, ISSN 2405-8963 ; 56
Keywords
distributed constrained optimization, Dual averaging, multi-agent consensus, nonconvex optimization, stochastic networks
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-343686 (URN); 10.1016/j.ifacol.2023.10.117 (DOI); 2-s2.0-85184963049 (Scopus ID)
Conference
22nd IFAC World Congress, Yokohama, Japan, Jul 9 2023 - Jul 14 2023
Note

QC 20240223

Part of ISBN 9781713872344

Available from: 2024-02-22. Created: 2024-02-22. Last updated: 2024-02-23. Bibliographically approved.
Yi, X., Li, X., Yang, T., Xie, L., Chai, T. & Johansson, K. H. (2023). Regret and Cumulative Constraint Violation Analysis for Distributed Online Constrained Convex Optimization. IEEE Transactions on Automatic Control, 68(5), 2875-2890
2023 (English). In: IEEE Transactions on Automatic Control, ISSN 0018-9286, E-ISSN 1558-2523, Vol. 68, no. 5, p. 2875-2890. Article in journal (Refereed). Published.
Abstract [en]

This article considers the distributed online convex optimization problem with time-varying constraints over a network of agents. This is a sequential decision-making problem with two sequences of arbitrarily varying convex loss and constraint functions. At each round, each agent selects a decision from the decision set, and then only a portion of the loss function and a coordinate block of the constraint function at this round are privately revealed to this agent. The goal of the network is to minimize the network-wide loss accumulated over time. Two distributed online algorithms with full-information and bandit feedback are proposed. Both dynamic and static network regret bounds are analyzed for the proposed algorithms, and network cumulative constraint violation is used to measure constraint violation, which excludes the situation that strictly feasible constraints can compensate for the effects of violated constraints. In particular, we show that the proposed algorithms achieve O(T^{max{κ, 1-κ}}) static network regret and O(T^{1-κ/2}) network cumulative constraint violation, where T is the time horizon and κ ∈ (0,1) is a user-defined tradeoff parameter. Moreover, if the loss functions are strongly convex, then the static network regret bound can be reduced to O(T^κ). Finally, numerical simulations are provided to illustrate the effectiveness of the theoretical results.
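To make the violation metric concrete: cumulative constraint violation sums the clipped per-round violations, so strictly feasible rounds cannot offset infeasible ones, unlike a plain sum of the constraint values (a small illustrative comparison, not code from the paper):

```python
import numpy as np

g_values = np.array([0.5, -2.0, 0.3, -1.0])   # per-round constraint values g_t(x_t)

plain_sum = g_values.sum()                            # -2.2: violations masked by slack
cumulative_violation = np.maximum(g_values, 0).sum()  #  0.8: violations fully counted
print(plain_sum, cumulative_violation)
```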

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Cumulative constraint violation, distributed optimization, online optimization, regret, time-varying constraints
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-330070 (URN); 10.1109/TAC.2022.3230766 (DOI); 000979661300018 (ISI); 2-s2.0-85146243148 (Scopus ID)
Note

QC 20230626

Available from: 2023-06-26. Created: 2023-06-26. Last updated: 2023-07-06. Bibliographically approved.
Yi, X., Zhang, S., Yang, T., Chai, T. & Johansson, K. H. (2023). Sublinear and Linear Convergence of Modified ADMM for Distributed Nonconvex Optimization. IEEE Transactions on Control of Network Systems, 10(1), 75-86
2023 (English). In: IEEE Transactions on Control of Network Systems, E-ISSN 2325-5870, Vol. 10, no. 1, p. 75-86. Article in journal (Refereed). Published.
Abstract [en]

In this article, we consider distributed nonconvex optimization over an undirected connected network. Each agent can only access its own local nonconvex cost function, and all agents collaborate to minimize the sum of these functions by using local information exchange. We first propose a modified alternating direction method of multipliers (ADMM) algorithm. We show that the proposed algorithm converges to a stationary point at the sublinear rate O(1/T), where T is the number of iterations, if each local cost function is smooth and the algorithm parameters are chosen appropriately. We also show that the proposed algorithm linearly converges to a global optimum under the additional condition that the global cost function satisfies the Polyak-Łojasiewicz condition, which is weaker than the conditions commonly used to show linear convergence rates, including strong convexity. We then propose a distributed linearized ADMM (L-ADMM) algorithm, derived from the modified ADMM algorithm by linearizing the local cost function at each iteration. We show that the L-ADMM algorithm has the same convergence properties as the modified ADMM algorithm under the same conditions. Numerical simulations are included to verify the correctness and efficiency of the proposed algorithms.
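The linearization step that turns the modified ADMM into L-ADMM typically replaces each local cost in the primal update by a first-order surrogate with a proximal term (a generic form; the paper's precise update and parameter η may differ):

```latex
f_i(x) \;\approx\; f_i(x_i^{k}) + \nabla f_i(x_i^{k})^{\top}\bigl(x - x_i^{k}\bigr)
+ \frac{1}{2\eta}\,\bigl\|x - x_i^{k}\bigr\|^{2},
```

which gives the x-update a closed form and avoids solving a nonconvex subproblem at every iteration.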

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Alternating direction method of multipliers (ADMM), distributed optimization, linear convergence, linearized ADMM, Polyak-Łojasiewicz condition, Communications networks, Convergence, Convex functions, Cost functions, Costs, Iterative methods, Nonconvex optimization, Sublinear, Symmetric matrices
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-325690 (URN); 10.1109/TCNS.2022.3186653 (DOI); 000967284600001 (ISI); 2-s2.0-85133731914 (Scopus ID)
Note

QC 20230412

Available from: 2023-04-12. Created: 2023-04-12. Last updated: 2023-05-15. Bibliographically approved.
Yi, X., Zhang, S., Yang, T., Chai, T. & Johansson, K. H. (2022). A Primal-Dual SGD Algorithm for Distributed Nonconvex Optimization. IEEE/CAA Journal of Automatica Sinica, 9(5), 812-833
2022 (English). In: IEEE/CAA Journal of Automatica Sinica, ISSN 2329-9266, Vol. 9, no. 5, p. 812-833. Article in journal (Refereed). Published.
Abstract [en]

The distributed nonconvex optimization problem of minimizing a global cost function, formed by a sum of n local cost functions, by using local information exchange is considered. This problem is an important component of many machine learning techniques with data parallelism, such as deep learning and federated learning. We propose a distributed primal-dual stochastic gradient descent (SGD) algorithm, suitable for arbitrarily connected communication networks and any smooth (possibly nonconvex) cost functions. We show that the proposed algorithm achieves the linear speedup convergence rate O(1/√(nT)) for general nonconvex cost functions and the linear speedup convergence rate O(1/(nT)) when the global cost function satisfies the Polyak-Łojasiewicz (P-L) condition, where T is the total number of iterations. We also show that the output of the proposed algorithm with constant parameters linearly converges to a neighborhood of a global optimum. We demonstrate through numerical experiments the efficiency of our algorithm in comparison with the baseline centralized SGD and recently proposed distributed SGD algorithms.
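The Polyak-Łojasiewicz (P-L) condition invoked here (and in several of the entries above) is the standard inequality: for some μ > 0,

```latex
\tfrac{1}{2}\,\bigl\|\nabla f(x)\bigr\|^{2} \;\ge\; \mu\,\bigl(f(x) - f^{\star}\bigr)
\quad \text{for all } x,
```

where f⋆ is the global minimum value. It holds for some nonconvex functions and is strictly weaker than strong convexity, yet still yields linear (geometric) convergence for gradient-type methods.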

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
Keywords
Distributed nonconvex optimization, linear speedup, Polyak-Łojasiewicz (P-L) condition, primal-dual algorithm, stochastic gradient descent
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-313517 (URN); 10.1109/JAS.2022.105554 (DOI); 000794202700009 (ISI); 2-s2.0-85129516654 (Scopus ID)
Note

QC 20220607

Available from: 2022-06-07. Created: 2022-06-07. Last updated: 2022-06-25. Bibliographically approved.
Li, X., Yi, X. & Xie, L. (2022). Distributed Online Convex Optimization With an Aggregative Variable. IEEE Transactions on Control of Network Systems, 9(1), 438-449
2022 (English). In: IEEE Transactions on Control of Network Systems, E-ISSN 2325-5870, Vol. 9, no. 1, p. 438-449. Article in journal (Refereed). Published.
Abstract [en]

This article investigates distributed online convex optimization in the presence of an aggregative variable, without any global/central coordinator, over a multiagent network. In this problem, each individual agent is only able to access partial information of the time-varying global loss functions, thus requiring local information exchanges between neighboring agents. Motivated by many practical applications, the considered local loss functions depend not only on the agents' own decision variables but also on an aggregative variable, such as the average of all decision variables. To handle this problem, an online distributed gradient tracking algorithm (O-DGT) is proposed with exact gradient information, and it is shown that the dynamic regret is upper bounded by three terms: 1) a sublinear term; 2) a path variation term; and 3) a gradient variation term. Meanwhile, the O-DGT algorithm is also analyzed with stochastic/noisy gradients, showing that the expected dynamic regret has the same upper bound as in the exact-gradient case. To the best of our knowledge, this article is the first to study online convex optimization in the presence of an aggregative variable, which enjoys new characteristics in comparison with the conventional scenario without the aggregative variable. Finally, a numerical experiment is provided to corroborate the obtained theoretical results.
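A rough sketch of the aggregate-tracking idea underlying a gradient tracking algorithm such as O-DGT: each agent maintains a local estimate of the network-wide aggregate via dynamic average consensus (the mixing weights, step size, and toy decision update below are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def tracking_step(W, s, phi_new, phi_old):
    """Dynamic average consensus: the tracker s follows (1/n) * sum_i phi_i(x_i).
    W is a doubly stochastic mixing matrix; phi_new/phi_old stack the current
    and previous local contributions phi_i(x_i)."""
    return W @ s + phi_new - phi_old

n = 4
W = np.full((n, n), 1.0 / n)     # complete-graph averaging weights
x = np.random.randn(n)           # scalar decision per agent, with phi_i(x) = x
s = x.copy()                     # initialize tracker with own contribution
for _ in range(50):
    x_new = x - 0.1 * (x - s)    # toy decision update pulled toward the aggregate
    s = tracking_step(W, s, x_new, x)
    x = x_new
```

The update preserves the network average of s, so each agent's tracker converges to the true aggregate while the decisions evolve.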

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
Keywords
Aggregative variable, distributed algorithms, dynamic regret, multiagent networks, online convex optimization
National Category
Other Materials Engineering
Identifiers
urn:nbn:se:kth:diva-313771 (URN); 10.1109/TCNS.2021.3107480 (DOI); 000802014900039 (ISI); 2-s2.0-85131365490 (Scopus ID)
Note

QC 20230228

Available from: 2022-06-10. Created: 2023-02-27. Last updated: 2023-02-28. Bibliographically approved.
Yang, T., Xu, L., Yi, X., Zhang, S.-J., Chen, R.-J. & Li, Y.-Z. (2022). Event-triggered Distributed Optimization Algorithms. Zidonghua Xuebao/Acta Automatica Sinica, 48(1), 133-143
2022 (English). In: Zidonghua Xuebao/Acta Automatica Sinica, ISSN 0254-4156, Vol. 48, no. 1, p. 133-143. Article in journal (Refereed). Published.
Abstract [en]

This paper studies a class of distributed optimization problems whose objective is to minimize a global cost function, formed by a sum of local cost functions, through local information exchange. For undirected connected graphs, we propose two distributed optimization algorithms based on a proportional-integral feedback mechanism. Under the condition that the local cost functions are differentiable and convex, it is proved that the proposed algorithms asymptotically converge to a global minimum. For the case where the local cost functions have locally Lipschitz gradients and the global cost function is strongly convex with respect to the global minimum, exponential convergence of the two distributed optimization algorithms is established. In addition, in order to avoid continuous communication between agents and reduce the communication burden, we integrate the two proposed distributed optimization algorithms with event-triggered communication, yielding two event-triggered distributed optimization algorithms. It is shown that the two event-triggered algorithms are free of Zeno behavior. Moreover, they maintain the same convergence properties as the distributed optimization algorithms with continuous communication under the corresponding conditions. Finally, the above theoretical results are verified by numerical simulations.
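For reference, a common proportional-integral distributed optimization dynamics of the kind this abstract describes is (a standard form from the continuous-time literature; the paper's two algorithms may differ in details such as gains and initialization):

```latex
\dot{x}_i = -\nabla f_i(x_i) - \alpha \sum_{j=1}^{n} a_{ij}\,(x_i - x_j) - v_i,
\qquad
\dot{v}_i = \beta \sum_{j=1}^{n} a_{ij}\,(x_i - x_j),
```

where a_ij are the weights of the undirected graph, the proportional term drives the agents toward consensus, and the integral state v_i accumulates disagreement so that the consensus value minimizes the sum of the f_i.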

Place, publisher, year, edition, pages
Science Press, 2022
Keywords
Distributed optimization, Event-triggered communications, Proportional-integral algorithm, Zeno behavior, Cost functions, Global minimum, Global optimization, Optimization problems, Two term control systems, Undirected graphs
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-311789 (URN); 10.16383/j.aas.c200838 (DOI); 2-s2.0-85115891895 (Scopus ID)
Note

QC 20220504

Available from: 2022-05-04. Created: 2022-05-04. Last updated: 2022-06-25. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-4299-0471
