Publications (10 of 11)
Zhang, H., Li, Y. & Hu, X. (2021). Discrete-Time Inverse Linear Quadratic Optimal Control over Finite Time-Horizon under Noisy Output Measurements. Control Theory and Technology, 19(4), 563-572
2021 (English). In: Control Theory and Technology, ISSN 2095-6983, Vol. 19, no. 4, p. 563-572. Article in journal (Refereed). Published.
Abstract [en]

In this paper, the problem of inverse quadratic optimal control over a finite time-horizon for discrete-time linear systems is considered. Our goal is to recover the corresponding quadratic objective function using noisy observations. First, the identifiability of the model structure for the inverse optimal control problem is analyzed under a relative degree assumption, and we show that the model structure is strictly globally identifiable. Next, we study the inverse optimal control problem in which the initial state distribution and the observation noise distribution are unknown, yet exact observations of the initial states are available. We formulate the problem as a risk minimization problem and approximate it using an empirical average. It is further shown that the solution to the approximated problem is statistically consistent under the relative degree assumption. We then study the case where exact observations of the initial states are not available, yet the observation noise is known to be white Gaussian and the distribution of the initial state is also Gaussian (with unknown mean and covariance). The EM algorithm is used to estimate the parameters in the objective function. The effectiveness of our results is demonstrated by numerical examples.
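
A minimal sketch of the forward model that the paper inverts may help fix notation: a finite-horizon discrete-time LQR solved by the backward Riccati recursion, with the optimal trajectory rolled out from the time-varying gains. The system matrices, horizon, and weights below are made-up illustrations, not data from the paper.

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, QN, N):
    """Backward Riccati recursion for min sum_t (x_t'Q x_t + u_t'R u_t) + x_N'QN x_N."""
    P, gains = QN, []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # u_t = -K_t x_t
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # K_0, ..., K_{N-1}

def rollout(A, B, gains, x0):
    xs, us = [x0], []
    for K in gains:
        us.append(-K @ xs[-1])
        xs.append(A @ xs[-1] + B @ us[-1])
    return np.array(xs), np.array(us)

# Made-up second-order example
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.diag([2.0, 1.0]); R = np.eye(1); QN = Q
xs, us = rollout(A, B, finite_horizon_lqr(A, B, Q, R, QN, N=30), np.array([1.0, 0.0]))
```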

Place, publisher, year, edition, pages
Springer Nature, 2021
Keywords
Inverse optimal control, Linear quadratic regulator, Statistical consistency, EM-algorithm
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-310129 (URN), 10.1007/s11768-021-00066-8 (DOI), 000718777100001 (ISI), 2-s2.0-85119060446 (Scopus ID)
Note

QC 20220323

Available from: 2022-03-22. Created: 2022-03-22. Last updated: 2022-06-25. Bibliographically approved.
Zhang, H., Umenberger, J. & Hu, X. (2019). Inverse optimal control for discrete-time finite-horizon Linear Quadratic Regulators. Automatica, 110, Article ID 108593.
2019 (English). In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 110, article id 108593. Article in journal (Refereed). Published.
Abstract [en]

In this paper, we consider the inverse optimal control problem for discrete-time Linear Quadratic Regulators (LQR), over finite-time horizons. Given observations of the optimal trajectories, or optimal control inputs, to a linear time-invariant system, the goal is to infer the parameters that define the quadratic cost function. The well-posedness of the inverse optimal control problem is first justified. In the noiseless case, when these observations are exact, we analyze the identifiability of the problem and provide sufficient conditions for uniqueness of the solution. In the noisy case, when the observations are corrupted by additive zero-mean noise, we formulate the problem as an optimization problem and prove that the solution to this problem is statistically consistent. The performance of the proposed method is illustrated through numerical examples.
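
As a rough, hedged illustration of the estimation idea only (not the identifiability analysis or the consistent estimator developed in the paper), one can parameterize the unknown weight matrix, simulate the corresponding finite-horizon LQR trajectory, and minimize its mismatch with the observed trajectory. The system, the diagonal parameterization of Q, and fixing R to the identity to remove the scaling ambiguity are all assumptions of the sketch.

```python
import numpy as np
from scipy.optimize import minimize

def lqr_trajectory(A, B, Q, R, N, x_init):
    """Optimal closed-loop trajectory from the backward Riccati recursion (terminal weight Q)."""
    P, gains = Q, []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    xs = [x_init]
    for K in reversed(gains):      # apply K_0, ..., K_{N-1} forward in time
        xs.append((A - B @ K) @ xs[-1])
    return np.array(xs)

# Made-up system and "true" cost used to generate synthetic observations
A = np.array([[1.0, 0.1], [0.0, 1.0]]); B = np.array([[0.0], [0.1]])
R = np.eye(1); N = 30; x_init = np.array([1.0, -0.5])
x_obs = lqr_trajectory(A, B, np.diag([2.0, 0.5]), R, N, x_init)

def loss(q):
    """Squared trajectory mismatch for a candidate diagonal Q (a simplifying assumption)."""
    return np.sum((lqr_trajectory(A, B, np.diag(np.abs(q)), R, N, x_init) - x_obs) ** 2)

q_hat = minimize(loss, np.ones(2), method="Nelder-Mead").x
print("estimated diag(Q):", np.abs(q_hat))
```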

Place, publisher, year, edition, pages
Elsevier Ltd, 2019
Keywords
Inverse optimal control, Linear Quadratic Regulator, Cost functions, Invariance, Linear control systems, Linear systems, Numerical methods, Optimal control systems, Time varying control systems, Finite time horizon, Inverse optimal control problems, Inverse-optimal control, Linear time invariant systems, Optimal trajectories, Optimization problems, Quadratic cost functions, Inverse problems
National Category
Other Mathematics; Computer Sciences
Research subject
Applied and Computational Mathematics, Optimization and Systems Theory
Identifiers
urn:nbn:se:kth:diva-263468 (URN), 10.1016/j.automatica.2019.108593 (DOI), 000495491900014 (ISI), 2-s2.0-85072573124 (Scopus ID)
Note

QC 20191205

Available from: 2019-12-05. Created: 2019-12-05. Last updated: 2022-06-26. Bibliographically approved.
Zhang, H., Li, Y. & Hu, X. (2019). Inverse Optimal Control for Finite-Horizon Discrete-time Linear Quadratic Regulator under Noisy Output. In: Proceedings of the IEEE Conference on Decision and Control. Paper presented at the 58th IEEE Conference on Decision and Control, CDC 2019, 11-13 December 2019, Nice, France (pp. 6663-6668). Institute of Electrical and Electronics Engineers Inc.
2019 (English). In: Proceedings of the IEEE Conference on Decision and Control, Institute of Electrical and Electronics Engineers Inc., 2019, p. 6663-6668. Conference paper, Published paper (Refereed).
Abstract [en]

In this paper, the problem of inverse optimal control for finite-horizon discrete-time Linear Quadratic Regulators (LQRs) is considered. The goal of the inverse optimal control problem is to recover the corresponding objective function from noisy observations. We consider the problem in two scenarios: 1) the distributions of the initial state and the observation noise are unknown, yet exact observations of the initial states and noisy observations of the system output are available; 2) exact observations of the initial states are not available, yet the observation noise is known to be white Gaussian and the distribution of the initial state is also Gaussian (with unknown mean and covariance). For the first scenario, we formulate the problem as a risk minimization problem and show that its solution is statistically consistent. For the second scenario, we fit the problem into the maximum-likelihood framework, and the Expectation Maximization (EM) algorithm is used to solve it. The performance of the estimators is illustrated by numerical examples.
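
One concrete building block behind the second scenario: with no process noise, the closed-loop state is a deterministic function of the initial state, so the stacked noisy outputs are linear in x0 and the E-step reduces to standard Gaussian conditioning. The sketch below shows only that conditioning step, with made-up matrices and a frozen (time-invariant) closed-loop matrix; it is not the full EM iteration of the paper.

```python
import numpy as np

def posterior_x0(Phi_list, C, Rn, mu0, Sigma0, y_obs):
    """Posterior of x0 ~ N(mu0, Sigma0) given y_t = C Phi_t x0 + v_t, v_t ~ N(0, Rn)."""
    G = np.vstack([C @ Phi for Phi in Phi_list])        # stacked observation map
    Rbig = np.kron(np.eye(len(Phi_list)), Rn)           # block-diagonal noise covariance
    y = np.concatenate(y_obs)
    Sigma_post = np.linalg.inv(np.linalg.inv(Sigma0) + G.T @ np.linalg.solve(Rbig, G))
    mu_post = Sigma_post @ (np.linalg.solve(Sigma0, mu0) + G.T @ np.linalg.solve(Rbig, y))
    return mu_post, Sigma_post

# Made-up closed-loop matrix (time-invariant for simplicity; the paper's gains vary in time)
Acl = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
Phi_list = [np.linalg.matrix_power(Acl, t) for t in range(5)]
rng = np.random.default_rng(0)
x0_true = np.array([1.0, -1.0])
y_obs = [C @ Phi @ x0_true + 0.05 * rng.standard_normal(1) for Phi in Phi_list]
mu_post, Sigma_post = posterior_x0(Phi_list, C, np.array([[0.05 ** 2]]),
                                   np.zeros(2), np.eye(2), y_obs)
```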

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2019
Keywords
Maximum likelihood, Maximum principle, Optimal control systems, Risk perception, Expectation-maximization algorithms, Inverse optimal control problems, Inverse-optimal control, Linear quadratic regulator, Noisy observations, Objective functions, Observation noise, Risk minimization, Inverse problems
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-274081 (URN), 10.1109/CDC40024.2019.9029795 (DOI), 000560779006016 (ISI), 2-s2.0-85082497379 (Scopus ID)
Conference
58th IEEE Conference on Decision and Control, CDC 2019, 11-13 December 2019, Nice, France
Note

QC 20200702

Part of ISBN 9781728113982

Available from: 2020-07-02. Created: 2020-07-02. Last updated: 2024-10-25. Bibliographically approved.
Zhang, H. (2019). Optimizing Networked Systems and Inverse Optimal Control. (Doctoral dissertation). KTH Royal Institute of Technology
2019 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

This thesis is concerned with problems of optimizing networked systems, including designing a distributed energy-optimal consensus controller for homogeneous networked linear systems and maximizing the algebraic connectivity of a network via projected saddle-point dynamics. In addition, the inverse optimal control problem for discrete-time finite time-horizon Linear Quadratic Regulators (LQRs) is considered. The goal is to infer the Q matrix in the quadratic cost function using (possibly noisy) observations of the optimal state trajectories, the optimal control inputs, or the system output.

In Paper A, the design of an energy-optimal controller that drives identical networked linear systems to asymptotic consensus is considered. It is assumed that the topology of the network is given and that the controller can only depend on relative information between the agents. Since finding the control gain for such a controller is hard, we focus on finding an optimal controller within a classical family of controllers that is based on the Algebraic Riccati Equation (ARE) and guarantees asymptotic consensus. We find that the energy cost is bounded by an interval and hence minimize the upper bound. Furthermore, minimizing the upper bound boils down to optimizing the control gain and the edge weights of the graph separately. A suboptimal control gain is obtained by choosing Q=0 in the ARE. Negative edge weights are allowed, meaning that "competition" between the agents is permitted. The edge-weight optimization problem is formulated as a Semi-Definite Programming (SDP) problem. We show that the lowest control energy cost is reached when the graph is complete and has equal edge weights. Furthermore, two sufficient conditions for the existence of a negative optimal edge-weight realization are given. In addition, we provide a distributed way of solving the SDP problem when the graph topology is regular.

In Paper B, a projected primal-dual gradient flow of the augmented Lagrangian is presented to solve convex optimization problems that are not necessarily strictly convex. The optimization variables are restricted by equality constraints and by a convex set whose tangent cone admits a computable projection operation. We show that the projected dynamical system converges to one of the saddle points and hence finds an optimal solution. Moreover, the problem of distributedly maximizing the algebraic connectivity of an undirected network by optimizing the "port gains" of each node is considered. The original SDP problem is relaxed into a nonlinear programming (NP) problem that is solved by the aforementioned projected dynamical system. Numerical examples show the convergence of the algorithm to one of the optimal solutions. The effect of the relaxation is illustrated empirically with numerical examples. A methodology is presented that reduces the number of iterations needed to converge, and the per-iteration complexity of the algorithm is illustrated with numerical examples.
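
Paper B relaxes the algebraic-connectivity SDP into a nonlinear program solved by projected saddle-point dynamics. As a much simpler, centralized point of comparison (a sketch under made-up data, not the thesis method), the snippet below maximizes the algebraic connectivity of a small weighted graph by projected supergradient ascent over a total-weight budget, using the Fiedler-vector formula for the supergradient of lambda_2.

```python
import numpy as np

def edge_laplacian(n, i, j):
    """Laplacian contribution of a single unit-weight edge (i, j)."""
    L = np.zeros((n, n)); L[i, i] = L[j, j] = 1.0; L[i, j] = L[j, i] = -1.0
    return L

def project_simplex(v, z):
    """Euclidean projection of v onto {w >= 0, sum(w) = z}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - z)[0][-1]
    theta = (css[rho] - z) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

# Made-up 4-node graph with 5 candidate edges and a total-weight budget
n, edges = 4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
Le = np.array([edge_laplacian(n, i, j) for i, j in edges])
w = np.ones(len(edges))                       # start from uniform weights
budget, step = float(len(edges)), 0.1         # constant step for simplicity

for _ in range(200):
    L = np.tensordot(w, Le, axes=1)           # weighted Laplacian
    eigval, eigvec = np.linalg.eigh(L)
    v = eigvec[:, 1]                          # Fiedler vector (lambda_2 assumed simple)
    grad = np.array([(v[i] - v[j]) ** 2 for i, j in edges])   # supergradient of lambda_2
    w = project_simplex(w + step * grad, budget)

print("weights:", w)
print("algebraic connectivity:", np.linalg.eigh(np.tensordot(w, Le, axes=1))[0][1])
```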

In Papers C and D, the inverse optimal control problem over a finite time-horizon for discrete-time LQRs is considered. The well-posedness of the inverse optimal control problem is first justified. In the noiseless case, when the observations of the optimal state trajectories or the optimal control inputs are exact, we analyze the identifiability of the problem and provide sufficient conditions for uniqueness of the solution. In the noisy case, when the observations are corrupted by additive zero-mean noise, we formulate the problem as an optimization problem and prove that its solution is statistically consistent. The following two scenarios are further considered: 1) the distributions of the initial state and the observation noise are unknown, yet exact observations of the initial states and noisy observations of the system output are available; 2) exact observations of the initial states are not available, yet the observation noise is known to be white Gaussian and the distribution of the initial state is also Gaussian (with unknown mean and covariance). For the first scenario, we show statistical consistency of the estimate. For the second scenario, we fit the problem into the maximum-likelihood framework, and the Expectation Maximization (EM) algorithm is used to solve it. The performance of the proposed method is illustrated through numerical examples.

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2019. p. 23
Series
TRITA-SCI-FOU ; 2019:04
Keywords
Networked systems, energy optimal consensus control, semi-definite programming, distributed optimization, inverse optimal control
National Category
Computational Mathematics
Research subject
Mathematics
Identifiers
urn:nbn:se:kth:diva-241424 (URN), 978-91-7873-085-8 (ISBN)
Public defence
2019-02-18, Kollegiesalen, Brinellvägen 8, Stockholm, 10:00 (English)
Note

QC 20190121

Available from: 2019-01-21. Created: 2019-01-21. Last updated: 2022-06-26. Bibliographically approved.
Li, Y., Zhang, H., Yao, Y. & Hu, X. (2018). A Convex Optimization Approach to Inverse Optimal Control. In: Chen, X. & Zhao, Q. C. (Eds.), 2018 37th Chinese Control Conference (CCC). Paper presented at the 37th Chinese Control Conference, CCC 2018, Wuhan, China, 25-27 July 2018 (pp. 257-262). IEEE.
2018 (English). In: 2018 37th Chinese Control Conference, CCC / [ed] Chen, X. & Zhao, Q. C., IEEE, 2018, Vol. 2018, p. 257-262. Conference paper, Published paper (Refereed).
Abstract [en]

In this paper, the problem of inverse optimal control (IOC) is investigated, where the quadratic cost function of a dynamic process is to be recovered from observations of optimal control sequences. In order to guarantee the feasibility of the problem, the IOC is reformulated as an infinite-dimensional convex optimization problem, which is then solved in the primal-dual framework. The feasibility of the original IOC problem can be determined from the optimal value of the reformulated problem, which also provides an approximate solution when the original problem is infeasible. In addition, several simplification methods are proposed to facilitate the computation, by which the problem is reduced to a boundary value problem of ordinary differential equations. Finally, numerical simulations are used to demonstrate the effectiveness and feasibility of the proposed methods.
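
The reduction to a boundary value problem can be pictured on the classical continuous-time LQ optimality (Hamiltonian) system, where the state and costate satisfy an ODE with split boundary conditions. The sketch below is a generic textbook forward-LQ example with made-up matrices, solved with scipy.integrate.solve_bvp; it only illustrates the type of two-point BVP involved, not the paper's reformulated inverse problem.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Made-up continuous-time LQ data: xdot = A x + B u, cost = integral of x'Qx + u'Ru
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.diag([1.0, 0.1]); R = np.array([[1.0]])
Rinv = np.linalg.inv(R)
T = 5.0
x0 = np.array([1.0, 0.0])

def hamiltonian_rhs(t, z):
    """z = [x; lam]. Pontryagin conditions: u = -R^{-1} B' lam,
    xdot = A x + B u, lamdot = -Q x - A' lam."""
    x, lam = z[:2], z[2:]
    u = -Rinv @ B.T @ lam
    return np.vstack([A @ x + B @ u, -Q @ x - A.T @ lam])

def boundary(za, zb):
    # x(0) = x0 and lam(T) = 0 (no terminal cost)
    return np.concatenate([za[:2] - x0, zb[2:]])

t_mesh = np.linspace(0.0, T, 50)
z_guess = np.zeros((4, t_mesh.size))
sol = solve_bvp(hamiltonian_rhs, boundary, t_mesh, z_guess)
u_opt = -(Rinv @ B.T @ sol.y[2:]).ravel()   # recovered optimal control on the mesh
```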

Place, publisher, year, edition, pages
IEEE, 2018
Series
Chinese Control Conference, ISSN 2161-2927
Keywords
Inverse optimal control, Convex optimization, Primal-dual method
National Category
Computational Mathematics
Identifiers
urn:nbn:se:kth:diva-254152 (URN), 10.23919/ChiCC.2018.8482872 (DOI), 000468622100046 (ISI), 2-s2.0-85056081591 (Scopus ID), 978-9-8815-6395-8 (ISBN)
Conference
37th Chinese Control Conference, CCC 2018; Wuhan; China; 25 July 2018 through 27 July 2018
Note

QC 20190620

Available from: 2019-06-20. Created: 2019-06-20. Last updated: 2022-06-26. Bibliographically approved.
Zhang, H. & Hu, X. (2018). Consensus control for linear systems with optimal energy cost. Automatica, 93, 83-91
2018 (English). In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 93, p. 83-91. Article in journal (Refereed). Published.
Abstract [en]

In this paper, we design an energy-optimal controller that achieves asymptotic consensus for linear systems, given the topology of the graph. The controller depends only on relative information between the agents. Since finding the control gain for such a controller is hard, we focus on finding an optimal controller within a classical family of controllers that is based on the Algebraic Riccati Equation (ARE) and guarantees asymptotic consensus. Through analysis, we find that the energy cost is bounded by an interval and hence minimize the upper bound. To do so, two classes of variables need to be optimized, the control gain and the edge weights of the graph, and they are designed from two perspectives. A suboptimal control gain is obtained by choosing Q=0 in the ARE. Negative edge weights are allowed, and the problem is formulated as a Semi-definite Programming (SDP) problem. Allowing negative edge weights means that "competition" between the agents is permitted; the motivation behind this setting is better system performance. We provide a proof, different from that of Thunberg and Hu (2016), from the angle of optimization and show that the lowest control energy cost is reached when the graph is complete and has equal edge weights. Furthermore, two sufficient conditions for the existence of a negative optimal edge-weight realization are given. In addition, we provide a distributed way of solving the SDP problem when the graph topology is regular.
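
To make the structure of this controller family concrete, the sketch below forms an ARE-based relative-information consensus law u_i = -c K sum_j w_ij (x_i - x_j) for made-up continuous-time double-integrator agents on a 4-cycle and simulates it. The graph weights, the coupling gain c, and the use of a nonzero Q in the ARE are illustrative stand-ins, not the Q=0 design or the optimized edge weights of the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are
from scipy.integrate import solve_ivp

# Made-up agent dynamics (double integrator) and an undirected 4-cycle
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                      # illustrative; the paper's suboptimal gain uses Q = 0
R = np.eye(1)
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)    # feedback gain from the ARE

n_agents = 4
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # unit edge weights (not optimized)
c = 1.0                                      # coupling gain, assumed large enough

def dynamics(t, x_flat):
    x = x_flat.reshape(n_agents, 2)
    dx = np.zeros_like(x)
    for i in range(n_agents):
        rel = sum(W[i, j] * (x[i] - x[j]) for j in range(n_agents))
        u = -c * K @ rel                     # relative-information feedback
        dx[i] = A @ x[i] + (B @ u).ravel()
    return dx.ravel()

x0 = np.random.default_rng(1).standard_normal(n_agents * 2)
sol = solve_ivp(dynamics, (0.0, 20.0), x0, max_step=0.05)
print("final disagreement in position:", np.ptp(sol.y[::2, -1]))
```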

Place, publisher, year, edition, pages
Elsevier, 2018
Keywords
Consensus control, Distributed optimization, Multi-agent systems, Optimal control, Semi-definite programming
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-227534 (URN), 10.1016/j.automatica.2018.03.044 (DOI), 000436916200010 (ISI), 2-s2.0-85044478071 (Scopus ID)
Note

QC 20180521

Available from: 2018-05-21. Created: 2018-05-21. Last updated: 2024-03-15. Bibliographically approved.
Zhang, H., Wei, J., Yi, P. & Hu, X. (2018). Projected primal-dual gradient flow of augmented Lagrangian with application to distributed maximization of the algebraic connectivity of a network. Automatica, 98, 34-41
2018 (English). In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 98, p. 34-41. Article in journal (Refereed). Published.
Abstract [en]

In this paper, a projected primal-dual gradient flow of the augmented Lagrangian is presented to solve convex optimization problems that are not necessarily strictly convex. The optimization variables are restricted by equality constraints and by a convex set whose tangent cone admits a computable projection operation. As a supplement to the analysis in Niederlander and Cortes (2016), we show that the projected dynamical system converges to one of the saddle points and hence finds an optimal solution. Moreover, the problem of distributedly maximizing the algebraic connectivity of an undirected network by optimizing the port gains of each node (base station) is considered. The original semi-definite programming (SDP) problem is relaxed into a nonlinear programming (NP) problem that is solved by the aforementioned projected dynamical system. Numerical examples show the convergence of the algorithm to one of the optimal solutions. The effect of the relaxation is illustrated empirically with numerical examples. A methodology is presented that reduces the number of iterations needed to reach the equilibrium, and the per-iteration complexity of the algorithm is illustrated with numerical examples.
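
As a generic illustration of the dynamics in the title (not the network problem treated in the paper), the sketch below runs a forward-Euler discretization of the projected primal-dual gradient flow of the augmented Lagrangian for a small convex, but not strictly convex, quadratic program with one equality constraint and a nonnegativity set; the problem data, step size, and augmentation parameter are made up.

```python
import numpy as np

# min f(x) = 0.5 x'Hx + g'x  s.t.  A x = b,  x >= 0   (H is PSD but singular: not strictly convex)
H = np.array([[2.0, 0.0], [0.0, 0.0]])
g = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
rho = 1.0        # augmentation parameter
h = 0.01         # Euler step size

def grad_x_aug_lagrangian(x, lam):
    """Gradient in x of L(x, lam) = f(x) + lam'(Ax - b) + (rho/2)||Ax - b||^2."""
    return H @ x + g + A.T @ lam + rho * A.T @ (A @ x - b)

x = np.array([5.0, 5.0])
lam = np.zeros(1)
for _ in range(20000):
    # projected primal descent step (projection onto the nonnegative orthant) ...
    x = np.maximum(x - h * grad_x_aug_lagrangian(x, lam), 0.0)
    # ... and dual ascent step on the equality constraint
    lam = lam + h * (A @ x - b)

print("x*:", x, "lambda*:", lam)   # expected near x = (0, 2), lambda = -1
```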

Place, publisher, year, edition, pages
Elsevier, 2018
Keywords
Projected dynamical systems, Semi-definite programming, Distributed optimization
National Category
Computational Mathematics
Identifiers
urn:nbn:se:kth:diva-239468 (URN), 10.1016/j.automatica.2018.09.004 (DOI), 000449310900005 (ISI), 2-s2.0-85053807178 (Scopus ID)
Note

QC 20181126

Available from: 2018-11-26. Created: 2018-11-26. Last updated: 2024-03-15. Bibliographically approved.
Zhang, H. & Hu, X. (2017). Optimal energy consensus control for linear multi-agent systems. In: 2017 36th Chinese Control Conference, CCC. Paper presented at the 36th Chinese Control Conference, CCC 2017, Dalian, China, 26-28 July 2017 (pp. 2663-2668). IEEE Computer Society, Article ID 8027765.
2017 (English). In: 2017 36th Chinese Control Conference, CCC, IEEE Computer Society, 2017, p. 2663-2668, article id 8027765. Conference paper, Published paper (Refereed).
Abstract [en]

In this paper, an energy-optimal controller for consensus of linear multi-agent systems is proposed. It is assumed that the topology among the agents is fixed and that the agents are connected through an edge-weighted graph. The controller only uses relative information between agents. Due to the difficulty of finding the controller gain, we focus on finding the optimal controller within a sub-family whose design is based on the Algebraic Riccati Equation (ARE) and which guarantees consensus. It is found that the energy cost for such controllers is bounded by an interval, and hence we minimize the upper bound. To do that, the control gain and the edge weights are optimized separately. The control gain is optimized by choosing Q = 0 in the ARE; the edge weights are optimized under the assumption that communication resources in the network are limited. Negative edge weights are allowed, and the problem is formulated as a Semi-definite Programming (SDP) problem. The controller coincides with the optimal control in [8] when the graph is complete. Furthermore, two sufficient conditions for the existence of a negative optimal edge-weight realization are given.

Place, publisher, year, edition, pages
IEEE Computer Society, 2017
Series
Chinese Control Conference, CCC, ISSN 1934-1768
Keywords
Consensus Control, Multi-Agent Systems, Optimal Control, Semi-definite Programming, Synchronizability
National Category
Control Engineering
Identifiers
urn:nbn:se:kth:diva-217583 (URN), 10.23919/ChiCC.2017.8027765 (DOI), 000432014403065 (ISI), 2-s2.0-85032191076 (Scopus ID), 9789881563934 (ISBN)
Conference
36th Chinese Control Conference, CCC 2017, Dalian, China, 26 July 2017 through 28 July 2017
Note

QC 20171115

Available from: 2017-11-15. Created: 2017-11-15. Last updated: 2024-03-15. Bibliographically approved.
Zhang, H. & Hu, X. Inverse Optimal Control for Finite-Horizon Discrete-time Linear Quadratic Regulator Under Noisy Output.
(English). Manuscript (preprint) (Other academic).
Abstract [en]

In this paper, the problem of inverse optimal control for finite-horizon discrete-time Linear Quadratic Regulators (LQRs) is considered. The goal of the inverse optimal control problem is to recover the corresponding objective function from noisy observations. We consider the problem in two scenarios: 1) the distributions of the initial state and the observation noise are unknown, yet exact observations of the initial states and noisy observations of the system output are available; 2) exact observations of the initial states are not available, yet the observation noise is known to be white Gaussian and the distribution of the initial state is also Gaussian (with unknown mean and covariance). For the first scenario, we formulate the problem as a risk minimization problem and show that its solution is statistically consistent. For the second scenario, we fit the problem into the maximum-likelihood framework, and the Expectation Maximization (EM) algorithm is used to solve it. The performance of the estimators is illustrated by numerical examples.

Keywords
Inverse optimal control, Linear Quadratic Regulator, noisy output
National Category
Control Engineering; Computational Mathematics
Research subject
Electrical Engineering; Mathematics
Identifiers
urn:nbn:se:kth:diva-241423 (URN)
Note

QC 20190121

Available from: 2019-01-21. Created: 2019-01-21. Last updated: 2022-06-26. Bibliographically approved.
Zhang, H., Umenberger, J. & Hu, X. Inverse Quadratic Optimal Control for Discrete-Time Linear Systems.
(English). Manuscript (preprint) (Other academic).
Abstract [en]

In this paper, we consider the inverse optimal control problem for discrete-time Linear Quadratic Regulators (LQRs) over finite-time horizons. Given observations of the optimal trajectories, or optimal control inputs, to a linear time-invariant system, the goal is to infer the parameters that define the quadratic cost function. The well-posedness of the inverse optimal control problem is first justified. In the noiseless case, when these observations are exact, we analyze the identifiability of the problem and provide sufficient conditions for uniqueness of the solution. In the noisy case, when the observations are corrupted by additive zero-mean noise, we formulate the problem as an optimization problem and prove that the solution to this problem is statistically consistent. The performance of the proposed method is illustrated through numerical examples.

Keywords
Inverse optimal control, Linear Quadratic Regulator
National Category
Control Engineering; Computational Mathematics
Research subject
Mathematics
Identifiers
urn:nbn:se:kth:diva-241428 (URN)
Note

QC 20190121

Available from: 2019-01-21. Created: 2019-01-21. Last updated: 2022-06-26. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-3905-0633
