Accelerating Convergence of Large-scale Optimization Algorithms
KTH, School of Electrical Engineering (EES), Automatic Control.
2015 (English) Doctoral thesis, monograph (Other academic)
Abstract [en]

Several recent engineering applications in multi-agent systems, communication networks, and machine learning deal with decision problems that can be formulated as optimization problems. For many of these problems, new constraints limit the usefulness of traditional optimization algorithms. In some cases, the problem size is much larger than what can be conveniently dealt with using standard solvers. In other cases, the problems have to be solved in a distributed manner by several decision-makers with limited computational and communication resources. By exploiting problem structure, however, it is possible to design computationally efficient algorithms that satisfy the implementation requirements of these emerging applications.

In this thesis, we study a variety of techniques for improving the convergence times of optimization algorithms for large-scale systems. In the first part of the thesis, we focus on multi-step first-order methods. These methods add memory to the classical gradient method and account for past iterates when computing the next one. The result is a computationally lightweight acceleration technique that can yield significant improvements over gradient descent. In particular, we focus on the Heavy-ball method introduced by Polyak. Previous studies have quantified the performance improvements over the gradient method through a local convergence analysis of twice continuously differentiable objective functions. However, the convergence properties of the method on more general convex cost functions have not been known. The first contribution of this thesis is a global convergence analysis of the Heavy-ball method for a variety of convex problems whose objective functions are strongly convex and have Lipschitz-continuous gradients. The second contribution is to tailor the Heavy-ball method to network optimization problems. In such problems, a collection of decision-makers collaborate to find the decision vector that minimizes the total system cost. We derive the optimal step-sizes for the Heavy-ball method in this scenario, and show how the optimal convergence times depend on the individual cost functions and the structure of the underlying interaction graph. We present three engineering applications where our algorithm significantly outperforms tailor-made state-of-the-art algorithms.
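The multi-step idea described above can be sketched in a few lines. The snippet below is a minimal illustration, not the thesis's network-tailored variant: the quadratic test function, the constants m and L, and the step-size formulas (Polyak's classical choices for strongly convex quadratics) are assumptions for demonstration only.

```python
import numpy as np

def heavy_ball(grad, x0, alpha, beta, iters=100):
    # Multi-step update: x_{k+1} = x_k - alpha*grad(x_k) + beta*(x_k - x_{k-1}).
    # The beta*(x_k - x_{k-1}) "memory" term is what distinguishes the method
    # from plain gradient descent.
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        x, x_prev = x - alpha * grad(x) + beta * (x - x_prev), x
    return x

# Toy problem: f(x) = 0.5 x'Qx with strong-convexity modulus m and
# gradient Lipschitz constant L (here m = 1, L = 10).
Q = np.diag([1.0, 10.0])
m, L = 1.0, 10.0

# Polyak's classical step sizes for this function class.
alpha = 4.0 / (np.sqrt(L) + np.sqrt(m)) ** 2
beta = ((np.sqrt(L) - np.sqrt(m)) / (np.sqrt(L) + np.sqrt(m))) ** 2

x_star = heavy_ball(lambda x: Q @ x, np.array([1.0, 1.0]), alpha, beta)
```

On this toy quadratic the minimizer is the origin, and the iterates contract at roughly the factor sqrt(beta) per step, versus (L - m)/(L + m) for optimally tuned gradient descent on the same function.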

In the second part of the thesis, we consider the Alternating Direction Method of Multipliers (ADMM), another powerful method for solving structured optimization problems. The method has recently attracted significant interest from several engineering communities. Despite its popularity, its optimal parameters have been unknown. The third contribution of this thesis is to derive optimal parameters for the ADMM algorithm when applied to quadratic programming problems. Our derivations quantify how the Hessian of the cost functions and the constraint matrices affect the convergence times. By exploiting this information, we develop a preconditioning technique that allows us to accelerate the performance even further. Numerical studies of model-predictive control problems illustrate significant performance benefits of a well-tuned ADMM algorithm. The fourth and final contribution of the thesis is to extend our results on optimal scaling and parameter tuning of the ADMM method to a distributed setting. We derive optimal algorithm parameters and suggest heuristic methods that can be executed by individual agents using local information. The resulting algorithm is applied to the distributed averaging problem and shown to yield substantial performance improvements over state-of-the-art algorithms.
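For reference, the basic ADMM iteration on a quadratic program can be sketched as follows. This is the generic textbook splitting with a fixed penalty rho, not the optimally tuned or preconditioned variant developed in the thesis; the example problem and the choice rho = 1 are assumptions for illustration only.

```python
import numpy as np

def admm_qp(Q, q, rho=1.0, iters=200):
    # Solve min 0.5 x'Qx + q'x subject to x >= 0, via the splitting
    # min f(x) + g(z) s.t. x = z, where f is the quadratic and g is the
    # indicator of the nonnegative orthant.
    n = len(q)
    x = z = u = np.zeros(n)
    A = Q + rho * np.eye(n)  # the rho-dependence here is what parameter tuning exploits
    for _ in range(iters):
        x = np.linalg.solve(A, rho * (z - u) - q)  # x-update: quadratic prox
        z = np.maximum(0.0, x + u)                 # z-update: projection onto x >= 0
        u = u + x - z                              # scaled dual update
    return z

Q = np.array([[2.0, 0.5], [0.5, 1.0]])
q = np.array([-1.0, 1.0])
x_opt = admm_qp(Q, q)  # converges to the constrained minimizer [0.5, 0.0]
```

The convergence speed of this loop depends strongly on rho (through the matrix `Q + rho*I` in the x-update), which is precisely the sensitivity the thesis's optimal parameter selection addresses.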

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2015. ix, 156 p.
TRITA-EE, ISSN 1653-5146; 2015:012
Keyword [en]
Convex optimization, Large-scale systems, First-order methods, Convergence analysis, ADMM, Optimization algorithms
National Category
Telecommunications; Control Engineering; Signal Processing
Research subject
Electrical Engineering; Mathematics
URN: urn:nbn:se:kth:diva-162377
ISBN: 978-91-7595-485-1
OAI: diva2:797724
Public defence
2015-04-29, F3, Lindstedtsvägen 26, KTH, Stockholm, 10:00 (English)

QC 20150327

Available from: 2015-03-27
Created: 2015-03-24
Last updated: 2015-03-27
Bibliographically approved

Open Access in DiVA

e.ghadimi (3044 kB), 424 downloads
File information
File name: FULLTEXT01.pdf
File size: 3044 kB
Checksum: SHA-512
Type: fulltext
Mimetype: application/pdf

By author/editor
Ghadimi, Euhanna

