On the Convergence of Alternating Direction Lagrangian Methods for Nonconvex Structured Optimization Problems
2015 (English). In: IEEE Transactions on Control of Network Systems, ISSN 2325-5870. Article in journal (refereed). Epub ahead of print.
Nonconvex and structured optimization problems arise in many engineering applications that demand scalable and distributed solution methods. The study of the convergence properties of these methods is in general difficult due to the nonconvexity of the problem. In this paper, two distributed solution methods that combine the fast convergence properties of augmented Lagrangian-based methods with the separability properties of alternating optimization are investigated. The first method is adapted from the classic quadratic penalty function method and is called the Alternating Direction Penalty Method (ADPM). Unlike the original quadratic penalty function method, in which single-step optimizations are adopted, ADPM uses alternating optimization, which in turn makes it scalable. The second method is the well-known Alternating Direction Method of Multipliers (ADMM). It is shown that ADPM for nonconvex problems asymptotically converges to a primal feasible point under mild conditions, and an additional condition ensuring that it asymptotically reaches the standard first-order necessary conditions for local optimality is introduced. In the case of ADMM, novel sufficient conditions under which the algorithm asymptotically reaches the standard first-order necessary conditions are established. Based on this, complete convergence of ADMM for a class of low-dimensional problems is characterized. Finally, the results are illustrated by applying ADPM and ADMM to a nonconvex localization problem in wireless sensor networks.
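To make the alternating update structure mentioned in the abstract concrete, the following is a minimal sketch of the scaled-form ADMM iteration on a hypothetical toy *convex* splitting problem (minimize (x-1)² + (z-3)² subject to x = z, with optimum x = z = 2). The paper's subject is nonconvex problems and the localization application; this example only illustrates the generic x-update / z-update / dual-update pattern, and the function `admm_toy` and its parameters are assumptions for illustration, not the paper's algorithm.

```python
# Toy scaled-form ADMM sketch (illustrative assumption, not the paper's method):
#   minimize (x - 1)^2 + (z - 3)^2  subject to  x = z,
# whose optimum is x = z = 2.

def admm_toy(rho=1.0, iters=200):
    x = z = u = 0.0  # primal variables x, z and scaled dual variable u
    for _ in range(iters):
        # x-update: argmin_x (x - 1)^2 + (rho/2)(x - z + u)^2
        x = (2.0 + rho * (z - u)) / (2.0 + rho)
        # z-update: argmin_z (z - 3)^2 + (rho/2)(x - z + u)^2
        z = (6.0 + rho * (x + u)) / (2.0 + rho)
        # dual update on the consensus residual x - z
        u += x - z
    return x, z

x, z = admm_toy()
```

For this convex instance the iterates converge to the consensus point x = z = 2; the nonconvex case analyzed in the paper requires the additional sufficient conditions the abstract describes.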
Place, publisher, year, edition, pages
IEEE Press, 2015.
Identifiers: URN: urn:nbn:se:kth:diva-178440, DOI: 10.1109/TCNS.2015.2476198, OAI: oai:DiVA.org:kth-178440, DiVA: diva2:877834
QC 20151208. Available 2015-12-08, bibliographically approved 2016-02-16.