Inexact block coordinate descent algorithms for nonsmooth nonconvex optimization
University of Luxembourg, Luxembourg, L-1855, Luxembourg (Signal Processing). ORCID iD: 0000-0003-2298-6774
2020 (English). In: IEEE Transactions on Signal Processing, ISSN 1053-587X, Vol. 68, p. 947-961, article id 8931014. Article in journal (Refereed). Published
Abstract [en]

In this paper, we propose an inexact block coordinate descent algorithm for large-scale nonsmooth nonconvex optimization problems. At each iteration, a particular block variable is selected and updated by inexactly solving the original optimization problem with respect to that block variable. More precisely, a local approximation of the original optimization problem is solved. The proposed algorithm has several attractive features, namely, i) high flexibility, as the approximation function only needs to be strictly convex and does not have to be a global upper bound of the original function; ii) fast convergence, as the approximation function can be designed to exploit the problem structure at hand and the stepsize is calculated by a line search; iii) low complexity, as the approximation subproblems are much easier to solve than the original problem and the line search scheme is carried out over a properly constructed differentiable function; iv) guaranteed convergence of a subsequence to a stationary point, even when the objective function does not have a Lipschitz continuous gradient. Interestingly, when the approximation subproblem is solved by a descent algorithm, convergence of a subsequence to a stationary point is still guaranteed even if the approximation subproblem is solved inexactly by terminating the descent algorithm after a finite number of iterations. These features make the proposed algorithm suitable for large-scale problems where the dimension exceeds the memory and/or the processing capability of the existing hardware. These features are also illustrated by several applications in signal processing and machine learning, for instance, network anomaly detection and phase retrieval. To promote reproducible research, the simulation code is available at https://github.com/optyang/BSCA.
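To make the algorithmic idea concrete, the following is a minimal illustrative sketch of inexact block coordinate descent on a LASSO-type composite problem; it is not the authors' released code (available at the GitHub link above). The quadratic block approximation, soft-thresholding update, Armijo-type backtracking line search, and cyclic block selection are generic choices assumed here for illustration, whereas the algorithm in the paper admits any strictly convex approximation and performs its line search over a specially constructed differentiable function.

# Illustrative sketch (assumed setup, not the authors' implementation):
# inexact block coordinate descent for min_x 0.5*||A x - b||^2 + mu*||x||_1.
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau*||.||_1 (closed form for the l1 penalty)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def inexact_bcd(A, b, mu, num_blocks=4, max_iter=100, beta=0.5, sigma=1e-4):
    m, n = A.shape
    x = np.zeros(n)
    blocks = np.array_split(np.arange(n), num_blocks)
    obj = lambda z: 0.5 * np.sum((A @ z - b) ** 2) + mu * np.sum(np.abs(z))

    for it in range(max_iter):
        k = it % num_blocks                 # cyclic block selection (one of several possible rules)
        idx = blocks[k]
        A_k = A[:, idx]
        r = A @ x - b                       # current residual
        grad_k = A_k.T @ r                  # block gradient of the smooth part
        L_k = np.linalg.norm(A_k, 2) ** 2 + 1e-12   # block curvature constant

        # Inexact block update: minimize a strictly convex quadratic
        # approximation of the smooth part plus the l1 term over the block.
        x_k_hat = soft_threshold(x[idx] - grad_k / L_k, mu / L_k)
        d_k = x_k_hat - x[idx]              # block descent direction

        # Armijo-type backtracking line search along the block direction
        # (a common simplification of the line search used in the paper).
        gamma, f_old = 1.0, obj(x)
        decrease = grad_k @ d_k + mu * (np.sum(np.abs(x_k_hat)) - np.sum(np.abs(x[idx])))
        while True:
            x_new = x.copy()
            x_new[idx] = x[idx] + gamma * d_k
            if obj(x_new) <= f_old + sigma * gamma * decrease or gamma < 1e-10:
                break
            gamma *= beta
        x = x_new
    return x

# Example usage on random data (hypothetical sizes):
# A = np.random.randn(200, 500); b = np.random.randn(200)
# x_hat = inexact_bcd(A, b, mu=0.1)

The point of the sketch is the division of labor described in the abstract: each block subproblem is cheap (here a closed-form soft-thresholding step), and the line search only needs inexpensive objective evaluations, so no global Lipschitz constant or upper-bound surrogate is required.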

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2020. Vol. 68, p. 947-961, article id 8931014
National Category
Signal Processing
Identifiers
URN: urn:nbn:se:kth:diva-286884
DOI: 10.1109/TSP.2019.2959240
ISI: 000526259400001
Scopus ID: 2-s2.0-85081157424
OAI: oai:DiVA.org:kth-286884
DiVA id: diva2:1506006
Note

QC 20201210

Available from: 2020-12-02. Created: 2020-12-02. Last updated: 2024-03-15. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Ottersten, Björn

