Data-driven nonsmooth optimization
KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.). ORCID iD: 0000-0001-8110-6007
KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. ORCID iD: 0000-0002-9778-1426
KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.); Elekta, Box 7593, S-10393 Stockholm, Sweden. ORCID iD: 0000-0001-9928-3407
KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. ORCID iD: 0000-0001-5158-9255
2020 (English). In: SIAM Journal on Optimization, ISSN 1052-6234, E-ISSN 1095-7189, Vol. 30, no. 1, p. 102-131. Article in journal (Refereed). Published.
Abstract [en]

In this work, we consider methods for solving large-scale optimization problems with a possibly nonsmooth objective function. The key idea is to first parametrize a class of optimization methods using a generic iterative scheme involving only linear operations and applications of proximal operators. This scheme contains some modern primal-dual first-order algorithms like the Douglas-Rachford and hybrid gradient methods as special cases. Moreover, we show weak convergence of the iterates to an optimal point for a new method which also belongs to this class. Next, we interpret the generic scheme as a neural network and use unsupervised training to learn the best set of parameters for a specific class of objective functions while imposing a fixed number of iterations. In contrast to other approaches of "learning to optimize," we present an approach which learns parameters only in the set of convergent schemes. Finally, we illustrate the approach on optimization problems arising in tomographic reconstruction and image deconvolution, and train optimization algorithms for optimal performance given a fixed number of iterations.

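For readers unfamiliar with the class of methods the abstract refers to, the sketch below shows the primal-dual hybrid gradient method, which the abstract names as a special case of the generic scheme, applied to a toy sparse-regression problem. This is an illustrative example only, not the authors' parametrization: the problem data, the step sizes sigma and tau, and the relaxation parameter theta are assumptions chosen to show which quantities such a scheme exposes as tunable (and, in the paper's setting, learnable) parameters.

```python
# Illustrative sketch (not the paper's exact scheme): primal-dual hybrid
# gradient (PDHG) for the toy problem  min_x 0.5*||A x - b||^2 + lam*||x||_1,
# written as min_x g(x) + f(A x) with g = lam*||.||_1 and f = 0.5*||. - b||^2.
# The iteration uses only linear operations and proximal operators, as in the
# generic scheme described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
m, n, lam = 50, 100, 0.1
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(m)

L = np.linalg.norm(A, 2)        # operator norm of the linear map
tau = sigma = 0.9 / L           # tau * sigma * L**2 < 1 keeps the scheme convergent
theta = 1.0                     # relaxation parameter

def prox_fconj(y, sigma):
    # prox of the convex conjugate of f(z) = 0.5*||z - b||^2
    return (y - sigma * b) / (1.0 + sigma)

def prox_g(x, tau):
    # prox of g(x) = lam*||x||_1 (soft-thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - tau * lam, 0.0)

x = np.zeros(n)
x_bar = x.copy()
y = np.zeros(m)
for _ in range(200):            # fixed number of iterations, as in the paper's setting
    y = prox_fconj(y + sigma * (A @ x_bar), sigma)
    x_new = prox_g(x - tau * (A.T @ y), tau)
    x_bar = x_new + theta * (x_new - x)
    x = x_new

print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())
```

In this sketch, sigma, tau, and theta are fixed by hand subject to the standard convergence condition; the approach described in the abstract instead learns such parameters from data while restricting the search to provably convergent schemes.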
Place, publisher, year, edition, pages
Society for Industrial &amp; Applied Mathematics (SIAM), 2020. Vol. 30, no. 1, p. 102-131
Keywords [en]
convex optimization, proximal algorithms, monotone operators, machine learning, inverse problems, computerized tomography
National Category
Computational Mathematics
Identifiers
URN: urn:nbn:se:kth:diva-278765
DOI: 10.1137/18M1207685
ISI: 000546998300005
Scopus ID: 2-s2.0-85084927877
OAI: oai:DiVA.org:kth-278765
DiVA, id: diva2:1455866
Note

QC 20200729

Available from: 2020-07-29. Created: 2020-07-29. Last updated: 2022-06-26. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Banert, Sebastian; Ringh, Axel; Adler, Jonas; Karlsson, Johan; Öktem, Ozan
