Convergence of a stochastic gradient method with momentum for non-smooth non-convex optimization
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control). ORCID iD: 0000-0002-2237-2580
2020 (English). In: 37th International Conference on Machine Learning, ICML 2020, International Machine Learning Society (IMLS), 2020, p. 6576-6585. Conference paper, Published paper (Refereed).
Abstract [en]

Stochastic gradient methods with momentum are widely used in applications and at the core of optimization subroutines in many popular machine learning libraries. However, their sample complexities have not been obtained for problems beyond those that are convex or smooth. This paper establishes the convergence rate of a stochastic subgradient method with a momentum term of Polyak type for a broad class of non-smooth, non-convex, and constrained optimization problems. Our key innovation is the construction of a special Lyapunov function for which the proven complexity can be achieved without any tuning of the momentum parameter. For smooth problems, we extend the known complexity bound to the constrained case and demonstrate how the unconstrained case can be analyzed under weaker assumptions than the state of the art. Numerical results confirm our theoretical developments.
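
For readers unfamiliar with the method discussed in the abstract, the following is a minimal sketch of a projected stochastic subgradient step with a Polyak (heavy-ball) momentum term, in the spirit of the update the paper analyzes. The l1 objective, the ball constraint, the noise model, the step size, the momentum value, and the helper names (project_ball, stochastic_heavy_ball) are all illustrative assumptions, not the authors' experimental setup.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Euclidean projection onto the constraint set C = {x : ||x||_2 <= radius}.
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def stochastic_heavy_ball(subgrad, x0, steps=1000, alpha=0.01, beta=0.9, radius=1.0, rng=None):
    """Projected stochastic subgradient method with Polyak momentum (sketch).

    Iterates x_{k+1} = proj_C(x_k - alpha * g_k + beta * (x_k - x_{k-1})),
    where g_k is a stochastic subgradient of the objective at x_k.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = subgrad(x, rng)
        x_next = project_ball(x - alpha * g + beta * (x - x_prev), radius)
        x_prev, x = x, x_next
    return x

# Illustrative non-smooth objective: f(x) = ||A x - b||_1, observed through
# a noisy subgradient oracle (additive Gaussian noise).
A = np.array([[2.0, 0.5], [0.3, 1.5], [1.0, -1.0]])
b = np.array([1.0, -0.5, 0.2])

def subgrad(x, rng):
    g = A.T @ np.sign(A @ x - b)                    # subgradient of the l1 loss
    return g + 0.1 * rng.standard_normal(x.shape)   # stochastic noise

x_star = stochastic_heavy_ball(subgrad, x0=np.zeros(2))
print(x_star)
```

The same loop applies to smooth problems with subgrad returning a stochastic gradient; the paper's point is that the stated complexity holds without any tuning of the momentum parameter beta.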

Place, publisher, year, edition, pages
International Machine Learning Society (IMLS), 2020. p. 6576-6585
Keywords [en]
Constrained optimization, Convex optimization, Gradient methods, Lyapunov functions, Machine learning, Momentum, Complexity bounds, Constrained optimization problems, Convergence rates, Nonconvex optimization, Numerical results, Stochastic gradient methods, Subgradient methods, Theoretical development, Stochastic systems
National Category
Control Engineering
Identifiers
URN: urn:nbn:se:kth:diva-302903
Scopus ID: 2-s2.0-85099888979
OAI: oai:DiVA.org:kth-302903
DiVA, id: diva2:1599870
Conference
37th International Conference on Machine Learning, ICML 2020, 13 July 2020 through 18 July 2020
Note

QC 20211002

Available from: 2021-10-02. Created: 2021-10-02. Last updated: 2023-04-05. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Scopus

Authority records

Mai, Vien V.; Johansson, Mikael

Search in DiVA

By author/editor
Mai, Vien V.; Johansson, Mikael
By organisation
Decision and Control Systems (Automatic Control)
Control Engineering

Search outside of DiVA

Google
Google Scholar

Total: 16 hits