Training (overparametrized) neural networks in near-linear time
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS. ORCID iD: 0000-0001-8611-6896
2021 (English). In: Leibniz International Proceedings in Informatics, LIPIcs. Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing, 2021. Conference paper, Published paper (Refereed)
Abstract [en]

The slow convergence rate and pathological curvature issues of first-order gradient methods for training deep neural networks initiated an ongoing effort to develop faster second-order optimization algorithms beyond SGD, without compromising the generalization error. Despite their remarkable convergence rate (independent of the training batch size n), second-order algorithms incur a daunting slowdown in the cost per iteration (inverting the Hessian matrix of the loss function), which renders them impractical. Very recently, this computational overhead was mitigated by the works of [79, 23], yielding an O(mn²)-time second-order algorithm for training two-layer overparametrized neural networks of polynomial width m. We show how to speed up the algorithm of [23], achieving an Õ(mn)-time backpropagation algorithm for training (mildly overparametrized) ReLU networks, which is near-linear in the dimension (mn) of the full gradient (Jacobian) matrix. The centerpiece of our algorithm is to reformulate the Gauss-Newton iteration as an ℓ2-regression problem, and then use a Fast-JL type dimension reduction to precondition the underlying Gram matrix in time independent of M, allowing us to find a sufficiently good approximate solution via first-order conjugate gradient. Our result provides a proof-of-concept that advanced machinery from randomized linear algebra – which led to recent breakthroughs in convex optimization (ERM, LPs, Regression) – can be carried over to the realm of deep learning as well.
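
As a rough illustration of the pipeline the abstract describes (Gauss-Newton step recast as an ℓ2-regression, a randomized sketch used to precondition the Gram matrix, then a first-order conjugate gradient solve), here is a minimal NumPy/SciPy sketch. It is not the authors' algorithm: a dense Gaussian sketch stands in for the Fast-JL transform (so it does not attain the stated near-linear running time), and all names (J, b, sketch_size, preconditioned_gram_solve) are illustrative assumptions, not notation or code from the paper.

```python
# Toy version of a sketch-preconditioned Gauss-Newton / Gram-matrix solve.
# J is the n x p Jacobian with p >> n (overparametrized regime); we approximately
# solve (J J^T) x = b by preconditioning with a randomized sketch and running CG.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def preconditioned_gram_solve(J, b, sketch_size, seed=0):
    """Approximately solve (J J^T) x = b for wide J (n x p, p >> n, sketch_size >= n)."""
    n, p = J.shape
    rng = np.random.default_rng(seed)

    # Gaussian sketch of the huge parameter dimension (the paper uses a Fast-JL
    # type transform here to make this step cheap); JS JS^T approximates J J^T.
    S = rng.standard_normal((p, sketch_size)) / np.sqrt(sketch_size)
    JS = J @ S                                   # n x sketch_size

    # QR of JS^T gives an n x n factor R with R^T R = JS JS^T, our preconditioner.
    _, R = np.linalg.qr(JS.T, mode="reduced")

    # Preconditioned operator A = R^{-T} (J J^T) R^{-1}; well conditioned whenever
    # the sketched Gram matrix is a good approximation of the true one.
    def matvec(y):
        z = np.linalg.solve(R, y)
        return np.linalg.solve(R.T, J @ (J.T @ z))

    A = LinearOperator((n, n), matvec=matvec)
    y, _ = cg(A, np.linalg.solve(R.T, b), maxiter=50)
    return np.linalg.solve(R, y)                 # recover x = R^{-1} y

# Tiny usage example with random data (n = 32 samples, p = 4096 parameters):
# prints the relative residual of the approximate Gram solve.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    J = rng.standard_normal((32, 4096))
    b = rng.standard_normal(32)
    x = preconditioned_gram_solve(J, b, sketch_size=128)
    print(np.linalg.norm(J @ (J.T @ x) - b) / np.linalg.norm(b))
```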

Place, publisher, year, edition, pages
Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing, 2021.
Keywords [en]
Deep learning theory, Nonconvex optimization, Backpropagation, Convex optimization, Curve fitting, Deep learning, Deep neural networks, Gradient methods, Jacobian matrices, Machinery, Network layers, Approximate solution, Computational overheads, Dimension reduction, Gauss-Newton iteration, Generalization Error, Second order optimization, Second-order algorithms, Slow convergences, Multilayer neural networks
National Category
Control Engineering; Computational Mathematics
Identifiers
URN: urn:nbn:se:kth:diva-309947
DOI: 10.4230/LIPIcs.ITCS.2021.63
Scopus ID: 2-s2.0-85108156230
OAI: oai:DiVA.org:kth-309947
DiVA, id: diva2:1645901
Conference
12th Innovations in Theoretical Computer Science Conference, ITCS 2021, 6 January 2021 through 8 January 2021
Note

Part of proceedings: ISBN 978-3-95977-177-1

QC 20220321

Available from: 2022-03-21. Created: 2022-03-21. Last updated: 2023-01-18. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

van den Brand, Jan

