Pure and Spurious Critical Points: a Geometric Study of Linear Networks
New York University, United States.
KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
New York University, United States.
2020 (English). Conference paper, Published paper (Refereed)
Abstract [en]

The critical locus of the loss function of a neural network is determined by the geometry of the functional space and by the parameterization of this space by the network's weights. We introduce a natural distinction between pure critical points, which only depend on the functional space, and spurious critical points, which arise from the parameterization. We apply this perspective to revisit and extend the literature on the loss function of linear neural networks. For this type of network, the functional space is either the set of all linear maps from input to output space, or a determinantal variety, i.e., a set of linear maps with bounded rank. We use geometric properties of determinantal varieties to derive new results on the landscape of linear networks with different loss functions and different parameterizations. Our analysis clearly illustrates that the absence of “bad” local minima in the loss landscape of linear networks is due to two distinct phenomena that apply in different settings: it is true for arbitrary smooth convex losses in the case of architectures that can express all linear maps (“filling architectures”) but it holds only for the quadratic loss when the functional space is a determinantal variety (“non-filling architectures”). Without any assumption on the architecture, smooth convex losses may lead to landscapes with many bad minima.
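
As a quick illustration of the filling/non-filling distinction described in the abstract, the sketch below (not from the paper; the dimensions and variable names are illustrative assumptions) composes two random weight matrices and checks the rank of the end-to-end linear map.

```python
# Minimal numerical sketch (assumed example, not from the paper): a linear
# network with layer widths (d_in, d_hidden, d_out) parameterizes the
# end-to-end map W = W2 @ W1, whose rank is bounded by the hidden width,
# so the functional space is a determinantal variety when the hidden layer
# is narrow ("non-filling" architecture).
import numpy as np

rng = np.random.default_rng(0)

d_in, d_hidden, d_out = 10, 3, 10    # narrow hidden layer: non-filling
W1 = rng.standard_normal((d_hidden, d_in))
W2 = rng.standard_normal((d_out, d_hidden))

W = W2 @ W1                          # composed (end-to-end) linear map
print(np.linalg.matrix_rank(W))      # prints 3: rank cannot exceed d_hidden

# If d_hidden >= min(d_in, d_out), every linear map from input to output
# space is expressible ("filling" architecture), and the functional space
# is the full space of linear maps rather than a determinantal variety.
```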

Place, publisher, year, edition, pages
International Conference on Learning Representations, ICLR, 2020.
National Category
Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:kth:diva-283125
Scopus ID: 2-s2.0-85114332309
OAI: oai:DiVA.org:kth-283125
DiVA, id: diva2:1473089
Conference
8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, 30 April 2020
Note

QC 20210202

Available from: 2020-10-05. Created: 2020-10-05. Last updated: 2025-02-07. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Scopus

Authority records

Kohn, Kathlén
