On the Lipschitz Constant of Deep Networks and Double Descent
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0002-0242-4419
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0001-5211-6388
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0003-0579-3372
2023 (English). In: Proceedings 34th British Machine Vision Conference 2023 / [ed] BMVA, 2023. Conference paper, Published paper (Refereed)
Abstract [en]

Existing bounds on the generalization error of deep networks assume some form of smooth or bounded dependence on the input variable, falling short of investigating the mechanisms controlling such factors in practice. In this work, we present an extensive experimental study of the empirical Lipschitz constant of deep networks undergoing double descent, and highlight non-monotonic trends strongly correlating with the test error. Building a connection between parameter-space and input-space gradients for SGD around a critical point, we isolate two important factors - namely loss landscape curvature and distance of parameters from initialization - respectively controlling optimization dynamics around a critical point and bounding model function complexity, even beyond the training data. Our study presents novel insights on implicit regularization via overparameterization, and effective model complexity for networks trained in practice.
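To make the central quantity concrete: the empirical Lipschitz constant referred to above is typically lower-bounded by the largest input-gradient norm observed over a set of samples. The following is a minimal PyTorch sketch of such an estimate, assuming a trained classifier `model` and a batch of inputs `x`; it illustrates the general idea and is not the authors' exact measurement protocol.

import torch

def empirical_lipschitz_lower_bound(model, x):
    # Hypothetical helper for illustration: estimates a lower bound on the
    # Lipschitz constant of the per-sample max logit via input-gradient norms.
    model.eval()
    x = x.clone().requires_grad_(True)
    logits = model(x)
    # Summing the per-sample max logits yields one input gradient per sample,
    # since samples do not interact in the forward pass (in eval mode).
    scalar = logits.max(dim=1).values.sum()
    (grad,) = torch.autograd.grad(scalar, x)
    # The largest per-sample gradient norm lower-bounds the Lipschitz constant.
    return grad.flatten(start_dim=1).norm(dim=1).max().item()

Tracking such an estimate across training epochs or model widths is one way to surface the non-monotonic, double-descent-like trends discussed in the abstract.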

Place, publisher, year, edition, pages
2023.
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-348454
OAI: oai:DiVA.org:kth-348454
DiVA, id: diva2:1882587
Conference
34th British Machine Vision Conference 2023, BMVC 2023, Aberdeen, UK, November 20-24, 2023
Note

QC 20240709

Available from: 2024-07-05. Created: 2024-07-05. Last updated: 2024-10-17. Bibliographically approved.
In thesis
1. On Implicit Smoothness Regularization in Deep Learning
2024 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

State-of-the-art neural networks provide a rich class of function approximators, fueling the remarkable success of gradient-based deep learning on complex high-dimensional problems, ranging from natural language modeling to image and video generation and understanding. Modern deep networks enjoy sufficient expressive power to shatter common classification benchmarks, as well as interpolate noisy regression targets. At the same time, the same models are able to generalize well whilst perfectly fitting noisy training data, even in the absence of external regularization constraining model expressivity. Efforts towards making sense of the observed benign overfitting behaviour uncovered its occurrence in overparameterized linear regression as well as kernel regression, extending classical empirical risk minimization to the study of minimum norm interpolators. Existing theoretical understanding of the phenomenon identifies two key factors affecting the generalization ability of interpolating models. First, overparameterization (corresponding to the regime in which a model counts more parameters than the number of constraints imposed by the training sample) effectively reduces model variance in proximity of the training data. Second, the structure of the learner (which determines how patterns in the training data are encoded in the learned representation) controls the ability to separate signal from noise when attaining interpolation. Analyzing the above factors for deep finite-width networks respectively entails characterizing the mechanisms driving feature learning and norm-based capacity control in practical settings, thus posing a challenging open problem. The present thesis explores the problem of capturing the effective complexity of finite-width deep networks trained in practice, through the lens of model function geometry, focusing on factors implicitly restricting model complexity. First, model expressivity is contrasted with effective nonlinearity for models undergoing double descent, highlighting the constrained effective complexity afforded by overparameterization. Second, the geometry of interpolation is studied in the presence of noisy targets, observing robust interpolation over volumes of size controlled by model scale. Third, the observed behavior is formally tied to parameter-space curvature, connecting parameter-space geometry to that of the input space. Finally, the thesis concludes by investigating whether the findings translate to the context of self-supervised learning, relating the geometry of representations to downstream robustness, and highlighting trends in keeping with neural scaling laws. The present work isolates input-space smoothness as a key notion for characterizing the effective complexity of model functions expressed by overparameterized deep networks.
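For readers unfamiliar with the minimum norm interpolators mentioned above: in overparameterized linear regression with n samples, d > n features, and a full-rank data matrix X, the interpolating solution of smallest Euclidean norm is theta = X^T (X X^T)^{-1} y (equivalently, pinv(X) y). The short NumPy sketch below, on synthetic data, shows that such a solution fits even pure-noise targets exactly; it is a toy illustration, not an experiment from the thesis.

import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 500                       # overparameterized: more features than samples
X = rng.standard_normal((n, d))      # synthetic design matrix
y = rng.standard_normal(n)           # synthetic (pure-noise) targets

# Minimum-norm interpolator: theta = X^T (X X^T)^{-1} y, i.e. pinv(X) @ y.
theta = X.T @ np.linalg.solve(X @ X.T, y)

print(np.allclose(X @ theta, y))     # True: the noisy targets are interpolated exactly
print(np.linalg.norm(theta))         # smallest norm among all interpolating solutions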


Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2024. p. 94
Series
TRITA-EECS-AVL ; 2024:80
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-354917 (URN)
978-91-8106-077-5 (ISBN)
Public defence
2024-11-07, https://kth-se.zoom.us/j/62717697317, Kollegiesalen, Brinellvägen 6, Stockholm, 15:00 (English)
Opponent
Supervisors
Note

QC 20241017

Available from: 2024-10-17. Created: 2024-10-17. Last updated: 2024-11-15. Bibliographically approved.

Open Access in DiVA

fulltext (4403 kB), 334 downloads
File information
File name: FULLTEXT01.pdf
File size: 4403 kB
Checksum: SHA-512
7f933f1f626a94123cc10c683a6b066f61630b44079a628978247ba4634325e4b98712a1cbedda1f413a5b65ba3093e57fa8cd3d12f58911a3655aa228b0478a
Type: fulltext
Mimetype: application/pdf

Other links

Conference website
Presentation video

Authority records

Gamba, Matteo; Azizpour, Hossein; Björkman, Mårten

Search in DiVA

By author/editor
Gamba, Matteo; Azizpour, Hossein; Björkman, Mårten
By organisation
Robotics, Perception and Learning, RPL
Computer Sciences

Search outside of DiVA

Google, Google Scholar
Total: 334 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are now no longer available.

Total: 178 hits