Are All Linear Regions Created Equal?
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0003-2784-7300
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0001-5211-6388
2022 (English). In: Proceedings 25th International Conference on Artificial Intelligence and Statistics, AISTATS 2022 / [ed] Camps-Valls, G.; Ruiz, F. J. R.; Valera, I., ML Research Press, 2022, Vol. 151. Conference paper, Published paper (Refereed)
Abstract [en]

The number of linear regions has been studied as a proxy for the complexity of ReLU networks. However, the empirical success of network compression techniques such as pruning and knowledge distillation suggests that, in the overparameterized setting, linear region density might fail to capture the effective nonlinearity. In this work, we propose an efficient algorithm for discovering linear regions and use it to investigate how well density captures the nonlinearity of VGGs and ResNets trained on CIFAR-10 and CIFAR-100. We contrast the results with a more principled nonlinearity measure based on function variation, highlighting the shortcomings of linear region density. Interestingly, our measure of nonlinearity clearly correlates with model-wise deep double descent, connecting reduced test error with reduced nonlinearity and increased local similarity of linear regions.
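
As a purely illustrative aid (a minimal sketch under assumed toy settings, not the discovery algorithm proposed in the paper), the Python snippet below counts the distinct ReLU activation patterns a small random network produces along a line segment in input space; inputs that share a pattern lie in the same linear region, so the number of distinct patterns is a crude estimate of how many regions the segment crosses. The layer sizes, random weights, and sampling scheme are hypothetical.

    # Minimal sketch (not the paper's algorithm): each input of a ReLU network
    # gets an "activation pattern" recording which units are active; inputs
    # sharing a pattern lie in the same linear region.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 2-layer ReLU network with random weights (hypothetical sizes).
    W1, b1 = rng.standard_normal((16, 2)), rng.standard_normal(16)
    W2, b2 = rng.standard_normal((16, 16)), rng.standard_normal(16)

    def activation_pattern(x):
        """Return the on/off pattern of all ReLUs for input x as a tuple of bits."""
        h1 = W1 @ x + b1
        h2 = W2 @ np.maximum(h1, 0.0) + b2
        return tuple((h1 > 0).astype(int)) + tuple((h2 > 0).astype(int))

    # Sample densely along a segment between two points and count how many
    # distinct linear regions the segment passes through (a density proxy).
    a, b = rng.standard_normal(2), rng.standard_normal(2)
    ts = np.linspace(0.0, 1.0, 10_000)
    patterns = {activation_pattern((1 - t) * a + t * b) for t in ts}
    print(f"distinct linear regions crossed (sampled): {len(patterns)}")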

Place, publisher, year, edition, pages
ML Research Press, 2022. Vol. 151
Series
Proceedings of Machine Learning Research, ISSN 2640-3498
National Category
Computer Sciences; Computer Systems
Identifiers
URN: urn:nbn:se:kth:diva-320995
ISI: 000841852301002
Scopus ID: 2-s2.0-85163053252
OAI: oai:DiVA.org:kth-320995
DiVA, id: diva2:1708517
Conference
25th International Conference on Artificial Intelligence and Statistics, AISTATS 2022, Virtual, Online, MAR 28-30, 2022
Note

QC 20221104

Available from: 2022-11-04. Created: 2022-11-04. Last updated: 2024-10-17. Bibliographically approved.
In thesis
1. On Implicit Smoothness Regularization in Deep Learning
2024 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

State of the art neural networks provide a rich class of function approximators, fueling the remarkable success of gradient-based deep learning on complex high-dimensional problems, ranging from natural language modeling to image and video generation and understanding. Modern deep networks enjoy sufficient expressive power to shatter common classification benchmarks, as well as interpolate noisy regression targets. At the same time, the same models are able to generalize well whilst perfectly fitting noisy training data, even in the absence of external regularization constraining model expressivity. Efforts towards making sense of the observed benign overfitting behaviour uncovered its occurrence in overparameterized linear regression as well as kernel regression, extending classical empirical risk minimization to the study of minimum norm interpolators. Existing theoretical understanding of the phenomenon identifies two key factors affecting the generalization ability of interpolating models. First, overparameterization – corresponding to the regime in which a model counts more parameters than the number of constraints imposed by the training sample – effectively reduces model variance in proximity of the training data. Second, the structure of the learner – which determines how patterns in the training data are encoded in the learned representation – controls the ability to separate signal from noise when attaining interpolation. Analyzing the above factors for deep finite-width networks respectively entails characterizing the mechanisms driving feature learning and norm-based capacity control in practical settings, thus posing a challenging open problem. The present thesis explores the problem of capturing effective complexity of finite-width deep networks trained in practice, through the lens of model function geometry, focusing on factors implicitly restricting model complexity. First, model expressivity is contrasted to effective nonlinearity for models undergoing double descent, highlighting constrained effective complexity afforded by overparameterization. Second, the geometry of interpolation is studied in the presence of noisy targets, observing robust interpolation over volumes of size controlled by model scale. Third, the observed behavior is formally tied to parameter-space curvature, connecting parameter-space geometry to that of the input space. Finally, the thesis concludes by investigating whether the findings translate to the context of self-supervised learning, relating the geometry of representations to downstream robustness, and highlighting trends in keeping with neural scaling laws. The present work isolates input-space smoothness as a key notion for characterizing effective complexity of model functions expressed by overparameterized deep networks.
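
As a purely illustrative aid (not one of the measures defined in the thesis or its included papers), the Python sketch below estimates the local variation of a toy ReLU network by averaging finite-difference slopes over random input directions; smaller values correspond to a smoother, less nonlinear model function near the sampled points. The toy network, all sizes, and the finite-difference proxy are assumptions made for illustration only.

    # Minimal sketch: a crude input-space smoothness proxy via the average
    # finite-difference slope of the model output around sample points.
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy scalar-output ReLU network with random weights (hypothetical sizes).
    W1, b1 = rng.standard_normal((32, 10)), rng.standard_normal(32)
    w2, b2 = rng.standard_normal(32), rng.standard_normal()

    def f(x):
        return w2 @ np.maximum(W1 @ x + b1, 0.0) + b2

    def local_variation(x, eps=1e-3, n_dirs=64):
        """Average |f(x + eps*u) - f(x)| / eps over random unit directions u."""
        dirs = rng.standard_normal((n_dirs, x.size))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        return np.mean([abs(f(x + eps * u) - f(x)) / eps for u in dirs])

    xs = rng.standard_normal((100, 10))  # stand-in for (test) data points
    print("mean local variation:", np.mean([local_variation(x) for x in xs]))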

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2024. p. 94
Series
TRITA-EECS-AVL ; 2024:80
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-354917 (URN)
978-91-8106-077-5 (ISBN)
Public defence
2024-11-07, https://kth-se.zoom.us/j/62717697317, Kollegiesalen, Brinellvägen 6, Stockholm, 15:00 (English)
Note

QC 20241017

Available from: 2024-10-17. Created: 2024-10-17. Last updated: 2024-11-15. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Scopus

Authority records

Gamba, Matteo; Chmielewski-Anders, Adrian; Sullivan, Josephine; Azizpour, Hossein; Björkman, Mårten
