Reducing Variance in Meta-Learning via Laplace Approximation for Regression Tasks
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0001-8938-9363
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Collaborative Autonomous Systems. ORCID iD: 0000-0002-5761-4105
University of Copenhagen.
2024 (English). In: Transactions on Machine Learning Research, E-ISSN 2835-8856, Vol. 2024. Article in journal (Refereed). Published.
Abstract [en]

Given a finite set of sample points, meta-learning algorithms aim to learn an optimal adaptation strategy for new, unseen tasks. Often, this data can be ambiguous, as it might belong to different tasks concurrently. This is particularly the case in meta-regression tasks. In such cases, the estimated adaptation strategy is subject to high variance due to the limited amount of support data for each task, which often leads to sub-optimal generalization performance. In this work, we address the problem of variance reduction in gradient-based meta-learning and formalize the class of problems prone to it, a condition we refer to as task overlap. Specifically, we propose a novel approach that reduces the variance of the gradient estimate by weighting each support point individually by the variance of its posterior over the parameters. To estimate the posterior, we utilize the Laplace approximation, which allows us to express the variance in terms of the curvature of the loss landscape of our meta-learner. Experimental results demonstrate the effectiveness of the proposed method and highlight the importance of variance reduction in meta-learning.
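The core idea the abstract describes — approximating the parameter posterior with a Gaussian whose covariance is the inverse Hessian of the loss at the MAP estimate, then down-weighting points with high posterior variance — can be sketched on a toy linear-Gaussian model. This is an illustrative sketch only, not the authors' implementation; the model, priors, and weighting rule here are assumptions chosen so the Laplace approximation is exact.

```python
import numpy as np

# Toy Laplace approximation for a linear model y = w * x + noise.
# The posterior over w is approximated as N(w_map, H^{-1}), where H is
# the Hessian of the negative log posterior at the MAP estimate.

rng = np.random.default_rng(0)
x = rng.normal(size=20)
y = 2.0 * x + 0.1 * rng.normal(size=20)

prior_prec = 1.0      # precision of a zero-mean Gaussian prior on w
noise_prec = 100.0    # observation precision (1 / sigma^2)

# MAP estimate (closed form for this linear-Gaussian toy).
w_map = noise_prec * np.sum(x * y) / (noise_prec * np.sum(x**2) + prior_prec)

# Hessian of the negative log posterior w.r.t. w (a scalar here);
# its inverse is the Laplace estimate of the posterior variance.
hessian = noise_prec * np.sum(x**2) + prior_prec
posterior_var = 1.0 / hessian

# Per-point predictive variance: the parameter uncertainty scales with
# the input's magnitude, so points the posterior is less certain about
# receive larger variance and hence smaller weight.
point_var = posterior_var * x**2 + 1.0 / noise_prec
weights = 1.0 / point_var
weights /= weights.sum()

print(w_map)  # close to the true slope 2.0
```

In the linear-Gaussian case the Laplace approximation coincides with the exact posterior; for a nonlinear meta-learner the same curvature-based recipe gives only an approximation, which is the setting the paper targets.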

Place, publisher, year, edition, pages
Transactions on Machine Learning Research, 2024. Vol. 2024
National Category
Robotics and automation; Control Engineering
Identifiers
URN: urn:nbn:se:kth:diva-361197
Scopus ID: 2-s2.0-85219566964
OAI: oai:DiVA.org:kth-361197
DiVA, id: diva2:1944152
Note

QC 20250312

Available from: 2025-03-12. Created: 2025-03-12. Last updated: 2026-02-16. Bibliographically approved.
In thesis
1. Interactive Representation Learning: Symmetries, Metric Spaces and Uncertainty
2026 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

This thesis investigates how interaction can be used as self-supervision to learn structured state representations that simplify downstream tasks. We formalize two inductive biases naturally present in the trajectories generated by agents that interact with their environment: geometry and temporal consistency of the underlying state space. We show that injecting these biases into representation learning yields additional, task-relevant properties. First, we focus on geometric bias: we learn translationally equivariant latent spaces from images in which agent actions correspond to vector additions. We show how these representations can be used to estimate a recovery policy that mitigates the compounding of errors in data-driven sequential decision-making policies. We further extend equivariant representations to scenes with external objects. Under an interaction-by-contact model, we prove that aligning the object's and the agent's latent embeddings yields an isometric, disentangled representation of both. Second, we relax the geometry assumption and explore the milder temporal consistency bias. This allows us to construct representations in which the temporal order between states is preserved, a property we refer to as distance monotonicity. In the reinforcement learning setting, we show that, under suitable conditions, this property is enough to recover an approximation of the value function and provably estimate an optimal policy. In a multiple-sensor framework, these representations can be used to construct a Bayesian filtering state estimate that is robust under unknown noise. Third, we extend the concept of interactions from physical systems to the parametric space of a learner. We show how distance-monotonic representations of the parameters of a model can be used to approximate the posterior distribution of a Bayesian neural network. Finally, in a meta-learning setting, we explore implicit representations of the learner to reduce the variance of a fast-adaptation model. Collectively, these results demonstrate that interaction-driven biases produce structured representations that simplify or enhance the learning process.
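The distance-monotonicity property described above — latent distances preserving the temporal order of states along a trajectory — can be illustrated with a toy check. The encoder, goal, and straight-line trajectory below are hypothetical stand-ins, not the thesis code; the point is only what the property asserts.

```python
import numpy as np

# Toy check of "distance monotonicity": along a trajectory that moves
# toward a goal, the latent distance to the goal's embedding should
# decrease with time, preserving the temporal order between states.

def enc(state):
    # Hypothetical encoder: any componentwise-monotone embedding works
    # for this straight-line example; tanh is used purely for illustration.
    return np.tanh(state)

goal = np.array([0.0, 0.0])
# A straight-line trajectory from (4, 4) to the goal in 10 steps.
traj = [np.array([4.0, 4.0]) * (1 - t / 10) for t in range(11)]

z_goal = enc(goal)
dists = [np.linalg.norm(enc(s) - z_goal) for s in traj]

# Temporal order is preserved: each later state is strictly closer
# to the goal in latent space than its predecessor.
print(all(d1 > d2 for d1, d2 in zip(dists, dists[1:])))  # True
```

A representation with this property orders states by progress toward the goal, which is what lets a value-function approximation be read off from latent distances under the conditions the thesis states.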


Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2026. p. xv, 57
Series
TRITA-EECS-AVL ; 2026:18
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-376773
ISBN: 978-91-8106-539-8
Public defence
2026-03-16, https://kth-se.zoom.us/w/63788305553, F3, Lindstedtsvägen 26, Stockholm, 09:00 (English)
Note

QC 20260216

Available from: 2026-02-16. Created: 2026-02-16. Last updated: 2026-02-23. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Scopus
GitHub code

Authority records

Reichlin, Alfredo; Tegner, Gustaf; Vasco, Miguel; Björkman, Mårten; Kragic, Danica

