Reducing Variance in Meta-Learning via Laplace Approximation for Regression Tasks
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0001-8938-9363
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Collaborative Autonomous Systems. ORCID iD: 0000-0002-5761-4105
University of Copenhagen.
2024 (English). In: Transactions on Machine Learning Research, E-ISSN 2835-8856, Vol. 2024. Article in journal (Refereed). Published.
Abstract [en]

Given a finite set of sample points, meta-learning algorithms aim to learn an optimal adaptation strategy for new, unseen tasks. Often, these data are ambiguous, as they might belong to several tasks concurrently; this is particularly the case in meta-regression. In such cases, the estimated adaptation strategy is subject to high variance due to the limited amount of support data for each task, which often leads to sub-optimal generalization performance. In this work, we address the problem of variance reduction in gradient-based meta-learning and formalize the class of problems prone to it, a condition we refer to as task overlap. Specifically, we propose a novel approach that reduces the variance of the gradient estimate by weighting each support point individually by the variance of its posterior over the parameters. To estimate the posterior, we utilize the Laplace approximation, which allows us to express the variance in terms of the curvature of the loss landscape of our meta-learner. Experimental results demonstrate the effectiveness of the proposed method and highlight the importance of variance reduction in meta-learning.
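
The abstract describes the mechanism concretely enough to sketch: estimate the parameter posterior with a Laplace approximation, read a per-support-point variance off the curvature of the loss, and weight each point's contribution to the adaptation gradient accordingly. Below is a minimal NumPy sketch of that idea for a toy linear meta-regressor; it is not the authors' implementation (see the linked GitHub code for that). The diagonal Gauss-Newton Hessian, the function names (laplace_point_variances, weighted_adaptation_step), and the inverse-variance weighting rule are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def model(w, X):
        # Toy linear meta-learner: f(x) = x . w
        return X @ w

    def per_point_grads(w, X, y):
        # Gradient of the squared loss at each support point, one row per point.
        residuals = model(w, X) - y
        return residuals[:, None] * X

    def laplace_point_variances(w, X, y, prior_prec=1.0):
        # Diagonal Gauss-Newton Laplace approximation of the parameter posterior;
        # each point's variance is g_i^T Sigma g_i with Sigma = H^-1 (assumption).
        G = per_point_grads(w, X, y)
        h_diag = (G ** 2).sum(axis=0) + prior_prec  # curvature + Gaussian prior
        sigma_diag = 1.0 / h_diag
        return np.einsum("ij,j,ij->i", G, sigma_diag, G)

    def weighted_adaptation_step(w, X, y, lr=0.1):
        # One inner-loop step; each support point's gradient is down-weighted
        # in proportion to its Laplace posterior variance (hypothetical rule).
        var = laplace_point_variances(w, X, y)
        weights = 1.0 / (1.0 + var)
        weights /= weights.sum()
        G = per_point_grads(w, X, y)
        return w - lr * (weights[:, None] * G).sum(axis=0)

    # Toy support set drawn from a single noisy linear task.
    X = rng.normal(size=(8, 3))
    w_true = np.array([1.0, -2.0, 0.5])
    y = X @ w_true + 0.1 * rng.normal(size=8)

    w = np.zeros(3)
    for _ in range(50):
        w = weighted_adaptation_step(w, X, y)
    print(w)  # approaches w_true

Down-weighting high-variance support points lowers the variance of the aggregated gradient estimate, which is exactly the failure mode the abstract attributes to task overlap. Whether the paper weights by the variance itself or its inverse is not decidable from the abstract alone, so the rule above is a guess.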

Place, publisher, year, edition, pages
Transactions on Machine Learning Research, 2024. Vol. 2024
National Category
Robotics and automation; Control Engineering
Identifiers
URN: urn:nbn:se:kth:diva-361197
Scopus ID: 2-s2.0-85219566964
OAI: oai:DiVA.org:kth-361197
DiVA id: diva2:1944152
Note

QC 20250312

Available from: 2025-03-12. Created: 2025-03-12. Last updated: 2025-03-12. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Scopus
GitHub code

Authority records

Reichlin, Alfredo; Tegner, Gustaf; Vasco, Miguel; Björkman, Mårten; Kragic, Danica
