Quantifying Epistemic Uncertainty in Absolute Pose Regression
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL; Univrses AB, Stockholm, Sweden. ORCID iD: 0000-0001-7819-3541
Univrses AB, Stockholm, Sweden.
Univrses AB, Stockholm, Sweden.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0002-1170-7162
2025 (English). In: Image Analysis - 23rd Scandinavian Conference, SCIA 2025, Proceedings. Springer Nature, 2025, p. 180-195. Conference paper, Published paper (Refereed)
Abstract [en]

Visual relocalization is the task of estimating the camera pose given an image it views. Absolute pose regression offers a solution to this task by training a neural network, directly regressing the camera pose from image features. While an attractive solution in terms of memory and compute efficiency, absolute pose regression’s predictions are inaccurate and unreliable outside the training domain. In this work, we propose a novel method for quantifying the epistemic uncertainty of an absolute pose regression model by estimating the likelihood of observations within a variational framework. Beyond providing a measure of confidence in predictions, our approach offers a unified model that also handles observation ambiguities, probabilistically localizing the camera in the presence of repetitive structures. Our method outperforms existing approaches in capturing the relation between uncertainty and prediction error.
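The likelihood-based confidence measure described in the abstract can be illustrated with a toy example. The sketch below is an illustration only, not the paper's model: it uses a linear-Gaussian model (all names and distributions are invented for the illustration) to estimate the marginal log-likelihood of an observation by importance-weighting samples from a variational encoder, the same principle by which a VAE-based method can flag out-of-domain inputs through low likelihood:

```python
import numpy as np

def log_normal(x, mu, var):
    # Log density of N(mu, var) evaluated at x.
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def iw_log_likelihood(x, n_samples=64, seed=0):
    """Importance-weighted estimate of log p(x) for a toy linear-Gaussian
    model: prior z ~ N(0, 1), decoder x | z ~ N(z, 1), and encoder
    q(z | x) = N(x/2, 1/2). Here the encoder is the exact posterior, so
    the estimate matches the true marginal log N(x; 0, 2)."""
    rng = np.random.default_rng(seed)
    z = rng.normal(x / 2, np.sqrt(0.5), size=n_samples)  # z_k ~ q(z | x)
    log_w = (
        log_normal(x, z, 1.0)        #   log p(x | z_k)
        + log_normal(z, 0.0, 1.0)    # + log p(z_k)
        - log_normal(z, x / 2, 0.5)  # - log q(z_k | x)
    )
    m = log_w.max()                  # numerically stable log-mean-exp
    return m + np.log(np.mean(np.exp(log_w - m)))
```

In a learned model the encoder only approximates the posterior, so this estimator becomes a lower bound that tightens with more samples; a low value for a test image then signals that the observation lies outside the training domain.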

Place, publisher, year, edition, pages
Springer Nature, 2025. p. 180-195
Keywords [en]
Camera Relocalization, Uncertainty Estimation, VAEs
National Category
Computer graphics and computer vision; Signal Processing
Identifiers
URN: urn:nbn:se:kth:diva-368911
DOI: 10.1007/978-3-031-95918-9_13
ISI: 001553877800013
Scopus ID: 2-s2.0-105009846579
OAI: oai:DiVA.org:kth-368911
DiVA, id: diva2:1991320
Conference
23rd Scandinavian Conference on Image Analysis, SCIA 2025, Reykjavik, Iceland, June 23-25, 2025
Note

Part of ISBN 9783031959172

QC 20250822

Available from: 2025-08-22. Created: 2025-08-22. Last updated: 2025-12-08. Bibliographically approved
In thesis
1. Camera Relocalization through Distribution Modeling
2025 (English)Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Relocalization is a key component of robot navigation: in order to move successfully within an environment, a robot must know its location in relation to that environment. Cameras are inexpensive sensors that enable relocalization by comparing visual observations with a model of the scene. To this end, camera relocalization, which also finds applications in augmented reality, has long been a topic of research, leading to elaborately designed pipelines for accurate camera pose estimation. Recently, a paradigm shift has seen explicit models of the scene replaced by implicit ones, where the scene is encoded in the weights of neural networks. This shift simplifies relocalization pipelines but leaves open a fundamental challenge: scenes with repetitive structures often produce ambiguous observations, meaning that the same visual input can correspond to multiple distinct camera poses. This thesis addresses this challenge, with a particular focus on implicit relocalization methods. It critically examines the assumption, underlying existing paradigms such as Absolute Pose Regression (APR) and Scene Coordinate Regression (SCR), that appearances are unique. As its central contribution, the thesis proposes to model the full distribution of possible solutions, which can be arbitrarily shaped, rather than attempting to recover a single best estimate. To this end, it proposes to leverage Conditional Variational Autoencoders (C-VAEs) as generative models capable of representing both distributions over poses and distributions over points. Furthermore, likelihood estimation within this framework provides a principled means of attaching confidence measures to predictions. These contributions, together with the suggested applications and directions for future work, lay a foundation for simplifying relocalization pipelines by more effectively handling ambiguities in observations.

Abstract [sv]

Relocalization is a key component of robot navigation: in order to move successfully within an environment, a robot must know its position in relation to that environment. Cameras are cost-effective sensors that enable relocalization by comparing visual observations with a model of the scene. Camera relocalization, which also finds applications in augmented reality, has therefore long been a research topic, leading to carefully designed pipelines for accurate camera pose estimation. Recently, a paradigm shift has seen explicit models of the scene replaced by implicit ones, where the scene is encoded in the weights of neural networks. This shift simplifies relocalization pipelines but leaves a fundamental challenge open: scenes with repetitive structures often produce ambiguous observations, meaning that the same visual input can correspond to several distinct camera poses. This thesis addresses this challenge, with a particular focus on implicit relocalization methods. It critically examines the assumptions behind existing paradigms such as Absolute Pose Regression (APR) and Scene Coordinate Regression (SCR), which typically presuppose a unique solution. As its central contribution, the thesis proposes to model the full distribution of possible solutions, which can be arbitrarily shaped, rather than attempting to find a single best estimate. To this end, it proposes to leverage Conditional Variational Autoencoders (C-VAEs) as generative models capable of representing both distributions over poses and distributions over points. Furthermore, likelihood estimation within this framework provides a principled means of attaching confidence measures to predictions. These contributions, together with the suggested applications and directions for future work, lay a foundation for simplifying relocalization pipelines by more effectively handling ambiguity in observations.
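The central idea of the thesis, representing a full and possibly multimodal distribution over poses rather than a single estimate, can be sketched in miniature. The following toy example is a hypothetical stand-in, not the thesis architecture: `decode_pose` and `obs_code` are invented for illustration, and the decoder is hand-crafted so that latent samples drawn from the prior decode to a bimodal pose distribution, mimicking how a C-VAE can localize a camera under an ambiguous, repetitive-structure observation:

```python
import numpy as np

def decode_pose(z, obs_code):
    """Hypothetical stand-in for a C-VAE decoder: maps a latent sample z
    and a fixed observation embedding obs_code to a 1-D camera pose.
    The two latent half-spaces decode to two distinct poses, so sampling
    the prior recovers a bimodal pose distribution for an ambiguous view."""
    mode = np.where(z >= 0, 2.0, -2.0)  # two plausible poses for one image
    return obs_code + mode + 0.1 * z    # small spread within each mode

rng = np.random.default_rng(0)
z = rng.normal(size=1000)               # z ~ N(0, 1), the C-VAE prior
poses = decode_pose(z, obs_code=0.0)    # samples from p(pose | observation)
```

A point-estimate regressor forced to output a single pose for this observation would average the two modes and land between them; sampling the conditional generative model instead preserves both hypotheses.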

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2025. p. xii, 41
Series
TRITA-EECS-AVL ; 2025:106
National Category
Computer Vision and Learning Systems
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-372920
ISBN: 978-91-8106-468-1
Public defence
2025-12-11, https://kth-se.zoom.us/j/68470117111, D3, Lindstedtsvägen 5, KTH Campus, Stockholm, 14:00 (English)
Opponent
Supervisors
Note

QC 20251117

Available from: 2025-11-17. Created: 2025-11-16. Last updated: 2025-11-17. Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text; Scopus

Authority records

Zangeneh, Fereidoon; Jensfelt, Patric
