LDD: Learned Detector and Descriptor of Points for Visual Odometry
KTH, School of Electrical Engineering and Computer Science (EECS).
2018 (English). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
Abstract [en]

Simultaneous localization and mapping is an important problem in robotics that can be solved using visual odometry -- the process of estimating ego-motion from consecutive camera images. Visual odometry systems, in turn, rely on point matching between different frames. This work presents a novel method for matching key-points by applying neural networks to point detection and description. Traditionally, point detectors are used to select good key-points (such as corners), which are then matched using features extracted with descriptors. In this work, however, a descriptor is first trained to match points densely, and a detector is then trained to predict which points are most likely to be matched correctly by the descriptor. This information is then used to select good key-points. The results of this project show that this approach can lead to more accurate results than model-based methods.

Abstract [sv]

Simultaneous localization and mapping is an important problem in robotics that can be solved using visual odometry -- the process of estimating ego-motion from consecutive camera images. Visual odometry systems, in turn, rely on point matching between different frames. This work presents a new method for matching key-points by applying neural networks to point detection and description. Traditionally, point detectors are used to select good key-points (such as corners), and these key-points are then matched using descriptor features. In this work, a descriptor is instead trained to match the points; a detector is then trained to predict which points are most likely to be matched correctly by the descriptor. This information is then used to select good key-points. The results of the project show that this can lead to more accurate results compared with other model-based methods.
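To make the pipeline described in the abstract concrete, below is a minimal, hypothetical sketch in PyTorch of a learned detector-and-descriptor pair: a convolutional network produces a dense, per-pixel descriptor map; a small detector head predicts a per-pixel matchability score from those descriptors; and key-points are selected as the highest-scoring pixels. This is not the thesis code -- the class and function names (DenseDescriptor, Detector, select_keypoints), the network sizes, and the omission of any training loss are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseDescriptor(nn.Module):
    # Maps a grayscale image to an L2-normalized per-pixel descriptor map.
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, 3, padding=1),
        )

    def forward(self, img):
        desc = self.net(img)                # (B, dim, H, W)
        return F.normalize(desc, dim=1)     # unit-length descriptors


class Detector(nn.Module):
    # Scores, per pixel, how likely that pixel's descriptor is to be matched correctly.
    def __init__(self, dim=64):
        super().__init__()
        self.head = nn.Conv2d(dim, 1, 1)

    def forward(self, desc):
        return torch.sigmoid(self.head(desc))   # (B, 1, H, W) matchability map


def select_keypoints(score_map, k=500):
    # Pick the k pixels with the highest predicted matchability.
    b, _, h, w = score_map.shape
    flat = score_map.view(b, -1)
    scores, idx = flat.topk(k, dim=1)
    ys = torch.div(idx, w, rounding_mode="floor")
    xs = idx % w
    return torch.stack([xs, ys], dim=-1), scores  # (B, k, 2) pixel coordinates


if __name__ == "__main__":
    img = torch.rand(1, 1, 120, 160)             # dummy grayscale frame
    descriptor, detector = DenseDescriptor(), Detector()
    desc = descriptor(img)
    score = detector(desc)
    kpts, conf = select_keypoints(score, k=100)
    print(kpts.shape, conf.shape)                # (1, 100, 2) and (1, 100)

In this sketch the detector consumes the descriptor map rather than the raw image, mirroring the abstract's idea that the detector predicts which points the descriptor is likely to match; the actual architectures, training losses, and evaluation are described in the thesis itself.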

Place, publisher, year, edition, pages
2018.
Series
TRITA-EECS-EX ; 2018:529
Keywords [en]
visual odometry, deep learning, key-points, detector, descriptor
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-233571
OAI: oai:DiVA.org:kth-233571
DiVA, id: diva2:1241818
External cooperation
Univrses AB
Available from: 2018-09-02. Created: 2018-08-24. Last updated: 2018-09-02. Bibliographically approved.

Open Access in DiVA

fulltext (5996 kB), 80 downloads
File information
File name: FULLTEXT01.pdf
File size: 5996 kB
Checksum (SHA-512): bf0034b2ba0c6bea123bf31e4b49e8af8d782ae3e9f2f8360462f1b19e164ff71aae5568c7bafe4414c45469904c1221afb8b31944135cc188706d97e0c818a8
Type: fulltext
Mimetype: application/pdf
