A discriminative approach to robust visual place recognition
All authors: KTH, School of Computer Science and Communication (CSC), Centre for Autonomous Systems (CAS); Computer Vision and Active Perception (CVAP). ORCID iDs: 0000-0002-1396-0102; 0000-0002-1170-7162.
2006 (English). In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12. New York: IEEE, 2006, pp. 3829-3836. Conference paper, published paper (refereed).
Abstract [en]

An important competence for a mobile robot system is the ability to localize and perform context interpretation, which is required for basic navigation and for location-specific services. Localization is usually performed with a purely geometric model; using vision and place recognition instead opens up opportunities for greater flexibility and for associating semantics with the model. To this end, this paper presents an appearance-based method for place recognition that combines a large-margin classifier with a rich global image descriptor. The method is robust to variations in illumination and to minor scene changes, and it is evaluated across several cameras, times of day, and weather conditions. The results clearly demonstrate the value of the approach.
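As a rough illustration of the approach summarized above, the sketch below trains a large-margin (linear SVM-style) classifier on a global image descriptor to separate two synthetic "places". Note the descriptor (a plain grey-level histogram) and the hinge-loss SGD trainer are hypothetical stand-ins chosen for brevity; the paper's actual descriptor is considerably richer, and its classifier is not reproduced here.

```python
# Illustrative sketch only: a global image descriptor fed to a
# large-margin classifier, in the spirit of the paper's approach.
# Both components below are stand-ins, not the authors' implementation.
import numpy as np

def global_descriptor(image, bins=16):
    """Normalized grey-level histogram as a simple global descriptor."""
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """SGD on the L2-regularized hinge loss; labels must be in {-1, +1}."""
    rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:  # sample inside the margin: push it out
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:           # correctly classified: only shrink weights
                w -= lr * lam * w
    return w, b

# Two synthetic "places": dark rooms (label -1) vs bright rooms (+1).
rng = np.random.default_rng(1)
dark = [np.clip(rng.normal(0.3, 0.1, (32, 32)), 0, 1) for _ in range(20)]
bright = [np.clip(rng.normal(0.7, 0.1, (32, 32)), 0, 1) for _ in range(20)]
X = np.array([global_descriptor(im) for im in dark + bright])
y = np.array([-1] * 20 + [+1] * 20)

w, b = train_linear_svm(X, y)
preds = np.sign(X @ w + b)
print("training accuracy:", (preds == y).mean())
```

The two classes have well-separated histograms, so a linear large-margin classifier separates them easily; the robustness claims in the paper concern far harder variation (illumination, weather, cameras) handled by the richer descriptor.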

Place, publisher, year, edition, pages
New York: IEEE, 2006. pp. 3829-3836.
Series
2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12
National Category
Robotics
Identifiers
URN: urn:nbn:se:kth:diva-42060
DOI: 10.1109/IROS.2006.281789
ISI: 000245452403159
Scopus ID: 2-s2.0-34250663399
OAI: oai:DiVA.org:kth-42060
DiVA: diva2:446161
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, People's Republic of China, October 9-13, 2006.
Note
QC 20111006. Available from: 2011-10-06. Created: 2011-10-05. Last updated: 2012-01-10. Bibliographically approved.

Open Access in DiVA

No full text

Authors: Pronobis, Andrzej; Caputo, Barbara; Jensfelt, Patric; Christensen, Henrik I.