Hybrid Laser and Vision Based Object Search and Localization
Gálvez López, Dorian; Sjöö, Kristoffer; Paul, Chandana; Jensfelt, Patric (ORCID iD: 0000-0002-1170-7162)
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP; Centre for Autonomous Systems, CAS
2008 (English). In: 2008 IEEE International Conference on Robotics and Automation: Vols 1-9, 2008, pp. 2636-2643. Conference paper, Published paper (Refereed)
Abstract [en]

We describe a method for an autonomous robot to efficiently locate one or more distinct objects in a realistic environment using monocular vision. We demonstrate how to efficiently subdivide acquired images into interest regions for the robot to zoom in on, using receptive field cooccurrence histograms. Objects are recognized through SIFT feature matching and the positions of the objects are estimated. Assuming a 2D map of the robot's surroundings and a set of navigation nodes between which it is free to move, we show how to compute an efficient sensing plan that allows the robot's camera to cover the environment, while obeying restrictions on the different objects' maximum and minimum viewing distances. The approach has been implemented on a real robotic system and results are presented showing its practicability and the quality of the position estimates obtained.
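The sensing-plan step described above is essentially a coverage problem: choose camera views at navigation nodes so that every reachable part of the map is seen from within each object's allowed viewing-distance band. The sketch below illustrates one common way to approach such a problem, greedy set cover over candidate (node, heading) views, with the distance limits folded into the visibility test. It is a minimal, hypothetical illustration, not the authors' implementation; all names, the field-of-view value, and the distance limits are assumptions.

```python
import math

# Assumed camera parameters (illustrative values, not from the paper)
FOV = math.radians(45.0)      # horizontal field of view
D_MIN, D_MAX = 0.5, 3.0       # min/max viewing distance for an object class

def visible(node, heading, cell):
    """True if `cell` lies inside the view cone of a camera at `node`
    pointing along `heading`, within the allowed distance band."""
    dx, dy = cell[0] - node[0], cell[1] - node[1]
    dist = math.hypot(dx, dy)
    if not (D_MIN <= dist <= D_MAX):
        return False
    bearing = math.atan2(dy, dx)
    # Wrap the angular difference into [-pi, pi]
    diff = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= FOV / 2

def plan_views(nodes, cells, headings=8):
    """Greedy set cover: repeatedly pick the (node, heading) view that
    covers the most still-uncovered cells; stop when nothing new is
    gained (remaining cells are unreachable from any node)."""
    views = [(n, 2 * math.pi * k / headings)
             for n in nodes for k in range(headings)]
    uncovered, plan = set(cells), []
    while uncovered:
        best = max(views,
                   key=lambda v: sum(visible(v[0], v[1], c) for c in uncovered))
        gained = {c for c in uncovered if visible(best[0], best[1], c)}
        if not gained:
            break
        uncovered -= gained
        plan.append(best)
    return plan
```

For example, plan_views([(0, 0), (4, 0)], [(1, 1), (3, 1), (5, 1)]) returns a short list of (node, heading) views covering the given cells. Greedy set cover does not guarantee a minimum-length plan, but it gives the standard logarithmic approximation and is a typical baseline for view-planning formulations like the one in the abstract.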

Place, publisher, year, edition, pages
2008, pp. 2636-2643.
Series
IEEE International Conference On Robotics And Automation, ISSN 1050-4729
Keywords [en]
visual search, object search, view planning
National Category
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-38268
DOI: 10.1109/ROBOT.2008.4543610
ISI: 000258095001220
Scopus ID: 2-s2.0-51649115616
ISBN: 978-1-4244-1646-2 (print)
OAI: oai:DiVA.org:kth-38268
DiVA: diva2:436465
Conference
2008 IEEE International Conference on Robotics and Automation, ICRA 2008; Pasadena, CA; 19 May 2008 through 23 May 2008
Funder
EU, European Research Council, CoSy
Swedish Research Council, 621-2006-4520
Note

QC 20110901

Available from: 2011-08-23. Created: 2011-08-23. Last updated: 2016-05-23. Bibliographically approved.

Open Access in DiVA

No full text
