Robust Visual Servoing
Kragic, Danica (KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP; Centre for Autonomous Systems, CAS). ORCID iD: 0000-0003-2965-2953
Christensen, Henrik (KTH, School of Computer Science and Communication (CSC), Centre for Autonomous Systems, CAS)
2014 (English). In: Household Service Robotics, Elsevier, 2014, p. 397-427. Chapter in book (Other academic).
Resource type
Text
Abstract [en]

For service robots operating in domestic environments, it is not enough to consider only control-level robustness; it is equally important to consider how the image information that serves as input to the control process can be used to achieve robust and efficient control. In this chapter we present an effort toward the development of robust visual techniques used to guide robots in various tasks. Given a task at hand, we argue that different levels of complexity should be considered; this also defines the choice of the visual technique used to provide the necessary feedback information. We concentrate on visual feedback estimation, where we investigate both two- and three-dimensional techniques. In the former case, we are interested in providing coarse information about the object position/velocity in the image plane. In particular, a set of simple visual features (cues) is employed in an integrated framework where voting is used to fuse the responses from individual cues. The experimental evaluation shows the system performance for three camera-robot configurations most common in robotic systems. For cases where the robot is supposed to grasp the object, a two-dimensional position estimate is often not enough, and the complete pose (position and orientation) of the object may be required. Therefore, we present a model-based system in which a wire-frame model of the object is used to estimate its pose. Since a number of similar systems have been proposed in the literature, we concentrate on the part of the system that is usually neglected: automatic pose initialization. Finally, we show how a number of existing approaches can be successfully integrated in a system that is able to recognize and grasp fairly textured everyday objects. One of the examples presented in the experimental section shows a mobile robot performing tasks in a real-world environment: a living room.
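The voting-based cue integration described in the abstract lends itself to a compact illustration. The sketch below is a minimal, hypothetical rendering of the idea, not the chapter's implementation: the cue response maps, weights, and grid resolution are assumptions chosen for the example, and each visual cue is reduced to a normalized response map over the image so that all cues can cast votes on a common grid.

```python
import numpy as np

# Minimal sketch of voting-based cue integration for coarse 2D
# position estimation. Everything here (cue maps, weights, grid
# size) is an illustrative assumption, not the chapter's actual
# implementation.

def fuse_cues_by_voting(cue_response_maps, weights):
    """Fuse per-cue response maps into a single vote map and return
    the image position that collects the most votes.

    cue_response_maps -- list of HxW float arrays, one per visual
                         cue (e.g. color, motion, correlation),
                         each normalized to [0, 1]
    weights           -- per-cue reliability weights
    """
    vote_map = np.zeros_like(cue_response_maps[0])
    for response, weight in zip(cue_response_maps, weights):
        vote_map += weight * response  # each cue casts weighted votes
    # The fused position estimate is the cell with the most votes.
    row, col = np.unravel_index(np.argmax(vote_map), vote_map.shape)
    return (row, col), vote_map

# Hypothetical usage with three synthetic cue responses:
height, width = 120, 160
rng = np.random.default_rng(seed=0)
cues = [rng.random((height, width)) for _ in range(3)]
(row, col), votes = fuse_cues_by_voting(cues, weights=[1.0, 0.5, 0.8])
print(f"fused position estimate: ({row}, {col})")
```

Weighted summation is only one possible voting rule; in a real tracker the response maps would come from actual cues such as color, motion, or correlation, and the fused peak would provide the coarse position/velocity feedback for the servo loop.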

Place, publisher, year, edition, pages
Elsevier, 2014. p. 397-427
Keywords [en]
Cue integration, Grasping, Integration, Robustness, Visual servoing
National Category
Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:kth:diva-181262
DOI: 10.1016/B978-0-12-800881-2.00018-9
Scopus ID: 2-s2.0-84944392576
OAI: oai:DiVA.org:kth-181262
DiVA, id: diva2:902099
Note

Part of ISBN 978-0-12-800943-7, 978-0-12-800881-2

QC 20241212

Available from: 2016-02-10. Created: 2016-01-29. Last updated: 2025-02-07. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Kragic, Danica

