Robust visual servoing
KTH, Superseded Departments (pre-2005), Numerical Analysis and Computer Science, NADA. ORCID iD: 0000-0003-2965-2953
2003 (English). In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 22, no. 10-11, p. 923-939. Article in journal (Refereed). Published.
Abstract [en]

For service robots operating in domestic environments, it is not enough to consider only control-level robustness; it is equally important to consider how the image information that serves as input to the control process can be used so as to achieve robust and efficient control. In this paper we present an effort towards the development of robust visual techniques used to guide robots in various tasks. Given a task at hand, we argue that different levels of complexity should be considered; this also defines the choice of the visual technique used to provide the necessary feedback information. We concentrate on visual feedback estimation, where we investigate both two- and three-dimensional techniques. In the former case, we are interested in providing coarse information about the object position/velocity in the image plane. In particular, a set of simple visual features (cues) is employed in an integrated framework where voting is used for fusing the responses from individual cues. The experimental evaluation shows the system performance for three different cases of camera-robot configurations most common for robotic systems. For cases where the robot is supposed to grasp the object, a two-dimensional position estimate is often not enough: the complete pose (position and orientation) of the object may be required. Therefore, we present a model-based system where a wire-frame model of the object is used to estimate its pose. Since a number of similar systems have been proposed in the literature, we concentrate on the part of the system usually neglected: automatic pose initialization. Finally, we show how a number of existing approaches can successfully be integrated in a system that is able to recognize and grasp fairly textured, everyday objects. One of the examples presented in the experimental section shows a mobile robot performing tasks in a real-world environment: a living room.
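The voting-based cue integration described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the cue names, grid size, and weights are hypothetical, and the paper's fusion scheme may differ in detail.

```python
# Hypothetical sketch of voting-based fusion of visual cue responses over
# the image plane, in the spirit of the abstract. Cues and weights here
# are illustrative only.
import numpy as np

def fuse_cues(cue_response_maps, weights):
    """Fuse per-cue response maps by weighted voting.

    cue_response_maps: list of 2-D arrays, one per visual cue
        (e.g. color, motion), giving each cue's response per pixel.
    weights: per-cue reliability weights.
    Returns the (row, col) position receiving the most votes.
    """
    accumulator = np.zeros_like(cue_response_maps[0], dtype=float)
    for response, w in zip(cue_response_maps, weights):
        # Normalize each cue's response so no single cue dominates
        total = response.sum()
        if total > 0:
            accumulator += w * (response / total)
    r, c = np.unravel_index(np.argmax(accumulator), accumulator.shape)
    return (int(r), int(c))

# Example: two synthetic 5x5 cue maps that agree on position (2, 3)
color_cue = np.zeros((5, 5)); color_cue[2, 3] = 1.0
motion_cue = np.zeros((5, 5)); motion_cue[2, 3] = 0.8; motion_cue[0, 0] = 0.2
print(fuse_cues([color_cue, motion_cue], weights=[0.5, 0.5]))  # (2, 3)
```

A voting scheme of this kind degrades gracefully when one cue is unreliable, since an outlier response from a single cue is outvoted by the others.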

Place, publisher, year, edition, pages
2003. Vol. 22, no. 10-11, p. 923-939
Keywords [en]
visual servoing, grasping, cue integration, robustness, integration, tracking, vision
Identifiers
URN: urn:nbn:se:kth:diva-22906
DOI: 10.1177/027836490302210009
ISI: 000186114200010
Scopus ID: 2-s2.0-0142123161
OAI: oai:DiVA.org:kth-22906
DiVA, id: diva2:341604
Note
QC 20100525. Available from: 2010-08-10. Created: 2010-08-10. Last updated: 2022-06-25. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Kragic, Danica

Search in DiVA

By author/editor
Kragic, Danica
By organisation
Numerical Analysis and Computer Science, NADA
In the same journal
The international journal of robotics research

Search outside of DiVA

Google
Google Scholar
