Robust visual servoing
KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.ORCID iD: 0000-0003-2965-2953
2003 (English) In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 22, no. 10-11, pp. 923-939. Article in journal (Refereed) Published
Abstract [en]

For service robots operating in domestic environments, it is not enough to consider only control level robustness; it is equally important to consider how image information that serves as input to the control process can be used so as to achieve robust and efficient control. In this paper we present an effort towards the development of robust visual techniques used to guide robots in various tasks. Given a task at hand, we argue that different levels of complexity should be considered; this also defines the choice of the visual technique used to provide the necessary feedback information. We concentrate on visual feedback estimation, where we investigate both two- and three-dimensional techniques. In the former case, we are interested in providing coarse information about the object position/velocity in the image plane. In particular, a set of simple visual features (cues) is employed in an integrated framework where voting is used for fusing the responses from individual cues. The experimental evaluation shows the system performance for three different cases of camera-robot configurations most common for robotic systems. For cases where the robot is supposed to grasp the object, a two-dimensional position estimate is often not enough. Complete pose (position and orientation) of the object may be required. Therefore, we present a model-based system where a wire-frame model of the object is used to estimate its pose. Since a number of similar systems have been proposed in the literature, we concentrate on the particular part of the system usually neglected: automatic pose initialization. Finally, we show how a number of existing approaches can successfully be integrated in a system that is able to recognize and grasp fairly textured, everyday objects. One of the examples presented in the experimental section shows a mobile robot performing tasks in a real-world environment: a living room.
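As a rough illustration of the voting-based cue fusion the abstract describes (a sketch only, not the paper's implementation), each visual cue can cast weighted votes over the image plane and the fused 2D position estimate is the cell with the highest accumulated vote. The function name, weights, and toy response maps below are hypothetical:

```python
import numpy as np

def fuse_cues_by_voting(cue_maps, weights=None):
    """Fuse per-cue response maps over the image plane by weighted voting.

    cue_maps: list of 2-D arrays, one per visual cue (e.g. colour, motion,
    texture), each giving that cue's response at every pixel.
    Returns the (row, col) index of the cell with the highest total vote.
    """
    if weights is None:
        weights = [1.0] * len(cue_maps)
    accumulator = np.zeros_like(cue_maps[0], dtype=float)
    for response, w in zip(cue_maps, weights):
        accumulator += w * response  # each cue casts weighted votes
    return np.unravel_index(np.argmax(accumulator), accumulator.shape)

# Toy example: three 4x4 cue responses that agree near cell (1, 2)
rng = np.random.default_rng(0)
cues = []
for _ in range(3):
    m = rng.random((4, 4)) * 0.1   # weak background response
    m[1, 2] += 1.0                 # strong response at the target
    cues.append(m)

row, col = fuse_cues_by_voting(cues)
print(int(row), int(col))  # -> 1 2
```

Because cues vote independently, a single failing cue (e.g. colour under changing illumination) is outvoted by the others, which is the robustness argument the abstract makes for cue integration.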

Place, publisher, year, edition, pages
2003. Vol. 22, no. 10-11, pp. 923-939.
Keyword [en]
visual servoing, grasping, cue integration, robustness, integration, tracking, vision
URN: urn:nbn:se:kth:diva-22906, ISI: 000186114200010, OAI: diva2:341604
QC 20100525. Available from: 2010-08-10. Created: 2010-08-10. Bibliographically approved.

By author/editor: Kragic, Danica