Robust Visual Servoing
Kragic, Danica (KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP; Centre for Autonomous Systems, CAS). ORCID iD: 0000-0003-2965-2953
Christensen, Henrik (KTH, School of Computer Science and Communication (CSC), Centre for Autonomous Systems, CAS)
2014 (English). In: Household Service Robotics, Elsevier, 2014, pp. 397-427. Chapter in book (Other academic).
Abstract [en]

For service robots operating in domestic environments, it is not enough to consider only control-level robustness; it is equally important to consider how the image information that serves as input to the control process can be used so as to achieve robust and efficient control. In this chapter we present an effort toward the development of robust visual techniques used to guide robots in various tasks. Given a task at hand, we argue that different levels of complexity should be considered; this also defines the choice of the visual technique used to provide the necessary feedback information. We concentrate on visual feedback estimation, where we investigate both two- and three-dimensional techniques. In the former case, we are interested in providing coarse information about the object position/velocity in the image plane. In particular, a set of simple visual features (cues) is employed in an integrated framework where voting is used for fusing the responses from individual cues. The experimental evaluation shows the system performance for three different camera-robot configurations most common for robotic systems. For cases where the robot is supposed to grasp the object, a two-dimensional position estimate is often not enough, and the complete pose (position and orientation) of the object may be required. Therefore, we present a model-based system where a wire-frame model of the object is used to estimate its pose. Since a number of similar systems have been proposed in the literature, we concentrate on the part of the system that is usually neglected: automatic pose initialization. Finally, we show how a number of existing approaches can successfully be integrated in a system that is able to recognize and grasp fairly textured, everyday objects. One of the examples presented in the experimental section shows a mobile robot performing tasks in a real-world environment: a living room.

Place, publisher, year, edition, pages
Elsevier, 2014. pp. 397-427.
Keywords [en]
Cue integration, Grasping, Integration, Robustness, Visual servoing
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:kth:diva-181262
DOI: 10.1016/B978-0-12-800881-2.00018-9
ScopusID: 2-s2.0-84944392576
ISBN: 9780128009437
OAI: oai:DiVA.org:kth-181262
DiVA: diva2:902099
Note

QC 20160210

Available from: 2016-02-10. Created: 2016-01-29. Last updated: 2016-02-10. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text
Scopus
