A Sensorimotor Approach for Self-Learning of Hand-Eye Coordination
Ghadirzadeh, Ali. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. ORCID iD: 0000-0001-6738-9872
Maki, Atsuto. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. ORCID iD: 0000-0002-4266-6746
Björkman, Mårten. KTH, School of Computer Science and Communication (CSC), Centre for Autonomous Systems, CAS; Computer Vision and Active Perception, CVAP. ORCID iD: 0000-0003-0579-3372
2015 (English). In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, September 28 - October 02, 2015. IEEE conference proceedings, 2015, pp. 4969-4975. Conference paper (Refereed).
Abstract [en]

This paper presents a sensorimotor contingencies (SMC) based method for fully autonomous learning of hand-eye coordination. We divide the task into two visuomotor subtasks, visual fixation and reaching, and implement both on a PR2 robot, assuming no prior information about its kinematic model. Our contributions are three-fold: i) grounding the robot in its environment by exploiting SMCs in the action planning system, which eliminates the need for prior knowledge of the robot's kinematic or dynamic models; ii) searching for proper actions with a forward model by minimizing a cost function, instead of training a separate inverse model, which speeds up training; iii) encoding the 3D spatial position of a target object in terms of the robot's joint positions, thus avoiding calibration with respect to an external coordinate system. The method learns hand-eye coordination from scratch from fewer than 20 sensorimotor pairs that are generated iteratively at real-time speed. To examine the robustness of the method to nonlinear image distortions, we apply a so-called retinal mapping deformation to the input images. Experimental results show that the method succeeds even under considerable image deformations.
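
Contribution (ii) replaces a learned inverse model with a search over a learned forward model: candidate actions are evaluated by the forward model's sensory prediction, and the action minimizing a cost function is executed. The following is only a minimal illustrative sketch of that general idea, not the authors' implementation; the function names, the 4-DoF joint dimensionality, the linear stub standing in for the learned model, and the random-search budget are all assumptions made up for illustration.

```python
import numpy as np

def forward_model(joints: np.ndarray, action: np.ndarray) -> np.ndarray:
    """Hypothetical learned forward model: predicts the next sensory state
    (e.g., image coordinates of the hand) from the current joint configuration
    and a candidate joint displacement. In practice this would be a regressor
    trained on iteratively collected sensorimotor pairs; here a fixed linear
    map stands in for the learned model."""
    W = np.array([[0.8, 0.1, 0.0, 0.05],
                  [0.0, 0.9, 0.2, 0.0]])
    return W @ (joints + action)

def cost(predicted: np.ndarray, target: np.ndarray) -> float:
    # Squared distance between predicted sensory state and the target.
    return float(np.sum((predicted - target) ** 2))

def select_action(joints, target, n_samples=500, scale=0.1, rng=None):
    """Gradient-free search: sample candidate actions, run each through the
    forward model, and return the one whose prediction minimizes the cost."""
    if rng is None:
        rng = np.random.default_rng(0)
    candidates = rng.normal(0.0, scale, size=(n_samples, joints.shape[0]))
    costs = [cost(forward_model(joints, a), target) for a in candidates]
    return candidates[int(np.argmin(costs))]

# Usage: one step of reaching toward a target encoded in sensory space.
joints = np.zeros(4)            # current joint configuration (4 DoF, assumed)
target = np.array([0.3, -0.2])  # desired sensory state (e.g., fixation point)
action = select_action(joints, target)
print("chosen joint displacement:", action)
```

Because only the forward model is trained, new sensorimotor pairs can be folded in incrementally, and action selection reduces to optimization at execution time rather than requiring a separately trained inverse mapping.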

Place, publisher, year, edition, pages
IEEE conference proceedings, 2015, pp. 4969-4975.
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
Keyword [en]
Reactive and Sensor-Based Planning, Robot Learning, Visual Servoing
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-179834
DOI: 10.1109/IROS.2015.7354076
ISI: 000371885405012
Scopus ID: 2-s2.0-84958153652
ISBN: 978-147999994-1
OAI: oai:DiVA.org:kth-179834
DiVA: diva2:889976
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, September 28 - October 02, 2015
Projects
eSMCs
Note

Qc 20160212

Available from: 2015-12-29. Created: 2015-12-29. Last updated: 2016-04-11. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text | Scopus | IEEE Xplore
