Grasping Familiar Objects using Shape Context
2009 (English). In: ICAR 2009: 14th International Conference on Advanced Robotics, IEEE, 2009, pp. 50-55. Conference paper (Refereed).
We present work on vision-based robotic grasping. The proposed method relies on extracting and representing the global contour of an object in a monocular image. A suitable grasp is then generated using a learning framework in which prototypical grasping points are learned from several examples and then applied to novel objects. For representation, we apply the concept of shape context; for learning, we use a supervised approach in which the classifier is trained with labeled synthetic images. Our results show that combining a shape-context-based descriptor with a non-linear classification algorithm leads to stable detection of grasping points for a variety of objects. Furthermore, we show how our representation supports the inference of a full grasp configuration.
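The record gives no implementation details, but the descriptor it names is the standard shape context: for each sampled contour point, a log-polar histogram of the relative positions of all other contour points. A minimal sketch in Python follows; the function name, bin counts, and the circle contour used for illustration are assumptions, not taken from the paper.

```python
import numpy as np

def shape_context(points, index, n_radial=5, n_angular=12):
    """Log-polar histogram of contour points relative to points[index].

    points: (N, 2) array of 2D contour samples (assumed layout).
    Returns a flattened (n_radial * n_angular,) normalized histogram.
    """
    p = points[index]
    others = np.delete(points, index, axis=0)
    d = others - p                        # vectors to all other points
    r = np.linalg.norm(d, axis=1)         # distances
    theta = np.arctan2(d[:, 1], d[:, 0])  # angles in [-pi, pi)

    # Normalize distances by their mean for scale invariance, then take log
    # so that nearby points get finer radial resolution than distant ones.
    log_r = np.log(r / r.mean() + 1e-12)

    # Uniform bins in log-radius and in angle.
    r_edges = np.linspace(log_r.min(), log_r.max() + 1e-9, n_radial + 1)
    t_edges = np.linspace(-np.pi, np.pi + 1e-9, n_angular + 1)

    hist, _, _ = np.histogram2d(log_r, theta, bins=[r_edges, t_edges])
    hist = hist.flatten()
    return hist / hist.sum()              # normalize to a distribution

# Example: descriptor for one point on a circular contour.
angles = np.linspace(0, 2 * np.pi, 100, endpoint=False)
contour = np.stack([np.cos(angles), np.sin(angles)], axis=1)
desc = shape_context(contour, 0)
```

In the paper's pipeline, descriptors like `desc` would be the feature vectors fed to the non-linear classifier trained on labeled synthetic images.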
Place, publisher, year, edition, pages: IEEE, 2009, pp. 50-55.
Keywords: Descriptors, Learning frameworks, Monocular image, Non-linear classification, Robotic grasping, Shape contexts, Synthetic images, Vision-based, Image processing, Robots
Computer and Information Science
Identifiers
URN: urn:nbn:se:kth:diva-30418
ISI: 000270815500009
Scopus ID: 2-s2.0-70449375329
ISBN: 978-1-4244-4855-5
OAI: oai:DiVA.org:kth-30418
DiVA: diva2:400826
14th International Conference on Advanced Robotics, Munich, Germany, June 22-26, 2009