Integration of Visual Cues for Robotic Grasping
2009 (English). In: Computer Vision Systems, Proceedings / [ed] Fritz M, Schiele B, Piater JH. Berlin: Springer-Verlag, 2009, Vol. 5815, pp. 245-254. Conference paper (Refereed).
In this paper, we propose a method that generates grasping actions for novel objects based on visual input from a stereo camera. We integrate two methods, one advantageous for predicting how to grasp an object and the other for predicting where to apply a grasp. The first reconstructs a wire-frame object model through curve matching; elementary grasping actions can be associated with parts of this model. The second predicts grasping points in a 2D contour image of the object. By integrating the information from the two approaches, we generate a sparse set of full grasp configurations of good quality. We demonstrate our approach in a vision system on complex-shaped objects as well as in cluttered scenes.
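The integration idea described in the abstract, combining a "how" predictor (full grasp configurations attached to wire-frame parts) with a "where" predictor (scores for 2D contour points), can be sketched as follows. This is a hypothetical illustration of the filtering-and-ranking step, not the authors' implementation; the function name, data layout, and threshold are all assumptions.

```python
# Hypothetical sketch: one method proposes full grasp configurations
# ("how"), another scores 2D image locations as grasping points
# ("where"); their combination keeps a sparse, well-ranked set.

def integrate_grasps(configurations, point_scores, threshold=0.5):
    """Keep full grasp configurations whose 2D application point is
    also rated highly by the contour-based grasping-point predictor.

    configurations: list of (grasp_params, (u, v)) pairs, where (u, v)
        is the image location at which the grasp would be applied.
    point_scores: dict mapping (u, v) -> predicted grasping-point score.
    """
    selected = []
    for params, point in configurations:
        score = point_scores.get(point, 0.0)
        if score >= threshold:
            selected.append((params, point, score))
    # Rank the surviving configurations by grasping-point score.
    return sorted(selected, key=lambda g: g[2], reverse=True)
```

For example, a configuration anchored at a highly scored contour point survives, while one anchored at a poorly scored point is discarded, yielding the sparse set of full grasp configurations the abstract mentions.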
Place, publisher, year, edition, pages
Berlin: Springer-Verlag, 2009. Vol. 5815, pp. 245-254.
Series: Lecture Notes in Computer Science, ISSN 0302-9743; 5815
Computer and Information Science
Identifiers
URN: urn:nbn:se:kth:diva-30223
DOI: 10.1007/978-3-642-04667-4_25
ISI: 000274012700025
ScopusID: 2-s2.0-71549168321
ISBN: 978-3-642-04666-7
OAI: oai:DiVA.org:kth-30223
DiVA: diva2:399047
7th International Conference on Computer Vision Systems, Liege, Belgium, Oct 13-15, 2009
QC 20110221. Available from: 2011-02-21. Created: 2011-02-21. Last updated: 2012-01-28. Bibliographically approved.