What's in the Container?: Classifying Object Contents from Vision and Touch
2014 (English). In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), IEEE, 2014, pp. 3961-3968. Conference paper (Refereed)
Robots operating in household environments need to interact with food containers of different types. Whether a container is filled with milk, juice, yogurt or coffee may affect the way robots grasp and manipulate the container. In this paper, we concentrate on the problem of identifying what kind of content is in a container based on tactile and/or visual feedback in combination with grasping. In particular, we investigate the benefits of using unimodal (visual or tactile) or bimodal (visual-tactile) sensory data for this purpose. We direct our study toward cardboard containers that are either empty or filled with liquid or solid content. The motivation for using grasping rather than shaking is that we want to investigate the content prior to applying manipulation actions to a container. Our results show that we achieve comparable classification rates with unimodal data and that the visual and tactile modalities are complementary.
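The abstract does not specify the classifier or fusion scheme, so the following is only a minimal sketch of the unimodal vs. bimodal comparison it describes, assuming SVM classifiers, cross-validated accuracy, and simple feature concatenation for the bimodal case; the feature dimensions, data, and label set (empty/liquid/solid) are placeholders, not the authors' setup.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_grasps = 200                              # placeholder number of grasp trials
visual = rng.normal(size=(n_grasps, 64))    # assumed visual feature vectors
tactile = rng.normal(size=(n_grasps, 32))   # assumed tactile feature vectors
labels = rng.integers(0, 3, size=n_grasps)  # hypothetical: empty / liquid / solid

def accuracy(features, labels):
    # Mean 5-fold cross-validated accuracy for one feature set.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(clf, features, labels, cv=5).mean()

print("visual only :", accuracy(visual, labels))
print("tactile only:", accuracy(tactile, labels))
# Bimodal fusion here is plain feature concatenation; the paper's
# actual fusion method may differ.
print("bimodal     :", accuracy(np.hstack([visual, tactile]), labels))

With real features in place of the synthetic arrays, comparing the three printed scores is the kind of unimodal-versus-bimodal evaluation the abstract reports.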
Place, publisher, year, edition, pages
IEEE, 2014. 3961-3968 p.
Series: IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
Keywords: Intelligent robots, Robots, Visual communication, Classification rates, Food containers, Sensory data, Solid contents, Unimodal, Visual feedback
Research subject: Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:kth:diva-163512
DOI: 10.1109/IROS.2014.6943119
ISI: 000349834604011
ScopusID: 2-s2.0-84911468996
ISBN: 978-1-4799-6934-0
OAI: oai:DiVA.org:kth-163512
DiVA: diva2:800542
Conference: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sep 14-18, 2014, Chicago, IL
QC 20150407. Available from: 2015-04-07. Created: 2015-04-07. Last updated: 2015-04-07. Bibliographically approved.