What's in the Container?: Classifying Object Contents from Vision and Touch
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
2014 (English). In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), IEEE, 2014, pp. 3961-3968. Conference paper, Published paper (Refereed).
Abstract [en]

Robots operating in household environments need to interact with food containers of different types. Whether a container is filled with milk, juice, yogurt or coffee may affect the way robots grasp and manipulate it. In this paper, we concentrate on the problem of identifying what kind of content is in a container based on tactile and/or visual feedback in combination with grasping. In particular, we investigate the benefits of using unimodal (visual or tactile) or bimodal (visual-tactile) sensory data for this purpose. We direct our study toward cardboard containers that are empty or filled with liquid or solid content. The motivation for using grasping rather than shaking is that we want to investigate the content prior to applying manipulation actions to a container. Our results show that we achieve comparable classification rates with unimodal data and that the visual and tactile data are complementary.

Place, publisher, year, edition, pages
IEEE, 2014, pp. 3961-3968.
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
Keyword [en]
Intelligent robots, Robots, Visual communication, Classification rates, Food containers, Sensory data, Solid contents, Unimodal, Visual feedback
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:kth:diva-163512
DOI: 10.1109/IROS.2014.6943119
ISI: 000349834604011
Scopus ID: 2-s2.0-84911468996
ISBN: 978-1-4799-6934-0 (print)
OAI: oai:DiVA.org:kth-163512
DiVA: diva2:800542
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sep 14-18, 2014, Chicago, IL
Note

QC 20150407

Available from: 2015-04-07. Created: 2015-04-07. Last updated: 2015-04-07. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text, Scopus

By author/editor
Güler, Püren; Bekiroglu, Yasemin; Gratal, Xavi; Kragic, Danica
By organisation
Computer Vision and Active Perception, CVAP
