Audio-Visual Classification and Detection of Human Manipulation Actions
Pieropan, Alessandro. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP; Centre for Autonomous Systems, CAS. ORCID iD: 0000-0003-2314-2880
Salvi, Giampiero. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH. ORCID iD: 0000-0002-3323-5311
Pauwels, Karl. Universidad de Granada, Spain. ORCID iD: 0000-0003-3731-0582
Kjellström, Hedvig. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP; Centre for Autonomous Systems, CAS. ORCID iD: 0000-0002-5750-9655
2014 (English). In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), IEEE conference proceedings, 2014, p. 3045-3052. Conference paper, Published paper (Refereed).
Abstract [en]

Humans are able to merge information from multiple perceptual modalities and formulate a coherent representation of the world. Our thesis is that robots need to do the same in order to operate robustly and autonomously in an unstructured environment. It has also been shown in several fields that multiple sources of information can complement each other, overcoming the limitations of a single perceptual modality. Hence, in this paper we introduce a data set of actions that includes both visual data (RGB-D video and 6DOF object pose estimation) and acoustic data. We also propose a method for recognizing and segmenting actions from continuous audio-visual data. The proposed method is employed for an extensive evaluation of the descriptive power of the two modalities, and we discuss how they can be used jointly to infer a coherent interpretation of the recorded action.
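
The abstract does not spell out how the two modalities are combined, so the following is only a minimal illustration of one common approach: late fusion of per-class posteriors from independently trained acoustic and visual classifiers via a weighted product rule. The function name, the weight, and the three-class setup are hypothetical, not taken from the paper.

```python
import numpy as np

def late_fusion(p_audio, p_video, w_audio=0.5):
    # Weighted product rule, computed in log space for numerical
    # stability; a standard late-fusion baseline, not necessarily
    # the fusion scheme used in the paper.
    log_p = (w_audio * np.log(p_audio + 1e-12)
             + (1.0 - w_audio) * np.log(p_video + 1e-12))
    p = np.exp(log_p - log_p.max())   # shift before exp to avoid underflow
    return p / p.sum()                # renormalize to a distribution

# Hypothetical per-class posteriors over three manipulation actions.
p_audio = np.array([0.7, 0.2, 0.1])   # e.g. from an acoustic classifier
p_video = np.array([0.4, 0.5, 0.1])   # e.g. from an RGB-D / pose classifier
print(late_fusion(p_audio, p_video))  # fused distribution over actions
```

With equal weights this reduces to a geometric mean of the two posteriors; the weight could be tuned on held-out data to reflect how reliable each modality is for a given action.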

Place, publisher, year, edition, pages
IEEE conference proceedings, 2014. p. 3045-3052
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
Keywords [en]
Acoustic data, Audio-visual, Audio-visual data, Coherent representations, Human manipulation, Multiple source, Unstructured environments, Visual data
National Category
Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:kth:diva-158004
DOI: 10.1109/IROS.2014.6942983
ISI: 000349834603023
Scopus ID: 2-s2.0-84911478073
ISBN: 978-1-4799-6934-0 (print)
OAI: oai:DiVA.org:kth-158004
DiVA id: diva2:773353
Conference
2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2014, Palmer House Hilton Hotel, Chicago, United States, 14–18 September 2014
Note

QC 20150122

Available from: 2014-12-18. Created: 2014-12-18. Last updated: 2025-02-07. Bibliographically approved.
In thesis
1. Action Recognition for Robot Learning
2015 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

This thesis builds on the observation that robots cannot be programmed to handle any possible situation in the world. Like humans, they need mechanisms to deal with previously unseen situations and unknown objects. One of the skills humans rely on to deal with the unknown is the ability to learn by observing others. This thesis addresses the challenge of enabling a robot to learn from a human instructor. In particular, it is focused on objects. How can a robot find previously unseen objects? How can it track the object with its gaze? How can the object be employed in activities? Throughout this thesis, these questions are addressed with the end goal of allowing a robot to observe a human instructor and learn how to perform an activity. The robot is assumed to know very little about the world and is supposed to discover objects autonomously. Given a visual input, object hypotheses are formulated by leveraging common contextual knowledge often used by humans (e.g. gravity, compactness, convexity). Moreover, unknown objects are tracked and their appearance is updated over time, since only a small fraction of the object is visible to the robot initially. Finally, object functionality is inferred by observing how the human instructor manipulates objects and how objects are used in relation to others. All the methods included in this thesis have been evaluated on datasets that are publicly available or that we collected, showing the importance of these learning abilities.
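
As a rough illustration of how contextual cues like compactness and convexity can score object hypotheses, the sketch below rates a binary segment mask by two simple shape heuristics. It is an illustrative stand-in under assumed definitions, not the thesis' actual objectness measure; the function name and weighting are hypothetical.

```python
import numpy as np
from scipy.spatial import ConvexHull

def objectness(mask):
    # Heuristic "objectness" of a binary segment mask, combining two of
    # the contextual cues named in the abstract: convexity and compactness.
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)
    area = float(len(pts))              # pixel count of the segment
    hull = ConvexHull(pts)
    hull_area = hull.volume             # for 2D hulls, .volume is the area
    hull_perim = hull.area              # ...and .area is the perimeter
    convexity = area / hull_area        # ~1 for convex blobs
    compactness = 4 * np.pi * area / hull_perim ** 2  # ~1 for discs
    return 0.5 * (convexity + compactness)            # equal weighting (assumed)

# Hypothetical usage: score candidate segments and keep the best ones.
mask = np.zeros((60, 60), dtype=bool)
mask[10:40, 15:45] = True               # a filled square segment
print(objectness(mask))                 # close to 1 => object-like
```

Scores near 1 suggest a compact, convex region consistent with a graspable object, while ragged or elongated segments score lower and could be discarded as background.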

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2015. p. v, 38
Series
TRITA-CSC-A, ISSN 1653-5723 ; 2015:09
National Category
Computer graphics and computer vision
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-165680 (URN)
Public defence
2015-05-21, F3, Lindstedtsvägen 26, KTH, Stockholm, 10:00 (English)
Note

QC 20150504

Available from: 2015-05-04. Created: 2015-04-29. Last updated: 2025-02-07. Bibliographically approved.

Open Access in DiVA

fulltext (11742 kB), 633 downloads
File information
File name: FULLTEXT01.pdf. File size: 11742 kB. Checksum: SHA-512
a3cc92c1e8f3e5292e6b1e4a4ba19090292fe3883384724f4fe562d45ff4ef59c0697c4b906e522d160fd4fcd2c724f9f8723c2a2e83d0a475450b77a1fc50b0
Type: fulltext. Mimetype: application/pdf

Other links

Publisher's full text, Scopus, IEEE Xplore, Conference website

Authority records

Pieropan, Alessandro; Salvi, Giampiero; Pauwels, Karl; Kjellström, Hedvig
