Data Driven Non-Verbal Behavior Generation for Humanoid Robots
KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
ORCID iD: 0000-0001-9838-8848
2018 (English) Conference paper, Published paper (Refereed)
Abstract [en]

Social robots need non-verbal behavior to make an interaction pleasant and efficient. Most models for generating non-verbal behavior are rule-based and hence can produce only a limited set of motions and are tuned to a particular scenario. In contrast, data-driven systems are flexible and easily adjustable. Hence we aim to learn a data-driven model for generating non-verbal behavior (in the form of a 3D motion sequence) for humanoid robots. Our approach is based on a popular and powerful deep generative model: the Variational Autoencoder (VAE). The input to our model will be multi-modal, and we will iteratively increase its complexity: first it will use only the speech signal, then also the text transcription, and finally the non-verbal behavior of the conversation partner. We will evaluate our system on virtual avatars as well as on two humanoid robots with different embodiments: NAO and Furhat. Our model will be easily adaptable to a novel domain: this can be done by providing application-specific training data.
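The abstract names the Variational Autoencoder as the core generative model. The following is a minimal NumPy sketch of a VAE forward pass (encode, reparameterize, decode) for the speech-to-motion setting described above; all dimensionalities, layer shapes, and feature choices are illustrative assumptions and are not taken from the paper, which would use a trained deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen only for illustration.
SPEECH_DIM = 26   # per-frame speech features (e.g. MFCC-like)
LATENT_DIM = 8    # VAE latent dimensionality
MOTION_DIM = 45   # per-frame 3D joint-rotation vector

# Randomly initialized single-layer "networks" (untrained stand-ins
# for the deep encoder/decoder a real system would learn).
W_mu = rng.normal(size=(SPEECH_DIM, LATENT_DIM))
W_logvar = rng.normal(size=(SPEECH_DIM, LATENT_DIM))
W_dec = rng.normal(size=(LATENT_DIM, MOTION_DIM))

def encode(x):
    """Map a speech-feature frame to a latent mean and log-variance."""
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps, the reparameterization trick that
    lets gradients flow through the stochastic sampling step."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Map a latent sample to one frame of non-verbal motion."""
    return z @ W_dec

speech_frame = rng.normal(size=SPEECH_DIM)
mu, logvar = encode(speech_frame)
motion_frame = decode(reparameterize(mu, logvar))
print(motion_frame.shape)  # (45,)
```

In a trained VAE the stochastic latent lets the same speech input map to many plausible motion sequences, which is what makes the generated behavior look natural rather than repetitive.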

Place, publisher, year, edition, pages
Boulder, CO, USA: ACM Digital Library, 2018. p. 520-523
Keywords [en]
Non-verbal behavior, data driven systems, machine learning, deep learning, humanoid robot
National Category
Human Computer Interaction
Research subject
Human-computer Interaction
Identifiers
URN: urn:nbn:se:kth:diva-238617
DOI: 10.1145/3242969.3264970
ISI: 000457913100073
Scopus ID: 2-s2.0-85056642092
ISBN: 978-1-4503-5692-3 (print)
OAI: oai:DiVA.org:kth-238617
DiVA, id: diva2:1260916
Conference
2018 International Conference on Multimodal Interaction (ICMI ’18), October 16–20, 2018, Boulder, CO, USA
Projects
EACare
Funder
Swedish Foundation for Strategic Research, 7085
Note

QC 20181106

Available from: 2018-11-05. Created: 2018-11-05. Last updated: 2019-03-18. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text: https://dl.acm.org/citation.cfm?id=3264970
Scopus

Authority records BETA

Kucherenko, Taras

Search in DiVA

By author/editor
Kucherenko, Taras
By organisation
Robotics, perception and learning, RPL
Human Computer Interaction

Search outside of DiVA

Google
Google Scholar
