On the Importance of Representations for Speech-Driven Gesture Generation: Extended Abstract
2019 (English). Conference paper, published paper (refereed)
Abstract [en]
This paper presents a novel framework for automatic speech-driven gesture generation applicable to human-agent interaction, including both virtual agents and robots. Specifically, we extend recent deep-learning-based, data-driven methods for speech-driven gesture generation by incorporating representation learning. Our model takes speech features as input and produces gestures as output, in the form of sequences of 3D joint coordinates representing motion. The results of objective and subjective evaluations confirm the benefits of representation learning.
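The pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a learned motion representation (e.g. from an autoencoder over poses), a network mapping speech features into that representation, and a decoder producing 3D joint coordinates. All dimensions, weights, and function names here are hypothetical placeholders.

```python
# Minimal sketch (NOT the authors' code) of speech-driven gesture
# generation with a learned motion representation. Untrained random
# weights stand in for the trained networks; only the data flow and
# tensor shapes are meant to be illustrative.
import numpy as np

rng = np.random.default_rng(0)

N_JOINTS = 15            # hypothetical skeleton size
POSE_DIM = N_JOINTS * 3  # 3D coordinates per joint
SPEECH_DIM = 26          # assumed MFCC-like speech feature size
REPR_DIM = 8             # size of the learned motion representation

W_enc = rng.standard_normal((POSE_DIM, REPR_DIM)) * 0.1    # motion encoder
W_dec = rng.standard_normal((REPR_DIM, POSE_DIM)) * 0.1    # motion decoder
W_map = rng.standard_normal((SPEECH_DIM, REPR_DIM)) * 0.1  # speech -> repr

def encode_motion(poses):
    """Project raw poses (T, POSE_DIM) into the learned representation."""
    return np.tanh(poses @ W_enc)

def decode_motion(z):
    """Decode representations (T, REPR_DIM) back to joint coordinates."""
    return z @ W_dec

def speech_to_gesture(speech):
    """Full pipeline: speech features (T, SPEECH_DIM) -> poses (T, POSE_DIM)."""
    z = np.tanh(speech @ W_map)  # map speech into the motion representation
    return decode_motion(z)

speech = rng.standard_normal((100, SPEECH_DIM))  # 100 frames of speech features
gestures = speech_to_gesture(speech)
print(gestures.shape)  # -> (100, 45): one 45-dim pose per speech frame
```

The design point the abstract argues for is that mapping speech into a compact, pre-learned motion representation (rather than directly to raw joint coordinates) simplifies the regression problem the speech-to-motion network must solve.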
Place, publisher, year, edition, pages
The International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS), 2019. p. 2072-2074
Keywords [en]
Gesture generation; social robotics; representation learning; neural network; deep learning; virtual agents
National Category
Human Computer Interaction
Research subject
Human-computer Interaction
Identifiers
URN: urn:nbn:se:kth:diva-251648
ISI: 000474345000309
Scopus ID: 2-s2.0-85069739876
OAI: oai:DiVA.org:kth-251648
DiVA, id: diva2:1316254
Conference
International Conference on Autonomous Agents and Multiagent Systems (AAMAS '19), May 13-17, 2019, Montréal, Canada
Projects
EACare
Funder
Swedish Foundation for Strategic Research, RIT15-0107
Note
QC 20190515
Available from: 2019-05-16. Created: 2019-05-16. Last updated: 2022-06-26. Bibliographically approved.