A Framework for Integrating Gesture Generation Models into Interactive Conversational Agents
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Speech, Music and Hearing, TMH.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0001-9838-8848
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Speech, Music and Hearing, TMH.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Speech, Music and Hearing, TMH, Speech Communication. ORCID iD: 0000-0003-2428-0468
2021 (English) Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

Embodied conversational agents (ECAs) benefit from non-verbal behavior for natural and efficient interaction with users. Gesticulation – hand and arm movements accompanying speech – is an essential part of non-verbal behavior. Gesture generation models have been developed for several decades, starting with rule-based methods and moving towards mainly data-driven ones. To date, recent end-to-end gesture generation methods have not been evaluated in a real-time interaction with users. We present a proof-of-concept framework, which is intended to facilitate evaluation of modern gesture generation models in interaction. We demonstrate an extensible open-source framework that contains three components: 1) a 3D interactive agent; 2) a chatbot back-end; 3) a gesticulating system. Each component can be replaced, making the proposed framework applicable for investigating the effect of different gesturing models in real-time interactions with different communication modalities, chatbot back-ends, or different agent appearances. The code and video are available at the project page https://nagyrajmund.github.io/project/gesturebot.
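The three-component design described in the abstract – an interactive agent, a chatbot back-end, and a gesticulating system, each independently replaceable – can be illustrated with a minimal sketch. This is not the authors' actual implementation; all class and function names below are hypothetical, and the components are reduced to placeholder logic to show the plug-and-play structure.

```python
# Hypothetical sketch of a modular ECA pipeline with three swappable
# components, in the spirit of the framework described in the abstract.
from abc import ABC, abstractmethod

class ChatbotBackend(ABC):
    """Produces an agent reply from a user utterance (component 2)."""
    @abstractmethod
    def respond(self, user_utterance: str) -> str: ...

class GestureGenerator(ABC):
    """Produces gesture keyframes for a given reply (component 3)."""
    @abstractmethod
    def generate(self, speech_text: str) -> list: ...

class Agent(ABC):
    """Renders speech and gestures to the user (component 1)."""
    @abstractmethod
    def render(self, speech_text: str, gestures: list) -> None: ...

# Trivial stand-in implementations; a real system would plug in an
# actual dialogue engine, a trained gesture model, and a 3D renderer.
class EchoChatbot(ChatbotBackend):
    def respond(self, user_utterance: str) -> str:
        return f"You said: {user_utterance}"

class BeatGestureGenerator(GestureGenerator):
    def generate(self, speech_text: str) -> list:
        # One placeholder "beat" keyframe per word.
        return [{"word": w, "gesture": "beat"} for w in speech_text.split()]

class ConsoleAgent(Agent):
    def render(self, speech_text: str, gestures: list) -> None:
        print(speech_text, f"({len(gestures)} gesture keyframes)")

def interaction_step(chatbot: ChatbotBackend,
                     generator: GestureGenerator,
                     agent: Agent,
                     user_utterance: str):
    """One turn of the interaction loop: reply -> gestures -> rendering."""
    reply = chatbot.respond(user_utterance)
    gestures = generator.generate(reply)
    agent.render(reply, gestures)
    return reply, gestures
```

Because each component only depends on an abstract interface, swapping in a different gesture model, chatbot back-end, or agent appearance requires no change to the interaction loop itself – which is the extensibility property the abstract emphasizes.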

Place, publisher, year, edition, pages
2021.
Keywords [en]
conversational embodied agents; non-verbal behavior synthesis
National subject category
Human-Computer Interaction (interaction design)
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-304616
OAI: oai:DiVA.org:kth-304616
DiVA id: diva2:1609609
Conference
20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
Research funder
Swedish Foundation for Strategic Research (SSF), RIT15-0107
Note

QC 20211130

Not duplicate with DiVA 1653872

Available from: 2021-11-08 Created: 2021-11-08 Last updated: 2022-06-25 Bibliographically approved
Part of thesis
1. Developing and evaluating co-speech gesture-synthesis models for embodied conversational agents
2021 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

A large part of our communication is non-verbal: humans use non-verbal behaviors to express various aspects of their state or intent. Embodied artificial agents, such as virtual avatars or robots, should also use non-verbal behavior for efficient and pleasant interaction. A core part of non-verbal communication is gesticulation: gestures communicate a large share of non-verbal content. For example, around 90% of spoken utterances in descriptive discourse are accompanied by gestures. Since gestures are important, generating co-speech gestures has been an essential task in the Human-Agent Interaction (HAI) and Computer Graphics communities for several decades. Evaluating gesture-generation methods has been an equally important and equally challenging part of the field's development. Consequently, this thesis contributes to both the development and the evaluation of gesture-generation models.

This thesis proposes three deep-learning-based gesture-generation models. The first model is deterministic, uses only audio, and generates only beat gestures. The second model is deterministic and uses both audio and text, aiming to generate meaningful gestures. The third model also uses both audio and text, but is probabilistic, in order to capture the stochastic character of human gesticulation. The methods have applications to both virtual agents and social robots. Individual research efforts in the field of gesture generation are difficult to compare, as there are no established benchmarks. To address this situation, my colleagues and I launched the first-ever gesture-generation challenge, which we called the GENEA Challenge. We also investigated whether online participants are as attentive as offline participants, and found that the two groups are equally attentive, provided that participants are well paid. Finally, we developed a system that integrates co-speech gesture-generation models into a real-time interactive embodied conversational agent. This system is intended to facilitate the evaluation of modern gesture-generation models in interaction.

To further advance the development of capable gesture-generation methods, we need to advance their evaluation, and the research in this thesis supports the interpretation that evaluation is the main bottleneck limiting the field. There are currently no comprehensive co-speech gesture datasets that are large, high-quality, and diverse, and no strong objective metrics are yet available. Creating speech-gesture datasets and developing objective metrics are therefore highlighted as essential next steps for the field's further development.

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2021. p. 47
Series
TRITA-EECS-AVL ; 2021:75
Keywords
Human-agent interaction, gesture generation, social robotics, conversational agents, non-verbal behavior, deep learning, machine learning
National subject category
Human-Computer Interaction (interaction design)
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-304618
ISBN: 978-91-8040-058-9
Public defence
2021-12-07, Kollegiesalen, Stockholm, 13:00 (English)
Research funder
Swedish Foundation for Strategic Research (SSF), RIT15-0107
Note

QC 20211109

Available from: 2021-11-10 Created: 2021-11-08 Last updated: 2022-06-25 Bibliographically approved

Open Access in DiVA

Full text is not available in DiVA

Other links

https://www.ifaamas.org/Proceedings/aamas2021/pdfs/p1779.pdf

Person

Nagy, Rajmund; Kucherenko, Taras; Moell, Birger; Abelho Pereira, André Tiago; Kjellström, Hedvig
