Tutoring Robots: Multiparty Multimodal Social Dialogue With an Embodied Tutor
2014 (English) · Conference paper (Refereed)
This project explores a novel experimental setup for building a spoken, multimodally rich, and human-like multiparty tutoring agent. A setup is developed and a corpus is collected that target the development of a dialogue-system platform for exploring verbal and nonverbal tutoring strategies in multiparty spoken interactions with embodied agents. The dialogue task centers on two participants engaged in a dialogue aiming to solve a card-ordering game. A tutor sits with the participants, helping them perform the task and organizing and balancing their interaction. Multimodal signals, captured and automatically synchronized by different audio-visual capture technologies, were coupled with manual annotations to build a situated model of the interaction based on the participants' personalities, their temporally changing states of attention, their conversational engagement and verbal dominance, and the way these correlate with the verbal and visual feedback, turn-management, and conversation-regulatory actions generated by the tutor. At the end of this chapter we discuss the potential areas of research and development this work opens up and some of the challenges that lie in the road ahead.
Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2014.
Keywords: Conversational Dominance; Embodied Agent; Multimodal; Multiparty; Non-verbal Signals; Social Robot; Spoken Dialogue; Turn-taking; Tutor; Visual Attention
Identifiers: URN: urn:nbn:se:kth:diva-158149; ISI: 000349440300004; Scopus ID: 2-s2.0-84927643008; OAI: oai:DiVA.org:kth-158149; DiVA: diva2:774994
9th International Summer Workshop on Multimodal Interfaces, Lisbon, Portugal