Tutoring Robots: Multiparty Multimodal Social Dialogue With an Embodied Tutor
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology. ORCID iD: 0000-0003-1399-6604
2014 (English). Conference paper, Published paper (Refereed)
Abstract [en]

This project explores a novel experimental setup for building a spoken, multimodally rich, and human-like multiparty tutoring agent. We developed the setup and collected a corpus to support a dialogue-system platform for exploring verbal and nonverbal tutoring strategies in multiparty spoken interactions with embodied agents. The dialogue task centers on two participants working together to solve a card-ordering game. A tutor sits with the participants, helps them perform the task, and organizes and balances their interaction. Multimodal signals, captured and automatically synchronized by several audio-visual capture technologies, were coupled with manual annotations to build a situated model of the interaction based on the participants' personalities, their temporally changing state of attention, their conversational engagement and verbal dominance, and the way these correlate with the verbal and visual feedback, turn-management, and conversation-regulatory actions generated by the tutor. At the end of this chapter we discuss the areas of research and development this work opens up and some of the challenges that lie ahead.

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2014.
Keywords [en]
Conversational Dominance; Embodied Agent; Multimodal; Multiparty; Non-verbal Signals; Social Robot; Spoken Dialogue; Turn-taking; Tutor; Visual Attention
National Category
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-158149
ISI: 000349440300004
Scopus ID: 2-s2.0-84927643008
OAI: oai:DiVA.org:kth-158149
DiVA: diva2:774994
Conference
9th International Summer Workshop on Multimodal Interfaces, Lisbon, Portugal
Note

QC 20161018

Available from: 2014-12-30 Created: 2014-12-30 Last updated: 2016-10-18 Bibliographically approved

Open Access in DiVA

No full text

By author/editor
Al Moubayed, Samer; Beskow, Jonas; Bollepalli, Bajibabu; Johansson, Martin; Oertel, Catharine; Skantze, Gabriel; Stefanov, Kalin