Modelling Adaptive Presentations in Human-Robot Interaction using Behaviour Trees
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH. ORCID iD: 0000-0003-0112-6732
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH. ORCID iD: 0000-0002-8579-1790
2019 (English). In: 20th Annual Meeting of the Special Interest Group on Discourse and Dialogue: Proceedings of the Conference / [ed] Satoshi Nakamura. Stroudsburg, PA: Association for Computational Linguistics (ACL), 2019, p. 345-352. Conference paper, Published paper (Refereed)
Abstract [en]

In dialogue, speakers continuously adapt their speech to accommodate the listener, based on the feedback they receive. In this paper, we explore the modelling of such behaviours in the context of a robot presenting a painting. A Behaviour Tree is used to organise the behaviour on different levels, and allow the robot to adapt its behaviour in real-time; the tree organises engagement, joint attention, turn-taking, feedback and incremental speech processing. An initial implementation of the model is presented, and the system is evaluated in a user study, where the adaptive robot presenter is compared to a non-adaptive version. The adaptive version is found to be more engaging by the users, although no effects are found on the retention of the presented material.

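The abstract describes the behaviour-tree architecture only at a high level. As a reading aid, the sketch below shows how a behaviour tree built from Sequence and Fallback nodes over a shared blackboard could organise presenter behaviours such as engagement checks, joint attention and feedback-driven rephrasing. The node classes, blackboard keys and action names here are illustrative assumptions, not the authors' implementation.

```python
from enum import Enum


class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3


class Sequence:
    """Ticks children in order; stops at the first child that does not succeed."""
    def __init__(self, children):
        self.children = children

    def tick(self, blackboard):
        for child in self.children:
            status = child.tick(blackboard)
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS


class Fallback:
    """Ticks children in order; stops at the first child that does not fail."""
    def __init__(self, children):
        self.children = children

    def tick(self, blackboard):
        for child in self.children:
            status = child.tick(blackboard)
            if status != Status.FAILURE:
                return status
        return Status.FAILURE


class Condition:
    """Leaf that checks a boolean flag on the shared blackboard."""
    def __init__(self, key):
        self.key = key

    def tick(self, blackboard):
        return Status.SUCCESS if blackboard.get(self.key) else Status.FAILURE


class Action:
    """Leaf that runs a callback (e.g. speak, rephrase, regain attention)."""
    def __init__(self, name, effect):
        self.name = name
        self.effect = effect

    def tick(self, blackboard):
        print(f"[action] {self.name}")
        self.effect(blackboard)
        return Status.SUCCESS


# Hypothetical presenter tree: ensure the listener is engaged and attending to
# the painting, rephrase after negative feedback, otherwise continue presenting.
tree = Sequence([
    Fallback([Condition("listener_engaged"),
              Action("regain_engagement", lambda bb: bb.update(listener_engaged=True))]),
    Fallback([Condition("joint_attention_on_painting"),
              Action("direct_gaze_to_painting", lambda bb: bb.update(joint_attention_on_painting=True))]),
    Fallback([Condition("positive_feedback"),
              Action("rephrase_last_segment", lambda bb: bb.update(positive_feedback=True))]),
    Action("present_next_segment", lambda bb: None),
])

blackboard = {"listener_engaged": False,
              "joint_attention_on_painting": True,
              "positive_feedback": False}
tree.tick(blackboard)  # one decision cycle; a real system would tick repeatedly
```

In a real-time presenter of the kind evaluated in the paper, the root would be ticked repeatedly while perception modules update the blackboard, so the active branch can change as listener feedback changes; the concrete conditions and actions above are only one plausible decomposition.
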
Place, publisher, year, edition, pages
Stroudsburg, PA: Association for Computational Linguistics (ACL), 2019, p. 345-352
Keywords [en]
human-robot interaction, presentation, acceptance, understanding, hearing, attention, robot, Furhat, presenter, adaptive, non-adaptive, retention, engagement
Keywords [sv]
interaction, presentation, acceptance, understanding, comprehension, hearing, attention, robot, Furhat, presenter, adaptive, non-adaptive, memory, retention, engagement
National Category
Computer graphics and computer vision
Research subject
Speech and Music Communication; Human-computer Interaction
Identifiers
URN: urn:nbn:se:kth:diva-267218
DOI: 10.18653/v1/W19-5940
ISI: 000591510500040
Scopus ID: 2-s2.0-85083155670
OAI: oai:DiVA.org:kth-267218
DiVA, id: diva2:1391441
Conference
20th Annual SIGdial Meeting on Discourse and Dialogue, SIGdial 2019, Stockholm, Sweden, September 11-13, 2019
Projects
Co-adaptive Human-Robot Interactive Systems
Funder
Swedish Foundation for Strategic Research, RIT15-0133
Note

Part of proceedings: ISBN 978-1-950737-61-1

QC 20200205

Available from: 2020-02-04. Created: 2020-02-04. Last updated: 2025-02-07. Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus
SIGDIAL 2019

Authority records

Axelsson, Nils; Skantze, Gabriel
