Modelling Adaptive Presentations in Human-Robot Interaction using Behaviour Trees
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH. ORCID iD: 0000-0003-0112-6732
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH. ORCID iD: 0000-0002-8579-1790
2019 (English). In: 20th Annual Meeting of the Special Interest Group on Discourse and Dialogue: Proceedings of the Conference / [ed] Satoshi Nakamura, Stroudsburg, PA, 2019, p. 345-352. Conference paper, Published paper (Refereed)
Abstract [en]

In dialogue, speakers continuously adapt their speech to accommodate the listener, based on the feedback they receive. In this paper, we explore the modelling of such behaviours in the context of a robot presenting a painting. A Behaviour Tree is used to organise the behaviour on different levels, and allow the robot to adapt its behaviour in real-time; the tree organises engagement, joint attention, turn-taking, feedback and incremental speech processing. An initial implementation of the model is presented, and the system is evaluated in a user study, where the adaptive robot presenter is compared to a non-adaptive version. The adaptive version is found to be more engaging by the users, although no effects are found on the retention of the presented material.
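Although the full text is not available in DiVA, the Behaviour-Tree mechanism the abstract describes can be illustrated with a minimal sketch. The code below is illustrative only and assumes standard Sequence/Fallback semantics; the leaf names (`listener_attentive`, `regain_attention`, `present_next_segment`) are hypothetical and not taken from the paper:

```python
# Minimal behaviour-tree sketch (illustrative; not the authors' implementation).
SUCCESS, FAILURE = "success", "failure"

class Sequence:
    """Ticks children in order; fails as soon as one child fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class Fallback:
    """Tries children in order; succeeds as soon as one child succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

class Leaf:
    """Wraps a condition or action as a callable returning True/False."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return SUCCESS if self.fn() else FAILURE

# Hypothetical presenter tree: only continue the presentation while the
# listener is attentive; otherwise try to regain attention first.
tree = Sequence(
    Fallback(
        Leaf("listener_attentive", lambda: True),   # stub sensor check
        Leaf("regain_attention", lambda: True),     # stub recovery action
    ),
    Leaf("present_next_segment", lambda: True),     # stub presentation step
)
result = tree.tick()
```

Ticking the tree repeatedly lets such a robot re-evaluate engagement in real time: when the attention check fails, the Fallback node diverts to a recovery behaviour before the presentation resumes.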

Place, publisher, year, edition, pages
Stroudsburg, PA, 2019. p. 345-352
Keywords [en]
human-robot interaction, presentation, acceptance, understanding, hearing, attention, robot, Furhat, presenter, adaptive, non-adaptive, retention, engagement
Keywords [sv]
interaktion (interaction), presentation, acceptans (acceptance), förståelse (understanding), förstånd (comprehension), hörsel (hearing), uppmärksamhet (attention), robot, Furhat, presentatör (presenter), adaptiv (adaptive), ickeadaptiv (non-adaptive), minne (memory), ihågkomst (recall), engagemang (engagement)
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Speech and Music Communication; Human-computer Interaction
Identifiers
URN: urn:nbn:se:kth:diva-267218
ISBN: 978-1-950737-61-1 (electronic)
OAI: oai:DiVA.org:kth-267218
DiVA, id: diva2:1391441
Conference
SIGDIAL 2019
Projects
Co-adaptive Human-Robot Interactive Systems
Funder
Swedish Foundation for Strategic Research, RIT15-0133
Note

QC 20200205

Available from: 2020-02-04. Created: 2020-02-04. Last updated: 2020-02-05. Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

SIGDIAL 2019

Authority records BETA

Axelsson, Nils; Skantze, Gabriel

