What's at Stake?: Robot explanations matter for high but not low-stake scenarios
Melsión, Gaspar Isaac. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0001-9242-9127
Stower, Rebecca. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0002-6158-4818
Winkle, Katie. Uppsala University, Department of Information Technology, Uppsala, Sweden. ORCID iD: 0000-0002-3309-3552
Leite, Iolanda. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0002-2212-4325
2023 (English). In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 2421-2426. Conference paper, published paper (refereed)
Abstract [en]

Although the field of Explainable Artificial Intelligence (XAI) in Human-Robot Interaction (HRI) is attracting increasing attention, how well different explanations compare across HRI scenarios is still not well understood. We conducted an exploratory online study with 335 participants analysing the interaction between the type of explanation (counterfactual, feature-based, and no explanation), the stake of the scenario (high, low), and the application scenario (healthcare, industry). Participants viewed one of 12 different vignettes depicting a combination of these three factors and rated their system understanding and trust in the robot. Compared to no explanation, both counterfactual and feature-based explanations improved system understanding and performance trust (but not moral trust). Additionally, when no explanation was present, high-stake scenarios led to significantly worse performance trust and system understanding. These findings suggest that explanations can be used to calibrate users' perceptions of the robot in high-stake scenarios.
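
The 12 vignettes follow directly from fully crossing the three factors (3 explanation types x 2 stakes x 2 application scenarios). A minimal sketch of that factorial crossing, assuming only the factor levels named in the abstract (the label strings are illustrative, not the study's actual materials):

    from itertools import product

    # Factor levels as described in the abstract; labels are illustrative.
    explanations = ["counterfactual", "feature-based", "no explanation"]
    stakes = ["high", "low"]
    scenarios = ["healthcare", "industry"]

    # Full factorial crossing: 3 * 2 * 2 = 12 vignette conditions,
    # each participant viewing exactly one of them.
    conditions = list(product(explanations, stakes, scenarios))
    assert len(conditions) == 12

    for i, (explanation, stake, scenario) in enumerate(conditions, 1):
        print(f"Vignette {i:2d}: {explanation}, {stake}-stake, {scenario}")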

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 2421-2426
Series
IEEE RO-MAN, ISSN 1944-9445
National Category
Robotics and automation
Identifiers
URN: urn:nbn:se:kth:diva-341992
DOI: 10.1109/RO-MAN57019.2023.10309566
ISI: 001108678600318
Scopus ID: 2-s2.0-85186964360
OAI: oai:DiVA.org:kth-341992
DiVA, id: diva2:1825214
Conference
32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), August 28-31, 2023, Busan, South Korea
Note

Part of proceedings, ISBN 979-8-3503-3670-2

Not a duplicate of DiVA record 1198533

QC 20240109

Available from: 2024-01-09. Created: 2024-01-09. Last updated: 2025-02-09. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus
