kth.se Publications
Sounding Robots: Design and Evaluation of Auditory Displays for Unintentional Human-robot Interaction
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0002-2212-4325
KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. ORCID iD: 0000-0002-3086-0322
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0002-8601-1370
2023 (English). In: ACM Transactions on Human-Robot Interaction, E-ISSN 2573-9522, Vol. 12, no. 4, article id 49. Article in journal (Refereed). Published.
Abstract [en]

Non-verbal communication is important in HRI, particularly when humans and robots do not actively engage in a task together but rather co-exist in a shared space. Robots might still need to communicate states such as urgency or availability, as well as where they intend to go, to avoid collisions and disruptions. Sounds can convey such states and intentions in an intuitive and non-disruptive way. Here, we propose a multi-layer classification system for displaying various robot information simultaneously via sound. We first conceptualise which robot features could be displayed (robot size, speed, availability for interaction, urgency, and directionality); we then map them to a set of audio parameters. The designed sounds were evaluated in five online studies in which participants listened to the sounds and were asked to identify the associated robot features. Participants generally understood the sounds as intended, especially when evaluating one feature at a time, and partially when evaluating two features simultaneously. These results suggest that sound can be successfully used to communicate robot states and intended actions implicitly and intuitively.
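The mapping described in the abstract (robot features onto audio parameters) can be sketched as a simple lookup function. Note that the feature names come from the abstract, but the specific audio parameters, value ranges, and mapping directions below are illustrative assumptions, not the paper's actual design:

```python
def map_robot_state_to_audio(size, speed, available, urgency, direction_deg):
    """Hypothetical multi-layer mapping of robot features to audio parameters.

    size, speed, and urgency are normalised to [0, 1]; direction_deg is the
    robot's heading in degrees relative to the listener (negative = left).
    All parameter ranges below are assumed for illustration.
    """
    return {
        # Assumed: larger robots -> lower fundamental pitch (400 down to 100 Hz).
        "pitch_hz": 400 - 300 * size,
        # Assumed: faster robots -> faster rhythmic pulsing (60-180 BPM).
        "tempo_bpm": 60 + 120 * speed,
        # Assumed: availability signalled by a harmonic vs inharmonic timbre.
        "timbre": "harmonic" if available else "inharmonic",
        # Assumed: higher urgency -> louder output (-20 up to -6 dB).
        "gain_db": -20 + 14 * urgency,
        # Assumed: directionality rendered as stereo panning (-1 left .. +1 right).
        "pan": max(-1.0, min(1.0, direction_deg / 90.0)),
    }

# A large, slow, available robot approaching urgently from the left:
params = map_robot_state_to_audio(size=0.8, speed=0.2, available=True,
                                  urgency=0.9, direction_deg=-45)
```

Because each feature drives a distinct audio parameter, several robot states can be rendered in one sound at once, which is the core idea behind the multi-layer design the abstract describes.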

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023. Vol. 12, no. 4, article id 49
Keywords [en]
Sonification, Auditory Display, Design Evaluation, Non-verbal Communication, Unintentional Human-Robot Interaction
National Category
Human Computer Interaction; Robotics
Identifiers
URN: urn:nbn:se:kth:diva-342398
DOI: 10.1145/3611655
ISI: 001153514400005
Scopus ID: 2-s2.0-85181449398
OAI: oai:DiVA.org:kth-342398
DiVA id: diva2:1828910
Note

QC 20240122

Available from: 2024-01-17. Created: 2024-01-17. Last updated: 2024-03-05. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Leite, Iolanda; Bresin, Roberto; Torre, Ilaria

Search in DiVA

By author/editor
Orthmann, Bastian; Leite, Iolanda; Bresin, Roberto; Torre, Ilaria
By organisation
Robotics, Perception and Learning, RPL; Media Technology and Interaction Design, MID
In the same journal
ACM Transactions on Human-Robot Interaction
