Publications (10 of 36)
Winkle, K., Lagerstedt, E., Torre, I. & Offenwanger, A. (2023). 15 Years of (Who)man Robot Interaction: Reviewing the H in Human-Robot Interaction. ACM Transactions on Human-Robot Interaction, 12(3), Article ID 3571718.
15 Years of (Who)man Robot Interaction: Reviewing the H in Human-Robot Interaction
2023 (English). In: ACM Transactions on Human-Robot Interaction, E-ISSN 2573-9522, Vol. 12, no. 3, article id 3571718. Article in journal (Refereed). Published.
Abstract [en]

Recent work identified a concerning trend of disproportionate gender representation among research participants in Human-Computer Interaction (HCI). Motivated by the fact that Human-Robot Interaction (HRI) shares many participant practices with HCI, we explored whether this trend is mirrored in our field. By producing a dataset covering participant gender representation in all 684 full papers published at the HRI conference from 2006 to 2021, we identify current trends in HRI research participation. We find an over-representation of men among research participants to date, as well as inconsistent and/or incomplete gender reporting, which typically engages in a binary treatment of gender at odds with published best-practice guidelines. We further examine if and how participant gender has been considered in user studies to date, in line with current discourse surrounding the importance and/or potential risks of gender-based analyses. Finally, we complement this with a survey of HRI researchers to examine correlations between who is doing the research and who is taking part, to further reflect on factors which seemingly influence gender bias in research participation across different sub-fields of HRI. Through our analysis, we identify areas for improvement, but also reasons for optimism, and derive practical suggestions for HRI researchers going forward.
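The core measurement described above (the share of reported participant genders across papers) is simple to reproduce on any similarly structured dataset. A minimal sketch in Python/pandas, assuming a hypothetical CSV with one row per paper and columns reports_gender, n_men, n_women, and n_other_or_nonbinary (the authors' actual schema is not specified here):

```python
import pandas as pd

# Hypothetical file and schema: one row per HRI paper, 2006-2021.
df = pd.read_csv("hri_papers_2006_2021.csv")

# Share of papers that report participant gender at all.
reported = df[df["reports_gender"]]
print(f"Gender reported in {len(reported)} of {len(df)} papers")

# Aggregate the reported counts to estimate overall representation.
totals = reported[["n_men", "n_women", "n_other_or_nonbinary"]].sum()
print((totals / totals.sum()).round(3))
```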

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
Gender, inclusivity, participant recruitment, systematic review, user study methodologies
National Category
Gender Studies; Robotics; Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-334861 (URN), 10.1145/3571718 (DOI), 001020331600001 (), 2-s2.0-85163177354 (Scopus ID)
Note

QC 20230829

Available from: 2023-08-28. Created: 2023-08-28. Last updated: 2023-08-29. Bibliographically approved.
Torre, I., Lagerstedt, E., Dennler, N., Seaborn, K., Leite, I. & Székely, É. (2023). Can a gender-ambiguous voice reduce gender stereotypes in human-robot interactions? In: 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN: . Paper presented at 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 28-31, 2023, Busan, SOUTH KOREA (pp. 106-112). Institute of Electrical and Electronics Engineers (IEEE)
Can a gender-ambiguous voice reduce gender stereotypes in human-robot interactions?
2023 (English). In: 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 106-112. Conference paper, Published paper (Refereed).
Abstract [en]

When a robot is deployed, its physical characteristics, role, and tasks are often fixed. Such factors can also be associated with gender stereotypes among humans, which then transfer to the robot. One factor that can induce gendering but is comparatively easy to change is the robot's voice. Designing the voice in a way that interferes with fixed factors might therefore be a way to reduce gender stereotypes in human-robot interaction contexts. To this end, we conducted a video-based online study to investigate how factors that might inspire gendering of a robot interact. In particular, we investigated how giving the robot a gender-ambiguous voice can affect perception of the robot. We compared assessments (n=111) of videos in which a robot's body presentation and occupation mis/matched human gender stereotypes. We found evidence that a gender-ambiguous voice can reduce gendering of a robot endowed with stereotypically feminine or masculine attributes. The results can inform more just robot design while opening new questions regarding the phenomenon of robot gendering.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE RO-MAN, ISSN 1944-9445
National Category
Gender Studies
Identifiers
urn:nbn:se:kth:diva-342047 (URN), 10.1109/RO-MAN57019.2023.10309500 (DOI), 001108678600016 (), 2-s2.0-85187027115 (Scopus ID)
Conference
32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 28-31, 2023, Busan, SOUTH KOREA
Note

Part of proceedings: ISBN 979-8-3503-3670-2

QC 20240110

Available from: 2024-01-10. Created: 2024-01-10. Last updated: 2024-03-22. Bibliographically approved.
Zhang, B. J., Orthmann, B., Torre, I., Bresin, R., Fick, J., Leite, I. & Fitter, N. T. (2023). Hearing it Out: Guiding Robot Sound Design through Design Thinking. In: 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN: . Paper presented at 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 28-31, 2023, Busan, SOUTH KOREA (pp. 2064-2071). Institute of Electrical and Electronics Engineers (IEEE)
Hearing it Out: Guiding Robot Sound Design through Design Thinking
2023 (English). In: 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 2064-2071. Conference paper, Published paper (Refereed).
Abstract [en]

Sound can benefit human-robot interaction, but little work has explored questions on the design of nonverbal sound for robots. The unique confluence of sound design and robotics expertise complicates these questions, as most roboticists do not have sound design expertise, necessitating collaborations with sound designers. We sought to understand how roboticists and sound designers approach the problem of robot sound design through two qualitative studies. The first study followed discussions by robotics researchers in focus groups, where these experts described motivations to add robot sound for various purposes. The second study guided music technology students through a generative activity for robot sound design; these sound designers in-training demonstrated high variability in design intent, processes, and inspiration. To unify the two perspectives, we structured recommendations through the design thinking framework, a popular design process. The insights provided in this work may aid roboticists in implementing helpful sounds in their robots, encourage sound designers to enter into collaborations on robot sound, and give key tips and warnings to both.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE RO-MAN, ISSN 1944-9445
National Category
Design
Identifiers
urn:nbn:se:kth:diva-342045 (URN), 10.1109/RO-MAN57019.2023.10309489 (DOI), 001108678600269 (), 2-s2.0-85186967284 (Scopus ID)
Conference
32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 28-31, 2023, Busan, SOUTH KOREA
Note

Part of proceedings: ISBN 979-8-3503-3670-2

QC 20240110

Available from: 2024-01-10. Created: 2024-01-10. Last updated: 2024-03-22. Bibliographically approved.
Székely, É., Gustafsson, J. & Torre, I. (2023). Prosody-controllable gender-ambiguous speech synthesis: a tool for investigating implicit bias in speech perception. In: Interspeech 2023: . Paper presented at the 24th Annual Conference of the International Speech Communication Association (Interspeech 2023), Dublin, Ireland, Aug 20-24, 2023 (pp. 1234-1238). International Speech Communication Association
Prosody-controllable gender-ambiguous speech synthesis: a tool for investigating implicit bias in speech perception
2023 (English). In: Interspeech 2023, International Speech Communication Association, 2023, p. 1234-1238. Conference paper, Published paper (Refereed).
Abstract [en]

This paper proposes a novel method to develop gender-ambiguous TTS, which can be used to investigate hidden gender bias in speech perception. Our aim is to provide a tool for researchers to conduct experiments on language use associated with specific genders. Ambiguous voices can also be beneficial for virtual assistants, helping to reduce stereotypes and increase acceptance. Our approach uses a multi-speaker embedding in a neural TTS engine, combining two corpora recorded by a male and a female speaker to achieve a gender-ambiguous timbre. We also propose speaker-disentangled prosody control to ensure that the timbre is robust across a range of prosodies and to enable more expressive speech. We optimised the output using an SSL-based network trained on hundreds of speakers. We conducted perceptual evaluations on the settings that were judged most ambiguous by the network; listeners perceived the speech samples as gender-ambiguous, even in prosody-controlled conditions.
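A minimal sketch of the embedding-interpolation idea behind a gender-ambiguous timbre. Everything here is hypothetical: the embeddings, the p_female callable (a stand-in for the SSL-based network mentioned in the abstract), and the grid search; the paper's actual TTS conditioning is not reproduced.

```python
import numpy as np

def interpolated_speaker(emb_a: np.ndarray, emb_b: np.ndarray,
                         alpha: float) -> np.ndarray:
    """Blend two speaker embeddings; alpha=0 gives speaker A, alpha=1 speaker B."""
    return (1.0 - alpha) * emb_a + alpha * emb_b

def most_ambiguous_alpha(emb_a, emb_b, p_female,
                         alphas=np.linspace(0.0, 1.0, 21)) -> float:
    """Pick the blend that a gender classifier finds closest to chance
    (p(female) = 0.5). `p_female` is a hypothetical callable scoring an
    embedding (or speech synthesized from it)."""
    scores = [abs(p_female(interpolated_speaker(emb_a, emb_b, a)) - 0.5)
              for a in alphas]
    return float(alphas[int(np.argmin(scores))])
```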

Place, publisher, year, edition, pages
International Speech Communication Association, 2023
Keywords
gender bias, human-computer interaction, speech synthesis
National Category
Language Technology (Computational Linguistics)
Identifiers
urn:nbn:se:kth:diva-337832 (URN), 10.21437/Interspeech.2023-2086 (DOI), 2-s2.0-85171582438 (Scopus ID)
Conference
24th Annual Conference of the International Speech Communication Association (Interspeech 2023), Dublin, Ireland, Aug 20-24, 2023
Note

QC 20231009

Available from: 2023-10-09. Created: 2023-10-09. Last updated: 2023-10-09. Bibliographically approved.
Romeo, M., Torre, I., Le Maguer, S., Cangelosi, A. & Leite, I. (2023). Putting Robots in Context: Challenging the Influence of Voice and Empathic Behaviour on Trust. In: 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN: . Paper presented at 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 28-31, 2023, Busan, SOUTH KOREA (pp. 2045-2050). Institute of Electrical and Electronics Engineers (IEEE)
Putting Robots in Context: Challenging the Influence of Voice and Empathic Behaviour on Trust
2023 (English). In: 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 2045-2050. Conference paper, Published paper (Refereed).
Abstract [en]

Trust is essential for social interactions, including those between humans and social artificial agents such as robots. Several robot-related factors can contribute to the formation of trust. However, previous work has often treated trust as an absolute concept, whereas it is highly context-dependent, and it is possible that some robot-related features will influence trust in some contexts but not in others. In this paper, we present the results of two video-based online studies investigating the role of robot voice and empathic behaviour in trust formation, in a general context as well as in a task-specific context. We found that voice influences trust in the specific context, with no effect of voice or empathic behaviour in the general context. Thus, context mediates whether robot-related features play a role in people's trust formation towards robots.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE RO-MAN, ISSN 1944-9445
National Category
Interaction Technologies
Identifiers
urn:nbn:se:kth:diva-342054 (URN), 10.1109/RO-MAN57019.2023.10309631 (DOI), 001108678600266 (), 2-s2.0-85187007764 (Scopus ID)
Conference
32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 28-31, 2023, Busan, SOUTH KOREA
Note

Part of proceedings: ISBN 979-8-3503-3670-2

QC 20240110

Available from: 2024-01-10. Created: 2024-01-10. Last updated: 2024-03-21. Bibliographically approved.
Linard, A., Torre, I., Bartoli, E., Sleat, A., Leite, I. & Tumova, J. (2023). Real-time RRT* with Signal Temporal Logic Preferences. In: 2023 IEEE/RSJ international conference on intelligent robots and systems (IROS): . Paper presented at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct 1-5, 2023, Detroit, USA. IEEE
Real-time RRT* with Signal Temporal Logic Preferences
2023 (English). In: 2023 IEEE/RSJ international conference on intelligent robots and systems (IROS), IEEE, 2023. Conference paper, Published paper (Other academic).
Abstract [en]

Signal Temporal Logic (STL) is a rigorous specification language that allows one to express various spatio-temporal requirements and preferences. Its semantics (called robustness) allows quantifying to what extent the STL specifications are met. In this work, we focus on enabling STL constraints and preferences in the Real-Time Rapidly Exploring Random Tree (RT-RRT*) motion planning algorithm in an environment with dynamic obstacles. We propose a cost function that guides the algorithm towards the asymptotically most robust solution, i.e., a plan that maximally adheres to the STL specification. In experiments, we applied our method to a social navigation case, where the STL specification captures spatio-temporal preferences on how a mobile robot should avoid an incoming human in a shared space. Our results show that our approach leads to plans adhering to the STL specification, while ensuring efficient cost computation.
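To make STL robustness and a robustness-aware planner cost concrete: a minimal sketch for the simple specification "always keep at least d_safe distance from the human". The linear length-vs-robustness trade-off below is a hypothetical illustration; the paper's actual cost function may differ.

```python
import numpy as np

def rho_always_ge(signal: np.ndarray, threshold: float) -> float:
    """Robustness of G (signal >= threshold): the minimum margin over time.
    Positive means satisfied with slack; negative means violated."""
    return float(np.min(signal - threshold))

def node_cost(path_length: float, robustness: float, lam: float = 1.0) -> float:
    """Hypothetical RT-RRT* node cost: shorter plans are cheaper, more
    robust plans are cheaper."""
    return path_length - lam * robustness

# Example: distances (m) to a human sampled along a candidate plan.
dist = np.array([2.0, 1.4, 0.9, 0.7, 1.1])
print(rho_always_ge(dist, 0.5))              # 0.2 -> satisfied, 0.2 m of slack
print(node_cost(4.2, rho_always_ge(dist, 0.5)))  # 4.0
```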

Place, publisher, year, edition, pages
IEEE, 2023
Keywords
Signal Temporal Logic, Real-Time Planning, Sampling-based Motion Planning
National Category
Control Engineering; Computer Engineering
Identifiers
urn:nbn:se:kth:diva-325105 (URN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct 1-5, 2023, Detroit, USA
Note

QC 20231122

Available from: 2023-03-29. Created: 2023-03-29. Last updated: 2023-11-22. Bibliographically approved.
Orthmann, B., Leite, I., Bresin, R. & Torre, I. (2023). Sounding Robots: Design and Evaluation of Auditory Displays for Unintentional Human-robot Interaction. ACM Transactions on Human-Robot Interaction, 12(4), Article ID 49.
Sounding Robots: Design and Evaluation of Auditory Displays for Unintentional Human-robot Interaction
2023 (English). In: ACM Transactions on Human-Robot Interaction, E-ISSN 2573-9522, Vol. 12, no. 4, article id 49. Article in journal (Refereed). Published.
Abstract [en]

Non-verbal communication is important in HRI, particularly when humans and robots do not need to actively engage in a task together but rather co-exist in a shared space. Robots might still need to communicate states such as urgency or availability, and where they intend to go, to avoid collisions and disruptions. Sounds could be used to communicate such states and intentions in an intuitive and non-disruptive way. Here, we propose a multi-layer classification system for displaying various robot information simultaneously via sound. We first conceptualise which robot features could be displayed (robot size, speed, availability for interaction, urgency, and directionality); we then map them to a set of audio parameters. The designed sounds were then evaluated in five online studies, where people listened to the sounds and were asked to identify the associated robot features. The sounds were generally understood as intended, especially when evaluated one feature at a time, and partially when evaluated two features simultaneously. The results of these evaluations suggest that sounds can be successfully used to communicate robot states and intended actions implicitly and intuitively.
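A minimal sketch of what a feature-to-audio-parameter mapping could look like. The ranges and mappings below are hypothetical illustrations of the general idea, not the parameters designed in the paper.

```python
from dataclasses import dataclass

@dataclass
class RobotState:
    size: float       # 0 (small) .. 1 (large)
    speed: float      # 0 (still) .. 1 (fast)
    urgency: float    # 0 (relaxed) .. 1 (urgent)
    available: bool   # open for interaction?

def audio_params(s: RobotState) -> dict:
    """Hypothetical mapping: larger robots sound lower, faster robots pulse
    more densely, urgency raises loudness, availability brightens timbre."""
    return {
        "pitch_hz": 400.0 - 250.0 * s.size,
        "pulse_rate_hz": 1.0 + 7.0 * s.speed,
        "gain_db": -20.0 + 14.0 * s.urgency,
        "timbre": "bright" if s.available else "muted",
    }

print(audio_params(RobotState(size=0.8, speed=0.3, urgency=0.9, available=False)))
```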

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
Sonification, Auditory Display, Design Evaluation, Non-verbal communication, unintentional Human-Robot Interaction
National Category
Human Computer Interaction; Robotics
Identifiers
urn:nbn:se:kth:diva-342398 (URN), 10.1145/3611655 (DOI), 001153514400005 (), 2-s2.0-85181449398 (Scopus ID)
Note

QC 20240122

Available from: 2024-01-17. Created: 2024-01-17. Last updated: 2024-03-05. Bibliographically approved.
Dogan, F. I., Torre, I. & Leite, I. (2022). Asking Follow-Up Clarifications to Resolve Ambiguities in Human-Robot Conversation. In: ACM/IEEE International Conference on Human-Robot Interaction: . Paper presented at 17th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2022, 7 March 2022 through 10 March 2022 (pp. 461-469). IEEE Computer Society
Asking Follow-Up Clarifications to Resolve Ambiguities in Human-Robot Conversation
2022 (English). In: ACM/IEEE International Conference on Human-Robot Interaction, IEEE Computer Society, 2022, p. 461-469. Conference paper, Published paper (Refereed).
Abstract [en]

When a robot aims to comprehend its human partner's request by identifying the referenced objects in Human-Robot Conversation, ambiguities can occur because the environment might contain many similar objects, or the objects described in the request might be unknown to the robot. In the case of ambiguities, most systems ask users to repeat their request, which assumes that the robot is familiar with all of the objects in the environment. This assumption might lead to task failure, especially in complex real-world environments. In this paper, we address this challenge by presenting an interactive system that asks follow-up clarifications to disambiguate the described objects, using the pieces of information that the robot could understand from the request and the objects in the environment that are known to the robot. To evaluate our system while disambiguating the referenced objects, we conducted a user study with 63 participants. We analyzed the interactions when the robot asked for clarifications and when it asked users to redescribe the same object. Our results show that generating follow-up clarification questions helped the robot correctly identify the described objects with fewer attempts (i.e., conversational turns). Also, when people were asked clarification questions, they perceived the task as easier, and they evaluated the task understanding and competence of the robot as higher. Our code and anonymized dataset are publicly available: https://github.com/IrmakDogan/Resolving-Ambiguities.
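A minimal sketch of such a clarification loop, with hypothetical object dictionaries and an ask_user callback; the paper's system grounds its questions in perception and language understanding rather than this toy heuristic.

```python
def resolve_reference(understood: dict, scene: list[dict], ask_user) -> dict | None:
    """Narrow down candidate objects using the properties understood from the
    request; when several remain, ask a follow-up clarification instead of
    asking the user to repeat the whole request."""
    candidates = [o for o in scene
                  if all(o.get(k) == v for k, v in understood.items())]
    while len(candidates) > 1:
        # Find a property whose values still differ among the candidates.
        prop = next((k for k in candidates[0]
                     if len({o.get(k) for o in candidates}) > 1), None)
        if prop is None:
            break  # indistinguishable; fall back to the first match
        options = " or the ".join(str(o[prop]) for o in candidates)
        answer = ask_user(f"Do you mean the {options} one?")
        candidates = [o for o in candidates if str(o[prop]) == answer] or candidates[:1]
    return candidates[0] if candidates else None

# Usage with a toy scene:
scene = [{"type": "mug", "color": "red"}, {"type": "mug", "color": "blue"}]
print(resolve_reference({"type": "mug"}, scene, ask_user=lambda q: "red"))
```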

Place, publisher, year, edition, pages
IEEE Computer Society, 2022
Keywords
Follow-Up Clarifications, Referring Expressions, Resolving Ambiguities, Clarification, Robots, Follow up, Follow-up clarification, Human robots, Interactive system, Real world environments, Task failures, User study, Clarifiers
National Category
Robotics; Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-322409 (URN), 10.1109/HRI53351.2022.9889368 (DOI), 000869793600051 (), 2-s2.0-85127064182 (Scopus ID)
Conference
17th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2022, 7 March 2022 through 10 March 2022
Note

QC 20221214

Available from: 2022-12-14. Created: 2022-12-14. Last updated: 2023-02-23. Bibliographically approved.
Linard, A., Torre, I., Leite, I. & Tumova, J. (2022). Inference of Multi-Class STL Specifications for Multi-Label Human-Robot Encounters. In: 2022 IEEE/RSJ international conference on intelligent robots and systems (IROS): . Paper presented at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), OCT 23-27, 2022, Kyoto, JAPAN (pp. 1305-1311). Institute of Electrical and Electronics Engineers (IEEE)
Inference of Multi-Class STL Specifications for Multi-Label Human-Robot Encounters
2022 (English). In: 2022 IEEE/RSJ international conference on intelligent robots and systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2022, p. 1305-1311. Conference paper, Published paper (Refereed).
Abstract [en]

This paper is concerned with formalizing human trajectories in human-robot encounters. Inspired by robot navigation tasks in human-crowded environments, we consider the case where a human and a robot walk towards each other, and where the human has to avoid colliding with the incoming robot. Further, humans may exhibit different behaviors, ranging from being in a hurry/minimizing completion time to maximizing safety. We propose a decision-tree-based algorithm to extract STL formulae from multi-label data. Our inference algorithm learns STL specifications from data containing multiple classes, where instances can be labelled with one or more classes. We base our evaluation on a dataset of trajectories collected through an online study reproducing human-robot encounters.
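A minimal sketch of the flavor of decision-tree STL inference, using robustness values of candidate STL primitives as features. This is a simplified proxy built on scikit-learn, not the paper's algorithm, which grows the tree over STL primitives directly; the threshold bank and depth are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def rho_eventually_lt(sig: np.ndarray, c: float) -> float:
    return float(np.max(c - sig))   # rho(F(x < c)) = max_t (c - x[t])

def rho_always_gt(sig: np.ndarray, c: float) -> float:
    return float(np.min(sig - c))   # rho(G(x > c)) = min_t (x[t] - c)

def fit_stl_tree(trajectories, Y, thresholds=(0.5, 1.0, 1.5)):
    """Evaluate a bank of candidate STL primitives on each 1-D signal (e.g.,
    human-robot distance over time) and fit a multi-output decision tree on
    the robustness values; each learned split 'rho_i <= t' can then be read
    back as an STL primitive with margin t."""
    X = np.array([[rho_eventually_lt(s, c) for c in thresholds]
                  + [rho_always_gt(s, c) for c in thresholds]
                  for s in trajectories])
    return DecisionTreeClassifier(max_depth=3).fit(X, np.asarray(Y))
```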

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
Keywords
Temporal Logic Inference, Signal Temporal Logic, Human-Robot Interaction
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-324993 (URN), 10.1109/IROS47612.2022.9982088 (DOI), 000908368201044 (), 2-s2.0-85146319785 (Scopus ID)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), OCT 23-27, 2022, Kyoto, JAPAN
Note

QC 20230327

Available from: 2023-03-27. Created: 2023-03-27. Last updated: 2023-04-03. Bibliographically approved.
Laban, G., Le Maguer, S., Lee, M., Kontogiorgos, D., Reig, S., Torre, I., . . . Pereira, A. (2022). Robo-Identity: Exploring Artificial Identity and Emotion via Speech Interactions. In: PROCEEDINGS OF THE 2022 17TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI '22): . Paper presented at 17th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI), MAR 07-10, 2022, ELECTR NETWORK (pp. 1265-1268). Institute of Electrical and Electronics Engineers (IEEE)
Robo-Identity: Exploring Artificial Identity and Emotion via Speech Interactions
2022 (English). In: PROCEEDINGS OF THE 2022 17TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI '22), Institute of Electrical and Electronics Engineers (IEEE), 2022, p. 1265-1268. Conference paper, Published paper (Refereed).
Abstract [en]

Following the success of the first edition of Robo-Identity, the second edition will provide an opportunity to expand the discussion about artificial identity. This year, we are focusing on emotions that are expressed through speech and voice. Synthetic robot voices can resemble, and are becoming indistinguishable from, expressive human voices. This can be both an opportunity and a constraint: expressive emotional speech can (falsely) convey a human-like identity and mislead people, raising ethical issues. How should we envision an agent's artificial identity? In what ways should we have robots that maintain a machine-like stance, e.g., through robotic speech, and should emotional expressions that are increasingly human-like be seen as design opportunities? These are not mutually exclusive concerns. As this discussion needs to be conducted in a multidisciplinary manner, we welcome perspectives on challenges and opportunities from a variety of fields. For this year's edition, the special theme will be "speech, emotion and artificial identity".

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
Series
ACM IEEE International Conference on Human-Robot Interaction, ISSN 2167-2121
Keywords
artificial identity, voice, speech, emotion, affective computing, human-robot interaction, affective science
National Category
Human Computer Interaction; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-322490 (URN), 10.1109/HRI53351.2022.9889649 (DOI), 000869793600219 (), 2-s2.0-85140715625 (Scopus ID)
Conference
17th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI), MAR 07-10, 2022, ELECTR NETWORK
Note

Part of proceedings: ISBN 978-1-6654-0731-1

QC 20221216

Available from: 2022-12-16. Created: 2022-12-16. Last updated: 2022-12-16. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-8601-1370