Publications (10 of 39)
Romeo, M., Torre, I., Le Maguer, S., Sleat, A., Cangelosi, A. & Leite, I. (2025). The Effect of Voice and Repair Strategy on Trust Formation and Repair in Human-Robot Interaction. ACM Transactions on Human-Robot Interaction, 14(2), Article ID 33.
2025 (English). In: ACM Transactions on Human-Robot Interaction, E-ISSN 2573-9522, Vol. 14, no. 2, article id 33. Article in journal (Refereed). Published
Abstract [en]

Trust is essential for social interactions, including those between humans and social artificial agents, such as robots. Several factors and combinations thereof can contribute to the formation of trust and, importantly in the case of machines that work with a certain margin of error, to its maintenance and repair after it has been breached. In this article, we present the results of a study investigating the role of robot voice and chosen repair strategy on trust formation and repair in a collaborative task. People helped a robot navigate through a maze, and the robot made mistakes at pre-defined points during the navigation. Via in-game behaviour and follow-up questionnaires, we measured people's trust towards the robot. We found that people trusted the robot more when it spoke with a state-of-the-art synthetic voice than with the default robot voice in the game, even though they indicated the opposite in the questionnaires. Additionally, we found that three repair strategies that people use in human-human interaction (justification of the mistake, promise to be better, and denial of the mistake) also work in human-robot interaction.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2025
Keywords
CCS Concepts: Human-centered computing -> Human-computer interaction (HCI); Auditory feedback
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-363669 (URN), 10.1145/3711938 (DOI), 001460064300002 (), 2-s2.0-105003626883 (Scopus ID)
Note

QC 20250520

Available from: 2025-05-20. Created: 2025-05-20. Last updated: 2025-05-20. Bibliographically approved
Torre, I., Holk, S., Yadollahi, E., Leite, I., McDonnell, R. & Harte, N. (2024). Smiling in the Face and Voice of Avatars and Robots: Evidence for a ‘smiling McGurk Effect’. IEEE Transactions on Affective Computing, 15(2), 393-404
2024 (English). In: IEEE Transactions on Affective Computing, E-ISSN 1949-3045, Vol. 15, no. 2, pp. 393-404. Article in journal (Refereed). Published
Abstract [en]

Multisensory integration influences emotional perception, as the McGurk effect demonstrates for communication between humans. Human physiology implicitly links the production of visual features with other modes such as the audio channel: the face muscles responsible for a smiling face also stretch the vocal cords, resulting in a characteristic smiling voice. For artificial agents capable of multimodal expression, this linkage is modeled explicitly. In our studies, we observe the influence of the visual and audio channels on the perception of the agents' emotional expression. We created videos of virtual characters and social robots with either matching or mismatching emotional expressions in the audio and visual channels. In two online studies, we measured the agents' perceived valence and arousal. Our results consistently lend support to the 'emotional McGurk effect' hypothesis, according to which the face transmits valence information and the voice transmits arousal. For dynamic virtual characters, visual information is enough to convey both valence and arousal, so audio expressivity need not be congruent. For robots with fixed facial expressions, however, both visual and audio information need to be present to convey the intended expression.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
Face recognition, Faces, Human-Likeness, multisensory integration, Muscles, Robots, smiling, Social robots, Videos, virtual agent, Visualization, Behavioral research, Muscle, Virtual reality, Audio channels, Face, Human likeness, McGurk effect, Video, Visual channels
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-328339 (URN), 10.1109/TAFFC.2022.3213269 (DOI), 001236687600001 (), 2-s2.0-85139846163 (Scopus ID)
Note

QC 20230608

Available from: 2023-06-08. Created: 2023-06-08. Last updated: 2024-06-17. Bibliographically approved
Torre, I., White, L., Goslin, J. & Knight, S. (2024). The irrepressible influence of vocal stereotypes on trust. Quarterly Journal of Experimental Psychology, 77(10), 1957-1966
2024 (English). In: Quarterly Journal of Experimental Psychology, ISSN 1747-0218, E-ISSN 1747-0226, Vol. 77, no. 10, pp. 1957-1966. Article in journal (Refereed). Published
Abstract [en]

There is a reciprocal relationship between trust and vocal communication in human interactions. On one hand, a predisposition towards trust is necessary for communication to be meaningful and effective. On the other hand, we use vocal cues to signal our own trustworthiness and to infer it from the speech of others. Research on trustworthiness attributions to vocal characteristics is scarce and contradictory, however, being typically based on explicit judgements which may not predict actual trust-oriented behaviour. We use a game theory paradigm to examine the influence of speaker accent and prosody on trusting behaviour towards a simulated game partner, who responds either trustworthily or untrustworthily in an investment game. We found that speaking in a non-regional standard accent increases trust, as does relatively slow articulation rate. The effect of accent persists over time, despite the accumulation of clear evidence regarding the speaker’s level of trustworthiness in a negotiated interaction. Accents perceived as positive for trust can maintain this benefit even in the face of behavioural evidence of untrustworthiness.

Place, publisher, year, edition, pages
SAGE Publications, 2024
Keywords
Accent, prosody, trust
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-350296 (URN), 10.1177/17470218231211549 (DOI), 001110355200001 (), 37872679 (PubMedID), 2-s2.0-85177862591 (Scopus ID)
Note

QC 20240711

Available from: 2024-07-11. Created: 2024-07-11. Last updated: 2025-02-11. Bibliographically approved
Winkle, K., Lagerstedt, E., Torre, I. & Offenwanger, A. (2023). 15 Years of (Who)man Robot Interaction: Reviewing the H in Human-Robot Interaction. ACM Transactions on Human-Robot Interaction, 12(3), Article ID 3571718.
2023 (English). In: ACM Transactions on Human-Robot Interaction, E-ISSN 2573-9522, Vol. 12, no. 3, article id 3571718. Article in journal (Refereed). Published
Abstract [en]

Recent work identified a concerning trend of disproportional gender representation among research participants in Human-Computer Interaction (HCI). Motivated by the fact that Human-Robot Interaction (HRI) shares many participant practices with HCI, we explored whether this trend is mirrored in our field. By producing a dataset covering participant gender representation in all 684 full papers published at the HRI conference from 2006 to 2021, we identify current trends in HRI research participation. We find an over-representation of men among research participants to date, as well as inconsistent and/or incomplete gender reporting, which typically engages in a binary treatment of gender at odds with published best-practice guidelines. We further examine if and how participant gender has been considered in user studies to date, in line with current discourse surrounding the importance and/or potential risks of gender-based analyses. Finally, we complement this with a survey of HRI researchers to examine correlations between who is doing the research and who is taking part, to further reflect on factors which seemingly influence gender bias in research participation across different sub-fields of HRI. Through our analysis, we identify areas for improvement, but also reason for optimism, and derive some practical suggestions for HRI researchers going forward.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
Gender, inclusivity, participant recruitment, systematic review, user study methodologies
National Category
Gender Studies; Robotics and automation; Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-334861 (URN), 10.1145/3571718 (DOI), 001020331600001 (), 2-s2.0-85163177354 (Scopus ID)
Note

QC 20230829

Available from: 2023-08-28. Created: 2023-08-28. Last updated: 2025-02-05. Bibliographically approved
Torre, I., Lagerstedt, E., Dennler, N., Seaborn, K., Leite, I. & Székely, É. (2023). Can a gender-ambiguous voice reduce gender stereotypes in human-robot interactions?. In: 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN: . Paper presented at 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 28-31, 2023, Busan, SOUTH KOREA (pp. 106-112). Institute of Electrical and Electronics Engineers (IEEE)
2023 (English). In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Institute of Electrical and Electronics Engineers (IEEE), 2023, pp. 106-112. Conference paper, Published paper (Refereed)
Abstract [en]

When a robot is deployed, its physical characteristics, role, and tasks are often fixed. Such factors can also be associated with gender stereotypes among humans, which then transfer to the robots. One factor that can induce gendering but is comparatively easy to change is the robot's voice. Designing the voice in a way that interferes with the fixed factors might therefore be a way to reduce gender stereotypes in human-robot interaction contexts. To this end, we conducted a video-based online study to investigate how factors that might inspire gendering of a robot interact. In particular, we investigated how giving the robot a gender-ambiguous voice can affect perception of the robot. We compared assessments (n=111) of videos in which a robot's body presentation and occupation matched or mismatched human gender stereotypes. We found evidence that a gender-ambiguous voice can reduce gendering of a robot endowed with stereotypically feminine or masculine attributes. The results can inform more just robot design while opening new questions regarding the phenomenon of robot gendering.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE RO-MAN, ISSN 1944-9445
National Category
Gender Studies
Identifiers
urn:nbn:se:kth:diva-342047 (URN), 10.1109/RO-MAN57019.2023.10309500 (DOI), 001108678600016 (), 2-s2.0-85187027115 (Scopus ID)
Conference
32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 28-31, 2023, Busan, SOUTH KOREA
Note

Part of proceedings ISBN 979-8-3503-3670-2

QC 20240110

Available from: 2024-01-10. Created: 2024-01-10. Last updated: 2024-03-22. Bibliographically approved
Zhang, B. J., Orthmann, B., Torre, I., Bresin, R., Fick, J., Leite, I. & Fitter, N. T. (2023). Hearing it Out: Guiding Robot Sound Design through Design Thinking. In: 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN: . Paper presented at 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 28-31, 2023, Busan, SOUTH KOREA (pp. 2064-2071). Institute of Electrical and Electronics Engineers (IEEE)
2023 (English). In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Institute of Electrical and Electronics Engineers (IEEE), 2023, pp. 2064-2071. Conference paper, Published paper (Refereed)
Abstract [en]

Sound can benefit human-robot interaction, but little work has explored questions on the design of nonverbal sound for robots. The unique confluence of sound design and robotics expertise complicates these questions, as most roboticists do not have sound design expertise, necessitating collaborations with sound designers. We sought to understand how roboticists and sound designers approach the problem of robot sound design through two qualitative studies. The first study followed discussions by robotics researchers in focus groups, where these experts described motivations to add robot sound for various purposes. The second study guided music technology students through a generative activity for robot sound design; these sound designers in-training demonstrated high variability in design intent, processes, and inspiration. To unify the two perspectives, we structured recommendations through the design thinking framework, a popular design process. The insights provided in this work may aid roboticists in implementing helpful sounds in their robots, encourage sound designers to enter into collaborations on robot sound, and give key tips and warnings to both.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE RO-MAN, ISSN 1944-9445
National Category
Design
Identifiers
urn:nbn:se:kth:diva-342045 (URN), 10.1109/RO-MAN57019.2023.10309489 (DOI), 001108678600269 (), 2-s2.0-85186967284 (Scopus ID)
Conference
32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 28-31, 2023, Busan, SOUTH KOREA
Note

Part of proceedings ISBN 979-8-3503-3670-2

QC 20240110

Available from: 2024-01-10. Created: 2024-01-10. Last updated: 2025-02-24. Bibliographically approved
Székely, É., Gustafsson, J. & Torre, I. (2023). Prosody-controllable gender-ambiguous speech synthesis: a tool for investigating implicit bias in speech perception. In: Interspeech 2023: . Paper presented at 24th International Speech Communication Association, Interspeech 2023, August 20-24, 2023, Dublin, Ireland (pp. 1234-1238). International Speech Communication Association
2023 (English). In: Interspeech 2023, International Speech Communication Association, 2023, pp. 1234-1238. Conference paper, Published paper (Refereed)
Abstract [en]

This paper proposes a novel method to develop gender-ambiguous TTS, which can be used to investigate hidden gender bias in speech perception. Our aim is to provide a tool for researchers to conduct experiments on language use associated with specific genders. Ambiguous voices can also be beneficial for virtual assistants, helping to reduce stereotypes and increase acceptance. Our approach uses a multi-speaker embedding in a neural TTS engine, combining two corpora recorded by a male and a female speaker to achieve a gender-ambiguous timbre. We also propose speaker-disentangled prosody control to ensure that the timbre is robust across a range of prosodies and to enable more expressive speech. We optimised the output using an SSL-based network trained on hundreds of speakers. We conducted perceptual evaluations on the settings that the network judged most ambiguous; listeners perceived the speech samples as gender-ambiguous, including in prosody-controlled conditions.
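The multi-speaker embedding approach in this abstract can be pictured, very loosely, as interpolation in a speaker-embedding space. The sketch below is purely illustrative and not the paper's implementation; the toy embeddings, the `alpha` weight, and the `blend_speakers` helper are all hypothetical names introduced here.

```python
import numpy as np

def blend_speakers(emb_a, emb_b, alpha):
    """Linearly interpolate two speaker embeddings.

    alpha = 0.0 returns the first voice, 1.0 the second;
    intermediate values aim for an ambiguous timbre.
    """
    return (1.0 - alpha) * emb_a + alpha * emb_b

# Toy 4-dimensional embeddings (real TTS speaker embeddings are much larger).
male = np.array([1.0, 0.0, 0.5, 0.2])
female = np.array([0.0, 1.0, 0.1, 0.8])
mid = blend_speakers(male, female, 0.5)
print(mid)  # midpoint of the two embeddings
```

In practice the ambiguous setting would be found by listening tests or, as the abstract describes, by an external gender-classification network rather than by fixing alpha at 0.5.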

Place, publisher, year, edition, pages
International Speech Communication Association, 2023
Keywords
gender bias, human-computer interaction, speech synthesis
National Category
Natural Language Processing
Identifiers
urn:nbn:se:kth:diva-337832 (URN), 10.21437/Interspeech.2023-2086 (DOI), 001186650301078 (), 2-s2.0-85171582438 (Scopus ID)
Conference
24th International Speech Communication Association, Interspeech 2023, August 20-24, 2023, Dublin, Ireland
Note

QC 20241014

Available from: 2023-10-09. Created: 2023-10-09. Last updated: 2025-02-07. Bibliographically approved
Romeo, M., Torre, I., Le Maguer, S., Cangelosi, A. & Leite, I. (2023). Putting Robots in Context: Challenging the Influence of Voice and Empathic Behaviour on Trust. In: 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN: . Paper presented at 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 28-31, 2023, Busan, SOUTH KOREA (pp. 2045-2050). Institute of Electrical and Electronics Engineers (IEEE)
2023 (English). In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Institute of Electrical and Electronics Engineers (IEEE), 2023, pp. 2045-2050. Conference paper, Published paper (Refereed)
Abstract [en]

Trust is essential for social interactions, including those between humans and social artificial agents, such as robots. Several robot-related factors can contribute to the formation of trust. However, previous work has often treated trust as an absolute concept, whereas it is highly context-dependent, and it is possible that some robot-related features will influence trust in some contexts, but not in others. In this paper, we present the results of two video-based online studies aimed at investigating the role of robot voice and empathic behaviour on trust formation in a general context as well as in a task-specific context. We found that voice influences trust in the specific context, with no effect of voice or empathic behaviour in the general context. Thus, context mediated whether robot-related features play a role in people's trust formation towards robots.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE RO-MAN, ISSN 1944-9445
National Category
Other Engineering and Technologies
Identifiers
urn:nbn:se:kth:diva-342054 (URN), 10.1109/RO-MAN57019.2023.10309631 (DOI), 001108678600266 (), 2-s2.0-85187007764 (Scopus ID)
Conference
32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 28-31, 2023, Busan, SOUTH KOREA
Note

Part of proceedings ISBN 979-8-3503-3670-2

QC 20240110

Available from: 2024-01-10. Created: 2024-01-10. Last updated: 2025-02-18. Bibliographically approved
Linard, A., Torre, I., Bartoli, E., Sleat, A., Leite, I. & Tumova, J. (2023). Real-time RRT* with Signal Temporal Logic Preferences. In: 2023 IEEE/RSJ international conference on intelligent robots and systems (IROS): . Paper presented at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct 1-5, 2023, Detroit, USA. IEEE
2023 (English). In: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2023. Conference paper, Published paper (Other academic)
Abstract [en]

Signal Temporal Logic (STL) is a rigorous specification language that allows one to express various spatio-temporal requirements and preferences. Its semantics (called robustness) allows one to quantify to what extent the STL specifications are met. In this work, we focus on enabling STL constraints and preferences in the Real-Time Rapidly Exploring Random Tree (RT-RRT*) motion planning algorithm in an environment with dynamic obstacles. We propose a cost function that guides the algorithm towards the asymptotically most robust solution, i.e. a plan that maximally adheres to the STL specification. In experiments, we applied our method to a social navigation case, where the STL specification captures spatio-temporal preferences on how a mobile robot should avoid an incoming human in a shared space. Our results show that our approach leads to plans adhering to the STL specification, while ensuring efficient cost computation.
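The robustness semantics mentioned in this abstract can be illustrated with a minimal sketch, assuming a simple "always keep a minimum distance from the human" safety predicate of the kind used in social navigation. This is not the paper's implementation; the function names, trajectories, and the `D_MIN` threshold are hypothetical.

```python
import math

D_MIN = 0.5  # hypothetical minimum robot-human distance, in metres

def predicate_robustness(robot_pos, human_pos):
    """Robustness of the predicate dist(robot, human) >= D_MIN.

    Positive values mean the predicate holds with that margin;
    negative values quantify by how much it is violated.
    """
    dist = math.hypot(robot_pos[0] - human_pos[0],
                      robot_pos[1] - human_pos[1])
    return dist - D_MIN

def always_robustness(robot_traj, human_traj):
    """Robustness of G(dist >= D_MIN): the worst (minimum) margin over time."""
    return min(predicate_robustness(r, h)
               for r, h in zip(robot_traj, human_traj))

robot = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
human = [(0.0, 2.0), (1.0, 1.0), (2.0, 2.0)]
print(always_robustness(robot, human))  # prints 0.5 (closest approach is 1.0 m)
```

A planner in the spirit of the abstract would then prefer tree branches whose trajectories score higher under such a robustness measure, trading it off against path cost.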

Place, publisher, year, edition, pages
IEEE, 2023
Keywords
Signal Temporal Logic, Real-Time Planning, Sampling-based Motion Planning
National Category
Control Engineering; Computer Engineering
Identifiers
urn:nbn:se:kth:diva-325105 (URN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct 1-5, 2023, Detroit, USA
Note

QC 20231122

Available from: 2023-03-29. Created: 2023-03-29. Last updated: 2023-11-22. Bibliographically approved
Linard, A., Torre, I., Bartoli, E., Sleat, A., Leite, I. & Tumova, J. (2023). Real-Time RRT* with Signal Temporal Logic Preferences. In: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023: . Paper presented at 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023, Detroit, United States of America, Oct 1 2023 - Oct 5 2023 (pp. 8621-8627). Institute of Electrical and Electronics Engineers (IEEE)
2023 (English). In: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023, Institute of Electrical and Electronics Engineers (IEEE), 2023, pp. 8621-8627. Conference paper, Published paper (Refereed)
Abstract [en]

Signal Temporal Logic (STL) is a rigorous specification language that allows one to express various spatio-temporal requirements and preferences. Its semantics (called robustness) allows one to quantify to what extent the STL specifications are met. In this work, we focus on enabling STL constraints and preferences in the Real-Time Rapidly Exploring Random Tree (RT-RRT*) motion planning algorithm in an environment with dynamic obstacles. We propose a cost function that guides the algorithm towards the asymptotically most robust solution, i.e. a plan that maximally adheres to the STL specification. In experiments, we applied our method to a social navigation case, where the STL specification captures spatio-temporal preferences on how a mobile robot should avoid an incoming human in a shared space. Our results show that our approach leads to plans adhering to the STL specification, while ensuring efficient cost computation.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Real-Time Planning, Sampling-based Motion Planning, Signal Temporal Logic
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-350253 (URN), 10.1109/IROS55552.2023.10341993 (DOI), 001136907802112 (), 2-s2.0-85177884865 (Scopus ID)
Conference
2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023, Detroit, United States of America, Oct 1 2023 - Oct 5 2023
Note

Part of ISBN 9781665491907

QC 20240710

Available from: 2024-07-10. Created: 2024-07-10. Last updated: 2025-02-09. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-8601-1370
