Publications (10 of 87)
Rahimzadagan, N., Vahs, M., Leite, I. & Stower, R. (2024). Drone Fail Me Now: How Drone Failures Affect Trust and Risk-Taking Decisions. In: HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 19th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2024, Boulder, United States of America, Mar 11 2024 - Mar 15 2024 (pp. 862-866). Association for Computing Machinery (ACM)
Drone Fail Me Now: How Drone Failures Affect Trust and Risk-Taking Decisions
2024 (English) In: HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2024, p. 862-866. Conference paper, Published paper (Refereed)
Abstract [en]

So far, research on drone failures has been mostly limited to understanding the technical causes of failures and recovery strategies. In contrast, there is little work looking at how failures of drones are perceived by users. To address this gap, we conduct a real-world study where participants experience drone failures leading to monetary loss whilst navigating a drone over an obstacle course. We tested 46 participants, each of whom experienced both a failure and a failure-free (control) interaction. Participants' trust in the drone, their enjoyment of the interaction, perceived control, and future use intentions were all negatively impacted by drone failures. However, risk-taking decisions during the interaction were not affected. These findings suggest that experiencing a failure whilst operating a drone in real-time is detrimental to participants' subjective experience of the interaction.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2024
Keywords
Drone, Failure, Human-Drone Interaction, Trust, Risk-Taking, UAV
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-344808 (URN)
10.1145/3610978.3640609 (DOI)
2-s2.0-85188131674 (Scopus ID)
Conference
19th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2024, Boulder, United States of America, Mar 11 2024 - Mar 15 2024
Note

QC 20240402

Part of ISBN 9798400703232

Available from: 2024-03-28 Created: 2024-03-28 Last updated: 2024-04-02. Bibliographically approved
Yadollahi, E., Romeo, M., Dogan, F. I., Johal, W., De Graaf, M., Levy-Tzedek, S. & Leite, I. (2024). Explainability for Human-Robot Collaboration. In: HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 19th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2024, Boulder, United States of America, Mar 11 2024 - Mar 15 2024 (pp. 1364-1366). Association for Computing Machinery (ACM)
Explainability for Human-Robot Collaboration
2024 (English) In: HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2024, p. 1364-1366. Conference paper, Published paper (Refereed)
Abstract [en]

In human-robot collaboration, explainability bridges the communication gap between complex machine functionalities and humans. An active area of investigation in robotics and AI is understanding and generating explanations that can enhance collaboration and mutual understanding between humans and machines. A key to achieving such seamless collaborations is understanding end-users, whether naive or expert, and tailoring explanation features that are intuitive, user-centred, and contextually relevant. Advancing on the topic not only includes modelling humans' expectations for generating the explanations but also requires the development of metrics to evaluate generated explanations and assess how effectively autonomous systems communicate their intentions, actions, and decision-making rationale. This workshop is designed to tackle the nuanced role of explainability in enhancing the efficiency, safety, and trust in human-robot collaboration. It aims to initiate discussions on the importance of generating and evaluating explainability features developed in autonomous agents. Simultaneously, it addresses various challenges, including bias in explainability and downsides of explainability and deception in human-robot interaction.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2024
Keywords
Explainable Robotics, Human-Centered Robot Explanations, XAI
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-344807 (URN)
10.1145/3610978.3638154 (DOI)
2-s2.0-85188063647 (Scopus ID)
9798400703232 (ISBN)
Conference
19th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2024, Boulder, United States of America, Mar 11 2024 - Mar 15 2024
Note

QC 20240409

Part of ISBN 9798400703232

Available from: 2024-03-28 Created: 2024-03-28 Last updated: 2024-04-09. Bibliographically approved
Gillet, S., Vázquez, M., Andrist, S., Leite, I. & Sebo, S. (2024). Interaction-Shaping Robotics: Robots That Influence Interactions between Other Agents. ACM Transactions on Human-Robot Interaction, 13(1), Article ID 12.
Interaction-Shaping Robotics: Robots That Influence Interactions between Other Agents
2024 (English) In: ACM Transactions on Human-Robot Interaction, E-ISSN 2573-9522, Vol. 13, no 1, article id 12. Article in journal (Refereed), Published
Abstract [en]

Work in Human–Robot Interaction (HRI) has investigated interactions between one human and one robot as well as human–robot group interactions. Yet the field lacks a clear definition and understanding of the influence a robot can exert on interactions between other group members (e.g., human-to-human). In this article, we define Interaction-Shaping Robotics (ISR), a subfield of HRI that investigates robots that influence the behaviors and attitudes exchanged between two (or more) other agents. We highlight key factors of interaction-shaping robots that include the role of the robot, the robot-shaping outcome, the form of robot influence, the type of robot communication, and the timeline of the robot’s influence. We also describe three distinct structures of human–robot groups to highlight the potential of ISR in different group compositions and discuss targets for a robot’s interaction-shaping behavior. Finally, we propose areas of opportunity and challenges for future research in ISR.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2024
Keywords
Human–robot interaction, interaction-shaping robotics, multiparty interactions, shaping interactions, social influence
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-345236 (URN)
10.1145/3643803 (DOI)
2-s2.0-85189071275 (Scopus ID)
Note

Imported from Scopus. VERIFY.

Available from: 2024-04-10 Created: 2024-04-10 Last updated: 2024-04-11. Bibliographically approved
Zojaji, S., Matviienko, A., Leite, I. & Peters, C. (2024). Join Me Here if You Will: Investigating Embodiment and Politeness Behaviors When Joining Small Groups of Humans, Robots, and Virtual Characters. Paper presented at CHI Conference on Human Factors in Computing Systems (CHI ’24), Oʻahu, Hawaiʻi, USA, 11-16 May 2024. New York, NY, USA: Association for Computing Machinery (ACM)
Join Me Here if You Will: Investigating Embodiment and Politeness Behaviors When Joining Small Groups of Humans, Robots, and Virtual Characters
2024 (English) Conference paper, Published paper (Refereed)
Abstract [en]

Politeness and embodiment are pivotal elements in Human-Agent Interactions. While many previous works advocate the positive role of embodiment in enhancing Human-Agent Interactions, it remains unclear how embodiment and politeness affect individuals joining groups. In this paper, we explore how polite behaviors (verbal and nonverbal) exhibited by three distinct embodiments (humans, robots, and virtual characters) influence individuals' decisions to join a group of two agents in a controlled experiment (N=54). We assessed agent effectiveness regarding persuasiveness, perceived politeness, and participants' trajectories when joining the group. We found that embodiment does not significantly impact agent persuasiveness and perceived politeness, but polite behaviors do. Direct and explicit politeness strategies have a higher success rate in persuading participants to join at the furthest side. Lastly, participants adhered to social norms when joining at the furthest side, maintained a greater physical distance from humans, chose longer paths, and walked faster when interacting with humans.

Place, publisher, year, edition, pages
New York, NY, USA: Association for Computing Machinery (ACM), 2024
Keywords
Politeness, Free-standing conversational groups, Humans, Robots, Virtual characters, Trajectory, Group dynamics, social norms
National Category
Computer Systems; Human Computer Interaction
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-343213 (URN)
10.1145/3613904.3642905 (DOI)
Conference
CHI Conference on Human Factors in Computing Systems (CHI ’24), Oʻahu, Hawaiʻi, USA, 11-16 May 2024
Note

QC 20240402

Available from: 2024-02-08 Created: 2024-02-08 Last updated: 2024-05-13. Bibliographically approved
Holk, S., Marta, D. & Leite, I. (2024). PREDILECT: Preferences Delineated with Zero-Shot Language-based Reasoning in Reinforcement Learning. In: HRI 2024 - Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 19th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2024, Boulder, United States of America, Mar 11 2024 - Mar 15 2024 (pp. 259-268). Association for Computing Machinery (ACM)
PREDILECT: Preferences Delineated with Zero-Shot Language-based Reasoning in Reinforcement Learning
2024 (English) In: HRI 2024 - Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2024, p. 259-268. Conference paper, Published paper (Refereed)
Abstract [en]

Preference-based reinforcement learning (RL) has emerged as a new field in robot learning, where humans play a pivotal role in shaping robot behavior by expressing preferences on different sequences of state-action pairs. However, formulating realistic policies for robots demands responses from humans to an extensive array of queries. In this work, we approach the sample-efficiency challenge by expanding the information collected per query to contain both preferences and optional text prompting. To accomplish this, we leverage the zero-shot capabilities of a large language model (LLM) to reason from the text provided by humans. To accommodate the additional query information, we reformulate the reward learning objectives to contain flexible highlights - state-action pairs that contain relatively high information and are related to the features processed in a zero-shot fashion from a pretrained LLM. In both a simulated scenario and a user study, we reveal the effectiveness of our work by analyzing the feedback and its implications. Additionally, the collective feedback collected serves to train a robot on socially compliant trajectories in a simulated social navigation landscape. We provide video examples of the trained policies at https://sites.google.com/view/rl-predilect.
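To make the reward-learning idea in this abstract more concrete, below is a minimal Python sketch of a Bradley-Terry-style preference loss in which steps flagged as LLM-derived "highlights" are given extra weight. The function names, the 0/1 highlight masks, and the weighting scheme are illustrative assumptions, not the paper's exact formulation.

# Minimal sketch (illustrative assumptions, not the paper's exact objective):
# a Bradley-Terry preference loss where "highlight" steps, assumed to be
# flagged by a zero-shot LLM pass over the human's text, are up-weighted.
import numpy as np

def bradley_terry_loss(r1, r2):
    # Negative log-probability that segment 1 is preferred,
    # given per-step reward estimates r1 and r2.
    z1, z2 = np.sum(r1), np.sum(r2)
    p1 = np.exp(z1) / (np.exp(z1) + np.exp(z2))
    return -np.log(p1 + 1e-8)

def highlight_weighted_loss(r1, r2, h1, h2, alpha=0.5):
    # Steps marked as highlights (h = 1) contribute more to the segment return.
    w1 = 1.0 + alpha * np.asarray(h1, dtype=float)
    w2 = 1.0 + alpha * np.asarray(h2, dtype=float)
    return bradley_terry_loss(w1 * r1, w2 * r2)

# Toy usage: two 5-step segments, one highlighted step in the first segment.
r_a, r_b = np.random.randn(5), np.random.randn(5)
mask_a, mask_b = np.array([0, 0, 1, 0, 0]), np.zeros(5)
print(highlight_weighted_loss(r_a, r_b, mask_a, mask_b))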

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2024
Series
ACM/IEEE International Conference on Human-Robot Interaction, ISSN 2167-2148
Keywords
Human-in-the-loop Learning, Interactive learning, Preference learning, Reinforcement learning
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-344936 (URN)
10.1145/3610977.3634970 (DOI)
2-s2.0-85188450390 (Scopus ID)
Conference
19th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2024, Boulder, United States of America, Mar 11 2024 - Mar 15 2024
Note

QC 20240404

Part of ISBN 979-840070322-5

Available from: 2024-04-03 Created: 2024-04-03 Last updated: 2024-04-04. Bibliographically approved
Marta, D., Holk, S., Pek, C., Tumova, J. & Leite, I. (2023). Aligning Human Preferences with Baseline Objectives in Reinforcement Learning. In: 2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2023). Paper presented at IEEE International Conference on Robotics and Automation (ICRA), MAY 29-JUN 02, 2023, London, ENGLAND. Institute of Electrical and Electronics Engineers (IEEE)
Aligning Human Preferences with Baseline Objectives in Reinforcement Learning
2023 (English) In: 2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2023), Institute of Electrical and Electronics Engineers (IEEE), 2023. Conference paper, Published paper (Refereed)
Abstract [en]

Practical implementations of deep reinforcement learning (deep RL) have been challenging due to a multitude of factors, such as designing reward functions that cover every possible interaction. To address the heavy burden of robot reward engineering, we aim to leverage subjective human preferences gathered in the context of human-robot interaction, while taking advantage of a baseline reward function when available. By considering baseline objectives to be designed beforehand, we are able to narrow down the policy space, solely requesting human attention when their input matters the most. To allow for control over the optimization of different objectives, our approach contemplates a multi-objective setting. We achieve human-compliant policies by sequentially training an optimal policy from a baseline specification and collecting queries on pairs of trajectories. These policies are obtained by training a reward estimator to generate Pareto optimal policies that include human preferred behaviours. Our approach ensures sample efficiency and we conducted a user study to collect real human preferences, which we utilized to obtain a policy on a social navigation environment.
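As a rough illustration of the abstract's idea of pairing a designed baseline objective with a learned preference objective, the Python sketch below scalarizes the two rewards with a weight w; sweeping w over [0, 1] traces different Pareto-optimal trade-offs. The reward forms and names are hypothetical stand-ins rather than the paper's implementation.

# Illustrative sketch only: a hand-designed baseline reward combined with a
# learned preference reward via a weighted sum (names and reward forms are
# hypothetical, not taken from the paper).
import numpy as np

def baseline_reward(pos, goal):
    # Objective designed beforehand, e.g. negative distance to the goal.
    return -np.linalg.norm(goal - pos)

def preference_reward(pos, action, theta):
    # Linear stand-in for a reward estimator fit from pairwise human preferences.
    return float(theta @ np.concatenate([pos, action]))

def scalarized_reward(pos, goal, action, theta, w=0.7):
    # Sweeping w in [0, 1] yields different trade-offs between baseline
    # compliance and human-preferred behaviour.
    return w * baseline_reward(pos, goal) + (1.0 - w) * preference_reward(pos, action, theta)

# Toy usage in a 2-D social-navigation-like setting.
pos, goal, action = np.zeros(2), np.ones(2), np.array([0.1, 0.2])
theta = np.random.randn(4)
print(scalarized_reward(pos, goal, action, theta))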

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE International Conference on Robotics and Automation ICRA
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-324924 (URN)
10.1109/ICRA48891.2023.10161261 (DOI)
001048371100079 ()
2-s2.0-85164820716 (Scopus ID)
Conference
IEEE International Conference on Robotics and Automation (ICRA), MAY 29-JUN 02, 2023, London, ENGLAND
Note

Part of ISBN 979-8-3503-2365-8

QC 20230328

Available from: 2023-03-21 Created: 2023-03-21 Last updated: 2023-10-16. Bibliographically approved
Torre, I., Lagerstedt, E., Dennler, N., Seaborn, K., Leite, I. & Székely, É. (2023). Can a gender-ambiguous voice reduce gender stereotypes in human-robot interactions? In: 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN. Paper presented at 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 28-31, 2023, Busan, SOUTH KOREA (pp. 106-112). Institute of Electrical and Electronics Engineers (IEEE)
Can a gender-ambiguous voice reduce gender stereotypes in human-robot interactions?
2023 (English) In: 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 106-112. Conference paper, Published paper (Refereed)
Abstract [en]

When deploying a robot, its physical characteristics, role, and tasks are often fixed. Such factors can also be associated with gender stereotypes among humans, which then transfer to the robots. One factor that can induce gendering but is comparatively easy to change is the robot's voice. Designing voice in a way that interferes with fixed factors might therefore be a way to reduce gender stereotypes in human-robot interaction contexts. To this end, we have conducted a video-based online study to investigate how factors that might inspire gendering of a robot interact. In particular, we investigated how giving the robot a gender-ambiguous voice can affect perception of the robot. We compared assessments (n=111) of videos in which a robot's body presentation and occupation mis/matched with human gender stereotypes. We found evidence that a gender-ambiguous voice can reduce gendering of a robot endowed with stereotypically feminine or masculine attributes. The results can inform more just robot design while opening new questions regarding the phenomenon of robot gendering.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE RO-MAN, ISSN 1944-9445
National Category
Gender Studies
Identifiers
urn:nbn:se:kth:diva-342047 (URN)
10.1109/RO-MAN57019.2023.10309500 (DOI)
001108678600016 ()
2-s2.0-85187027115 (Scopus ID)
Conference
32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 28-31, 2023, Busan, SOUTH KOREA
Note

Part of proceedings ISBN 979-8-3503-3670-2

QC 20240110

Available from: 2024-01-10 Created: 2024-01-10 Last updated: 2024-03-22. Bibliographically approved
Morillo-Mendez, L., Stower, R., Sleat, A., Schreiter, T., Leite, I., Mozos, O. M. & Schrooten, M. G. S. (2023). Can the robot "see" what I see?: Robot gaze drives attention depending on mental state attribution. Frontiers in Psychology, 14, Article ID 1215771.
Can the robot "see" what I see?: Robot gaze drives attention depending on mental state attribution
2023 (English) In: Frontiers in Psychology, E-ISSN 1664-1078, Vol. 14, article id 1215771. Article in journal (Refereed), Published
Abstract [en]

Mentalizing, where humans infer the mental states of others, facilitates understanding and interaction in social situations. Humans also tend to adopt mentalizing strategies when interacting with robotic agents. There is an ongoing debate about how inferred mental states affect gaze following, a key component of joint attention. Although the gaze from a robot induces gaze following, the impact of mental state attribution on robotic gaze following remains unclear. To address this question, we asked forty-nine young adults to perform a gaze cueing task during which mental state attribution was manipulated as follows. Participants sat facing a robot that turned its head to the screen at its left or right. Their task was to respond to targets that appeared either at the screen the robot gazed at or at the other screen. At the baseline, the robot was positioned so that participants would perceive it as being able to see the screens. We expected faster response times to targets at the screen the robot gazed at than targets at the non-gazed screen (i.e., gaze cueing effect). In the experimental condition, the robot's line of sight was occluded by a physical barrier such that participants would perceive it as unable to see the screens. Our results revealed gaze cueing effects in both conditions although the effect was reduced in the occluded condition compared to the baseline. These results add to the expanding fields of social cognition and human-robot interaction by suggesting that mentalizing has an impact on robotic gaze following.
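For readers unfamiliar with the measure, the gaze cueing effect referred to here is conventionally computed as the difference in mean response time (RT) between uncued and cued targets, so that a positive value indicates faster responses to targets at the screen the robot gazed at:

\mathrm{GCE} = \overline{RT}_{\text{non-gazed}} - \overline{RT}_{\text{gazed}}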

Place, publisher, year, edition, pages
Frontiers Media SA, 2023
Keywords
gaze following, cueing effect, attention, mentalizing, intentional stance, social robots
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-333787 (URN)
10.3389/fpsyg.2023.1215771 (DOI)
001037081700001 ()
37519379 (PubMedID)
2-s2.0-85166030431 (Scopus ID)
Note

QC 20230810

Available from: 2023-08-10 Created: 2023-08-10 Last updated: 2023-08-10. Bibliographically approved
Castellano, G., Riek, L., Cakmak, M. & Leite, I. (2023). Chairs' welcome. In: Proceedings 18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023. Paper presented at 18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023, Stockholm, 13-16 March 2023. IEEE Computer Society
Chairs' welcome
2023 (English) In: Proceedings 18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023, IEEE Computer Society, 2023. Conference paper, Published paper (Other academic)
Place, publisher, year, edition, pages
IEEE Computer Society, 2023
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-332977 (URN)
10.1145/3568162.fm (DOI)
2-s2.0-85150341386 (Scopus ID)
Conference
18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023, Stockholm, 13-16 March 2023
Note

Part of ISBN 9781450399647

QC 20230724

Available from: 2023-07-24 Created: 2023-07-24 Last updated: 2023-07-24. Bibliographically approved
Castellano, G., Riek, L., Cakmak, M. & Leite, I. (2023). Chairs' Welcome. In: Proceedings HRI '23: ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at HRI '23: ACM/IEEE International Conference on Human-Robot Interaction, Stockholm, Sweden, March 13-16, 2023. ACM Press
Chairs' Welcome
2023 (English) In: Proceedings HRI '23: ACM/IEEE International Conference on Human-Robot Interaction, ACM Press, 2023. Conference paper, Published paper (Other academic)
Place, publisher, year, edition, pages
ACM Press, 2023
National Category
Human Computer Interaction; Robotics
Identifiers
urn:nbn:se:kth:diva-338438 (URN)
2-s2.0-85150438083 (Scopus ID)
Conference
HRI '23: ACM/IEEE International Conference on Human-Robot Interaction, Stockholm, Sweden, March 13-16, 2023
Note

Part of proceedings ISBN 9781450399708

QC 20231116

Available from: 2023-11-16 Created: 2023-11-16 Last updated: 2023-11-16. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-2212-4325
