Publications (10 of 104)
Leite, I., Ahlberg, W., Pereira, A., Sestini, A., Gisslen, L. & Tollmar, K. (2025). A Call for Deeper Collaboration Between Robotics and Game Development. In: Proceedings of the IEEE 2025 Conference on Games, CoG 2025. Paper presented at 2025 IEEE Conference on Games, CoG 2025, Lisbon, Portugal, August 26-29, 2025. Institute of Electrical and Electronics Engineers (IEEE)
2025 (English). In: Proceedings of the IEEE 2025 Conference on Games, CoG 2025. Institute of Electrical and Electronics Engineers (IEEE), 2025. Conference paper, Published paper (Refereed)
Abstract [en]

While robotics and game development have independently achieved significant progress in creating interactive and intelligent systems, a deeper collaboration between these fields could be mutually beneficial. This paper argues for more collaboration, highlighting the currently limited interactions and proposing directions for future research. We discuss shared foundations such as Artificial Intelligence, Extended Reality, and the increasing use of common tools and standards. We then propose opportunities where game development methodologies can advance robotics (e.g., gamified data collection and richer simulation environments) and where robotics research can contribute to games (e.g., improved NPC autonomy and embodied intelligence). This cross-disciplinary interaction can accelerate innovation and lead to more intelligent and user-centered technologies in both domains.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
artificial intelligence, collaboration, game development, non-player characters, Robotics
National Category
Robotics and automation Computer Sciences Other Engineering and Technologies
Identifiers
urn:nbn:se:kth:diva-370815 (URN), 10.1109/CoG64752.2025.11114209 (DOI), 2-s2.0-105015576103 (Scopus ID)
Conference
2025 IEEE Conference on Games, CoG 2025, Lisbon, Portugal, August 26-29, 2025
Note

Part of ISBN 9798331589042

QC 20251003

Available from: 2025-10-03. Created: 2025-10-03. Last updated: 2025-10-03. Bibliographically approved.
Shirol, S., Delfa, J. L., Leite, I. & Yadollahi, E. (2025). Designing Social Behaviours for Autonomous Mobile Robots: The Role of Movement and Light in Communicating Intent. In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, Mar 4 2025 - Mar 6 2025 (pp. 1638-1643). Institute of Electrical and Electronics Engineers (IEEE)
2025 (English). In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction. Institute of Electrical and Electronics Engineers (IEEE), 2025, pp. 1638-1643. Conference paper, Published paper (Refereed)
Abstract [en]

When autonomous mobile robots (AMRs) share space with humans, establishing trust becomes essential for safe, seamless, and effective interaction. Clear communication of a robot's intent is key to building trust by reducing uncertainty and enabling intuitive interaction. This study explores how AMRs can effectively communicate their intentions through simple, intuitive modalities like movement and light, making their actions more predictable and fostering trust. We designed distinct movement cues combined with light patterns to communicate two key intents: yielding (backing off) and making way (prompting humans to move), tested across four different scenarios. To evaluate the clarity and effectiveness of these behaviours, we conducted an online video study analysing qualitative feedback from open-ended responses. Additionally, we collected quantitative data assessing participants' perceptions of the safety and trustworthiness of the robot. Our findings demonstrate a strong correlation between these perceptions and the robot's ability to display socially aware behaviours.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
Human-Robot Interaction, Intent Communication, Interaction Design, Multi-modal Human-Robot Interaction, Social Robotics
National Category
Robotics and automation Other Engineering and Technologies Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-363757 (URN), 10.1109/HRI61500.2025.10973845 (DOI), 2-s2.0-105004879113 (Scopus ID)
Conference
20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, Mar 4 2025 - Mar 6 2025
Note

Part of ISBN 9798350378931

QC 20250528

Available from: 2025-05-21. Created: 2025-05-21. Last updated: 2025-05-28. Bibliographically approved.
Marta, D., Holk, S., Vasco, M., Lundell, J., Homberger, T., Busch, F. L., . . . Leite, I. (2025). FLoRA: Sample-Efficient Preference-based RL via Low-Rank Style Adaptation of Reward Functions. In: IEEE International Conference on Robotics and Automation. Paper presented at IEEE International Conference on Robotics and Automation, ICRA 2025, Atlanta, USA, 19-23 May 2025 (pp. 4789-4796). Institute of Electrical and Electronics Engineers (IEEE)
2025 (English). In: IEEE International Conference on Robotics and Automation. Institute of Electrical and Electronics Engineers (IEEE), 2025, pp. 4789-4796. Conference paper, Published paper (Refereed)
Abstract [en]

Preference-based reinforcement learning (PbRL) is a suitable approach for style adaptation of pre-trained robotic behavior: adapting the robot's policy to follow human user preferences while still being able to perform the original task. However, collecting preferences for the adaptation process in robotics is often challenging and time-consuming. In this work we explore the adaptation of pre-trained robots in the low-preference-data regime. We show that, in this regime, recent adaptation approaches suffer from catastrophic reward forgetting (CRF), where the updated reward model overfits to the new preferences, leading the agent to become unable to perform the original task. To mitigate CRF, we propose to enhance the original reward model with a small number of parameters (low-rank matrices) responsible for modeling the preference adaptation. Our evaluation shows that our method can efficiently and effectively adjust robotic behavior to human preferences across simulation benchmark tasks and multiple real-world robotic tasks. We provide videos of our results and source code at https://sites.google.com/view/preflora/
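The low-rank idea in this abstract can be illustrated with a minimal, hypothetical sketch (this is not the FLoRA authors' code, and the dimensions and initialisation are illustrative assumptions): the pre-trained reward network stays frozen, and only two small matrices A and B are trained to model the preference adaptation, so the original reward is recovered exactly before any updates.

```python
import numpy as np

# Hypothetical LoRA-style reward adaptation sketch, not the authors' code.
rng = np.random.default_rng(0)
d, h, r = 16, 32, 2            # input dim, hidden dim, low rank (r << d, h)

W1 = rng.normal(size=(d, h))   # frozen pre-trained layer
w2 = rng.normal(size=(h,))     # frozen reward head
A = np.zeros((d, r))           # trainable low-rank factor (initialised to 0)
B = rng.normal(scale=0.01, size=(r, h))  # trainable low-rank factor

def reward(x):
    # Adapted hidden layer: frozen weights plus low-rank correction A @ B.
    hidden = np.tanh(x @ (W1 + A @ B))
    return hidden @ w2

x = rng.normal(size=(1, d))
base = np.tanh(x @ W1) @ w2
# With A = 0 the correction vanishes, so the pre-trained reward (and hence
# the original task behaviour) is reproduced exactly before adaptation.
assert np.allclose(reward(x), base)
print(f"trainable params: {A.size + B.size} vs full layer: {W1.size}")
```

Only A and B (96 parameters here) would be updated from the new preferences, which is one plausible way to limit how far the reward model can drift from the pre-trained one.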

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-360980 (URN), 10.1109/ICRA55743.2025.11127633 (DOI), 2-s2.0-105016684037 (Scopus ID)
Conference
IEEE International Conference on Robotics and Automation, ICRA 2025, Atlanta, USA, 19-23 May 2025
Note

QC 20250618

Part of ISBN 979-833154139-2

Available from: 2025-03-07. Created: 2025-03-07. Last updated: 2025-10-14. Bibliographically approved.
Akay, H., Capezza, A. J., Henrysson, M., Leite, I. & Nerini, F. F. (2025). Language Models for Functional Digital Twin of Circular Manufacturing. In: Sustainable Manufacturing as a Driver for Growth - Proceedings of the 19th Global Conference on Sustainable Manufacturing. Paper presented at 19th Global Conference on Sustainable Manufacturing, GCSM 2023, Buenos Aires, Argentina, Dec 4 2023 - Dec 6 2023 (pp. 553-561). Springer Nature
2025 (English). In: Sustainable Manufacturing as a Driver for Growth - Proceedings of the 19th Global Conference on Sustainable Manufacturing. Springer Nature, 2025, pp. 553-561. Conference paper, Published paper (Refereed)
Abstract [en]

A key challenge for implementation of a circular economy model in manufacturing systems is the functional dependence of downstream processes on upstream byproducts. Design principles provide a framework for mapping goals to solutions by decomposing complex engineering problems into structured sets of requirements to be satisfied and embodied by design parameters and process variables. Large Language Models can computationally represent such textually-described design elements to quantify interconnections between problems, solutions, and processes. We present a Functional Digital Twin concept, powered by AI language modeling and guided by principles of manufacturing systems design, to identify functionally coupled process variables in an industrial symbiosis and automatically push alerts to stakeholders in a circular manufacturing system. Changes in byproduct composition are pushed downstream, and upstream decision-makers are guided to balance satisfying their design requirements with maintaining circularity of the system. The presented method is demonstrated in a case study of bio-based absorbent materials for intended use in disposable sanitary articles developed from byproducts of the agro-food industry.

Place, publisher, year, edition, pages
Springer Nature, 2025
Keywords
Circular Economy, Digital Twin, Industrial Symbiosis, Language Models
National Category
Production Engineering, Human Work Science and Ergonomics
Identifiers
urn:nbn:se:kth:diva-360556 (URN), 10.1007/978-3-031-77429-4_61 (DOI), 2-s2.0-85218156176 (Scopus ID)
Conference
19th Global Conference on Sustainable Manufacturing, GCSM 2023, Buenos Aires, Argentina, Dec 4 2023 - Dec 6 2023
Note

Part of ISBN 9783031774287

QC 20250228

Available from: 2025-02-26. Created: 2025-02-26. Last updated: 2025-02-28. Bibliographically approved.
Stower, R., Gautier, A., Wozniak, M. K., Jensfelt, P., Tumova, J. & Leite, I. (2025). Take a Chance on Me: How Robot Performance and Risk Behaviour Affects Trust and Risk-Taking. In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, Mar 4 2025 - Mar 6 2025 (pp. 391-399). Institute of Electrical and Electronics Engineers (IEEE)
2025 (English). In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction. Institute of Electrical and Electronics Engineers (IEEE), 2025, pp. 391-399. Conference paper, Published paper (Refereed)
Abstract [en]

Real-world human-robot interactions often encompass uncertainty. This uncertainty can be handled in different ways, for example by designing robot planners to be more or less risk-tolerant. However, how users actually perceive different risk-taking behaviours in robots has yet to be described. Additionally, in the absence of guarantees on optimal robot performance, the interaction between risk and performance on user perceptions is also unclear. To address this gap, we conducted a user study with 84 participants investigating how robot performance and risk behaviour affects users' trust and risk-taking decisions. Participants collaborated with a Franka robot arm to perform a block-stacking task. We compared a robot which displays consistent but sub-optimal behaviours to a robot displaying risky but occasionally optimal behaviour. Risky robot behaviour led to higher trust than consistent behaviour when the robot was on average good at stacking blocks (high expectation), but lower trust when the robot was on average bad at stacking blocks (low expectation). Individual risk-willingness also predicted likelihood of selecting the risky robot over the consistent robot for future interactions, but only when the average expectation was low. These findings have implications for risk-aware planning and decision-making in mixed human-robot systems.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
collaborative robot, failure, risk-taking, trust, user study
National Category
Robotics and automation Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-363768 (URN), 10.1109/HRI61500.2025.10973966 (DOI), 2-s2.0-105004879443 (Scopus ID)
Conference
20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, Mar 4 2025 - Mar 6 2025
Note

Part of ISBN 9798350378931

QC 20250527

Available from: 2025-05-21. Created: 2025-05-21. Last updated: 2025-05-27. Bibliographically approved.
Gillet, S., Thompson, S., Leite, I. & Vázquez, M. (2025). Templates and Graph Neural Networks for Social Robots Interacting in Small Groups of Varying Sizes. In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, March 4-6, 2025 (pp. 458-467). Institute of Electrical and Electronics Engineers (IEEE)
2025 (English). In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction. Institute of Electrical and Electronics Engineers (IEEE), 2025, pp. 458-467. Conference paper, Published paper (Refereed)
Abstract [en]

Social robots need to be able to interact effectively with small groups. While there is significant interest in human-robot interaction in groups, little focus has been placed on developing autonomous social robot decision-making methods that operate smoothly with small groups of any size (e.g. 2, 3, or 4 interactants). In this work, we propose a Template- and Graph-based Modeling approach for robots interacting in small groups (TGM), enabling them to interact with groups in a way that is group-size agnostic. Critically, we separate the decision about the target of their communication, or 'whom to address?', from the decision of 'what to communicate?', which allows us to use template-based actions. We further use Graph Neural Networks (GNNs) to efficiently decide on 'whom' and 'what'. We evaluated TGM using imitation learning and compared the structured reasoning achieved through GNNs to unstructured approaches for this two-part decision-making problem. On two different datasets, we show that TGM outperforms the baselines, encouraging future work to invest in collecting larger datasets.
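The whom/what split described above can be sketched in a minimal, hypothetical form (this is not the TGM authors' implementation; all dimensions, weights, and function names are illustrative assumptions): one message-passing step produces per-person embeddings for groups of any size, one head scores people ("whom"), and a second head scores templates ("what") for the chosen addressee.

```python
import numpy as np

# Hypothetical sketch of a group-size-agnostic two-part decision,
# not the TGM authors' code.
rng = np.random.default_rng(1)
d, n_templates = 8, 4

W_msg = rng.normal(size=(d, d))             # shared message-passing weights
w_whom = rng.normal(size=(d,))              # scores each person ("whom")
W_what = rng.normal(size=(d, n_templates))  # scores templates ("what")

def decide(features):
    # features: (n, d) embeddings, one row per group member; n may vary,
    # which is what makes the policy group-size agnostic.
    n = features.shape[0]
    # Mean of the other members' features, aggregated per node.
    neigh = (features.sum(0, keepdims=True) - features) / max(n - 1, 1)
    hidden = np.tanh(features + neigh @ W_msg)    # one message-passing step
    whom = int(np.argmax(hidden @ w_whom))        # addressee
    what = int(np.argmax(hidden[whom] @ W_what))  # template for that person
    return whom, what

# The same parameters handle groups of 2, 3, or 4 without any change.
for n in (2, 3, 4):
    whom, what = decide(rng.normal(size=(n, d)))
    assert 0 <= whom < n and 0 <= what < n_templates
```

Because the aggregation and both heads are shared across nodes, the parameter count is independent of group size, which is one plausible reading of "group-size agnostic" here.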

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
Groups, Human-Robot Interaction, Social behavior generation
National Category
Computer Sciences Robotics and automation Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-363766 (URN), 10.1109/HRI61500.2025.10973917 (DOI), 2-s2.0-105004877956 (Scopus ID)
Conference
20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, March 4-6, 2025
Note

Part of ISBN 9798350378931

QC 20250522

Available from: 2025-05-21. Created: 2025-05-21. Last updated: 2025-05-22. Bibliographically approved.
Romeo, M., Torre, I., Le Maguer, S., Sleat, A., Cangelosi, A. & Leite, I. (2025). The Effect of Voice and Repair Strategy on Trust Formation and Repair in Human-Robot Interaction. ACM Transactions on Human-Robot Interaction, 14(2), Article ID 33.
2025 (English). In: ACM Transactions on Human-Robot Interaction, E-ISSN 2573-9522, Vol. 14, no. 2, article id 33. Article in journal (Refereed). Published.
Abstract [en]

Trust is essential for social interactions, including those between humans and social artificial agents, such as robots. Several factors and combinations thereof can contribute to the formation of trust and, importantly in the case of machines that work with a certain margin of error, to its maintenance and repair after it has been breached. In this article, we present the results of a study aimed at investigating the role of robot voice and chosen repair strategy on trust formation and repair in a collaborative task. People helped a robot navigate through a maze, and the robot made mistakes at pre-defined points during the navigation. Via in-game behaviour and follow-up questionnaires, we could measure people's trust towards the robot. We found that people trusted the robot speaking with a state-of-the-art synthetic voice more than with the default robot voice in the game, even though they indicated the opposite in the questionnaires. Additionally, we found that three repair strategies that people use in human-human interaction (justification of the mistake, promise to be better, and denial of the mistake) also work in human-robot interaction.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2025
Keywords
CCS Concepts: Human-centered computing → Human-computer interaction (HCI); Auditory feedback
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-363669 (URN), 10.1145/3711938 (DOI), 001460064300002 (), 2-s2.0-105003626883 (Scopus ID)
Note

QC 20250520

Available from: 2025-05-20. Created: 2025-05-20. Last updated: 2025-05-20. Bibliographically approved.
Reimann, M. M., Hindriks, K. V., Kunneman, F. A., Oertel, C., Skantze, G. & Leite, I. (2025). What Can You Say to a Robot? Capability Communication Leads to More Natural Conversations. In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, Mar 4 2025 - Mar 6 2025 (pp. 708-716). Institute of Electrical and Electronics Engineers (IEEE)
2025 (English). In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction. Institute of Electrical and Electronics Engineers (IEEE), 2025, pp. 708-716. Conference paper, Published paper (Refereed)
Abstract [en]

When encountering a robot in the wild, it is not inherently clear to human users what the robot's capabilities are. When encountering misunderstandings or problems in spoken interaction, robots often just apologize and move on, without additional effort to make sure the user understands what happened. We set out to compare the effect of two speech-based capability communication strategies (proactive, reactive) to a robot without such a strategy, with regard to users' ratings of the interaction and their behavior during it. For this, we conducted an in-person user study with 120 participants who had three speech-based interactions with a social robot in a restaurant setting. Our results suggest that users preferred the robot communicating its capabilities proactively and adjusted their behavior in those interactions, using a more conversational interaction style while also enjoying the interaction more.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
dialogue management, Human-robot-interaction, spoken interaction, user study
National Category
Human Computer Interaction Robotics and automation Other Engineering and Technologies
Identifiers
urn:nbn:se:kth:diva-363764 (URN), 10.1109/HRI61500.2025.10974151 (DOI), 2-s2.0-105004876438 (Scopus ID)
Conference
20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, Mar 4 2025 - Mar 6 2025
Note

Part of ISBN 9798350378931

QC 20250602

Available from: 2025-05-21. Created: 2025-05-21. Last updated: 2025-06-02. Bibliographically approved.
Rahimzadagan, N., Vahs, M., Leite, I. & Stower, R. (2024). Drone Fail Me Now: How Drone Failures Affect Trust and Risk-Taking Decisions. In: HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 19th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2024, Boulder, United States of America, Mar 11 2024 - Mar 15 2024 (pp. 862-866). Association for Computing Machinery (ACM)
2024 (English). In: HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction. Association for Computing Machinery (ACM), 2024, pp. 862-866. Conference paper, Published paper (Refereed)
Abstract [en]

So far, research on drone failures has been mostly limited to understanding the technical causes of failures and recovery strategies. In contrast, there is little work looking at how failures of drones are perceived by users. To address this gap, we conduct a real-world study where participants experience drone failures leading to monetary loss whilst navigating a drone over an obstacle course. We tested 46 participants, each of whom experienced both a failure and a failure-free (control) interaction. Participants' trust in the drone, their enjoyment of the interaction, perceived control, and future use intentions were all negatively impacted by drone failures. However, risk-taking decisions during the interaction were not affected. These findings suggest that experiencing a failure whilst operating a drone in real-time is detrimental to participants' subjective experience of the interaction.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2024
Keywords
Drone, Failure, Human-Drone Interaction, Trust, Risk-Taking, UAV
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-344808 (URN), 10.1145/3610978.3640609 (DOI), 001255070800183 (), 2-s2.0-85188131674 (Scopus ID)
Conference
19th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2024, Boulder, United States of America, Mar 11 2024 - Mar 15 2024
Note

QC 20240402

Part of ISBN 9798400703232

Available from: 2024-03-28. Created: 2024-03-28. Last updated: 2024-09-03. Bibliographically approved.
Yadollahi, E., Romeo, M., Dogan, F. I., Johal, W., De Graaf, M., Levy-Tzedek, S. & Leite, I. (2024). Explainability for Human-Robot Collaboration. In: HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 19th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2024, Boulder, United States of America, Mar 11 2024 - Mar 15 2024 (pp. 1364-1366). Association for Computing Machinery (ACM)
2024 (English). In: HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction. Association for Computing Machinery (ACM), 2024, pp. 1364-1366. Conference paper, Published paper (Refereed)
Abstract [en]

In human-robot collaboration, explainability bridges the communication gap between complex machine functionalities and humans. An active area of investigation in robotics and AI is understanding and generating explanations that can enhance collaboration and mutual understanding between humans and machines. A key to achieving such seamless collaborations is understanding end-users, whether naive or expert, and tailoring explanation features that are intuitive, user-centred, and contextually relevant. Advancing on the topic not only includes modelling humans' expectations for generating the explanations but also requires the development of metrics to evaluate generated explanations and assess how effectively autonomous systems communicate their intentions, actions, and decision-making rationale. This workshop is designed to tackle the nuanced role of explainability in enhancing the efficiency, safety, and trust in human-robot collaboration. It aims to initiate discussions on the importance of generating and evaluating explainability features developed in autonomous agents. Simultaneously, it addresses various challenges, including bias in explainability and downsides of explainability and deception in human-robot interaction.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2024
Keywords
Explainable Robotics, Human-Centered Robot Explanations, XAI
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-344807 (URN), 10.1145/3610978.3638154 (DOI), 001255070800301 (), 2-s2.0-85188063647 (Scopus ID)
Conference
19th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2024, Boulder, United States of America, Mar 11 2024 - Mar 15 2024
Note

QC 20240409

Part of ISBN 9798400703232

Available from: 2024-03-28. Created: 2024-03-28. Last updated: 2024-10-11. Bibliographically approved.
Projects
Foundation Models for Adaptive Transparency in Human-Robot Interaction (FLARE) [2025-05613_VR]; Uppsala University
Identifiers
ORCID iD: orcid.org/0000-0002-2212-4325
