Publications (10 of 14)
Khanna, P., Rajabi, N., Kanik, S. U. D., Kragic Jensfelt, D., Björkman, M. & Smith, C. (2026). Early detection of human handover intentions in human–robot collaboration: Comparing EEG, gaze, and hand motion. Robotics and Autonomous Systems, 196, Article ID 105244.
2026 (English). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 196, article id 105244. Article in journal (Refereed), Published.
Abstract [en]

Human–robot collaboration (HRC) relies on accurate and timely recognition of human intentions to ensure seamless interactions. Among common HRC tasks, human-to-robot object handovers have been studied extensively for planning the robot's actions during object reception, assuming the human intention for object handover. However, distinguishing handover intentions from other actions has received limited attention. Most research on handovers has focused on visually detecting motion trajectories, which often results in delays or false detections when trajectories overlap. This paper investigates whether human intentions for object handovers are reflected in non-movement-based physiological signals. We conduct a multimodal analysis comparing three data modalities: electroencephalogram (EEG), gaze, and hand-motion signals. Our study aims to distinguish between handover-intended human motions and non-handover motions in an HRC setting, evaluating each modality's performance in predicting and classifying these actions before and after human movement initiation. We develop and evaluate human intention detectors based on these modalities, comparing their accuracy and timing in identifying handover intentions. To the best of our knowledge, this is the first study to systematically develop and test intention detectors across multiple modalities within the same experimental context of human–robot handovers. Our analysis reveals that handover intention can be detected from all three modalities. Nevertheless, gaze signals are the earliest as well as the most accurate to classify the motion as intended for handover or non-handover.
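To give a flavor of what "earliest detection from gaze" can mean in practice, here is a minimal, hypothetical dwell-time detector. It is not the paper's classifier; the 60 Hz sampling rate, 300 ms dwell threshold, and fixation labels are assumptions made for the sketch:

```python
# Hypothetical gaze-based handover-intention detector: flag intent as
# soon as fixation dwells on the receiver's hand region for a minimum
# duration. All constants are illustrative, not values from the study.

SAMPLE_HZ = 60   # assumed gaze-tracker sampling rate
DWELL_S = 0.3    # assumed minimum fixation duration, seconds

def detect_handover_intent(gaze_targets, target="robot_hand"):
    """gaze_targets: per-sample fixation labels from the pre-movement
    window. Returns (intent_detected, detection_time_s)."""
    needed = round(DWELL_S * SAMPLE_HZ)
    run = 0
    for i, label in enumerate(gaze_targets):
        run = run + 1 if label == target else 0
        if run >= needed:
            return True, i / SAMPLE_HZ  # earliest time intent is confirmed
    return False, None
```

A trace that fixates the robot's hand for 20 consecutive samples triggers after roughly 0.28 s, while a trace that never fixates it yields no detection; a real detector would of course be learned from data rather than thresholded by hand.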

Place, publisher, year, edition, pages
Elsevier BV, 2026
Keywords
EEG, Gaze, Human–robot collaboration (HRC), Human–robot handovers, Motion analysis
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-373139 (URN)
10.1016/j.robot.2025.105244 (DOI)
2-s2.0-105021346666 (Scopus ID)
Available from: 2025-11-21. Created: 2025-11-21. Last updated: 2025-11-21. Bibliographically approved.
Khanna, P. (2025). Adapting Robotic Explanations for Robotic Failures in Human Robot Collaboration. In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, March 4-6, 2025 (pp. 1863-1865). Institute of Electrical and Electronics Engineers (IEEE)
2025 (English). In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction, Institute of Electrical and Electronics Engineers (IEEE), 2025, p. 1863-1865. Conference paper, Published paper (Refereed).
Abstract [en]

My research focuses on adapting robotic failure explanations to enhance human-robot collaboration (HRC). I examine how different explanation types and explanation progression strategies impact failure resolution and user satisfaction by conducting a user study with multiple interaction rounds featuring repeated robotic failures and varying explanations. I also created a novel multimodal dataset of human responses to these failures and explanations. By analyzing human behavioral responses, I developed a predictor to anticipate user confusion following a specific robotic explanation at a robotic failure. This predictor enables an adaptive mechanism to dynamically adjust explanations based on user needs, fostering efficient and natural collaboration. This research aims to significantly improve overall user experience in HRC, making collaborations with robots smoother and more intuitive even when failures occur.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
HRC, Robotic Failures and Explanations
National Category
Robotics and automation; Human Computer Interaction; Computer Sciences
Identifiers
urn:nbn:se:kth:diva-363760 (URN)
10.1109/HRI61500.2025.10973943 (DOI)
2-s2.0-105004878031 (Scopus ID)
Conference
20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, March 4-6, 2025
Note

Part of ISBN 9798350378931

Available from: 2025-05-21. Created: 2025-05-21. Last updated: 2025-05-26. Bibliographically approved.
Khanna, P. (2025). Adaptive Handovers for Enhanced Human-Robot Collaboration: A Human-Inspired Approach. (Doctoral dissertation). Stockholm: KTH Royal Institute of Technology
2025 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

As robots become more capable with technology, their presence in human environments is expected to increase, leading to more physical and social interactions between humans and robots. In these shared spaces, handovers—the act of transferring an object from one person to another—constitute a significant part of daily human interactions. This thesis focuses on enhancing human-robot interaction by drawing inspiration from human-to-human handovers.

In this thesis, we investigate forces in human handovers to formulate adaptive robot grip release strategies, specifically addressing when a robot should release an object as a human recipient begins to take it during a handover. We developed a data-driven grip release strategy based on a dataset of recorded human-human handovers, which has been experimentally validated in human-robot interactions. To refine this strategy for different object weights, we recorded additional handovers involving various weights, resulting in publicly available datasets and a weight-adaptive grip release strategy. Further, this thesis examines how object weight affects human motion during handovers, enabling robots to observe changes in human motion to estimate object weights and to adapt their own motions to convey changes in object weight during handovers.

Additionally, we investigate the use of non-touch modalities, such as EEG brain signals and gaze tracking, to discern human intentions during handovers, specifically differentiating between motions intended for handovers and those that are not. 

Lastly, we also explore how human-robot handovers can be used to resolve robotic failures by providing explanations for these failures and adapting the explanations based on human behavioral responses.

Abstract [sv]

När robotar blir mer kapabla med teknik förväntas deras närvaro i mänskliga miljöer öka, vilket leder till mer fysisk och social interaktion mellan människor och robotar. I dessa delade utrymmen utgör överlämningar – handlingen att överföra ett objekt från en person till en annan – en betydande del av den dagliga mänskliga interaktionen. Den här avhandlingen fokuserar på att förbättra interaktionen mellan människa och robot genom att hämta inspiration från överlämningar från människa till människa.

I den här avhandlingen undersöker vi krafter i mänskliga överlämningar för att formulera adaptiva strategier för robotens grepp-släpp, som specifikt tar upp när en robot ska släppa ett föremål när en mänsklig mottagare börjar ta det under en överlämning. Vi utvecklade en datadriven strategi för frigörande av grepp baserad på en datauppsättning av inspelade människa-människa-överlämningar, som har validerats experimentellt i interaktioner mellan människa och robot. För att förfina denna strategi för olika objektvikter spelade vi in ytterligare överlämningar som involverade olika vikter, vilket resulterade i allmänt tillgängliga datauppsättningar och en viktadaptiv strategi för grepp-släpp. Vidare undersöker denna avhandling också hur objektvikt påverkar mänsklig rörelse under överlämningar, vilket gör det möjligt för robotar att observera förändringar i mänsklig rörelse för att uppskatta objektvikter och anpassa sina rörelser för att förmedla förändringar i objektvikter under överlämningar.

Dessutom undersöker vi användningen av icke-touch-modaliteter, såsom EEG-hjärnsignaler och blickspårning, för att urskilja mänskliga avsikter under överlämningar, specifikt skilja mellan rörelser avsedda för överlämningar och de som inte är det.

Slutligen undersöker vi också hur mänsklig-robot-överlämningar kan användas för att lösa robotfel genom att tillhandahålla förklaringar till dessa fel och anpassa förklaringarna baserat på mänskliga beteendesvar.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2025. p. xx, 130
Series
TRITA-EECS-AVL ; 2025:29
Keywords
Human-Robot Collaboration, Human-Robot Handovers, Adaptive Handovers, Robotic failures, Robotic Failure Explanation, Samarbete mellan människa och robot, Överlämningar av människor och robotar, Adaptiva överlämningar, Robotfel, Förklaring av robotfel.
National Category
Robotics and automation
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-360949 (URN)
978-91-8106-216-8 (ISBN)
Public defence
2025-03-31, https://kth-se.zoom.us/j/66859470351, F3 (Flodis), Lindstedsvägen 26 & 28, KTH Campus, Stockholm, 14:00 (English)
Available from: 2025-03-07. Created: 2025-03-06. Last updated: 2025-10-29. Bibliographically approved.
Khanna, P., Naoum, A., Yadollahi, E., Björkman, M. & Smith, C. (2025). REFLEX Dataset: A Multimodal Dataset of Human Reactions to Robot Failures and Explanations. In: Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at ACM/IEEE International Conference on Human-Robot Interaction, HRI, Melbourne, Australia, March 4-6, 2025 (pp. 1032-1036). IEEE
2025 (English). In: Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction, IEEE, 2025, p. 1032-1036. Conference paper, Published paper (Refereed).
Abstract [en]

This work presents REFLEX: Robotic Explanations to FaiLures and Human EXpressions, a comprehensive multimodal dataset capturing human reactions to robot failures and subsequent explanations in collaborative settings. It aims to facilitate research into human-robot interaction dynamics, addressing the need to study reactions both to initial failures and to explanations, as well as the evolution of these reactions in long-term interactions. By providing rich, annotated data on human responses to different failure types, explanation levels, and explanation-variation strategies, the dataset contributes to the development of more robust, adaptive, and satisfying robotic systems capable of maintaining positive relationships with human collaborators, even during challenges like repeated failures.

Place, publisher, year, edition, pages
IEEE, 2025
Keywords
Human Robot Interaction, Dataset, Robotic Failures, Explainable AI.
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-360946 (URN)
10.5555/3721488.3721616 (DOI)
Conference
ACM/IEEE International Conference on Human-Robot Interaction, HRI, Melbourne, Australia, March 4-6, 2025
Available from: 2025-03-06. Created: 2025-03-06. Last updated: 2025-03-10. Bibliographically approved.
Khanna, P., Naoum, A., Yadollahi, E., Björkman, M. & Smith, C. (2025). REFLEX Dataset: A Multimodal Dataset of Human Reactions to Robot Failures and Explanations. In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, March 4-6, 2025 (pp. 1032-1036). Institute of Electrical and Electronics Engineers (IEEE)
2025 (English). In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction, Institute of Electrical and Electronics Engineers (IEEE), 2025, p. 1032-1036. Conference paper, Published paper (Refereed).
Abstract [en]

This work presents REFLEX: Robotic Explanations to FaiLures and Human EXpressions, a comprehensive multimodal dataset capturing human reactions to robot failures and subsequent explanations in collaborative settings. It aims to facilitate research into human-robot interaction dynamics, addressing the need to study reactions both to initial failures and to explanations, as well as the evolution of these reactions in long-term interactions. By providing rich, annotated data on human responses to different failure types, explanation levels, and explanation-variation strategies, the dataset contributes to the development of more robust, adaptive, and satisfying robotic systems capable of maintaining positive relationships with human collaborators, even during challenges like repeated failures.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
Dataset, Explainable AI, Human Robot Interaction, Robotic Failures
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-363769 (URN)
10.1109/HRI61500.2025.10974185 (DOI)
2-s2.0-105004877597 (Scopus ID)
Conference
20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, March 4-6, 2025
Note

Part of ISBN 9798350378931

Available from: 2025-05-21. Created: 2025-05-21. Last updated: 2025-05-26. Bibliographically approved.
Khanna, P., Fredberg, J., Björkman, M., Smith, C. & Linard, A. (2024). Hand it to me formally! Data-driven control for human-robot handovers with signal temporal logic. IEEE Robotics and Automation Letters, 9(10), 9039-9046
2024 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 9, no 10, p. 9039-9046. Article in journal (Refereed), Published.
Abstract [en]

To facilitate human-robot interaction (HRI), we aim for robot behavior that is efficient, transparent, and closely resembles human actions. Signal Temporal Logic (STL) is a formal language that enables the specification and verification of complex temporal properties in robotic systems, helping to ensure their correctness. STL can be used to generate explainable robot behavior, whose degree of satisfaction can be quantified by checking its STL robustness. In this letter, we use data-driven STL inference techniques to model human behavior in human-human interactions on a handover dataset. We then use the learned model to generate robot behavior in human-robot interactions. We present a handover planner based on inferred STL specifications to command robotic motion in human-robot handovers. We also validate our method in a human-to-robot handover experiment.
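To make the robustness idea concrete: for simple formulas such as "eventually the signal drops below a threshold", quantitative STL semantics reduce to max/min margins over the trace, with a positive value meaning the formula is satisfied. The toy specification and trace values below are assumptions for illustration, not the specifications inferred in the letter:

```python
# Toy quantitative semantics for two STL fragments over a finite trace.
# rho > 0 means the trace satisfies the formula; the magnitude is the
# robustness margin.

def rho_eventually_below(trace, c):
    # F (x < c): best (largest) margin achieved anywhere on the trace
    return max(c - x for x in trace)

def rho_always_below(trace, c):
    # G (x < c): worst (smallest) margin over the whole trace
    return min(c - x for x in trace)

# Hypothetical handover trace: object-to-receiver distance (m) and
# end-effector speed (m/s), sampled over the reach phase.
dist = [0.50, 0.32, 0.18, 0.07, 0.03]
speed = [0.00, 0.40, 0.60, 0.30, 0.10]

reach_margin = rho_eventually_below(dist, 0.05)  # positive: handover point reached
speed_margin = rho_always_below(speed, 0.80)     # positive: speed cap respected
```

A planner in this spirit can rank candidate robot trajectories by such margins, preferring motions that satisfy the inferred specification most robustly.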

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
Handover, Robots, Robot kinematics, Behavioral sciences, Trajectory, Logic, Robustness, Human-robot handovers, Signal Temporal Logic (STL)
National Category
Robotics and automation; Control Engineering
Identifiers
urn:nbn:se:kth:diva-354524 (URN)
10.1109/LRA.2024.3447476 (DOI)
001316210300020 (ISI)
2-s2.0-85201769650 (Scopus ID)
Available from: 2024-10-11. Created: 2024-10-11. Last updated: 2025-02-05. Bibliographically approved.
Khanna, P., Björkman, M. & Smith, C. (2023). A Multimodal Data Set of Human Handovers with Design Implications for Human-Robot Handovers. In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication, RO-MAN. Paper presented at 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), August 28-31, 2023, Busan, South Korea (pp. 1843-1850). Institute of Electrical and Electronics Engineers (IEEE)
2023 (English). In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication, RO-MAN, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 1843-1850. Conference paper, Published paper (Refereed).
Abstract [en]

Handovers are basic yet sophisticated motor tasks performed seamlessly by humans. They are among the most common activities in our daily lives and social environments. This makes mastering the art of handovers critical for a social and collaborative robot. In this work, we present an experimental study of human-human handovers by 13 pairs, i.e., 26 participants. We record and explore multiple features of handovers amongst humans, aimed at inspiring handovers between humans and robots. With this work, we further create and publish a novel data set of 8672 handovers, which includes human motion tracking and the handover forces. We further analyze the effect of object weight and the role of visual sensory input in human-human handovers, as well as possible design implications for robots. As a proof of concept, the data set was used to create a human-inspired, data-driven strategy for robotic grip release in handovers, which was demonstrated to result in better robot-to-human handovers.
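As a hedged illustration of what a force-based grip-release rule might reduce to at run time (not the dataset-derived strategy from the paper; the 20% load share and the force values are hypothetical):

```python
# Hypothetical giver-side release rule for a robot-to-human handover:
# open the gripper once the receiver's measured pull force exceeds a
# weight-dependent share of the object's load, so light objects are
# not released prematurely and heavy objects are not dropped early.

def should_release(pull_force_n, object_weight_n, share=0.2):
    """Release when the receiver supports at least `share` of the load.
    Forces in newtons; `share` is an assumed tuning constant."""
    return pull_force_n >= share * object_weight_n

# A 1.5 N pull on a 5 N object clears the assumed 1.0 N threshold,
# while a 0.5 N pull on a 10 N object does not.
release_light = should_release(1.5, 5.0)
release_heavy = should_release(0.5, 10.0)
```

A data-driven strategy like the paper's would replace the fixed `share` with behavior learned from recorded human-human handover forces.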

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE RO-MAN, ISSN 1944-9445
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-341988 (URN)
10.1109/RO-MAN57019.2023.10309537 (DOI)
001108678600237 (ISI)
2-s2.0-85187022992 (Scopus ID)
Conference
32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), August 28-31, 2023, Busan, South Korea
Note

Part of proceedings ISBN 979-8-3503-3670-2

Available from: 2024-01-09. Created: 2024-01-09. Last updated: 2025-03-06. Bibliographically approved.
Rajabi, N., Khanna, P., Kanik, S. U. D., Yadollahi, E., Vasco, M., Björkman, M., . . . Kragic, D. (2023). Detecting the Intention of Object Handover in Human-Robot Collaborations: An EEG Study. In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication, RO-MAN. Paper presented at 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), August 28-31, 2023, Busan, South Korea (pp. 549-555). Institute of Electrical and Electronics Engineers (IEEE)
2023 (English). In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication, RO-MAN, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 549-555. Conference paper, Published paper (Refereed).
Abstract [en]

Human-robot collaboration (HRC) relies on smooth and safe interactions. In this paper, we focus on the human-to-robot handover scenario, where the robot acts as a taker. We investigate the feasibility of detecting the intention of a human-to-robot handover action through the analysis of electroencephalogram (EEG) signals. Our study confirms that temporal patterns in EEG signals provide information about motor planning and can be leveraged to predict the likelihood of an individual executing a motor task with an average accuracy of 94.7%. We also suggest the effectiveness of the time-frequency features of EEG signals in the final second prior to the movement for distinguishing between handover action and other actions. Furthermore, we classify human intentions for different tasks based on time-frequency representations of pre-movement EEG signals and achieve an average accuracy of 63.5% for contrasting every two tasks against each other. The result encourages the possibility of using EEG signals to detect human handover intention in HRC tasks.
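The time-frequency intuition behind such pre-movement features can be sketched with a toy example: motor planning is commonly associated with mu-band (8-12 Hz) desynchronisation, i.e. reduced band power in the final second before movement. The naive DFT, sampling rate, and synthetic signals below are illustrative assumptions, not the study's actual pipeline:

```python
# Toy mu-band power feature from one synthetic EEG channel, computed
# with a naive DFT over the assumed final 1 s pre-movement window.
import math

def band_power(signal, fs, f_lo, f_hi):
    """Sum of normalised DFT power in [f_lo, f_hi] Hz for one channel."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2):
        f = k * fs / n
        if not (f_lo <= f <= f_hi):
            continue
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power += (re * re + im * im) / (n * n)
    return power

FS = 128  # assumed sampling rate, Hz
ts = [i / FS for i in range(FS)]  # one-second window
baseline = [math.sin(2 * math.pi * 10 * x) for x in ts]        # strong mu rhythm at rest
pre_move = [0.3 * math.sin(2 * math.pi * 10 * x) for x in ts]  # attenuated mu rhythm

# Motor planning typically shows mu-band desynchronisation: lower
# 8-12 Hz power in the pre-movement window than at rest, which a
# classifier can use as one feature among many.
mu_rest = band_power(baseline, FS, 8, 12)
mu_plan = band_power(pre_move, FS, 8, 12)
```

A real system would use many channels, a proper time-frequency transform, and a trained classifier rather than a single band-power comparison.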

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE RO-MAN, ISSN 1944-9445
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-342040 (URN)
10.1109/RO-MAN57019.2023.10309426 (DOI)
001108678600078 (ISI)
2-s2.0-85186991854 (Scopus ID)
Conference
32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), August 28-31, 2023, Busan, South Korea
Note

Part of proceedings ISBN 979-8-3503-3670-2

Available from: 2024-01-10. Created: 2024-01-10. Last updated: 2025-02-09. Bibliographically approved.
Khanna, P., Yadollahi, E., Björkman, M., Leite, I. & Smith, C. (2023). Effects of Explanation Strategies to Resolve Failures in Human-Robot Collaboration. In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication, RO-MAN. Paper presented at 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), August 28-31, 2023, Busan, South Korea (pp. 1829-1836). Institute of Electrical and Electronics Engineers (IEEE)
2023 (English). In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication, RO-MAN, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 1829-1836. Conference paper, Published paper (Refereed).
Abstract [en]

Despite significant improvements in robot capabilities, they are likely to fail in human-robot collaborative tasks due to high unpredictability in human environments and varying human expectations. In this work, we explore the role of explanation of failures by a robot in a human-robot collaborative task. We present a user study incorporating common failures in collaborative tasks with human assistance to resolve the failure. In the study, a robot and a human work together to fill a shelf with objects. Upon encountering a failure, the robot explains the failure and the resolution to overcome the failure, either through handovers or humans completing the task. The study is conducted using different levels of robotic explanation based on the failure action, failure cause, and action history, and different strategies in providing the explanation over the course of repeated interaction. Our results show that the success in resolving the failures is not only a function of the level of explanation but also the type of failures. Furthermore, while novice users rate the robot higher overall in terms of their satisfaction with the explanation, their satisfaction is not only a function of the robot's explanation level at a certain round but also the prior information they received from the robot.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE RO-MAN, ISSN 1944-9445
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-341981 (URN)
10.1109/RO-MAN57019.2023.10309394 (DOI)
001108678600235 (ISI)
2-s2.0-85187011787 (Scopus ID)
Conference
32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), August 28-31, 2023, Busan, South Korea
Note

Part of proceedings ISBN 979-8-3503-3670-2

Available from: 2024-01-09. Created: 2024-01-09. Last updated: 2025-03-06. Bibliographically approved.
Khanna, P., Yadollahi, E., Leite, I., Björkman, M. & Smith, C. (2023). How do Humans take an Object from a Robot: Behavior changes observed in a User Study. In: HAI 2023 - Proceedings of the 11th Conference on Human-Agent Interaction. Paper presented at 11th Conference on Human-Agent Interaction, HAI 2023, Gothenburg, Sweden, December 4-11, 2023 (pp. 372-374). Association for Computing Machinery (ACM)
2023 (English). In: HAI 2023 - Proceedings of the 11th Conference on Human-Agent Interaction, Association for Computing Machinery (ACM), 2023, p. 372-374. Conference paper, Published paper (Refereed).
Abstract [en]

To facilitate human-robot interaction and gain human trust, a robot should recognize and adapt to changes in human behavior. This work documents different human behaviors observed while taking objects from an interactive robot in an experimental study, categorized across two dimensions: pull force applied and handedness. We also present the changes observed in human behavior upon repeated interaction with the robot to take various objects.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
HRI, Human-Robot Collaboration, Human-Robot Handovers
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-341674 (URN)
10.1145/3623809.3623929 (DOI)
001148034200049 (ISI)
2-s2.0-85180129229 (Scopus ID)
Conference
11th Conference on Human-Agent Interaction, HAI 2023, Gothenburg, Sweden, December 4-11, 2023
Note

Part of ISBN 9798400708244

Available from: 2023-12-29. Created: 2023-12-29. Last updated: 2025-02-09. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-1932-1595
