Adaptive Handovers for Enhanced Human-Robot Collaboration: A Human-Inspired Approach
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0003-1932-1595
2025 (English). Doctoral thesis, comprehensive summary (Other academic)
Sustainable development
SDG 8: Decent work and economic growth, SDG 9: Industry, innovation and infrastructure
Abstract [en]

As robots become more technologically capable, their presence in human environments is expected to increase, leading to more physical and social interaction between humans and robots. In these shared spaces, handovers, the act of transferring an object from one person to another, constitute a significant part of daily human interaction. This thesis focuses on enhancing human-robot interaction by drawing inspiration from human-to-human handovers.

In this thesis, we investigate forces in human handovers to formulate adaptive robot grip-release strategies, specifically addressing when a robot should release an object as a human recipient begins to take it during a handover. We developed a data-driven grip-release strategy based on a dataset of recorded human-human handovers, which has been experimentally validated in human-robot interactions. To refine this strategy for different object weights, we recorded additional handovers involving various weights, resulting in publicly available datasets and a weight-adaptive grip-release strategy. Further, this thesis examines how object weight affects human motion during handovers, enabling robots to observe changes in human motion to estimate object weights and to adapt their own motions to convey changes in object weight during handovers.
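The weight-adaptive grip-release idea above can be pictured as a simple force-threshold rule. The sketch below is purely illustrative: the function names, constants, and the linear weight dependence are assumptions for exposition, not the data-driven strategy derived in the thesis.

```python
# Hypothetical sketch of a weight-adaptive grip-release rule: the robot
# releases once the receiver's pull force exceeds a threshold that grows
# with object weight. All constants are invented for illustration.

def release_threshold(object_weight_kg: float) -> float:
    """Pull-force threshold (N) that triggers grip release.

    The linear form and constants are assumptions; the thesis derives
    its strategy from recorded human-human handover data instead.
    """
    base_n = 2.0      # assumed minimum pull force for very light objects
    per_kg_n = 1.5    # assumed extra force required per kg of object weight
    return base_n + per_kg_n * object_weight_kg


def should_release(pull_force_n: float, object_weight_kg: float) -> bool:
    """Decide whether the robot should open its gripper now."""
    return pull_force_n >= release_threshold(object_weight_kg)
```

In practice the pull force would come from a force/torque sensor in the gripper or object, sampled continuously during the transfer phase.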

Additionally, we investigate the use of non-touch modalities, such as EEG brain signals and gaze tracking, to discern human intentions during handovers, specifically differentiating between motions intended for handovers and those that are not. 

Lastly, we explore how human-robot handovers can be used to resolve robotic failures by providing explanations for these failures and adapting the explanations based on human behavioral responses.

Abstract [sv]

När robotar blir mer kapabla med teknik förväntas deras närvaro i mänskliga miljöer öka, vilket leder till mer fysisk och social interaktion mellan människor och robotar. I dessa delade utrymmen utgör överlämningar – handlingen att överföra ett objekt från en person till en annan – en betydande del av den dagliga mänskliga interaktionen. Den här avhandlingen fokuserar på att förbättra interaktionen mellan människa och robot genom att hämta inspiration från överlämningar från människa till människa.

I den här avhandlingen undersöker vi krafter i mänskliga överlämningar för att formulera adaptiva robotgrepp-release-strategier, som specifikt tar upp när en robot ska släppa ett föremål när en mänsklig mottagare börjar ta det under en överlämning. Vi utvecklade en datadriven strategi för frigörande av grepp baserad på en datauppsättning av inspelade människa-människa överlämningar, som har experimentellt validerats i interaktioner mellan människa och robot. För att förfina denna strategi för olika objektvikter, spelade vi in ytterligare överlämningar som involverade olika vikter, vilket resulterade i allmänt tillgängliga datauppsättningar och en viktadaptiv strategi för grepp-släpp. Vidare undersöker denna avhandling också hur objektvikt påverkar mänsklig rörelse under överlämningar, vilket gör det möjligt för robotar att observera förändringar i mänsklig rörelse för att uppskatta objektvikter och anpassa sina rörelser för att förmedla förändringar i objektvikter under överlämningar.

Dessutom undersöker vi användningen av icke-touch-modaliteter, såsom EEG-hjärnsignaler och blickspårning, för att urskilja mänskliga avsikter under överlämningar, specifikt skilja mellan rörelser avsedda för överlämningar och de som inte är det.

Slutligen undersöker vi också hur mänsklig-robot-överlämningar kan användas för att lösa robotfel genom att tillhandahålla förklaringar till dessa fel och anpassa förklaringarna baserat på mänskliga beteendesvar.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2025, p. xx, 130
Series
TRITA-EECS-AVL ; 2025:29
Keywords [en]
Human-Robot Collaboration, Human-Robot Handovers, Adaptive Handovers, Robotic failures, Robotic Failure Explanation
Keywords [sv]
Samarbete mellan människa och robot, Människa-robot-överlämningar, Adaptiva överlämningar, Robotfel, Förklaring av robotfel
National Category
Robotics and automation
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-360949; ISBN: 978-91-8106-216-8 (print); OAI: oai:DiVA.org:kth-360949; DiVA id: diva2:1942899
Public defence
2025-03-31, https://kth-se.zoom.us/j/66859470351, F3 (Flodis), Lindstedsvägen 26 & 28, KTH Campus, Stockholm, 14:00 (English)
Opponent
Supervisors
Note

QC 20250307

Available from: 2025-03-07. Created: 2025-03-06. Last updated: 2025-04-02. Bibliographically approved.
List of papers
1. A Multimodal Data Set of Human Handovers with Design Implications for Human-Robot Handovers
2023 (English). In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 1843-1850. Conference paper, Published paper (Refereed)
Abstract [en]

Handovers are basic yet sophisticated motor tasks performed seamlessly by humans. They are among the most common activities in our daily lives and social environments, which makes mastering the art of handovers critical for a social and collaborative robot. In this work, we present an experimental study of human-human handovers involving 13 pairs, i.e., 26 participants. We record and explore multiple features of handovers between humans, aimed at inspiring handovers between humans and robots. With this work, we further create and publish a novel dataset of 8672 handovers, which includes human motion tracking and handover forces. We further analyze the effect of object weight and the role of visual sensory input in human-human handovers, as well as possible design implications for robots. As a proof of concept, the dataset was used to create a human-inspired, data-driven strategy for robotic grip release in handovers, which was demonstrated to result in better robot-to-human handovers.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE RO-MAN, ISSN 1944-9445
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-341988 (URN); 10.1109/RO-MAN57019.2023.10309537 (DOI); 001108678600237 (ISI); 2-s2.0-85187022992 (Scopus ID)
Conference
32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 28-31, 2023, Busan, SOUTH KOREA
Note

Part of proceedings ISBN 979-8-3503-3670-2

QC 20240109

Available from: 2024-01-09. Created: 2024-01-09. Last updated: 2025-03-06. Bibliographically approved.
2. Human Inspired Grip-Release Technique for Robot-Human Handovers
2022 (English). In: 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids), IEEE Robotics and Automation Society, 2022, p. 694-701. Conference paper, Published paper (Refereed)
Abstract [en]

Fluent and natural robot-to-human handovers are essential for human-robot collaborative tasks, and the robot's grip-release action is important for achieving this fluency. This paper describes an experimental study investigating interaction forces during grip release in human-human handovers, comprising 13 participant pairs and a sensor-embedded object. The results from this study were used to create a human-inspired, data-driven strategy for robot grip release in robot-to-human handovers. This strategy was then evaluated alongside other grip-release techniques in a robot-to-human handover experiment involving 20 participants. The data-driven strategy outperformed the other strategies, producing more natural handovers through faster grip release of the sensor-embedded object.

Place, publisher, year, edition, pages
IEEE Robotics and Automation Society, 2022
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-323056 (URN); 10.1109/Humanoids53995.2022.10000227 (DOI); 000925894300091 (ISI); 2-s2.0-85146331409 (Scopus ID)
Conference
2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids)
Note

QC 20230123

Available from: 2023-01-12. Created: 2023-01-12. Last updated: 2025-03-06. Bibliographically approved.
3. Impact of Object Weight in Handovers: Inspiring Robotic Grip Release and Motion from Human Handovers
(English). Manuscript (preprint) (Other academic)
Abstract [en]

This work explores the effect of object weight on human motion and grip release during handovers to enhance the naturalness, safety, and efficiency of robot-human interactions. We introduce adaptive robotic strategies based on an analysis of human handover behavior with varying object weights. The key contributions of this work include the development of an adaptive grip-release strategy for robots, a detailed analysis of how object weight influences human motion to guide robotic motion adaptations, and the creation of handover datasets incorporating various object weights, including the YCB handover dataset. By aligning robotic grip release and motion with human behavior, this work aims to improve robot-to-human handovers for objects of different weights. We also evaluate these human-inspired adaptive robotic strategies in robot-to-human handovers to assess their effectiveness and performance, and demonstrate that they outperform baseline approaches in terms of naturalness, efficiency, and user perception.
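One way to picture the motion-based weight estimation described above is a coarse mapping from an observed motion feature to a weight class, on the assumption that heavier objects slow the giver's reaching motion. The thresholds, class labels, and feature choice below are hypothetical, not values from the paper.

```python
# Hypothetical sketch: infer a coarse weight class from the giver's peak
# hand speed during the reach. Boundaries are invented for illustration;
# the paper derives the weight-motion relationship from recorded handovers.

def estimate_weight_class(peak_speed_m_s: float) -> str:
    """Map an observed peak hand speed (m/s) to a rough weight class."""
    if peak_speed_m_s > 1.2:   # fast reach: assumed to indicate a light object
        return "light"
    if peak_speed_m_s > 0.8:   # moderate reach speed
        return "medium"
    return "heavy"             # slow reach: assumed to indicate a heavy object
```

A robot could use such an estimate both to pick a grip-release threshold and to slow its own handover motion when offering a heavy object, conveying the weight to the receiver.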

National Category
Robotics and automation
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-360920 (URN); 10.48550/arXiv.2502.17834 (DOI)
Note

QC 20250310

Available from: 2025-03-06. Created: 2025-03-06. Last updated: 2025-03-10. Bibliographically approved.
4. Early Detection of Human Handover Intentions in Human-Robot Collaboration: Comparing EEG, Gaze, and Hand Motion
(English). Manuscript (preprint) (Other academic)
Abstract [en]

Human-robot collaboration (HRC) relies on accurate and timely recognition of human intentions to ensure seamless interactions. Among common HRC tasks, human-to-robot object handovers have been studied extensively for planning the robot's actions during object reception, assuming the human intends a handover. However, distinguishing handover intentions from other actions has received limited attention. Most research on handovers has focused on visually detecting motion trajectories, which often results in delays or false detections when trajectories overlap. This paper investigates whether human intentions for object handovers are reflected in non-movement-based physiological signals. We conduct a multimodal analysis comparing three data modalities: electroencephalogram (EEG), gaze, and hand-motion signals. Our study aims to distinguish between handover-intended and non-handover human motions in an HRC setting, evaluating each modality's performance in predicting and classifying these actions before and after human movement initiation. We develop and evaluate human intention detectors based on these modalities, comparing their accuracy and timing in identifying handover intentions. To the best of our knowledge, this is the first study to systematically develop and test intention detectors across multiple modalities within the same experimental context of human-robot handovers. Our analysis reveals that handover intention can be detected from all three modalities. Nevertheless, gaze signals are both the earliest and the most accurate in classifying a motion as intended for handover or not.
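As a toy illustration of the modality comparison above, the sketch below simulates two one-dimensional feature streams with different class separation (standing in for, say, gaze versus hand motion) and scores a simple threshold classifier on each. All data and numbers are synthetic; no real EEG, gaze, or motion features are involved.

```python
# Toy comparison of intention-detection modalities: a modality whose
# features separate handover from non-handover trials more cleanly
# yields higher classification accuracy. Entirely synthetic data.
import numpy as np

rng = np.random.default_rng(0)

def simulate_modality(separation: float, n: int = 200):
    """Toy 1-D features: handover trials centred at `separation`, others at 0."""
    handover = rng.normal(separation, 1.0, n)
    non_handover = rng.normal(0.0, 1.0, n)
    X = np.concatenate([handover, non_handover])
    y = np.array([1] * n + [0] * n)
    return X, y

def threshold_accuracy(X, y):
    """Accuracy of classifying with the midpoint between the class means."""
    t = (X[y == 1].mean() + X[y == 0].mean()) / 2.0
    return float(((X >= t).astype(int) == y).mean())
```

With a well-separated stream (e.g. `simulate_modality(3.0)`) the threshold classifier is far more reliable than with a heavily overlapping one (`simulate_modality(1.0)`), mirroring the paper's qualitative finding that some modalities discriminate intention much better than others.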

Keywords
Human-Robot Collaboration (HRC), Human-Robot Handovers, EEG, Gaze, Motion Analysis
National Category
Robotics and automation
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-360885 (URN); 10.48550/arXiv.2502.11752 (DOI)
Note

QC 20250305

Available from: 2025-03-04. Created: 2025-03-04. Last updated: 2025-03-06. Bibliographically approved.
5. Effects of Explanation Strategies to Resolve Failures in Human-Robot Collaboration
2023 (English). In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 1829-1836. Conference paper, Published paper (Refereed)
Abstract [en]

Despite significant improvements in robot capabilities, robots are likely to fail in human-robot collaborative tasks due to the high unpredictability of human environments and varying human expectations. In this work, we explore the role of a robot's explanations of its failures in a human-robot collaborative task. We present a user study incorporating common failures in collaborative tasks, with human assistance to resolve them. In the study, a robot and a human work together to fill a shelf with objects. Upon encountering a failure, the robot explains the failure and the resolution to overcome it, either through handovers or by the human completing the task. The study uses different levels of robotic explanation, based on the failure action, failure cause, and action history, and different strategies for providing the explanation over the course of repeated interactions. Our results show that success in resolving failures is a function not only of the level of explanation but also of the type of failure. Furthermore, while novice users rate the robot higher overall in terms of satisfaction with the explanation, their satisfaction depends not only on the robot's explanation level in a given round but also on the prior information they received from the robot.
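The escalating explanation levels described above (failure action, failure cause, action history) could be sketched as a small policy that adds detail each time the same failure recurs. The level names and bookkeeping below are hypothetical, intended only to make the idea of a repetition-dependent explanation strategy concrete.

```python
# Hypothetical sketch of an escalating explanation strategy: start with a
# brief explanation and add more detail each time the same failure repeats.

LEVELS = [
    "action",                    # level 0: state what failed
    "action + cause",            # level 1: add why it failed
    "action + cause + history",  # level 2: add relevant action history
]

def explanation_level(failure_counts: dict, failure: str) -> str:
    """Return the explanation level for this failure and update its count."""
    n = failure_counts.get(failure, 0)
    failure_counts[failure] = n + 1
    # Escalate with each repetition, capping at the most detailed level.
    return LEVELS[min(n, len(LEVELS) - 1)]
```

The study's actual strategies varied explanation level over repeated interactions in controlled conditions; this sketch shows only one plausible monotone-escalation scheme.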

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE RO-MAN, ISSN 1944-9445
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-341981 (URN); 10.1109/RO-MAN57019.2023.10309394 (DOI); 001108678600235 (ISI); 2-s2.0-85187011787 (Scopus ID)
Conference
32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), AUG 28-31, 2023, Busan, SOUTH KOREA
Note

Part of proceedings ISBN 979-8-3503-3670-2

QC 20240109

Available from: 2024-01-09. Created: 2024-01-09. Last updated: 2025-03-06. Bibliographically approved.
6. REFLEX Dataset: A Multimodal Dataset of Human Reactions to Robot Failures and Explanations
2025 (English). In: Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction, IEEE, 2025, p. 1032-1036. Conference paper, Published paper (Refereed)
Abstract [en]

This work presents REFLEX: Robotic Explanations to FaiLures and Human EXpressions, a comprehensive multimodal dataset capturing human reactions to robot failures and subsequent explanations in collaborative settings. It aims to facilitate research into human-robot interaction dynamics, addressing the need to study reactions both to initial failures and to explanations, as well as the evolution of these reactions in long-term interactions. By providing rich, annotated data on human responses to different types of failures, explanation levels, and explanation-variation strategies, the dataset contributes to the development of more robust, adaptive, and satisfying robotic systems capable of maintaining positive relationships with human collaborators, even through challenges such as repeated failures.

Place, publisher, year, edition, pages
IEEE, 2025
Keywords
Human Robot Interaction, Dataset, Robotic Failures, Explainable AI.
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-360946 (URN); 10.5555/3721488.3721616 (DOI)
Conference
ACM/IEEE International Conference on Human-Robot Interaction, HRI, Melbourne, Australia, March 4-6, 2025
Note

QC 20250310

Available from: 2025-03-06. Created: 2025-03-06. Last updated: 2025-03-10. Bibliographically approved.

Open Access in DiVA

Kappa_summary (8231 kB)
File information
File name: SUMMARY01.pdf
File size: 8231 kB
Checksum (SHA-512): e0eb6c4a407c2257d0116d599930d53f9bd310bf5678afc9308a5c2449e2459d7d9380f62124bc51397c9f880dc6b00f98d29434fe70035a3a019fcaecbbb40f
Type: summary
Mimetype: application/pdf

Authority records

Khanna, Parag
