Early detection of human handover intentions in human–robot collaboration: Comparing EEG, gaze, and hand motion
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0003-1932-1595
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Collaborative Autonomous Systems. ORCID iD: 0000-0003-2533-7868
Ericsson Research, Stockholm, Sweden.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Collaborative Autonomous Systems. ORCID iD: 0000-0003-2965-2953
2026 (English). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 196, article id 105244. Article in journal (Refereed). Published.
Abstract [en]

Human–robot collaboration (HRC) relies on accurate and timely recognition of human intentions to ensure seamless interactions. Among common HRC tasks, human-to-robot object handovers have been studied extensively for planning the robot's actions during object reception, under the assumption that the human already intends a handover. However, distinguishing handover intentions from other actions has received limited attention. Most research on handovers has focused on visually detecting motion trajectories, which often results in delays or false detections when trajectories overlap. This paper investigates whether human intentions for object handovers are reflected in non-movement-based physiological signals. We conduct a multimodal analysis comparing three data modalities: electroencephalogram (EEG), gaze, and hand-motion signals. Our study aims to distinguish between handover-intended and non-handover human motions in an HRC setting, evaluating each modality's performance in predicting and classifying these actions both before and after movement initiation. We develop and evaluate human intention detectors based on these modalities, comparing their accuracy and timing in identifying handover intentions. To the best of our knowledge, this is the first study to systematically develop and test intention detectors across multiple modalities within the same experimental context of human–robot handovers. Our analysis reveals that handover intention can be detected from all three modalities. Nevertheless, gaze signals provide both the earliest and the most accurate classification of a motion as handover-intended or not.

Place, publisher, year, edition, pages
Elsevier BV, 2026. Vol. 196, article id 105244.
Keywords [en]
EEG, Gaze, Human–robot collaboration (HRC), Human–robot handovers, Motion analysis
National Category
Robotics and automation
Identifiers
URN: urn:nbn:se:kth:diva-373139
DOI: 10.1016/j.robot.2025.105244
Scopus ID: 2-s2.0-105021346666
OAI: oai:DiVA.org:kth-373139
DiVA, id: diva2:2015483
Note

QC 20251121

Available from: 2025-11-21. Created: 2025-11-21. Last updated: 2025-11-21. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Khanna, Parag; Rajabi, Nona; Kragic Jensfelt, Danica; Björkman, Mårten; Smith, Christian
