A deep learning-enabled visual-inertial fusion method for human pose estimation in occluded human-robot collaborative assembly scenarios
State Key Laboratory of Fluid Power and Mechatronic Systems, School of Mechanical Engineering, Zhejiang University, Hangzhou, China.
State Key Laboratory of Fluid Power and Mechatronic Systems, School of Mechanical Engineering, Zhejiang University, Hangzhou, China.
School of Engineering Technology, Purdue University, West Lafayette, United States.
State Key Laboratory of Fluid Power and Mechatronic Systems, School of Mechanical Engineering, Zhejiang University, Hangzhou, China; Dongfang Electric (Hangzhou) Innovation Institute Co., Ltd., Hangzhou, China.
2025 (English). In: Robotics and Computer-Integrated Manufacturing, ISSN 0736-5845, E-ISSN 1879-2537, Vol. 93, article id 102906. Article in journal (Refereed). Published.
Abstract [en]

In the context of human-centric smart manufacturing, human-robot collaboration (HRC) systems leverage the strengths of both humans and machines to achieve more flexible and efficient manufacturing. In particular, estimating and monitoring human motion status determines when and how the robots cooperate. However, occlusion in industrial settings severely degrades the performance of human pose estimation (HPE). Using more sensors can alleviate the occlusion issue, but it may incur additional computational cost and reduce workers' comfort. To address this issue, this work proposes a visual-inertial fusion-based method for HPE in HRC, aiming to achieve accurate and robust estimation while minimizing the influence on human motion. A part-specific cross-modal fusion mechanism is designed to integrate spatial information provided by a monocular camera and six Inertial Measurement Units (IMUs). A multi-scale temporal module is developed to model the motion dependence between frames at different granularities. Our approach achieves a Mean Per Joint Position Error (MPJPE) of 34.9 mm on the TotalCapture dataset and 53.9 mm on the 3DPW dataset, outperforming state-of-the-art visual-inertial fusion-based methods. Tests on a synthetic-occlusion dataset further validate the occlusion robustness of our network. Quantitative and qualitative experiments on a real assembly case verify the superiority and potential of our approach in HRC. This work is expected to serve as a reference for human motion perception in occluded HRC scenarios.
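The reported numbers use the standard MPJPE metric: the per-joint Euclidean distance between predicted and ground-truth 3D joint positions, averaged over joints and frames, conventionally after translating both skeletons so the root joint coincides. A minimal sketch of the metric (the root index and millimetre units are placeholders; the paper's exact evaluation protocol may differ):

```python
import numpy as np

def mpjpe(pred, gt, root_index=0, align_root=True):
    """Mean Per Joint Position Error in the units of the input (e.g. mm).

    pred, gt: arrays of shape (frames, joints, 3).
    If align_root is True, both poses are translated so the root joint
    sits at the origin before the error is computed (the usual protocol).
    """
    if align_root:
        pred = pred - pred[:, root_index:root_index + 1, :]
        gt = gt - gt[:, root_index:root_index + 1, :]
    # Euclidean distance per joint, averaged over all joints and frames.
    return np.linalg.norm(pred - gt, axis=-1).mean()

# Example: 100 frames of a 17-joint skeleton with noisy predictions.
gt = np.random.rand(100, 17, 3) * 1000.0      # ground truth, mm
pred = gt + np.random.randn(100, 17, 3) * 30  # ~30 mm noise per axis
print(f"MPJPE: {mpjpe(pred, gt):.1f} mm")
```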

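The abstract names a part-specific cross-modal fusion mechanism combining one monocular camera with six IMUs (the "cross transformer" of the keywords), but the architectural details are only in the full text, which is not deposited here. Purely as an illustration of the general idea behind such a fusion, here is a hypothetical PyTorch sketch in which one body part's visual token cross-attends to the six IMU tokens; the class name, dimensions, and residual design are all assumptions, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class PartCrossModalFusion(nn.Module):
    """Illustrative part-specific cross-modal fusion via cross-attention.

    Hypothetical reconstruction: the visual feature for one body part
    queries the features of the six IMUs, and the attended inertial
    context is fused back into the visual stream with a residual.
    """

    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, visual_feat, imu_feat):
        # visual_feat: (batch, 1, dim) -- one token per body part from the camera
        # imu_feat:    (batch, 6, dim) -- one token per IMU
        attended, _ = self.cross_attn(query=visual_feat, key=imu_feat, value=imu_feat)
        return self.norm(visual_feat + attended)  # residual fusion

# Example: fuse one part token with six IMU tokens for a batch of 8.
fusion = PartCrossModalFusion()
out = fusion(torch.randn(8, 1, 256), torch.randn(8, 6, 256))
print(out.shape)  # torch.Size([8, 1, 256])
```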
Place, publisher, year, edition, pages
Elsevier BV, 2025. Vol. 93, article id 102906.
Keywords [en]
Cross transformer, Human pose estimation, Human-robot collaboration, Occlusion, Visual-inertial fusion
National Category
Computer graphics and computer vision; Robotics and automation; Signal Processing
Identifiers
URN: urn:nbn:se:kth:diva-357680
DOI: 10.1016/j.rcim.2024.102906
Scopus ID: 2-s2.0-85210534696
OAI: oai:DiVA.org:kth-357680
DiVA, id: diva2:1920787
Note

QC 20241213

Available from: 2024-12-12. Created: 2024-12-12. Last updated: 2025-02-05. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Wang, Lihui
