kth.se Publications
Adaptive sensor fusion labeling framework for hand pose recognition in robot teleoperation
KTH, School of Engineering Sciences (SCI), Solid Mechanics (Dept.). ORCID iD: 0000-0001-8785-5885
2020 (English). In: Assembly Automation, ISSN 0144-5154, E-ISSN 1758-4078, Vol. 41, no. 3, p. 393-400. Article in journal (Refereed). Published.
Abstract [en]

Purpose: This paper centers on touchless interaction between humans and robots in the real world. The main challenges are the accuracy of hand pose identification and stable operation in a non-stationary environment, especially under multiple-sensor conditions. To guarantee the human-machine interaction system's performance, with a high recognition rate and low computational time, an adaptive sensor fusion labeling framework should be considered for surgery robot teleoperation.

Design/methodology/approach: A hand pose estimation model is proposed that combines automatic labeling with classification based on a deep convolutional neural network (DCNN) structure. Subsequently, an adaptive sensor fusion methodology is proposed for hand pose estimation with the two sensors. The sensor fusion system processes depth data and electromyography signals captured from a Leap Motion controller and a Myo armband, respectively. The developed adaptive methodology performs stable and continuous hand position estimation even when a single sensor is unable to detect the hand.

Findings: The proposed adaptive sensor fusion method is verified in experiments covering six degrees of freedom in space. The results show that the clustering model achieves higher clustering accuracy (96.31%) than the other methods, so its clusters can be regarded as real gestures. Moreover, the DCNN classifier achieves the best performance among the compared methods, with 88.47% accuracy and the lowest computational time.

Originality/value: This study provides theoretical and engineering guidance for hand pose recognition in surgery robot teleoperation and designs a new deep learning model for accuracy enhancement.
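The abstract states that the adaptive methodology keeps estimating the hand pose even when a single sensor loses the hand. The paper's own algorithm is not reproduced in this record; the sketch below only illustrates the general idea of confidence-weighted fusion with per-sensor fallback. All function names, the threshold value, and the weighting scheme are illustrative assumptions, not the authors' method.

```python
import numpy as np

def fuse_poses(pose_a, conf_a, pose_b, conf_b, threshold=0.1):
    """Confidence-weighted fusion of two hand-pose estimates.

    pose_a, pose_b: arrays of hand/joint coordinates from two sensors.
    conf_a, conf_b: scalar tracking confidences in [0, 1].
    If one confidence falls below `threshold` (e.g. the hand left that
    sensor's field of view), the other sensor's estimate is used alone,
    so the output stays continuous as long as either sensor sees the hand.
    """
    if conf_a < threshold and conf_b < threshold:
        return None            # neither sensor has a reliable estimate
    if conf_a < threshold:
        return pose_b          # fall back to sensor B
    if conf_b < threshold:
        return pose_a          # fall back to sensor A
    w_a = conf_a / (conf_a + conf_b)   # normalized confidence weight
    return w_a * pose_a + (1.0 - w_a) * pose_b

# Example: two slightly disagreeing estimates of a fingertip position (m)
a = np.array([0.10, 0.20, 0.30])
b = np.array([0.12, 0.18, 0.30])
fused = fuse_poses(a, 0.9, b, 0.3)   # weighted toward the more confident sensor
```

A linear blend is only one possible design choice; a Kalman filter or the paper's own labeling framework would handle sensor noise more rigorously.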

Place, publisher, year, edition, pages
Emerald, 2020. Vol. 41, no. 3, p. 393-400
Keywords [en]
Adaptive sensor fusion, Deep learning, Depth vision, EMG, Hand pose recognition
National Category
Robotics and automation
Identifiers
URN: urn:nbn:se:kth:diva-313920
DOI: 10.1108/AA-11-2020-0178
ISI: 000619109700001
Scopus ID: 2-s2.0-85100847804
OAI: oai:DiVA.org:kth-313920
DiVA, id: diva2:1668586
Note

QC 20220613

Available from: 2022-06-13. Created: 2022-06-13. Last updated: 2025-02-09. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Zhang, Longbin
