Toward Proactive Human-Robot Collaborative Assembly: A Multimodal Transfer-Learning-Enabled Action Prediction Approach
Hong Kong Polytech Univ, Dept Ind & Syst Engn, Kowloon, Hong Kong, Peoples R China.
Hong Kong Polytech Univ, Dept Ind & Syst Engn, Kowloon, Hong Kong, Peoples R China.
Hong Kong Polytech Univ, Dept Ind & Syst Engn, Kowloon, Hong Kong, Peoples R China.
KTH, School of Industrial Engineering and Management (ITM), Production Engineering, Sustainable Production Systems. ORCID iD: 0000-0001-8679-8049
2022 (English). In: IEEE Transactions on Industrial Electronics, ISSN 0278-0046, E-ISSN 1557-9948, Vol. 69, no. 8, p. 8579-8588. Article in journal (Refereed). Published.
Abstract [en]

Human-robot collaborative assembly (HRCA) is vital for achieving highly flexible automation for mass personalization in today's smart factories. However, existing work in both industry and academia focuses mainly on adaptive robot planning and seldom considers the human operator's intentions in advance, which hinders the transition of HRCA toward a proactive manner. To overcome this bottleneck, this article proposes a multimodal transfer-learning-enabled action prediction approach as the prerequisite for proactive HRCA. First, a multimodal intelligence-based action recognition approach is proposed to predict ongoing human actions by leveraging a visual stream and a skeleton stream with short-time input frames. Second, a transfer-learning-enabled model is adapted to rapidly transfer knowledge learnt from daily activities to industrial assembly operations for online operator intention analysis. Third, a dynamic decision-making mechanism, including robotic decision and motion control, is described to allow mobile robots to assist operators in a proactive manner. Finally, an aircraft bracket assembly task is demonstrated in a laboratory environment, and a comparative study shows that the proposed approach outperforms other state-of-the-art methods for efficient action prediction.
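The transfer-learning idea summarized in the abstract — reuse a model pretrained on daily activities and adapt only a small part of it to the target assembly domain — can be sketched in a minimal, illustrative form. The sketch below is an assumption-laden toy, not the paper's implementation: the "backbone" is a stand-in for a pretrained multimodal encoder (visual + skeleton streams fused by concatenation), its weights are frozen, and only a new linear classification head is trained on a few synthetic "assembly" samples.

```python
import numpy as np

# Illustrative sketch only: all names and data are hypothetical, not from the
# paper. A pretrained multimodal backbone is treated as frozen, and a fresh
# linear head is retrained on a small target-domain dataset.

rng = np.random.default_rng(0)

def frozen_backbone(visual, skeleton):
    """Stand-in for a pretrained multimodal encoder: fuse the visual and
    skeleton feature streams by concatenation (weights fixed, not trained)."""
    return np.concatenate([visual, skeleton], axis=1)

# Tiny synthetic "assembly" dataset: 2 action classes, class-dependent shifts.
n, d_vis, d_skel, n_classes = 40, 8, 4, 2
labels = rng.integers(0, n_classes, size=n)
visual = rng.normal(size=(n, d_vis)) + labels[:, None]
skeleton = rng.normal(size=(n, d_skel)) - labels[:, None]

feats = frozen_backbone(visual, skeleton)  # (n, 12), frozen features

# Retrain only the linear head: softmax regression via gradient descent.
W = np.zeros((feats.shape[1], n_classes))
for _ in range(200):
    logits = feats @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    onehot = np.eye(n_classes)[labels]
    W -= 0.1 * feats.T @ (p - onehot) / n  # cross-entropy gradient step

pred = (feats @ W).argmax(axis=1)
print("training accuracy:", (pred == labels).mean())
```

Freezing the backbone and fitting only the head is the cheapest form of transfer learning; the paper's approach additionally handles online intention analysis and short-time input frames, which this sketch does not attempt to model.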

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022. Vol. 69, no. 8, p. 8579-8588
Keywords [en]
Robots, Three-dimensional displays, Collaboration, Service robots, Skeleton, Videos, Visualization, Action recognition, human-robot collaboration, multimodal intelligence, transfer learning
National Category
Robotics and automation
Identifiers
URN: urn:nbn:se:kth:diva-310197
DOI: 10.1109/TIE.2021.3105977
ISI: 000764880700100
Scopus ID: 2-s2.0-85114652119
OAI: oai:DiVA.org:kth-310197
DiVA, id: diva2:1649543
Note

QC 20220404

Available from: 2022-04-04. Created: 2022-04-04. Last updated: 2025-02-09. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus

Authority records

Wang, Lihui
