Towards Robust Human-Robot Collaborative Manufacturing: Multimodal Fusion
Liu, Hongyi: KTH, School of Industrial Engineering and Management (ITM), Production Engineering.
Fang, Tongtong: KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS.
Zhou, Tianyu: KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS.
Wang, Lihui: KTH, School of Industrial Engineering and Management (ITM), Production Engineering, Production Systems. ORCID iD: 0000-0001-8679-8049
2018 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 6, p. 74762-74771. Article in journal (Refereed). Published.
Abstract [en]

Intuitive and robust multimodal robot control is key to human-robot collaboration (HRC) in manufacturing systems. Previous studies have introduced multimodal robot control methods that allow human operators to control robots intuitively without programming brand-specific code. However, most of these methods are unreliable because feature representations are not shared across the modalities. To address this problem, this paper proposes a deep learning-based multimodal fusion architecture for robust multimodal HRC manufacturing systems. The architecture covers three modalities: speech commands, hand motions, and body motions. Three unimodal models are first trained to extract features, which are then fused for representation sharing. Experiments show that the fused multimodal model outperforms each of the three unimodal models. These results indicate great potential for applying the proposed multimodal fusion architecture to robust HRC manufacturing systems.
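
The fusion pipeline sketched in the abstract (train three unimodal feature extractors, then fuse their features into a shared representation that maps to robot commands) could look roughly as follows. This is a minimal illustrative sketch in PyTorch, not the authors' implementation: the encoder architectures, input dimensions, feature sizes, and the 8-command vocabulary are all assumptions for demonstration.

import torch
import torch.nn as nn

class UnimodalEncoder(nn.Module):
    # Stand-in for one pretrained unimodal model (speech, hand, or body);
    # per the abstract, each is trained separately before fusion.
    # Layer widths are illustrative assumptions.
    def __init__(self, in_dim: int, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, feat_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class MultimodalFusion(nn.Module):
    # Concatenates the three unimodal feature vectors and learns a shared
    # representation that maps to robot-command classes.
    def __init__(self, speech_dim, hand_dim, body_dim, n_commands=8, feat_dim=64):
        super().__init__()
        self.speech = UnimodalEncoder(speech_dim, feat_dim)
        self.hand = UnimodalEncoder(hand_dim, feat_dim)
        self.body = UnimodalEncoder(body_dim, feat_dim)
        self.fusion = nn.Sequential(
            nn.Linear(3 * feat_dim, 128), nn.ReLU(),
            nn.Linear(128, n_commands),
        )

    def forward(self, speech, hand, body):
        fused = torch.cat(
            [self.speech(speech), self.hand(hand), self.body(body)], dim=-1
        )
        return self.fusion(fused)  # logits over command classes

# Usage with random stand-in features (batch of 2):
model = MultimodalFusion(speech_dim=40, hand_dim=63, body_dim=75)
logits = model(torch.randn(2, 40), torch.randn(2, 63), torch.randn(2, 75))
print(logits.shape)  # torch.Size([2, 8])

Fusing at the feature level, rather than voting over independent unimodal predictions, is what allows the representation sharing the abstract credits for the robustness gain: the fusion layers can learn cross-modal correlations that no single modality exposes.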

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018. Vol. 6, p. 74762-74771.
Keywords [en]
Deep learning, human-robot collaboration, multimodal fusion, intelligent manufacturing systems
National Category
Production Engineering, Human Work Science and Ergonomics
Identifiers
URN: urn:nbn:se:kth:diva-241236
DOI: 10.1109/ACCESS.2018.2884793
ISI: 000454277700001
Scopus ID: 2-s2.0-85058114368
OAI: oai:DiVA.org:kth-241236
DiVA, id: diva2:1279508
Note

QC 20190116

Available from: 2019-01-16. Created: 2019-01-16. Last updated: 2019-01-16. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text (DOI); Scopus

Authority records

Liu, Hongyi; Fang, Tongtong; Zhou, Tianyu; Wang, Lihui
