kth.se Publications (KTH)
Towards Mutual-Cognitive Human-Robot Collaboration: A Zero-Shot Visual Reasoning Method
The Hong Kong Polytechnic University, Department of Industrial and Systems Engineering, Hong Kong SAR, China.
The Hong Kong Polytechnic University, Department of Industrial and Systems Engineering, Hong Kong SAR, China.
The Hong Kong Polytechnic University, Department of Industrial and Systems Engineering, Hong Kong SAR, China.
KTH, School of Industrial Engineering and Management (ITM), Production engineering. ORCID iD: 0000-0001-9694-0483
2023 (English). In: 2023 IEEE 19th International Conference on Automation Science and Engineering, CASE 2023. Institute of Electrical and Electronics Engineers (IEEE), 2023. Conference paper, Published paper (Refereed)
Abstract [en]

Human-Robot Collaboration (HRC) is showing the potential of widespread application in today's human-centric smart manufacturing, as prescribed by Industry 5.0. To enable safe and efficient collaboration, numerous visual perception methods have been explored, which allow the robot to perceive its surroundings and plan collision-free, reactive manipulations. However, current visual perception approaches can only convey basic information between robots and humans, falling short of semantic knowledge. With this limitation, HRC cannot guarantee smooth operation when confronted with similar yet unseen situations in real-world applications. Therefore, a mutual-cognitive HRC architecture is proposed to plan human and robot operations based on learned knowledge representations of onsite situations and task structures. A zero-shot visual reasoning approach is introduced to derive suitable teamwork strategies in the mutual-cognitive HRC from perceived results, including human actions and detected objects. By incorporating perception components into a knowledge graph, it assigns adaptive robot path planning and knowledge support for humans, even when dealing with a new but similar HRC task. Lastly, the significance of the proposed mutual-cognitive HRC system is demonstrated through its evaluation in collaborative disassembly tasks of aging electric vehicle batteries.
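The abstract's core idea — feeding perception results into a knowledge graph so that a new but similar situation can still be resolved — can be illustrated with a generic sketch. This is not the paper's implementation; the triple store, entity names, and the class-level lookup below are all hypothetical, chosen only to show how a zero-shot-style query over a knowledge graph might work.

```python
# Illustrative sketch only (not the authors' method): a minimal triple-store
# knowledge graph that maps perceived objects to robot support actions via
# their class, so an unseen object of a known class still gets a strategy.
from collections import defaultdict

class MiniKG:
    """Store (subject, predicate, object) triples and answer simple queries."""
    def __init__(self):
        self.index = defaultdict(set)  # (subject, predicate) -> set of objects

    def add(self, s, p, o):
        self.index[(s, p)].add(o)

    def query(self, s, p):
        return self.index.get((s, p), set())

kg = MiniKG()
# Hypothetical task-structure knowledge from a known disassembly task.
kg.add("battery_module", "is_a", "heavy_part")
kg.add("heavy_part", "robot_support", "hold_fixture")

# "Zero-shot" style lookup: a newly perceived object is only known by its
# class, yet the class-level triple still yields a robot support action.
kg.add("new_battery_module", "is_a", "heavy_part")
cls = next(iter(kg.query("new_battery_module", "is_a")))
support = kg.query(cls, "robot_support")
print(support)  # → {'hold_fixture'}
```

The design point is that knowledge attaches to classes rather than instances, so perception only needs to classify the new object for the graph to return an applicable teamwork strategy.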

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023.
National Category
Production Engineering, Human Work Science and Ergonomics
Identifiers
URN: urn:nbn:se:kth:diva-350285
DOI: 10.1109/CASE56687.2023.10260599
Scopus ID: 2-s2.0-85174389409
OAI: oai:DiVA.org:kth-350285
DiVA, id: diva2:1883711
Conference
19th IEEE International Conference on Automation Science and Engineering, CASE 2023, Auckland, New Zealand, Aug 26–30, 2023
Note

Part of ISBN 9798350320695

QC 20240711

Available from: 2024-07-11. Created: 2024-07-11. Last updated: 2024-07-11. Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Wang, Xi Vincent; Wang, Lihui
