Publications (10 of 45)
Yi, S., Liu, S., Lin, X., Yan, S., Wang, X. V. & Wang, L. (2025). A data-efficient and general-purpose hand–eye calibration method for robotic systems using next best view. Advanced Engineering Informatics, 66, Article ID 103432.
2025 (English). In: Advanced Engineering Informatics, ISSN 1474-0346, E-ISSN 1873-5320, Vol. 66, article id 103432. Article in journal (Refereed). Published.
Abstract [en]

Calibration between robots and cameras is critical in automated robot vision systems. However, conventional, manually conducted image-based calibration techniques are often limited in accuracy and adapt poorly to dynamic or unstructured environments. These approaches are difficult to calibrate and deploy automatically, and they rest on rigid assumptions that degrade their performance. To address these limitations, this study proposes a data-efficient, vision-driven approach for fast, accurate, and robust hand–eye camera calibration, which aims to maximise the efficiency with which robots obtain hand–eye calibration images without compromising accuracy. By analysing the previously captured images, minimisation of the residual Jacobian matrix is used to predict the next optimal pose, the next best view (NBV), for robot calibration. A method to adjust camera poses in dynamic environments is also proposed to achieve efficient and robust hand–eye calibration. The approach requires fewer images, reduces dependence on manual expertise, and ensures repeatability. The proposed method is tested in experiments with actual industrial robots. The results demonstrate that the NBV strategy reduces rotational error by 8.8%, translational error by 26.4%, and the number of sampling frames by 25% compared to manual sampling. The experimental results show that the average prediction time per frame is 3.26 seconds.
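
The abstract's selection criterion lends itself to a compact illustration. Below is a minimal sketch, assuming a toy residual model: candidate robot poses are scored by how well they condition the stacked residual Jacobian, and the best-conditioned candidate is taken as the next view. All names and the residual itself are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

def numeric_jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian of the residual vector f at x."""
    r0 = f(x)
    J = np.zeros((r0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (f(x + dx) - r0) / eps
    return J

def pose_residual(pose, x):
    """Toy residual linking a robot pose (6-vector) to the hand-eye
    parameters x; stands in for the reprojection error of a calibration
    target observed from that pose."""
    return np.array([np.sin(pose[:3] @ x[:3]) - np.cos(pose[3:] @ x[3:])])

def next_best_view(captured, candidates, x):
    """Pick the candidate whose addition best conditions the stacked
    residual Jacobian (lower condition number = better-posed problem)."""
    def cond_with(extra):
        poses = captured + [extra]
        f = lambda x_: np.concatenate([pose_residual(p, x_) for p in poses])
        return np.linalg.cond(numeric_jacobian(f, x))
    return min(candidates, key=cond_with)

rng = np.random.default_rng(0)
captured = [rng.normal(size=6) for _ in range(8)]     # poses already imaged
candidates = [rng.normal(size=6) for _ in range(20)]  # reachable next poses
x_est = rng.normal(size=6)                            # current parameter estimate
print("next pose:", next_best_view(captured, candidates, x_est))
```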

Place, publisher, year, edition, pages
Elsevier BV, 2025
Keywords
Hand–eye calibration, Non-linear optimisation, Robot control, Robot vision system
National Category
Robotics and automation; Computer graphics and computer vision; Control Engineering
Identifiers
urn:nbn:se:kth:diva-364151 (URN), 10.1016/j.aei.2025.103432 (DOI), 001504534600004 (ISI), 2-s2.0-105005832045 (Scopus ID)
Note

QC 20250605

Available from: 2025-06-04. Created: 2025-06-04. Last updated: 2025-08-15. Bibliographically approved.
Liu, S., Guo, D., Liu, Z., Wang, T., Qin, Q., Wang, X. V. & Wang, L. (2025). A Digital Twin-Enabled Approach to Reliable Human–robot Collaborative Assembly. In: Human Centric Smart Manufacturing Towards Industry 5.0 (pp. 281-304). Springer Nature.
2025 (English). In: Human Centric Smart Manufacturing Towards Industry 5.0, Springer Nature, 2025, p. 281-304. Chapter in book (Other academic).
Abstract [en]

The conventional automation approach has shown bottlenecks in component assembly. What could be automated has been automated in some high-tech industrial production, leaving the remaining manual work to be performed by humans. To achieve ergonomic working environments and better productivity, human–robot collaboration has been adopted, combining the strength, accuracy, and repeatability of robots with the adaptability, high-level cognition, and flexibility of humans. A reliable human–robot collaborative setting should be supported by dynamically updated and precise models. For this purpose, a digital twin can realise the digital representation of physical collaborative settings through simulation modelling and data synchronisation, but it is limited by communication delays and constraints. This chapter develops a digital twin-enabled approach to human–robot collaborative assembly. Within this approach, sensor-driven 3D modelling of the physical devices of interest is developed to realise the physical-to-digital transformation of the human–robot workcell, and a Wise-ShopFloor-based platform enabled by sensor data is used to develop a digital twin model of the physical human–robot workcell. Then, function blocks with embedded algorithms are used for assembly planning, decision making, and robot control, and a time-ahead execution and planning approach is developed for reliable human–robot collaborative assembly. Finally, the performance of the developed system is demonstrated by a case study of a partial car engine assembly.
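
As a rough illustration of the function-block idea mentioned above, the sketch below wraps an embedded algorithm behind event inputs and outputs so that planning and control logic stays encapsulated. The block names and toy payloads are assumptions for illustration, not the chapter's actual platform, and the time-ahead scheduling is only noted in a comment.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class FunctionBlock:
    """Event-driven block wrapping an embedded algorithm, loosely in the
    spirit of IEC 61499 function blocks."""
    name: str
    algorithm: Callable[[dict], dict]                # embedded algorithm
    listeners: List["FunctionBlock"] = field(default_factory=list)

    def fire(self, data: dict) -> dict:
        out = self.algorithm(data)                   # run on event input
        for nxt in self.listeners:                   # forward event output
            nxt.fire(out)
        return out

# Chain: plan -> execute. A real system would plan step i+1 while step i
# executes (the chapter's time-ahead idea); this chain only shows the
# event-driven encapsulation.
plan = FunctionBlock("plan", lambda d: {**d, "path": f"approach-{d['part']}"})
execute = FunctionBlock("execute",
                        lambda d: {**d, "done": print("executing", d["path"]) is None})
plan.listeners.append(execute)
plan.fire({"part": "engine-cover"})
```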

Place, publisher, year, edition, pages
Springer Nature, 2025
Keywords
Assembly, Digital twin, Robot
National Category
Production Engineering, Human Work Science and Ergonomics
Identifiers
urn:nbn:se:kth:diva-368723 (URN), 10.1007/978-3-031-82170-7_12 (DOI), 2-s2.0-105012012683 (Scopus ID)
Note

Part of ISBN 9783031821691, 9783031821707

QC 20250820

Available from: 2025-08-20. Created: 2025-08-20. Last updated: 2025-08-20. Bibliographically approved.
Liu, Z., Liu, S., Wang, T., Wang, L. & Wang, X. V. (2025). Establishment and Synchronisation of Digital Twins for Multi-robot Systems in Manufacturing. In: 58th CIRP Conference on Manufacturing Systems, CMS 2025. Paper presented at the 58th CIRP Conference on Manufacturing Systems, CMS 2025, Twente, the Netherlands, April 13-16, 2025 (pp. 419-424). Elsevier BV.
2025 (English). In: 58th CIRP Conference on Manufacturing Systems, CMS 2025, Elsevier BV, 2025, p. 419-424. Conference paper, Published paper (Refereed).
Abstract [en]

In Industry 5.0, digital twins have emerged as powerful tools for revolutionising the operation and control of industrial robots. However, a critical challenge is how to effectively synchronise the establishment and ongoing operation of physical devices with their virtual counterparts to ensure seamless performance. To address this challenge, this paper introduces a state machine-driven method to orchestrate hardware interface establishment and synchronisation processes for multi-robot systems in manufacturing. By leveraging state machines to model the lifecycle of hardware interfaces and their corresponding controllers, a systematic solution is provided for managing transitions and real-time synchronisation across multiple industrial robots. This not only enhances initialisation efficiency but also ensures consistent system operation. The proposed method is validated through detailed case studies that demonstrate visible improvements in the deployment of manufacturing systems containing multiple industrial robots from different vendors with different protocol interfaces. This work contributes to constructing digital twins that can dynamically adapt to evolving industrial environments.
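
A minimal sketch of the lifecycle idea described above: a state machine governs each robot's hardware interface, and bring-up is synchronised across vendors by driving every interface through the same transitions. The states, events, and vendor names below are assumptions for illustration; the paper's actual lifecycle model may differ.

```python
from enum import Enum, auto

class IfaceState(Enum):
    UNCONFIGURED = auto()
    INACTIVE = auto()
    ACTIVE = auto()
    ERROR = auto()

# Legal (state, event) -> next-state transitions for one hardware interface.
TRANSITIONS = {
    (IfaceState.UNCONFIGURED, "configure"): IfaceState.INACTIVE,
    (IfaceState.INACTIVE, "activate"): IfaceState.ACTIVE,
    (IfaceState.ACTIVE, "deactivate"): IfaceState.INACTIVE,
    (IfaceState.ACTIVE, "fault"): IfaceState.ERROR,
    (IfaceState.ERROR, "reset"): IfaceState.UNCONFIGURED,
}

class RobotInterface:
    def __init__(self, vendor: str):
        self.vendor = vendor
        self.state = IfaceState.UNCONFIGURED

    def dispatch(self, event: str) -> IfaceState:
        key = (self.state, event)
        if key not in TRANSITIONS:
            raise RuntimeError(f"{self.vendor}: illegal '{event}' in {self.state.name}")
        self.state = TRANSITIONS[key]
        return self.state

# Synchronised bring-up: every robot must reach ACTIVE before the twin
# starts mirroring the cell, regardless of vendor or protocol.
cell = [RobotInterface("ABB"), RobotInterface("KUKA")]
for event in ("configure", "activate"):
    for robot in cell:
        robot.dispatch(event)
assert all(r.state is IfaceState.ACTIVE for r in cell)
```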

Place, publisher, year, edition, pages
Elsevier BV, 2025
Keywords
Digital Twins, Multi-robot Systems in Manufacturing, Synchronisation, System Establishment
National Category
Computer Systems; Robotics and automation; Computer Sciences
Identifiers
urn:nbn:se:kth:diva-368826 (URN), 10.1016/j.procir.2025.02.152 (DOI), 2-s2.0-105009410507 (Scopus ID)
Conference
58th CIRP Conference on Manufacturing Systems, CMS 2025, Twente, the Netherlands, April 13-16, 2025
Note

QC 20250902

Available from: 2025-09-02. Created: 2025-09-02. Last updated: 2025-09-02. Bibliographically approved.
Zhao, S., Liu, S., Jiang, Y., Zhao, B., Lv, Y., Zhang, J., . . . Zhong, R. Y. (2025). Industrial Foundation Models (IFMs) for intelligent manufacturing: A systematic review. Journal of Manufacturing Systems, 82, 420-448.
2025 (English). In: Journal of Manufacturing Systems, ISSN 0278-6125, E-ISSN 1878-6642, Vol. 82, p. 420-448. Article, review/survey (Refereed). Published.
Abstract [en]

The remarkable success of Large Foundation Models (LFMs) has demonstrated their tremendous potential for manufacturing and sparked significant interest in the exploration of Industrial Foundation Models (IFMs). This study provides a comprehensive review of the current state of IFMs and their applications in intelligent manufacturing. It conducts an in-depth analysis from three perspectives: the data level, the model level, and the application level. The definition and framework of IFMs are discussed with a comparison to LFMs across these three perspectives. In addition, this paper provides a brief overview of advancements in IFM development across different countries, institutions, and regions. It explores current applications of IFMs, including Industrial Domain Models and Industrial Task Models, which are specifically designed for various industrial domains and tasks. Furthermore, key technologies critical to the training of IFMs are explored, such as data pre-processing, model fine-tuning, prompt engineering, and retrieval-augmented generation. This paper also highlights the essential capabilities of IFMs and their typical applications throughout the manufacturing lifecycle. Finally, it discusses current challenges and outlines potential future research directions. This study aims to inspire new ideas for advancing IFMs and accelerating the evolution of intelligent manufacturing.
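
Among the key technologies listed, retrieval-augmented generation is simple to illustrate. The toy sketch below retrieves the manufacturing document most similar to a query and prepends it to the model prompt; the hashing bag-of-words embedder and the miniature corpus are deliberate stand-ins for a real embedding model and document store.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy hashing bag-of-words embedding (stand-in for a learned model)."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-12)

corpus = [
    "spindle vibration thresholds for milling steel",
    "weld seam inspection criteria for aluminium joints",
    "robot cell safety interlock configuration",
]
query = "acceptable vibration limits when milling"

# Retrieve the top-1 document by cosine similarity and build the prompt.
scores = [float(embed(doc) @ embed(query)) for doc in corpus]
context = corpus[int(np.argmax(scores))]
prompt = f"Context: {context}\nQuestion: {query}\nAnswer:"
print(prompt)  # this prompt would then be fed to the foundation model
```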

Place, publisher, year, edition, pages
Elsevier BV, 2025
Keywords
Industrial Foundation Models (IFMs), Intelligent manufacturing, Large Foundation Models (LFMs)
National Category
Production Engineering, Human Work Science and Ergonomics
Identifiers
urn:nbn:se:kth:diva-368893 (URN), 10.1016/j.jmsy.2025.06.011 (DOI), 2-s2.0-105009886814 (Scopus ID)
Note

QC 20250822

Available from: 2025-08-22. Created: 2025-08-22. Last updated: 2025-08-22. Bibliographically approved.
Wang, G., Zhang, C., Liu, S., Zhao, Y., Zhang, Y. & Wang, L. (2025). Multi-robot collaborative manufacturing driven by digital twins: Advancements, challenges, and future directions. Journal of Manufacturing Systems, 82, 333-361.
2025 (English). In: Journal of Manufacturing Systems, ISSN 0278-6125, E-ISSN 1878-6642, Vol. 82, p. 333-361. Article, review/survey (Refereed). Published.
Abstract [en]

Multi-robot systems envisioned for future factories will advance the capability to handle complex tasks and realise optimal robotic operations. However, existing multi-robot systems face challenges such as integration complexity, difficult coordination and control, and low scalability and flexibility, and thus are far from realising adaptive and efficient multi-robot collaborative manufacturing (MRCM). Digital twin technology improves visualisation, consistency, and spatial–temporal collaboration in MRCM through real-time interaction and iterative optimisation across physical and virtual spaces. Despite these improvements, barriers such as underdeveloped modelling capabilities, indeterminate collaborative strategies, and limited applicability impede the widespread adoption of MRCM. In response to these needs, this study provides a comprehensive review of the foundational concepts, systematic architecture, and enabling technologies of digital twin-driven MRCM, serving as a prospective vision for future work in collaborative intelligent manufacturing. With the development of sensors and computational capabilities, robot intelligence is evolving towards multi-robot collaboration, including perceptual, cognitive, and behavioural collaboration. Digital twins play a critical supporting role in multi-robot collaboration, and the architecture, methodologies, and applications are elaborated across diverse stages of MRCM processes. This paper also identifies current challenges and future research directions. It encourages academic and industrial stakeholders to integrate state-of-the-art AI technologies more thoroughly into multi-robot digital twin systems for enhanced efficiency and reliability in production.

Place, publisher, year, edition, pages
Elsevier BV, 2025
Keywords
Collaborative manufacturing, Digital twin, Multi-robot system, Robot
National Category
Production Engineering, Human Work Science and Ergonomics; Robotics and automation; Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-368690 (URN), 10.1016/j.jmsy.2025.06.014 (DOI), 001522821800001 (ISI), 2-s2.0-105008790890 (Scopus ID)
Note

QC 20250821

Available from: 2025-08-21. Created: 2025-08-21. Last updated: 2025-10-06. Bibliographically approved.
Zhang, C., Zhang, Y., Liu, S. & Wang, L. (2025). Transfer learning and augmented data-driven parameter prediction for robotic welding. Robotics and Computer-Integrated Manufacturing, 95, Article ID 102992.
2025 (English). In: Robotics and Computer-Integrated Manufacturing, ISSN 0736-5845, E-ISSN 1879-2537, Vol. 95, article id 102992. Article in journal (Refereed). Published.
Abstract [en]

Robotic welding envisioned for the factories of the future will support highly demanding and customised tasks with overall higher productivity and quality. Within this context, robotic welding parameter prediction is essential for maintaining high standards of quality, efficiency, safety, and cost-effectiveness in smart manufacturing. However, the acquisition of welding process parameters is limited by process libraries and small sample sizes, given complex welding environments, and it also requires extensive and costly experimentation. To address these issues, this study proposes a transfer learning and augmented data-driven approach for high-accuracy prediction of robotic welding parameters. Firstly, a data space transfer method is developed to construct a domain adaptation mapping matrix, focusing on small-sample welding process parameters, and a data augmentation method is adopted to transfer welding process parameters with augmented sample data. Then, a DST-Multi-XGBoost model is developed to establish a mapping relationship between welding task features and welding process parameters. The constructed model can consider the relationships among the outputs, which reduces the complexity of the model and the number of parameters. Even with a small initial sample size, the model can use augmented data to learn complex coupling relationships and accurately predict welding process parameters. Finally, the effectiveness of the developed approach is experimentally validated by a case study of robotic welding.
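
To make the prediction stage concrete, here is a minimal sketch that maps welding task features to several coupled process parameters with one boosted regressor per output. scikit-learn's gradient boosting stands in for XGBoost, and the synthetic, jitter-augmented data is a crude stand-in for the paper's data-space-transfer augmentation; feature and parameter names are illustrative.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(40, 3))                       # task features: thickness, gap, angle
Y = np.column_stack([                               # coupled process parameters
    180 + 60 * X[:, 0] + 5 * rng.normal(size=40),       # current (A)
    20 + 4 * X[:, 0] * X[:, 1] + rng.normal(size=40),   # voltage (V)
    6 - 3 * X[:, 2] + 0.2 * rng.normal(size=40),        # travel speed (mm/s)
])

# Simple augmentation: jitter the few "real" samples to enlarge the set.
X_aug = np.vstack([X, X + 0.01 * rng.normal(size=X.shape)])
Y_aug = np.vstack([Y, Y])

# One boosted model per output parameter.
model = MultiOutputRegressor(GradientBoostingRegressor())
model.fit(X_aug, Y_aug)
print(model.predict([[0.5, 0.3, 0.2]]))             # predicted [current, voltage, speed]
```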

Place, publisher, year, edition, pages
Elsevier BV, 2025
Keywords
Augmented data, Process parameter prediction, Robotic welding, Transfer learning
National Category
Manufacturing, Surface and Joining Technology; Robotics and automation; Production Engineering, Human Work Science and Ergonomics
Identifiers
urn:nbn:se:kth:diva-361202 (URN), 10.1016/j.rcim.2025.102992 (DOI), 2-s2.0-85219493563 (Scopus ID)
Note

QC 20250313

Available from: 2025-03-12. Created: 2025-03-12. Last updated: 2025-03-13. Bibliographically approved.
Liu, S., Wang, L. & Gao, R. X. (2024). Cognitive neuroscience and robotics: Advancements and future research directions. Robotics and Computer-Integrated Manufacturing, 85, Article ID 102610.
2024 (English). In: Robotics and Computer-Integrated Manufacturing, ISSN 0736-5845, E-ISSN 1879-2537, Vol. 85, article id 102610. Article, review/survey (Refereed). Published.
Abstract [en]

In recent years, brain-based technologies that capitalise on human abilities to facilitate human–system/robot interactions have been actively explored, especially in brain robotics. Brain–computer interfaces, as applications of this conception, have set a path to convert neural activities recorded by sensors from the human scalp via electroencephalography into valid commands for robot control and task execution. Thanks to the advancement of sensor technologies, non-invasive and invasive sensor headsets have been designed and developed to achieve stable recording of brainwave signals. However, robust and accurate extraction and interpretation of brain signals in brain robotics are critical to reliable task-oriented and opportunistic applications such as brainwave-controlled robotic interactions. In response to this need, pervasive technologies and advanced analytical approaches to translating and merging critical brain functions, behaviours, tasks, and environmental information have been a focus in brain-controlled robotic applications. These methods are composed of signal processing, feature extraction, representation of neural activities, command conversion and robot control. Artificial intelligence algorithms, especially deep learning, are used for the classification, recognition, and identification of patterns and intent underlying brainwaves as a form of electroencephalography. Within the context, this paper provides a comprehensive review of the past and the current status at the intersection of robotics, neuroscience, and artificial intelligence and highlights future research directions.
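
The processing chain the review describes (signal processing, feature extraction, classification, command conversion) can be sketched in a few lines. The example below uses synthetic signals, band-power features, and a linear classifier as stand-ins for real EEG recordings and the deep models surveyed; the sampling rate, band, and command mapping are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # sampling rate (Hz)

def bandpower_features(trials, lo=8.0, hi=30.0):
    """Mean log band power per channel in the mu/beta band (8-30 Hz)."""
    b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=-1)
    return np.log(np.mean(filtered ** 2, axis=-1))

rng = np.random.default_rng(0)
# 60 trials x 8 channels x 2 s of synthetic "EEG"; class 1 gets an extra
# 10 Hz rhythm on two channels, mimicking a motor-imagery signature.
X_raw = rng.normal(size=(60, 8, 2 * FS))
y = rng.integers(0, 2, size=60)
t = np.arange(2 * FS) / FS
X_raw[y == 1, :2] += 0.8 * np.sin(2 * np.pi * 10 * t)

# Classify band-power features and convert the label to a robot command.
clf = LinearDiscriminantAnalysis().fit(bandpower_features(X_raw), y)
commands = {0: "stop", 1: "move"}
print(commands[int(clf.predict(bandpower_features(X_raw[:1]))[0])])
```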

Place, publisher, year, edition, pages
Elsevier BV, 2024
Keywords
Brain robotics, Brainwave/electroencephalography, Brain–computer interface, Deep learning, Robot control, Signal processing
National Category
Production Engineering, Human Work Science and Ergonomics
Identifiers
urn:nbn:se:kth:diva-333952 (URN), 10.1016/j.rcim.2023.102610 (DOI), 001049545100001 (ISI), 2-s2.0-85165534271 (Scopus ID)
Note

QC 20230818

Available from: 2023-08-18. Created: 2023-08-18. Last updated: 2023-09-01. Bibliographically approved.
Zhang, J., Liu, S., Wang, L. & Gao, R. (2024). Efficient data management for intelligent manufacturing. In: Manufacturing from Industry 4.0 to Industry 5.0: Advances and Applications (pp. 289-312). Elsevier BV.
2024 (English). In: Manufacturing from Industry 4.0 to Industry 5.0: Advances and Applications, Elsevier BV, 2024, p. 289-312. Chapter in book (Other academic).
Abstract [en]

With a focus on human-centricity in rapidly evolving, complex production environments, Industry 5.0 further defines intelligent manufacturing that aims to surpass the current state of the art by enhancing production throughput and reliability through data analytics. While algorithmic advances have brought new possibilities, the challenge of data quality hinders their successful implementation. Over the past years, research on data curation has attracted increasing attention as a way to ensure high-quality data for meaningful data analytics. This chapter provides an overview of several key techniques in data curation, highlighting breakthroughs in deep learning-based data denoising, annotation, and balancing. These advancements have proven effective in extracting valuable information from noisy, unannotated, and imbalanced data and in improving human comprehension to support the next generation of intelligent manufacturing.
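
Of the curation steps named above, balancing is the simplest to illustrate. The sketch below applies plain random oversampling so downstream analytics do not overfit the majority class; it is a crude stand-in for the learned balancing methods the chapter surveys, and the data is synthetic.

```python
import numpy as np

def oversample(X: np.ndarray, y: np.ndarray, seed: int = 0):
    """Duplicate minority-class samples until every class matches the
    majority-class count."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = []
    for c, n in zip(classes, counts):
        members = np.flatnonzero(y == c)
        idx.append(rng.choice(members, size=target, replace=True)
                   if n < target else members)
    idx = np.concatenate(idx)
    return X[idx], y[idx]

X = np.arange(20).reshape(10, 2)
y = np.array([0] * 8 + [1] * 2)         # 8:2 imbalance, e.g. normal vs fault
Xb, yb = oversample(X, y)
print(np.bincount(yb))                   # -> [8 8]
```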

Place, publisher, year, edition, pages
Elsevier BV, 2024
Keywords
data curation, data quality, deep learning, human-centric, Industry 5.0
National Category
Production Engineering, Human Work Science and Ergonomics; Computer Sciences; Other Engineering and Technologies; Computer Systems
Identifiers
urn:nbn:se:kth:diva-353587 (URN), 10.1016/B978-0-443-13924-6.00010-7 (DOI), 2-s2.0-85202905003 (Scopus ID)
Note

Part of ISBN 9780443139246, 9780443139239

QC 20240923

Available from: 2024-09-19. Created: 2024-09-19. Last updated: 2025-02-18. Bibliographically approved.
Liu, S., Zhang, J., Yi, S., Gao, R., Mourtzis, D. & Wang, L. (2024). Human-centric systems in smart manufacturing. In: Manufacturing from Industry 4.0 to Industry 5.0: Advances and Applications (pp. 181-205). Elsevier BV.
2024 (English). In: Manufacturing from Industry 4.0 to Industry 5.0: Advances and Applications, Elsevier BV, 2024, p. 181-205. Chapter in book (Other academic).
Abstract [en]

Within the context of Industry 5.0, the concept of human centricity shapes a paradigm shift from technology-driven progress to a human-centric approach for smart manufacturing. Manufacturing systems envisioned for the factory of the future will promote human-centric approaches for an ergonomic working environment and improved productivity. For this purpose, this chapter investigates human-centric manufacturing systems within smart manufacturing from the perspective of industrial practices of the human-centric concept. First, confusion in the terminology of the human-centric concept is clarified, and human roles in human-centric manufacturing systems are then investigated. Typical practices of human-centric systems in manufacturing, including human-centric cyber-physical systems, human–robot collaborative systems, and human-centric assembly systems, are introduced.

Place, publisher, year, edition, pages
Elsevier BV, 2024
Keywords
automation engineering, control systems, information systems, manufacturing engineering, Robotics
National Category
Production Engineering, Human Work Science and Ergonomics
Identifiers
urn:nbn:se:kth:diva-353591 (URN), 10.1016/B978-0-443-13924-6.00006-5 (DOI), 2-s2.0-85202889796 (Scopus ID)
Note

Part of ISBN: 9780443139246, 9780443139239

QC 20240926

Available from: 2024-09-19. Created: 2024-09-19. Last updated: 2024-12-03. Bibliographically approved.
Gao, F., Xia, L., Zhang, J., Liu, S., Wang, L. & Gao, R. X. (2024). Integrating Large Language Model for Natural Language-Based Instruction toward Robust Human-Robot Collaboration. Paper presented at the 18th IFAC Workshop on Time Delay Systems, TDS 2024, Udine, Italy, October 2-5, 2023 (pp. 313-318). Elsevier BV.
2024 (English). Conference paper, Published paper (Refereed).
Abstract [en]

Human-Robot Collaboration (HRC) aims to create environments where robots can understand workspace dynamics and actively assist humans in operations, with human intention recognition being fundamental to efficient and safe task fulfillment. Language-based control and communication are a natural and convenient way to convey human intentions. However, traditional language models require instructions to be articulated following a rigid, predefined syntax, which can be unnatural, inefficient, and prone to errors. This paper investigates the reasoning abilities that have emerged from the recent advancement of Large Language Models (LLMs) to overcome these limitations, allowing free-form human instructions to enhance human-robot communication. For this purpose, a generic GPT-3.5 model has been fine-tuned to interpret and translate varied human instructions into essential attributes, such as task relevancy and the tools and/or parts required for the task. These attributes are then fused with the perceived ongoing robot action to generate a sequence of relevant actions. The developed technique is evaluated in a case study where robots initially misinterpret human actions and pick up the wrong tools and parts for assembly. It is shown that the fine-tuned LLM can effectively identify corrective actions across a diverse range of instructional human inputs, thereby enhancing the robustness of human-robot collaborative assembly for smart manufacturing.
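
For illustration, the interpretation step could be prototyped by prompting a GPT-3.5-class model to emit the structured attributes named above. The attribute schema, prompt wording, and use of the off-the-shelf chat API are assumptions for this sketch; the paper fine-tunes a generic GPT-3.5 model rather than prompting one, and a real system would validate the model's output before acting on it.

```python
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM = (
    "Extract assembly attributes from the worker instruction. Reply only "
    'with JSON: {"task_relevant": bool, "tools": [...], "parts": [...]}'
)

def parse_instruction(text: str) -> dict:
    """Turn a free-form instruction into structured attributes that can be
    fused with the perceived ongoing robot action downstream."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative; the paper uses its own fine-tune
        messages=[{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": text}],
    )
    return json.loads(resp.choices[0].message.content)

attrs = parse_instruction("No, not that one - grab the torque wrench "
                          "and the M8 bolts for the engine cover.")
print(attrs)
```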

Place, publisher, year, edition, pages
Elsevier BV, 2024
Keywords
Error correction, Human-robot collaboration, Large language model, Natural language processing
National Category
Production Engineering, Human Work Science and Ergonomics; Robotics and automation
Identifiers
urn:nbn:se:kth:diva-358215 (URN), 10.1016/j.procir.2024.10.093 (DOI), 2-s2.0-85214970201 (Scopus ID)
Conference
18th IFAC Workshop on Time Delay Systems, TDS 2024, Udine, Italy, October 2-5, 2023
Note

QC 20250114

Available from: 2025-01-07. Created: 2025-01-07. Last updated: 2025-02-05. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-1909-0507
