kth.se Publications

Publications (5 of 5)
Zhu, X., Mårtensson, P., Hanson, L., Björkman, M. & Maki, A. (2025). Automated assembly quality inspection by deep learning with 2D and 3D synthetic CAD data. Journal of Intelligent Manufacturing, 36(4), 2567-2582, Article ID e222.
2025 (English). In: Journal of Intelligent Manufacturing, ISSN 0956-5515, E-ISSN 1572-8145, Vol. 36, no. 4, p. 2567-2582, article id e222. Article in journal (Refereed). Published
Abstract [en]

In the manufacturing industry, automatic quality inspections can lead to improved product quality and productivity. Deep learning-based computer vision technologies, with their superior performance in many applications, can be a possible solution for automatic quality inspections. However, collecting a large amount of annotated training data for deep learning is expensive and time-consuming, especially for processes involving various products and human activities such as assembly. To address this challenge, we propose a method for automated assembly quality inspection using synthetic data generated from computer-aided design (CAD) models. The method involves two steps: automatic data generation and model implementation. In the first step, we generate synthetic data in two formats: two-dimensional (2D) images and three-dimensional (3D) point clouds. In the second step, we apply different state-of-the-art deep learning approaches to the data for quality inspection, including unsupervised domain adaptation, i.e., a method of adapting models across different data distributions, and transfer learning, which transfers knowledge between related tasks. We evaluate the methods in a case study of pedal car front-wheel assembly quality inspection to identify the possible optimal approach for assembly quality inspection. Our results show that the method using Transfer Learning on 2D synthetic images achieves superior performance compared with others. Specifically, it attained 95% accuracy through fine-tuning with only five annotated real images per class. With promising results, our method may be suggested for other similar quality inspection use cases. By utilizing synthetic CAD data, our method reduces the need for manual data collection and annotation. Furthermore, our method performs well on test data with different backgrounds, making it suitable for different manufacturing environments.

Place, publisher, year, edition, pages
Springer Nature, 2025
Keywords
Assembly quality inspection, Computer vision, Point cloud, Synthetic data, Transfer learning, Unsupervised domain adaptation
National Category
Computer Sciences; Production Engineering, Human Work Science and Ergonomics
Identifiers
urn:nbn:se:kth:diva-363099 (URN)
10.1007/s10845-024-02375-6 (DOI)
001205028300001 ()
2-s2.0-105002924620 (Scopus ID)
Note

QC 20250506

Available from: 2025-05-06. Created: 2025-05-06. Last updated: 2025-05-19. Bibliographically approved
Zhu, X., Björkman, M., Maki, A., Hanson, L. & Mårtensson, P. (2023). Surface Defect Detection with Limited Training Data: A Case Study on Crown Wheel Surface Inspection. In: 56th CIRP International Conference on Manufacturing Systems, CIRP CMS 2023. Paper presented at 56th CIRP International Conference on Manufacturing Systems, CIRP CMS 2023, Cape Town, South Africa, Oct 24 2023 - Oct 26 2023 (pp. 1333-1338). Elsevier BV
2023 (English). In: 56th CIRP International Conference on Manufacturing Systems, CIRP CMS 2023, Elsevier BV, 2023, p. 1333-1338. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents an approach to automatic surface defect detection using a deep learning-based object detection method, particularly in challenging scenarios where defects are rare, i.e., with limited training data. We base our approach on the object detection model YOLOv8, preceded by three preprocessing steps: 1) filtering out irrelevant information, 2) enhancing the visibility of defects through brightness and contrast adjustment, and 3) increasing the diversity of the training data through data augmentation. We evaluated the method in an industrial case study of crown wheel surface inspection, detecting Unclean Gear and Deburring defects, with promising performance. With the combination of the three preprocessing steps, we improved the detection accuracy by 22.2% and 37.5%, respectively, for the two defect types. We believe the proposed approach is also adaptable to other industrial applications of surface defect detection, as the employed techniques, such as image segmentation, are available off the shelf.
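The three preprocessing steps named in the abstract can be sketched in plain NumPy. This is an illustrative sketch only: the crop region, percentile thresholds, and flip-based augmentations are assumptions, not the paper's parameters.

```python
# Hedged sketch of the three preprocessing steps: 1) crop away irrelevant
# image regions, 2) stretch contrast to make defects more visible,
# 3) augment the few available training images.
import numpy as np

def crop_roi(img, top, bottom, left, right):
    """Step 1: filter out irrelevant information by cropping to the ROI."""
    return img[top:bottom, left:right]

def stretch_contrast(img, lo=2, hi=98):
    """Step 2: percentile-based contrast stretching into [0, 1]."""
    p_lo, p_hi = np.percentile(img, [lo, hi])
    out = (img.astype(np.float32) - p_lo) / max(p_hi - p_lo, 1e-6)
    return np.clip(out, 0.0, 1.0)

def augment(img):
    """Step 3: increase training diversity with simple flips/rotations."""
    return [img, np.fliplr(img), np.flipud(img), np.rot90(img)]

img = np.random.default_rng(0).integers(0, 256, (480, 640), dtype=np.uint8)
roi = crop_roi(img, 100, 380, 120, 520)   # (280, 400) region of interest
enhanced = stretch_contrast(roi)
batch = augment(enhanced)                 # four variants per source image
```

Each step is cheap and model-agnostic, which is why such a pipeline can be bolted in front of an off-the-shelf detector like YOLOv8.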

Place, publisher, year, edition, pages
Elsevier BV, 2023
Keywords
Automatic Quality Inspection, Computer Vision, Deep Learning, Image Processing, Surface Defect Detection
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-343752 (URN)
10.1016/j.procir.2023.09.172 (DOI)
2-s2.0-85184602644 (Scopus ID)
Conference
56th CIRP International Conference on Manufacturing Systems, CIRP CMS 2023, Cape Town, South Africa, Oct 24 2023 - Oct 26 2023
Note

QC 20240222

Available from: 2024-02-22. Created: 2024-02-22. Last updated: 2025-02-07. Bibliographically approved
Zhu, X., Bilal, T., Mårtensson, P., Hanson, L., Björkman, M. & Maki, A. (2023). Towards sim-to-real industrial parts classification with synthetic dataset. In: Proceedings: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2023. Paper presented at 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2023, Vancouver, Canada, Jun 18 2023 - Jun 22 2023 (pp. 4454-4463). Institute of Electrical and Electronics Engineers (IEEE)
2023 (English). In: Proceedings: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2023, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 4454-4463. Conference paper, Published paper (Refereed)
Abstract [en]

This paper addresses how to effectively utilize synthetic data for training deep neural networks for industrial parts classification, in particular by taking into account the domain gap with respect to real-world images. To this end, we introduce a synthetic dataset that may serve as a preliminary testbed for the Sim-to-Real challenge; it contains 17 objects from six industrial use cases, including isolated and assembled parts. A few subsets of objects exhibit large similarities in shape and albedo, reflecting challenging cases of industrial parts. All sample images come with and without random backgrounds and post-processing, for evaluating the importance of domain randomization. We call it the Synthetic Industrial Parts dataset (SIP-17). We study the usefulness of SIP-17 by benchmarking the performance of five state-of-the-art deep network models, supervised and self-supervised, trained only on the synthetic data and tested on real data. From the results, we draw insights into the feasibility and challenges of using synthetic data for industrial parts classification and of developing larger-scale synthetic datasets. Our dataset and code are publicly available.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-337847 (URN)
10.1109/CVPRW59228.2023.00468 (DOI)
2-s2.0-85170821045 (Scopus ID)
Conference
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2023, Vancouver, Canada, Jun 18 2023 - Jun 22 2023
Note

Part of ISBN 9798350302493

QC 20231010

Available from: 2023-10-10. Created: 2023-10-10. Last updated: 2025-02-07. Bibliographically approved
Zhu, X., Maki, A. & Hanson, L. (2022). Unsupervised domain adaptive object detection for assembly quality inspection. In: Proceedings 15th CIRP Conference on Intelligent Computation in Manufacturing Engineering, ICME 2021. Paper presented at 15th CIRP Conference on Intelligent Computation in Manufacturing Engineering, ICME 2021, Naples, 14-16 July 2021 (pp. 477-482). Elsevier BV, 112
2022 (English). In: Proceedings 15th CIRP Conference on Intelligent Computation in Manufacturing Engineering, ICME 2021, Elsevier BV, 2022, Vol. 112, p. 477-482. Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

A challenge in applying deep learning-based computer vision technologies to assembly quality inspection lies in the diverse assembly approaches and the limited annotated training data. This paper describes a method for overcoming this challenge by training an unsupervised domain adaptive object detection model on annotated synthetic images generated from CAD models together with unannotated images captured by cameras. In a case study of pedal car front-wheel assembly, the model achieves promising results compared to other state-of-the-art object detection methods. Moreover, the method is efficient to implement in production as it does not require manually annotated data.
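The training setup described above (labeled synthetic images plus unlabeled real images) can be sketched with a DANN-style gradient-reversal layer, a common mechanism for unsupervised domain adaptation. The paper's actual detector architecture and losses are not specified here; the feature sizes, class count, and batches below are illustrative assumptions.

```python
# Hedged sketch: adversarial domain adaptation. A task head is trained on
# labeled synthetic features, while a domain discriminator (through a
# gradient-reversal layer) pushes the shared extractor toward features
# that look the same for synthetic and real inputs.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (scales by -lambda)
    gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

features = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # shared extractor
task_head = nn.Linear(64, 4)    # trained on labeled synthetic data only
domain_head = nn.Linear(64, 2)  # synthetic-vs-real discriminator

syn = torch.randn(8, 128)       # labeled synthetic features (illustrative)
real = torch.randn(8, 128)      # unlabeled real features (illustrative)
f_syn, f_real = features(syn), features(real)
task_loss = nn.functional.cross_entropy(task_head(f_syn),
                                        torch.randint(0, 4, (8,)))
dom_in = GradReverse.apply(torch.cat([f_syn, f_real]), 1.0)
dom_labels = torch.cat([torch.zeros(8), torch.ones(8)]).long()
dom_loss = nn.functional.cross_entropy(domain_head(dom_in), dom_labels)
(task_loss + dom_loss).backward()  # one joint adversarial training step
```

The key point is that the real images contribute only through the unsupervised domain loss, so no manual annotation of production images is needed.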

Place, publisher, year, edition, pages
Elsevier BV, 2022
Series
Procedia CIRP, ISSN 2212-8271
National Category
Computer Sciences; Control Engineering
Identifiers
urn:nbn:se:kth:diva-327337 (URN)
10.1016/j.procir.2022.09.038 (DOI)
2-s2.0-85142641837 (Scopus ID)
Conference
15th CIRP Conference on Intelligent Computation in Manufacturing Engineering, ICME 2021, Naples, 14-16 July 2021
Note

QC 20230525

Available from: 2023-05-24. Created: 2023-05-24. Last updated: 2023-05-25. Bibliographically approved
Zhu, X., Manamasa, H., Jiménez Sánchez, J. L., Maki, A. & Hanson, L. (2021). Automatic assembly quality inspection based on an unsupervised point cloud domain adaptation model. In: Procedia CIRP. Paper presented at 54th CIRP Conference on Manufacturing Systems, CMS 2021, 22 September 2021 through 24 September 2021 (pp. 1801-1806). Elsevier BV
2021 (English). In: Procedia CIRP, Elsevier BV, 2021, p. 1801-1806. Conference paper, Published paper (Refereed)
Abstract [en]

This paper proposes an end-to-end method for automatic assembly quality inspection based on a point cloud domain adaptation model. The method involves automatically generating labeled point clouds from various CAD models and training a model on those point clouds together with a limited number of unlabeled point clouds acquired by 3D cameras. The model can then classify newly captured point clouds from 3D cameras to execute assembly quality inspection with promising performance. The method has been evaluated in an industry case study of pedal car front-wheel assembly. By utilizing CAD data, the method is less time-consuming for implementation in production. 
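The first step above, automatically generating labeled point clouds from CAD models, can be sketched as area-weighted sampling of points from a triangle mesh. This is an illustrative NumPy sketch: the unit-square mesh and the class label stand in for real CAD models, and the paper's actual generation pipeline is not specified here.

```python
# Hedged sketch: sample a point cloud from a (CAD-derived) triangle mesh.
# The label comes for free from the CAD model the mesh was exported from.
import numpy as np

def sample_point_cloud(vertices, faces, n_points, rng):
    """Area-weighted uniform sampling of n_points from a mesh surface."""
    tri = vertices[faces]                  # (F, 3 vertices, 3 coords)
    a, b, c = tri[:, 0], tri[:, 1], tri[:, 2]
    areas = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
    idx = rng.choice(len(faces), n_points, p=areas / areas.sum())
    u, v = rng.random((2, n_points))       # barycentric coordinates
    over = u + v > 1                       # fold back into the triangle
    u[over], v[over] = 1 - u[over], 1 - v[over]
    return a[idx] + u[:, None] * (b - a)[idx] + v[:, None] * (c - a)[idx]

# Unit-square mesh (two triangles) as a stand-in for a CAD model.
verts = np.array([[0., 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
faces = np.array([[0, 1, 2], [0, 2, 3]])
cloud = sample_point_cloud(verts, faces, 1024, np.random.default_rng(0))
label = "front_wheel_correctly_assembled"  # hypothetical class name
```

Because both the geometry and its label are known from the CAD model, such clouds can be generated in any quantity without manual annotation, which is what makes the method inexpensive to deploy.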

Place, publisher, year, edition, pages
Elsevier BV, 2021
Keywords
Assembly quality inspection, Deep learning, Domain adaptation, Point cloud, 3D modeling, Cameras, Computer aided design, Inspection, 3D camera, Adaptation models, Assembly quality, Automatic assembly, End to end, Point-clouds, Quality inspection
National Category
Robotics and Automation; Didactics; Production Engineering, Human Work Science and Ergonomics
Identifiers
urn:nbn:se:kth:diva-317517 (URN)
10.1016/j.procir.2021.11.304 (DOI)
2-s2.0-85121588373 (Scopus ID)
Conference
54th CIRP Conference on Manufacturing Systems, CMS 2021, 22 September 2021 through 24 September 2021
Note

QC 20220913

Available from: 2022-09-13. Created: 2022-09-13. Last updated: 2025-02-05. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-4180-3809
