Publications (10 of 595)
Zhang, Z., Wang, Y., Zhang, Z., Wang, L., Huang, H. & Cao, Q. (2024). A residual reinforcement learning method for robotic assembly using visual and force information. Journal of Manufacturing Systems, 72, 245-262.
2024 (English). In: Journal of Manufacturing Systems, ISSN 0278-6125, E-ISSN 1878-6642, Vol. 72, pp. 245-262. Article in journal (Refereed). Published
Abstract [en]

Robotic autonomous assembly is critical in intelligent manufacturing and has long been a research hotspot. Most previous approaches rely on prior knowledge, such as geometric parameters and pose information of the assembled parts, which are hard to estimate in unstructured environments. This paper proposes a residual reinforcement learning (RL) policy for robotic assembly that combines visual and force information. The residual RL policy, which consists of a visual-based policy and a force-based policy, is trained and tested in an end-to-end manner. In the assembly procedure, the visual-based policy focuses on spatial search, while the force-based policy handles the interactive behaviors. The experimental results reveal the high sample efficiency of our approach, which generalizes across diverse assembly tasks involving variations in geometries, clearances, and configurations. The validation experiments are conducted both in simulation and on a real robot.
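The residual-policy idea described in the abstract (a force-based correction added on top of a visual-based base action) can be sketched as follows. This is an illustrative toy, not the authors' implementation; the observation keys and the two hand-written sub-policies are assumptions.

```python
# Minimal sketch of a residual policy: a force-based correction is added to
# the action proposed by a visual-based base policy. All names are invented.

def visual_policy(observation):
    # Hypothetical base policy: move the tool toward the visually located target.
    target = observation["target_xy"]
    current = observation["tool_xy"]
    return [t - c for t, c in zip(target, current)]

def force_residual_policy(observation, gain=0.1):
    # Hypothetical learned residual: back off along the sensed contact force.
    return [-gain * f for f in observation["force_xy"]]

def residual_policy(observation):
    # Final action = base action + learned residual, as in residual RL.
    base = visual_policy(observation)
    residual = force_residual_policy(observation)
    return [b + r for b, r in zip(base, residual)]

obs = {"target_xy": [1.0, 0.0], "tool_xy": [0.0, 0.0], "force_xy": [0.0, 5.0]}
print(residual_policy(obs))  # -> [1.0, -0.5]
```

The split mirrors the abstract: the visual term drives spatial search while the force term shapes the interactive (contact) behavior.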

Place, publisher, year, edition, pages
Elsevier BV, 2024
Keywords
Compliance control, Residual reinforcement learning, Robotic assembly, Visual and force perception
HSV category
Identifiers
urn:nbn:se:kth:diva-341697 (URN), 10.1016/j.jmsy.2023.11.008 (DOI), 001140126900001 (), 2-s2.0-85179753333 (Scopus ID)
Note

QC 20231229

Available from: 2023-12-29 Created: 2023-12-29 Last updated: 2024-02-01 Bibliographically approved
Li, X., Yue, C., Liu, X., Zhou, J. & Wang, L. (2024). ACWGAN-GP for milling tool breakage monitoring with imbalanced data. Robotics and Computer-Integrated Manufacturing, 85, Article ID 102624.
2024 (English). In: Robotics and Computer-Integrated Manufacturing, ISSN 0736-5845, E-ISSN 1879-2537, Vol. 85, article id 102624. Article in journal (Refereed). Published
Abstract [en]

Tool breakage monitoring (TBM) during milling operations is crucial for ensuring workpiece quality and minimizing economic losses. Under the premise of sufficient training data with a balanced distribution, TBM methods based on statistical analysis and artificial intelligence enable accurate recognition of tool breakage conditions. However, considering actual manufacturing safety, cutting tools usually work in normal wear conditions, and acquiring tool breakage signals is extremely difficult. The data imbalance problem seriously affects the recognition accuracy and robustness of the TBM model. This paper proposes a TBM method based on the auxiliary classifier Wasserstein generative adversarial network with gradient penalty (ACWGAN-GP) from the perspective of data generation. By introducing Wasserstein distance and gradient penalty terms into the loss function of ACGAN, ACWGAN-GP can generate multi-class fault samples while improving the network's stability during adversarial training. A sample filter based on multiple statistical indicators is designed to ensure the quality and diversity of the generated data. Qualified samples that pass the quality assessment are added to the original imbalanced dataset to improve the tool breakage classifier's performance. Artificially controlled face milling experiments for TBM are carried out on a five-axis CNC machine to verify the effectiveness of the proposed method. Experimental results reveal that the proposed method outperforms other popular imbalanced fault diagnosis methods in terms of data generation quality and TBM accuracy, and can meet the real-time requirements of TBM.
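The gradient-penalty term that stabilises WGAN-GP-style training can be illustrated in closed form for a toy linear critic. This is an assumption made here so no autodiff framework is needed; it is not the authors' network, only the penalty formula the abstract refers to.

```python
import math

# WGAN-GP penalizes the critic so its input-gradient norm stays near 1.
# For a toy linear critic D(x) = w.x + b the gradient w.r.t. x is w everywhere,
# so the penalty lam * (||grad||_2 - 1)^2 has a closed form.

def critic(x, w, b=0.0):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def interpolate(real, fake, eps):
    # WGAN-GP evaluates the penalty at random points between real and fake samples.
    return [eps * r + (1 - eps) * f for r, f in zip(real, fake)]

def gradient_penalty(w, lam=10.0):
    grad_norm = math.sqrt(sum(wi * wi for wi in w))
    return lam * (grad_norm - 1.0) ** 2

print(gradient_penalty([1.0, 0.0]))  # 0.0   (gradient norm already 1: no penalty)
print(gradient_penalty([3.0, 4.0]))  # 160.0 (norm 5: penalty 10 * (5 - 1)^2)
```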

Place, publisher, year, edition, pages
Elsevier BV, 2024
Keywords
Tool breakage monitoring, Milling tool, Imbalanced data, Generative adversarial network, ACWGAN-GP, Deep learning
HSV category
Identifiers
urn:nbn:se:kth:diva-335183 (URN), 10.1016/j.rcim.2023.102624 (DOI), 001051111800001 (), 2-s2.0-85166476791 (Scopus ID)
Note

QC 20230901

Available from: 2023-09-01 Created: 2023-09-01 Last updated: 2023-09-04 Bibliographically approved
Urgo, M., Berardinucci, F., Zheng, P. & Wang, L. (2024). AI-Based Pose Estimation of Human Operators in Manufacturing Environments. In: Lecture Notes in Mechanical Engineering (pp. 3-38). Springer Nature, Vol. Part F2256.
2024 (English). In: Lecture Notes in Mechanical Engineering, Springer Nature, 2024, Vol. Part F2256, pp. 3-38. Chapter in book, part of anthology (Other academic)
Abstract [en]

The rapid development of AI-based approaches for image recognition has driven the availability of fast and reliable tools for identifying the human body in captured videos (both 2D and 3D). This has increased the feasibility and effectiveness of approaches for human pose estimation in industrial environments. This essay covers different approaches for estimating the human pose based on neural networks (e.g., CNN, LSTM), addressing the workflow and requirements for their implementation and use. A brief analysis and comparison of the existing AI-based frameworks and approaches (e.g., OpenPose, MediaPipe) is carried out, together with a listing of the related hardware and software requirements. Finally, two case studies presenting applications in the manufacturing sector are provided.

Place, publisher, year, edition, pages
Springer Nature, 2024
Keywords
Computer vision, Human pose estimation, Manual processes, Monitoring
HSV category
Identifiers
urn:nbn:se:kth:diva-344039 (URN), 10.1007/978-3-031-54034-9_1 (DOI), 2-s2.0-85185519584 (Scopus ID)
Note

QC 20240229

Available from: 2024-02-28 Created: 2024-02-28 Last updated: 2024-02-29 Bibliographically approved
Liu, Y., Sun, S., Shen, G., Wang, X. V., Wiktorsson, M. & Wang, L. (2024). An Auction-Based Approach for Multi-Agent Uniform Parallel Machine Scheduling with Dynamic Jobs Arrival. Engineering, 35, 32-45.
2024 (English). In: Engineering, ISSN 2095-8099, Vol. 35, pp. 32-45. Article in journal (Refereed). Published
Abstract [en]

This paper addresses a multi-agent scheduling problem with uniform parallel machines owned by a resource agent and competing jobs with dynamic arrival times that belong to different consumer agents. All agents are self-interested and rational, aiming to maximize their own objectives, which results in intense resource competition among consumer agents and strategic unwillingness to disclose private information. In this context, a centralized scheduling approach is infeasible, and a decentralized approach is considered for the targeted problem. This study aims to generate a stable and collaborative solution with high social welfare while simultaneously accommodating consumer agents' preferences under incomplete information. For this purpose, a dynamic iterative auction-based approach based on a decentralized decision-making procedure is developed. In the proposed approach, a dynamic auction procedure is established for dynamic jobs participating in a real-time auction, and a straightforward, easy-to-implement bidding strategy without price is presented to reduce the complexity of bid determination. In addition, an adaptive Hungarian algorithm is applied to solve the winner determination problem efficiently. A theoretical analysis proves that the proposed approach is individually rational and that the myopic bidding strategy is a weakly dominant strategy for consumer agents submitting bids. Extensive computational experiments demonstrate that the developed approach achieves high-quality solutions and exhibits considerable stability on large-scale problems with numerous consumer agents and jobs. A further multi-agent scheduling problem considering multiple resource agents will be studied in future work.
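For small instances, the winner-determination step reduces to a standard assignment problem. The brute-force search below finds the same optimum a Hungarian-type algorithm would; the cost matrix is invented for illustration and is not from the paper.

```python
from itertools import permutations

# Winner determination as job-to-machine assignment: pick the one-to-one
# mapping that minimizes total cost. A Hungarian algorithm does this in
# O(n^3); for a 3x3 toy instance exhaustive search gives the same answer.

def best_assignment(cost):
    # cost[j][m]: cost of running job j on machine m (square matrix).
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[j][perm[j]] for j in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

# Three jobs bidding for three uniform machines with different speeds.
cost = [[4, 2, 8],
        [5, 3, 7],
        [4, 1, 6]]
print(best_assignment(cost))  # ((0, 2, 1), 12): jobs 0,1,2 -> machines 0, 2, 1
```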

Place, publisher, year, edition, pages
Elsevier BV, 2024
Keywords
Multi-agent scheduling, Decentralized scheduling, Auction, Dynamic jobs, Private information
HSV category
Identifiers
urn:nbn:se:kth:diva-350108 (URN), 10.1016/j.eng.2023.09.024 (DOI), 001251742900001 (), 2-s2.0-85191446719 (Scopus ID)
Note

QC 20240708

Available from: 2024-07-08 Created: 2024-07-08 Last updated: 2024-07-08 Bibliographically approved
Fan, J., Zheng, P., Li, S. & Wang, L. (2024). An Integrated Hand-Object Dense Pose Estimation Approach With Explicit Occlusion Awareness for Human-Robot Collaborative Disassembly. IEEE Transactions on Automation Science and Engineering, 21(1), 147-156
2024 (English). In: IEEE Transactions on Automation Science and Engineering, ISSN 1545-5955, E-ISSN 1558-3783, Vol. 21, no. 1, pp. 147-156. Article in journal (Refereed). Published
Abstract [en]

Human-robot collaborative disassembly (HRCD) has gained much interest in the disassembly tasks of end-of-life products, integrating both the robot's high efficiency in repetitive work and the human's flexibility and higher cognition. Explicit human-object perception is significant for adaptive robot decision-making but remains little reported in the literature, especially for close-proximity co-work with partial occlusions. Aiming to bridge this gap, this study proposes a vision-based 3D dense hand-object pose estimation approach for HRCD. First, a mask-guided attentive module is proposed to better attend to hand and object areas, respectively. Meanwhile, explicit consideration of the occluded area in the input image is introduced to mitigate the performance degradation caused by visual occlusion, which is inevitable during HRCD hand-object interactions. In addition, a 3D hand-object pose dataset is collected for a lithium-ion battery disassembly scenario in the lab environment, with comparative experiments carried out to demonstrate the effectiveness of the proposed method.

This work aims to overcome the challenge of joint hand-object pose estimation in a human-robot collaborative disassembly scenario, which can also be applied to many other close-range human-robot/machine collaboration cases of practical value. The ability to accurately perceive the pose of the human hand and workpiece under partial occlusion is crucial for the collaborative robot to successfully carry out co-manipulation with human operators. This paper proposes an approach that jointly estimates the 3D pose of the hand and object in an integrated model. An explicit prediction of the occlusion area is then introduced as a regularization term during model training, making the model more robust to partial occlusion between the hand and object. The comparative experiments suggest that the proposed approach outperforms many existing hand-object pose estimation methods. Nevertheless, the dependency on manually labeled training data can limit its application. In the future, we will consider semi-supervised or unsupervised training to address this issue and achieve faster adaptation to different industrial scenarios.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
Collaboration, computer vision, Feature extraction, hand-object pose estimation, Human-robot collaborative disassembly, Image reconstruction, occlusion awareness, Pose estimation, Robots, Task analysis, Three-dimensional displays, Decision making, Job analysis, Lithium-ion batteries, Personnel training, Three dimensional computer graphics, Three dimensional displays, Features extraction, Human robots, Images reconstruction, Object pose, Pose-estimation, Three-dimensional display
HSV category
Identifiers
urn:nbn:se:kth:diva-328928 (URN), 10.1109/TASE.2022.3215584 (DOI), 001139915600043 (), 2-s2.0-85141545467 (Scopus ID)
Note

QC 20230613

Available from: 2023-06-13 Created: 2023-06-13 Last updated: 2024-03-26 Bibliographically approved
Li, D., Li, Y., Liu, C., Liu, X. & Wang, L. (2024). An online inference method for condition identification of workpieces with complex residual stress distributions. Journal of Manufacturing Systems, 73, 192-204.
2024 (English). In: Journal of Manufacturing Systems, ISSN 0278-6125, E-ISSN 1878-6642, Vol. 73, pp. 192-204. Article in journal (Refereed). Published
Abstract [en]

The residual stress field of structural components significantly influences their overall performance and service life. Due to the lack of effective representation means and inference methods, existing approaches are confined to inspecting local residual stress rather than the entire residual stress field, making the inference of complex residual stress fields quite difficult. In response to the challenges of requiring extensive deformation force data from the current workpiece and the inherent difficulty of establishing a stable relationship between deformation forces and residual stress fields, this paper proposes a novel residual stress field inference method based on a data-causal knowledge fusion model. Causal knowledge is introduced to eliminate the coupling effect of geometric change on residual stress, which compensates for the drawbacks of a purely data-driven model. The proposed approach can accurately infer the residual stress within the workpieces, providing an important basis for deformation control and part property improvement.

Place, publisher, year, edition, pages
Elsevier BV, 2024
Keywords
Condition identification, Coupling effect, Data-causal knowledge fusion model, Residual stress field, Stable relationship
HSV category
Identifiers
urn:nbn:se:kth:diva-343673 (URN), 10.1016/j.jmsy.2024.01.012 (DOI), 001184741100001 (), 2-s2.0-85184522579 (Scopus ID)
Note

QC 20240222

Available from: 2024-02-22 Created: 2024-02-22 Last updated: 2024-04-04 Bibliographically approved
Liu, S., Wang, L. & Gao, R. X. (2024). Cognitive neuroscience and robotics: Advancements and future research directions. Robotics and Computer-Integrated Manufacturing, 85, Article ID 102610.
2024 (English). In: Robotics and Computer-Integrated Manufacturing, ISSN 0736-5845, E-ISSN 1879-2537, Vol. 85, article id 102610. Article, research review (Refereed). Published
Abstract [en]

In recent years, brain-based technologies that capitalise on human abilities to facilitate human–system/robot interactions have been actively explored, especially in brain robotics. Brain–computer interfaces, as applications of this concept, have set a path to convert neural activities, recorded by sensors on the human scalp via electroencephalography, into valid commands for robot control and task execution. Thanks to the advancement of sensor technologies, non-invasive and invasive sensor headsets have been designed and developed to achieve stable recording of brainwave signals. However, robust and accurate extraction and interpretation of brain signals are critical to reliable task-oriented and opportunistic applications such as brainwave-controlled robotic interactions. In response to this need, pervasive technologies and advanced analytical approaches for translating and merging critical brain functions, behaviours, tasks, and environmental information have been a focus in brain-controlled robotic applications. These methods comprise signal processing, feature extraction, representation of neural activities, command conversion, and robot control. Artificial intelligence algorithms, especially deep learning, are used for the classification, recognition, and identification of patterns and intent underlying brainwaves recorded via electroencephalography. In this context, this paper provides a comprehensive review of the past and current status at the intersection of robotics, neuroscience, and artificial intelligence and highlights future research directions.

Place, publisher, year, edition, pages
Elsevier BV, 2024
Keywords
Brain robotics, Brainwave/electroencephalography, Brain–computer interface, Deep learning, Robot control, Signal processing
HSV category
Identifiers
urn:nbn:se:kth:diva-333952 (URN), 10.1016/j.rcim.2023.102610 (DOI), 001049545100001 (), 2-s2.0-85165534271 (Scopus ID)
Note

QC 20230818

Available from: 2023-08-18 Created: 2023-08-18 Last updated: 2023-09-01 Bibliographically approved
Huang, Z., Li, W., Zhu, J. & Wang, L. (2024). Cross-domain tool wear condition monitoring via residual attention hybrid adaptation network. Journal of Manufacturing Systems, 72, 406-423.
2024 (English). In: Journal of Manufacturing Systems, ISSN 0278-6125, E-ISSN 1878-6642, Vol. 72, pp. 406-423. Article in journal (Refereed). Published
Abstract [en]

Intelligent models for tool wear condition monitoring (TWCM) have been extensively researched. However, in industrial scenarios, limited acquired monitoring signals and variations in machining parameters lead to insufficient training samples and data distribution shifts for the models. To address these issues, this research presents a novel residual attention hybrid adaptation network (RAHAN) model based on a residual attention network (ResAttNet) and a hybrid adaptation strategy. In the RAHAN model, by integrating a channel attention mechanism and deep residual modules, ResAttNet is designed as a feature extractor to acquire features of tool wear conditions from monitoring signals. Embedding subdomain adaptation in a condition recognizer while establishing separate adversarial learning in a domain obfuscator, the hybrid adaptation strategy is developed to eliminate global distribution shifts and align local distributions of each tool wear phase simultaneously. Six migration tasks across a laboratory platform and two factory machining platforms were conducted to evaluate the effectiveness of the RAHAN model. Compared with a baseline model, four ablation models, and six state-of-the-art transfer learning models, the RAHAN model achieved the highest average accuracy of 92.70% on the six migration tasks. Furthermore, the RAHAN model shows clearer feature representations of each tool wear condition than the other compared models. The comparative results demonstrate that the RAHAN model has superior transferability and can therefore be considered a promising solution for cross-domain TWCM under different machining processes.
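The global distribution shift that domain adaptation tries to eliminate can be made concrete with a linear-kernel maximum mean discrepancy, which reduces to the squared distance between source and target feature means. This is a simplification for illustration; the RAHAN model uses adversarial and subdomain alignment rather than this statistic directly, and the feature vectors below are invented.

```python
# Linear-kernel MMD^2 between two feature sets: with a linear kernel the
# discrepancy is just the squared Euclidean distance between the sample means.

def linear_mmd2(source, target):
    dim = len(source[0])
    mean_s = [sum(x[d] for x in source) / len(source) for d in range(dim)]
    mean_t = [sum(x[d] for x in target) / len(target) for d in range(dim)]
    return sum((ms - mt) ** 2 for ms, mt in zip(mean_s, mean_t))

src = [[0.0, 0.0], [2.0, 0.0]]   # source-domain features, mean (1, 0)
tgt = [[1.0, 1.0], [1.0, 3.0]]   # target-domain features, mean (1, 2)
print(linear_mmd2(src, tgt))     # 4.0: the domains differ in their second feature
```

Adaptation training drives a statistic of this kind toward zero so that features extracted from laboratory and factory signals become indistinguishable.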

Place, publisher, year, edition, pages
Elsevier BV, 2024
Keywords
Adversarial domain adaptation, Deep transfer learning, Subdomain adaptation, Tool wear condition monitoring
HSV category
Identifiers
urn:nbn:se:kth:diva-342196 (URN), 10.1016/j.jmsy.2023.12.003 (DOI), 001146008200001 (), 2-s2.0-85180983978 (Scopus ID)
Note

QC 20240115

Available from: 2024-01-15 Created: 2024-01-15 Last updated: 2024-02-06 Bibliographically approved
Wang, T., Liu, Z., Wang, L., Li, M. & Wang, X. V. (2024). Data-efficient multimodal human action recognition for proactive human–robot collaborative assembly: A cross-domain few-shot learning approach. Robotics and Computer-Integrated Manufacturing, 89, Article ID 102785.
2024 (English). In: Robotics and Computer-Integrated Manufacturing, ISSN 0736-5845, E-ISSN 1879-2537, Vol. 89, article id 102785. Article in journal (Refereed). Published
Abstract [en]

With the recent vision of Industry 5.0, the cognitive capability of robots plays a crucial role in advancing proactive human–robot collaborative assembly. As a basis of mutual empathy, the understanding of a human operator's intention has been primarily studied through human action recognition. Existing deep learning-based methods demonstrate remarkable efficacy in handling information-rich data such as physiological measurements and videos, the latter being a more natural perception input. However, deploying these methods in new, unseen assembly scenarios requires first collecting abundant case-specific data, which entails significant manual effort and poor flexibility. To deal with this issue, this paper proposes a novel cross-domain few-shot learning method for data-efficient multimodal human action recognition. A hierarchical data fusion mechanism is designed to jointly leverage skeletons, RGB images, and depth maps with complementary information. A temporal CrossTransformer is then developed to enable action recognition with a very limited amount of data. Lightweight domain adapters are integrated to further improve generalization with fast finetuning. Extensive experiments on a real car engine assembly case show the superior performance of the proposed method over the state of the art in both accuracy and finetuning efficiency. Real-time demonstrations and an ablation study further indicate the potential of early recognition, which is beneficial for robot procedure generation in practical applications. In summary, this paper contributes to the rarely explored realm of data-efficient human action recognition for proactive human–robot collaboration.

Place, publisher, year, edition, pages
Elsevier BV, 2024
Keywords
Cross-domain few-shot learning, Data-efficient, Human action recognition, Human–robot collaborative assembly, Multimodal
HSV category
Identifiers
urn:nbn:se:kth:diva-346813 (URN), 10.1016/j.rcim.2024.102785 (DOI), 001242317200001 (), 2-s2.0-85192910539 (Scopus ID)
Note

QC 20240626

Available from: 2024-05-24 Created: 2024-05-24 Last updated: 2024-06-26 Bibliographically approved
Li, X., Liu, X., Yue, C., Wang, L. & Liang, S. Y. (2024). Data-model linkage prediction of tool remaining useful life based on deep feature fusion and Wiener process. Journal of Manufacturing Systems, 73, 19-38.
2024 (English). In: Journal of Manufacturing Systems, ISSN 0278-6125, E-ISSN 1878-6642, Vol. 73, pp. 19-38. Article in journal (Refereed). Published
Abstract [en]

Accurately predicting the tool remaining useful life (RUL) is critical for maximizing tool utilization and saving machining costs. Various physical model-based and data-driven prediction methods have been developed and successfully applied in different machining operations. However, many uncertain factors affect tool RUL during the cutting process, making it challenging to create a precise physical model that characterizes the degradation of tool performance. The success of a purely data-driven technique depends on the amount and quality of the training samples; it does not consider the physical law of tool wear, and the interpretability of its predictions is poor. This paper presents a data-model linkage approach for tool RUL prediction based on deep feature fusion and the Wiener process to address the above limitations. A convolutional stacked bidirectional long short-term memory network with a time-space attention mechanism (CSBLSTM-TSAM) is developed in the data-driven module to fuse the multi-sensor signals collected during the cutting process and obtain the mapping relationship between signal features and tool wear values. In the physical modeling module, a three-stage tool RUL prediction model based on the nonlinear Wiener process is established by considering the evolution law of different wear stages and multi-layer uncertainty, and the corresponding probability density function is derived. The real-time tool wear estimated by the data-driven module is used as the observed value of the physical model, and the model parameters are dynamically updated by a weight-optimized particle filter (WOPF) algorithm under a Bayesian framework, thereby realizing data-model linkage tool RUL prediction. Milling experiments demonstrate that the proposed method not only improves RUL prediction accuracy but also has good generalization ability and robustness for prediction tasks under different working conditions.
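The Wiener-process part of such an approach has a simple textbook intuition: for a linear-drift degradation process, the first passage time to the failure threshold follows an inverse Gaussian distribution with mean (threshold - current) / drift. The sketch below uses that special case with invented numbers; the paper's three-stage nonlinear model with particle-filter updating is considerably richer.

```python
# Back-of-envelope Wiener-process RUL estimate (illustrative only).
# For X(t) = x0 + mu*t + sigma*B(t) with drift mu > 0, the first passage
# time to threshold w is inverse-Gaussian with mean (w - x0) / mu.

def mean_rul(current_wear, threshold, drift):
    # current_wear, threshold in micrometers; drift in micrometers per minute.
    if current_wear >= threshold:
        return 0.0  # tool already at or past the failure criterion
    return (threshold - current_wear) / drift

# Tool at 180 um flank wear, failure threshold 300 um, wear rate 2 um/min.
print(mean_rul(180, 300, 2))  # 60.0 minutes of expected remaining life
```

In the paper's scheme, the data-driven module's wear estimate would play the role of `current_wear`, with the drift parameter updated online rather than fixed.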

Place, publisher, year, edition, pages
Elsevier BV, 2024
Keywords
Data-model linkage, Feature fusion, Remaining useful life prediction, Tool wear, Weight-optimized particle filter, Wiener process
HSV category
Identifiers
urn:nbn:se:kth:diva-342827 (URN), 10.1016/j.jmsy.2024.01.008 (DOI), 2-s2.0-85183028791 (Scopus ID)
Note

QC 20240201

Available from: 2024-01-31 Created: 2024-01-31 Last updated: 2024-02-01 Bibliographically approved
Organisations
Identifiers
ORCID iD: orcid.org/0000-0001-8679-8049