A data-efficient and general-purpose hand–eye calibration method for robotic systems using next best view
State Key Laboratory of Intelligent Manufacturing Equipment and Technology, Huazhong University of Science and Technology, China.
KTH, School of Industrial Engineering and Management (ITM), Production engineering. ORCID iD: 0000-0002-1909-0507
State Key Laboratory of Intelligent Manufacturing Equipment and Technology, Huazhong University of Science and Technology, China.
State Key Laboratory of Intelligent Manufacturing Equipment and Technology, Huazhong University of Science and Technology, China.
2025 (English). In: Advanced Engineering Informatics, ISSN 1474-0346, E-ISSN 1873-5320, Vol. 66, article id 103432. Article in journal (Refereed). Published.
Abstract [en]

Calibration between robots and cameras is critical in automated robot vision systems. However, conventional, manually conducted image-based calibration techniques are often limited by accuracy sensitivity and adapt poorly to dynamic or unstructured environments. These approaches are difficult to calibrate and deploy automatically, and they rely on rigid assumptions that degrade their performance. To address these limitations, this study proposes a data-efficient, vision-driven approach for fast, accurate, and robust hand–eye camera calibration that aims to maximise the efficiency with which robots obtain hand–eye calibration images without compromising accuracy. By analysing previously captured images, the method minimises the residual Jacobian matrix to predict the next optimal robot pose for calibration, i.e. a next-best-view (NBV) strategy. A method to adjust camera poses in dynamic environments is also proposed to achieve efficient and robust hand–eye calibration; it requires fewer images, reduces dependence on manual expertise, and ensures repeatability. The proposed method is tested in experiments with actual industrial robots. The results demonstrate that the NBV strategy reduces rotational error by 8.8%, translational error by 26.4%, and the number of sampling frames by 25% compared to manual sampling, with an average prediction time of 3.26 seconds per frame.
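
For context, the sketch below illustrates the classical eye-in-hand calibration problem (the AX = XB formulation) that the paper's next-best-view strategy is designed to solve with fewer, better-chosen poses. It uses OpenCV's cv2.calibrateHandEye on synthetic, hypothetical poses; it is not the authors' method, and the residual-Jacobian-based pose prediction described in the abstract is not reproduced here.

import numpy as np
import cv2


def rt_to_T(R, t):
    # Assemble a 4x4 homogeneous transform from a 3x3 rotation and a translation.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t).ravel()
    return T


rng = np.random.default_rng(0)

# Hypothetical ground-truth camera-to-gripper transform X (the unknown to estimate).
R_X, _ = cv2.Rodrigues(np.array([0.1, -0.2, 0.3]))
t_X = np.array([0.05, 0.02, 0.10])
T_X = rt_to_T(R_X, t_X)

# Hypothetical fixed pose of the calibration target in the robot base frame.
T_base_target = rt_to_T(cv2.Rodrigues(np.array([0.0, np.pi, 0.0]))[0],
                        np.array([0.6, 0.0, 0.2]))

R_gripper2base, t_gripper2base = [], []
R_target2cam, t_target2cam = [], []

for _ in range(10):
    # Random gripper pose in the base frame (stands in for one robot waypoint).
    R_g, _ = cv2.Rodrigues(rng.uniform(-0.5, 0.5, 3))
    t_g = rng.uniform(-0.1, 0.1, 3) + np.array([0.4, 0.0, 0.4])
    T_base_gripper = rt_to_T(R_g, t_g)

    # Target pose seen by the camera, consistent with the ground-truth X:
    # T_cam_target = inv(X) @ inv(T_base_gripper) @ T_base_target
    T_cam_target = np.linalg.inv(T_X) @ np.linalg.inv(T_base_gripper) @ T_base_target

    R_gripper2base.append(T_base_gripper[:3, :3])
    t_gripper2base.append(T_base_gripper[:3, 3].reshape(3, 1))
    R_target2cam.append(T_cam_target[:3, :3])
    t_target2cam.append(T_cam_target[:3, 3].reshape(3, 1))

# Solve AX = XB for the camera-to-gripper transform.
R_est, t_est = cv2.calibrateHandEye(
    R_gripper2base, t_gripper2base,
    R_target2cam, t_target2cam,
    method=cv2.CALIB_HAND_EYE_TSAI,
)

print("Estimated rotation:\n", R_est)
print("Ground-truth rotation:\n", R_X)
print("Estimated translation:", t_est.ravel())
print("Ground-truth translation:", t_X)

In this formulation, every additional robot pose adds constraints on X; the paper's contribution is in choosing those poses so that far fewer frames are needed while preserving accuracy.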

Place, publisher, year, edition, pages
Elsevier BV, 2025. Vol. 66, article id 103432.
Keywords [en]
Hand–eye calibration, Non-linear optimisation, Robot control, Robot vision system
National Category
Robotics and automation; Computer graphics and computer vision; Control Engineering
Identifiers
URN: urn:nbn:se:kth:diva-364151
DOI: 10.1016/j.aei.2025.103432
Scopus ID: 2-s2.0-105005832045
OAI: oai:DiVA.org:kth-364151
DiVA, id: diva2:1964107
Note

QC 20250605

Available from: 2025-06-04. Created: 2025-06-04. Last updated: 2025-06-05. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Liu, Sichao; Wang, Xi Vincent; Wang, Lihui
