Fusion of Ladybug3 omnidirectional camera and Velodyne Lidar
KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Satellite Positioning.
2015 (English). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
Abstract [en]

The advent of autonomous vehicles is expediting a revolution in the car industry. Volvo Car Corporation has the ambition of developing the next generation of autonomous vehicles. Within Volvo Car Corporation, the Active Safety CAE group has initiated a series of research projects to enhance the safety functions of autonomous vehicles, and this thesis work was carried out at Active Safety CAE with their support.

Perception plays a pivotal role in autonomous driving. Therefore, an approach to improving vision is proposed: fusing two different types of data, from the Velodyne HDL-64E S3 High Definition LiDAR sensor and the Ladybug3 camera respectively.

This report presents the whole process of fusing point clouds and image data. An experiment was carried out to collect and synchronize multi-sensor data streams, based on a purpose-built platform that supports the mounting of the Velodyne, the Ladybug3 and their accessories, as well as the connection of a GPS unit and a laptop. The software and programming environment used for recording, synchronizing and storing the data are also described.

Synchronization is achieved mainly by matching timestamps between the different datasets; creating timestamp log files is the primary task here.
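The thesis does not reproduce code at this point; as a minimal sketch of the timestamp-matching idea, assuming each sensor log reduces to a sorted list of timestamps in seconds, nearest-neighbor matching within a tolerance could look like the following (the function name and tolerance value are illustrative, not from the thesis):

```python
import bisect

def match_timestamps(lidar_ts, camera_ts, tolerance=0.05):
    """Pair each LiDAR timestamp with the nearest camera timestamp.

    lidar_ts, camera_ts: sorted lists of timestamps in seconds.
    tolerance: maximum allowed offset in seconds; scans with no
    camera frame close enough are skipped.
    Hypothetical helper for illustration, not code from the thesis.
    """
    pairs = []
    for t in lidar_ts:
        i = bisect.bisect_left(camera_ts, t)
        # Candidates: the camera frames just before and just after t.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(camera_ts)]
        if not candidates:
            continue
        best = min(candidates, key=lambda j: abs(camera_ts[j] - t))
        if abs(camera_ts[best] - t) <= tolerance:
            pairs.append((t, camera_ts[best]))
    return pairs

# Example: a 10 Hz LiDAR log matched against a ~15 Hz camera log.
lidar = [0.00, 0.10, 0.20, 0.30]
camera = [0.01, 0.08, 0.15, 0.21, 0.28]
print(match_timestamps(lidar, camera))
# [(0.0, 0.01), (0.1, 0.08), (0.2, 0.21), (0.3, 0.28)]
```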

External calibration between the Velodyne and the Ladybug3 camera, which allows the two datasets to be matched correctly, is the focus of this report. The project develops a semi-automatic calibration method that requires very little human intervention: a checkerboard is used to acquire a small set of correspondences between feature points in the laser point cloud and in the image. From these correspondences the displacement is computed, and the laser points are back-projected into the image using the computed result. If the original and back-projected images are sufficiently consistent, the transformation parameters are accepted. The displacement between camera and laser scanner is estimated in two separate steps: first, the pose of the checkerboard is estimated in the image to obtain its depth in the camera coordinate system; then the transformation between the camera and the laser scanner is computed in three-dimensional space.
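As a rough sketch of this two-step estimation, assuming a simple pinhole model for brevity (the Ladybug3 actually requires its own omnidirectional projection model), step one can be done with OpenCV's solvePnP on the detected checkerboard corners and step two with a least-squares rigid fit (Kabsch/SVD) on corresponding 3D points; all function names and parameters below are hypothetical, not taken from the thesis:

```python
import numpy as np
import cv2

def checkerboard_pose(corners_2d, grid, square, K, dist):
    """Step 1: pose of the checkerboard in the camera frame via PnP.

    corners_2d: detected inner-corner pixels (Nx2), grid: (cols, rows)
    of inner corners, square: square size in meters, K/dist: camera
    intrinsics. Pinhole assumption for brevity only.
    """
    cols, rows = grid
    obj = np.zeros((cols * rows, 3), np.float32)
    obj[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square
    ok, rvec, tvec = cv2.solvePnP(obj, corners_2d.astype(np.float32), K, dist)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec  # board points map into the camera frame as R @ X + t

def rigid_transform(P_lidar, P_cam):
    """Step 2: least-squares rigid transform (Kabsch/SVD) that takes
    corresponding 3D points (Nx3 each) from the LiDAR frame into the
    camera frame: P_cam ~= P_lidar @ R.T + t."""
    cl, cc = P_lidar.mean(axis=0), P_cam.mean(axis=0)
    H = (P_lidar - cl).T @ (P_cam - cc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # repair an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cc - R @ cl
    return R, t
```

The consistency check described above would then re-project the laser points with the estimated (R, t) and the camera intrinsics and compare the result against the original image.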

Fusion of the datasets is finally achieved by combining color information from the images with range information from the point cloud. Other applications related to data fusion are also developed in support of future work.
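A minimal sketch of this colorization step, again under a pinhole assumption and with hypothetical names: project each LiDAR point into the image using the calibrated extrinsics and intrinsics, then attach the pixel color at the projected location:

```python
import numpy as np

def colorize_point_cloud(points, image, R, t, K):
    """Attach image color to each LiDAR point.

    points: Nx3 array in the LiDAR frame; image: HxWx3 array;
    (R, t): LiDAR-to-camera extrinsics from the calibration step,
    t a length-3 translation; K: 3x3 pinhole intrinsics.
    Illustrative sketch only, not code from the thesis.
    """
    cam = points @ R.T + t                 # into the camera frame
    cam = cam[cam[:, 2] > 0]               # keep points in front of camera
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]            # perspective divide -> pixels
    u = uv[:, 0].astype(int)
    v = uv[:, 1].astype(int)
    h, w = image.shape[:2]
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # Nx6 output: x, y, z in the camera frame plus the sampled color.
    return np.hstack([cam[inside], image[v[inside], u[inside]]])
```

Points behind the camera or outside the image are simply dropped here; a real Ladybug3 pipeline would instead need to select the appropriate one of the rig's six camera heads before sampling color.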

Finally, conclusions are drawn and possible improvements are identified for future work. For example, better calibration accuracy might be achieved with other methods, and adding texture to the point cloud would produce a more realistic model.

Place, publisher, year, edition, pages
2015. 69 p.
Series: TRITA-GIT EX, 15-011
National Category
Other Civil Engineering
URN: urn:nbn:se:kth:diva-172431
OAI: diva2:848039
Educational program
Master of Science - Transport and Geoinformation Technology
Presentation: 2015-06-17, 10:00 (English)
Available from: 2015-08-25. Created: 2015-08-23. Last updated: 2015-08-25. Bibliographically approved.

Open Access in DiVA
fulltext: FULLTEXT01.pdf (1779 kB, application/pdf, checksum SHA-512)
