Publications (10 of 11)
Das, S., Boberg, B., Fallon, M. & Chatterjee, S. (2024). IMU-based Online Multi-lidar Calibration.
IMU-based Online Multi-lidar Calibration
2024 (English). Manuscript (preprint) (Other academic)
Abstract [en]

Modern autonomous systems typically use several sensors for perception. For best performance, accurate and reliable extrinsic calibration is necessary. In this research, we propose a reliable technique for the extrinsic calibration of several lidars on a vehicle without the need for odometry estimation or fiducial markers. First, our method generates an initial guess of the extrinsics by matching the raw signals of IMUs co-located with each lidar. This initial guess is then used in ICP and point cloud feature matching, which refines and verifies this estimate. Furthermore, we can use observability criteria to choose a subset of the IMU measurements that have the highest mutual information, rather than comparing all the readings. We have successfully validated our methodology using data gathered from Scania test vehicles.
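
As a rough illustration of the first step described above (matching the raw signals of co-located IMUs to obtain an initial guess of the rotation extrinsic), the sketch below aligns time-synchronized angular-velocity samples from two IMUs with a Kabsch/SVD fit. The function, data, and noise level are hypothetical; the published method may differ in detail.

```python
import numpy as np

def initial_rotation_from_gyros(omega_a, omega_b):
    """Estimate the rotation R with omega_a ≈ R @ omega_b from time-synchronized
    angular-velocity samples (N x 3) of two rigidly mounted IMUs.
    Kabsch/SVD alignment; illustrative only, not the paper's exact procedure."""
    H = omega_b.T @ omega_a                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # rotation from IMU b to IMU a

# Toy usage: two gyro streams related by a known rotation plus noise.
rng = np.random.default_rng(0)
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
omega_b = rng.normal(size=(500, 3))
omega_a = omega_b @ R_true.T + 0.01 * rng.normal(size=(500, 3))
print(np.allclose(initial_rotation_from_gyros(omega_a, omega_b), R_true, atol=1e-2))
```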

National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-343534 (URN)
Note

Submitted to IEEE IV 2024, the 35th IEEE Intelligent Vehicles Symposium, June 2-5, 2024, Jeju Shinhwa World, Jeju Island, Korea.

QC 20240216

Available from: 2024-02-16. Created: 2024-02-16. Last updated: 2025-02-07. Bibliographically approved.
Das, S., Boberg, B., Fallon, M. & Chatterjee, S. (2024). IMU-based Online Multi-lidar Calibration. In: 35th IEEE Intelligent Vehicles Symposium, IV 2024. Paper presented at 35th IEEE Intelligent Vehicles Symposium, IV 2024, Jeju Island, Korea, Jun 2 2024 - Jun 5 2024 (pp. 3227-3234). Institute of Electrical and Electronics Engineers (IEEE)
IMU-based Online Multi-lidar Calibration
2024 (English). In: 35th IEEE Intelligent Vehicles Symposium, IV 2024, Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 3227-3234. Conference paper, Published paper (Refereed)
Abstract [en]

Modern autonomous systems typically use several sensors for perception. For best performance, accurate and reliable extrinsic calibration is necessary. In this research, we propose a reliable technique for the extrinsic calibration of several lidars on a vehicle without the need for odometry estimation or fiducial markers. First, our method generates an initial guess of the extrinsics by matching the raw signals of IMUs co-located with each lidar. This initial guess is then used in ICP and point cloud feature matching, which refines and verifies this estimate. Furthermore, we can use observability criteria to choose a subset of the IMU measurements that have the highest mutual information, rather than comparing all the readings. We have successfully validated our methodology using data gathered from Scania test vehicles.
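
As an illustrative companion to the observability-based measurement selection mentioned above, the sketch below greedily picks the angular-velocity samples that most increase a log-determinant (D-optimality style) information measure. This criterion and the names are assumptions made for illustration; they are not taken from the paper.

```python
import numpy as np

def select_informative_samples(omega, k):
    """Greedily pick k angular-velocity samples (from an N x 3 array) that
    maximize the log-determinant of the accumulated Gram matrix, a
    D-optimality style proxy for observability. Illustrative only."""
    selected = []
    G = 1e-6 * np.eye(3)                              # keeps the determinant positive
    for _ in range(k):
        best_i, best_gain = None, -np.inf
        for i in range(omega.shape[0]):
            if i in selected:
                continue
            w = omega[i][:, None]
            gain = np.linalg.slogdet(G + w @ w.T)[1]  # log|G + w w^T|
            if gain > best_gain:
                best_i, best_gain = i, gain
        selected.append(best_i)
        G += omega[best_i][:, None] @ omega[best_i][None, :]
    return selected

rng = np.random.default_rng(1)
omega = rng.normal(size=(200, 3))                     # hypothetical gyro readings
print(select_informative_samples(omega, k=5))
```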

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-351753 (URN)
10.1109/IV55156.2024.10588695 (DOI)
001275100903063
2-s2.0-85199765715 (Scopus ID)
Conference
35th IEEE Intelligent Vehicles Symposium, IV 2024, Jeju Island, Korea, Jun 2 2024 - Jun 5 2024
Note

Part of ISBN 9798350348811

QC 20240814

Available from: 2024-08-13. Created: 2024-08-13. Last updated: 2025-02-07. Bibliographically approved.
Das, S. (2024). State estimation with auto-calibrated sensor setup. (Doctoral dissertation). KTH Royal Institute of Technology
State estimation with auto-calibrated sensor setup
2024 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Localization and mapping is one of the key aspects of driving autonomously in unstructured environments. Often such vehicles are equipped with multiple sensor modalities to create 360° sensing coverage and add redundancy to handle sensor dropout scenarios. As the vehicles operate in underground mining and dense urban environments, the Global Navigation Satellite System (GNSS) is often unreliable. Hence, to create a robust localization system, different sensor modalities like camera, lidar and IMU are used along with a GNSS solution. The system must handle sensor dropouts and work in real-time (~15 Hz), so that there is enough computation budget left for other tasks like planning and control. Additionally, precise localization is also needed to map the environment, which may later be used for re-localization of the autonomous vehicles as well. Finally, for all of these to work seamlessly, accurate calibration of the sensors is of utmost importance.

In this PhD thesis, first, a robust system for state estimation that fuses measurements from multiple lidars and inertial sensors with GNSS data is presented. State estimation was performed in real-time, producing robust motion estimates in a global frame by fusing lidar and IMU signals with GNSS components using a factor graph framework. The proposed method handled signal loss with a novel synchronization and fusion mechanism. To validate the approach, extensive tests were carried out on data collected using Scania test vehicles (5 sequences for a total of ~7 km). An average improvement of 61% in relative translation error and 42% in rotational error compared to a state-of-the-art estimator fusing a single lidar/inertial sensor pair is reported.

Since precise calibration is needed for the localization and mapping tasks, methods for real-time calibration of the sensor setup are also proposed in this thesis. First, a method is proposed to calibrate sensors with non-overlapping fields of view. The calibration quality is verified by mapping known features in the environment. Nevertheless, the verification process was not real-time, and no observability analysis was performed which could give an indication of the analytical traceability of the trajectory required for motion-based online calibration. Hence, a new method is proposed where calibration and verification are performed in real-time by matching estimated sensor poses, together with an observability analysis. Both of these methods relied on estimating the sensor poses using the state estimator developed in our earlier works. However, state estimators have inherent drift and are computationally intensive. Thus, another novel method is developed where the sensors can be calibrated in real-time without the need for any state estimation.
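
A minimal sketch of the kind of factor-graph fusion described in this abstract (lidar-inertial odometry constraints combined with absolute GNSS information) is given below, assuming the GTSAM Python bindings. Factor types, noise sigmas, and data are illustrative; in particular, the loose-rotation prior is a simplification that stands in for a translation-only GNSS factor and is not the thesis implementation.

```python
import numpy as np
import gtsam

# Hypothetical inputs: relative poses from lidar-inertial odometry and
# absolute GNSS positions for a handful of key frames (toy values).
odom = [gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0)) for _ in range(4)]
gnss_xyz = [np.array([float(i), 0.0, 0.0]) for i in range(5)]

graph = gtsam.NonlinearFactorGraph()
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(
    np.array([0.01, 0.01, 0.01, 0.05, 0.05, 0.05]))   # rot (rad), trans (m)
# Very loose rotation sigmas make this prior act like a translation-only
# GNSS factor; a simplification used only for this sketch.
gnss_noise = gtsam.noiseModel.Diagonal.Sigmas(
    np.array([1e3, 1e3, 1e3, 0.5, 0.5, 0.5]))

initial = gtsam.Values()
pose = gtsam.Pose3()
initial.insert(0, pose)
graph.add(gtsam.PriorFactorPose3(0, pose, gnss_noise))
for i, rel in enumerate(odom):
    graph.add(gtsam.BetweenFactorPose3(i, i + 1, rel, odom_noise))   # odometry constraint
    gnss_pose = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(*gnss_xyz[i + 1]))
    graph.add(gtsam.PriorFactorPose3(i + 1, gnss_pose, gnss_noise))  # absolute GNSS
    pose = pose.compose(rel)
    initial.insert(i + 1, pose)

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(len(odom)).translation())
```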

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2024. p. 151
Series
TRITA-EECS-AVL ; 2024:8
Keywords
SLAM, Sensor calibration, Autonomous driving
National Category
Signal Processing; Robotics and automation
Research subject
Electrical Engineering
Identifiers
urn:nbn:se:kth:diva-343412 (URN)
978-91-8040-806-6 (ISBN)
Public defence
2024-03-08, https://kth-se.zoom.us/s/63372097801, F3, Lindstedtsvägen 26, Stockholm, 13:00 (English)
Opponent
Supervisors
Funder
Swedish Foundation for Strategic Research
Note

QC 20240213

Available from: 2024-02-14. Created: 2024-02-12. Last updated: 2025-02-05. Bibliographically approved.
Das, S., Mahabadi, N., Fallon, M. & Chatterjee, S. (2023). M-LIO: Multi-lidar, multi-IMU odometry with sensor dropout tolerance. In: IV 2023 - IEEE Intelligent Vehicles Symposium, Proceedings. Paper presented at 34th IEEE Intelligent Vehicles Symposium, IV 2023, Anchorage, United States of America, Jun 4 2023 - Jun 7 2023. Institute of Electrical and Electronics Engineers (IEEE)
M-LIO: Multi-lidar, multi-IMU odometry with sensor dropout tolerance
2023 (English). In: IV 2023 - IEEE Intelligent Vehicles Symposium, Proceedings, Institute of Electrical and Electronics Engineers (IEEE), 2023. Conference paper, Published paper (Refereed)
Abstract [en]

We present a robust system for state estimation that fuses measurements from multiple lidars and inertial sensors with GNSS data. To initiate the method, we use the prior GNSS pose information. We then perform motion estimation in real-time, which produces robust motion estimates in a global frame by fusing lidar and IMU signals with GNSS translation components using a factor graph framework. We also propose methods to account for signal loss with a novel synchronization and fusion mechanism. To validate our approach, extensive tests were carried out on data collected using Scania test vehicles (5 sequences for a total of ≈7 km). From our evaluations, we show an average improvement of 61% in relative translation error and 42% in rotational error compared to a state-of-the-art estimator fusing a single lidar/inertial sensor pair, in sensor dropout scenarios.
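
The dropout tolerance rests on fusing only the measurements that are actually available and fresh at each step. The snippet below is a toy illustration of such gating logic, with an assumed data layout and a hypothetical freshness window; it is not the synchronization mechanism from the paper.

```python
import numpy as np

def gather_active_measurements(streams, t_now, max_age=0.05):
    """Return the latest measurement of every sensor stream that is still
    fresh at fusion time t_now; stale or dropped-out sensors are skipped.
    streams maps a sensor name to a list of (timestamp, measurement).
    Illustrative gating only, with a made-up freshness window."""
    active = {}
    for name, samples in streams.items():
        fresh = [(t, m) for t, m in samples if t_now - max_age <= t <= t_now]
        if fresh:
            active[name] = max(fresh, key=lambda tm: tm[0])[1]
    return active

streams = {
    "lidar_front": [(0.00, np.eye(4)), (0.10, np.eye(4))],
    "lidar_rear":  [(0.00, np.eye(4))],                 # dropped out after t = 0
    "imu":         [(0.09, np.zeros(6)), (0.10, np.zeros(6))],
}
print(sorted(gather_active_measurements(streams, t_now=0.10)))
# -> ['imu', 'lidar_front']  (the rear lidar is ignored while it is down)
```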

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Odometry estimation, Sensor fusion, SLAM
National Category
Computer graphics and computer vision; Signal Processing
Identifiers
urn:nbn:se:kth:diva-335040 (URN)
10.1109/IV55152.2023.10186548 (DOI)
001042247300023
2-s2.0-85168001725 (Scopus ID)
Conference
34th IEEE Intelligent Vehicles Symposium, IV 2023, Anchorage, United States of America, Jun 4 2023 - Jun 7 2023
Note

Part of ISBN 9798350346916

QC 20230831

Available from: 2023-08-31. Created: 2023-08-31. Last updated: 2025-02-01. Bibliographically approved.
Das, S., af Klinteberg, L., Fallon, M. & Chatterjee, S. (2023). Observability-Aware Online Multi-Lidar Extrinsic Calibration. IEEE Robotics and Automation Letters, 8(5), 2860-2867
Observability-Aware Online Multi-Lidar Extrinsic Calibration
2023 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 8, no. 5, p. 2860-2867. Article in journal (Refereed). Published
Abstract [en]

Accurate and robust extrinsic calibration is necessary for deploying autonomous systems which need multiple sensors for perception. In this letter, we present a robust system for real-time extrinsic calibration of multiple lidars in the vehicle base frame without the need for any fiducial markers or features. We base our approach on matching absolute GNSS (Global Navigation Satellite System) and estimated lidar poses in real-time. Comparing rotation components allows us to improve the robustness of the solution compared to the traditional least-squares approach that compares translation components only. Additionally, instead of comparing all corresponding poses, we select poses comprising maximum mutual information based on our novel observability criteria. This allows us to identify a subset of the poses helpful for real-time calibration. We also provide stopping criteria for ensuring calibration completion. To validate our approach, extensive tests were carried out on data collected using Scania test vehicles (7 sequences for a total of approximately 6.5 km). The results presented in this letter show that our approach is able to accurately determine the extrinsic calibration for various combinations of sensor setups.
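
To make the rotation-plus-translation matching concrete, here is a simplified sketch that estimates an extrinsic transform aligning two pose streams by stacking rotation and translation residuals in a nonlinear least-squares problem (SciPy). The pose model T_a ≈ T_b · X, the weights, and all names are assumptions for illustration, not the paper's exact vehicle-base-frame formulation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

def residuals(x, poses_a, poses_b, w_rot=1.0):
    """Residuals for an extrinsic X (rotation vector + translation) under the
    simplified model T_a_i ≈ T_b_i · X. Rotation and translation errors are
    stacked, so rotation information also constrains the estimate."""
    R_x, t_x = R.from_rotvec(x[:3]), x[3:]
    res = []
    for (R_a, t_a), (R_b, t_b) in zip(poses_a, poses_b):
        R_pred, t_pred = R_b * R_x, R_b.apply(t_x) + t_b
        res.append(w_rot * (R_a.inv() * R_pred).as_rotvec())  # rotation error
        res.append(t_pred - t_a)                               # translation error
    return np.concatenate(res)

# Toy data: poses_b transformed by a known extrinsic give poses_a exactly.
rng = np.random.default_rng(2)
X_rot, X_t = R.from_rotvec([0.0, 0.0, 0.3]), np.array([0.5, -0.2, 0.1])
poses_b = [(R.from_rotvec(rng.normal(size=3)), rng.normal(size=3)) for _ in range(20)]
poses_a = [(R_b * X_rot, R_b.apply(X_t) + t_b) for R_b, t_b in poses_b]
sol = least_squares(residuals, np.zeros(6), args=(poses_a, poses_b))
print(np.round(sol.x, 3))   # expected ≈ [0, 0, 0.3, 0.5, -0.2, 0.1]
```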

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Calibration and identification, autonomous vehicle navigation, sensor fusion
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-326661 (URN)
10.1109/LRA.2023.3262176 (DOI)
000964797800011
2-s2.0-85151572135 (Scopus ID)
Note

QC 20230508

Available from: 2023-05-08. Created: 2023-05-08. Last updated: 2025-02-07. Bibliographically approved.
Das, S., Mahabadi, N., Djikic, A., Nassir, C., Chatterjee, S. & Fallon, M. (2022). Extrinsic Calibration and Verification of Multiple Non-overlapping Field of View Lidar Sensors. In: 2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2022). Paper presented at IEEE International Conference on Robotics and Automation (ICRA), MAY 23-27, 2022, Philadelphia, PA, USA. Institute of Electrical and Electronics Engineers (IEEE)
Extrinsic Calibration and Verification of Multiple Non-overlapping Field of View Lidar Sensors
2022 (English). In: 2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2022), Institute of Electrical and Electronics Engineers (IEEE), 2022. Conference paper, Published paper (Refereed)
Abstract [en]

We demonstrate a multi-lidar calibration framework for large mobile platforms that jointly calibrates the extrinsic parameters of non-overlapping Field-of-View (FoV) lidar sensors, without the need for any external calibration aid. The method starts by estimating the pose of each lidar in its corresponding sensor frame between subsequent timestamps. Since the pose estimates from the lidars are not necessarily synchronous, we first align the poses using a Dual Quaternion (DQ) based Screw Linear Interpolation. Afterward, a Hand-Eye based calibration problem is solved using the DQ-based formulation to recover the extrinsics. Furthermore, we verify the extrinsics by matching chosen lidar semantic features, obtained by projecting the lidar data into the camera perspective after time alignment using vehicle kinematics. Experimental results on the data collected from a Scania vehicle (~1 km sequence) demonstrate the ability of our approach to obtain better calibration parameters than the provided vehicle CAD model calibration parameters. This setup can also be scaled to any combination of multiple lidars.
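
The time-alignment step can be pictured with the short sketch below, which interpolates one timestamped pose stream at another sensor's timestamps using SciPy's quaternion SLERP together with linear interpolation of translations. This is a simplified stand-in for the dual-quaternion screw linear interpolation used in the paper, and the sample rates and names are made up for illustration.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def align_pose_stream(times_src, quats_src, trans_src, times_query):
    """Interpolate a timestamped pose stream (quaternions + translations) at the
    query timestamps of another sensor. Rotations use SLERP, translations use
    linear interpolation; a simple stand-in for the paper's dual-quaternion
    screw linear interpolation."""
    slerp = Slerp(times_src, Rotation.from_quat(quats_src))
    rots_q = slerp(times_query)
    trans_q = np.stack([np.interp(times_query, times_src, trans_src[:, k])
                        for k in range(3)], axis=1)
    return rots_q, trans_q

# Hypothetical usage: lidar A poses at 10 Hz resampled at lidar B's timestamps.
times_a = np.arange(0.0, 1.01, 0.1)
quats_a = Rotation.from_euler("z", np.linspace(0, 30, len(times_a)), degrees=True).as_quat()
trans_a = np.column_stack([np.linspace(0, 1, len(times_a)),
                           np.zeros(len(times_a)), np.zeros(len(times_a))])
times_b = np.linspace(0.0, 1.0, 8)
rots_b, trans_b = align_pose_stream(times_a, quats_a, trans_a, times_b)
print(np.round(rots_b.as_euler("zyx", degrees=True)[:, 0], 1))   # interpolated yaw
```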

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
National Category
Physical Sciences
Identifiers
urn:nbn:se:kth:diva-326490 (URN)
10.1109/ICRA46639.2022.9811704 (DOI)
000941265700085
2-s2.0-85136321668 (Scopus ID)
Conference
IEEE International Conference on Robotics and Automation (ICRA), MAY 23-27, 2022, Philadelphia, PA, USA
Note

QC 20230626

Available from: 2023-05-03. Created: 2023-05-03. Last updated: 2024-02-13. Bibliographically approved.
Das, S., Mahabadi, N., Chatterjee, S. & Fallon, M. (2022). Multi-modal curb detection and filtering. Paper presented at IEEE International Conference on Robotics and Automation (ICRA) Workshop: Robotic Perception and Mapping - Emerging Techniques, May 23, 2022, Philadelphia, USA.
Multi-modal curb detection and filtering
2022 (English). Conference paper, Poster (with or without abstract) (Other academic)
Abstract [en]

Reliable knowledge of road boundaries is critical for autonomous vehicle navigation. We propose a robust curb detection and filtering technique based on the fusion of camera semantics and dense lidar point clouds. The lidar point clouds are collected by fusing multiple lidars for robust feature detection. The camera semantics are based on a modified EfficientNet architecture, which is trained with labeled data collected from onboard fisheye cameras. The point clouds are associated with the closest curb segment using L2-norm analysis after projecting them into the image space with the fisheye model projection. Next, the selected points are clustered using unsupervised density-based spatial clustering to detect different curb regions. As new curb points are detected in consecutive frames, they are associated with the existing curb clusters using temporal reachability constraints. If no reachability constraints are found, a new curb cluster is formed from these new points. This ensures we can detect multiple curbs present in road segments consisting of multiple lanes if they are in the sensors' field of view. Finally, Delaunay filtering is applied for outlier removal and its performance is compared to traditional RANSAC-based filtering. An objective evaluation of the proposed solution is done using a high-definition map containing ground truth curb points obtained from a commercial map supplier. The proposed system has proven capable of detecting curbs of any orientation in complex urban road scenarios comprising straight roads, curved roads, and intersections with traffic isles.
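
The clustering and temporal-association steps can be illustrated roughly as below, using scikit-learn's DBSCAN and a nearest-centroid association rule on 2D toy data. The thresholds and the association rule are assumptions for illustration; the paper's reachability-based association is more involved.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def update_curb_clusters(clusters, new_points, eps=0.5, min_samples=5, assoc_dist=1.0):
    """Cluster newly detected curb points with DBSCAN and attach each new
    cluster to an existing curb (nearest centroid) or start a new one.
    Thresholds and the association rule are illustrative only."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(new_points)
    for lbl in set(labels) - {-1}:                      # -1 marks DBSCAN noise
        pts = new_points[labels == lbl]
        centroid = pts.mean(axis=0)
        dists = [np.linalg.norm(centroid - c.mean(axis=0)) for c in clusters]
        if dists and min(dists) < assoc_dist:
            idx = int(np.argmin(dists))
            clusters[idx] = np.vstack([clusters[idx], pts])   # extend existing curb
        else:
            clusters.append(pts)                              # new curb cluster
    return clusters

rng = np.random.default_rng(3)
frame1 = rng.normal(scale=0.1, size=(50, 2)) + [0.0, 0.0]
frame2 = rng.normal(scale=0.1, size=(50, 2)) + [0.3, 0.0]     # same curb, next frame
curbs = update_curb_clusters([], frame1)
curbs = update_curb_clusters(curbs, frame2)
print(len(curbs))   # 1: the second frame is associated with the existing curb
```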

National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-343533 (URN)
Conference
IEEE International Conference on Robotics and Automation (ICRA) Workshop: Robotic Perception and Mapping - Emerging Techniques, May 23, 2022, Philadelphia, USA
Note

QC 20240216

Available from: 2024-02-16. Created: 2024-02-16. Last updated: 2025-02-07. Bibliographically approved.
Das, S., Javid, A. M., Borpatra Gohain, P., Eldar, Y. C. & Chatterjee, S. (2022). Neural Greedy Pursuit for Feature Selection. In: 2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN). Paper presented at IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) / IEEE World Congress on Computational Intelligence (IEEE WCCI) / International Joint Conference on Neural Networks (IJCNN) / IEEE Congress on Evolutionary Computation (IEEE CEC), JUL 18-23, 2022, Padua, ITALY. Institute of Electrical and Electronics Engineers (IEEE)
Neural Greedy Pursuit for Feature Selection
2022 (English). In: 2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), Institute of Electrical and Electronics Engineers (IEEE), 2022. Conference paper, Published paper (Refereed)
Abstract [en]

We propose a greedy algorithm to select N important features among P input features for a non-linear prediction problem. The features are selected one by one sequentially, in an iterative loss minimization procedure. We use neural networks as predictors in the algorithm to compute the loss and hence, we refer to our method as neural greedy pursuit (NGP). NGP is efficient in selecting N features when N << P, and it provides a notion of feature importance in a descending order following the sequential selection procedure. We experimentally show that NGP provides better performance than several feature selection methods such as DeepLIFT and Drop-one-out loss. In addition, we experimentally show a phase transition behavior in which perfect selection of all N features without false positives is possible when the training data size exceeds a threshold.
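
A compact way to picture the greedy procedure is the sketch below, which adds one feature at a time, each time training a small neural network (scikit-learn's MLPRegressor here, chosen for brevity) and keeping the feature that most reduces validation loss. Network size, data, and the stopping point are illustrative rather than the paper's exact setup.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

def neural_greedy_selection(X, y, n_select):
    """Greedy forward feature selection with a small neural network as the
    predictor: at each step, add the feature that most reduces validation
    loss. A compact illustration; the paper's NGP differs in details."""
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
    selected = []
    for _ in range(n_select):
        best_f, best_loss = None, np.inf
        for f in range(X.shape[1]):
            if f in selected:
                continue
            cols = selected + [f]
            net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
            net.fit(X_tr[:, cols], y_tr)
            loss = np.mean((net.predict(X_va[:, cols]) - y_va) ** 2)
            if loss < best_loss:
                best_f, best_loss = f, loss
        selected.append(best_f)
    return selected            # features in descending order of importance

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 10))
y = np.sin(X[:, 2]) + 0.5 * X[:, 7] ** 2          # only features 2 and 7 matter
print(neural_greedy_selection(X, y, n_select=2))  # expected to contain 2 and 7
```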

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
Series
IEEE International Joint Conference on Neural Networks (IJCNN), ISSN 2161-4393
Keywords
Feature selection, Deep learning
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-323022 (URN)
10.1109/IJCNN55064.2022.9892946 (DOI)
000867070908056
2-s2.0-85140774694 (Scopus ID)
Conference
IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) / IEEE World Congress on Computational Intelligence (IEEE WCCI) / International Joint Conference on Neural Networks (IJCNN) / IEEE Congress on Evolutionary Computation (IEEE CEC), JUL 18-23, 2022, Padua, ITALY
Note

Part of proceedings: ISBN 978-1-7281-8671-9

QC 20230112

Available from: 2023-01-12. Created: 2023-01-12. Last updated: 2023-01-12. Bibliographically approved.
Javid, A. M., Das, S., Skoglund, M. & Chatterjee, S. (2021). A Relu Dense Layer To Improve The Performance Of Neural Networks. In: 2021 IEEE International Conference On Acoustics, Speech And Signal Processing (ICASSP 2021). Paper presented at IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), JUN 06-11, 2021, held online (pp. 2810-2814). Institute of Electrical and Electronics Engineers (IEEE)
A Relu Dense Layer To Improve The Performance Of Neural Networks
2021 (English). In: 2021 IEEE International Conference On Acoustics, Speech And Signal Processing (ICASSP 2021), Institute of Electrical and Electronics Engineers (IEEE), 2021, p. 2810-2814. Conference paper, Published paper (Refereed)
Abstract [en]

We propose ReDense as a simple and low-complexity way to improve the performance of trained neural networks. We use a combination of random weights and the rectified linear unit (ReLU) activation function to add a ReLU dense (ReDense) layer to the trained neural network such that it can achieve a lower training loss. The lossless flow property (LFP) of ReLU is the key to achieving the lower training loss while keeping the generalization error small. ReDense does not suffer from the vanishing gradient problem during training due to having a shallow structure. We experimentally show that ReDense can improve the training and testing performance of various neural network architectures with different optimization losses and activation functions. Finally, we test ReDense on some of the state-of-the-art architectures and show the performance improvement on benchmark datasets.
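
A minimal PyTorch sketch of the idea follows, under the assumption that the appended layer keeps frozen random weights and only a new output layer is trained on top of the frozen base network. Layer sizes, the training loop, and the frozen-base choice are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

class ReDenseHead(nn.Module):
    """Append a frozen random-weight ReLU layer plus a trainable output layer
    to an already trained network. A minimal sketch of the idea; layer sizes
    and the frozen-base choice are illustrative."""
    def __init__(self, trained_model, feat_dim, hidden_dim, out_dim):
        super().__init__()
        self.base = trained_model
        for p in self.base.parameters():          # keep the trained network fixed
            p.requires_grad = False
        self.random_relu = nn.Linear(feat_dim, hidden_dim)
        for p in self.random_relu.parameters():   # random weights stay frozen
            p.requires_grad = False
        self.head = nn.Linear(hidden_dim, out_dim)  # only this layer is trained

    def forward(self, x):
        return self.head(torch.relu(self.random_relu(self.base(x))))

# Toy usage: refine a small "trained" regressor by training only the new head.
base = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
model = ReDenseHead(base, feat_dim=1, hidden_dim=32, out_dim=1)
opt = torch.optim.Adam(model.head.parameters(), lr=1e-2)
x, y = torch.randn(256, 4), torch.randn(256, 1)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print(float(loss))
```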

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2021
Keywords
Rectified linear unit, random weights, deep neural network
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-305410 (URN)
10.1109/ICASSP39728.2021.9414269 (DOI)
000704288403013
2-s2.0-85115078893 (Scopus ID)
Conference
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), JUN 06-11, 2021, held online
Note

Part of proceedings: ISBN 978-1-7281-7605-5

QC 20230118

Available from: 2021-12-01. Created: 2021-12-01. Last updated: 2023-01-18. Bibliographically approved.
Wisth, D., Camurri, M., Das, S. & Fallon, M. (2021). Unified Multi-Modal Landmark Tracking for Tightly Coupled Lidar-Visual-Inertial Odometry. IEEE Robotics and Automation Letters, 6(2), 1004-1011
Unified Multi-Modal Landmark Tracking for Tightly Coupled Lidar-Visual-Inertial Odometry
2021 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 6, no. 2, p. 1004-1011. Article in journal (Refereed). Published
Abstract [en]

We present an efficient multi-sensor odometry system for mobile platforms that jointly optimizes visual, lidar, and inertial information within a single integrated factor graph. This runs in real-time at full framerate using fixed lag smoothing. To perform such tight integration, a new method to extract 3D line and planar primitives from lidar point clouds is presented. This approach overcomes the suboptimality of typical frame-to-frame tracking methods by treating the primitives as landmarks and tracking them over multiple scans. True integration of lidar features with standard visual features and IMU is made possible using a subtle passive synchronization of lidar and camera frames. The lightweight formulation of the 3D features allows for real-time execution on a single CPU. Our proposed system has been tested on a variety of platforms and scenarios, including underground exploration with a legged robot and outdoor scanning with a dynamically moving handheld device, for a total duration of 96 min and 2.4 km traveled distance. In these test sequences, using only one exteroceptive sensor leads to failure due to either underconstrained geometry (affecting lidar) or textureless areas caused by aggressive lighting changes (affecting vision). In these conditions, our factor graph naturally uses the best information available from each sensor modality without any hard switches.
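
One ingredient described above is extracting planar primitives from lidar point clouds and tracking them as landmarks. The toy sketch below fits a single plane to a point cloud with RANSAC; the thresholds and data are made up, and the paper's primitive extraction (which also covers 3D lines and tracking over multiple scans) is considerably more elaborate.

```python
import numpy as np

def ransac_plane(points, n_iters=200, inlier_thresh=0.05, seed=0):
    """Fit one planar primitive (unit normal n, offset d with n·p + d ≈ 0) to a
    point cloud by RANSAC. A toy illustration of extracting planar landmarks."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                                   # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        inliers = np.abs(points @ n - n @ p0) < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, -n @ p0)
    return best_model, best_inliers

rng = np.random.default_rng(5)
ground = np.column_stack([rng.uniform(-5, 5, 500), rng.uniform(-5, 5, 500),
                          0.02 * rng.normal(size=500)])        # points near z = 0
clutter = rng.uniform(-5, 5, size=(100, 3))
(normal, d), inliers = ransac_plane(np.vstack([ground, clutter]))
print(np.round(normal, 2), int(inliers.sum()))                 # normal ≈ ±[0, 0, 1]
```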

Place, publisher, year, edition, pages
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2021
Keywords
Sensor fusion, visual-inertial SLAM, localization
National Category
Signal Processing
Identifiers
urn:nbn:se:kth:diva-291955 (URN)
10.1109/LRA.2021.3056380 (DOI)
000619380200008
2-s2.0-85100713314 (Scopus ID)
Note

QC 20210324

Available from: 2021-03-24. Created: 2021-03-24. Last updated: 2024-03-18. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-7528-1383
