kth.se Publications
1 - 10 of 10
  • 1.
    Das, Sandipan
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    State estimation with auto-calibrated sensor setup (2024). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Localization and mapping is one of the key aspects of driving autonomously in unstructured environments. Often such vehicles are equipped with multiple sensor modalities to create 360° sensing coverage and to add redundancy for handling sensor-dropout scenarios. As the vehicles operate in underground mining and dense urban environments, the Global Navigation Satellite System (GNSS) is often unreliable. Hence, to create a robust localization system, different sensor modalities like camera, lidar and IMU are used along with a GNSS solution. The system must handle sensor dropouts and work in real-time (~15 Hz), so that enough computation budget is left for other tasks like planning and control. Additionally, precise localization is needed to map the environment, which may later be used for re-localization of the autonomous vehicles as well. Finally, for all of these to work seamlessly, accurate calibration of the sensors is of utmost importance.

    In this PhD thesis, first, a robust system for state estimation is presented that fuses measurements from multiple lidars and inertial sensors with GNSS data. State estimation was performed in real-time, producing robust motion estimates in a global frame by fusing lidar and IMU signals with GNSS components using a factor graph framework. The proposed method handled signal loss with a novel synchronization and fusion mechanism. To validate the approach, extensive tests were carried out on data collected using Scania test vehicles (5 sequences for a total of ~7 km). An average improvement of 61% in relative translation error and 42% in rotational error is reported, compared to a state-of-the-art estimator fusing a single lidar/inertial sensor pair.

    Since precise calibration is needed for the localization and mapping tasks, this thesis also proposes methods for real-time calibration of the sensor setup. First, a method is proposed to calibrate sensors with non-overlapping fields of view. The calibration quality is verified by mapping known features in the environment. However, the verification process did not run in real-time, and no observability analysis was performed that could indicate the analytical traceability of the trajectory required for motion-based online calibration. Hence, a new method is proposed in which calibration and verification are performed in real-time by matching estimated sensor poses, together with an observability analysis. Both of these methods relied on estimating the sensor poses using the state estimator developed in our earlier works. However, state estimators have inherent drift and are computationally intensive. Thus, another novel method is developed in which the sensors can be calibrated in real-time without the need for any state estimation.

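The factor-graph fusion summarized above can be sketched in code. Below is a minimal GTSAM-based illustration in which lidar odometry enters as between-pose factors and GNSS as global position factors on each pose; the noise sigmas, the placeholder trajectory, and the use of gtsam.GPSFactor are illustrative assumptions, not the thesis implementation.

```python
# Minimal GTSAM sketch of fusing lidar odometry with GNSS observations in a
# factor graph. All measurements and noise values are placeholders.
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

odom_noise = gtsam.noiseModel.Diagonal.Sigmas(
    np.array([0.01, 0.01, 0.01, 0.05, 0.05, 0.05]))  # rot (rad), trans (m)
gnss_noise = gtsam.noiseModel.Diagonal.Sigmas(
    np.array([0.5, 0.5, 1.0]))                       # GNSS position sigma (m)

# Placeholder ground-truth trajectory driving straight along x.
poses = [gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(float(i), 0.0, 0.0))
         for i in range(5)]

# Anchor the first pose, then chain lidar odometry (between) factors and
# attach a GNSS position factor to every subsequent pose.
graph.add(gtsam.PriorFactorPose3(
    0, poses[0], gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-3))))
initial.insert(0, poses[0])
for k in range(1, len(poses)):
    delta = poses[k - 1].between(poses[k])           # lidar odometry measurement
    graph.add(gtsam.BetweenFactorPose3(k - 1, k, delta, odom_noise))
    graph.add(gtsam.GPSFactor(k, gtsam.Point3(float(k), 0.0, 0.0), gnss_noise))
    initial.insert(k, gtsam.Pose3())                 # deliberately poor guess

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(len(poses) - 1).translation())
```
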
  • 2.
    Das, Sandipan
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering. Scania, Sweden.
    Boberg, Bengt
    Scania, Sweden.
    Fallon, Maurice
    ORI, University of Oxford, UK.
    Chatterjee, Saikat
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    IMU-based Online Multi-lidar Calibration (2024). Manuscript (preprint) (Other academic)
    Abstract [en]

    Modern autonomous systems typically use several sensors for perception. For best performance, accurate and reliable extrinsic calibration is necessary. In this research, we propose a reliable technique for the extrinsic calibration of several lidars on a vehicle without the need for odometry estimation or fiducial markers. First, our method generates an initial guess of the extrinsics by matching the raw signals of IMUs co-located with each lidar. This initial guess is then used in ICP and point cloud feature matching, which refine and verify the estimate. Furthermore, we use observability criteria to choose a subset of the IMU measurements that have the highest mutual information, rather than comparing all the readings. We have successfully validated our methodology using data gathered from Scania test vehicles.

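The first step described above, an initial extrinsic guess from matching raw IMU signals, can be sketched as follows. Assuming time-synchronized gyroscope readings from two rigidly mounted IMUs, the relative rotation is the least-squares alignment of their angular-velocity vectors (Wahba's problem, solved here by SVD). The synthetic data and noise level are placeholders; the paper's ICP and feature-matching refinement steps are not shown.

```python
# Sketch: initial rotation extrinsic between two rigidly mounted IMUs from
# their angular-velocity readings (Wahba's problem via the Kabsch/SVD method).
import numpy as np

rng = np.random.default_rng(0)
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # unknown extrinsic rotation
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1.0                            # keep a proper rotation

omega_a = rng.normal(size=(500, 3))                 # gyro of IMU a (rad/s)
omega_b = omega_a @ R_true.T + 0.01 * rng.normal(size=(500, 3))  # gyro of IMU b

# Least-squares rotation with omega_b ≈ R @ omega_a.
H = omega_a.T @ omega_b
U, _, Vt = np.linalg.svd(H)
D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
R_est = Vt.T @ D @ U.T

cos_err = np.clip((np.trace(R_est.T @ R_true) - 1.0) / 2.0, -1.0, 1.0)
print("rotation error (deg):", np.degrees(np.arccos(cos_err)))
```
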
  • 3.
    Das, Sandipan
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Javid, Alireza M.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Borpatra Gohain, Prakash
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Eldar, Yonina C.
    Weizmann Institute of Science, Mathematics and Computer Science, Rehovot, Israel.
    Chatterjee, Saikat
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Neural Greedy Pursuit for Feature Selection (2022). In: 2022 International Joint Conference on Neural Networks (IJCNN), IEEE, 2022. Conference paper (Refereed)
    Abstract [en]

    We propose a greedy algorithm to select N important features among P input features for a non-linear prediction problem. The features are selected one by one sequentially, in an iterative loss-minimization procedure. We use neural networks as predictors in the algorithm to compute the loss and hence refer to our method as neural greedy pursuit (NGP). NGP is efficient in selecting N features when N << P, and it provides a notion of feature importance in descending order following the sequential selection procedure. We experimentally show that NGP provides better performance than several feature selection methods such as DeepLIFT and Drop-one-out loss. In addition, we experimentally show a phase-transition behavior in which perfect selection of all N features without false positives is possible when the training data size exceeds a threshold.
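
A minimal sketch of the greedy loop just described, assuming sklearn's MLPRegressor as the neural predictor and synthetic regression data; the paper's exact network, loss, and stopping rule may differ.

```python
# Greedy feature selection: at each step, try adding each unused feature,
# train a small neural predictor, and keep the feature with the lowest
# validation loss. Data and hyperparameters are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
P, N = 20, 3                                  # P candidates, select N
X = rng.normal(size=(600, P))
y = np.sin(X[:, 0]) + 0.5 * X[:, 3] ** 2 + X[:, 7] + 0.05 * rng.normal(size=600)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

selected: list[int] = []
for _ in range(N):
    best_j, best_loss = None, np.inf
    for j in range(P):
        if j in selected:
            continue
        cols = selected + [j]
        model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500,
                             random_state=0).fit(X_tr[:, cols], y_tr)
        loss = mean_squared_error(y_va, model.predict(X_va[:, cols]))
        if loss < best_loss:
            best_j, best_loss = j, loss
    selected.append(best_j)
    print(f"selected feature {best_j}, val MSE {best_loss:.4f}")

print("features in descending importance:", selected)
```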

  • 4.
    Das, Sandipan
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering. Scania CV AB, Södertälje, Sweden.
    Klinteberg, Ludvig af
    Scania CV AB, Södertälje, Sweden.
    Fallon, Maurice
    Oxford Robotics Institute, Oxford, UK.
    Chatterjee, Saikat
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Observability-Aware Online Multi-Lidar Extrinsic Calibration (2023). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, vol. 8, no. 5, pp. 2860-2867. Article in journal (Refereed)
    Abstract [en]

    Accurate and robust extrinsic calibration is necessary for deploying autonomous systems that need multiple sensors for perception. In this letter, we present a robust system for real-time extrinsic calibration of multiple lidars in the vehicle base frame without the need for any fiducial markers or features. We base our approach on matching absolute GNSS (Global Navigation Satellite System) and estimated lidar poses in real-time. Comparing rotation components makes the solution more robust than the traditional least-squares approach, which compares translation components only. Additionally, instead of comparing all corresponding poses, we select the poses carrying maximum mutual information based on our novel observability criteria. This allows us to identify a subset of the poses helpful for real-time calibration. We also provide stopping criteria for ensuring calibration completion. To validate our approach, extensive tests were carried out on data collected using Scania test vehicles (7 sequences for a total of ≈6.5 km). The results presented in this letter show that our approach is able to accurately determine the extrinsic calibration for various combinations of sensor setups.
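
The pose-selection idea can be sketched with a generic information-theoretic proxy: greedily pick the measurement whose Jacobian most increases the log-determinant of the accumulated Fisher information (D-optimality). The Jacobians below are synthetic, and this criterion is an illustrative stand-in for the paper's actual observability criteria.

```python
# Greedy information-gain selection of measurements for a 6-DoF calibration
# state. Each candidate contributes J_i^T J_i to the Fisher information.
import numpy as np

rng = np.random.default_rng(1)
jacobians = [rng.normal(size=(3, 6)) for _ in range(50)]  # per-measurement J_i

info = 1e-6 * np.eye(6)            # small prior keeps the matrix invertible
chosen: list[int] = []
for _ in range(10):
    def gain(i: int) -> float:
        J = jacobians[i]
        _, logdet = np.linalg.slogdet(info + J.T @ J)
        return logdet
    best = max((i for i in range(len(jacobians)) if i not in chosen), key=gain)
    chosen.append(best)
    J = jacobians[best]
    info = info + J.T @ J          # accumulate the selected information

print("selected measurement indices:", chosen)
```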

  • 5.
    Das, Sandipan
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering. Scania, Sweden.
    Mahabadi, Navid
    Scania, Sweden.
    Chatterjee, Saikat
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Fallon, Maurice
    Oxford Robotics Institute, UK.
    Multi-modal curb detection and filtering (2022). Conference paper (Other academic)
    Abstract [en]

    Reliable knowledge of road boundaries is critical for autonomous vehicle navigation. We propose a robust curb detection and filtering technique based on the fusion of camera semantics and dense lidar point clouds. The lidar point clouds are collected by fusing multiple lidars for robust feature detection. The camera semantics are based on a modified EfficientNet architecture trained with labeled data collected from onboard fisheye cameras. The point clouds are associated with the closest curb segment using L2-norm analysis after projecting them into the image space with the fisheye model projection. Next, the selected points are clustered using unsupervised density-based spatial clustering to detect different curb regions. As new curb points are detected in consecutive frames, they are associated with the existing curb clusters using temporal reachability constraints. If no reachability constraints are found, a new curb cluster is formed from these new points. This ensures that multiple curbs can be detected in road segments consisting of multiple lanes, provided they are in the sensors' field of view. Finally, Delaunay filtering is applied for outlier removal and its performance is compared to traditional RANSAC-based filtering. An objective evaluation of the proposed solution is done using a high-definition map containing ground-truth curb points obtained from a commercial map supplier. The proposed system has proven capable of detecting curbs of any orientation in complex urban road scenarios comprising straight roads, curved roads, and intersections with traffic islands.

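The clustering step, unsupervised density-based spatial clustering of curb points, can be sketched with sklearn's DBSCAN on synthetic 2D curb points. The eps and min_samples values are placeholders, and the semantic filtering, fisheye projection, and temporal association steps are not shown.

```python
# DBSCAN groups semantically-filtered curb points into separate curb
# segments; points not reachable from any dense region become outliers (-1).
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)
# Two synthetic curb lines (e.g., left and right road edges) with noise.
left = np.column_stack([np.linspace(0, 20, 200), np.full(200, -3.5)])
right = np.column_stack([np.linspace(0, 20, 200), np.full(200, 3.5)])
points = np.vstack([left, right]) + 0.05 * rng.normal(size=(400, 2))

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(points)
for lbl in sorted(set(labels)):
    tag = "outliers" if lbl == -1 else f"curb cluster {lbl}"
    print(tag, (labels == lbl).sum(), "points")
```
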
  • 6.
    Das, Sandipan
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering. Scania CV AB, Södertälje, Sweden.
    Mahabadi, Navid
    Scania CV AB, Södertälje, Sweden.
    Djikic, Addi
    Scania CV AB, Södertälje, Sweden.
    Nassir, Cesar
    Scania CV AB, Södertälje, Sweden.
    Chatterjee, Saikat
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Fallon, Maurice
    Oxford Robotics Institute, Oxford, UK.
    Extrinsic Calibration and Verification of Multiple Non-overlapping Field of View Lidar Sensors (2022). In: 2022 IEEE International Conference on Robotics and Automation (ICRA 2022), IEEE, 2022. Conference paper (Refereed)
    Abstract [en]

    We demonstrate a multi-lidar calibration framework for large mobile platforms that jointly calibrates the extrinsic parameters of non-overlapping Field-of-View (FoV) lidar sensors, without the need for any external calibration aid. The method starts by estimating the pose of each lidar in its corresponding sensor frame between subsequent timestamps. Since the pose estimates from the lidars are not necessarily synchronous, we first align the poses using a Dual Quaternion (DQ) based Screw Linear Interpolation. Afterward, a Hand-Eye calibration problem is solved using the DQ-based formulation to recover the extrinsics. Furthermore, we verify the extrinsics by matching chosen lidar semantic features, obtained by projecting the lidar data into the camera perspective after time alignment using vehicle kinematics. Experimental results on data collected from a Scania vehicle (~1 km sequence) demonstrate the ability of our approach to obtain better calibration parameters than those of the provided vehicle CAD model. This setup can also be scaled to any combination of multiple lidars.
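
The time-alignment step can be sketched as follows. The paper uses dual-quaternion screw linear interpolation (ScLERP); as a simpler stand-in, this sketch resamples one lidar's pose stream at another lidar's timestamps using rotation SLERP plus linear translation interpolation via SciPy. All timestamps and poses below are made-up.

```python
# Resample an asynchronous pose stream at query timestamps: SLERP for the
# rotations, component-wise linear interpolation for the translations.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

t_src = np.array([0.00, 0.11, 0.19, 0.31, 0.42])     # lidar A pose stamps (s)
rots = Rotation.from_euler("z", [0, 5, 9, 15, 21], degrees=True)
trans = np.column_stack([np.linspace(0, 2, 5), np.zeros(5), np.zeros(5)])

t_query = np.array([0.10, 0.20, 0.30, 0.40])          # lidar B stamps (s)
slerp = Slerp(t_src, rots)
rot_q = slerp(t_query)                                # interpolated rotations
trans_q = np.column_stack([np.interp(t_query, t_src, trans[:, k])
                           for k in range(3)])        # interpolated positions

for t, r, p in zip(t_query, rot_q, trans_q):
    print(f"t={t:.2f}s yaw={r.as_euler('zyx', degrees=True)[0]:5.2f} deg pos={p}")
```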

  • 7.
    Das, Sandipan
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Mahabadi, Navid
    Stockholm, Sweden.
    Fallon, Maurice
    Oxford Robotics Institute University of Oxford, Oxford, United Kingdom.
    Chatterjee, Saikat
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    M-LIO: Multi-lidar, multi-IMU odometry with sensor dropout tolerance (2023). In: IV 2023 - IEEE Intelligent Vehicles Symposium, Proceedings, IEEE, 2023. Conference paper (Refereed)
    Abstract [en]

    We present a robust system for state estimation that fuses measurements from multiple lidars and inertial sensors with GNSS data. To initialize the method, we use the prior GNSS pose information. We then perform motion estimation in real-time, which produces robust motion estimates in a global frame by fusing lidar and IMU signals with GNSS translation components using a factor graph framework. We also propose methods to account for signal loss with a novel synchronization and fusion mechanism. To validate our approach, extensive tests were carried out on data collected using Scania test vehicles (5 sequences for a total of ≈7 km). From our evaluations, we show an average improvement of 61% in relative translation error and 42% in rotational error compared to a state-of-the-art estimator fusing a single lidar/inertial sensor pair, in sensor dropout scenarios.
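
One aspect of the dropout tolerance can be sketched as simple freshness bookkeeping: at each fusion step, only sensors with recent measurements contribute, so the estimator degrades gracefully instead of failing. The class, timeout values, and the fuse() stub below are illustrative assumptions, not the paper's actual mechanism.

```python
# Track per-sensor measurement freshness and fuse whatever subset is alive.
from dataclasses import dataclass

@dataclass
class SensorStream:
    name: str
    timeout_s: float
    last_stamp: float = -1.0

    def alive(self, now: float) -> bool:
        return self.last_stamp >= 0 and now - self.last_stamp < self.timeout_s

streams = [SensorStream("lidar_front", 0.2), SensorStream("lidar_rear", 0.2),
           SensorStream("imu", 0.05), SensorStream("gnss", 1.0)]

def on_measurement(stream: SensorStream, stamp: float) -> None:
    stream.last_stamp = stamp

def fuse(now: float) -> None:
    active = [s.name for s in streams if s.alive(now)]
    # In the real system, factors are only added for active sensors; here we
    # just report which modalities would enter the factor graph.
    print(f"t={now:.2f}s fusing: {active or ['none -> hold last state']}")

on_measurement(streams[2], 0.00)    # imu
on_measurement(streams[0], 0.01)    # lidar_front
fuse(0.05)                          # rear lidar and gnss already stale
fuse(0.50)                          # imu timed out too -> dropout handling
```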

  • 8.
    Errando-Herranz, Carlos
    et al.
    KTH, School of Electrical Engineering (EES), Micro and Nanosystems.
    Das, Sandipan
    KTH, School of Electrical Engineering (EES), Micro and Nanosystems.
    Gylfason, Kristinn
    KTH, School of Electrical Engineering (EES), Micro and Nanosystems.
    Suspended polarization beam splitter on silicon-on-insulator (2018). In: Optics Express, E-ISSN 1094-4087, vol. 26, no. 3, pp. 2675-2681. Article in journal (Refereed)
    Abstract [en]

    Polarization handling in suspended silicon photonics has the potential to enable new applications in fields such as optomechanics, photonic microelectromechanical systems, and mid-infrared photonics. In this work, we experimentally demonstrate a suspended polarization beam splitter on a silicon-on-insulator waveguide platform, based on an asymmetric directional coupler. Our device presents polarization extinction ratios above 10 and 15 dB, and insertion losses below 5 and 1 dB, for TM and TE polarized input, respectively, across a 40 nm wavelength range at 1550 nm, with a device length below 8 µm. These results make our suspended polarization beam splitter a promising building block for future systems based on polarization diversity suspended photonics.

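For readers unfamiliar with the dB figures, the quoted extinction ratios and insertion losses are power ratios on a logarithmic scale. The example powers below are made-up values chosen only to be consistent with the paper's TE bounds.

```python
# Extinction ratio and insertion loss as power ratios in dB. Input/output
# power values are illustrative, not measurements from the paper.
import math

def db(ratio: float) -> float:
    return 10.0 * math.log10(ratio)

p_in = 1.0          # input power (arbitrary units)
p_te_pass = 0.85    # TE power reaching the TE output port
p_te_leak = 0.02    # TE power leaking to the TM output port

il_te = -db(p_te_pass / p_in)        # insertion loss in dB
per_te = db(p_te_pass / p_te_leak)   # polarization extinction ratio in dB
print(f"TE insertion loss: {il_te:.2f} dB (paper: below 1 dB)")
print(f"TE extinction ratio: {per_te:.2f} dB (paper: above 15 dB)")
```
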
  • 9.
    Javid, Alireza M.
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Das, Sandipan
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Skoglund, Mikael
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Chatterjee, Saikat
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    A ReLU Dense Layer to Improve the Performance of Neural Networks (2021). In: 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021), IEEE, 2021, pp. 2810-2814. Conference paper (Refereed)
    Abstract [en]

    We propose ReDense as a simple and low-complexity way to improve the performance of trained neural networks. We use a combination of random weights and the rectified linear unit (ReLU) activation function to add a ReLU dense (ReDense) layer to a trained neural network such that it can achieve a lower training loss. The lossless flow property (LFP) of ReLU is the key to achieving the lower training loss while keeping the generalization error small. ReDense does not suffer from the vanishing gradient problem during training due to its shallow structure. We experimentally show that ReDense can improve the training and testing performance of various neural network architectures with different optimization losses and activation functions. Finally, we test ReDense on some state-of-the-art architectures and show the performance improvement on benchmark datasets.
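
A minimal PyTorch sketch of the idea as stated in the abstract: append a ReLU dense layer with fixed random weights to a trained network, then train only a new linear readout to lower the training loss. Stacking [W; -W] in the random layer reflects the lossless flow property mentioned above, since ReLU(Wx) - ReLU(-Wx) = Wx recovers the input. Sizes, data, and training details are placeholders.

```python
# ReDense sketch: frozen trained base -> fixed random ReLU layer -> trainable
# linear readout. Only the readout is optimized.
import torch
import torch.nn as nn

torch.manual_seed(0)
X, y = torch.randn(256, 10), torch.randn(256, 1)

# Stand-in for an already-trained network (in practice, train it first).
base = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
for p in base.parameters():
    p.requires_grad_(False)

# Random ReDense layer: [W; -W] keeps information through the ReLU (LFP).
W = torch.randn(32, 1)
random_layer = nn.Linear(1, 64, bias=False)
with torch.no_grad():
    random_layer.weight.copy_(torch.cat([W, -W], dim=0))
for p in random_layer.parameters():
    p.requires_grad_(False)

readout = nn.Linear(64, 1)           # the only trainable part
redense = nn.Sequential(base, random_layer, nn.ReLU(), readout)

opt = torch.optim.Adam(readout.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
before = loss_fn(base(X), y).item()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(redense(X), y)
    loss.backward()
    opt.step()
print(f"train MSE: base {before:.4f} -> ReDense {loss.item():.4f}")
```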

  • 10.
    Wisth, David
    et al.
    Camurri, Marco
    Das, Sandipan
    KTH.
    Fallon, Maurice
    Unified Multi-Modal Landmark Tracking for Tightly Coupled Lidar-Visual-Inertial Odometry (2021). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, vol. 6, no. 2, pp. 1004-1011. Article in journal (Refereed)
    Abstract [en]

    We present an efficient multi-sensor odometry system for mobile platforms that jointly optimizes visual, lidar, and inertial information within a single integrated factor graph. This runs in real-time at full framerate using fixed lag smoothing. To perform such tight integration, a new method to extract 3D line and planar primitives from lidar point clouds is presented. This approach overcomes the suboptimality of typical frame-to-frame tracking methods by treating the primitives as landmarks and tracking them over multiple scans. True integration of lidar features with standard visual features and IMU is made possible using a subtle passive synchronization of lidar and camera frames. The lightweight formulation of the 3D features allows for real-time execution on a single CPU. Our proposed system has been tested on a variety of platforms and scenarios, including underground exploration with a legged robot and outdoor scanning with a dynamically moving handheld device, for a total duration of 96 min and 2.4 km traveled distance. In these test sequences, using only one exteroceptive sensor leads to failure due to either underconstrained geometry (affecting lidar) or textureless areas caused by aggressive lighting changes (affecting vision). In these conditions, our factor graph naturally uses the best information available from each sensor modality without any hard switches.
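
The planar-primitive extraction can be sketched with a basic RANSAC plane fit on a synthetic cloud; the paper's actual line and plane extraction and its landmark tracking are more involved, and the thresholds and data here are placeholders.

```python
# RANSAC plane extraction from a point cloud: sample 3 points, count inliers
# by point-to-plane distance, keep the best model, refine its normal by SVD.
import numpy as np

rng = np.random.default_rng(3)
plane_pts = np.column_stack([rng.uniform(-5, 5, 400), rng.uniform(-5, 5, 400),
                             0.02 * rng.normal(size=400)])   # noisy ground plane
clutter = rng.uniform(-5, 5, size=(100, 3))                  # non-planar points
cloud = np.vstack([plane_pts, clutter])

best_inliers = np.array([], dtype=int)
for _ in range(200):
    sample = cloud[rng.choice(len(cloud), 3, replace=False)]
    normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
    norm = np.linalg.norm(normal)
    if norm < 1e-9:
        continue                                   # degenerate (collinear) sample
    normal /= norm
    dist = np.abs((cloud - sample[0]) @ normal)    # point-to-plane distance
    inliers = np.nonzero(dist < 0.05)[0]
    if len(inliers) > len(best_inliers):
        best_inliers = inliers

# Least-squares refinement: normal is the smallest right singular vector.
centered = cloud[best_inliers] - cloud[best_inliers].mean(axis=0)
n_fit = np.linalg.svd(centered)[2][-1]
print(f"{len(best_inliers)} inliers, refined normal ~ {np.round(n_fit, 3)}")
```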
