Toward a Robust Sensor Fusion Step for 3D Object Detection on Corrupted Data
Wozniak, Maciej K.: KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Robotics, Perception and Learning (RPL). ORCID iD: 0000-0002-3432-6151
Kårefjärd, Viktor: KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Robotics, Perception and Learning (RPL). ORCID iD: 0009-0004-4304-4588
Thiel, Marko: Hamburg University of Technology, Institute of Technical Logistics, D-21073 Hamburg, Germany. ORCID iD: 0000-0002-7249-1203
Jensfelt, Patric: KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Robotics, Perception and Learning (RPL). ORCID iD: 0000-0002-1170-7162
2023 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 8, no. 11, pp. 7018-7025. Article in journal (Refereed). Published.
Abstract [en]

Multimodal sensor fusion methods for 3D object detection have been revolutionizing the autonomous driving research field. Nevertheless, most of these methods rely heavily on dense LiDAR data and accurately calibrated sensors, which are often not available in real-world scenarios. LiDAR and camera data often arrive misaligned due to miscalibration, decalibration, or differing sensor frequencies. Additionally, parts of the LiDAR data may be occluded or missing due to hardware malfunction or weather conditions. This work presents a novel fusion step that addresses such data corruptions and makes sensor fusion for 3D object detection more robust. Through extensive experiments, we demonstrate that our method performs on par with state-of-the-art approaches on normal data and outperforms them on misaligned data.
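
The misalignment the abstract refers to arises from how LiDAR points are projected into the camera image through an extrinsic calibration. The minimal sketch below is not the paper's fusion step; the intrinsics, extrinsics, and the one-degree decalibration are illustrative assumptions. It shows how even a small rotation error in the extrinsic transform shifts projected points by many pixels, which is enough to pair LiDAR depth with the wrong image features during fusion.

```python
# Illustrative sketch, NOT the paper's method: why a small extrinsic
# decalibration misaligns LiDAR points projected into the camera image.
# All calibration values below are assumptions chosen for the demo.
import numpy as np

def project_to_image(points_lidar, T_cam_lidar, K):
    """Project Nx3 LiDAR points to pixel coordinates via a pinhole camera."""
    n = points_lidar.shape[0]
    homog = np.hstack([points_lidar, np.ones((n, 1))])   # Nx4 homogeneous
    cam = (T_cam_lidar @ homog.T).T[:, :3]               # points in camera frame
    pix = (K @ cam.T).T                                  # apply intrinsics
    return pix[:, :2] / pix[:, 2:3]                      # perspective divide

def rotation_about_y(theta):
    """Rotation about the camera y-axis (models a small yaw decalibration)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

# Assumed extrinsics: LiDAR (x-forward, y-left, z-up) to camera
# (x-right, y-down, z-forward), no translation for simplicity.
R_cam_lidar = np.array([[0.0, -1.0, 0.0],
                        [0.0, 0.0, -1.0],
                        [1.0, 0.0, 0.0]])
T = np.eye(4)
T[:3, :3] = R_cam_lidar

# Assumed pinhole intrinsics: 720 px focal length, 1280x720 image.
K = np.array([[720.0, 0.0, 640.0],
              [0.0, 720.0, 360.0],
              [0.0, 0.0, 1.0]])

# One LiDAR point 10 m straight ahead, one offset to the side.
pts = np.array([[10.0, 0.0, 0.0],
                [10.0, 2.0, 0.5]])

# Decalibrated extrinsics: 1 degree of extra rotation about the camera y-axis.
T_bad = T.copy()
T_bad[:3, :3] = rotation_about_y(np.deg2rad(1.0)) @ R_cam_lidar

clean = project_to_image(pts, T, K)
shifted = project_to_image(pts, T_bad, K)
print("pixel shift per point:", np.linalg.norm(shifted - clean, axis=1))
# ~12.6 px for the forward point: the projected LiDAR depth no longer
# lands on the image pixels that actually observe it.
```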

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023. Vol. 8, no. 11, pp. 7018-7025.
Keywords [en]
Object detection, segmentation and categorization, sensor fusion, deep learning for visual perception
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:kth:diva-344105
DOI: 10.1109/LRA.2023.3313924
ISI: 001157878900002
Scopus ID: 2-s2.0-85171591218
OAI: oai:DiVA.org:kth-344105
DiVA id: diva2:1841977
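
The DOI above can be resolved to citation data programmatically via DOI.org content negotiation, a documented CrossRef/DataCite feature. A minimal sketch follows; it assumes the third-party requests package is installed and that the registrar serves BibTeX for this DOI, which CrossRef-registered IEEE DOIs normally do.

```python
# Hedged sketch: fetch a BibTeX citation for this record's DOI using
# DOI.org content negotiation. Requires the "requests" package.
import requests

DOI = "10.1109/LRA.2023.3313924"  # DOI from the Identifiers section above

# Ask the resolver for BibTeX; "application/vnd.citationstyles.csl+json"
# is another commonly supported media type (CSL JSON).
resp = requests.get(
    f"https://doi.org/{DOI}",
    headers={"Accept": "application/x-bibtex"},
    timeout=30,
)
resp.raise_for_status()
print(resp.text)  # BibTeX entry for the article
```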
Note

QC 20240301

Available from: 2024-03-01. Created: 2024-03-01. Last updated: 2024-03-01. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Wozniak, Maciej K.; Kårefjärd, Viktor; Jensfelt, Patric
