Toward a Robust Sensor Fusion Step for 3D Object Detection on Corrupted Data
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Robotics, Perception and Learning (RPL). ORCID: 0000-0002-3432-6151
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Robotics, Perception and Learning (RPL). ORCID: 0009-0004-4304-4588
Hamburg University of Technology, Institute of Technical Logistics, D-21073 Hamburg, Germany. ORCID: 0000-0002-7249-1203
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Robotics, Perception and Learning (RPL). ORCID: 0000-0002-1170-7162
2023 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 8, no. 11, pp. 7018-7025. Journal article (peer-reviewed). Published.
Abstract [en]

Multimodal sensor fusion methods for 3D object detection have been revolutionizing the autonomous driving research field. Nevertheless, most of these methods rely heavily on dense LiDAR data and accurately calibrated sensors, which is often not the case in real-world scenarios. Data from LiDAR and cameras often come misaligned due to miscalibration, decalibration, or differing sensor frequencies. Additionally, parts of the LiDAR data may be occluded or missing due to hardware malfunction or weather conditions. This work presents a novel fusion step that addresses data corruptions and makes sensor fusion for 3D object detection more robust. Through extensive experiments, we demonstrate that our method performs on par with state-of-the-art approaches on normal data and outperforms them on misaligned data.
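The misalignment the abstract refers to can be illustrated with a standard pinhole projection: even a small rotational error in the LiDAR-to-camera extrinsic calibration shifts projected LiDAR points by several pixels in the image. The sketch below uses illustrative intrinsics and a 1-degree yaw perturbation; these values are assumptions for demonstration and are not taken from the paper.

```python
import numpy as np

def project(points_lidar, T_cam_lidar, K):
    """Project 3D LiDAR points into the image plane.

    points_lidar: (N, 3) points in the LiDAR frame.
    T_cam_lidar:  (4, 4) extrinsic transform, LiDAR frame -> camera frame.
    K:            (3, 3) pinhole camera intrinsics.
    Returns (N, 2) pixel coordinates.
    """
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])  # homogeneous
    pts_cam = (T_cam_lidar @ pts_h.T)[:3]   # transform into the camera frame
    uv = (K @ pts_cam) / pts_cam[2]         # pinhole projection, divide by depth
    return uv[:2].T

# Illustrative intrinsics: 700 px focal length, principal point of a 1280x720 image.
K = np.array([[700.,   0., 640.],
              [  0., 700., 360.],
              [  0.,   0.,   1.]])

# Ideal extrinsics: LiDAR and camera frames coincide.
T_good = np.eye(4)

# Decalibrated extrinsics: a 1-degree yaw error in the rotation.
yaw = np.deg2rad(1.0)
T_bad = np.eye(4)
T_bad[:3, :3] = np.array([[ np.cos(yaw), 0., np.sin(yaw)],
                          [          0., 1.,          0.],
                          [-np.sin(yaw), 0., np.cos(yaw)]])

pts = np.array([[0., 0., 20.]])          # a point 20 m in front of the sensor
uv_good = project(pts, T_good, K)        # -> pixel (640, 360), image center
uv_bad = project(pts, T_bad, K)
offset = np.linalg.norm(uv_bad - uv_good)  # pixel shift caused by 1 deg of yaw
```

With these numbers the 1-degree yaw error displaces the projected point by roughly 12 pixels, enough to associate a LiDAR return with the wrong object in the image, which is the kind of corruption the proposed fusion step is designed to tolerate.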

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023. Vol. 8, no. 11, pp. 7018-7025
Keywords [en]
Object detection, segmentation and categorization, sensor fusion, deep learning for visual perception
Identifiers
URN: urn:nbn:se:kth:diva-344105
DOI: 10.1109/LRA.2023.3313924
ISI: 001157878900002
Scopus ID: 2-s2.0-85171591218
OAI: oai:DiVA.org:kth-344105
DiVA id: diva2:1841977
Note

QC 20240301

Available from: 2024-03-01 Created: 2024-03-01 Last updated: 2024-03-01 Bibliographically approved

Open Access in DiVA

Full text missing in DiVA

Other links

Publisher's full text; Scopus

Authors

Wozniak, Maciej K.; Kårefjärd, Viktor; Thiel, Marko; Jensfelt, Patric
