kth.se Publications
S3PT: Scene Semantics and Structure Guided Clustering to Boost Self-Supervised Pre-Training for Autonomous Driving
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Robotics, Perception and Learning (RPL); Qualcomm Technologies International GmbH, Automated Driving; KTH Royal Institute of Technology, Sweden. ORCID iD: 0000-0002-3432-6151
Arriver Software AB and Linköping University, Sweden.
Qualcomm Technologies International GmbH, Automated Driving.
Qualcomm Technologies International GmbH, Automated Driving.
Show others and affiliations
2025 (English). In: Proceedings - 2025 IEEE Winter Conference on Applications of Computer Vision, WACV 2025, Institute of Electrical and Electronics Engineers (IEEE), 2025, p. 1660-1670. Conference paper, Published paper (Refereed).
Abstract [en]

Recent self-supervised clustering-based pre-training techniques like DINO and CrIBo have shown impressive results for downstream detection and segmentation tasks. However, real-world applications such as autonomous driving face challenges with imbalanced object class and size distributions and complex scene geometries. In this paper, we propose S3PT, a novel scene semantics and structure guided clustering method that provides more scene-consistent objectives for self-supervised training. Specifically, our contributions are threefold: First, we incorporate semantic distribution consistent clustering to encourage better representation of rare classes such as motorcycles or animals. Second, we introduce object diversity consistent spatial clustering to handle imbalanced and diverse object sizes, ranging from large background areas to small objects such as pedestrians and traffic signs. Third, we propose a depth-guided spatial clustering to regularize learning based on geometric information of the scene, thus further refining region separation at the feature level. Our learned representations significantly improve performance on downstream semantic segmentation and 3D object detection tasks on the nuScenes, nuImages, and Cityscapes datasets and show promising domain translation properties.
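The abstract's third contribution, depth-guided spatial clustering, can be illustrated with a minimal toy sketch. This is not the paper's implementation: the function name, the weights `w_depth` and `w_pos`, and the plain k-means routine below are all illustrative assumptions, showing only the general idea of letting per-patch depth and position influence cluster assignments alongside learned features.

```python
import numpy as np

def depth_guided_clustering(features, depth, coords, k=4,
                            w_depth=1.0, w_pos=0.5, iters=20, seed=0):
    """Toy k-means over patch features augmented with depth and position.

    Illustrative sketch only: w_depth / w_pos control how strongly scene
    depth and patch location shape the clusters, loosely mirroring the idea
    of depth-guided spatial clustering described in the abstract.
    """
    rng = np.random.default_rng(seed)
    # Per-patch descriptor: [feature | weighted depth | weighted (x, y)]
    x = np.hstack([features, w_depth * depth[:, None], w_pos * coords])
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        # Assign each patch to its nearest center in the augmented space
        dists = np.linalg.norm(x[:, None] - centers[None], axis=-1)
        assign = dists.argmin(axis=1)
        # Recompute centers from current assignments
        for j in range(k):
            if np.any(assign == j):
                centers[j] = x[assign == j].mean(axis=0)
    return assign

# Example: 100 patches with 8-dim features on a 10x10 grid
rng = np.random.default_rng(1)
feats = rng.normal(size=(100, 8))
depth = rng.uniform(1, 50, size=100)            # per-patch depth (toy values)
ys, xs = np.divmod(np.arange(100), 10)
coords = np.stack([xs, ys], axis=1).astype(float)
labels = depth_guided_clustering(feats, depth, coords, k=4)
print(labels.shape)  # (100,)
```

Increasing `w_depth` makes patches at similar depths cluster together even when their features differ, which is one plausible reading of how geometric information can regularize region separation.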

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025, p. 1660-1670.
Keywords [en]
3D object detection, autonomous driving, self-supervised learning, semantic segmentation, visual foundation models
National Category
Computer graphics and computer vision; Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-363208
DOI: 10.1109/WACV61041.2025.00169
Scopus ID: 2-s2.0-105003631689
OAI: oai:DiVA.org:kth-363208
DiVA, id: diva2:1956915
Conference
2025 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2025, Tucson, United States of America, Feb 28 2025 - Mar 4 2025
Note

Part of ISBN 9798331510831

QC 20250512

Available from: 2025-05-07. Created: 2025-05-07. Last updated: 2025-05-12. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text; Scopus

Authority records

Wozniak, Maciej K.
