Multi-Cue Event Information Fusion for Pedestrian Detection With Neuromorphic Vision Sensors
Tongji University, College of Automotive Engineering, Shanghai, China; Technical University of Munich, Robotics, Artificial Intelligence and Real-Time Systems, Munich, Germany.
Hunan University, State Key Laboratory of Advanced Design and Manufacturing for Vehicle Body, Changsha, Hunan, China.
Tongji University, College of Automotive Engineering, Shanghai, China.
Tongji University, College of Automotive Engineering, Shanghai, China.
2019 (English). In: Frontiers in Neurorobotics, ISSN 1662-5218, Vol. 13, article id 10. Article in journal (Refereed). Published.
Abstract [en]

Neuromorphic vision sensors are bio-inspired cameras that naturally capture the dynamics of a scene with ultra-low latency, filtering out redundant information at low power consumption. Few works have addressed object detection with this sensor. In this work, we develop pedestrian detectors that unlock the potential of event data by leveraging multi-cue information and different fusion strategies. To make the best use of the event data, we introduce three event-stream encoding methods based on Frequency, Surface of Active Events (SAE), and Leaky Integrate-and-Fire (LIF). We further integrate them into state-of-the-art neural network architectures with two fusion approaches: channel-level fusion of the raw feature space and decision-level fusion of the probability assignments. We give a qualitative and quantitative explanation of why the different encoding methods were chosen for evaluating pedestrian detection and which one performs best. We demonstrate the advantages of decision-level fusion by leveraging multi-cue event information, and show that our approach performs well on a self-annotated event-based pedestrian dataset with 8,736 event frames. This work paves the way for more fascinating perception applications with neuromorphic vision sensors.
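
Illustrative sketch (not part of the record): the abstract names three event-stream encodings — Frequency, Surface of Active Events (SAE), and Leaky Integrate-and-Fire (LIF) — that turn an asynchronous event stream into frame-like inputs for a detector. A minimal Python sketch of how such cue frames could be computed is given below; the function encode_events, the parameters lif_tau and lif_threshold, and the (x, y, t, p) event format are assumptions for illustration, not the paper's exact formulation.

    import numpy as np

    def encode_events(events, width, height, t_start, t_end,
                      lif_tau=0.05, lif_threshold=1.0):
        """Encode an event stream into three single-channel cue frames.

        events: iterable of (x, y, t, p); t in seconds, polarity p in {-1, +1}.
        Returns (frequency, sae, lif) maps, each normalized to [0, 1].
        NOTE: illustrative assumption, not the paper's exact encoding.
        """
        freq = np.zeros((height, width))
        sae = np.full((height, width), t_start, dtype=float)
        potential = np.zeros((height, width))
        last_t = np.full((height, width), t_start, dtype=float)
        lif = np.zeros((height, width))

        for x, y, t, p in events:
            x, y = int(x), int(y)
            # Frequency cue: count events per pixel over the window.
            freq[y, x] += 1
            # SAE cue: keep only the most recent event timestamp per pixel.
            sae[y, x] = t
            # LIF cue: exponentially decay the membrane potential since the
            # pixel's last event, integrate the new event, and record a
            # "spike" whenever the potential crosses the threshold.
            potential[y, x] *= np.exp(-(t - last_t[y, x]) / lif_tau)
            potential[y, x] += abs(p)
            last_t[y, x] = t
            if potential[y, x] >= lif_threshold:
                lif[y, x] += 1
                potential[y, x] = 0.0

        # Normalize each cue to [0, 1] so it can serve as a network input.
        freq /= max(freq.max(), 1.0)
        sae = (sae - t_start) / (t_end - t_start)
        lif /= max(lif.max(), 1.0)
        return freq, sae, lif

    # Example: three synthetic events on a 4x4 sensor over a 100 ms window.
    events = [(1, 2, 0.010, 1), (1, 2, 0.030, -1), (3, 0, 0.090, 1)]
    freq, sae, lif = encode_events(events, width=4, height=4,
                                   t_start=0.0, t_end=0.100)
    # Channel-level fusion: stack the three cues as input channels.
    fused = np.stack([freq, sae, lif], axis=0)  # shape (3, 4, 4)

The channel-level fusion mentioned in the abstract would then amount to stacking the cue maps as input channels, as in the last line above; decision-level fusion would instead run a detector per cue and combine the resulting probability assignments.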

Place, publisher, year, edition, pages
Frontiers Media SA, 2019. Vol. 13, article id 10.
Keywords [en]
neuromorphic vision sensor, event-stream encoding, object detection, convolutional neural network, multi-cue event information fusion
National Category
Robotics
Identifiers
URN: urn:nbn:se:kth:diva-251207
DOI: 10.3389/fnbot.2019.00010
ISI: 000464528200001
PubMedID: 31001104
Scopus ID: 2-s2.0-85065577073
OAI: oai:DiVA.org:kth-251207
DiVA, id: diva2:1338742
Note

QC 20190724

Available from: 2019-07-24. Created: 2019-07-24. Last updated: 2019-07-24. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
PubMed
Scopus

Authority records

Conradt, Jörg
