SeeFar: Vehicle Speed Estimation and Flow Analysis from a Moving UAV
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0001-6037-1661
KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Transport Planning. ORCID iD: 0000-0001-5526-4511
University of Bristol, Bristol, England.
University of Modena and Reggio Emilia, Modena, Italy.
2022 (English). In: Image Analysis and Processing, ICIAP 2022, Part III / [ed] Sclaroff, S.; Distante, C.; Leo, M.; Farinella, G. M.; Tombari, F., Springer Nature, 2022, Vol. 13233, p. 278-289. Conference paper, Published paper (Refereed)
Abstract [en]

Visual perception from drones has recently been widely investigated for Intelligent Traffic Monitoring Systems (ITMS). In this paper, we introduce SeeFar, which performs vehicle speed estimation and traffic flow analysis from a moving drone based on YOLOv5 and DeepSORT. SeeFar differs from previous work in three key ways: the speed estimation and flow analysis components are integrated into a unified framework; our method of predicting vehicle speed imposes the fewest constraints while maintaining high accuracy; and our flow analyser is direction-aware and outlier-aware. Specifically, we design the speed estimator using only the camera imaging geometry, where the transformation between world space and image space is carried out via the variable Ground Sampling Distance. Moreover, because previous papers do not evaluate their speed estimators at scale owing to the difficulty of obtaining ground truth, we propose a simple yet efficient approach to estimate the true speeds of vehicles from the known size of road signs. We evaluate SeeFar on ten of our own videos containing 929 vehicle samples. Experiments on these sequences demonstrate the effectiveness of SeeFar, which achieves 98.0% accuracy in speed estimation and 99.1% accuracy in traffic volume prediction.
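The speed estimator described in the abstract rests on the Ground Sampling Distance (GSD), which maps one image pixel to metres on the ground and so converts pixel displacement between frames into a metric speed. The sketch below illustrates that idea only; the camera parameters, function names, and drone-motion compensation are illustrative assumptions, not the paper's actual pipeline (which obtains the pixel tracks from YOLOv5 detections linked by DeepSORT).

```python
# Illustrative sketch of GSD-based speed estimation (not the paper's code).
# GSD for a nadir-looking camera: (sensor width * altitude) / (focal length * image width),
# giving metres of ground per image pixel.

import math

def ground_sampling_distance(sensor_width_mm: float, altitude_m: float,
                             focal_length_mm: float, image_width_px: int) -> float:
    """Metres on the ground covered by one image pixel."""
    return (sensor_width_mm * altitude_m) / (focal_length_mm * image_width_px)

def estimate_speed_kmh(track_px, fps: float, gsd: float,
                       drone_velocity_mps=(0.0, 0.0)) -> float:
    """Vehicle speed from two consecutive bounding-box centres (x, y) in pixels.

    Because pixel displacement is measured in the moving image frame, the
    drone's own ground velocity is added back as a simple compensation term
    (an assumption of this sketch, not the paper's exact formulation).
    """
    (x0, y0), (x1, y1) = track_px
    dt = 1.0 / fps
    dx = (x1 - x0) * gsd + drone_velocity_mps[0] * dt  # ground displacement, metres
    dy = (y1 - y0) * gsd + drone_velocity_mps[1] * dt
    speed_mps = math.hypot(dx, dy) / dt
    return speed_mps * 3.6  # m/s -> km/h

# Hypothetical camera: 13.2 mm sensor width, 50 m altitude, 8.8 mm focal
# length, 3840 px wide frames at 30 fps; a hovering drone (zero velocity).
gsd = ground_sampling_distance(13.2, 50.0, 8.8, 3840)        # ~0.0195 m/pixel
speed = estimate_speed_kmh([(1000.0, 500.0), (1010.0, 500.0)],
                           fps=30.0, gsd=gsd)                 # ~21.1 km/h
```

Because altitude enters the GSD directly, a drone flying at varying height yields a *variable* GSD, which is why the abstract stresses recomputing it rather than fixing a single scale, and why an independent check against objects of known size (road signs) is useful for validating the recovered speeds.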

Place, publisher, year, edition, pages
Springer Nature, 2022. Vol. 13233, p. 278-289
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 13233
Keywords [en]
Visual speed estimation, Traffic monitoring system
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:kth:diva-321257
DOI: 10.1007/978-3-031-06433-3_24
ISI: 000870308100024
Scopus ID: 2-s2.0-85131150502
OAI: oai:DiVA.org:kth-321257
DiVA, id: diva2:1710263
Conference
21st International Conference on Image Analysis and Processing (ICIAP), MAY 23-27, 2022, Lecce, Italy
Note

QC 20221111

Part of proceedings: ISBN 978-3-031-06433-3; 978-3-031-06432-6

Available from: 2022-11-11. Created: 2022-11-11. Last updated: 2022-11-11. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Ning, Mang; Ma, Xiaoliang
