Real-time Distributed Visual Feature Extraction from Video in Sensor Networks
2014 (English). In: Proceedings - IEEE International Conference on Distributed Computing in Sensor Systems, DCOSS 2014. Conference paper (Refereed).
Enabling visual sensor networks to perform visual analysis tasks in real-time is challenging due to the computational complexity of detecting and extracting visual features. A promising approach to address this challenge is to distribute the detection and the extraction of local features among the sensor nodes, in which case the time to complete the visual analysis of an image is a function of the number of features found and of the distribution of the features in the image. In this paper we formulate the minimization of the time needed to complete the distributed visual analysis for a video sequence subject to a mean average precision requirement as a stochastic optimization problem. We propose a solution based on two composite predictors that reconstruct randomly missing data, and use a quantile-based linear approximation of the feature distribution and time series analysis methods. The composite predictors allow us to compute an approximate optimal solution through linear programming. We use two surveillance videos to evaluate the proposed algorithms, and show that prediction is essential for controlling the completion time. The results show that the last value predictor together with regular quantile-based distribution approximation provide a low complexity solution with very good performance.
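As a rough illustration of the idea described in the abstract (not the authors' actual algorithm), the sketch below combines a last-value predictor with a quantile-based approximation of the horizontal feature distribution: the cut points that split the previous frame's features into equal shares are reused as the predicted cut points for the next frame, so each sensor node receives a roughly equal feature load. All function names and the toy data are hypothetical.

```python
# Hypothetical sketch: last-value prediction plus quantile-based cut
# points for distributing feature extraction across sensor nodes.
# Assumption: we only track the x-coordinates of detected features.
import numpy as np

def quantile_cut_points(feature_xs, num_nodes):
    """Approximate the horizontal feature distribution by quantiles and
    return the x-coordinates that split the image into num_nodes
    vertical slices with (approximately) equal feature counts."""
    qs = np.linspace(0, 1, num_nodes + 1)[1:-1]  # interior quantiles
    return np.quantile(feature_xs, qs)

def predict_cut_points(history, num_nodes):
    """Last-value predictor: reuse the cut points computed from the
    most recent frame as the prediction for the next frame."""
    return quantile_cut_points(history[-1], num_nodes)

# Toy usage: feature x-coordinates for two consecutive frames.
frames = [np.array([10, 20, 30, 200, 210, 400]),
          np.array([12, 22, 33, 205, 215, 410])]
cuts = predict_cut_points(frames, num_nodes=3)
print(cuts)  # two cut points partitioning the next frame into 3 slices
```

A more faithful implementation would, per the abstract, reconstruct randomly missing measurements with composite predictors and solve a linear program over the quantile-based approximation; the sketch only conveys the load-balancing intuition.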
Identifiers
URN: urn:nbn:se:kth:diva-156173
DOI: 10.1109/DCOSS.2014.30
ISI: 000361020100023
Scopus ID: 2-s2.0-84904420756
ISBN: 978-1-4799-4619-8
OAI: oai:DiVA.org:kth-156173
DiVA: diva2:765482
IEEE Intl. Conference on Distributed Computing in Sensor Systems (DCOSS)
QC 20150223. Available from: 2014-11-24. Created: 2014-11-24. Last updated: 2015-10-05. Bibliographically approved.