Designing compact convolutional neural network for embedded stereo vision systems
2018 (English). In: Proceedings - 2018 IEEE 12th International Symposium on Embedded Multicore/Many-Core Systems-on-Chip, MCSoC 2018, Institute of Electrical and Electronics Engineers (IEEE), 2018, p. 244-251, article id 8540240. Conference paper, Published paper (Refereed).
Abstract [en]

Autonomous systems are used in a wide range of domains, from household appliances to robot-assisted surgery and self-driving cars. Stereo vision cameras are arguably the most flexible sensing modality in these systems, since they can extract depth, luminance, color, and shape information. However, stereo-vision-based applications suffer from large image sizes and high computational complexity, driving the system toward higher power consumption. To tackle these challenges, in the first step the GIMME2 stereo vision system [1] is employed; GIMME2 is a high-throughput, cost-efficient FPGA-based embedded stereo vision system. In the next step, we present a framework for designing an optimized Deep Convolutional Neural Network (DCNN) for time-constrained applications and/or platforms with a limited resource budget. Our framework automatically generates a highly robust DCNN architecture for image data received from stereo vision cameras. It takes advantage of a multi-objective evolutionary optimization approach to design a near-optimal network architecture with respect to both the accuracy and network-size objectives. Unlike recent works that aim only to generate a highly accurate network, we also treat network size as an objective in order to build a highly compact architecture. After designing a robust network, our framework maps the generated network onto a multi/many-core heterogeneous System-on-Chip (SoC). In addition, we have integrated our framework into the GIMME2 processing pipeline so that it can also estimate the distance of detected objects. The network generated by our framework offers up to a 24x compression rate while losing only 5% accuracy compared to the best result on the CIFAR-10 dataset.
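The multi-objective evolutionary search described above selects architectures that trade off accuracy against network size. The core selection criterion in such approaches is Pareto dominance. Below is a minimal, hypothetical sketch of that criterion (the actual framework's encoding, mutation, and evaluation steps are not specified in the abstract, so the candidate tuples here are illustrative only):

```python
# Each hypothetical candidate architecture is summarized as a pair
# (accuracy, num_params): higher accuracy and fewer parameters are
# both desirable, so the two objectives conflict.

def dominates(a, b):
    """True if candidate a Pareto-dominates b: a is at least as good on
    both objectives and strictly better on at least one."""
    acc_a, size_a = a
    acc_b, size_b = b
    no_worse = acc_a >= acc_b and size_a <= size_b
    strictly_better = acc_a > acc_b or size_a < size_b
    return no_worse and strictly_better

def pareto_front(population):
    """Keep only candidates not dominated by any other candidate;
    these form the accuracy/size trade-off front the search retains."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

# Toy population of (accuracy, parameter count) pairs.
candidates = [(0.92, 5_000_000), (0.90, 1_200_000),
              (0.87, 200_000), (0.85, 3_000_000)]
front = pareto_front(candidates)
```

In this toy population, (0.85, 3_000_000) is dominated by (0.90, 1_200_000) (lower accuracy and more parameters) and is dropped; the remaining three candidates each offer a distinct accuracy/size trade-off. A full evolutionary loop would repeatedly mutate architectures, evaluate them, and reselect the front.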

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018. p. 244-251, article id 8540240
Keywords [en]
Deep Convolutional Neural Network, Neural Network Architecture Search, Neural Processing Unit, Stereo Vision Systems
National Category
Computer Engineering
Identifiers
URN: urn:nbn:se:kth:diva-241416
DOI: 10.1109/MCSoC2018.2018.00049
Scopus ID: 2-s2.0-85059750226
ISBN: 9781538666890 (print)
OAI: oai:DiVA.org:kth-241416
DiVA id: diva2:1280996
Conference
12th IEEE International Symposium on Embedded Multicore/Many-Core Systems-on-Chip, MCSoC 2018; Hanoi; Viet Nam; 12 September 2018 through 14 September 2018
Note

QC 20190121

Available from: 2019-01-21. Created: 2019-01-21. Last updated: 2019-01-21. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records BETA

Troubitsyna, Elena

Organisation
School of Electrical Engineering and Computer Science (EECS), Computer Engineering
