A Low-Power Arithmetic Element for Multi-Base Logarithmic Computation on Deep Neural Networks
KTH, School of Electrical Engineering and Computer Science (EECS).
2019 (English). In: International System on Chip Conference, IEEE Computer Society, 2019, p. 260-265. Conference paper, Published paper (Refereed)
Abstract [en]

Computational complexity and memory intensity are critical concerns when deploying deep convolutional neural network algorithms to embedded systems. Recent advances in logarithmic quantization have shown great potential for reducing the inference cost of neural network models. However, current base-2 logarithmic quantization suffers from an accuracy ceiling, and little work has studied hardware implementations of other bases. This paper presents a multi-base logarithmic quantization scheme for Deep Neural Networks (DNNs). The performance of AlexNet is studied with respect to different quantization resolutions. Base-√2 logarithmic quantization raises the ceiling of top-5 classification accuracy from 69.3% to 75.5% at 5-bit resolution. A segmented logarithmic quantization method that combines base 2 and base √2 is then proposed, improving the network's top-5 accuracy to 72.3% at 4-bit resolution. The corresponding arithmetic element hardware has been designed, supporting both base-√2 logarithmic quantization and segmented logarithmic quantization. Evaluated in a UMC 65 nm process, the proposed arithmetic element operating at 500 MHz and 1.2 V consumes as little as 120 μW. Compared with a 16-bit fixed-point multiplier, our design is 58.03% smaller in area and reduces energy by 73.74%.
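The core idea the abstract describes, rounding each weight to the nearest power of a chosen base, can be sketched as below. This is an illustrative reading of logarithmic quantization, not the paper's exact encoding; the function name, the symmetric exponent-clamping range, and the example weight value are assumptions for demonstration. The sketch shows why base √2 halves the gap between representable values compared with base 2:

```python
import math

def log_quantize(x, base, bits):
    """Quantize |x| to the nearest power of `base`, keeping the sign.

    Hedged sketch of logarithmic quantization; the paper's hardware
    encoding and exponent range may differ.
    """
    if x == 0:
        return 0.0
    sign = 1.0 if x > 0 else -1.0
    # Round the logarithm to the nearest integer exponent.
    e = round(math.log(abs(x), base))
    # Clamp the exponent to what `bits` can represent (illustrative range).
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    e = max(lo, min(hi, e))
    return sign * base ** e

# Base sqrt(2) doubles the exponent granularity relative to base 2,
# so some weights land much closer to their quantized value:
w = 0.35
q2 = log_quantize(w, 2.0, 5)           # nearest power of 2 -> 0.25
qs = log_quantize(w, math.sqrt(2), 5)  # nearest power of sqrt(2) -> ~0.354
```

In hardware, a power-of-2 weight turns the multiply into a shift; a base-√2 exponent needs only one extra step (a shift plus a constant √2 correction), which is what makes the arithmetic element cheap relative to a full multiplier.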

Place, publisher, year, edition, pages
IEEE Computer Society , 2019. p. 260-265
Keywords [en]
Embedded systems, Low power electronics, Neural networks, Programmable logic controllers, Convolutional neural network, Energy reduction, Fixed points, Hardware implementations, Logarithmic computation, Low power arithmetic, Neural network model, Quantization resolution, Deep neural networks
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:kth:diva-248272
DOI: 10.1109/SOCC.2018.8618560
ISI: 000462047000009
Scopus ID: 2-s2.0-85062221602
ISBN: 9781538614907 (print)
OAI: oai:DiVA.org:kth-248272
DiVA, id: diva2:1303935
Conference
31st IEEE International System on Chip Conference, SOCC 2018, 4 September 2018 through 7 September 2018
Note

QC 20190411

Available from: 2019-04-11. Created: 2019-04-11. Last updated: 2019-06-26. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text · Scopus · Conference proceedings

Authority records

Huan, Yuxiang
