  • 1. Huan, Y.; Qin, Y.; You, Yantian (KTH); Zheng, Lirong (KTH, School of Information and Communication Technology (ICT), Electronics, Integrated devices and circuits; Fudan University, China); Zou, Zhuo (KTH; Fudan University, China).
    A multiplication reduction technique with near-zero approximation for embedded learning in IoT devices. In: International System on Chip Conference, IEEE Computer Society, 2017, p. 102-107. Conference paper (Refereed).
    Abstract [en]

    This paper presents a multiplication reduction technique based on near-zero approximation, enabling embedded learning in resource-constrained IoT devices. The intrinsic resilience of neural networks and the sparsity of their data are identified and exploited. Based on leading-zero counting and an adjustable threshold, intentional approximation is applied to eliminate near-zero multiplications. By setting the threshold on the multiplication result to 2⁻⁵ and employing ReLU as the neuron activation function, the sparsity of the CNN model reaches 75% with negligible loss in accuracy on the MNIST data set. A corresponding hardware implementation has been designed and simulated in a UMC 65 nm process. It achieves more than a 70% improvement in energy efficiency with only 0.37% area overhead on a 256-unit multiply-accumulate array.
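    The skip decision the abstract describes can be sketched in software: drop any product whose magnitude falls below the 2⁻⁵ threshold, with ReLU supplying many exactly-zero activations that are always skippable. This is a minimal illustrative model, not the paper's hardware design; the function and variable names are assumptions, and an exact-magnitude check stands in for the cheap leading-zero-count estimate used in the circuit.

    ```python
    import numpy as np

    THRESHOLD = 2.0 ** -5  # near-zero cutoff used in the paper's experiments

    def relu(x):
        return np.maximum(x, 0.0)

    def approx_mac(weights, activations, threshold=THRESHOLD):
        """Multiply-accumulate that skips products predicted to be near zero.

        Any product with |w * a| < threshold is approximated as 0, so the
        multiplication (and accumulation) can be skipped. In hardware the
        magnitude estimate comes from counting leading zeros of the
        fixed-point operands; here we model it with the exact product.
        """
        acc = 0.0
        skipped = 0
        for w, a in zip(weights, activations):
            if abs(w) * abs(a) < threshold:
                skipped += 1  # near-zero product: treat as 0, skip the multiply
                continue
            acc += w * a
        return acc, skipped

    # Toy example: ReLU zeroes roughly half of the activations, which is
    # what drives the high fraction of skippable multiplications.
    rng = np.random.default_rng(0)
    acts = relu(rng.normal(0.0, 0.2, size=256))
    wts = rng.normal(0.0, 0.2, size=256)

    exact = float(np.dot(wts, acts))
    approx, skipped = approx_mac(wts, acts)
    print(f"exact={exact:.4f} approx={approx:.4f} skipped={skipped}/256")
    ```

    Each skipped term contributes an error below 2⁻⁵, so over a 256-wide dot product the accumulated error stays bounded while most multiplications are avoided, which is the source of the reported energy savings.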
