A NoC-Based Spatial DNN Inference Accelerator With Memory-Friendly Dataflow
2023 (English). In: IEEE Design & Test, ISSN 2168-2356, E-ISSN 2168-2364, Vol. 40, no. 6, p. 39-50. Article in journal (Refereed). Published.
Abstract [en]
This article addresses the challenges of excessive storage overhead and the absence of sparsity-aware design in Network-on-Chip (NoC)-based spatial deep neural network accelerators. The authors present a prototype chip that outperforms existing accelerators in both energy and area efficiency, demonstrated on TSMC 28-nm process technology.
—Mahdi Nikdast, Colorado State University, USA
—Miquel Moreto, Barcelona Supercomputing Center, Spain
—Masoumeh (Azin) Ebrahimi, KTH Royal Institute of Technology, Sweden
—Sujay Deb, IIIT Delhi, India
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023. Vol. 40, no. 6, p. 39-50
Keywords [en]
Frequency modulation, Computer architecture, System-on-chip, Artificial neural networks, Memory management, Hardware, Random access memory, Network on Chip (NoC), deep neural network accelerator, scalable architecture, memory-friendly dataflow, activation sparsity
National Category
Computer Engineering
Identifiers
URN: urn:nbn:se:kth:diva-340689
DOI: 10.1109/MDAT.2023.3310199
ISI: 001098095900005
Scopus ID: 2-s2.0-85169685475
OAI: oai:DiVA.org:kth-340689
DiVA, id: diva2:1818838
Conference
International Symposium on Networks-on-Chip (NOCS), September 21-22, 2023, Hamburg, Germany
Note
QC 20231212
2023-12-12. Bibliographically approved