Cache Access Fairness in 3D Mesh-Based NUCA
Natl Univ Def Technol, Coll Comp, Changsha 410073, Hunan, Peoples R China.
KTH, School of Electrical Engineering and Computer Science (EECS), Electronics, Electronic and embedded systems. Natl Univ Def Technol, Coll Comp, Changsha 410073, Hunan, Peoples R China.
KTH, School of Electrical Engineering and Computer Science (EECS), Electronics, Electronic and embedded systems. ORCID iD: 0000-0003-0061-3475
Natl Univ Def Technol, Coll Comp, Changsha 410073, Hunan, Peoples R China.
2018 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 6, p. 42984-42996. Article in journal (Refereed). Published.
Abstract [en]

Given the increase in cache capacity over the past few decades, cache access efficiency has come to play a critical role in determining system performance. To ensure efficient utilization of the cache resources, non-uniform cache architecture (NUCA) has been proposed to allow for a large capacity and a short access latency. With the support of networks-on-chip (NoC), NUCA is often employed to organize the last level cache. However, this approach also hurts cache access fairness, which denotes the degree of non-uniformity among cache access latencies. This drop in fairness can result in an increased number of cache accesses with excessively high latency, which becomes a bottleneck in system performance. This paper investigates cache access fairness in the context of NoC-based 3-D chip architecture and provides new insights into 3-D architecture design. We propose fair-NUCA (F-NUCA), a co-design scheme intended to optimize cache access fairness. In F-NUCA, we strive to improve fairness by equalizing cache access latencies. To achieve this goal, the memory mapping and the channel width are both redistributed non-uniformly, thereby equalizing the non-contention and contention latencies, respectively. The experimental results reveal that F-NUCA can effectively improve cache access fairness. When F-NUCA is compared with traditional static NUCA in simulations with PARSEC benchmarks, the average reductions in average latency and latency standard deviation are 4.64%/9.38% for a 4 x 4 x 2 mesh network and 6.31%/13.51% for a 4 x 4 x 4 mesh network. In addition, system throughput improves by 4.0%/6.4% for the two mesh network scales, respectively.
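
To make the mapping idea concrete, the following sketch (plain Python, not code from the paper) illustrates how a distance-aware, non-uniform address-to-bank share on a small 3-D mesh can lower the expected hop latency relative to the equal-share mapping of static NUCA. The mesh size, the hop-count latency model, and the inverse-distance weighting are illustrative assumptions; the channel-width redistribution that F-NUCA uses to equalize contention latency is not modeled here.

from itertools import product

X, Y, Z = 4, 4, 2                    # assumed 4 x 4 x 2 mesh: one core and one LLC bank per node
nodes = list(product(range(X), range(Y), range(Z)))

def hops(a, b):
    # Manhattan hop count between two mesh nodes (dimension-order routing assumed)
    return sum(abs(p - q) for p, q in zip(a, b))

# Average hop distance from all cores to each bank location
avg_dist = {b: sum(hops(c, b) for c in nodes) / len(nodes) for b in nodes}

# Static NUCA baseline: every bank owns an equal slice of the address space
uniform_share = {b: 1.0 / len(nodes) for b in nodes}

# Distance-aware mapping (illustrative): a bank's slice of the address space is
# inversely proportional to its average distance, so remote banks serve fewer accesses
inv = {b: 1.0 / avg_dist[b] for b in nodes}
total = sum(inv.values())
skewed_share = {b: w / total for b, w in inv.items()}

def expected_hops(share):
    # Expected hop latency of a uniformly random access under a given address-to-bank share
    return sum(share[b] * avg_dist[b] for b in nodes)

print("equal-share mapping, expected hops:", round(expected_hops(uniform_share), 3))
print("distance-aware mapping, expected hops:", round(expected_hops(skewed_share), 3))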

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018. Vol. 6, p. 42984-42996
Keywords [en]
3D chip architecture, cache memory, memory architecture, memory mapping, multiprocessor interconnection networks, networks-on-chip, non-uniform cache architecture
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-240191
DOI: 10.1109/ACCESS.2018.2862633
ISI: 000443905300001
Scopus ID: 2-s2.0-85050975554
OAI: oai:DiVA.org:kth-240191
DiVA, id: diva2:1272426
Note

QC 20181219

Available from: 2018-12-19. Created: 2018-12-19. Last updated: 2018-12-19. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text, Scopus

Authority records

Chen, Xiaowen; Lu, Zhonghai
