Locality-aware Task Scheduling and Data Distribution for OpenMP Programs on NUMA Systems and Manycore Processors
KTH, School of Information and Communication Technology (ICT), Electronic Systems. ORCID iD: 0000-0003-3958-4659
SICS Swedish ICT AB.
KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS. SICS Swedish ICT AB. ORCID iD: 0000-0002-9637-2065
2015 (English). In: Scientific Programming, ISSN 1058-9244, E-ISSN 1875-919X, article 981759. Article in journal (Refereed). Published.
Abstract [en]

Performance degradation due to nonuniform data access latencies has worsened on NUMA systems and can now be felt on-chip in manycore processors. Distributing data across NUMA nodes and on manycore processors is necessary to reduce the impact of nonuniform latencies. However, techniques for distributing data are error-prone and fragile and require low-level architectural knowledge. Existing task scheduling policies favor quick load balancing at the expense of locality and ignore NUMA node access latencies while scheduling. Locality-aware scheduling, in conjunction with or as a replacement for existing scheduling, is necessary to minimize NUMA effects and sustain performance. We present a data distribution and locality-aware scheduling technique for task-based OpenMP programs executing on NUMA systems and manycore processors. Our technique relieves the programmer from thinking about NUMA architecture details by delegating data distribution to the runtime system, and it uses task data dependence information to guide the scheduling of OpenMP tasks to reduce data stall times. We demonstrate our technique on a four-socket AMD Opteron machine with eight NUMA nodes and on the TILEPro64 processor, and we find that data distribution and locality-aware task scheduling improve performance by up to 69% for scientific benchmarks compared to default policies, while providing an architecture-oblivious approach for programmers.

Place, publisher, year, edition, pages
Hindawi Publishing Corporation, 2015. Article 981759.
National Category
Computer Engineering
URN: urn:nbn:se:kth:diva-166580
DOI: 10.1155/2015/981759
ISI: 000364899300001
Scopus ID: 2-s2.0-84947272497
OAI: diva2:811372

QC 20150615

Available from: 2015-05-11. Created: 2015-05-11. Last updated: 2015-12-21. Bibliographically approved.
In thesis
1. Improving OpenMP Productivity with Data Locality Optimizations and High-resolution Performance Analysis
2016 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

The combination of high-performance parallel programming and multi-core processors is the dominant approach to meet the ever-increasing demand for computing performance today. The thesis is centered around OpenMP, a popular parallel programming API standard that enables programmers to quickly get started with writing parallel programs. In contrast to this quick start, however, writing high-performance OpenMP programs requires high effort and saps productivity.

Part of the reason for impeded productivity is OpenMP's lack of abstractions and guidance to exploit the strong architectural locality exhibited by NUMA systems and manycore processors. The thesis contributes data distribution abstractions that enable programmers to distribute data portably on NUMA systems and manycore processors without being aware of low-level system topology details. Data distribution abstractions are supported by the runtime system and leveraged by the second contribution of the thesis: an architecture-specific locality-aware scheduling policy that reduces data access latencies incurred by tasks, allowing programmers to obtain, with minimal effort, up to 69% better performance for scientific programs compared to state-of-the-art work-stealing scheduling.

Another reason for reduced programmer productivity is the poor support extended by OpenMP performance analysis tools to visualize, understand, and resolve problems at the level of grains: task and parallel for-loop chunk instances. The thesis contributes a cost-effective and automatic method to extensively profile and visualize grains. Grain properties and hardware performance are profiled at event notifications from the runtime system with less than 2.5% overhead and visualized using a new method called the grain graph. The grain graph shows the program structure that unfolded during execution and highlights problems such as low parallelism, work inflation, and poor parallelization benefit directly at the grain level, with precise links to problem areas in source code. The thesis demonstrates that grain graphs can quickly reveal performance problems that are difficult to detect and characterize in fine detail using existing tools, in standard programs from SPEC OMP 2012, Parsec 3.0, and the Barcelona OpenMP Tasks Suite (BOTS). Grain profiles are also applied to study the input sensitivity and similarity of BOTS programs.

All thesis contributions are assembled into an iterative performance analysis and optimization workflow that enables programmers to achieve desired performance systematically and more quickly than is possible using existing tools. This reduces pressure on experts and removes the need for tedious trial-and-error tuning, simplifying OpenMP performance analysis.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2016. 208 p.
TRITA-ICT, 2016:1
Keywords
OpenMP, Performance Analysis, Scheduling, Locality Optimizations
National Category
Computer Systems
URN: urn:nbn:se:kth:diva-179670
ISBN: 978-91-7595-818-7
Public defence
2016-01-29, Sal C, Electrum, Isafjordsgatan 26, Kista, 09:00 (English)

QC 20151221

Available from: 2015-12-21. Created: 2015-12-18. Last updated: 2016-01-15. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text; Scopus

Search in DiVA

By author/editor
Muddukrishna, Ananya; Brorsson, Mats
By organisation
Electronic Systems; Software and Computer systems, SCS
In the same journal
Scientific Programming
