Grain Graphs: OpenMP Performance Analysis Made Easy
KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS. ORCID iD: 0000-0003-3958-4659
(SICS Swedish ICT)
KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS. ORCID iD: 0000-0002-9637-2065
2016 (English). Conference paper (Refereed)
Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2016.
Keyword [en]
OpenMP, Performance Analysis, Parallel Programming
National Category
Computer Systems
Identifiers
URN: urn:nbn:se:kth:diva-179668
DOI: 10.1145/2851141.2851156
ScopusID: 2-s2.0-84963732767
ISBN: 978-1-4503-4092-2/16/03
OAI: oai:DiVA.org:kth-179668
DiVA: diva2:885513
Conference
21st ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPOPP'16)
Note

NV 201512

Available from: 2015-12-18. Created: 2015-12-18. Last updated: 2015-12-21. Bibliographically approved.
In thesis
1. Improving OpenMP Productivity with Data Locality Optimizations and High-resolution Performance Analysis
2016 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

The combination of high-performance parallel programming and multi-core processors is the dominant approach to meet the ever-increasing demand for computing performance today. The thesis is centered around OpenMP, a popular parallel programming API standard that enables programmers to get started quickly with writing parallel programs. However, in contrast to how quickly one can get started, writing high-performance OpenMP programs requires considerable effort and saps productivity.

Part of the reason for impeded productivity is OpenMP’s lack of abstractions and guidance for exploiting the strong architectural locality exhibited by NUMA systems and manycore processors. The thesis contributes data distribution abstractions that enable programmers to distribute data portably on NUMA systems and manycore processors without being aware of low-level system topology details. The data distribution abstractions are supported by the runtime system and leveraged by the second contribution of the thesis – an architecture-specific, locality-aware scheduling policy that reduces the data access latencies incurred by tasks, allowing programmers to obtain, with minimal effort, up to 69% better performance for scientific programs compared to state-of-the-art work-stealing scheduling.
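
The data distribution abstractions themselves are not reproduced in this record. As a rough, hedged illustration of the manual work they are meant to hide, the following C/OpenMP sketch shows the conventional first-touch approach to placing an array across NUMA nodes; the kernel compute() is hypothetical, and the static slice-to-thread mapping is exactly the kind of topology assumption the thesis's abstractions aim to make unnecessary.

    /* Hedged sketch: manual NUMA data distribution via first-touch page
       placement. With static scheduling, each thread initializes and later
       computes the same contiguous slice, so the OS places those memory
       pages on that thread's NUMA node. */
    #include <stdio.h>
    #include <stdlib.h>

    #define N (1L << 24)

    static double compute(long i) { return 0.5 * (double) i; }  /* hypothetical kernel */

    int main(void)
    {
        double *a = malloc(N * sizeof *a);

        #pragma omp parallel for schedule(static)
        for (long i = 0; i < N; i++)
            a[i] = 0.0;                 /* first touch: page lands near this thread */

        #pragma omp parallel for schedule(static)
        for (long i = 0; i < N; i++)
            a[i] = compute(i);          /* accesses are now mostly node-local */

        printf("a[N-1] = %f\n", a[N - 1]);
        free(a);
        return 0;
    }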

Another reason for reduced programmer productivity is the poor support offered by OpenMP performance analysis tools for visualizing, understanding, and resolving problems at the level of grains – task and parallel for-loop chunk instances. The thesis contributes a cost-effective and automatic method to extensively profile and visualize grains. Grain properties and hardware performance are profiled at event notifications from the runtime system with less than 2.5% overhead and visualized using a new method called the grain graph. The grain graph shows the program structure that unfolded during execution and highlights problems such as low parallelism, work inflation, and poor parallelization benefit directly at the grain level, with precise links to problem areas in the source code. The thesis demonstrates that grain graphs can quickly reveal performance problems in standard programs from SPEC OMP 2012, PARSEC 3.0, and the Barcelona OpenMP Tasks Suite (BOTS) that are difficult to detect and characterize in fine detail using existing tools. Grain profiles are also applied to study the input sensitivity and similarity of BOTS programs.
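
The thesis's own instrumentation interface is not detailed in this abstract. As a hedged sketch of the general event-driven approach – using the OMPT tool interface standardized in OpenMP 5.0, which is an assumption here rather than the mechanism the thesis used – a profiling tool can register a runtime callback and record a timestamp every time a task grain is scheduled:

    /* Hedged OMPT tool sketch: timestamps task-grain scheduling events.
       A real grain profiler would accumulate per-grain timings and
       hardware counters in the callback instead of printing. */
    #include <omp-tools.h>
    #include <omp.h>
    #include <stdio.h>

    static void on_task_schedule(ompt_data_t *prior_task_data,
                                 ompt_task_status_t prior_task_status,
                                 ompt_data_t *next_task_data)
    {
        printf("task switch at %f s\n", omp_get_wtime());
    }

    static int my_initialize(ompt_function_lookup_t lookup,
                             int initial_device_num, ompt_data_t *tool_data)
    {
        ompt_set_callback_t set_cb =
            (ompt_set_callback_t) lookup("ompt_set_callback");
        set_cb(ompt_callback_task_schedule, (ompt_callback_t) on_task_schedule);
        return 1;   /* keep the tool active */
    }

    static void my_finalize(ompt_data_t *tool_data) { }

    ompt_start_tool_result_t *ompt_start_tool(unsigned int omp_version,
                                              const char *runtime_version)
    {
        static ompt_start_tool_result_t result =
            { &my_initialize, &my_finalize, { 0 } };
        return &result;
    }

Compiled as a shared library and loaded via the OMP_TOOL_LIBRARIES environment variable, such a tool receives notifications from the runtime without any change to the application source.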

All thesis contributions are assembled into an iterative performance analysis and optimization workflow that enables programmers to achieve the desired performance systematically and more quickly than is possible with existing tools. This reduces pressure on experts and removes the need for tedious trial-and-error tuning, simplifying OpenMP performance analysis.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2016. 208 p.
Series
TRITA-ICT, 2016:1
Keyword
OpenMP, Performance Analysis, Scheduling, Locality Optimizations
National Category
Computer Systems
Identifiers
URN: urn:nbn:se:kth:diva-179670
ISBN: 978-91-7595-818-7
Public defence
2016-01-29, Sal C, Electrum, Isafjordsgatan 26, Kista, 09:00 (English)
Opponent
Supervisors
Note

QC 20151221

Available from: 2015-12-21. Created: 2015-12-18. Last updated: 2016-01-15. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text
Scopus

Search in DiVA

By author/editor
Muddukrishna, Ananya; Podobas, Artur; Brorsson, Mats
By organisation
Software and Computer systems, SCS
Computer Systems
