Resource management for task-based parallel programs over a multi-kernel. BIAS: Barrelfish Inter-core Adaptive Scheduling
KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS (Multicore center). ORCID iD: 0000-0002-7860-6593
KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS. ORCID iD: 0000-0002-9637-2065
2012 (English). In: Proceedings of the 2012 Workshop on Runtime Environments, Systems, Layering and Virtualized Environments (RESoLVE '12), Association for Computing Machinery (ACM), 2012, pp. 32-36. Conference paper (Refereed)
Abstract [en]

To attack the problem of resource contention created by multiple parallel applications running simultaneously, we propose a space-sharing, two-level, adaptive scheduler for the Barrelfish operating system. The first level is system-wide, runs close to the OS kernel, and has knowledge of the available resources, while the second level, integrated into the application's runtime, is aware of the application's type and amount of parallelism. Feedback on efficiency from the second level to the first allows the latter to adaptively modify the allotment of cores (the domain), promoting space-sharing of resources while still allowing time-sharing when needed. To avoid excess inter-core communication, the system-level scheduler is designed as a distributed service, taking advantage of the message-passing nature of Barrelfish: the processor topology is partitioned so that each instance of the scheduler handles an appropriately sized subset of cores. Malleability is achieved by suspending worker threads; two different methodologies are introduced and explained, each suited to distinct programming models and applications. Preliminary results are promising and show minimal added overhead. In specific multiprogramming configurations, initial experiments showed significant performance improvements from avoiding contention.
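
The feedback loop sketched in the abstract can be outlined in a few lines of code. This is only an illustrative sketch: the function names, the efficiency formula, and the grow/shrink thresholds below are assumptions made for the example, not the actual BIAS or Barrelfish interfaces.

```cpp
#include <cstdio>

// Illustrative sketch of the two-level feedback loop: the application runtime
// reports an efficiency estimate, and the system-level scheduler adjusts the
// core allotment (domain). Names, formula, and thresholds are assumptions.

struct Allotment {
    int cores;      // cores currently granted to the application
    int max_cores;  // upper bound imposed by the system-level scheduler
};

// Second level (application runtime): estimate how well the current allotment
// was used, e.g. useful worker time divided by the total core time available.
double report_efficiency(double busy_core_seconds, double elapsed_seconds, int cores) {
    return busy_core_seconds / (elapsed_seconds * cores);
}

// First level (system-wide scheduler): adapt the allotment from the feedback.
void adapt_allotment(Allotment &a, double efficiency) {
    const double grow_threshold = 0.9;    // assumed policy threshold
    const double shrink_threshold = 0.5;  // assumed policy threshold
    if (efficiency > grow_threshold && a.cores < a.max_cores)
        a.cores += 1;   // the application keeps its cores busy: grant one more
    else if (efficiency < shrink_threshold && a.cores > 1)
        a.cores -= 1;   // cores are idling: release one for other applications
}

int main() {
    Allotment a{4, 16};
    // One feedback round: 3.9 core-seconds of useful work in 1 s on 4 cores.
    double eff = report_efficiency(3.9, 1.0, a.cores);
    adapt_allotment(a, eff);
    std::printf("efficiency %.2f -> allotment %d cores\n", eff, a.cores);
    return 0;
}
```

In this reading, the application-level runtime periodically reports how well it used its current domain, and the system-level instance responsible for that subset of cores nudges the allotment up or down, which is what enables space-sharing between competing applications.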

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2012, pp. 32-36.
Keyword [en]
Scheduling, parallel programming, multicore, manycore
National Category
Software Engineering
Research subject
URN: urn:nbn:se:kth:diva-107665
OAI: diva2:577041
RESoLVE '12, Second Workshop on Runtime Environments, Systems, Layering and Virtualized Environments, London, UK, March 3, 2012.
Swedish e‐Science Research Center

QC 20130116

Available from: 2013-01-16 Created: 2012-12-14 Last updated: 2013-09-10. Bibliographically approved
In thesis
1. Cooperative user- and system-level scheduling of task-centric parallel programs
2013 (English). Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

Emerging architecture designs include tens of processing cores on a single chip die; it is believed that the number of cores will reach the hundreds in not so many years from now. However, most common workloads expose only fluctuating parallelism, insufficient to utilize such systems. The combination of these issues suggests that large-scale systems will be either multiprogrammed or have their unneeded resources powered off. To achieve these features, workloads must be able to provide a metric of their parallelism which the system can use to dynamically adapt per-application resource allotments. Adaptive resource management requires scheduling abstractions to be split into two cooperating layers: a system layer that is aware of the availability of resources, and an application layer that can accurately and iteratively estimate the workload's true resource requirements. This thesis addresses these issues and provides a self-adapting work-stealing scheduling method that can achieve expected performance while conserving resources. The method is based on deterministic victim selection (DVS), which controls the concentration of the load among the worker threads. It allows the number of spawned but not yet processed tasks to be used as a metric for the requirements. Because this metric measures work to be executed in the future instead of past behavior, DVS is versatile enough to handle very irregular workloads.
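
The two ingredients named in the abstract, a deterministic victim order and the count of spawned-but-unprocessed tasks as the requirement metric, can be illustrated with a minimal sketch. The concrete victim mapping, data structures, and names below are assumptions for illustration only, not the thesis' actual DVS scheme or implementation.

```cpp
#include <cstdio>
#include <deque>
#include <vector>

// Illustrative sketch of deterministic victim selection (DVS) and of using the
// number of spawned-but-unprocessed tasks as the parallelism metric. The victim
// mapping and all names here are assumptions, not the thesis' actual scheme.

struct Task { int work; };

struct Worker {
    int id;
    std::deque<Task> tasks;  // spawned but not yet processed tasks
};

// Deterministic victim: every worker always tries the same neighbour first,
// so load stays concentrated on few workers when parallelism is low.
int deterministic_victim(int self, int nworkers) {
    return (self + nworkers - 1) % nworkers;  // assumed fixed steal order
}

// Parallelism metric: total tasks waiting across all workers; the runtime can
// report this to the system-level scheduler to grow or shrink the allotment.
int pending_tasks(const std::vector<Worker> &workers) {
    int total = 0;
    for (const Worker &w : workers) total += static_cast<int>(w.tasks.size());
    return total;
}

int main() {
    std::vector<Worker> workers{{0, {}}, {1, {}}, {2, {}}};
    workers[0].tasks.push_back({10});
    workers[0].tasks.push_back({20});

    // Worker 1 is idle: it steals from its deterministic victim (worker 0 here).
    int victim = deterministic_victim(1, static_cast<int>(workers.size()));
    if (!workers[victim].tasks.empty()) {
        Task stolen = workers[victim].tasks.front();  // steal the oldest task
        workers[victim].tasks.pop_front();
        std::printf("worker 1 stole task %d from worker %d\n", stolen.work, victim);
    }
    std::printf("pending tasks (requirement metric): %d\n", pending_tasks(workers));
    return 0;
}
```

The point of the deterministic order is that idle workers keep targeting the same victims, so tasks stay concentrated on few deques when parallelism is low, while the pending-task count gives the system layer a forward-looking signal for resizing the allotment.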

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2013. vi, 29 p.
Series: Trita-ICT-ECS AVH, ISSN 1653-6363; 13:15
Keyword [en]
parallel, workload, runtime, task, adaptive, resource management, load balancing, work-stealing
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Research subject
URN: urn:nbn:se:kth:diva-127708
ISBN: 978-91-7501-816-4
2013-09-27, Sal/Hall D, Forum, KTH-ICT, Isafjordsgatan 39, Kista, 12:10 (English)

QC 20130910

Available from: 2013-09-10 Created: 2013-09-04 Last updated: 2013-09-17. Bibliographically approved

Open Access in DiVA

resolve_2012.pdf (401 kB)
File information
File name: FULLTEXT01.pdf. File size: 401 kB. Checksum: SHA-512.
Type: fulltext. Mimetype: application/pdf.

Search in DiVA

By author/editor
Varisteas, Georgios; Brorsson, Mats
By organisation
Software and Computer systems, SCS
Software Engineering
