Gossip-based Resource Allocation for Green Computing in Large Clouds
Yanggratoke, Rerngvit: KTH, School of Electrical Engineering (EES), ACCESS Linnaeus Centre; Communication Networks. ORCID iD: 0000-0002-2680-9065
Wuhib, Fetahi: KTH, School of Electrical Engineering (EES), ACCESS Linnaeus Centre; Communication Networks.
Stadler, Rolf: KTH, School of Electrical Engineering (EES), Communication Networks; ACCESS Linnaeus Centre.
2011 (English). In: The 7th International Conference on Network and Service Management, IFIP, 2011. Conference paper (Refereed)
Abstract [en]

We address the problem of resource allocation in a large-scale cloud environment, which we formalize as that of dynamically optimizing a cloud configuration for green computing objectives under CPU and memory constraints. We propose a generic gossip protocol for resource allocation, which can be instantiated for specific objectives. We develop an instantiation of this generic protocol which aims at minimizing power consumption through server consolidation, while satisfying a changing load pattern. This protocol, called GRMP-Q, provides an efficient heuristic solution that performs well in most cases—in special cases it is optimal. Under overload, the protocol gives a fair allocation of CPU resources to clients.
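The record does not reproduce the protocol itself, but the consolidation idea it describes (randomized pairwise exchanges that concentrate load on fewer servers, subject to a capacity constraint) can be sketched roughly as follows. The function names and the pairing scheme here are hypothetical illustrations; the actual GRMP-Q protocol additionally handles memory constraints, changing demand, and fair allocation under overload.

```python
import random

def gossip_round(loads, capacity, rng):
    # One gossip round: servers are paired at random; within each pair,
    # load moves from the less-loaded server onto the more-loaded one,
    # as far as capacity allows. Emptied servers become candidates for
    # powering down, which is the green-computing objective.
    order = list(range(len(loads)))
    rng.shuffle(order)
    for i, j in zip(order[::2], order[1::2]):
        lo, hi = sorted((i, j), key=lambda k: loads[k])
        move = min(loads[lo], capacity - loads[hi])
        loads[lo] -= move
        loads[hi] += move

def consolidate(loads, capacity, rounds, seed=0):
    # Run repeated gossip rounds on a copy of the initial load vector.
    rng = random.Random(seed)
    loads = list(loads)
    for _ in range(rounds):
        gossip_round(loads, capacity, rng)
    return loads

loads = consolidate([30, 20, 10, 25, 15], capacity=100, rounds=20)
assert sum(loads) == 100  # total demand is conserved across rounds
```

Because each round only touches random pairs, per-server work is constant regardless of system size, which is the property behind the scalability claim above.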

Simulation results suggest that key performance metrics do not change with increasing system size, making the resource allocation process scalable to well above 100,000 servers. Generally, the effectiveness of the protocol in achieving its objective increases with increasing memory capacity in the servers.

Place, publisher, year, edition, pages
IFIP, 2011.
Keywords [en]
cloud computing, green computing, distributed management, power management, resource allocation, gossip protocols, server consolidation
National Category
Computer and Information Science
URN: urn:nbn:se:kth:diva-37883. Scopus ID: 2-s2.0-84855744790. ISBN: 9781457715884. OAI: diva2:435431
The 7th International Conference on Network and Service Management, Paris, France, 24-28 October 2011
ICT - The Next Generation

QC 20110818

Available from: 2011-08-18. Created: 2011-08-18. Last updated: 2016-04-11. Bibliographically approved
In thesis
1. Data-driven Performance Prediction and Resource Allocation for Cloud Services
2016 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Cloud services, which provide online entertainment, enterprise resource management, tax filing, etc., are becoming essential for consumers, businesses, and governments. The key functionalities of such services are provided by backend systems in data centers. This thesis focuses on three fundamental problems related to management of backend systems. We address these problems using data-driven approaches: triggering dynamic allocation by changes in the environment, obtaining configuration parameters from measurements, and learning from observations. 

The first problem relates to resource allocation for large clouds with potentially hundreds of thousands of machines and services. We developed and evaluated a generic gossip protocol for distributed resource allocation. Extensive simulation studies suggest that the quality of the allocation is independent of the system size for the management objectives considered.

The second problem focuses on performance modeling of a distributed key-value store; specifically, we study the Spotify backend for streaming music. We developed analytical models for system capacity under different data allocation policies and for the response time distribution. We evaluated the models by comparing model predictions with measurements from our lab testbed and from the Spotify operational environment, and found the prediction error to be below 12% for all investigated scenarios.
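The record does not state how the "below 12%" figure is defined; one plausible reading is relative error between a model prediction and a measurement, which would be computed as in this hypothetical snippet:

```python
def relative_error(predicted, measured):
    # Relative prediction error: |prediction - measurement| / measurement.
    # This is an assumed definition; the thesis may use a different metric.
    return abs(predicted - measured) / measured

# e.g. a predicted 95th-percentile response time of 45 ms against a
# measured 50 ms gives a relative error of 0.1, i.e. 10%
err = relative_error(45.0, 50.0)
```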

The third problem relates to real-time prediction of service metrics, which we address through statistical learning. Service metrics are learned from observing device and network statistics. We performed experiments on a server cluster running video streaming and key-value store services. We showed that feature set reduction significantly improves the prediction accuracy, while simultaneously reducing model computation time. Finally, we designed and implemented a real-time analytics engine, which produces model predictions through online learning.
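The specific models and features used in the thesis are not given in this record; as a minimal stand-in for an online-learning engine that predicts a service metric from device statistics, a least-mean-squares regressor updated one sample at a time could look like the sketch below. The feature names and the synthetic data stream are assumptions for illustration only.

```python
import random

class OnlineLMS:
    # Least-mean-squares regressor updated one sample at a time:
    # a simple form of online learning for streaming measurements.
    def __init__(self, n_features, lr=0.05):
        self.w = [0.0] * n_features
        self.lr = lr

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x))

    def update(self, x, y):
        # Gradient step on the squared error for this one sample.
        err = y - self.predict(x)
        for i, xi in enumerate(x):
            self.w[i] += self.lr * err * xi
        return err

# Synthetic stream: the service metric depends on two device statistics.
# Training only on the relevant statistics is the payoff of feature-set
# reduction: a smaller model and a cheaper update per sample.
rng = random.Random(1)
model = OnlineLMS(n_features=2)
for _ in range(2000):
    cpu, net = rng.random(), rng.random()
    latency = 2.0 * cpu + 1.0 * net  # hypothetical ground truth
    model.update([cpu, net], latency)
# the weights approach the true coefficients (2.0, 1.0)
```

An incremental model like this never needs the full history in memory, which is what makes real-time prediction on a live measurement stream feasible.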

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2016. 53 p.
TRITA-EE, ISSN 1653-5146; 2016:020
National Category
Communication Systems; Computer Systems; Telecommunications; Computer Engineering; Other Electrical Engineering, Electronic Engineering, Information Engineering
Research subject
Electrical Engineering
URN: urn:nbn:se:kth:diva-184601. ISBN: 978-91-7595-876-7
Public defence
2016-05-03, F3, Lindstedtsvägen 26, KTH Campus, Stockholm, 14:00 (English)
VINNOVA, 2013-03895

QC 20160411

Available from: 2016-04-11. Created: 2016-04-01. Last updated: 2016-05-30. Bibliographically approved

Open Access in DiVA

fulltext (FULLTEXT02.pdf, 590 kB, application/pdf)
By author/editor: Yanggratoke, Rerngvit; Wuhib, Fetahi; Stadler, Rolf
By organisation: ACCESS Linnaeus Centre; Communication Networks
