1 - 13 of 13
  • 1.
    Liu, Ying
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Towards Elastic High Performance Distributed Storage Systems in the Cloud (2015). Licentiate thesis, comprehensive summary (Other academic)
  • 2.
    Liu, Ying
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Towards Elastic High-Performance Geo-Distributed Storage in the Cloud (2016). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    In this thesis, we have presented techniques and algorithms to reduce request latency of distributed storage services that are deployed geographically. In addition, we have proposed and designed elasticity controllers to maintain predictable performance of distributed storage systems under dynamic workloads and platform uncertainties.

    Firstly, we have proposed a lease-based data consistency algorithm that allows a distributed storage system to serve read-dominant workloads efficiently at a global scale. The leasing algorithm allows replicas with valid leases to serve read requests locally. As a result, most read requests are served with little latency. Then, we have investigated the efficiency of quorum-based data consistency algorithms when deployed globally. We have proposed the MeteorShower framework, which is based on replicated logs and loosely synchronized clocks, to augment quorum-based data consistency algorithms. As a result, the quorum-based data consistency algorithms no longer need to query remote replicas for updates, which significantly reduces request latency. Based on similar insights, we have built a transaction framework, Catenae, for geo-distributed data stores. It employs replicated logs to distribute transactions and aggregate their execution results. This allows Catenae to commit a serializable read-write transaction with only a single inter-DC RTT delay in most cases.

    We also examine and control the factors that cause performance degradation when scaling a distributed storage system. First, we have proposed BwMan, a model-based network bandwidth manager that alleviates the performance degradation caused by data migration activities. Then, we have systematically modeled the impact of data migration. Using this model, we have built an elasticity controller, ProRenaTa, which combines proactive and reactive control to achieve better control accuracy. ProRenaTa is able to calculate the best possible scaling plan to resize a distributed storage system under the constraints of meeting scaling deadlines, reducing latency SLO violations, and minimizing VM provisioning cost. Consequently, ProRenaTa yields much higher resource utilization and fewer latency SLO violations compared to state-of-the-art approaches. Based on ProRenaTa, we have built an elasticity controller named Hubbub-Scale, whose control model generalizes from data migration overhead to the performance interference caused by multi-tenancy in the Cloud.
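
    The abstract only summarizes the mechanisms; as a rough Python illustration of the replicated-log transaction idea attributed to Catenae, the sketch below ships a transaction once to every data center, executes it at each replica in log order, and aggregates the results, so the coordinator pays roughly one inter-DC round trip. All names (DataCenter, commit, the operation format) are hypothetical assumptions, not taken from the thesis.

```python
# Hypothetical sketch of a replicated-log transaction flow (Catenae-inspired).
# Each data center appends the transaction to its log, executes it in log
# order, and returns its result; the coordinator aggregates the results,
# paying roughly one inter-DC round trip.

from dataclasses import dataclass, field

@dataclass
class DataCenter:
    name: str
    store: dict = field(default_factory=dict)
    log: list = field(default_factory=list)

    def apply(self, tx):
        """Execute a transaction (list of (op, key, value)) in log order."""
        self.log.append(tx)
        result = {}
        for op, key, value in tx:
            if op == "write":
                self.store[key] = value
            elif op == "read":
                result[key] = self.store.get(key)
        return result

def commit(tx, local, remotes):
    """Ship tx once to all DCs and aggregate results: ~1 inter-DC RTT."""
    results = [local.apply(tx)]                   # local execution, no WAN delay
    results += [dc.apply(tx) for dc in remotes]   # one round trip, in parallel
    assert all(r == results[0] for r in results), "replicas diverged"
    return results[0]

dcs = [DataCenter("eu"), DataCenter("us"), DataCenter("asia")]
commit([("write", "x", 1)], dcs[0], dcs[1:])
print(commit([("read", "x", None)], dcs[1], [dcs[0], dcs[2]]))  # {'x': 1}
```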

  • 3.
    Liu, Ying
    et al.
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Guan, Xi
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Vlassov, Vladimir
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Haridi, Seif
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    MeteorShower: Minimizing Request Latency for Majority Quorum-Based Data Consistency Algorithms in Multiple Data Centers (2017). In: 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 57-67, article id 7979955. Conference paper (Refereed)
    Abstract [en]

    With the increasing popularity of serving and storing data in multiple data centers, we investigate the efficiency of majority quorum-based data consistency algorithms in this scenario. Because of the failure-prone nature of distributed storage systems, majority quorum-based data consistency algorithms have become one of the most widely adopted approaches. In this paper, we propose the MeteorShower framework, which provides a fault-tolerant read/write key-value storage service across multiple data centers with sequential consistency guarantees. A major feature is that most read operations are executed locally within a single data center, which lowers read latency from hundreds of milliseconds to tens of milliseconds. The data consistency algorithm in MeteorShower augments majority quorum-based algorithms, so it keeps all the desirable properties of majority quorums, such as fault tolerance and balanced load. An implementation of MeteorShower on top of Cassandra is deployed and evaluated in multiple data centers on the Google Cloud Platform. The evaluations show that MeteorShower can consistently serve read requests without incurring the communication delays among replicas maintained in multiple data centers. As a result, we are able to reduce the latency of read requests from hundreds of milliseconds to tens of milliseconds while achieving the same latency on write requests and the same fault tolerance guarantees. Thus, MeteorShower is optimized for read-intensive workloads.
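
    As a hedged illustration of the idea the abstract describes, the following Python sketch lets each replica ship its write log to peers in the background, so a majority-quorum read can be assembled from the local replica plus locally cached peer logs, using loosely synchronized timestamps to pick the newest version. All class and method names are invented for the sketch.

```python
# Hypothetical sketch of the MeteorShower idea: replicas continuously ship
# their write logs to peers, so a majority-quorum read can be answered from
# the local replica plus locally cached peer logs, instead of a synchronous
# cross-data-center query.

import time

class Replica:
    def __init__(self, name):
        self.name = name
        self.store = {}        # key -> (value, timestamp)
        self.peer_logs = {}    # peer name -> {key -> (value, timestamp)}

    def write(self, key, value):
        self.store[key] = (value, time.time())   # loosely synchronized clock

    def receive_log(self, peer, log):
        """Background propagation of a peer's recent writes."""
        self.peer_logs.setdefault(peer, {}).update(log)

    def read(self, key, quorum=2):
        """Quorum read using local data plus cached peer logs only."""
        versions = []
        if key in self.store:
            versions.append(self.store[key])
        for log in self.peer_logs.values():
            if key in log:
                versions.append(log[key])
        if len(versions) < quorum:
            raise RuntimeError("too few cached replicas; fall back to remote query")
        return max(versions, key=lambda v: v[1])[0]  # newest timestamp wins

r1, r2 = Replica("dc1"), Replica("dc2")
r2.write("x", 42)
r1.receive_log("dc2", dict(r2.store))  # asynchronous log shipping
r1.write("x", 43)
print(r1.read("x"))  # 43, served without a synchronous remote round trip
```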

  • 4.
    Liu, Ying
    et al.
    Gureya, Daharewa
    KTH, School of Information and Communication Technology (ICT).
    Al-Shishtawy, Ahmad
    Vlassov, Vladimir
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    OnlineElastMan: Self-Trained Proactive Elasticity Manager for Cloud-Based Storage Services (2016). In: 2016 International Conference on Cloud and Autonomic Computing (ICCAC), Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 50-59. Conference paper (Refereed)
    Abstract [en]

    The pay-as-you-go pricing model and the illusion of unlimited resources in the Cloud motivate the idea of provisioning services elastically. Elastic provisioning of services allocates/deallocates resources dynamically in response to changes in the workload. It minimizes the service provisioning cost while maintaining the desired service level objectives (SLOs). Model-predictive control is often used in building such elasticity controllers that dynamically provision resources. However, such controllers need to be trained, either online or offline, before they can make accurate scaling decisions. The training process involves a tedious and significant amount of work as well as some expertise, especially when the model has many dimensions and the training granularity is fine, which has proved essential for building an accurate elasticity controller. In this paper, we present OnlineElastMan, a self-trained proactive elasticity manager for cloud-based storage services. It automatically trains and evolves itself while serving the workload. Experiments using OnlineElastMan with Cassandra indicate that OnlineElastMan continuously improves its provisioning accuracy, i.e., minimizing provisioning cost and SLO violations, under various workload patterns.
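
    A minimal sketch of the self-training loop described above, assuming a simple linear workload-to-capacity model (the paper's multi-dimensional models are not reproduced here); all names and constants are illustrative.

```python
# Hypothetical sketch of a self-training elasticity loop in the spirit of
# OnlineElastMan: the controller keeps refining a workload -> capacity model
# from its own observations while serving traffic, so no offline training
# pass is needed.

import math

class OnlineElasticityManager:
    def __init__(self, per_vm_throughput_guess=100.0, lr=0.05):
        self.per_vm_throughput = per_vm_throughput_guess  # req/s one VM handles
        self.lr = lr

    def observe(self, requests_per_s, vms, slo_met):
        """Online update: shrink the estimate after an SLO violation,
        drift toward the measured per-VM throughput otherwise."""
        measured = requests_per_s / max(vms, 1)
        if not slo_met:
            measured *= 0.9  # VMs delivered less useful capacity than assumed
        self.per_vm_throughput += self.lr * (measured - self.per_vm_throughput)

    def plan(self, predicted_requests_per_s):
        """Proactive decision from the current (self-trained) model."""
        return max(1, math.ceil(predicted_requests_per_s / self.per_vm_throughput))

mgr = OnlineElasticityManager()
for load, vms, ok in [(900, 8, True), (1200, 10, False), (1100, 12, True)]:
    mgr.observe(load, vms, ok)
print(mgr.plan(1500))  # VMs to provision for the predicted workload
```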

  • 5.
    Liu, Ying
    et al.
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Li, Xiaxi
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Vlassov, Vladimir
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    GlobLease: A Globally Consistent and Elastic Storage System using Leases (2014). In: The 20th IEEE International Conference on Parallel and Distributed Systems (ICPADS 2014), IEEE conference proceedings, 2014, p. 701-709. Conference paper (Refereed)
    Abstract [en]

    This paper presents GlobLease, an elastic, globally distributed, and consistent key-value store. It is organized as multiple distributed hash tables (DHTs) storing replicated data and namespaces. Across DHTs, data lookups and accesses are processed with respect to the locality of DHT deployments. Leases enable GlobLease to provide fast and consistent read access at a global scale with reduced global communication.
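
    A minimal sketch of the lease mechanism as the abstract describes it: a replica holding a valid lease answers reads locally, and an expired lease forces a renewal before serving. The class and lease length are assumptions for illustration, not GlobLease's actual interface.

```python
# Hypothetical sketch of lease-based reads: a replica with a valid lease on a
# key may answer reads locally; once the lease expires it must renew (and a
# writer must wait for or revoke outstanding leases), keeping reads
# consistent without a cross-site round trip on every request.

import time

LEASE_SECONDS = 5.0  # illustrative lease length

class LeasedReplica:
    def __init__(self):
        self.store = {}
        self.lease_expiry = {}  # key -> absolute expiry time

    def grant_lease(self, key):
        self.lease_expiry[key] = time.time() + LEASE_SECONDS

    def read(self, key):
        if self.lease_expiry.get(key, 0) > time.time():
            return self.store.get(key)  # fast local read under a valid lease
        raise RuntimeError("lease expired: renew from the master replica")

replica = LeasedReplica()
replica.store["x"] = "v1"
replica.grant_lease("x")
print(replica.read("x"))  # served locally while the lease is valid
```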

  • 6.
    Liu, Ying
    et al.
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS. Université Catholique de Louvain, Belgium.
    Rameshan, Navaneeth
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS. Universitat Politècnica de Catalunya, Spain.
    Monte, E.
    Vlassov, Vladimir
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Navarro, L.
    ProRenaTa: Proactive and reactive tuning to scale a distributed storage system (2015). In: Proceedings - 2015 IEEE/ACM 15th International Symposium on Cluster, Cloud, and Grid Computing, CCGrid 2015, Institute of Electrical and Electronics Engineers (IEEE), 2015, p. 453-464. Conference paper (Refereed)
    Abstract [en]

    Provisioning stateful services in the Cloud while guaranteeing high quality of service at reduced hosting cost is challenging. There are two typical auto-scaling approaches: predictive and reactive. A prediction-based controller gives the system enough time to react to workload changes, while a feedback-based controller scales the system with better accuracy. In this paper, we show the limitations of using a proactive or reactive approach in isolation to scale a stateful system, and the overhead involved. To overcome these limitations, we implement an elasticity controller, ProRenaTa, which combines the reactive and proactive approaches to leverage their respective advantages, and also implements a data migration model to handle the scaling overhead. We show that the combination of reactive and proactive approaches outperforms state-of-the-art approaches. Our experiments with a Wikipedia workload trace indicate that ProRenaTa guarantees a high level of SLA commitment while improving overall resource utilization.
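
    The following sketch illustrates, under assumed constants, how a proactive baseline and a reactive correction might be combined as the abstract describes: the predictor sizes the system for the expected load, and a feedback term adds capacity for the residual error observed in the current period. The constants and function names are illustrative, not taken from the paper.

```python
# Hypothetical sketch of combining proactive and reactive scaling: a
# predictor sizes the system ahead of the next workload period, and a
# feedback term corrects the prediction error observed in the current one.

import math

PER_VM_CAPACITY = 100.0  # req/s a VM is assumed to serve within the SLO

def proactive_target(predicted_load):
    return math.ceil(predicted_load / PER_VM_CAPACITY)

def reactive_correction(observed_load, current_vms):
    overload = observed_load - current_vms * PER_VM_CAPACITY
    return math.ceil(overload / PER_VM_CAPACITY) if overload > 0 else 0

def plan(predicted_load, observed_load, current_vms):
    """Proactive baseline plus reactive correction of the residual error."""
    return max(1, proactive_target(predicted_load)
                  + reactive_correction(observed_load, current_vms))

print(plan(predicted_load=1200, observed_load=1350, current_vms=12))  # 14
```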

  • 7.
    Liu, Ying
    et al.
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Rameshan, Navaneeth
    Monte, Enric
    Vlassov, Vladimir
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Navarro, Leandro
    ProRenaTa: Proactive and Reactive tuning to scale a Distributed Storage System. Manuscript (preprint) (Other academic)
  • 8.
    Liu, Ying
    et al.
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Vlassov, Vladimir
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Replication in Distributed Storage Systems: State of the Art, Possible Directions, and Open Issues (2013). In: Proceedings - 2013 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery, CyberC 2013, IEEE, 2013, p. 225-232. Conference paper (Refereed)
    Abstract [en]

    Large-scale distributed storage systems have gained increasing popularity for providing highly available and scalable services. Most of these systems offer high performance, fault tolerance, and elasticity. These desirable properties are achieved mainly through the proper application of replication techniques. We discuss the state of the art in replication techniques for distributed storage systems. We present and compare four representative systems in this realm. We define a design space for replication techniques, identify current limitations and challenges, and outline future trends.

  • 9.
    Liu, Ying
    et al.
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Vlassov, Vladimir
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Navarro, Leandro
    Towards a Community Cloud Storage (2014). In: 2014 IEEE 28th International Conference on Advanced Information Networking and Applications (AINA), IEEE Computer Society, 2014, p. 837-844. Conference paper (Refereed)
    Abstract [en]

    Community Clouds, usually built upon community networks, operate in a more dispersed environment than a data center Cloud, with lower-capacity and less reliable servers separated by a more heterogeneous and less predictable network interconnect. These differences raise challenges when deploying Cloud applications in a community Cloud. OpenStack Swift is an open-source distributed storage system that provides standalone, highly available, and scalable storage built from OpenStack Cloud computing components. Swift was initially designed as a backend storage system operating in a data center Cloud environment. In this work, we illustrate the performance and sensitivity of OpenStack Swift in a typical community Cloud setup. The evaluation of Swift is conducted in a simulated environment, using the most essential parameters that distinguish a community Cloud environment from a data center Cloud environment.

  • 10.
    Liu, Ying
    et al.
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Xhagjika, Vamis
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS. Universitat Politecnica de Catalunya, Spain.
    Vlassov, Vladimir
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Shishtawy, A. A.
    BwMan: Bandwidth manager for elastic services in the cloud (2014). In: Proceedings - 2014 IEEE International Symposium on Parallel and Distributed Processing with Applications, ISPA 2014, IEEE, 2014, p. 217-224. Conference paper (Refereed)
    Abstract [en]

    The flexibility of Cloud computing allows elastic services to adapt to changes in workload patterns in order to achieve desired Service Level Objectives (SLOs) at a reduced cost. Typically, a service adapts to changes in workload by adding or removing service instances (VMs), which for stateful services requires moving data among instances. The SLOs of a distributed Cloud-based service are sensitive to the available network bandwidth, which is usually shared by multiple activities in a single service without being explicitly allocated and managed as a resource. We present the design and evaluation of BwMan, a network bandwidth manager for elastic services in the Cloud. BwMan predicts and performs the bandwidth allocation and tradeoffs between multiple service activities in order to meet service-specific SLOs and policies. To make management decisions, BwMan uses statistical machine learning (SML) to build predictive models. This allows BwMan to arbitrate and allocate bandwidth dynamically among different activities to satisfy specified SLOs. We have implemented and evaluated BwMan for the OpenStack Swift store. Our evaluation shows the feasibility and effectiveness of our approach to bandwidth management in an elastic service. The experiments show that network bandwidth management by BwMan can reduce SLO violations in Swift by a factor of two or more.
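
    As a hedged illustration of the arbitration the abstract describes, the sketch below reserves the bandwidth a stand-in linear model predicts user-facing traffic will need and gives the remainder to data migration; BwMan's actual statistical machine learning models are not reproduced, and all constants are invented.

```python
# Hypothetical sketch of BwMan-style bandwidth arbitration: given a fixed
# link capacity, reserve what the predictive model says user-facing traffic
# needs to meet its latency SLO, and hand the rest to background data
# migration.

LINK_MBPS = 1000.0  # illustrative link capacity

def user_bandwidth_needed(request_rate, mbps_per_request=0.08, headroom=1.2):
    """Stand-in linear model: predicted bandwidth for user traffic."""
    return request_rate * mbps_per_request * headroom

def arbitrate(request_rate):
    user_share = min(LINK_MBPS, user_bandwidth_needed(request_rate))
    migration_share = LINK_MBPS - user_share
    return user_share, migration_share

for rate in (2000, 8000):
    user, migration = arbitrate(rate)
    print(f"{rate} req/s -> user {user:.0f} Mbit/s, migration {migration:.0f} Mbit/s")
```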

  • 11.
    Rameshan, Navaneeth
    et al.
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Liu, Ying
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Navarro, Leandro
    Vlassov, Vladimir
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Augmenting Elasticity Controllers for Improved Accuracy (2016). In: 2016 IEEE International Conference on Autonomic Computing (ICAC), IEEE Computer Society, 2016, p. 117-126. Conference paper (Refereed)
    Abstract [en]

    Elastic resource provisioning is used to guarantee service level objectives (SLOs) at reduced cost in a Cloud platform. However, performance interference in the hosting platform introduces uncertainty into the performance guarantees of provisioned services. Existing elasticity controllers are either unaware of this interference or over-provision resources to meet the SLO. In this paper, we show that scaling under the assumption that VMs in a multi-tenant environment perform predictably results in long periods of SLO violations. We augment the elasticity controller to be aware of interference and improve the convergence time of scaling without over-provisioning. We perform experiments with Memcached and compare our solution against a baseline elasticity controller that is unaware of performance interference. Our results show that the augmentation can reduce SLO violations by 65% or more and also save provisioning costs compared to an interference-oblivious controller.
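
    A minimal sketch of the augmentation idea, assuming a scalar interference index and a linear capacity discount (both illustrative, not the paper's model): the controller discounts per-VM capacity on noisy hosts, so it scales out sooner instead of repeatedly missing the SLO.

```python
# Hypothetical sketch of interference-aware scaling: discount the assumed
# per-VM capacity by a measured interference index (e.g. derived from host
# metrics) before computing how many VMs the current load needs.

import math

BASE_VM_CAPACITY = 100.0  # req/s per VM on an idle host (illustrative)

def effective_capacity(interference_index):
    """interference_index in [0, 1]: 0 = idle host, 1 = heavy co-tenant load."""
    return BASE_VM_CAPACITY * (1.0 - 0.5 * interference_index)

def vms_needed(load, interference_index):
    return max(1, math.ceil(load / effective_capacity(interference_index)))

print(vms_needed(1000, 0.0))  # 10 VMs on quiet hosts
print(vms_needed(1000, 0.8))  # 17 VMs once interference is accounted for
```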

  • 12.
    Rameshan, Navaneeth
    et al.
    Liu, Ying
    KTH.
    Navarro, Leandro
    Vlassov, Vladimir
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Elastic Scaling in the Cloud: A Multi-Tenant Perspective (2016). In: 2016 IEEE 36th International Conference on Distributed Computing Systems Workshops (ICDCSW 2016), IEEE conference proceedings, 2016, p. 25-30. Conference paper (Refereed)
    Abstract [en]

    Performance interference in the hosting platform introduces uncertainty into the performance guarantees of provisioned services. Existing elasticity controllers are either unaware of this interference or over-provision resources to meet the SLO. In this paper, we take a holistic view of elastic scaling from a multi-tenant perspective. We show that performance interference can significantly impact the accuracy of scaling and result in long periods of SLO violation. Using Memcached as a case study, we show that making an elasticity controller interference-aware can improve the accuracy of scaling decisions and significantly reduce the periods of SLO violation.

  • 13.
    Rameshan, Navaneeth
    et al.
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Liu, Ying
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Navarro, Leandro
    Department of Computer Architecture. Universitat Politecnica de Catalunya. Barcelona, Spain.
    Vlassov, Vladimir
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Hubbub-Scale: Towards Reliable Elastic Scaling under Multi-tenancy (2016). In: Proceedings - 2016 16th IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing, CCGrid 2016, IEEE conference proceedings, 2016, p. 233-244. Conference paper (Refereed)
    Abstract [en]

    Elastic resource provisioning is used to guarantee service level objectives (SLOs) at reduced cost in a Cloud platform. However, performance interference in the hosting platform introduces uncertainty into the performance guarantees of provisioned services. Existing elasticity controllers are either unaware of this interference or over-provision resources to meet the SLO. In this paper, we show that an elasticity controller built on the assumption of predictable VM performance will fail if interference is not modelled. We identify and control the different sources of unpredictability and build Hubbub-Scale, an elasticity controller that is reliable in the presence of performance interference. Our evaluation with Redis and Memcached shows that Hubbub-Scale efficiently conforms to the SLO requirements under scenarios where standard modelling approaches fail.
