  • 1.
    Abbas, Zainab
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS.
    Al-Shishtawy, Ahmad
    RISE SICS, Stockholm, Sweden.
    Girdzijauskas, Sarunas
    KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS. RISE SICS, Stockholm, Sweden.
    Vlassov, Vladimir
    KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS.
    Short-Term Traffic Prediction Using Long Short-Term Memory Neural Networks (2018). Conference paper (Refereed)
    Abstract [en]

    Short-term traffic prediction allows Intelligent Transport Systems to respond proactively to events before they happen. With the rapid increase in the amount, quality, and detail of traffic data, new techniques are required that can exploit the information in the data to provide better results while scaling to cope with increasing amounts of data and growing cities. We propose and compare three models for short-term road traffic density prediction based on Long Short-Term Memory (LSTM) neural networks. We trained the models on real traffic data collected by the Motorway Control System in Stockholm, which monitors highways and collects flow and speed data per lane every minute from radar sensors. To deal with the challenge of scale and to improve prediction accuracy, we propose to partition the road network into road stretches and junctions, and to model each partition with one or more LSTM neural networks. Our evaluation results show that partitioning the roads improves prediction accuracy by reducing the root mean square error by a factor of 5. We show that we can reduce the complexity of the LSTM network by limiting the number of input sensors, on average to 35% of the original number, without compromising prediction accuracy.
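
    The partition-and-evaluate idea above boils down to scoring one model per road partition. A minimal Python sketch of that bookkeeping (sensor ids, partition names, and density values are hypothetical; the LSTM models themselves are out of scope here):

```python
import math

def rmse(predicted, actual):
    """Root mean square error between two equal-length series."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

# Hypothetical partitioning of sensors into road stretches and junctions.
partitions = {
    "stretch_E4_north": ["s01", "s02", "s03"],
    "junction_essinge": ["s04", "s05"],
}

def per_partition_rmse(predictions, observations):
    """Score one model per partition instead of a single global model."""
    return {name: rmse([predictions[s] for s in sensors],
                       [observations[s] for s in sensors])
            for name, sensors in partitions.items()}

# Predicted vs. observed traffic density (vehicles/km) per sensor.
pred = {"s01": 41.0, "s02": 44.5, "s03": 39.8, "s04": 60.2, "s05": 58.1}
obs = {"s01": 40.0, "s02": 45.0, "s03": 41.0, "s04": 61.0, "s05": 57.0}
print(per_partition_rmse(pred, obs))
```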

  • 2. Ahmed, J.
    et al.
    Johnsson, A.
    Moradi, F.
    Pasquini, R.
    Flinta, C.
    Stadler, Rolf
    KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Online approach to performance fault localization for cloud and datacenter services (2017). In: Proceedings of the IM 2017 - 2017 IFIP/IEEE International Symposium on Integrated Network and Service Management, Institute of Electrical and Electronics Engineers Inc., 2017, p. 873-874. Conference paper (Refereed)
    Abstract [en]

    Automated detection and diagnosis of performance faults in cloud and datacenter environments is a crucial task for maintaining the smooth operation of different services and minimizing downtime. We demonstrate an effective machine learning approach, based on detecting metric correlation stability violations (CSV), for automated localization of performance faults in datacenter services running under dynamic load conditions. © 2017 IFIP.
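
    A rough illustration of the CSV idea as we read it (not the authors' code): learn pairwise metric correlations under fault-free operation, then flag pairs whose correlation drifts at run time. The drift threshold is an assumption.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy) if vx > 0 and vy > 0 else 0.0

def stability_violations(baseline, windows, threshold=0.5):
    """baseline: {(metric_a, metric_b): fault-free correlation};
    windows: {(metric_a, metric_b): (recent samples of a, recent samples of b)}."""
    violations = []
    for pair, rho0 in baseline.items():
        xs, ys = windows[pair]
        drift = abs(rho0 - pearson(xs, ys))
        if drift > threshold:  # correlation stability violated
            violations.append((pair, drift))
    return violations
```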

  • 3. Ahmed, J.
    et al.
    Johnsson, A.
    Yanggratoke, Rerngvit
    KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Electrical Engineering (EES), Communication Networks.
    Ardelius, J.
    Flinta, C.
    Stadler, Rolf
    KTH, School of Electrical Engineering (EES), Communication Networks. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Predicting SLA conformance for cluster-based services using distributed analytics (2016). In: Proceedings of the NOMS 2016 - 2016 IEEE/IFIP Network Operations and Management Symposium, IEEE conference proceedings, 2016, p. 848-852. Conference paper (Refereed)
    Abstract [en]

    Service assurance for the telecom cloud is a challenging task and is continuously being addressed by academia and industry. One promising approach is to utilize machine learning to predict service quality in order to take early mitigation actions. In previous work we have shown how to predict service-level metrics, such as frame rate for a video application on the client side, from operational data gathered at the server side. This gives the service provider early indications on whether the platform can support the current load demand. This paper extends previous work by addressing scalability issues for cluster-based services. Operational data, generated in large volumes from several sources at high velocity, puts strain on computational and communication resources. We propose and evaluate a distributed machine learning system based on the Winnow algorithm to tackle scalability issues, and then compare the new distributed solution with the previously proposed centralized solution. We show that network overhead and computational execution time are substantially reduced while maintaining high prediction accuracy, making it possible to achieve real-time service quality predictions in large systems.
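
    The Winnow learner at the core of the distributed solution is a classic online algorithm with multiplicative weight updates; a minimal single-node sketch (parameter names are ours):

```python
def winnow_train(samples, n_features, alpha=2.0):
    """samples: iterable of (x, y) with x a 0/1 feature vector, y in {0, 1}."""
    w = [1.0] * n_features
    theta = float(n_features)  # classification threshold
    for x, y in samples:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
        if pred == 0 and y == 1:    # false negative: promote active features
            w = [wi * alpha if xi else wi for wi, xi in zip(w, x)]
        elif pred == 1 and y == 0:  # false positive: demote active features
            w = [wi / alpha if xi else wi for wi, xi in zip(w, x)]
    return w

def winnow_predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= len(w) else 0
```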

  • 4. Ahmed, J.
    et al.
    Josefsson, T.
    Johnsson, A.
    Flinta, C.
    Moradi, F.
    Pasquini, R.
    Stadler, Rolf
    KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre.
    Automated diagnostic of virtualized service performance degradation (2018). In: IEEE/IFIP Network Operations and Management Symposium: Cognitive Management in a Cyber World, NOMS 2018, Institute of Electrical and Electronics Engineers Inc., 2018, p. 1-9. Conference paper (Refereed)
    Abstract [en]

    Service assurance for cloud applications is a challenging task and is an active area of research for academia and industry. One promising approach is to utilize machine learning for service quality prediction and fault detection so that suitable mitigation actions can be executed. In our previous work, we have shown how to predict service-level metrics in real time solely from operational data gathered at the server side. This gives the service provider early indications on whether the platform can support the current load demand. This paper provides the logical next step: we extend our work by proposing an automated detection and diagnostic capability for performance faults that manifest themselves in cloud and datacenter environments. This is a crucial task for maintaining the smooth operation of running services and minimizing downtime. We demonstrate the effectiveness of our approach, which exploits the interpretative capabilities of Self-Organizing Maps (SOMs) to automatically detect and localize different performance faults for cloud services. © 2018 IEEE.

  • 5.
    Apolonia, Nuno
    et al.
    Universitat Politecnica de Catalunya (UPC) Barcelona, Spain.
    Antaris, Stefanos
    Girdzijauskas, Šarunas
    KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS.
    Pallis, G.
    Dikaiakos, Marios
    SELECT: A distributed publish/subscribe notification system for online social networks (2018). In: Proceedings - 2018 IEEE 32nd International Parallel and Distributed Processing Symposium, IPDPS 2018, Institute of Electrical and Electronics Engineers (IEEE), 2018, p. 970-979, article id 8425250. Conference paper (Refereed)
    Abstract [en]

    Publish/subscribe (pub/sub) mechanisms constitute an attractive communication paradigm in the design of large-scale notification systems for Online Social Networks (OSNs). To accommodate the large-scale workloads of notifications produced by OSNs, pub/sub mechanisms require thousands of servers distributed on different data centers all over the world, incurring large overheads. To eliminate the pub/sub resources used, we propose SELECT - a distributed pub/sub social notification system over peer-to-peer (P2P) networks. SELECT organizes the peers on a ring topology and provides an adaptive P2P connection establishment algorithm where each peer identifies the number of connections required, based on the social structure and user availability. This allows messages to be propagated to the social friends of the users using a reduced number of hops. The presented algorithm is an efficient heuristic for an NP-hard problem which maps workload graphs to structured P2P overlays, inducing an overall, close to theoretical, minimal number of messages. Experiments show that SELECT reduces the number of relay nodes by up to 89% versus state-of-the-art pub/sub notification systems. Additionally, we demonstrate the advantage of SELECT against socially-aware P2P overlay networks and show that the communication between two socially connected peers is reduced on average by at least 64% hops, while achieving 100% communication availability even under high churn.

  • 6. Artho, Cyrille
    et al.
    Gros, Quentin
    Rousset, Guillaume
    Precondition Coverage in Software Testing (2016). In: Proc. 1st Int. Workshop on Validating Software Tests (VST 2016), IEEE conference proceedings, 2016. Conference paper (Refereed)
    Abstract [en]

    Preconditions indicate when it is permitted to use a given function. However, it is not always the case that both outcomes of a precondition are observed during testing. A precondition that is always false makes a function unusable; a precondition that is always true may actually be an invariant. In model-based testing, preconditions describe when a transition may be executed from a given state. If no outgoing transition is enabled in a given state because all preconditions of all outgoing transitions are false, the test model may be flawed. Experiments show a low test coverage of preconditions in the Scala library. We also investigate preconditions in Modbat models for model-based testing; in that case, a certain number of test cases is needed to produce sufficient coverage, but the remaining cases of low coverage indeed point to legitimate flaws in test models or code.
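
    The measurement itself is easy to illustrate: record, per precondition, whether both outcomes were ever observed during a test run. A minimal sketch with illustrative names (the paper instruments Scala's require and Modbat models):

```python
outcomes = {}  # precondition id -> set of observed boolean outcomes

def require(pred_id, condition):
    """Check a precondition while recording which outcomes were exercised."""
    outcomes.setdefault(pred_id, set()).add(bool(condition))
    if not condition:
        raise ValueError(f"precondition {pred_id} violated")

def coverage_report():
    for pred_id, seen in sorted(outcomes.items()):
        if seen == {True}:
            print(pred_id, "never false: possibly an invariant")
        elif seen == {False}:
            print(pred_id, "never true: function unusable in tests")

def sqrt_safe(x):
    require("sqrt.nonneg", x >= 0)
    return x ** 0.5

for arg in (4.0, 9.0):  # never exercises the False outcome
    sqrt_safe(arg)
coverage_report()       # sqrt.nonneg never false: possibly an invariant
```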

  • 7.
    Artho, Cyrille
    et al.
    KTH, School of Computer Science and Communication (CSC). National Institute of Advanced Industrial Science and Technology (AIST), Japan.
    Gros, Quentin
    Rousset, Guillaume
    Banzai, Kazuaki
    Ma, Lei
    Kitamura, Takashi
    Hagiya, Masami
    Tanabe, Yoshinori
    Yamamoto, Mitsuharu
    Model-based API Testing of Apache ZooKeeper (2017). In: 2017 10th IEEE International Conference on Software Testing, Verification and Validation (ICST), IEEE, 2017, p. 288-298. Conference paper (Refereed)
    Abstract [en]

    Apache ZooKeeper is a distributed data store that is highly concurrent and asynchronous due to network communication; testing such a system is very challenging. Our solution, using the tool "Modbat", generates test cases for concurrent client sessions and processes results from synchronous and asynchronous callbacks. We use an embedded model checker to compute the test oracle for non-deterministic outcomes; the oracle model evolves dynamically with each new test step. Our work has detected multiple previously unknown defects in ZooKeeper. Finally, a thorough coverage evaluation of the core classes shows how code and branch coverage strongly relate to feature coverage in the model, and hence to modeling effort.

  • 8. Artho, Cyrille
    et al.
    Ma, Lei
    Classification of Randomly Generated Test Cases (2016). In: Proc. 1st Int. Workshop on Validating Software Tests (VST 2016), IEEE conference proceedings, 2016. Conference paper (Refereed)
    Abstract [en]

    Random test case generation produces relatively diverse test sequences, but the validity of the test verdict is always uncertain. Because tests are generated without taking the specification and documentation into account, many tests are invalid. To understand the prevalent types of successful and invalid tests, we present a classification of 56 issues that were derived from 208 failed, randomly generated test cases. While the existing workflow successfully eliminated more than half of the tests as irrelevant, half of the remaining failed tests are false positives. We show that the new @NonNull annotation of Java 8 has the potential to eliminate most of the false positives, highlighting the importance of machine-readable documentation.

  • 9.
    Artho, Cyrille
    et al.
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Ölveczky, Peter Csaba
    Formal Techniques for Safety-Critical Systems (FTSCS 2014) Preface (2017). In: Science of Computer Programming, ISSN 0167-6423, E-ISSN 1872-7964, Vol. 133, p. 89-90. Article in journal (Other academic)
  • 10. Artho, Cyrille
    et al.
    Rousset, Guillaume
    Model-based Testing of the Java network API (2017). In: Electronic Proceedings in Theoretical Computer Science, ISSN 2075-2180, E-ISSN 2075-2180, no 245, p. 46-51. Article in journal (Refereed)
    Abstract [en]

    Testing networked systems is challenging. The client or server side cannot be tested by itself. We present a solution using the tool "Modbat" that generates test cases for Java's network library java.nio, where we test both blocking and non-blocking network functions. Our test model can dynamically simulate actions in multiple worker and client threads, thanks to a carefully orchestrated design that covers non-determinism while ensuring progress.

  • 11.
    Artho, Cyrille
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Theoretical Computer Science, TCS.
    Ölveczky, P.C.
    Formal Techniques for Safety-Critical Systems (FTSCS 2015) (2018). In: Science of Computer Programming, ISSN 0167-6423, E-ISSN 1872-7964, Vol. 154, p. 1-2. Article in journal (Refereed)
  • 12.
    Artho, Cyrille
    et al.
    KTH.
    Ölveczky, P.C.
    Preface (2017). In: 5th International Workshop on Formal Techniques for Safety-Critical Systems, FTSCS 2016, Springer Verlag, 2017. Conference paper (Refereed)
  • 13. Ashjaei, Mohammad
    et al.
    Moghaddami Khalilzad, Nima
    KTH.
    Mubeen, Saad
    Behnam, Moris
    Sander, Ingo
    Almeida, Luis
    Nolte, Thomas
    Designing end-to-end resource reservations in predictable distributed embedded systems (2017). In: Real-time systems, ISSN 0922-6443, E-ISSN 1573-1383, Vol. 53, no 6, p. 916-956. Article in journal (Refereed)
  • 14. Attarzadeh-Niaki, S. -H
    et al.
    Altinel, E.
    KTH.
    Koedam, M.
    Molnos, A.
    Sander, Ingo
    KTH.
    Goossens, K.
    A composable and predictable MPSoC design flow for multiple real-time applications (2016). In: Model-Implementation Fidelity in Cyber Physical System Design, Springer International Publishing, 2016, p. 157-174. Chapter in book (Other academic)
    Abstract [en]

    Design of real-time MPSoC systems including multiple applications is challenging because the temporal requirements of each application must be respected throughout the entire design flow. Currently, the design of different applications is often interdependent, making convergence to a solution for each application difficult. This chapter proposes a compositional method to design applications independently, and then to execute them without interference. We define a formal modeling framework as a suitable entry point for application design. The models are executable, which enables early detection of specification errors, and include the formal properties of the applications based on well-defined models of computation. We combine this with a predictable MPSoC platform template that has a supporting design flow but lacks a simulation front-end. The structure and behavior of the application models are exported to an intermediate format via introspection, which is iteratively transformed for the backend flow. We identify the problems arising in this transformation and provide appropriate solutions. The design flow is demonstrated by a system consisting of two streaming applications, where less than half of the design time is dedicated to operating on the integrated system model.

  • 15. Attarzadeh-Niaki, S. -H
    et al.
    Sander, Ingo
    KTH, School of Information and Communication Technology (ICT), Electronics.
    Automatic construction of models for analytic system-level design space exploration problems (2017). In: Proceedings of the 2017 Design, Automation and Test in Europe, DATE 2017, Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 670-673, article id 7927074. Conference paper (Refereed)
    Abstract [en]

    Due to the variety of application models and also the target platforms used in embedded electronic system design, it is challenging to formulate a generic and extensible analytic design-space exploration (DSE) framework. Current approaches support a restricted class of application and platform models and are difficult to extend. This paper proposes a framework for automatic construction of system-level DSE problem models based on a coherent, constraint-based representation of system functionality, flexible target platforms, and binding policies. Heterogeneous semantics is captured using constraints on logical clocks. The applicability of this method is demonstrated by constructing DSE problem models from different combinations of application and platforms models. Time-triggered and untimed models of the system functionality and heterogeneous target platforms are used for this purpose. Another potential advantage of this approach is that constructed models can be solved using a variety of standard and ad-hoc solvers and search heuristics.

  • 16.
    Attarzadeh-Niaki, Seyed-Hosein
    et al.
    KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    Sander, Ingo
    KTH, School of Information and Communication Technology (ICT), Electronic Systems.
    An extensible modeling methodology for embedded and cyber-physical system design (2016). In: Simulation (San Diego, Calif.), ISSN 0037-5497, E-ISSN 1741-3133, Vol. 92, no 8, p. 771-794. Article in journal (Refereed)
    Abstract [en]

    Models are important tools to manage the increasing complexity of system design. The choice of a modeling language for constructing models governs what types of systems can be modeled, and which subsequent design activities can be performed. This is especially true for the area of embedded electronic and cyber-physical system design, which poses several challenging requirements on modeling and design methodologies. This article argues that the Formal System Design (ForSyDe) methodology, with the necessary extensions presented here, fulfills these requirements, and thus qualifies for the design of tomorrow's systems. Based on the theory of models of computation and the concept of process constructors, heterogeneous models are captured in ForSyDe with formal semantics. A refined layer of the formalism is introduced to make its denotational-style semantics easy to implement on top of commonly used imperative languages, and an open-source realization on top of the IEEE standard language SystemC is presented. The introspection mechanism is introduced to automatically export an intermediate representation of the constructed models for further analysis/synthesis by external tools. The flexibility and extensibility of ForSyDe are emphasized by integrating a new timed model of computation without central synchronization, and by providing mechanisms for integrating foreign models, parallel and distributed simulation, and modeling adaptive, data-parallel, and non-deterministic systems. A set of ForSyDe features is demonstrated in practice, and compared with similar approaches using a running example and two relevant case studies.

  • 17.
    Bahri, Leila
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS.
    Girdzijauskas, Sarunas
    KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS.
    When Trust Saves Energy - A Reference Framework for Proof-of-Trust (PoT) Blockchains (2018). In: WWW '18 Companion Proceedings of The Web Conference 2018, ACM Digital Library, 2018, p. 1165-1169. Conference paper (Refereed)
    Abstract [en]

    Blockchains are attracting the attention of many technical, financial, and industrial parties as a promising infrastructure for achieving secure peer-to-peer (P2P) transactional systems. At the heart of blockchains is proof-of-work (PoW), a trustless leader election mechanism based on demonstration of computational power. PoW provides blockchain security in trustless P2P environments, but comes at the expense of wasting huge amounts of energy. In this research work, we question this energy expenditure of PoW under blockchain use cases where some form of trust exists between the peers. We propose a Proof-of-Trust (PoT) blockchain where peer trust is evaluated in the network based on a trust graph that emerges in a decentralized fashion and that is encoded in and managed by the blockchain itself. This trust is then used as a waiver for the difficulty of PoW; that is, the more trust you prove in the network, the less work you do.
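
    A minimal sketch of the waiver idea as we read it: a peer's proven trust lowers the proof-of-work target difficulty. The linear scaling, constants, and hashing scheme are illustrative, not from the paper.

```python
import hashlib

def required_bits(base_difficulty, trust, max_waiver=0.8):
    """More proven trust in the network -> fewer leading zero bits to hit."""
    return max(1, round(base_difficulty * (1 - max_waiver * trust)))

def mine(block_bytes, trust, base_difficulty=16):
    bits = required_bits(base_difficulty, trust)
    target = 1 << (256 - bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_bytes + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, bits
        nonce += 1

print(mine(b"block-payload", trust=0.0))  # full work
print(mine(b"block-payload", trust=1.0))  # heavily waived work
```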

  • 18. Bahri, Leila
    et al.
    Soliman, Amira
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Squillaci, Jacopo
    Carminati, Barbara
    Ferrari, Elena
    Girdzijauskas, Sarunas
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Beat the DIVa: Decentralized Identity Validation for Online Social Networks (2016). In: 2016 32nd IEEE International Conference on Data Engineering (ICDE), 2016, p. 1330-1333. Conference paper (Refereed)
    Abstract [en]

    Fake accounts in online social networks (OSNs) have grown considerably in sophistication and are now attempting to gain network trust by infiltrating honest communities. Honest users have a limited perspective on the truthfulness of new online identities requesting their friendship. This facilitates the task of fake accounts in deceiving honest users into befriending them. To address this, we have proposed a model that learns hidden correlations between profile attributes within OSN communities, and exploits them to assist users in estimating the trustworthiness of new profiles. To demonstrate our method, we present, in this demo, a game application through which players try to cheat the system and convince nodes in a simulated OSN to befriend them. The game deploys different strategies to challenge the players and to reach the objectives of the demo. These objectives are to make participants aware of how fake accounts can infiltrate their OSN communities, to demonstrate how our suggested method could aid in mitigating this threat, and to eventually strengthen our model based on the data collected from the players' moves.

  • 19.
    Bastys, Iulia
    et al.
    Chalmers University of Technology.
    Balliu, Musard
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Sabelfeld, Andrei
    Chalmers University of Technology.
    If This Then What? Controlling Flows in IoT Apps (2018). Conference paper (Refereed)
  • 20.
    Baumann, Christoph
    et al.
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Schwarz, Oliver
    RISE SICS.
    Dam, Mads
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Compositional Verification of Security Properties for Embedded Execution Platforms (2017). In: PROOFS 2017: 6th International Workshop on Security Proofs for Embedded Systems / [ed] Ulrich Kühne, Jean-Luc Danger and Sylvain Guilley, 2017, Vol. 49, p. 1-16. Conference paper (Refereed)
    Abstract [en]

    The security of embedded systems can be dramatically improved through the use of formally verified isolation mechanisms such as separation kernels, hypervisors, or microkernels. For trustworthiness, particularly for system-level behaviour, the verifications need precise models of the underlying hardware. Such models are hard to attain, highly complex, and proofs of their security properties may not easily apply to similar but different platforms. This may render verification economically infeasible. To address these issues, we propose a compositional top-down approach to embedded system specification and verification, where the system-on-chip is modeled as a network of distributed automata communicating via paired synchronous message passing. Using abstract specifications for each component makes it possible to delay the development of detailed models for cores, devices, etc., while still being able to verify high-level security properties like integrity and confidentiality, and to soundly refine the result for different instantiations of the abstract components at a later stage. As a case study, we apply this methodology to the verification of information flow security for an industry-scale security-oriented hypervisor on the ARMv8-A platform. The hypervisor statically assigns (multiple) cores to each guest system and implements a rudimentary, but usable, inter-guest communication discipline. We have completed a pen-and-paper security proof for the hypervisor down to the state transition level and report on a partially completed verification of guest-mode security in the HOL4 theorem prover.

  • 21. Bennaceur, A.
    et al.
    Meinke, Karl
    KTH.
    Machine learning for software analysis: Models, methods, and applications (2018). In: International Dagstuhl Seminar 16172, Machine Learning for Dynamic Software Analysis: Potentials and Limits, 2016, Springer, 2018, Vol. 11026, p. 3-49. Conference paper (Refereed)
    Abstract [en]

    Machine Learning (ML) is the discipline that studies methods for automatically inferring models from data. Machine learning has been successfully applied in many areas of software engineering, including behaviour extraction, testing, and bug fixing. Many more applications are yet to be defined. Therefore, a better fundamental understanding of ML methods, their assumptions, and their guarantees can help to identify and adopt appropriate ML technology for new applications. In this chapter, we present an introductory survey of ML applications in software engineering, classified in terms of the models they produce and the learning methods they use. We argue that the optimal choice of an ML method for a particular application should be guided by the type of models one seeks to infer. We describe some important principles of ML, give an overview of some key methods, and present examples of areas of software engineering benefiting from ML. We also discuss the open challenges for reaching the full potential of ML for software engineering and how ML can benefit from software engineering methods.

  • 22. Bessani, A.
    et al.
    Brandt, J.
    Bux, M.
    Cogo, V.
    Dimitrova, L.
    Dowling, Jim
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Gholami, Ali
    KTH.
    Hakimzadeh, Kamal
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Hummel, M.
    Ismail, Mahmoud
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Laure, Erwin
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for High Performance Computing, PDC. KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz).
    Leser, U.
    Litton, J. -E
    Martinez, R.
    Niazi, Salman
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Reichel, J.
    Zimmermann, K.
    BiobankCloud: A platform for the secure storage, sharing, and processing of large biomedical data sets (2016). In: 1st International Workshop on Data Management and Analytics for Medicine and Healthcare, DMAH 2015, and Workshop on Big-Graphs Online Querying, Big-O(Q) 2015, held in conjunction with the 41st International Conference on Very Large Data Bases, VLDB 2015, Springer, 2016, p. 89-105. Conference paper (Refereed)
    Abstract [en]

    Biobanks store and catalog human biological material that is increasingly being digitized using next-generation sequencing (NGS). There is, however, a computational bottleneck, as existing software systems are not scalable and secure enough to store and process the incoming wave of genomic data from NGS machines. In the BiobankCloud project, we are building a Hadoop-based platform for the secure storage, sharing, and parallel processing of genomic data. We extended Hadoop to include support for multi-tenant studies, reduced storage requirements with erasure coding, and added support for extensible and consistent metadata. On top of Hadoop, we built a scalable scientific workflow engine featuring a proper workflow definition language focusing on simple integration and chaining of existing tools, adaptive scheduling on Apache Yarn, and support for iterative dataflows. Our platform also supports the secure sharing of data across different, distributed Hadoop clusters. The software is easily installed and comes with a user-friendly web interface for running, managing, and accessing data sets behind secure 2-factor authentication. Initial tests have shown that the engine scales well to dozens of nodes. The entire system is open-source and includes pre-defined workflows for popular tasks in biomedical data analysis, such as variant identification, differential transcriptome analysis using RNA-Seq, and analysis of miRNA-Seq and ChIP-Seq data.

  • 23.
    Blouin, Arnaud
    et al.
    Univ Rennes, INSA Rennes, INRIA, CNRS, IRISA, Rennes, France.
    Lelli, Valeria
    Univ Fed Ceara, Fortaleza, Ceara, Brazil.
    Baudry, Benoit
    KTH.
    Coulon, Fabien
    Univ Toulouse Jean Jaures, Toulouse, France.
    User interface design smell: Automatic detection and refactoring of Blob listeners (2018). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 102, p. 49-64. Article in journal (Refereed)
    Abstract [en]

    Context. User Interfaces (UIs) intensively rely on event-driven programming: interactive objects send UI events, which capture users' interactions, to dedicated objects called controllers. Controllers use several UI listeners that handle these events to produce UI commands. Objective. First, we reveal the presence of design smells in the code that describes and controls UIs. Second, we demonstrate that specific code analyses are necessary to analyze and refactor UI code, because of its coupling with the rest of the code. Method. We conducted an empirical study on four large Java software systems. We studied to what extent the number of UI commands that a UI listener can produce has an impact on the change- and fault-proneness of the UI listener code. We developed a static code analysis for detecting UI commands in the code. Results. We identified a new type of design smell, called Blob listener, that characterizes UI listeners that can produce more than two UI commands. We proposed a systematic static code analysis procedure that searches for Blob listeners, which we implemented in InspectorGuidget. We conducted experiments on the four software systems, for which we manually identified 53 instances of Blob listener. InspectorGuidget successfully detected 52 Blob listeners out of 53. The results exhibit a precision of 81.25% and a recall of 98.11%. We then developed a semi-automatic and behavior-preserving refactoring process to remove Blob listeners. 49.06% of the 53 Blob listeners were automatically refactored. Patches have been accepted and merged. Discussions with developers of the four software systems assess the relevance of the Blob listener. Conclusion. This work shows that UI code also suffers from design smells that have to be identified and characterized. We argue that studies have to be conducted to find other UI design smells, and that tools that analyze UI code must be developed.
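
    The smell definition itself reduces to a threshold over statically extracted counts. A toy sketch of just that rule (the real detection in InspectorGuidget is a static analysis of Java UI code; the listener names and counts here are hypothetical):

```python
def find_blob_listeners(commands_per_listener, threshold=2):
    """A Blob listener is a UI listener producing more than two UI commands."""
    return [name for name, n in commands_per_listener.items() if n > threshold]

counts = {"SaveHandler": 1, "MenuDispatcher": 5, "ToolbarListener": 3}
print(find_blob_listeners(counts))  # ['MenuDispatcher', 'ToolbarListener']
```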

  • 24.
    Bogdanov, Kirill
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS.
    Reda, W.
    Maguire Jr., Gerald Q.
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS.
    Kostic, Dejan
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems, CoS.
    Canini, M.
    Fast and accurate load balancing for geo-distributed storage systems (2018). In: SoCC 2018 - Proceedings of the 2018 ACM Symposium on Cloud Computing, Association for Computing Machinery (ACM), 2018, p. 386-400. Conference paper (Refereed)
    Abstract [en]

    The increasing density of globally distributed datacenters reduces the network latency between neighboring datacenters and allows replicated services deployed across neighboring locations to share workload when necessary, without violating strict Service Level Objectives (SLOs). We present Kurma, a practical implementation of a fast and accurate load balancer for geo-distributed storage systems. At run-time, Kurma integrates network latency and service time distributions to accurately estimate the rate of SLO violations for requests redirected across geo-distributed datacenters. Using these estimates, Kurma solves a decentralized rate-based performance model enabling fast load balancing (in the order of seconds) while taming global SLO violations. We integrate Kurma with Cassandra, a popular storage system. Using real-world traces along with a geo-distributed deployment across Amazon EC2, we demonstrate Kurma’s ability to effectively share load among datacenters while reducing SLO violations by up to a factor of 3 in high load settings or reducing the cost of running the service by up to 17%.
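
    A minimal sketch of the violation-rate estimate as we read it: combine sampled network latencies and service times for a remote datacenter and estimate the probability of exceeding the SLO. The sample values are illustrative.

```python
import random

def violation_rate(network_latency_ms, service_time_ms, slo_ms, trials=100_000):
    """Monte Carlo estimate of P(network latency + service time > SLO)."""
    violations = sum(
        random.choice(network_latency_ms) + random.choice(service_time_ms) > slo_ms
        for _ in range(trials)
    )
    return violations / trials

# e.g. samples gathered from network probes and request logs:
print(violation_rate([12, 15, 30], [40, 55, 90], slo_ms=100))
```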

  • 25.
    Bogdanov, Kirill
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication Systems, CoS, Network Systems Laboratory (NS Lab).
    Reda, Waleed
    KTH, School of Information and Communication Technology (ICT), Communication Systems, CoS, Network Systems Laboratory (NS Lab). Université catholique de Louvain.
    Kostic, Dejan
    KTH, School of Information and Communication Technology (ICT), Communication Systems, CoS, Network Systems Laboratory (NS Lab).
    Maguire Jr., Gerald Q.
    KTH, School of Information and Communication Technology (ICT), Communication Systems, CoS.
    Canini, Marco
    KAUST.
    Kurma: Fast and Efficient Load Balancing for Geo-Distributed Storage Systems: Evaluation of Convergence and Scalability (2018). Report (Other academic)
    Abstract [en]

    This report provides an extended evaluation of Kurma, a practical implementation of a geo-distributed load balancer for backend storage systems. In this report we demonstrate the ability of distributed Kurma instances to accurately converge to the same solutions within 1% of the datacenter's total capacity, and the ability of Kurma to scale up to 8 datacenters using a single CPU core at each datacenter.

  • 26. Bousse, Erwan
    et al.
    Leroy, Dorian
    Combemale, Benoit
    Wimmer, Manuel
    Baudry, Benoit
    KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS.
    Omniscient debugging for executable DSLs (2018). In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 137, p. 261-288. Article in journal (Refereed)
    Abstract [en]

    Omniscient debugging is a promising technique that relies on execution traces to enable free traversal of the states reached by a model (or program) during an execution. While a few General-Purpose Languages (GPLs) already have support for omniscient debugging, developing such a complex tool for any executable Domain-Specific Language (DSL) remains a challenging and error-prone task. A generic solution must support a wide range of executable DSLs independently of the metaprogramming approaches used for implementing their semantics, and be efficient enough for good responsiveness. Our contribution relies on a generic omniscient debugger supported by efficient generic trace management facilities. To support a wide range of executable DSLs, the debugger provides a common set of debugging facilities, and is based on a pattern to define runtime services independently of metaprogramming approaches. Results show that our debugger can be used with various executable DSLs implemented with different metaprogramming approaches. As compared to a solution that copies the model at each step, it is on average six times more efficient in memory, and at least 2.2 times faster when exploring past execution states, while only slowing down the execution 1.6 times on average.

  • 27.
    Broman, David
    KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS.
    Hybrid Simulation Safety: Limbos and Zero Crossings (2018). In: Principles of Modeling: Essays Dedicated to Edward A. Lee on the Occasion of His 60th Birthday, Springer, 2018, p. 106-121. Chapter in book (Refereed)
    Abstract [en]

    Physical systems can be naturally modeled by combining continuous and discrete models. Such hybrid models may simplify the modeling task for complex systems, as well as increase simulation performance. Moreover, modern simulation engines can often efficiently generate simulation traces, but how do we know that the simulation results are correct? If we detect an error, is the error in the model or in the simulation itself? This paper discusses the problem of simulation safety, with a focus on hybrid modeling and simulation. In particular, two key aspects are studied: safe zero-crossing detection and deterministic hybrid event handling. The problems and solutions are discussed and partially implemented in Modelica and Ptolemy II.
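
    Safe zero-crossing detection can be illustrated by bisecting to localize the crossing once a sign change of the guard function is observed, rather than stepping over it. A minimal numeric sketch (the guard is an illustrative falling-ball height, not an example from the chapter):

```python
def locate_crossing(g, t_lo, t_hi, tol=1e-9):
    """Bisection on [t_lo, t_hi]; assumes g(t_lo) and g(t_hi) differ in sign."""
    g_lo = g(t_lo)
    while t_hi - t_lo > tol:
        mid = 0.5 * (t_lo + t_hi)
        if (g_lo < 0) == (g(mid) < 0):
            t_lo, g_lo = mid, g(mid)  # crossing lies in the upper half
        else:
            t_hi = mid                # crossing lies in the lower half
    return 0.5 * (t_lo + t_hi)

height = lambda t: 10.0 - 0.5 * 9.81 * t * t  # ball dropped from 10 m
print(locate_crossing(height, 0.0, 3.0))      # ~1.428 s: the impact event
```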

  • 28.
    Broman, David
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS.
    Siek, J. G.
    United States.
    Gradually typed symbolic expressions (2017). In: PEPM 2018 - Proceedings of the ACM SIGPLAN Workshop on Partial Evaluation and Program Manipulation, Co-located with POPL 2018, Association for Computing Machinery (ACM), 2017, p. 15-29. Conference paper (Refereed)
    Abstract [en]

    Embedding a domain-specific language (DSL) in a general purpose host language is an efficient way to develop a new DSL. Various kinds of languages and paradigms can be used as host languages, including object-oriented, functional, statically typed, and dynamically typed variants, all having their pros and cons. For deep embedding, statically typed languages enable early checking and potentially good DSL error messages, instead of reporting runtime errors. Dynamically typed languages, on the other hand, enable flexible transformations, thus avoiding extensive boilerplate code. In this paper, we introduce the concept of gradually typed symbolic expressions that mix static and dynamic typing for symbolic data. The key idea is to combine the strengths of dynamic and static typing in the context of deep embedding of DSLs. We define a gradually typed calculus <*>, formalize its type system and dynamic semantics, and prove type safety. We introduce a host language called Modelyze that is based on <*>, and evaluate the approach by embedding a series of equation-based domain-specific modeling languages, all within the domain of physical modeling and simulation.

  • 29.
    Carbone, Paris
    et al.
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Ewen, Stephan
    Fora, Gyula
    Haridi, Seif
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Richter, Stefan
    Tzoumas, Kostas
    State Management in Apache Flink (R): Consistent Stateful Distributed Stream Processing (2017). In: Proceedings of the VLDB Endowment, ISSN 2150-8097, E-ISSN 2150-8097, Vol. 10, no 12, p. 1718-1729. Article in journal (Refereed)
    Abstract [en]

    Stream processors are emerging in industry as an apparatus that drives analytical but also mission-critical services handling the core of persistent application logic. Thus, apart from scalability and low latency, a rising system need is first-class support for application state together with strong consistency guarantees, and adaptivity to cluster reconfigurations, software patches, and partial failures. Although prior systems research has addressed some of these specific problems, the practical challenge lies in how such guarantees can be materialized in a transparent, non-intrusive manner that relieves the user from unnecessary constraints. Such needs served as the main design principles of state management in Apache Flink, an open-source, scalable stream processor. We present Flink's core pipelined, in-flight mechanism which guarantees the creation of lightweight, consistent, distributed snapshots of application state, progressively, without impacting continuous execution. Consistent snapshots cover all needs for system reconfiguration, fault tolerance, and version management through coarse-grained rollback recovery. Application state is declared explicitly to the system, allowing efficient partitioning and transparent commits to persistent storage. We further present Flink's backend implementations and mechanisms for high availability, external state queries, and output commit. Finally, we demonstrate how these mechanisms behave in practice with metrics and large-deployment insights exhibiting the low performance trade-offs of our approach and the general benefits of exploiting asynchrony in continuous, yet sustainable system deployments.

  • 30.
    Carbone, Paris
    et al.
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Ewen, Stephan
    data Artisans.
    Fóra, Gyula
    King Digital Entertainment Limited.
    Haridi, Seif
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Richter, Stefan
    data Artisans.
    Tzoumas, Kostas
    data Artisans.
    State Management in Apache Flink: Consistent Stateful Distributed Stream Processing (2017). In: Proceedings of the VLDB Endowment, ISSN 2150-8097, E-ISSN 2150-8097, Vol. 10, p. 1718-1729, article id 12. Article in journal (Refereed)
    Abstract [en]

    Stream processors are emerging in industry as an apparatus that drives analytical but also mission-critical services handling the core of persistent application logic. Thus, apart from scalability and low latency, a rising system need is first-class support for application state together with strong consistency guarantees, and adaptivity to cluster reconfigurations, software patches, and partial failures. Although prior systems research has addressed some of these specific problems, the practical challenge lies in how such guarantees can be materialized in a transparent, non-intrusive manner that relieves the user from unnecessary constraints. Such needs served as the main design principles of state management in Apache Flink, an open-source, scalable stream processor.

    We present Flink's core pipelined, in-flight mechanism which guarantees the creation of lightweight, consistent, distributed snapshots of application state, progressively, without impacting continuous execution. Consistent snapshots cover all needs for system reconfiguration, fault tolerance, and version management through coarse-grained rollback recovery. Application state is declared explicitly to the system, allowing efficient partitioning and transparent commits to persistent storage. We further present Flink's backend implementations and mechanisms for high availability, external state queries, and output commit. Finally, we demonstrate how these mechanisms behave in practice with metrics and large-deployment insights exhibiting the low performance trade-offs of our approach and the general benefits of exploiting asynchrony in continuous, yet sustainable system deployments.

  • 31.
    Carbone, Paris
    et al.
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Gévay, G. E.
    Hermann, G.
    Katsifodimos, A.
    Soto, J.
    Markl, V.
    Haridi, Seif
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Large-scale data stream processing systems (2017). In: Handbook of Big Data Technologies, Springer International Publishing, 2017, p. 219-260. Chapter in book (Other academic)
    Abstract [en]

    In our data-centric society, online services, decision making, and other aspects are increasingly becoming heavily dependent on trends and patterns extracted from data. A broad class of societal-scale data management problems requires system support for processing unbounded data with low latency and high throughput. Large-scale data stream processing systems perceive data as infinite streams and are designed to satisfy such requirements. They have further evolved substantially both in terms of expressive programming model support and also efficient and durable runtime execution on commodity clusters. Expressive programming models offer convenient ways to declare continuous data properties and applied computations, while hiding details on how these data streams are physically processed and orchestrated in a distributed environment. Execution engines provide a runtime for such models further allowing for scalable yet durable execution of any declared computation. In this chapter we introduce the major design aspects of large scale data stream processing systems, covering programming model abstraction levels and runtime concerns. We then present a detailed case study on stateful stream processing with Apache Flink, an open-source stream processor that is used for a wide variety of processing tasks. Finally, we address the main challenges of disruptive applications that large-scale data streaming enables from a systemic point of view.

  • 32.
    Carbone, Paris
    et al.
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Traub, Jonas
    Katsifodimos, Asterios
    Haridi, Seif
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Markl, Volker
    Cutty: Aggregate Sharing for User-Defined Windows (2016). In: Proceedings of the 25th ACM International Conference on Information and Knowledge Management, Association for Computing Machinery (ACM), 2016, Vol. 24-28-, p. 1201-1210. Conference paper (Refereed)
    Abstract [en]

    Aggregation queries on data streams are evaluated over evolving and often overlapping logical views called windows. While the aggregation of periodic windows was extensively studied in the past through the use of aggregate-sharing techniques such as Panes and Pairs, little to no work has gone into optimizing the aggregation of very common, non-periodic windows. Typical examples of non-periodic windows are punctuations and sessions, which can implement complex business logic and are often expressed as user-defined operators on platforms such as Google Dataflow or Apache Storm. The aggregation of such non-periodic or user-defined windows either falls back to expensive, best-effort aggregate-sharing methods, or is not optimized at all.

    In this paper we present a technique to perform efficient aggregate sharing for data stream windows, which are declared as user-defined functions (UDFs) and can contain arbitrary business logic. To this end, we first introduce the concept of User-Defined Windows (UDWs), a simple, UDF-based programming abstraction that allows users to programmatically define custom windows. We then define semantics for UDWs, based on which we design Cutty, a low-cost aggregate sharing technique. Cutty improves on and outperforms the state of the art for aggregate sharing on single and multiple queries. Moreover, it enables aggregate sharing for a broad class of non-periodic UDWs. We implemented our techniques on Apache Flink, an open source stream processing system, and performed experiments demonstrating orders-of-magnitude reductions in aggregation costs compared to the state of the art.
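
    A minimal sketch of the aggregate-sharing principle this line of work builds on (our simplification, with sum as the aggregate and hypothetical slice boundaries): cut the stream at window begin/end points, keep one partial aggregate per slice, and assemble each window from the shared partials instead of re-aggregating raw events.

```python
def slice_aggregates(events, boundaries):
    """events: (timestamp, value) pairs; boundaries: sorted slice cut points."""
    partials, it = [], iter(sorted(events))
    event = next(it, None)
    for lo, hi in zip(boundaries, boundaries[1:]):
        while event is not None and event[0] < lo:  # drop events before slice
            event = next(it, None)
        total = 0
        while event is not None and event[0] < hi:
            total += event[1]
            event = next(it, None)
        partials.append(((lo, hi), total))
    return partials

def window_sum(partials, start, end):
    """Any window aligned to slice boundaries reuses the shared partials."""
    return sum(total for (lo, hi), total in partials if start <= lo and hi <= end)

parts = slice_aggregates([(1, 10), (3, 5), (6, 7), (9, 2)], [0, 4, 8, 12])
print(window_sum(parts, 0, 8))  # 22: assembled from the [0,4) and [4,8) partials
```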

  • 33.
    Castañeda Lozano, Roberto
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS. RISE SICS (Swedish Institute of Computer Science).
    Carlsson, Mats
    RISE SICS (Swedish Institute of Computer Science).
    Hjort Blindell, Gabriel
    KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS.
    Schulte, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS.
    Combinatorial Register Allocation and Instruction Scheduling (2018). Report (Other academic)
    Abstract [en]

    This paper introduces a combinatorial optimization approach to register allocation and instruction scheduling, two central compiler problems. Combinatorial optimization has the potential to solve these problems optimally and to exploit processor-specific features readily. Our approach is the first to leverage this potential in practice: it captures the complete set of program transformations used in state-of-the-art compilers, scales to medium-sized functions of up to 1000 instructions, and generates executable code. This level of practicality is reached by using constraint programming, a particularly suitable combinatorial optimization technique. Unison, the implementation of our approach, is open source, used in industry, and integrated with the LLVM toolchain.

    An extensive evaluation of estimated speed, code size, and scalability confirms that Unison generates better code than LLVM while scaling to medium-sized functions. The evaluation uses systematically selected benchmarks from MediaBench and SPEC CPU2006 and different processor architectures (Hexagon, ARM, MIPS). Mean estimated speedup ranges from 1% to 9.3% and mean code size reduction ranges from 0.8% to 3.9% for the different architectures. Executing the generated code on Hexagon confirms that the estimated speedup indeed results in actual speedup. Given a fixed time limit, Unison solves optimally functions of up to 647 instructions, delivers improved solutions for functions of up to 874 instructions, and achieves more than 85% of the potential speed for 90% of the functions on Hexagon.

    The results in this paper show that our combinatorial approach can be used in practice to trade compilation time for code quality beyond the usual compiler optimization levels, fully exploit processor-specific features, and identify improvement opportunities in existing heuristic algorithms.
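
    For contrast with the constraint-programming model used in this work, here is a toy brute-force version of just the core register-allocation constraint: overlapping live ranges must receive distinct registers. Spilling, coalescing, and instruction scheduling, which the paper's model integrates, are omitted, and the live ranges are hypothetical.

```python
from itertools import product

def overlaps(a, b):
    """Live ranges as half-open [start, end) intervals."""
    return a[0] < b[1] and b[0] < a[1]

def allocate(live_ranges, registers):
    names = list(live_ranges)
    for assignment in product(registers, repeat=len(names)):
        if all(assignment[i] != assignment[j]
               or not overlaps(live_ranges[names[i]], live_ranges[names[j]])
               for i in range(len(names)) for j in range(i + 1, len(names))):
            return dict(zip(names, assignment))
    return None  # no conflict-free assignment: spilling would be required

print(allocate({"v1": (0, 4), "v2": (2, 6), "v3": (5, 8)}, ["r0", "r1"]))
# {'v1': 'r0', 'v2': 'r1', 'v3': 'r0'}
```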

  • 34.
    Castañeda Lozano, Roberto
    et al.
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS. SICS (Swedish Institute of Computer Science).
    Carlsson, Mats
    SICS (Swedish Institute of Computer Science).
    Hjort Blindell, Gabriel
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Schulte, Christian
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Register allocation and instruction scheduling in Unison (2016). In: Proceedings of CC 2016: The 25th International Conference on Compiler Construction, Association for Computing Machinery (ACM), 2016, p. 263-264. Conference paper (Refereed)
    Abstract [en]

    This paper describes Unison, a simple, flexible, and potentially optimal software tool that performs register allocation and instruction scheduling in integration using combinatorial optimization. The tool can be used as an alternative or as a complement to traditional approaches, which are fast but complex and suboptimal. Unison is most suitable whenever high-quality code is required and longer compilation times can be tolerated (such as in embedded systems or library releases), or the targeted processors are so irregular that traditional compilers fail to generate satisfactory code.

  • 35.
    Castañeda Lozano, Roberto
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS. RISE SICS (Swedish Institute of Computer Science).
    Schulte, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS.
    Survey on Combinatorial Register Allocation and Instruction Scheduling (2018). In: ACM Computing Surveys, ISSN 0360-0300, E-ISSN 1557-7341. Article in journal (Refereed)
    Abstract [en]

    Register allocation (mapping variables to processor registers or memory) and instruction scheduling (reordering instructions to increase instruction-level parallelism) are essential tasks for generating efficient assembly code in a compiler. In the last three decades, combinatorial optimization has emerged as an alternative to traditional, heuristic algorithms for these two tasks. Combinatorial optimization approaches can deliver optimal solutions according to a model, can precisely capture trade-offs between conflicting decisions, and are more flexible at the expense of increased compilation time.

    This paper provides an exhaustive literature review and a classification of combinatorial optimization approaches to register allocation and instruction scheduling, with a focus on the techniques that are most applied in this context: integer programming, constraint programming, partitioned Boolean quadratic programming, and enumeration. Researchers in compilers and combinatorial optimization can benefit from identifying developments, trends, and challenges in the area; compiler practitioners may discern opportunities and grasp the potential benefit of applying combinatorial optimization.

  • 36.
    Chen, Chen
    et al.
    Middleware System Research Group, University of Toronto.
    Tock, Yoav
    IBM Research - Haifa.
    Girdzijauskas, Sarunas
    KTH, School of Electrical Engineering and Computer Science (EECS).
    BeaConvey: Co-Design of Overlay and Routing for Topic-based Publish/Subscribe on Small-World Networks (2018). Conference paper (Refereed)
  • 37. Choi, Eun-Hye
    et al.
    Artho, Cyrille
    KTH.
    Kitamura, Takashi
    Mizuno, Osamu
    Yamada, Akihisa
    Distance-Integrated Combinatorial Testing (2016). In: 27th IEEE Int. Symposium on Software Reliability Engineering (ISSRE 2016), IEEE conference proceedings, 2016, p. 93-104. Conference paper (Refereed)
    Abstract [en]

    This paper proposes a novel approach to combinatorial test generation which increases not only the number of new combinations but also the distance between test cases. We applied our distance-integrated approach to a state-of-the-art greedy algorithm for traditional combinatorial test generation, using two distance metrics: Hamming distance and a modified chi-square distance. Experimental results on numerous benchmark models show that combinatorial test suites generated by our approach using both distance metrics can improve interaction coverage for higher interaction strengths with low computational overhead.
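
    The distance ingredient is easy to isolate: when growing a suite greedily, prefer candidates far from the tests already chosen. A minimal sketch with Hamming distance (candidate generation and the combination-coverage objective of the full algorithm are omitted):

```python
def hamming(t1, t2):
    """Number of parameter positions in which two test cases differ."""
    return sum(a != b for a, b in zip(t1, t2))

def pick_next(candidates, suite):
    """Choose the candidate maximizing its minimum distance to the suite."""
    return max(candidates,
               key=lambda c: min((hamming(c, t) for t in suite), default=0))

suite = [("a", 0, "x")]
print(pick_next([("a", 0, "y"), ("b", 1, "y")], suite))  # ('b', 1, 'y')
```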

  • 38. Choi, Eun-Hye
    et al.
    Kawabata, Shunya
    Mizuno, Osamu
    Artho, Cyrille
    Kitamura, Takashi
    Test Effectiveness Evaluation of Prioritized Combinatorial Testing: A Case Study (2016). In: 2016 IEEE Int. Conf. on Software Quality, Reliability and Security (QRS 2016), IEEE conference proceedings, 2016, p. 61-68. Conference paper (Refereed)
    Abstract [en]

    Combinatorial testing is a widely used technique to detect system interaction failures. To improve test effectiveness with given priority weights of parameter values in a system under test, prioritized combinatorial testing constructs test suites where highly weighted parameter values appear earlier or more frequently. Such order-focused and frequency-focused combinatorial test generation algorithms have been evaluated using metrics called weight coverage and KL divergence, but so far not sufficiently with respect to fault detection effectiveness. We evaluate the fault detection effectiveness on a collection of open source utilities, applying prioritized combinatorial test generation and investigating its correlation with weight coverage and KL divergence.

  • 39. Ciccozzi, F.
    et al.
    Di Ruscio, D.
    Malavolta, I.
    Pelliccione, P.
    Tumova, Jana
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Engineering the software of robotic systems2017In: Proceedings - 2017 IEEE/ACM 39th International Conference on Software Engineering Companion, ICSE-C 2017, Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 507-508, article id 7965406Conference paper (Refereed)
    Abstract [en]

    The production of software for robotic systems is often case-specific and does not fully follow established engineering approaches. Systematic approaches, methods, models, and tools are pivotal for the creation of robotic systems for real-world applications and turn-key solutions. Well-defined (software) engineering approaches are considered the 'make or break' factor in the development of complex robotic systems. The shift towards well-defined engineering approaches will stimulate component supply chains and significantly reshape the robotics marketplace. The goal of this technical briefing is to provide an overview of the state of the art and practice concerning solutions and open challenges in the engineering of software required to develop and manage robotic systems. Model-Driven Engineering (MDE) is discussed as a promising technology to raise the level of abstraction, promote reuse, facilitate integration, boost automation, and enable early analysis in such a complex domain.

  • 40.
    Corcoran, Diarmuid
    et al.
    KTH. Ericsson AB.
    Andimeh, Loghman
    Ericsson AB.
    Ermedahl, Andreas
    Ericsson AB.
    Kreuger, Per
    RISE SICS AB.
    Schulte, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS.
    Data Driven Selection of DRX for Energy Efficient 5G RAN2017In: 2017 13th International Conference on Network and Service Management (CNSM), IEEE, 2017Conference paper (Refereed)
    Abstract [en]

    The number of connected mobile devices is increasing rapidly, with more than 10 billion expected by 2022. Their total aggregate energy consumption poses a significant concern to society. The current 3GPP (3rd Generation Partnership Project) LTE/LTE-Advanced standard incorporates an energy saving technique called discontinuous reception (DRX), and 5G is expected to use an evolved variant of this scheme. In general, selecting DRX parameters for an individual device is non-trivial. This paper describes how to improve the energy efficiency of mobile devices by selecting DRX based on the traffic profile of each device. Our approach uses a two-phase data-driven strategy that tunes the selection of DRX parameters based on a smart, fast energy model. The first phase selects, off-line, viable DRX combinations for a particular traffic mix; the second phase selects DRX on-line from this viable list. The method attempts to guarantee that latency is never worse than a chosen threshold; alternatively, longer device battery life can be traded against increased latency. We built a lab prototype of the system to verify that the technique works and scales on a real LTE system, and designed a sophisticated traffic generator based on actual user data traces. Complementary verification was performed by exhaustive off-line simulations on recorded LTE network data. Our approach shows significant device energy savings which, aggregated over billions of devices, have the potential to make a real contribution to green, energy-efficient networks.
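    A minimal sketch of the two-phase strategy under toy assumptions (the DRX parameter grid and the energy and latency models below are invented; the paper learns and validates its models from real traffic data):

```python
# Two-phase DRX selection: offline filtering of viable configurations,
# then online per-device selection using a fast energy model.

# A DRX configuration: sleep cycle length and on-duration, in milliseconds.
DRX_CANDIDATES = [(cycle, on) for cycle in (40, 80, 160, 320) for on in (4, 10)]

def energy(cycle, on, arrivals_per_s):
    # Toy model: fraction of time awake plus a wake-up cost per arrival.
    return on / cycle + 0.001 * arrivals_per_s * cycle

def worst_latency(cycle, on):
    # A packet arriving just after the on-duration waits almost a full cycle.
    return cycle - on

def viable(latency_budget_ms):
    """Phase 1 (offline): keep configurations whose worst-case latency
    respects the device's latency threshold."""
    return [c for c in DRX_CANDIDATES if worst_latency(*c) <= latency_budget_ms]

def select(arrivals_per_s, latency_budget_ms):
    """Phase 2 (online): cheapest viable configuration for the device's
    observed traffic profile."""
    return min(viable(latency_budget_ms),
               key=lambda c: energy(*c, arrivals_per_s))

print(select(arrivals_per_s=0.5, latency_budget_ms=100))   # chatty device
print(select(arrivals_per_s=0.01, latency_budget_ms=300))  # mostly idle
```

    The trade-off from the abstract is directly visible: relaxing the latency budget admits longer sleep cycles, and for a mostly idle device the model then picks one, extending battery life at the cost of latency.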

  • 42.
    Cremona, Fabio
    et al.
    University of California, Berkeley; Scuola Superiore Sant’Anna; ALES.
    Lohstroh, Marten
    University of California, Berkeley.
    Broman, David
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Di Natale, Marco
    Scuola Superiore Sant’Anna.
    Lee, Edward
    University of California, Berkeley.
    Tripakis, Stavros
    University of California, Berkeley and Aalto University.
    Step Revision in Hybrid Co-simulation with FMI2016In: Proceedings of the 14th ACM-IEEE International Conference on Formal Methods and Models for System Design (MEMOCODE), IEEE conference proceedings, 2016Conference paper (Refereed)
    Abstract [en]

    This paper presents a master algorithm for co-simulation of hybrid systems using the Functional Mock-up Interface (FMI) standard. Our algorithm introduces step revision to achieve an accurate and precise handling of mixtures of continuous-time and discrete-event signals, particularly in the situation where components are unable to accurately extrapolate their input. Step revision provides an efficient means to respect the error bounds of numerical approximation algorithms that operate inside co-simulated FMUs. We first explain the most fundamental issues associated with hybrid co-simulation and analyze them in the framework of FMI. We demonstrate the necessity for step revision to address some of these issues and formally describe a master algorithm that supports it. Finally, we present experimental results obtained through our reference implementation that is part of our publicly available open-source toolchain called FIDE.
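    The sketch below shows step revision in an abstract master loop (a simplification, not the paper's algorithm; a real FMI 2.0 master would call fmi2DoStep and use fmi2GetFMUstate/fmi2SetFMUstate for the rollback):

```python
# Co-simulation master with step revision: if any component rejects part
# of a communication step, all components are rolled back and the step is
# redone with the size everyone can accept.

class ToyComponent:
    """Stand-in for an FMU integrating x' = -x that refuses to step past
    an internal event time (where its solver must stop exactly)."""
    def __init__(self, x0, event_time):
        self.x, self.t, self.event_time = x0, 0.0, event_time
    def get_state(self):
        return (self.x, self.t)
    def set_state(self, state):
        self.x, self.t = state
    def do_step(self, t, h):
        h_acc = min(h, self.event_time - t) if t < self.event_time else h
        self.x += -self.x * h_acc      # one explicit-Euler step
        self.t = t + h_acc
        return h_acc                   # may be smaller than requested

def master(components, t_end, h_default):
    t = 0.0
    while t < t_end:
        h = min(h_default, t_end - t)
        while True:
            snapshots = [c.get_state() for c in components]
            h_ok = min(c.do_step(t, h) for c in components)
            if h_ok >= h:
                break
            # Step revision: roll everyone back, then retry with the
            # accepted size so all components share the same
            # communication point.
            for c, s in zip(components, snapshots):
                c.set_state(s)
            h = h_ok
        t += h

a, b = ToyComponent(1.0, event_time=0.375), ToyComponent(2.0, event_time=0.8125)
master([a, b], t_end=1.0, h_default=0.25)
print(a.t, b.t)  # both reach t = 1.0, having stopped exactly at their events
```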

  • 43. Danglot, Benjamin
    et al.
    Preux, Philippe
    Baudry, Benoit
    Monperrus, Martin
    KTH, School of Electrical Engineering and Computer Science (EECS), Theoretical Computer Science, TCS.
    Correctness attraction: a study of stability of software behavior under runtime perturbation2018In: Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 23, no 4, p. 2086-2119Article in journal (Refereed)
    Abstract [en]

    Can the execution of software be perturbed without breaking the correctness of the output? In this paper, we devise a protocol to answer this question from a novel perspective. In an experimental study, we observe that many perturbations do not break correctness in ten subject programs. We call this phenomenon “correctness attraction”. The uniqueness of this protocol is that it considers a systematic exploration of the perturbation space as well as perfect oracles to determine the correctness of the output. As a result, our findings on the stability of software under execution perturbations have a level of validity that has not been reported before in the scarce related work. A qualitative manual analysis enables us to set up the first taxonomy of the reasons behind correctness attraction.
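    A tiny re-creation of the protocol's spirit, with an invented subject program and a perfect oracle (the protocol's "PONE" perturbation model adds 1 to a single integer expression evaluation per run and then checks the output):

```python
# Systematic one-shot integer perturbation ("PONE"-style) with a perfect
# oracle: perturb each integer expression evaluation once, in its own run,
# and count how often the output is still correct.

class Perturber:
    def __init__(self, target):
        self.count, self.target = 0, target
    def p(self, value):
        """Route every integer expression through here; add 1 only on the
        target-th evaluation of this run."""
        self.count += 1
        return value + 1 if self.count == self.target else value

def subject(xs, pb):
    """Toy subject program: index of the maximum element."""
    best = 0
    for i in range(1, len(xs)):
        if pb.p(xs[i]) > pb.p(xs[best]):
            best = i
    return best

xs = [3, 1, 4, 1, 5]
oracle = 4                       # perfect oracle: index of the maximum
n_points = 2 * (len(xs) - 1)     # integer expression evaluations per run
correct = sum(subject(xs, Perturber(k)) == oracle
              for k in range(1, n_points + 1))
print(f"{correct}/{n_points} perturbed runs still correct")
```

    On this toy subject, 7 of the 8 perturbed runs still return the correct index; only one of the eight perturbations changes the result. This kind of stability under perturbation is what the paper names correctness attraction.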

  • 44. Danglot, Benjamin
    et al.
    Preux, Philippe
    Baudry, Benoit
    Monperrus, Martin
    KTH, School of Electrical Engineering and Computer Science (EECS), Theoretical Computer Science, TCS.
    Correctness Attraction: A Study of Stability of Software Behavior Under Runtime Perturbation2018In: Proceedings 2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE), IEEE, 2018, p. 481-481Conference paper (Refereed)
  • 45.
    de C. Gomes, Pedro
    et al.
    KTH.
    Gurov, Dilian
    KTH.
    Huisman, M.
    Artho, Cyrille
    KTH.
    Specification and verification of synchronization with condition variables2018In: Science of Computer Programming, ISSN 0167-6423, E-ISSN 1872-7964, Vol. 163, p. 174-189Article in journal (Refereed)
    Abstract [en]

    This paper proposes a technique to specify and verify the correct synchronization of concurrent programs with condition variables. We define correctness of synchronization as the liveness property: “every thread synchronizing under a set of condition variables eventually exits the synchronization block”, under the assumption that every such thread eventually reaches its synchronization block. Our technique does not avoid the combinatorial explosion of interleavings of thread behaviours. Instead, we alleviate it by abstracting away all details that are irrelevant to the synchronization behaviour of the program, which is typically significantly smaller than its overall behaviour. First, we introduce SyncTask, a simple imperative language to specify parallel computations that synchronize via condition variables. We consider a SyncTask program to have a correct synchronization iff it terminates. Further, to relieve the programmer of the burden of providing specifications in SyncTask, we introduce an economical annotation scheme for Java programs to assist the automated extraction of SyncTask programs capturing the synchronization behaviour of the underlying program. We show that every Java program annotated according to the scheme (and satisfying the assumption mentioned above) has a correct synchronization iff its corresponding SyncTask program terminates. We then show how to transform the verification of termination of the SyncTask program into a standard reachability problem over Coloured Petri Nets that is efficiently solvable by existing Petri Net analysis tools. Both the SyncTask program extraction and the generation of Petri Nets are implemented in our STAVE tool. We evaluate the proposed framework on a number of test cases.
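    An illustrative reduction in the spirit of the abstract (an invented encoding, not the one STAVE generates): a waiter and a signaller synchronizing via one condition variable are modelled as a small Petri net, and exhaustive exploration of its markings exposes the classic lost-wakeup violation of the liveness property above.

```python
# Condition-variable synchronization as Petri-net reachability. The
# "notify_miss" transition models a notification sent before the waiter
# waits: the signal is discarded and the waiter later blocks forever.
from collections import deque

initial = {"lock": 1, "w_running": 1, "w_waiting": 0, "w_done": 0,
           "s_ready": 1, "s_done": 0}

transitions = {
    # name: (tokens consumed, tokens produced)
    "wait":        ({"w_running": 1, "lock": 1}, {"w_waiting": 1, "lock": 1}),
    "notify_hit":  ({"s_ready": 1, "lock": 1, "w_waiting": 1},
                    {"s_done": 1, "lock": 1, "w_done": 1}),
    "notify_miss": ({"s_ready": 1, "lock": 1, "w_running": 1},
                    {"s_done": 1, "lock": 1, "w_running": 1}),
}

def enabled(m, pre):
    return all(m[p] >= n for p, n in pre.items())

def fire(m, pre, post):
    m = dict(m)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] += n
    return m

# Breadth-first exploration of all reachable markings. A terminal marking
# in which the threads are not both done means a stuck waiter: the
# liveness property "every thread eventually exits" is violated.
seen, queue = set(), deque([tuple(sorted(initial.items()))])
while queue:
    frozen = queue.popleft()
    if frozen in seen:
        continue
    seen.add(frozen)
    m = dict(frozen)
    succ = [fire(m, pre, post)
            for pre, post in transitions.values() if enabled(m, pre)]
    if not succ and not (m["w_done"] and m["s_done"]):
        print("liveness violated, stuck at:", {p: n for p, n in m.items() if n})
    queue.extend(tuple(sorted(s.items())) for s in succ)
```

    Removing notify_miss makes the fully-done marking the only terminal one, the analogue of a correctly synchronized program whose extracted model terminates.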

  • 46.
    De Carvalho Gomes, Pedro
    et al.
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.