1 - 23 of 23
  • 1.
    Balliu, Musard
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Bastys, Iulia
    Chalmers Univ Technol, Dept Comp Sci & Engn, Gothenburg, Sweden..
    Sabelfeld, Andrei
    Chalmers Univ Technol, Dept Comp Sci & Engn, Gothenburg, Sweden..
    Securing IoT Apps. 2019. In: IEEE Security and Privacy, ISSN 1540-7993, E-ISSN 1558-4046, Vol. 17, no 5, p. 22-29. Article in journal (Refereed)
    Abstract [en]

    Users increasingly rely on Internet of Things (IoT) apps to manage their digital lives through the overwhelming diversity of IoT services and devices. Are the IoT app platforms doing enough to protect the privacy and security of their users? By securing IoT apps, how can we help users reclaim control over their data?

  • 2. Baumann, Christoph
    et al.
    Schwarz, Oliver
    Dam, Mads
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    On the verification of system-level information flow properties for virtualized execution platforms. 2019. In: Journal of Cryptographic Engineering, ISSN 2190-8508, Vol. 9, no 3, p. 243-261. Article in journal (Refereed)
    Abstract [en]

    The security of embedded systems can be dramatically improved through the use of formally verified isolation mechanisms such as separation kernels, hypervisors, or microkernels. For trustworthiness, particularly for system-level behavior, the verifications need precise models of the underlying hardware. Such models are hard to attain, highly complex, and proofs of their security properties may not easily apply to similar but different platforms. This may render verification economically infeasible. To address these issues, we propose a compositional top-down approach to embedded system specification and verification, where the system-on-chip is modeled as a network of distributed automata communicating via paired synchronous message passing. Using abstract specifications for each component makes it possible to delay the development of detailed models for cores, devices, etc., while still being able to verify high-level security properties like integrity and confidentiality, and to soundly refine the result for different instantiations of the abstract components at a later stage. As a case study, we apply this methodology to the verification of information flow security for an industry-scale security-oriented hypervisor on the ARMv8-A platform and report on the complete verification of guest mode security properties in the HOL4 theorem prover.

  • 3.
    Boman, Magnus
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    ben Abdesslem, Fehmi
    Forsell, Erik
    Gillblad, Daniel
    Görnerup, Olof
    Isacsson, Nils
    Sahlgren, Magnus
    Kaldo, Viktor
    Learning machines in Internet-delivered psychological treatment. 2019. In: Progress in Artificial Intelligence, ISSN 2192-6352, Vol. 8, no 4, p. 475-485. Article in journal (Refereed)
    Abstract [en]

    A learning machine, in the form of a gating network that governs a finite number of different machine learning methods, is described at the conceptual level with examples of concrete prediction subtasks. A historical data set with data from over 5000 patients in Internet-based psychological treatment will be used to equip healthcare staff with decision support for questions pertaining to ongoing and future cases in clinical care for depression, social anxiety, and panic disorder. The organizational knowledge graph is used to inform the weight adjustment of the gating network and for routing subtasks to the different methods employed locally for prediction. The result is an operational model for assisting therapists in their clinical work, about to be subjected to validation in a clinical trial.
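    A gating network of the kind described above can be illustrated with a minimal, hypothetical sketch (not the authors' implementation): a softmax gate assigns weights to a finite set of prediction methods and the final prediction is their weighted combination. The expert functions and gate scores below are toy placeholders; in the paper the weights are informed by the organizational knowledge graph.

        import math

        def softmax(scores):
            """Turn raw gate scores into normalized weights."""
            exps = [math.exp(s) for s in scores]
            total = sum(exps)
            return [e / total for e in exps]

        def gated_prediction(experts, gate_scores, features):
            """Weighted combination of the experts' predictions."""
            weights = softmax(gate_scores)
            predictions = [expert(features) for expert in experts]
            return sum(w * p for w, p in zip(weights, predictions))

        # Hypothetical usage with two toy "experts" returning fixed risk scores.
        experts = [lambda x: 0.7, lambda x: 0.4]
        print(gated_prediction(experts, gate_scores=[1.0, 0.2], features=None))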

  • 4.
    Danglot, Benjamin
    et al.
    Inria Lille Nord Europe, Parc Sci Haute Borne 40, Ave Halley, Bat A, Pk Plaza, F-59650 Villeneuve d'Ascq, France.
    Vera-Perez, Oscar Luis
    Inria Rennes Bretagne Atlantique, Campus Beaulieu,263 Ave Gen Leclerc, F-35042 Rennes, France..
    Baudry, Benoit
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Monperrus, Martin
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Automatic test improvement with DSpot: a study with ten mature open-source projects. 2019. In: Journal of Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 24, no 4, p. 2603-2635. Article in journal (Refereed)
    Abstract [en]

    In the literature, there is a rather clear segregation between tests written manually by developers and automatically generated ones. In this paper, we explore a third solution: to automatically improve existing test cases written by developers. We present the concept, design and implementation of a system called DSpot, which takes developer-written test cases as input (JUnit tests in Java) and synthesizes improved versions of them as output. Those test improvements are given back to developers as patches or pull requests that can be directly integrated in the main branch of the test code base. We have evaluated DSpot in a deep, systematic manner over 40 real-world unit test classes from 10 notable open-source software projects. We have amplified all test methods from those 40 unit test classes. In 26/40 cases, DSpot is able to automatically improve the test under study, by triggering new behaviors and adding new valuable assertions. Next, for the ten projects under consideration, we have proposed a test improvement automatically synthesized by DSpot to the lead developers. In total, 13/19 proposed test improvements were accepted by the developers and merged into the main code base. This shows that DSpot is capable of automatically improving unit tests in real-world, large-scale Java software.

  • 5.
    Danglot, Benjamin
    et al.
    INRIA, Lille, France..
    Vera-Perez, Oscar
    INRIA, Rennes, France..
    Yu, Zhongxing
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Zaidman, Andy
    Delft Univ Technol, Delft, Netherlands..
    Monperrus, Martin
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Baudry, Benoit
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    A snowballing literature study on test amplification. 2019. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 157, article id UNSP 110398. Article in journal (Refereed)
    Abstract [en]

    The adoption of agile approaches has put an increased emphasis on testing, resulting in extensive test suites. These suites include a large number of tests, in which developers embed knowledge about meaningful input data and expected properties as oracles. This article surveys works that exploit this knowledge to enhance manually written tests with respect to an engineering goal (e.g., improve coverage or refine fault localization). While these works rely on various techniques and address various goals, we believe they form an emerging and coherent field of research, which we coin "test amplification". We devised a first set of papers from DBLP, searching for all papers containing "test" and "amplification" in their title. We reviewed the 70 papers in this set and selected the 4 papers that fit the definition of test amplification. We used them as the seeds for our snowballing study and systematically followed the citation graph. This study is the first to draw a comprehensive picture of the different engineering goals proposed in the literature for test amplification. We believe that this survey will help researchers and practitioners entering this new field to understand more quickly and more deeply the intuitions, concepts and techniques used for test amplification.
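    The citation-graph traversal at the core of a snowballing study can be sketched generically as follows (a hypothetical illustration with made-up data structures; a real snowballing study also screens every candidate against inclusion criteria before adding it).

        from collections import deque

        def snowball(seed_papers, cites, cited_by):
            """Follow the citation graph outward from a set of seed papers.

            cites / cited_by: hypothetical dicts mapping a paper id to the ids it
            references / the ids that reference it (backward / forward snowballing).
            """
            selected = set(seed_papers)
            queue = deque(seed_papers)
            while queue:
                paper = queue.popleft()
                for candidate in cites.get(paper, []) + cited_by.get(paper, []):
                    if candidate not in selected:  # in practice: screen for relevance here
                        selected.add(candidate)
                        queue.append(candidate)
            return selected

        # Toy example: one seed paper and a tiny citation graph.
        print(snowball({"seed"}, cites={"seed": ["a"]}, cited_by={"seed": ["b"], "a": ["c"]}))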

  • 6.
    Ghoorchian, Kambiz
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Sahlgren, Magnus
    Research Institute of Sweden (RISE).
    GDTM: Graph-based Dynamic Topic Models. In: Progress in Artificial Intelligence, ISSN 2192-6352. Article in journal (Refereed)
    Abstract [en]

    Dynamic Topic Modeling (DTM) is the ultimate solution for extracting topics from short texts generated in Online Social Networks (OSNs) like Twitter. A DTM solution is required to be scalable and to be able to account for sparsity in short texts and dynamicity of topics. Current solutions combine probabilistic mixture models like Dirichlet Multinomial or Pitman-Yor Process with approximate inference approaches like Gibbs Sampling and Stochastic Variational Inference to, respectively, account for dynamicity and scalability in DTM. However, these solutions rely on weak probabilistic language models, which do not account for sparsity in short texts. In addition, their inference is based on iterative optimization algorithms, which have scalability issues when it comes to DTM. We present GDTM, a single-pass graph-based DTM algorithm, to solve the problem. GDTM combines a context-rich and incremental feature representation model, called Random Indexing (RI), with a novel online graph partitioning algorithm to address scalability and dynamicity. In addition, GDTM uses a rich language modeling approach based on the Skip-gram technique to account for sparsity. We run multiple experiments over a large-scale Twitter dataset to analyze the accuracy and scalability of GDTM and compare the results with four state-of-the-art approaches. The results show that GDTM outperforms the best approach by 11% on accuracy and runs an order of magnitude faster while producing 4 times better topic quality on standard evaluation metrics.

  • 7.
    Giaretta, Lodovico
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Girdzijauskas, Sarunas
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Gossip Learning: Off the Beaten Path. 2019. Conference paper (Refereed)
    Abstract [en]

    The growing computational demands of model training tasks and the increased privacy awareness of consumers call for the development of new techniques in the area of machine learning. Fully decentralized approaches have been proposed, but are still in early research stages. This study analyses gossip learning, one of these state-of-the-art decentralized machine learning protocols, which promises high scalability and privacy preservation, with the goal of assessing its applicability to real-world scenarios.

    Previous research on gossip learning presents strong and often unrealistic assumptions on the distribution of the data, the communication speeds of the devices and the connectivity among them. Our results show that lifting these requirements can, in certain scenarios, lead to slow convergence of the protocol or even unfair bias in the produced models. This paper identifies the conditions in which gossip learning can and cannot be applied, and introduces extensions that mitigate some of its limitations.
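    As a rough illustration of the protocol family studied above (a generic sketch under simplifying assumptions, not the authors' implementation): each node takes local training steps on its own data and periodically averages its model parameters with those of a random peer. The toy objective and two-dimensional models below are placeholders.

        import random

        def toy_gradient(model, data):
            # Toy objective: pull the weights toward the node's local data point.
            return [w - x for w, x in zip(model, data)]

        def local_step(model, data, lr=0.1):
            return [w - lr * g for w, g in zip(model, toy_gradient(model, data))]

        def gossip_round(nodes):
            """Every node sends its model to one random peer; the receiver merges
            the incoming model with its own by coordinate-wise averaging."""
            for sender in nodes:
                receiver = random.choice([n for n in nodes if n is not sender])
                receiver["model"] = [(a + b) / 2
                                     for a, b in zip(receiver["model"], sender["model"])]

        # Hypothetical setup: three nodes, each with a 2-dimensional model and local data.
        nodes = [{"model": [0.0, 0.0], "data": [1.0, 2.0]},
                 {"model": [1.0, 1.0], "data": [2.0, 0.0]},
                 {"model": [2.0, 3.0], "data": [0.0, 1.0]}]
        for _ in range(10):
            for node in nodes:
                node["model"] = local_step(node["model"], node["data"])
            gossip_round(nodes)
        print([node["model"] for node in nodes])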

  • 8. Lei, L.
    et al.
    You, L.
    He, Qing
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering.
    Vu, T. X.
    Chatzinotas, S.
    Yuan, D.
    Ottersten, Björn
    University of Luxembourg, Luxembourg City, 1855, Luxembourg.
    Learning-Assisted Optimization for Energy-Efficient Scheduling in Deadline-Aware NOMA Systems. 2019. In: IEEE Transactions on Green Communications and Networking, ISSN 2473-2400, Vol. 3, no 3, p. 615-627, article id 8657758. Article in journal (Refereed)
    Abstract [en]

    In this paper, we study a class of minimum-energy scheduling problems in non-orthogonal multiple access (NOMA) systems. NOMA is adopted to enable efficient channel utilization and interference mitigation, such that base stations can consume minimal energy to empty their queued data in the presence of transmission deadlines, and each user can obtain all the requested data in time. Due to the high computational complexity in resource scheduling and the stringent execution-time constraints in practical systems, providing a time-efficient and high-quality solution for 5G real-time systems is challenging. Conventional iterative optimization approaches may exhibit their limitations in supporting online optimization. We herein explore a viable alternative and develop a learning-assisted optimization framework to improve the computational efficiency while retaining competitive energy-saving performance. The idea is to use deep-learning-based predictions to accelerate the optimization process in conventional optimization methods for tackling the NOMA resource scheduling problems. In numerical studies, the proposed optimization framework demonstrates high computational efficiency. Its computational time is insensitive to the input size. The framework is able to provide optimal solutions as long as the learning-based predictions satisfy a derived optimality condition. For the general cases with imperfect predictions, the algorithmic solution is error-tolerant and performance-scalable, keeping the energy-saving performance close to the global optimum.

  • 9.
    Li, Jing-Rebecca
    et al.
    INRIA Saclay, Equipe DEFI, CMAP, Ecole Polytechnique, Route de Saclay, 91128 Palaiseau Cedex, France.
    Nguyen, Van Dang
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Tran, Try Nguyen
    INRIA Saclay, Equipe DEFI, CMAP, Ecole Polytechnique, Route de Saclay, 91128 Palaiseau Cedex, France.
    Valdman, Jan
    Institute of Mathematics, Faculty of Science, University of South Bohemia, Ceske Budejovice and Institute of Information Theory and Automation of the ASCR, Prague, Czech Republic.
    Trang, Bang Cong
    INRIA Saclay, Equipe DEFI, CMAP, Ecole Polytechnique, Route de Saclay, 91128 Palaiseau Cedex, France.
    Nguyen, Khieu Van
    INRIA Saclay, Equipe DEFI, CMAP, Ecole Polytechnique, Route de Saclay, 91128 Palaiseau Cedex, France.
    Thach Son, Vu Duc
    INRIA Saclay, Equipe DEFI, CMAP, Ecole Polytechnique, Route de Saclay, 91128 Palaiseau Cedex, France.
    Tran, Hoang An
    INRIA Saclay, Equipe DEFI, CMAP, Ecole Polytechnique, Route de Saclay, 91128 Palaiseau Cedex, France.
    Tran, Hoang Trong An
    INRIA Saclay, Equipe DEFI, CMAP, Ecole Polytechnique, Route de Saclay, 91128 Palaiseau Cedex, France.
    Nguyen, Thi Minh Phuong
    INRIA Saclay, Equipe DEFI, CMAP, Ecole Polytechnique, Route de Saclay, 91128 Palaiseau Cedex, France.
    SpinDoctor: a Matlab toolbox for diffusion MRI simulation. 2019. In: NeuroImage, ISSN 1053-8119, E-ISSN 1095-9572, Vol. 202, article id 116120. Article in journal (Refereed)
    Abstract [en]

    The complex transverse water proton magnetization subject to diffusion-encoding magnetic field gradient pulses in a heterogeneous medium can be modeled by the multiple compartment Bloch-Torrey partial differential equation (BTPDE). A mathematical model for the time-dependent apparent diffusion coefficient (ADC), called the H-ADC model, was obtained recently using homogenization techniques on the BTPDE. Under the assumption of negligible water exchange between compartments, the H-ADC model produces the ADC of a diffusion medium from the solution of a diffusion equation (DE) subject to a time-dependent Neumann boundary condition. This paper describes a publicly available Matlab toolbox called SpinDoctor that can be used 1) to solve the BTPDE to obtain the dMRI signal (the toolbox provides a way of robustly fitting the dMRI signal to obtain the fitted ADC); 2) to solve the DE of the H-ADC model to obtain the ADC; 3) to compute a short-time approximation formula for the ADC, included in the toolbox for comparison with the simulated ADC. The PDEs are solved by P1 finite elements combined with built-in Matlab routines for solving ordinary differential equations. The finite element mesh generation is performed using an external package called Tetgen that is included in the toolbox. SpinDoctor provides built-in options of including 1) spherical cells with a nucleus; 2) cylindrical cells with a myelin layer; 3) an extra-cellular space (ECS) enclosed either a) in a box or b) in a tight wrapping around the cells; 4) deformation of canonical cells by bending and twisting; 5) permeable membranes for the BTPDE (the H-ADC assumes negligible permeability). Built-in diffusion-encoding pulse sequences include the Pulsed Gradient Spin Echo and the Oscillating Gradient Spin Echo.
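    For reference, the Bloch-Torrey PDE referred to above is commonly written as follows (standard form from the diffusion MRI literature; the notation and the interface/boundary conditions used in the paper may differ):

        % Magnetization M(x,t), diffusion tensor D(x), gyromagnetic ratio \gamma,
        % diffusion-encoding gradient g with effective time profile f(t):
        \frac{\partial M(\mathbf{x},t)}{\partial t}
            = -\, i\, \gamma\, f(t)\, (\mathbf{g}\cdot\mathbf{x})\, M(\mathbf{x},t)
              + \nabla \cdot \bigl( D(\mathbf{x})\, \nabla M(\mathbf{x},t) \bigr),
        \qquad
        S = \int_{\Omega} M(\mathbf{x}, T_E)\, \mathrm{d}\mathbf{x},

    where S is the dMRI signal obtained by integrating the magnetization over the computational domain at echo time T_E.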

  • 10. Martinez, M.
    et al.
    Monperrus, Martin
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Astor: Exploring the design space of generate-and-validate program repair beyond GenProg. 2019. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 151, p. 65-80. Article in journal (Refereed)
    Abstract [en]

    This article contributes to defining the design space of program repair. Repair approaches can be loosely characterized according to their main design philosophy, in particular "generate-and-validate" and synthesis-based approaches. Each of those repair approaches is a point in the design space of program repair. Our goal is to facilitate the design, development and evaluation of repair approaches by providing a framework that: a) contains components commonly present in most approaches, b) provides built-in implementations of existing repair approaches. This paper presents a Java framework named Astor that focuses on the design space of generate-and-validate repair approaches. The key novelty of Astor is to provide explicit extension points to explore the design space of program repair. Thanks to those extension points, researchers can both reuse existing program repair components and implement new ones. Astor includes 6 unique implementations of repair approaches in Java, including jGenProg, an implementation of GenProg for Java. Researchers have already defined new approaches over Astor. The program repair approaches already implemented in Astor are capable of repairing, in total, 98 real bugs from 5 large Java programs. Astor code is publicly available on Github: https://github.com/SpoonLabs/astor.
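    The generate-and-validate loop that frameworks of this kind instantiate can be summarized with a schematic sketch (a generic outline with hypothetical helper functions; Astor itself is a Java framework whose API is not shown here).

        def generate_and_validate(buggy_program, run_test_suite, mutate, max_candidates=1000):
            """Generic generate-and-validate repair loop.

            mutate: hypothetical function producing a candidate patch of the program
            run_test_suite: hypothetical callable returning True if all tests pass
            """
            plausible_patches = []
            for _ in range(max_candidates):
                candidate = mutate(buggy_program)        # generate a candidate patch
                if run_test_suite(candidate):            # validate against the test suite
                    plausible_patches.append(candidate)  # keep test-suite adequate patches
            return plausible_patches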

  • 11.
    Nguyen, Van Dang
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    High Performance Finite Element Methods with Application to Simulation of Vertical Axis Wind Turbines and Diffusion MRI. 2019. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Finite element methods have been developed over decades and, together with the growth of computer power, have become more and more important for large-scale simulations in science and industry. The objective of this thesis is to develop high-performance finite element methods, with two concrete applications: computational fluid dynamics (CFD) with simulation of turbulent flow past a vertical axis wind turbine (VAWT), and computational diffusion magnetic resonance imaging (CDMRI). The thesis presents contributions in the form of both new numerical methods for high-performance computing frameworks and efficient, tested software, published open source as part of the FEniCS/FEniCS-HPC platform. More specifically, the thesis work makes four main contributions.

    First, we develop a DFS-ALE method which combines the Direct finite element simulation method (DFS) with the Arbitrary Lagrangian-Eulerian method (ALE) to solve the Navier-Stokes equations for a rotating turbine. This method is enhanced with dual-based a posteriori error control and automated mesh adaptation. Turbulent boundary layers are modeled by a slip boundary condition to avoid a full resolution which is impossible even with the most powerful computers available today. The method is validated against experimental data with a good agreement.

    Second, we propose a partition of unity finite element method to tackle interface problems. In CFD, it allows for imposing slip velocity boundary conditions on conforming internal interfaces for a fluid-structure interaction model. In CDMRI, it helps to overcome the difficulties that the standard approaches have when imposing the microscopic heterogeneity of the biological tissues and allows for efficient solutions of the Bloch-Torrey equation in heterogeneous domains. The method facilitates a straightforward implementation on the FEniCS/FEniCS-HPC platform. The method is validated against reference solutions, and the implementation shows a strong parallel scalability.

    Third, we propose a finite element discretization on manifolds in order to efficiently simulate the diffusion MRI signal in domains that have a thin layer or a thin tube geometrical structure. The method helps to significantly reduce the required simulation time, computer memory, and difficulties associated with mesh generation, while maintaining the accuracy. Thus, it opens the possibility to simulate complicated structures at a low cost, for a better understanding of diffusion MRI in the brain.

    Finally, we propose an efficient portable simulation framework that integrates recent advanced techniques in both mathematics and computer science to enable the users to perform simulations with the Cloud computing technology. The simulation framework consists of Python, IPython and C++ solvers working either on a web browser with Google Colaboratory notebooks or on the Google Cloud Platform with MPI parallelization.

  • 12.
    Nguyen, Van Dang
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Fang, Chengran
    INRIA Saclay, Equipe DEFI, CMAP, Ecole Polytechnique, 91128 Palaiseau Cedex, France.
    Wassermann, Demian
    INRIA Saclay, Equipe Parietal, 1 Rue Honoré d’Estienne d’Orves, 91120 Palaiseau, France.
    Li, Jing-Rebecca
    INRIA Saclay, Equipe DEFI, CMAP, Ecole Polytechnique, 91128 Palaiseau Cedex, France.
    Diffusion MRI simulation of realistic neurons with SpinDoctor and the Neuron Module. Manuscript (preprint) (Other academic)
    Abstract [en]

    The diffusion MRI signal arising from neurons can be numerically simulated by solving the Bloch-Torrey partial differential equation. In this paper we present the Neuron Module that we implemented within the Matlab-based diffusion MRI simulation toolbox SpinDoctor. SpinDoctor uses finite element discretization and adaptive time integration to solve the Bloch-Torrey partial differential equation for general diffusion-encoding sequences, at multiple b-values and in multiple diffusion directions.

    In order to facilitate the diffusion MRI simulation of realistic neurons by the research community, we constructed finite element meshes for a group of 36 pyramidal neurons and a group of 29 spindle neurons whose morphological descriptions were found in the publicly available neuron repository NeuroMorpho.Org. These finite element meshes range in size from 15163 to 622553 nodes. We also broke the neurons into the soma and dendrite branches and created finite element meshes for these cell components. Through the Neuron Module, these neuron and component finite element meshes can be seamlessly coupled with the functionalities of SpinDoctor to provide the diffusion MRI signal attributable to spins inside neurons. We make these meshes and the source code of the Neuron Module available to the public as an open-source package.

    To illustrate some potential uses of the Neuron Module, we show numerical examples of the simulated dMRI signals in multiple diffusion directions from whole neurons as well as from the soma and dendrite branches, including a comparison of the high b-value behavior between dendrite branches and whole neurons.

  • 13. Nurcan, S.
    et al.
    Johnson, Pontus
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering.
    Message from the EDOC 2018 program chairs. 2018. In: 22nd IEEE International Enterprise Distributed Object Computing Conference, EDOC 2018, article id 8536137. Article in journal (Refereed)
  • 14.
    Riese, Emma
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Bälter, Olle
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS. KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Mosavat, Vahid
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Don’t get stuck in the tool, use the method!: Lessons learned by teaching test driven program development. 2019. In: KTH SoTL 2019, Stockholm: KTH Royal Institute of Technology, 2019. Conference paper (Refereed)
    Abstract [en]

    Since 2014, we have embedded Test Driven Development (TDD) in an introductory programming course. TDD is common industry practice for developing code, has become a part of curricula at different levels, and has proven beneficial in educational settings (Kollanus and Isomöttönen, 2008). The method itself is rather simple: you start by writing test cases for your program (what output you expect for certain input) and then you write code that fulfils these tests. In that way, TDD enables you to test your code immediately and throughout the development, as opposed to the more traditional way in which you first finish the code and then write test cases to verify it. Teaching this method in an introductory course would also enable students to use it in later courses and be well accustomed to the method when they graduate. Researchers who conducted a previous study on this recommend that TDD should be mandatory (Marrero and Settle, 2005).

    During the years 2014-2017, TDD has been a mandatory part of an introductory programming course offered to non-computer science majors. The approach to teaching TDD has evolved and been somewhat different each year. However, since TDD has been a mandatory part of the course, it was also part of what the students were assessed on, in line with constructive alignment (Biggs, 1996). Making it part of the assessment was also believed to motivate students to use the method, since assessments can make students take part in learning situations they otherwise would not (Ramsden, 2003). Hence, the students were required to not only submit and present their code, but also their test cases, which had to be written in a standard tool, doctest, that was presented and explained during lectures. In 2017, all 64 students who presented their final assignment during the spring filled out a survey about their experiences with TDD, and in addition, nine of the students were interviewed.

    From the open-ended questions on the surveys and from the interviews, it became evident that many of the students had not understood nor used the method TDD, but had instead used the testing tool to create test cases when their program was already finished. They had handed in test cases since that was a requirement to pass the course, but they had forgotten all about the method. From these results, the lesson we learned was that even though our intention had been to make TDD mandatory, and we planned the assessment with that in mind, we had actually only made the use of the testing tool mandatory. 

    We did try to convince the students that using the TDD method would be beneficial in the development of the program, but failed. One of the benefits of TDD is for code maintenance, but the structure of our courses does not easily lend itself to requiring adjustments of a student project say six months after the first submission, especially for students who are non-CS majors.

    When teaching your students a method through the usage of a tool, you need to make sure your students can distinguish between the method and the tool. You will also have to emphasize the method and plan the assessment in such a way that the use of the method, the process, is assessed. If the focus is only on the finished product, it will more likely be an assessment of how well the students used the tool and the students are at risk of neglecting the method altogether.
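    For readers unfamiliar with the tool mentioned above, a minimal example of the TDD workflow with Python's standard doctest module looks roughly as follows (an illustrative sketch, not the actual course material): the expected input/output pairs are written first, and the function body is then filled in until the tests pass.

        def mean(values):
            """Return the arithmetic mean of a non-empty list of numbers.

            Test cases written before the implementation:
            >>> mean([1, 2, 3])
            2.0
            >>> mean([10])
            10.0
            """
            return sum(values) / len(values)

        if __name__ == "__main__":
            import doctest
            doctest.testmod()  # runs the test cases embedded in the docstring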

  • 15.
    Riese, Emma
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Kann, Viggo
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Teaching assistants’ experience of their roles and responsibilities in relation to tutorials. 2019. In: KTH SoTL 2019, Stockholm: KTH Royal Institute of Technology, 2019. Conference paper (Refereed)
  • 16.
    Rivas-Gomez, Sergio
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Fanfarillo, Alessandro
    National Center for Atmospheric Research, Boulder, CO, United States..
    Narasimhamurthy, Sai
    Seagate Syst UK, Havant PO9 1SA, England..
    Markidis, Stefano
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Persistent Coarrays: Integrating MPI Storage Windows in Coarray Fortran. 2019. In: Proceedings of the 26th European MPI Users' Group Meeting (EuroMPI 2019), ACM Digital Library, 2019, p. 1-8, article id 3. Conference paper (Refereed)
    Abstract [en]

    The inherent integration of novel hardware and software components on HPC is expected to considerably aggravate the Mean Time Between Failures (MTBF) of scientific applications, while simultaneously increasing the programming complexity of these clusters. In this work, we present the initial steps towards the integration of transparent resilience support inside Coarray Fortran. In particular, we propose persistent coarrays, an extension of OpenCoarrays that integrates MPI storage windows to leverage its transport layer and seamlessly map coarrays to files on storage. Preliminary results indicate that our approach provides clear benefits on representative workloads, while requiring only minimal source code changes.

  • 17.
    Rivas-Gomez, Sergio
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Fanfarillo, Alessandro
    National Center for Atmospheric Research, Boulder, CO, United States..
    Valat, Sebastien
    Atos, 1 Rue de Provence, 38130 Echirolles, France.
    Laferriere, Christophe
    Atos, 1 Rue de Provence, 38130 Echirolles, France.
    Couvee, Philippe
    Atos, 1 Rue de Provence, 38130 Echirolles, France.
    Narasimhamurthy, Sai
    Seagate Syst UK, Havant PO9 1SA, England..
    Markidis, Stefano
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    uMMAP-IO: User-level Memory-mapped I/O for HPC. 2019. In: Proceedings of the 26th IEEE International Conference on High-Performance Computing, Data, and Analytics (HiPC'19), Institute of Electrical and Electronics Engineers (IEEE), 2019. Conference paper (Refereed)
    Abstract [en]

    The integration of local storage technologies alongside traditional parallel file systems on HPC clusters is expected to raise the programming complexity of scientific applications aiming to take advantage of the increased level of heterogeneity. In this work, we present uMMAP-IO, a user-level memory-mapped I/O implementation that simplifies data management on multi-tier storage subsystems. Compared to the memory-mapped I/O mechanism of the OS, our approach features per-allocation configurable settings (e.g., segment size) and transparently enables access to a diverse range of memory and storage technologies, such as burst buffer I/O accelerators. Preliminary results indicate that uMMAP-IO provides at least 5-10x better performance on representative workloads in comparison with the standard memory-mapped I/O of the OS, and approximately 20-50% degradation on average compared to conventional memory allocations without storage support, at up to 8192 processes.
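    As context for the comparison above, the OS-level memory-mapped I/O baseline looks roughly like this with Python's standard mmap module (an illustrative baseline only; uMMAP-IO itself is a user-level library and its API is not shown here). The file name is hypothetical.

        import mmap

        path = "data.bin"                        # hypothetical backing file
        with open(path, "wb") as f:              # create a 4 KiB file on storage
            f.write(b"\x00" * 4096)

        with open(path, "r+b") as f:
            with mmap.mmap(f.fileno(), 0) as m:  # map the whole file into memory
                m[0:5] = b"hello"                # writes go through the mapping
                m.flush()                        # persist the changes to storage

        with open(path, "rb") as f:
            print(f.read(5))                     # b'hello'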

  • 18.
    Simonsson, Jesper
    et al.
    KTH.
    Zhang, Long
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Morin, Brice
    Baudry, Benoit
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Monperrus, Martin
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Observability and Chaos Engineering on System Calls for Containerized Applications in Docker. Manuscript (preprint) (Other academic)
    Abstract [en]

    In this paper, we present a novel fault injection system called ChaosOrca for system calls in containerized applications. ChaosOrca aims at evaluating a given application's self-protection capability with respect to system call errors. The unique feature of ChaosOrca is that it conducts experiments under production-like workload without instrumenting the application. We exhaustively analyze all kinds of system calls and utilize different levels of monitoring techniques to reason about the behaviour under perturbation. We evaluate ChaosOrca on three real-world applications: a file transfer client, a reverse proxy server and a micro-service oriented web application. Our results show that it is promising to detect weaknesses of resilience mechanisms related to system call issues.

  • 19.
    Staicu, Cristian-Alexandru
    et al.
    TU Darmstadt.
    Schoepe, Daniel
    Chalmers University of Technology.
    Balliu, Musard
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Pradel, Michael
    TU Darmstadt.
    Sabelfeld, Andrei
    Chalmers University of Technology.
    An Empirical Study of Information Flows in Real-World JavaScript. 2019. In: Proceedings of the 14th ACM SIGSAC Workshop on Programming Languages and Analysis for Security, ACM Digital Library, 2019, p. 45-59. Conference paper (Refereed)
    Abstract [en]

    Information flow analysis prevents secret or untrusted data from flowing into public or trusted sinks. Existing mechanisms cover a wide array of options, ranging from lightweight taint analysis to heavyweight information flow control that also considers implicit flows. Dynamic analysis, which is particularly popular for languages such as JavaScript, faces the question of whether to invest in analyzing flows caused by not executing a particular branch, so-called hidden implicit flows. This paper addresses the questions of how common different kinds of flows are in real-world programs, how important these flows are for enforcing security policies, and how costly it is to consider these flows. We address these questions in an empirical study that analyzes 56 real-world JavaScript programs that suffer from various security problems, such as code injection vulnerabilities, denial of service vulnerabilities, memory leaks, and privacy leaks. The study is based on a state-of-the-art dynamic information flow analysis and a formalization of its core. We find that implicit flows are expensive to track in terms of permissiveness, label creep, and runtime overhead. We find a lightweight taint analysis to be sufficient for most of the studied security problems, while for some privacy-related code, observable tracking is sometimes required. In contrast, we do not find any evidence that tracking hidden implicit flows reveals otherwise missed security problems. Our results help security analysts and analysis designers to understand the cost-benefit tradeoffs of information flow analysis and provide empirical evidence that analyzing information flows in a cost-effective way is a relevant problem.
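    The distinction between explicit, implicit, and hidden implicit flows discussed above can be illustrated with a tiny, language-agnostic example (shown in Python for consistency with the other sketches; the study itself targets JavaScript).

        secret = 1   # confidential input
        public = 0   # public sink

        # Explicit flow: the secret value is copied directly into the public sink.
        public = secret

        # Implicit flow: no direct copy, but the branch condition leaks the secret
        # through control flow.
        if secret == 1:
            public = 1
        else:
            public = 0

        # A "hidden" implicit flow arises from the branch that is NOT executed:
        # a purely dynamic monitor observes only one path and may miss the leak.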

  • 20.
    Vasiloudis, Theodore
    et al.
    RISE SICS, Stockholm, Sweden..
    Morales, Gianmarco De Francisci
    ISI Fdn, Turin, Italy..
    Boström, Henrik
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Quantifying Uncertainty in Online Regression Forests. 2019. In: Journal of Machine Learning Research, ISSN 1532-4435, E-ISSN 1533-7928, Vol. 20, article id 155. Article in journal (Refereed)
    Abstract [en]

    Accurately quantifying uncertainty in predictions is essential for the deployment of machine learning algorithms in critical applications where mistakes are costly. Most approaches to quantifying prediction uncertainty have focused on settings where the data is static, or bounded. In this paper, we investigate methods that quantify the prediction uncertainty in a streaming setting, where the data is potentially unbounded. We propose two meta-algorithms that produce prediction intervals for online regression forests of arbitrary tree models; one based on conformal prediction, and the other based on quantile regression. We show that the approaches are able to maintain specified error rates, with constant computational cost per example and bounded memory usage. We provide empirical evidence that the methods outperform the state-of-the-art in terms of maintaining error guarantees, while being an order of magnitude faster. We also investigate how the algorithms are able to recover from concept drift.
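    The conformal prediction idea behind one of the two meta-algorithms can be sketched in its simplest split (inductive, batch) form; the streaming variant in the paper differs, so this is only a conceptual illustration with toy numbers.

        import math

        def split_conformal_interval(calibration_residuals, y_hat, alpha=0.1):
            """Prediction interval from split conformal prediction.

            calibration_residuals: absolute errors |y - prediction| on a held-out set
            y_hat: point prediction for a new example
            alpha: target miscoverage rate (0.1 aims for ~90% coverage)
            """
            scores = sorted(calibration_residuals)
            # Order statistic with the usual finite-sample correction.
            k = math.ceil((len(scores) + 1) * (1 - alpha)) - 1
            q = scores[min(k, len(scores) - 1)]
            return (y_hat - q, y_hat + q)

        # Hypothetical usage with toy calibration residuals.
        print(split_conformal_interval([0.5, 1.2, 0.3, 0.8, 0.9], y_hat=10.0))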

  • 21.
    Ye, He
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Martinez, Matias
    Univ Valenciennes, Valenciennes, France..
    Durieux, Thomas
    INESC ID, Lisbon, Portugal..
    Monperrus, Martin
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    A Comprehensive Study of Automatic Program Repair on the QuixBugs Benchmark. 2019. In: 2019 1st IEEE International Workshop on Intelligent Bug Fixing, IBF 2019 / [ed] Cheung, S.C.; Sun, X.; Zhang, T., Institute of Electrical and Electronics Engineers (IEEE), 2019, p. 1-10, article id 8665475. Conference paper (Refereed)
    Abstract [en]

    Automatic program repair papers tend to repeatedly use the same benchmarks. This poses a threat to the external validity of the findings of the program repair research community. In this paper, we perform an automatic repair experiment on a benchmark called QuixBugs that has never been studied in the context of program repair. In this study, we report on the characteristics of QuixBugs, and study five repair systems, Arja, Astor, Nopol, NPEfix and RSRepair, which are representatives of generate-and-validate repair techniques and synthesis repair techniques. We propose three patch correctness assessment techniques to comprehensively study overfitting and incorrect patches. Our key results are: 1) 15 / 40 buggy programs in QuixBugs can be repaired with a test-suite adequate patch; 2) a total of 64 plausible patches for those 15 buggy programs in QuixBugs are present in the search space of the considered tools; 3) the three patch assessment techniques discard in total 33 / 64 patches that are overfitting. This sets a baseline for future research of automatic repair on QuixBugs. Our experiment also highlights the major properties and challenges of how to perform automated correctness assessment of program repair patches. All experimental results are publicly available on Github in order to facilitate future research on automatic program repair.

  • 22.
    Zabetian, Negar
    et al.
    Amirkabir Univ Technol, Dept Elect Engn, Microwave & Wireless Commun Res Lab, Tehran Polytech, Tehran 1591634311, Iran..
    Mohammadi, Abbas
    Amirkabir Univ Technol, Dept Elect Engn, Microwave & Wireless Commun Res Lab, Tehran Polytech, Tehran 1591634311, Iran..
    Masoudi, Meysam
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS, Radio Systems Laboratory (RS Lab).
    Energy-efficient power allocation for device-to-device communications underlaid cellular networks using stochastic geometry. 2019. In: European Transactions on Telecommunications, ISSN 1124-318X, E-ISSN 2161-3915, article id e3768. Article in journal (Refereed)
    Abstract [en]

    In this paper, we study an energy efficiency maximization problem in the uplink for device-to-device (D2D) communications underlaid with cellular networks on multiple bands. Utilizing stochastic geometry, we derive closed-form expressions for the average sum rate, successful transmission probability, and energy efficiency of cellular and D2D users. Then, we formulate an optimization problem to jointly maximize the energy efficiency of D2D and cellular users and obtain the optimum transmission power of both D2D and cellular users. In the optimization problem, we guarantee the quality-of-service of users by taking into account the successful transmission probability on each link. To solve the problem, we first convert it into canonical convex form. Afterwards, we solve the problem in two phases: energy efficiency maximization of devices and energy efficiency maximization of cellular users. In the first phase, we maximize the energy efficiency of D2D users and feed the solution to the second phase, where we maximize the energy efficiency of cellular users. Simulation results reveal that significant energy efficiency gains can be attained, e.g., a 10% energy efficiency improvement compared to fixed transmission power in a high-density scenario.
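    For orientation, the energy efficiency maximized in this line of work is commonly defined as the ratio of achievable rate to total power consumption (a generic textbook-style definition; the paper's exact objective and constraints may differ):

        \mathrm{EE} = \frac{\bar{R}_{\mathrm{sum}}}{\tfrac{1}{\eta}\, P_{\mathrm{tx}} + P_{\mathrm{c}}},

    where \bar{R}_{\mathrm{sum}} is the average sum rate, P_{\mathrm{tx}} the transmit power, \eta the power-amplifier efficiency, and P_{\mathrm{c}} the static circuit power.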

  • 23.
    Zhang, Lu
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS, Optical Network Laboratory (ON Lab).
    Udalcovs, Aleksejs
    Lin, Rui
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS, Optical Network Laboratory (ON Lab).
    Ozolins, Oskars
    Pang, Xiaodan
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS, Optical Network Laboratory (ON Lab).
    Gan, L.
    Schatz, Richard
    Djupsjöbacka, A.
    Mårtensson, J.
    Tang, M.
    Fu, S.
    Liu, D.
    Tong, W.
    Popov, Sergei
    KTH, School of Engineering Sciences (SCI), Applied Physics, Photonics.
    Jacobsen, Gunnar
    Hu, W.
    Xiao, S.
    Chen, Jiajia
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS, Optical Network Laboratory (ON Lab).
    Digital Radio-Over-Multicore-Fiber System with Self-Homodyne Coherent Detection and Entropy Coding for Mobile Fronthaul. 2018. In: European Conference on Optical Communication, ECOC, Institute of Electrical and Electronics Engineers Inc., 2018. Conference paper (Refereed)
    Abstract [en]

    We experimentally demonstrate a 28-Gbaud 16-QAM self-homodyne digital radio-over-33.6km-7-core-fiber system with entropy coding for mobile fronthaul, achieving error-free carrier aggregation of 330 100-MHz 4096-QAM 5G-new-radio channels and 921 100-MHz QPSK 5G-new-radio channels with CPRI-equivalent data rate up to 3.73-Tbit/s.
