kth.se Publications
Search results 101–150 of 1224
  • 101.
    Barbette, Tom
    et al.
    University of Liege.
    Soldani, Cyril
    University of Liege.
    Mathy, Laurent
    University of Liege.
    Fast userspace packet processing. 2015. In: Architectures for Networking and Communications Systems (ANCS '15), IEEE Press, 2015, p. 5-16. Conference paper (Refereed)
    Download full text (pdf)
    fulltext
  • 102.
    Barbette, Tom
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS, Network Systems Laboratory (NS Lab).
    Tang, Chen
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS.
    Yao, Haoran
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS.
    Kostic, Dejan
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS.
    Maguire Jr., Gerald Q.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS.
    Papadimitratos, Panagiotis
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS.
    Chiesa, Marco
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS.
    A High-Speed Load-Balancer Design with Guaranteed Per-Connection-Consistency. 2020. In: Proceedings of the 17th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2020 / [ed] USENIX Association, Santa Clara, CA, USA: USENIX Association, 2020, p. 667-683. Conference paper (Refereed)
    Abstract [en]

    Large service providers use load balancers to dispatch millions of incoming connections per second towards thousands of servers. There are two basic yet critical requirements for a load balancer: uniform load distribution of the incoming connections across the servers and per-connection-consistency (PCC), i.e., the ability to map packets belonging to the same connection to the same server even in the presence of changes in the number of active servers and load balancers. Yet, meeting both these requirements at the same time has been an elusive goal. Today's load balancers minimize PCC violations at the price of non-uniform load distribution.

    This paper presents Cheetah, a load balancer that supports uniform load distribution and PCC while being scalable, memory efficient, resilient to clogging attacks, and fast at processing packets. The Cheetah LB design guarantees PCC for any realizable server selection load balancing mechanism and can be deployed in both a stateless and stateful manner, depending on the operational needs. We implemented Cheetah on both a software and a Tofino-based hardware switch. Our evaluation shows that a stateless version of Cheetah guarantees PCC, has negligible packet processing overheads, and can support load balancing mechanisms that reduce the flow completion time by a factor of 2–3×.

    Download full text (pdf)
    cheetah.pdf
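The per-connection-consistency (PCC) guarantee described in the abstract above can be caricatured in a few lines. This is a toy sketch of the general idea only, not Cheetah's actual design; all names and the plain-integer cookie are invented for illustration:

```python
# Toy sketch of cookie-based per-connection-consistency (PCC).
# All names are hypothetical; this illustrates only the general idea
# from the abstract -- record the chosen server in a per-connection
# cookie so later packets reach the same server despite server churn.

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
counter = 0  # state for the example round-robin policy

def round_robin(_conn_id):
    """Any server-selection mechanism can be plugged in here."""
    global counter
    idx = counter % len(servers)
    counter += 1
    return idx

def new_connection(conn_id, select):
    """First packet: pick a server and mint a cookie recording it."""
    idx = select(conn_id)
    return idx, servers[idx]  # a real design would obfuscate the cookie

def subsequent_packet(cookie):
    """Later packets: direct lookup, no per-connection table needed."""
    return servers[cookie]

cookie, first_server = new_connection("conn-42", round_robin)
servers.append("10.0.0.4")  # scaling out does not break PCC
assert subsequent_packet(cookie) == first_server
print("PCC preserved:", first_server)  # PCC preserved: 10.0.0.1
```

The point of the sketch is that the mapping travels with the connection rather than living in a table or depending on a hash over a changing server set.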
  • 103.
    Barbette, Tom
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS.
    Wu, Erfan
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS.
    Kostic, Dejan
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS.
    Maguire Jr., Gerald Q.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS.
    Papadimitratos, Panagiotis
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Chiesa, Marco
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Cheetah: A High-Speed Programmable Load-Balancer Framework with Guaranteed Per-Connection-Consistency. 2022. In: IEEE/ACM Transactions on Networking, ISSN 1063-6692, E-ISSN 1558-2566, Vol. 30, no 1, p. 354-367. Article in journal (Refereed)
    Abstract [en]

    Large service providers use load balancers to dispatch millions of incoming connections per second towards thousands of servers. There are two basic yet critical requirements for a load balancer: uniform load distribution of the incoming connections across the servers, which requires support for advanced load balancing mechanisms, and per-connection-consistency (PCC), i.e., the ability to map packets belonging to the same connection to the same server even in the presence of changes in the number of active servers and load balancers. Yet, simultaneously meeting these requirements has been an elusive goal. Today's load balancers minimize PCC violations at the price of non-uniform load distribution. This paper presents Cheetah, a load balancer that supports advanced load balancing mechanisms and PCC while being scalable, memory efficient, fast at processing packets, and resilient to clogging attacks to a degree comparable with today's load balancers. The Cheetah LB design guarantees PCC for any realizable server selection load balancing mechanism and can be deployed in both stateless and stateful manners, depending on operational needs. We implemented Cheetah on both a software and a Tofino-based hardware switch. Our evaluation shows that a stateless version of Cheetah guarantees PCC, has negligible packet processing overheads, and can support load balancing mechanisms that reduce the flow completion time by a factor of 2–3×.

  • 104. Barendregt, W.
    et al.
    Biørn-Hansen, Aksel
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology.
    Andersson, D.
    Users' experiences with the use of transaction data to estimate consumption-based emissions in a carbon calculator. 2020. In: Sustainability (Switzerland), E-ISSN 2071-1050, Vol. 12, no 18, article id 7777. Article in journal (Refereed)
    Abstract [en]

    With global greenhouse gas (GHG) emissions ever increasing, we are currently seeing a renewed interest in carbon footprint calculators (or carbon calculators for short). While carbon calculators have traditionally calculated emissions based on user input about e.g., food, heating, and traveling, a new development in this area is the use of transaction data to also estimate emissions based on consumption. Such carbon calculators should be able to provide users with more accurate estimations, easier input possibilities, and an incentive to continue using them. In this paper, we present the results from a survey sent to the users of such a novel carbon calculator, called Svalna. Svalna offers users the possibility to connect their bank account. The transaction data are then coupled with Environmentally Extended Multi-Regional Input-Output (EE-MRIO) data for Swedish conditions, which are used to determine a continuous overview of the user's greenhouse gas emissions from consumption. The aim of the survey was to understand (a) whether people are willing to connect their bank account, (b) whether they trust the calculations of their emissions, and (c) whether they think the use of Svalna has an effect on their behaviour. Furthermore, we wanted to know how Svalna could be improved. While the results of the survey showed that many users were willing to connect their bank account, a rather large part of the users perceived safety risks in doing so. The users also showed only an average level of trust in the correctness of the estimated greenhouse gas emissions. A lack of trust was attributed to experiencing technical problems, but also to not knowing how the emissions were calculated and to the calculator not capturing all details of the user's life. However, many users still indicated that the use of Svalna had helped them to initiate action to reduce their emissions. In order to improve Svalna, the users wanted to be able to provide more details, e.g., by scanning receipts, and wanted better options for dealing with a shared economy. We conclude this paper by discussing some opportunities and challenges for the use of transaction data in carbon footprint calculators.
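The core of transaction-based estimation, as described in the abstract above, is multiplying spending per category by an emission intensity. A minimal sketch of that arithmetic follows; the categories and factors are invented for illustration and are not Svalna's actual EE-MRIO coefficients:

```python
# Illustrative only: the emission intensities (kg CO2e per SEK) below
# are invented for this sketch, not Svalna's actual EE-MRIO data.
EMISSION_FACTOR = {"food": 0.05, "transport": 0.09, "clothing": 0.03}

def estimate_emissions(transactions):
    """Sum spending per category times that category's intensity."""
    total = 0.0
    for amount_sek, category in transactions:
        total += amount_sek * EMISSION_FACTOR.get(category, 0.0)
    return total

txns = [(250.0, "food"), (120.0, "transport"), (400.0, "clothing")]
print(round(estimate_emissions(txns), 2))  # 35.3
```

Real EE-MRIO coupling maps merchant codes to consumption categories first; the sketch assumes that mapping has already happened.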

  • 105. Baroroh, D. K.
    et al.
    Chu, C.-H.
    Wang, Lihui
    KTH, School of Industrial Engineering and Management (ITM), Production Engineering.
    Systematic literature review on augmented reality in smart manufacturing: Collaboration between human and computational intelligence. 2021. In: Journal of manufacturing systems, ISSN 0278-6125, E-ISSN 1878-6642, Vol. 61, p. 696-711. Article in journal (Refereed)
    Abstract [en]

    Smart manufacturing offers a high level of adaptability and autonomy to meet the ever-increasing demands of product mass customization. Although digitalization has been used on the shop floor of modern factories for decades, some manufacturing operations remain manual and humans can perform these better than machines. Under such circumstances, a feasible solution is to have human operators collaborate with computational intelligence (CI) in real time through augmented reality (AR). This study conducts a systematic review of the recent literature on AR applications developed for smart manufacturing. A classification framework consisting of four facets, namely interaction device, manufacturing operation, functional approach, and intelligence source, is proposed to analyze the related studies. The analysis shows how AR has been used to facilitate various manufacturing operations with intelligence. Important findings are derived from a viewpoint different from that of the previous reviews on this subject. The perspective here is on how AR can work as a collaboration interface between human and CI. The outcome of this work is expected to provide guidelines for implementing AR-assisted functions with practical applications in smart manufacturing in the near future.

  • 106.
    Barriga, L.
    et al.
    KTH, Superseded Departments (pre-2005), Teleinformatics.
    Brorsson, Mats
    Lund university.
    Ayani, Rassul
    KTH, Superseded Departments (pre-2005), Teleinformatics.
    Hybrid Parallel Simulation of Distributed Shared-Memory Architectures. 1996. Report (Other academic)
  • 107.
    Barriga, Luis
    et al.
    KTH, Superseded Departments (pre-2005), Teleinformatics.
    Brorsson, Mats
    Lund university.
    Ayani, Rassul
    KTH, Superseded Departments (pre-2005), Teleinformatics.
    A model for parallel simulation of distributed shared memory. 1996. Conference paper (Refereed)
    Abstract [en]

    We present an execution model for parallel simulation of a distributed shared memory architecture. The model captures the processor-memory interaction and abstracts the memory subsystem. Using this model we show how parallel, on-line, partially-ordered memory traces can be correctly predicted without interacting with the memory subsystem. We also outline a parallel optimistic memory simulator that uses these traces, finds a global order among all events, and returns correct data and timing to each processor. A first evaluation of the amount of concurrency that our model can extract for an ideal multiprocessor shows that processors may execute relatively long instruction sequences without violating the causality constraints. However, parallel simulation efficiency is highly dependent on the memory consistency model and the application characteristics.

  • 108.
    Basloom, Huda Saleh
    et al.
    King Abdulaziz Univ, Fac Comp & Informat Technol, Dept Comp Sci, Jeddah 21514, Saudi Arabia.
    Dahab, Mohamed Yehia
    King Abdulaziz Univ, Fac Comp & Informat Technol, Dept Comp Sci, Jeddah 21514, Saudi Arabia; Agr Res Ctr ARC, Giza 12619, Egypt.
    Alghamdi, Ahmed Mohammed
    Univ Jeddah, Coll Comp Sci & Engn, Dept Software Engn, Jeddah 21493, Saudi Arabia.
    Eassa, Fathy Elbouraey
    King Abdulaziz Univ, Fac Comp & Informat Technol, Dept Comp Sci, Jeddah 21514, Saudi Arabia.
    Al-Ghamdi, Abdullah Saad Al-Malaise
    King Abdulaziz Univ, Fac Comp & Informat Technol, Dept Informat Syst, Jeddah 21589, Saudi Arabia.
    Haridi, Seif
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Errors Classification and Static Detection Techniques for Dual-Programming Model (OpenMP and OpenACC). 2022. In: IEEE Access, E-ISSN 2169-3536, Vol. 10, p. 117808-117826. Article in journal (Refereed)
    Abstract [en]

    Recently, incorporating more than one programming model into a system designed for high performance computing (HPC) has become a popular solution to implementing parallel systems. Since traditional programming languages, such as C, C++, and Fortran, do not support parallelism at the level of multi-core processors and accelerators, many programmers add one or more programming models to achieve parallelism and accelerate computation efficiently. These models include Open Accelerators (OpenACC) and Open Multi-Processing (OpenMP), which have recently been used with various models, including Message Passing Interface (MPI) and Compute Unified Device Architecture (CUDA). Due to the difficulty of predicting the behavior of threads, runtime errors cannot be predicted, and the compiler cannot identify runtime errors such as data races, race conditions, deadlocks, or livelocks. Many studies have been conducted on the development of testing tools to detect runtime errors when using programming models, such as the combinations of OpenACC with MPI and OpenMP with MPI. Although more applications use OpenACC and OpenMP together, no testing tools have been developed to test these applications to date. This paper presents a testing tool for detecting runtime errors using a static testing technique. The tool can detect actual and potential runtime errors during the integration of the OpenACC and OpenMP models into systems developed in C++. It implements the error dependency graphs proposed in this paper, and provides a classification of the runtime errors that result from combining the two programming models.

  • 109.
    Bastys, Iulia
    et al.
    Chalmers University of Technology.
    Balliu, Musard
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Rezk, Tamara
    INRIA Sophia-Antipolis.
    Sabelfeld, Andrei
    Chalmers University of Technology.
    Clockwork: Tracking Remote Timing Attacks. 2020. In: Proceedings IEEE Computer Security Foundations Symposium, CSF 2020, IEEE, 2020. Conference paper (Refereed)
    Abstract [en]

    Timing leaks have been a major concern for the security community. A common approach is to prevent secrets from affecting the execution time, thus achieving security with respect to a strong, local attacker who can measure the timing of program runs. However, this approach becomes restrictive as soon as programs branch on a secret. This paper focuses on timing leaks under remote execution. A key difference is that the remote attacker does not have a reference point of when a program run has started or finished, which significantly restricts attacker capabilities. We propose an extensional security characterization that captures the essence of remote timing attacks. We identify patterns of combining clock access, secret branching, and output in a way that leads to timing leaks. Based on these patterns, we design Clockwork, a monitor that rules out remote timing leaks. We implement the approach for JavaScript, leveraging JSFlow, a state-of-the-art information flow tracker. We demonstrate the feasibility of the approach on case studies with IFTTT, a popular IoT app platform, and VJSC, an advanced JavaScript library for e-voting.

    Download full text (pdf)
    fulltext
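The pattern idea in the abstract above can be caricatured in a few lines. This is a deliberately simplified sketch, not Clockwork's actual monitor; the event names and the flagging rule are invented for illustration:

```python
# Deliberately simplified sketch, not Clockwork's actual monitor.
# Event names and the flagging rule are invented: the intuition is
# that a remote attacker needs a reference point (a clock read) plus
# a secret-dependent branch before an observable output.

def flags_potential_leak(events):
    """events: ordered trace of 'clock', 'secret_branch', 'output'."""
    seen_clock = seen_secret_branch = False
    for e in events:
        if e == "clock":
            seen_clock = True
        elif e == "secret_branch":
            seen_secret_branch = True
        elif e == "output" and seen_clock and seen_secret_branch:
            return True  # all three ingredients present: flag it
    return False

assert flags_potential_leak(["clock", "secret_branch", "output"])
# Without a clock read, the remote attacker has no reference point:
assert not flags_potential_leak(["secret_branch", "output"])
```

The real monitor tracks information flow through JSFlow rather than matching literal event names, but the combination it rules out has this shape.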
  • 110.
    Baumann, Christoph
    et al.
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Schwarz, Oliver
    RISE SICS.
    Dam, Mads
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Compositional Verification of Security Properties for Embedded Execution Platforms. 2017. In: PROOFS 2017: 6th International Workshop on Security Proofs for Embedded Systems / [ed] Ulrich Kühne, Jean-Luc Danger and Sylvain Guilley, 2017, Vol. 49, p. 1-16. Conference paper (Refereed)
    Abstract [en]

    The security of embedded systems can be dramatically improved through the use of formally verified isolation mechanisms such as separation kernels, hypervisors, or microkernels. For trustworthiness, particularly for system level behaviour, the verifications need precise models of the underlying hardware. Such models are hard to attain, highly complex, and proofs of their security properties may not easily apply to similar but different platforms. This may render verification economically infeasible. To address these issues, we propose a compositional top-down approach to embedded system specification and verification, where the system-on-chip is modeled as a network of distributed automata communicating via paired synchronous message passing. Using abstract specifications for each component makes it possible to delay the development of detailed models for cores, devices, etc., while still being able to verify high level security properties like integrity and confidentiality, and soundly refine the result for different instantiations of the abstract components at a later stage. As a case study, we apply this methodology to the verification of information flow security for an industry scale security-oriented hypervisor on the ARMv8-A platform. The hypervisor statically assigns (multiple) cores to each guest system and implements a rudimentary, but usable, inter guest communication discipline. We have completed a pen-and-paper security proof for the hypervisor down to state transition level and report on a partially completed verification of guest mode security in the HOL4 theorem prover.

    Download full text (pdf)
    fulltext
  • 111.
    Becker, Matthias
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electronics and Embedded systems.
    Meeting Job-Level Dependencies by Task Merging. 2024. In: 29th Asia and South Pacific Design Automation Conference, ASP-DAC 2024, Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 792-798. Conference paper (Refereed)
    Abstract [en]

    Industrial applications are often time critical and subject to end-to-end latency constraints. Job-level dependencies can be leveraged to specify a partial ordering on tasks' jobs already at early design phases, agnostic of the hardware platform or scheduling algorithm, and guarantee that end-to-end latency constraints of task chains are met as long as the job-level dependencies are respected. However, their realization at runtime can introduce overheads and complicates the scheduling and timing analysis. This work presents an approach that merges multi-periodic tasks that are connected by job-level dependencies into a single task. A Constraint Programming formulation is presented that optimally merges such task clusters while all job-level dependencies are respected. Such an approach removes the need to consider job-level dependencies at runtime without being bound to a specific scheduling algorithm. Evaluations highlight the applicability of the approach through system-level experiments and showcase its scalability using synthetic task clusters.
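What merging buys can be illustrated with a small sketch. This is a hypothetical simplification, not the paper's Constraint Programming formulation: the jobs of two periodic tasks are laid out over their hyperperiod as one static job sequence, after which a job-level dependency is just an ordering check on that sequence.

```python
from math import gcd

# Hypothetical sketch, not the paper's CP formulation: lay the jobs of
# two periodic tasks out over the hyperperiod as one merged sequence
# and treat job-level dependencies as ordering checks on it.

def merged_job_sequence(period_a, period_b, deps):
    """deps: pairs (("A", i), ("B", j)) meaning job i of A must
    precede job j of B in the merged task's job sequence."""
    hyper = period_a * period_b // gcd(period_a, period_b)
    jobs = [("A", i, i * period_a) for i in range(hyper // period_a)]
    jobs += [("B", j, j * period_b) for j in range(hyper // period_b)]
    jobs.sort(key=lambda job: job[2])  # stable sort: ties keep A before B
    order = [(task, idx) for task, idx, _ in jobs]
    for before, after in deps:
        assert order.index(before) < order.index(after), "dependency violated"
    return order

# Periods 2 and 3, hyperperiod 6: A releases 3 jobs, B releases 2.
print(merged_job_sequence(2, 3, deps=[(("A", 0), ("B", 0))]))
# [('A', 0), ('B', 0), ('A', 1), ('B', 1), ('A', 2)]
```

Once the sequence is fixed at design time, the runtime scheduler sees a single task and never has to enforce the dependencies itself, which is the point the abstract makes.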

  • 112.
    Behere, Sagar
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Embedded Control Systems.
    Architecting Autonomous Automotive Systems: With an emphasis on Cooperative Driving. 2013. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    The increasing usage of electronics and software in a modern automobile enables realization of many advanced features. One such feature is autonomous driving. Autonomous driving means that a human driver's intervention is not required to drive the automobile; rather, the automobile is capable of driving itself. Achieving automobile autonomy requires research in several areas, one of which is the area of automotive electrical/electronics (E/E) architectures. These architectures deal with the design of the computer hardware and software present inside various subsystems of the vehicle, with particular attention to their interaction and modularization. The aim of this thesis is to investigate how automotive E/E architectures should be designed so that 1) it is possible to realize autonomous features and 2) a smooth transition can be made from existing E/E architectures, which have no explicit support for autonomy, to future E/E architectures that are explicitly designed for autonomy. The thesis begins its investigation by considering the specific problem of creating autonomous behavior under cooperative driving conditions. Cooperative driving conditions are those where continuous wireless communication exists between a vehicle and its surroundings, which consist of the local road infrastructure as well as the other vehicles in the vicinity. In this work, we define an original reference architecture for cooperative driving. The reference architecture demonstrates how a subsystem with specific autonomy features can be plugged into an existing E/E architecture in order to realize autonomous driving capabilities. Two salient features of the reference architecture are that it is minimally invasive and that it does not dictate specific implementation technologies. The reference architecture has been instantiated on two separate occasions and is the main contribution of this thesis.

    Another contribution of this thesis is a novel approach to the design of general, autonomous, embedded systems architectures. The approach introduces an artificial consciousness within the architecture that understands the overall purpose of the system and also how the different existing subsystems should work together in order to meet that purpose. This approach can enable progressive autonomy in existing embedded systems architectures over successive design iterations.

    Download full text (pdf)
    SagarLicentiate
  • 113.
    Behere, Sagar
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Embedded Control Systems.
    Scoop Technical Report: Year 2011. 2011. Report (Other academic)
    Abstract [en]

    This report deals with the technical solution that was implemented for the Grand Cooperative Driving Challenge (GCDC) 2011. The GCDC involved developing a system to drive a vehicle autonomously in specific situations. Some reflections on the design process are also included. The goal of the report is to help the reader understand the technical solution and the motivations behind the design choices made.

    Download full text (pdf)
    scoop-report
  • 114.
    Behere, Sagar
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Embedded Control Systems.
    Törngren, Martin
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Embedded Control Systems.
    Chen, DeJiu
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Embedded Control Systems.
    A reference architecture for cooperative driving. 2013. In: Journal of systems architecture, ISSN 1383-7621, E-ISSN 1873-6165, Vol. 59, no 10: Part C, p. 1095-1112. Article in journal (Refereed)
    Abstract [en]

    Cooperative driving systems enable vehicles to adapt their motion to the surrounding traffic situation by utilizing information communicated by other vehicles and infrastructure in the vicinity. How should these systems be designed and integrated into the modern automobile? What are the needed functions, key architectural elements and their relationships? We created a reference architecture that systematically answers these questions and validated it in real world usage scenarios. Key findings concern required services and enabling them via the architecture. We present the reference architecture and discuss how it can influence the design and implementation of such features in automotive systems.

    Download full text (pdf)
    refArch.pdf
  • 115.
    Beillevaire, Marc
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Inside the Black Box: How to Explain Individual Predictions of a Machine Learning Model: How to automatically generate insights on predictive model outputs, and gain a better understanding on how the model predicts each individual data point. 2018. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Machine learning models are becoming more and more powerful and accurate, but their good predictions usually come with a high complexity. Depending on the situation, such a lack of interpretability can be an important and blocking issue. This is especially the case when trust is needed on the user side in order to take a decision based on the model prediction. For instance, when an insurance company uses a machine learning algorithm in order to detect fraudsters: the company would trust the model to be based on meaningful variables before actually taking action and investigating a particular individual.

    In this thesis, several explanation methods are described and compared on multiple datasets (text and numerical data), on both classification and regression problems.

    Download full text (pdf)
    fulltext
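Perturbation-style attribution is one simple family among the explanation methods a comparison like the thesis above would cover. A minimal sketch follows; the model, features, and baseline are all invented for illustration and are not claimed to be from the thesis:

```python
# Invented model and data: a minimal perturbation-style attribution
# sketch. Replace one feature at a time with a baseline value and
# measure how far the prediction moves.

def explain(model, x, baseline):
    """Attribute the prediction of x feature-by-feature."""
    base_pred = model(x)
    contributions = {}
    for name in x:
        perturbed = dict(x)
        perturbed[name] = baseline[name]  # neutralize one feature
        contributions[name] = base_pred - model(perturbed)
    return contributions

# Hypothetical linear "fraud score"; weights chosen for the example.
model = lambda f: 0.8 * f["claims_last_year"] + 0.1 * f["account_age"]
x = {"claims_last_year": 5.0, "account_age": 2.0}
baseline = {"claims_last_year": 0.0, "account_age": 0.0}

for feature, contrib in explain(model, x, baseline).items():
    print(f"{feature}: {contrib:+.2f}")
```

For a linear model these contributions recover the weighted feature values exactly; for complex models they only approximate local behavior, which is precisely the trade-off such explanation methods navigate.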
  • 116. Bentley, Frank
    et al.
    Tollmar, Konrad
    KTH, School of Information and Communication Technology (ICT), Communication Systems, CoS, Mobile Service Laboratory (MS Lab). KTH, School of Information and Communication Technology (ICT), Centres, Center for Wireless Systems, Wireless@kth.
    Stephenson, Peter
    Levy, Laura
    Jones, Brian
    Robertson, Scott
    Price, Ed
    Catrambone, Richard
    Wilson, Jeff
    Health Mashups: Presenting Statistical Patterns between Wellbeing Data and Context in Natural Language to Promote Behavior Change. 2013. In: ACM Transactions on Computer-Human Interaction, ISSN 1073-0516, E-ISSN 1557-7325, Vol. 20, no 5, p. 30. Article in journal (Refereed)
    Abstract [en]

    People now have access to many sources of data about their health and wellbeing. Yet, most people cannot wade through all of this data to answer basic questions about their long-term wellbeing: Do I gain weight when I have busy days? Do I walk more when I work in the city? Do I sleep better on nights after I work out? We built the Health Mashups system to identify connections that are significant over time between weight, sleep, step count, calendar data, location, weather, pain, food intake, and mood. These significant observations are displayed in a mobile application using natural language, for example, "You are happier on days when you sleep more." We performed a pilot study, made improvements to the system, and then conducted a 90-day trial with 60 diverse participants, learning that interactions between wellbeing and context are highly individual and that our system supported an increased self-understanding that led to focused behavior changes.
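The kind of observation the abstract quotes can be mimicked in a few lines. The data and the significance threshold below are invented, and the real system's statistics are more careful than a bare Pearson correlation; this only shows the shape of the data-to-sentence step:

```python
from math import sqrt

# Invented data and threshold: a toy imitation of the Health Mashups
# idea -- detect an association between two wellbeing streams and
# phrase it in natural language.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

sleep_hours = [6.0, 7.5, 8.0, 5.5, 7.0, 8.5, 6.5]  # nightly sleep
mood_score = [4, 7, 8, 3, 6, 9, 5]                 # daily mood, 1-10

if pearson(sleep_hours, mood_score) > 0.5:  # toy significance cutoff
    print("You are happier on days when you sleep more.")
```

The paper's 90-day trial suggests why per-user computation matters: the pair of streams that correlates is highly individual, so the sentences generated differ from user to user.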

  • 117.
    Berezovskyi, Andrii
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Mechatronics.
    El-khoury, Jad
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Mechatronics.
    Fersman, Elena
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Mechatronics.
    Linked Data Architecture for Plan Execution in Distributed CPS. 2019. In: 2019 IEEE International Conference on Industrial Technology (IEEE ICIT), Institute of Electrical and Electronics Engineers (IEEE), 2019, p. 1393-1399. Conference paper (Refereed)
    Abstract [en]

    Future cyber-physical systems (CPS) require their components to perform autonomously. To do that safely and efficiently, CPS components will need access to the global state of the whole CPS. These components will require near real-time updates to a subset of the global state to react to changes in the environment. A particular challenge is to monitor state updates from the distributed CPS components: one needs to ensure that only states consistent with the PDDL plan execution semantics can be observed within the system. In order to guarantee that, a component to monitor plan execution is proposed. Microservices based on Linked Data technologies are used to provide a uniform way to access component states, represented as Resource Description Framework (RDF) resources. To ensure the correct ordering of state updates, we present an extension of the OASIS OSLC TRS protocol. Specifically, we strengthen the ordering guarantees of state change events and introduce inlining of the state with the events to prevent state mismatch at the dereferencing stage.

    Download full text (pdf)
    hal-02114503v1
  • 118.
    Berezovskyi, Andrii
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Mechatronics.
    El-khoury, Jad
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Mechatronics.
    Kacimi, Omar
    Loiret, Frédéric
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Mechatronics.
    Improving lifecycle query in integrated toolchains using linked data and MQTT-based data warehousing. 2017. Conference paper (Refereed)
    Abstract [en]

    The development of increasingly complex IoT systems requires large engineering environments. These environments generally consist of tools from different vendors and are not necessarily integrated well with each other. In order to automate various analyses, queries across resources from multiple tools have to be executed in parallel to the engineering activities. In this paper, we identify the necessary requirements on such a query capability and evaluate different architectures according to these requirements. We propose an improved lifecycle query architecture, which builds upon the existing Tracked Resource Set (TRS) protocol, and complements it with the MQTT messaging protocol in order to allow the data in the warehouse to be kept updated in real-time. As part of the case study focusing on the development of an IoT automated warehouse, this architecture was implemented for a toolchain integrated using RESTful microservices and linked data.

    Download full text (pdf)
    fulltext
  • 119.
    Berggren, Robert
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Nielsen, Timmy
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Investigating the Reliability of Known University Course Timetabling Problem Solving Algorithms with Updated Constraints2018Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Scheduling lectures, exams, seminars, etc. for a university turns out to be a harder task than it seems at first glance. This problem is known as the University Course Timetabling Problem (UCTP). The UCTP has been the subject of a number of competitions throughout the years, hosted by an organization called Practice and Theory of Automated Timetabling (PATAT). Because of these competitions, the problem has been given a standard description and set of constraints, as well as standard problem instances, for easier comparison of research and work on the subject. However, setting a standard like this has a major drawback: no variety is introduced, since new research on finding the best method to solve the UCTP is forced to focus on a specific set of constraints, and algorithms developed will only be optimized with these constraints in consideration.

    In this research we compared five well-known UCTP algorithms on the standard set of constraints and on a different set of constraints. The comparisons showed a difference in the performance ranking of the algorithms when some constraints were changed to fit a certain need. The differences were not great, but big enough to state that previous research declaring which algorithms are best for the UCTP cannot be relied upon unless close to identical sets of constraints are used. If the goal is to find the best algorithm for a new set of constraints, one should not rely on a single previously established algorithm but instead take two or three of the top performers for the greatest chance of finding the most optimized solution possible.

    Download full text (pdf)
    fulltext
  • 120. Berkholz, Christoph
    et al.
    Nordström, Jakob
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Near-Optimal Lower Bounds on Quantifier Depth and Weisfeiler-Leman Refinement Steps2016In: PROCEEDINGS OF THE 31ST ANNUAL ACM-IEEE SYMPOSIUM ON LOGIC IN COMPUTER SCIENCE (LICS 2016), Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 267-276Conference paper (Refereed)
    Abstract [en]

    We prove near-optimal trade-offs for quantifier depth versus number of variables in first-order logic by exhibiting pairs of n-element structures that can be distinguished by a k-variable first-order sentence but where every such sentence requires quantifier depth at least n^(Omega(k/log k)). Our trade-offs also apply to first-order counting logic, and by the known connection to the k-dimensional Weisfeiler-Leman algorithm imply near-optimal lower bounds on the number of refinement iterations. A key component in our proof is the hardness condensation technique recently introduced by [Razborov '16] in the context of proof complexity. We apply this method to reduce the domain size of relational structures while maintaining the quantifier depth required to distinguish them.

  • 121.
    Besker, Terese
    et al.
    RISE Research Institutes of Sweden AB, Systems Engineering, Gothenburg, Sweden.
    Franke, Ulrik
    KTH, School of Electrical Engineering and Computer Science (EECS). RISE Research Institutes of Sweden AB.
    Axelsson, Jakob
    Mälardalen University, RISE Research Institutes of Sweden AB, Västerås, Sweden.
    Navigating the Cyber-Security Risks and Economics of System-of-Systems2023In: 2023 18th Annual System of Systems Engineering Conference, SoSe 2023, Institute of Electrical and Electronics Engineers (IEEE) , 2023Conference paper (Refereed)
    Abstract [en]

    Cybersecurity is an important concern in systems-of-systems (SoS), where the effects of cyber incidents, whether deliberate attacks or unintentional mistakes, can propagate from an individual constituent system (CS) throughout the entire SoS. Unfortunately, the security of an SoS cannot be guaranteed by separately addressing the security of each CS. Security must also be addressed at the SoS level. This paper reviews some of the most prominent cybersecurity risks within the SoS research field and combines this with the cyber and information security economics perspective. This sets the scene for a structured assessment of how various cyber risks can be addressed in different SoS architectures. More precisely, the paper discusses the effectiveness and appropriateness of five cybersecurity policy options in each of the four assessed SoS archetypes and concludes that cybersecurity risks should be addressed using both traditional design-focused and more novel policy-oriented tools.

  • 122. Bhatti, Muhammad Khurram
    et al.
    Oz, Isil
    Popov, Konstantin
    Brorsson, Mats
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Farooq, Umer
    Scheduling of Parallel Tasks with Proportionate Priorities2016In: ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING, ISSN 2193-567X, Vol. 41, no 8, p. 3279-3295Article in journal (Refereed)
    Abstract [en]

    Parallel computing systems promise higher performance for computationally intensive applications. Since programmes for parallel systems consist of tasks that can be executed simultaneously, task scheduling becomes crucial for the performance of these applications. Given dependence constraints between tasks, their arbitrary sizes, and bounded resources available for execution, optimal task scheduling is considered an NP-hard problem. Therefore, proposed scheduling algorithms are based on heuristics. This paper presents a novel list scheduling heuristic, called the Noodle heuristic. Noodle is a simple yet effective scheduling heuristic that differs from the existing list scheduling techniques in the way it assigns task priorities. The priority mechanism of Noodle maintains proportionate fairness among all ready tasks belonging to all paths within a task graph. We conduct an extensive experimental evaluation of the Noodle heuristic with task graphs taken from the Standard Task Graph (STG) set. Our experimental study includes results for task graphs comprising 50, 100, and 300 tasks per graph and execution scenarios with 2-, 4-, 8-, and 16-core systems. We report results for the average Schedule Length Ratio (SLR) obtained by varying the Communication to Computation cost Ratio. We also analyse results for different degrees of parallelism and numbers of edges in the task graphs. Our results demonstrate that Noodle produces schedules that are within a maximum of 12 % (in the worst case) of the optimal schedule for 2-, 4-, and 8-core systems. We also compare Noodle with existing scheduling heuristics and perform a comparative analysis of its performance. Noodle outperforms existing heuristics in terms of average SLR values.

  • 123. Bhatti, Muhammad Khurram
    et al.
    Oz, Isil
    Popov, Konstantin
    Muddukrishna, Ananya
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Brorsson, Mats
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS. SICS Swedish ICT, Sweden.
    Noodle: A Heuristic Algorithm for Task Scheduling in MPSoC Architectures2014In: Proceedings - 2014 17th Euromicro Conference on Digital System Design, DSD 2014, 2014, p. 667-670Conference paper (Refereed)
    Abstract [en]

    Task scheduling is crucial for the performance of parallel applications. Given dependence constraints between tasks, their arbitrary sizes, and bounded resources available for execution, optimal task scheduling is considered an NP-hard problem. Therefore, proposed scheduling algorithms are based on heuristics. This paper presents a novel heuristic algorithm, called the Noodle heuristic, which differs from the existing list scheduling techniques in the way it assigns task priorities. We conduct an extensive experimental study to validate Noodle with task graphs taken from the Standard Task Graph (STG) set. Results show that Noodle produces schedules that are within a maximum of 12% (in the worst case) of the optimal schedule for 2-, 4-, and 8-core systems. We also compare Noodle with existing scheduling heuristics and perform a comparative analysis of its performance.

  • 124.
    Biehl, Matthias
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Embedded Control Systems.
    A Modeling Language for the Description and Development of Tool Chains for Embedded Systems2013Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The development of embedded systems is typically supported by a number of diverse development tools. To achieve seamless tool support throughout the embedded systems development process, tool chains are constructed as software solutions that integrate the development tools. Tool chains have grown from ad-hoc solutions to complex software systems, since they need to support distributed engineering, integration conventions, a specific set of tools and the complete product development process used in a company. In practice, the development of tool chains that fulfill these needs is difficult and time-consuming, since it is a largely unsupported, manual engineering task. In addition, tool chains are typically described using general purpose modeling languages or languages borrowed from other domains, which contributes to the accidental complexity of tool chain development. Due to the increasing sophistication and size of tool chains, there is a need for a systematic, targeted description and development approach for tool chains.

    This thesis contributes with a language for the systematic description of tool chains and semi-automated techniques to support their development.

    The Tool Integration Language (TIL) is a domain-specific modeling language (DSML) for tool chains that allows describing tool chains explicitly, systematically and at an appropriate level of abstraction. TIL concepts are from the domain of tool integration and express the essential design decisions of tool chains at an architectural level of abstraction. A TIL model serves as a basis for the development of a tailored tool chain.

    Semi-automated techniques for the specification, analysis and synthesis support the development of tool chains that are described as TIL models. Specification techniques support the creation and refinement of a tool chain model that is aligned to a given development process and set of tools. Domain-specific analysis techniques are used to check the alignment of the tool chain model with the supported process. Synthesis techniques support the efficient realization of the specified tool chain model as a software solution that conforms to integration conventions.

    Experiences from case studies are presented which apply TIL to support the creation of tool chains. The approach is evaluated, both qualitatively and quantitatively, by comparing it to traditional development methods for tool chains. The approach enables the efficient development of tailored tool chains, which have the potential to improve the productivity of embedded systems development.

    Download full text (pdf)
    fulltext
  • 125.
    Biehl, Matthias
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Mechatronics.
    Analysis of Tool Chains2011Report (Other academic)
  • 126.
    Biehl, Matthias
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Mechatronics.
    Tool Integration Language (TIL)2011Report (Other academic)
  • 127.
    Biehl, Matthias
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Mechatronics.
    El-Khoury, Jad
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Mechatronics.
    Loiret, Frédéric
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Mechatronics.
    Törngren, Martin
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Mechatronics.
    A Domain Specific Language for Generating Tool Integration Solutions2011Conference paper (Other academic)
    Abstract [en]

    Model-based development of complex systems requires tool support for the different phases of the system life cycle. To allow for an efficient development process, the involved tools need to be integrated. Despite the availability of modern tool integration platforms and frameworks, it is complex, labor-intensive and costly to build tool integration solutions. For managing the growing complexity of tool integration solutions, a need for systematic engineering arises. A missing piece is the high-level architectural description of tool integration solutions. We propose the domain specific language TIL for describing tool integration solutions at a high level of abstraction. We propose an approach that takes advantage of modeling technologies to systematize and automate the process of building tool integration solutions. By automatically generating integration solutions from a TIL model, we can reduce the manual implementation effort.

  • 128.
    Biehl, Matthias
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Embedded Control Systems.
    El-Khoury, Jad
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Embedded Control Systems.
    Loiret, Frédéric
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Embedded Control Systems.
    Törngren, Martin
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Embedded Control Systems.
    On the modeling and generation of service-oriented tool chains2014In: Software and Systems Modeling, ISSN 1619-1366, E-ISSN 1619-1374, Vol. 13, no 2, p. 461-480Article in journal (Refereed)
    Abstract [en]

    Tool chains have grown from ad-hoc solutions to complex software systems, which often have a service-oriented architecture. With service-oriented tool integration, development tools are made available as services, which can be orchestrated to form tool chains. Due to the increasing sophistication and size of tool chains, there is a need for a systematic development approach for service-oriented tool chains. We propose a domain-specific modeling language (DSML) that allows us to describe the tool chain on an appropriate level of abstraction. We present how this language supports three activities when developing service-oriented tool chains: communication, design and realization. A generative approach supports the realization of the tool chain using the service component architecture. We present experiences from an industrial case study, which applies the DSML to support the creation of a service-oriented tool chain. We evaluate the approach both qualitatively and quantitatively by comparing it with a traditional development approach.

  • 129.
    Biehl, Matthias
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Mechatronics.
    Gu, Wenqing
    Ericsson AB, Kista, Sweden.
    Loiret, Frédéric
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Embedded Control Systems.
    Model-based Service Discovery and Orchestration for OSLC Services in Tool Chains2012In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics): Volume 7387 LNCS, 2012, p. 283-290Conference paper (Refereed)
    Abstract [en]

    Globally distributed development of complex systems relies on the use of sophisticated development tools, but today the tools provide only limited possibilities for integration into seamless tool chains. If development tools could be integrated, development data could be exchanged, tracing across remotely located tools would be possible, and the efficiency of globally distributed development would increase. We use a domain specific modeling language to describe tool chains as models on a high level of abstraction. We use model-driven technology to synthesize the implementation of a service-oriented wrapper for each development tool based on OSLC (Open Services for Lifecycle Collaboration) and the orchestration of the services exposed by development tools. The wrapper exposes both tool data and functionality as web services, enabling platform independent tool integration. The orchestration allows us to discover remote tools via their service wrapper, integrate them and check the correctness of the orchestration.

  • 130.
    Biehl, Matthias
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Embedded Control Systems.
    Hong, Jiarui
    Loiret, Frederic
    A Generative Approach for Developing Data Exchange in Tool Chains2012Conference paper (Other academic)
  • 131.
    Biehl, Matthias
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Mechatronics.
    Hong, Jiarui
    Loiret, Frederic
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Embedded Control Systems.
    Automated Construction of Data Integration Solutions for Tool Chains2012In: ICSEA 2012 : The Seventh International Conference on Software Engineering Advances, 2012, p. 102-111Conference paper (Refereed)
    Abstract [en]

    Modern software development relies increasingly on the orchestrated use of development tools in the form of seamless, automated tool chains. Tool chains are becoming complex software systems themselves, however, the efficient development of tool chains is a largely unsupported, manual engineering task. We propose both a domain specific modeling language for systematically specifying tool chains and generators for efficiently realizing the tool chain as software. Tool chain software consists of diverse components, such as service-oriented applications, models and model transformations, which we produce by different generative techniques. We study both the separate generative techniques and the dependencies between the generated artifacts to ensure that they can be integrated. We evaluate the approach both quantitatively and qualitatively, and show in a case study that the approach is practically applicable when building a tool chain for industrially relevant tools.

  • 132. Biehl, Matthias
    et al.
    Löwe, Welf
    Automated Architecture Consistency Checking for Model Driven Software Development2009In: ARCHITECTURES FOR ADAPTIVE SOFTWARE SYSTEMS, 2009, p. 36-51Conference paper (Refereed)
    Abstract [en]

    When software projects evolve, their actual implementation and their intended architecture may drift apart, resulting in problems for further maintenance. As a countermeasure, it is good software engineering practice to check the implementation against the architectural description for consistency. In this work we check software developed by a Model Driven Software Development (MDSD) process. This allows us to completely automate consistency checking by deducing information from the implementation, design documents, and model transformations. We have applied our approach to a Java project and found several inconsistencies hinting at design problems. With our approach we can find inconsistencies early, keep the artifacts of an MDSD process consistent, and, thus, improve the maintainability and understandability of the software.

  • 133.
    Biehl, Matthias
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Embedded Control Systems.
    Törngren, Martin
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Embedded Control Systems.
    A cost-efficiency model for tool chains2012In: Global Software Engineering Workshops (ICGSEW), 2012 IEEE Seventh International Conference on, IEEE , 2012, p. 6-11Conference paper (Refereed)
    Abstract [en]

    The seamless integration of development tools can help to improve the productivity of software development and reduce development costs. When tool chains are used in the context of global software engineering, they are deployed as globally distributed systems. Tool chains have the potential to bring productivity gains, but they are also expensive to realize. The decision to introduce a tool chain is often made based only on a qualitative analysis of the situation. A more precise analysis of the trade-offs would be possible if a quantitative model describing the cost-efficiency of tool chains were available. We apply the COCOMO model for cost analysis in combination with the TIL model for tool chain design to create a generic quantitative estimation model for predicting the cost-efficiency of tool chains. We validate the cost-efficiency model with a case study of an industrial tool chain.

  • 134.
    Biehl, Matthias
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Mechatronics.
    Törngren, Martin
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Mechatronics.
    An Estimation Model for the Savings Achievable by Tool Chains2012In: Computer Software and Applications Conference Workshops (COMPSACW), 2012 IEEE 36th Annual, 2012, p. 488-492Conference paper (Refereed)
    Abstract [en]

    Tool chains are sought after by industry due to their promise to improve the productivity of software development by reducing costs. Despite these promises, there are few attempts to quantify costs and productivity improvements achievable with a tool chain. The decision for or against realizing a tool chain design requires a quantitative analysis of the economic benefits achievable with a tool chain. We apply the COCOMO model for cost estimation to create a quantitative model for predicting the cost-savings of tool chains. The cost-savings model can provide support for practitioners and decision makers when facing the decision to create a new tool chain.

  • 135.
    Biehl, Matthias
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Mechatronics.
    Törngren, Martin
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Mechatronics.
    Constructing Tool Chains based on SPEM Process Models2012Conference paper (Refereed)
    Abstract [en]

    The development of embedded systems requires a number of tools, and it is widely believed that integrating the tools into an automated tool chain can improve the productivity of development. However, tool chains are not accepted by practitioners if they are not aligned with the established development culture, processes and standards. Process models exist for a variety of reasons, i.e., for documenting, planning or tracking progress in a development project, and SPEM is the formalism standardized by the OMG for this purpose. We explore to what extent a SPEM process model can be used for creating the skeleton of a tool chain that is aligned with the process. We identify a number of relationship patterns between the development process and its supporting tool chain and show how the patterns can be used for constructing a tool chain. In two case studies, we examine the practical applicability of the patterns when tailoring the design of a tool chain to a development process.

  • 136.
    Bilal, Aldala
    et al.
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Abel, Girma
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Tillämpning av IMU för användning inom kulstötning2020Independent thesis Basic level (university diploma), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    In shot put, optimizing the shot putter's technique is important for improving the performance of the put. The Swedish Athletics Association (Friidrottsförbundet) wishes to use an application that can identify acceleration and movement patterns in order to optimize the shot putter's technique. This report covers the development of an application that uses an Inertial Measurement Unit (IMU) to detect acceleration. Through the developed application, the acquired measurement data is presented visually together with a synchronized video for motion analysis. Tests are performed to investigate the optimal position on the athlete's arm for attaching an acceleration sensor. The results show that it is fully possible to identify acceleration and movement patterns by combining the acquired IMU data with synchronized video. However, further studies should be conducted to examine the reliability of the acquired measurement data.

    Download full text (pdf)
    Tillämpning av IMU för användning inom kulstötning
  • 137.
    Birgersson, Marcus
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Artho, Cyrille
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Balliu, Musard
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Security-Aware Multi-User Architecture for IoT2021In: 2021 IEEE 21ST INTERNATIONAL CONFERENCE ON SOFTWARE QUALITY, RELIABILITY AND SECURITY (QRS 2021), Institute of Electrical and Electronics Engineers Inc. , 2021, p. 102-113Conference paper (Refereed)
    Abstract [en]

    IoT systems, such as in smart cities or hospitals, generate data that may be subject to different security classifications, privacy regulations, and access rights. However, popular IoT platforms do not consider data classification and security-aware data analysis. In this paper, we present a novel architecture based on open-source solutions that handles the issue of collecting and classifying data at the source and presents the data analysis to users at different authorization levels. Our architecture consists of three layers: a layer for exposing collected and classified data to a middleware, the middleware to handle storage and analysis of the data and expose it to a dashboard, and the dashboard responsible for authenticating users and visualizing data according to the users' classification level. Our solution distinguishes itself by focusing on data classification rather than data collection, supporting fine-grained access control and declassification. Our implementation, using the Web of Things API, Node-RED and Grafana, demonstrates the security benefits of our design on use cases in the smart city and healthcare domains.

    Download full text (pdf)
    fulltext
  • 138.
    Bitalebi, Hossein
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electronics and Embedded systems.
    Geraeinejad, Vahid
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electronics and Embedded systems.
    Safaei, Farshad
    Shahid Beheshti University Iran, Tehran.
    Ebrahimi, Masoumeh
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electronics and Embedded systems, Electronic and embedded systems.
    LATOA: Load-Aware Task Offloading and Adoption in GPU2023In: Proceedings of the 15th Workshop on General Purpose Processing Using GPU, GPGPU 2023, Association for Computing Machinery (ACM) , 2023, p. 7-13Conference paper (Refereed)
    Abstract [en]

    The emerging new applications, such as data mining and graph analysis, demand extra processing power at the hardware level. Conventional static task scheduling is no longer able to meet the requirements of such complicated applications. This inefficiency is a major concern when the application is supposed to run on a Graphics Processing Unit (GPU), where millions of instructions should be distributed among a limited number of processing cores. A non-optimal scheduling strategy leads to unfair load distribution among the GPU’s processing cores. Consequently, while busy cores are stalled due to the lack of resources, waiting for their data from the main memory, other cores are idle, waiting for busy cores to complete their tasks. Our study introduces LATOA, a Load-Aware Task Offloading and Adoption method that tackles this problem by reducing both stall and idle cycles. LATOA is the first study moving from static to dynamic task scheduling based on run-time information obtained from the Miss Status Holding Register (MSHR) tables. In LATOA, all processing cores are dynamically tagged with critical, neutral, or relaxed states. Then, irregular warps with low locality properties are detected and offloaded from critical cores (going to the stall state) to relaxed ones (going to the idle state). Based on our experiments, LATOA reduces the number of stall cycles on average by 24% and increases the neutral states on average by 38%. In addition, with negligible hardware overhead, LATOA improves system performance and power efficiency on average by 26% and 7%, respectively.

  • 139.
    Björnström, Tommie
    et al.
    KTH, School of Technology and Health (STH), Medical Engineering, Computer and Electronic Engineering.
    Cederqvist, Reidar
    KTH, School of Technology and Health (STH), Medical Engineering, Computer and Electronic Engineering.
    Comparison and Implementation of Software Frameworks for Internet of Things2015Independent thesis Basic level (university diploma), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    There is no established standard for how Internet of Things devices communicate with each other; every manufacturer uses its own proprietary software and protocols. This makes it difficult to ensure the best possible user experience. There are several projects that could become a standard for how devices discover each other, communicate, form networks, etc. The goal of this thesis work was to compare such software frameworks in some areas and investigate how Inteno’s operating system Iopsys OS can be complemented by implementing one of these frameworks. A literature study gave two candidates for the comparison, AllJoyn and Bonjour. The result of the comparison showed that AllJoyn was the most appropriate choice for Inteno to implement into their OS. AllJoyn was chosen because it has the potential to become an established standard and includes tools for easy implementation. As a proof of concept, an AllJoyn application was created. The application, together with a JavaScript web page, can show and control options for an AllJoyn Wi-Fi manager application and AllJoyn enabled lamps.

    Download full text (pdf)
    fulltext
  • 140. Blom, Rikard
    et al.
    Korman, Matus
    KTH, School of Electrical Engineering (EES), Electric Power and Energy Systems.
    Lagerström, Robert
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering.
    Ekstedt, Mathias
    KTH, School of Electrical Engineering (EES), Electric Power and Energy Systems.
    Analyzing attack resilience of an advanced meter infrastructure reference model2016In: Joint Workshop on Cyber-Physical Security and Resilience in Smart Grids (CPSR-SG), IEEE conference proceedings, 2016Conference paper (Refereed)
    Abstract [en]

    Advanced metering infrastructure (AMI) is a key component of the concept of smart power grids. Although several functional/logical reference models of AMI exist, they are not suited for automated analysis of properties such as cyber security. This paper briefly presents a reference model of AMI that follows a tested and even commercially adopted formalism allowing automated analysis of cyber security. Finally, this paper presents an example cyber security analysis, and discusses its results.

  • 141.
    Boberg, Alice
    KTH, School of Industrial Engineering and Management (ITM), Learning.
    Challenges and Opportunities for Digital Mentorship Programs in ICT2022Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The covid-19 pandemic required an emergency transition to digital education all over the world and led to a large learning loss in most countries. Informal learning programs teaching children digital skills were stopped or digitalized due to the pandemic. Studies show that merely having access to remote education does not ensure that children absorb knowledge the way they do in face-to-face education (UNESCO, UNICEF, the World Bank and OECD, 2021), which indicates that digital education brings new challenges compared to face-to-face education. This thesis investigates what problems and opportunities mentors face in a digital informal learning program compared to a face-to-face one. It also aims to find pedagogical and technical solutions to the problems the mentors face. The study interviews mentors from the Ericsson program Digital Lab and the learning program Technovation Girls, and surveys mentors to generalize the findings and evaluate which challenges are the most important. The logistical advantages are one of the main opportunities of digital mentoring: mentors experience flexibility and time efficiency, and more mentors can join the learning programs. The digitalization of the programs also makes it possible to reach children in rural areas, and mentors and mentees can connect all over the world. The study shows that many mentees lack access to ICT, which is a significant problem because not all children get the opportunity to participate in digital education. Motivation is a bigger challenge for mentees in the digital environment than in the physical one, and the mentor has a larger role in motivating mentees in digital environments. The learning outcome differs between environments, and physical and digital education may be more or less effective depending on the skills to be learned. Mentors also struggle with the lack of free, accessible tools adapted to helping mentees with programming, answering questions and holding video-call meetings. As an example of a solution to this problem, the features of a selection of commonly used communication tools are compiled, and a prototype is developed of a selection tool that could help mentors choose the most efficient digital communication tool fit for purpose. The study highlights challenges and opportunities common to mentors teaching ICT skills in a digital environment, and the results could be used in further studies to investigate the reasons for the learning loss in digital education.

    Download full text (pdf)
    fulltext
  • 142.
    Bocci, Alessandro
    et al.
    Univ Pisa, Dept Comp Sci, I-56127 Pisa, Italy.
    Forti, Stefano
    Univ Pisa, Dept Comp Sci, I-56127 Pisa, Italy.
    Guanciale, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Ferrari, Gian-Luigi
    Univ Pisa, Dept Comp Sci, I-56127 Pisa, Italy.
    Brogi, Antonio
    Univ Pisa, Dept Comp Sci, I-56127 Pisa, Italy.
    Secure Partitioning of Cloud Applications, with Cost Look-Ahead2023In: Future Internet, E-ISSN 1999-5903, Vol. 15, no 7, article id 224Article in journal (Refereed)
    Abstract [en]

    The security of Cloud applications is a major concern for application developers and operators. Protecting users' data confidentiality requires methods to avoid leakage from vulnerable software and unreliable Cloud providers. Recently, trusted execution environments (TEEs) emerged in Cloud settings to isolate applications from the privileged access of Cloud providers. Such hardware-based technologies exploit separation kernels, which aim at safely isolating the software components of applications. In this article, we propose a methodology to determine safe partitionings of Cloud applications to be deployed on TEEs. Through a probabilistic cost model, we enable application operators to select the best trade-off partitioning in terms of future re-partitioning costs and the number of domains. To the best of our knowledge, no previous proposal exists addressing such a problem. We exploit information-flow security techniques to protect the data confidentiality of applications by relying on declarative methods to model applications and their data flow. The proposed solution is assessed by executing a proof-of-concept implementation that shows the relationship among the future partitioning costs, number of domains and execution times.
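A minimal sketch of the information-flow condition behind safe partitioning, under assumptions of our own: all names (`is_safe`, `placement`, `trusted`) are illustrative, not the paper's declarative formalism. The idea is that a partitioning of components into domains is unsafe whenever confidential data flows into a component placed in an untrusted domain.

```python
# Hypothetical sketch of an information-flow safety check for a partitioning.
# placement maps each component to a domain; trusted marks which domains run
# inside a TEE; flows are (source, destination, data-label) triples.

def is_safe(placement, flows, trusted):
    """Return True if no confidential flow ends in an untrusted domain."""
    return all(
        trusted[placement[dst]]
        for _src, dst, label in flows
        if label == "confidential"
    )

placement = {"frontend": "public", "wallet": "enclave"}
trusted = {"public": False, "enclave": True}
flows = [("frontend", "wallet", "confidential"),
         ("wallet", "frontend", "public")]

print(is_safe(placement, flows, trusted))  # True: secrets stay in the enclave
```

Moving the confidential flow's destination to the untrusted `public` domain would make the same check return False, which is the signal that a re-partitioning is needed.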

  • 143.
    Bogdanov, Kirill
    KTH, School of Information and Communication Technology (ICT), Communication Systems, CoS, Network Systems Laboratory (NS Lab).
    Latency Dataset for the paper "The Nearest Replica Can Be Farther Than You Think"2015Data set
  • 144.
    Bogdanov, Kirill
    et al.
    KTH, School of Information and Communication Technology (ICT), Communication Systems, CoS, Network Systems Laboratory (NS Lab).
    Peón-Quirós, Miguel
    Complutense University of Madrid.
    Maguire Jr., Gerald Q.
    KTH, School of Information and Communication Technology (ICT), Communication Systems, CoS, Radio Systems Laboratory (RS Lab).
    Kostic, Dejan
    KTH, School of Information and Communication Technology (ICT), Communication Systems, CoS, Network Systems Laboratory (NS Lab).
    The Nearest Replica Can Be Farther Than You Think2015In: Proceedings of the ACM Symposium on Cloud Computing 2015, Association for Computing Machinery (ACM), 2015, p. 16-29Conference paper (Refereed)
    Abstract [en]

    Modern distributed systems are geo-distributed for reasons of increased performance, reliability, and survivability. At the heart of many such systems, e.g., the widely used Cassandra and MongoDB data stores, is an algorithm for choosing the closest set of replicas to service a client request. Suboptimal replica choices due to dynamically changing network conditions result in reduced performance through increased response latency. We present GeoPerf, a tool that automates the process of systematically testing the performance of replica selection algorithms for geo-distributed storage systems. Our key idea is to combine symbolic execution and lightweight modeling to generate a set of inputs that can expose weaknesses in replica selection. As part of our evaluation, we analyzed network round-trip times between geographically distributed Amazon EC2 regions, and showed a significant number of daily changes in nearest-K replica orders. We tested Cassandra and MongoDB using our tool, and found bugs in each of these systems. Finally, we use our collected Amazon EC2 latency traces to quantify the time lost due to these bugs. For example, due to the bug in Cassandra, the median wasted time for 10% of all requests is above 50 ms.
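To make the "nearest-K replica order" concrete, here is an illustrative sketch (not GeoPerf itself; the function names and RTT snapshots are invented): rank replicas by measured round-trip time and detect when the nearest-K order changes between two measurement snapshots, the kind of daily reordering the paper observes between EC2 regions.

```python
# Illustrative only: nearest-K ranking by round-trip time and a check for
# order changes between two RTT snapshots from the same vantage point.

def nearest_k(rtts, k):
    """Return the k replicas with the lowest round-trip times."""
    return sorted(rtts, key=rtts.get)[:k]

def order_changed(rtts_before, rtts_after, k):
    """True if the nearest-k replica order differs between two snapshots."""
    return nearest_k(rtts_before, k) != nearest_k(rtts_after, k)

# Hypothetical RTT snapshots in milliseconds.
morning = {"us-east": 12.0, "eu-west": 85.0, "ap-south": 190.0}
evening = {"us-east": 95.0, "eu-west": 80.0, "ap-south": 190.0}

print(nearest_k(morning, 2))               # ['us-east', 'eu-west']
print(order_changed(morning, evening, 2))  # True: eu-west is now nearest
```

A replica selection algorithm that caches the morning order and keeps using it in the evening is exactly the kind of suboptimal choice such a tool is designed to expose.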

    Download full text (pdf)
    fulltext
  • 145.
    Bolakhrif, Amin
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Özger, Mustafa
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS.
    Sandberg, David
    Ericsson Research, Stockholm, Sweden.
    Cavdar, Cicek
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS.
    AI-Assisted Network Traffic Prediction Without Warm-Up Periods2022In: 95th IEEE Vehicular Technology Conference - Spring, VTC 2022-SPRING, Institute of Electrical and Electronics Engineers (IEEE) , 2022Conference paper (Refereed)
    Abstract [en]

    Network traffic prediction in cellular networks improves the reliability and efficiency of network resource use via proactive network management schemes. To this end, future traffic arrivals are anticipated via machine learning (ML)-based network traffic predictions based on historical network traffic data. Current literature on ML-based network traffic prediction employs warm-up periods: the duration for which a traffic flow must be observed before meaningful predictions can be made. However, most flows are shorter than the warm-up period. This paper proposes a residual neural network (ResNet) architecture for individual network flow predictions, based on a deep-learning approach that removes the warm-up period required by other proposed methods. The ResNet architecture accurately predicts the order of magnitude of the packet count, size, and duration of flows using only the information available on the arrival of the first packet, such as IP addresses and the transport-layer protocol used. The results indicate that the proposed method predicts the order of magnitude of individual flow characteristics with over 80% accuracy, outperforming traditional ML methods such as linear regression and decision trees.
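The prediction target here is the order of magnitude of a flow characteristic, not its exact value. A small sketch of that target encoding and the corresponding accuracy metric (illustrative, not the paper's ResNet; the sample flow sizes are invented):

```python
import math

# Order-of-magnitude target and exact-magnitude-match accuracy, as a
# hypothetical stand-in for the paper's evaluation metric.

def magnitude(x):
    """Order of magnitude of a positive flow statistic (e.g. bytes)."""
    return int(math.floor(math.log10(x)))

def magnitude_accuracy(true_values, predicted_values):
    """Fraction of predictions whose order of magnitude matches the truth."""
    hits = sum(magnitude(t) == magnitude(p)
               for t, p in zip(true_values, predicted_values))
    return hits / len(true_values)

true_sizes = [1_500, 48_000, 2_300_000, 90]   # observed flow sizes in bytes
pred_sizes = [1_100, 52_000, 12_000_000, 70]  # hypothetical model outputs
print(magnitude_accuracy(true_sizes, pred_sizes))  # 0.75
```

Matching the magnitude rather than the raw value is what makes prediction from a single first packet feasible: the model only has to place a flow in the right size class.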

  • 146.
    Boman, Magnus
    et al.
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS. SICS.
    Gillblad, Daniel
    SICS.
    Learning Machines for Computational Epidemiology2014In: Proceedings - 2014 IEEE International Conference on Big Data, Washington DC: IEEE conference proceedings, 2014, p. 1-5Conference paper (Refereed)
    Abstract [en]

    Resting on our experience of computational epidemiology in practice and of industrial projects on analytics of complex networks, we point to an innovation opportunity for improving the digital services to epidemiologists for monitoring, modeling, and mitigating the effects of communicable disease. Artificial intelligence and intelligent analytics of syndromic surveillance data promise new insights to epidemiologists, but the real value can only be realized if human assessments are paired with assessments made by machines. Neither massive data itself, nor careful analytics, will necessarily lead to better informed decisions. The process producing feedback to humans on decision making informed by machines can be reversed to consider feedback to machines on decision making informed by humans, enabling learning machines. We predict and argue for the fact that the sensemaking that such machines can perform in tandem with humans can be of immense value to epidemiologists in the future.

  • 147.
    Bonacina, Ilario
    et al.
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Galesi, Nicola
    Thapen, Neil
    Total Space in Resolution2016In: SIAM journal on computing (Print), ISSN 0097-5397, E-ISSN 1095-7111, Vol. 45, no 5, p. 1894-1909Article in journal (Refereed)
    Abstract [en]

    We show quadratic lower bounds on the total space used in resolution refutations of random k-CNFs over n variables and of the graph pigeonhole principle and the bit pigeonhole principle for n holes. This answers the open problem of whether there are families of k-CNF formulas of polynomial size that require quadratic total space in resolution. The results follow from a more general theorem showing that, for formulas satisfying certain conditions, in every resolution refutation there is a memory configuration containing many clauses of large width.

  • 148. Bongo, Lars Ailo
    et al.
    Ciegis, Raimondas
    Frasheri, Neki
    Gong, Jing
    KTH, Centres, SeRC - Swedish e-Science Research Centre. KTH, School of Computer Science and Communication (CSC), Centres, Centre for High Performance Computing, PDC.
    Kimovski, Dragi
    Kropf, Peter
    Margenov, Svetozar
    Mihajlovic, Milan
    Neytcheva, Maya
    Rauber, Thomas
    Rünger, Gudula
    Trobec, Roman
    Wuyts, Roel
    Wyrzykowski, Roman
    Applications for Ultrascale Computing2015In: Supercomputing Frontiers and Innovations, ISSN 2409-6008, Vol. 2, no 1, p. 19-48Article in journal (Refereed)
    Abstract [en]

    Studies of complex physical and engineering systems, represented by multi-scale and multi-physics computer simulations have an increasing demand for computing power, especially when the simulations of realistic problems are considered. This demand is driven by the increasing size and complexity of the studied systems or the time constraints. Ultrascale computing systems offer a possible solution to this problem. Future ultrascale systems will be large-scale complex computing systems combining technologies from high performance computing, distributed systems, big data, and cloud computing. Thus, the challenge of developing and programming complex algorithms on these systems is twofold. Firstly, the complex algorithms have to be either developed from scratch, or redesigned in order to yield high performance, while retaining correct functional behaviour. Secondly, ultrascale computing systems impose a number of non-functional cross-cutting concerns, such as fault tolerance or energy consumption, which can significantly impact the deployment of applications on large complex systems. This article discusses the state-of-the-art of programming for current and future large scale systems with an emphasis on complex applications. We derive a number of programming and execution support requirements by studying several computing applications that the authors are currently developing and discuss their potential and necessary upgrades for ultrascale execution.

  • 149. Bordencea, D.
    et al.
    Shafaat, Tallat Mahmood
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Arad, Cosmin
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Haridi, Seif
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Valean, H.
    Efficient linearizable write operations using bounded global time uncertainty2013In: Proceedings - 2013 IEEE 12th International Symposium on Parallel and Distributed Computing, ISPDC 2013, IEEE , 2013, p. 59-66Conference paper (Refereed)
    Abstract [en]

    Distributed key-value stores employed in data centers treat each key-value pair as a shared memory register. For fault tolerance and performance, each key-value pair is replicated. Various models exist for the consistency of data amongst the replicas. While atomic consistency, also known as linearizability, provides the strongest form of consistency for read and write operations, various key-value stores, such as Cassandra and Dynamo, offer only eventual consistency instead. One main motivation for this choice is the performance degradation incurred when guaranteeing atomic consistency. In this paper, we use time with known bounded uncertainty to improve the performance of write operations while maintaining atomic consistency. We show how to use the concept of commit wait in a shared memory register to perform a write operation in one phase (message round trip) instead of two. We evaluate the solution experimentally by comparing it to ABD, a well-known algorithm for achieving atomic consistency in an asynchronous network, which uses two phases for write operations. We also compare our protocol to an eventually consistent register. Our experiments show improved throughput and lower write latency compared to the ABD algorithm.
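The commit-wait idea can be sketched in a few lines. This is a minimal single-process illustration under our own assumptions (the `Register` class and `EPSILON` bound are invented, not the paper's protocol): with clock uncertainty bounded by EPSILON, a writer assigns a timestamp and delays its acknowledgement until that timestamp is guaranteed to be in the past on every clock, so acknowledged writes can be ordered by timestamp without a second message round.

```python
import time

EPSILON = 0.005  # assumed bound on clock uncertainty, in seconds

class Register:
    """Toy shared-memory register illustrating commit wait."""

    def __init__(self):
        self.value = None
        self.timestamp = 0.0

    def write(self, value):
        ts = time.monotonic()
        self.value, self.timestamp = value, ts
        # Commit wait: do not acknowledge until EPSILON has elapsed, so no
        # clock in the system can still show a time earlier than ts.
        time.sleep(EPSILON)
        return ts

r = Register()
t1 = r.write("a")
t2 = r.write("b")
# Any write acknowledged after t1's commit wait carries a later timestamp.
assert t2 >= t1 + EPSILON
```

The cost of the one-phase write is the EPSILON delay before acknowledgement, which is why the technique pays off only when the clock-uncertainty bound is small relative to a message round trip.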

  • 150.
    Bosk, Daniel
    et al.
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Buchegger, Sonja
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Privacy-preserving access control in publicly readable storage systems2016In: 10th IFIP WG 9.2, 9.5, 9.6/11.7, 11.4, 11.6/SIG 9.2.2 International Summer School on Privacy and Identity Management, 2015, Springer-Verlag New York, 2016, p. 327-342Conference paper (Refereed)
    Abstract [en]

    In this paper, we focus on achieving privacy-preserving access control mechanisms for decentralized storage, primarily intended for an asynchronous message passing setting. We propose two modular constructions, one using a pull strategy and the other a push strategy for sharing data. These models yield different privacy properties and requirements on the underlying system. We achieve hidden policies, hidden credentials and hidden decisions. We additionally achieve what could be called ‘hidden policy-updates’, meaning that previously-authorized subjects cannot determine if they have been excluded from future updates or not.
