kth.se Publications
1 - 50 of 515
  • 1.
    Abd Alwaheb, Sofia
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Implementering DevSecOps metodik vid systemutveckling för hälso och sjukvård (2023). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    In healthcare, IT security is crucial for protecting both personal information and patient safety. Currently, the implementation of security measures and testing is done after software development, which can reduce efficiency, and pose a potential risk to patient privacy. This study examined the implementation of the DevSecOps methodology in healthcare, focusing on the development phase. By interviewing employees and using security tools such as SAST, code review, penetration testing, and DAST, benefits and challenges were identified. The challenges included a lack of security knowledge and difficulty integrating tools for free. Despite this, the results demonstrated the potential to enhance security, streamline operations, and save money by utilizing free tools and implementing security during the development phase. Training and hiring security-competent personnel were also emphasized as important for maintaining high security standards.

    Download full text (pdf)
    fulltext
  • 2.
    Abdi Dahir, Najiib
    et al.
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Dahir Ali, Ikran
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Privacy preserving data access mechanism for health data (2023). Independent thesis Basic level (university diploma), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Due to the rise of digitalization and the growing amount of data, ensuring the integrity and security of patient data has become increasingly vital within the healthcare industry, which has traditionally managed substantial quantities of sensitive patient and personal information. This bachelor's thesis focused on designing and implementing a secure data sharing infrastructure to protect the integrity and confidentiality of patient data. Synthetic data was used to enable access for researchers and students in regulated environments without compromising patient privacy. The project successfully achieved its goals by evaluating different privacy-preserving mechanisms and developing a machine learning-based application to demonstrate the functionality of the secure data sharing infrastructure. Despite some challenges, the chosen algorithms showed promising results in terms of privacy preservation and statistical similarity. Ultimately, the use of synthetic data can promote fair decision-making processes and contribute to secure data sharing practices in the healthcare industry.

    Download full text (pdf)
    Examensarbete
  • 3.
    Abdollahi, Meisam
    et al.
    Iran Univ Sci & Technol, Tehran, Iran.
    Baharloo, Mohammad
    Inst Res Fundamental Sci IPM, Tehran, Iran.
    Shokouhinia, Fateme
    Amirkabir Univ Technol, Tehran, Iran.
    Ebrahimi, Masoumeh
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electronics and Embedded systems, Electronic and embedded systems.
    RAP-NoC: Reliability Assessment of Photonic Network-on-Chips, A simulator (2021). In: Proceedings of the 8th ACM international conference on nanoscale computing and communication (ACM NANOCOM 2021), Association for Computing Machinery (ACM), 2021. Conference paper (Refereed).
    Abstract [en]

    Nowadays, the optical network-on-chip is accepted as a promising alternative to traditional electrical interconnects due to lower transmission delay and power consumption as well as considerably high data bandwidth. However, silicon photonics struggles with some particular challenges that threaten the reliability of the data transmission process. The most important challenges are temperature fluctuation, process variation, aging, crosstalk noise, and insertion loss. Although several attempts have been made to investigate the effect of these issues on the reliability of the optical network-on-chip, none of them modeled the reliability of the photonic network-on-chip with a system-level approach based on basic element failure rates. In this paper, an analytical model-based simulator, called Reliability Assessment of Photonic Network-on-Chips (RAP-NoC), is proposed to evaluate the reliability of different 2D optical network-on-chip architectures and data traffic. The experimental results show that, in general, the Mesh topology is more reliable than the Torus at the same size. Increasing the reliability of the Microring Resonator (MR) has a more significant impact on the reliability of an optical router than on the network.

  • 4.
    Abdulnoor, John
    et al.
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Gawriyeh, Ramy
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    A study of methods to synchronize different sensors between two smartphones (2021). Independent thesis Basic level (university diploma), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Obtaining data simultaneously from different sensors located on different mobile devices can be useful for applications such as sports and medicine. In order for the data from the different sensors to be combined for analysis, the mobile devices need to be time synchronized first. This paper presents an application that can be used to calculate the difference between the internal clocks of two Android devices using a combination of the Cristian and Marzullo algorithms. Different methods to connect the devices over Wi-Fi as well as the internet are tested to determine the optimal method for clock synchronization. The paper also validates the synchronization by testing different sensors on two identical Android smartphones. The results show that clock synchronization between two mobile devices can be achieved with a round-trip time of 2 milliseconds or less using Wi-Fi Direct. Validation of the synchronization shows that a delay of 7 milliseconds or less can be achieved between two sensors of the same type on two identical Android smartphones. It also shows that the least achievable delay between sensors of different types is 16 milliseconds. The conclusion is that once two Android smartphones’ clocks are synchronized, only data from sensors of the same type can be combined, with the exception of the camera sensor. Further testing with more robust equipment is needed in order to eliminate human error, which could possibly yield more desirable results.

    Download full text (pdf)
    fulltext
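
    The clock-offset estimate described above is based on Cristian's algorithm. Below is a minimal Python sketch of that idea (one request/response round trip over UDP, with the offset uncertainty bounded by half the round-trip time). It is an illustration only, not the thesis's Android implementation, and the timestamp protocol shown here is an assumption.

    import socket
    import struct
    import time

    def run_server(bind_addr=("0.0.0.0", 9999)):
        """Reply to each ping with the current local time as a big-endian double."""
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.bind(bind_addr)
            while True:
                _, client = sock.recvfrom(64)
                sock.sendto(struct.pack("!d", time.time()), client)

    def estimate_offset(server_addr, timeout=1.0):
        """Return (offset_s, uncertainty_s) of the remote clock relative to ours."""
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            t0 = time.time()                      # local send time
            sock.sendto(b"ping", server_addr)
            payload, _ = sock.recvfrom(64)
            t1 = time.time()                      # local receive time
        t_server = struct.unpack("!d", payload)[0]
        rtt = t1 - t0
        # Cristian's estimate: assume the server stamped its clock at the midpoint
        # of the round trip, so the true offset lies within +/- rtt / 2.
        offset = t_server + rtt / 2.0 - t1
        return offset, rtt / 2.0
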
  • 5.
    Abebe, Henok Girma
    KTH, School of Architecture and the Built Environment (ABE), Philosophy and History, Philosophy.
    Road Safety Policy in Addis Ababa: A Vision Zero Perspective (2022). In: Sustainability, E-ISSN 2071-1050, Vol. 14, no 9, p. 1-22. Article in journal (Refereed).
    Abstract [en]

    In this article, the Addis Ababa city road safety policies are examined and analysed based on the Vision Zero approach to road safety work. Three major policy documents are explored and assessed in terms of how they compare with Vision Zero policy in Sweden, concerning how road safety problems are conceptualised, the responsibility ascriptions promoted, the nature of goal setting concerning road safety objectives, and the specific road safety interventions promoted. It is concluded that there is a big difference between the Swedish Vision Zero approach to road safety work and the Addis Ababa road safety approach in terms of how road safety problems are framed and how responsibility ascriptions are made. In Addis Ababa, policy documents primarily frame road safety problems as individual road user problems and, hence, the responsibility for traffic safety is mainly left to the individual road users. The responsibility extended to other system components such as the vehicles, road design, and the operation of the traffic is growing but still very limited. It is argued that in order to find and secure long-term solutions for traffic safety in the city, a paradigm shift is needed, both regarding what are perceived to be the main causes of road safety problems in the city and who should be responsible for ensuring that road fatalities and injuries are prevented.

    Download full text (pdf)
    fulltext
  • 6.
    Adhi, Boma
    et al.
    RIKEN, Ctr Computat Sci R CCS, Wako, Saitama, Japan.
    Cortes, Carlos
    RIKEN, Ctr Computat Sci R CCS, Wako, Saitama, Japan.
    Tan, Yiyu
    RIKEN, Ctr Computat Sci R CCS, Wako, Saitama, Japan.
    Kojima, Takuya
    RIKEN, Ctr Computat Sci R CCS, Wako, Saitama, Japan; Univ Tokyo, Grad Sch Informat Sci & Technol, Tokyo, Japan.
    Podobas, Artur
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Sano, Kentaro
    RIKEN, Ctr Computat Sci R CCS, Wako, Saitama, Japan.
    Exploration Framework for Synthesizable CGRAs Targeting HPC: Initial Design and Evaluation (2022). In: 2022 IEEE 36th International Parallel And Distributed Processing Symposium Workshops (IPDPSW 2022), Institute of Electrical and Electronics Engineers (IEEE), 2022, p. 639-646. Conference paper (Refereed).
    Abstract [en]

    Among the more salient accelerator technologies to continue performance scaling in High-Performance Computing (HPC) are Coarse-Grained Reconfigurable Arrays (CGRAs). However, what benefits CGRAs will bring to HPC workloads and how those benefits will be reaped is an open research question today. In this work, we propose a framework to explore the design space of CGRAs for HPC workloads, which includes a tool flow of compilation and simulation, a CGRA HDL library written in SystemVerilog, and a synthesizable CGRA design as a baseline. Using RTL simulation, we evaluate two well-known computation kernels with the baseline CGRA for multiple different architectural parameters. The simulation results demonstrate both correctness and usefulness of our exploration framework.

  • 7.
    Afzal, Ayesha
    et al.
    Erlangen National High Performance Computing Center (NHR@FAU), 91058, Erlangen, Germany.
    Hager, Georg
    Erlangen National High Performance Computing Center (NHR@FAU), 91058, Erlangen, Germany.
    Wellein, Gerhard
    Erlangen National High Performance Computing Center (NHR@FAU), 91058, Erlangen, Germany; Department of Computer Science, University of Erlangen-Nürnberg, 91058, Erlangen, Germany.
    Markidis, Stefano
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Exploring Techniques for the Analysis of Spontaneous Asynchronicity in MPI-Parallel Applications (2023). In: Parallel Processing and Applied Mathematics - 14th International Conference, PPAM 2022, Revised Selected Papers, Springer Nature, 2023, p. 155-170. Conference paper (Refereed).
    Abstract [en]

    This paper studies the utility of using data analytics and machine learning techniques for identifying, classifying, and characterizing the dynamics of large-scale parallel (MPI) programs. To this end, we run microbenchmarks and realistic proxy applications with the regular compute-communicate structure on two different supercomputing platforms and choose the per-process performance and MPI time per time step as relevant observables. Using principal component analysis, clustering techniques, correlation functions, and a new “phase space plot,” we show how desynchronization patterns (or lack thereof) can be readily identified from a data set that is much smaller than a full MPI trace. Our methods also lead the way towards a more general classification of parallel program dynamics.

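    The abstract above mentions principal component analysis over per-process performance data. As an illustration only (not the authors' toolchain), the following Python sketch projects a matrix of per-process MPI time per time step onto its two leading principal components via an SVD; the synthetic data and all names are assumptions.

    import numpy as np

    def pca_project(X, n_components=2):
        """X has shape (n_processes, n_timesteps); returns per-process scores."""
        Xc = X - X.mean(axis=0, keepdims=True)            # center each time step
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ Vt[:n_components].T

    rng = np.random.default_rng(0)
    # Synthetic example: 32 ranks, 500 time steps, half of them slowly drifting.
    mpi_time = rng.normal(1.0, 0.05, size=(32, 500))
    mpi_time[16:] += np.linspace(0.0, 0.5, 500)
    scores = pca_project(mpi_time)
    print(scores.shape)                                   # (32, 2): drifting ranks separate in this plane
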
  • 8.
    Ahmed, Olfet
    et al.
    KTH, School of Technology and Health (STH), Data- och elektroteknik.
    Saman, Nawar
    KTH, School of Technology and Health (STH), Data- och elektroteknik.
    Utvärdering av nätverkssäkerheten på J Bil AB (2013). Independent thesis Basic level (university diploma), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    The aim of this project is to evaluate the network security at J Bil AB. The focus is on both social and technical issues. For the employees to be able to connect to remote servers and external services and perform their daily work tasks, secure connections are needed. J Bil AB has no IT manager who actively maintains and monitors the network; rather, they consult a computer company when changes and implementations are required. The project’s goal is to identify gaps, come up with suggestions for improvement and, to some extent, implement solutions. To do this, an observation of the employees was made, an interview was held, and several attacks on the network were performed. Based on the data collected, it was concluded that the company has shortcomings in IT security. Above all, the social security appeared to have major gaps, mainly because of the lack of knowledge among the employees: they have never been informed of how to manage their passwords, computers and IT issues in general. Suggestions for improvement have been given and some implementations have been performed to eliminate the deficiencies.

    Download full text (pdf)
    Utvärdering av nätverkssäkerheten
  • 9.
    Ahmed, Tanvir Saif
    et al.
    KTH, School of Technology and Health (STH), Medical Engineering, Computer and Electronic Engineering.
    Markovic, Bratislav
    KTH, School of Technology and Health (STH), Medical Engineering, Computer and Electronic Engineering.
    Distribuerade datalagringssystem för tjänsteleverantörer: Undersökning av olika användningsfall för distribuerade datalagringssystem (2016). Independent thesis Basic level (university diploma), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    In this thesis, a study of three different use cases has been made within the field of data storage: Cold Storage, High Performance Storage and Virtual Machine Storage. The purpose of the survey is to give an overview of commercial distributed file systems and a deeper study of open-source distributed file systems in order to find the most optimal solution for these use cases. Within the study, previous works concerning performance, data protection and costs were analyzed and compared in order to find different functionalities (snapshotting, multi-tenancy, data duplication and data replication) which distinguish modern distributed file systems. Both commercial and open distributed file systems were examined. A cost estimation for commercial and open distributed file systems was made in order to find out the profitability of these two types of distributed file systems. After comparing and analyzing previous works, it was clear that the open-source distributed file system Ceph was a suitable solution in accordance with the objectives that were set for High Performance Storage and Virtual Machine Storage. The cost estimation showed that it was more profitable to implement an open distributed file system. This study can be used as guidance to choose between different distributed file systems.

    Download full text (pdf)
    FULLTEXT99
  • 10.
    Al Hafiz, Muhammad Ihsan
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Implementation of Bolt Detection and Visual-Inertial Localization Algorithm for Tightening Tool on SoC FPGA (2023). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    With the emergence of Industry 4.0, there is a pronounced emphasis on the necessity for enhanced flexibility in assembly processes. In the domain of bolt-tightening, this transition is evident. Tools are now required to navigate a variety of bolts and unpredictable tightening methodologies. Each bolt, possessing distinct tightening parameters, necessitates a specific sequence to prevent issues like bolt cross-talk or unbalanced force.

    This thesis introduces an approach that integrates advanced computing techniques with machine learning to address these challenges in the tightening areas. The primary objective is to offer edge computation for bolt detection and tightening tools' precise localization. It is realized by leveraging visual-inertial data, all encapsulated within a System-on-Chip (SoC) Field Programmable Gate Array (FPGA).

    The chosen approach combines visual information and motion detection, enabling the tool to be localized quickly and precisely. All the computing is done inside the SoC FPGA. The key element for identifying different bolts is the YOLOv3-Tiny-3L model, run using the Deep-learning Processor Unit (DPU) that is implemented in the FPGA. In parallel, the thesis employs the Error-State Extended Kalman Filter (ESEKF) algorithm to fuse the visual and motion data effectively. The ESEKF is accelerated via a full implementation in Register Transfer Level (RTL) in the FPGA fabric.

    We examined the empirical outcomes and found that the visual-inertial localization exhibited a Root Mean Square Error (RMSE) position of 39.69 mm and a standard deviation of 9.9 mm. The precision in orientation determination yields a mean error of 4.8 degrees, offset by a standard deviation of 5.39 degrees. Notably, the entire computational process, from the initial bolt detection to its final localization, is executed in 113.1 milliseconds.

    This thesis articulates the feasibility of executing bolt detection and visual-inertial localization using edge computing within the SoC FPGA framework. The computation trajectory is significantly streamlined by harnessing the adaptability of programmable logic within the FPGA. This evolution signifies a step towards realizing a more adaptable and error-resistant bolt-tightening procedure in industrial areas.

    Download full text (pdf)
    fulltext
  • 11.
    Albaloua, Mark
    et al.
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Kizilkaya, Kenan
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Användning av högnivåspråket Swift i webbläsaren och i Android: En studie på möjligheterna att återanvända högnivåspråket Swift utanför iOS i andra plattformar som webbläsare och Android (2023). Independent thesis Basic level (university diploma), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    The purpose of this work was to study the possibilities of using the high-level language Swift outside of iOS in the browser and on Android. This is to reduce the amount of code written thus reducing development time to create applications for iOS, browser, and Android. To find suitable tools to solve the problem, a study on previous works and methods has been made. The results of the study led to the use of the framework Tokamak together with WebAssembly to reuse Swift in the browser and the tool SwiftKotlin to reuse Swift on Android.

    An application using the Model-View-ViewModel (MVVM) design pattern was created with the intention of testing reusability. The results showed that Tokamak with WebAssembly made it possible to use all the code from the original iOS application except platform-specific functions such as local saving and network calls. SwiftKotlin made it possible to reuse the model class with some small adjustments while the viewmodel and view classes must be manually written. 

    Download full text (pdf)
    fulltext
  • 12. Alesii, Roberto
    et al.
    Congiu, Roberto
    Santucci, Fortunato
    Di Marco, Piergiuseppe
    KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Fischione, Carlo
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Architectures and protocols for fast identification in large-scale RFID systems (2014). In: ISCCSP 2014 - 2014 6th International Symposium on Communications, Control and Signal Processing, Proceedings, 2014, p. 243-246. Conference paper (Refereed).
    Abstract [en]

    Passive tags based on backscattered signals yield low energy consumption for large-scale applications of RFIDs. In this paper, system architectures and protocol enhancements for fast identifications in ISO/IEC 18000-6C systems that integrate UWB technology are investigated. The anti-collision protocol is studied by considering various tag populations. A novel algorithm is proposed to adapt the UHF air interface parameters with the use of UWB ranging information. The results show that the proposed algorithm yields up to 25% potential performance improvement compared to the ISO/IEC 18000-6C standard.

    Download full text (pdf)
    fulltext
  • 13.
    Alevärn, Marcus
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Simplifying Software Testing in Microservice Architectures through Service Dependency Graphs (2023). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    A popular architecture for developing large-scale systems is the microservice architecture, which is currently in use by companies such as Amazon, LinkedIn, and Uber. There are many benefits of the microservice architecture with respect to maintainability, resilience, and scalability. However, despite these benefits, the microservice architecture presents its own unique set of challenges, particularly related to software testing. The difficulty of software testing is exacerbated in the microservice architecture due to its complexity and distributed nature. To mitigate this problem, this project work investigated the use of a graph-based visualization system to simplify the software testing process of microservice systems. More specifically, the role of the visualization system was to provide an analysis platform for identifying the root cause of failing test cases. The developed visualization system was evaluated in a usability test with 22 participants. Each participant was asked to use the visualization system to solve five tasks. Participants could on average solve 70.9% of the five tasks correctly with an average effort rating of 3.5, on a scale from one to ten. The perceived average satisfaction of the visualization system was 8.0, also on a scale from one to ten. The project work concludes that graph-based visualization systems can simplify the process of identifying the root cause of failing test cases for at least five different error types. The visualization system is an effective analysis tool that enables users to follow communication flows and pinpoint problematic areas. However, the results also show that the visualization system cannot automatically identify the root cause of failing test cases. Manual analysis and an adequate understanding of the microservice system are still necessary.

    Download full text (pdf)
    fulltext
  • 14.
    Ali, Umar
    et al.
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Sulaiman, Rabi
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Using UX design principles for comprehensive data visualisation (2023). Independent thesis Basic level (university diploma), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Workplace safety, particularly in manual handling tasks, is a critical concern that has been increasingly addressed using advanced risk assessment tools. However, presenting the complex results of these assessments in an easily digestible format remains a challenge. This thesis focused on designing and developing a user-friendly web application to visualise risk assessment data effectively, grounded in a robust theoretical framework that combines user experience principles and data visualisation techniques. The study employed an iterative, user-centric design process to develop the web application. Multiple visualisation methods, such as pie charts for visualising risk distribution, bar charts, and line charts for time-based analysis, were evaluated for their effectiveness through usability testing. The application's primary contribution lies in its efficient data visualisation techniques, aimed at simplifying complex datasets into actionable insights. This work lays the groundwork for future development by pinpointing areas for improvement such as enhanced interactivity and accessibility.

    Download full text (pdf)
    fulltext
  • 15.
    Alkass, Jakob
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    UX spelar roll: Förbättra prestanda hos webbsida för förbättrad användarupplevelse av webbapplikation (2022). Independent thesis Basic level (university diploma), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    The company Telia has a web application that is under development and will serve as an internal portal for clients of the company. They experience a lack of performance in the frontend part of the application in the form of long loading times. They therefore want to explore possibilities for optimizing the performance of their web application in hope of improving the user experience. The goal was to investigate possibilities for the application of different optimization techniques that can improve parts of the performance with close connection to the user experience.

    For this thesis, previous research was examined in the field of user experience related to digital products. Research of similar work such as appropriate performance measures and optimization techniques was also conducted. To test, analyse and evaluate the optimization techniques, automatic tests were created that stored measurement data on selected performance metrics. Measurement data from the tests was then analysed in order to suggest further development for Telia’s web application. An analysis of the measurement data showed an overall improvement in Telia’s web application performance for the two examined performance metrics by 33% and 35%, respectively.

    Download full text (pdf)
    Examensarbete_rapport
  • 16.
    Allamand Moraga, Katarina Viktoria
    et al.
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH).
    Addae, Edmund
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH).
    Improvement Model of an established Web Application in the form of a Website (Wikipedia) (2023). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    This study promotes the effort of making knowledge accessible to all members of society. The Wikimedia Foundation contributes to this effort through Wikipedia, the free encyclopedia. There is room for improvement of the website that would lead to improved usability, an increased number of visitors and, with that, a greater reach for the knowledge it holds. An investigation has been conducted on the UI of Wikipedia where user friendliness and usability have been the focus. The investigation consisted of two parts: a heuristic evaluation and user tests. The intended demographic of participants for the user tests was pre-determined; upon selection, they were divided into two groups, one consisting of younger subjects with computer experience and the other of older subjects with less computer experience. Participants were asked to first complete a survey, which was then followed by individual interviews. The investigation laid the foundation for the proposal of suggested improvements in the form of a prototype. The developed prototype was then subjected to user tests in order to verify that it was in fact an improved version in comparison with the then-current UI. Upon completion, the developed prototype could entirely or in part be implemented by Wikipedia to improve user friendliness, increase the number of visitors, and consequently the willingness of these visitors to contribute economically to their cause.

  • 17.
    Alsayfi, Majed S.
    et al.
    King Abdelaziz Univ, Fac Comp & Informat Technol, Dept Comp Sci, Jeddah 21589, Saudi Arabia.
    Dahab, Mohamed Y.
    King Abdelaziz Univ, Fac Comp & Informat Technol, Dept Comp Sci, Jeddah 21589, Saudi Arabia.
    Eassa, Fathy E.
    King Abdelaziz Univ, Fac Comp & Informat Technol, Dept Comp Sci, Jeddah 21589, Saudi Arabia.
    Salama, Reda
    King Abdelaziz Univ, Fac Comp & Informat Technol, Dept Informat Technol, Jeddah 21589, Saudi Arabia.
    Haridi, Seif
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Al-Ghamdi, Abdullah S.
    King Abdelaziz Univ, Fac Comp & Informat Technol, Dept Comp Sci, Jeddah 21589, Saudi Arabia; King Abdelaziz Univ, Fac Comp & Informat Technol, Dept Informat Technol, Jeddah 21589, Saudi Arabia.
    Big Data in Vehicular Cloud Computing: Review, Taxonomy, and Security Challenges (2022). In: ELEKTRONIKA IR ELEKTROTECHNIKA, ISSN 1392-1215, Vol. 28, no 2, p. 59-71. Article, review/survey (Refereed).
    Abstract [en]

    Modern vehicles equipped with various smart sensors have become not only a means of transportation but also a means of collecting, creating, computing, processing, and transferring data while traveling through modern and rural cities. A traditional vehicular ad hoc network (VANET) cannot handle the enormous and complex data that are collected by modern vehicle sensors (e.g., cameras, lidar, and global positioning systems (GPS)) because they require rapid processing, analysis, management, storage, and uploading to trusted national authorities. Furthermore, integrating VANET with cloud computing presents a new concept, vehicular cloud computing (VCC), which overcomes the limitations of VANET, brings new services and applications to vehicular networks, and generates a massive amount of data compared to the data collected by individual vehicles alone. Therefore, this study explored the importance of big data in VCC. First, we provide an overview of traditional vehicular networks and their limitations. Then we investigate the relationship between VCC and big data, fundamentally focusing on how VCC can generate, transmit, store, upload, and process big data to share it among vehicles on the road. Subsequently, a new taxonomy of big data in VCC is presented. Finally, the security challenges in big data-based VCCs are discussed.

  • 18.
    Altayr, Hydar
    et al.
    KTH, Superseded Departments (pre-2005), Computer and Systems Sciences, DSV.
    Adis, Michael
    KTH, Superseded Departments (pre-2005), Computer and Systems Sciences, DSV.
    Utveckling och design av WiGID (2003). Independent thesis Basic level (professional degree), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    The Center for Genomics and Bioinformatics (CGB) is an academic department at Karolinska Institute. Generally stated, the CGB department is committed to the generation and management of genetic information by approaches aiming at elucidating the connection between genes, protein and function.

    WiGID is a genome information database that is available through WAP (Wireless Application Protocol).

    Our version of WiGID is based on WML, PHP and PostgreSQL as a database server.

    One of the changes to the old WiGID application was the creation of a relational database with seven tables and one view, instead of the file that represented the database in the old version. We also changed the scripting language from Python to PHP.

    The search engine ability has been extended with three new search alternatives for a user to choose from. Each choice leads to other, sometimes multiple choices.

    A GUI has been created for the administrator, to be able to insert information into the database.

    The structure of the search engine is primarily for narrowing down the search result on the phone display, thereby making the search efficient.

    Download full text (pdf)
    FULLTEXT01
  • 19.
    Alvaeus, Edvin
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Lindén, Ludvig
    KTH, School of Electrical Engineering and Computer Science (EECS).
    A comparison of Azure’s Function-as-a-Service and Infrastructure-as-a-Service solutions (2023). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Cloud computing is a growing industry. More and more companies are moving away from on-premise infrastructure. Instead, the choice is often to build their systems based on cloud services. This growth in the industry has brought with it new needs and consequently, new solutions. There have never existed as many different cloud providers and services offered by these providers. One of the newer paradigms in this industry is the serverless approach.

    The problem of this thesis is that there is a lack of research into how Azure's serverless Function-as-a-Service offering compares to its more traditional Infrastructure-as-a-Service one. Therefore, the purpose of this work is to compare the two with regard to their performance, cost, and required developer effort. The goal is to provide a comparison that can help software professionals in choosing an appropriate cloud solution for their application. Additionally, it aims to contribute to the increased knowledge of modern serverless solutions while providing a basis for future research.

    A qualitative method supported by measurements is used. The two cloud solutions are compared with regards to their performance, cost and developer effort. This is done by implementing and deploying equivalent Representational State Transfer applications with the two Azure offerings. The two implementations are stress-tested to measure their performance, and their incurred costs are compared. Additionally, the effort involved in developing the two solutions is compared by studying the amount of time required to deploy them, and the amount of code needed for them.

    The results show that the serverless Function-as-a-Service solution performed worse under the types of high loads used in the study. The incurred costs for the performed tests were higher for the serverless option, while the developer effort involved was lower. Additionally, further testing showed that the performance of the Function-as-a-Service solution was highly dependent on the concept of cold starts.

    Download full text (pdf)
    fulltext
  • 20.
    Amgren, Pontus
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Olausson, Emil
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Developing Guidelines for Structured Process Data Transfer (2023). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Today, society's use of technology and computers is ever-increasing. The increase in technology creates a need for different programming languages with unique properties. The creation of a system may require multiple languages for multiple processes that need to transfer data between one another. There are several solutions for sharing data between processes, each with its respective strengths and weaknesses. The differences create a problem of needing to understand the data transfer solutions to use them effectively. This thesis addresses the lack of guidelines for choosing among data transfer solutions. The purpose is to create guidelines for choosing a data transfer solution. The goal is to help software developers find a data transfer solution that fits their needs. The thesis is meant to inform and contribute to the understanding of possible solutions for sharing data between processes. A literature study and a practical study were needed to get that understanding. The literature study was conducted to understand the solutions and to be able to compare them. After that, a practical study was performed to work with the solutions and gain experience. The practical study was meant to gain measurements for later comparisons of data transfer solutions. The measurements followed the comparative criteria of speed, resource usage, and language support. The result was the creation of guidelines that display different scenarios based on the comparative criteria. For each scenario, there is a recommendation of solutions that would help in the given situation. These results accomplished the goal and purpose by providing guidelines that could help software developers choose a data transfer solution.

    Download full text (pdf)
    fulltext
  • 21. Ananthanarayanan, G.
    et al.
    Ghodsi, Ali
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS. University of California, Berkeley, CA, United States.
    Wang, A.
    Borthakur, D.
    Kandula, S.
    Shenker, S.
    Stoica, I.
    PACMan: Coordinated memory caching for parallel jobs (2012). In: Proceedings of NSDI 2012: 9th USENIX Symposium on Networked Systems Design and Implementation, USENIX Association, 2012, p. 267-280. Conference paper (Refereed).
    Abstract [en]

    Data-intensive analytics on large clusters is important for modern Internet services. As machines in these clusters have large memories, in-memory caching of inputs is an effective way to speed up these analytics jobs. The key challenge, however, is that these jobs run multiple tasks in parallel and a job is sped up only when inputs of all such parallel tasks are cached. Indeed, a single task whose input is not cached can slow down the entire job. To meet this "all-or-nothing" property, we have built PACMan, a caching service that coordinates access to the distributed caches. This coordination is essential to improve job completion times and cluster efficiency. To this end, we have implemented two cache replacement policies on top of PACMan's coordinated infrastructure: LIFE, which minimizes average completion time by evicting large incomplete inputs, and LFU-F, which maximizes cluster efficiency by evicting less frequently accessed inputs. Evaluations on production workloads from Facebook and Microsoft Bing show that PACMan reduces average completion time of jobs by 56% and 51% (small interactive jobs improve by 77%), and improves efficiency of the cluster by 47% and 54%, respectively.

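    To make the eviction idea concrete, here is a toy, single-node Python sketch of frequency-based eviction in the spirit of LFU-F (evict the least frequently accessed inputs first). PACMan itself is a distributed, coordinated caching service; this simplified cache and its names are illustrative assumptions, not the paper's implementation.

    from collections import defaultdict

    class LfuCache:
        """Toy block cache that evicts the least frequently accessed block."""

        def __init__(self, capacity_bytes):
            self.capacity = capacity_bytes
            self.used = 0
            self.size = {}                    # block id -> cached size in bytes
            self.freq = defaultdict(int)      # block id -> access count

        def access(self, block_id, size_bytes):
            """Record an access and cache the block if it fits. Returns True on a hit."""
            hit = block_id in self.size
            self.freq[block_id] += 1
            if not hit:
                while self.used + size_bytes > self.capacity and self.size:
                    victim = min(self.size, key=lambda b: self.freq[b])
                    self.used -= self.size.pop(victim)
                if self.used + size_bytes <= self.capacity:
                    self.size[block_id] = size_bytes
                    self.used += size_bytes
            return hit
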
  • 22.
    Andersen, Linda
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Andersson, Philip
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Deep Learning Approach for Diabetic Retinopathy Grading with Transfer Learning (2020). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Diabetic retinopathy (DR) is a complication of diabetes and is a disease that affects the eyes. It is one of the leading causes of blindness in the Western world. As the number of people with diabetes grows globally, so does the number of people affected by diabetic retinopathy. This demand requires that better and more effective resources are developed in order to discover the disease at an early stage, which is key to preventing it from progressing into more serious stages that could ultimately lead to blindness, and to streamlining further treatment of the disease. However, traditional manual screenings are not enough to meet this demand. This is where the role of computer-aided diagnosis comes in. The purpose of this report is to investigate how a convolutional neural network together with transfer learning can perform when trained for multiclass grading of diabetic retinopathy. In order to do this, a pre-built and pre-trained convolutional neural network from Keras was used and further trained and fine-tuned in TensorFlow on a 5-class DR grading dataset. Twenty training sessions were performed and accuracy, recall and specificity were evaluated in each session. The results show that testing accuracies achieved were in the range of 35% to 48.5%. The average testing recall achieved for class 0, 1, 2, 3 and 4 was 59.7%, 0.0%, 51.0%, 38.7% and 0.8%, respectively. Furthermore, the average testing specificity achieved for class 0, 1, 2, 3 and 4 was 77.8%, 100.0%, 62.4%, 80.2% and 99.7%, respectively. The average recall of 0.0% and average specificity of 100.0% for class 1 (mild DR) were obtained because the CNN model never predicted this class.

    Download full text (pdf)
    fulltext
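
    The abstract states that a pre-trained Keras network was fine-tuned for 5-class grading but does not name the base model. The sketch below shows the general transfer-learning setup in tf.keras; MobileNetV2, the input size, and the optimizer settings are illustrative assumptions rather than the thesis's actual configuration.

    import tensorflow as tf

    def build_dr_grader(input_shape=(224, 224, 3), num_classes=5):
        base = tf.keras.applications.MobileNetV2(
            input_shape=input_shape, include_top=False, weights="imagenet")
        base.trainable = False                          # freeze pre-trained features
        inputs = tf.keras.Input(shape=input_shape)
        x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
        x = base(x, training=False)
        x = tf.keras.layers.GlobalAveragePooling2D()(x)
        x = tf.keras.layers.Dropout(0.2)(x)
        outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
        model = tf.keras.Model(inputs, outputs)
        model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    # model = build_dr_grader()
    # model.fit(train_ds, validation_data=val_ds, epochs=10)   # train_ds, val_ds: tf.data pipelines
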
  • 23.
    Andersson, Andreas
    KTH, School of Technology and Health (STH), Medical Engineering, Computer and Electronic Engineering.
    Spirometri med en smarttelefon: Utveckling av en app för att mäta rotationshastigheten till en spirometerprototyp för smarttelefoner (2017). Independent thesis Basic level (university diploma), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    The goal of this bachelor thesis was to develop an application with an algorithm to measure the rotation speed of a prototype, as a low-cost solution for measuring spirometry with a smartphone. In a pilot study it was investigated how a smartphone can be used to measure health and what algorithms exist to detect motion in videos. After the pilot study, an app was developed that records a video using the camera of a smartphone and then uses an algorithm to detect the rotation speed of the spirometry prototype's turbine. For this to work, it is important that the rotation speed is low enough that it does not exceed half of the camera's fps. Therefore, to capture the rotation speed of the spirometry prototype's turbine, the rotation needs to be limited and a smartphone with a camera with at least 120 fps is required. The result of this work is an algorithm that can measure the rotation speed of the spirometry prototype turbine. The algorithm detects the peaks in a PPG signal. To minimize the computation time and to increase the accuracy, the algorithm analyses the colour intensity over a ROI in every frame. There is great potential to use this algorithm to further develop this alternative method of measuring spirometry.

    Download full text (pdf)
    Spirometri med en smarttelefon
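
    A hedged Python sketch of the approach described above: average the intensity over a fixed region of interest in every frame, detect peaks in the resulting signal, and convert the peak rate into a rotation speed. The ROI coordinates, the one-peak-per-revolution assumption, and the file name are illustrative, not the app's actual parameters.

    import cv2
    import numpy as np
    from scipy.signal import find_peaks

    def rotation_speed_hz(video_path, roi=(100, 100, 40, 40)):
        x, y, w, h = roi
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS)
        intensity = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            intensity.append(float(gray[y:y + h, x:x + w].mean()))   # mean ROI intensity
        cap.release()
        signal = np.asarray(intensity)
        # Assume one intensity peak per revolution; a minimum spacing of 2 frames
        # keeps the detectable rate below the Nyquist limit of fps / 2.
        peaks, _ = find_peaks(signal, distance=2, prominence=signal.std())
        duration_s = len(signal) / fps
        return len(peaks) / duration_s if duration_s > 0 else 0.0
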
  • 24.
    Andersson, Måns
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Keeping an Indefinitely Growing Audit Log (2022). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    An audit log enables us to discover malfeasance in a system and to understand a security breach after it has happened. An audit log is meant to preserve information about important events in a system in a non-repudiable manner. Naturally, the audit log is often a target for malicious actors trying to cover the traces of an attack. The most common type of attack would be to try to remove or modify entries which contain information about some events in the system that a malicious actor does not want anyone to know about. In this thesis, the state-of-the-art research on secure logging is presented together with a design for a new logging system. The new design has superior properties in terms of both security and functionality compared to the current EJBCA implementation. The design is based on a combination of two well-cited logging schemes presented in the literature. Our design is an audit log built on a Merkle tree structure which enables efficient integrity proofs, flexible auditing schemes, efficient queries and exporting capabilities. On top of the Merkle tree structure, an FssAgg (Forward secure sequential Aggregate) MAC (Message Authentication Code) is introduced which strengthens the resistance to truncation attacks and provides more options for auditing schemes. A proof-of-concept implementation was created and performance was measured to show that the combination of the Merkle tree log and the FssAgg MAC does not significantly reduce the performance compared to the individual schemes, while offering better security. The logging system design and the proof-of-concept implementation presented in this project will serve as a starting point for PrimeKey when developing a new audit log for EJBCA.

    Download full text (pdf)
    fulltext
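
    The Merkle-tree part of such a log can be sketched compactly. The following Python illustration (assumptions: SHA-256 and simple last-node duplication on odd levels) builds a root over append-only entries and produces an inclusion proof for one entry; the FssAgg MAC layer and the EJBCA integration described above are not reproduced here.

    import hashlib

    def _h(data):
        return hashlib.sha256(data).digest()

    def _leaf_level(leaves):
        return [_h(b"\x00" + leaf) for leaf in leaves]     # domain-separated leaf hashes

    def merkle_root(leaves):
        level = _leaf_level(leaves)
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])                    # duplicate last node on odd levels
            level = [_h(b"\x01" + level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    def inclusion_proof(leaves, index):
        """Sibling hashes (hash, sibling_is_right) needed to recompute the root."""
        level, proof = _leaf_level(leaves), []
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])
            sibling = index ^ 1
            proof.append((level[sibling], sibling > index))
            level = [_h(b"\x01" + level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            index //= 2
        return proof

    def verify(leaf, proof, root):
        node = _h(b"\x00" + leaf)
        for sibling, sibling_is_right in proof:
            node = _h(b"\x01" + node + sibling) if sibling_is_right else _h(b"\x01" + sibling + node)
        return node == root

    entries = [f"audit entry {i}".encode() for i in range(5)]
    root = merkle_root(entries)
    assert verify(entries[3], inclusion_proof(entries, 3), root)
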
  • 25.
    Anggraini, Dita
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Reliability and Cost-Benefit Analysis of the Battery Energy Storage System (2023). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The battery energy storage system (BESS) is crucial for the energy transition and decarbonisation of the energy sector. However, reliability assessment and capital cost challenges can hinder their widespread deployment. Reliability and cost-benefit analysis help address these challenges and assess BESS adoption's feasibility and viability, which is the aim of this project.

    A BESS contains various components such as battery packs, inverters, a DC/DC converter, a Battery Thermal Management System (BTMS), electrical protection devices, a transformer, and an Energy Management System (EMS). All these fundamental components must be considered to obtain a complete reliability prediction. Most previous studies focused on the reliability analysis of individual components, but few consider all the abovementioned components in collective reliability analysis. In this thesis, each component is mathematically modelled to estimate failure rates and then used to predict the reliability of the overall BESS system. The model accuracy is verified by comparing the computed reliability indices with the values from standards/references, showing that the proposed reliability prediction methods provide reasonable outcomes.

    Different scenarios to enhance BESS reliability through component redundancy are explored in this project. It is proved that applying component redundancy can boost the overall BESS reliability at the price of an increased capital cost. However, the enhancement in reliability and lifespan due to component redundancy can also curtail maintenance costs. A cost-benefit analysis assesses each scenario's profitability, considering manufacturers' and owners' perspectives. It helps determine the optimal balance between reliability and profitability. Redundancy applied to components with higher failure rates and lower costs improves the reliability and profitability of the BESS. The finding highlights the importance of strategic component selection for enhancing BESS reliability. Careful reliability and cost analysis should be performed simultaneously to find the most optimised BESS scenario.

    Download full text (pdf)
    fulltext
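
    To illustrate the kind of trade-off the abstract describes, the short Python sketch below computes the reliability of a series system from constant component failure rates and shows how duplicating one component changes the result. The failure rates and the ten-year horizon are made-up illustrative values, not the thesis's models or figures.

    import math

    def component_reliability(failure_rate_per_hour, hours):
        """R(t) = exp(-lambda * t) for a constant-failure-rate component."""
        return math.exp(-failure_rate_per_hour * hours)

    def series_reliability(reliabilities):
        """A series system works only if every component works."""
        r = 1.0
        for ri in reliabilities:
            r *= ri
        return r

    def with_redundancy(r_single, n_parallel):
        """A parallel group of n identical units fails only if all of them fail."""
        return 1.0 - (1.0 - r_single) ** n_parallel

    hours = 10 * 8760                                          # ten-year horizon
    rates = {"battery_pack": 2e-6, "inverter": 5e-6, "dcdc_converter": 3e-6,
             "btms": 4e-6, "transformer": 1e-6, "ems": 2e-6}   # hypothetical failures/hour
    base = {k: component_reliability(v, hours) for k, v in rates.items()}
    print("baseline system reliability:", series_reliability(base.values()))

    redundant = dict(base, inverter=with_redundancy(base["inverter"], 2))
    print("with a redundant inverter:  ", series_reliability(redundant.values()))
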
  • 26.
    Araujo, Jose
    et al.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Sandberg, Henrik
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Johansson, Karl Henrik
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Experimental Validation of a Localization System Based on a Heterogeneous Sensor Network (2009). In: ASCC: 2009 7th Asian Control Conference, New York: IEEE, 2009, p. 465-470. Conference paper (Refereed).
    Abstract [en]

    The experimental implementation and validation of a localization system based on a heterogeneous sensor network is described. The sensor network consists of ultrasound ranging sensors and web cameras. They are used to localize a mobile robot under sensor communication constraints. Applying a recently proposed sensor fusion algorithm that explicitly takes communication delay and cost into account, it is shown that one can accurately trade off the estimation performance by using low-quality ultrasound sensors with low processing time and low communication cost versus the use of the high-quality cameras with longer processing time and higher communication cost. It is shown that a periodic schedule of the sensors is suitable in many cases. The experimental setup is discussed in detail and experimental results are presented.

    Download full text (pdf)
    networked_control_ascc09
  • 27.
    Araya, Cristian
    et al.
    KTH, School of Technology and Health (STH), Medical Engineering, Computer and Electronic Engineering.
    Singh, Manjinder
    KTH, School of Technology and Health (STH), Medical Engineering, Computer and Electronic Engineering.
    Web API protocol and security analysis (2017). Independent thesis Basic level (university diploma), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    There is a problem in that every company has its own customer portal. This problem can be solved by creating a platform that gathers all customers’ portals in one place. Such a platform requires a web API protocol that is fast, secure and has capacity for many users. Consequently, a survey of various web API protocols has been made by testing their performance and security.

    The task was to find out which web API protocol offered high security as well as high performance in terms of response time both at low and high load. This included an investigation of previous work to find out if certain protocols could be ruled out. During the work, the platform’s backend was also developed, which needed to implement chosen web API protocols that would later be tested. The performed tests measured the APIs’ connection time and their response time with and without load. The results were analyzed and showed that the protocols had both pros and cons. Finally, a protocol was chosen that was suitable for the platform because it offered high security and fast connection. In addition, the server was not affected negatively by the number of connections. Reactive REST was the web API protocol chosen for this platform.

    Download full text (pdf)
    Web API protocol and security analysis. CA MS
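
    As an illustration of the kind of response-time measurement described above, the Python sketch below times a batch of requests against an endpoint, first sequentially and then under concurrent load. The URL is a placeholder and this is not the authors' actual test harness or protocol set.

    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    URL = "https://example.com/api/customers"            # placeholder endpoint

    def timed_request(url):
        start = time.perf_counter()
        requests.get(url, timeout=10)
        return (time.perf_counter() - start) * 1000.0    # milliseconds

    def measure(url, n_requests=50, concurrency=1):
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            latencies = list(pool.map(timed_request, [url] * n_requests))
        return {"median_ms": statistics.median(latencies),
                "p95_ms": statistics.quantiles(latencies, n=20)[18]}

    print("no load:   ", measure(URL, concurrency=1))
    print("under load:", measure(URL, concurrency=20))
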
  • 28.
    Aroush, Georgek
    KTH, School of Electrical Engineering and Computer Science (EECS).
    An Evaluation of Testing Frameworks for Beginners in JavaScript Programming: An evaluation of testing frameworks with beginners in mind (2022). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Software testing is an essential part of any development, ensuring the validity and verification of projects. As the usage and footprint of JavaScript expand, new testing frameworks in its community have made statements about being the best overall solution using minimal intervention from developers. The statements from these frameworks about being the greatest can make it difficult for JavaScript beginners to pick a framework that could affect current and future projects. By comparing different types of frameworks and establishing a guideline for others to do the same, it becomes easier for beginners and others to choose a framework according to their own required needs. The overall method uses Mario Bunge’s scientific method via stages, which helps validate the thesis as scientific. Research, empirical data from a qualitative, and objective data from a survey decide the criteria, their priority (to determine their impact and hierarchy), what frameworks to include, and how to compare them. The frameworks Jest, AVA, and Node TAP are compared based on the main criteria of simplicity, documentation, features, and their sub-criteria. Evaluating the frameworks and ranking their performance in each criterion was done through an experiment conducted on a pre-made website without any testing included. The analytic hierarchy process is the primary method used to combine the information gathered and output a result. It makes it possible to create a priority hierarchy for each criterion and subsequently makes it possible to evaluate the choices available on their fulfillment of those criteria. One of these choices will eventually be an overall more suitable fit as the optimal framework for the research question. Combining the survey and experiment data into the analytic hierarchy process revealed that Jest fit the previous criteria better than AVA and Node TAP because of Jest’s better learning curve and Stack overflow presence. AVA was just behind in those areas, while Node TAP had a poor fit for all sub-criteria compared to the other two. AVA’s almost similar evaluation to Jest shows how the open-source community and small development teams can keep up with solutions from big corporations.

    Download full text (pdf)
    fulltext
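
    The analytic hierarchy process step mentioned above can be shown in a few lines: derive criterion weights from a pairwise comparison matrix via its principal eigenvector and check Saaty's consistency ratio. The comparison values below are made up for illustration and are not the thesis's survey results.

    import numpy as np

    criteria = ["simplicity", "documentation", "features"]
    # pairwise[i, j]: how much more important criteria[i] is than criteria[j]
    # on Saaty's 1-9 scale; the matrix is reciprocal (a_ji = 1 / a_ij).
    pairwise = np.array([
        [1.0, 3.0, 5.0],
        [1 / 3, 1.0, 2.0],
        [1 / 5, 1 / 2, 1.0],
    ])

    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()                          # normalize to sum to 1

    n = len(criteria)
    ci = (eigvals.real[k] - n) / (n - 1)              # consistency index
    cr = ci / 0.58                                    # random index RI = 0.58 for n = 3
    for name, w in zip(criteria, weights):
        print(f"{name}: {w:.3f}")
    print(f"consistency ratio: {cr:.3f} (commonly accepted if below 0.10)")
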
  • 29.
    Arrospide Echegaray, Daniel
    KTH, School of Technology and Health (STH), Medical Engineering, Computer and Electronic Engineering.
    Utvärdering av Självstyrandes-utvecklarramverket (2016). Independent thesis Basic level (professional degree), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Within software engineering there is a diversity of process methods where each one has its specific purpose. A process method can be described as being a repeatable set of step with the purpose to achieve a task and reach a specific result. The majority of process methods found in this study are focused on the software product being developed. There seems to be a lack of process methods that can be used by software developers for there individual soft- ware process improvement. Individual software process improvement, refers to how the in- dividual software developer chooses to structure their own work with the purpose to obtain a specific result

    The Self-Governance Developer Framework (also called SGD-framework) whilst writing this is a newly developed process framework with the purpose of aiding the individual soft- ware developer to improve his own individual software process. Briefly explained the framework is intended to contain all the activities that can come up in a software project. The problem is that this tool has not yet been evaluated and therefore it is unknown if it is relevant for its purpose. To frame and guide the study three problem questions has been for- mulated (1) Is the framework complete for a smaller company in regards to it activities? (2) How high is the cost for the SGD-framework in regard of time?

    The goal of the study is to contribute to future studies of the framework by performing an action study in which the Self-Governance Developer Framework is evaluated against a set of chosen evaluation criteria.

    An inductive qualitative research method was used when conducting the study. An inductive method means that conclusions are derived from empirically gathered data and that general theories are formed from that data. Specifically, the action study method was used. Data was gathered by keeping a logbook and by logging time during the action study. To evaluate the framework, a set of evaluation criteria was used: (1) completeness, (2) semantic correctness, and (3) cost. A narrative analysis was conducted on the data gathered for the criteria, taking the problem formulations into account.

    The results from the evaluation showed that the framework was not complete with regard to its activities, although it was close to complete, as only a few additional activities were needed during the action study. A total of 3 extra activities were added on top of the regular 40 activities. Around 10% of the time spent in activities was spent in activities outside the Self-Governance Developer Framework. The activities were considered finely comminuted for the context of a smaller company. The framework was considered highly relevant for improving the individual software developer's own process. The introduction cost in this study reflects the time it took until usage of the framework was considered consistent; in this study it was approximately 24 working days, with usage of about 3.54% of an eight-hour workday. The total application cost of using the framework in the performed action study was on average 4.143 SEK/hour, or 662.88 SEK/month. The template cost used was 172.625 SEK/hour.

    Download full text (pdf)
    fulltext
  • 30. Asad, H. A.
    et al.
    Wouters, Erik Henricus
    KTH.
    Bhatti, N. A.
    Mottola, L.
    Voigt, T.
    On Securing Persistent State in Intermittent Computing, 2020. In: ENSsys 2020 - Proceedings of the 8th International Workshop on Energy Harvesting and Energy-Neutral Sensing Systems, Association for Computing Machinery, Inc., 2020, p. 8-14. Conference paper (Refereed)
    Abstract [en]

    We present the experimental evaluation of different security mechanisms applied to persistent state in intermittent computing. Whenever executions become intermittent because of energy scarcity, systems employ persistent state on non-volatile memories (NVMs) to ensure forward progress of applications. Persistent state spans operating system and network stack, as well as applications. While a device is off recharging energy buffers, persistent state on NVMs may be subject to security threats such as stealing sensitive information or tampering with configuration data, which may ultimately corrupt the device state and render the system unusable. Based on modern platforms of the Cortex M*series, we experimentally investigate the impact on typical intermittent computing workloads of different means to protect persistent state, including software and hardware implementations of staple encryption algorithms and the use of ARM TrustZone protection mechanisms. Our results indicate that i) software implementations bear a significant overhead in energy and time, sometimes harming forward progress, but also retaining the advantage of modularity and easier updates; ii) hardware implementations offer much lower overhead compared to their software counterparts, but require a deeper understanding of their internals to gauge their applicability in given application scenarios; and iii) TrustZone shows almost negligible overhead, yet it requires a different memory management and is only effective as long as attackers cannot directly access the NVMs.
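
    As an illustration of the software-based protection discussed above, the following hedged sketch seals a checkpoint blob with authenticated encryption before writing it out and verifies it on restore. It uses Python's cryptography package purely for illustration; it does not reproduce the paper's embedded software/hardware implementations or the ARM TrustZone configuration.

```python
# Illustrative sketch only: authenticated encryption of a checkpoint blob
# before persisting it, mirroring the idea of protecting persistent state
# on NVM. Uses Python's 'cryptography' package (pip install cryptography);
# the paper's actual platforms use embedded software/hardware crypto and
# ARM TrustZone, which this sketch does not reproduce.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # would live in protected storage
aead = AESGCM(key)

def persist(state: bytes, path: str = "checkpoint.bin") -> None:
    nonce = os.urandom(12)                  # fresh nonce per checkpoint
    sealed = aead.encrypt(nonce, state, b"checkpoint-v1")
    with open(path, "wb") as f:             # stand-in for the NVM write
        f.write(nonce + sealed)

def restore(path: str = "checkpoint.bin") -> bytes:
    with open(path, "rb") as f:
        blob = f.read()
    nonce, sealed = blob[:12], blob[12:]
    # Raises InvalidTag if the persisted state was tampered with while off.
    return aead.decrypt(nonce, sealed, b"checkpoint-v1")

persist(b"program counter + task queue state")
print(restore())
```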

  • 31.
    Asbai, Ali
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Prestanda- och användbarhetsanalys av decentraliserad ledger-teknik utvecklad med antingen SQL eller Blockkedja, 2022. Independent thesis Basic level (university diploma), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    B-SPORT+ is a project interested in developing an application for advice and guidance regarding physical exercise adapted for people with disabilities. B-SPORT+ identified the need for a decentralized ledger in the application. A decentralized ledger is a register that stores data on transactions performed in an application. In previous work on the application, a blockchain was highlighted as a possible solution. However, B-SPORT+ found that this technology has disadvantages such as high energy consumption and expensive implementation. Therefore, this work investigated, developed and evaluated an alternative to blockchain based on relational databases.

    The result was two prototypes. The first prototype mimicked blockchain technology by horizontally fragmenting a relational database that stored a table of performed transactions. Cryptographic hashing was then used to validate transactions between database fragments. A prototype was also developed using blockchain technology, and this prototype was used to evaluate the first one. The evaluation showed that the structure of the SQL prototype reduced memory utilization on user computers and also reduced energy consumption and the time needed to perform transactions. This structure also allowed moderation of data in the ledger, which was vital for the application B-SPORT+ wanted to develop.
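
    A minimal sketch of the hash-chaining idea is shown below, assuming a single SQLite table with hypothetical column names; the actual prototype additionally fragments the table horizontally and validates hashes between fragments, which the sketch does not reproduce.

```python
# Minimal sketch of a hash-chained transaction table in SQLite.
# Schema and column names are hypothetical, not the prototype's design.
import hashlib
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE ledger (
                id INTEGER PRIMARY KEY,
                payload TEXT NOT NULL,
                prev_hash TEXT NOT NULL,
                row_hash TEXT NOT NULL)""")

def append(payload: str) -> None:
    row = db.execute("SELECT row_hash FROM ledger ORDER BY id DESC LIMIT 1").fetchone()
    prev_hash = row[0] if row else "GENESIS"
    row_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    db.execute("INSERT INTO ledger (payload, prev_hash, row_hash) VALUES (?, ?, ?)",
               (payload, prev_hash, row_hash))

def verify() -> bool:
    prev = "GENESIS"
    for payload, prev_hash, row_hash in db.execute(
            "SELECT payload, prev_hash, row_hash FROM ledger ORDER BY id"):
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if prev_hash != prev or row_hash != expected:
            return False
        prev = row_hash
    return True

append("user A performed exercise session 1")
append("user B performed exercise session 2")
print(verify())   # True; editing any stored row breaks the chain
```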

    Download full text (pdf)
    Fulltext20220224
  • 32.
    Aslamy, Benjamin
    KTH, School of Technology and Health (STH), Medical Engineering, Computer and Electronic Engineering.
    Utveckling av ett multisensorsystem för falldetekteringsanordningar, 2016. Independent thesis Basic level (professional degree), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Accidental falls among the elderly are a major public health problem. As a result, a variety of systems have been developed for remote monitoring of the elderly to permit early detection of falls. The majority of the research done so far on fall accidents has focused on developing new, more successful algorithms specifically to distinguish falls from non-falls, even though statistics show that mortality and injuries caused by falls increase every year as the proportion of older people in the population grows.

    This thesis is about improving current fall detection devices by covering the gaps and meeting the needs of current fall detection techniques. The improvements that have been identified are to provide a reliable assessment of the patient's health and to be able to call for aid more quickly when a fall occurs. Another improvement is mobility, allowing the elderly to be outdoors and perform daily activities without being limited by their location.

    In summary, a multisensor system in the form of a prototype has been designed to cover the deficiencies and improvements that have been identified. Apart from detecting falls and body movements through an accelerometer, the prototype also includes a sensor for detecting vital signs in the form of ECG. It also supports cellular and wireless network communication in the form of GPRS and Wi-Fi to enable freedom of movement for the elderly. Furthermore, the prototype includes a GPS sensor that provides information about the location.
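
    The accelerometer part of such a system is often a simple threshold check on the resultant acceleration. The sketch below illustrates that idea with a hypothetical 2.5 g threshold and made-up samples; it is not the prototype's actual detection algorithm and leaves out the ECG, GPS and GPRS/Wi-Fi parts.

```python
# Illustrative threshold-based fall detection on accelerometer samples.
# The 2.5 g threshold and the sample data are hypothetical.
import math

FALL_THRESHOLD_G = 2.5   # hypothetical impact threshold in units of g

def magnitude(ax: float, ay: float, az: float) -> float:
    """Resultant acceleration in g from the three axes."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def detect_fall(samples):
    """Return indices of samples whose magnitude exceeds the threshold."""
    return [i for i, (ax, ay, az) in enumerate(samples)
            if magnitude(ax, ay, az) > FALL_THRESHOLD_G]

# Hypothetical samples: quiet standing (~1 g), then a sharp impact.
samples = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.0), (1.9, 1.4, 2.2), (0.0, 0.1, 0.9)]
print(detect_fall(samples))   # -> [2]
```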

    Download full text (pdf)
    fulltext
  • 33.
    Aurell, Erik
    et al.
    KTH, School of Engineering Sciences (SCI), Physics.
    El-Ansary, Sameh
    KTH, School of Engineering Sciences (SCI), Physics.
    A physics-style approach to scalability of distributed systems, 2005. In: Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349, Vol. 3267, p. 266-272. Article in journal (Refereed)
    Abstract [en]

    Is it possible to treat large scale distributed systems as physical systems? The importance of that question stems from the fact that the behavior of many P2P systems is very complex to analyze analytically, and simulation of scales of interest can be prohibitive. In Physics, however, one is accustomed to reasoning about large systems. The limit of very large systems may actually simplify the analysis. As a first example, we here analyze the effect of the density of populated nodes in an identifier space in a P2P system. We show that while the average path length is approximately given by a function of the number of populated nodes, there is a systematic effect which depends on the density. In other words, the dependence is both on the number of address nodes and the number of populated nodes, but only through their ratio. Interestingly, this effect is negative for finite densities, showing that an amount of randomness somewhat shortens average path length.

  • 34.
    Axbrink, William
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Factors Behind Successful Software-as-a-Service Integrations: A Case Study of a SaaS Integration at Scania, 2022. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    The topic of this thesis is to evaluate Software-as-a-Service (SaaS) integrations in order to create a set of guidelines that help ease the integration of SaaS systems into internal, in-house developed systems. This was achieved by performing a case study of a successful SaaS integration to locate relevant success factors to incorporate into upcoming SaaS integrations. The primary findings included a focus on the use of standard solutions, experienced cooperation from the SaaS contractor, and tactical use of technical debt that extends over the whole life cycle. While many success factors helped create a successful integration, certain techniques still have drawbacks, and whether the trade-off is worth it has to be decided by the specific integration's requirements. Other success factors weighed in but were not crucial to the success, and certain factors should be treated with caution due to the harmful effect they might have on the project. Using the factors found in the case study, a set of guidelines focusing on processes and work methodology was created to ease future SaaS integrations for organizations and institutions.

    Download full text (pdf)
    fulltext
  • 35.
    Axtelius, Mathias
    et al.
    KTH, School of Technology and Health (STH), Medical Engineering, Computer and Electronic Engineering.
    Alsawadi, Rami
    KTH, School of Technology and Health (STH), Medical Engineering, Computer and Electronic Engineering.
    An improved selection algorithm for access points in wireless local area networks: An improved selection algorithm for wireless iopsys devices, 2016. Independent thesis Basic level (professional degree), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Wireless devices search for access points when they want to connect to a network. A device chooses an access point based on the received signal strength between the device and the access point. That method is good for staying connected in a local area network, but it does not always offer the best performance, which can result in a slower connection. This is the standard method of connection for wireless clients, which will be referred to as the standard protocol. Larger networks commonly have a lot of access points in an area, which increases the coverage area and makes loss of signal a rare occurrence. Overlapping coverage zones are also common, offering multiple choices for a client. The company Inteno wanted an alternative connection method for their gateways. The new method that was developed would force the client to connect to an access point depending on the bitrate to the master as well as the received signal strength. These factors are affected by many different parameters: noise, signal strength, link rate, bandwidth usage and connection type. A new metric had to be introduced to make the decision process easier by unifying the available parameters. The new metric that was introduced is called score, and a score system was created based on these metrics. The best suited access point would be the one with the highest score. The developed protocol chose the gateway with the highest bitrate available, while the standard protocol would invariably pick the closest gateway. The developed protocol could have been integrated into the standard protocol to gain the benefits of both, but this could not be accomplished, since the information was not easily accessible on Inteno's gateways and had to be left out of this thesis.
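
    A hedged sketch of such a score is given below. The weights, normalisation and example access points are hypothetical and are not Inteno's implementation; the sketch only illustrates how several link parameters can be unified into a single score so that the highest-scoring access point is selected instead of merely the closest one.

```python
# Illustrative "score" combining several link parameters into one metric.
# Weights and normalisation are hypothetical placeholders.
def score(ap, w_rate=0.5, w_rssi=0.3, w_load=0.2):
    rate = ap["link_rate_mbps"] / 1000.0          # normalise to roughly [0, 1]
    rssi = (ap["rssi_dbm"] + 90) / 60.0           # -90 dBm -> 0, -30 dBm -> 1
    load = 1.0 - ap["bandwidth_usage"]            # prefer lightly loaded APs
    return w_rate * rate + w_rssi * rssi + w_load * load

access_points = [
    {"name": "AP-hall",   "link_rate_mbps": 300, "rssi_dbm": -45, "bandwidth_usage": 0.7},
    {"name": "AP-office", "link_rate_mbps": 867, "rssi_dbm": -60, "bandwidth_usage": 0.2},
]
best = max(access_points, key=score)
print(best["name"])   # the AP with the strongest signal is not necessarily the winner
```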

    Download full text (pdf)
    fulltext
  • 36.
    Baccelli, Guido
    et al.
    Politecn Torino, DET, Turin, Italy..
    Stathis, Dimitrios
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Hemani, Ahmed
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electronics and Embedded systems, Electronic and embedded systems.
    Martina, Maurizio
    Politecn Torino, DET, Turin, Italy..
    NACU: A Non-Linear Arithmetic Unit for Neural Networks, 2020. In: Proceedings of the 2020 57th ACM/EDAC/IEEE Design Automation Conference (DAC), IEEE, 2020. Conference paper (Refereed)
    Abstract [en]

    Reconfigurable architectures targeting neural networks are an attractive option. They allow multiple neural networks of different types to be hosted on the same hardware, in parallel or sequence. Reconfigurability also grants the ability to morph into different micro-architectures to meet varying power-performance constraints. In this context, the need for a reconfigurable non-linear computational unit has not been widely researched. In this work, we present a formal and comprehensive method to select the optimal fixed-point representation to achieve the highest accuracy against the floating-point implementation benchmark. We also present a novel design of an optimised reconfigurable arithmetic unit for calculating non-linear functions. The unit can be dynamically configured to calculate the sigmoid, hyperbolic tangent, and exponential function using the same underlying hardware. We compare our work with the state-of-the-art and show that our unit can calculate all three functions without loss of accuracy.
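
    The fixed-point selection step can be illustrated with a short sketch that compares quantised sigmoid outputs against a float64 reference for a few fractional bit widths. The Q-formats and the max-absolute-error criterion are assumptions for illustration, not the paper's actual selection method.

```python
# Sketch: compare fixed-point sigmoid approximations against a float64
# reference to pick a fractional bit width. Illustrative only.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def to_fixed(x, frac_bits):
    """Quantise to a fixed-point grid with 'frac_bits' fractional bits."""
    scale = 2 ** frac_bits
    return np.round(x * scale) / scale

x = np.linspace(-8, 8, 10001)
reference = sigmoid(x)
for frac_bits in (4, 8, 12, 16):
    # Quantise both the input and the output, as a fixed-point unit would.
    approx = to_fixed(sigmoid(to_fixed(x, frac_bits)), frac_bits)
    err = np.max(np.abs(approx - reference))
    print(f"{frac_bits} fractional bits: max abs error = {err:.6f}")
```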

  • 37.
    Baharloo, Mohammad
    et al.
    University of Tehran, Tehran, Iran.
    Khonsari, Ahmen
    University of Tehran, Tehran, Iran.
    Dolati, Mahdi
    University of Tehran, Tehran, Iran.
    Shiri, Pouya
    University of Victoria, BC, Canada.
    Ebrahimi, Masoumeh
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electronics and Embedded systems, Electronic and embedded systems.
    Rahmati, Dara
    University of Tehran, Tehran, Iran.
    Traffic-aware performance optimization in Real-time wireless network on chip, 2020. In: Nano Communication Networks, ISSN 1878-7789, E-ISSN 1878-7797, Vol. 26, article id 100321. Article in journal (Refereed)
    Abstract [en]

    Network on Chip (NoC) is a prevailing communication platform for multi-core embedded systems. Wireless network on chip (WNoC) employs wired and wireless technologies simultaneously to improve the performance and power-efficiency of traditional NoCs. In this paper, we propose a deterministic and scalable arbitration mechanism for the medium access control in the wireless plane and present its analytical worst-case delay model in a certain use-case scenario that considers both Real-time (RT) and Non Real-time (NRT) flows with different packet sizes. Furthermore, we design an optimization model to jointly consider the worst-case and the average-case performance parameters of the system. The Optimization technique determines how NRT flows are allowed to use the wireless plane in a way that all RT flows meet their deadlines, and the average case delay of the WNoC is minimized. Results show that our proposed approach decreases the average latency of network flows up to 17.9%, and 11.5% in 5 × 5, and 6 × 6 mesh sizes, respectively.

    Download full text (pdf)
    fulltext
  • 38.
    Baheux, Ivan
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Context-aware security testing of Android applications: Detecting exploitable vulnerabilities through Android model-based security testing, 2023. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This master’s thesis explores ways to uncover and exploit vulnerabilities in Android applications by introducing a novel approach to security testing. The research question focuses on discovering an effective method for detecting vulnerabilities related to the context of an application. The study begins by reviewing recent papers on Android security flaws affecting applications in order to guide our tool creation. Thus, we are able to introduce three Domain Specific Languages (DSLs) for Model-Based Security Testing (MBST): Context Definition Language (CDL), Context-Driven Modelling Language (CDML), and Vulnerability Pattern (VPat). These languages provide a fresh perspective on evaluating the security of Android apps by accounting for the dynamic context that is present on smartphones and can greatly impact user security. The result of this work is the development of VPatChecker[1], a tool that detects vulnerabilities and creates abstract exploits by integrating an application model, a context model, and a set of vulnerability patterns. This set of vulnerability patterns can be defined to represent a wide array of vulnerabilities, allowing the tool to be indefinitely updated with each new CVE. The tool was evaluated on the GHERA benchmark, showing that at least 38% (out of a total of 60) of the vulnerabilities in the benchmark can be modelled and detected. The research underscores the importance of considering context in Android security testing and presents a viable and extendable solution for identifying vulnerabilities through MBST and DSLs.

    Download full text (pdf)
    fulltext
  • 39.
    Balachandran, Sarugan
    et al.
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Perez Legrand, Diego
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Evaluating machine learning models for time series forecasting in smart buildings, 2023. Independent thesis Basic level (university diploma), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Temperature regulation in buildings can be tricky and expensive. A common problem when heating buildings is that an unnecessary amount of energy is supplied. This waste of energy is often caused by a faulty regulation system. This thesis presents a machine learning approach, using time series data, to predict the energy supply needed to keep the inside temperature at around 21 degrees Celsius. The machine learning models LSTM, Ensemble LSTM, AT-LSTM, ARIMA, and XGBoost were used for this project. The validation showed that the ensemble LSTM model gave the most accurate predictions, with a Mean Absolute Error of 22486.79 Wh and a Symmetric Mean Absolute Percentage Error of 5.41%, and it was the model used for comparison with the current system. From the performance of the different models, the conclusion is that machine learning can be a useful tool for predicting the energy supply. On the other hand, other complex factors exist that need to be given more attention in order to evaluate the model in a better way.
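
    The two validation metrics quoted above can be computed as in the sketch below, using placeholder numbers rather than the thesis's data; note that SMAPE has several variants, and the definition shown here may differ in detail from the one used in the thesis.

```python
# MAE and SMAPE on placeholder numbers (not the thesis's measurements).
import numpy as np

def mae(actual, predicted):
    return np.mean(np.abs(actual - predicted))

def smape(actual, predicted):
    # A common SMAPE definition; variants exist.
    return 100.0 * np.mean(
        2.0 * np.abs(predicted - actual) / (np.abs(actual) + np.abs(predicted)))

actual = np.array([420_000.0, 395_000.0, 410_500.0])      # hypothetical Wh values
predicted = np.array([401_000.0, 388_500.0, 433_000.0])
print(f"MAE   = {mae(actual, predicted):.2f} Wh")
print(f"SMAPE = {smape(actual, predicted):.2f} %")
```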

    Download full text (pdf)
    fulltext
  • 40. Ben Dhaou, I.
    et al.
    Tenhunen, Hannu
    KTH, Superseded Departments (pre-2005), Electronic Systems Design.
    Efficient library characterization for high-level power estimation, 2004. In: IEEE Transactions on Very Large Scale Integration (VLSI) Systems, ISSN 1063-8210, E-ISSN 1557-9999, Vol. 12, no 6, p. 657-661. Article in journal (Refereed)
    Abstract [en]

    This paper describes LP-DSM, an algorithm used for efficient library characterization in high-level power estimation. LP-DSM characterizes the power consumption of building blocks using the entropy of primary inputs and primary outputs. The experimental results showed that, over a wide range of benchmark circuits implemented using full-custom design in a 0.35-μm, 3.3 V CMOS process, the statistical performance (mean and maximum error) of LP-DSM is comparable to, or sometimes better than, most of the published algorithms. Moreover, it was found that LP-DSM has the lowest prediction sum of squares, which makes it an efficient tool for power prediction. Furthermore, the complexity of LP-DSM is linear in the number of primary inputs, O(N_I), whereas state-of-the-art published library characterization algorithms have a complexity of O(N_I^2).
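
    The entropy quantity that this kind of characterization builds on can be estimated as in the sketch below; the bit streams are hypothetical and the sketch is not the LP-DSM algorithm itself, only the Shannon-entropy estimate it relies on.

```python
# Sketch: estimating the Shannon entropy of primary-input bit streams.
# Not the LP-DSM algorithm; only the entropy estimate it builds on.
import math

def bit_entropy(bits):
    """Entropy in bits of a binary signal, from its observed '1' probability."""
    p1 = sum(bits) / len(bits)
    if p1 in (0.0, 1.0):
        return 0.0
    p0 = 1.0 - p1
    return -(p0 * math.log2(p0) + p1 * math.log2(p1))

# Hypothetical sampled waveforms for two primary inputs.
input_a = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1]   # fairly random -> high entropy
input_b = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # mostly idle   -> low entropy
print(bit_entropy(input_a), bit_entropy(input_b))
```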

  • 41.
    Berg, Johan
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Mebrahtu Redi, Daniel
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Benchmarking the request throughput of conventional API calls and gRPC: A Comparative Study of REST and gRPC, 2023. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    As the demand for better and faster applications increases every year, so does the demand for new communication systems between computers. Today, a common method for computers and software systems to exchange information is the use of REST APIs, but there are cases where more efficient solutions are needed. In such cases, RPC can provide a solution. There are many RPC libraries to choose from, but gRPC is the most widely used today.

    gRPC is said to offer faster and more efficient communication than conventional web-based API calls. The problem investigated in this thesis is that there are few available resources demonstrating how this performance difference translates into request throughput on a server.

    The purpose of the study is to benchmark the difference in request throughput for conventional API calls (REST) and gRPC. This was done with the goal of providing a basis for making better decisions regarding the choice of communication infrastructure between applications. A qualitative research method with support of quantitative data was used to evaluate the results.

    REST and gRPC servers were implemented in three programming languages. A benchmarking client was implemented in order to benchmark the servers and measure request throughput. The benchmarks were conducted on a local network between two hosts.

    The results indicate that gRPC performs better than REST for larger message payloads in terms of request throughput. REST initially outperforms gRPC for small payloads but falls behind as the payload size increases. The result can be beneficial for software developers and other stakeholders who strive to make informed decisions regarding communication infrastructure when developing and maintaining applications at scale.
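
    The REST side of such a throughput benchmark can be sketched as below. The endpoint URL, payload size and request count are hypothetical, and the thesis's servers and gRPC client (which would use generated protocol-buffer stubs) are not reproduced.

```python
# Minimal sketch of the REST side of a request-throughput benchmark.
# The URL and payload size are hypothetical placeholders.
import time
import requests

URL = "http://localhost:8080/echo"      # hypothetical local REST endpoint
PAYLOAD = {"data": "x" * 1024}          # ~1 KiB message body
REQUESTS = 1000

session = requests.Session()            # reuse the TCP connection
start = time.perf_counter()
for _ in range(REQUESTS):
    response = session.post(URL, json=PAYLOAD)
    response.raise_for_status()
elapsed = time.perf_counter() - start
print(f"{REQUESTS / elapsed:.1f} requests/second over {elapsed:.2f} s")
```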

    Download full text (pdf)
    fulltext
  • 42.
    Berg, Linus
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Ståhl, Felix
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Mainframes and media streaming solutions: How to make mainframes great again, 2020. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Mainframes have been used for well over 50 years and are built to process demanding workloads fast, with the latest models using IBM’s z/Architecture processors. At the time of writing, mainframes are a central unit at the world’s largest corporations in banking, finance and health care, performing, for example, heavy loads of transaction processing. When IBM bought Red Hat and acquired the container orchestration platform OpenShift, the IBM lab in Poughkeepsie figured that a new opportunity for the mainframe might have opened: a media streaming server built with OpenShift, running on a mainframe. This is interesting because a media streaming solution built with OpenShift might perform better on a mainframe than on a traditional server. The initial question they proposed was ’Is it worth running streaming solutions on OpenShift on a mainframe?’. First, the solution has to be built and tested on a mainframe to confirm that such a solution actually works. Later, IBM will perform a benchmark to see if the solution is viable to sell. The authors’ method included finding the most suitable streaming software according to a set of criteria that had to be met. Nginx was the winner, being the only tested software that was open source, scalable, runnable in a container and supported adaptive streaming. With the software selected, configuration with Nginx, Docker and OpenShift resulted in a fully functional proof of concept. Unfortunately, due to the Covid-19 pandemic, the authors never got access to a mainframe, as promised, to test the solution; however, OpenShift is platform agnostic and should, theoretically, run on a mainframe. The authors built a base solution that can easily be expanded with functionality. The functionality left to be built by IBM engineers is described in the future work section and includes, for example, live streaming and mainframe benchmarking.

    Download full text (pdf)
    fulltext
  • 43.
    Bhuddi, Amita
    et al.
    KTH, School of Technology and Health (STH), Medical Engineering, Computer and Electronic Engineering.
    Somos, Oliver
    KTH, School of Technology and Health (STH), Medical Engineering, Computer and Electronic Engineering.
    Återställningsverktyg för fordon baserat på applikationsintegration, 2016. Independent thesis Basic level (professional degree), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    The software testing team at Scania uses a manual and time-consuming process to restore a test vehicle after working with it. Several different applications are used in this process to ensure the vehicle is in the same state as it was before testing. To improve the workflow with a reduced workload and a more robust process, the test team was interested in the development of a restoration application. It was desired to develop the restoration application by reusing existing components to the greatest possible extent. Since there were many components that fulfilled the needs of most functions, a pre-study of all the applications was done to decide which components could be reused. This study was based on the integration model Enterprise Application Integration (EAI), which aims to create a single product combining the applications used in an organization to simplify processes such as maintenance, data management and employee training. A prototype was developed that implements three existing modules on different levels and, in line with the goals of EAI, is itself a simple application that enables the components to work in unison.

    Download full text (pdf)
    fulltext
  • 44.
    Bilardi, Gianfranco
    et al.
    Univ Padua, Dept Informat Engn, I-35131 Padua, Italy..
    Scquizzato, Michele
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS. Univ Padua, Padua, Italy.
    Silvestri, Francesco
    Univ Padua, Dept Informat Engn, I-35131 Padua, Italy..
    A Lower Bound Technique for Communication in BSP, 2018. In: ACM Transactions on Parallel Computing, ISSN 2329-4949, Vol. 4, no 3, article id UNSP 14. Article in journal (Refereed)
    Abstract [en]

    Communication is a major factor determining the performance of algorithms on current computing systems; it is therefore valuable to provide tight lower bounds on the communication complexity of computations. This article presents a lower bound technique for the communication complexity in the bulk-synchronous parallel (BSP) model of a given class of DAG computations. The derived bound is expressed in terms of the switching potential of a DAG, that is, the number of permutations that the DAG can realize when viewed as a switching network. The proposed technique yields tight lower bounds for the fast Fourier transform (FFT), and for any sorting and permutation network. A stronger bound is also derived for the periodic balanced sorting network, by applying this technique to suitable subnetworks. Finally, we demonstrate that the switching potential captures communication requirements even in computational models different from BSP, such as the I/O model and the LPRAM.

  • 45.
    Blanco Paananen, Adrian
    et al.
    KTH, School of Computer Science and Communication (CSC).
    Storby, Johan
    KTH, School of Computer Science and Communication (CSC).
    Observing coevolution in simulated bacteria: Using asexual reproduction and simple direct mapped functions for decision-making, 2014. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    In this report we have presented the results of a program that performs simulations of artificial bacteria with the ability to evolve different characteristics and behaviours through genetic algorithms. Over time, unfit bacteria die out, and the more fit bacteria produce offspring with slightly mutated variants of their genetic code, resembling the evolutionary process. The simulation does not follow the traditional macro-scale, hand-picked sexual reproduction often used in genetic algorithms to produce optimal results; instead it uses individual asexual reproduction, which more closely resembles how bacteria reproduce in nature. Furthermore, we do not use traditional neural networks for decision-making, but simple functions that directly map the bacteria's inputs to their decisions.

    The purpose of this study was to observe whether bacteria with different initial starting populations would coevolve and specialize into heterogeneous populations. Furthermore, we have tried to analyze how the populations interact with each other and how changing the different parameters of the simulation would affect the populations. We performed three separate experiments that differ in their initial conditions: one with pre-created, heterogeneous herbivores and carnivores, one with homogeneous omnivores, and one with bacteria whose genetic values were decided at random. The result of our experiments was that we observed coevolution in the bacteria and that, despite very different initial starting conditions, they would grow towards stable heterogeneous populations with very few exceptions.
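
    The "simple direct mapped functions" idea can be illustrated with a toy sketch in which a bacterium's decision is a weighted, thresholded function of its sensory inputs and offspring inherit slightly mutated weights. The inputs, weights and mutation rate below are placeholders, not the simulation's actual genome.

```python
# Toy sketch of a direct input-to-decision mapping with asexual mutation.
# All numbers are placeholders, not the simulation's actual parameters.
import random

def decide(genes, inputs):
    """Map sensory inputs straight to a decision via weighted sums (no neural net)."""
    hunger, nearby_food, nearby_threat = inputs
    eat_drive  = genes["w_eat"]  * hunger - genes["w_fear"] * nearby_threat
    flee_drive = genes["w_fear"] * nearby_threat
    if flee_drive > eat_drive and flee_drive > genes["threshold"]:
        return "flee"
    if nearby_food and eat_drive > genes["threshold"]:
        return "eat"
    return "wander"

def reproduce(genes, mutation=0.05):
    """Asexual reproduction: copy the genome with small random mutations."""
    return {k: v + random.uniform(-mutation, mutation) for k, v in genes.items()}

parent = {"w_eat": 0.8, "w_fear": 0.6, "threshold": 0.3}
child = reproduce(parent)
print(decide(parent, (0.9, True, 0.1)), decide(child, (0.9, True, 0.1)))
```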

    Download full text (pdf)
    fulltext
  • 46.
    Blom, Marcus
    et al.
    KTH, School of Information and Communication Technology (ICT).
    Hammar, Kim
    KTH, School of Information and Communication Technology (ICT).
    Integrating Monitoring Systems - Pre-Study, 2016. Independent thesis Basic level (professional degree), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Failures in networks that reside in business environments cause harm to the organizations depending on them. Every minute of downtime is hurtful, and as a network administrator you want to minimize both the rate of failures and the time of inoperation. Therefore, an effective network monitoring system is of great interest to such organizations. This bachelor's thesis is the outcome of a pre-study performed on behalf of MIC Nordic and sought to advise them in the implementation of a new monitoring system.

    The aim of this study was to investigate how a Network Operation Center (NOC) can be implemented in an existing monitoring environment, integrating current monitoring systems into a central point for monitoring. This study takes a holistic approach to network management, and the research can be divided into two main categories: communication between network components, and presentation of information. Our process involves an analysis of the environment of MIC Nordic and an in-depth inquiry into the current state of network monitoring and interface design. The study then culminates in the implementation of a prototype. The prototype serves first and foremost as a research tool to collect experience and empirical evidence to increase the credibility of our conclusions. It is also an attempt at demonstrating the complete process behind developing a NOC, which we believe can fill a gap in the previous research in the field.

    From our results you can expect a prototype with functionality for monitoring network components and a graphical user interface (GUI) for displaying information. The results are designed towards solving the specific network management problem that was given and the environment that it concerns. This pre-study suggests that the best solution for implementing a NOC in the given environment is to use SNMP for communication. Based on an investigation of how to present network management information in an effective way, we propose following a user-centered approach and utilizing human perception theory in the design process. The authors recommend further research that maintains the holistic approach but applies more quantitative methods to broaden the scope.
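
    Since the pre-study recommends SNMP for communication, a minimal polling sketch is shown below using pysnmp's synchronous high-level API. The device address and community string are hypothetical, the OID is the standard sysUpTime object, and the exact API differs between pysnmp versions.

```python
# Minimal SNMP polling sketch using pysnmp's synchronous high-level API
# (pip install pysnmp). Device address and community are hypothetical,
# and the API differs slightly between pysnmp versions.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),                 # SNMPv2c
    UdpTransportTarget(("192.0.2.10", 161)),            # hypothetical device
    ContextData(),
    ObjectType(ObjectIdentity("1.3.6.1.2.1.1.3.0"))))   # sysUpTime.0

if error_indication or error_status:
    print("polling failed:", error_indication or error_status.prettyPrint())
else:
    for name, value in var_binds:
        print(f"{name.prettyPrint()} = {value.prettyPrint()}")
```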

    Download full text (pdf)
    fulltext
  • 47. Bonnichsen, L.
    et al.
    Podobas, Artur
    KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS.
    Using transactional memory to avoid blocking in OpenMP synchronization directives: Don’t wait, speculate!, 2015. In: 11th International Workshop on OpenMP, IWOMP 2015, Springer, 2015, p. 149-161. Conference paper (Refereed)
    Abstract [en]

    OpenMP applications with abundant parallelism are often characterized by their high-performance. Unfortunately, OpenMP applications with a lot of synchronization or serialization-points perform poorly because of blocking, i.e. the threads have to wait for each other. In this paper, we present methods based on hardware transactional memory (HTM) for executing OpenMP barrier, critical, and taskwait directives without blocking. Although HTM is still relatively new in the Intel and IBM architectures, we experimentally show a 73% performance improvement over traditional locking approaches, and 23% better than other HTM approaches on critical sections. Speculation over barriers can decrease execution time by up-to 41 %. We expect that future systems with HTM support and more cores will have a greater benefit from our approach as they are more likely to block.

  • 48.
    Brask, Anton
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Berendt, Filip
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Analyzing the scalability of R*-tree regarding the neuron touch detection task, 2020. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    A common task within research of neuronal morphology is neuron touch detection, that is finding the points in space where two neurites approach each other to form a synapse. In order to make efficient use of cache memory, it is important to store points that are close in space close in memory. One data structure that aims to tackle this complication is the R*-tree. In this thesis, a spatial query for touch detection was implemented and the scalability of the R*-tree was estimated on realistic neuron densities and extrapolated to explore execution times on larger volumes. It was found that touch detection on this data structure scaled much like the optimal algorithm in 3D-space and more specifically that the computing power needed to analyze a meaningful portion of the human cortex is not readily available.
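
    A proximity query of the kind used for touch detection can be sketched with the rtree package, whose underlying libspatialindex index defaults to an R*-tree variant. The points and the touch distance below are hypothetical, and the sketch does not reproduce the thesis's implementation or neuron data.

```python
# Sketch of a touch-detection style proximity query with the 'rtree'
# package (pip install rtree). Points and threshold are hypothetical.
from rtree import index

EPS = 1.0                                  # hypothetical "touch" distance

p = index.Property()
p.dimension = 3
idx = index.Index(properties=p)

# Hypothetical neurite sample points: id -> (x, y, z).
points = {0: (0.0, 0.0, 0.0), 1: (0.4, 0.3, 0.2), 2: (5.0, 5.0, 5.0)}
for pid, (x, y, z) in points.items():
    idx.insert(pid, (x, y, z, x, y, z))    # a point stored as a degenerate box

def touches(pid):
    """Ids of points whose bounding boxes fall within EPS of point 'pid'."""
    x, y, z = points[pid]
    query = (x - EPS, y - EPS, z - EPS, x + EPS, y + EPS, z + EPS)
    return [other for other in idx.intersection(query) if other != pid]

print(touches(0))   # -> [1]; point 2 is too far away
```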

    Download full text (pdf)
    fulltext
  • 49.
    Braun, Stefan
    KTH, School of Electrical Engineering (EES), Microsystem Technology.
    Wafer-level heterogeneous integration of MEMS actuators, 2010. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis presents methods for the wafer-level integration of shape memory alloy (SMA) and electrostatic actuators to functionalize MEMS devices. The integration methods are based on heterogeneous integration, which is the integration of different materials and technologies. Background information about the actuators and the integration method is provided.

    SMA microactuators offer the highest work density of all MEMS actuators, however, they are not yet a standard MEMS material, partially due to the lack of proper wafer-level integration methods. This thesis presents methods for the wafer-level heterogeneous integration of bulk SMA sheets and wires with silicon microstructures. First concepts and experiments are presented for integrating SMA actuators with knife gate microvalves, which are introduced in this thesis. These microvalves feature a gate moving out-of-plane to regulate a gas flow and first measurements indicate outstanding pneumatic performance in relation to the consumed silicon footprint area. This part of the work also includes a novel technique for the footprint and thickness independent selective release of Au-Si eutectically bonded microstructures based on localized electrochemical etching.

    Electrostatic actuators are presented to functionalize MEMS crossbar switches, which are intended for the automated reconfiguration of copper-wire telecommunication networks and must allow a number of input lines to be interconnected with a number of output lines in any desired combination. Following the concepts of heterogeneous integration, the device is divided into two parts which are fabricated separately and then assembled. One part contains an array of double-pole single-throw S-shaped actuator MEMS switches. The other part contains a signal line routing network which is interconnected by the switches after assembly of the two parts. The assembly is based on patterned adhesive wafer bonding and results in wafer-level encapsulation of the switch array. During operation, the switches in these arrays must be individually addressable. Instead of controlling each element with individual control lines, this thesis investigates a row/column addressing scheme to individually pull in or pull out single electrostatic actuators in the array with maximum operational reliability, determined by the statistical parameters of the pull-in and pull-out characteristics of the actuators.
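
    The row/column addressing idea can be illustrated with a toy sketch: an element actuates only when the sum of its row and column voltages exceeds its pull-in voltage, so a single switch can be selected while half-selected elements stay put. The voltage values are hypothetical, and the sketch ignores the pull-out hysteresis that the reliability analysis above also considers.

```python
# Toy sketch of row/column addressing of electrostatic switches.
# All voltage values are hypothetical placeholders; pull-out hysteresis
# and statistical spread are deliberately left out.
V_PULL_IN = 40.0                # hypothetical pull-in voltage of a switch
V_ROW, V_COL = 25.0, 25.0       # each alone is below pull-in; together above it

def actuated(row_selected: bool, col_selected: bool) -> bool:
    applied = (V_ROW if row_selected else 0.0) + (V_COL if col_selected else 0.0)
    return applied > V_PULL_IN

for r in range(3):
    print([actuated(r == 1, c == 2) for c in range(3)])
# Only the element at row 1, column 2 pulls in; half-selected elements
# see 25 V, which is below the 40 V pull-in threshold.
```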

    Download full text (pdf)
    FULLTEXT01
  • 50.
    Brejcha, Kevin
    KTH, School of Technology and Health (STH), Medical Engineering, Computer and Electronic Engineering.
    Prestandaanalys av HTTP/2, 2015. Independent thesis Basic level (university diploma), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Swedbank is one of Sweden’s biggest banks, with an estimated four million private customers, and they are constantly trying to improve their services so that they become more user-friendly and faster. To satisfy their customers’ need for fast and easy services, Swedbank wants to lower the loading times of its web services so that the user experience is faster and smoother, especially for users doing their banking on a smartphone. The mission is to do a performance analysis of the new HTTP protocol HTTP/2 and extract the most essential parts, so that Swedbank knows what to take advantage of when installing the new version on their servers to achieve optimal services. The results showed that after implementing HTTP/2’s new features, Swedbank’s website performance increased by 44% in total loading time. The tests were performed in a local experimental environment where the earlier HTTP versions were installed and the performance metrics were documented.
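
    A comparison of this kind can be sketched with the httpx library, which supports both HTTP/1.1 and HTTP/2 clients. The URL and request count below are placeholders, and the sketch is not the thesis's local experimental setup or its measurement methodology.

```python
# Sketch: timing the same set of requests over HTTP/1.1 and HTTP/2 with
# httpx (pip install 'httpx[http2]'). URL and counts are placeholders.
import time
import httpx

URL = "https://example.org/"        # placeholder; any HTTP/2-capable server
RESOURCES = 20                      # pretend the page references 20 assets

def timed_fetch(client):
    start = time.perf_counter()
    for _ in range(RESOURCES):
        response = client.get(URL)
        response.raise_for_status()
    return time.perf_counter() - start, response.http_version

with httpx.Client() as h1, httpx.Client(http2=True) as h2:
    for client in (h1, h2):
        elapsed, version = timed_fetch(client)
        print(f"{version}: {elapsed:.3f} s for {RESOURCES} requests")
```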

    Download full text (pdf)
    fulltext