KTH Publications (kth.se)
Results 51-100 of 117
  • Seseke, Linda Maria
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Disassembling the Problem: A Frame Creation-Based Strategy for Human-Centered Automation in ELV Recycling (2025). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    End-of-life vehicle recycling faces systemic obstacles: fragmented dismantling practices, economic pressures, and regulatory misalignments undermine circular material flows. Automation is often reduced to task substitution, yet its potential lies in serving as a coordination platform that integrates human expertise, technology, and institutional frameworks. This thesis applies Dorst's Frame Creation methodology, combining literature analysis with 27 expert interviews across dismantlers, recyclers, OEMs, suppliers, and regulators. The study identifies eight recurring themes, ranging from market barriers to information gaps, summarized in the overarching construct of “systemic coordination failure.” Building on this diagnosis, the thesis develops strategic frames that reconceptualize automation as boundary infrastructure. Rather than replacing labor, the integration of robotic systems and humans is positioned to align economic drivers, embodied work practices, and regulatory demands. The results demonstrate that scalable and sustainable vehicle disassembly requires automation embedded in broader coordination architectures, linking information, incentives, and human capabilities.

    Download full text (pdf)
    fulltext
  • Zhang, Fangpu
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Joint Sampling and Carrier Allocation for UAV–Satellite Video Transmission: Optimization and Reinforcement Learning Approaches (2025). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Unmanned Aerial Vehicles (UAVs) deployed for rescue and search missions in remote regions, where terrestrial networks are unavailable, must rely on satellite connectivity to continuously upload video data. However, such satellite links face two major challenges: limited bandwidth availability and high uplink energy cost, both of which directly impact video quality and UAV battery life. As a key technology in 4G and 5G, carrier aggregation (CA) provides efficient services by assigning a varying number of component carriers (CCs) to users. However, activating all CCs simultaneously leads to excessive power consumption, while restricting CC usage risks buffer overflow and degraded Quality of Service (QoS). To address these challenges, we propose a joint component carrier assignment (CCA) and adaptive video sampling framework in which UAVs dynamically select active CCs and adjust their video sampling rate according to hotspot density detected in the sensing environment. This ensures that high-resolution video is transmitted in critical scenarios while conserving bandwidth and energy otherwise. We formulate the joint carrier–sampling problem as a multi-objective optimization that maximizes system throughput while minimizing communication power, subject to rate and buffer stability constraints. To handle the problem’s dynamic and combinatorial nature, we first develop a Mixed-Integer Linear Program (MILP) to establish a theoretical upper bound. We then develop a multi-agent Proximal Policy Optimization (PPO)-based Joint Sampling and Carrier Allocation (JSCA) algorithm to enable scalable online control. In this framework, each UAV is modeled as an autonomous agent that jointly determines CC activation and video sampling level based on local observations. This design enables efficient online learning of transmission policies that adapt to dynamic mission environments. 
    Simulation results demonstrate that our proposed JSCA policy reduces UAV energy consumption by more than 45%, extends flight endurance by 6.3%, and achieves throughput within 10% of the MILP-based optimum, while maintaining significantly lower complexity.
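The joint carrier-activation and sampling trade-off described above can be illustrated with a toy brute-force version of the optimization. All rates, powers, and weights below are invented for illustration; the thesis's actual MILP and PPO formulations are far richer than this sketch.

```python
from itertools import combinations

def best_cc_set(cc_rates, cc_power, arrival_rate, lam):
    """Enumerate component-carrier subsets and pick the one that maximizes
    throughput - lam * power, subject to the buffer-stability constraint
    (total service rate must cover the arrival rate)."""
    best, best_util = None, float("-inf")
    ccs = range(len(cc_rates))
    for k in range(1, len(cc_rates) + 1):
        for subset in combinations(ccs, k):
            throughput = sum(cc_rates[i] for i in subset)
            power = sum(cc_power[i] for i in subset)
            if throughput < arrival_rate:   # buffer would grow without bound
                continue
            util = throughput - lam * power
            if util > best_util:
                best, best_util = subset, util
    return best, best_util

# Toy instance: 3 CCs with hypothetical rates (Mbps) and powers (W).
# CC 2 is deliberately power-inefficient, so partial activation wins.
subset, util = best_cc_set([10.0, 8.0, 2.0], [2.0, 1.5, 3.0],
                           arrival_rate=9.0, lam=1.0)
print(subset, util)
```

Brute force is exponential in the number of carriers, which is exactly why the thesis turns to an MILP bound and a learned PPO policy for online control.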

    Download full text (pdf)
    fulltext
  • Källström, Ivar
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Learning to Schedule with Graph Neural Networks and Reinforcement Learning: Investigation of Embeddings and Performance (2025). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The increasing number of cores in commercial processors and the rising demand for high performance computing have made the problem of scheduling tasks in parallel increasingly relevant. Previous methods have focused on classic heuristics and linear programming, achieving varying levels of performance for the problem. The success of machine learning based methods for combinatorial optimization suggests an entirely learning based method might be utilized. This thesis investigates the possibility of using an end-to-end machine learning method, with a graph neural network based architecture and reinforcement learning training, to produce schedules for the problem of scheduling tasks in parallel with precedence constraints. We focus on a simple task model where precedence between tasks is encoded as a directed acyclic graph, and each subtask has an associated runtime. The results indicate that the method has learned structural properties of the graph, achieving performance comparable to classic methods for small test cases. However, a gap remains for larger test cases, with our method struggling to achieve parity with classic methods. An investigation of our graph representation module reveals that the produced node embeddings for tasks are highly explainable. Finally, we suggest further investigation of hybrid methods, more complex task models, and experiments with greater computational resources.
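A representative classic heuristic for the task model in this abstract (a DAG of tasks with runtimes, scheduled on identical cores) is greedy list scheduling. The sketch below is a generic baseline of the kind the thesis compares against, not the thesis's learned method; the diamond-shaped example DAG is invented.

```python
def list_schedule(runtimes, preds, n_cores):
    """Greedy list scheduling: process tasks in topological order and place
    each on the core where it can start earliest, respecting precedence
    (a task may start only after all its predecessors finish)."""
    n = len(runtimes)
    # Kahn-style topological bookkeeping.
    indeg = [0] * n
    succs = [[] for _ in range(n)]
    for t, ps in enumerate(preds):
        for p in ps:
            indeg[t] += 1
            succs[p].append(t)
    ready = [t for t in range(n) if indeg[t] == 0]
    core_free = [0.0] * n_cores   # time each core becomes idle
    finish = [0.0] * n
    while ready:
        t = ready.pop(0)
        earliest = max((finish[p] for p in preds[t]), default=0.0)
        c = min(range(n_cores), key=lambda c: max(core_free[c], earliest))
        start = max(core_free[c], earliest)
        finish[t] = start + runtimes[t]
        core_free[c] = finish[t]
        for s in succs[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return max(finish)  # makespan

# Diamond DAG: task 0 -> {1, 2} -> 3, scheduled on 2 cores.
makespan = list_schedule([2.0, 3.0, 1.0, 2.0], [[], [0], [0], [1, 2]], 2)
print(makespan)
```

The learned scheduler in the thesis is evaluated against exactly this kind of polynomial-time heuristic baseline.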

    Download full text (pdf)
    fulltext
  • Dexwik, Carolina
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Centering women: Designing for AI-assisted breast-cancer screening outside clinics (2025). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This study asks how an online, AI-supported educational platform about breast cancer screening can help women in Sweden make informed decisions outside clinics. A sequential mixed-methods design combined an online survey (n = 64) with three co-design workshops (n = 6). The survey revealed gaps in breast-density knowledge, self-examination practice, and AI literacy, alongside conditional trust in both healthcare and AI tools. Workshop participants rejected numerical representations of AI-generated risk scores, calling them alienating and anxiety-provoking, and they stressed that high-risk results should not be delivered digitally but should instead come from a clinician who can add empathy, context, and immediate support. Discussions also exposed a deeper gap: women lack personalised guidance to interpret everyday breast changes, leaving them unsure when to seek care. From these findings, the thesis proposes three design guidelines: (1) Contextualise numbers within lived experience, (2) Respect human moments by routing emotionally charged results to humans, and (3) Educate to advocate through interactive, body-literate content, extending Human-Centred AI principles to preventive and proactive women's health.

    Download full text (pdf)
    fulltext
  • Wendel, Erik
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Improving Glucose Prediction: Using Wearable Device Data in Machine Learning Systems (2025). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Managing blood glucose levels is a significant daily challenge for individuals with Type 1 Diabetes (T1D). Predicting blood glucose fluctuations using continuous glucose monitoring (CGM) data is essential for reducing complications. The data from these CGM devices are becoming more easily available, creating better opportunities to train machine learning models to predict glucose fluctuations and help patients manage their diabetes. This thesis investigates the potential for fully autonomous blood glucose prediction models without manual inputs. By analyzing how current machine learning models could benefit from incorporating additional contextual data, such as heart rate (HR) and time of day, collected from wearable devices, the study aims to give better insights into what can be done to improve performance in everyday scenarios. Three machine learning models were evaluated: a baseline LSTM model using CGM data, a similarly structured LSTM model that used heart rate in addition to CGM data, and a new CNN-LSTM using heart rate. The models were tested using data collected during everyday activities, and their performance was measured using Root Mean Squared Error (RMSE) and Clarke Error Grid (CEG) analysis to assess clinical relevance. The results indicated that the new CNN-LSTM model achieved higher prediction accuracy than both the baseline and the LSTM with added HR. Across everyday scenarios, the new model showed consistently lower RMSE values across all subjects without significantly increasing in size, thus remaining small enough for use on mobile devices. Some results were inconclusive when testing the models on data collected from specific activities recorded months later, indicating the need for further research on how well the models generalize over extended periods and whether another training approach could help maintain accuracy. The study also shows that data availability plays a significant role in model performance, far outweighing the benefits of contextual data and model choices.
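The RMSE metric used to compare the models above is simple to state. A minimal version follows; the forecast and CGM values are invented for illustration.

```python
def rmse(predicted, actual):
    """Root Mean Squared Error: pointwise accuracy of a glucose forecast
    (e.g. mg/dL) against the observed CGM readings."""
    assert len(predicted) == len(actual)
    return (sum((p - a) ** 2 for p, a in zip(predicted, actual))
            / len(actual)) ** 0.5

# Toy short-horizon forecast vs. observed CGM values.
pred = [110.0, 118.0, 127.0, 140.0]
obs  = [112.0, 115.0, 130.0, 138.0]
print(rmse(pred, obs))
```

RMSE alone does not capture clinical risk, which is why the thesis pairs it with Clarke Error Grid analysis.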

    Download full text (pdf)
    fulltext
  • Kitsidis, Christos
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Between Flesh and Circuit: Tracing the Mechanics of Intimacy in a Human-Machine Embrace during an Artistic-Scientific Performance (2025). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Digital technologies are increasingly embedded in our bodies, routines, and emotions, shaping intimate relationships between humans and machines. While Human-Computer Interaction (HCI) research has explored intimacy primarily in care, healthcare, and efficiency-driven contexts, less attention has been given to aesthetic, performative, and nonfunctional encounters with technology. This thesis adopts a performance-led research approach, drawing on soma design and proxemics, to analyze a single case from Embrace Angels, an artistic-scientific performance/installation in which humans and robotic arms engage in a choreographed, multi-agent embrace. The analysis reveals how intentional slowness, negotiated meaning, and embodied awareness can foster deeper, more reflective forms of human-machine intimacy. By unpacking the mechanics of this encounter, the research sheds light on the deeper, more ethical impact of human-machine relationships and proposes new directions for designing meaningful, affective connections between humans and robots beyond efficiency-driven paradigms.

    Download full text (pdf)
    fulltext
  • Ledung, Markus
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Toward the automated migration of power grid control systems: an evaluation of ETL tools and a custom-tailored data migration pipeline (2025). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis explores the possibilities of automating display and signal data migration in power grid systems using ETL tools and artificial intelligence. The core problem addressed is the time-consuming and error-prone manual process of migrating data from a legacy or third-party Network Control System (NCS) to Hitachi Energy’s Network Manager (NM) platform. The contribution of this thesis involves evaluating Pentaho Data Integration (PDI) and NCNC (Network Control Nerve Center) ETL tools, and assessing a computer vision-based AI model for replicating engineering displays from Single Line Diagrams (SLDs). The final results demonstrate that PDI is capable of migrating crucial parts of the database, and that AI-driven engineering display generation can effectively automate tasks that were previously performed manually. Combining both tools in one pipeline would allow future projects to reduce migration time, improve data integrity, and enhance customer satisfaction through more efficient deployment processes.

    Download full text (pdf)
    fulltext
  • Hyberg, Jonatan
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Important Training Data Factors When Finetuning Retrieval Augmented Generators (2025). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Retrieval-Augmented Generation (RAG) systems have emerged as a powerful tool for question answering and text generation. What makes RAGs so powerful is that they integrate retrieval mechanisms with generative models. However, these RAG systems still often suffer from hallucinations: instances where the generated output includes information unsupported by the retrieved context. One strategy used to limit hallucinations in RAGs is to fine-tune the generative models on new data. However, fine-tuning is not always a stable method, as imperfections in the training data can negatively affect the model. This thesis investigates how underlying factors in the training data affect hallucinations when fine-tuning RAG systems. The thesis specifically examines the impact of three factors: no context, noise in context, and incomplete answers in training data. By studying these three factors, we aim to enhance the reliability and trustworthiness of RAG systems in real-world applications. To achieve this, we conducted an experiment in which we fine-tuned a generative model on each factor and compared the trained models to the baseline fine-tuned model. We evaluated the models using a comprehensive framework that assessed the factors on three aspects: factuality, faithfulness to the context, and the ability to refuse to answer impossible questions. Impossible questions are defined as questions where the retrieved context has no overlap with the question, making it impossible for the RAG to answer the question given the context. Our evaluation revealed that having no contexts in the training data had a negative impact on both factualness and faithfulness when compared to a model trained with contexts. This shows that fine-tuning can be used to train a RAG system to utilize the context better and curb hallucinations. Furthermore, we found that noise in context and incomplete answers in the training data had little to no effect on the RAG system's factualness and faithfulness, suggesting that fine-tuning is relatively robust to noise in the training data. We also found that fine-tuning overall had a severe negative impact on the RAG system's ability to decline to answer an impossible question: after fine-tuning, the baseline model correctly declined to answer only 0.4% of the time, compared to the original untrained model, which was correct 99.3% of the time. Adding incomplete answers to the training data eliminated this negative effect; the model trained on incomplete answers correctly declined to answer 99.6% of the time, a large improvement from 0.4%, demonstrating that incomplete answers are important datapoints to add to the training set.
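The refusal-rate aspect of the evaluation can be sketched as a simple count of correct declinations on impossible questions. The marker string and example outputs below are invented; the thesis's actual evaluation framework is more sophisticated than substring matching.

```python
def refusal_rate(answers, refusal_marker="I cannot answer"):
    """Fraction of impossible questions (where the retrieved context has no
    overlap with the question) for which the model correctly declined."""
    declined = sum(1 for a in answers if refusal_marker.lower() in a.lower())
    return declined / len(answers)

# Hypothetical model outputs on 5 impossible questions: 3 correct refusals,
# 2 hallucinated answers.
outputs = [
    "I cannot answer based on the provided context.",
    "The capital is Paris.",
    "I cannot answer this question.",
    "I cannot answer; the context does not mention it.",
    "It was founded in 1852.",
]
print(refusal_rate(outputs))
```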

    Download full text (pdf)
    fulltext
  • Edvardsson, Oskar
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Preference-based generation of custom-length round trip routes for running (2025). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    An estimated 50 million people in Europe engage in running. Most of them usually stick to a few typical routes when they go out for a run. However, greater variance in training routes can have a positive effect on both motivation and injury prevention. In this thesis, we look at the problem of generating running routes of a given length based on the runner's preferences, including, for example, surface and elevation. We prove that this problem is NP-hard, and hence propose two approximation algorithms and one heuristic algorithm to solve it. Experiments were carried out on graphs representing different kinds of real-life street networks in Sweden. The results showed that none of the algorithms outperformed the others in all aspects. Each algorithm had its pros and cons, and they seemed to complement one another. Hence, the overall conclusion of this thesis is that for practical use a combination of the algorithms is preferred. This could be achieved by choosing which algorithm to use based on the given problem instance.
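The underlying problem can be stated as finding a cycle from a start node whose total length is close to a target while maximizing edge preference scores. The exhaustive search below (tractable only on tiny graphs, which is precisely why the thesis needs approximation and heuristic algorithms) uses an invented four-node street graph; it is not one of the thesis's algorithms.

```python
def best_round_trip(adj, start, target_len, tol):
    """Enumerate simple cycles from `start`; among cycles whose length is
    within `tol` of `target_len`, return the one with the highest total
    preference score. Edges are (neighbor, length, preference_score)."""
    best_route, best_score = None, float("-inf")

    def dfs(node, route, length, score):
        nonlocal best_route, best_score
        if length > target_len + tol:
            return  # prune: already too long
        for nbr, elen, pref in adj[node]:
            # Close the cycle (disallow trivial out-and-back on one edge).
            if nbr == start and len(route) >= 3:
                total = length + elen
                if abs(total - target_len) <= tol and score + pref > best_score:
                    best_route, best_score = route + [start], score + pref
            elif nbr not in route:
                dfs(nbr, route + [nbr], length + elen, score + pref)

    dfs(start, [start], 0.0, 0.0)
    return best_route, best_score

# Toy square street network: edge = (neighbor, length_km, preference).
adj = {
    0: [(1, 2.0, 1.0), (3, 2.0, 0.5)],
    1: [(0, 2.0, 1.0), (2, 2.0, 2.0)],
    2: [(1, 2.0, 2.0), (3, 2.0, 1.5)],
    3: [(2, 2.0, 1.5), (0, 2.0, 0.5)],
}
route, score = best_round_trip(adj, 0, target_len=8.0, tol=0.5)
print(route, score)
```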

    Download full text (pdf)
    fulltext
  • Park, Eugene
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Optimizing State Machine Replication with Quorum-Coordinated Disk Writes (2025). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Disk I/O is a common bottleneck in consensus algorithms like Raft, which typically require a replica to persist log entries before acknowledgment. While this ensures durability, theory only mandates that a quorum of replicas perform this operation to maintain correctness.

This thesis introduces quorum-coordinated disk writes, a technique that reduces redundant I/O by coordinating persistence across a quorum rather than all replicas. This thesis project implements this approach in CockroachDB, a distributed SQL database built on Raft, and evaluates potential performance benefits under write-heavy workloads, possible drawbacks during recovery, and long-term efficiency.

    Our results show that quorum-based persistence can improve throughput and latency without compromising system stability. Although recovery becomes more complex, the overall trade-offs suggest quorum coordination is a promising strategy for optimizing consensus protocols in performance-sensitive environments.
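The latency benefit described above can be illustrated with a toy model: under quorum-coordinated persistence an entry is durable once the quorum-th fastest replica has fsynced, whereas full persistence waits for the slowest. The replica latencies below are invented.

```python
def commit_latency(fsync_ms, quorum):
    """Time until an entry is durably committed when only a quorum must
    persist it: the quorum-th smallest fsync completion time. Full
    persistence would instead wait for max(fsync_ms)."""
    return sorted(fsync_ms)[quorum - 1]

# 5 replicas with heterogeneous hypothetical disk latencies (ms); quorum = 3.
lat = [4.0, 12.0, 3.0, 25.0, 6.0]
print(commit_latency(lat, 3), max(lat))
```

With one slow disk in the group, the quorum waits 6 ms instead of 25 ms, which is the intuition behind the throughput and latency gains, at the cost of the more complex recovery the abstract notes.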

    Download full text (pdf)
    fulltext
  • Söderberg, Eric
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Ethical Hacking of a Public Transport Mobile Ticket Service (2025). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Public transport plays an important role in sustainable development globally, and is often considered critical infrastructure. Utilized by billions of people in an increasingly digital world, many providers have also started to offer tickets for purchase directly on mobile phones. Despite all this, there is currently little available research on cybersecurity in the public transport sector. To address this, in this case study we explored a number of potential cybersecurity vulnerabilities in a mobile ticketing application through white-box penetration testing, following a threat modeling of the targeted system. In addition to presenting the discovered security concerns, mitigations and theoretical impact were discussed, as well as general security properties of the system. The study is the first of its kind to be conducted in collaboration with a public transport provider, in this case Trafikförvaltningen, which through Stockholms Lokaltrafik (SL) provides all public transport in Stockholm, Sweden. A total of seven vulnerabilities with a measurable impact and six other security discoveries were found during the study. The most severe vulnerabilities all related to impairing the availability of the service for other users. The study found that a number of commonly reported vulnerability categories in public transport providers, such as issuing fake tickets or extracting personal information, were precluded due to a strictly server-side public-key cryptography implementation and the storage of almost no personal information, respectively. The results presented in this study provide novel data to the currently scant cybersecurity research on public transport ticketing services. The findings help harden existing solutions and provide guidance for developers and other stakeholders of public transport providers.

    Download full text (pdf)
    fulltext
  • Hallkvist, Hampus
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Exploring DMA benefits over bounce-buffering in the context of the TDISP building block SPDM: A study of the maturity of a new confidential computing protocol and its practical adoption (2025). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Confidential computing has gained significant traction since 2020. The goal of providing strong security on third-party hardware has been achieved in terms of both integrity and confidentiality, allowing a broader class of users, including those with stringent security requirements, to adopt cloud services. However, these security guarantees have been mostly limited to CPU processing via implementations such as AMD's Secure Encrypted Virtualization Secure Nested Paging (SEV-SNP) and Intel's Trust Domain Extensions (TDX). Existing peripheral interaction methods incur extra I/O operations through bounce buffers. This technique adds an intermediate memory space that all data must be copied in and out of, as mutual trust is not established. It is shown to be slow, although if applied correctly through existing methods such as AMD SEV-SNP, it can be safe. Through joint efforts, the TEE Device Interface Security Protocol (TDISP) has emerged as a standard to allow cohesive, secure, and fast communication for any PCI Express (PCIe) connected peripheral device. While TDISP is still in active development, emulated implementations have started to emerge, among them one in the QEMU virtualization and emulation suite. This thesis examines the performance impact of a partial subset of TDISP, specifically the transport protocol Security Protocol and Data Model (SPDM) and its joint overhead with Direct Memory Access (DMA) compared to traditional bounce buffers. The study finds that the new protocol remains immature and impractical, yet shows great potential. As a result, DMA outperforms bounce buffers, but with a greater amortization cost than anticipated, making it suitable for large datasets with large block sizes or long-running connections.
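The amortization argument above, an expensive one-time secure-session setup for the protected DMA path versus a per-byte copy penalty for bounce buffers, can be caricatured with a toy cost model. All latency numbers are invented and do not come from the thesis's measurements.

```python
def transfer_time_ms(blocks, block_kb, per_kb_ms, setup_ms, copy_per_kb_ms=0.0):
    """Toy cost model: a one-time setup cost plus a per-kilobyte transfer
    cost; bounce-buffering adds an extra copy cost on every kilobyte."""
    data_kb = blocks * block_kb
    return setup_ms + data_kb * (per_kb_ms + copy_per_kb_ms)

# Hypothetical numbers: the DMA path pays a large one-time session setup
# (think SPDM handshake), while bounce buffering pays an extra memcpy per KB.
dma    = lambda n: transfer_time_ms(n, 64, 0.010, setup_ms=50.0)
bounce = lambda n: transfer_time_ms(n, 64, 0.010, setup_ms=1.0,
                                    copy_per_kb_ms=0.004)
print(dma(10) > bounce(10), dma(1000) < bounce(1000))
```

In this model bounce buffers win for small transfers and DMA wins once the dataset is large enough to amortize the setup, mirroring the abstract's conclusion about large block sizes and long-running connections.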

    Download full text (pdf)
    fulltext
  • Guðmundsson, Oliver
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Parameter Optimization of PVD Tungsten Interconnects for CMOS Using Magnetron Sputtering with Focus on Film Stress (2025). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Copper is widely used as the interconnect material in advanced CMOS technology due to its low resistivity. However, its applicability is limited in high-temperature environments, since copper exhibits poor resistance to electromigration and can diffuse into surrounding dielectrics. Tungsten, although having approximately twice the resistivity of copper, offers superior high-temperature stability, strong resistance to electromigration, and negligible diffusion through dielectrics. For this reason, tungsten is an attractive candidate for Complementary Metal-Oxide-Semiconductor (CMOS) technologies that must operate reliably above 250 °C. However, using tungsten as an interconnect introduces its own challenges, the main issue being the residual stress it introduces. This thesis investigates the deposition of tungsten interconnects via the Physical Vapor Deposition (PVD) technique for high-temperature CMOS applications, focusing on film stress and resistivity. The experiment was divided into two phases. The first phase focused on systematically varying the deposition parameters to understand their influence on film stress and sheet resistance. In addition to the deposition parameters, stack composition and film thickness were varied to explore their effects on film properties. Three stack compositions were investigated: Si/W, Si/SiO2/W, and Si/SiO2/TiW/W. The second phase focused on implementing a linear regression model based on the results from the first phase, and performing a second deposition round using deposition parameters that were estimated to provide a near-stress-free film. Results showed that the chamber pressure has the largest impact on the stress in the film, with low pressure producing high compressive stress and high pressure producing tensile stress. These findings contribute to a better understanding of stress management in tungsten interconnects, providing valuable insights into optimizing tungsten deposition.
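The second-phase workflow, fitting a regression to phase-one measurements and solving for the parameter value that predicts zero stress, can be sketched with a one-variable least-squares fit. The pressure and stress values below are invented for illustration and follow only the qualitative trend the abstract reports (compressive at low pressure, tensile at high pressure).

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# Hypothetical chamber pressures (arbitrary units) vs. measured film
# stress (MPa): negative = compressive, positive = tensile.
pressure = [2.0, 4.0, 6.0, 8.0]
stress   = [-900.0, -450.0, 50.0, 500.0]
a, b = fit_line(pressure, stress)
zero_stress_pressure = -b / a  # pressure predicted to give a stress-free film
print(round(zero_stress_pressure, 2))
```

The thesis's actual model regresses on several deposition parameters at once; this single-variable version just shows the zero-crossing idea.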

    Download full text (pdf)
    fulltext
  • Blackwell, Niall
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Can You Hear The Music?: How music congruency and speech masking impacts sales and overall consumer experiences in an Irish pub setting (2025). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Pubs are often at the centre of their communities, places where people go to meet, celebrate, and enjoy themselves outside of their working life. The pub experience is influenced by many factors, including the company a person is with, the staff, the food and beverages, and a number of external factors such as sights, smells, sounds, and the general atmosphere. This study examines what impact music congruency, incongruency, the absence of music, and speech masking in an Irish bar may have on sales and the overall customer experience. The study set up the four conditions of congruent music, incongruent music, speech masking, and no music in a real-life pub setting over the course of four separate Thursdays. Participants were asked to score their experience on a 5-point Likert scale based on whether they noticed the music, whether they felt it was a good fit, and whether it impacted their perception of the atmosphere and their stay in the pub. Participants were able to notice whether the music was a good fit on the nights of the congruent music, but there were some interesting responses on the no-music night, where 50% of respondents said that they heard music and that it was a good fit for the pub. None of the four music conditions played any role in how respondents perceived the atmosphere, nor did they have any influence on their stay over the course of the experiment, so the results suggest that there may be many other factors that impact the social and emotional behaviours of customers on their nights out.

    Download full text (pdf)
    fulltext
  • Sävås, Jonas
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Code Coverage for Java Dependencies (2025). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    As software reuse continues to grow in prevalence in modern software development, external code is often integrated to efficiently implement required functionality. In the Java ecosystem, this practice is accelerated by repositories like Maven Central and build tools that automate the integration of external software packages. However, these outsourced packages, or dependencies, often include more functionality than necessary to support various use cases within their domain. The resulting unused code is a potential source of increased maintenance overhead and elevated security risks. Despite this, to our knowledge, no standalone tool currently evaluates the extent of dependency usage in Java projects.

    This thesis presents JACT, a tool to measure dependency usage in Java by leveraging code coverage to report usage of both the project and dependency code. This is achieved in two main steps. First, the project is built using Maven to produce an executable that contains both the project and dependency code. Second, the executable, together with the test suite's execution trace, enables the creation of the code coverage report, where JACT maps the coverage to dependencies and presents a structured overview of their usage.

    We evaluate JACT on 30 open-source Java projects to analyze dependency usage and assess its accuracy in mapping coverage information to dependencies. A comparison with the dependency debloating tool DepTrim provides insights into the strengths and limitations of code coverage in uncovering dependency usage. The results indicate that the dependencies are generally underutilized, with coverage increasing as alignment with project goals improves, while broader dependency feature sets lead to lower coverage. JACT accurately maps coverage to dependencies when Java package names are unique, but identical package names across dependencies introduce slight inaccuracies. Although JACT only captures coverage of executable code, it identifies additional used dependency class files compared to DepTrim, offering insights that could enhance precision in future debloating efforts in the Java ecosystem.
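The attribution step the abstract alludes to, mapping per-class coverage to dependencies by package name (and going wrong when dependencies share package names), can be sketched as follows. All dependency names, packages, and coverage numbers are invented; JACT itself works on build artifacts and coverage execution data rather than this toy structure.

```python
def coverage_by_dependency(class_coverage, dep_packages):
    """Attribute per-class line coverage to dependencies by matching each
    class's package against a dependency's declared package prefixes, then
    report a coverage ratio per dependency."""
    report = {dep: [0, 0] for dep in dep_packages}  # [covered, total] lines
    for cls, (covered, total) in class_coverage.items():
        pkg = cls.rsplit(".", 1)[0]
        for dep, pkgs in dep_packages.items():
            if any(pkg == p or pkg.startswith(p + ".") for p in pkgs):
                report[dep][0] += covered
                report[dep][1] += total
    return {dep: c / t if t else 0.0 for dep, (c, t) in report.items()}

# Hypothetical dependencies and per-class (covered, total) line counts.
deps = {"guava": ["com.google.common"], "gson": ["com.google.gson"]}
cov = {
    "com.google.common.collect.Lists": (30, 100),
    "com.google.common.base.Strings": (10, 50),
    "com.google.gson.Gson": (45, 60),
}
report = coverage_by_dependency(cov, deps)
print(report)
```

If two dependencies declared the same package, a class would be counted toward both, which illustrates the "slight inaccuracies" from identical package names that the evaluation reports.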

    Download full text (pdf)
    fulltext
  • Hansson, Oscar
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Self supervised learning for semantic image segmentation in railway systems: A comparison of segmentation model pre-training methods (2025). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In recent years, self-supervised learning (SSL) has emerged as a powerful alternative to supervised pre-training in deep learning, particularly in domains with limited labeled data and abundant unlabeled data. This thesis investigates the application of Self-Supervised Learning (SSL) for improving semantic segmentation of concrete railway sleepers, an important step toward improving railway infrastructure maintenance. The work is conducted in collaboration with Trafikverket and explores whether domain-specific SSL pre-training can outperform or complement traditional transfer learning approaches based on ImageNet. Two SSL methods are evaluated: Simple Contrastive Learning framework (SimCLR), which uses contrastive learning to create useful data representations, and Masked Autoencoders (MAE), which learn global semantic representations by reconstructing masked image patches. These encoders are pre-trained on large volumes of unlabeled sleeper images provided by Trafikverket, and the effectiveness of these models is evaluated through linear evaluation and segmentation performance on a downstream crack segmentation task using a UNet architecture. Results show that both SSL methods produce encoder weights that yield comparable performance to ImageNet-pretrained models, with Masked Auto-Encoder (MAE) achieving the highest performance of the two in terms of Intersection over Union (IoU) and recall. The findings demonstrate that SSL offers a viable path toward reducing the dependency on labeled data while improving domain adaptation in real-world segmentation tasks.
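The Intersection over Union (IoU) metric used to compare the pre-training methods is straightforward to state. A minimal version for flat binary masks follows; the example masks are invented.

```python
def iou(pred, truth):
    """Intersection over Union for binary segmentation masks given as flat
    0/1 lists: |pred AND truth| / |pred OR truth|."""
    inter = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    union = sum(1 for p, t in zip(pred, truth) if p == 1 or t == 1)
    return inter / union if union else 1.0

# Toy predicted vs. ground-truth crack masks (flattened).
pred  = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 0, 0, 1, 1, 0]
print(iou(pred, truth))
```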

    Download full text (pdf)
    fulltext
  • Fang, Yutong
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Enhancing Generative User Interfaces with LLMs: A User-driven Iterative Refinement Process2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Generative user interfaces can offer a personalized experience by adapting content and layout to individual preferences in real time. Recent advancements in large language models (LLMs) have demonstrated significant capabilities for dynamic and real-time user interface (UI) generation based on natural language prompts. However, existing solutions have primarily focused on user interface code generation for developers using large language models, while their practical usability and personalization capabilities for non-technical end users remain underexplored. This study investigates how users interact with a UI personalization system driven by OpenAI’s GPT-4.1-nano model, integrated into a custom-built Android application, AdaptFit. This research aims to understand the user experience, the effectiveness of user involvement, the ease of user interface personalization, and the challenges users face in this UI personalization process. This study combines both quantitative and qualitative methods, including questionnaires, usability testing with six participants, and semi-structured interviews. Thematic analysis was applied to better understand user experiences, and user iteration behaviors were recorded to examine user satisfaction, prompt specificity, and outcome quality. Results show that users found the concept of UI personalization intriguing and engaging, but the performance of the system was inconsistent, often limited by vague prompts, LLM hallucination, and fixed system parsing structures. Specifically, well-articulated, detailed prompts yielded better outcomes, which shows the importance of prompt quality in LLM-driven design. This thesis offers insights into the design of LLM-powered UI personalization systems. Future work could explore better integration between generated outputs and UI frameworks to enhance real-world deployment.

    Download full text (pdf)
    fulltext
  • Eiderbäck, Jesper
    KTH, School of Engineering Sciences (SCI), Applied Physics.
    Super Resolution Live Cell Imaging on Electron Microscopy Grids for Correlation with Cryogenic Electron Tomography2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Correlated light and electron microscopy (CLEM) combines the molecular specificity of fluorescence microscopy (FM) with the single-digit nanometer scale structural information of electron microscopy (EM). To reduce the large resolution gap between the modalities, which causes ambiguities in the correlation, CLEM workflows increasingly incorporate super resolution (SR) FM. Traditionally, SR-FM in CLEM is performed under cryogenic conditions to ensure sample stability during correlation. However, performing SR-FM under cryogenic conditions imposes limitations on illumination intensity and alters fluorophore photophysics. This thesis investigates whether the SR-FM part of the CLEM workflow can be performed at room temperature using U2OS cells expressing vimentin–rsEGFP2 on EM grids. RESOLFT and STED imaging were performed prior to vitrification. RESOLFT imaging on grids was feasible with standard optical intensities, although it yielded lower resolution than on glass coverslips and was affected by several grid-related challenges. In contrast, STED led to fluorophore bleaching, likely due to reflections and heat accumulation in the metallic grid caused by the high intensity of the depletion laser, making STED incompatible with the EM grids under the tested conditions. After vitrification, cryogenic fluorescence microscopy (cryoFLM) and electron tomography (cryoET) confirmed that cells on EM grids previously imaged at room temperature could be localized, demonstrating that room-temperature RESOLFT is compatible with cryoET. However, further work is required to establish whether SR imaging at room temperature can be reliably correlated with cryoET.

    Download full text (pdf)
    fulltext
  • Khoraman, Sina
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Cybersecurity Awareness Training Through Attack Simulations and Behavioral Insights: A Qualitative Study with Theory of Planned Behavior in a Custom Cyber Attack Simulation Learning Platform2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This study explores the effectiveness of simulation-based cybersecurity awareness training through the lens of behavioral psychology. Using the Theory of Planned Behavior (TPB), it investigates how individuals perceive and identify cybersecurity risks before and after completing a custom-designed cyberattack simulation platform. A qualitative methodology, including interviews and scenario-based assessments, was employed to analyze users’ attitudes, self-perceptions, and behaviors in response to various cybersecurity threats. Findings reveal a persistent gap between cybersecurity knowledge and real-world actions, often driven by convenience, privacy fatalism, and usability concerns. However, the simulation experience enhanced participants’ awareness of specific threats, especially phishing and password-based attacks. The results suggest that while simulations alone may not fully change behaviors, they serve as effective tools for increasing risk perception and should be integrated with tailored interventions for maximum impact.

    Download full text (pdf)
    fulltext
  • Faisol Haq, Muhammad
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Optimizing Hybrid Energy System Operations Based on Battery Decreasing Capacity Characteristics for Isolated Microgrid Islands in Indonesia2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Hybrid energy systems combining diesel generators, photovoltaic (PV) panels, and battery energy storage are increasingly deployed in remote microgrid settings, particularly on isolated islands where grid extension is unfeasible. While these systems offer improved reliability and reduced fuel dependence, battery degradation remains a critical challenge. Premature battery wear not only increases operational costs but also jeopardizes long-term energy access and system sustainability—especially in regions with limited infrastructure and replacement capacity. This thesis addresses the problem of optimizing energy dispatch in hybrid microgrids while explicitly accounting for battery degradation. The novelty lies in the integration of a cycle-based degradation cost model into a Mixed-Integer Linear Programming (MILP) framework, enabling the system to minimize fuel and maintenance costs without compromising battery lifespan. Although energy dispatch optimization is well-studied, few models incorporate battery aging in a computationally efficient manner, making this a timely and meaningful research topic. The complexity of balancing short-term diesel savings against long-term battery health presents a suitable challenge at the Master’s thesis level. The proposed model uses a piecewise linear approximation of degradation costs based on depth-of-discharge (DoD) breakpoints, ensuring compatibility with MILP solvers. Real-world data from an Indonesian island microgrid was used to evaluate two operational strategies: an optimized, degradation-aware dispatch and an aggressive usage strategy. Results show that while both reduce diesel use, the optimized approach significantly preserves battery health, avoiding early degradation while achieving comparable fuel savings and lower CO2 emissions.
This research contributes a scalable and practical tool for energy planners and microgrid operators, offering actionable insights into how smart dispatch strategies can extend battery lifespan and reduce long-term costs. The model enables informed operational decisions that balance economic and technical constraints—something that could not be done effectively without integrating battery aging behavior. As a result, it supports more sustainable and resilient energy systems for underserved regions.
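    As a hedged sketch of the piecewise linear degradation model described above (the breakpoints and cost values below are invented for illustration, not taken from the thesis), a depth-of-discharge cost curve of the kind a MILP solver can linearize could be evaluated like this:

```python
import numpy as np

# Hypothetical depth-of-discharge (DoD) breakpoints and per-cycle
# degradation costs (currency units). A MILP would encode the curve with
# SOS2 or lambda variables; here we simply evaluate the piecewise
# linear function those variables would represent.
DOD_BREAKPOINTS = np.array([0.0, 0.2, 0.5, 0.8, 1.0])
COST_AT_BREAKPOINT = np.array([0.0, 0.5, 2.0, 6.0, 12.0])

def cycle_degradation_cost(dod: float) -> float:
    """Degradation cost of one cycle at the given depth of discharge,
    by linear interpolation between the breakpoints."""
    return float(np.interp(dod, DOD_BREAKPOINTS, COST_AT_BREAKPOINT))
```

The convexity of the breakpoint costs (deeper discharges cost disproportionately more) is what lets the optimizer trade diesel savings against battery wear.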

    Download full text (pdf)
    fulltext
  • Sourmpati, Konstantina Maria
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Reimagining Intimate Data in Femtech through Speculative Feminist Design: The case of ‘Noei’2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Noei is a speculative data physicalisation companion that uses colored light, subtle motion, texture and projection to support interpretive engagement with bodily and affective experiences across the menstrual cycle. The project provokes discussion about menstrual tracking by challenging assumptions of quantification, prediction, and surveillance. Using a Research-through-Design method, I conducted an autoethnographic study to critique app-based tracking and generate design materials, created a speculative prototype with a narrative booklet to convey the experience, and held a design critique with HCI experts and practitioners. The interactions rely on bounded ambiguity, with reflective responses being open for interpretation yet grounded in a co-created vocabulary and a user-initiated sequence. Contributions include a multisensory prototype adhering to an interaction ritual that prioritises consentful activation, local short-lived traces, and graduated visibility alongside a speculative narrative that opens alternative imaginaries. This thesis reframes menstrual technology as witnessing rather than optimising and proposes a dual ecology in which reflective companions like Noei complement certainty-oriented tools.

    Download full text (pdf)
    fulltext
  • Li, Peiheng
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Empowering Automotive Workshop Processing with Advancements in Natural Language Processing2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis optimizes two key tasks in truck workshop environments: symptom-component diagnosis and generating workshop instructions. Symptom diagnosis is formulated as a text classification task, associating symptom descriptions with faulty components, while instruction generation is framed as a question-answering task requiring extensive domain knowledge. Through rigorous experimentation, multiple NLP methodologies, including discriminative classifiers and Large Language Models (LLMs), are evaluated. Initially, a discriminative BERT-based model demonstrated robust symptom classification performance. Subsequent testing revealed that pure LLM-based methods struggled due to limited statistical insight and hallucinations. To overcome these issues, a structured ReAct Agent combining discriminative models and LLMs was developed, enhancing diagnosis accuracy. Instruction generation was evaluated using synthetic datasets and real-world expert-validated scenarios, with hierarchical Retrieval-Augmented Generation (RAG) systems performing optimally, closely followed by the ReAct Agent. The naive LLM baseline emphasized the necessity of domain-specific knowledge for accurate instruction creation. Our LLM-driven ReAct Agent achieves diagnostic accuracy comparable to top discriminative models, significantly improving precision, recall, and F1 metrics, and fully resolving label hallucination issues. For instruction generation, the ReAct Agent’s performance matches the best RAG models, surpassing simpler LLM baselines. In conclusion, this research demonstrates the complementary strengths and limitations of discriminative and generative NLP models, highlighting the integrated ReAct Agent as a practical, effective solution for boosting workshop technician productivity and operational efficiency.

    Download full text (pdf)
    fulltext
  • Balannagari, Yamini
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Vulnerability Research of Mobile Applications Commonly Used in Sweden: What are the most effective static analysis, dynamic analysis, and reverse engineering methodologies for identifying and evaluating security vulnerabilities in mobile applications?2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This project investigates the security posture of widely used mobile applications, focusing on key challenges related to user data protection, application integrity, and the secure management of communication channels. By evaluating applications across different sectors, the study identifies critical vulnerabilities, including insecure data storage practices, weak cryptographic implementations, improper session management, and insufficient network security configurations. The analysis reveals the substantial risks associated with sensitive information exposure, unauthorized data access, and session hijacking, all of which could compromise user trust and organizational reputation. Through systematic threat modeling and prioritization of findings, the research demonstrates that adopting globally recognized security standards, such as the OWASP Mobile Security Testing Guide (MSTG) and the Mobile Application Security Verification Standard (MASVS), can significantly enhance the security resilience, reliability, and privacy assurances of mobile applications. The study concludes by highlighting the importance of continuous security evaluations, proactive vulnerability remediation, and secure-by-design development methodologies. It recommends future enhancements, including the automation of vulnerability detection processes, the strengthening of encryption standards, and the advancement of user-centric security frameworks to further elevate the overall security and sustainability of the mobile application ecosystem. Finally, a comparative analysis across all evaluated applications was conducted to quantify and contrast their security posture based on OWASP and MASVS compliance.

    Download full text (pdf)
    fulltext
  • Wu, Nicole
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Machine Learning for Integrated Communication and Sensing in Cell-Free Networks2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis investigates the use of machine learning, specifically deep neural networks (DNNs), to enhance integrated sensing and communication (ISAC) in cell-free massive MIMO networks. The main goals are to improve data throughput, sensing accuracy, and computational efficiency. A DNN-based approach is proposed for access point (AP) selection and resource allocation, tested through simulations. Results show significant improvements, with sensing accuracy reaching an average distance error of 3.05 meters and an angle error of 0.05 degrees, compared to 3.88 meters and 0.11 degrees with random AP selection. Communication performance also increases, achieving a downlink rate of 13.59 bits/s/Hz versus 12.42 bits/s/Hz for the baseline. However, computational delay rises, indicating a trade-off for future optimization. This work advances the development of intelligent wireless networks, with applications in autonomous systems and smart cities.

    Download full text (pdf)
    fulltext
  • Yadava, Prashant
    KTH, School of Electrical Engineering and Computer Science (EECS).
    AI-Powered Hybrid Legal Reasoning: Combining Graph and Structured Databases: Multi-Pipeline Query Execution for Legal Knowledge Retrieval2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    As artificial intelligence becomes increasingly integrated across industries, its ineffective or superficial application can lead not only to underwhelming performance but also to critical errors that undermine trust in automation. This risk is especially acute in domains with strict accuracy and reliability demands, such as the legal sector, where misinformation or incomplete analysis can have serious consequences. In precedent-based legal systems, efficient and accurate information retrieval is further complicated by the need to understand and leverage complex relationships among cases, statutes, and doctrines. Existing AI-powered research tools, like conventional Retrieval-Augmented Generation (RAG) systems, primarily rely on vector similarity search, often missing the intricate relational structures vital for legal analysis. Graph-based RAG systems, though using schema-less graph databases like Neo4j, typically rely on rigid, predefined sets of entity and relationship types in their extraction and query logic, limiting adaptability to evolving legal documents. This inability to dynamically capture and utilize both semantic and relational information restricts the effectiveness of automated legal research, a significant practical and academic challenge that remains unresolved. This thesis presents a novel Dynamic Entity-Aware Graph- and Vector-Enhanced Retrieval-Augmented Generation system, advancing legal information retrieval through a Large Language Model-driven, adaptive query processing architecture. The system features a dual-database engine: PostgreSQL with pgvector for high-performance semantic search and Neo4j for flexible, relationship-centric graph traversal.
    Key innovations include: (i) real-time semantic query decomposition without reliance on predefined patterns, (ii) intelligent entity matching with automatic generation of legal terminology variants, (iii) schema-aware graph query construction that dynamically adapts to evolving legal structures, and (iv) hybrid retrieval that combines graph-based entity identification with vector-based context enrichment, preserving relational integrity while capturing semantic nuance. Evaluation on complex legal queries, spanning case arguments, statutory relationships, and multi-entity reasoning, demonstrates that the proposed system significantly outperforms both conventional and existing graph-based RAG approaches in retrieval accuracy and transparency. The results show that legal professionals can now access more precise, interpretable, and contextually rich information, supporting advanced legal analysis that was previously infeasible with static or single-modality systems. This research thus represents a paradigm shift from static to adaptive legal research systems, enabling more effective, reliable, and interpretable access to critical legal knowledge and offering a foundation for future advancements in AI-driven legal practice.

    Download full text (pdf)
    fulltext
  • Zhang, Kecheng
    KTH, School of Electrical Engineering and Computer Science (EECS).
    A Neuromorphic Solver for the Edge User Allocation Problem with Bayesian Confidence Propagation Neural Network: A Dynamic Heuristic Generator for External Unit Excitation2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Edge computing pushes computation from remote clouds to resource-constrained servers close to end-users. Determining which users should be connected to which edge servers—the Edge User Allocation (EUA) problem—is NP-hard and becomes intractable for large instances when solved by conventional mixed-integer programming. While many approximate methods have been proposed, neuromorphic solutions are particularly appealing due to their potential for high-speed and energy-efficient hardware implementation. These approaches leverage parallel, stochastic dynamics and local competition to explore combinatorial spaces and naturally enforce exclusivity constraints. In this thesis, we present a neuromorphic formulation based on the Bayesian Confidence Propagation Neural Network (BCPNN), in which each user is represented by a winner-take-all module composed of neurons corresponding to possible user-server assignments, including a dedicated unit for the “no allocation” option. Instead of encoding all constraints in a static energy function, we introduce a dynamic bias generator that steers the network with three heuristics: (i) a load-bias curve that favours near-full servers while penalizing overutilization and underutilization, (ii) a size heuristic that prioritizes smaller-demand users and larger-capacity servers, and (iii) a cosine-similarity term that matches a user’s demand vector to the current residual capacity of each server, enforcing dual-resource feasibility online. A single global parameter controls the trade-off between the number of active servers and the number of served users; scanning a small grid of values enables exploration of different levels of user-server trade-off. Experiments on a 30-instance synthetically generated benchmark, each accompanied by an optimal Gurobi solution, show that the proposed BCPNN-EUA solver finds feasible allocations whose score lies within an average of 12.6% of the optimum while converging in a few hundred simulation steps.
Because the algorithm relies on local updates and event-driven communication, it is well suited to low-power neuromorphic hardware, offering a scalable and energy-efficient alternative for real-time edge-resource management.
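    The cosine-similarity heuristic described above, matching a user's demand vector to a server's residual capacity, can be sketched as follows; the function name and the hard infeasibility convention are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

def demand_fit(demand: np.ndarray, residual: np.ndarray) -> float:
    """Cosine similarity between a user's resource-demand vector and a
    server's residual capacity: higher means the demand 'shape' matches
    what the server has left across both resources."""
    if np.any(demand > residual):
        return -1.0  # illustrative marker: allocation would be infeasible
    denom = np.linalg.norm(demand) * np.linalg.norm(residual)
    return float(demand @ residual / denom) if denom > 0 else 0.0
```

A bias generator could feed scores like this into each winner-take-all module so that units for well-matched, feasible servers are excited and infeasible ones suppressed.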

    Download full text (pdf)
    fulltext
  • Brilon, Malin
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Designing Human-Machine Interfaces for Maritime Engine Control Rooms: A User-Centered Co-Design Approach to Reduce Cognitive Load and Enhance Usability2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis explores how user-centered human-machine interfaces (HMIs) can support maritime engine control room (ECR) operators in managing cognitive load and improving usability. Current ECR HMIs are often fragmented, non-standardized, and cognitively demanding, posing safety and operational risks. In collaboration with the German Aerospace Center (DLR), this project addresses the challenge of designing modular, ergonomic, and context-sensitive HMIs that reflect user needs and operational realities. The research follows a user-centered design approach and includes two main phases: expert interviews and a co-design workshop. Five maritime professionals with extensive seagoing and instructional experience were interviewed to identify usability challenges and interface needs. Their insights informed a scenario-based co-design workshop in an ECR simulator with six engineering participants, who collaboratively developed interface concepts and low-fidelity prototypes for a blackout scenario. Thematic analysis of the qualitative data revealed six key themes: embodied knowledge, automation challenges, visual clarity, information overload, lack of standardization, and communication barriers in multinational teams. Based on these findings, the thesis proposes seven design principles for ECR HMIs, emphasizing multisensory awareness, visual clarity, transparent automation, operational resilience, standardization, contextual prioritization, and communication support. In addition, the study presents UI mockups and multimodal design solutions derived from the workshop and interviews. This work contributes to the limited HCI research in ECR contexts and provides practical design guidance for future maritime HMIs. It highlights the value of co-design and user-in-the-loop methods in developing safety-critical systems and underlines the need for maritime-specific interface standards to enhance operational efficiency and crew well-being.

    Download full text (pdf)
    fulltext
  • Skoglund, Robert
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Conditional Football Player Image Generation: Generating pose-consistent images of football players leveraging pose skeletons and depth-maps2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis presents a study on generating photorealistic, pose-consistent images of football players to address the challenge of building accurate pose detection models that often struggle with infrequent or rare poses due to limited labeled data. The primary goal was to develop and evaluate image generation models capable of producing synthetic player images consistent with specified poses, resembling TRACAB’s data. The research utilizes an adapter-style framework employing ControlNet to guide powerful pretrained base models, specifically Stable Diffusion (SD) and Stable Diffusion XL (SDXL). Image generation was conditioned using 2D human pose skeletons and depth-maps, both individually and simultaneously (Multi-ControlNet). Performance was quantitatively assessed using visual fidelity metrics (FID, CMMD) and pose similarity metrics (AP, CAP). The results show that depth-conditioning consistently enhanced visual fidelity over pose-only conditioning, leading to improvements in FID and CMMD. Although the Pose-SD model achieved the highest pose consistency metrics (AP and CAP), pose-only models performed poorly on rare or difficult poses. Scaling the base model to SDXL further improved fidelity and strengthened semantic adaptation, but this incurred a higher computational cost, resulting in increased inference latency (up to 6.3s) compared to single-modality SD models (3s). The findings conclude that depth conditioning is the most reliable modality for generating photorealistic football player images, while SDXL architectures provide stronger generalization. This capability is valuable for creating diverse training datasets to improve downstream pose detection models.

    Download full text (pdf)
    fulltext
  • Eriksson, Matias
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Collision avoidance of MCTSVO path planner for omnidirectional AMRs in multi-agent warehouses2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Autonomous mobile robots (AMRs) are increasingly common in modern warehouses, where large quantities of goods have to be stored and processed efficiently; they transport goods between points of interest and need to navigate safely and quickly. A defining feature of AMRs is their ability to autonomously compute a path toward the goal and to react to changes in their environment, such as finding a new path if their original path becomes obstructed. This requires the AMRs to have robust navigational systems capable of local and global path planning. This study analyzes the path planning algorithm MCTSVO with respect to its collision avoidance properties in warehouse environments. MCTSVO is a preliminary approach designed to handle both local and global path planning. This is an uncommon approach, since these two aspects of path planning are typically handled separately; MCTSVO instead combines Monte Carlo Tree Search (MCTS) with the Velocity Obstacle (VO) algorithm. Unity3D is used to simulate omnidirectional AMRs to evaluate MCTSVO’s collision avoidance in warehouse environments compared to MCTS and the standard collision avoidance algorithms VO, RVO, and ORCA from the velocity obstacle paradigm. This is done using 6 Movement Performance Scenarios derived from warehouse requirements and 7 Computational Scaling Scenarios. The scenarios are run multiple times for every algorithm to create average results with confidence intervals that ensure drawn conclusions are statistically significant. The evaluation is derived from state-of-the-art and established benchmark frameworks to measure local planning performance with the following metrics: Unique Collisions, Minimum Distance to Closest Object, Percentage of Time Spent in Dangerous Areas, Path- and Velocity-Smoothness, Flow Time, Successful Runs, and Average- and 95th-Percentile Computation Time.
The findings from this study show that MCTSVO, in the tested scenarios, offers a clear improvement in movement performance over MCTS while not outperforming the standard algorithms. For safety, its performance sits in the middle of the standard algorithms, while for time efficiency it falls short of each of them. These movement performance results are also accompanied by computational requirements several orders of magnitude higher than those of the standard algorithms. The study highlights the limitations of MCTSVO to help shape potential future research for the most critical areas of the algorithm.
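    As an illustrative sketch of the velocity obstacle idea shared by VO, RVO, ORCA, and MCTSVO (not the thesis's code), the basic test of whether a relative velocity points into a circular obstacle's collision cone can be written as:

```python
import math

def in_velocity_obstacle(rel_pos, rel_vel, combined_radius):
    """True if the relative velocity points into the collision cone of a
    circular obstacle at rel_pos with the given combined radius. This is
    the core feasibility test of velocity-obstacle planners: velocities
    inside the cone lead to a collision if neither agent changes course."""
    dist = math.hypot(*rel_pos)
    if dist <= combined_radius:
        return True  # already overlapping
    speed = math.hypot(*rel_vel)
    if speed == 0.0:
        return False  # standing still never enters the cone
    # Angle between the relative velocity and the line of sight.
    dot = rel_pos[0] * rel_vel[0] + rel_pos[1] * rel_vel[1]
    angle = math.acos(max(-1.0, min(1.0, dot / (dist * speed))))
    # The cone's half-angle is asin(r / d).
    return angle < math.asin(combined_radius / dist)
```

A planner samples or optimizes over candidate velocities and keeps only those outside the cones of all nearby agents; MCTS layered on top then scores candidate velocities against the global route.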

    Download full text (pdf)
    fulltext
  • Silfving, Marcus
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Liu, Donglin
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Evaluation of OCR Engines for Logistics Documents: Accuracy, Preprocessing and Cloud Integration: A Comparative Study Using Tesseract, Google Vision and AWS Textract2025Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    This thesis explores the performance of OCR systems on scanned documents, with a focus on comparing traditional and cloud-based engines. The problem addressed is the inconsistency in recognition accuracy across different OCR tools when applied to real-world low-quality images such as internal corporate documents and regulatory correspondence, which presents a challenge in digital archiving. The project investigates whether preprocessing techniques, such as grayscale conversion, contrast enhancement, thresholding, and morphological operations, can improve recognition quality. Three OCR engines were tested: Tesseract (open-source), Google Cloud Vision, and Amazon Textract. Each engine was evaluated on a dataset of 20 labeled images using character error rate (CER) as the primary metric. A custom Python pipeline was developed to automate batch processing of images, apply preprocessing, execute OCR, and log results for analysis. Both original and preprocessed versions of each image were tested. Preprocessing improved results in most cases but worsened them in others, especially for cloud-based models, suggesting these services already include internal preprocessing. Tesseract showed significant variation in CER, often benefiting solely from thresholding, while cloud-based engines performed more consistently. The findings indicate that naive preprocessing has limited benefit unless tailored per engine. Results also highlight the superior baseline performance of Google Cloud Vision in most cases. Future work could explore machine learning-based enhancement or layout-aware OCR for complex document types. The resulting framework supports fast benchmarking of OCR systems and can serve in future research or industry evaluation of document digitization tools.
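    As a hedged illustration (not the thesis's pipeline), the character error rate used as the primary metric is the Levenshtein edit distance between the OCR output and the ground-truth text, normalised by the reference length:

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: edit distance (insertions, deletions,
    substitutions) from hypothesis to reference, divided by the
    reference length."""
    m, n = len(reference), len(hypothesis)
    # Classic dynamic-programming edit distance, one row at a time.
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost)
        prev = curr
    return prev[n] / m if m else 0.0
```

Note that CER can exceed 1.0 when the OCR output is much longer than the reference, which is why some benchmarks clip or report it alongside word-level metrics.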

    Download full text (pdf)
    fulltext
  • Public defence: 2026-01-26 10:00 Room FA32, AlbaNova, Stockholm
    Wistemar, Oscar
    KTH, School of Engineering Sciences (SCI), Physics, Particle Physics, Astrophysics and Medical Imaging.
    Photospheric emission from gamma-ray bursts altered by radiation-mediated shocks2026Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis explores gamma-ray bursts (GRBs), and more specifically the prompt emission phase, which is the first ∼10 seconds of gamma-ray emission. GRBs come from the launching of a relativistic jet in connection with core-collapse supernovae or compact object (neutron star or black hole) mergers. The relativistic jet accelerates and eventually most of the energy is kinetic; that energy is then somehow converted into internal energy that is emitted in gamma-rays, which is what we observe. The mechanism responsible for this conversion in the prompt phase is not fully understood, and this thesis deals with one possible such mechanism, radiation-mediated shocks (RMSs). Such shocks occurring below the photosphere alter the photon spectral energy distribution, which is then released at the photosphere towards an observer. An analogue model of RMSs, called the Kompaneets RMS approximation (KRA), is discussed and later applied in Papers I and II. In Paper I we generalize a method to measure the bulk outflow Lorentz factor based on the properties of photospheric emission and the evolution of the photon energy distribution in the jet. We find that depending on the quality of the data, either a value or an upper limit can be found for the Lorentz factor. In Paper II we perform a time-resolved spectral analysis of GRB 211211A, a GRB with a broad spectrum containing two breaks, one in the tens of keV and one around a few MeV. Using the method presented in Paper I we find typical Lorentz factor values of ∼300. From the Lorentz factors and the KRA model we find the time evolution of the RMS parameters, here a strong shock occurring at moderate optical depths. We also show that the KRA model can fit these broad spectra with two breaks very well.

    Download full text (pdf)
    thesis_text
  • Public defence: 2026-02-03 14:00 https://kth-se.zoom.us/j/63739777936, Stockholm
    Joshi, Sushen
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Space and Plasma Physics.
    Insights into Uranus’ atmosphere from HST FUV observations and radiative transfer modelling (2026). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Uranus is one of the most extreme worlds in the Solar System. Its large axial tilt of 98° and orbital period of 84 years lead to unique seasons. It has been visited only by the Voyager 2 spacecraft and remains one of the most poorly understood planets in the Solar System. Uranus’ atmosphere is primarily composed of atomic and molecular hydrogen (H and H2, respectively), helium (He), and methane (CH4). One of the strongest emission lines from the Sun in the ultraviolet is Lyman alpha (Lyα, 1215.67 Å). It is efficiently scattered by H and H2, and absorbed by hydrocarbons (mostly CH4) in planetary atmospheres. This makes remote sensing observations at Lyα and associated wavelengths an excellent tool for studying giant planets’ upper atmospheres. At giant planets, the upper atmosphere plays a key role in various processes such as photochemistry, interaction with the plasma environment and possibly the solar wind, magnetosphere-ionosphere coupling, atmospheric escape, and interaction with ring particles. In this thesis, we analysed Hubble Space Telescope (HST) observations of Uranus obtained at Lyα and 1280 Å wavelengths, and performed radiative transfer simulations considering resonant scattering by H, Rayleigh-Raman scattering by H2, and absorption by CH4. The results and insights into Uranus’ neutral upper atmosphere gained from this work are presented in a series of papers.

    Our analyses of the first spatially resolved images of Uranus’ Lyα emissions, obtained in 1998 and 2011, revealed an extended exosphere of gravitationally bound hot H. The abundance of this hot H varied with time and could not be explained by production mechanisms involving solar UV radiation alone, pointing to additional energetic processes (Paper I). Further, we analysed Uranus’ Raman-scattered Lyα emissions at 1280 Å, unique among the Solar System giant planets. Using the observed brightness of these emissions, we constrained the vertical distribution of methane in Uranus’ upper atmosphere, providing key inputs for photochemical modelling (Paper II). Our 2024 HST observations revealed a significant increase in exospheric hot H abundance compared to 1998 and 2011, indicating an increase in the energetic processes creating this hot H. We also found a persistent azimuthal variation in the exospheric Lyα emissions. Thus, we provide tentative evidence of the role of energetic particles in the Uranian magnetosphere in producing the hot H observed in the exosphere (Paper III).

    Download full text (pdf)
    Sushen_Joshi_PhD_Thesis
  • Havenvid, Malena Ingemansson
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Real Estate and Construction Management, Construction and Facilities Management.
    Linné, Åse
    Uppsala University.
    Organisera för digital transformation - ett ekosystemperspektiv: En rapport om samhällsbyggnadssektorns omställningsförmåga (2025). Report (Other (popular science, discussion, etc.))
    Abstract [sv]

    This report examines the built environment sector's capacity for digital transformation. Digital transformation here refers to a change process in which digital technology is used not only to make existing operations more efficient, but also to fundamentally reshape business models, ways of working, and relationships. The report builds on the earlier study by Löwstedt and Sundquist (2022), which analysed future scenarios for digitalization in the sector. The focus of this report is instead on the resources and strategies that currently shape the sector's actual capacity to transform. The analysis is based on interviews and workshops with actors from both established and boundary-crossing ecosystems, complemented with insights from the manufacturing industry.

    The results show that many actors are making progress by digitalizing internal processes, but that strong specialization, entrenched actor roles, and habitual ways of relating to one another often hinder the rethinking of business models and patterns of collaboration. To strengthen transformation capacity, digitalization therefore needs to be linked to strategic goals and positioning in interplay with others, rather than pursued in isolation, as well as to joint learning and value creation. The study takes as its starting point three actor categories from the earlier study (Samhällsbyggarna, Digitaliserarna, and Game changers), which in different ways shape and are shaped by the sector's digital transition. Inspirational examples from established actors that have succeeded in transforming show how industry knowledge, together with new perspectives inspired by other industries and combined with digital technology, can act as a strong driver of transformation, or how previously operational functions can be elevated to strategic management issues. This shows that transformation is fundamentally about challenging entrenched structures, either by acting from new perspectives or by re-evaluating which issues are given strategic importance.

    The analysis draws on a framework from the strategy literature in which the sector's transformation capacity is understood as three mirrored capabilities: sensing / being sensed (discovering new opportunities and being recognized as a relevant actor); seizing / being seized (capitalizing on opportunities in collaboration and being invited into others' initiatives); and reconfiguring / being reconfigured (reshaping roles and resources while simultaneously being affected by others' change efforts). This shows that transformation is a mutual process shaped in relationships. The report presents a Transformation Compass (Omställningskompass) that makes visible how organizations can move between different strategic positions, from deepening their presence in existing markets to broadening their role or developing more general digital capabilities. The concluding chapters provide practical recommendations for both individual actors and the sector as a whole.

    A central conclusion is that the next step is not about more sector-level initiatives, but about connecting existing efforts and arenas so that new and established actors can meet and understand each other's capabilities and driving forces. In that meeting lies the potential to unlock innovation and build the transformation capacity required for a long-term sustainable digital transition.

    Download full text (pdf)
    fulltext
  • Lagerros, Baltzar
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Creating visualizations to aid decision making in multidisciplinary prostatectomy conferences (2025). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Digitizing healthcare has been shown to offer higher quality care at lower cost. However, many parts of Swedish healthcare still rely on pen and paper where a digital solution would offer many advantages. One such example is multidisciplinary prostatectomy conferences, in which complex surgical procedures are decided upon based on a multitude of data written on a sheet of paper. Each participant in the conference needs to build a mental model of the prostate and its tumor anatomy.

    In this project I created two prototype digital visualizations, along with an accompanying form. Together, these help participants build a mental model, thereby aiding the decision process during these conferences.

    I used Cambio's form builder, which is based on openEHR archetypes and templates. The visualizations were built as web components using HTML, CSS and JavaScript, combining multiple image components in a coordinate system. The use of web components was successful; however, there is no standardized API for communicating form data to and from web components, and this absence limited their benefits, such as reusability and encapsulation.

    The form and the accompanying visualizations were integrated into an existing clinical user environment. My prototypes proved that digitization of this visualization challenge was possible using existing technologies. The project was considered a successful proof-of-concept. This work has the potential to lay the foundation for a future fully digitized prostatectomy visualization.

    Download full text (pdf)
    fulltext
  • Joshi, Sushen
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Space and Plasma Physics.
    Roth, Lorenz
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Space and Plasma Physics.
    Gladstone, Randy
    Southwest Research Institute, San Antonio, TX, USA.
    Ivchenko, Nickolay
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Space and Plasma Physics.
    Lamy, Laurent
    LIRA, Observatoire de Paris, Université PSL, Sorbonne Université, Université Paris Cité, Cergy Paris Université, CNRS, Meudon, France, Aix Marseille Université, CNRS, CNES, LAM, Marseille, France.
    Melin, Henrik
    Department of Mathematics, Physics, and Electrical Engineering, Northumbria University, Newcastle upon Tyne, UK.
    Strobel, Darrell
    Department of Earth and Planetary Sciences, The Johns Hopkins University, Baltimore, MD, USA.
    Pryor, Wayne
    Central Arizona College, Coolidge, AZ, USA.
    Uranus’ atmosphere near the northern solstice as seen from HST far-ultraviolet observations. Manuscript (preprint) (Other academic)
    Download full text (pdf)
    Manuscript
  • Public defence: 2026-02-06 09:30 F3, Stockholm
    Terra, Ahmad
    KTH, School of Industrial Engineering and Management (ITM), Engineering Design, Mechatronics and Embedded Control Systems.
    Explainable Artificial Intelligence for Telecommunications (2026). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Artificial Intelligence (AI) is a key driver of technological development in many industrial sectors. It is being embedded into many components of telecommunications networks to optimize their functionality in various ways. AI technologies are advancing rapidly, with increasingly sophisticated techniques being introduced. Therefore, understanding how an AI model operates and arrives at its output is crucial to ensure the integrity of the overall system. One way to achieve this is by applying Explainable Artificial Intelligence (XAI) techniques to generate information about the operation of an AI model. This thesis develops and evaluates XAI techniques to improve the transparency of AI models.

    In supervised learning, several XAI methods that compute feature importance were applied to identify the root causes of network operation issues. Their characteristics were compared and analyzed at local, cohort, and global scopes. However, the resulting attributive explanations do not provide actionable insight for resolving the underlying issue. Therefore, another type of explanation, the counterfactual, was explored: it indicates the changes necessary to obtain a different result. Counterfactual explanations were used to prevent potential issues, such as Service Level Agreement (SLA) violations, from occurring. This method was shown to significantly reduce SLA violations in an emulated network, but it requires converting explanations into actions.
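The thesis's network models and data are not part of this listing; as a generic, minimal sketch of the counterfactual idea described above, the toy SLA classifier, feature names, step size, and thresholds below are all invented for illustration:

```python
def classify(features):
    # Invented toy SLA model: predict a violation when load is high
    # and the capacity margin is low.
    return features["load"] > 0.8 and features["margin"] < 0.2

def counterfactual(features, step=0.05, max_iter=100):
    """Greedily raise the capacity margin until the prediction flips;
    the difference from the original input is the explanation."""
    cf = dict(features)
    for _ in range(max_iter):
        if not classify(cf):
            return cf
        cf["margin"] = round(cf["margin"] + step, 10)
    return None

x = {"load": 0.9, "margin": 0.1}
cf = counterfactual(x)
# The suggested change (raise the margin to about 0.2) is the counterfactual.
```

The "explanation-to-action conversion" the abstract mentions would then map such a feature change onto a concrete network reconfiguration.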

    Unlike the previous method, a Reinforcement Learning (RL) agent can perform actions in its environment to achieve its goal, eliminating the need for explanation-to-action conversion. Understanding its behavior therefore becomes important, especially when it controls critical infrastructure. In this thesis, two state-of-the-art Explainable Reinforcement Learning (XRL) methods, reward decomposition and Autonomous Policy Explanation (APE), were investigated and implemented to generate explanations for technical and non-technical users, respectively. While reward decomposition explains the output of a model and feature attribution explains the input, the connection between them was missing in the literature. This thesis therefore proposes combining feature importance and reward decomposition to generate detailed explanations as well as to identify and mitigate bias in AI models. In addition, a detailed contrastive explanation can be generated to explain why one action is preferred over another. For non-technical users, APE was integrated with the attribution method to generate explanations for specific conditions, and with a counterfactual method to generate more meaningful explanations. However, APE scales poorly with the number of predicates, so an alternative textual explainer, the Clustering-Based Summarizer (CBS), was proposed to address this limitation. Because the evaluation of textual explanations is limited in the literature, a rule extraction technique was proposed to evaluate textual explanations based on their characteristics, fidelity, and performance. In addition, two refinement techniques were proposed to improve the F1 score and reduce the number of duplicate conditions.

    In summary, this thesis has developed the following contributions: a) implementation and analysis of different XAI methods; b) methods to utilize explanations and explainers; c) evaluation methods for AI explanations; and d) methods to improve explanation quality. The thesis revolves around network automation in the telecommunications field. The explainability methods for supervised learning were applied to a network slice assurance use case, and those for reinforcement learning to a network optimization use case, namely Remote Electrical Tilt (RET). In addition, applications in other open-source environments were also presented, showing broader applicability to different use cases.

    Download full text (pdf)
    kappa
  • Public defence: 2026-02-06 10:00 Q2, Stockholm
    Pucci, Giulia
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Probability, Mathematical Physics and Statistics.
    Deep Learning and Optimal Stochastic Control with Applications (2026). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis brings together theoretical advances in stochastic optimal control and modern deep learning techniques, with particular emphasis on applications in environmental and energy systems. The first group of contributions investigates optimal control from a theoretical perspective, developing new results and illustrating their relevance through real-world applications. The second part explores deep learning methods for solving stochastic differential equations and control problems that are analytically intractable.

    We begin by studying impulse control problems for conditional McKean--Vlasov jump diffusions, extending the classical verification theorem to the setting in which the state dynamics depend on their conditional distribution. We then examine an optimal control problem for pollution growth on a spatial network, formulated in a deterministic framework but capturing how environmental policies propagate across interconnected geographical regions. Finally, we develop a model for investment in renewable energy capacity under uncertainty, characterising how optimal installation strategies change in response to fluctuations in energy demand and production. These contributions show how stochastic control can be used to address pressing challenges in environmental regulation and energy planning.

    The second line of research focuses on deep learning methods for backward stochastic differential equations (BSDEs) and related formulations, together with direct machine learning approaches for high-dimensional stochastic control. Specifically, we solve Dynkin games by reformulating them as doubly reflected BSDEs, enabling the computation of optimal stopping strategies in energy market contracts. We further develop a deep learning solver for backward stochastic Volterra integral equations (BSVIEs), extending neural BSDE methods to systems with memory. In addition, we propose a machine learning framework for renewable capacity investment under jump uncertainty, treating the problem both through a direct control learning strategy and through a newly developed solver for pure jump BSDEs.

    Overall, this thesis lies at the intersection of rigorous mathematical analysis and machine learning-based approaches to stochastic optimal control. On the one hand, we show how careful modeling and theoretical results enable the formulation and study of complex, realistic control problems; on the other hand, we demonstrate how modern machine learning techniques provide powerful tools for solving these problems efficiently. The applications are motivated by urgent questions in environmental and energy sustainability.

    Download full text (pdf)
    kappa
  • Public defence: 2026-02-06 09:00 Kollegiesalen, Stockholm
    Truong, Minh
    KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Aerospace, moveability and naval architecture.
    Decoding gait in individuals with spinal cord injury: From explainable AI to predictive simulations (2026). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    While current biomechanics research based on normal models and assumptions of normalcy has substantial merit, it fails to reliably describe individuals with impairments. Spinal cord injury (SCI), whether traumatic or nontraumatic, can partially or completely damage sensorimotor pathways, leading to heterogeneous gait abnormalities. A substantial knowledge gap exists regarding biomechanical and neurological movement strategies in this population due to complex, interacting factors including age, weight, time since injury, pain, sensorimotor impairment, and spasticity. The ASIA Impairment Scale, while recommended for classifying injury severity, was not designed to characterize individual ambulatory capacity. Other standardized assessments based on subjective ratings or timing/distance measures have limited ability to characterize functional capacity in this population comprehensively.

    This thesis therefore aims to create computational frameworks for studying walking strategies in individuals with SCI, particularly incomplete SCI (iSCI), through two complementary approaches: developing machine learning algorithms that link individual characteristics to gait outcomes, and individualizing objective functions and constraints in predictive simulations using neuromusculoskeletal modeling.

    Study I proposed and evaluated a framework applying Gaussian Process Regression and SHapley Additive exPlanations (SHAP) to quantify how neurological impairments and other demographic and anthropometric factors contribute to walking speed and net oxygen cost during a six-minute walk test. Individual SHAP analyses quantified how these factors influenced walking performance for each participant, informing personalized rehabilitation that targets the areas with the most potential for improvement.
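The Gaussian Process models and clinical data are of course not reproducible from an abstract; as a sketch of the SHAP idea, exact Shapley values can be computed for a small model by averaging each feature's marginal contribution over all feature orderings. The toy walking-speed predictor and feature names below are assumptions for illustration:

```python
from itertools import permutations

def shapley_values(predict, instance, baseline):
    """Exact Shapley values: average marginal contribution of each
    feature over all orderings (tractable only for few features)."""
    names = list(instance)
    phi = {n: 0.0 for n in names}
    perms = list(permutations(names))
    for order in perms:
        x = dict(baseline)          # start from the baseline input
        prev = predict(x)
        for name in order:          # add features one at a time
            x[name] = instance[name]
            cur = predict(x)
            phi[name] += (cur - prev) / len(perms)
            prev = cur
    return phi

# Invented toy model: walking speed from two impairment scores.
predict = lambda x: 1.2 - 0.4 * x["motor_deficit"] - 0.1 * x["spasticity"]
phi = shapley_values(predict,
                     instance={"motor_deficit": 1.0, "spasticity": 1.0},
                     baseline={"motor_deficit": 0.0, "spasticity": 0.0})
```

For a linear model such as this toy predictor, each feature's Shapley value equals its own term, and the values sum to the difference between the prediction for the instance and for the baseline (SHAP's additivity property).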

    Study II stratified gait heterogeneity in individuals with iSCI by deriving clusters with similar gait patterns without a priori parameter identification and assessed clinical correlations within the derived clusters. Six distinct gait clusters were identified and characterized among 280 iSCI gait cycles, informing more individualized rehabilitation.

    Study III characterized margin of stability, temporospatial parameters, and joint mechanics in four iSCI subgroups from Study II compared to participants without disability, identifying how gait adaptations evolve as muscle weakness affects major muscle groups. Gait patterns remained normal with isolated mild plantarflexor weakness but deteriorated with combined hip muscle weakness and severe plantarflexor weakness.

    Study IV developed a bilevel optimization framework using Bayesian optimization to automatically identify optimal objective weights for predictive gait simulations in individuals with iSCI. Tested on one female participant with asymmetric muscle weakness, the framework successfully automated weight identification in 9-12 days and demonstrated that simulations with optimized weights outperformed literature-based reference weights for predicting kinematics, kinetics, and ground reaction forces, showing promise for systematically exploring personalized compensatory gait strategies with predictive simulations.
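The simulation pipeline of Study IV cannot be shown here; a minimal sketch of the bilevel structure follows, with random search standing in for Bayesian optimization and a synthetic tracking error standing in for the inner-level gait simulation (both substitutions and all numbers are invented):

```python
import random

def tracking_error(weights, target=(0.6, 0.3, 0.1)):
    """Stand-in for the inner level: run a predictive gait simulation
    with these objective weights and score it against measured gait.
    Here we simply score distance to a synthetic 'best' weight vector."""
    return sum((w - t) ** 2 for w, t in zip(weights, target))

def optimize_weights(n_trials=200, seed=0):
    """Outer level: search for objective weights minimizing the error.
    Random search stands in for the Bayesian optimization of Study IV."""
    rng = random.Random(seed)
    best_w, best_e = None, float("inf")
    for _ in range(n_trials):
        raw = [rng.random() for _ in range(3)]
        s = sum(raw)
        w = tuple(r / s for r in raw)   # weights normalized to sum to 1
        e = tracking_error(w)
        if e < best_e:
            best_w, best_e = w, e
    return best_w, best_e

w, e = optimize_weights()
```

In the real framework each inner evaluation is an expensive neuromusculoskeletal simulation, which is exactly why a sample-efficient outer optimizer such as Bayesian optimization is used instead of random search.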

    These findings demonstrate the potential of advanced data-driven and simulation techniques to address gait complexity in individuals with SCI, with broader applicability to other clinical populations.

    Download full text (pdf)
    Minh_PhDThesis_Kappa
  • Joshi, Sushen
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Space and Plasma Physics.
    Roth, Lorenz
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Space and Plasma Physics.
    Chaufray, Jean-Yves
    LATMOS-IPSL, UVSQ Paris Saclay, Sorbonne Université, CNRS.
    Gladstone, Randy
    Southwest Research Institute, San Antonio, TX, USA .
    Ivchenko, Nickolay
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Space and Plasma Physics.
    Strobel, Darrell
    Department of Earth and Planetary Sciences, The Johns Hopkins University.
    Lamy, Laurent
    LIRA, Observatoire de Paris, Université PSL, Sorbonne Université, Université Paris Cité, Cergy Paris Université, CNRS; Aix Marseille Université, CNRS, CNES, LAM.
    Probing methane in Uranus’ upper stratosphere using HST observations of the 1280 Å Raman feature (2026). In: Astronomy and Astrophysics, ISSN 0004-6361, E-ISSN 1432-0746, Vol. 705, article id A109. Article in journal (Refereed)
    Abstract [en]

    We analysed far-ultraviolet (FUV) spectra of Uranus obtained by the HST STIS and COS instruments in 2012 and 2014, respectively, to determine the brightness of Raman-scattered Lyman-alpha (Lyα) emissions centred at 1280 Å (hereafter, the Raman feature). The Raman feature is unique among the Solar System’s giant planets and forms in Uranus’ atmosphere due to weak vertical mixing of hydrocarbons with H2, leading to efficient Rayleigh–Raman scattering. Methane is the dominant hydrocarbon species on Uranus, and since it absorbs FUV radiation, it affects the Rayleigh–Raman scattering of Lyα photons by H2 and, eventually, the brightness of the Raman feature. We derive a brightness of 20 (+1/−6) R from the STIS data, which is similar to the brightness measured by Voyager 2 UVS during the 1986 flyby of Uranus, when considering the suggested recalibration of UVS measurements by a factor of ∼0.5. Based on the observed brightness, we constrain the upper altitude (pressure) level for the abundance of methane in the upper atmosphere using radiative transfer simulations that include resonant scattering by H, Rayleigh–Raman scattering by H2, and absorption by CH4. We considered the solar Lyα flux as the source of Lyα radiation at Uranus. We find that resonant scattering by H significantly affects Rayleigh–Raman scattering by H2 and thus the modelled brightness of the Raman feature. We derive methane profiles by obtaining a simultaneous fit to the observed Lyα and 1280 Å brightness of Uranus. Methane appears to be depleted (number density below 1 cm^−3) above the altitude (pressure) range of ∼478–515 km (4 × 10^−3 – 2.4 × 10^−3 mbar), while the Lyα absorption optical depth reaches unity for methane in the altitude (pressure) range of ∼237–257 km (2.54 × 10^−1 – 1.65 × 10^−1 mbar). When neglecting resonant scattering by H, the methane depletion must occur deeper in the atmosphere, at an altitude (pressure) of ∼395 km (1.4 × 10^−2 mbar), similar to previous findings based on Voyager 2 observations of the feature. The analysis of the Raman feature provides independent CH4 constraints in the upper atmosphere for detailed photochemistry modelling and highlights the importance of UV instruments for the future Uranus Orbiter and Probe (UOP) mission.

    Download full text (pdf)
    Joshi_et_al_2026_Uranus_Methane_1280_A_HST
  • Rasouli, Sannaz
    KTH, School of Architecture and the Built Environment (ABE), Sustainable development, Environmental science and Engineering.
    Analysis of cascading effects between droughts and floods in Australia (2025). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Droughts and floods are a prominent feature of the Australian landscape. These dry and wet extremes have exerted pronounced negative impacts on sectors such as agriculture, wildlife, and infrastructure, as well as on socio-economic conditions. Extensive research has investigated the spatial and temporal characteristics of extreme droughts and floods in isolation; however, the chains of interlinked events remain a major gap in the field. Such correlated occurrences are called cascading events, defined as a primary hazard triggering a secondary hazard, like toppling dominoes. Consequently, progressing our understanding of these events has become critical in order to predict their occurrence, minimise potential impacts, and allow for strategic recovery.

    This report applied a new method for quantifying rapid shifts from dry to wet conditions, used as a proxy for studying drought-affected locations being hit by floods.

    The Australian Monthly Gridded Rainfall Dataset from the Australian Bureau of Meteorology (BoM) was used to assess the spatial extent of the hydrological extremes. The gridded data span the period 1900–2008, with a spatial resolution of 0.05° × 0.05°. A geospatial analysis was conducted using ArcMap, with Python for coding. Further, specific flood and drought events from Australia's historical record were used as a baseline for exploring the changing character of precipitation extremes. This was achieved by selecting a wide range of wet, dry and dry-to-wet transition events from historical analogues, focusing specifically on meteorological drought. The dry-to-wet transitions were termed "precipitation whiplash". Hydrological extremes were quantified by first identifying the wet season from BoM's average rainfall data for each month. From there, the accumulated precipitation per rainy season and percentile maps were calculated and correlated in order to produce climate-extreme maps. The final precipitation whiplash maps were derived from these and compared with findings in the literature.
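The gridded BoM analysis is not reproduced here; the core percentile logic can be sketched for a single grid cell's seasonal rainfall totals, flagging a whiplash when a season at or below a dry percentile is immediately followed by one at or above a wet percentile (the thresholds and rainfall numbers below are invented):

```python
def percentile(values, q):
    """Simple nearest-rank percentile (q in 0-100)."""
    s = sorted(values)
    k = max(0, min(len(s) - 1, round(q / 100 * (len(s) - 1))))
    return s[k]

def whiplash_years(seasonal_totals, dry_q=20, wet_q=80):
    """Indices where a dry season (<= dry_q percentile) is followed
    directly by a wet season (>= wet_q percentile)."""
    dry = percentile(seasonal_totals, dry_q)
    wet = percentile(seasonal_totals, wet_q)
    return [i + 1 for i in range(len(seasonal_totals) - 1)
            if seasonal_totals[i] <= dry and seasonal_totals[i + 1] >= wet]

# Invented seasonal rainfall totals (mm) for one grid cell:
rain = [420, 180, 150, 610, 380, 400, 140, 590, 450, 430]
events = whiplash_years(rain)
```

The sensitivity to the choice of `dry_q` and `wet_q` in this sketch mirrors the percentile-variable sensitivity reported in the abstract's results.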

    The results indicate that for all five case studies, both the extreme dry and wet events of the selected periods demonstrate a dry-to-wet transition, i.e. precipitation whiplash. In other words, whiplash events take place in shifts from extreme dry to extreme wet conditions. The sensitivity analysis points to sources of uncertainty and poor robustness of the findings, as a major shift in results occurs when the percentile variables are changed. Furthermore, the calculated precipitation whiplash results agree poorly with previous historical accounts; in particular, records of the extreme wet occurrences that would lead to a whiplash event were difficult to find.

    Download full text (pdf)
    fulltext
  • Malmsten, Nicklaes
    KTH, School of Architecture and the Built Environment (ABE), Real Estate and Construction Management.
    Oskadlighetsprövning vid fastighetsreglering: En studie om Lantmäteriets oskadlighetsprövning då en fastighets värde minskar i samband med fastighetsreglering (2025). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Property reallotment is a collective term for altering, merging and dividing properties, and it often entails changes in the value of the properties involved. This change in value, particularly depreciation, may affect the relationship between the property owner and a creditor whose security may be impaired. Lantmäteriet (LM) is therefore tasked in specific cases with deciding on compensation under Chapter 5, Section 16 of the Real Property Formation Act (FBL) and distributing it among claimants. Central to this process is a so-called harmlessness test, which assesses whether the reallotment may result in reduced security for the claimant. However, the current legal text gives no clear directives on how such an assessment should be carried out or which criteria matter to the land surveyor performing it. This study therefore aims to investigate how LM carries out harmlessness tests, how its practice relates to the existing legal text, and how property owners and claimants are affected by property reallotments where the test is applied. To provide insight into the relevant legal text and the reasoning applied in case processing, the jurisprudential method and a qualitative method in the form of semi-structured interviews have been applied, with delimitations drawn primarily according to Chapter 5, Section 16 of the FBL.

    The results of the interviews indicate that the assessment is mainly based on the assessed value of the property and the mortgages on it. Although procedures and risk assessment may differ between units and administrators, a common feature of the harmlessness test is that the creditor's consent is always sought whenever the assessment is uncertain.

    Download full text (pdf)
    fulltext
  • Marcinek, Lubos
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Beskow, Jonas
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Gustafsson, Joakim
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    A dual-control dialogue framework for human-robot interaction data collection: integrating human emotional and contextual awareness with conversational AI (2024). In: International Conference of Social Robotics (ICSR 2024), 2024. Conference paper (Refereed)
    Abstract [en]

    This paper presents a dialogue framework designed to capture human-robot interactions enriched with human-level situational awareness. The system integrates advanced large language models with real-time human-in-the-loop control. Central to this framework is an interaction manager that oversees information flow, turn-taking, and prosody control of a social robot’s responses. A key innovation is the control interface, enabling a human operator to perform tasks such as emotion recognition and action detection through a live video feed. The operator also manages high-level tasks, like topic shifts or behaviour instructions.

    Input from the operator is incorporated into the dialogue context managed by GPT-4o, thereby influencing the ongoing interaction. This allows for the collection of interactional data from an automated system that leverages human-level emotional and situational awareness. The audiovisual data will be used to explore the impact of situational awareness on user behaviors in task-oriented human-robot interaction.
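The paper's actual interaction manager is not included in this listing; a minimal sketch of the dual-control idea, merging user turns and operator annotations into one dialogue context for the language model, follows. All class, method, and message names are invented:

```python
class InteractionManager:
    """Minimal sketch: merge user turns and operator annotations into
    a single dialogue context for the language model."""
    def __init__(self):
        self.context = [{"role": "system",
                         "content": "You are a social robot."}]

    def user_turn(self, text):
        self.context.append({"role": "user", "content": text})

    def operator_note(self, note):
        # Human-in-the-loop input: perceived emotion, detected actions,
        # or topic shifts are injected as system messages that steer
        # the next generated response.
        self.context.append({"role": "system",
                             "content": f"[operator] {note}"})

    def prompt(self):
        # In the real system this context would be sent to the LLM
        # (GPT-4o in the paper); here we just return it.
        return self.context

im = InteractionManager()
im.user_turn("I had a rough day.")
im.operator_note("user looks sad; respond with empathy, soft prosody")
messages = im.prompt()
```

The design point is that operator annotations live in the same message history as user turns, so the model conditions on human situational awareness without any change to the generation call itself.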

    Download full text (pdf)
    fulltext
  • Sanclemente, Mateo
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Energy Technology, Heat and Power Technology.
    Trevisan, Silvia
    KTH, School of Industrial Engineering and Management (ITM), Energy Technology, Heat and Power Technology.
    Guédez, Rafael
    KTH, School of Industrial Engineering and Management (ITM), Energy Technology, Heat and Power Technology.
    Techno-Economic Assessment of Solar Hybrid System With High-Temperature Heat Pump for Industrial Heat Generation2025In: SolarPACES 2024 Conference Proceedings, TIB Open Publishing , 2025, Vol. 3Conference paper (Refereed)
    Abstract [en]

    Solar thermal collectors, high-temperature heat pumps and thermal energy storage are key technologies for industrial decarbonization. The development and installation of these technologies represent a double benefit. First, by using renewable energy sources, they contribute to reducing fossil fuel consumption. Second, their integration can supply heat for industrial processes where other single renewable-based technologies, such as solar, can encounter limitations. This work presents the techno-economic assessment of a solar hybrid system using pressurized water as heat transfer fluid. The system is studied in the Greek electricity market considering prices of 2022. Dispatch strategies and system sizing are identified for optimal techno-economic performance. The main performance indicators investigated are the levelized cost of heat, the operational expenditure, and savings compared to traditional fossil-fuel solutions. The results highlight that the levelized cost of heat is as low as 98 €/MWh with operational cost savings of 23k €/y against traditional non-flexible gas boilers.

    Download full text (pdf)
    fulltext
  • Public defence: 2026-02-06 13:00 D2, via Zoom: https://kth-se.zoom.us/j/66460872948, Stockholm
    Pohjanen, Emmie
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Protein Science, Cellular and Clinical Proteomics. KTH, Centres, Science for Life Laboratory, SciLifeLab.
    Spatial proteome mapping of specialized subcellular structures in human cells2026Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Proteins are the primary workhorses of the cell, carrying out virtually all processes to sustain cellular functioning. From enzymes that catalyze biochemical reactions, to motor proteins that transport large cellular cargo across the cell, protein functions are as diverse as the unique amino acid sequences that compose the proteins. Protein function is largely dependent on the subcellular localization of the protein, as subcellular compartmentalization enables different environments that are suitable for different reactions. Knowledge about protein localization and function can, in the broader context, help us understand the cell in health and disease, as protein dysfunction and mislocalization are key drivers of developing disease.

     The work in this thesis has been carried out within the framework of the Human Protein Atlas (HPA) initiative, primarily for the subcellular resource. In Paper I, we measured the autoantibody profiles of patients with systemic sclerosis with the goal to identify new candidate biomarkers associated with fibrosis. We performed a near proteome-wide, untargeted screen combined with a targeted bead array and revealed 11 autoantibodies with higher prevalence in patients with systemic sclerosis than in controls. Two of these show high potential for being used as biomarkers for systemic sclerosis patients that are affected by skin and lung fibrosis. 

     For Paper II, we took advantage of the vast image library generated by the subcellular resource of HPA to create an image-based map of the micronuclear proteome. In total, we identified 944 proteins as micronuclear, dominated by proteins associated with nuclear and chromatin processes. The findings of this study expand our view of micronuclei from byproducts of mitotic errors to potential active participants in biological processes. In Paper III, we applied antibody-based spatial proteomics combined with 3D confocal imaging to map 715 proteins to primary cilia, and three ciliary substructures, across three different cell lines. Of the identified proteins, 91 had not been identified in cilia before, expanding our knowledge on the ciliary proteome and function. The findings of the study portray cilia as sensors able to tune their proteome to effectively sense the environment and compute cellular responses. Finally, in Paper IV, we mapped the subcellular localization of a subset of the human sperm proteome to 11 distinct subcellular structures of human sperm cells, providing the first image-based resource on protein localization in sperm cells. We found that 54% of the studied sperm proteins vary in spatial distribution and/or abundance between individual sperm, which raises the question of subpopulations of sperm.

     In summary, this thesis expands our knowledge on protein localization in specialized subcellular structures and provides a foundation for further in-depth research into the mechanisms behind the drivers of certain diseases, such as for autoimmunity, cancers, ciliopathies, and male infertility phenotypes. 

    Download full text (pdf)
    kappa
  • Sanclemente, Mateo
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Energy Technology, Heat and Power Technology.
    Trevisan, Silvia
    KTH, School of Industrial Engineering and Management (ITM), Energy Technology, Heat and Power Technology.
    Law, Richard
    School of Engineering, Merz Court, Newcastle University, NE1 7RU, Newcastle Upon Tyne, UK.
    Baker, Henry
    School of Engineering, Merz Court, Newcastle University, NE1 7RU, Newcastle Upon Tyne, UK.
    Høeg, Arne
    ENERIN AS, N-1383, Asker, Norway.
    Guédez, Rafael
    KTH, School of Industrial Engineering and Management (ITM), Energy Technology, Heat and Power Technology.
    Techno-economic assessment of a high temperature Stirling heat pump with latent thermal energy storage for industrial heat generation2026In: Energy, ISSN 0360-5442, E-ISSN 1873-6785, Vol. 344, article id 139910Article in journal (Refereed)
    Abstract [en]

    High temperature heat pumps and thermal energy storage are key technologies for industrial decarbonization. An effective integration of these technologies can provide flexible and reliable process heat whilst facilitating further uptake of renewable energy sources in the grid. This work presents a comprehensive techno-economic assessment of an integrated system based on a novel high temperature Stirling heat pump coupled with an innovative latent thermal energy storage to deliver process heat at 200 °C. Three different layouts were investigated: a single Stirling heat pump upgrading waste heat, a single Stirling heat pump upgrading ambient heat, and a two-stage vapor compression heat pump coupled with a Stirling heat pump for upgrading ambient heat. The systems are studied with electricity prices from 2023 from four electricity markets: Germany, Greece, Norway, and Spain. Operational dispatch strategies and system sizing are identified for optimal techno-economic performance. The main performance indicators investigated are the levelized cost of heat, CO2 emissions, operational expenditures, and cost savings compared to traditional fossil-fuel and electric boilers. The results highlight that the levelized cost of heat can be reduced by 3–12 % in Germany and Spain while generating operational cost savings of 30–40 %. CO2 emissions can be reduced by 24–63 % when upgrading waste heat. In Norway, the levelized cost of heat can be reduced by 35–45 % while generating operational cost savings of 50–70 % against traditional gas boilers. In Greece, the levelized cost of heat can be reduced by 1 % in the Mid Scenario.

    Download full text (pdf)
    fulltext
  • Costa, Melvin
    KTH, School of Engineering Sciences (SCI), Physics.
    Design of Test Matrix for HWAT facility using GOTHIC code2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis develops a test matrix for the High-Pressure WAter Test (HWAT) facility at KTH to support quantitative validation of the GOTHIC code within a verification, validation, calibration, and uncertainty quantification framework. A detailed GOTHIC model of the three-loop HWAT facility was developed. The methodology integrates solution verification, measurement uncertainty quantification for differential pressure, temperature, and condenser level measurements, forward uncertainty propagation, one-at-a-time sensitivity analysis, and calibration strategies for hydrodynamic losses, thermal losses, and thermal inertia.

    Key results demonstrate that optimal roughness calibration requires high mass flow rates (0.8 kg/s) and temperatures (250 °C), yielding the least relative uncertainty. Local loss coefficient uncertainties remain correlated with roughness estimates, requiring sequential calibration at high temperature and high mass flow rate. Thermal loss parametrisation via insulation thickness proved unreliable (with an error exceeding 200%), while approaches using air-gap/insulation conductivity and heat transfer coefficient showed promise. Sensitivity analysis of thermal loss and inertia identified ambient temperature, air-gap conductivity, heat transfer coefficient, and flange density as dominant UIPs affecting system response quantities, whereas sensitivity analysis of forced-to-natural circulation transients identified local loss coefficients and the heat transfer area of the steam generator as dominant UIPs.

    Conclusions establish that the chosen methods will provide a systematic approach to reducing the uncertainty of measurements and input parameters, thereby providing a robust foundation for GOTHIC validation against HWAT transients. Recommendations include Morris global sensitivity methods and extended natural circulation instability studies.

    Download full text (pdf)
    fulltext
  • Trundle, Graeme
    et al.
    KTH, School of Engineering Sciences (SCI), Physics, Nuclear Science and Engineering.
    Bechta, Sevostian
    KTH, School of Engineering Sciences (SCI), Physics, Nuclear Science and Engineering.
    Galushin, Sergey
    Vysus Group, Stockholm, Sweden.
    Roshan Ghias, Sean
    KTH, School of Engineering Sciences (SCI), Physics, Nuclear Science and Engineering.
    Söderström, Michael
    Vattenfall AB, Stockholm, Sweden.
    Olsson, Anders
    Vysus Group, Stockholm, Sweden.
    Reliability Assessment of the BWRX-300 Passive Isolation Condenser System: Addressing Uncertainties in Two-Phase Natural Circulation Flow Modeling2025Conference paper (Other academic)
    Abstract [en]

    Passive safety systems are increasingly utilized in prospective nuclear power plant designs. The low magnitude of the forces involved in such systems, combined with the uncertainty inherent in the factors affecting them, poses a problem in assessing their reliability compared to active counterparts. The purpose of this paper is to investigate and apply a state-of-the-art technique in passive reliability assessment, known as the Reliability Methods of Passive Systems (RMPS) methodology, to the isolation condenser system (ICS) of the prospective BWRX-300 small modular reactor (SMR) design. The ICS is a safety system driven by natural circulation that provides emergency core cooling and pressure control for the BWRX-300. Using RMPS to analyze the effect of uncertainties in (a) the thermal characteristics of the fuel rods and (b) two-phase constitutive correlation factors on ICS operation, the reliability of natural circulation was quantified with a confidence of 95%, yielding an immeasurably small failure probability. Considering residual uncertainty, an engineering judgment assigned a failure probability of 1.00E-07. This finding was integrated into a fault tree analysis of the ICS using failure mode and effect analysis (FMEA) of system components, including insufficient natural circulation as a failure mode. Analysis of sequences leading to failure resulted in system unavailability being determined as 1.62E-07 for the case of all three loops initially available and 2.91E-05 for the case when only two loops are initially available. Sensitivity analysis of the natural circulation failure probability with respect to ICS system unavailability was also performed to investigate the robustness of the design.

    Download full text (pdf)
    fulltext
  • John, Jacob
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electromagnetic Engineering and Fusion Science.
    Månsson, Daniel
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electromagnetic Engineering and Fusion Science.
    A Ragone Plot Framework for Battery Aging Studies2025In: 2025 IEEE PES Innovative Smart Grid Technologies Conference Europe (ISGT Europe), Institute of Electrical and Electronics Engineers (IEEE), 2025Conference paper (Refereed)
    Abstract [en]

    The growing integration of renewable energy sources and electric vehicles (EVs) into the grid has increased the importance of energy storage systems (ESS). As many different types of ESS presently exist, the theory of Ragone plots can aid in evaluating the energy-power relationship of these storage devices. Until now, much of the research has focused on analytical models of ideal batteries or experimental data from batteries to develop Ragone plots. However, there are often discrepancies between Ragone plots created from analytical studies and actual experimental data. A research gap exists in developing analytical models for batteries where studies often neglect two factors: (1) actual operational constraints like temperature, voltage, and current limits, and (2) aging characteristics. The research presented here aims to bridge this gap by implementing a practical Ragone plot method through analytical studies with measured constraints. In addition, this model incorporates aging characteristics to determine the trajectory of the Ragone plot as the cell ages. Through this inclusive approach, we aim to bridge the gap between analytical models and the operational reality of battery performance assessment.

    Download full text (pdf)
    fulltext
  • Kulanovic, Aneta
    et al.
    Department of Management and Engineering, Linköping University, Linköping SE-581 83, Sweden.
    Nordensvärd, Johan
    KTH, School of Industrial Engineering and Management (ITM), Industrial Economics and Management (Dept.), Management & Technology. Department of Management and Engineering, Linköping University, Linköping, Sweden; Institute for Global Sustainability, Boston University, Boston, United States.
    Urban, Frauke
    KTH, School of Industrial Engineering and Management (ITM), Industrial Economics and Management (Dept.), Sustainability, Industrial Dynamics & Entrepreneurship. SCANCOR, Weatherhead Center for International Affairs, Harvard University, USA.
    The discursive silos of transport discourse in Sweden: Using future storylines to understand the polarization and politicization of sustainable aviation transitions2026In: Futures: The journal of policy, planning and futures studies, ISSN 0016-3287, E-ISSN 1873-6378, Vol. 176, article id 103755Article in journal (Refereed)
    Abstract [en]

    Within the multi-level perspective (MLP) on sustainability transitions, there has been a rise in research on storylines and discursive framing that have become more central in understanding how competing narratives shape the trajectories of innovation. This paper examines how policy actors and stakeholders construct and frame competing scenario narratives of sustainable aviation futures. Using a scenario narrative framing approach, we analyze empirical data from focus groups and interviews with Swedish aviation sector actors. The findings reveal a discursive split: one set of narratives supports an active state fostering sustainable aviation through niche innovation (aligned with ecological modernism), while another advocates for limiting aviation altogether (reflecting green theory). These national narratives are contrasted by a multilateral, risk-averse discourse calling for international or EU-level decision-making processes. Our results highlight a deeper divide — scenario narratives are polarized and politicized, with transport mode innovations increasingly tied to political identities. Centre-right actors tend to support aviation innovation over rail, while green and Centre-left actors often argue the reverse. This politicization reflects broader discursive struggles, as seen in debates such as the proposed closure of Västerås regional airport and Bromma airport. This includes dissuading tourists who use aircraft, excluding aviation from approaches to collective traffic, and lacking integration in any public transport system. This leads to aviation being perceived neither as private nor collective transport in discourses and as ambivalent in policies.

    Download full text (pdf)
    fulltext
  • Meijer, Jonas
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Chemistry.
    Local electron attachment energy in the prediction of covalent inhibitor reactivity2026Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Covalent inhibitors are a class of pharmaceuticals that form covalent bonds to their target, which is typically a nucleophilic residue on an enzyme. To speed up the process of drug discovery and decrease costs, quantum chemical methods have previously been used to predict their reactivity using measures such as activation energies from transition state calculations and Parr's electrophilicity index ω. This work investigates the ability of the local electron attachment energy ES(r) and the electrostatic potential VS(r) associated with the minimum local electron attachment energy ES,min to predict the reactivity of covalent inhibitors and compares the results to those of the electrophilicity index.

    The local electron attachment energy, the electrostatic potential associated with the ES,min, and Parr's electrophilicity index were calculated for 4 different datasets of covalent inhibitors. They were then compared to experimental half-lives or computed activation energies. Two of the datasets contained molecules with acrylamide warheads and two contained propynamide warheads; both acrylamides and propynamides have an unsaturated β-carbon as the reactive site. Overall, the ES,min shows the most consistent ability to predict reactivity, with acceptable to excellent correlation for both acrylamides and propynamides, and there is possibly a trend toward the ES,min working better than the other properties when there are different substituents on the warhead β-carbons. The VS(r) shows a similar performance to the ES,min for the acrylamides; for the propynamides it shows a slightly higher correlation for one dataset but almost no correlation for the second. The ω works as well as or better than the ES,min for the two datasets containing acrylamides but performs poorly for the datasets containing propynamides.

    Download full text (pdf)
    fulltext