kth.se Publications
1 - 53 of 53
  • Public defence: 2025-05-23 09:00 Kollegiesalen, Room 4301, Stockholm
    Kizyte, Asta
    KTH, School of Engineering Sciences (SCI), Engineering Mechanics.
    Neuromechanical Assessment of Intact and Impaired Muscle Control: High-density EMG-informed approach. 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Neuromuscular impairments in ankle dorsi-/plantarflexor muscles present rehabilitation challenges after spinal cord injury (SCI) and stroke. Reliable torque prediction and characterization of muscle impairments are essential for guiding rehabilitation and monitoring recovery. This thesis aims to assess how high-density EMG (HDEMG) improves torque estimation by integrating spatial and neurophysiological data (Studies I & II) and examine motor unit (MU) behavior and corticomuscular connectivity in SCI and stroke (Studies III & IV). Proposed methodologies combine HDEMG with advanced signal processing techniques. Specifically, Study I uses machine learning (ML) to predict torque from bipolar EMG, HDEMG, and extracted features. Study II incorporates a computational cumulative spike train-driven motoneuron pool model into a neuromusculoskeletal framework to generate neural drive signals. Study III uses HDEMG decomposition to analyze MU firing behavior in SCI. Study IV investigates MU/EMG–EEG corticomuscular coherence (CMC) to assess corticospinal disruptions in stroke.

    Findings from Studies I & II show ML methods predict torque well in static conditions but face challenges in dynamic movement due to the absence of kinematic constraints. Neuromusculoskeletal modeling provides a better representation of neural and mechanical function by incorporating MU firing properties. Studies III & IV offer insights into MU-level changes in neuromuscular disorders. Specifically, Study III identifies SCI-related EMG and MU behavior alterations, reflecting compensatory motor control strategies. Study IV introduces MU-level CMC analysis in stroke, revealing that motor neuron parameters do not significantly determine CMC strength, and the fundamental pattern of beta-band coupling over motor areas remains identifiable across all subject groups and CMC modalities.

    Overall, this thesis demonstrates that HDEMG enhances torque estimation and neuromuscular assessment. By integrating spatial EMG features and MU-level analyses, it deepens understanding of pathological motor control and neurophysiology, with implications for rehabilitation, assistive devices, and neuromuscular modeling.
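    The core idea of Study I, regressing joint torque on features extracted from many EMG channels, can be sketched in a few lines. Everything below is illustrative only: the data are synthetic, and the RMS-style features and ridge regression are stand-ins for the thesis's actual feature sets and ML models.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for HDEMG: 64 channels, 2000 time windows.
    # In practice these would be RMS/spectral features from an electrode grid.
    n_windows, n_channels = 2000, 64
    neural_drive = rng.uniform(0, 1, size=n_windows)            # latent activation level
    emg_rms = np.outer(neural_drive, rng.uniform(0.5, 1.5, n_channels))
    emg_rms += 0.05 * rng.standard_normal(emg_rms.shape)         # measurement noise
    torque = 30.0 * neural_drive + rng.normal(0, 0.5, n_windows) # Nm, roughly linear

    # Ridge regression: w = (X^T X + lam*I)^-1 X^T y
    X = np.hstack([emg_rms, np.ones((n_windows, 1))])            # add a bias column
    lam = 1e-2
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ torque)

    pred = X @ w
    rmse = np.sqrt(np.mean((pred - torque) ** 2))
    print(f"RMSE: {rmse:.2f} Nm")
    ```

    With many informative channels, even this linear model tracks torque closely in the static case; as the abstract notes, dynamic movement is harder because no kinematic constraints enter the model.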

    Download full text (pdf)
    fulltext
  • Public defence: 2025-05-23 09:15 Air & Fire, Stockholm
    Bhalla, Nayanika
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Gene Technology. KTH, Centres, Science for Life Laboratory, SciLifeLab.
    Patterns of Life: Advancing Spatial Omics for a Better Understanding of Metabolic Tissues. 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Life hinges on the precise interplay between gene regulation and metabolism, a dynamic balance that unfolds in specific tissues and underlies both normal physiology and disease. This thesis follows a unifying thread, advancing spatial omics for tissue-specific metabolic insights and translational applications by combining cutting-edge spatial transcriptomics and spatial epigenomics to illuminate how local regulatory mechanisms shape metabolic function.

    In the initial segment of the thesis, we establish the conceptual groundwork, exploring how chromatin accessibility and transcriptional programs orchestrate cellular metabolism. We then apply spatial transcriptomics to two distinct yet metabolically active tissues. Paper I maps subcutaneous white adipose tissue (WAT) and discovers multiple adipocyte subtypes with divergent insulin responses, underscoring the critical role of tissue architecture in metabolic homeostasis. Paper II extends these methods to the human placenta, revealing region-specific gene expression patterns that help explain the metabolic dysregulation observed in preeclampsia. Given the placenta’s pivotal role in maternal-foetal nutrient exchange, these findings offer novel insights into how morphological compartments become disrupted in disease.

    Building on these insights, Paper III introduces spatial ATAC-seq, a novel technique for profiling open chromatin within intact tissues, linking regulatory elements to their spatial context. Paper IV refines this protocol for broader adoption, integrating it with commercial platforms and enabling seamless multi-omic workflows. Building on these technological advances, Paper V returns to adipose tissue in a clinically relevant setting, employing a multi-omic approach to chart the long-term remodelling of WAT after bariatric surgery. By capturing transcriptional shifts in adipocytes and immune–stromal interactions, we highlight the tissue-level transformations that underpin sustained metabolic improvements.

    Collectively, these studies showcase how spatial omics can deepen our understanding of tissue-specific metabolism, bridging foundational biology and translational research. They also underscore the power of integrated multi-omic approaches in revealing how chromatin states, gene expression, and metabolic function intersect in situ. By decoding the spatial architecture of gene regulation, we not only unravel the cellular intricacies of adipose and placental tissues but also pave the way for targeted therapeutic interventions in metabolic diseases, offering a powerful lens through which to view, and ultimately shape, human health.

    Download (pdf)
    Kappa
  • Public defence: 2025-05-23 10:00 F3, Lindstedtvägen 28, KTH Campus, Stockholm
    Larsson, Martin
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Building Materials.
    Structural design, degradation and condition assessment of cycle paths. 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    A shift in modal share from car driving to cycling has many benefits, e.g., health benefits from increased physical activity and less pollution and congestion. A smooth cycle path surface with sufficient friction is important for cyclist traffic safety, comfort and level of service. Cracks and surface unevenness are frequent maintenance-related deficiencies associated with the degradation of the structure. One purpose of this thesis is to identify degradation factors specific to cycle paths, through a state-of-the-art literature review. The review is complemented by four appended papers. Paper A analyses the stated distress modes and causes reported on Swedish municipal cycle paths with respect to climatic and population data. Paper B evaluates a novel method for condition assessments on cycle paths related to cycling comfort: the Bicycle Measurement Trailer. Paper C proposes alternative deflection bowl parameters for structural evaluation of cycle paths from in-situ falling weight deflectometer and light weight deflectometer measurements. Paper D reports on the results of full-scale testing on instrumented cycle path structures. The main results from the papers indicate that surface roughness and unevenness, longitudinal cracks and edge deformations are the most common distress modes. The main reasons behind this distress are structural interventions, tree roots, frost heave and heavy vehicles. The load-bearing capacity close to the pavement edge and at increased moisture content is reduced. The proposed alternative approaches for cycle path condition assessment were able to assess the surface roughness and evenness, along with the structural condition, for practical applications on the investigated cycle paths. The conclusions of the thesis suggest that the structural design principles for cycle paths in the Swedish structural design manual need to be updated.
Models that better describe the behaviour of thin-surfaced asphalt pavements, especially with respect to climate, should be developed. More studies are recommended to validate the proposed condition assessment approaches.

    Download (pdf)
    summary
  • Public defence: 2025-05-26 09:00 T2 (Jacobssonsalen), Huddinge, Stockholm
    Lindgren, Natalia
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Neuronic Engineering.
    From Impact to Insight: Finite Element Modeling of Real-World Head Trauma. 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Traumatic head injuries represent a major global health burden, affecting up to 70 million people worldwide each year. To study head injury mechanisms and evaluate preventive measures, virtual, anatomically detailed human surrogates, referred to as Human Body Models (HBMs), can be created using Finite Element (FE) modeling techniques. Such FE models can be used to computationally recreate real-world head traumas to study the human response to impact and reveal injury mechanisms. However, since FE analysis is an inherently heavy computational task, there are numerous modeling challenges associated with using it for this purpose: constitutive models must be assigned to complex biological tissues, models need to be properly validated, the chosen approach should be feasible in terms of time, and so forth. This doctoral thesis aims to address a few of these difficulties.

    This thesis is composed of four comprehensive studies, each related to the overall objective of developing new methodologies and models, and further developing existing ones, for in-depth FE reconstructions of real-world head trauma. To emphasize their applicability in head injury research, the four studies also feature in-depth reconstructions of real-world injurious events. In the first study, male and female pedestrian HBMs were developed based on an existing occupant HBM, along with an efficient framework for anthropometric personalization. In the second study, a framework for reconstructing head traumas of pedestrians and cyclists in real-world road traffic accidents was developed, validated and exemplified by reconstructing 20 real-world cases. In the third study, a material model for cranial bone was developed, validated, and used for predicting skull fractures in five fall accidents. Lastly, in the fourth study, the material model was applied to a subject-specific head model, used to conduct an in-depth reconstruction of a workplace fatality to assess the protective effect of construction helmets.

    Together, these four studies highlight how in-depth FE reconstructions, involving geometrically personalized models of the human body, can provide head injury predictions with striking resemblance to real-world data. When conducted with care, such reconstructions can offer valuable insights into the complex dynamics of head trauma. They can be indispensable tools for evaluating injury prevention strategies, and can potentially be useful within the field of forensic medicine, as they may help make forensic evaluations more objective.

    Download full text (pdf)
    Kappa
  • Public defence: 2025-05-26 09:00 Kollegiesalen, Brinellvägen 8, KTH Campus, Stockholm
    Papageorgiou, Asterios
    KTH, School of Architecture and the Built Environment (ABE), Sustainable development, Environmental science and Engineering, Resources, Energy and Infrastructure.
    Assessing circular economy progress in urban areas: An Industrial Ecology perspective. 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Rapid urbanization in recent decades, combined with unsustainable production and consumption practices rooted in the linear “take-make-use-waste” model, has transformed urban areas into global hotspots of resource consumption, waste, and emissions. As a consequence, urban areas today exert tremendous pressure on natural resources and contribute to severe environmental problems both within and beyond their boundaries. To address the challenges of unsustainable urbanization, an increasing number of local governments worldwide are embracing the circular economy (CE) concept and are actively working to develop and implement circular strategies at the urban level. In this context, it is crucial to equip local decision-makers, such as policy makers and urban planners, with effective tools to assess progress toward the CE, enabling them to design impactful strategies and monitor their implementation based on comprehensive information.

    This thesis aims to advance knowledge on approaches to monitor and assess CE progress at the urban level to support informed decision-making within the CE context. To achieve this, it investigates how the indicator-based approach and urban metabolism (UM) assessment methods can be used to assess CE progress in urban areas. Specifically, the research focuses on indicator-based frameworks and two UM assessment methods: material and energy flow analysis (MEFA) and urban metabolic life cycle assessment (UM-LCA), which integrates MEFA with life cycle assessment (LCA). The aim of the thesis is addressed by answering the following research questions: 

    1.      What is the availability of indicator-based frameworks for assessing and monitoring CE progress at the urban level, and what are their strengths and limitations? 

    2.      What is the applicability and utility of MEFA and UM-LCA in supporting the design and monitoring of urban-level circular strategies? 

    3.      How can the indicator-based and UM-LCA approaches be integrated to provide a comprehensive assessment of UM and circularity in urban areas, supporting decision-making within the context of the CE?

    To address these research questions, a combination of methods is employed, including literature reviews, indicator-based assessments, MEFA, and UM-LCA, with the urban area of Umeå in Sweden serving as a study area. Additionally, a novel framework that integrates the indicator-based and UM-LCA approaches is developed and applied to the study area. 

    The results indicate that existing indicator-based frameworks have potential for monitoring and assessing CE progress at the urban level. However, they also have limitations, particularly in relation to data constraints and their scopes, which are not comprehensive enough to capture all aspects related to the CE. Thus, relying solely on indicator-based frameworks cannot provide all the necessary information for decision-making in the CE context.

    The applications of MEFA and UM-LCA to the study area demonstrate that these two methods can provide detailed quantitative information on material and energy flows and environmental impacts caused by urban areas, thus offering insights that indicators alone cannot provide. This makes them particularly useful tools for supporting the design and monitoring of circular strategies. Nevertheless, applying these methods without the use of appropriate indicators cannot fully support decision-making within the CE context, as they have limited potential to capture specific aspects of the CE, such as resource efficiency, waste management performance and governance aspects.

    As the use of the indicator-based approach and UM assessment approaches in isolation can only provide fragmented and incomplete insights, this thesis advocates for their integration. For this purpose, it introduces a novel framework that combines the UM-LCA approach with an indicator-based framework comprising 27 CE indicators. The application of the framework demonstrates its great potential to inform decision-making in the CE context by providing detailed insights into material and energy flows, environmental impacts and urban circularity. However, the proposed framework also has limitations, including complexity of application, extensive data requirements, and limited capacity to assess socio-economic aspects. Thus, further research is recommended to address these limitations.

    Download (pdf)
    kappa
  • Public defence: 2025-05-26 09:00 F3 (Flodis), Stockholm
    Zhang, Xiaochen
    KTH, School of Engineering Sciences (SCI), Engineering Mechanics.
    Advancing Exoskeleton Use Post Stroke: Developing and Optimizing a Soft Biplanar Ankle Exoskeleton. 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Stroke is a leading cause of long-term disability worldwide. Dropfoot gait, or the inability to adequately lift, advance, and land the foot on the more impaired side, is one of the most common gait impairments following a stroke and often results in reduced mobility and increased fall risk, severely impacting independence and quality of life. Wearable robotics and exoskeletons have been widely explored for their potential in physical rehabilitation and mobility assistance for individuals with motor disorders. However, despite their promise, few existing systems have demonstrated convincing evidence for use. The specific exoskeleton performance challenges in assisting dropfoot gait that are addressed in this compilation thesis are 1) to simultaneously control both sagittal-plane ankle dorsiflexion and frontal-plane ankle inversion/eversion motions, and 2) to identify control strategies that are individualized based on gait impairments and subjective preferences.

    Specifically, the aims of the thesis were to design an ankle exoskeleton that is capable of providing biplanar assistance and meets both biomechanical and feasibility requirements, to develop a customized control framework that optimizes multiple performance metrics, and to identify subject-specific assistive parameters that improve gait metrics while aligning with users' subjective assessments as well.

    The first two studies focus on the development and feasibility of a soft ankle exoskeleton designed to provide assistance in both ankle dorsiflexion and eversion via cable-driven mechanisms and compliant materials. Initial testing confirmed the exoskeleton's ability to effectively guide ankle joint motion in both planes with minimal resistance. In the second study, the device's feasibility was evaluated in a pilot group of persons with dropfoot gait following a stroke. Improvements in key gait parameters, along with positive user assessments of the device's comfort, usability, and perceived effectiveness, encourage further application of the device in persons in a chronic post-stroke phase.

    The third and fourth studies focus on developing personalized exoskeleton control strategies, specifically a multi-objective human-in-the-loop optimization framework that evaluates individual responses to various assistive profiles, then identifies assistive profiles tailored to each individual's gait impairments. The framework was constructed to simultaneously optimize two objectives that describe gait quality. This approach yielded not just one, but a group of good solutions that improve both gait metrics to varying degrees, among which a solution can be selected based on context and preference. The framework was developed and tested on a group of non-disabled subjects with a simulated dropfoot impairment in the third study and on a pilot group of persons with dropfoot following a stroke in the fourth study. In the fourth study, the personalization framework was further advanced to incorporate user preferences, so that both objective gait-quality metrics and subjective preference inform the identification of optimal exoskeleton assistance.

    This thesis advances the application of exoskeletons for individuals post-stroke by addressing both hardware design and personalized control strategies. The findings highlight the potential of the developed ankle exoskeleton to enhance mobility in this population and underscore the importance of individualized assistance to meet diverse user needs. Together, the exoskeleton design and individualized control framework offer a valuable foundation for future research and practical implementation of assistive technologies.
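    The multi-objective optimization described above yields a set of trade-off solutions rather than a single optimum. As an illustration only, with random scores standing in for real gait metrics, the non-dominated (Pareto-optimal) assistive profiles can be extracted like this:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical candidates: each row = (objective 1, objective 2), both minimized,
    # e.g. deviation from target foot clearance and from symmetric step timing.
    scores = rng.uniform(0, 1, size=(50, 2))

    def pareto_front(points):
        """Return indices of non-dominated points (minimization in every objective)."""
        keep = []
        for i, p in enumerate(points):
            # p is dominated if some other point is <= p everywhere and < p somewhere.
            dominated = np.any(np.all(points <= p, axis=1) & np.any(points < p, axis=1))
            if not dominated:
                keep.append(i)
        return keep

    front = pareto_front(scores)
    print(f"{len(front)} non-dominated profiles out of {len(scores)}")
    ```

    Any member of `front` improves one gait metric only at the cost of the other, which is why the thesis selects among them using context and user preference.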

    Download (pdf)
    Advancing Exoskeleton Use Post Stroke: Developing and Optimizing a Soft Biplanar Ankle Exoskeleton
  • Public defence: 2025-05-26 10:00 (11:00 am Tanzania time) DMTC Building, Ardhi University, Dar es Salaam, Tanzania
    Nyanda, Frank
    KTH, School of Architecture and the Built Environment (ABE), Real Estate and Construction Management. Ardhi University (ARU), Dar Es Salaam, Tanzania.
    Construction of house price indices in Dar es Salaam: Suggestion of a practical model for Tanzania amid data constraints. 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Real estate significantly influences economic growth, with prices shaped by utility attributes and buyer willingness, making price dynamics crucial for stakeholders. In nascent real estate markets like Dar es Salaam, where data is less integrated and transactions often involve informal agents, creating accurate price indices is challenging, and methodologies may need to incorporate both formal and informal data sources, potentially with the help of machine learning techniques to improve predictions. Nevertheless, the Dar es Salaam housing market lacks indices, despite the existence of data sources, particularly formal and informal real estate agents. The main objective of this doctoral thesis is to identify the most suitable method for developing a house price index (HPI) for Dar es Salaam, Tanzania's most active real estate submarket, which shares operational characteristics with other regional submarkets in the country.

    This thesis consists of four papers, utilising a survey strategy and cross-sectional data from real estate agents. It examines the feasibility of using informal real estate agents' data to establish a house price index in Dar es Salaam, the impact of spatial dependence on the index, the impact of informal and formal agents' data sources on the index, and the use of machine learning techniques for property valuation, aiming to highlight their feasibility for house pricing.

    The findings of the study indicate that the hedonic approach, with the informal agents’ data, appears to yield a useful house price index that shows a steady but rising trend (paper I). The hedonic pricing model for Dar es Salaam may not require spatial considerations due to data limitations, suggesting that proximity factors and spatial dependence may not significantly improve the house price index (paper II). Since the resulting price trend seems to be consistent with both formal and informal real estate agents, the house price index can be constructed using data from both sources. Nevertheless, incorporating data from various agent categories improves the index, likely due to the larger sample size (paper III). Despite challenges with informal market data, machine learning techniques can effectively estimate housing worth, with some methods consistently outperforming others (paper IV).

    The study has several implications for various stakeholders. The hedonic modelling approach is effective for developing house price indices in Dar es Salaam's nascent housing market. Policies must encourage informal agents to share their property transaction data, for example by mandating the digitisation of informal transactions. Policies should also encourage standardised data formats and reporting for both formal and informal housing transactions to ensure consistency and reliability in integrating datasets into machine learning models. Data privacy regulations must ensure secure and ethical handling of sensitive information from individuals and informal agents.
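    The time-dummy hedonic approach behind the index in paper I can be sketched as follows. The attributes, coefficients, and price trend below are invented for illustration; the thesis's actual model and data differ. The index is obtained by exponentiating the estimated time-dummy coefficients of a log-price regression.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical listings: size (m^2), rooms, and quarter of sale (0..3).
    n = 400
    size = rng.uniform(40, 200, n)
    rooms = rng.integers(1, 6, n)
    quarter = rng.integers(0, 4, n)
    true_trend = np.array([0.00, 0.02, 0.05, 0.08])   # log-price drift per quarter
    log_price = (9.0 + 0.01 * size + 0.05 * rooms
                 + true_trend[quarter] + rng.normal(0, 0.05, n))

    # Time-dummy hedonic model: log P = b0 + b1*size + b2*rooms + d_q (q > 0)
    D = np.zeros((n, 3))
    for q in (1, 2, 3):
        D[quarter == q, q - 1] = 1.0
    X = np.column_stack([np.ones(n), size, rooms, D])
    beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)

    # Index: base quarter = 100, later quarters = 100 * exp(time-dummy coefficient).
    index = np.concatenate([[100.0], 100.0 * np.exp(beta[3:])])
    print(np.round(index, 1))
    ```

    The same regression run on pooled formal and informal agents' data is, in spirit, how the thesis tests whether the two sources yield a consistent price trend.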

    Download (pdf)
    summary
    Download (pdf)
    errata
  • Public defence: 2025-05-26 13:00 https://kth-se.zoom.us/j/66482272586, Stockholm
    Bragone, Federica
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Scientific Machine Learning for Forward and Inverse Problems: Physics-Informed Neural Networks and Machine Learning Algorithms with Applications to Dynamical Systems. 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Scientific Machine Learning (SciML) is a promising field that combines data-driven models with physical laws and principles. A novel example is the application of Artificial Neural Networks (ANNs) to solve Ordinary Differential Equations (ODEs) and Partial Differential Equations (PDEs). One of the most recent approaches in this area is Physics-Informed Neural Networks (PINNs), which encode the governing physical equations directly into the neural network architecture. PINNs can solve both forward and inverse problems, learning the solution to differential equations and inferring unknown parameters or even functional forms. Therefore, they are particularly effective when partially known equations or incomplete models describe real-world systems. 
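    The physics-informed idea can be illustrated on a toy problem: fit a function to sparse data while penalizing the residual of a governing equation at collocation points. A polynomial ansatz stands in for the neural network here so the combined fit reduces to linear least squares; the ODE u' = -k*u and all numbers are invented for illustration and are not from the thesis.

    ```python
    import numpy as np

    # Governing equation u'(t) = -k*u(t) with sparse "measurements" of the solution.
    k = 2.0
    t_data = np.array([0.0, 0.5, 1.0])
    u_data = np.exp(-k * t_data)            # sparse data points
    t_col = np.linspace(0, 1, 50)           # collocation points for the physics term

    deg = 6
    def basis(t):
        """Rows [1, t, ..., t^deg] of the polynomial ansatz."""
        return np.vander(t, deg + 1, increasing=True)

    def dbasis(t):
        """Derivative rows [0, 1, 2t, ..., deg*t^(deg-1)]."""
        V = np.vander(t, deg + 1, increasing=True)
        return np.hstack([np.zeros((len(t), 1)), V[:, :-1] * np.arange(1, deg + 1)])

    # Stacked least-squares system: data-mismatch rows + physics-residual rows,
    # the linear analogue of a PINN's data loss + PDE-residual loss.
    A = np.vstack([basis(t_data), dbasis(t_col) + k * basis(t_col)])
    b = np.concatenate([u_data, np.zeros(len(t_col))])
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)

    t_test = np.linspace(0, 1, 101)
    u_hat = basis(t_test) @ coef
    err = np.max(np.abs(u_hat - np.exp(-k * t_test)))
    print(f"max error vs exact solution: {err:.2e}")
    ```

    In an actual PINN the ansatz is a neural network, the residual is computed by automatic differentiation, and the combined loss is minimized by gradient descent, but the structure of the objective is the same.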

    Differential equations enable a mathematical formulation for various fundamental physical laws. ODEs and PDEs are used to model the behavior of complex and dynamical systems in many fields of science. However, many real-world problems are either too complex to solve exactly or involve equations that are not fully known. In these cases, we rely on numerical methods to approximate solutions. While these methods can be very accurate, they often are computationally expensive, especially for large, nonlinear, or high-dimensional problems. Therefore, exploring alternative approaches like SciML to find more efficient and scalable solutions is fundamental.

    This thesis presents a series of applications of SciML methods in identifying and solving real-world systems. First, we demonstrate using PINNs combined with symbolic regression to recover governing equations from sparse observational data, focusing on cellulose degradation within power transformers. PINNs are then applied to solve forward problems, specifically the 1D and 2D heat diffusion equations, which model thermal distribution in transformers. Moreover, we also develop an approach for optimal sensor placement using PINNs that improves data collection efficiency. A third case study examines how dimensionality reduction techniques, such as Principal Component Analysis (PCA), can be applied to explain and visualize high-dimensional data, where each observation comprises a large number of variables that describe physical systems. Using datasets on Cellulose Nanofibrils (CNFs) of various materials and concentrations, Machine Learning (ML) techniques are employed to characterize and interpret the system behavior. 

    The second part of this thesis focuses on improving the scalability and robustness of PINNs. We propose a pretraining strategy that optimizes the initial weights, reducing stochastic variability, to address the training instability and high computational cost that arise when solving multi-dimensional or parametric PDEs. Moreover, we introduce an extension of PINNs, referred to as $PINN, which incorporates Bayesian probability within a domain decomposition framework. This formulation enhances performance, particularly in handling noisy data and multi-scale problems.

    Download full text (pdf)
    fulltext
  • Public defence: 2025-05-27 09:00 Kollegiesalen, Stockholm
    Ericson, Ludvig
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Exploration and Prediction: Beyond-the-Frontier Autonomous Exploration in Indoor Environments. 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Autonomous exploration is a fundamental problem in robotics, where a robot must make decisions about how to navigate and map an unknown environment. While humans rely on prior experience and structural expectations to act under uncertainty, robotic systems typically operate without such priors, exploring reactively based only on what has been observed. The idea of incorporating predictions into exploration has been proposed previously, but the tools required to learn general, high-capacity models have only recently become available through advances in deep learning. This thesis addresses two tightly connected challenges: learning predictive models of indoor environments, and constructing exploration strategies that are able to benefit from such predictions.

    A core obstacle in this research area is a cyclic dependency: there is little value in developing better predictive models unless exploration methods can make effective use of them, and little value in designing such exploration methods unless reliable models exist. This dependency has historically limited progress. By breaking it, this thesis enables the study and development of both components in tandem.

    The thesis introduces deep generative models that capture structural regularities in indoor environments using autoregressive sequence modeling. These models outperform traditional approaches in predicting unseen regions beyond the robot’s current observations. However, standard exploration methods are shown to perform worse, not better, when informed by accurate predictions. To resolve this, new planning heuristics are proposed, including the distance advantage strategy, which prioritizes exploring regions that are likely to be more difficult to reach in the future. These methods allow predictive models to be used effectively, reducing path length by avoiding situations where the robot must backtrack to previously visited locations. Together, these contributions provide a foundation for autonomous exploration that is informed by learned expectations, and establish a framework where map-predictive modeling and decision-making can be studied and improved jointly.
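    The intuition of prioritizing hard-to-reach regions can be shown on a toy occupancy grid: find frontier cells (known-free cells bordering unknown space) and visit the one whose travel cost is highest first, so the robot need not backtrack for it later. This is only a sketch of the underlying intuition; the thesis's distance advantage strategy is more elaborate and uses learned map predictions.

    ```python
    from collections import deque

    # Toy occupancy grid: 0 = known free, 1 = wall, 2 = unknown.
    grid = [
        [0, 0, 0, 1, 2],
        [0, 1, 0, 1, 2],
        [0, 1, 0, 0, 2],
        [2, 1, 0, 1, 2],
    ]
    start = (0, 0)
    STEPS = ((1, 0), (-1, 0), (0, 1), (0, -1))

    def bfs_dist(grid, start):
        """Shortest path length from start to every reachable free cell."""
        rows, cols = len(grid), len(grid[0])
        dist = {start: 0}
        q = deque([start])
        while q:
            r, c = q.popleft()
            for dr, dc in STEPS:
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == 0 and (nr, nc) not in dist):
                    dist[(nr, nc)] = dist[(r, c)] + 1
                    q.append((nr, nc))
        return dist

    def frontiers(grid):
        """Known-free cells adjacent to at least one unknown cell."""
        rows, cols = len(grid), len(grid[0])
        return [(r, c) for r in range(rows) for c in range(cols)
                if grid[r][c] == 0 and any(
                    0 <= r + dr < rows and 0 <= c + dc < cols
                    and grid[r + dr][c + dc] == 2 for dr, dc in STEPS)]

    dist = bfs_dist(grid, start)
    # Visit the hardest-to-reach frontier first to avoid backtracking for it later.
    target = max(frontiers(grid), key=lambda f: dist.get(f, -1))
    print("next frontier:", target)
    ```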

    Download full text (pdf)
    summary
  • Public defence: 2025-05-27 10:00 https://kth-se.zoom.us/webinar/register/WN_nbT8YAdmQje5AfqVN9hAYw, Stockholm
    Butori, Martina
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Chemical Engineering, Applied Electrochemistry.
    Analysis of proton exchange membrane fuel cells operated at intermediate temperatures (IT: 80–120 °C). 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Fuel cells convert energy stored in hydrogen into electricity. As a zero-emission technology, they represent a sustainable alternative to conventional combustion engines, particularly for heavy-duty vehicles. Proton exchange membrane fuel cells (PEMFCs) are used in vehicles and typically operate at up to 80 °C. To facilitate the cooling of PEMFCs, slightly raising the operating temperature to the range of 80–120 °C, defined as intermediate temperature (IT), would be desirable.

    The aim of the thesis is to electrochemically analyze the impact of IT operation on commercially available PEMFC materials. Results show that increasing the temperature has multiple effects on the cell. Increased ionic conductivity, faster reaction kinetics and reduced mass transport resistance are counteracted by a negative shift of the equilibrium potential, enhanced corrosion of the carbon support, and reduced gas barrier properties. Additionally, if the humidity and the cell pressure are constant, the partial pressure of oxygen is reduced at higher temperature, which limits the cell performance. Finally, a higher temperature leads to faster degradation. The ultimate failure is attributed to the formation of pinholes in the membrane, but the polymer conductivity and the catalyst's electrochemical surface area are also negatively affected. 
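    The oxygen partial-pressure effect mentioned above is easy to quantify: at fixed cell pressure and full humidification, water vapor claims a growing share of the total pressure as temperature rises. A rough sketch, using standard Antoine constants for water and example operating conditions that are assumptions, not values from the thesis:

    ```python
    # Why IT operation reduces cathode oxygen supply at constant pressure and humidity.
    def p_sat_water(t_celsius):
        """Saturation pressure of water in bar via the Antoine equation
        (constants valid roughly 99-374 degC; adequate for this sketch)."""
        a, b, c = 8.14019, 1810.94, 244.485      # Antoine constants (mmHg, degC)
        p_mmhg = 10 ** (a - b / (c + t_celsius))
        return p_mmhg * 0.00133322               # mmHg -> bar

    def p_o2(t_celsius, p_cell_bar=2.5, rh=1.0, x_o2_dry=0.21):
        """Oxygen partial pressure in a fully humidified air stream."""
        return (p_cell_bar - rh * p_sat_water(t_celsius)) * x_o2_dry

    for t in (80, 100, 120):
        print(f"{t:>3} degC: p_O2 = {p_o2(t):.2f} bar")
    ```

    At 2.5 bar total pressure, the oxygen partial pressure falls by roughly a factor of four between 80 °C and 120 °C, consistent with the performance limitation described in the abstract.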

    Despite a scarcity of comparable data above 80 °C, the main obstacle for IT-PEMFCs is apparently the lack of stable materials. Current state-of-the-art polymers for PEMFCs are based on perfluorosulfonic acid (PFSA), whose sustainability has recently been questioned. Alternatively, fluorine-free hydrocarbon-based polymers investigated here show comparable results up to 100 °C, but cannot tolerate operation at 120 °C. More research is needed to further develop sustainable materials and to allow continuous operation of PEMFCs in the intermediate temperature range. 

    Download (pdf)
    Summary
  • Public defence: 2025-05-27 14:00 F3 (Flodis), Stockholm
    Andreasson, Annika
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Cyber situation awareness and common operational pictures: Studies of the Swedish public sector, 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Cybersecurity is one of the pillars of successful digitalization of our societies. A key component of cybersecurity is that staff involved in cybersecurity work develop situational awareness of the cyber environment and respond to events based on that understanding. Despite growing interest in situation awareness for cybersecurity, few empirical studies look at cyber situation awareness from the human actor’s perspective within organizational contexts. The purpose of this thesis is to contribute to research on improving cyber situation awareness capabilities in organizations, with a focus on the Swedish public sector.

    The thesis includes five papers concerning different aspects of cyber situation awareness. In the first paper, a census is conducted presenting a snapshot of the cybersecurity maturity of the Swedish public sector and how the public sector communicated cybersecurity risks during the COVID-19 pandemic. In the second paper, the conditions under which cybersecurity work is conducted at Swedish administrative authorities are investigated, and results from semi-structured interviews with respondents involved in cybersecurity work are presented. In the third paper, four personas, based on empirical material from the first and second papers, are created and validated. In the fourth paper, a case study on how staff members involved in handling a cyberthreat in a large, complex organization develop cyber situation awareness while handling the threat is presented. In the fifth paper, participatory video prototyping is used to explore common operational picture system support needs to aid cyber situation awareness for staff involved in handling cyberthreats.

    The thesis discusses challenges to cyber situation awareness in organizations, how cyber situation awareness can be improved, and how common operational pictures should be designed. 

    Download full text (pdf)
    Annika_Andreasson_Comprehensive_Summary
  • Public defence: 2025-05-28 10:00 Kollegiesalen, Stockholm
    Bengtsson, Ivar
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, Optimization and Systems Theory. RaySearch Laboratories.
    Mitigating uncertainties in adaptive radiation therapy by robust optimization, 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The fractionated delivery of radiation therapy leads to discrepancies between the planning image and the patient geometry throughout the treatment course. Adaptive radiation therapy (ART) addresses this issue by modifying the plan based on additional image information acquired closer to the time of delivery. However, technologies used in ART introduce new uncertainties in the treatment modeling. This thesis deals with the mitigation of uncertainties that are introduced in the context of ART workflows.

    The first two appended papers address mitigating uncertainty related to localizing the tumor and the relevant organs-at-risk (OARs). In Paper A, we consider phantom cases with isotropic, microscopic tumor infiltration around a visible tumor. We compare minimization of the expected value of the objective function to the conventional minimization of an objective function applied to a margin designed to contain the tumor with sufficient probability. The results show that the approach can improve the sparing of a nearby OAR, at the expense of increasing the total dose. In Paper B, we compare multiple formulations of the objective function under contour uncertainty, given a non-isotropic uncertainty model represented by a set of contour scenarios. At comparable tumor dose, margins derived from the scenarios outperform methods from clinical practice in terms of sparing OARs and limiting the total dose. In comparison, considering the scenarios explicitly, including minimizing the expected value of the objective function over the scenarios, spares the OARs further at the expense of total dose.

    The three subsequent papers address motion-related uncertainty, which is particularly relevant in particle treatments. In Paper C, we investigate a robust optimization method that explicitly considers the radiation delivery’s time structure. It is applied to lung cancer cases with synthesized, irregular breathing motion, and the results indicate that it outperforms the conventional method that does not consider the time structure. In Paper D, we simulate the use of a real-time adaptive framework that re-optimizes the plan during delivery, based on the observed and anticipated patient motion. It is shown to have substantial dosimetric benefits, even under simplifying approximations that would facilitate an actual real-time implementation. In Paper E, we estimate the error associated with performing dose calculations that consider motion when the temporal resolution of the time-varying patient image is low. We apply a method to synthesize intermediate images and propose a temporal resolution required to mitigate the error. Finally, in Paper F, we address some of the computational issues introduced by the robust optimization methods from the other papers. We propose methods that reduce the number of scenarios considered during robust optimization to reduce the associated computation times.

    Download full text (pdf)
    kappa
  • Public defence: 2025-05-28 13:00 https://kth-se.zoom.us/j/62265705564, Stockholm
    Biørn-Hansen, Aksel
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Wrestling with data: Investigating the role of data in challenging unsustainable practices, 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The planet is embroiled in multiple crises that threaten to destabilize the biosphere and permanently harm the possibility for a good life on this earth. To make sense of the human impact on the climate, we collect a wealth of environmental data. However, facts seem to not easily move us, leading to little practical action in terms of curbing global greenhouse gas emissions. Urgent transformational change is needed to enable a transition toward more sustainable futures.

    Two key challenges hinder this transition. The first is the challenge of translating knowledge about our impact on the climate into meaningful insight and action, with both climate change as a concept and data as a material being abstract and complex phenomena. A second obstacle is how change is understood in the paradigm of Western modernity, with rational thinking seen as the epitome of how people understand and act upon the world. In HCI, this has led to a much-critiqued focus on persuasive technologies targeting individual behaviour change. 

    It is important to stress that the measuring of our impact on the environment does provide an important indication of harm, mediating our understanding of the world. However, such data is fragmented and incomplete, capturing only a limited view of a wicked and entangled problem. As social practice theory posits, people do not operate in isolation, but are embedded within larger ecologies of stuff, meanings and skills that together constitute everyday life. 

    In this thesis, I investigate what a more social and relational engagement with environmental data can entail when the aim is to support a transition to sustainable futures. In particular, the goal of this work has been to invite multiple stakeholders into processes of collectively questioning and challenging unsustainable practices. Drawing on HCI research arguing for relational engagements with immaterial materials like carbon emissions, and employing the concept of middle-out proposed in the sustainable transitions literature, my work spans multiple design-oriented interventions studying social interaction with data at the workplace. Weaving together the different strands of this work, this thesis contributes practical knowledge for identifying where and how we can facilitate a translation of knowledge into insight and action on the climate crisis. 

    Download full text (pdf)
    fulltext
  • Public defence: 2025-06-02 10:00 F3, Stockholm
    Schmidt, Alina E. M.
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Fibre- and Polymer Technology. AIMES-Center for the Advancement of Integrated Medical and Engineering Sciences, Karolinska Institutet and KTH Royal Institute of Technology.
    Biopolymer Networks from Terrestrial and Aquatic Biomasses, 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Today’s sustainability challenges demand more than new materials - they require new ways of thinking about the resources we already have to support zero waste strategies. This thesis explores the valorization of underutilized biomasses - specifically the terrestrial crop Lupinus angustifolius (Lupin) and the marine macroalga Ulva fenestrata (Ulva) - as alternative feedstocks for bio-based materials. These two biomasses were selected for their dual functionality: both are already cultivated for food applications, yet their residual non-edible fractions remain largely unexplored. By combining structural biology, bioprocess engineering, materials science, and bioimaging, the thesis establishes a comprehensive, interdisciplinary framework for biomass characterization and conversion. Biopolymer mapping using multimodal fluorescence imaging and optotracing revealed the tissue architecture and native biopolymer distribution in Lupin residues and Ulva thalli. From Lupin, lignocellulose was extracted through mild alkaline pretreatment and defibrillated into lignin-containing microfibrillated cellulose (L-MFC). In Ulva, complex structural features, including oligo-/polyaromatic-rich layers and rhizoidal fibrillar structures, were discovered, prompting a redefinition of its tissue terminology. A decellularization-inspired approach was then developed to recover tissue scaffolds from Ulva, leveraging its naturally thin, two-cell-layered structure to remove cellular content while preserving scaffold integrity. Finally, two material design strategies were employed: a bottom-up approach for Lupin-derived L-MFC films, exploiting their nanoscale fibrillar network for structural organization, and a top-down approach for Ulva-based films, preserving the intrinsic tissue scaffold architecture. The resulting materials demonstrated structural integrity while preserving key biopolymer networks. 
Across the entire biomass-to-material workflow, multimodal fluorescence imaging combined with optotracing was integrated and adapted as a novel analytical tool, providing non-destructive, real-time and high-resolution information.

    Download (pdf)
    Summary
  • Public defence: 2025-06-02 14:00 E2, Stockholm
    Araújo De Medeiros, Daniel
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Towards Adaptive Resource Management for HPC Workloads in Cloud Environments, 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Maximizing resource efficiency is crucial when designing cloud-based systems, which are primarily built to meet specific quality-of-service requirements. Common optimization techniques include containerization, workflow orchestration, elasticity, and vertical scaling, all aimed at improving resource utilization and reducing costs. In contrast, on-premises high-performance computing systems prioritize maximum performance, typically relying on static resource allocation. While this approach offers certain advantages over cloud systems, it can be restrictive in handling the increasingly dynamic resource demands of tightly coupled HPC workloads, making adaptive resource management challenging.

    This thesis explores the execution of high-performance workloads in cloud-based environments, investigating both horizontal and vertical scaling strategies as well as the feasibility of running HPC workflows in the cloud. Additionally, we evaluate the costs of deploying these workloads in containerized environments and examine the advantages of using object storage in cloud-based HPC systems.

    Download full text (pdf)
    fulltext
  • Public defence: 2025-06-02 14:00 F3 (Flodis), Stockholm
    Hotti, Alexandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Black-Box Variational Inference: Mixture Models, Efficient Learning, and Applications, 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    We advance Black-Box Variational Inference (BBVI) by improving its flexibility, scalability, and applicability to real-world challenges. In Paper I, we demonstrate that integrating mixture-based variational distributions into VAEs—leveraging adaptive importance sampling—enhances posterior expressiveness and mitigates mode collapse in applications such as image and single-cell analysis. Paper II introduces MISVAE, along with two novel ELBO estimators—Some-to-All and Some-to-Some—which enable efficient training with hundreds of mixture components and achieve state-of-the-art performance on the MNIST and Fashion-MNIST datasets. Paper III shifts focus to real-world applications by presenting the Klarna Product Page Dataset, a diverse benchmark for web element nomination, where we achieve strong performance by benchmarking GNNs in combination with GPT-4. Additionally, the dataset has been leveraged in generative modeling tasks, facilitating the learning of latent web page representations and the generation of complex web interfaces using VAEs. Finally, Paper IV provides new smoothness results and gradient variance bounds for BBVI under non-linear scale parameterizations, highlighting advantages in large-data regimes. Collectively, these contributions extend the frontiers of BBVI for tackling high-dimensional, structured data in both theory and practice.
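
    As background for the ELBO estimators mentioned above, the standard evidence lower bound that BBVI maximizes can be stated as follows (a textbook identity; the notation is ours, not the thesis's):

    ```latex
    % Evidence lower bound (ELBO) for a variational distribution q(z):
    \log p(x) \;\ge\; \mathrm{ELBO}(q)
      \;=\; \mathbb{E}_{q(z)}\!\left[\log p(x, z) - \log q(z)\right]
      \;=\; \log p(x) - \mathrm{KL}\!\left(q(z)\,\middle\|\,p(z \mid x)\right).
    ```

    Equality holds exactly when q matches the posterior; when q is a mixture with many components, evaluating log q(z) inside the expectation is the costly step, which is the scaling problem that estimators such as Some-to-All and Some-to-Some target.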

    Download full text (pdf)
    Black-Box Variational Inference: Mixture Models, Efficient Learning, and Applications
  • Public defence: 2025-06-03 13:15 Kollegiesalen, Stockholm
    De Carvalho, Francisco Coelho
    KTH, School of Industrial Engineering and Management (ITM), Learning, Learning in Stem.
    Student Experience in Higher Education: A Study on How Institutional Environments Affect Student Learning and Development in the Mozambican Context, 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The effects of institutional environments within universities on students’ learning and development can manifest in various ways. However, there are aspects of university environments that can be double-edged: they can facilitate smooth development and promote quality learning, or they may contribute to low motivation and interest in study and thereby increase the tendency for students to drop out and be dissatisfied with their overall experience at university. Knowing how these elements interact with each other and the contradictory effects they may engender can provide valuable insights into higher education policy and ways to control and reduce the negative effects of institutional practices and behaviours on student learning. Learning in higher education is the result of the influence of a wide range of factors; however, this thesis focuses on environmental factors, that is, social and academic contexts. In particular, it focuses on students’ perspectives on various aspects that influence their experience, and on their perceptions of interactions with teachers outside the classroom, participation in class discussion, and interactions in the classroom. All these aspects can be even more challenging for first-year students, whose development and learning are associated with low engagement with effective educational practices, coupled with difficulties in managing the academic requirements of being a university student. The indirect effect of academic development on student learning, through a change towards a teaching approach that promotes student learning, may constitute a synergy in a context where ‘good teaching’ is not well understood and promoted; indeed, the results show that teachers in the Mozambican context are battling their own environmental conditions to be effective university teachers. 

    Download full text (pdf)
    Kappa
  • Public defence: 2025-06-03 14:00 F3, Stockholm
    Jal, Aryaman
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Algebra, Combinatorics and Topology.
    Enumerative and matroidal aspects of rook placements, 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Simply construed, combinatorics entails the counting and classifying of finite objects. Such objects vary from permutations and graphs, to posets and matroids; they have in common the idea of a device that represents discrete data. Enumerative combinatorics deals with the exact or asymptotic enumeration of this data. Rook theory is the enumerative study of non-attacking rook placements on a board and matroid theory is the combinatorial abstraction of the notion of linear independence in linear algebra. In this dissertation, we will describe a new connection between rook theory and matroid theory, and study some consequences of this connection. In a different direction, we will build new tools to study the combinatorial features of the set of all points equidistant from two fixed points, where the notion of distance comes from a polytope. Across this work, the following three perspectives are employed to understand combinatorial data: (a) discrete and convex geometric objects like polyhedra and polyhedral complexes; (b) bijective methods; and (c) the geometry of polynomials.

    This thesis is divided into two parts. The first part consists of Paper A on the geometric combinatorial theory of bisectors. A polyhedral norm is a notion of distance arising from a centrally symmetric polytope; such norms have found application in modelling problems in areas including algebraic statistics, topological data analysis, robotics, and crystallography. The bisector associated to any polyhedral norm is a polyhedral complex whose maximal cells are labeled by pairs of facets of the polytope. We identify precisely which maximal cells are present, and in doing so, systematize the study of the bisection fan of a polytope, a tool that captures fundamental combinatorial information about the structure of the bisector. We focus on four fundamental notions of distance: polygonal norms, the || · ||1 and || · ||∞ norms, and the discrete Wasserstein norm. In each of these cases, we give an explicit combinatorial description of the bisection fan, thereby extending work of Criado, Joswig, and Santos (2022). 

    The second part of this thesis, spanned by Papers B, C, and D, is concerned with novel enumerative and matroidal properties of rook placements. In particular, in Paper B, we introduce and define a new matroid whose bases are the set of non-nesting rook placements on a skew Ferrers board; this establishes the first known bridge between rook theory and matroid theory. The structure of rook matroids is interesting: they form a subclass of both transversal matroids and positroids. In this regard, and through the skew shape association, rook matroids bear a striking resemblance to lattice path matroids. We explore this connection in depth by (a) proving a precise criterion for when a rook matroid and a lattice path matroid are isomorphic; (b) proving that despite the failure of isomorphism in general, rook matroids and lattice path matroids have the same Tutte polynomial; and (c) proving that every lattice path matroid is a certain contraction of a rook matroid, thereby obtaining a new perspective on results of de Mier–Bonin–Noy (2004) and Oh (2011). 

    We then apply this matroid structure to two enumerative problems that bear no relation to one another at first glance. The first is determining the precise distributional property satisfied by the generating polynomial of the set of non-nesting rook placements on a skew shape. This question is motivated by the famous Heilmann–Lieb theorem on the real-rootedness of the matching polynomial. In contrast to the case of unrestricted rook placements, the polynomial in question satisfies ultra-log-concavity, but not real-rootedness. The second enumerative problem concerns the log-concavity consequence of the Neggers–Stanley conjecture. The (P, ω)-Eulerian polynomial, the descent-generating polynomial of the set of (inverses of) linear extensions of a labeled poset (P, ω), was introduced by Stanley (1972) in his PhD thesis and was conjectured to be real-rooted, first by Neggers (1978) and later by Stanley (1986) himself. It was eventually disproven, in one formulation by Brändén (2005) and in another by Stembridge (2007). The natural follow-up question, also a conjecture of Brenti (1989), is whether the (P, ω)-Eulerian polynomial is log-concave in general. We provide an affirmative answer to this conjecture in the case when (P, ω) is a naturally labeled poset of width two, by combining two ideas: a classical bijection known in the theory of distributive lattices, and the Brändén–Huh theory of Lorentzian polynomials. 
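
    For context, the distributional properties referred to above have standard definitions (notation ours, not the thesis's): for a polynomial f(x) = Σ a_k x^k with nonnegative coefficients,

    ```latex
    % Log-concavity and ultra-log-concavity of the coefficient sequence (a_k), 1 <= k <= n-1:
    \text{log-concave:}\qquad a_k^2 \;\ge\; a_{k-1}\,a_{k+1},
    \qquad
    \text{ultra-log-concave:}\qquad
      \left(\frac{a_k}{\binom{n}{k}}\right)^{\!2} \;\ge\;
      \frac{a_{k-1}}{\binom{n}{k-1}}\cdot\frac{a_{k+1}}{\binom{n}{k+1}}.
    ```

    By Newton's inequalities, real-rootedness implies ultra-log-concavity, which in turn implies log-concavity; the result quoted above thus places the non-nesting rook polynomial at the ultra-log-concave level without it being real-rooted.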

    In Paper D, the positroidal structure of non-nesting rook placements is explored in greater depth. Some consequences include a new proof of the positroid structure and an inequality description of the base polytope of the rook matroid using only the combinatorics of the underlying skew shape. Answering a question of Lam (2024), we characterize rook matroids from the positroidal point of view. 

    In Paper C, we establish the real-rootedness of the generating polynomial of complete rook placements on Ferrers boards, enumerated by the number of ascents. This set of rook placements is interesting in connection with the natural poset-theoretic structure that it carries: the lower interval of 312-avoiding permutations in the Bruhat order. This polynomial also represents yet another generalization of the classical Eulerian polynomials; it is similar to but distinct from the generalization introduced by Savage and Visontai (2015). 

    Download full text (pdf)
    Kappa
  • Public defence: 2025-06-04 10:00 F3, https://kth-se.zoom.us/webinar/register/WN_YMH0Xz3TQtOn8TOpODOfeQ, Stockholm
    Nikolić, Nikola
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Chemical Engineering, Applied Electrochemistry.
    Water and gas transport across the membrane in AEMFC and PEMFC, 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Efficient operation and durability of polymer electrolyte fuel cells (PEFCs) depend strongly on thermodynamic conditions and materials selection. This thesis investigates water transport, gas permeability and corrosion of carbon catalyst support under realistic operating conditions in proton exchange membrane fuel cells (PEMFCs) and anion exchange membrane fuel cells (AEMFCs).

    Experiments were conducted using real-time humidity sensors, mass spectrometry, and electrochemical methods, supported with numerical modelling in COMSOL Multiphysics.

    Interest in PEMFCs, which operate at low temperatures (LT, <80 °C), has shifted towards intermediate temperatures (IT, 80–120 °C) for the possibility of reducing the size of cooling systems. Although water accumulation was minimized at IT, increased pressure led to the opposite effect, with performance limitations attributed to reduced oxygen partial pressure (Paper I). Hydrogen crossover was found to increase with temperature, pressure and RH, and a convective component was observed under asymmetric pressure conditions (Papers IV and V).

    In AEMFCs, asymmetric inlet humidification and dynamic gas flows were effective in managing hydration. Material properties such as ionomer water uptake and GDL hydrophobicity had a minor impact on water transport but considerably affected peak power output (Papers II and III). AEMs exhibited lower gas crossover than PEMs under all conditions, particularly the reinforced second-generation membranes (Paper V).

    Carbon corrosion in PEMFCs was shown to accelerate significantly with both temperature and humidity, becoming the dominant reaction above 1.1 V and almost constant at the most extreme conditions tested (120 °C and 70 % RH, Paper VI).

    These findings provide new insights into optimization strategies and limitations for PEFC systems, highlighting the importance of thermodynamic control and robust materials in achieving long-term, high-performance operation.

    Download (pdf)
    Summary
  • Public defence: 2025-06-05 09:00 Sal C1 / https://kth-se.zoom.us/j/65497631196, Södertälje
    Kalaiarasan, Ravi
    KTH, School of Industrial Engineering and Management (ITM), Production engineering, Advanced Maintenance and Production Logistics. Scania.
    Visibility in Manufacturing Supply Chains: Conceptualisation, Realisation and Implications, 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Supply chain visibility (SCV) is considered key for enhancing both operational and strategic performance, as well as for supporting manufacturing companies in addressing upcoming regulations. Still, most companies and organisations experience low levels of SCV. The inherently complex nature of supply chains presents challenges for the practical realisation of SCV. Hence, there is a need to develop a holistic and comprehensive understanding of the factors affecting the realisation of SCV.

    This thesis, based on three research questions, focuses on building knowledge on supply chain visibility to improve supply chain performance and how technologies for real-time data can support these improvements.

    The findings are based on a systematic literature review and four empirical studies. The systematic literature review identified and categorised factors affecting SCV into four areas: antecedents, barriers and challenges, drivers, and effects. The empirical studies, which included a Delphi study and multiple case studies, explored viewpoints, critical information and data, and Internet of Things (IoT) based technologies for the real-time visibility of incoming goods. In addition, the studies explored perspectives on SCV in extended supply chains.

    Overall, the findings show that SCV can enhance supply chain performance in areas such as decision-making, risk management and operations and that it demands cross-functional collaboration and technological integration. The results also indicate that manufacturing companies are likely to face challenges related to trust, interorganisational collaboration, data quality and supply chain complexities in the effort to fully benefit from SCV. Thus, an implication of this thesis is that SCV is important yet difficult to implement.

    The primary contribution of this thesis is the proposal of a framework that offers a comprehensive, multidimensional view of SCV. This framework can be utilised by supply chain practitioners to map and evaluate dominant influences in supply chains. Furthermore, the framework can be adopted by manufacturing companies to establish priorities, foster collaborations and implement technologies to enhance SCV in their supply chains, thus supporting the transformation towards intelligent supply chains.

    Download full text (pdf)
    Thesis
  • Public defence: 2025-06-05 09:00 D3, Stockholm
    Milinanni, Federica
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Probability, Mathematical Physics and Statistics.
    Large Deviation Analysis of Markov Chain Monte Carlo Methods and Algorithmic Advances for Uncertainty Quantification in Neuroscience, 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The main topic of this thesis is Markov chain Monte Carlo (MCMC) methods, one of the most widely used classes of sampling algorithms. Sampling from a target probability distribution is a fundamental problem in many applied fields, including computational biology, computational physics, statistical mechanics, machine learning, ecology and epidemiology. In Bayesian uncertainty quantification, sampling algorithms are used to sample from posterior distributions.

    The five papers included in this thesis relate to MCMC methods in three ways: implementation, algorithmic advances, and convergence analysis based on the theory of large deviations. Despite their distinct focuses, all three directions share the common goal of advancing sampling algorithms.

    Paper A introduces the R package UQSA in which we implemented some MCMC methods. One of the main goals with UQSA is to offer a user-friendly tool for researchers from applied sciences, such as neuroscience, to solve uncertainty quantification problems. We apply the methods in UQSA to models describing networks of biochemical reactions that take place inside nerve cells.

    In Paper B, we consider a specific MCMC algorithm: the simplified manifold Metropolis-adjusted Langevin algorithm (SMMALA). When used to perform Bayesian uncertainty quantification on ordinary differential equation (ODE) models, SMMALA requires the computation of the sensitivity matrix of the ODE system for thousands or even millions of iterations. While many algorithms already exist for computing ODE sensitivities (e.g., forward sensitivity), they are too costly for SMMALA. In Paper B, we propose a method based on the Peano-Baker series from control theory to approximate the ODE sensitivities, which is computationally cheaper than pre-existing methods and sufficiently accurate for sampling.

    The MCMC literature offers a wide variety of algorithms to sample from a target probability distribution. Probability distributions encountered in applications are often complex and high-dimensional, and a blind use of off-the-shelf sampling algorithms on these target distributions often becomes computationally prohibitive. There is therefore a great need for tools to analyze and design algorithms that successfully sample from a given target in reasonable computational time. In recent years there has been growing interest in using the theory of large deviations, one of the cornerstones of modern probability theory, to describe the rate of convergence of MCMC methods.

    In Papers C, D and E, we use the large deviations approach to study the convergence of algorithms of Metropolis-Hastings (MH) type, one of the most popular classes of MCMC methods. In Paper C, we establish a large deviation principle for the empirical measures of Markov chains generated via the MH algorithm for sampling from target distributions on continuous subsets of Euclidean space. In Paper D, we apply this result to specific choices of MH algorithms and derive large deviation principles for two popular MCMC methods: the Independent Metropolis-Hastings algorithm and the Metropolis-adjusted Langevin algorithm. Moreover, we prove that the current assumptions in the large deviation principle from Paper C do not apply to the Random Walk Metropolis algorithm. In Paper E, we further develop the large deviation principle from Paper C, and we illustrate how this theoretical result can be used to tune algorithms of MH type to obtain faster convergence.
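
    For readers unfamiliar with the Metropolis-Hastings scheme that Papers C–E analyze, a minimal textbook sketch follows (random-walk proposals on a one-dimensional Gaussian target; this is the generic algorithm, not the specific variants or assumptions studied in the thesis):

    ```python
    import math
    import random

    def metropolis_hastings(log_target, x0, n_steps, step=1.0, seed=0):
        """Random-walk Metropolis-Hastings.

        Proposes x' = x + step * N(0, 1) and accepts with probability
        min(1, pi(x') / pi(x)); only an unnormalized log density is needed.
        """
        rng = random.Random(seed)
        x = x0
        chain = []
        for _ in range(n_steps):
            proposal = x + step * rng.gauss(0.0, 1.0)
            # Accept/reject in the log domain for numerical stability.
            if math.log(rng.random()) < log_target(proposal) - log_target(x):
                x = proposal
            chain.append(x)
        return chain

    # Target: standard normal, via its unnormalized log density -x^2/2.
    chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
    mean = sum(chain) / len(chain)
    ```

    The empirical measure of `chain` is exactly the object whose convergence the large deviation principles above quantify; tuning `step` trades off acceptance rate against mixing speed.
    
    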

    Download (pdf)
    Summary Federica Milinanni
  • Public defence: 2025-06-05 09:15 FD5, Stockholm
    Hein, Dennis
    KTH, School of Engineering Sciences (SCI), Physics, Particle Physics, Astrophysics and Medical Imaging.
    Deep learning approaches for denoising, artifact correction, and radiology report generation in CT and chest X-ray imaging2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Medical imaging is a cornerstone of modern healthcare delivery, providing essential insights for effective diagnosis and treatment planning. Among the myriad imaging modalities, computed tomography (CT) and chest X-rays stand out for their widespread clinical use, with approximately 400 million CT and 1.4 billion chest X-ray examinations performed globally each year. Recent advancements in detector technology have given rise to photon-counting CT, which promises improved spatial and energy resolution along with enhanced low-dose imaging capabilities. However, elevated image noise and ring artifacts, stemming from higher spatial and energy resolution and inconsistencies in detector elements, pose significant hurdles, degrading image quality and complicating the diagnostic process. Beyond CT imaging, the volume of chest X-ray examinations continues to grow, placing increasing pressure on radiology departments that are already stretched thin. Moreover, advanced and innovative techniques in CT lead to a steady increase in the number of images that radiologists are required to read, further exacerbating workloads. To address these challenges, this thesis leverages generative artificial intelligence methods throughout the medical imaging value chain. For photon-counting CT imaging, this thesis addresses inverse problems using diffusion and Poisson flow models. Syn2Real synthesizes realistic ring artifacts to efficiently generate training data for deep learning-based artifact correction. For image denoising, the thesis introduces methods that capitalize on the robustness of PFGM++ in supervised and unsupervised versions of posterior sampling Poisson flow generative models, culminating in Poisson flow consistency models: a novel family of deep generative models that combines the robustness of PFGM++ with the efficient single-step sampling and flexibility of consistency models.
Moreover, this thesis works towards addressing the global shortage of radiologists by improving medical vision-language models through CheXalign: a novel framework that leverages publicly available datasets, containing paired chest X-rays and radiology reports written in a clinical setting, together with reference-based metrics to generate high-quality preference data. This in turn enables the application of direct alignment algorithms that increase the probability of good reports while decreasing the probability of bad ones, improving the overall results. Partial automation of chest X-ray radiology report generation, in which language models are used to draft initial reports, holds great promise for more efficient workflows, reducing burn-out, and allowing radiologists to allocate more time to more advanced imaging studies, such as photon-counting CT.
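    The "increase good, decrease bad" behaviour of direct alignment methods can be illustrated with a DPO-style pairwise loss. Whether CheXalign uses exactly this objective is not stated in the abstract, and all log-probabilities below are invented:

```python
import math

# DPO-style pairwise loss on one (preferred, rejected) report pair.
# Inputs are log-probabilities of each report under the policy being
# tuned and under a frozen reference model.
def dpo_loss(logp_good, logp_bad, ref_good, ref_bad, beta=0.1):
    margin = beta * ((logp_good - ref_good) - (logp_bad - ref_bad))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid

# If the policy already prefers the good report more than the
# reference does, the loss is below log(2); otherwise above it.
print(dpo_loss(-10.0, -12.0, -11.0, -11.0))  # < log(2)
print(dpo_loss(-12.0, -10.0, -11.0, -11.0))  # > log(2)
```

    Minimizing this loss over many preference pairs pushes probability mass from rejected toward preferred reports, which is the mechanism the abstract describes.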

    Download full text (pdf)
    fulltext
  • Public defence: 2025-06-05 09:30 https://kth-se.zoom.us/j/61562773806, Stockholm
    Rencelj Ling, Engla
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering.
    Cyber Security Threat Modeling of Power Grid Substation Automation Systems2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The substation is a vital part of the power grid and serves to aid in the distribution of electricity by, for example, transforming from high to low voltage. It is essential to protect the substation, as a loss of electricity would cause severe consequences for our society. The Substation Automation System (SAS) allows for remote management and automation of substations but also creates possibilities for cybersecurity threats. In this thesis, efforts towards using threat modeling to assess the cybersecurity of SAS are presented. Threat modeling entails creating a model of the system that shows the possible cybersecurity threats against it. To reach this goal, previously used information sources for threat modeling in the power systems domain are identified. The thesis also includes the creation of a Time-To-Compromise (TTC) estimate for cyber attacks against Industrial Control Systems. By estimating the TTC, it is possible to prioritize which attacks to defend against. One method of creating threat models is to use threat modeling languages in which the assets, associations, attacks, and defenses have been defined. In this thesis, a threat modeling language for creating threat models of SAS is presented. The threat models in this thesis are used to create attack graphs that show the possible paths an attacker could take through the system. The work of this thesis also includes the evaluation of threat modeling languages that have been created or used. As a result, accurate assessments of the cybersecurity of SAS can be made, helping in the efforts to keep them secure against cyber attacks.
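    The idea of combining TTC estimates with attack graphs can be sketched as a shortest-path computation over attack steps; the graph, asset names, and TTC values below are invented for illustration and do not come from the thesis:

```python
import heapq

# A toy attack graph: edges are attack steps weighted by an assumed
# Time-To-Compromise (TTC) estimate in days (values are invented).
attack_graph = {
    "internet":      [("vpn_gateway", 5.0), ("office_laptop", 2.0)],
    "office_laptop": [("scada_server", 7.0)],
    "vpn_gateway":   [("scada_server", 3.0)],
    "scada_server":  [("breaker_ied", 4.0)],
    "breaker_ied":   [],
}

def min_ttc(graph, start, target):
    """Dijkstra: cheapest cumulative TTC from entry point to target."""
    dist, heap = {start: 0.0}, [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue
        for nbr, w in graph[node]:
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return float("inf")

print(min_ttc(attack_graph, "internet", "breaker_ied"))  # 12.0
```

    The attack path with the smallest cumulative TTC is the one a defender would prioritize hardening, which is the prioritization role the TTC estimate plays in the thesis.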

    Download full text (pdf)
    Summary
  • Public defence: 2025-06-05 10:00 Q2, Stockholm
    Staffas, Theodor
    KTH, School of Engineering Sciences (SCI), Applied Physics, Quantum and Nanostructure Physics. KTH.
    Single-Photon LIDAR Exploiting Pulsed, Continuous, and Chaotic Illumination2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Light Detection and Ranging (LIDAR) is a key enabling technology in our modern society, with widespread applications across multiple industries ranging from heavy industry such as manufacturing and mining to telecommunication and autonomous vehicles. LIDAR has also developed into a critical and widespread tool for scientific research, impacting a broad range of fields such as archaeology and atmospheric sensing, to name but two.

    At its core, LIDAR consists of three fundamental components: a source, an optical system, and a detector. The source generates the optical probe signal, the optical system directs the probe toward the target and collects the returned signal, and the detector records the reflected signal for further analysis to determine the target's distance (other variables, such as velocity or reflectivity, may also be extracted depending on the LIDAR system). This dissertation primarily focuses on the source and detector, exploring different methodologies and applications of LIDAR, particularly in the context of single-photon detectors.

    This thesis begins with an overview of the historical development of LIDAR and its natural progression towards incorporating single-photon-sensitive detectors to enhance performance. The operating principles of the most commonly used single-photon detectors are examined and compared, with a detailed discussion on how their performance characteristics and practical constraints influence LIDAR system functionality and design.

    Following this, three fundamentally different types of light sources used for different LIDAR methods—pulsed, continuous, and chaotic—are explored. The operating principles of each method are detailed, including how distance information is encoded within each type of probe and the corresponding analysis required for its extraction. Additionally, the advantages and limitations of these LIDAR methods are discussed.
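    The way distance information is encoded can be sketched for two of these probes: a pulsed system times a single echo, while a chaotic (noise-like) probe recovers the round-trip delay from a cross-correlation peak. The sample rate, the delay, and the noise-free echo below are simplifying assumptions:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Pulsed time-of-flight: distance = c * round-trip delay / 2.
def tof_distance(delay_s):
    return C * delay_s / 2.0

# Chaotic/noise-like ranging instead recovers the delay from the
# peak of the cross-correlation between probe and echo.
rng = np.random.default_rng(1)
fs = 1e9                         # 1 GS/s sampling (assumed)
probe = rng.standard_normal(4096)
true_lag = 300                   # samples
echo = np.roll(probe, true_lag)  # ideal, noise-free echo

corr = np.correlate(echo, probe, mode="full")
lag = corr.argmax() - (len(probe) - 1)
print(tof_distance(lag / fs))    # ~45 m for a 300 ns round trip
```

    The sharp autocorrelation of a chaotic probe is what makes the correlation peak unambiguous, which is the property such sources trade against the simplicity of pulsed timing.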

    Download full text (pdf)
    fulltext
  • Public defence: 2025-06-05 10:00 Kollegiesalen, Stockholm
    König, Hans-Henrik
    KTH, School of Industrial Engineering and Management (ITM), Materials Science and Engineering, Structures.
    Real-time tracking of additive manufacturing with high-energy X-ray techniques2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Additive manufacturing (AM) of metals offers unique design freedom and the ability to tailor the microstructure and properties of components. However, the complex thermal histories and rapid solidification occurring during AM introduce significant challenges in microstructure control and process optimisation. To address these challenges, this work employs real-time synchrotron techniques to elucidate the rapid phenomena that occur during AM. Synchrotron techniques, including synchrotron X-ray diffraction (XRD) and synchrotron radiography, are powerful tools for investigating various AM-related phenomena such as heat source-matter interaction, melt pool behaviour, solidification, and phase transformations in real time. High resolution temporal and spatial synchrotron data enable the correlation of these phenomena with AM processing parameters, thereby advancing the understanding of the AM process and its underlying mechanisms. These insights can be instrumental in process optimisation, alloy design, and the development of computational models.

    The contribution of this work to the field of real-time studies in AM is structured into two parts. First, the design and implementation of an electron beam powder bed fusion (PBF-EB) sample environment for real-time synchrotron studies are detailed in Chapter 3. Second, real-time studies of solidification and phase transformations during AM are presented in Chapter 4.

    The first part of this work focuses on the design and implementation of a sample environment for real-time synchrotron studies of the PBF-EB process. The sample environment facilitates the investigation of the previously listed AM phenomena during PBF-EB at high process temperatures and under vacuum. Furthermore, it enables the characterisation of phenomena specific to PBF-EB, such as the smoke phenomenon. The design and capabilities of the device for PBF-EB processing and real-time synchrotron measurements are detailed based on collected data.

    In the second part, solidification and phase transformations during AM are studied using real-time synchrotron observations in combination with thermodynamic and kinetic modelling.

    The change in solidification mode of a hot work tool steel is investigated under PBF-LB processing conditions. In this study, the change from primary austenite to primary δ-ferrite is observed with increasing cooling rate. The observations are correlated with predictions from a solidification model. Furthermore, the developed PBF-EB sample environment is employed to study the solidification behaviour of the same material under a wide range of PBF-EB conditions with lower cooling rates compared to the PBF-LB conditions. The observed phase transformation behaviour is linked to thermodynamic and kinetic modelling, highlighting the importance of process-induced compositional variations.

    In addition, the martensite start temperature (Ms) in iron and an iron-carbon alloy is investigated under PBF-LB conditions using high-speed XRD at 20 kHz. The observed phase transformations are correlated with thermal simulation results, demonstrating the cooling rate and composition dependence of the Ms temperature in real time. Understanding martensite transformation in low-alloyed compositions during PBF processing can facilitate the development of recycling-friendly materials for AM.
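    For context, the fraction of martensite formed on cooling below Ms is classically described by the Koistinen-Marburger relation; the rate constant and the Ms value below are generic textbook-style numbers, not results from this work:

```python
import math

# Koistinen-Marburger relation: martensite fraction below Ms,
#   f(T) = 1 - exp(-alpha * (Ms - T)),
# with alpha ~ 0.011 /K often quoted for steels (assumed here).
def martensite_fraction(T, Ms, alpha=0.011):
    return 0.0 if T >= Ms else 1.0 - math.exp(-alpha * (Ms - T))

# Illustrative Ms of 700 K (not a value from the thesis):
print(martensite_fraction(500.0, Ms=700.0))  # ~0.89
```

    Because the transformed fraction depends only on the undercooling below Ms, shifting Ms with composition or cooling rate, as observed in the experiments, directly shifts the final martensite content.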

    This thesis focuses on real-time studies of metal AM, employing synchrotron techniques and linking the results to modelling. The findings demonstrate that in-situ and operando synchrotron studies, combined with computational models accounting for thermal conditions and compositional variations, are effective tools for process and alloy development for AM. In particular, the versatility of the developed PBF-EB sample environment can facilitate future studies on a variety of AM-related phenomena.

    Download full text (pdf)
    kappa
  • Public defence: 2025-06-05 10:00 F3, Stockholm
    Jerlhagen, Åsa
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Fibre- and Polymer Technology.
    Assembly of Colloidal Nanoparticles and Cellulose Nanofibrils: Nanoscopic Structures Control Bulk Properties2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The need for sustainable, high-performance materials is growing rapidly as society moves away from fossil-based resources. This thesis explores how materials derived from renewable sources—cellulose nanofibrils (CNFs) extracted from wood—can be combined with synthetic polymer nanoparticles to create functional, sustainable materials with tunable properties.

    Polymeric nanoparticles were synthesized using polymerization-induced self-assembly (PISA), which allows precise control over particle features such as size and surface chemistry. The nanoparticles were combined with CNFs to create hybrid materials. The thesis investigates how the size, charge, and amount of nanoparticles influence the structure, mechanical behavior, and deformation mechanisms of CNF-based materials.

    Advanced characterization techniques such as small- and wide-angle X-ray scattering were used to understand how nanoparticles impact material structures to gain new insights into structure-property relationships and deformation mechanisms. The results show that by carefully tuning the interactions between components, it is possible to design new bio-based materials with tailored mechanical properties.

    This work contributes to the broader effort of developing environmentally friendly alternatives to conventional plastics and composites, offering insights into how nanostructure and surface chemistry can be used to control material performance. 

  • Public defence: 2025-06-05 10:00 Sal E3 / https://kth-se.zoom.us/s/68521587948, Stockholm
    Tavera Guerrero, Carlos Alberto
    KTH, School of Industrial Engineering and Management (ITM), Energy Technology, Heat and Power Technology.
    Leading Edge Erosion Influence on the Aeroelastic Response in a Transonic Compressor Cascade: A Numerical and Experimental Approach2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Current trends to enhance aeroengine efficiency rely on more challenging working conditions with lighter, slender, and highly loaded blades. This high power-to-weight ratio can make the blades of the front stages more prone to aeromechanical instabilities such as flutter. While the key factors that affect flutter onset are well established in the literature, the effect of leading edge erosion mechanisms is sparsely reported.

    An oscillating transonic linear cascade has been conceptualized and developed for validation at KTH Royal Institute of Technology. In this test rig, an assessment of the effect of the leading edge erosion mechanism on the aeroelastic response is performed. The analyzed operating points are representative of a transonic axial compressor at part speed where a shock-induced separation mechanism is present. The aeroelastic measurements are performed at the first natural bending mode. The presented thesis comprises three key aspects: the aeroelastic response of a smooth reference case, the identification of limitations in roughness wall modeling, and the aeroelastic response under leading edge erosion mechanisms. For the latter, the blades have been subjected to an increase in roughness at the leading edge for the rough case, and the leading edge has been eroded and roughened for the eroded case.

    The results indicate that, for the smooth case, the numerical models tend to overpredict the aeroelastic response downstream of the shock-induced separation compared to the experimental data. Surface roughness wall modeling showed limitations when separated regions exist in fully rough wall regimes. When erosion mechanisms are introduced, the numerical results predict a trend opposite to the experimental observations. The experimental data from the eroded case showed a local increase in the unsteady pressure amplitude while the phase remained unchanged.

    Download full text (pdf)
    Carlos Alberto Tavera Guerrero_kappa
  • Public defence: 2025-06-05 13:00 D3, Lindstedtsvägen 9, 3rd floor, KTH Campus, Stockholm
    Berg Mårtensson, Hampus
    KTH, School of Architecture and the Built Environment (ABE), Sustainable development, Environmental science and Engineering.
    Where are we heading, and where do we want to go?: Exploring transport system futures, climate targets and new mobility services2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis is written at a time when a considerable gap exists between the current trajectory of the transport system and developments in line with societal goals, such as those set to reduce greenhouse gas emissions. Responding to this challenge, the thesis explores what futures aligned with societal goals can entail. Transport system scenarios fulfilling climate goals are developed and analysed, examining the potential of electrification, biofuels, and vehicle efficiency. The thesis also pays special attention to New Mobility Services (NMS), which enable mobility through vehicle sharing and ridesharing. In addition to developing scenarios where NMS are included, a review explores other studies in which target-fulfilling scenarios have been developed, to understand possible future roles of the services. The thesis also investigates potential contributions from mobility and accessibility services to reducing transport volumes, facilitating a modal shift from car use to alternative transport modes, and enhancing the environmental performance of cars when in use. Potential tensions associated with NMS are also explored, which can be barriers to successful implementation at scale.

    The thesis contributes to the understanding of what future transport systems achieving climate goals could entail. The results show that a combination of technological advancements and behavioural changes will be necessary. Significant contributions to emissions reductions are expected from, for example, electrification, biofuels, and improvements in vehicle energy efficiency. Despite this, there is still a need to reduce transport volumes from cars, air travel, and freight via road networks to an extent that represents a major break in trends. This is partly due to the insufficient maturity of technological alternatives and the long turnover time of the passenger car fleet, which limits the progress of electrification. It is also due to the fact that biobased raw materials for fuel production are limited resources from both territorial and global perspectives. Moreover, even vehicles with low direct emissions result in significant emissions from lifecycle stages such as the production of vehicles and fuels. It is demonstrated that large vehicle fleets may be challenging to reconcile with climate ambitions concretised through the European emissions trading system.

    Another contribution of this thesis is an improved understanding of potential futures for NMS. A potential role for NMS in achieving societal targets such as greenhouse gas emissions reductions is identified. In the developed scenarios, car-sharing plays an important role in ensuring car access when the number of cars decreases. Mechanisms are also identified through which several services can contribute to avoiding transport, shifting from car travel, and improving environmental performance when cars are used. However, negatively contributing mechanisms are also identified, and a challenge arises from the fact that the mechanisms are intertwined in complex relationships and embedded in contexts that have implications for the final impact. In addition, 168 identified tensions, presented within a set of ten themes, demonstrate challenges to the feasibility of future sustainable transport systems in which NMS play a significant role, challenges which need to be navigated. The tensions involve public entities, private organisations, as well as citizens/users. It is concluded that there is a need to connect individual mechanisms and tensions to cumulative system-level impacts and to maintain a systems perspective when investigating the effects of the services.

    Download (pdf)
    Cover Essay
  • Public defence: 2025-06-05 15:00 Q2, Stockholm
    Marta, Daniel
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Towards safe, aligned, and efficient reinforcement learning from human feedback2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Reinforcement learning policies are becoming increasingly prevalent in robotics and human-AI interaction due to their effectiveness in tackling complex and challenging domains. Many of these policies, also referred to as AI agents, are trained using human feedback through techniques collectively known as Reinforcement Learning from Human Feedback (RLHF). This thesis addresses three key challenges (safety, alignment, and efficiency) that arise when deploying these policies in real-world applications involving actual human users. To this end, it proposes several novel methods. Ensuring the safety of human-robot interaction is a fundamental requirement for deploying such policies. While most prior research has explored safety within discrete state and action spaces, we investigate novel approaches for synthesizing safety shields from human feedback, enabling safer policy execution in various challenging settings, including continuous state and action spaces, such as social navigation. To align policies with human feedback, contemporary works predominantly rely on single-reward settings. However, we argue for the necessity of a multi-objective paradigm, as most human goals cannot be captured by a single scalar reward function. Moreover, most robotic tasks have baseline predefined goals related to task success, such as reaching a navigation waypoint. Accordingly, we first introduce a method to align policies with multiple objectives using pairwise preferences. Additionally, we propose a novel multi-modal approach that leverages zero-shot reasoning with large language models alongside pairwise preferences to adapt multi-objective goals for these policies. The final challenge addressed in this thesis is improving the sample efficiency and reusability of these policies, which is crucial when adapting policies based on real human feedback.
Since requesting human feedback is both costly and burdensome—potentially degrading the quality of human-agent interactions—we propose two distinct methods to mitigate these issues. First, to enhance the efficiency of RLHF, we introduce an active learning method that combines unsupervised learning techniques with uncertainty estimation to prioritize the most informative queries for human feedback. Second, to improve the reusability of reward functions derived from human feedback and reduce the need for redundant queries in similar tasks, we investigate low-rank adaptation techniques for adapting pre-trained reward functions to new tasks.
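    One common way to realize such uncertainty-driven query selection is via ensemble disagreement over predicted preference probabilities; the sketch below uses random numbers as stand-ins for reward-model outputs and is not the thesis's actual method:

```python
import numpy as np

# Query selection for preference-based RLHF: rank candidate trajectory
# pairs by the disagreement of an ensemble of learned reward models and
# ask the human only about the most uncertain pairs.
rng = np.random.default_rng(0)
n_pairs, n_models = 100, 5

# pref_prob[i, m]: model m's predicted probability that, in pair i,
# the first trajectory is preferred (synthetic stand-in values).
pref_prob = rng.uniform(0.0, 1.0, size=(n_pairs, n_models))

disagreement = pref_prob.std(axis=1)         # spread across the ensemble
top_k = np.argsort(disagreement)[::-1][:10]  # 10 most informative pairs
print(top_k)
```

    Labeling only the high-disagreement pairs concentrates the costly human feedback where the learned reward is least certain.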

    Download full text (pdf)
    kappa
  • Public defence: 2025-06-09 09:00 Sal F3, Stockholm
    Rangavittal, Bharath Vasudev
    KTH, School of Industrial Engineering and Management (ITM), Materials Science and Engineering, Process.
    Advanced Techniques for Process Optimization in Iron and Steelmaking: Modeling and Monitoring Innovations2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The iron and steel industries are well-known contributors to global CO2 emissions, accounting for nearly 7% of the total, and are therefore in urgent need of technological advances to improve the efficiency of their current processes and support their transition to more sustainable practices. To address some of the needs, this study introduces advanced techniques aiming at two key objectives: first, pioneering computationally efficient gas-solid flow models for ironmaking blast furnaces to accelerate prediction of their interior state and enhance process understanding beyond current approaches; and second, advancing the functionality and practical application of infrared (IR)-based systems to optimize slag and process control during steelmaking processes.

    An important step in modeling of the blast furnace interior state is the simulation of coupled gas-solid flow. Existing numerical-based models employing Discrete Element Method (DEM) and Computational Fluid Dynamics (CFD) approaches are limited in their application due to high computational demand, arising from the complexity of the blast furnace process. Alternatively, asymptotic methods were employed to simplify the Euler-Euler formulation for modeling gas-solid flow in an ironmaking blast furnace to yield essentially the same results as numerical methods, but at a much-reduced computational cost. As an initial step towards full-scale modelling, a one-dimensional (1D) gas-solid flow model was analysed in this work − first under steady-state conditions, and later for a transient case, which better represents the dynamic nature of the blast furnace process. A preliminary analysis of the nondimensionalized one-dimensional equations under steady-state conditions revealed the need for boundary layers near both the gas and solid inlets to ensure accurate flow predictions. Notably, the boundary layer near the gas inlet was significantly wider than that near the solid inlet. The method of matched asymptotic expansions was then used to derive asymptotic solutions in each layer, in addition to the leading-order solution in the bulk of the domain. A key finding from the 1D steady-state model is the strong influence of solid viscosity on the behavior of the solution, resulting in cases where the solution can be unique, non-unique or non-existent. Insights from the 1D steady-state model were instrumental in enhancing a previously developed 2D axisymmetric steady-state model and in laying the groundwork for an asymptotic framework for the subsequent transient model presented in this work. The transient model incorporated time-varying boundary condition at the solid inlet to mimic the blast furnace charging practice. 
The analysis of the transient formulation using asymptotic methods indicated the same boundary layer structure as in the steady-state case. The transient solution exhibited a quasi-steady state behaviour and simply alternated between two independent steady-state profiles, corresponding to the time-dependent boundary condition. The solid viscosity continued to influence the solution even in the transient model. Overall, the asymptotically reduced 1D models yielded results that showed good agreement with results from numerical simulations, providing the basis for a computationally efficient approach towards modeling the blast furnace process with increasing complexities. Further modifications in the transient Euler-Euler model formulation are recommended with a particular focus on its potential to capture the layered burden structure that is an integral feature of the blast furnace interior state.
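    The structure of a matched asymptotic expansion (an outer solution plus a thin boundary-layer correction near an inlet) can be illustrated on a standard linear toy problem; this is not the thesis's gas-solid system, just the same mathematical mechanism:

```python
import math

# Singularly perturbed toy problem:
#   eps * y'' + y' = 1,  y(0) = y(1) = 0,
# with a boundary layer of width O(eps) at x = 0.
def exact(x, eps):
    a = -1.0 / (1.0 - math.exp(-1.0 / eps))
    return x + a - a * math.exp(-x / eps)

def composite(x, eps):
    # outer solution (x - 1) plus boundary-layer term at x = 0
    return (x - 1.0) + math.exp(-x / eps)

eps = 0.01
err = max(abs(exact(x / 100, eps) - composite(x / 100, eps))
          for x in range(101))
print(err)  # exponentially small in 1/eps
```

    The composite solution reproduces the exact one at a fraction of the cost of resolving the layer numerically, which is the kind of saving the asymptotically reduced blast furnace models aim for.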

    The latter part of this thesis addresses the limitations of currently existing IR-based systems in their functionality concerning slag composition assessment and in their applicability to slag and process control in small- and medium-scale industries. The potential of IR-based systems for slag composition assessment was explored by experimentally determining slag emissivities within the wavelength range of 7.5-14 μm at high temperatures, both in industry and in the laboratory. Slag emissivities in the range from 0.64 to 0.68, as evaluated at metal-making temperatures in the industrial trials, were likely affected by apparent surface inhomogeneities in slag composition and temperature that are difficult to control in industrial environments, emphasizing the need for laboratory experiments under well-controlled conditions. Consequently, lab-scale experiments were conducted to determine the emissivity of three slag samples from the quaternary CaO-Al2O3-SiO2-MgO system, representative of ladle slag, at 1773 K. The calculated emissivities from the lab-scale trials ranged from 0.75 to 0.87 at the target temperature, with the best repeatability observed for the slag that was completely molten during the investigations. In contrast, the other samples exhibited variations in their emissivities, likely due to the presence of solid phases at the target temperature. The solid phases introduced compositional inhomogeneities on the surface, rendering the surface unstable and leading to the observed emissivity variations. The findings from the lab-scale investigations therefore stressed the need for fully liquid slag samples, ensuring a stable surface with uniform composition and temperature for accurate emissivity determination.
While the lab-scale experiments highlighted a discernible effect of slag composition on emissivity, further research is needed, including investigations of other typical slag compositions at different temperatures, to study the potential of IR-based systems for online slag composition assessment. Moreover, the final phase, involving the enhancement of the applicability of IR-based systems, was addressed via the development of an Operator Vision Assistance System (OVIAS), designed with key components to enhance operators' vision through real-time visualization. Industrial testing of a prototype of OVIAS, aimed at optimal imaging of the molten steel surface in an induction furnace and a ladle, demonstrated the potential of OVIAS in empowering operators with enhanced visualization and better decisions in optimizing slag control practices, such as slag coagulant additions, and process control operations, such as de-slagging. Industrial calibration of OVIAS, although conducted in the study, encountered challenges and needs further investigation using a laboratory-scale furnace. Overall, OVIAS presents a cost-efficient, flexible alternative to current expensive IR camera systems in supporting small- and medium-scale industries with improved slag and process control practices during steelmaking.
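    The emissivity determination described above amounts to dividing a measured radiance by the Planck blackbody radiance at the known temperature. In the sketch below the wavelength, the "measured" signal, and the emissivity are illustrative assumptions; only the 1773 K target temperature comes from the text:

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, T):
    """Blackbody spectral radiance B(lambda, T), W / (m^2 sr m)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = H * C / (wavelength_m * KB * T)
    return a / (math.exp(b) - 1.0)

# An IR camera reading is modeled as L = emissivity * B(lambda, T);
# inverting this relation gives the emissivity.
lam, T = 10e-6, 1773.0           # 10 um band, 1773 K target
true_eps = 0.8                   # assumed, for illustration
measured = true_eps * planck_radiance(lam, T)
print(measured / planck_radiance(lam, T))  # recovers 0.8
```

    The difficulty reported in the study is that surface inhomogeneities make the effective temperature and composition in this ratio uncertain, not the inversion itself.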

    Download full text (pdf)
    fulltext
  • Public defence: 2025-06-09 10:00 F2, Stockholm
    Chrysanthidis, Nikolaos
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Neurocomputational mechanisms of memory – Hebbian plasticity across short and long timescales2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The mammalian brain is a complex structure, capable of processing sensory stimuli through the lens of prior knowledge and experiences to guide behavior and decision-making. This process relies on intricate neural dynamics, synaptic plasticity mechanisms, and interactions across brain networks. Despite the brain's remarkable ability to store information, even seemingly stable memories can be modified by new experiences or forgotten over time.

    In this work, we use computational modelling to investigate the mechanisms underlying memory functionality, focusing on how the brain supports short- and long-term memory processes. Our research sheds light on episodic, semantic, and working memory phenomena by employing cortical memory models that integrate Bayesian-Hebbian synaptic plasticity across a range of short and long timescales together with short-term non-Hebbian mechanisms. Inspired by behavioral memory tasks and experimental evidence, we explore processes such as memory semantization, whereby associated episodic memories are gradually decoupled, allowing for the extraction of abstract semantic meaning. We also investigate and propose hypothetical underlying neurocomputational mechanisms of verbal omissions (memory forgetting) in odor naming tasks. Additionally, we examine the interplay between episodic memory and recency effects in immediate recall. Expanding our framework to working memory, we investigate how different plasticity mechanisms interact to enable both stability and flexibility in memory maintenance.

    By bridging computational models with cognitive neuroscience, this research provides new insights into the neural and synaptic basis of memory processes.

    Download full text (pdf)
    kappa
  • Public defence: 2025-06-09 10:00 https://kth-se.zoom.us/s/68395855098, Stockholm
    Mostafavi, Seyed Samie
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Predictability, Prediction, and Control of Latency in 5G and Beyond: From Theoretical to Data-Driven Approaches2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The explosive growth of mobile communication and the proliferation of real-time applications, such as industrial automation and extended reality (XR), have created unprecedented demands for ultra-reliable low-latency communication (URLLC) in wireless networks. For example, in industrial closed-loop control systems, data must be transmitted within a target delay of at most a few milliseconds; violations can lead to costly failures and, therefore, must occur with probabilities below 0.0001 (or reliability above 0.9999). This dissertation addresses the critical challenge of end-to-end latency prediction and control in these dynamic and stochastic environments, bridging the gap between the inherent randomness of wireless communication and the deterministic performance guarantees required by time-sensitive applications.

    In this thesis, we adopt a twofold approach, combining rigorous theoretical analysis with practical, data-driven methodologies. First, we introduce a framework for analyzing predictability that quantifies the inherent limits of latency forecasting in communication networks. Through analysis of Markovian systems, including single-hop and multi-hop queues, exact expressions and spectral-based upper bounds for predictability are derived, revealing the crucial influence of network topology, state transitions, and observation defects. Building on this foundation, we developed and implemented data-driven techniques for probabilistic delay prediction. A key contribution is a tail-optimized prediction method that integrates Extreme Value Theory (EVT) within a mixture density network framework, significantly enhancing the accuracy of predicting the rare, high-latency events critical for URLLC. To demonstrate the practical utility of these predictions, “Delta,” a novel active queue management scheme, is introduced. Delta integrates real-time delay violation probability predictions into packet-dropping decisions, dynamically adapting to delay variations and significantly reducing delay violations. To validate these approaches, the ExPECA testbed and EDAF framework were developed, enabling fine-grained delay measurement and decomposition in real 5G systems. Extensive experiments on both commercial off-the-shelf 5G and software-defined radio-based Open Air Interface platforms confirmed the superior accuracy and efficiency of the proposed EVT-enhanced models.

    Furthermore, temporal prediction models, leveraging LSTM and Transformer architectures, were developed and shown to achieve higher accuracy compared to the baseline approaches in real 5G network experiments, capturing the time-varying dynamics of wireless networks and providing accurate multi-step forecasts. This dissertation advances latency prediction and control for wireless networks, offering both theoretical foundations and practical solutions for time-sensitive applications. These findings have significant implications for designing and operating next-generation wireless networks, paving the way for more dependable communication. Future work should focus on integrating these prediction models to optimize the network and on extending the framework to encompass broader quality-of-service metrics and emerging wireless technologies.

    Download full text (pdf)
    Kappa
  • Public defence: 2025-06-09 13:00 Kollegiesalen, Brinellvägen 8, KTH Campus, Stockholm
    Molin, Jonas
    KTH, School of Architecture and the Built Environment (ABE), Real Estate and Construction Management, Real Estate Business and Financial Systems.
    Business streamlining: a theory of service sourcing2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Purpose:

    The overarching purpose of this thesis is to develop an integrated conceptual model of service sourcing.

    Design/methodology/approach: The four articles of the thesis are divided into a generative phase and a validation phase. The generative phase refers to a longitudinal case study approach in which Glaserian grounded theory analysis is applied in the development of business streamlining (the BS model), a substantive theory of service sourcing (article I). Article IV suggests and reflects on a structured methodological strategy for dealing with large amounts of qualitative data. The validation phase refers to the quantitative validation of the BS model, in which confirmatory factor analysis (article II) and mediating process analysis (article III) are applied.

    Findings: In the generative phase, business streamlining was found to be the core process of service sourcing. BS denotes the process by which four interrelated dimensions are managed to make service sourcing processes simpler, more effective, and/or more productive. The validation phase confirmed the BS model to be significant overall, meaning that it is likely to be applicable irrespective of whether the service sourcing context is simple or complex. In addition, the category of efficiency pursuing (EP) was found to have an interlinking role that called for a revision of the BS model (Article II).

    Article III delves deeper into the relative role of the subcategories of EP, explaining the logic of EP’s role in BS.

    Research limitations/implications: Although the BS model has been developed and tested through the use of various methods, further research could delve deeper into the roles of the subcategories of the main categories. Thus, further studies should examine the relative significance of categories in different service-sourcing contexts. Follow-up surveys could also add a valuable longitudinal element to the cross-sectional survey.

    Practical implications: The thesis shows that the BS model is flexible and adaptable to a wide range of service sourcing circumstances. Irrespective of the relative complexity of facility management (FM) sourcing processes, the categories can be dynamically adapted to fit different service sourcing contexts. Thus, the BS model is likely to apply to a wide range of services and can be used as a tool to analyse and facilitate strategic decision-making.

    Originality/value: Business streamlining integrates the fragmented field of service sourcing in a novel way.

    Download (pdf)
    Kappa
    Download (pdf)
    Disputationsbeslut
  • Public defence: 2025-06-09 14:00 https://kth-se.zoom.us/j/62914864473, Stockholm
    Gärtner, Joel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Algebra, Combinatorics and Topology.
    Lattice-Based Post-Quantum Cryptography and Quantum Algorithms2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The focus of this thesis is the threat that quantum computers pose to asymmetric cryptography. This threat is considered through analysis and development of both quantum algorithms and post-quantum cryptosystems. Lattices are used in both of these areas of this thesis; lattice-based analysis is used for the quantum algorithms and the cryptosystems are lattice-based.

    Arguably the most important building block for lattice-based cryptography is the Learning With Errors (LWE) problem. This problem was introduced by Regev in 2005, together with a quantum reduction from standard lattice problems. In this thesis, Regev’s reduction is analyzed in detail, allowing for the first parametrization of a cryptosystem whose concrete security is actually based upon this reduction.

    Another important problem used for lattice-based cryptography is the NTRU problem, which was introduced several years before the LWE problem. In this thesis, the NTWE problem is introduced as a natural combination of the NTRU and LWE problems with NTWE-based cryptosystems having certain benefits over comparable NTRU and LWE-based systems.

    The quantum algorithms considered in this thesis are variants of Regev’s recently introduced quantum factoring algorithm. When attacking factoring-based cryptography, Regev’s algorithm has certain asymptotic advantages over previous quantum algorithms. As a part of this thesis, variants of Regev’s algorithm for solving other cryptographically relevant problems are introduced. Additionally, by analyzing the lattice-based classical post-processing of the algorithm, it is argued that the algorithm can be made robust to quantum errors.

    Although Regev’s new algorithm, and the variations thereof, have an asymptotic advantage over previous quantum algorithms, an advantage for the concrete instances used in cryptography would arguably be more interesting. This motivates comparing the concrete efficiency of variants of Regev’s algorithm to that of previous quantum algorithms. Such a comparison is part of this thesis and, based on it, previously available algorithms still appear to be the best choice for quantum attacks against traditional cryptography.

    The final contribution of this thesis is a new lattice-based digital signature scheme. Similar signature schemes have been considered before, such as with the recently standardized ML-DSA. However, compared to similar signature schemes, the new scheme is significantly more compact. This is in large part thanks to developing a new technique for constructing signatures, but also to some extent from being based on the NTWE problem instead of a variant of the LWE problem.

    Download full text (pdf)
    sammanfattning
  • Public defence: 2025-06-10 09:00 https://kth-se.zoom.us/j/63745671489, Stockholm
    Sewlia, Mayank
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Path Planning and Control for Multi-Manipulator Systems under Spatio-Temporal Constraints2025Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Cooperative manipulation, where multiple robotic manipulators coordinate to transport an object, presents significant challenges due to the physical coupling between manipulators, the complexity of motion planning in constrained environments, and the need for precise control without full system knowledge. These challenges are further exacerbated when tasks impose not only spatial constraints but also strict timing requirements, as specified through signal temporal logic. Under such complex requirements, cooperative manipulation remains a relatively underexplored area. These shortcomings motivate the present thesis. Broadly, by blending tools from control theory, formal methods, and motion planning, this thesis aims to deepen our understanding of how to accomplish such complex tasks within the cooperative manipulation setting. Specifically, we develop controllers and planning algorithms that enable the specification, planning, and execution of cooperative robotic manipulation under such tasks.

    We approach this problem in three parts. In the first part, we develop control algorithms that take task specifications in the form of signal temporal logic and encode them through funnel-based formulations. Our main results include performing cooperative manipulation under signal temporal logic tasks with both shared task knowledge among all agents and task knowledge limited to a single agent. To achieve this, we design a prescribed performance control scheme that ensures the boundedness of errors within the funnels, thereby enforcing task satisfaction. We present both experimental and simulation validations of the proposed algorithms.

    In the second part, we shift focus from designing controllers to developing planning algorithms for satisfying signal temporal logic specifications. This shift provides additional flexibility, allowing us to broaden the class of specifications considered. Our main results include the development of a cooperative sampling-based planning algorithm for two autonomous agents, which is then extended to multiple agents through a distributed optimisation approach. In both approaches, we perform sampling in both spatial and temporal domains to search for trajectories that satisfy signal temporal logic formulas. We present several examples to demonstrate the performance of the algorithms, along with experimental validation and proofs of probabilistic completeness.

    In the final part of the thesis, we address cooperative manipulation through constrained environments. We integrate the control and planning components through a trajectory optimisation framework that continues to address signal temporal logic tasks under environmental constraints. The main contributions include decoupling the planning of mobile bases from that of the manipulator arms, and developing an inverse kinematics algorithm combined with a control design strategy to track the resulting joint-space trajectories while avoiding obstacles. We present simulation results to validate the effectiveness of the proposed approach.

    Download full text (pdf)
    DoctoralThesis
  • Public defence: 2025-06-10 09:00 FA31, Stockholm
    Chen, Liang
    KTH, School of Engineering Sciences (SCI), Physics, Nuclear Science and Engineering.
    Experimental study and simulation of metallic melt infiltration into porous media2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Modeling corium melt infiltration into porous debris beds is crucial for predicting and mitigating severe accidents in nuclear power plants. A comprehensive understanding of melt infiltration requires both experimental and numerical approaches. 

    Experimentally, two key studies are conducted: One quantifies the wettability of various surfaces by metallic melt, while the other examines one-dimensional melt infiltration dynamics in porous media composed of corresponding materials (MRSPOD-1D). The results indicate that wettability significantly influences infiltration dynamics, with wettable surfaces facilitating initial infiltration and non-wettable surfaces impeding it. 

    Numerically, a multiscale modeling framework is employed, spanning from an interface-resolved (pore-scale) method to a space-averaged (macroscopic) approach. The numerical study of this thesis first focuses on developing and validating a pore-scale numerical model that simulates molten metal relocation using interface tracking methods. The model integrates the Level-Set (LS) method to track the metal-gas interface and the enthalpy-porosity approach to account for phase changes. Validation is performed using REMCOD-E8 and REMCOD-E9 experimental data.

    Building on the experimental and pore-scale findings, a macroscopic model is developed by coupling the LS method with the Brinkman equations. The macroscopic model is validated against MRSPOD-1D and REMCOD experiments and further assessed through comparisons with pore-scale simulations. 

    The multiscale modeling approach reveals the complex interplay among particle surface wettability, pore size, the melt pressure head, melt superheat, and particulate bed temperature on the dynamics of the melt infiltration: (1) enhanced surface wettability consistently promotes melt infiltration and heat transfer across all Bond numbers, though it can also trigger early solidification, particularly at low Bond numbers; (2) melt infiltration becomes more sensitive to the wettability as pore size decreases, occurring in non-wettable media only when melt pressure overcomes capillary resistance, while this sensitivity diminishes as pore size increases; (3) at high Bond numbers, infiltration rates strongly depend on the initial melt pressure head, which drives faster infiltration until the melt layer atop the bed is depleted; (4) higher initial particulate bed temperatures and melt superheat enhance infiltration, whereas lower temperatures may cause solidification arrest, indicating that additional heat sources in reactor-relevant scenarios could promote remelting and facilitate deeper infiltration; (5) pore-scale simulations more accurately capture infiltration dynamics when solidification occurs, whereas both pore-scale and macroscopic models yield comparable results in high-temperature cases without solidification. 

    This thesis advances the understanding of melt infiltration mechanisms and provides validated tools for severe accident modeling, which are critical for enhancing severe accident management strategies. 

    Download full text (pdf)
    Kappa_Liang Chen
  • Public defence: 2025-06-10 10:00 F3, 114 28 Stockholm
    Ikram Ul Haq, Omer
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electric Power and Energy Systems.
    Generalized Harmonic Injection Strategy for Dynamic Pole Reconfiguration of a Multiphase Induction Machine2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The rapid evolution of electrification across industries demands electric machines that combine high efficiency, adaptability, and a large operating range. Traditional induction machines (IMs), constrained by fixed winding configurations and static operating characteristics, struggle to meet these dynamic requirements over the wider operational range demanded by the application. This thesis addresses these limitations by pioneering dynamic pole reconfiguration of multiphase IMs, leveraging control frameworks and modeling techniques to unlock flexibility and performance.

    Central to this thesis is the vector space decomposition (VSD) mathematical framework, which decomposes the electrical variables of machines into orthogonal vector spaces, allowing for the separation of space harmonics. These independent vector spaces enable the dynamic control of magnetic pole pairs through magnetic pole pair transition (MPT) theory. This capability allows a single machine to emulate a “virtual gearbox,” transforming its torque-speed profile from one pole pair configuration to another in real time without requiring physical winding reconfiguration. For instance, a 9-phase multiphase IM can transition from a 1-pole pair configuration for high-speed operation to a 3-pole pair configuration for high-torque demands, expanding the torque-speed operational range to suit diverse applications.

    A critical contribution of this work is its robust approach to parameter identification. Traditional methods rely on time-consuming finite element analysis (FEA) and static laboratory tests. The thesis introduces a methodology for translating equivalent circuit parameters of the multiphase IM in a known pole pair winding configuration to any target pole pair winding configuration. Additionally, the research addresses practical challenges such as converter non-linearities, proposing a converter parameter identification and compensation algorithm that reduces voltage drop errors, ensuring reliable control under practical operating conditions.

    One of the cornerstones of this thesis is generalized harmonic injection (GHI), a groundbreaking control strategy developed in this work. GHI optimizes torque density by strategically injecting harmonic currents into multiple subspaces while synchronizing their stator frequencies to mitigate the adverse effects of inter-plane cross-coupling (IPXC), which could otherwise cause beat-frequency oscillations resulting in large torque ripple. This enables loss reduction by minimizing the stator current for any given operating point of the multiphase IM. Furthermore, smooth reference frame transition (SRFT) extends GHI to achieve ripple-free pole pair transition (RFPT). The synchronization strategy proposed in this thesis suppresses these beat-frequency oscillations and torque ripples, thereby improving the performance of the multiphase IM during pole pair transitions. Experimental validation on a 9-phase test bench demonstrated the efficacy of GHI, with results showing a significant reduction in measured torque ripples.

    The findings of this research have far-reaching effects in various industries. In electric mobility, RFPT enables vehicles to seamlessly switch between high torque urban driving and high-efficiency highway cruising, thereby improving the vehicle’s energy efficiency. Renewable energy systems, such as wind turbines, leverage adaptive pole pair numbers to optimize power generation across fluctuating wind speeds. As industries worldwide transition to greener technologies, the methodologies and insights presented here can serve as a cornerstone for the electric machines of tomorrow.

    Download full text (pdf)
    Kappa
  • Public defence: 2025-06-10 14:00 F3,Lindstedtsvägen 26 & 28, floor 2, KTH Campus, Stockholm
    Tütüncüoğlu, Feridun
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering.
    Joint Optimization of Pricing and Resource Allocation in Serverless Edge Computing: A Game-Theoretic Perspective2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The rapid advancement of Internet of Things (IoT), Augmented Reality (AR), autonomous systems, and intelligent automation is transforming daily life and revolutionizing industrial processes. These technologies demand significant computational resources while also imposing stringent latency requirements. A common approach to meet computational resource demand is leveraging Cloud Computing (CC), which offers scalable processing capabilities through centralized data centers. Nonetheless, this centralized approach often fails to meet stringent latency requirements due to communication delays caused by the geographical distance between cloud servers and end users. This limitation has led to the emergence of the novel paradigm of Edge Computing (EC), which addresses the latency issue by placing compute units closer to end users.

    EC server clusters are expected to be smaller in scale and more geographically dispersed compared to CC data centers. This introduces new challenges, such as limited computational and storage capacity, making efficient resource allocation crucial. Additionally, the network operator managing edge infrastructure must maintain its financial sustainability under different workload characteristics and application requirements of users, necessitating joint and adaptable resource management and pricing strategies. Function-as-a-Service (FaaS) offers a promising approach in this regard, as its pay-as-you-go pricing model allows users to pay only for the resources they consume, while also enabling dynamic resource management by shifting the entire responsibility of application deployment to the operator. However, this flexibility also makes the choice of computation, memory, and bandwidth resources price-dependent, further complicating resource management and pricing.

    The papers included in this thesis are organized into three parts, each addressing distinct challenges related to pricing, resource allocation, and system dynamics in EC. In the first part of the thesis, we first consider a setting where Wireless Devices (WDs) minimize energy and the monetary costs of computing tasks, while the operator maximizes revenue by optimizing pricing and application caching under memory constraints. We consider a dynamic setting where the operator has no prior knowledge of the varying availability of WDs over time. We model this interaction as a Stackelberg Game (SG) and demonstrate the existence of an equilibrium. To address information asymmetry, we use Bayesian optimization to learn pricing strategies, establish an upper bound on its asymptotic regret, and propose a greedy approximation algorithm for application caching. We then investigate the joint optimization of compute, communication, and memory resources in a static network setting, where WDs minimize the costs of executing the tasks of their applications, including monetary and energy expenses. We model this interaction as a SG, show the existence of an equilibrium, and prove that computing an equilibrium is NP-hard. We propose an efficient approximation algorithm with a bounded approximation ratio. An interesting feature of our solution is that the operator's revenue is maximized when the WDs maximize their energy savings through computation offloading. Furthermore, we investigate the rate-adaptation problem, where WDs adjust their offloading rates based on available compute resources and pricing. We model the interaction as a SG and propose a Stackelberg gradient play algorithm that computes the operator’s implicit revenue function with respect to the rate selection of the WDs.

    The second part of this thesis explores a dynamic network and pricing setting where WDs arrive at the edge cell according to a non-homogeneous stochastic process, and the operator sets prices based on the availability of WDs and their heterogeneous workload characteristics. We formulate the problem of maximizing the revenue of the operator as a sequential decision-making problem under uncertainty, where the operator's price can be piecewise linear or non-linear and could vary over time. In a Markovian steady-state setting, we derive analytical results for the optimal pricing strategy, which also serve as a heuristic for the general case. To address the general case, we introduce a Generalized Hidden Parameter Markov Decision Process and propose a dual Bayesian neural network approximator that approximates the state transitions and the revenue to accelerate the learning of the optimal pricing policy. This approach enables pre-training on synthetic traces while adapting quickly to unseen workload patterns.

    The third part addresses computational challenges by examining the impact of server contention on both operator revenue and application latency constraints. To address this, we propose a contention model validated through experiments across applications with varying compute demands, including L1/L2/L3 caches, I/O, and memory bus usage. We develop a novel model-based Bayesian optimization algorithm to maximize operator revenue while ensuring that latency and resource capacity constraints are met.

    The algorithmic contributions of this thesis in pricing and resource management are intended to provide efficient, deployable, and scalable solutions that strengthen the robustness and efficiency of resource allocation and pricing in EC.

    Download (pdf)
    summary
  • Public defence: 2025-06-10 14:30 https://kth-se.zoom.us/j/68663108750, Stockholm
    Weng, Zehang
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems.
    Approach-constrained Grasp Synthesis and Interactive Perception for Rigid and Deformable Objects2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis introduces methods for two robotic tasks: grasp synthesis and deformable object manipulation. These tasks are connected by interactive perception, where robots actively manipulate objects to improve sensory feedback and task performance. Achieving a collision-free, successful grasp is essential for subsequent interaction, while effective manipulation of deformable objects broadens real-world applications. For robotic grasp synthesis, we address the challenge of approach-constrained grasping. We introduce two methods: GoNet and CAPGrasp. GoNet learns a grasp sampler that generates grasp poses with approach directions that lie in a selected discretized bin. In contrast, CAPGrasp enables sampling in a continuous space without requiring explicit approach direction annotations in the learning phase, improving the grasp success rate and providing more flexibility for imposing approach constraints. For robotic deformable object manipulation, we focus on manipulating deformable bags with handles, a common daily human activity. We first propose a method that captures scene dynamics and predicts future states in environments containing both rigid spheres and a deformable bag. Our approach employs an object-centric graph representation and an encoder-decoder framework to forecast future graph states. Additionally, we integrate an active camera into the system, explicitly considering the regularity and structure of motion to couple the camera with the manipulator for effective exploration.

    To address the common data scarcity issue in both domains, we also develop simulation environments and propose annotated datasets for extensive benchmarking. Experimental results on both simulated and real-world platforms demonstrate the effectiveness of our methods compared to established baselines.

    Download full text (pdf)
    fulltext
  • Public defence: 2025-06-11 09:00 Kollegiesalen
    Eshghie, Mojtaba
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Securing Smart Contracts Against Business Logic Flaws2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Blockchain smart contracts empower decentralized applications (DApps) by enabling trustless, transparent, and immutable transactions. However, this immutability means that flaws in design or implementation lead to irreversible and costly security breaches. Therefore, rigorous verification and analysis of smart contracts before deployment is crucial. Furthermore, since not all vulnerabilities are detectable pre-deployment, monitoring after deployment is equally critical.

    One particularly challenging yet understudied class of vulnerabilities in smart contracts is business logic flaws (BLFs), which occur when the implemented logic deviates subtly but critically from the intended behavior. BLFs are challenging to detect because they often require a deep understanding of the intended business rules rather than merely code-level analysis. This class of vulnerabilities has largely been overlooked in the existing literature. This thesis introduces a comprehensive framework to address vulnerabilities across the entire DApp security lifecycle, from specification and pre-deployment analysis to post-deployment monitoring, with an emphasis on detecting, preventing, and mitigating BLFs.

    At its core, our work formalizes the high-level business logic of contracts using Dynamic Condition Response (DCR) graphs. We categorize design patterns into low-level platform/implementation-specific patterns and high-level business logic design patterns. We formalize the high-level patterns, such as time-based constraints and action dependencies, into precise graphical formal models. These models provide developers with a clear visual blueprint of intended contract workflows. Formalized contracts not only facilitate automated verification and analysis but also lay the groundwork for our "model to mitigate" approach. In this methodology, smart contract functions are mapped to DCR activities, and the contract logic is modeled using DCR relations to expose design flaws by making implicit assumptions about the code explicit.

    Complementing our design-phase contributions, we propose an off-chain monitoring tool that observes on-chain transactions. By mapping each transaction against the DCR-specified logic, this tool detects deviations from intended behavior without instrumenting the deployed contract. Furthermore, our method effectively detects complex cross-chain attacks and violations of the specified behavior to secure the increasingly popular cross-chain DApps.

    We address the lack of concrete exploit scenarios for business logic vulnerabilities by using Large Language Models (LLMs) to inject realistic BLFs into contracts and synthesize corresponding exploits. Our approach uses DCR-based formal specifications as guidance, ensuring that synthesized vulnerabilities are both representative and realistic.

    Recognizing the need for rigorous invariants to enforce security, we develop a semantic invariant differencing tool that, given two smart contract require/assert statements, produces a verdict on which one is stronger. Our experiments demonstrate its effectiveness in filtering out redundant or weak invariants, enhancing the usefulness of dynamically mined invariants. In parallel, acknowledging the pivotal role of blockchain oracle security, we propose an oracle data lifecycle model spanning from the creation of data to its deprecation. We systematically identify vulnerabilities at each stage of this model and survey mitigation strategies.
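    The underlying notion of "stronger" can be sketched as logical implication: invariant A is stronger than B if A implies B but not vice versa. A real tool would decide this with an SMT solver; the sketch below instead brute-forces a small bounded integer domain, which is sound only over that domain. The example invariants over a `balance` variable are illustrative, not from the thesis.

```python
# Hedged sketch: comparing the strength of two invariants by checking
# implication over a finite domain (a stand-in for an SMT query).
from itertools import product

def implies(p, q, variables, domain=range(-16, 17)):
    """True if p(env) -> q(env) for every assignment over the domain."""
    return all(
        (not p(env)) or q(env)
        for values in product(domain, repeat=len(variables))
        for env in [dict(zip(variables, values))]
    )

def stronger(inv_a, inv_b, variables):
    a_to_b = implies(inv_a, inv_b, variables)
    b_to_a = implies(inv_b, inv_a, variables)
    if a_to_b and b_to_a:
        return "equivalent"
    if a_to_b:
        return "A"      # A admits fewer states, so A is the stronger invariant
    if b_to_a:
        return "B"
    return "incomparable"

# E.g. require(balance >= 10) is stronger than require(balance >= 0):
inv_a = lambda env: env["balance"] >= 10
inv_b = lambda env: env["balance"] >= 0
verdict = stronger(inv_a, inv_b, ["balance"])   # "A"
```

Filtering a set of dynamically mined invariants then amounts to discarding any invariant for which a strictly stronger one is already present.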

    Collectively, these contributions provide an end-to-end approach to smart contract security by bridging formal design and analysis, dynamic monitoring, and invariant refinement to build more resilient decentralized applications.

    Download full text (pdf)
    Securing Smart Contracts Against Business Logic Flaws
  • Public defence: 2025-06-11 09:00 https://kth-se.zoom.us/j/65431729410, Stockholm
    Aknesil, Can
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Electronics and Embedded systems.
    Protecting Remote FPGAs and Embedded Devices from Non-Invasive Physical Attacks2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The remote computing market has been growing rapidly for more than ten years, and this growth is expected to continue in the future. A considerable portion of this market is held by data centers, which today offer a diverse range of acceleration technologies for cloud computing, from massive multiprocessing to field-programmable gate arrays (FPGAs). Another significant portion is held by embedded devices in specialized mechanical and electronic systems, from motor vehicles to home security systems. Valuable intellectual property and sensitive information are being deployed in the cloud and on embedded devices, and require strong protection. Alongside its benefits, the industry’s shift toward remote computing comes with increased vulnerability to physical attacks. FPGAs and embedded devices are among the electronic devices most vulnerable to physical attacks, because they may be deployed in locations physically accessible to potential adversaries. This thesis aims to protect FPGAs and embedded devices from physical attacks by exploring the boundaries of possible attack vectors and introducing new countermeasures.

    This thesis contains six research papers. The first paper presents an FPGA implementation of a novel arbiter physically unclonable function (PUF) with 4×4 switch blocks. The PUF provides a more resource-efficient solution for secure key generation and storage on FPGAs. The second paper presents near-field electromagnetic deep learning-based side-channel analysis performed on the Raspberry Pi 3, a widely used single-board computer. The paper investigates the generalizability of side-channel analysis by focusing on the extraction of data in memory operations. The third and fourth papers present covert transmitting antennas and covert near-field EM sensors, respectively, both implemented entirely within the FPGA configurable fabric. The results highlight wireless covert channels as a plausible attack vector for cloud FPGAs and point to the need for further research on the topic. The fifth paper aims to improve IP security in FPGA clouds by introducing circuit disguise, a new method that enables FPGA design checks to be performed in the cloud without requiring the disclosure of the clients’ unprotected designs. Last but not least, the sixth paper presents a hybrid method for fingerprinting neural networks by combining power side-channel measurements with information-domain metrics.

    Download full text (pdf)
    Kappa
  • Public defence: 2025-06-11 10:00 https://kth-se.zoom.us/j/69506042503, Stockholm
    Fay, Dominik
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Machine Learning with Decentralized Data and Differential Privacy: New Methods for Training, Inference and Sampling2025Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Scale has been an essential driver of progress in recent machine learning research. Data sets and computing resources have grown rapidly, complemented by models and algorithms capable of leveraging these resources. However, in many important applications, there are two limits to such data collection. First, data is often locked in silos, and cannot be shared. This is common in the medical domain, where patient data is controlled by different clinics. Second, machine learning models are prone to memorization. Therefore, when dealing with sensitive data, it is often desirable to have formal privacy guarantees to ensure that no sensitive information can be reconstructed from the trained model.

    The topic of this thesis is the design of machine learning algorithms that adhere to these two restrictions: to operate on decentralized data and to satisfy formal privacy guarantees. We study two broad categories of machine learning algorithms for decentralized data: federated learning and ensembling of local models. Federated learning is a form of machine learning in which multiple clients collaborate during training via the coordination of a central server. In ensembling of local models, each client first trains a local model on its own data, and then collaborates with other clients during inference. As a formal privacy guarantee, we consider differential privacy, which is based on introducing artificial noise to ensure membership privacy. Differential privacy is typically applied to federated learning by adding noise to the model updates sent to the server, and to ensembling of local models by adding noise to the predictions of the local models.
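    The noise-addition step described above can be sketched concretely: before a client's model update reaches the server, its norm is clipped to bound any single client's influence, and Gaussian noise calibrated to that clipping norm is added. The sketch below is a generic illustration of this clip-and-noise pattern; the clipping norm and noise scale are arbitrary illustrative values, not parameters from the thesis.

```python
# Sketch of making one client's model update differentially private
# before it is sent to the server (generic clip-and-noise pattern).
import math
import random

def privatize_update(update, clip_norm=1.0, sigma=0.5, rng=random):
    """Clip the update's L2 norm to clip_norm, then add Gaussian noise."""
    norm = math.sqrt(sum(x * x for x in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [x * scale for x in update]   # bounds each client's influence
    # Noise proportional to the clipping norm masks any single contribution.
    return [x + rng.gauss(0.0, sigma * clip_norm) for x in clipped]

update = [3.0, 4.0]                         # raw update with L2 norm 5
noisy = privatize_update(update)            # what the server receives
```

The same pattern applies to ensembles of local models, with the noise added to aggregated predictions rather than to model updates.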

    Our research addresses the following core areas in the context of privacy-preserving machine learning with decentralized data: First, we examine the implications of data dimensionality on privacy for ensembling of medical image segmentation models. We extend the classification algorithm Private Aggregation of Teacher Ensembles (PATE) to high-dimensional labels, and demonstrate that dimensionality reduction can improve the privacy-utility trade-off. Second, we consider the impact of hyperparameter selection on privacy. Here, we propose a novel adaptive technique for hyperparameter selection in differentially private gradient descent; as well as an adaptive technique for federated learning with non-smooth loss functions. Third, we investigate sampling-based solutions to scale differentially private machine learning to datasets with a large number of data points. We study the privacy-enhancing properties of importance sampling and find that it can outperform uniform sub-sampling not only in terms of sample efficiency but also in terms of privacy. Fourth, we study the problem of systematic label shift in ensembling of local models. We propose a novel method based on label clustering to enable flexible collaboration at inference time.

    The techniques developed in this thesis improve the scalability and locality of machine learning while ensuring robust privacy protection. This constitutes progress on the goal of a safe application of machine learning to large and diverse data sets for medical image analysis and similar domains.

    Download full text (pdf)
    fulltext
  • Public defence: 2025-06-11 13:00 F3, Stockholm
    Elf, Patric
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Fibre- and Polymer Technology, Polymeric Materials.
    Prediction of Thermoplasticity in Lignocellulose-Based Materials using Molecular Simulations2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    With increasing interest in sustainably sourced and renewable materials, lignocellulose-based biopolymers are natural candidates for such applications. However, most biopolymers, including lignocellulose, are intrinsically rigid and therefore difficult to shape with e.g. extrusion and other tools for processing thermoplastic polymers. The lignocellulose must thus be modified, e.g. with chemical modifications and/or plasticizers, to become thermoplastic. In this thesis, the molecular and atomistic-level mechanisms that govern the interactions important for thermoplasticity within lignocellulosic materials are investigated through molecular dynamics (MD) simulations combined with experimental validation. The chemical structures of the lignocellulosic components, in particular cellulose and lignin, are explored to improve processability, mechanical performance, and thermodynamic behavior.

    The first part of the work focuses on cellulose and dialcohol cellulose, a ring-opened derivative which has demonstrated promising properties for processing, both in simulations and experiments. An increased degree of ring opening enhances molecular mobility and lowers the glass transition temperature, in both dry and moist conditions, facilitating thermoplastic behavior while maintaining mechanical performance. Subsequent studies extend the investigation to a broader set of cellulose modifications and ring openings, including aldehyde, hydroxylamine, and carboxyl functionalization, identifying how the different types of modification affect the thermodynamic and mechanical properties. The roles of moisture content and of the functional groups are thoroughly investigated.

    Plasticizers in cellulose and dialcohol cellulose systems were also evaluated, revealing that the plasticizer size and mobility influence the stability and thermal softening of the systems, with sorbitol and glycerol in particular showing especially promising results. 

    The thermo-mechanical behavior of lignin is also examined under the influence of temperature and moisture content, linking lignin softening with effects on hot-pressed unbleached paper through a combined simulation and experimental study. The study showed that wet hot-pressing is an efficient way to improve the mechanical properties of paper.

    This thesis demonstrates how molecular dynamics simulations can provide a better understanding of the internal structure of materials. It shows how MD simulations can guide the development of new thermoplastic materials, especially by examining properties that are difficult or even impossible to observe experimentally.

  • Public defence: 2025-06-11 13:00 Air & Fire, SciLifeLab
    Shi, Mengnan
    KTH, Centres, Science for Life Laboratory, SciLifeLab. KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Protein Science, Systems Biology.
    Systems Biology Approaches for Target Identification and Therapeutic Development in Chronic Diseases: Integrating Bulk and Single-Cell Transcriptomics2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Chronic diseases such as metabolic, renal, or liver disorders involve complex interactions of genes, cell types, and tissues. This doctoral thesis leverages systems biology by integrating transcriptomics with other omics data to map biological interactions and identify novel therapeutic targets. By viewing gene perturbations as interconnected networks rather than isolated factors, the research uncovers key drivers of disease and matches them with potential interventions. A combination of bulk and single-cell RNA sequencing is used: bulk RNA-seq provides a broad view of tissue-level changes, while single-cell RNA-seq pinpoints changes in specific cell populations. Together, these approaches enable more precise identification of drug targets for chronic diseases and facilitate drug repositioning to expedite therapy development.

    The thesis is structured into three key sections. The first part (Paper I) integrates transcriptomic, proteomic, and lipidomic data, exploring PKLR as a druggable target in non-alcoholic fatty liver disease (NAFLD). This study investigates whether small-molecule inhibitors of PKLR expression could serve as therapeutic agents, offering a drug repurposing strategy to mitigate disease progression. The second part (Papers II–IV) relies on gene co-expression networks and leverages both bulk and single-cell transcriptomics to discover disease-associated molecular drivers of hepatocellular carcinoma (HCC) and chronic kidney disease (CKD), respectively. These studies illustrate how single-cell data can locate key molecular targets in diverse cell types within tissues and help to understand the molecular mechanisms of these diseases.

    In the final section (Paper V), a whole-body single-cell gene expression atlas is introduced, providing a foundational reference for human biology. This resource enhances the systems biology toolkit, enabling rapid contextualization of newly identified disease genes and drug targets. Researchers can determine tissue and cell-type specificity, facilitating a clearer understanding of therapeutic strategies for chronic diseases.

    Overall, this thesis underscores the power of systems biology in deciphering disease mechanisms and advancing precision medicine. The integration of multi-omics data with network analysis fosters a holistic understanding of chronic diseases, leading to effective and targeted treatments. Beyond identifying therapeutic targets, the research contributes a lasting resource in the form of the single-cell gene expression atlas, bridging molecular discoveries with clinical applications. These insights accelerate the development of novel, data-driven therapies for complex diseases, advancing translational medicine.

    Download (pdf)
    kappa
  • Public defence: 2025-06-11 14:00 D3, Stockholm
    Lin, Ting Jun
    KTH, School of Industrial Engineering and Management (ITM), Learning, Learning in Stem.
    Bridging Policy and Practice in Spatial Ability Development: A Curriculum and Teaching Inquiry2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    A curriculum is a complex system shaped by educational values, institutional structures, and policy priorities. Within these systems at the compulsory level, spatial ability—the mental capacity to visualise, manipulate, and transform visual-spatial information—is inconsistently addressed, which translates to inconsistencies in classroom practice as well. While research increasingly acknowledges its significance, spatial ability is rarely positioned as an explicit learning goal within national curricula. Its development is often left to implicit interpretation, creating systemic ambiguity that affects equitable access to spatial ability development opportunities, particularly during the foundational years of schooling.

    This thesis investigates how spatial ability development is positioned at the intersection of curriculum policy and teaching practice, with a specific focus on primary and lower secondary education in Sweden. The central research question is: How is the development of spatial ability situated within the national curriculum and supported through educational practice? Subquestions explore how spatial ability is represented and structured in curriculum documents, and how teachers perceive and implement its role within their teaching.

    Framed by Bronfenbrenner’s ecological systems theory, which situates human development within interacting layers from policy to classroom practice, the study explores how these levels shape spatial ability development. Guided by an interpretivist paradigm, it adopts a qualitative, multi-method design incorporating grounded theory and a phenomenological approach. The research is primarily situated in the Swedish education system, with Ireland serving as a secondary, comparative context. The thesis comprises five interlinked papers, collectively exploring curriculum and teaching from both systemic and practitioner perspectives. Paper A establishes a foundational understanding of system-level enablers and barriers that influence spatial ability development. Paper B focuses on how spatial ability is embedded in Swedish Technology and Craft education. Paper C develops a structured analytical framework to evaluate curriculum representation. Paper D extends this analysis across the broader Swedish national curriculum beyond Technology and Craft. Finally, Paper E brings in teacher perspectives, contextualising curriculum findings within classroom practice and comparing implementation between Sweden and Ireland.

    Findings are interpreted through Bronfenbrenner’s ecological systems model. At the macrosystem level, national curricula in both Sweden and Ireland do not explicitly define spatial ability as a learning objective, limiting coherent policy support. At the mesosystem and exosystem levels, subject organisation, interdisciplinary structures, and assessment frameworks shape whether and how opportunities for spatial ability development are constructed within curriculum pathways. At the microsystem level, teachers consistently acknowledge the value of spatial ability and attempt to integrate it into their practice. However, such efforts remain informal, uncoordinated, and dependent on individual initiative, resulting in uneven experiences for learners.

    This thesis reveals a disconnect between curriculum policy and classroom practice, where the development of spatial ability depends heavily on teacher agency in the absence of systemic support. The thesis argues that bridging this gap requires more intentional integration of spatial ability into curriculum frameworks, alongside institutional supports that align policy, practice, and pedagogy. In doing so, it contributes to broader discussions on how national education systems can better support foundational cognitive development through both policy and practice.

    Download full text (pdf)
    fulltext
  • Public defence: 2025-06-12 09:30 F3, Stockholm
    Golo, Dusanka
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Chemistry, Theoretical Chemistry and Biology.
    Simulations of CO2 reduction in molecular materials2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Anthropogenic emissions increase, among other greenhouse gases, the atmospheric carbon dioxide concentration, which contributes to global warming and the resulting climate change. Compared to the pre-industrial era, a temperature increase of 1.5 °C was recorded in 2024. Although this may seem like a modest amount, it represents a significant accumulation of heat. One way to counteract this increase is by using catalysts for the conversion of CO2 to high-value products. This Ph.D. thesis focuses on the development and mechanistic investigation of promising catalysts capable of reducing CO2 to HCOOH or CO, two important chemical feedstocks.

    The first study shows that by using the Mn(bpy)(CO)3Br catalyst along with triethylamine and isopropanol as additives, the electrochemical reduction of CO2 is shifted from CO to HCOOH. The reaction mechanism was elucidated, highlighting the critical role of these additives. Subsequently, in the second study, changes were made to the bipyridine ligand of the Mn(bpy)(CO)3Br catalyst to examine how they would affect catalyst performance and selectivity. We compared two Mn catalysts with two and four pendant amine groups, respectively, and performed density functional theory calculations to investigate the effect of these pendant groups.

    Besides transition metal-based molecular catalysts, this thesis also covers the modeling of metal-organic frameworks (MOFs). To gain deeper insights into their catalytic properties, in the third study we first developed a new cationic dummy atom model that successfully reproduced experimental values and ensured stable molecular dynamics simulations. Beyond that, properties such as the "breathing phenomenon" were modeled. The newly parametrized force fields for metal ions proved transferable in our fourth project, where the cobalt-based MOF Al2(OH)2TCPP-Co was investigated for its potential to reduce CO2 to CO in an aqueous electrolyte. Simulations provided insights into the spatial distribution of CO2 and counter ions, leading to conclusions that can guide further research on how to enhance MOF structures for electrochemical conversion.

    This work provides the reader with knowledge about interesting candidates for electrochemical CO2 conversion and ideas for possible future advancements in the field. Additional literature that exceeds the scope of this research is included to provide a broader overview of the problem and potential solutions to it.

    Download full text (pdf)
    fulltext
  • Public defence: 2025-06-12 13:00 Sal F2, Stockholm
    McMullan, Kathryn (Kylie)
    KTH, School of Industrial Engineering and Management (ITM), Industrial Economics and Management (Dept.).
    The “wild west” of social media influencer marketing: examining the relationship between social media influencers and firms2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Influencer marketing is a popular marketing tactic that is only growing in importance. While there is much optimism from firms around their partnerships with social media influencers (SMIs), it is important that they understand the limitations and even the potential risks. However, there is currently limited research on how firms and SMIs can best work together. This thesis investigates influencer marketing, particularly how SMIs and firms can work together more effectively throughout their relationship lifecycle, from choosing which SMIs to work with and negotiating contracts to protecting themselves from potential reputational risks.

    Paper 1 explores the ways SMIs can increase their engagement and social influence through their social media posts, to better understand the factors firms should consider when selecting SMIs. Using the automated text analysis tool Linguistic Inquiry and Word Count (LIWC), it analyzes wine bloggers’ tweets and finds that by increasing personal pronoun use and decreasing the use of full-text numbers (numbers written out alphabetically) and interrogatives in their social media posts, wine SMIs can increase their engagement. Paper 2 examines the key tensions that exist when SMIs and firms work together. The paper identifies three key tensions for marketers in managing SMI relationships: control tension, timeframe tension, and value tension. Paper 3 applies agency theory and opportunism to influencer marketing practice to understand how opportunistic behaviour might occur on both sides of the SMI/firm relationship. Paper 4 explores the potential risks of firms partnering with SMIs and how they can be mitigated. It then provides a checklist to support managers in implementing this marketing tactic.

    This thesis uses qualitative studies to contribute to a more complete understanding of influencer marketing, especially how firms and SMIs can work together. It advances research on influencer marketing and the SMI/firm relationship, provides evidence that firms and SMIs sometimes act opportunistically, and proposes ways to mitigate this. The major contribution of this thesis is that, together, the four papers apply the bases of power to create a way to understand the management of the various lifecycle stages of the SMI/firm relationship, while also providing practical guidance for marketing managers on how to better manage their SMI relationships to optimize the partnership and mitigate risk.

    Download full text (pdf)
    kappa
  • Public defence: 2025-06-12 14:00 https://kth-se.zoom.us/j/68652985718, Stockholm
    Tu, Sijing
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Models and Algorithms for Addressing Challenges in Online Social Networks2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Social network platforms such as Facebook and X (formerly Twitter) facilitate convenient access to news and discussions and enable individuals to express their opinions on societal issues. In recent years, however, these platforms have given rise to significant societal challenges, such as increasing political polarization and the circulation of misinformation and disinformation. Malicious actors have exploited these platforms to target vulnerable individuals and manipulate the content they encounter on critical societal matters. Furthermore, algorithmic mechanisms implemented by these platforms, such as information filtering and personalized news feeds, have contributed to the formation of filter bubbles, which restrict individuals’ exposure to diverse perspectives and reinforce existing biases on societal issues. This thesis aims to deepen our understanding of these emerging challenges by conceptualizing them as computational problems. We examine the intricate interplay between information flow, human interactions, and algorithmic interventions, selecting and proposing appropriate models to frame these dynamics. We transform complex real-world challenges into computational problems with precise mathematical formulations, analyze the complexity of these problems, and design approximation algorithms to address them. This thesis comprises six publications and is organized around four research topics. First, we examine the capacity of malicious actors to amplify political polarization and shift individuals’ opinions toward extreme viewpoints. The two associated publications consider scenarios in which malicious actors either influence the opinions of a small subset of individuals or have extensive connections in the network.
Second, we propose methods to mitigate filter bubbles by increasing individuals’ exposure to diverse information, achieved either through a viral marketing campaign or by adjusting the exposure of a small subset of individuals. Third, we analyze the impact of viral marketing campaigns on the opinion-formation process, introducing a model that integrates the dynamics of information dissemination with opinion formation. Fourth, we propose the OptiRefine framework for classical problems in social network analysis, such as the max-cut problem and the densest subgraph problem. The framework defines a class of problems for which an initial solution is given. The goal is to identify a new solution that remains close to the original while optimizing predefined objective functions, such as the cut value or the subgraph density. All proposed approaches are rigorously evaluated against multiple baseline algorithms and heuristics in all publications. 
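    The refinement setting of the fourth topic can be illustrated on max-cut: starting from a given cut, change the side of at most k vertices so that the cut value improves while the new solution stays close to the original. The greedy rule below is only an illustrative sketch of the problem setting; the thesis itself designs approximation algorithms for this class of problems. The example graph and starting cut are hypothetical.

```python
# Illustrative sketch of refinement for max-cut: flip at most k vertices
# to raise the cut value while staying within Hamming distance k.

def cut_value(edges, side):
    return sum(1 for u, v in edges if side[u] != side[v])

def refine_cut(edges, side, k):
    side = dict(side)
    best = cut_value(edges, side)
    for _ in range(k):
        gain, pick = 0, None
        for v in side:
            side[v] ^= 1                       # tentatively flip v
            g = cut_value(edges, side) - best
            side[v] ^= 1                       # undo the flip
            if g > gain:
                gain, pick = g, v
        if pick is None:
            break                              # no improving flip left
        side[pick] ^= 1
        best += gain
    return side, best

path = [(0, 1), (1, 2), (2, 3)]
start = {0: 0, 1: 0, 2: 0, 3: 0}               # all on one side: cut value 0
refined, value = refine_cut(path, start, k=1)  # one flip raises the cut to 2
```

The same template applies to densest subgraph, with vertex insertions/removals in place of side flips and subgraph density as the objective.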

    Download full text (pdf)
    fulltext
  • Public defence: 2025-06-12 14:00 Sal D3, Stockholm
    He, Yifei
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Domain-Specific Compilation Framework with High-Level Tensor Abstraction for Fast Fourier Transform and Finite-Difference Time-Domain Methods2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    With the end of Dennard scaling, hardware performance improvements now stem from increased architectural complexity, which in turn demands more sophisticated programming models. Today’s computing landscape includes a broad spectrum of hardware targets—CPUs, GPUs, FPGAs, and domain-specific ASICs—each requiring substantial manual effort and low-level tuning to fully exploit their potential. Performance programming has evolved beyond traditional code optimization and increasingly depends on domain-specific compilers, constraint-solving frameworks, advanced performance models, and automatic or learned strategies for code generation.

    Conventional implementations of numerical libraries often rely on handwritten, platform-specific kernels. While such kernels may achieve high performance for selected routines, they typically underperform in others, and their lack of portability results in high development overhead and performance bottlenecks. This impedes scalability across heterogeneous hardware systems.

    To address these challenges, this thesis presents the design and implementation of end-to-end domain-specific compilers for numerical workloads, with a focus on applications such as Fast Fourier Transform (FFT) and Finite Difference Time Domain (FDTD) solvers. The proposed framework is built on the Multi-Level Intermediate Representation (MLIR) and Low-Level Virtual Machine (LLVM) infrastructures. It models compute kernels as operations on 3D tensor abstractions with explicit computational semantics. High-level optimizations—including loop tiling, fusion, and vectorization—are automatically applied by the compiler.
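    To give a feel for one of the optimizations named above, the sketch below shows loop tiling on a matrix product: the i/k/j loops are split into small blocks so each tile works on cache-resident data. This is a generic pure-Python illustration of the transformation, not the framework's actual MLIR output, and the tile size is arbitrary.

```python
# Illustrative loop tiling: the three matmul loops are blocked so that
# each tile touches a small region of A, B, and C at a time.

def matmul_tiled(A, B, tile=2):
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, m, tile):
            for jj in range(0, p, tile):
                # Inner loops sweep one tile; min() handles ragged edges.
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, m)):
                        a_ik = A[i][k]
                        for j in range(jj, min(jj + tile, p)):
                            C[i][j] += a_ik * B[k][j]
    return C

A = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]
I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
```

A compiler applies such tiling (plus fusion and vectorization) automatically from the tensor-level representation, so the result is bit-identical to the untiled loop nest while being far friendlier to the cache hierarchy.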

    We evaluate the proposed code generation pipeline across diverse hardware platforms, including Intel, AMD, and ARM CPUs, as well as GPUs. Experimental results demonstrate the approach’s ability to deliver both high performance and portability across heterogeneous architectures.

    Download full text (pdf)
    kappa
  • Public defence: 2025-06-12 14:00 Air&Fire, Solna
    Truong, Patrick
    KTH, Centres, Science for Life Laboratory, SciLifeLab. KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Gene Technology.
    Machine Learning Models in Proteomics and Phylogenetics2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The exponential growth of biological data in recent years has necessitated the development of sophisticated computational methods to extract meaningful insights. This thesis explores various aspects of bioinformatics, focusing on benchmarking existing methods and developing novel approaches to address current challenges.

    As computational biology and large-scale biological datasets continue to expand, the discipline has undergone a paradigm shift toward data-driven methodologies. This transformation is driven by advances in high-throughput technologies that generate vast amounts of genomic, proteomic, and other omics data. The sheer volume and complexity of these datasets demand innovative computational strategies.

    Data-driven methods are increasingly central to biological research due to their ability to uncover hidden patterns, predict outcomes, and generate hypotheses from large-scale data. These approaches enable researchers to address complex biological problems that were previously intractable, leading to breakthroughs in areas such as personalized medicine, drug discovery, and systems biology.

    This thesis presents four studies that advance bioinformatic methods and their applications. The first study modifies and evaluates the performance of Triqler, a probabilistic graphical model, for protein quantification in data-independent acquisition (DIA) mass spectrometry. By adapting Triqler for DIA data and comparing it with established methods, we demonstrate its superior performance in identifying differential proteins while maintaining better statistical calibration.

    The second study introduces Prosit-transformers, a novel approach to prediction of MS2 spectrum intensity. By incorporating a transformer model pre-trained on protein features, we achieve improved prediction accuracy and reduced training time compared to the original Prosit model based on recurrent neural networks.

    The third study explores proteome-wide alkylation to enhance peptide sequence coverage and detection sensitivity in proteomic analyses. Through systematic modification of peptides with varying alkyl chain lengths, we demonstrate significant improvements in ionization signals, particularly for hydrophilic peptides. This approach has potential applications in nanoproteomics and single-cell proteomics, where sample material is limited.

    Finally, the fourth study presents difFUBAR, a scalable Bayesian method for comparing the selection pressure between different sets of branches in phylogenetic analyses. Implemented in the Julia-based MolecularEvolution.jl framework, difFUBAR offers improved computational efficiency through subtree-likelihood caching and provides a robust alternative to frequentist approaches for characterizing site-wise variation in selection parameters.
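The kind of question difFUBAR answers at each site can be illustrated, very loosely, with a toy Monte Carlo comparison of posterior draws of the selection parameter ω (dN/dS) for two branch sets. Everything here is invented for illustration (function name, sample counts, gamma-distributed draws); the actual method works on a grid of selection parameters with subtree-likelihood caching in Julia:

```python
import random

def prob_omega_greater(omega_fg, omega_bg):
    """Monte Carlo estimate of P(omega_foreground > omega_background)
    at a single site, given paired posterior draws for the two branch sets."""
    hits = sum(f > b for f, b in zip(omega_fg, omega_bg))
    return hits / len(omega_fg)

random.seed(1)
# Hypothetical posterior draws of omega for two branch sets at one site
fg = [random.gammavariate(2.0, 0.8) for _ in range(5000)]  # mean ~1.6
bg = [random.gammavariate(2.0, 0.3) for _ in range(5000)]  # mean ~0.6
p = prob_omega_greater(fg, bg)
print(round(p, 2))
```

A posterior probability near 1 would indicate strong evidence that the foreground branches are under elevated selection pressure at that site, which is the Bayesian counterpart to a frequentist branch-site test.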

    Together, these studies benchmark novel methods against established alternatives and expand the arsenal of computational approaches in bioinformatics. By addressing challenges in proteomics, computational biology, and evolutionary analysis, this thesis contributes to the ongoing advancement of data-driven methods in biology. The work presented here not only improves our understanding of biological systems, but also provides researchers with enhanced tools to extract meaningful insights from complex biological data.

    Download (pdf)
    Summary
  • Public defence: 2025-06-13 10:00 Air&Fire, Solna
    Zhigulev, Artemii
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Gene Technology. KTH, Centres, Science for Life Laboratory, SciLifeLab.
    The role of enhancer mutations in human complex traits (2025). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Understanding the genetic basis of diseases and individual responses to external stimuli holds immense potential for improving quality of life. Since the discovery of DNA as the carrier of genetic information, science has progressively unraveled the mechanisms of information transfer from DNA to RNA and proteins, bringing us closer to comprehending the principles of human biology. The advent of whole-genome sequencing, genome-wide association studies, and expression quantitative trait loci analysis has made the clinical and, in particular, personalized application of genomic data increasingly feasible. However, it soon became clear that highly penetrant protein-coding mutations, which are relatively straightforward to annotate, account for only ~5% of known cases, typically associated with Mendelian disorders. Most diseases are complex traits involving multiple, often regulatory, non-coding variants.

    A significant challenge in annotating non-coding variants lies in two interrelated tasks: (a) predicting the regulatory activity of genomic regions, where variants are located, and (b) linking these regulatory regions to their target genes. This thesis employs sequence capture Hi-C as its primary methodological approach to address both points in a cell-type-specific manner.

    In Article A, we investigated whether regulatory variants could predict chemotherapy-induced myelosuppression levels in non-small cell lung cancer patients. To this end, we analyzed interactome and bulk transcriptomic changes following carboplatin or gemcitabine treatments using three relevant hematopoietic cell lines (CMK, MOLM-1, and K-562). As a result, we demonstrated that non-coding variants, previously prioritized in 96 patients with varying degrees of myelosuppression, are enriched in interactions with genes that exhibit differential interaction profiles upon treatment. This proof-of-concept study laid the foundation for follow-up analyses in patient-derived bone marrow samples.

    In Article B, we examined the contribution of rare non-coding variants to the missing heritability of a congenital cardiovascular disorder, bicuspid aortic valve. We combined the endothelial interactome of ascending aorta samples from sixteen adult patients, focusing on all promoter-interacting regions, with individuals’ whole-genome sequencing data. Moreover, we integrated embryonic single-cell and spatial transcriptomic datasets to contextualize these findings developmentally. By leveraging innovative analytical approaches, including allele-specific expression, advanced non-redundant transcription factor motif sets, and single-patient network models, we showed that rare regulatory variants complement protein-coding mutations in shaping the fetal heart mesenchyme and fibroblast transcriptome profiles in disease patients. This work is the foundation for an expanded and more comprehensive ongoing study using aortic valve cells.

    In Article C, we sought to elucidate the regulatory mechanisms underlying the 9p21 locus, the most significant genetic risk locus for coronary artery disease. We integrated the second part of the endothelial cell interactome dataset, focusing on the coronary artery disease-associated SNPs, with smooth muscle cell interactomes derived from the ascending aortas of six additional patients. We proved that the risk variant rs1333042 interacts with the previously unrelated MIR31HG gene, specifically in endothelial cells. Multiple layers of experimental validation supported this finding.

    In conclusion, this thesis advances the field of clinical interactomics by illustrating novel cases of enhanceropathies and proposing new frameworks for integrating interactome data with bulk, single-cell, spatial transcriptomics, and whole-genome sequencing across different developmental stages. These studies offer conceptual insights and practical methodologies for understanding the non-coding genome in disease.

    Download (pdf)
    fulltext
  • Public defence: 2025-06-13 14:00 F3, Stockholm
    Dehlin, Fredrik
    KTH, School of Engineering Sciences (SCI), Physics, Nuclear Science and Engineering.
    Design and safety analysis of a lead-cooled research reactor (2025). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This compilation thesis presents work focused on the design and safety analysis of the small lead-cooled research and demonstration reactor SUNRISE-LFR, the first step toward the construction of a next-generation reactor in Sweden. Two versions of SUNRISE-LFR are introduced; the second was necessitated by the lack of access to uranium enriched above 10 wt.% 235U/U. Neutronic characterization is performed using the Monte Carlo code Serpent 2, while the reactors’ behaviours during design extension conditions (DECs) are analysed using the in-house developed code BELLA and an established fast reactor safety code, SAS4A/SASSYS-1. An analytical method for designing a passively safe lead-cooled reactor is derived and used to propose the core configuration of SUNRISE-LFR. This method is subsequently expanded into a semi-analytical framework for designing the Reactor Vessel Auxiliary Cooling System (RVACS), aimed at ensuring fuel cladding survivability during unprotected station blackout (USBO) transients. The model is further extended to evaluate the impact on system temperatures during USBO transients when using nuclear fuel with different actinide compositions. It is shown that actinide compositions with low concentrations of americium and the plutonium isotope 241Pu are beneficial for cladding integrity. Finally, the thesis assesses the impact of coolant circulation on the total neutron activation of the lead coolant over the reactor’s operational lifetime. It is demonstrated that a sufficiently pure lead vector, particularly one with low silver content, could allow the coolant to be exempted from radiological control within a reasonable time frame, thereby avoiding the need for disposal in a final repository. This thesis serves as both a foundation and a stepping stone for the continued development, licensing, and eventual construction of a lead-cooled reactor in Sweden.

    Download full text (pdf)
    fulltext
  • Public defence: 2025-06-13 14:00 https://kth-se.zoom.us/j/68571101309, Stockholm
    Jonsson, Marika
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Cognitive Accessibility in eHealth – Introducing Participatory Research through Design (2025). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The digitalisation of healthcare often comes with promises of solving future expected increasing demands on healthcare and ensuring equitable health service distribution. However, this promise is not yet realised in Sweden’s present eHealth services. Meanwhile, Swedish public healthcare is moving towards person-centred care, where the person’s resources, experiences, and needs are considered. The primary motive of this thesis is to contribute to knowledge on how to design eHealth services that support cognitive accessibility for increased equity. The work in this thesis is based on the idea that accessibility is important, and that people affected by problems in eHealth should be part of the design of accessible eHealth services. The research started by looking into previous research on accessibility in eHealth services and then focused on Sweden’s local context. There are regulations on accessibility in eHealth services in the European Union (EU). Therefore, we investigated how Swedish public healthcare complies with these regulations. The scope was then narrowed to cognitive accessibility, an area not sufficiently covered by the EU regulations and current guidelines. Together with people with lived experiences of cognitive impairments, we explored participatory design methods for cognitive accessibility and how to make an impact in real-life settings. We used two cases for the research: the personal eHealth services on the Swedish national healthcare website 1177.se; and a symptom checker and triage tool, called 1177 direkt, presented as a conversational agent. For the first case, we co-designed a prototype for enhanced cognitive accessibility and used it as a dialogue tool to make an impact on the development of the existing eHealth services. For the second case, we evaluated 1177 direkt and conducted co-design activities for suggestions on enhanced cognitive accessibility with a collaborative approach with representatives from the product owner.
In this case, the product owners are both the company that develops and markets the product, and Inera, which procures the product so that public healthcare providers can select it from Inera’s range of services and products. This thesis concludes that eHealth services that are experienced as inaccessible will most likely remain inaccessible if we continue developing eHealth services as before. Incorporating the insights of people with lived experience of cognitive impairment through participatory approaches when designing eHealth services can contribute to services that support person-centred care.

    Download full text (pdf)
    fulltext