KTH Publications (kth.se)
  • Public defence: 2026-01-16 10:00 https://kth-se.zoom.us/s/61617488895, Stockholm
    Moothedath, Vishnu Narayanan
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Towards Efficient Distributed Intelligence: Cost-Aware Sensing and Offloading for Inference at the Edge, 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The ongoing proliferation of intelligent systems, driven by artificial intelligence (AI) and 6G, is leading to a surge in closed-loop inference tasks performed on distributed compute nodes. These systems operate under strict latency and energy constraints, extending the challenge beyond achieving high accuracy to enabling timely and energy-efficient inference. This thesis examines how distributed inference can be optimised through two key decisions: when to sample the environment and when to offload computation to a more accurate remote model. These decisions are guided by the semantics of the underlying environment and its associated costs. The semantics are kept abstract, and pre-trained inference models are employed, ensuring a platform-independent formulation adaptable to the rapid evolution of distributed intelligence and wireless technologies.

    Regarding sampling, we studied the trade-off between sampling cost and detection delay in event-detection systems without sufficient local inference capabilities. The problem was posed as an optimisation over sampling instants under a stochastic event sequence and analysed at different levels of modelling complexity, ranging from periodic to aperiodic sampling. Closed-form, algorithmic, and approximate solutions were developed, with some results of independent mathematical interest. Simulations in realistic settings showed marked gains in efficiency over systems that neglect event semantics. In particular, aperiodic sampling achieved a stable improvement of ~10% over optimised periodic policies across parameter variations.
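    The periodic case admits a simple illustration. As a toy sketch, not the thesis model: if events arrive as a Poisson process with rate λ and a periodic sampler with period Δ pays a sampling cost c_s per sample plus a delay cost c_d per unit of detection delay (mean delay Δ/2), the long-run cost rate c_s/Δ + c_d·λ·Δ/2 is minimised at Δ* = √(2·c_s/(c_d·λ)). All parameter values below are illustrative assumptions:

```python
import math

def cost_rate(delta, c_s, c_d, lam):
    # Sampling cost per unit time plus the expected delay penalty:
    # events (rate lam) are detected at the next sample, giving a
    # mean detection delay of delta / 2 under periodic sampling.
    return c_s / delta + c_d * lam * delta / 2

def optimal_period(c_s, c_d, lam):
    # Closed-form minimiser of the convex cost rate above.
    return math.sqrt(2 * c_s / (c_d * lam))

c_s, c_d, lam = 1.0, 4.0, 0.5
delta_star = optimal_period(c_s, c_d, lam)
# Cross-check the closed form against a coarse grid search.
grid = [0.01 * k for k in range(1, 1000)]
delta_grid = min(grid, key=lambda d: cost_rate(d, c_s, c_d, lam))
print(delta_star, delta_grid)
```

    The aperiodic policies studied in the thesis adapt the next sampling instant to the event semantics, which is where the reported ~10% gain over the optimised periodic baseline comes from.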

    Regarding offloading, we introduced a novel Hierarchical Inference (HI) framework, which makes sequential offload decisions between a low-latency, energy-efficient local model and a high-accuracy remote model using locally available confidence measures. We proposed HI algorithms based on thresholds and ambiguity regions learned online by suitably extending Prediction with Expert Advice (PEA) approaches to continuous expert spaces and partial feedback. HI algorithms minimise the expected cost across inference rounds, combining offloading and misclassification costs, and are shown to achieve a uniformly sublinear regret of O(T^(2/3)). The proposed algorithms are agnostic to model architecture and communication systems, do not alter model training, and support model updates during operation. Benchmarks on standard classification tasks using the softmax output as a confidence measure showed that HI adaptively distributes inference based on offloading costs, achieving results close to the offline optimum. HI is shown to add resilience to distribution changes and model mismatches, especially when asymmetric misclassification costs are present.
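    The threshold idea behind HI can be sketched with a finite expert grid and full feedback, whereas the thesis works with continuous expert spaces and partial feedback. Everything below, including the offloading cost BETA and the toy confidence model, is an illustrative assumption, not the thesis setup:

```python
import math, random

random.seed(0)
BETA = 0.05                               # assumed offloading cost per round
THRESHOLDS = [i / 20 for i in range(21)]  # finite grid of threshold "experts"
eta = 0.5                                 # Hedge learning rate
weights = [1.0] * len(THRESHOLDS)

def round_cost(theta, conf, local_correct):
    # Offload when local confidence is below theta: pay BETA and assume the
    # remote model is correct; otherwise pay 1 on a local misclassification.
    if conf < theta:
        return BETA
    return 0.0 if local_correct else 1.0

total = 0.0
T = 5000
for _ in range(T):
    # Toy local model: confidence ~ U(0, 1), correct with probability conf.
    conf = random.random()
    local_correct = random.random() < conf
    # Sample a threshold expert in proportion to its weight.
    r = random.random() * sum(weights)
    acc, choice = 0.0, THRESHOLDS[-1]
    for th, w in zip(THRESHOLDS, weights):
        acc += w
        if r <= acc:
            choice = th
            break
    total += round_cost(choice, conf, local_correct)
    # Full-feedback Hedge update, then normalise to avoid underflow.
    weights = [w * math.exp(-eta * round_cost(th, conf, local_correct))
               for th, w in zip(THRESHOLDS, weights)]
    s = sum(weights)
    weights = [w / s for w in weights]

avg = total / T
print(avg)
```

    The weight distribution concentrates on thresholds with low empirical cost; handling continuous thresholds and the fact that the remote label is only observed when offloading (partial feedback) is what drives the O(T^(2/3)) regret in the thesis.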

    In summary, this thesis presents efficient approaches for sampling and offloading of inference tasks, where various performance metrics are combined into a single cost structure. The work extends beyond conventional inference problems to areas with similar trade-offs, advancing toward efficient distributed intelligence that infers at the right time and in the right place. Future work includes conceptual extensions like joint sampling-offloading design, and integration with collaborative model-training architectures.

  • Public defence: 2026-01-16 10:00 FB51, Stockholm
    Hossny, Karim
    KTH, School of Engineering Sciences (SCI), Physics, Nuclear Science and Engineering.
    Decision Tree Insights Analytics (DTIA): An Explainable AI Framework for Severe Accident Analysis, 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In nuclear reactor safety analysis, we label an accident as severe once a partial core meltdown and material relocation begin. Researchers use simulation tools such as ANSYS and MELCOR to study these events safely, producing vast and complex datasets. In this work, we applied machine learning explainability and interpretability to extract insights from severe accident simulations for the Nordic boiling water reactor (BWR) through five iterative studies. First, we examined the explainability of the decision tree classification algorithm to distinguish between accident types using time-wise pressure vessel external temperature. Second, we generalised the model to create a more statistically robust and generic framework, introducing the open-source Decision Tree Insights Analytics (DTIA) framework (https://github.com/KHossny/DTIA), which combines explainability, interpretability, and statistical robustness. Third, we applied DTIA to high-dimensional MELCOR COR package data for a station blackout combined with a loss-of-coolant accident (SBO + LOCA) in a Nordic BWR, revealing new findings. Fourth, we used DTIA to compare structural variables of the reactor pressure vessel lower head under SBO and SBO + LOCA conditions. Finally, we coupled DTIA with K-Means clustering to address its need for labelled data, uncovering previously overlooked events such as canister melting. We concluded that the patterns identified by machine learning in mapping inputs to outputs can uncover insights that were previously overlooked, particularly in high-dimensional and complex datasets.
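    A minimal sketch of the DTIA-plus-clustering idea on hypothetical toy data (not the MELCOR datasets): k-means supplies labels for unlabelled simulation output, and a depth-1 decision tree then yields an interpretable threshold rule. The two-regime temperature signal and all numbers are assumptions for illustration:

```python
import random

random.seed(1)

def kmeans_1d(xs, iters=20):
    # Minimal 1-D two-cluster k-means with deterministic extreme-point init.
    centers = [min(xs), max(xs)]
    labels = [0] * len(xs)
    for _ in range(iters):
        labels = [0 if abs(x - centers[0]) <= abs(x - centers[1]) else 1
                  for x in xs]
        for j in (0, 1):
            pts = [x for x, l in zip(xs, labels) if l == j]
            if pts:
                centers[j] = sum(pts) / len(pts)
    return centers, labels

def best_stump(xs, labels):
    # Depth-1 "decision tree": the cut that best separates the labels.
    best_cut, best_err = None, len(xs) + 1
    for cut in sorted(set(xs)):
        errs = sum((x <= cut) != (l == 0) for x, l in zip(xs, labels))
        errs = min(errs, len(xs) - errs)   # either side may be class 0
        if errs < best_err:
            best_cut, best_err = cut, errs
    return best_cut, best_err

# Hypothetical "simulation output": two regimes of a peak-temperature signal.
xs = [random.gauss(600, 20) for _ in range(100)] + \
     [random.gauss(1200, 40) for _ in range(100)]
centers, labels = kmeans_1d(xs)
cut, errs = best_stump(xs, labels)
print(round(cut, 1), errs)
```

    The released DTIA framework operates on full high-dimensional simulation data with statistically robust tree ensembles; this stump only illustrates why tree thresholds make cluster-derived labels interpretable.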

  • Public defence: 2026-01-16 10:00 Kollegiesalen, Stockholm
    Alloisio, Marta
    KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Material and Structural Mechanics.
    In-Vitro Testing and Numerical Modelling towards Uncovering Aortic Wall Fracture Mechanisms, 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Cardiovascular pathologies such as aortic aneurysm and dissection remain one of the leading causes of mortality worldwide. Current clinical standards for assessing rupture risk in aneurysmal aortas rely primarily on external diameter and its growth rate, despite the inherently multifactorial nature of rupture. Although tissue fracture plays a crucial role in the onset and progression of vascular diseases, understanding in this area remains limited. The hierarchical histological structure of vascular tissue gives rise to complex mechanical behaviour, while existing experimental protocols for soft tissue fracture are often inadequate for a sound characterisation of the fracture response.

    A comprehensive understanding of fracture requires the assessment of fracture mechanisms and the quantification of key parameters, including resistance to rupture and the size of the fracture process zone. Concerning biological soft tissue, most mechanistic information stems from studies on skin, which is extremely resistant to fracture. However, the histological structure of vascular tissue differs from that of skin, which impedes the direct translation of such information. Moreover, the influence of clinical factors on the mechanics of diseased vessel walls cannot be ignored, as focusing solely on normal tissue may yield clinically irrelevant estimates of mechanical properties. Bridging engineering fracture mechanics with medical application thus represents both a critical and challenging task.

    A major part of this thesis was dedicated to the design and application of a fracture test experiment, the symmetry-constraint Compact Tension (symconCT) test. The setup enabled stable propagation of the crack in a pre-notched specimen orthogonal to the loading direction. Investigations could be carried out up to complete rupture of the specimen, and image analysis captured local mechanisms at the fracture tip. Pronounced rounding/flattening of the crack notch, called blunting, characterised the fracture. In addition, the study demonstrated the strong dependence of crack morphology on the loading orientation relative to fibre alignment. Despite a slow displacement rate being applied, the experiments revealed significant strain-rate effects ahead of the notch. The protocol allowed testing of both normal porcine tissue and human aneurysmal aorta, with results linking fracture properties to clinical and histological data. Collagen content increased fracture resistance, while energy dissipation decreased with age, underscoring the relevance of patient-specific factors in rupture prediction. To further validate this hypothesis, mechanical, geometrical, and clinical information were integrated through different machine learning models to assess the rupture of abdominal aortic aneurysms. The models outperformed the clinical standard, revealing that rupture identification depends on multiple interacting factors rather than any single dominant parameter.

    Based on the experimental data, finite element models were developed to simulate the fracture behaviour during the symconCT test. Elastic and fracture properties were identified at a specimen-specific level, exploring two different approaches to modelling fracture: the cohesive zone and phase-field methods. The fracture resistance (strength) of notched specimens was significantly lower than that of unnotched tensile specimens, indicating that conventional tests on flawless tissue overestimate fracture properties, especially in diseased tissues, which contain microvoids and microdamage. Future work should aim to simulate entire vessel walls using patient-specific geometries and boundary conditions.

    The combined experimental and computational framework in this thesis advanced the understanding of the fracture processes and mechanical behaviour of the aortic vessel wall. It provided essential groundwork for patient-specific rupture risk prediction, supported the translation of biomechanics into clinical decision-making, and paved the way for future studies addressing more realistic and complex physiological scenarios.

  • Public defence: 2026-01-16 14:00 F3, Stockholm
    Fejne, Frida
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Existence, uniqueness, and regularity theory for local and nonlocal problems, 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis consists of three papers, an individual summary of each paper, and an introduction. The papers are all related to existence, uniqueness, or regularity theory of local and nonlocal partial differential equations (PDEs).

    In Paper A, we establish uniqueness for viscosity solutions of the inhomogeneous nonlocal infinity Laplace equation Lu = f, where the right hand side f is a bounded, continuous, and nonpositive function. Uniqueness is proven through a comparison principle.

    In Paper B, we use Perron's method to construct viscosity solutions to the equation ∂u/∂t = L u in Ω, and u = g in the complement.

    In Paper C, we study the regularity of a minimizer of the expression J(u) := ∫ F(∇u) dx, where F is a strongly convex function whose second derivatives may jump at |x| = 1. The specific form of F gives rise to a free boundary Γ, and the resulting Euler-Lagrange equation varies over Γ. In this paper we only consider two-phase flat points. We show that, under some regularity and non-degeneracy assumptions, the asymptotic expansion of a minimizer u can be written as u(x) = a + ν · x + p(x) + q(x), where a ∈ R and ν ∈ R^n. The function p is a broken polynomial, defined as a C^1 function consisting of one polynomial in the upper half space and another polynomial in the lower half space, and the function q is a rest term. We derive the PDEs satisfied by p and q, respectively, and establish several regularity properties for the terms in the expansion. This paper is intended as the first part of a project that aims at establishing regularity of the free boundary Γ.

  • Public defence: 2026-01-20 09:30 F3, Lindstedtsvägen 26, KTH Campus, Stockholm
    Vosoughian, Saeed
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Road and Railway Engineering. Trafikverket.
    A mechanistic framework for evaluating the performance of asphalt pavements subjected to frost heave and thaw settlement, 2026. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Asphalt pavements are subjected to various forms of deterioration throughout their service life due to the combined effects of traffic loading and environmental conditions. In cold regions, the coexistence of subfreezing temperatures and a high moisture content promotes the formation of ice lenses in frost-susceptible soils that may be present in the subgrade layer. This phenomenon, known as frost heave, induces differential upward movement of the ground surface, leading to cracking and surface irregularities in pavements. During the subsequent thawing period, the melting of ice lenses leads to a significant reduction in the stiffness and stability of the underlying soil, thereby compromising the structural integrity of the pavement. This cycle of frost heave and thaw settlement contributes to progressive surface degradation, a phenomenon commonly observed in countries like Sweden, where long winters and moisture-saturated soils are prevalent.

    The present thesis proposes a mechanistic framework to assess the performance of asphalt pavements under frost heave and thaw settlement. To achieve this, a thermomechanical frost heave–thaw settlement model is coupled with a thermodynamics-based asphalt damage model. In the frost heave–thaw settlement component, thermal and mechanical fields are coupled through a porosity evolution function, which implicitly accounts for water seepage during frost heave. Notably, the proposed model introduces a new approach in which the formation of ice lenses during frost heave and the excess water introduced into the soil composite during the thawing phase are treated analogously to the healing and damage processes in continuum damage mechanics.

    The mechanical behavior of asphalt materials is modeled using a continuum constitutive framework capturing viscoelasticity, viscoplasticity, and material degradation. The formulation is developed in the context of finite strain theory and is grounded in thermodynamic principles governing irreversible processes. In this model, damage initiation and evolution are ascribed to the stored viscoelastic energy.

    The proposed framework is implemented and evaluated across study scenarios that include both uniform and non-uniform frost heave and thaw settlement. The scenarios comprise isolated freeze-thaw cycles and full-scale cases using measured climate data from the city of Kiruna in northern Sweden. The results show that the framework effectively captures the frost action within the soil by modeling the evolution of porosity, the distribution of ice and liquid water contents, and the associated ground surface deformations. In addition, it enables the analysis of the subsequent propagation of damage within the asphalt layer. The framework also serves as a valuable tool for assessing the effectiveness of various mitigation strategies aimed at alleviating the detrimental effects of annual ground surface deformations induced by frost heave and thaw settlement.

  • Public defence: 2026-01-23 10:00 Stockholm
    Sjöström, Jenny
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Fibre- and Polymer Technology.
    Lignin release during oxygen delignification – kinetics, structure and potential, 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Oxygen delignification is a critical stage in modern kraft pulp production, enabling significant reductions in chlorine-based bleaching chemicals and environmental emissions while maintaining fiber quality. The process remains limited by challenges in efficiency and selectivity, governed jointly by chemical reactions and mass transport constraints. This thesis investigates the interplay between these mechanisms and explores the properties and valorisation potential of oxidized lignin (oxlignin) extracted from oxygen-stage wash liquors. Experimental results demonstrate that lignin removal during oxygen delignification is driven by a combination of rapid early-stage oxidative reactions and diffusion-controlled leaching. High oxygen pressure and sufficient alkalinity promote lignin depolymerization and oxidation, improving selectivity, while insufficient chemical conditions lead to lignin redeposition and cellulose degradation. Upstream factors such as brownstock washing efficiency and storage conditions significantly influenced lignin leaching and pulp quality, highlighting the importance of integrated process control. Oxlignin, isolated from industrial filtrates, differed markedly from conventional kraft lignin, exhibiting higher carboxylic acid content, improved water solubility, and a narrower molecular weight distribution. These properties suggest potential applications as dispersants or additives in biopolymer formulations. Ultrafiltration proved to be a viable approach for fractionating oxlignin. By connecting process optimization with resource valorisation, this work contributes to more sustainable kraft pulp production and supports the development of new lignin-based value streams in future biorefineries.

  • Public defence: 2026-01-27 13:00 Kollegiesalen, via Zoom: https://kth-se.zoom.us/j/61935309457, Stockholm
    Keskitalo, Markus M.
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Industrial Biotechnology, Industrial Biotechnology.
    Functional characterization of dolichol phosphate mannose synthases and development of infrared nanoscopy to study membrane proteins in solution, 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Membrane proteins are proteins that are embedded in the lipid bilayers of organisms. Roughly a fourth of all human proteins are estimated to be membrane proteins, and about 60 % of approved medications target membrane proteins. The correct function of membrane proteins is essential to all organisms.

    This thesis is made up of two parts. First, the biochemistry and function of dolichol phosphate mannose synthases (DPMS) are investigated. These enzymes are responsible for the transfer of mannose from a nucleotide sugar donor to the acceptor lipid dolichol phosphate, forming dolichol phosphate mannose (Dol-P-Man). In eukaryotes and archaea, Dol-P-Man is the key mannose donor for mannosylation reactions inside the endoplasmic reticulum (ER) lumen or on the extracellular leaflet of the cell membrane, respectively. As the synthesis of Dol-P-Man is known to take place on the cytoplasmic side of the ER membrane in eukaryotes or the cell membrane in archaea, the question remains how Dol-P-Man is transported onto the other side of the membrane to serve as a mannose donor. This thesis presents a hypothesis in which the DPMS itself is responsible for the flipping of its own product. The hypothesis is supported by crystallographic data that show Dol-P-Man bound to a DPMS in a “flipped” orientation that could enable the transport to the other side of the membrane. This thesis also covers the recombinant expression, purification, and in vitro characterization of DPMS from the zebrafish Danio rerio. This DPMS is similar to the human enzyme and can therefore yield mechanistic details behind DPMS-related diseases.

    The second part covers the development of scattering-type scanning near-field optical microscopy (s-SNOM) to study proteins in solution. The method is capable of collecting images and infrared spectra from samples at nanometer-scale lateral resolution. The method is not readily applicable to the study of objects in solution, but this limitation can be circumvented by the use of a liquid cell. The liquid cell is first used to probe the stretching vibrations of water at the nanoscale, and the method is then further developed and applied to collect images and spectra from purple membranes, a model membrane comprising tightly packed bacteriorhodopsin molecules and associated lipids.

  • Public defence: 2026-01-29 14:00 Kollegiesalen, Stockholm
    Chechan, Batoul
    KTH, School of Industrial Engineering and Management (ITM), Learning, Learning in STEM.
    Swedish high school students’ understanding of functions: The role of digital tools, 2026. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis explores Swedish high school students’ learning experiences with the concept of functions, focusing on the challenges they encounter and the role of digital tools in deepening their understanding. Despite its central role in mathematics, the concept of functions remains one of the most challenging topics for many students. Because functions are a foundation for mathematical modelling, logical reasoning, and problem-solving, addressing these learning difficulties is essential for long-term mathematical development. With the growing integration of technology in education, digital tools have become increasingly significant in mathematics instruction. This thesis examines how such tools enhance conceptual understanding by analysing how students use them to solve function-related tasks. It draws on four interconnected papers that together provide a comprehensive perspective on students’ learning of functions and the pedagogical effects of digital tool usage. Paper I presents a qualitative analysis based on observations of two high school students solving function problems. It identifies misconceptions and highlights how strong graphical knowledge and visual reasoning reduce learning difficulties. Paper II is a quasi-experimental comparison between students using a digital tool and those employing traditional methods. Results show that students using digital tools performed significantly better. Paper III investigates how students independently use a digital tool when solving mathematical tasks, identifying four key strategies: using it as the primary solving method, for verification, as a support tool, or variably depending on the task. Paper IV examines the digital tool usage of teachers and students, as well as their respective perspectives. It further explores teaching strategies, teachers’ perspectives on student usage, and the instructions students receive about digital tools. Findings across the studies indicate that visualisation through digital tools enhances conceptual understanding by providing dynamic, interactive learning experiences. Students can autonomously use these tools in diverse ways, influencing their learning outcomes and engagement. The research underscores the importance of teacher facilitation in guiding meaningful tool use and offers practical insights for integrating technology into mathematics education. Overall, this thesis contributes to the growing field of technology-enhanced mathematics education and provides recommendations for improving students’ understanding of functions through digital means. Collectively, the studies demonstrate that digital visualisation makes abstract concepts more tangible, enables multiple solution pathways, and supports student understanding. However, they also emphasise that realising the potential of digital tools depends on how they are used and on intentional pedagogical integration.

  • Public defence: 2026-01-30 09:00 https://kth-se.zoom.us/j/65512985691, Stockholm
    Börjeson, Charlie
    KTH, School of Engineering Sciences (SCI), Applied Physics, Bio-Opto-Nano Physics.
    Peripheral image quality and myopia, 2026. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Myopia (nearsightedness) is when the axial length of the eye is too long relative to its focal length. This typically occurs because of excessive eye growth in childhood and results in blurred vision for distant objects. The elongation of the eye furthermore increases the risk of sight-threatening ocular diseases later in life. Myopia has increased globally in the past decades, but the underlying mechanisms behind myopic eye growth are not yet fully understood.

    Studies in animals have found that a peripheral focus behind or in front of the retina can signal the eye to grow or to stop growing, respectively. More recently, various optical myopia control spectacles and contact lenses that modify peripheral image quality have been able to slow myopia progression in children, i.e., act as myopia control, though there are large, unexplained individual variations. The aim of this thesis is to identify characteristics in the peripheral image quality related to myopia.

    In this thesis, peripheral image quality was predominantly measured with a dual angle wavefront aberrometer, employing two Hartmann-Shack wavefront sensors and connected relay systems. The properties of 4f relay systems, and an alternative non-4f relay system, were investigated, and the results used during development of the dual angle aberrometer.

    We investigated the effect of optical myopia control spectacles (one progressive design and two microlens designs) on peripheral image quality. We found that the progressive design induced a more negative relative peripheral refraction (RPR), i.e., shifted the peripheral image more in front of the retina. The two microlens designs did not change the relative peripheral refraction; instead, they made the peripheral image blurrier, irrespective of habitual RPR. This indicates that progressive and microlens spectacles have different myopia control mechanisms. We also studied changes in corneal aberrations during orthokeratology (rigid night lenses that reshape the cornea), and found that higher baseline myopia was correlated with a better myopia control effect and with larger corneal changes. As orthokeratology has been found to induce more negative RPR, this implies that the working mechanisms of orthokeratology and progressive myopia control spectacles are similar.

    Additionally, we investigated differences in peripheral image quality in children, as well as adult myopes and emmetropes. We found that the children and adult non-myopes had asymmetric RPR profiles (nasal vs. temporal visual field), but not the adult myopes. The asymmetry strengthened during near-work, suggesting that RPR during near work could be important for myopia regulation. We also found that non-myopes had less well-defined peripheral foci (a broader 'depth-of-refraction') than myopes (in both adults and children), often with multifocal characteristics.

    Finally, we investigated the longitudinal development of the children over one year. Larger axial length growth was linked to more positive baseline RPR, particularly during near-work, although not to a significant degree. We will continue to monitor RPR, peripheral image quality, and axial growth in the children over the coming years.

  • Public defence: 2026-01-30 10:00 https://kth-se.zoom.us/j/66026793395, Stockholm
    Nourazar, Mehdi
    KTH, School of Industrial Engineering and Management (ITM), Materials Science and Engineering, Properties.
    Atomic Scale Investigation of Defects in High-Performance Materials, 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Transition metal carbides of groups 4 and 5 (TiC, ZrC, HfC, VC, NbC, TaC) with the rocksalt (B1) structure are critical refractory materials for extreme-temperature applications due to their exceptional hardness, high melting points, and thermal stability. Their high-temperature behavior, governed by point defects and diffusion, has long been perplexing, with experimental metal self-diffusion activation energies (∼7.5 eV in TiC and ZrC) and anomalously high prefactors (entropies of 10–14.5 k_B in TiC) conflicting with traditional ab initio predictions assuming unreconstructed vacancies.

    This thesis addresses these discrepancies through systematic density functional theory (DFT) investigations, revealing that metal vacancies in group 4 and certain group 5 carbides spontaneously reconstruct by displacing neighboring carbon atoms to form strong C–C bonds. A combinatorial enumeration in TiC identified a rich landscape of reconstructed configurations, with the ground-state structure featuring a planar graphene-like C dimer lowering the Ti vacancy formation energy by 3.5 eV relative to the unreconstructed state. This reconstruction dramatically reduces Schottky defect formation energies from 7–8 eV (unreconstructed) to 3.98 eV (TiC), 6.08 eV (ZrC), 7.14 eV (HfC), and 1.97 eV (VC), while NbC and TaC retain unreconstructed vacancies (∼2.7 eV). Trends across the MeX (X = C, N, O) series correlate with valence electron count and bond covalency. Ab initio molecular dynamics (AIMD) at 1500–3000 K demonstrate that the C dimer in the 2G structure undergoes thermally activated rotation above 1500 K, periodically opening the vacancy site and enabling Ti jumps into metastable open configurations with migration barriers of 3.5–4.0 eV. The resulting activation energy of ∼7.5 eV is in agreement with experimental values. The anomalously high diffusion entropy arises from the large configurational and vibrational entropy of the reconstructed vacancy ensemble, particularly the dimer’s rotational degree of freedom (rotational diffusion coefficient ∼1.5 × 10^12 s^-1 at 2500 K) and numerous low-energy C-bonded metastable states. Reconstruction also induces strong short-range repulsion between vacancies, preventing clustering and restoring the classical dissociated Schottky picture, contrary to earlier cluster-based models. These findings establish a monovacancy-mediated diffusion mechanism driven by dynamic carbon reconstruction as the dominant metal transport pathway in group 4 carbides.

    The insights are extended to technologically vital WC–Co cemented carbides, where vacancy-reconstruction-mediated processes at the surfaces of WC particles and WC/Co interfaces control Ostwald ripening, abnormal grain growth, and phase stability during liquid-phase sintering. The reconstructed vacancy framework provides a new atomic-scale foundation for defect engineering of refractory carbides, enabling predictive modeling of creep, sintering, and microstructural evolution in ultra-high-temperature ceramics and cemented carbides for aerospace, nuclear, and cutting-tool applications.
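    The diffusion picture above can be illustrated with a plain Arrhenius estimate. The split of the ∼7.5 eV activation energy into a formation-related part and the 3.5–4.0 eV migration barrier below is an assumed, illustrative partition, not a value quoted from the thesis:

```python
import math

K_B = 8.617e-5   # Boltzmann constant in eV/K

def arrhenius(d0, e_act, temp):
    # Arrhenius law: D = D0 * exp(-E_a / (k_B * T))
    return d0 * math.exp(-e_act / (K_B * temp))

# Illustrative split of the activation energy (assumed values):
e_form, e_mig = 3.9, 3.6
e_act = e_form + e_mig
# How strongly a ~7.5 eV barrier suppresses diffusion at lower temperature:
ratio = arrhenius(1.0, e_act, 3000) / arrhenius(1.0, e_act, 2000)
print(e_act, ratio)
```

    The anomalous experimental prefactors then correspond to an unusually large entropy term multiplying this exponential, which the thesis attributes to the rotational and configurational freedom of the reconstructed vacancy.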

  • Public defence: 2026-01-30 11:00 https://kth-se.zoom.us/j/68657960472, Stockholm
    Kronborg, Joel
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Turbulence Generation and Left Ventricular Hemodynamics Elucidated Through Flow Decomposition, 2025. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In recent years, the triple decomposition of the velocity gradient tensor has emerged as a novel vortex identification method in fluid flows. Although early algorithms for computing it were limited by an incomplete physical interpretation of the underlying mathematics, the decomposition has the potential to contribute to more than just vortex identification, such as shear estimation in blood flow and analysis of turbulence generation.

    An attractive feature of the triple decomposition is its ability to give a rotation measure uncontaminated by shear, something that many established methods fail to do. However, several different algorithms have been proposed for computing it, and not all of them yield the same results. Here, advances are presented not only in explaining this non-uniqueness and motivating a unified and simplified approach for computing the triple decomposition, but in widening the scope of its applications as well.
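The shear-free rotation measure described above can be illustrated in two dimensions, where the triple decomposition admits a simple closed form (the thesis works with the full 3D tensor, which requires a search over reference frames). A minimal sketch, assuming the standard 2D residual-vorticity formula:

```python
import numpy as np

def triple_decomposition_2d(J):
    """Split a 2D velocity gradient tensor J into strain, shear, and
    residual (shear-free) rotation. Returns (omega, sigma, omega_res)."""
    S = 0.5 * (J + J.T)                       # strain-rate tensor (symmetric part)
    S = S - 0.5 * np.trace(S) * np.eye(2)     # remove dilatation -> traceless
    omega = J[1, 0] - J[0, 1]                 # vorticity (antisymmetric part)
    sigma = np.sqrt(S[0, 0]**2 + S[0, 1]**2)  # principal strain-rate magnitude
    # Residual vorticity: the rotation left after subtracting the shear part.
    omega_res = np.sign(omega) * max(0.0, abs(omega) - 2.0 * sigma)
    return omega, sigma, omega_res

# Pure shear u = (g*y, 0): all apparent rotation is shear contamination.
J_shear = np.array([[0.0, 2.0], [0.0, 0.0]])
# Solid-body rotation: the rotation survives untouched.
J_rot = np.array([[0.0, -1.0], [1.0, 0.0]])
```

For pure shear the residual vorticity vanishes, while for solid-body rotation it equals the full vorticity; a plain vorticity measure would report rotation in both cases, which is the contamination the triple decomposition removes.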

    In blood flow, shear is an important parameter that, if sustained at a high level, may contribute to platelet activation and subsequent thrombosis events such as stroke or myocardial infarction. Simulations are presented here of the intraventricular blood flow in the left ventricle of a human heart, both using a simplified model of the mitral valve to simulate transcatheter edge-to-edge repair, and introducing a novel arbitrary Lagrangian-Eulerian fluid-structure interaction model of the mitral valve. The triple decomposition is demonstrated to outperform the established von Mises-like scalar shear stress, which is shown to be contaminated by strain.

    A mathematical stability analysis of the shear, strain and rotation components from the triple decomposition is also used to motivate a novel process in turbulence generation. In a simulation of two adjacent vortices interacting to develop turbulent flow, a zig-zag pattern is identified as a mechanism that rearranges small-scale secondary vortices to transfer energy to larger scales, contributing to the formation of a turbulent energy spectrum.

    The results presented in this thesis contribute not only to better understanding and more straightforward computation of the triple decomposition, but also demonstrate its usefulness in improving analysis of potentially adverse shear in blood flow, as well as of fundamental aspects of turbulence generation.

    Download full text (pdf)
    kappa
  • Public defence: 2026-02-03 14:00 https://kth-se.zoom.us/j/63739777936, Stockholm
    Joshi, Sushen
    KTH, School of Electrical Engineering and Computer Science (EECS), Electrical Engineering, Space and Plasma Physics.
    Insights into Uranus’ atmosphere from HST FUV observations and radiative transfer modelling2026Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Uranus is one of the extreme worlds in the Solar System. Its large axial tilt of 98° and orbital period of 84 years lead to unique seasons. It has been visited only by the Voyager 2 spacecraft and remains one of the most poorly understood planets in the Solar System. Uranus’ atmosphere is primarily composed of atomic and molecular hydrogen (H and H2, respectively), helium (He), and methane (CH4). One of the strongest emission lines from the Sun in the ultraviolet is Lyman alpha (Lyα, 1215.67 Å). It is efficiently scattered by H and H2, and absorbed by hydrocarbons (mostly CH4) in planetary atmospheres. This makes remote sensing observations at Lyα and associated wavelengths an excellent tool to study giant planets’ upper atmospheres. At giant planets, the upper atmosphere plays a key role in various processes such as photochemistry, interaction with the plasma environment and possibly the solar wind, magnetosphere-ionosphere coupling, atmospheric escape, and interaction with ring particles. In this thesis, we analysed Hubble Space Telescope (HST) observations of Uranus obtained at Lyα and 1280 Å wavelengths, and performed radiative transfer simulations considering resonant scattering by H, Rayleigh-Raman scattering by H2, and absorption by CH4. The results and insights into Uranus’ neutral upper atmosphere gained from the work are presented in a series of papers.

    Our analyses of the first spatially resolved images of Uranus’ Lyα emissions, obtained in 1998 and 2011, revealed an extended exosphere of gravitationally bound hot H. The abundance of this hot H varied with time and cannot be explained by production mechanisms involving solar UV radiation alone, pointing to additional energetic processes (Paper I). Further, we analysed Uranus’ Raman-scattered Lyα emissions at 1280 Å, unique among the Solar System giant planets. Using the observed brightness of these emissions, we constrained the vertical distribution of methane in Uranus’ upper atmosphere, providing key inputs for photochemical modelling (Paper II). Our 2024 HST observations revealed a significant increase in exospheric hot H abundance compared to 1998 and 2011, indicating an increase in energetic processes creating this hot H. We also found a persistent azimuthal variation in the exospheric Lyα emissions. Thus, we provide tentative evidence of the role of energetic particles in the Uranian magnetosphere in producing the hot H observed in the exosphere (Paper III).

    Download full text (pdf)
    Sushen_Joshi_PhD_Thesis
  • Public defence: 2026-02-06 09:00 Kollegiesalen, Stockholm
    Truong, Minh
    KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Aerospace, moveability and naval architecture.
    Decoding gait in individuals with spinal cord injury: From explainable AI to predictive simulations2026Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    While current biomechanics research based on normal models and assumptions of normalcy has substantial merit, it fails to reliably describe individuals with impairments. Spinal cord injury (SCI), whether traumatic or nontraumatic, can partially or completely damage sensorimotor pathways, leading to heterogeneous gait abnormalities. A substantial knowledge gap exists regarding biomechanical and neurological movement strategies in this population due to complex, interacting factors including age, weight, time since injury, pain, sensorimotor impairment, and spasticity. The ASIA Impairment Scale, while recommended for classifying injury severity, was not designed to characterize individual ambulatory capacity. Other standardized assessments based on subjective ratings or timing/distance measures have limited ability to characterize functional capacity in this population comprehensively.

    This thesis therefore aims to create computational frameworks for studying walking strategies in individuals with SCI, particularly incomplete SCI (iSCI), through two complementary approaches: developing machine learning algorithms that link individual characteristics to gait outcomes, and individualizing objective functions and constraints in predictive simulations using neuromusculoskeletal modeling.

    Study I proposed and evaluated a framework applying Gaussian Process Regression and SHapley Additive exPlanations (SHAP) to quantify how neurological impairments and other demographic and anthropometric factors contribute to walking speed and net oxygen cost during a six-minute walk test. Individual SHAP analyses quantified how these factors influenced walking performance for each participant, informing personalized rehabilitation targeting areas with the most potential for improvement.
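SHAP approximates Shapley values; with only a handful of features they can also be computed exactly by averaging each feature's marginal contribution over all coalitions. A toy sketch of that computation (the two "gait" features and the linear model are invented for illustration, not taken from Study I):

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values: each feature's average marginal contribution
    over all coalitions. Features absent from a coalition are replaced by
    their baseline value."""
    n = len(x)
    def value(coalition):
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return predict(z)
    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(set(S) | {i}) - value(set(S)))
        phi.append(total)
    return phi

# Hypothetical "walking speed" model: linear in a strength score and a
# spasticity score (purely illustrative).
model = lambda z: 1.0 + 0.5 * z[0] - 0.3 * z[1]
phi = shapley_values(model, x=[2.0, 1.0], baseline=[0.0, 0.0])
```

For a linear model the Shapley values recover each term's contribution exactly, and they always sum to the difference between the prediction at `x` and at the baseline (the efficiency property SHAP relies on).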

    Study II stratified gait heterogeneity in individuals with iSCI by deriving clusters with similar gait patterns without a priori parameter identification and assessed clinical correlations within the derived clusters. Six distinct gait clusters were identified and characterized among 280 iSCI gait cycles, informing more individualized rehabilitation.

    Study III characterized margin of stability, temporospatial parameters, and joint mechanics in four iSCI subgroups from Study II compared to participants without disability, identifying how gait adaptations evolve as muscle weakness affects major muscle groups. Gait patterns remained normal with isolated mild plantarflexor weakness but deteriorated with combined hip muscle weakness and severe plantarflexor weakness.

    Study IV developed a bilevel optimization framework using Bayesian optimization to automatically identify optimal objective weights for predictive gait simulations in individuals with iSCI. Tested on one female participant with asymmetric muscle weakness, the framework successfully automated weight identification in 9–12 days and demonstrated that simulations with optimized weights outperformed literature-based reference weights for predicting kinematics, kinetics, and ground reaction forces, showing promise for systematically exploring personalized compensatory gait strategies with predictive simulations.
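The bilevel structure in Study IV can be sketched with a toy problem: an inner "simulation" that optimizes a weighted objective, and an outer search over the weight that best reproduces a reference measurement. Here plain grid search stands in for Bayesian optimization, and all quantities are invented:

```python
import numpy as np

# Inner problem: for a weight w, the "simulation" picks the gait variable q
# minimizing the weighted sum  w*(q-1)^2 + (1-w)*q^2  (effort vs. deviation).
# Setting the derivative to zero gives the closed form q = w.
def inner_simulation(w):
    return w

q_ref = 0.7  # "measured" reference value (invented for illustration)

# Outer problem: find the weight whose simulated gait matches the reference.
weights = np.linspace(0.0, 1.0, 101)
outer_loss = [(inner_simulation(w) - q_ref) ** 2 for w in weights]
best_w = weights[int(np.argmin(outer_loss))]
```

In the thesis the inner problem is a full neuromusculoskeletal simulation and each outer evaluation is expensive, which is why sample-efficient Bayesian optimization replaces the exhaustive grid used in this sketch.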

    These findings demonstrate the potential of advanced data-driven and simulation techniques to address gait complexity in individuals with SCI, with broader applicability to other clinical populations.

    Download full text (pdf)
    Minh_PhDThesis_Kappa
  • Public defence: 2026-02-06 09:00 Air & Fire, via Zoom: https://kth-se.zoom.us/j/62549123996, Stockholm
    Bendes, Annika
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Protein Science, Affinity Proteomics. KTH, Centres, Science for Life Laboratory, SciLifeLab.
    Applications of multiplexed immunoassays for precision medicine2026Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Proteins are molecules that play central roles in almost all biological processes. Their abundance in cells, tissues, and body fluids is dynamic, reflecting both physiological states and disease-related changes. When studying proteins, a major challenge is distinguishing normal biological variation from alterations that indicate early or ongoing disease. Proteomics, the measurement of hundreds of proteins at the same time, deepens our understanding of how protein signatures relate to health and disease. This helps establish molecular measurements of so-called biomarkers that support precision medicine through earlier detection, better disease stratification, and more individualized treatment strategies.

    In the studies included in this thesis, we applied affinity proteomics techniques to investigate how levels of antibodies and proteins in blood samples related to health and disease and to expand our understanding of protein-protein interactions of drug targets. 

    Although proteins can be measured in different sample types, blood offers a minimally invasive window into the body, allowing measurement of molecules originating from many organs and biological processes. Home-sampled dried blood spots (DBS) have gained renewed interest due to the recent development of newer and more accurate sampling cards. In several studies included in this thesis, we demonstrate that DBS can be used for sampling in the general population without relying on or involving clinical facilities and healthcare resources. In Paper I, we established an analytical procedure for measuring antibodies against SARS-CoV-2 in home-sampled DBS. In Paper II, we expanded this effort to protein measurements and longitudinal sampling. In Paper III, we showed the importance of even more frequent DBS sampling for capturing the dynamic changes of inflammation-related proteins following infection, demonstrating how these early changes in DBS protein levels can support the timing of clinical interventions. Together, these findings highlight the potential of DBS for remote and continuous health monitoring in precision health approaches.

    Proteins are also among the most common targets of therapeutic drugs. Many proteins, however, also interact with other proteins, and such complexes can critically influence how a drug binds to its target, its therapeutic efficacy, and the risk of side effects. In Paper IV, we established an affinity proteomics workflow for validating binding reagents, which we then applied in Paper V to investigate potential protein-protein interactions of membrane proteins. The insights gained can help improve our understanding of biologically relevant protein interactions, aiding the development of more selective and effective drug candidates.

    Overall, the studies presented in this thesis contribute valuable insights to the transition toward precision health by enabling scalable remote sampling and by deepening our understanding of protein interactions relevant to both normal physiology and disease.

    Download (pdf)
    fulltext
  • Public defence: 2026-02-06 09:30 F3, Stockholm
    Terra, Ahmad
    KTH, School of Industrial Engineering and Management (ITM), Engineering Design, Mechatronics and Embedded Control Systems.
    Explainable Artificial Intelligence for Telecommunications2026Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Artificial Intelligence (AI) is a key driver of technological development in many industrial sectors. It is being embedded into many components of telecommunications networks to optimize their functionality in various ways. AI technologies are advancing rapidly, with increasingly sophisticated techniques being introduced. Therefore, understanding how an AI model operates and arrives at its output is crucial to ensure the integrity of the overall system. One way to achieve this is by applying Explainable Artificial Intelligence (XAI) techniques to generate information about the operation of an AI model. This thesis develops and evaluates XAI techniques to improve the transparency of AI models.

    In supervised learning, several XAI methods that compute feature importance were applied to identify the root cause of network operation issues. Their characteristics were compared and analyzed for local, cohort, and global scopes. However, the generated attributive explanations do not provide actionable insight to resolve the underlying issue. Therefore, another type of explanation, namely counterfactual, was explored during the study. This type of explanation indicates the changes necessary to obtain a different result. Counterfactual explanations were utilized to prevent potential issues such as Service Level Agreement (SLA) violations from occurring. This method was shown to significantly reduce SLA violations in an emulated network, but requires explanation-to-action conversion.
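The counterfactual idea, finding the smallest input change that flips a predicted outcome, can be sketched on a toy linear risk model (the model, features, and threshold are invented for illustration and are not the thesis's network setting):

```python
# Toy "SLA violation risk" model: positive risk means a violation is predicted.
def risk(x):
    load, capacity = x
    return 0.8 * load - 0.5 * capacity  # invented linear toy model

def counterfactual(x, feature, step=0.01, max_iter=10000):
    """Greedily adjust one feature until the predicted risk drops below zero.
    The returned change is the actionable part of the explanation."""
    x = list(x)
    for _ in range(max_iter):
        if risk(x) < 0:
            return x
        # Raising capacity (feature 1) or shedding load (feature 0).
        x[feature] += step if feature == 1 else -step
    return None

x0 = [1.0, 1.0]                       # risk(x0) = 0.3 -> violation predicted
cf = counterfactual(x0, feature=1)    # "add this much capacity to avoid it"
```

The attributive explanations mentioned above would only say that load and capacity mattered; the counterfactual states what to change, which is why the thesis uses it to prevent SLA violations, at the cost of an explanation-to-action conversion step.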

    Unlike the previous method, a Reinforcement Learning (RL) agent can perform an action in its environment to achieve its goal, eliminating the need for explanation-to-action conversion. Therefore, understanding its behavior becomes important, especially when it controls critical infrastructure. In this thesis, two state-of-the-art Explainable Reinforcement Learning (XRL) methods, namely reward decomposition and Autonomous Policy Explanation (APE), were investigated and implemented to generate explanations for different users, technical and non-technical, respectively. While reward decomposition explains the output of a model and feature attribution explains the input, the connection between them was missing in the literature. In this thesis, the combination of feature importance and reward decomposition methods was proposed to generate detailed explanations as well as to identify and mitigate bias in AI models. In addition, a detailed contrastive explanation can be generated to explain why an action is preferred over another. For non-technical users, APE was integrated with the attribution method to generate explanations for a certain condition. APE was also integrated with a counterfactual method to generate a meaningful explanation. However, APE has a limitation in scaling up with the number of predicates. Therefore, an alternative textual explainer, namely Clustering-Based Summarizer (CBS), was proposed to address this limitation. The evaluation of textual explanations is limited in the literature. Therefore, a rule extraction technique was proposed to evaluate textual explanations based on their characteristics, fidelity, and performance. In addition, two refinement techniques were proposed to improve the F1 score and reduce the number of duplicate conditions.

    In summary, this thesis has developed the following contributions: a) implementation and analysis of different XAI methods; b) methods to utilize explanations and explainers; c) evaluation methods for AI explanations; and d) methods to improve explanation quality. This thesis revolves around network automation in the telecommunications field. The explainability methods for supervised learning were applied to a network slice assurance use case, and those for reinforcement learning to a network optimization use case (namely, Remote Electrical Tilt (RET)). In addition, applications in other open-source environments were also presented, showing broader applicability in different use cases.

    Download full text (pdf)
    kappa
  • Public defence: 2026-02-06 10:00 Q2, Stockholm
    Pucci, Giulia
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Probability, Mathematical Physics and Statistics.
    Deep Learning and Optimal Stochastic Control with Applications2026Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis brings together theoretical advances in stochastic optimal control and modern deep learning techniques, with particular emphasis on applications in environmental and energy systems. The first group of contributions investigates optimal control from a theoretical perspective, developing new results and illustrating their relevance through real world applications. The second part explores deep learning methods for solving stochastic differential equations and control problems that are analytically intractable.

    We begin by studying impulse control problems for conditional McKean--Vlasov jump diffusions, extending the classical verification theorem to the setting in which the state dynamics depend on their conditional distribution. We then examine an optimal control problem for pollution growth on a spatial network, formulated in a deterministic framework but capturing how environmental policies propagate across interconnected geographical regions. Finally, we develop a model for investment in renewable energy capacity under uncertainty, characterising how optimal installation strategies change in response to fluctuations in energy demand and production. These contributions show how stochastic control can be used to address pressing challenges in environmental regulation and energy planning.

    The second line of research focuses on deep learning methods for backward stochastic differential equations (BSDEs) and related formulations, together with direct machine learning approaches for high-dimensional stochastic control. Specifically, we solve Dynkin games by reformulating them as doubly reflected BSDEs, enabling the computation of optimal stopping strategies in energy market contracts. We further develop a deep learning solver for backward stochastic Volterra integral equations (BSVIEs), extending neural BSDE methods to systems with memory. In addition, we propose a machine learning framework for renewable capacity investment under jump uncertainty, treating the problem both through a direct control learning strategy and through a newly developed solver for pure jump BSDEs.

    Overall, this thesis lies at the intersection of rigorous mathematical analysis and machine learning-based approaches to stochastic optimal control. On the one hand, we show how careful modeling and theoretical results enable the formulation and study of complex, realistic control problems; on the other hand, we demonstrate how modern machine learning techniques provide powerful tools for solving these problems efficiently. The applications are motivated by urgent questions in environmental and energy sustainability.

    Download full text (pdf)
    kappa
  • Public defence: 2026-02-06 13:00 https://kth-se.zoom.us/j/65337054278, Stockholm
    Katsikeas, Sotirios
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering.
    Developing and validating domain specific languages for cyberattack modeling and simulations2026Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis explores the potential of domain-specific languages (DSLs) to enhance the accuracy, efficiency, and expressiveness of cyberattack modeling and simulation. Motivated by the increasing sophistication of cyber threats, this work addresses the limitations of traditional modeling approaches by developing and validating two novel DSLs: one tailored for vehicular systems and another for the Information and Communications Technology (ICT) domain. These languages provide specialized vocabulary and syntax for describing attack patterns, system behaviors, and defense mechanisms concisely and straightforwardly. Through a series of experiments and case studies, this research demonstrates the effectiveness of these DSLs in capturing the complexities of real-world cyberattacks. These languages enable the automatic generation of attack graphs from system architecture models, streamlining threat identification and enhancing the alignment of security measures with established frameworks for cybersecurity professionals. This thesis contributes to the advancement of cyberattack modeling and simulation techniques, providing cybersecurity professionals with tools to express, analyze, and predict the behavior of cyberattacks.

    Download full text (pdf)
    Full thesis
  • Public defence: 2026-02-06 13:00 D2, via Zoom: https://kth-se.zoom.us/j/66460872948, Stockholm
    Pohjanen, Emmie
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Protein Science, Cellular and Clinical Proteomics. KTH, Centres, Science for Life Laboratory, SciLifeLab.
    Spatial proteome mapping of specialized subcellular structures in human cells2026Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Proteins are the primary workhorses of the cell, carrying out virtually all processes to sustain cellular functioning. From enzymes that catalyze biochemical reactions, to motor proteins that transport large cellular cargo across the cell, protein functions are as diverse as the unique amino acid sequences that compose the proteins. Protein function is largely dependent on the subcellular localization of the protein, as subcellular compartmentalization enables different environments that are suitable for different reactions. Knowledge about protein localization and function can, in the broader context, help us understand the cell in health and disease, as protein dysfunction and mislocalization are key drivers of disease development.

     The work in this thesis has been carried out within the framework of the Human Protein Atlas (HPA) initiative, primarily for the subcellular resource. In Paper I, we measured the autoantibody profiles of patients with systemic sclerosis with the goal of identifying new candidate biomarkers associated with fibrosis. We performed a near proteome-wide, untargeted screen combined with a targeted bead array and revealed 11 autoantibodies with higher prevalence in patients with systemic sclerosis than in controls. Two of these show high potential for use as biomarkers for patients with systemic sclerosis who are affected by skin and lung fibrosis.

     For Paper II, we took advantage of the vast image library generated by the subcellular resource of HPA to create an image-based map of the micronuclear proteome. In total, we identified 944 proteins as micronuclear, dominated by proteins associated with nuclear and chromatin processes. The findings of this study expand our view of micronuclei from byproducts of mitotic errors to potential active participants in biological processes. In Paper III, we applied antibody-based spatial proteomics combined with 3D confocal imaging to map 715 proteins to primary cilia, and to three ciliary substructures, across three different cell lines. Of the identified proteins, 91 had not been identified in cilia before, expanding our knowledge of the ciliary proteome and function. The findings of the study portray cilia as sensors able to tune their proteome to effectively sense the environment and compute cellular responses. Finally, in Paper IV, we mapped the subcellular localization of a subset of the human sperm proteome to 11 distinct subcellular structures of human sperm cells, providing the first image-based resource on protein localization in sperm cells. We found that 54% of the studied sperm proteins vary in spatial distribution and/or abundance between individual sperm, raising the question of whether subpopulations of sperm exist.

     In summary, this thesis expands our knowledge of protein localization in specialized subcellular structures and provides a foundation for further in-depth research into the mechanisms driving certain diseases, such as autoimmunity, cancers, ciliopathies, and male infertility phenotypes.

    Download (pdf)
    kappa
  • Public defence: 2026-02-06 14:00 F3, Stockholm
    Shahverdi, Vahid
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Algebra, Combinatorics and Topology.
    Manifolds of Learning2025Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Neural networks are central to modern machine learning, with applications that range from computer vision to natural language processing. Despite their success, their mathematical foundations are barely understood. At the heart of every such model lies a training procedure that amounts to solving a highly nonconvex optimization problem with many potential solutions, yet optimization algorithms often find parameters that not only fit the data but also generalize well to unseen samples. Why neural networks exhibit this favorable behavior, and how architectural choices influence it, remain fundamental open questions that call for new mathematical tools.

    This thesis suggests one promising approach, neuroalgebraic geometry, whose research program is to study neural networks through the lens of algebraic geometry. In this framework, nonlinearities such as activation functions are replaced with algebraic counterparts, for instance polynomials, so that the resulting models become amenable to rigorous algebro-geometric analysis. Since polynomials are universal approximators, by a limit argument, the methods developed in neuroalgebraic geometry can extend beyond the polynomial realm and thus bridge the gap between algebraic models and practical neural networks.

    Through neuroalgebraic geometry, we study the function space parameterized by a given neural network architecture, which we refer to as the neuromanifold. Its dimension and volume reflect how rich the model is and how well it can generalize from data. Singular points, places where the neuromanifold is not regular, characterize implicit biases that arise during training. The analysis of the parametrization map relates to the identifiability of neural networks, a property that is essential for the design of equivariant architectures in which symmetries of the data are encoded into the model. Finally, viewing optimization through this geometric lens connects the landscape of the loss function to the underlying structure of the neuromanifold. The algebraic setting makes these analyses more feasible since it makes the ambient space of the neuromanifold finite-dimensional.

    The main goal of this thesis is to analyze the functionality of two important architectures, multilayer perceptrons (MLPs) and convolutional neural networks (CNNs), through the lens of algebraic geometry.

    In Paper A, we present a position paper that introduces and motivates the emerging research area of neuroalgebraic geometry. We construct a dictionary between algebro-geometric concepts (such as dimension, degree, and singularities) and key machine learning phenomena (including sample complexity, expressivity, and implicit bias). Along the way, the paper provides a concise literature overview and argues for new connections at the intersection of algebraic geometry and machine learning.

    In Paper B, we investigate linear convolutional networks with single-channel and one-dimensional filters. By examining their neuromanifold, we provide its dimension and singularities. Furthermore, by considering optimization with squared loss, we show that the critical points of the parameterization corresponding to spurious points are not attracted by gradient-based optimization once all strides are larger than one.

    In Paper C, which continues the investigation from Paper B, we introduce a recursive algorithm that generates the polynomial equations defining the Zariski closure of the neuromanifold of linear convolutional networks. We further provide the exact number of (complex) critical points that arise when training these networks with squared loss and generic data.

    In Paper D, we examine linear invariant and equivariant networks under permutation groups. We determine the dimension, degree, and singular locus of the neuromanifold for these models. We then analyze the number of (complex) critical points that can arise during training. Furthermore, we show that the neuromanifold of linear equivariant networks comprises many irreducible components that cannot be parameterized by a single fixed architecture, and thus the choice of architecture determines which irreducible component we parameterize.

    In Paper E, which is our first exploration of non-linear activation functions, we analyze single-channel, one-dimensional-filter convolutional networks with monomial activation functions. We show that its neuromanifold, once projectified, is birational to a Segre--Veronese variety, a well-known object in classical algebraic geometry. We then describe its algebraic invariants such as dimension and degree and characterize its singular points, including their types. Finally, we provide an exact formula for the number of (complex) critical points arising when training with generic data under squared loss optimization.

    In Paper F, we investigate both MLPs and CNNs with generic polynomial activation functions. We prove that no continuous symmetry exists in either model, i.e., the generic fiber of the parameterization is finite. Consequently, the dimension of the neuromanifolds coincides with the number of parameters. Furthermore, we show that in both models, certain subnetworks correspond to singular points of the neuromanifold. Finally, for CNNs the parameters associated with these subnetworks are not critically exposed, whereas for MLPs they are critically exposed, meaning that they appear as critical points of the loss function with nonzero probability over the data distribution.

    Although the main thrust of this thesis concerns the geometry and optimization of neural networks, the perspective developed here---framing learning problems as algebraic optimization over structured sets---also motivated a new approach to a classical problem in signal processing. 

    In Paper G, we address the problem of multireference alignment (MRA), where multiple noisy observations of a one-dimensional signal are available, each subjected to an unknown circular shift. The objective is to reconstruct the underlying signal up to circular shift. We introduce a novel algorithm that minimizes a distance function defined on the manifold of signals whose second-order moments agree with those estimated from the observations. We analyze the optimization problem both in the finite- and infinite-data regimes. In the latter, we show that the true signal is always a critical point of the loss function, and our empirical results show that it corresponds to a global minimum.
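The MRA setting can be illustrated with a simple baseline: align the noisy shifted copies to one of them by circular cross-correlation, then average. This is not the moment-matching algorithm of Paper G, which is designed for regimes where such pairwise alignment breaks down; the signal and noise level here are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.array([0.0, 1.0, 3.0, 1.0, 0.0, -1.0, 0.5, 0.0])
n = len(signal)

# Noisy observations, each circularly shifted by an unknown amount.
obs = [np.roll(signal, rng.integers(n)) + 0.05 * rng.standard_normal(n)
       for _ in range(200)]

# Align every observation to the first one via circular cross-correlation
# (computed with FFTs), then average. Recovers the signal up to a shift.
ref = obs[0]
aligned = []
for y in obs:
    xc = np.fft.ifft(np.fft.fft(ref) * np.conj(np.fft.fft(y))).real
    aligned.append(np.roll(y, int(np.argmax(xc))))
estimate = np.mean(aligned, axis=0)
```

At this (high) signal-to-noise ratio the pairwise alignment is reliable; as the noise grows, aligning to a noisy reference becomes biased, which motivates methods, like the one in Paper G, that work from shift-invariant moments instead of explicit alignment.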

    Download full text (pdf)
    Kappa
  • Public defence: 2026-02-11 09:00 F3, Stockholm
    Engström, Viktor
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering.
    Modeling and Simulating Cyberattacks with Dynamic Graphs: With applications to cloud security assessments (2026). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This dissertation presents a formalism for exploring two fundamental, yet underrepresented, cyberattack dynamics: how adversary actions drive the emergence of cyberattacks, and how adversaries manipulate dynamic system structures, for example by creating and destroying objects. The formalism is encapsulated in the Dynamic Meta Attack Language (DynaMAL), a meta-level formalism for modeling and simulating cyberattacks with dynamic graphs. DynaMAL was designed and developed in accordance with the design science research framework across four studies. The first study introduces an attack graph construction language for assessing cloud architectures and identifies the central problem of representing attacks in which adversaries manipulate dynamic system structures. The second study is a systematic literature review of cyberattack simulations that identifies key simulation concepts used in later stages of the design process. Building on these two studies, the third study establishes the cyberattack modeling foundations of DynaMAL, comprising a dynamic graph system, a multi-layered graph model, a lazy graph generation strategy, and the DynaMAL grammar. Finally, the fourth study develops the corresponding discrete-event simulation process for DynaMAL. The resulting capabilities are evaluated in a first simulation experiment covering three cloud penetration testing scenarios that rely on dynamically creating and destroying resources. The scenarios are solved automatically, with near-optimal results, by combining two search and optimization algorithms.
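    To make the notion of dynamic system structures concrete, the following toy sketch (hypothetical Python, not DynaMAL's actual grammar or simulation engine) models an attack graph in which objects can be created and destroyed mid-attack, so the set of targets reachable by the adversary changes as the system evolves.

    ```python
    # Toy dynamic attack graph: adversary actions can create and destroy
    # objects, and reachability is evaluated against the current structure.
    class DynamicAttackGraph:
        def __init__(self):
            self.objects = set()
            self.edges = {}  # object -> set of objects reachable in one attack step

        def create(self, obj):
            self.objects.add(obj)
            self.edges.setdefault(obj, set())

        def destroy(self, obj):
            # Removing an object also invalidates every edge touching it
            self.objects.discard(obj)
            self.edges.pop(obj, None)
            for nbrs in self.edges.values():
                nbrs.discard(obj)

        def connect(self, src, dst):
            self.edges[src].add(dst)

        def reachable(self, start):
            # Traverse only objects that currently exist
            seen, stack = set(), [start]
            while stack:
                node = stack.pop()
                if node in seen or node not in self.objects:
                    continue
                seen.add(node)
                stack.extend(self.edges.get(node, ()))
            return seen

    # A temporary cloud resource is created, used as a pivot, then destroyed,
    # changing what the adversary can reach.
    g = DynamicAttackGraph()
    for obj in ("attacker", "vm", "bucket"):
        g.create(obj)
    g.connect("attacker", "vm")
    g.connect("vm", "bucket")
    before = g.reachable("attacker")   # attacker can pivot through the vm
    g.destroy("vm")
    after = g.reachable("attacker")    # pivot is gone, bucket unreachable
    ```

    This captures only the create/destroy aspect; DynaMAL additionally layers a grammar, lazy graph generation, and a discrete-event simulation process on top of such a dynamic graph system.
    
    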

  • Public defence: 2026-02-13 13:00 F3, Lindstedtsvägen 26 & 28, KTH Campus, Stockholm
    Svahn Garreau, Hélène
    KTH, School of Architecture and the Built Environment (ABE), Architecture, Architecture, Culture and Environment.
    Det framställda autentiska originalet: Konservering av kalkmålningar i svenska kyrkorum omkring 1850–1980 (2026). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    In Sweden, there are more than a thousand completely or partially preserved medieval churches, many containing kalkmålningar (mural paintings) dating from between the twelfth and seventeenth centuries. This thesis deals with the enactments and re-enactments that formed the revival of these paintings, particularly the conservation-restoration work carried out between 1850 and 1980, including contemporaneous decorative and monumental paintings created in relation to this revival. The thesis further investigates how kalkmålningar became embedded within Sweden’s listed cultural heritage. Following the enactments, thought collectives of the period are revealed, shedding light on the relation between these paintings and the establishment of theories and practices within the emergent field of conservation-restoration. Through an eclectic methodology, including analysis of written sources and in situ investigations of the paintings, and drawing on actor-network theory (ANT), discourse analysis, posthumanism, and object-oriented ontology (OOO), the thesis asks what agency these paintings have. How can we understand conservation-restoration as a hybrid practice that encompasses a more-than-human perspective, challenging the modern dichotomy between nature and culture?

    The thesis is divided into three parts. The first part introduces the thesis, theory, and method, including an analysis of the emergence of conservation-restoration in relation to the modern “thought figure” of the “authentic original”. It introduces the topic of kalkmålningar in Sweden and analyses the network that enacted them and their collective values. The second part forms a historical analysis and description of the enactments of kalkmålningar between 1850 and 1980, identifying three periods with different conservation-restoration principles: stylistic restoration, original conservation-restoration, and precautionary conservation-restoration. The third part discusses the enactments that informed the revival of kalkmålningar and draws conclusions concerning the ontology of conservation and paintings. It further identifies and discusses particular traditions of Swedish restoration: the principles of Sigurd Curman, the precautionary attitude, the pursuit of a lively and harmonic church room, the prevalence of handicraft conservation-restoration, the restrictive attitude towards retouching that emerged in the 1960s, and the lack of an institute dedicated to technical studies of art. The thesis concludes with an epilogue discussing three contemporary conservation-restoration tendencies, influenced by a changing perception of ethics, sustainability, and the authentic original.
