kth.se Publications KTH
Results 41–90 of 162
  • Persson, Annie
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Engineering Design.
    Widmark, Johan
    KTH, School of Industrial Engineering and Management (ITM), Engineering Design.
    Comparison of Sensor Combinations for SATCOM-on-the-Move Antenna Applications (2026). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Today, satellites serve many purposes, one of which is communication. With the ability to send audio, video, and other types of data through electromagnetic waves, people all around the globe can communicate with each other. This technology enables both stationary and mobile communication, and Ovzon AB is a satellite communication company that works with both solutions. This Master's thesis focuses on their SATCOM-on-the-Move (SOTM) solution, which uses sensor fusion to combine data from multiple sensors in order to track the antenna mounted on the vehicle. The technology has been around for a while, but the sensors in the system are still susceptible to disturbances. The scope of the project is to compare different sensor combinations, fused through an Unscented Kalman filter, against the common INS + GNSS combination. The results indicate that there are differences between the sensors, although some outliers appeared depending on which sensor combination was used. Overall, the thesis concludes that two combinations improved the antenna performance: GNSS + Gyroscope and GNSS + Gyroscope + Accelerometer.

    Moreover, this project has given more insight into design choices as well as optimization alternatives.

    Download full text (pdf)
    fulltext
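The abstract above fuses sensor data through an Unscented Kalman filter. As a hedged illustration of the technique only (not Ovzon's or the thesis's actual implementation), the following minimal sketch runs one UKF predict/update cycle on a toy constant-velocity state with a GNSS-like position measurement; the models `f` and `h`, the noise levels, and all names are illustrative assumptions.

```python
import numpy as np

def sigma_points(x, P, kappa=0.0):
    """Generate 2n+1 sigma points and weights for state x, covariance P."""
    n = len(x)
    S = np.linalg.cholesky((n + kappa) * P)          # matrix square root
    pts = [x] + [x + S[:, i] for i in range(n)] + [x - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def ukf_step(x, P, z, f, h, Q, R):
    """One UKF predict/update cycle: propagate sigma points through the
    process model f, then fuse measurement z via the observation model h."""
    X, w = sigma_points(x, P)
    Xp = np.array([f(s) for s in X])                 # predicted sigma points
    xp = w @ Xp                                      # predicted mean
    Pp = Q + sum(wi * np.outer(d, d) for wi, d in zip(w, Xp - xp))
    Z = np.array([h(s) for s in Xp])                 # predicted measurements
    zp = w @ Z
    Pzz = R + sum(wi * np.outer(d, d) for wi, d in zip(w, Z - zp))
    Pxz = sum(wi * np.outer(dx, dz) for wi, dx, dz in zip(w, Xp - xp, Z - zp))
    K = Pxz @ np.linalg.inv(Pzz)                     # Kalman gain
    return xp + K @ (z - zp), Pp - K @ Pzz @ K.T
```

For a linear toy model the UKF reproduces the ordinary Kalman filter, which makes the machinery easy to sanity-check before swapping in nonlinear antenna-pointing dynamics.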
  • Kawabata, Atsushi
    KTH, School of Engineering Sciences (SCI), Physics.
    Analysis of Test Beam Data for the ATLAS High-Granularity Timing Detector and Jet Flavor-Tagging Studies: Exploring Pile-Up Interactions from Detector- and Reconstruction Perspectives (2026). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The increase in the number of simultaneous proton-proton collisions, pile-up interactions, will be a major challenge for the future High-Luminosity phase of the Large Hadron Collider (LHC) at CERN, making pile-up mitigation an important issue for the ATLAS experiment. The High-Granularity Timing Detector (HGTD), to be installed in the upcoming years, is one of the detector upgrades contributing to this pile-up mitigation. To report the current status of the HGTD development, an HGTD test-beam analysis was performed using data taken during two weeks in October 2025 at the CERN SPS test-beam facility. Synchronization between different detector systems in the test-beam environment was achieved. The full data reconstruction chain was demonstrated to be feasible although the measured hit efficiency and timing resolution were limited by non-optimal chip tuning including a very high threshold setting, as well as by low statistics. To study the effect of pile-up, the impact of jet pile-up contamination on the jet flavor-tagger was investigated under current LHC running conditions. A substantial degradation in flavor-tagging performance was seen, highlighting the importance of understanding pile-up effects in jet reconstruction. A supplementary timing-based pile-up mitigation study was also performed.

    Download full text (pdf)
    fulltext
  • Yang, Yinan
    KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Vehicle engineering and technical acoustics.
    Circular Economy: Quantifying Railway Sustainability (2026). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Railway infrastructure requires a more circular use of materials, but much research still focuses on single metrics and assumes that higher recycling rates are always better. This paper constructs a simple and consistent framework for evaluating recycling strategies for three railway assets (sleepers, rails, and overhead contact lines). The framework covers four dimensions: material utilization, greenhouse gas emissions, energy consumption, and financial costs. It follows the EN 15804 life cycle structure (Modules A-D) and links all metrics to the basic material flow. Recycling rate (R) and reuse or maintenance rate (U) are used as the primary decision variables.

    To avoid unrealistic results at extremely high recycling or reuse rates, the model introduces penalty functions. These penalties represent additional effort and losses, such as more complex dismantling, lower waste quality, or higher processing energy consumption. All metrics are standardized and can be combined through a simple weighted scheme, allowing for comparison of different strategies on the same scale.

    Case studies show that the optimal range for R and U largely depends on the asset type. For railway sleepers, moderate steel recycling and low reuse rates already offer significant benefits in terms of material utilization and emissions, while pushing recycling rates close to their maximum leads to higher costs and fewer additional benefits. Rails can more easily achieve higher recycling rates due to well-established scrap and rerolled steel routes. Overhead contact wires are primarily made of copper, a material that is highly recyclable in principle. However, in practice their effective recycling rate remains narrow because dismantling, contamination, and alloy degradation significantly constrain recoverability. As a result, low recycling rates fail to capture substitution benefits, while excessively high rates increase energy use and costs.

    Overall, the results indicate that higher recycling rates are not always better in practice. Recycling targets should be set based on asset type, rather than a single value for the entire railway system. This framework provides a practical approach to testing different recycling and reuse levels and supports procurement and maintenance decisions that balance material utilization, climate impact, energy demand, and costs.

    Download full text (pdf)
    fulltext
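The abstract above combines standardized metrics through a weighted scheme, with penalty functions that activate at extreme recycling or reuse rates. A minimal sketch of that idea follows; the quadratic penalty shape, the knee at 0.8, and the weights are illustrative assumptions, since the thesis's actual functional forms are not given in the abstract.

```python
def penalty(rate, knee=0.8, strength=5.0):
    """Illustrative convex penalty: negligible below the knee, growing
    quadratically as the recycling or reuse rate approaches 1."""
    return strength * max(0.0, rate - knee) ** 2

def strategy_score(metrics, weights, R, U):
    """Weighted sum of standardized metrics (1 = best, 0 = worst),
    discounted by penalties at extreme R (recycling) and U (reuse) rates."""
    base = sum(weights[k] * metrics[k] for k in weights)
    return base - penalty(R) - penalty(U)
```

With such a score, pushing R close to its maximum can lower the overall ranking of a strategy even when it slightly improves the material-utilization metric, which mirrors the sleeper case study's conclusion.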
  • Anagni, Giuseppe
    KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Aerospace, moveability and naval architecture.
    Development of a multi-objective design workflow for long-range electric VTOL fixed-wing UAVs based on computational aerodynamics (2026). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This study presents the development and validation of a fast, iterative design workflow for a new class of fully electric long-range delivery drones that combine simultaneous vertical take-off and landing (VTOL) capability with high cruise efficiency in a fixed-wing configuration. With a growing presence of unmanned aerial vehicles (UAVs) in today's world, efficient and reliable design methodologies become increasingly important to decrease costs and accelerate development, especially in startup environments. The proposed workflow integrates multi-objective optimization of the drone's geometry with an aerodynamic analysis of increased fidelity level through computational tools. Rapid potential flow solvers are employed for preliminary design, followed by computational fluid dynamics (CFD) simulations of greater accuracy for further refinement, advanced analysis and validation. To test the speed and modularity of the workflow, two different UAV geometries are designed and optimized with the potential solver VSPAERO, then further analyzed with advanced CFD models in Ansys Fluent. The results obtained demonstrate that this approach significantly reduces design time and usage of computational resources compared to CFD-only methods, while maintaining accuracy and scalability for further iterations.

    Download full text (pdf)
    fulltext
  • Public defence: 2026-05-07 09:00 https://kth-se.zoom.us/j/63639257006, Stockholm
    Narri, Vandana
    KTH, School of Electrical Engineering and Computer Science (EECS), Decision and Control Systems. Scania/ TRATON.
    Shared Situational Awareness for Connected and Automated Vehicles in Urban Scenarios (2026). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    A major challenge in developing connected and automated vehicles (CAVs) for urban environments is achieving a comprehensive understanding of the surrounding traffic scene. This relies on situational awareness, defined as the ability to perceive, interpret, and anticipate the behavior of surrounding road-users, which is essential to ensure safety. A critical challenge remains that unprotected road-users, such as pedestrians and cyclists, are often occluded or located in the sensor blind-spots of the CAV. This thesis aims to improve the situational awareness of the ego-vehicle, the CAV of primary interest, in urban environments by leveraging vehicle-to-everything (V2X) communication to incorporate information from connected road-users. A framework using set-based methods is developed to systematically handle uncertainties in the measurements and initial conditions of detected pedestrians.

    The objective is to address several key challenges that arise in real-world scenarios, including data inconsistency, data association, pedestrian motion prediction, and efficient reduction of redundant information. The thesis first proposes a shared situational awareness framework for an occluded pedestrian-crossing scenario to compute an estimated set for the pedestrian. The framework is extended to handle measurements from V2X units that may be inconsistent with the ground truth of the detected pedestrian. To address scenarios involving multiple occluded pedestrians, a data association method based on intersection-over-union heuristics is introduced. Pedestrian motion prediction is further studied using both a data-driven approach and a bounded velocity–acceleration model applied to the estimated set obtained from the framework. An occlusion-aware extension is also developed to handle situations where occlusions affect both the ego-vehicle and V2X units by exploiting previously observed measurements. Finally, a method for selecting and filtering relevant information from multiple V2X units is proposed to reduce the computational load while maintaining effectiveness. The proposed methods are validated through numerical simulations and real-world experiments using Scania prototype automated vehicles.

    Download full text (pdf)
    fulltext
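The abstract above associates multiple occluded pedestrians using intersection-over-union heuristics. A hedged sketch of that idea follows, with axis-aligned boxes standing in for the thesis's set-based estimates; the greedy matching rule and the 0.3 threshold are illustrative assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))   # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))   # overlap height
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def associate(tracks, detections, threshold=0.3):
    """Greedy IoU association: each detection joins the best-overlapping
    track above the threshold, otherwise it is left unmatched (None)."""
    pairs = []
    for j, d in enumerate(detections):
        scores = [(iou(t, d), i) for i, t in enumerate(tracks)]
        best, i = max(scores) if scores else (0.0, -1)
        pairs.append((i if best >= threshold else None, j))
    return pairs
```

An unmatched detection would typically start a new estimated set, while a matched one refines the existing track.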
  • Lopez, Alexandre
    KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Material and Structural Mechanics.
    Development and Automation of an Airframe Structural Analysis Methodology (2026). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The structural certification of transport category aircraft mandates a rigorous demonstration of static strength and stability for all primary load-carrying components. This thesis presents the development and validation of ASTRA (Airframe STRuctural Analysis), a computational framework designed to automate the transition from linear Global Finite Element Models (GFEM) to margins of safety for metallic semi-monocoque structures.

    ASTRA implements a hybrid analysis methodology applicable to any skin-stringer-frame assembly, integrating linear elastic internal loads with non-linear stability correlations. By capturing complex stiffened-panel interactions, ASTRA enables rapid identification of critical failure modes across large structural assemblies.

    The tool contributes to a more efficient, effective, and comprehensive airframe structural analysis, offering a faster and less error-prone approach than traditional methods. It helps identify the most critical regions, drives design decisions, and facilitates compliance demonstration. Additionally, ASTRA can assist in determining the structural load applicable at a given fuselage reference station by checking which maximum loads still yield positive margins of safety.

    Developed in a modular way, ASTRA allows each failure mode method to be used independently and can be further expanded to include new failure modes, alternative methods, and extensions to address more detailed FEMs, potentially broadening its scope significantly.

    Download full text (pdf)
    fulltext
  • Ferré, Noé
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geoinformatics.
    Crop Irrigation Water Management Through Evapotranspiration Modeling Integrating Optical and Passive Microwave Satellite Data from PlanetScope (2026). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Freshwater availability is a critical planetary boundary currently under threat from intensified climate change and agricultural over-exploitation. Agriculture accounts for over 70% of global water use, yet the ability to monitor crop water consumption accurately remains a significant challenge. Satellite remote sensing has become an essential tool for monitoring agricultural water use at regional to global scales, primarily through the estimation of evapotranspiration (ET). However, most operational ET studies rely on moderate-resolution satellite missions such as the Landsat program or Sentinel-2, which may limit the detection of short-term dynamics and field-scale variability in agricultural systems.


    This thesis, conducted at KTH Royal Institute of Technology in collaboration with Planet Labs, evaluates the operational potential of three distinct evapotranspiration (ET) modeling frameworks: STIC (Surface Temperature Initiated Closure), PT-JPL (Priestley-Taylor, Jet Propulsion Laboratory), and GLEAM (Global Land Evaporation Amsterdam Model). The objective of this research is to assess how these widely used ET models perform when driven by high-frequency commercial satellite observations and to investigate whether combining their outputs can improve the reliability of agricultural water use estimation across diverse climatic conditions. By leveraging Planet's unique satellite constellation, this study explores the potential of high-temporal-resolution Earth observation data to enhance evapotranspiration monitoring at the field scale. This research benchmarks these models against in-situ AmeriFlux data across diverse climate zones in North America between 2020 and 2024.

    Results indicate that the PT-JPL model demonstrated the highest overall consistency (R2: 0.64, RMSE: 1.04 mm/day), driven by its robust eco-physiological scaling. STIC proved highly accurate during peak biomass but vulnerable to thermal inputs from bare soil during early growth. Conversely, GLEAM provided the most robust performance across the different climates, though it sometimes underestimated the transpiration rates of mature crops.

    A central contribution of this research is the development of a multi-model ensemble, which significantly improved validation metrics (R2: 0.68, RMSE: 0.96 mm/day) by mitigating individual model biases. The study further identifies climatic dependencies, showing higher accuracy in subtropical and continental climates compared to arid regions, where GLEAM emerged as the most resilient framework. By translating these biophysical fluxes into actionable Water Stress and Deficit Indices, this thesis demonstrates a clear bridge between satellite-based modeling and practical irrigation scheduling. Beyond model benchmarking, the results highlight the potential of high-frequency commercial satellite constellations to support operational agricultural water monitoring within the fields of geoinformatics and remote sensing. This research provides a foundation for scalable, model-driven tools, transferable to similar climates, that can optimize agricultural water use, ensuring both economic viability for farmers and ecological sustainability in a water-constrained future.

    Download full text (pdf)
    fulltext
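The abstract above reports that a multi-model ensemble improved R2 and RMSE by mitigating individual model biases. A minimal sketch of the mechanism, assuming an unweighted mean ensemble (the thesis's actual combination scheme is not specified in the abstract) and hypothetical model outputs:

```python
import numpy as np

def rmse(pred, obs):
    """Root-mean-square error between predicted and observed ET (e.g. mm/day)."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

def r2(pred, obs):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

def ensemble(*model_outputs):
    """Unweighted multi-model mean; biases of opposite sign tend to cancel."""
    return np.mean(model_outputs, axis=0)
```

Two models biased in opposite directions illustrate the cancellation effect the abstract describes.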
  • Public defence: 2026-05-07 14:00 https://kth-se.zoom.us/s/67709142389, Stockholm
    Larsson Forsberg, Albin
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning.
    Multi-Agent Learning Under Spatio-Temporal Constraints in Coordinated Communication Networks (2026). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Modern cellular networks have grown more complex over the years, transitioning from sparse macro-cell deployments to ultra-dense, heterogeneous systems. In this thesis we consider in particular a radio resource management (RRM) problem called remote electrical tilt (RET). The objective in RET optimization is to tune antenna tilt parameters in the network to allocate radio resources where they are most needed. As cellular networks evolve toward 6G, we expect an unprecedented need for autonomous decision making in the networks, introducing new coordination challenges exacerbated by denser deployments. Traditional network management relies on manual engineering and rule-based heuristics, which scale poorly and are insufficient for the needs of the next generation. While Multi-Agent Reinforcement Learning appears to be a promising tool for autonomously adapting the network, currently deployed solutions often struggle with the large scale of the problem. Additionally, they fail to provide formal guarantees and remain limited by myopic, step-wise reward structures that cannot capture the complex constraints communication service providers (CSPs) may impose on the network. Lacking these attributes holds back deployment in live networks beyond small-scale pilot studies.

    This thesis proposes a series of approaches that aim to provide high-assurance autonomous network parameter control. The contributions progressively build on each other, from spatial interference coordination to long-horizon, risk-aware planning to satisfy CSP network intents. First, we address the myopic constraints by leveraging graph-based decomposition and coordination graphs to factorize the joint action space, enabling scalable constrained learning in dense urban environments. Recognizing that critical infrastructure demands reliability beyond mean performance, we also introduce a risk-aware constrained learning framework utilizing Conditional Value-at-Risk to provide probabilistic reasoning over constraints in the network.

    To bridge the gap between low-level control and high-level CSP intents, we transition from scalar rewards to formal specifications. We utilize Signal Temporal Logic (STL) and transformer-based architectures to satisfy complex intents, enabling agents to reason over long-horizon requirements. Finally, we move beyond traditional control policies toward generative planning of trajectory rollouts. Using diffusion probabilistic models, we aim to generate safe, high-quality plans that respect hard constraints with probabilistic guarantees.

    The proposed methods are evaluated on high-fidelity simulators modeled after real-world urban topologies. The results demonstrate that by integrating structural coordination, formal logic, and generative modeling, it is possible to address many of the issues that plague contemporary autonomous network management. The policies that are obtained by these approaches are not only high-performing but also interpretable, safe, and aligned with the rigorous demands of next-generation telecommunications infrastructure.

    Download full text (pdf)
    Kappa
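The risk-aware framework above uses Conditional Value-at-Risk (CVaR) to reason about the tail of the loss distribution rather than its mean. The standard empirical tail-mean estimator can be sketched as follows; the alpha level and loss samples are illustrative, not values from the thesis.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Empirical Conditional Value-at-Risk: the mean of the worst
    (1 - alpha) fraction of loss samples, i.e. the expected loss given
    that the loss exceeds the alpha-quantile (the Value-at-Risk)."""
    losses = np.sort(np.asarray(losses, dtype=float))
    k = min(int(np.ceil(alpha * len(losses))), len(losses) - 1)
    return float(losses[k:].mean())
```

Constraining CVaR instead of the mean penalizes rare but severe constraint violations, which is the "reliability beyond mean performance" property the abstract emphasizes.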
  • Vincensini, Jean-Paul
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Structural Engineering and Bridges.
    Methodology for Analyzing Structural Response to Blast-Type Rapid Dynamic Loading Using SOFiSTiK (2026). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis investigates the structural response of reinforced concrete elements subjected to blast-type loading using advanced finite element modelling. In engineering practice, blast-resistant design is often performed using simplified equivalent static methods, which may not fully capture the transient behaviour of structures subjected to short-duration loads. The objective of this work is to evaluate the differences between equivalent static approaches and nonlinear dynamic time-history analyses for reinforced concrete structures. A numerical modelling framework was developed in the SOFiSTiK finite element environment. Blast pressure-time histories were generated using empirical relations from the Kingery–Bulmash and UFC 3-340-02 formulations and implemented as time-dependent surface loads.

    The modelling approach was first verified through comparison with published numerical and experimental studies involving reinforced concrete beams and slabs subjected to blast loading. The methodology was then applied to two engineering case studies: the blast response of an anaerobic digester structure and the accidental drop of a heavy container on a reinforced concrete slab.

    The results show that nonlinear dynamic analyses capture the transient interaction between load evolution and structural deformation more realistically than simplified static approaches. While dynamic simulations often predict larger displacements, they generally lead to lower internal forces due to energy dissipation mechanisms. This behaviour highlights the limitations of equivalent static methods and demonstrates the potential of nonlinear dynamic analysis to provide more realistic and less conservative structural assessments. The study provides a practical framework for implementing blast loading in SOFiSTiK and illustrates the benefits of dynamic modelling for engineering design applications.

    Download full text (pdf)
    fulltext
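The abstract above applies blast pressure-time histories as time-dependent loads. The Kingery–Bulmash fits are lengthy polynomials, but the classical modified Friedlander waveform commonly used to shape such positive-phase time histories can be sketched briefly; this is the textbook waveform, not the thesis's exact formulation, and the parameter values are illustrative.

```python
import math

def friedlander(t, p_peak, t_dur, b=1.0):
    """Modified Friedlander overpressure at time t after blast arrival:
    p(t) = p_peak * (1 - t/t_dur) * exp(-b * t / t_dur), describing the
    positive phase 0 <= t <= t_dur (zero before arrival); b controls
    the decay rate."""
    if t < 0.0:
        return 0.0
    return p_peak * (1.0 - t / t_dur) * math.exp(-b * t / t_dur)
```

Sampling this function at the solver's time steps yields a pressure-time table that can be applied as a transient surface load.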
  • Kayhan, Özge
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Structural Engineering and Bridges.
    Automated seismic design and resilience analysis of buildings: A comparison between lateral force resisting systems (2026). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The primary objective of this master's thesis was to automate the seismic design and analysis cycle of buildings. The automation of the building models was performed using the computer-aided design (CAD) software Rhinoceros and its visual programming environment Grasshopper. Additionally, C# code embedded within Grasshopper was used to develop custom components to export the building models to the external structural and seismic analysis software ETABS. The seismic analyses were performed in ETABS using the nonlinear time-history analysis (THA) method, with strong ground motion recordings from the 2023 Kahramanmaraş (Türkiye) earthquake obtained at the station in Narlı, Pazarcık, in Kahramanmaraş. The guidelines for conducting seismic analyses specified in the European seismic building code, Eurocode 8 (2004), were primarily used. However, for certain aspects, such as the selection of material and structural properties and coefficients related to dynamic analysis, some of the guidelines specified in the Turkish Building Earthquake Code (2018) were adopted.

    A further objective of this study was to compare different lateral force resisting systems (LFRS), with a focus on configurations consisting of shear walls. In total, five configurations were compared, differing in shear wall placement: at the building periphery (corners, edges, and inner parts), at the center of the building, and combinations of these. Moreover, different shapes of shear walls were constructed, such as L-shaped, rectangular, and box-shaped, the latter representing the shear core located at the center of the building.

    The performance of these configurations was evaluated from different perspectives, including building height, for which a variety of total floor counts were examined, and the column length on the ground floor, for which both the same column length as the other floors and a longer one were investigated. In total, 70 building models were analyzed, all with the same material and structural properties except for the length of the shear walls.

    The results indicated that the configuration with only the shear core performed best when comparing outputs such as story displacement, story drift, story shear, and story stiffness. At the same time, this configuration also exhibited the highest natural frequencies, and one of its building models experienced resonance because the earthquake's frequency content closely matched its natural frequency. The results also implied that shear walls located along the building periphery were more efficient in decreasing the base shear force demand, although the configuration with only the shear core showed higher shear resistance. Another crucial finding was that the lengths of the shear walls defining the shear core in the L-shaped and rectangular configurations were insufficient, as the shear core had no significant impact on increasing earthquake resistance; consequently, the results for these configurations were remarkably close to those without a shear core. In general, the L-shaped configurations performed better than the rectangular ones in the x-direction, whereas the opposite was observed in the y-direction. Notably, none of the analyzed building models exhibited soft- or weak-story failure modes.

    For further research, it was recommended to expand the scope of the study by including other shear wall configurations and a wider range of building heights and ground-floor column lengths. Furthermore, incorporating enlarged structural properties of columns, beams, shear walls, and slabs into the comparison could provide deeper insights into their influence on earthquake resistance. In addition, using strong ground motion recordings from other stations and comparing them with the findings of this study may provide a broader perspective. Beyond these recommendations, another significant factor affecting earthquake resistance is the structural layout; considering other column placements was therefore also addressed as a possible enhancement. Finally, the inclusion of soil-structure interaction effects was mentioned as a crucial factor influencing building responses and overall results.

    Download full text (pdf)
    fulltext
  • Robin, Clement
    KTH, School of Engineering Sciences (SCI), Physics.
    Emergency Operating Procedures for a Steam Generator Tube Rupture Mitigation (2026). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Emergency Operating Procedures (EOPs) are guidelines developed to ensure a safe and effective response to emergency situations in nuclear power plants. This report presents the work carried out during a five-and-a-half-month internship at Framatome in the DTIPRO unit, which is responsible for emergency procedures and thermal-hydraulic studies.

    The first objective was to further develop a simulation tool (OSCIA) used to test, de-risk, and pre-validate early versions or modifications of EOPs. This tool was designed to be simple and easily accessible, serving as an alternative to a full-scope simulator. The second objective was to create a best-estimate transient CATHARE dataset for a Steam Generator Tube Rupture accident and to test the tool using this scenario while following the emergency operating procedures. The aim was to demonstrate that, even with a delayed initial operator action, the accident can be mitigated while keeping radioactive releases to the atmosphere within acceptable limits. The results show that applying the EOPs in OSCIA enables the reactor to be brought to a safe state, with radioactive releases remaining within prescribed limits. Furthermore, OSCIA meets its intended objectives and proves to be an effective and user-friendly tool for simulating EOPs and pre-validating new strategies.

    Download full text (pdf)
    fulltext
  • Rochelle, Maxime
    KTH, School of Engineering Sciences (SCI).
    DEM Simulation of Power Harrow - Soil Interaction: Cohesion, Fragmentation, and Sensitivity to Operating Parameters (2026). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Power harrows are key implements for seedbed preparation, yet soil–tool interactions remain difficult to characterize experimentally due to the variability of field conditions and the cost of instrumented trials. This thesis aims to further integrate Discrete Element Method (DEM) simulations into KUHN's design workflow by improving model robustness and quantifying the influence of operating and modelling parameters on key performance indicators (KPIs).

    A DEM workflow was developed primarily in ThreeParticle, with complementary sensitivity investigations in EDEM. Moist soil cohesion was represented using the Johnson–Kendall–Roberts (JKR) model for soil–soil contacts, while a computationally efficient fragmentation framework was implemented via a Voronoi-based particle-replacement API to model friable soil and breakable stones. The tool geometry was progressively refined to mitigate non-physical particle interpenetration, and a robust post-processing pipeline (winsorization, Hampel filtering, rolling median, EMA) was introduced to extract reliable power signals and key performance indicators. Work quality was assessed using displacement, mixing, and density indices, extended to fragmentation through lineage tracking based on a persistent "Master_shape" identifier.

    Results show that fragmentation acts as a stress-relief mechanism that disrupts highly loaded contact networks, consistently reducing mean power demand and limiting extreme load regimes, while cohesion increases power demand and promotes stress transmission below the active layer. Fragmentation and cohesion also shape the bulldozing heap dynamics and associated displacement patterns, whereas the density response is dominated by cohesion and working depth, with deeper sublayers remaining weakly affected. Across a 24-case screening design of experiments spanning forward speed, rotational speed, working depth, cohesion, and fragmentation for two harrow configurations, operating parameters (notably forward speed and working depth) govern most KPI variability, while configuration-to-configuration differences remain second order. Overall, the predicted power levels fall within the expected order of magnitude for real machines, supporting the credibility of the DEM trends and providing actionable guidance for targeted testing and design trade-offs.

    Download full text (pdf)
    fulltext
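The post-processing pipeline named in the abstract above (winsorization, Hampel filtering, rolling median, EMA) is a standard robust-filtering chain for noisy power signals. A hedged sketch of the four steps follows; the window sizes, thresholds, and smoothing factor are illustrative defaults, not the thesis's tuned values.

```python
import numpy as np

def winsorize(x, lo=5, hi=95):
    """Clip samples to the [lo, hi] percentile band to tame isolated spikes."""
    a, b = np.percentile(x, [lo, hi])
    return np.clip(x, a, b)

def hampel(x, k=3, t=3.0):
    """Replace any point deviating more than t scaled MADs from its local
    median (window 2k+1) with that median."""
    x = np.asarray(x, dtype=float)
    out = x.copy()
    for i in range(len(x)):
        w = x[max(0, i - k): i + k + 1]
        med = np.median(w)
        mad = 1.4826 * np.median(np.abs(w - med))   # robust scale estimate
        if mad > 0 and abs(x[i] - med) > t * mad:
            out[i] = med
    return out

def rolling_median(x, k=3):
    """Centered rolling median with window 2k+1 (shrunk at the edges)."""
    x = np.asarray(x, dtype=float)
    return np.array([np.median(x[max(0, i - k): i + k + 1]) for i in range(len(x))])

def ema(x, alpha=0.2):
    """Exponential moving average for the final smoothed power trace."""
    y = np.empty(len(x))
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = alpha * x[i] + (1 - alpha) * y[i - 1]
    return y
```

Chaining the filters in this order removes gross outliers first, then suppresses local spikes, and finally smooths the trace for KPI extraction.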
  • Public defence: 2026-05-08 09:00 https://kth-se.zoom.us/s/67983919825, Stockholm
    Lakshminarayanan, Braghadeesh
    KTH, School of Electrical Engineering and Computer Science (EECS), Decision and Control Systems.
    Simulation-Driven Parameter Estimation with a Focus on Control and Privacy2026Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Parameter estimation is a pivotal task across various domains such as system identification, statistics, and machine learning. The literature presents numerous estimation procedures, many of which are backed by well-studied asymptotic properties. In the contemporary landscape, highly advanced digital twins (DTs) offer the capability to faithfully replicate real systems through proper tuning. Leveraging these DTs, simulation-driven estimators can alleviate challenges inherent in traditional methods, notably their computational cost and sensitivity to initializations. Furthermore, traditional estimators often rely on sensitive data, necessitating protective measures. In this thesis, we consider simulation-driven and privacy-preserving approaches to parameter estimation that overcome many of these challenges.

    The first part of the thesis delves into an exploration of modern simulation-driven estimation techniques, focusing on the two-stage (TS) approach. Operating under the paradigm of inverse supervised learning, the TS approach simulates numerous samples across parameter variations and employs supervised learning methods to predict parameter values. The approach is divided into two stages: the first compresses the data into a smaller set of auxiliary statistics, and the second utilizes these statistics to predict parameter values. The simplicity of the TS estimator underscores its interpretability, but it also calls for theoretical justification, which forms a core motivation for this thesis. We establish statistical frameworks for the TS estimator, yielding its Bayes and minimax versions, alongside developing an improved minimax TS variant based on gradient boosting that excels in computational efficiency. We conduct both asymptotic and non-asymptotic analyses of the TS estimator, establishing strong consistency, asymptotic normality, and finite-sample deviation bounds that characterize the estimation error in terms of the number of training samples and observation length. Finally, we address the question of generating diverse and informative training samples by proposing a Metropolis-Adjusted Langevin Algorithm (MALA)-based scheme for sampling training parameters from Jeffreys prior, which reflects the intrinsic geometry of the parameter space via the Fisher Information Matrix.
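    The two-stage idea can be illustrated on a toy problem; the AR(1) simulator, the chosen auxiliary statistics, and the linear second stage below are illustrative assumptions, not the estimators studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(theta, n=200):
    """Simulator (digital-twin stand-in): AR(1) with coefficient theta."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = theta * x[t - 1] + rng.standard_normal()
    return x

def stats(x):
    """Stage 1: compress a trajectory into auxiliary statistics."""
    return np.array([np.corrcoef(x[:-1], x[1:])[0, 1], np.var(x), 1.0])

# Training: sample parameters, simulate, record (statistics, parameter) pairs.
thetas = rng.uniform(-0.9, 0.9, size=500)
S = np.array([stats(simulate_ar1(th)) for th in thetas])

# Stage 2: fit a (here linear) map from auxiliary statistics to parameters.
w, *_ = np.linalg.lstsq(S, thetas, rcond=None)

def ts_estimate(x_observed):
    """Inference is a single inner product, with no iterative optimisation."""
    return float(stats(x_observed) @ w)
```

    This captures the appeal noted in the abstract: all the expensive simulation happens offline, and estimation on observed data is cheap and initialization-free.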

    The second part of the thesis considers the problem of adapting pretrained TS estimators to out-of-distribution scenarios. We introduce two fine-tuning methods: a supervised approach that combines feature-space anomaly detection with Fisher-information-guided retraining, and an unsupervised approach that minimizes the discrepancy between simulated and observed trajectories without requiring labeled data.

    The third part of the thesis introduces applications of simulation-driven estimation methods in the design of tuning rules for PI controllers. Leveraging synthetic datasets generated from DTs, we train machine learning algorithms to meta-learn tuning rules, streamlining the calibration process without manual intervention. We explore both TS-based approaches with explicit parameter estimation and direct learning approaches using neural network architectures such as convolutional neural networks and WaveNet.

    In the final part of the thesis, we tackle scenarios where estimation procedures must handle sensitive data. Here, we introduce differential privacy constraints into the Bayes point estimation problem to protect sensitive information. By proposing a unified approach, we integrate the estimation problem and differential privacy constraints into a single convex optimization objective, thereby optimizing the accuracy-privacy trade-off. In cases where both observations and parameter spaces are finite, this approach reduces to a tractable linear program which is solvable using off-the-shelf solvers.

    In essence, this thesis endeavors to address theoretical foundations, robustness, and privacy concerns within the realm of simulation-driven parameter estimation.

    Download full text (pdf)
    Doctoral_Thesis_BL
  • Omer, Shahd
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Flexible access control for WebAssembly-based serverless computing2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    We present a framework and proof of concept for flexible access control in WebAssembly Components. The mechanism enables custom mediation and supports varying trust levels among components. It enforces policies via shims, allowing mediation logic to be expressed as any logic representable as WebAssembly code, a vast improvement over the limited capabilities previously granted. This paper provides flexible access control in WebAssembly through a new composition-time enforcement mechanism applied at build time. The system inserts shims that can enforce any access control policy that can be described in code, without the need for specific runtime support. Two tools, Wacky and Shimmer, respectively automate shim insertion based on change manifests derived from declarative policies and generate shim scaffolds. This enables fine-grained permissions without modifying component code or runtimes. Evaluation shows negligible performance impact for function calls, with an average overhead of 4.5 ns per mediated call. Instantiation has a higher relative overhead of 53 µs, but in absolute terms the cost is low, and since it occurs only once the impact is deemed acceptable. Despite limitations in supporting certain WebAssembly Interface Type (WIT) types, such as resources (due to their stateful nature and the static nature of the framework, which operates at binding time, before execution), our results show that build-time enforcement is a practical and efficient way to improve WebAssembly access control. We see hybrid approaches combining build-time and runtime linking as promising future work, which could further help adapt to cloud-native settings.
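    The shim pattern described above (interposing mediation logic between a caller and the function it imports) can be illustrated outside WebAssembly; the Python wrapper below is a loose analogy under stated assumptions, not the Wacky/Shimmer implementation.

```python
def make_shim(inner, policy):
    """Wrap `inner` so every call is mediated by `policy` first.

    `policy(args, kwargs)` may allow the call, rewrite its arguments,
    or deny it by raising PermissionError -- mirroring how a
    composition-time shim interposes on a component's import.
    """
    def shim(*args, **kwargs):
        args, kwargs = policy(args, kwargs)
        return inner(*args, **kwargs)
    return shim

# Example policy (hypothetical): only paths under /tmp/ may be read.
def tmp_only(args, kwargs):
    if not str(args[0]).startswith("/tmp/"):
        raise PermissionError(f"access denied: {args[0]}")
    return args, kwargs

def read_file(path):  # stands in for an imported host function
    return f"contents of {path}"

guarded_read = make_shim(read_file, tmp_only)
```

    The key property mirrored here is that the callee and caller are unchanged; only the binding between them is rewritten.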

    Download full text (pdf)
    fulltext
  • Kalab Eyob, Fre
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Localising oscillation source in power systems using Dissipating Energy Flow method2026Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The increasing integration of renewable energy sources into power systems increases fluctuations, which can lead to oscillations that threaten reliable and stable operation. Power system oscillations can be divided into two types: natural and forced oscillations. In order to maintain reliable and stable operation, it is essential to understand the cause of these oscillations and to locate their origin, so that the system operator can take corrective action. This thesis project analyses different case studies on both forced and natural oscillations. The aim of the project is to develop a test model for simulating both oscillation types and to use PMU data from these simulations to verify the potential and limitations of the Dissipating Energy Flow (DEF) method for localising the oscillation source. The DIgSILENT PowerFactory version of the Nordic 32 test system was used as a benchmark. Test models were constructed to simulate forced oscillation in the generator governor and excitation system. Moreover, forced oscillation in a Battery Energy Storage System (BESS) was simulated in the PQ control of the battery. A Matlab implementation of the DEF method was used to process the simulation data and to assess the potential and limitations of the method. A total of fourteen case studies were conducted: twelve involving forced oscillations with constant and variable amplitudes in the generator governor, excitation system, and BESS, and two involving natural oscillations. To simulate natural oscillation, the gain of a generator Power System Stabiliser (PSS) was set to a negative value, combined with a short-circuit event lasting 110 ms. The different load models used in the project showed no effect on the results of the DEF method: the method correctly localised the oscillation source in all cases with the different load models. The DEF method gave accurate and reliable results for both forced and natural oscillations. Cases where the oscillation source was located inside a generator showed accurate and consistent results, and for the natural oscillation case studies the method could likewise locate the source accurately. However, the DEF method showed limitations when the oscillation source was located inside the BESS; in these cases, it incorrectly identified some generators and transmission lines as oscillation sources. The main conclusion of this analysis is that the DEF method has good potential to localise different types of oscillation originating from various parts of a generator, but shows limitations in identifying sources that originate from a BESS.
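    A common formulation of the DEF method in the literature accumulates, per branch, the dissipating energy from PMU deviations as W(t) = ∫ΔP dΔθ + ∫ΔQ d(ln V); a steadily increasing W indicates energy flowing away from the source. The sketch below uses that textbook form with synthetic signals as assumptions and may differ from the thesis' Matlab implementation.

```python
import numpy as np

def dissipating_energy_flow(dP, dQ, dtheta, V):
    """Cumulative dissipating energy in a branch from PMU deviations.

    dP, dQ : active/reactive power deviations from steady state
    dtheta : voltage-angle deviation (rad) at the sending bus
    V      : voltage magnitude at the sending bus
    """
    dP, dQ = np.asarray(dP, float), np.asarray(dQ, float)
    d_angle = np.diff(np.asarray(dtheta, float))
    d_lnV = np.diff(np.log(np.asarray(V, float)))
    # Trapezoidal accumulation of the two path integrals.
    incr = 0.5 * (dP[:-1] + dP[1:]) * d_angle + 0.5 * (dQ[:-1] + dQ[1:]) * d_lnV
    return np.concatenate([[0.0], np.cumsum(incr)])
```

    Ranking branches by the slope of W then points toward the bus injecting oscillatory energy.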

    Download full text (pdf)
    fulltext
  • Maino, Alessandro
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Mathematical Modeling of Kinematics for Offset-Wrist Cobot Designs: Phenomena Arising from Offset Wrist Manipulator Architectures2026Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Cobots have become increasingly common in industry for their versatility and their ability to work alongside human operators. Deviating from spherical-wrist 6-Degrees-of-Freedom (DOF) manipulator designs, offset-wrist manipulators give rise to less structured singularities and added difficulty in the Inverse Kinematics (IK) problem. This thesis proposes a more intuitive and visual way of determining singularities for an offset-wrist 6-DOF robot by visualizing the Jacobian determinant for all combinations of the wrist joint angles for a given arm configuration, with singularities appearing as the zero crossings of this 3D function. From this representation, we can reduce the domain of possible values for wrist singularities, represent them in Cartesian space, and relate phenomena such as an increased number of IK solutions and the conditions under which cuspidal changes can occur.
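    The zero-crossing visualization described above can be sketched numerically; the UR5-like DH parameters (a well-known offset-wrist geometry) and the grid resolution below are assumptions, not the robot studied in the thesis.

```python
import numpy as np

# Illustrative offset-wrist DH parameters (a, alpha, d) per joint, UR5-like.
DH = [(0.0,    np.pi / 2, 0.089),
      (-0.425, 0.0,       0.0),
      (-0.392, 0.0,       0.0),
      (0.0,    np.pi / 2, 0.109),   # d4 offset -> non-spherical wrist
      (0.0,   -np.pi / 2, 0.095),
      (0.0,    0.0,       0.082)]

def dh(theta, a, alpha, d):
    """Standard DH link transform Rz(theta) Tz(d) Tx(a) Rx(alpha)."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def jacobian_det(q):
    """Determinant of the 6x6 geometric Jacobian at configuration q."""
    T = np.eye(4)
    origins, axes = [T[:3, 3].copy()], [T[:3, 2].copy()]
    for qi, (a, al, d) in zip(q, DH):
        T = T @ dh(qi, a, al, d)
        origins.append(T[:3, 3].copy())
        axes.append(T[:3, 2].copy())
    pe = origins[-1]
    J = np.zeros((6, 6))
    for i in range(6):
        J[:3, i] = np.cross(axes[i], pe - origins[i])  # linear part
        J[3:, i] = axes[i]                             # angular part
    return np.linalg.det(J)

def wrist_singularity_scan(q123, n=25):
    """Sample det(J) over a grid of the three wrist angles (q4, q5, q6).

    Singularities appear as zero crossings (sign changes) of the
    returned 3D array for the fixed arm configuration q123.
    """
    grid = np.linspace(-np.pi, np.pi, n)
    D = np.empty((n, n, n))
    for i, q4 in enumerate(grid):
        for j, q5 in enumerate(grid):
            for k, q6 in enumerate(grid):
                D[i, j, k] = jacobian_det(np.r_[q123, q4, q5, q6])
    return grid, D
```

    Plotting an isosurface of D = 0 (e.g. with marching cubes) gives the visual singularity locus the abstract describes.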

    Download full text (pdf)
    fulltext
  • Eckerbom, Gustav
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Design and Benchmarking of an Embedded System for Low Power Event Based Vision2026Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The ability to perceive the environment with minimal energy is a remarkable characteristic of biological vision. Engineers are still far from achieving comparable efficiency in conventional computer vision, where cameras and processors consume significant power to handle dense frame-based data. Event cameras, inspired by the principles of biological vision, represent an effort to move closer to this efficiency by outputting only pixel-level brightness changes instead of full frames. This results in both high temporal resolution and sparse data that can be exploited for fast, energy-efficient computation. However, the baseline power required to acquire and transmit event data in embedded platforms is still not well characterized, and recent work has emphasized the need for realistic end-to-end benchmarks of complete computer vision pipelines in biologically inspired architectures. To address this gap, this thesis presents the design and benchmarking of a scalable embedded system combining the Prophesee GENX320 event camera with the quad-core Alif Ensemble E7 microcontroller featuring integrated Ethos-U55 hardware accelerators. Results show that the camera consumed less than 3 mW even in high-activity scenes, while the MCU dominated overall system consumption. The findings establish a practical lower bound for event-driven perception on the studied architecture and provide a reference point for future designs, while also identifying areas where further optimization is possible. Building on this work, future research can utilize the embedded system for comparative benchmarking of complete computer vision algorithms to evaluate power–performance trade-offs.

    Download full text (pdf)
    fulltext
  • Růžička, Jakub
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Live Migration of Confidential Virtual Machine2026Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Given stricter regulations and the transfer of sensitive data to the cloud, there is a clear need to further strengthen cloud security. The latest advances fall under the term confidential computing, which complements existing methods of protecting data during storage and transfer with memory encryption and remote attestation. The introduction of these countermeasures significantly raises the security bar for both remote attackers who operate malicious virtual machines and exploit vulnerabilities in cloud infrastructure, and malicious actors with physical access. AMD SEV-SNP and Intel TDX are the latest developments implementing confidential computing for server-grade processors. For wider adoption of this technology, effective management of confidential virtual machines, i.e., virtual machines utilizing the protection provided by confidential computing chips, is essential. To facilitate the lifecycle management of confidential virtual machines, the Secure VM Service Module (SVSM) has been introduced as a common layer that can be used across different vendors.

    This thesis investigates live migration of confidential virtual machines running under AMD SEV-SNP using the SVSM module. First, the current state of the art is investigated. Since there is no solution for migrating confidential machines with the SVSM module, a migration design is developed and proof of concept is provided for the most time-consuming part of the migration process, random access memory (RAM) migration. The proposed solution is analyzed and the steps needed to increase its scope and functionality are outlined. A new methodology for evaluating incomplete migration is developed and used to assess the upper limit of the overhead that AMD SEV-SNP confidential machines would represent for the live migration process. Our single-threaded proof-of-concept resulted in a tenfold slowdown in memory page transfers.
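    The RAM-migration step can be illustrated with the classic iterative pre-copy loop; this is a generic sketch under assumed callbacks, not the SVSM-based design, which must additionally handle SEV-SNP page encryption and attestation.

```python
def precopy_migrate(src_pages, read_dirty, send, max_rounds=10, threshold=8):
    """Classic iterative pre-copy loop for live RAM migration.

    src_pages  : dict page_number -> bytes (source RAM snapshot)
    read_dirty : callable returning the set of pages dirtied since last call
    send       : callable(page_number, bytes) transferring one page
    """
    # Round 0: transfer every page while the VM keeps running.
    for pn, data in src_pages.items():
        send(pn, data)
    # Iterate: re-send pages the guest dirtied during the previous round.
    for _ in range(max_rounds):
        dirty = read_dirty()
        if len(dirty) <= threshold:
            break  # dirty set small enough for a short pause
        for pn in dirty:
            send(pn, src_pages[pn])
    # Final stop-and-copy: pause the VM and flush the remaining dirty set.
    for pn in read_dirty():
        send(pn, src_pages[pn])
```

    For a confidential VM, each `send` would carry a page re-encrypted for transport, which is where the measured slowdown concentrates.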

    Download full text (pdf)
    fulltext
  • Grimau, Florent
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Source and Sensor Placement Optimisation for Spatial Active Noise Control2026Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Spatial active noise control (ANC) aims to manipulate sound fields within target regions through active control methods, with applications including noise cancellation and high-fidelity audio reproduction. The placement of secondary sources and sensors has a significant impact on system performance; however, traditional approaches optimise these placements independently, leading to suboptimal performance. This thesis addresses the joint source and sensor placement optimisation problem in spatial ANC systems. The challenge lies in formulating an effective cost function that simultaneously optimises secondary source placement for sound field synthesis and sensor placement for accurate spatial interpolation, while maintaining reasonable computational complexity. A novel joint cost function formulation was developed that explicitly incorporates interpolation error alongside synthesis error by comparing interpolated field estimates against the true desired field across the entire control zone. This approach addresses fundamental limitations in existing methods, including clustering behaviour and convergence issues, by properly accounting for the estimation uncertainty inherent in limited sensor measurements. A comprehensive simulation environment was implemented to evaluate four placement algorithms: Random (baseline), Regular (geometric), Greedy (optimisation-based), and Matching Pursuit (signal-processing-based). The algorithms were systematically compared using normalised mean square error and computational efficiency metrics across multiple resolution configurations. Results demonstrate that optimisation-based approaches provide meaningful performance improvements over simple placement strategies, with the Greedy algorithm achieving the best acoustic performance. However, significant computational trade-offs were revealed, with optimisation methods requiring substantially longer execution times.
The novel cost function formulation successfully resolved convergence issues and demonstrated superior robustness across varying acoustic conditions.
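    The Greedy placement strategy compared above can be sketched for a generic linear field model; the observation matrices, noiseless measurements, and NMSE objective below are illustrative assumptions rather than the thesis' acoustic model.

```python
import numpy as np

def nmse(est, true):
    """Normalised mean square error between two fields."""
    return np.sum(np.abs(est - true) ** 2) / np.sum(np.abs(true) ** 2)

def greedy_sensor_placement(H, G, field_coeffs, n_sensors):
    """Greedily pick sensor rows of H that minimise the NMSE of the
    field reconstructed at the control points (rows of G).

    H : (n_candidates, n_basis) candidate-sensor observation matrix
    G : (n_control, n_basis)    control-point evaluation matrix
    """
    true_field = G @ field_coeffs
    chosen = []
    for _ in range(n_sensors):
        best, best_err = None, np.inf
        for c in range(H.shape[0]):
            if c in chosen:
                continue
            rows = H[chosen + [c]]
            y = rows @ field_coeffs               # noiseless measurements
            coef, *_ = np.linalg.lstsq(rows, y, rcond=None)
            err = nmse(G @ coef, true_field)      # interpolation quality
            if err < best_err:
                best, best_err = c, err
        chosen.append(best)
    return chosen, best_err
```

    The quadratic cost of re-evaluating every remaining candidate at each step is the computational trade-off the abstract reports for optimisation-based placement.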

    Download full text (pdf)
    fulltext
  • Hamidovic, Dino
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Designing and Evaluating Gamified User Onboarding for an Industrial Simulation Tool2026Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis explores the potential of gamification to enhance the onboarding experience in a factory simulation tool designed for the corrugated fiber industry. The tool, developed for Van Den Bos Robotics, allows users to model transport systems and simulate production flows. While rich in functionality, the tool’s complexity may pose a challenge for new users during onboarding. To address this issue, a gamified onboarding system was developed, incorporating game design elements such as missions, achievements, levels, and in-app rewards to guide users through key features of the tool. Most gamification work focuses on employee performance and motivation; very little addresses learning complex simulation software. This gap provides a strong justification for the present study. By examining gamification in the context of digital tool onboarding, this thesis contributes new insights into improving learnability and user experience in industrial simulation environments. To evaluate the impact of this approach, a between-subjects experiment was conducted with 16–20 participants of diverse academic backgrounds. One group used a traditional onboarding method (verbal introduction and manual), while the other group experienced the gamified onboarding. Task performance, usability (SUS), engagement, and cognitive load (NASA-TLX) were measured and compared.

    Download full text (pdf)
    fulltext
  • Public defence: 2026-05-05 10:00 F3, Stockholm
    Karlsson, Tobias
    KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Material and Structural Mechanics.
    Intrinsic Self-Sensing in Advanced Composites Enabled by Carbon Nanostructures2026Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Lightweight composite structures have become essential in modern aerospace engineering, where increasing demands for fuel efficiency, reduced emissions, and improved operational reliability place new requirements on both materials and manufacturing. As composite components grow more advanced, featuring co-cured components, complex geometries, and thinner design margins, the need for improved insight into their internal behaviour becomes critical. Existing sensing technologies struggle to provide local, in-situ information from the composite’s interior during manufacturing or throughout its service life, without compromising structural integrity. This creates a gap between the capability of current sensing approaches and the monitoring demands required by the complexity of next-generation composites.

    This thesis addresses this gap by investigating the feasibility of embedding nanomaterial-based sensing structures, primarily vertically aligned carbon nanotube (VACNT) forests, into fibre-reinforced polymer composites. The overarching aim is to explore how such sensors can be integrated with minimal structural intrusion, from where their sensing behaviour originates, and how they can provide reliable, multifunctional monitoring both during manufacturing and in the cured state. The work spans the development of embedding and contacting strategies, bottom-up characterisation to investigate sensing mechanisms, and the exploration of both direct current (DC) and alternating current (AC) measurement approaches. Collectively, the research seeks to expand the understanding of how nanomaterial sensors interact with composite materials and how they can support the design of future multifunctional aerospace structures.

    The findings demonstrate that VACNT forests can be embedded into composite laminates without compromising the composite’s mechanical structure, while providing robust and reproducible sensing capabilities. A bottom-up analysis helps determine that the embedded VACNT forests’ thermoresistive behaviour is governed by fluctuation-assisted tunnelling, and their linear piezoresistive response originates in the intrinsic piezoresistivity of individual CNTs. The VACNT forests enable local in-situ cure monitoring of prepreg laminate, detecting key process transitions. Strategies for sensing in conductive carbon fibre environments are established, as well as comparisons with alternative nanomaterial-based sensors such as graphene coatings. Finally, by transitioning from DC resistance to AC impedance measurements, the work shows that embedded CNT structures can detect high transverse pressures and exhibit frequency-dependent sensing sensitivity.
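    The fluctuation-assisted tunnelling mechanism identified above is commonly described by Sheng's model; the form below is the standard expression from that literature, and the fitted parameter values are not given in the abstract:

```latex
R(T) \;=\; R_0 \,\exp\!\left(\frac{T_1}{T + T_0}\right)
```

    where $T_1$ reflects the tunnelling barrier energy and $T_0$ sets the low-temperature saturation; resistance falls as temperature rises, consistent with the negative temperature coefficient typically reported for CNT networks.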

    Together, these results establish VACNT forests as a promising, multifunctional, and structurally compatible sensing concept for advanced composite structures, offering new pathways for embedded process monitoring, structural health monitoring, and the development of next-generation multifunctional aerospace components.

    Download full text (pdf)
    Kappa_TobiasKarlsson
  • Public defence: 2026-05-05 14:00 F3 (Flodis), Stockholm
    Wang, Honglian
    KTH, School of Electrical Engineering and Computer Science (EECS), Theoretical Computer Science.
    Fairness and Diversity-Aware Algorithms: Ranking, Streaming, and Graph Analysis2026Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    As algorithmic systems increasingly shape human experiences, ensuring fairness and diversity has become a central challenge. This thesis studies fairness and diversity through the lens of algorithm design and optimization theory, providing formal frameworks and efficient algorithms across three domains: ranking-based recommendation, streaming recommendation, and graph analysis.

    The first part of the thesis investigates diversity maximization in recommender systems with stochastic user engagement. We first study how to rank items in recommendation systems, where users engage with content sequentially and probabilistically. We introduce two novel diversity measures, sequential sum diversity and sequential coverage diversity, which account for uncertainty in user engagement. Our goal is to find a ranking of items that maximizes these sequential diversity measures. We show that sequential coverage diversity is ordered submodular, enabling a greedy 1/2-approximation. For sequential sum diversity, we provide polynomial-time constant-factor approximation algorithms. Separately, we study a streaming setting where items arrive continuously and users may visit the system multiple times at arbitrary moments. For this setting, we aim to design a streaming algorithm that maximizes a stochastic coverage diversity measure. We show that a classic greedy algorithm achieves a tight 1/2-competitive ratio but requires memory linear in the stream length. With sublinear memory and an upper bound T' on the number of user visits T, we propose STORM, which achieves a 1/(4(T' − T + 1))-competitive ratio. We further propose STORM++, improving the competitive ratio to 1/(8δ), where the integer parameter δ controls the trade-off between solution quality and computational cost.
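    The greedy selection underlying these guarantees can be illustrated on a simplified, order-free variant of coverage diversity, where each selected item is consumed independently with an acceptance probability; the item/topic model below is an illustrative assumption, not the thesis' sequential measures.

```python
def expected_coverage(items, S):
    """Expected number of topics covered when each selected item is
    consumed independently with its acceptance probability.

    items: list of (topic_set, acceptance_probability) pairs.
    """
    covered = {t for i in S for t in items[i][0]}
    total = 0.0
    for t in covered:
        p_miss = 1.0
        for i in S:
            topics, p = items[i]
            if t in topics:
                p_miss *= 1.0 - p   # topic t missed only if every carrier is skipped
        total += 1.0 - p_miss
    return total

def greedy_ranking(items, k):
    """Rank k items greedily by expected marginal coverage gain."""
    S = []
    for _ in range(k):
        gains = {i: expected_coverage(items, S + [i]) - expected_coverage(items, S)
                 for i in range(len(items)) if i not in S}
        S.append(max(gains, key=gains.get))
    return S
```

    Because the objective is monotone submodular in the selected set, the greedy marginal-gain rule is exactly the kind of procedure whose competitive ratio the thesis analyses in the ordered and streaming settings.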

    The second part of the thesis studies diversity as a constraint in densest subgraph discovery and addresses the problem of finding dense communities in networks with heterogeneous relationship types. We model relationship types as edge colors and formulate the At Least h Colored Edges Densest Subgraph problem (ALHCEDGESDSP), which seeks subgraphs that are both dense and contain at least h_i edges of each color i. We prove that even the simplest variant of this problem is NP-hard and develop constant-factor approximation algorithms. Our key technical contribution links the edge-constrained and node-constrained versions of the densest subgraph problem. We first show that algorithms for the At Least k Nodes Densest Subgraph problem (DalkS) can approximate the At Least h Edges Densest Subgraph problem (ATLEASTHEDGESDSP), and then extend the algorithm for DalkS to handle colored edge constraints for solving ALHCEDGESDSP.

    The third part of the thesis studies graph interventions for fairness in networks. We examine two fairness measures, PageRank fairness and hitting-time fairness, developing methods to balance influence and improve accessibility across groups. For each demographic group, the sum of PageRank scores within it quantifies the influence of that group. PageRank fairness measures how far the current group-wise influence deviates from a given target. We formulate the PageRank fairness problem as an optimization problem that adjusts edge weights such that the resulting graph achieves a group-wise influence distribution as close to the target as possible. The optimization problem involves a nonconvex objective over a convex feasible set under practical constraints, such as not adding new edges and limiting the magnitude of weight changes. We solve this PageRank fairness maximization problem using efficient projected gradient descent, proving convergence to a stationary point. For hitting-time fairness in bipartite graphs, we formulate two problems, minimizing the average (BMAH) and the maximum hitting time (BMMH) from one group to another via strategic edge additions. We provide a (2+epsilon)-approximation for BMAH by combining fast random walk simulation with greedy supermodular minimization. For the more challenging BMMH problem, we develop two approaches: the first leverages its connection to the BMAH problem, and the second employs a method based on the asymmetric k-center problem. Both approaches yield provable approximation guarantees for BMMH.
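    The projected-gradient scheme for PageRank fairness can be sketched with power-iteration PageRank, a finite-difference gradient (the thesis presumably uses analytic gradients), a simple backtracking safeguard, and projection onto box constraints around the original weights; all parameter values below are illustrative.

```python
import numpy as np

def pagerank(W, alpha=0.85, iters=100):
    """Power iteration on the row-stochastic normalisation of weights W."""
    n = W.shape[0]
    P = W / W.sum(axis=1, keepdims=True)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1.0 - alpha) / n + alpha * (P.T @ r)
    return r

def fairness_loss(W, groups, target):
    """Squared deviation of group-wise PageRank mass from target shares."""
    r = pagerank(W)
    shares = np.array([r[g].sum() for g in groups])
    return float(np.sum((shares - target) ** 2))

def fair_reweight(W0, groups, target, step=0.2, rounds=60, max_change=0.5):
    """Projected gradient descent adjusting existing edge weights only."""
    W = W0.copy()
    mask = W0 > 0                                   # constraint: no new edges
    lo, hi = W0 * (1.0 - max_change), W0 * (1.0 + max_change)
    eps = 1e-4
    f = fairness_loss(W, groups, target)
    for _ in range(rounds):
        grad = np.zeros_like(W)
        for i, j in zip(*np.nonzero(mask)):         # finite-difference gradient
            W[i, j] += eps
            f_plus = fairness_loss(W, groups, target)
            W[i, j] -= 2.0 * eps
            f_minus = fairness_loss(W, groups, target)
            W[i, j] += eps
            grad[i, j] = (f_plus - f_minus) / (2.0 * eps)
        # Gradient step, projected onto the box of allowed weight changes.
        W_new = np.where(mask, np.clip(W - step * grad, lo, hi), 0.0)
        f_new = fairness_loss(W_new, groups, target)
        if f_new < f:
            W, f = W_new, f_new
        else:
            step *= 0.5                             # simple backtracking
    return W
```

    The box projection encodes the practical constraints mentioned in the abstract (no new edges, bounded weight changes), and the loss is guaranteed not to increase across rounds.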

    The algorithms and analysis techniques presented in this thesis contribute to the growing body of work on fairness and diversity in algorithmic systems. By formalizing new problem variants that capture realistic constraints in interactive and networked settings, and by providing approximation algorithms with provable guarantees, this work expands the toolkit available for addressing fairness and diversity challenges in computational systems.

    Download full text (pdf)
    Kappa summary
  • Fuldauer, Lena I.
    et al.
    University of Oxford, School of Geography and the Environment, Oxford, OX1 3QY, UK.
    Ives, Matthew C.
    University of Oxford, School of Geography and the Environment, Oxford, OX1 3QY, UK.
    Adshead, Daniel
    University of Oxford, School of Geography and the Environment, Oxford, OX1 3QY, UK.
    Thacker, Scott
    University of Oxford, School of Geography and the Environment, Oxford, OX1 3QY, UK.
    Hall, Jim W.
    University of Oxford, School of Geography and the Environment, Oxford, OX1 3QY, UK.
    Participatory planning of the future of waste management in small island developing states to deliver on the Sustainable Development Goals2019In: Journal of Cleaner Production, ISSN 0959-6526, E-ISSN 1879-1786, Vol. 223, p. 147-162Article in journal (Refereed)
    Abstract [en]

    Waste management is particularly challenging for Small Island Developing States (SIDS) due to their high per-capita infrastructure costs, remoteness, narrow resource bases and high dependence on tourism. The lack of integrated planning frameworks considering these SIDS-characteristics has stalled progress on sustainable waste management. To address this challenge, this paper proposes an integrated methodology for long-term waste management planning to deliver on the United Nations’ Sustainable Development Goals (SDGs) in SIDS. This explicitly combines multi-level participatory SDG visioning and back-casting with waste infrastructure modelling. This methodological development is piloted using a national-scale demonstration on Curacao. Three island-specific waste management portfolios (Inaction, Circular Economy, Technology-led), developed through stakeholder back-casting, are modelled for SDG delivery using a national accounting model under different socio-economic futures. The results highlight the importance of waste prevention and material re-use strategies within islands that engage local populations. Evidence-based identification and evaluation of waste management strategies, grounded in participatory processes, can itself contribute to SDG delivery.

    Download full text (pdf)
    fulltext
  • Wang, Honglian
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Digital futures.
    Tu, Sijing
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Digital futures.
    Oettershagen, Lutz
    University of Liverpool, Liverpool, UK.
    Gionis, Aristides
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Digital futures.
    Streaming Stochastic Submodular Maximization with On-Demand User Requests2025In: Neurips2025: 39th Conference on Neural Information Processing Systems, 2025, p. 1-31Conference paper (Refereed)
    Abstract [en]

    We explore a novel problem in streaming submodular maximization, inspired by the dynamics of news-recommendation platforms. We consider a setting where users can visit a news website at any time, and upon each visit, the website must display up to k news items. User interactions are inherently stochastic: each news item presented to the user is consumed with a certain acceptance probability by the user, and each news item covers certain topics. Our goal is to design a streaming algorithm that maximizes the expected total topic coverage.

    To address this problem, we establish a connection to submodular maximization subject to a matroid constraint. We show that we can effectively adapt previous methods to address our problem when the number of user visits is known in advance or linear-size memory in the stream length is available. However, in more realistic scenarios where only an upper bound on the visits and sublinear memory is available, the algorithms fail to guarantee any bounded performance. To overcome these limitations, we introduce a new online streaming algorithm that achieves a competitive ratio of 1/(8δ), where δ controls the approximation quality. Moreover, it requires only a single pass over the stream, and uses memory independent of the stream length. Empirically, our algorithms consistently outperform the baselines.

    Download full text (pdf)
    fulltext
  • Karlsson, Tobias
    et al.
    KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Material and Structural Mechanics.
    Hallander, Per
    KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Material and Structural Mechanics. Saab AB, Bröderna Ugglas gata, SE-581 88 Linköping Sweden.
    Åkermo, Malin
    KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Material and Structural Mechanics.
    Isolation strategies of carbon nanotubes for resistive sensing in carbon fibre prepreg laminates2024In: Proceedings 21st European Conference on Composite Materials (ECCM21), 2024Conference paper (Refereed)
    Abstract [en]

In this paper, two strategies to isolate resistive vertically aligned carbon nanotube (VACNT) forests from the conductive carbon fibre environment are presented, enabling embedded sensing with resistive carbon nanotube sensors in carbon fibre laminates. VACNT forests are used due to their already proven temperature and strain sensing capabilities and ease of placement in prepregs, enabling localised sensing. To achieve this, a non-permeable separator and a permeable separator are used and compared. Performing cure-monitoring on the VACNT forests during sample manufacture, it can be concluded that short-circuits of the resistive sensor are avoided. After manufacture, the temperature and strain sensing capabilities of the VACNT forests when using the two isolation strategies are evaluated. From these measurements, differences in temperature sensing range and sensitivity to strain are observed.

    Download full text (pdf)
    ECCM21_TobiasKarlsson
  • Wang, Honglian
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Theoretical Computer Science. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Digital futures.
    Zhou, Haoyun
    KTH, School of Electrical Engineering and Computer Science (EECS), Theoretical Computer Science. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Digital futures.
    Gionis, Aristides
    KTH, School of Electrical Engineering and Computer Science (EECS), Theoretical Computer Science. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Digital futures.
    Fairness-aware PageRank via Edge Reweighting2026In: WSDM 2026: Proceedings of the Nineteenth ACM International Conference on Web Search and Data Mining, Association for Computing Machinery (ACM) , 2026Conference paper (Refereed)
    Abstract [en]

    Link-analysis algorithms, such as PageRank, are instrumental in understanding the structural dynamics of networks by evaluating the importance of individual vertices based on their connectivity. Recently, with the rising importance of responsible AI, the question of fairness in link-analysis algorithms has gained traction.

    In this paper, we present a new approach for incorporating group fairness into the PageRank algorithm by reweighting the transition probabilities in the underlying transition matrix. We formulate the problem of achieving fair PageRank by seeking to minimize the fairness loss, which is the difference between the original group-wise PageRank distribution and a target PageRank distribution. We further define a group-adapted fairness notion, which accounts for group homophily by considering random walks with group-biased restart for each group. Since the fairness loss is non-convex, we propose an efficient projected gradient-descent method for computing locally-optimal edge weights. Unlike earlier approaches, we do not recommend adding new edges to the network, nor do we adjust the restart vector. Instead, we keep the topology of the underlying network unchanged and only modify the relative importance of existing edges. We empirically compare our approach with state-of-the-art baselines and demonstrate the efficacy of our method, where very small changes in the transition matrix lead to significant improvement in the fairness of the PageRank algorithm.
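A minimal power-iteration sketch shows the knob the method tunes: PageRank over a weighted transition matrix, where rescaling existing edge weights redistributes rank mass without adding or removing edges. The graph and weights below are made up for illustration, and this is plain PageRank, not the paper's fairness-loss gradient method:

```python
def pagerank(weights, alpha=0.85, iters=100):
    """Power iteration for PageRank on a weighted directed graph.

    weights: dict node -> dict of out-neighbour -> nonnegative weight.
    Out-weights are normalized into transition probabilities, so
    reweighting edges changes the walk but not the topology.
    """
    nodes = list(weights)
    n = len(nodes)
    pr = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        nxt = {v: (1.0 - alpha) / n for v in nodes}  # uniform restart
        for u, out in weights.items():
            total = sum(out.values())
            if total == 0:  # dangling node: restart uniformly
                for v in nodes:
                    nxt[v] += alpha * pr[u] / n
            else:
                for v, w in out.items():
                    nxt[v] += alpha * pr[u] * w / total
        pr = nxt
    return pr
```

Varying only the relative weights on a node's out-edges shifts mass between its neighbours, which is exactly the degree of freedom the projected gradient descent described above optimizes against the fairness loss.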

    Download full text (pdf)
    fulltext
  • Wang, Honglian
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Tu, Sijing
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Gionis, Aristides
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS.
    Sequential Diversification with Provable Guarantees2025In: WSDM2025: Proceedings of the Eighteenth ACM International Conference on Web Search and Data Mining, Association for Computing Machinery (ACM) , 2025, p. 345-353Conference paper (Refereed)
    Abstract [en]

    Diversification is a useful tool for exploring large collections of information items. It has been used to reduce redundancy and cover multiple perspectives in information-search settings. Diversification finds applications in many different domains, including presenting search results of information-retrieval systems and selecting suggestions for recommender systems.

    Interestingly, existing measures of diversity are defined over sets of items, rather than evaluating sequences of items. This design choice comes in contrast with commonly-used relevance measures, which are distinctly defined over sequences of items, taking into account the ranking of items. The importance of employing sequential measures is that information items are almost always presented in a sequential manner, and during their information-exploration activity users tend to prioritize items with higher ranking.

    In this paper, we study the problem of maximizing sequential diversity. This is a new measure of diversity, which accounts for the ranking of the items, and incorporates item relevance and user behavior. The overarching framework can be instantiated with different diversity measures, and here we consider the measures of sum diversity and coverage diversity. The problem was recently proposed by Coppolillo et al. [11], where they introduce empirical methods that work well in practice. Our paper is a theoretical treatment of the problem: we establish the problem hardness and present algorithms with constant approximation guarantees for both diversity measures we consider. Experimentally, we demonstrate that our methods are competitive against strong baselines.
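One toy way to see why a sequential measure differs from a set measure is to weight each rank by the probability the user reaches it. The geometric "patience" model below is an illustrative assumption, not one of the paper's definitions of sum or coverage diversity:

```python
def sequential_coverage(seq, topics_of, continue_prob=0.8):
    """Expected topic coverage of a ranked sequence under a toy user
    model: position i is inspected with probability continue_prob**i,
    and a topic counts once its first covering item is inspected."""
    first_pos = {}
    for i, item in enumerate(seq):
        for t in topics_of[item]:
            first_pos.setdefault(t, i)  # remember earliest covering rank
    return sum(continue_prob ** i for i in first_pos.values())
```

Two orderings of the same item set can score differently, which is precisely the behaviour a set-based diversity measure cannot express.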

    Download full text (pdf)
    fulltext
  • Karlsson, Tobias
    et al.
    KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Material and Structural Mechanics.
    Zhybak, Mykhailo
    Grafren AB Industrigatan 9, SE-582 77 Linköping, Sweden.
    Hallander, Per
    KTH, School of Engineering Sciences (SCI), Engineering Mechanics. SAAB AB, Bröderna Ugglas gata, SE-581 88, Linköping, Sweden.
    Åkermo, Malin
    KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Material and Structural Mechanics.
    Comparative Study of Graphene Coated Glass Fibres and Vertically Aligned Carbon Nanotube Forests as Embedded Structural Health Monitoring Systems2025In: Proceedings 24th International Conference on Composite Materials, International Committee on Composite Materials (ICCM) , 2025Conference paper (Refereed)
    Abstract [en]

In this paper, a graphene coating on glass fibre weave has been evaluated as a multifunctional sensing material, based on previous work performed by the authors on embedded vertically aligned carbon nanotube forests. First, the graphene coating has been assessed as a resistive cure-monitoring sensor when embedded in a thermosetting glass fibre/epoxy laminate to monitor its production. The graphene coating showed similar results to the resistive cure monitoring of carbon nanotubes. However, the graphene coating diverges in its resistive signature upon reaching cure temperature, showing a continuous resistance decrease in this phase, suggesting a sensitivity to cure-induced shrinkage of the epoxy not seen in the carbon nanotube sensor. Later, in the cured state, the embedded graphene coating functions as an excellent temperature sensor, possessing a negative thermoresistive effect. However, as a strain sensor, the graphene coating does not perform as well as the embedded carbon nanotube sensor, possessing an initial drift in resistance upon its first load cycle and additional drift during constant-strain conditions and when unloaded.

    Download full text (pdf)
    ICCM24_ID132_TobiasKarlsson
  • Larsson Forsberg, Albin
    et al.
    Ericsson, Sweden.
    Lau, Kenneth
    Elekta Instrument AB, Sweden.
    Nikou, Alexandros
    Ericsson, Sweden.
    Feljan, Aneta Vulgarakis
    Ericsson, Sweden.
    Tumova, Jana
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning.
    Diffusion Models for Constrained Planning with Probabilistic Risk-awareness Guarantees2026In: Proceedings of the 18th International Conference on Agents and Artificial Intelligence, INSTICC , 2026, p. 2350-2358Conference paper (Refereed)
    Abstract [en]

    Diffusion models have shown great potential in generating trajectory plans for agents in environments with unknown dynamics. However, such models provide no safety guarantees. In this work, we focus on risk-aware planning with respect to safety constraints and introduce a probabilistically risk-aware variant of Diffuser (PRA-Diffuser). The diffusion model initially learns a distribution over trajectories that may or may not be unsafe. We then fine-tune this model to reduce the probability of sampling such unsafe trajectories. We analyze the proposed solution and introduce a provable lower bound on risk of safety violation leveraging concentration inequalities for conditional Value-at-Risk. Our approach can be applied to models that have been pre-trained, potentially from datasets containing unsafe trajectories. Our empirical results demonstrate that our approach significantly reduces unsafe trajectories generated by the diffusion model across multiple environments.
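The risk measure underlying the bound can be illustrated on empirical samples. This is a generic empirical CVaR estimator, a sketch only, not the paper's concentration-inequality construction:

```python
def cvar(samples, alpha=0.1):
    """Empirical conditional Value-at-Risk: the mean of the worst
    (largest) alpha-fraction of loss samples."""
    s = sorted(samples, reverse=True)
    k = max(1, int(round(alpha * len(s))))  # at least one tail sample
    return sum(s[:k]) / k
```

Because CVaR averages over the tail rather than picking a single quantile, it is sensitive to how bad the unsafe trajectories are, not just how many there are, which is what makes it a natural basis for risk-awareness guarantees.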

    Download full text (pdf)
    fulltext
  • Larsson Forsberg, Albin
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. Ericsson AB, Stockholm, Sweden.
    Nikou, Alexandros
    Ericsson AB, Stockholm, Sweden.
    Feljan, Aneta
    Ericsson AB, Stockholm, Sweden.
    Tumova, Jana
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning.
    Learning Long-Horizon Multi-Agent Coordination from Temporal Logic Specifications2026In: Proceedings of the 18th International Conference on Agents and Artificial Intelligence, INSTICC , 2026, Vol. 1, p. 70-79Conference paper (Refereed)
    Abstract [en]

We study multi-agent reinforcement learning (MARL) under temporally extended Signal Temporal Logic (STL) objectives, which require reasoning over both long-horizon dynamics and inter-agent relations. We propose TD-MAT, a transformer-based architecture with multivariate positional encodings, causal temporal masking, and a decomposed reward based on arithmetic–geometric mean robustness with variance regularization. Experiments on coordination tasks ranging from unstructured multi-objective problems to strict temporal sequencing show that TD-MAT learns effective long-term behaviors and generalizes to heterogeneous agent settings. Ablation studies highlight the necessity of temporal masking, positional encodings, and reward decomposition, while comparisons to MAPPO, RMAPPO, and MAT reveal that transformers provide the greatest benefit on unstructured, long-horizon tasks.
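STL assigns formulas a quantitative robustness rather than a Boolean verdict; for the simplest temporal operators over a predicate x > c this reduces to a signed margin. The sketch below shows the standard min/max semantics only, not TD-MAT's decomposed arithmetic–geometric reward:

```python
def rob_always(trace, c):
    """Robustness of 'always (x > c)' over a finite trace: the worst
    margin; positive iff the predicate holds at every step."""
    return min(x - c for x in trace)

def rob_eventually(trace, c):
    """Robustness of 'eventually (x > c)': the best margin; positive
    iff the predicate holds at some step."""
    return max(x - c for x in trace)
```

Using such margins as rewards gives the learner a dense signal about how close it is to satisfying or violating the specification, which is why robustness-based reward shaping is common in RL under temporal logic objectives.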

    Download full text (pdf)
    fulltext
  • Soman, Supriya Mini
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Energy Technology, Heat and Power Technology.
    Golzar, Farzin
    KTH, School of Industrial Engineering and Management (ITM), Energy Technology, Heat and Power Technology.
    Rolando, Davide
    KTH, School of Industrial Engineering and Management (ITM), Energy Technology, Applied Thermodynamics and Refrigeration.
    Molinari, Marco
    KTH, School of Industrial Engineering and Management (ITM), Energy Technology, Applied Thermodynamics and Refrigeration.
    Occupancy Detection for Residential Buildings using Machine Learning with Indoor Temperature as the Only Training Feature2026In: Proceedings 17th International Conference on Applied Energy (ICAE2025), Applied Energy Innovation Institute (AEii) , 2026, Vol. 64, article id 214Conference paper (Refereed)
    Abstract [en]

Global floor area is increasing every year, which in turn drives up electricity and heating demand in buildings. Residential buildings have huge potential for energy savings, and there is an immediate need to decarbonize them by 2050. Machine learning is finding applications across all fields and will thus have an important role to play in the building sector as well. One of the important challenges building owners need to tackle is occupancy detection in residential apartments, which can help save considerable amounts of energy and cost. However, occupancy is highly variable, and it is difficult to quantify and predict because of the random and individualistic nature of humans. In addition, scalable approaches for occupancy detection should prioritize data from common and cost-effective sensors such as temperature sensors. In contrast to existing literature, which has stated that occupancy detection based on data from a single environmental sensor is not appropriate for obtaining good results, this paper aims to detect occupancy in a real residential building using only indoor temperature as the feature to train the model. Different machine learning models and techniques are studied and tested to understand how the accuracy of occupancy detection can be increased. With the right techniques, it has been possible to obtain promising results in the form of 95% accuracy using machine learning models trained only on indoor temperature.
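A hypothetical minimal sketch of single-sensor occupancy detection: a one-feature logistic regression on a temperature-change feature (occupants tend to warm a room). The feature construction, synthetic data, and model choice here are illustrative assumptions, not the paper's pipeline or results:

```python
import math

def train_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit a 1-feature logistic regression by batch gradient descent:
    p(occupied) = sigmoid(w * temp_delta + b)."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x  # gradient of the log-loss w.r.t. w
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    """1 = occupied, 0 = vacant, thresholding the sigmoid at 0.5."""
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5 else 0
```

In practice the paper's higher accuracy comes from richer temperature-derived features and stronger models, but even this toy version shows that a single environmental signal can carry an occupancy signature.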

    Download full text (pdf)
    fulltext
  • Champavere, Aude
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment.
    Circular Economy Strategies in French Urban Renewal Projects: Exploring Opportunities and Challenges within the ANRU Framework2026Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The construction sector, deeply entrenched in a linear model of resource extraction, consumption, and disposal, significantly contributes to environmental degradation. In response, the circular economy paradigm has emerged as a key alternative. Given their high resource intensity, large-scale urban development projects call for a reconfiguration of their material flows. These interventions offer a strategic opportunity to develop closed loops at a fine spatial scale. This study explores the development of circular economy strategies for construction and demolition materials in French urban renewal projects led by the National Agency for Urban Renewal (ANRU). A qualitative research design is adopted, combining a literature review, document analysis, ten semi-structured interviews, a site visit and participant observation. Following a contextual analysis of the ANRU Framework, two case studies of NPNRU projects in the Est Ensemble area are examined. The literature review engages with conceptual approaches to the circular economy in the built environment and identifies urban metabolism as a key theoretical concept at the macro scale. The subsequent analysis is informed by this framework, complemented by the Multi-Level Perspective (MLP). The empirical results first identify opportunities associated with intense material flows and conditions conducive to innovation, as well as barriers related to the spatial, temporal and material characteristics of this specific context. The case studies further reveal key implementation mechanisms, particularly financial instruments, physical platforms, and monitoring tools. Finally, the findings point to conditions for scaling up circular practices, including institutional changes, enhanced synergies and a broader paradigm shift in urban production. In brief, this thesis contributes to both the literature on circular economy in the built environment and to research on ANRU projects. 
The integration of circular economy theory with empirical evidence highlights the potential of these urban renewal programmes in the transition to a circular regime, while also acknowledging the diverse associated limitations.

    Download full text (pdf)
    fulltext
  • Public defence: 2026-04-28 13:00 D3, Stockholm
    Verkama, Emil
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Algebra, Combinatorics and Topology.
    Inversions in the 1324-avoiding permutations2026Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

The study of pattern avoidance stems from a question in computer science: which sequences of distinct numbers can be sorted by a single pass through a stack? Knuth (1968) found that these sequences are characterized by having no subsequence a, b, c such that c < a < b. Such a subsequence has the same relative order as the permutation 231, so we say that the original sequence avoids 231.

    In more general terms, pattern avoidance is a natural way to restrict the structure of permutations by forbidding subsequences with a certain relative order. This has become a popular topic in enumerative combinatorics, and it has connections to various other fields.

Determining the number of 1324-avoiding permutations of length n is the most important open problem in pattern avoidance. This thesis comprises two papers contributing to the inversion monotonicity conjecture of Claesson, Jelínek and Steingrímsson (2012), according to which av_{n,k}(1324), the number of 1324-avoiders of length n with a fixed number k of inversions, is weakly increasing in n. If the conjecture is true, it improves our understanding of the asymptotic behavior of the number of 1324-avoiders.

In Paper A, we provide an explicit formula for av_{n,k}(1324) for all n ≥ (k + 7)/2. The proof relies on a novel structural characterization of 1324-avoiders with few inversions. As a byproduct, we show that the inversion monotonicity conjecture holds when n ≥ (k + 7)/2.

In Paper B, we study the inversion monotonicity of classes of permutations avoiding multiple patterns. We show that the sets {1324, 231} and {1324, 2314, 3214, 4213} are inversion monotone via explicit injections, and introduce a general procedure for constructing large inversion-monotone sets. We also analyze the limiting structure of large permutations with a fixed number of inversions avoiding 1324 and one additional pattern of length four, and prove several half-monotonicity results similar to those in Paper A.
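For small n, the quantities in the conjecture can be checked by brute force. The helper below (exponential-time, for illustration only, not a method from the thesis) counts pattern-avoiding permutations with a given number of inversions:

```python
from itertools import combinations, permutations

def contains_pattern(perm, pattern):
    """True if perm has a subsequence with the same relative order
    as pattern (i.e. perm contains the pattern)."""
    k = len(pattern)
    order = sorted(range(k), key=lambda i: pattern[i])
    for idx in combinations(range(len(perm)), k):
        sub = [perm[i] for i in idx]
        if sorted(range(k), key=lambda i: sub[i]) == order:
            return True
    return False

def inversions(perm):
    """Number of pairs i < j with perm[i] > perm[j]."""
    n = len(perm)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if perm[i] > perm[j])

def av_nk(pattern, n, k):
    """Count pattern-avoiding permutations of length n with k inversions."""
    return sum(1 for p in permutations(range(1, n + 1))
               if inversions(p) == k and not contains_pattern(p, pattern))
```

Summing av_nk over k recovers familiar totals: 231-avoiders of length 4 number 14 (the Catalan number C_4), while 1324-avoiders of length 4 number 23, since the only length-4 permutation containing 1324 is 1324 itself.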

    Download full text (pdf)
    fulltext
  • Deuda Lundkvist, Samuel
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Chemistry.
    Redistribution of Metallic Lithium in Anode Materials: A 7Li NMR Study in Batteries2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

Understanding the degradation mechanisms in lithium-ion batteries is essential for the development of next-generation batteries. Herein, a post-mortem protocol described in “Quantifying lithium lost to plating and formation of the solid-electrolyte interface in graphite and commercial battery components” by Fang et al. is used to track the redistribution of metallic lithium in two types of carbonaceous anode materials: graphite and hard carbon. In this report, two different commercially available graphite anodes were used, one power-optimized and the other energy-optimized. The hard carbon anode was prepared in-house.

Metallic lithium in plated samples displayed an immediate redistribution to ionic compounds at low chemical shifts in a complex manner. Graphite displayed a behaviour that could be modelled by a double-exponential function, whilst hard carbon displayed a single-exponential decay to a baseline. There was no observed difference between the power-optimized and energy-optimized graphite. “Dead” lithium in delithiated samples showed a period of stability followed by a fast redistribution into ionic compounds. No attempt was made to identify these ionic compounds due to their large NMR linewidths.
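The two decay laws mentioned above can be written out explicitly; the parameter names below are placeholders, since no fitted values are reported here:

```python
import math

def double_exp(t, a1, k1, a2, k2, c):
    """Double-exponential decay with two timescales plus an offset,
    as used here to model the graphite signal (placeholder parameters)."""
    return a1 * math.exp(-k1 * t) + a2 * math.exp(-k2 * t) + c

def single_exp(t, a, k, c):
    """Single-exponential decay to a baseline c, as for hard carbon."""
    return a * math.exp(-k * t) + c
```

The qualitative difference is that the graphite signal needs two rate constants (a fast and a slow redistribution process), while hard carbon is described by a single rate decaying toward a non-zero baseline.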

The plating kinetics of hard carbon differed significantly from graphite, requiring a much greater overcharge to reach the deposition underpotential, which was the same for both anode materials.

    Download full text (pdf)
    fulltext
  • Zarouf, Marwan
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Biomedical Engineering and Health Systems.
    Influence of Aerobars Positions in Ultra Cycling on Musculoskeletal Loadings2026Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

Ultra-endurance cycling exposes athletes to prolonged static postures, leading to a high prevalence of overuse injuries, particularly in the lower back and knees. While aerobars are widely used to reduce aerodynamic drag, the internal musculoskeletal loads they induce remain poorly understood. This study aims to quantify the biomechanical impact of aerobar use compared to standard road positions and to evaluate the specific influence of stack and inclination settings on lumbar and lower limb joint reaction forces. Ten experienced cyclists performed trials on an instrumented ergometer at 70% of their Functional Threshold Power (FTP) across six conditions: hoods, drops, and four aerobar configurations varying in stack (high/low) and inclination (0°/15°). A custom automated musculoskeletal modeling workflow using OpenSim was developed to compute compressive and shear forces for the lumbar and lower limb joints. Results revealed that the flat aerobar position (0°) significantly reduced L4-L5 compressive forces (17.5 ± 10.9 N/kg) compared to the standard hoods position (20.9 ± 4.8 N/kg), likely due to effective skeletal support. However, tilting the aerobars to 15° significantly increased spinal loading. A mechanical trade-off was identified: aerobar use shifted the load distally, more than doubling hip compression forces (~46 N/kg) and significantly increasing knee forces compared to road positions. These findings suggest that while flat aerobars reduce spinal joint loading, they impose severe stress on the lower limbs, highlighting the need for injury-specific bike-fitting strategies in ultra-endurance cycling.

    Download full text (pdf)
    fulltext
  • Feito, Ivana
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Fibre- and Polymer Technology.
    Comparative Evaluation of Photoactivators for Clinical Polymer Applications-Crosslinking performances and safety screening in the context of applied research in an industrial environment2026Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Photocuring is central to many biomedical polymer applications because light-activated photoactivators (PAs) generate radicals that rapidly convert acrylate functions into a crosslinked network, enabling fast, localised curing. For clinical devices the PA must therefore combine efficient, rapid crosslinking under the clinical light source with an acceptable safety profile.

Following a regulatory change, three candidate photoactivators (A, B and C) were evaluated as possible replacements for the current PA in a medical polymer formulation. Absorption at 405 nm (the emission of the clinical light source) was confirmed for all candidates, and formulations were prepared by mixing the base polymer with PA solutions in ethanol at the highest feasible loading. Target concentrations of 2500 ppm and 5000 ppm were tested. Crosslinking kinetics and efficiency were characterized using real-time FTIR, ATR-FTIR and photo-DSC; efficient crosslinking was defined as ≥90% acrylate conversion within 15 s at the Lowest Acceptable Polymerization Irradiance (70 mW·cm⁻²).
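The conversion criterion can be computed from IR band areas in the usual way, normalizing the acrylate band by an internal reference band. This is a generic formula sketch, not the study's exact data treatment:

```python
def acrylate_conversion(a0, a_t, ref0=1.0, ref_t=1.0):
    """Degree of acrylate conversion from IR peak areas, normalized
    by an internal reference band: 1 - (A_t/ref_t) / (A0/ref0)."""
    return 1.0 - (a_t / ref_t) / (a0 / ref0)

def meets_criterion(a0, a_t, threshold=0.90):
    """True if conversion reaches the efficiency threshold (here 90%)."""
    return acrylate_conversion(a0, a_t) >= threshold
```

With this definition, "≥90% conversion within 15 s" amounts to the normalized acrylate band shrinking to at most 10% of its initial area by the 15 s mark.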

Here we show that Candidate A, despite slower polymerization kinetics, offers the best balance of performance and safety: minimal concentrations to meet the 90%/15 s criterion were ~2500 ppm for the current PA and ≥2500 ppm for Candidates B and C, whereas Candidate A required ≥5000 ppm. All formulations gave similar depth of cure and were stable to ambient light for 7 h except Candidate C, which hardened more rapidly. Regulatory database review indicated that Candidate A had the lowest toxicity, Candidate C was more cytotoxic and classified as skin sensitizer 1A, and less data were available for Candidate B.

    These findings revise the expectation that all candidate PAs would perform like the incumbent PA at the same dose: structural and photophysical differences alter required operating concentrations and residual risk. Practically, the study identifies Candidate A as the preferred replacement and also shows that reducing the current PA below labelling thresholds is infeasible (900 ppm failed to achieve effective crosslinking even at higher irradiance and longer cure). Ongoing work evaluates POL004 formulations with Candidate A after supercritical CO₂ purification to confirm crosslinking efficiency and overall performance in an industrial, clinically relevant context.

    Download full text (pdf)
    fulltext
  • Public defence: 2026-05-08 14:00 https://kth-se.zoom.us/j/68743059353, Stockholm
    Mehdifar, Farhad
    KTH, School of Electrical Engineering and Computer Science (EECS), Decision and Control Systems.
    Funnel-Inspired Closed-Form Control for Satisfaction of Spatiotemporal Constraints and Multi-Agent Coordination2026Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Autonomous systems have become integral to industry and society, spanning applications from robotic platforms to autonomous vehicles and beyond. Most contemporary engineering systems exhibit complex nonlinear dynamics and are often subject to time-varying uncertainties, making high-fidelity modeling difficult. At the same time, these systems are increasingly required to execute complex tasks that extend beyond classical objectives such as set-point tracking and stabilization. In many real-world scenarios, they must satisfy spatiotemporal specifications, requirements that depend jointly on space and time. Such specifications can be formulated as time-varying constraints in control systems, and enforcing them is crucial for enhanced performance, guaranteed safety, and reliable, timely task execution. 

    Funnel-based control methods provide closed-form feedback laws that enforce certain classes of time-varying constraints for uncertain nonlinear systems. The main focus of this thesis is to develop new robust closed-form control schemes, rooted in the core ideas of funnel-based control, to address broader classes of time-varying constraints that cannot be treated directly by conventional funnel-based methods. In addition, the thesis investigates distributed coordination of multi-agent systems under spatial constraints, as well as the application of funnel-based methods to multi-agent formation control under transient performance requirements.

    The first part of the thesis is devoted to extending funnel-based control methods, particularly prescribed performance control (PPC), to address time-varying hard (safety) and soft (performance) funnel-type specifications. We then revisit the standard PPC design to highlight its limitations and to motivate the need for a new control framework. Building on the PPC design philosophy, we propose a novel robust closed-form control scheme that enforces generic time-varying set invariance for high-relative-degree, multi-input multi-output uncertain nonlinear systems, thereby accommodating classes of time-varying constraints beyond those handled by standard PPC. Finally, we extend the proposed design to treat potentially conflicting generalized time-varying hard and soft specifications, further broadening the applicability of the method.
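The core funnel mechanism can be illustrated on a toy scalar system: the tracking error is normalized by a shrinking funnel and passed through a barrier transformation that blows up at the funnel boundary, so a simple proportional feedback on the transformed error keeps the error inside. This is a generic textbook-style PPC sketch with made-up gains, not the controller developed in the thesis:

```python
import math

def funnel(t, rho0=1.0, rho_inf=0.05, decay=1.0):
    """Exponentially shrinking performance funnel rho(t)."""
    return (rho0 - rho_inf) * math.exp(-decay * t) + rho_inf

def simulate_ppc(x0=0.8, k=2.0, dt=1e-3, steps=5000):
    """PPC sketch for the scalar integrator x' = u. The normalized
    error z = x/rho is mapped through eps = ln((1+z)/(1-z)), which
    grows unboundedly as |z| -> 1, so the feedback u = -k*eps pushes
    the error away from the funnel boundary."""
    x, inside = x0, True
    for step in range(steps):
        t = step * dt
        z = x / funnel(t)
        eps = math.log((1 + z) / (1 - z))  # barrier transformation
        x += dt * (-k * eps)               # forward-Euler step of x' = u
        inside = inside and abs(x) < funnel(t + dt)
    return x, inside
```

The prescribed transient and steady-state performance is encoded entirely in rho(t): tightening rho_inf or the decay rate changes the guaranteed behaviour without re-deriving the control law.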

    In the second part of the thesis, we shift the focus to multi-agent coordination problems. First, we present a novel coordinate-free formation control scheme for directed leader–follower multi-agent systems that achieves almost-global convergence to a desired shape. Fully decentralized robust controllers are synthesized by leveraging the PPC framework to impose prescribed transient and steady-state performance on the agents’ formation errors, while ensuring robustness to system uncertainties. A key ingredient of the approach is the use of bipolar coordinates to obtain orthogonal (decoupled) formation-error coordinates for each follower. This not only promotes almost-global convergence to the desired shape but also enables a systematic and effective application of PPC. Finally, we introduce a distributed, task-based implicit formation determination and control problem in which each agent is subject to spatial constraints with respect to other agents and the environment. We reformulate the problem as a distributed optimization scheme and, based on this formulation, develop a control protocol for kinematic agents.

    Download full text (pdf)
    F.Mehdifar_PhD dissertation
  • Fonsati, Arianna
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Sustainable Buildings.
    Dervishaj, Arlind
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Sustainable Buildings.
    Gudmundsson, Kjartan
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Sustainable Buildings.
    Leveraging OpenBIM standards and information delivery specification (IDS) for digital validation in circular construction: reusing hollow core slabs2026In: Smart and Sustainable Built Environment, ISSN 2046-6099, E-ISSN 2046-6102, p. 1-24Article in journal (Refereed)
    Abstract [en]

    Purpose: The study investigates how openBIM workflows can support standardised digital validation processes, aiding the transition towards circular construction. Specifically, it examines the use of the Information Delivery Specification (IDS) standard to validate Industry Foundation Classes (IFC) models for the reuse of precast hollow core slabs, in accordance with the Norwegian standard NS 3682:2022 Hollow Core Slabs for Reuse.

    Design/methodology/approach: This study proposes an Automated Compliance Checking (ACC) method to verify the compliance of IFC models with reuse-driven information content for precast hollow core slabs. To achieve this, the IDS standard was selected to develop and test an openBIM validation workflow. The methodology includes three main steps: (1) identifying the minimum set of information requirements, derived primarily from NS 3682:2022; (2) implementing these as IDS specifications linked to IFC entities; and (3) applying the workflow to a case study of a precast hollow core slab modelled in Autodesk Revit and exported to IFC4x3. The validation is performed using the open-source Bonsai add-on for Blender, while a complementary buildingSMART Data Dictionary (bsDD) is developed to ensure semantic consistency and standardised property definitions.

    Findings: The results confirm that IDS enables effective automation of compliance checking for data presence and structure within IFC models. The IDS-based workflow reliably identifies missing information, highlighting the specific objects that fail the validation. By aligning rule-based validation with recognised standards, the proposed approach supports quality assurance for the digital representation of reusable hollow core slabs. Furthermore, the study establishes a standardised, machine-readable database of reuse requirements. The approach can be adapted and applied to other building components, promoting interoperability and reliability in digital marketplaces for reclaimed materials.

    Research limitations/implications: While the proposed ACC workflow guarantees consistency and completeness of digital information, it does not assess the correctness or validity of the underlying physical test results or of the initial data entry, such as mechanical properties or service life parameters. The applicability of the approach also depends on the digital maturity of stakeholders and the completeness of IFC models. Broader applicability will require further harmonisation of reuse-related standards and increased awareness of information requirements.

    Practical implications: The study illustrates how openBIM standards can be used within reuse-driven design and procurement processes through automated data validation. The combined use of IDS and bsDD enhances regulatory compliance, data transparency, and the reliability of digital inventories of reclaimed components, lowering barriers to reuse, especially for small and medium enterprises.

    Originality/value: This paper is among the first to apply the IDS framework to the context of building component reuse, translating reuse-oriented standards into machine-readable validation rules. It extends the use of openBIM standards from model verification to circular construction practices, supporting both digitalisation and sustainability efforts in the built environment. Specifically, the study contributes to improved interoperability and trust in digital marketplaces for reclaimed construction products.
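    At its core, the IDS data-presence check described above reduces to comparing a set of required properties against what each IFC element actually carries. The sketch below illustrates that idea only; the property names are hypothetical placeholders, not the actual NS 3682:2022 requirements or the study's IDS specifications, and a real workflow would operate on parsed IFC entities rather than plain dictionaries.

```python
# Illustrative requirement set; the names are placeholders, not the
# actual NS 3682:2022 / IDS fields used in the study.
REQUIRED_PROPERTIES = {
    "ConcreteStrengthClass",
    "RemainingServiceLife",
    "LoadBearingCapacity",
}

def check_presence(element: dict) -> set:
    """Return the set of required properties missing from one IFC-like element."""
    return REQUIRED_PROPERTIES - set(element.get("properties", {}))

# Example element with one property present and two missing:
slab = {"type": "IfcSlab", "properties": {"ConcreteStrengthClass": "C45/55"}}
missing = check_presence(slab)
```

    A validator built this way reports exactly which objects fail and which fields they lack, mirroring how the IDS-based workflow highlights missing information.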

    Download full text (pdf)
    fulltext
  • Poom, Elise
    KTH, School of Engineering Sciences (SCI), Physics.
    Thermal-hydraulic response of the wetwell after LBLOCA in Nordic BWR2026Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis investigates the thermal-hydraulic phenomena during a LBLOCA in a Nordic BWR, with a focus on the temperature and pressure response in the wetwell gas space. The phenomena in the pressure suppression system were simulated using the GOTHIC thermal-hydraulic code. Reliable response of a pressure suppression system is crucial for containing main steam line break accidents in BWRs to avoid leakage of radioactive materials to the environment. It is therefore a thoroughly researched subject, but a gap seems to exist in evaluating the accident with the maximum possible break area for the steam line. The objective of this thesis is to model the accident with the maximum break size and to evaluate the results and the suitability of the chosen methodology. Two models with different approaches to steam venting were developed to compare which one leads to better alignment with the expected temperature and pressure results. Sensitivity studies were performed on the main simplifications of the model to study their impact on the response. Comparison with other studies shows that a model with lumped blowdown pipes yields more conservative results than a model with one large subdivided blowdown pipe in GOTHIC. However, the model with lumped pipes lacks the capability to monitor processes in the pipe and to model the heat gradient accurately along the length of the pipe. Sensitivity studies emphasized the importance of the wetwell gas space grid resolution, the number of modeled blowdown pipes, and incorporating heat radiation from the blowdown pipe to the wetwell gas space into the model. The pressure safety criteria for the containment were not exceeded in any of the simulations in this study. Additional studies could be done on the condensation phenomena in the wetwell pool with the total number of pipes, and on the horizontal asymmetry of the temperature distribution in the wetwell containment.

    Download full text (pdf)
    fulltext
  • Chhatre, Kiran
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology. Adobe Research.
    Jeong, Hyeonho
    Adobe Research.
    Gryaditskaya, Yulia
    Adobe Research.
    Peters, Christopher
    KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology.
    Huang, Chun-Hao
    Adobe Research.
    Guerrero, Paul
    Adobe Research.
    TrajectoryMover: Generative Movement of Object Trajectories in VideosManuscript (preprint) (Other academic)
    Abstract [en]

    Generative video editing has enabled several intuitive editing operations for short video clips that would previously have been difficult to achieve, especially for non-expert editors. Existing methods focus on prescribing an object's 3D or 2D motion trajectory in a video, or on altering the appearance of an object or a scene, while preserving both the video's plausibility and identity. Yet a method to move an object's 3D motion trajectory in a video, i.e. moving an object while preserving its relative 3D motion, is currently still missing. The main challenge lies in obtaining paired video data for this scenario. Previous methods typically rely on clever data generation approaches to construct plausible paired data from unpaired videos, but this approach fails if one of the videos in a pair cannot easily be constructed from the other. Instead, we introduce TrajectoryAtlas, a new data generation pipeline for large-scale synthetic paired video data and a video generator TrajectoryMover fine-tuned with this data. We show that this successfully enables generative movement of object trajectories.

    Download full text (pdf)
    TrajectoryMover 2026
  • Moberg, Ella
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Fibre- and Polymer Technology.
    The influence of process conditions and additives on the mechanical properties of dry-formed, high-density fibre materials2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    One of the most important problems in society today is the abundance of plastics and microplastics causing harm in nature. Because of this, there is an urgent need to switch to wood fibre-based solutions to replace plastics, and one promising emerging process is the dry-forming of cellulosic fibres. Dry-forming is advantageous since it can produce biobased, recyclable products in complex shapes with mechanical properties on par with plastics. Since dry-forming is an emerging field, additional research is required to better understand the process. In this work, the effects of different forming conditions and of hydrophobisation using Alkyl Ketene Dimer (AKD) have been investigated. An increase in tensile properties was observed when increasing the pressing temperature from 100 to 140 °C, with a difference in the type of tensile failure between these temperatures. Below 100 °C the main failure mechanism is fibre pull-out, while above 140 °C the failure is more chaotic, indicating stronger interfibre bonding. The results from the analysis of AKD treatment were unexpected, showing only small differences in tensile properties, whereas previous experience shows decreased mechanical properties when material is treated with AKD. The reason for this is not yet understood, but it is an interesting area for further research.

    Download full text (pdf)
    fulltext
  • Zhang, Yuanqiu
    KTH, School of Industrial Engineering and Management (ITM), Energy Technology.
    Circular Process Optimization for Cellulose-based Membrane Production: An Integrated Material–Energy Flow and scenario-based Sustainability Assessment, A Case study of Cellfion2026Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The transition toward sustainable energy systems requires the development of environmentally benign and resource-efficient materials, including membranes used in electrochemical and humidification processes. This thesis focuses on the sustainability assessment and optimization of Cellfion’s pilot-scale production of PFAS-free (per- and polyfluoroalkyl substances free), cellulose-based humidifier membranes, positioning the work at the intersection of sustainable energy engineering, materials innovation, and circular economy principles.

    The study begins by mapping the pilot plant’s resource consumption through a Material and Energy Flow Analysis (MEFA), quantifying inputs such as water, energy, and chemical solvents. This baseline assessment identifies critical environmental and operational hotspots, with drying emerging as the largest energy consumer, washing stages driving significant water demand, and solvent losses contributing to hazardous waste generation. A Techno-Economic Assessment (TEA) complements this analysis by linking these environmental hotspots to cost structures, revealing that while fixed costs dominate at pilot scale, solvent losses and waste disposal remain non-trivial variable costs.

    To address these specific challenges (excess energy use in drying, high water consumption, and solvent wastage), a scenario-based approach is employed, developing three optimization pathways: Energy Optimization through waste-heat recovery in drying, Water Optimization via process-water recycling, and Material Reduction through improved solvent management. These scenarios are informed by analogous studies in sustainable materials manufacturing and modeled using MEFA and TEA frameworks to assess their environmental and economic impacts.

    The results show that targeted circular economy interventions can achieve notable sustainability gains without increasing production costs. Specifically, the Water Optimization scenario reduced freshwater use by over 35% and lowered material intensity by 34%, the Energy Optimization scenario decreased energy intensity by approximately 7%, and the Material Reduction scenario cut hazardous solvent waste by around 30%. All scenarios maintained unit production costs within ±1% of the baseline, confirming cost neutrality. Trade-off analysis reveals minimal cross-impacts, with the most notable being a slight energy increase in the water-recycling case.

    In conclusion, the integrated assessment demonstrates that PFAS-free membrane production can be made significantly more resource-efficient while retaining economic viability. The findings provide actionable guidance for Cellfion’s scale-up strategy, offering a replicable methodological framework for early-stage sustainable process design in the energy materials sector.

    Download full text (pdf)
    fulltext
  • Li, François
    KTH, School of Industrial Engineering and Management (ITM), Energy Technology.
    Development of a battery management system simulation tool2026Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    With the growth of renewable energy production, Battery Energy Storage Systems (BESS) have appeared on the French electricity grid. To monitor these batteries and ensure their safety, Battery Management Systems (BMS) have been developed as an interface between the battery cells and the Energy Management System (EMS) that dictates the energy flows between the BESS and the grid. Developing a robust BMS is at the core of economic optimization, since controlling the state of health and state of charge while meeting energy requirements is the main interest. This thesis focuses on the development of an academic BMS tool based on a 2-RC Equivalent Circuit Model (ECM) coupled with an aging model and a thermal model. The ECM was extended to replicate the hysteresis behavior of the Lithium Iron Phosphate (LFP) chemistry by representing surface particles with a probabilistic approach. Reversible and irreversible heat generation was given particular attention, with a feedback effect on the electrical circuit following an Arrhenius law. The aging model is physically driven, but in the absence of experimental data its parameters could only be determined mathematically. With decent accuracy at the single-cell level, the model was scaled up to the battery level with multiple cells in series and parallel strings, and the effects of cell-to-cell variability in capacity and resistance were addressed. Computation time and error relative to commercial data are studied in a sensitivity analysis that focuses on the number of particles in the hysteresis algorithm as well as the simulation time step. The safety measures implemented in the BMS were tested in different configurations to observe the different mechanisms at stake. A simulation of a world-leading manufacturer's battery cell was performed to determine the capacity fade over 20 years of daily cycling; the accuracy of the prediction was 0.138% RMSE. The tool simulates and monitors voltage, State of Health (SOH), State of Charge (SOC), and temperature of a battery pack, as a digital twin would in a commercial BMS.
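    The 2-RC equivalent-circuit structure at the heart of such a tool can be sketched in a few lines. The following is a minimal illustration, not the thesis's model: the parameter values (r0, r1, c1, r2, c2, capacity) are arbitrary placeholders, the open-circuit voltage is simplified to a linear function of SOC, and the hysteresis, thermal, and aging couplings are omitted.

```python
import math

def ecm_2rc_step(soc, v1, v2, i_cell, dt,
                 r0=0.002, r1=0.001, c1=2000.0,
                 r2=0.003, c2=20000.0, q_ah=100.0):
    """One time step of a generic 2-RC equivalent-circuit cell model.

    soc     state of charge (0..1)
    v1, v2  RC-branch overpotentials [V]
    i_cell  applied current [A] (positive = discharge)
    Returns updated (soc, v1, v2, terminal_voltage).
    """
    # Coulomb counting for state of charge
    soc = soc - i_cell * dt / (q_ah * 3600.0)
    # Exact discretization of the two first-order RC branches
    a1 = math.exp(-dt / (r1 * c1))
    a2 = math.exp(-dt / (r2 * c2))
    v1 = a1 * v1 + r1 * (1.0 - a1) * i_cell
    v2 = a2 * v2 + r2 * (1.0 - a2) * i_cell
    # Placeholder linear OCV; a real LFP OCV would be a lookup table
    ocv = 3.2 + 0.2 * soc
    v_term = ocv - i_cell * r0 - v1 - v2
    return soc, v1, v2, v_term
```

    Stepping this update over a current profile yields the terminal-voltage and SOC traces that a BMS monitors; scaling to pack level then amounts to composing such cells in series and parallel with per-cell parameter variability.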

    Download full text (pdf)
    fulltext
  • Public defence: 2026-04-23 14:00 Kollegiesalen, Stockholm
    Ennadir, Sofiane
    KTH, School of Electrical Engineering and Computer Science (EECS), Computing and Learning Systems.
    On the Adversarial Robustness of Graph Neural Networks2026Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Graph Neural Networks (GNNs) have emerged as the standard paradigm for machine learning on graph-structured data, demonstrating remarkable success in diverse applications such as molecular design, anomaly detection within networks, and recommendation systems. However, despite their effectiveness in learning meaningful representations for nodes and graphs, GNNs remain vulnerable to adversarial attacks. These attacks, which are small, strategically crafted perturbations to the input graph, can result in unreliable predictions. This vulnerability raises serious concerns regarding the deployment of GNNs in safety-critical domains like finance and healthcare, where ensuring robustness is crucial. Consequently, understanding and enhancing the adversarial robustness of GNNs has become a critical research focus, involving both the design of potent attack strategies and the development of resilient defense mechanisms.

    Many existing defense methods rely on pre-processing techniques or modifications to the message-passing framework to mitigate attacks, often by discarding or re-weighting parts of the input graph. Although these defenses have shown strong results, they are frequently based on heuristic reasoning and lack strong theoretical guarantees. Specifically, given the rich topological structure of input graphs, a deeper understanding of their vulnerabilities and internal behaviors is essential, especially regarding how an attack can propagate through the network. Moreover, current defense methodologies are typically evaluated only against the state-of-the-art attacks available at evaluation time; in the absence of theoretical guarantees, these defenses remain susceptible to more advanced or previously unseen attack strategies. This gap underscores the need for mechanisms that not only exhibit robust empirical performance but also provide certifiable robustness for long-term effectiveness. Furthermore, most current approaches entail high computational overhead, limiting their practical feasibility in real-world applications.

    In this thesis, we address key challenges in GNN adversarial robustness, focusing on the aforementioned drawbacks. First, we introduce defense mechanisms that are both empirically effective and grounded in solid theoretical analysis, thereby offering provable robustness against evolving attacks. Second, we investigate how to reconcile strong defense performance with computational efficiency, an essential requirement in domains such as mobile and online platforms. Achieving this balance is critical for broadening the deployment of robust GNNs in practical settings. Finally, we explore often overlooked factors related to the training dynamics, such as weight initialization and the number of training epochs, that can substantially influence a model’s underlying robustness, illustrating how effective parameter selection can bolster resilience at very limited cost.

    The contributions of this thesis are organized around four core pillars. In the first, we propose an adaptation of Graph Convolutional Networks (GCNs) using orthogonal weight matrices, showing both theoretically and empirically that this design can significantly enhance model robustness. In the second contribution, we present a simple yet powerful technique for injecting noise into hidden representations during training, which substantially improves robustness with minimal additional computational cost, consequently offering a more lightweight alternative to many existing, high-complexity defense methods. The third work examines the neglected interplay between training dynamics (e.g., number of epochs, initialization strategies) and model vulnerability, demonstrating how careful tuning of these parameters can enhance a model's underlying robustness. Finally, we propose a novel adversarial attack approach that generates adversarial graphs from scratch via a learnable generator, rather than merely perturbing existing graphs, thereby introducing new perspectives on attack methodologies.

    Through these contributions, the current thesis aims to provide theoretical insights and tools that could help advance the current understanding of adversarial attacks in the context of GNNs. These contributions and insights can advance the development of robust GNNs, paving the way for safer and more reliable graph-based machine learning systems.
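    The second contribution's core idea, injecting noise into hidden representations during training, can be illustrated schematically. The sketch below is a generic version of the technique, not the thesis's exact formulation: zero-mean Gaussian noise with a placeholder scale sigma is applied only in training mode, leaving inference untouched.

```python
import random

def noisy_hidden(h, sigma=0.1, training=True, rng=random):
    """Add zero-mean Gaussian noise to a hidden representation.

    Noise is applied only during training; at inference the
    representation is returned unchanged.
    """
    if not training:
        return list(h)
    return [x + rng.gauss(0.0, sigma) for x in h]

# During training the representation is perturbed; at evaluation it is not.
h = [0.5, -1.2, 3.0]
h_train = noisy_hidden(h, sigma=0.05, training=True)
h_eval = noisy_hidden(h, training=False)
```

    In a GNN, such a perturbation would sit between message-passing layers, which is what keeps the defense lightweight relative to methods that rewrite the graph itself.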

    Download full text (pdf)
    Kappa
  • Kemgne, Chloé Maeva
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Transport planning.
    Comparison and evaluation of pricing systems for formal and informal public transport in African metropolises2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Due to the increasing urbanisation of the African continent, its urban population is expected to double by 2050, putting a strain on road networks. It is therefore essential for transport planners to develop their public transport offer while keeping it affordable for travellers, in order to ease general traffic congestion, which already causes significant financial losses today. This master's thesis aims to provide an overview of various aspects of transport in different African cities; the municipalities selected for benchmarking are Nairobi, Abidjan and Kigali. After describing and mapping the network of formal operators in each municipality, the thesis describes and maps their informal networks. Two networks are then evaluated in terms of fare and geographical equity, using national surveys to establish the average expenditure of residents according to their area of residence and income class. It is shown that the weight of transport costs in monthly expenditure increases with poverty and distance from the city centre. In a second step, a multimodal model is used to assess the impact of implementing fare integration on the formal and informal transport networks in another city on the continent. The results indicate that the fare reduction has a negligible impact on non-public transport users and is not sufficient to create a modal shift towards public transport. For public transport users, the implementation of the measure may even prove negative if the integration of informal lines into the formal network is accompanied by an alignment of their fares with those of the formal network. This observation is potentially caused by a network design that favours direct connections, or by the input parameters of the model itself penalising connections.

    Download full text (pdf)
    fulltext
  • Bertilsson, Fredrik
    KTH, School of Architecture and the Built Environment (ABE), Philosophy and History, History of Science, Technology and Environment.
    Defense and disaster medicine: Civil contingencies and natural disasters in Swedish civil defense2026In: Journal of the history of medicine and allied sciences, ISSN 0022-5045, E-ISSN 1468-4373, article id jrag010Article in journal (Refereed)
    Abstract [en]

    As in many other countries, Swedish defense during the Cold War was primarily organized around the perceived threat of war, including the potential use of nuclear weapons. This article shifts attention from such scenarios to the ways in which civil contingencies and natural disasters were conceptualized as knowledge objects within the emerging field of defense and disaster medicine. Their incorporation into Sweden’s preparedness agenda signaled a broader and more multifaceted understanding of protection and security within the scientific advisory system of the Swedish defense. By centering medical knowledge production on these hazards, the article offers new insights into the role of medical expertise in Swedish preparedness, while simultaneously shifting focus away from a war-centered narrative of Cold War defense investments. The empirical exploration spans the period from the late 1950s, when a comprehensive governmental inquiry into Swedish defense medicine led to the establishment of the Delegation for Applied Medical Defense Research [Försvarsmedicinska forskningsdelegationen] and subsequently the Organizing Committee for Disaster Medicine [Katastrofmedicinska organisationskommittén] (Kamedo) in the first half of the 1960s. The study concludes in the mid-1970s, when the Delegation was incorporated into the Swedish Defense Research Establishment [Försvarets forskningsanstalt] (FOA).

    Download full text (pdf)
    fulltext
  • Berugoda Arachchige, Chathura Jayendra
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Ergonomics.
    Comparative Analysis of Postural Demands of Main and Assistant Surgeons During Traditional Multi-Port Laparoscopic versus Single-Port Robotic Procedures2026Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis project investigated whether single-port robotic surgery (SP) provides meaningful ergonomic advantages over traditional multi-port laparoscopy (LAP) for main and assistant surgeons. Work-related musculoskeletal disorders (WRMSDs) are highly prevalent among minimally invasive surgeons, yet objective, role-specific biomechanical comparisons between these two modalities and roles remain scarce. The aims of the study were to quantify and compare head, trunk, and upper-arm postural demands in main and assistant surgeons during LAP and SP robotic procedures, and to evaluate whether their exposure exceeded evidence-based action levels for neck and upper-extremity MSD prevention.

    An observational design was used at Karolinska University Hospital Solna, where thirteen experienced surgeons contributed to a total of 43 observed procedures (laparoscopic: 11 mains, 10 assistants; single-port robotic: 12 mains, 10 assistants). Wearable inertial measurement units continuously recorded head and trunk sagittal inclination (positive forward) and bilateral upper-arm elevation from incision to skin closure. For each role and modality, 10th, 50th, and 90th percentile angles were calculated and compared using t-tests (LAP vs SP within role). The postural results were interpreted in relation to established ergonomic thresholds for increased MSD risk, through one-sample one-tailed tests against recommended action levels (α = 0.05).
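    The percentile summaries and one-sample tests used in this kind of analysis can be sketched with standard-library tools. The angle values below are synthetic placeholders, not the study's data, and only the t statistic is computed; the one-tailed p-value would come from a t-distribution table or a statistics package.

```python
import math
import statistics

def percentile(xs, q):
    """Percentile with linear interpolation (q in [0, 100])."""
    s = sorted(xs)
    k = (len(s) - 1) * q / 100.0
    lo, hi = math.floor(k), math.ceil(k)
    if lo == hi:
        return s[lo]
    return s[lo] * (hi - k) + s[hi] * (k - lo)

def t_statistic_one_sample(xs, mu0):
    """t statistic for a one-sample test of the mean against mu0."""
    n = len(xs)
    return (statistics.mean(xs) - mu0) / (statistics.stdev(xs) / math.sqrt(n))

# Synthetic median arm-elevation angles (degrees), one value per procedure
angles = [31.2, 28.5, 33.0, 30.1, 29.4, 32.7, 34.1, 27.9, 31.8, 30.6]
p10, p50, p90 = (percentile(angles, q) for q in (10, 50, 90))
t = t_statistic_one_sample(angles, 30.0)  # tested against a 30-degree action level
```

    Comparing t (with n − 1 degrees of freedom) to the one-tailed critical value then decides whether exposure significantly exceeds the action level.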

    Single-port robotic surgery was associated with partial, role-specific ergonomic advantages: for main surgeons, single-port robotic surgery showed a trend toward less extreme backward head sagittal inclination compared to laparoscopic surgery: the 10th percentile head sagittal inclination was −11.9° in SP versus −18.9° in LAP (p = 0.32), closer to the −10° action level. The median head angles remained near neutral (LAP −2.0°, SP −0.6°) and 90th percentile forward head angles stayed well below the 50° action level (LAP 15.5°, SP 10.5°). Trunk sagittal inclination was modest for all roles (10th–90th percentiles: −9.0° to 11.6°), although the 10th-percentile trunk angles were negative in all groups (SP main −1.8°, LAP main −5.3°, SP assistant −9.0°, LAP assistant −7.2°), hence below the 0° action level. In contrast, SP main surgeons showed significantly higher left-arm elevation than LAP main surgeons at the 10th (15.5° vs 11.9°, p = 0.02) and 50th percentiles (26.6° vs 19.3°, p < 0.001), while the 90th percentile was not significantly different (p = 0.28). For the right arm, the medians were near the 30° action level in both modalities (LAP 26.0°, SP 32.0°), with 90th percentile values below 60° (LAP 42.3°, SP 47.4°). Assistants exhibited no significant LAP–SP differences in arm elevation, yet both groups were more backward than the −10° head sagittal inclination limit at the 10th percentile (LAP −16.0°, SP −22.0°) and worked with median arm angles clustered around 30° (left: 26.7°–28.7°; right: 23.6°–28.7°). Collectively, these statistics indicate that WRMSD risk in these procedures is dominated by neck and shoulder loading, particularly sustained backward head inclination and midrange arm elevation rather than lumbar forward inclination, and that SP technology only partially mitigates this burden for main surgeons while leaving assistant exposures largely unchanged.

    The findings demonstrate that single-port robotic technology does not provide a comprehensive ergonomic solution but redistributes biomechanical load within the surgical team. While console operation moderates extreme neck loading for main surgeons, shoulder elevation remains high, and assistant surgeons continue to experience pronounced backward head inclination and near-threshold arm elevations regardless of modality. Benchmarking against action levels confirms that many postural exposures fall in zones associated with elevated MSD risk, consistent with the high WRMSD prevalence reported in minimally invasive surgeons. Consequently, the thesis concludes that technological innovation must be complemented by multi-level ergonomic strategies including equipment and room redesign, optimized monitor and console positioning, task reallocation, role rotation, and ergonomics training aligned with Swedish work-environment regulations and the hierarchy of controls to sustainably reduce musculoskeletal risk in laparoscopic and single-port robotic surgery. 

    Download full text (pdf)
    fulltext
  • Public defence: 2026-04-17 10:00 Harry Nyquist, Stockholm
    Li, Zhenyu
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems.
    Sustainable Metasurface-Assisted Indoor Wireless Communication System Design2026Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    The densification of wireless networks toward fifth- and sixth-generation standards has intensified the demand for reliable high-throughput connectivity in indoor deployment scenarios (IDS), such as aircraft cabins, metro wagons, and stadiums. Although millimeter-wave (mmWave) communication offers the spectral resources needed to meet this demand, its sensitivity to propagation loss and blockages severely limits its performance, particularly in IDS. Metasurfaces have emerged as a promising means of extending mmWave coverage through manipulating the propagation environment. Advanced investigations have been conducted on metasurface-featured system performance enhancement. However, the operating cost, which is a practical and critical concern of metasurface deployment, has received insufficient attention in the literature. Deploying a reconfigurable metasurface in practice requires cabling, power supply, and control infrastructure, costs that represent a real barrier to scalable deployment, particularly in indoor environments like IDS, where infrastructure installation is physically limited or tightly regulated.

    This thesis investigates the design of sustainable metasurface-assisted indoor wireless communication systems, placing operating cost alongside performance as a primary design criterion. The work examines different types of metasurfaces that differ in the metasurface gain they provide and the operating cost they incur. By identifying and verifying an optimal design choice among these alternatives, this thesis advances a sustainable metasurface-assisted system that addresses the performance-cost dilemma inherent to IDS deployments.

    The first contribution studies the trade-off between operating cost and performance enhancement by optimizing a mixed static metasurface (SMS) and reconfigurable intelligent surface (RIS) deployment in an mmWave IDS. Using a fractional programming penalty-based successive convex approximation (FPPSCA)-based iterative algorithm, the results reveal a diminishing-returns relationship. While replacing two SMSs with RISs already yields a 13 Mbps gain, increasing the RIS count beyond 16 out of 22 surfaces produces less than 1 Mbps of additional gain, confirming that full reconfigurability is unnecessary and motivating a more cost-effective middle-ground solution. The second contribution proposes and evaluates a self-sustainable RIS (ssRIS)-assisted mmWave system for IDS, where ssRIS achieves self-sustainability through power harvesting via a codebook-based element splitting scheme, eliminating the need for cabling and external power. A two-stage iterative algorithm jointly optimizes phase shifts, user equipment (UE)-to-ssRIS associations, and time allocation. The results show that ssRIS outperforms SMS by up to 19.8 Mbps in compact environments, confirming a favorable position within the gain-cost trade-off, with coverage advantages diminishing as deployment distances grow. The third contribution conducts a feasibility study of ssRIS across diverse scenarios, analyzing how element count scales with transmit power, data rate demands, and outage constraints under element splitting (ES) and time switching (TS) schemes. TS benefits from stronger channel hardening under moderate conditions, but scales exponentially with harvesting difficulty, whereas ES scales only linearly, offering greater robustness in challenging environments. Together, these findings provide actionable guidance for practical ssRIS deployment.

    Download full text (pdf)
    Kappa for the web
  • Public defence: 2026-04-21 13:00 F3, Stockholm
    Khorsandmanesh, Yasaman
    KTH, School of Electrical Engineering and Computer Science (EECS), Communication Systems.
    Transceiver Architectures for Future Wireless Systems with Hardware Constraints2026Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In the upcoming era of communication systems, there is an anticipated shift towards using lower-grade hardware components to optimize size, cost, and power consumption. This shift is particularly beneficial for multiple-input multiple-output (MIMO) systems and Internet of Things devices, which require numerous components and extended battery lives. However, using lower-grade components introduces impairments, including various non-linear and time-varying distortions affecting communication signals. Traditionally, these impairments have been treated as additional noise due to the lack of a rigorous theory. This thesis explores a new perspective on how the structure of impairments can be exploited to optimize communication performance. To address these challenges, this thesis presents impairments-aware beamforming in various scenarios. 

    We first investigate systems with limited fronthaul capacity. We propose an optimized linear precoder for an advanced antenna system (AAS) operating at a 5G base station (BS) under a limited fronthaul capacity, modeled by a quantizer. The proposed precoder minimizes the mean-squared error (MSE) at the receiver using a sphere decoding (SD) approach.

    After analyzing MSE minimization, a new linear precoding design is proposed in the second part of this thesis to maximize the sum rate of the same system. This problem is solved by a novel iterative algorithm inspired by the classical weighted minimum mean square error (WMMSE) approach. Additionally, a low-complexity quantization-aware algorithm based on expectation propagation (EP) is presented for large massive MIMO setups, which is more practical for today's systems. A heuristic quantization-aware precoding method with even lower computational complexity is also presented and shown to outperform the quantization-unaware baseline, i.e., an optimized infinite-resolution precoder that is quantized afterwards. This study reveals that it is possible to double the sum rate at high SNR by selecting weights and precoding matrices that are quantization-aware.
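    The gap between quantization-aware and quantization-unaware precoding can be illustrated with a toy example (all values below are hypothetical, and this brute-force sketch is not the thesis's SD or WMMSE algorithm): for a small channel and a coarse per-entry quantization alphabet, choosing the quantized precoder that directly minimizes the MSE never does worse than designing an infinite-resolution precoder and quantizing it entry by entry.

    ```python
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 2x2 MIMO channel (hypothetical values, for illustration only).
    H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)

    # Coarse quantization alphabet: 1 bit per real/imaginary part of each entry.
    Q = np.array([a + 1j * b for a in (-0.5, 0.5) for b in (-0.5, 0.5)])

    def mse(P):
        # MSE proxy: distance of the effective channel H @ P from the identity.
        return np.linalg.norm(H @ P - np.eye(2), "fro") ** 2

    # Quantization-unaware baseline: design an infinite-resolution precoder,
    # then quantize each entry to the nearest alphabet point.
    P_inf = np.linalg.pinv(H)
    P_naive = Q[np.abs(P_inf.reshape(-1, 1) - Q).argmin(axis=1)].reshape(2, 2)

    # Quantization-aware design: exhaustive search over all quantized precoders
    # (4 entries x 4 alphabet points = 256 candidates).
    P_aware = min(
        (np.array(c).reshape(2, 2) for c in itertools.product(Q, repeat=4)),
        key=mse,
    )

    # The aware search includes the naive candidate, so it can only do better.
    assert mse(P_aware) <= mse(P_naive)
    ```

    In realistic dimensions the exhaustive search is intractable, which is why the thesis resorts to sphere decoding; the toy search here only demonstrates why awareness of the quantizer at the design stage helps.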

    Next, we adopt a splitting precoding architecture tailored to fronthaul-constrained systems for practical deployments. In modern systems, the AAS can perform part of the beamforming locally, for example, through beam-space selection. The remaining lower-dimensional interference-cancellation precoder can then be transmitted over the limited-capacity fronthaul link. Compared to the previous fully centralized setup under the same fronthaul constraint, this approach enables higher quantization resolution for the precoder coefficients. Moreover, since both the uplink pilot signals used for channel estimation and the downlink precoding matrix must be transmitted over the limited-capacity fronthaul link, we design a joint uplink–downlink bit allocation scheme to determine the optimal distribution of fronthaul resources between the two directions.

    In the final part of this thesis, we focus on the signaling problem in mobile millimeter-wave (mmWave) communication. The main challenges in mmWave systems are rapid fading variations and extensive pilot signaling. We study how often the combining matrix must be updated in a wideband mmWave point-to-point MIMO system under user equipment (UE) mobility. The concept of beam coherence time is introduced to quantify how frequently the UE must update its downlink receive combining matrix. The study demonstrates that the beam coherence time can be hundreds of times larger than the channel coherence time of small-scale fading. Simulations validate that the proposed lower bound on this concept guarantees no more than a 50% loss of received signal gain (SG). Based on these results, beam-coherence-aware two-stage digital combining is proposed for mmWave single-user point-to-point and multi-user MIMO systems. We also propose a time-domain channel estimation scheme.
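    The order-of-magnitude gap between beam and channel coherence times can be reproduced with a back-of-the-envelope simulation (every parameter below is an assumption for illustration, not a value from the thesis): small-scale fading decorrelates once the UE moves roughly half a wavelength, while a fixed beam from a large array keeps most of its gain until the UE leaves the beam's angular width.

    ```python
    import numpy as np

    # Illustrative parameters (assumptions, not values from the thesis).
    fc = 28e9          # mmWave carrier frequency [Hz]
    lam = 3e8 / fc     # wavelength [m]
    v = 10.0           # UE speed [m/s]
    d = 50.0           # BS-UE distance [m]
    N = 64             # antennas in a half-wavelength-spaced ULA

    # Small-scale channel coherence time: rule of thumb lambda / (2 v).
    T_channel = lam / (2 * v)

    def array_gain(theta, theta0):
        # Normalized gain of a beam steered at theta0, evaluated at UE angle theta.
        n = np.arange(N)
        a = np.exp(1j * np.pi * n * np.sin(theta))                # UE steering vector
        w = np.exp(1j * np.pi * n * np.sin(theta0)) / np.sqrt(N)  # fixed beamformer
        return np.abs(w.conj() @ a) ** 2 / N                      # 1 at theta = theta0

    # "Beam coherence time" proxy: how long a fixed beam keeps at least half
    # of its gain while the UE moves tangentially past the BS.
    t, dt = 0.0, 1e-4
    while array_gain(np.arctan(v * t / d), 0.0) >= 0.5:
        t += dt
    T_beam = t

    print(f"T_channel = {T_channel:.2e} s, T_beam = {T_beam:.2e} s, "
          f"ratio = {T_beam / T_channel:.0f}")
    ```

    With these assumed numbers the ratio comes out on the order of a hundred, consistent with the abstract's claim that the beam coherence time can vastly exceed the small-scale channel coherence time, so the combining matrix needs updating far less often than the fading rate alone would suggest.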

  • Raikova, Iuliia
    KTH, School of Architecture and the Built Environment (ABE), Sustainable development, Environmental science and Engineering.
    Beyond the city limits: analysing consumption-based emissions and spillover effects in the Swedish urban context2026Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Global efforts to achieve the Sustainable Development Goals and operate within Planetary Boundaries require comprehensive strategies that account for non-territorial impacts, particularly the negative spillover effects driven by consumption in wealthy nations. This highlights a critical disconnect: climate mitigation in high-consumption economies like Sweden mainly focuses on territorial emissions, creating a significant oversight in addressing the true global environmental impact of urban consumption patterns. This thesis evaluates the urban strategies of Stockholm, Gothenburg, and Malmö and identifies relevant policy instruments, including instruments from the Nordic Cooperation report's longlist, to pinpoint local policy needs. The analysis revealed a significant implementation gap: municipal strategies remain largely based on territorial accounting, resulting in insufficient integration of policies that effectively address consumption-related emissions and international spillover effects. The thesis concludes with a set of recommendations for local policies aimed at minimizing the impact of Swedish cities on the global climate and promoting positive global effects.
