KTH Publications (kth.se)
Results 201–250 of 266
  • Public defence: 2026-02-19 09:00 Kollegiesalen, Stockholm
    Lindblom, David
    KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Material and Structural Mechanics.
    Neutron Imaging and Constitutive Modeling of Hydrogen Embrittlement in Steels, 2026. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis concerns the phenomenon of coupled hydrogen diffusion and fracture in steels from both experimental and computational perspectives. Hydrogen embrittlement, where the ingress of hydrogen (H) reduces a steel’s load-carrying capacity, is a long-standing scientific challenge, first documented in the late 19th century. W.H. Johnson observed that exposing pure iron to an acidic solution led to premature fracture, and that the metal regained its original strength and ductility after being removed from the solution for a period. Despite more than a century of research, the mechanistic understanding of hydrogen embrittlement remains limited, primarily because of the multiscale nature of hydrogen behavior and its complex interactions with metallic microstructures. Hydrogen diffuses through the crystal lattice and interacts with grain boundaries, carbides, voids, cracks, and dislocations. Under external mechanical loading, hydrogen transport is further influenced by dilatational lattice distortions and by moving dislocations, adding further complexity. As a result, Fick’s law often fails to describe hydrogen diffusion in these systems accurately, and experimental investigations on the submicrometer, micrometer, and engineering scales remain challenging. This thesis addresses these challenges by combining fracture mechanics experiments with neutron imaging to investigate crack propagation caused by hydrogen embrittlement. Additionally, it presents a detailed numerical framework for modeling hydrogen embrittlement at the continuum scale. The strong coupling between mechanical fields and solute concentration necessitates advanced numerical techniques to solve the governing partial differential equations reliably and efficiently.
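
As a minimal illustration of why Fick’s law alone falls short here, continuum models commonly add a drift term to the flux that pulls hydrogen toward regions of hydrostatic tension. The sketch below integrates this 1-D stress-modified flux with explicit finite differences; all parameter values and the Gaussian stress peak are illustrative assumptions, not values from the thesis.

```python
import math

# Illustrative 1-D sketch of stress-assisted hydrogen diffusion; all
# parameter values are placeholders, not taken from the thesis.
D = 1e-9          # lattice diffusivity [m^2/s]
VH = 2e-6         # partial molar volume of hydrogen [m^3/mol]
R, T = 8.314, 300.0

N, L = 200, 1e-3              # grid points, domain length [m]
dx = L / (N - 1)
dt = 0.2 * dx**2 / D          # stable explicit time step

# Hydrostatic stress peaking mid-domain, e.g. ahead of a notch [Pa]
sigma = [50e6 * math.exp(-(((i * dx) - L / 2) / (L / 4))**2) for i in range(N)]
C = [1.0] * N                 # initially uniform concentration [mol/m^3]

def step(C):
    """One explicit update of dC/dt = -dJ/dx, with the stress-modified
    Fick flux J = -D dC/dx + (D*VH/(R*T)) * C * dsigma_h/dx."""
    J = []
    for i in range(N - 1):
        dCdx = (C[i + 1] - C[i]) / dx
        dsdx = (sigma[i + 1] - sigma[i]) / dx
        Cmid = 0.5 * (C[i] + C[i + 1])
        J.append(-D * dCdx + D * VH / (R * T) * Cmid * dsdx)
    Cn = C[:]                  # end nodes held fixed (far-field boundary)
    for i in range(1, N - 1):
        Cn[i] = C[i] - dt * (J[i] - J[i - 1]) / dx
    return Cn

for _ in range(3000):
    C = step(C)
```

With the drift term active, the concentration rises at the stress maximum — the qualitative behavior that localizes embrittlement ahead of cracks and notches.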

    Download full text (pdf)
    Kappa_DavidL
  • Enoksson, Fredrik
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Learning, Digital Learning.
    Davis, Richard Lee
    KTH, School of Industrial Engineering and Management (ITM), Learning, Digital Learning.
    Karunaratne, Thashmee
    KTH, School of Industrial Engineering and Management (ITM), Learning, Digital Learning.
    Land, Anna
    KTH, School of Industrial Engineering and Management (ITM), Learning, Digital Learning.
    Yoon Blomstervall, Jeongin
    KTH, School of Industrial Engineering and Management (ITM), Learning, Digital Learning.
    Bern, Caroline
    KTH, School of Industrial Engineering and Management (ITM), Learning, Digital Learning.
    KTH Internal report on the 2024 E-learning survey, 2026. Report (Other (popular science, discussion, etc.))
    Abstract [en]

    This report contains a large portion of the results of the KTH e-learning surveys administered in December 2024. One survey was sent to teachers and one to students. The main purpose of the surveys is to gain insight into how teachers and students perceive the current digital learning environment, and to explore near-future challenges concerning digital learning in higher education. The themes of those challenges in these surveys were generative AI and digital assessment. The data gathered and the report will also be used to make informed decisions for the work that is carried out within the E-learning object at KTH.

    The aim of this report has been to present the results in a straightforward way, mainly with graphs and comments on those visualisations of the data. Thus, no deep statistical analysis is presented in this report. The survey was divided into the following parts: demography, digital education, generative AI, and digital assessment. The data collection yielded responses from 314 teachers and 1291 students, corresponding to response rates of 19% and 6%, respectively.

    The results from the digital education part indicate that teachers think KTH has succeeded well with digital education in 2024 and that they are offered well-functioning digital infrastructure and tools to conduct high-quality digital education. Furthermore, students think that the education at KTH uses digital opportunities in a good way and that the education is of high quality. Both groups also agree in general that KTH should continue developing digital education.

    Students report that they are very familiar with AI chatbots and often use these tools for their studies. Teachers are less familiar: 25% report that they use AI chatbots weekly or more often in their role as a teacher, whereas 37% have never used one. The wide availability of AI chatbots has also led many students to change their study habits, and about two-thirds of the teachers have made some changes to their teaching practice. For teachers, the most commonly perceived usefulness and reported use of AI chatbots was help with creating materials for teaching or examination, whilst students perceive them as useful, and use them in their studies, for posing questions, finding information, and summarizing text. Students think that AI chatbots increase cheating, but few report that they have used AI chatbots in a way they were not allowed to. About one quarter of the teachers reported that they have made moderate or significant changes to their assessment practices.

    The results from the part focused on digital assessment showed that slightly more than half of teachers (54%) had created at least one digital assessment during 2024, while 62% of students had taken at least one digital exam. The main reasons for using digital assessment were the ability to provide timely, frequent feedback and to save time and resources. Among students who prefer digital assessment, 63% cited quick feedback, and many also found digital exams easier to complete and submit. Students who preferred paper-based exams emphasized feeling more comfortable (66%) and better able to focus (61%). Teachers who used digital assessment reported technical problems during exams as their main challenge, a concern echoed by students, who worried about log-in issues, lost internet connections, and needing to troubleshoot on their own.

    Download full text (pdf)
    KTH_Internal_report_on_the_2024_E-learning_survey
  • Public defence: 2026-02-12 09:00 Kollegiesalen, Stockholm
    Perez-Ramirez, Daniel Felipe
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. RISE Research Institutes of Sweden AB.
    Machine Learning-Driven Optimization in Networked Systems: Leveraging Graph Neural Networks to Solve Resource Allocation Problems, 2026. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Modern computer and communication networks are evolving toward unprecedented scale and heterogeneity, driven by advances in Internet of Things (IoT), cloud/edge computing, and 6G. Managing these networks efficiently requires solving large-scale combinatorial optimization problems (COPs) under application-level constraints. Traditional heuristic approaches, while practical, often exhibit poor scalability, leading to sub-optimal resource utilization. This dissertation explores how machine learning, in particular graph representation learning, can automate and scale the process of solving such COPs in networking. We first survey the foundations of learning combinatorial optimization on graphs, identifying key opportunities where Graph Neural Networks (GNNs) can outperform handcrafted heuristics in terms of scalability and adaptability for the networking domain. We then introduce DeepGANTT, a self-attention GNN-based scheduler for IoT networks augmented with battery-free backscatter tags. Trained on optimal schedules derived from small network instances (up to 10 nodes), DeepGANTT generalizes to larger networks up to 60 nodes without retraining, achieving near-optimal performance within 3% of the optimum and reducing energy and spectrum utilization by up to 50% compared to the best-performing heuristic. We further improve the generalization to larger problem instances with RobustGANTT, a next-generation GNN scheduler that integrates improved graph positional encodings and further multi-head attention mechanisms. RobustGANTT demonstrates consistent generalization across independent training rounds and scales to networks 100× larger than training topology sizes, computing schedules for up to 1000 IoT nodes and hundreds of sensor tags. It achieves up to 2× energy and spectrum savings and a 3.3× reduction in runtime over DeepGANTT, with polynomial-time complexity enabling responsiveness to network dynamics. 
Beyond performance gains, we offer empirical and theoretical insights into the stability and generalization behavior of learning-based scheduling. By uniting graph-based combinatorial optimization with deep learning, this dissertation advances sustainable and adaptive network management, paving the way for energy-efficient networked systems.
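
The neighborhood-aggregation step that underlies any GNN scheduler can be sketched in a few lines. The toy message-passing round below, on a hypothetical four-node topology, is only the generic building block; the DeepGANTT/RobustGANTT architectures add self-attention and positional encodings on top of it.

```python
# Toy message-passing round: each node averages its neighbours' features
# and combines the result with its own state. Topology and features are
# hypothetical; this is not the DeepGANTT/RobustGANTT architecture.
adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}   # small IoT-like graph
feat = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [0.5, 0.5], 3: [1.0, 1.0]}

def message_pass(adj, feat):
    out = {}
    for v, nbrs in adj.items():
        # aggregate: mean of neighbour feature vectors
        agg = [sum(feat[u][k] for u in nbrs) / len(nbrs) for k in range(2)]
        # combine: average of own state and the aggregated message
        out[v] = [0.5 * (feat[v][k] + agg[k]) for k in range(2)]
    return out

h1 = message_pass(adj, feat)   # one round; stacking rounds widens the receptive field
```

Because the same aggregation is applied at every node regardless of graph size, a model trained on small instances can be evaluated on much larger topologies — the property the scheduler exploits for generalization.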

    Download full text (pdf)
    fulltext
  • Attari, Hesam
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Structural Engineering and Bridges.
    Design of reinforced concrete slabs: Python-based automation of reinforcement design and layout, 2026. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This thesis presents a computational framework for the automated design verification and reinforcement mapping of reinforced concrete (RC) slabs based on finite element method (FEM) results. Instead of performing new FEM analyses, the work focuses on post-processing nodal internal forces—primarily bending moments—exported from the commercial FEM software BRIGADE Plus. The main objective is to automate structural verification according to Eurocode 2 and to compute the required reinforcement areas continuously across the entire slab domain. A Python-based tool has been developed to evaluate both Ultimate Limit State (ULS) and Serviceability Limit State (SLS) conditions at each node of the FEM mesh. The program implements Eurocode 2 design equations, automatically determines the required reinforcement in two orthogonal directions, and visualizes the results as contour plots, providing intuitive reinforcement demand maps. This enables a transparent, detailed, and data-driven approach to slab design. The proposed methodology is validated through a case study comparing the computational results with a traditional design report for an existing bridge slab. The findings demonstrate that the developed tool effectively streamlines the design process, reduces material usage, and enhances both efficiency and clarity in the reinforcement layout, while fully complying with Eurocode design standards.
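
A sketch of the kind of per-node ULS check such a tool might implement, using the Eurocode 2 rectangular stress block for a singly reinforced section. The section dimensions, material classes, and the 0.295 cap on the relative moment are illustrative assumptions, not taken from the thesis or from BRIGADE Plus output.

```python
import math

def required_As(M_Ed, b, d, f_ck, f_yk, gamma_c=1.5, gamma_s=1.15):
    """Required tensile reinforcement [m^2] for a rectangular section,
    using the EC2 rectangular stress block (no compression steel)."""
    f_cd = f_ck / gamma_c            # design concrete strength [Pa]
    f_yd = f_yk / gamma_s            # design steel strength [Pa]
    mu = M_Ed / (b * d**2 * f_cd)    # relative moment
    if mu > 0.295:                   # roughly the singly-reinforced limit
        raise ValueError("compression reinforcement needed")
    omega = 1.0 - math.sqrt(1.0 - 2.0 * mu)   # mechanical reinforcement ratio
    return omega * b * d * f_cd / f_yd

# Hypothetical node: M_Ed = 150 kNm per metre strip, b = 1 m, d = 0.25 m,
# C30/37 concrete, B500 steel
As = required_As(150e3, 1.0, 0.25, 30e6, 500e6)   # ~1.5e-3 m^2/m
```

Running this at every mesh node, in both orthogonal directions, yields exactly the kind of continuous reinforcement demand field the abstract describes plotting as contours.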


    Download full text (pdf)
    fulltext
  • Ros, Wilhelm
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems.
    Development of an Invasive Pulse-Wave Analysis Algorithm in Patients with Chronic Coronary Syndrome Undergoing Coronary Angiography, 2025. Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Over 40,000 coronary angiographies are performed annually in Sweden, encompassing patients with acute presentations and chronic coronary syndrome (CCS), a classification that includes stable coronary artery disease and microvascular dysfunction. During the invasive procedure, high-quality pressure waveforms are recorded at the aortic root. However, much of the valuable information contained within the pressure waveform remains unused. To address this, a derivative-based algorithm was developed for automated identification of five key time points within invasive arterial waveforms. The algorithm was evaluated using aortic pressure data from 394 CCS patients undergoing angiography as part of the Coronary Thermo-dilution Derived Flow-indices in Chronic Coronary Syndrome study (NCT06306066), and validated against 796 physician-annotated cardiac cycles. The algorithm demonstrated near-sample-level accuracy, achieving RMSE of 0.01-0.02 seconds for time-point detection, comparable to the inherent variability of manual annotation. Further, a two-stage cross-correlation workflow enabled the selection of representative cardiac cycles and classification of patients into distinct waveform response groups based on their hemodynamic changes during adenosine-induced hyperemia. This tool aids in identifying representative waveforms and their key time points, enabling the extraction of waveform-derived features. These features can then be compared with established physiological indices and clinical outcomes.
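
A derivative-based detector of this kind can be illustrated on a synthetic pulse: the maximum of dP/dt marks the systolic upstroke, and the waveform maximum marks the systolic peak. The waveform shape, sampling rate, and the two time points below are simplified assumptions; the study's actual algorithm identifies five time points on measured aortic pressure.

```python
import math

# Synthetic, idealised pressure pulse (Gaussian bump on a baseline);
# real aortic waveforms are more complex than this sketch.
fs = 250.0                                   # sampling rate [Hz] (assumed)
t = [i / fs for i in range(int(fs))]         # one second of samples
p = [80.0 + 40.0 * math.exp(-((x - 0.3) / 0.08)**2) for x in t]

# Forward-difference derivative dP/dt [mmHg/s]
dpdt = [(p[i + 1] - p[i]) * fs for i in range(len(p) - 1)]

i_up = max(range(len(dpdt)), key=lambda i: dpdt[i])   # steepest upstroke
i_peak = max(range(len(p)), key=lambda i: p[i])       # systolic peak
```

On measured signals, the same idea is applied after smoothing, which is why near-sample-level agreement with manual annotation (RMSE of one or two samples at typical sampling rates) is a natural accuracy floor.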

    Download full text (pdf)
    fulltext
  • Soulas, Rémi
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Building Materials.
    Raw Earth as a building material from an LCA point of view, 2026. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Material choice during the design and construction phases is one of the key levers for reducing the high environmental impact associated with buildings over their life cycle. In this context, natural materials can be introduced to decrease the use of traditional construction materials, such as cement-based products, and to reduce the climate impact.

    The aim of this study is to apply a hands-on life cycle assessment (LCA) method to measure the climate impact of using earthen elements in a building’s structural parts. This material, extracted directly from the earth’s surface, is widely used for dwellings in warm climate areas all around the globe. Overcoming preconceived ideas and mental biases, this study sets a frame for advised building methods to align the use of raw earth with low-carbon construction.

    The study is based on data from two actual building projects, in France and Saudi Arabia. Local context is used in combination with LCA standards and EPD data on construction materials and their emission of greenhouse gases (GHG) contributing to climate impact (GWP, Global Warming Potential). In the context of a full structure LCA, functional units are chosen accordingly, which consider load-bearing capacity and structure characteristics. Different scenarios of structures based on the material used and the architectural demand are developed for each project to allow a fair comparison and analysis. Simplified life-cycle inventories are used to allow for full control of the parameters used for the construction design and results traceability.

    The calculated climate impact points to transport and additives in the earth material as the dominant contributors to the total emissions of the building element. Building on this first result, optimization of these significant parameters shows relatively low emissions when raw earth materials are used without cement addition, compared to traditional concrete construction. Secondly, for complex earthen structures the same result, a low associated climate impact, is achieved by applying an eco-building process approach. The result is a practical example showing that the use of natural materials like raw earth has potential for eco-friendly building.

    Among the limits of this study, it is worth mentioning the focus on climate impact without consideration of the other environmental impact categories, and the fact that the actual lifespan of earthen structures may differ from that of traditional materials. These aspects might be assessed in future studies.

    Download full text (pdf)
    fulltext
  • Public defence: 2026-02-18 09:00 F3, Stockholm
    Ghanadi, Mehdi
    KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Material and Structural Mechanics.
    Fatigue Design of Lightweight Welded Structures – Some Aspects of Size Effect, 2026. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Lightweight construction is of great value in a number of industrial sectors, as reduced component thickness contributes to improved performance and increased load capacity. High-strength steels (HSS) enable lightweight, high-performance structures through their enhanced static and fatigue strength. The application of these steels is significantly affected by manufacturing processes, such as welding. In welded structures, fatigue failures are typically initiated at the welded joint, which is often the weakest link due to its lower fatigue strength compared with the base material.

    The size effect is defined in relation to the dimensions of the main plate, weld, and attachment. The fatigue strength of welded joints decreases with increasing plate thickness, a phenomenon known as the thickness effect. Conversely, reducing the thickness of welded structures under fatigue loading can improve fatigue strength, which is referred to as the thinness effect.

    In the current thesis, the size effect on the fatigue strength of welded joints has been investigated. The assessment relies on fatigue test data gathered from the literature and from experiments carried out in this study, together with finite element modelling. The Effective Notch Stress (ENS) method, a local fatigue assessment approach that relies primarily on peak stress and stress concentration values, has been investigated for welded joints. Furthermore, probabilistic fatigue failure in welded joints is analysed from the stress distribution within the joint using a weakest-link modelling approach.

    Weld quality has a critical impact on the fatigue performance of welded structures, as higher-quality welds reduce stress concentrations and consequently extend fatigue life. A detailed analysis of weld profile data provides valuable insights into the sources of uncertainty and variability in fatigue behaviour, as well as the relationship between weld geometry and fatigue strength. In the present study, variations in key weld geometry parameters, obtained from measurements, are thoroughly examined to better understand how these geometric differences influence fatigue performance and reliability. Such analyses are essential for developing strategies to improve weld quality, enhance fatigue resistance, and ensure more predictable structural behaviour under cyclic loading.

    The findings from the work performed during this thesis will contribute to the development of structural reliability assessment methods for the fatigue life of welded joints in relation to size effects. These methods will support industry in designing lighter, high-performance products with a lower carbon footprint, ultimately contributing to more sustainable welded structures. The findings indicate a pronounced thinness effect, where thinner plates exhibit higher fatigue strength. In addition, improved weld quality, reflected in larger weld toe radii and angles, contributes to extending fatigue life.
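
The weakest-link idea can be made concrete with the standard Weibull formulation, where the failure probability integrates the stress field over the volume. The stress values, Weibull modulus, and volumes below are illustrative assumptions; enlarging the highly stressed volume at fixed stress raises the failure probability, which is the statistical root of the thickness effect.

```python
import math

def weakest_link_Pf(stresses, dV, sigma0, m, V0=1.0):
    """Weibull weakest-link failure probability from a discretized
    stress field: Pf = 1 - exp(-sum((s/sigma0)^m * dV/V0)).
    All parameters here are illustrative, not fitted to test data."""
    s = sum((max(si, 0.0) / sigma0) ** m * dV / V0 for si in stresses)
    return 1.0 - math.exp(-s)

# Two joints at the same peak stress but with different highly stressed
# volumes (e.g. a thin vs a thick plate); units are arbitrary.
small = [200.0] * 10     # stress samples over a small region
large = [200.0] * 100    # same stress level over 10x the volume
Pf_small = weakest_link_Pf(small, dV=1e-3, sigma0=400.0, m=10)
Pf_large = weakest_link_Pf(large, dV=1e-3, sigma0=400.0, m=10)
```

At low failure probabilities, Pf scales almost linearly with the stressed volume, so the ten-fold larger joint is roughly ten times as likely to fail at the same stress level.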

    Download full text (pdf)
    fulltext
  • Guardia Calsina, Pol
    KTH, School of Industrial Engineering and Management (ITM), Energy Technology.
    Navigating the future of industrial heat: Modelling tipping points for decarbonisation in light industries, 2025. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Industrial heat decarbonisation faces technical and competitiveness challenges. However, opportunities are particularly attainable within light industries, where temperatures are predominantly below 200 °C. Realising these opportunities requires assessing the cost-competitiveness of heat produced from low-emission technologies relative to current market benchmarks (i.e. coal and gas boilers) and identifying actions to address barriers to their competitiveness. This thesis responds to these needs by conducting a techno-economic analysis that combines the Levelised Cost of Heat (LCOH) framework with a linear programming optimisation model that determines optimal hybrid Power-to-Heat solutions under different sensitivities, based on hourly day-ahead electricity prices. Results show that energy input costs are a central factor in the competitiveness of technologies, and the advantage of fossil fuels over other energy inputs remains a significant barrier to the uptake of low-emission options. For electricity-based solutions, this barrier can be mitigated through the adoption of heat pumps or by hybridising electric boilers with gas boilers. High capital expenditure, particularly for heat pumps, geothermal systems, and certain hybrid configurations, emerges as another major obstacle. A third barrier is the absence or limited uptake of flexibility incentives, as the results indicate that hybridising electric boilers with thermal energy storage and gas boilers can deliver substantial improvements in competitiveness when such incentives are available. Targeted actions to address each of these barriers are discussed.
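
The LCOH comparison at the heart of such an analysis reduces to annualised capital plus fixed costs per unit of heat, plus the energy input cost divided by conversion efficiency. The sketch below compares a gas boiler with a heat pump; every number (CAPEX, prices, COP, demand, discount rate) is an illustrative assumption, not a result from the thesis.

```python
def crf(rate, years):
    """Capital recovery factor annualising an upfront investment."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoh(capex, fom, energy_price, eff, heat_mwh, rate=0.07, years=20):
    """Levelised cost of heat [EUR/MWh_th]: annualised CAPEX plus fixed
    O&M per unit heat, plus input energy price over efficiency
    (efficiency = COP for heat pumps)."""
    return (capex * crf(rate, years) + fom) / heat_mwh + energy_price / eff

heat = 8000.0  # MWh_th/year demand (assumed)
gas_boiler = lcoh(capex=400e3, fom=8e3,  energy_price=40.0, eff=0.9, heat_mwh=heat)
heat_pump  = lcoh(capex=2.4e6, fom=40e3, energy_price=80.0, eff=3.0, heat_mwh=heat)
```

Under these placeholder inputs the gas boiler stays cheaper, reproducing the competitiveness barrier the abstract describes; a higher gas or carbon price, or cheaper electricity, shifts the tipping point.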

    Download full text (pdf)
    fulltext
  • Lövström, Frida
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Urban and Regional Studies.
    Supercykelvägar som identitetsskapande element och beteendeförändrande åtgärd: Gestaltning för enhetlig och tydlig cykelinfrastruktur [Cycle superhighways as identity-creating elements and behaviour-changing measures: Design for uniform and clear cycle infrastructure], 2026. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Cycling for transport has the potential to be a part of sustainable cities and societies by contributing to sustainable transport and promoting physical activity. To increase cycling for transport planners are developing certain cycle paths into cycle superhighways. This development involves upgrading the standard of cycle paths and visualizing them through uniform design for signage and road markings. The effect of such a design is not yet fully established and documented in research. This degree project examines cycle superhighways as identity-shaping elements and as instruments for behavior change. A theoretical framework grounded in environmental psychology is applied. The framework's main components consist of a social-psychological model, the Transtheoretical Model of Change, combined with theory from Lynch, The Image of the City, and wayfinding theory from Mollerup. The project includes a literature and document review, two focus group interviews with participants with varying habits of cycling for transport, as well as expert interviews with planners who work with cycle superhighways.

    The project shows that concepts similar to cycle superhighways are referred to by numerous terms with similar definitions. These definitions often use abstract phrasing, such as attractive, safe, and identity-shaping, without clarifying what these statements mean in practice concerning implementation and design. The focus group participants expressed a desire to know what to expect from a cycle superhighway. The participants emphasize the importance of a design that represents a clear implementation standard, highlighting the need for cycle superhighways to be uniform and clear. The interviewed planners generally agree on the overall objectives of cycle superhighways. Their shared goals are for the cycle superhighways to increase cycling and make it easy and clear to cycle, and for the design to appeal to a broader audience than current cyclists. However, the planners differ somewhat in their priorities. Some place greater emphasis on design uniformity, others on dimensions and safety, and others on creating a mental map of the cycle network on a local scale.

    The project concludes that a widely recognized and clearly defined term benefits both current and potential users of cycle superhighways. The conceptual design of cycle superhighways has the potential to strengthen their identity by signaling a reliable standard. Establishing such an identity is challenging. The degree of uniformity at the municipal, regional, or national level affects the strength of that identity for different groups of cyclists. The level of uniformity depends on whether the primary focus is on establishing a city as a cycling city or on building an identity for cycle paths of the highest quality.

    Download full text (pdf)
    fulltext
  • Duy Trinh, Hung
    KTH, School of Industrial Engineering and Management (ITM), Industrial Economics and Management (Dept.).
    Syntax, Initials, Braces, Superintelligence, Localizations, Sheets, Theoretical, Experimental and Autonomous Economics, 2026. Manuscript (preprint) (Other academic)
    Abstract [en]

    In this paper, we provide the axioms of TTG-brace theory theory. Then, we definitionally extend TTG-brace theory theory to localization theory to form a limited complement to strictly model interpreted methods in order to model-contingently formulate localizations to present hypotheses, and to heuristically search for parts of solutions to selected problems in superintelligence theory and theoretical economics. In superintelligence theory, we provide a working definition of superintelligence SI. Then, we define A-variants of learning before we define braces on the algorithm, abstract modulator A, which serves as a conceptual foundation F(A,O) to the algorithm, modulator Ô. Afterwards, we utilize the modulator O and the codefined modulator Ô to model-contingently formulate localizations, where we hypothesize what braces are on superintelligence SI and the sheathed path to an autonomous economy SP|EA|. Specifically, we hypothesize that some codefined modulator Ô is superintelligent, where superintelligence by definition is a precondition for an autonomous economy. In localization theory, we introduce the notions of variant and concrete verification or falsification to handle partial accuracies and inaccuracies of software specification. Following localization theory, we establish the TTG-brace-theoretic foundation for theoretical economics with sheet theory, where we introduce several braces of importance including the multicompany C|SM|, multieconomy EM, autodrained mint-based financing Fa,m|X| and the Automatic ATU, where we hypothesize that Fa,m|X| can be coutilized to obtain a legally-bounded instantaneous growth rate of the intercurrent-based gross domestic automation GDAI. We establish sheet complexity theory such that a subfield of economics concerns itself with brace identification. We extend experimental economics with localizations and emphasize economic engineering to eliminate renated costs r|X,Y,F|'s. 
Finally, conditioned on an Automatic ATU we want to coutilize autodrained mint-based financing Fa,m|X| toward solving the autonomous economy localization ACL.

    Download full text (pdf)
    fulltext
  • Bao, Yuhan
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Building Materials.
    Experimental Evaluation of Birch Rod Reinforcement for Improving Compression Strength Perpendicular to the Grain: Within the Context of Stress Laminated Timber (SLT) Decks, 2026. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Compression perpendicular to the grain (CPG) is a critical limitation in timber structures, particularly in stress-laminated timber (SLT) decks, where high transverse stresses near anchor zones can compromise long-term performance. This thesis investigates the use of birch rods as a sustainable, timber-compatible reinforcement method to improve CPG resistance. A series of experimental tests were conducted using C24 softwood specimens reinforced with birch rods of varying diameters (10 mm and 20 mm), embedded in one- and two-layer configurations, with both partial and full engagement. Results demonstrated that birch rod reinforcement can significantly enhance yield capacity compared to unreinforced samples, with fully engaged rods offering the most consistent improvements. The influence of rod diameter, reinforcement depth, and engagement ratio on mechanical performance was systematically analyzed. Findings highlight the potential of rod-based reinforcement to mitigate local crushing and improve prestress retention in SLT systems. This study contributes valuable data for optimizing reinforcement design and supports the integration of biodegradable materials into advanced timber construction.


    Download full text (pdf)
    fulltext
  • Zhang, Xindan
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Building Materials.
    Size effect on the bonding strength between spruce glulam and birch plywood, 2026. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Wood has recently emerged as a primary alternative to traditional building materials such as concrete and steel, improving the sustainability of modern building construction. In timber engineering, the performance of connections plays an important role. Although adhesive bonding provides excellent load-bearing capacity, its application is limited by strict environmental requirements. As a result, mechanical connections are more popular than traditional carpentry joints in modern construction. Investigating the size effect on the bonding strength of adhesively bonded timber joints is therefore necessary, because it can contribute to the progress of adhesive bonding in practical applications. Birch plywood and spruce glulam were used for the experiments because both are known for high mechanical strength. The aim of this study was to examine how bonding strength changes with increasing bonded area. Furthermore, three different face grain angles and adhesive types were tested separately to compare the results and to investigate the individual effects of bonded length and width on bonding strength. In this study, 118 specimens were prepared with three different types of adhesives (PRF, epoxy and 2C-PUR) and tested at three face grain angles (0°, 45°, and 90°), using modified longitudinal compression shear equipment to analyse the effect of bonded area, length, and width on bonding strength. The results show that bonding strength is influenced by adhesive type and bonded area. 2C-PUR showed the highest bonding strength, followed by PRF and epoxy, which correlates closely with the shear moduli of the adhesives. Additionally, overall bonding strength decreased as bonded area increased, especially at 45°. Moreover, bonding strength was mainly influenced by bonded length rather than width, because of uneven shear distribution and defects.

    Download full text (pdf)
    fulltext
  • Törnqvist, Gustav
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Building Materials.
    Hammarström, Martin
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Building Materials.
    Preservative-treated plywood: Effects on mechanical properties, preservative uptake and dimensional stability, 2026. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Plywood is a widely used engineered wood product known for its strength, dimensional stability, and versatility. Birch plywood, in particular, has shown promising structural properties, making it a potential alternative to traditional materials in demanding applications. However, its use outdoors is limited by vulnerability to moisture and biological degradation. This thesis investigates the effects of waterborne (Tanalith) and oilborne (Tanasote) preservative treatments on birch and spruce plywood, focusing on retention and penetration of wood preservative, dimensional stability, and mechanical performance. The aim is to assess the feasibility of these treatments for structural use and to identify critical factors influencing treatment effectiveness. Thirteen full-size plywood boards (ten birch and three spruce) were prepared, cut into various sizes, and treated using industrial-scale pressure impregnation. A full-cell process was used for the waterborne Tanalith treatment and an empty-cell process for the oilborne Tanasote. After treatment, samples were dried under controlled conditions and analyzed for retention, penetration depth, and mechanical performance. Retention was calculated based on the mass increase after treatment in relation to the preservative concentration. Penetration depth was assessed using a copper reagent that caused a visible color change, showing how far the preservative had reached into the plywood. Mechanical performance was evaluated using four-point bending, compression, and Brinell hardness tests. Retention increased as panel size decreased, with smaller specimens absorbing more solution per unit volume. Spruce plywood exhibited higher retention than birch, potentially due to differences in veneer thickness and in the number and composition of glue lines. While Tanasote-treated boards showed higher apparent retention, these values are not directly comparable, as Tanasote was applied in undiluted delivery form whereas Tanalith was applied as a 4.3% water solution. Tanalith-treated boards displayed more visible deformation, including warping and veneer cracking, especially in spruce, while Tanasote-treated boards remained dimensionally stable. In mechanical testing, Tanalith-treated birch showed around 30% loss in compression and bending strength, while Tanasote-treated samples retained most of their strength and exhibited higher surface hardness. In conclusion, panel size had a strong effect on preservative retention, with smaller samples achieving sufficient retention levels, likely due to their higher edge-to-volume ratio. Tanasote-treated specimens generally maintained better dimensional stability and integrity, suggesting that oilborne preservatives may be more suitable for structural plywood in exterior environments. However, further studies are needed to assess long-term performance and ensure treatment consistency.

    Download full text (pdf)
    fulltext
  • Coucher, Isac
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Building Materials.
    Sihlén, Lucas
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Building Materials.
    Mechanical characterization of wooden dowels with emphasis on shear strength: An experimental study on birch and beech dowels2026Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The most common way to join timber members in structural applications is with steel fasteners and plates, which offers many advantages. One of the disadvantages is the environmental impact, as steel production contributes significantly to emissions. An alternative solution would be to substitute the steel fasteners with wooden dowels. The next generation of Eurocode 5 (prEN 1995-1-1, 2023) proposes design equations for joints with wooden dowels.

    This study investigates the mechanical properties of hardwood dowels made of birch and beech. The mechanical properties were evaluated by assessing the stiffness of the dowels experimentally in a modal analysis. Then, the shear strength of the dowels was evaluated experimentally by performing static tests in a setup made of aluminum with two shear planes. In total, 60 birch dowels and 30 beech dowels were tested in different configurations. The results showed that the highest strength was in the radial direction, as expected.

    The results also showed that the beech dowels possessed higher strength than the birch dowels, and that the yield stress of the dowels did not depend on the dowel diameter. It was also found that the shear span that simulates the plastic hinges that form in the embedment material, denoted Lfree, had a rather large impact on the material behaviour and the yield strength of the dowels: the strength decreased as Lfree increased. Comparison with previous studies that performed similar tests with wood as embedment showed that Lfree = d is usually the length of the plastic hinges, and that the results using Lfree = d were similar regardless of embedment material.

    The correlations between the shear strength and the axial modulus of elasticity EAX, as well as the bending moduli of elasticity around the radial and tangential axes, EBR and EBT, were generally above 50%, although some outliers affected the correlations, making it difficult to draw conclusions. It was evident that density is a good predictor of strength, as the correlation was high and consistent.

    The design equations in the next generation of Eurocode 5 (prEN 1995-1-1, 2023) were compared with the experimental results, and it was concluded that prEN 1995-1-1 (2023) yields conservative values. This can be explained by the fact that the actual failure mode in the dowels is a combination of shear and bending, which seems to be disregarded in prEN 1995-1-1 (2023).

     

    Download full text (pdf)
    fulltext
  • Melander, Casper
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Kiefer, Nils
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Universal Jig for Safe Handling of a CubeSat2025Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    In recent years, CubeSats have emerged as a cost-effective and flexible platform for space missions, enabling universities and small organizations to contribute to space exploration. However, testing and handling procedures for CubeSats often require multiple, and frequently expensive, specialized jigs. This paper presents the design and development of a universal jig tailored for a 3U CubeSat, in collaboration with the MIST (MIniature STudent satellite) project at KTH Royal Institute of Technology. The proposed jig is modular, low-cost, and compliant with the CubeSat Design Specification (CDS), allowing for assembly, testing, transportation, and launch integration without the need for multiple support structures. Material selection, outgassing behavior, magnetic susceptibility, and mechanical stability were key factors guiding the design. The final product offers a versatile and scalable solution for CubeSat handling and could serve as a future standard for CubeSats.

    Download full text (pdf)
    fulltext
  • Bölja, Felicia
    KTH, School of Industrial Engineering and Management (ITM).
    Trangia Duo: The Development of a Portable Accessory for Creative Outdoor Cooking2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This master’s thesis, completed within Industrial Design Engineering at KTH Royal Institute of Technology in Stockholm, was carried out in close collaboration with Trangia. Trangia has identified a trend in outdoor recreation among individuals who spend time in nature but do not necessarily hike long distances. This group seeks to prepare more advanced and creative meals outdoors. This thesis aims to investigate how Trangia could expand their product range to serve this emerging target group. This thesis applies a user-centred design approach to develop a solution that meets the needs of these potential customers.

    This thesis describes the development of Trangia Duo, a new Trangia stove for outdoor enthusiasts who enjoy shorter hikes and creative outdoor cooking. The final product, Trangia Duo, demonstrates Trangia’s potential to enter the creative outdoor cooking segment. The new product offers a way to address emerging demand and meet the needs of a growing group interested in outdoor recreation and creative cooking. It is an ideal complement both for hikers and trekkers who want a more stationary version of their Trangia kitchen to use at their base camp, and for vanlifers and car-campers who cook near their vehicles. It is suitable for both beginners and experienced users.

    The concept is well developed in terms of usability and product performance and is ready for a functional prototype to evaluate performance before the next iteration.

    The project used a user-centered approach and the Double Diamond framework. The process demonstrated the value of combining user insights with iterative concept development. Literature reviews, user studies, and market analysis established a foundation that defined the target group and market position. Sketching, prototyping, and continuous evaluation supported the development and refinement of the final concept.

    Download full text (pdf)
    fulltext
  • Bendes, Annika
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Protein Science.
    Applications of multiplexed immunoassays for precision medicine2026Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Proteins are molecules that play central roles in almost all biological processes. Their abundance in cells, tissues, and body fluids is dynamic, reflecting both physiological states and disease-related changes. When studying proteins, a major challenge is distinguishing normal biological variation from alterations that indicate early or ongoing disease. Proteomics, the measurement of hundreds of proteins at the same time, deepens our understanding of how protein signatures relate to health and disease. This helps establish molecular measurements of so-called biomarkers that support precision medicine through earlier detection, better disease stratification, and more individualized treatment strategies.

    In the studies included in this thesis, we applied affinity proteomics techniques to investigate how levels of antibodies and proteins in blood samples relate to health and disease, and to expand our understanding of protein-protein interactions of drug targets.

    Although proteins can be measured in different sample types, blood offers a minimally invasive window into the body and allows measurement of molecules originating from many organs and biological processes. Home-sampled dried blood spots (DBS) have gained renewed interest due to the recent development of newer and more accurate sampling cards. In several studies included in this thesis, we demonstrate that DBS can be used for sampling in the general population without relying on or involving clinical facilities and healthcare resources. In Paper I, we established an analytical procedure for measuring antibodies against SARS-CoV-2 in home-sampled DBS. In Paper II, we expanded this effort to protein measurements and longitudinal sampling. In Paper III, we showed the importance of even more frequent DBS sampling for capturing the dynamic changes of inflammation-related proteins following infection. This demonstrated how these early changes in DBS protein levels can support the timing of clinical interventions. Together, these findings highlight the potential of DBS for remote and continuous health monitoring in precision health approaches.

    Proteins are also among the most common targets of therapeutic drugs. However, many proteins also interact with other proteins, and such complexes can critically influence how a drug binds to its target, its therapeutic efficacy, and the risk of side effects. In Paper IV, we established an affinity proteomics workflow for validating binding reagents, which we then applied in Paper V to investigate potential protein-protein interactions of membrane proteins. The gained insights can help improve our understanding of biologically relevant protein interactions, aiding the development of more selective and effective drug candidates.

    Overall, the studies presented in this thesis contribute valuable insights to the transition toward precision health by enabling scalable remote sampling and by deepening our understanding of protein interactions relevant to both normal physiology and disease.

    Download full text (pdf)
    fulltext
  • Gaspar, Diogo
    KTH, School of Electrical Engineering and Computer Science (EECS).
    DIRTY-WATERS-ACTION: Automated Feedback toward Cleaning Software Supply Chains2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Modern software development heavily relies on external dependencies managed through ecosystems such as Node Package Manager (NPM) and Maven. A dependency’s vulnerabilities can cascade through an entire Software Supply Chain (SSC), making dependencies attractive targets for attackers who exploit patterns, or smells, such as metadata omissions, deprecated packages, and missing provenance. While prior work introduced Dirty-Waters to detect such issues, it lacked broader ecosystem support, automated integration, and a large-scale evaluation of smell prevalence across relevant projects in these ecosystems. This thesis addresses these limitations by extending the tool’s functionality and applying it to some of Maven and NPM’s most widely depended-on packages. Additionally, we develop the Dirty-Waters GitHub Action, Dirty-Waters-Action, integrating SSC smell detection into continuous integration workflows, and assess its usability and impact through developer interviews and project evolution analysis. With these extensions, our tool now supports the Maven ecosystem alongside additional smell types. Empirical results reveal that SSC smells are prevalent across both the NPM and Maven ecosystems, with no analyzed project being entirely free of smells. Dirty-Waters-Action provides automated SSC smell feedback and has been found useful by project maintainers. Altogether, our work demonstrates that SSC smells are an important issue to tackle in modern software projects, and that integrating a tool to report them during development is a valuable mechanism for software teams to implement.

    Download full text (pdf)
    fulltext
  • Lesko, Johan
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Bi-static Sonar Parameter Estimation and Target Tracking with Folded Measurements: Frameworks for Monte Carlo Methods Given Folded Samples2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Dynamic range is a ubiquitous but often overlooked problem, with signal clipping being a fundamental performance bottleneck. Conventional hardware solutions such as automatic gain control and variable amplifiers adapt the input range but increase quantization noise for a fixed bit depth, risking that low-amplitude signals drown in the quantization noise. This challenge arises in the bi-static sonar setup, where the direct transmitter–receiver path can saturate the receiver, or where the weaker reflected signal may be buried in quantization noise. A promising alternative is the folding modulo ADC, which maps clipped signals back into the dynamic range. Using the unlimited sampling theorem, the original signal can be perfectly reconstructed from folded samples. Recent work has also derived likelihood functions for parameter estimation from modulo data, enabling probabilistic frameworks, such as Monte Carlo methods, to use folded samples. This study presents two frameworks for parameter estimation and target tracking from folded modulo samples using particle filter estimation. The first performs signal unfolding using the unlimited sampling theorem so that unfolded samples can be used to compute the likelihood in the particle filter. The second framework leverages the modulo likelihood function to work directly with folded samples, bypassing the need for signal unfolding. The performance of both frameworks was tested using simulations of a typical bi-static sonar scenario in the Baltic Sea. The results showed that the parameters of the bi-static sonar could be successfully estimated as long as the received samples were processed in blocks. A target moving with constant velocity could be adequately localized, but not tracked; the average estimate of the velocity showed better results. Overall, both frameworks showed very similar performance. With these results, this study has showcased the potential of using the modulo folding ADC to overcome the dynamic range issue when processing sonar signals. Additionally, the integration of the modulo likelihood in a probabilistic recursive framework to directly process folded samples has been examined and shows promising potential for further application.
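The folding operation at the heart of the modulo ADC can be sketched in a few lines. This is an illustrative toy, not code from the thesis: the function name `fold`, the threshold `lam`, and the centered-modulo convention are assumptions chosen for clarity.

```python
import numpy as np

# Toy sketch of modulo folding: samples outside the dynamic range
# [-lam, lam) are wrapped back inside instead of being clipped.
def fold(x, lam):
    """Centered modulo: map each sample of x into [-lam, lam)."""
    return np.mod(x + lam, 2 * lam) - lam

x = np.array([0.3, 1.7, -2.5, 4.1])  # some samples exceed the range
y = fold(x, lam=1.0)
# Every folded sample now lies inside [-1.0, 1.0); in-range samples
# such as 0.3 pass through unchanged.
print(y)
```

Recovering the original signal from such folded samples is exactly what the unlimited sampling theorem addresses, provided the signal is sampled densely enough.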

    Download full text (pdf)
    fulltext
  • Seseke, Linda Maria
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Disassembling the Problem: A Frame Creation-Based Strategy for Human-Centered Automation in ELV Recycling2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    End-of-life vehicle recycling faces systemic obstacles: fragmented dismantling practices, economic pressures, and regulatory misalignments undermine circular material flows. Automation is often reduced to task substitution, yet its potential lies in serving as a coordination platform that integrates human expertise, technology, and institutional frameworks. This thesis applies Dorst's Frame Creation methodology, combining literature analysis with 27 expert interviews across dismantlers, recyclers, OEMs, suppliers, and regulators. The study identifies eight recurring themes, ranging from market barriers to information gaps, summarized in the overarching construct of “systemic coordination failure.” Building on this diagnosis, the thesis develops strategic frames that reconceptualize automation as boundary infrastructure. Rather than replacing labor, robotic systems and human integration are positioned to align economic drivers, embodied work practices, and regulatory demands. The results demonstrate that scalable and sustainable vehicle disassembly requires automation embedded in broader coordination architectures, linking information, incentives, and human capabilities.

    Download full text (pdf)
    fulltext
  • Zhang, Fangpu
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Joint Sampling and Carrier Allocation for UAV–Satellite Video Transmission: Optimization and Reinforcement Learning Approaches2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Unmanned Aerial Vehicles (UAVs) deployed for rescue and search missions in remote regions, where terrestrial networks are unavailable, must rely on satellite connectivity to continuously upload video data. However, such satellite links face two major challenges: limited bandwidth availability and high uplink energy cost, both of which directly impact video quality and UAV battery life. As a key technology in 4G and 5G, carrier aggregation (CA) provides efficient services by assigning a varying number of component carriers (CCs) to users. However, activating all CCs simultaneously leads to excessive power consumption, while restricting CC usage risks buffer overflow and degraded Quality of Service (QoS). To address these challenges, we propose a joint component carrier assignment (CCA) and adaptive video sampling framework in which UAVs dynamically select active CCs and adjust their video sampling rate according to hotspot density detected in the sensing environment. This ensures that high-resolution video is transmitted in critical scenarios while conserving bandwidth and energy otherwise. We formulate the joint carrier–sampling problem as a multi-objective optimization that maximizes system throughput while minimizing communication power, subject to rate and buffer stability constraints. To handle the problem’s dynamic and combinatorial nature, we first develop a Mixed-Integer Linear Program (MILP) to establish a theoretical upper bound. We then develop a multi-agent Proximal Policy Optimization (PPO)-based Joint Sampling and Carrier Allocation (JSCA) algorithm to enable scalable online control. In this framework, each UAV is modeled as an autonomous agent that jointly determines CC activation and video sampling level based on local observations. This design enables efficient online learning of transmission policies that adapt to dynamic mission environments. 
Simulation results demonstrate that our proposed JSCA policy reduces UAV energy consumption by more than 45%, extends flight endurance by 6.3%, and achieves throughput within 10% of the MILP-based optimum scheme, while maintaining significantly lower complexity.

    Download full text (pdf)
    fulltext
  • Källström, Ivar
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Learning to Schedule with Graph Neural Networks and Reinforcement Learning: Investigation of Embeddings and Performance2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The increasing number of cores in commercial processors and the rising demand for high performance computing have made the problem of scheduling tasks in parallel increasingly relevant. Previous methods have focused on classic heuristics and linear programming, achieving varying levels of performance for the problem. The success of machine learning based methods for combinatorial optimization suggests an entirely learning based method might be utilized. This thesis investigates the possibility of using an end-to-end machine learning method, with a graph neural network based architecture and reinforcement learning training, to produce schedules for the problem of scheduling tasks in parallel with precedence constraints. We focus on a simple task model where precedence between tasks is encoded as a directed acyclic graph, and each subtask has an associated runtime. The results indicate that the method has learned structural properties of the graph, achieving performance comparable to classic methods for small test cases. However, a gap remains for larger test cases, with our method struggling to achieve parity with classic methods. An investigation of our graph representation module reveals that the produced node embeddings for tasks are highly explainable. Finally, we suggest further investigation of hybrid methods, more complex task models, and experiments with greater computational resources.

    Download full text (pdf)
    fulltext
  • Dexwik, Carolina
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Centering women: Designing for AI-assisted breast-cancer screening outside clinics2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This study asks how an online, AI-supported educational platform about breast cancer screening can help women in Sweden make informed decisions outside clinics. A sequential mixed-methods design combined an online survey (n = 64) with three co-design workshops (n = 6). The survey revealed gaps in breast-density knowledge, self-examination practice and AI literacy, alongside conditional trust in both healthcare and AI tools. Workshop participants rejected numerical representations of AI-generated risk scores, calling them alienating and anxiety-provoking, and they stressed that high-risk results delivered digitally should instead come from a clinician who can add empathy, context, and immediate support. Discussions also exposed a deeper gap: women lack personalised guidance to interpret everyday breast changes, leaving them unsure when to seek care. From these findings, the thesis proposes three design guidelines: (1) Contextualise numbers within lived experience, (2) Respect human moments by routing emotionally charged results to humans, and (3) Educate to advocate through interactive, body-literate content, extending Human-Centred AI principles to preventive and proactive women’s health.

    Download full text (pdf)
    fulltext
  • Wendel, Erik
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Improving Glucose Prediction: Using Wearable Device Data in Machine Learning Systems2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Managing blood glucose levels is a significant daily challenge for individuals with Type 1 Diabetes (T1D). Predicting blood glucose fluctuations using continuous glucose monitoring (CGM) data is essential for reducing complications. The data from these CGM devices are becoming more easily available, creating better opportunities to train machine learning models to predict glucose fluctuations and help patients manage their diabetes. This thesis investigates the potential for fully autonomous blood glucose prediction models without manual inputs. By analyzing how current machine learning models could benefit from incorporating additional contextual data, such as heart rate (HR) and time of day, collected from wearable devices, the study aims to give better insights into what can be done to improve performance in everyday scenarios. Three machine learning models were evaluated: a baseline LSTM model using CGM data, a similarly structured LSTM model that used heart rate in addition to CGM data, and a new CNN-LSTM using heart rate. The models were tested using data collected during everyday activities, and their performance was measured using Root Mean Squared Error (RMSE) and Clarke Error Grid (CEG) analysis to assess clinical relevance. The results indicated that the new CNN-LSTM model achieved higher prediction accuracy than both the baseline and the LSTM with added HR. Across entire everyday scenarios, the new model showed consistently lower RMSE values for all subjects without significantly increasing model size, thus remaining small enough for use on mobile devices. Some results were inconclusive when testing the models on data collected from specific activities recorded months later, indicating the need for further research on how well the models generalize over extended periods and whether another training approach could help maintain accuracy.
The study also shows that data availability plays a significant role in model performance, far outweighing the benefits of contextual data and model choices.

    Download full text (pdf)
    fulltext
  • Kitsidis, Christos
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Between Flesh and Circuit: Tracing the Mechanics of Intimacy in a Human-Machine Embrace during an Artistic-Scientific Performance2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Digital technologies are increasingly embedded in our bodies, routines, and emotions, shaping intimate relationships between humans and machines. While Human-Computer Interaction (HCI) research has explored intimacy primarily in care, healthcare, and efficiency-driven contexts, less attention has been given to aesthetic, performative, and nonfunctional encounters with technology. This thesis adopts a performance-led research approach, drawing on soma design and proxemics, to analyze a single case from Embrace Angels, an artistic-scientific performance/installation in which humans and robotic arms engage in a choreographed, multi-agent embrace. The analysis reveals how intentional slowness, negotiated meaning, and embodied awareness can foster deeper, more reflective forms of human-machine intimacy. By unpacking the mechanics of this encounter, the research sheds light on the deeper, more ethical impact of human-machine relationships and proposes new directions for designing meaningful, affective connections between humans and robots beyond efficiency-driven paradigms.

    Download full text (pdf)
    fulltext
  • Ledung, Markus
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Toward the automated migration of power grid control systems: an evaluation of ETL tools and a custom-tailored data migration pipeline2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis explores the possibilities of automating display and signal data migration in power grid systems using ETL tools and artificial intelligence. The core problem addressed is the time-consuming and error-prone manual process of migrating data from a legacy or third-party Network Control System (NCS) to Hitachi Energy’s Network Manager (NM) platform. The contribution of this thesis involves evaluating Pentaho Data Integration (PDI) and NCNC (Network Control Nerve Center) ETL tools, and assessing a computer vision-based AI model for replicating engineering displays from Single Line Diagrams (SLDs). The final results demonstrate that PDI is capable of migrating crucial parts of the database, and that AI-driven engineering display generation can effectively automate tasks that were previously performed manually. Combining both tools in one pipeline would allow future projects to reduce migration time, improve data integrity, and enhance customer satisfaction through more efficient deployment processes.

    Download full text (pdf)
    fulltext
  • Hyberg, Jonatan
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Important Training Data Factors When Finetuning Retrieval Augmented Generators2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Retrieval-Augmented Generation (RAG) systems have emerged as a powerful tool for answering questions, integrating retrieval mechanisms with generative models. However, these RAG systems still often suffer from hallucinations—instances where the generated output includes information unsupported by the retrieved context. One strategy used to limit hallucinations in RAGs is to fine-tune the generative models on new data. However, fine-tuning is not always a stable method, as imperfections in the training data can negatively affect the model. This thesis investigates how underlying factors in the training data affect hallucinations when fine-tuning RAG systems. It specifically examines the impact of three factors: no context, noise in the context, and incomplete answers in the training data. By studying these three factors, we aim to enhance the reliability and trustworthiness of RAG systems in real-world applications. To achieve this, we conducted an experiment in which we fine-tuned a generative model on each factor and compared the trained models to a baseline fine-tuned model. We evaluated the models using a comprehensive framework that assessed three aspects: factuality, faithfulness to the context, and the ability to refuse to answer impossible questions. Impossible questions are defined as questions where the retrieved context has no overlap with the question, making it impossible for the RAG system to answer given the context. Our evaluation revealed that having no contexts in the training data had a negative impact on both factuality and faithfulness compared to a model trained with contexts. This shows that fine-tuning can be used to train a RAG system to utilize the context better and reduce hallucinations.
    Furthermore, we found that noise in the context and incomplete answers in the training data had little to no effect on the RAG system's factuality and faithfulness, suggesting that fine-tuning is relatively robust to noise in the training data. However, fine-tuning overall had a severe negative impact on the RAG system's ability to decline to answer an impossible question: after training, the baseline model correctly declined to answer only 0.4% of the time, compared to the original untrained model, which was correct 99.3% of the time. Adding incomplete answers to the training data eliminated this negative effect; the model trained on incomplete answers correctly declined to answer 99.6% of the time, a substantial improvement from 0.4%, demonstrating that incomplete answers are important datapoints to add to the training set.
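    The three training-data factors studied here can be sketched as simple perturbations of a question/context/answer triple. The dict layout, field names, and the exact perturbations below are illustrative assumptions, not the thesis's actual data format:

    ```python
    import random

    def make_training_variants(question, context, answer, distractors):
        """Build one perturbed training example per studied factor.

        All field names and perturbation details are illustrative
        assumptions, not the thesis's data format.
        """
        # Factor 1: no context -- the model must answer without evidence.
        no_context = {"question": question, "context": "", "answer": answer}
        # Factor 2: noise in context -- an irrelevant passage is appended.
        noisy_context = {"question": question,
                         "context": context + " " + random.choice(distractors),
                         "answer": answer}
        # Factor 3: incomplete answer -- the context cannot answer the
        # question, so the training target is an explicit refusal.
        incomplete = {"question": question,
                      "context": random.choice(distractors),
                      "answer": "The context does not contain the answer."}
        return no_context, noisy_context, incomplete
    ```

    Fine-tuning on the third variant is what, per the results above, restores the model's ability to decline impossible questions.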

    Download full text (pdf)
    fulltext
  • Edvardsson, Oskar
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Preference-based generation of custom-length round trip routes for running2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    An estimated 50 million people in Europe engage in running. Most of them usually stick to a few familiar routes when they go out for a run. However, greater variety in training routes can have a positive effect on both motivation and injury prevention. In this thesis, we look at the problem of generating running routes of a given length based on the runner's preferences, such as surface type and elevation. We prove that this problem is NP-hard; to solve it, we therefore propose two approximation algorithms and one heuristic algorithm. Experiments were carried out on graphs representing different kinds of real-life street networks in Sweden. The results showed that no single algorithm outperformed the others in all aspects: each had its pros and cons, and they seemed to complement each other. Hence, the overall conclusion of this thesis is that a combination of the algorithms is preferred for practical use. This could be achieved by choosing which algorithm to use based on the given problem instance.
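    To illustrate the heuristic flavor of such route generation (this is a naive sketch, not one of the thesis's three algorithms), an out-and-back strategy on a weighted street graph might look like:

    ```python
    import random

    def out_and_back_route(graph, start, target_len):
        """Naive out-and-back heuristic: walk random edges until roughly
        half the target length is covered, then retrace back to the start.

        `graph` is a dict-of-dicts {node: {neighbor: edge_length}}.
        Purely illustrative; it ignores preferences like surface and
        elevation that the thesis's algorithms account for.
        """
        path, length, node = [start], 0.0, start
        while length < target_len / 2:
            nxt, w = random.choice(list(graph[node].items()))
            path.append(nxt)
            length += w
            node = nxt
        # Retrace the outbound path to obtain a round trip.
        return path + path[-2::-1], 2 * length
    ```

    A real solver would instead search for loops, not retraced paths, and score candidates against the runner's preferences.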

    Download full text (pdf)
    fulltext
  • Park, Eugene
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Optimizing State Machine Replication with Quorum-Coordinated Disk Writes2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Disk I/O is a common bottleneck in consensus algorithms like Raft, which typically require a replica to persist log entries before acknowledgment. While this ensures durability, theory only mandates that a quorum of replicas perform this operation to maintain correctness.

    This thesis introduces quorum-coordinated disk writes, a technique that reduces redundant I/O by coordinating persistence across a quorum rather than all replicas. The thesis project implements this approach in CockroachDB, a distributed SQL database built on Raft, and evaluates potential performance benefits under write-heavy workloads, possible drawbacks during recovery, and long-term efficiency.

    Our results show that quorum-based persistence can improve throughput and latency without compromising system stability. Although recovery becomes more complex, the overall trade-offs suggest quorum coordination is a promising strategy for optimizing consensus protocols in performance-sensitive environments.
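    The core acknowledgment rule can be sketched in a few lines. This is a toy model of the idea; the thesis's CockroachDB implementation is considerably more involved:

    ```python
    def can_acknowledge(persisted_replicas, cluster_size):
        """Return True once a majority of replicas have fsynced the entry.

        Standard Raft deployments require every acking replica to have
        persisted the entry; under quorum-coordinated disk writes,
        durability only needs a majority quorum to persist, so the
        remaining replicas may acknowledge from memory.
        """
        quorum = cluster_size // 2 + 1
        return len(persisted_replicas) >= quorum
    ```

    For a 3-node cluster, the leader can thus acknowledge once 2 replicas report a durable write, even if the third only holds the entry in memory.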

    Download full text (pdf)
    fulltext
  • Söderberg, Eric
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Ethical Hacking of a Public Transport Mobile Ticket Service2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Public transport plays an important role in sustainable development globally and is often considered critical infrastructure. With billions of users in an increasingly digital world, many providers have also started to offer tickets for purchase directly on mobile phones. Despite this, there is currently little available research on cybersecurity in the public transport sector. To address this gap, this case study explored a number of potential cybersecurity vulnerabilities in a mobile ticketing application through white-box penetration testing, following threat modeling of the targeted system. In addition to presenting the discovered security concerns, mitigations and theoretical impact are discussed, as well as general security properties of the system. The study is the first of its kind to be conducted in collaboration with a public transport provider, in this case Trafikförvaltningen, which through Stockholms Lokaltrafik (SL) provides all public transport in Stockholm, Sweden. A total of seven vulnerabilities with a measurable impact and six other security discoveries were found during the study. The most severe vulnerabilities all related to impairing the availability of the service for other users. The study found that a number of commonly reported vulnerability categories among public transport providers, such as issuing fake tickets or extracting personal information, were precluded by a strictly server-side public-key cryptography implementation and by the storage of almost no personal information, respectively. The results presented in this study provide novel data to the currently scant cybersecurity research on public transport ticketing services. The findings help harden existing solutions and provide guidance for developers and other stakeholders of public transport providers.

    Download full text (pdf)
    fulltext
  • Hallkvist, Hampus
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Exploring DMA benefits over bounce-buffering in the context of the TDISP building block SPDM: A study of the maturity of a new confidential computing protocol and its practical adoption2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Confidential computing has gained significant traction since 2020. The goal of providing strong security on third-party hardware has been met in terms of both integrity and confidentiality, allowing a broader class of users, including those with stringent security requirements, to adopt cloud services. However, these security guarantees have been mostly limited to CPU processing via implementations such as AMD's Secure Encrypted Virtualization Secure Nested Paging (SEV-SNP) and Intel's Trust Domain Extensions (TDX). Existing peripheral interaction methods incur extra I/O operations through bounce buffers. This technique adds an intermediate memory space that all data must be copied into and out of, since mutual trust is not established. It is shown to be slow, although if applied correctly through existing methods such as AMD SEV-SNP, it can be safe. Through joint efforts, the TEE Device Interface Security Protocol (TDISP) has emerged as a standard to allow cohesive, secure, and fast communication for any PCI Express (PCIe) connected peripheral device. While TDISP is still in active development, emulated implementations have started to emerge, among them in the QEMU virtualization and emulation suite. This thesis examines the performance impact of a partial subset of TDISP, specifically the transport protocol Security Protocol and Data Model (SPDM) and its joint overhead with Direct Memory Access (DMA), compared to traditional bounce buffers. The study finds that the new protocol remains immature and impractical, yet shows great potential. DMA outperforms bounce buffers, but with a greater amortization cost than anticipated, making it suitable for large datasets with large block sizes or long-running connections.

    Download full text (pdf)
    fulltext
  • Guðmundsson, Oliver
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Parameter Optimization of PVD Tungsten Interconnects for CMOS Using Magnetron Sputtering with Focus on Film Stress2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Copper is widely used as the interconnect material in advanced CMOS technology due to its low resistivity. However, its applicability is limited in high-temperature environments, since copper exhibits poor resistance to electromigration and can diffuse into surrounding dielectrics. Tungsten, although having approximately twice the resistivity of copper, offers superior high-temperature stability, strong resistance to electromigration, and negligible diffusion through dielectrics. For this reason, tungsten is an attractive candidate for Complementary Metal-Oxide-Semiconductor (CMOS) technologies that must operate reliably above 250 °C. Using tungsten as an interconnect introduces its own challenges, however, the main one being the residual stress it introduces. This thesis investigates the deposition of tungsten interconnects for high-temperature CMOS applications via a Physical Vapor Deposition (PVD) technique, focusing on film stress and resistivity. The experiment was divided into two phases. The first phase focused on systematically varying the deposition parameters to understand their influence on film stress and sheet resistance. In addition to the deposition parameters, stack composition and film thickness were varied to explore their effects on film properties. Three stack compositions were investigated: Si/W, Si/SiO2/W, and Si/SiO2/TiW/W. The second phase focused on implementing a linear regression model based on the results from the first phase and performing a second deposition round using deposition parameters estimated to provide a near-stress-free film. Results showed that the chamber pressure has the biggest impact on the stress in the film: low pressure produces high compressive stress, while high pressure produces tensile stress. These findings contribute to a better understanding of stress management in tungsten interconnects, providing valuable insights into optimizing tungsten deposition.
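    The phase-two regression step can be illustrated with a one-variable least-squares fit of stress against chamber pressure. The data values below are hypothetical, and the thesis's model covers several deposition parameters, not just one:

    ```python
    def fit_line(xs, ys):
        """Ordinary least-squares fit of film stress (ys) against a single
        deposition parameter such as chamber pressure (xs).

        A one-variable sketch of the regression idea; hypothetical data.
        """
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        intercept = my - slope * mx
        return slope, intercept

    def zero_stress_pressure(slope, intercept):
        # Pressure at which the fitted line predicts zero residual stress
        # (compressive stress negative, tensile positive).
        return -intercept / slope
    ```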

    Download full text (pdf)
    fulltext
  • Blackwell, Niall
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Can You Hear The Music?: How music congruency and speech masking impacts sales and overall consumer experiences in an Irish pub setting.2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Pubs can often be at the centre of their communities, places where people go to meet, celebrate, and enjoy themselves outside of working life. The pub experience is influenced by many factors, including the company a person is with, the staff, the food and beverages, and a number of external factors including sights, smells, sounds, and general atmosphere. This study aims to understand what impact music congruency, incongruency, the absence of music, and speech masking in an Irish bar may have on sales and the overall customer experience. The study set up the four conditions of congruent music, incongruent music, speech masking, and no music in a real-life pub setting over the course of four separate Thursdays. Participants were asked to score their experience on a 5-point Likert scale based on whether they noticed the music, whether they felt it was a good fit, and whether it impacted their perception of the atmosphere and their stay in the pub. Participants were able to notice when the music was a good fit on the nights with congruent music, but there were some interesting responses to the no-music night, where 50% of respondents said that they heard music and that it was a good fit for the pub. None of the four music conditions played any role in how respondents perceived the atmosphere, nor did they have any influence on their stay over the course of the experiment, so the results suggest that there may be many other factors that impact the social and emotional behaviours of customers on their nights out.

    Download full text (pdf)
    fulltext
  • Sävås, Jonas
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Code Coverage for Java Dependencies2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    As software reuse continues to grow in prevalence in modern software development, external code is often integrated to efficiently implement required functionality. In the Java ecosystem, this practice is accelerated by repositories like Maven Central and build tools that automate the integration of external software packages. However, these outsourced packages, or dependencies, often include more functionality than necessary to support various use cases within their domain. The resulting unused code is a potential source of increased maintenance overhead and elevated security risks. Despite this, to our knowledge, no standalone tool currently evaluates the extent of dependency usage in Java projects.

    This thesis presents JACT, a tool to measure dependency usage in Java by leveraging code coverage to report usage of both the project and dependency code. This is achieved in two main steps. First, the project is built using Maven to produce an executable that contains both the project and dependency code. Second, the executable, together with the test suite's execution trace, enables the creation of the code coverage report, where JACT maps the coverage to dependencies and presents a structured overview of their usage.

    We evaluate JACT on 30 open-source Java projects to analyze dependency usage and assess its accuracy in mapping coverage information to dependencies. A comparison with the dependency debloating tool DepTrim provides insights into the strengths and limitations of code coverage in uncovering dependency usage. The results indicate that the dependencies are generally underutilized, with coverage increasing as alignment with project goals improves, while broader dependency feature sets lead to lower coverage. JACT accurately maps coverage to dependencies when Java package names are unique, but identical package names across dependencies introduce slight inaccuracies. Although JACT only captures coverage of executable code, it identifies additional used dependency class files compared to DepTrim, offering insights that could enhance precision in future debloating efforts in the Java ecosystem.
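    The coverage-to-dependency mapping idea described above can be sketched as a package-prefix lookup. The function name, data shapes, and example coordinates are assumptions for illustration, not JACT's actual API:

    ```python
    def map_coverage_to_dependencies(class_coverage, dep_packages):
        """Aggregate per-class coverage into per-dependency usage.

        `class_coverage` maps fully qualified Java class names to a
        coverage fraction; `dep_packages` maps a dependency name to its
        known package prefixes. Identical package names across
        dependencies would make this mapping ambiguous, mirroring the
        inaccuracy noted above.
        """
        usage = {dep: [] for dep in dep_packages}
        for cls, covered in class_coverage.items():
            pkg = cls.rsplit(".", 1)[0]  # strip the class name
            for dep, prefixes in dep_packages.items():
                if any(pkg == p or pkg.startswith(p + ".") for p in prefixes):
                    usage[dep].append(covered)
        return {dep: (sum(v) / len(v) if v else 0.0)
                for dep, v in usage.items()}
    ```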

    Download full text (pdf)
    fulltext
  • Hansson, Oscar
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Self supervised learning for semantic image segmentation in railway systems: A comparison of segmentation model pre-training methods2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In recent years, self-supervised learning (SSL) has emerged as a powerful alternative to supervised pre-training in deep learning, particularly in domains with limited labeled data and abundant unlabeled data. This thesis investigates the application of SSL for improving semantic segmentation of concrete railway sleepers, an important step toward improving railway infrastructure maintenance. The work is conducted in collaboration with Trafikverket and explores whether domain-specific SSL pre-training can outperform or complement traditional transfer learning approaches based on ImageNet. Two SSL methods are evaluated: the Simple framework for Contrastive Learning of visual Representations (SimCLR), which uses contrastive learning to create useful data representations, and Masked Autoencoders (MAE), which learn global semantic representations by reconstructing masked image patches. These encoders are pre-trained on large volumes of unlabeled sleeper images provided by Trafikverket, and the effectiveness of these models is evaluated through linear evaluation and segmentation performance on a downstream crack segmentation task using a UNet architecture. Results show that both SSL methods produce encoder weights that yield comparable performance to ImageNet-pretrained models, with MAE achieving the higher performance of the two in terms of Intersection over Union (IoU) and recall. The findings demonstrate that SSL offers a viable path toward reducing the dependency on labeled data while improving domain adaptation in real-world segmentation tasks.

    Download full text (pdf)
    fulltext
  • Fang, Yutong
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Enhancing Generative User Interfaces with LLMs: A User-driven Iterative Refinement Process2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Generative user interfaces can offer a personalized experience by adapting content and layout to individual preferences in real time. Recent advancements in large language models (LLMs) have demonstrated significant capabilities for dynamic and real-time user interface (UI) generation based on natural language prompts. However, existing solutions have primarily focused on user interface code generation for developers using large language models, while their practical usability and personalization capabilities for non-technical end users remain underexplored. This study investigates how users interact with a UI personalization system driven by OpenAI's GPT-4.1-nano model, integrated into a custom-built Android application, AdaptFit. This research aims to understand the user experience, the effectiveness of user involvement, the ease of user interface personalization, and the challenges users face in this UI personalization process. This study combines both quantitative and qualitative methods, including questionnaires, usability testing with six participants, and semi-structured interviews. Thematic analysis was applied to better understand user experiences, and user iteration behaviors were recorded to examine user satisfaction, prompt specificity, and outcome quality. Results show that users found the concept of UI personalization intriguing and engaging, but the performance of the system was inconsistent, often limited by vague prompts, LLM hallucination, and fixed system parsing structures. Specifically, well-articulated, detailed prompts yielded better outcomes, which shows the importance of prompt quality in LLM-driven design. This thesis offers insights into the design of LLM-powered UI personalization systems. Future work could explore better integration between generated outputs and UI frameworks to enhance real-world deployment.

    Download full text (pdf)
    fulltext
  • Eiderbäck, Jesper
    KTH, School of Engineering Sciences (SCI), Applied Physics.
    Super Resolution Live Cell Imaging on Electron Microscopy Grids for Correlation with Cryogenic Electron Tomography2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Correlated light and electron microscopy (CLEM) combines the molecular specificity of fluorescence microscopy (FM) with the single-digit-nanometer-scale structural information of electron microscopy (EM). To reduce the large resolution gap between the modalities, which causes ambiguities in the correlation, CLEM workflows increasingly incorporate super-resolution (SR) FM. Traditionally, SR-FM in CLEM is performed under cryogenic conditions to ensure sample stability during correlation. However, performing SR-FM under cryogenic conditions imposes limitations on illumination intensity and alters fluorophore photophysics. This thesis investigates whether the SR-FM part of the CLEM workflow can be performed at room temperature using U2OS cells expressing vimentin–rsEGFP2 on EM grids. RESOLFT and STED imaging were performed prior to vitrification. RESOLFT imaging on grids was feasible with standard optical intensities, although it yielded lower resolution than on glass coverslips and was affected by several grid-related challenges. In contrast, STED led to fluorophore bleaching, likely due to reflections and heat accumulation in the metallic grid caused by the high intensity of the depletion laser, making STED incompatible with the EM grids under the tested conditions. After vitrification, cryogenic fluorescence microscopy (cryoFLM) and electron tomography (cryoET) confirmed that cells on EM grids previously imaged at room temperature could be localized, demonstrating that room-temperature RESOLFT is compatible with cryoET. However, further work is required to establish whether SR imaging at room temperature can be reliably correlated with cryoET.

    Download full text (pdf)
    fulltext
  • Khoraman, Sina
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Cybersecurity Awareness Training Through Attack Simulations and Behavioral Insights: A Qualitative Study with Theory of Planned Behavior in a Custom Cyber Attack Simulation Learning Platform2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This study explores the effectiveness of simulation-based cybersecurity awareness training through the lens of behavioral psychology. Using the Theory of Planned Behavior (TPB), it investigates how individuals perceive and identify cybersecurity risks before and after completing a custom-designed cyberattack simulation platform. A qualitative methodology, including interviews and scenario-based assessments, was employed to analyze users' attitudes, self-perceptions, and behaviors in response to various cybersecurity threats. Findings reveal a persistent gap between cybersecurity knowledge and real-world actions, often driven by convenience, privacy fatalism, and usability concerns. However, the simulation experience enhanced participants' awareness of specific threats, especially phishing and password-based attacks. The results suggest that while simulations alone may not fully change behaviors, they serve as effective tools for increasing risk perception and should be integrated with tailored interventions for maximum impact.

    Download full text (pdf)
    fulltext
  • Faisol Haq, Muhammad
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Optimizing Hybrid Energy System Operations Based on Battery Decreasing Capacity Characteristics for Isolated Microgrid Islands in Indonesia2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Hybrid energy systems combining diesel generators, photovoltaic (PV) panels, and battery energy storage are increasingly deployed in remote microgrid settings, particularly on isolated islands where grid extension is unfeasible. While these systems offer improved reliability and reduced fuel dependence, battery degradation remains a critical challenge. Premature battery wear not only increases operational costs but also jeopardizes long-term energy access and system sustainability—especially in regions with limited infrastructure and replacement capacity. This thesis addresses the problem of optimizing energy dispatch in hybrid microgrids while explicitly accounting for battery degradation. The novelty lies in the integration of a cycle-based degradation cost model into a Mixed-Integer Linear Programming (MILP) framework, enabling the system to minimize fuel and maintenance costs without compromising battery lifespan. Although energy dispatch optimization is well studied, few models incorporate battery aging in a computationally efficient manner, making this a timely and meaningful research topic. The complexity of balancing short-term diesel savings against long-term battery health presents a suitable challenge at the Master's thesis level. The proposed model uses a piecewise linear approximation of degradation costs based on depth-of-discharge (DoD) breakpoints, ensuring compatibility with MILP solvers. Real-world data from an Indonesian island microgrid was used to evaluate two operational strategies: an optimized, degradation-aware dispatch and an aggressive usage strategy. Results show that while both reduce diesel use, the optimized approach significantly preserves battery health, avoiding early degradation while achieving comparable fuel savings and lower CO2 emissions.
    This research contributes a scalable and practical tool for energy planners and microgrid operators, offering actionable insights into how smart dispatch strategies can extend battery lifespan and reduce long-term costs. The model enables informed operational decisions that balance economic and technical constraints—something that could not be done effectively without integrating battery aging behavior. As a result, it supports more sustainable and resilient energy systems for underserved regions.
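    The piecewise linear degradation cost that makes the model MILP-compatible can be sketched as interpolation between DoD breakpoints. The breakpoint and cost values below are hypothetical, not taken from the thesis:

    ```python
    def degradation_cost(dod, breakpoints, costs):
        """Piecewise-linear battery degradation cost vs depth-of-discharge.

        Interpolates linearly between (breakpoint, cost) pairs, the same
        shape of function a MILP solver can encode with SOS2 or lambda
        variables. Breakpoint values here are illustrative only.
        """
        segments = zip(zip(breakpoints, costs),
                       zip(breakpoints[1:], costs[1:]))
        for (d0, c0), (d1, c1) in segments:
            if d0 <= dod <= d1:
                return c0 + (c1 - c0) * (dod - d0) / (d1 - d0)
        raise ValueError("DoD outside breakpoint range")
    ```

    Deeper discharges sit on steeper segments, so the optimizer naturally trades a little extra diesel for shallower, cheaper battery cycles.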

    Download full text (pdf)
    fulltext
  • Sourmpati, Konstantina Maria
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Reimagining Intimate Data in Femtech through Speculative Feminist Design: The case of ‘Noei’2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Noei is a speculative data physicalisation companion that uses colored light, subtle motion, texture and projection to support interpretive engagement with bodily and affective experiences across the menstrual cycle. The project provokes discussion about menstrual tracking by challenging assumptions of quantification, prediction, and surveillance. Using a Research-through-Design method, I conducted an autoethnographic study to critique app-based tracking and generate design materials, created a speculative prototype with a narrative booklet to convey the experience, and held a design critique with HCI experts and practitioners. The interactions rely on bounded ambiguity, with reflective responses being open for interpretation yet grounded in a co-created vocabulary and a user-initiated sequence. Contributions include a multisensory prototype adhering to an interaction ritual that prioritises consentful activation, local short-lived traces, and graduated visibility alongside a speculative narrative that opens alternative imaginaries. This thesis reframes menstrual technology as witnessing rather than optimising and proposes a dual ecology in which reflective companions like Noei complement certainty-oriented tools.

    Download full text (pdf)
    fulltext
  • Li, Peiheng
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Empowering Automotive Workshop Processing with Advancements in Natural Language Processing2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis optimizes two key tasks in truck workshop environments: symptom-component diagnosis and the generation of workshop instructions. Symptom diagnosis is formulated as a text classification task, associating symptom descriptions with faulty components, while instruction generation is framed as a question-answering task requiring extensive domain knowledge. Through rigorous experimentation, multiple NLP methodologies, including discriminative classifiers and Large Language Models (LLMs), are evaluated. Initially, a discriminative BERT-based model demonstrated robust symptom classification performance. Subsequent testing revealed that pure LLM-based methods struggled due to limited statistical insight and hallucinations. To overcome these issues, a structured ReAct Agent combining discriminative models and LLMs was developed, enhancing diagnosis accuracy. Instruction generation was evaluated using synthetic datasets and real-world expert-validated scenarios, with hierarchical Retrieval-Augmented Generation (RAG) systems performing optimally, closely followed by the ReAct Agent. The naive LLM baseline emphasized the necessity of domain-specific knowledge for accurate instruction creation. Our LLM-driven ReAct Agent achieves diagnostic accuracy comparable to top discriminative models, significantly improving precision, recall, and F1 metrics, and fully resolving label hallucination issues. For instruction generation, the ReAct Agent's performance matches the best RAG models, surpassing simpler LLM baselines. In conclusion, this research demonstrates the complementary strengths and limitations of discriminative and generative NLP models, highlighting the integrated ReAct Agent as a practical, effective solution for boosting workshop technician productivity and operational efficiency.

    Download full text (pdf)
    fulltext
  • Balannagari, Yamini
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Vulnerability Research of Mobile Applications Commonly Used in Sweden: What are the most effective static analysis, dynamic analysis, and reverse engineering methodologies for identifying and evaluating security vulnerabilities in mobile applications?2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This project investigates the security posture of widely used mobile applications, focusing on key challenges related to user data protection, application integrity, and the secure management of communication channels. By evaluating applications across different sectors, the study identifies critical vulnerabilities, including insecure data storage practices, weak cryptographic implementations, improper session management, and insufficient network security configurations. The analysis reveals the substantial risks associated with sensitive information exposure, unauthorized data access, and session hijacking, all of which could compromise user trust and organizational reputation. Through systematic threat modeling and prioritization of findings, the research demonstrates that adopting globally recognized security standards, such as the OWASP Mobile Security Testing Guide (MSTG) and the Mobile Application Security Verification Standard (MASVS), can significantly enhance the security resilience, reliability, and privacy assurances of mobile applications. The study concludes by highlighting the importance of continuous security evaluations, proactive vulnerability remediation, and secure-by-design development methodologies. It recommends future enhancements, including the automation of vulnerability detection processes, the strengthening of encryption standards, and the advancement of user-centric security frameworks to further elevate the overall security and sustainability of the mobile application ecosystem. Finally, a comparative analysis across all evaluated applications was conducted to quantify and contrast their security posture based on OWASP and MASVS compliance.

    Download full text (pdf)
    fulltext
  • Wu, Nicole
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Machine Learning for Integrated Communication and Sensing in Cell-Free Networks2025Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis investigates the use of machine learning, specifically deep neural networks (DNNs), to enhance integrated sensing and communication (ISAC) in cell-free massive MIMO networks. The main goals are to improve data throughput, sensing accuracy, and computational efficiency. A DNN-based approach is proposed for access point (AP) selection and resource allocation, tested through simulations. Results show significant improvements, with sensing accuracy reaching an average distance error of 3.05 meters and an angle error of 0.05 degrees, compared to 3.88 meters and 0.11 degrees with random AP selection. Communication performance also increases, achieving a downlink rate of 13.59 bits/s/Hz versus 12.42 bits/s/Hz for the baseline. However, computational delay rises, indicating a trade-off for future optimization. This work advances the development of intelligent wireless networks, with applications in autonomous systems and smart cities.
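    The AP-selection step can be reduced to a toy top-k choice over predicted utilities. This is a stand-in for the thesis's DNN-based selection; the scores here would come from a trained network, which this sketch does not implement:

    ```python
    def select_aps(scores, k):
        """Pick the k access points with the highest predicted utility.

        `scores` is a list of per-AP utility predictions (assumed to be
        produced by a trained DNN); returns the chosen AP indices in
        descending order of score.
        """
        ranked = sorted(range(len(scores)),
                        key=lambda i: scores[i], reverse=True)
        return ranked[:k]
    ```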

    Download full text (pdf)
    fulltext
  • Yadava, Prashant
    KTH, School of Electrical Engineering and Computer Science (EECS).
    AI-Powered Hybrid Legal Reasoning: Combining Graph and Structured Databases: Multi-Pipeline Query Execution for Legal Knowledge Retrieval, 2025, Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits, Student thesis
    Abstract [en]

    As artificial intelligence becomes increasingly integrated across industries, its ineffective or superficial application can lead not only to underwhelming performance but also to critical errors that undermine trust in automation. This risk is especially acute in domains with strict accuracy and reliability demands, such as the legal sector, where misinformation or incomplete analysis can have serious consequences. In precedent-based legal systems, efficient and accurate information retrieval is further complicated by the need to understand and leverage complex relationships among cases, statutes, and doctrines. Existing AI-powered research tools, like conventional Retrieval-Augmented Generation (RAG) systems, primarily rely on vector similarity search, often missing the intricate relational structures vital for legal analysis. Graph-based RAG systems, though they use schema-less graph databases like Neo4j, typically rely on rigid, predefined sets of entity and relationship types in their extraction and query logic, limiting adaptability to evolving legal documents. This inability to dynamically capture and utilize both semantic and relational information restricts the effectiveness of automated legal research, a significant practical and academic challenge that remains unresolved. This thesis presents a novel Dynamic Entity-Aware Graph- and Vector-Enhanced Retrieval-Augmented Generation system, advancing legal information retrieval through a Large Language Model-driven, adaptive query processing architecture. The system features a dual-database engine: PostgreSQL with pgvector for high-performance semantic search and Neo4j for flexible, relationship-centric graph traversal.
Key innovations include: (i) real-time semantic query decomposition without reliance on predefined patterns, (ii) intelligent entity matching with automatic generation of legal terminology variants, (iii) schema-aware graph query construction that dynamically adapts to evolving legal structures, and (iv) hybrid retrieval that combines graph-based entity identification with vector-based context enrichment, preserving relational integrity while capturing semantic nuance. Evaluation on complex legal queries, spanning case arguments, statutory relationships, and multi-entity reasoning, demonstrates that the proposed system significantly outperforms both conventional and existing graph-based RAG approaches in retrieval accuracy and transparency. The results show that legal professionals can now access more precise, interpretable, and contextually rich information, supporting advanced legal analysis that was previously infeasible with static or single-modality systems. This research thus represents a paradigm shift from static to adaptive legal research systems, enabling more effective, reliable, and interpretable access to critical legal knowledge and offering a foundation for future advancements in AI-driven legal practice.
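The hybrid retrieval idea, combining graph-derived entity matches with vector similarity scores, can be illustrated with a small score-fusion sketch. The function name, the weighting scheme, and the example documents are illustrative assumptions, not the thesis's actual implementation:

```python
def fuse_scores(vector_hits, graph_hits, alpha=0.6):
    """Combine vector-similarity scores with graph-entity match scores.

    vector_hits: {doc_id: cosine similarity in [0, 1]}
    graph_hits:  {doc_id: normalized entity-match score in [0, 1]}
    alpha weights the graph signal; the value 0.6 is an assumption.
    """
    doc_ids = set(vector_hits) | set(graph_hits)
    fused = {
        d: alpha * graph_hits.get(d, 0.0) + (1 - alpha) * vector_hits.get(d, 0.0)
        for d in doc_ids
    }
    # Rank documents by the fused score, highest first.
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# A document found by both retrieval paths outranks single-path hits.
ranking = fuse_scores(
    vector_hits={"case_A": 0.9, "case_B": 0.7},
    graph_hits={"case_B": 1.0, "case_C": 0.5},
)
```

The point of the fusion is that a document surfaced by both the graph traversal and the semantic search is ranked above documents found by only one modality.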

    Download full text (pdf)
    fulltext
  • Zhang, Kecheng
    KTH, School of Electrical Engineering and Computer Science (EECS).
    A Neuromorphic Solver for the Edge User Allocation Problem with Bayesian Confidence Propagation Neural Network: A Dynamic Heuristic Generator for External Unit Excitation, 2025, Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits, Student thesis
    Abstract [en]

    Edge computing pushes computation from remote clouds to resource-constrained servers close to end-users. Determining which users should be connected to which edge servers, the Edge User Allocation (EUA) problem, is NP-hard and becomes intractable for large instances when solved by conventional mixed-integer programming. While many approximate methods have been proposed, neuromorphic solutions are particularly appealing due to their potential for high-speed and energy-efficient hardware implementation. These approaches leverage parallel, stochastic dynamics and local competition to explore combinatorial spaces and naturally enforce exclusivity constraints. In this thesis, we present a neuromorphic formulation based on the Bayesian Confidence Propagation Neural Network (BCPNN), in which each user is represented by a winner-take-all module composed of neurons corresponding to possible user-server assignments, including a dedicated unit for the “no allocation” option. Instead of encoding all constraints in a static energy function, we introduce a dynamic bias generator that steers the network with three heuristics: (i) a load-bias curve that favours near-full servers while penalizing overutilization and underutilization, (ii) a size heuristic that prioritizes smaller-demand users and larger-capacity servers, and (iii) a cosine-similarity term that matches a user’s demand vector to the current residual capacity of each server, enforcing dual-resource feasibility online. A single global parameter controls the trade-off between the number of active servers and the number of served users; scanning a small grid of values enables exploration of different levels of user-server trade-off. Experiments on a 30-instance synthetically generated benchmark, each accompanied by an optimal Gurobi solution, show that the proposed BCPNN-EUA solver finds feasible allocations whose score lies within an average of 12.6% of the optimum while converging in a few hundred simulation steps.
Because the algorithm relies on local updates and event-driven communication, it is well suited to low-power neuromorphic hardware, offering a scalable and energy-efficient alternative for real-time edge-resource management.
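The cosine-similarity heuristic (iii) can be sketched as follows; the function names and the two-resource example are illustrative assumptions, not code from the thesis:

```python
import math

def cosine_similarity_bias(demand, residual):
    """Cosine similarity between a user's resource-demand vector and a
    server's residual-capacity vector (e.g. [CPU, memory]): close to 1
    when the demand matches the 'shape' of the remaining capacity.
    """
    dot = sum(d * r for d, r in zip(demand, residual))
    norm = (math.sqrt(sum(d * d for d in demand))
            * math.sqrt(sum(r * r for r in residual)))
    return dot / norm if norm > 0.0 else 0.0

def dual_resource_feasible(demand, residual):
    """Feasibility check: every demand component must fit the residual."""
    return all(d <= r for d, r in zip(demand, residual))

# A residual capacity proportional to the demand gives similarity 1.0.
sim = cosine_similarity_bias([2.0, 1.0], [4.0, 2.0])
```

Biasing assignments toward high-similarity servers steers users to servers whose leftover capacity is shaped like their demand, which is how the heuristic enforces dual-resource feasibility online.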

    Download full text (pdf)
    fulltext
  • Brilon, Malin
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Designing Human-Machine Interfaces for Maritime Engine Control Rooms: A User-Centered Co-Design Approach to Reduce Cognitive Load and Enhance Usability, 2025, Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits, Student thesis
    Abstract [en]

    This thesis explores how user-centered human-machine interfaces (HMIs) can support maritime engine control room (ECR) operators in managing cognitive load and improving usability. Current ECR HMIs are often fragmented, non-standardized, and cognitively demanding, posing safety and operational risks. In collaboration with the German Aerospace Center (DLR), this project addresses the challenge of designing modular, ergonomic, and context-sensitive HMIs that reflect user needs and operational realities. The research follows a user-centered design approach and includes two main phases: expert interviews and a co-design workshop. Five maritime professionals with extensive seagoing and instructional experience were interviewed to identify usability challenges and interface needs. Their insights informed a scenario-based co-design workshop in an ECR simulator with six engineering participants, who collaboratively developed interface concepts and low-fidelity prototypes for a blackout scenario. Thematic analysis of the qualitative data revealed six key themes: embodied knowledge, automation challenges, visual clarity, information overload, lack of standardization, and communication barriers in multinational teams. Based on these findings, the thesis proposes seven design principles for ECR HMIs, emphasizing multisensory awareness, visual clarity, transparent automation, operational resilience, standardization, contextual prioritization, and communication support. In addition, the study presents UI mockups and multimodal design solutions derived from the workshop and interviews. This work contributes to the limited HCI research in ECR contexts and provides practical design guidance for future maritime HMIs. It highlights the value of co-design and user-in-the-loop methods in developing safety-critical systems and underlines the need for maritime-specific interface standards to enhance operational efficiency and crew well-being.

    Download full text (pdf)
    fulltext
  • Skoglund, Robert
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Conditional Football Player Image Generation: Generating pose-consistent images of football players leveraging pose skeletons and depth-maps, 2025, Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits, Student thesis
    Abstract [en]

    This thesis presents a study on generating photorealistic, pose-consistent images of football players to address the challenge of building accurate pose detection models that often struggle with infrequent or rare poses due to limited labeled data. The primary goal was to develop and evaluate image generation models capable of producing synthetic player images consistent with specified poses, resembling TRACAB’s data. The research utilizes an adapter-style framework employing ControlNet to guide powerful pretrained base models, specifically Stable Diffusion (SD) and Stable Diffusion XL (SDXL). Image generation was conditioned using 2D human pose skeletons and depth-maps, both individually and simultaneously (Multi-ControlNet). Performance was quantitatively assessed using visual fidelity metrics (FID, CMMD) and pose similarity metrics (AP, CAP). The results show that depth-conditioning consistently enhanced visual fidelity over pose-only conditioning, leading to improvements in FID and CMMD. Although the Pose-SD model achieved the highest pose consistency metrics (AP and CAP), pose-only models performed poorly on rare or difficult poses. Scaling the base model to SDXL further improved fidelity and strengthened semantic adaptation, but this incurred a higher computational cost, resulting in increased inference latency (up to 6.3s) compared to single-modality SD models (3s). The findings conclude that depth conditioning is the most reliable modality for generating photorealistic football player images, while SDXL architectures provide stronger generalization. This capability is valuable for creating diverse training datasets to improve downstream pose detection models.

    Download full text (pdf)
    fulltext
  • Eriksson, Matias
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Collision avoidance of MCTSVO path planner for omnidirectional AMRs in multi-agent warehouses, 2025, Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits, Student thesis
    Abstract [en]

    Autonomous mobile robots (AMRs) are increasingly common in modern warehouses, where large quantities of goods have to be stored and processed efficiently; they transport goods between points of interest and need to navigate safely and quickly. A defining feature of AMRs is their ability to autonomously compute a path toward the goal and to react to changes in their environment, such as finding a new path if the original path becomes obstructed. This requires the AMRs to have robust navigational systems capable of local and global path planning. This study analyzes the path planning algorithm MCTSVO with respect to its collision avoidance properties in warehouse environments. MCTSVO is a preliminary approach designed to handle both local and global path planning. This is an uncommon approach, since these two aspects of path planning are typically handled separately; MCTSVO instead combines Monte Carlo Tree Search (MCTS) with the Velocity Obstacle (VO) algorithm. Unity3D is used to simulate omnidirectional AMRs to evaluate MCTSVO’s collision avoidance in warehouse environments compared to MCTS and the standard collision avoidance algorithms VO, RVO, and ORCA from the velocity obstacle paradigm. This is done using 6 Movement Performance Scenarios derived from warehouse requirements and 7 Computational Scaling Scenarios. The scenarios are run multiple times for every algorithm to produce average results with confidence intervals that ensure drawn conclusions are statistically significant. The evaluation is derived from state-of-the-art and established benchmark frameworks to measure local planning performance with the following metrics: Unique Collisions, Minimum Distance to Closest Object, Percentage of Time Spent in Dangerous Areas, Path- and Velocity-Smoothness, Flow Time, Successful Runs, and Average and 95th-Percentile Computation Time.
The findings from this study show that MCTSVO, in the tested scenarios, offers a clear improvement in movement performance over MCTS while not outperforming the standard algorithms. In terms of safety its performance sits in the middle of the standard algorithms, while in time efficiency it falls short of each of them. These movement performance results are also accompanied by computational requirements several orders of magnitude higher than those of the standard algorithms. The study highlights the limitations of MCTSVO to help shape potential future research on the most critical areas of the algorithm.
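The Velocity Obstacle concept underlying VO, RVO, ORCA, and MCTSVO can be illustrated with a minimal 2D check: a relative velocity leads to a future collision if the ray it traces passes within the agents' combined radius of the other agent's relative position. The sketch below follows that standard geometric definition; it is not code from the thesis:

```python
import math

def in_velocity_obstacle(p_rel, v_rel, combined_radius):
    """Return True if the relative velocity v_rel (agent minus obstacle)
    leads to a future collision: the ray from the origin along v_rel
    passes within combined_radius of the relative position p_rel.
    """
    px, py = p_rel
    vx, vy = v_rel
    v_sq = vx * vx + vy * vy
    if v_sq == 0.0:
        # No relative motion: collision only if already overlapping.
        return math.hypot(px, py) < combined_radius
    # Time of closest approach along the ray, clamped to the future.
    t = max(0.0, (px * vx + py * vy) / v_sq)
    # Distance from the obstacle centre to the closest point on the ray.
    return math.hypot(px - vx * t, py - vy * t) < combined_radius

# Heading straight at an obstacle 5 m ahead with 1 m combined radius.
heading_at = in_velocity_obstacle((5.0, 0.0), (1.0, 0.0), 1.0)
# Moving perpendicular to the obstacle direction.
heading_away = in_velocity_obstacle((5.0, 0.0), (0.0, 1.0), 1.0)
```

A planner then picks a velocity outside every such obstacle set; RVO and ORCA refine how responsibility for avoidance is split between agents.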

    Download full text (pdf)
    fulltext
  • Silfving, Marcus
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Liu, Donglin
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Evaluation of OCR Engines for Logistics Documents: Accuracy, Preprocessing and Cloud Integration: A Comparative Study Using Tesseract, Google Vision and AWS Textract, 2025, Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits, Student thesis
    Abstract [en]

    This thesis explores the performance of OCR systems on scanned documents, with a focus on comparing traditional and cloud-based engines. The problem addressed is the inconsistency in recognition accuracy across different OCR tools when applied to real-world low-quality images such as internal corporate documents and regulatory correspondence, which presents a challenge in digital archiving. The project investigates whether preprocessing techniques, such as grayscale conversion, contrast enhancement, thresholding, and morphological operations, can improve recognition quality. Three OCR engines were tested: Tesseract (open-source), Google Cloud Vision, and Amazon Textract. Each engine was evaluated on a dataset of 20 labeled images using character error rate (CER) as the primary metric. A custom Python pipeline was developed to automate batch processing of images, apply preprocessing, execute OCR, and log results for analysis. Both original and preprocessed versions of each image were tested. Preprocessing improved results in most cases but in some cases worsened them, especially for cloud-based models, suggesting these services already include internal preprocessing. Tesseract showed significant variation in CER, often benefiting solely from thresholding, while cloud-based engines performed more consistently. The findings indicate that naive preprocessing has limited benefit unless tailored per engine. Results also highlight the superior baseline performance of Google Cloud Vision in most cases. Future work could explore machine learning-based enhancement or layout-aware OCR for complex document types. The resulting framework supports fast benchmarking of OCR systems and can serve in future research or industry evaluation of document digitization tools.
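The CER metric used in such evaluations is the Levenshtein edit distance between the OCR output and the ground-truth reference, divided by the reference length. A minimal sketch (the example strings are illustrative, not drawn from the thesis dataset):

```python
def character_error_rate(reference, hypothesis):
    """CER = Levenshtein edit distance / length of the reference text."""
    m, n = len(reference), len(hypothesis)
    # Row-by-row dynamic-programming computation of edit distance.
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n] / m if m else 0.0

# Two character substitutions ('i'→'1', '0'→'O') in a 12-character
# reference give CER = 2/12 ≈ 0.167.
cer = character_error_rate("invoice 2024", "invo1ce 2O24")
```

Lower CER means better recognition; unlike plain string equality, it degrades gracefully with insertions, deletions, and substitutions, which is why it suits noisy scanned documents.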

    Download full text (pdf)
    fulltext
  • Wistemar, Oscar
    KTH, School of Engineering Sciences (SCI), Physics, Particle Physics, Astrophysics and Medical Imaging.
    Photospheric emission from gamma-ray bursts altered by radiation-mediated shocks, 2026, Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis explores gamma-ray bursts (GRBs), and more specifically the prompt emission phase, which is the first ∼ 10 seconds of gamma-ray emission. GRBs come from the launching of a relativistic jet in connection with core-collapse supernovae or compact object (neutron star or black hole) mergers. The relativistic jet accelerates and eventually most of the energy is kinetic; that energy is then somehow converted into internal energy, which is emitted in gamma-rays and is what we observe. The mechanism responsible for this conversion in the prompt phase is not fully understood, and this thesis deals with one possible such mechanism, radiation-mediated shocks (RMSs). Such shocks occurring below the photosphere alter the photon spectral energy distribution, which is then released at the photosphere towards an observer. An analogue model of RMSs, called the Kompaneets RMS approximation (KRA), is discussed and later applied in Papers I and II. In Paper I we generalize a method to measure the bulk outflow Lorentz factor based on the properties of photospheric emission and the evolution of the photon energy distribution in the jet. We find that depending on the quality of the data, either a value or an upper limit can be found for the Lorentz factor. In Paper II we perform a time-resolved spectral analysis of GRB 211211A, a GRB with a broad spectrum containing two breaks, one in the tens of keV and one around a few MeV. Using the method presented in Paper I we find typical Lorentz factor values of ∼ 300. From the Lorentz factors and the KRA model we find the time evolution of the RMS parameters, here a strong shock occurring at moderate optical depths. We also show that the KRA model can fit these broad spectra with two breaks very well.

    Download full text (pdf)
    thesis_text