kth.se Publications
51 - 100 of 356
  • 51.
    Cederberg, Idun
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Cui, Ida
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Analysing the Optimal Fund Selection and Allocation Structure of a Fund of Funds (2023). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis aims to investigate different types of optimization methods that can be used when optimizing fund-of-funds portfolios. Moreover, the thesis investigates which funds should be included, and what their respective portfolio weights should be, in order to outperform the Swedish SIX Portfolio Return Index. The funds considered for the particular fund of funds in this thesis are all managed by a particular company. The optimization frameworks applied include traditional mean-variance optimization, minimum Conditional Value at Risk optimization, as well as optimization methods studying alpha in combination with the risk measures tracking error and maximum drawdown, respectively. All four optimization methods were applied to a ten-year data period as well as to a five-year data period. It was found that while the funds have different strengths and weaknesses, four of the funds were considered most appropriate for the fund of funds. Geography and sector constraints were also taken into account, and it was found that, in this particular case, the healthcare sector constraint affected the allocated portfolio weights the most.

    Download full text (pdf)
  • 52. Cedervall, Simon
    et al.
    Hu, Xiaoming
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Nonlinear observers for unicycle robots with range sensors (2007). In: IEEE Transactions on Automatic Control, ISSN 0018-9286, E-ISSN 1558-2523, Vol. 52, no. 7, p. 1325-1329. Article in journal (Refereed).
    Abstract [en]

    For nonlinear mobile systems equipped with exteroceptive sensors, observability depends not only on the initial conditions, but also on the control and the environment. This raises an interesting issue: how to design an observer together with the exciting control. In this note, the problem of designing an observer based on range sensor readings is studied. A design method based on periodic excitations is proposed for unicycle robotic systems.

  • 53.
    Chachólski, Wojciech
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Levi, R.
    Meshulam, R.
    On the topology of complexes of injective words (2020). In: Journal of Applied and Computational Topology, ISSN 2367-1726, Vol. 4, no. 1, p. 29-44. Article in journal (Refereed).
    Abstract [en]

    An injective word over a finite alphabet V is a sequence w = v1v2⋯vt of distinct elements of V. The set Inj(V) of injective words on V is partially ordered by inclusion. A complex of injective words is the order complex Δ(W) of a subposet W ⊂ Inj(V). Complexes of injective words arose recently in applications of algebraic topology to neuroscience, and are of independent interest in topology and combinatorics. In this article we mainly study permutation complexes, i.e. complexes of injective words Δ(W), where W is the downward closed subposet of Inj(V) generated by a set of permutations of V. In particular, we determine the homotopy type of Δ(W) when W is generated by two permutations, and prove that any stable homotopy type is realizable by a permutation complex. We describe a homotopy decomposition for the complex of injective words Γ(K) associated with a simplicial complex K, and point out a connection to a result of Randal-Williams and Wahl. Finally, we discuss some probabilistic aspects of random permutation complexes.

  • 54.
    Chan, Jenny
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Assisted Annotation of Sequential Image Data With CNN and Pixel Tracking (2021). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this master thesis, different neural networks have been investigated for annotating objects in video streams with partially annotated data as input. Annotation in this thesis refers to bounding boxes around the targeted objects. Two different methods have been used, ROLO and GOTURN: object detection with tracking, and object tracking with pixels, respectively. The data set used for validation consists of surveillance footage with varying image resolution, image size and sequence length. Modifications of the original models have been made to fit the test data.

    Promising results for modified GOTURN were shown, where the partially annotated data was used as assistance in tracking. The model is robust and provides sufficiently accurate object detections for practical use. With the new model, human resources for image annotation can be reduced by at least half.

    Download full text (pdf)
  • 55.
    Chaoui El Kaid, Yasmin
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Validation of an implementation of MBSE and the possibility of Simulating System Models (2021). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Model-Based Systems Engineering (MBSE) is the process of developing a set of system models that help define, design and document a system under development. These models provide an efficient way to explore, update, and communicate system perspectives to stakeholders, while eliminating dependence on traditional documents. It is also a cost-effective way to test and explore new system characteristics by testing early and getting feedback on different design decisions fast.

    Saab Surveillance has developed a version of MBSE to create system models for Saab's products. These models are made in the modelling tool Rhapsody Architect for Systems Engineers [1] and use SysML as the modelling language.

    With the evolution of electronic warfare systems, the complexity of these system models escalates progressively. This creates a great need for model validation and simulation in order to verify and improve systems both at the start of and during development.

    The purpose of this thesis was to examine the possibility of validating and simulating the system models created in the business unit EW Systems at Saab Surveillance. The examination included a validation of the EW-MBSE system design and a suggestion of modelling modifications and tools needed in order to be able to simulate the MBSE models in early stages.

    The results showed that EW-MBSE is an accurate modelling method that does not need many modifications in order to support model validation and simulation. As for the modelling tools, Rhapsody Designer for Systems Engineers [2] is a suitable choice for functional perspectives. Furthermore, it should be possible to supplement the MBSE models with mathematical models in Simulink [11] and integrate them in Rhapsody for continuous simulations. However, this was not possible to implement in this thesis for reasons described in chapter 4.1.3.

    Download full text (pdf)
  • 56.
    Charbonneau, Talwyn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Energy efficiency of pumps and heat exchangers (2021). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Energy efficiency is a growing field that concerns more and more companies, individuals and governments seeking to reduce costs and use less energy for more sustainable development. Engie is no exception and values energy-efficient projects.

    This paper documents three energy-efficiency projects that I carried out during my five-month end-of-studies internship. The work spans different topics: the parallel operation of hydraulic distribution pumps, the modeling of a heat exchanger, as well as PID calculations for a dedicated heating network.

    We delve into the technical aspect of each subject before introducing the tools developed, with some of their results.

    Download full text (pdf)
  • 57. Cheng, Daizhan
    et al.
    Wang, Jinhuan
    Hu, Xiaoming
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    An extension of LaSalle's invariance principle and its application to multi-agent consensus (2008). In: IEEE Transactions on Automatic Control, ISSN 0018-9286, E-ISSN 1558-2523, Vol. 53, no. 7, p. 1765-1770. Article in journal (Refereed).
    Abstract [en]

    In this paper, an extension of LaSalle's Invariance Principle to a class of switched linear systems is studied. One of the motivations is the consensus problem in multi-agent systems. Unlike most existing results, in which each switching mode in the system needs to be asymptotically stable, this paper only requires the switching modes to be Lyapunov stable. Under certain ergodicity assumptions, an extension of LaSalle's Invariance Principle for global asymptotic stability is obtained. It is then used to solve the consensus reaching problem for certain multi-agent systems in which each agent is modeled by a double integrator and the associated interaction graph is switching and assumed to be only jointly connected.

  • 58.
    Corfitsen, Christian
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Impact of Forward-Looking Macroeconomic Information on Expected Credit Losses According to IFRS 9 (2021). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this master thesis, the impact of forward-looking macroeconomic information under IFRS 9 is studied using fictional data from a Swedish mortgage loan portfolio. The study takes a time series analysis approach and employs vector autoregression models to model expected credit loss parameters with multiple incorporated macroeconomic parameters. The models are analyzed using impulse response functions to study the impact of macroeconomic shocks, and the results show that the unemployment rate, the USD/SEK exchange rate and 3-month interest rates have a significant impact on expected credit losses.

    Download full text (pdf)
  • 59.
    Dahlström, Knut
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Forssbeck, Carl
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Simulation Based Methods for Credit Risk Management in Payment Service Provider Portfolios (2023). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Payment service providers have unique credit portfolios with different characteristics than many other credit providers. It is therefore important to study if common credit risk estimation methods are applicable to their setting. By comparing simulation based methods for credit risk estimation it was found that combining Monte Carlo simulation with importance sampling and the asymptotic single risk factor model is the most suitable model amongst those analyzed. It allows for a combination of variance reduction, scenario analysis and correlation checks, which all are important for estimating credit risk in a payment service provider portfolio.

    Download full text (pdf)
  • 60.
    Dalarsson, Mariana
    et al.
    KTH, School of Electrical Engineering (EES), Electromagnetic Engineering.
    Emadi, Seyed Mohamad Hadi
    KTH, School of Electrical Engineering (EES), Electromagnetic Engineering.
    Norgren, Martin
    KTH, School of Electrical Engineering (EES), Electromagnetic Engineering.
    Perturbation Approach to Reconstructing Deformations in a Coaxial Cylindrical Waveguide (2015). In: Mathematical Problems in Engineering (Print), ISSN 1024-123X, E-ISSN 1563-5147, article id 915497. Article in journal (Refereed).
    Abstract [en]

    We study a detection method for continuous mechanical deformations of coaxial cylindrical waveguide boundaries, using perturbation theory. The inner boundary of the waveguide is described as a continuous PEC structure with deformations modeled by suitable continuous functions. In the present approach, the computation complexity is significantly reduced compared to discrete conductor models studied in our previous work. If the mechanically deformed metallic structure is irradiated by the microwave fields of appropriate frequencies, then, by means of measurements of the scattered fields at both ends, we can reconstruct the continuous deformation function. We apply the first-order perturbation method to the inverse problem of reconstruction of boundary deformations, using the dominant TEM-mode of the microwave radiation. Different orders of Tikhonov regularization, using the L-curve criterion, are investigated. Using reflection data, we obtain reconstruction results that indicate an agreement between the reconstructed and true continuous deformations of waveguide boundaries.

  • 61.
    Dalfi, Reza Salam
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Mattar, Noel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Evaluation of portfolio optimization methods on decentralized assets and hybridized portfolios (2022). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The market for decentralised financial instruments, more commonly known as cryptocurrencies, has gained momentum over the past few years, and the application areas are many. Modern portfolio theory has for years demonstrated its applicability to traditional assets, such as equities and other instruments, but has to some extent omitted the application of mathematical portfolio theory to cryptocurrencies. This master's thesis aims to evaluate both traditional and DeFi assets from a modern optimization perspective. The focus area includes which allocation structures minimize the risk-adjusted return. The optimization strategies are based on the risk measures standard deviation, Conditional Value at Risk and first linear partial moment. The method is structured around different scenarios where the outcome is optimized for traditional assets, DeFi assets and a hybrid set of these.

    The input data for the optimization methodology is based on weekly and adjusted price data for the assets. The output variables are weight-distribution, risk levels, return, maximum drawdown and graphic visualizations.

    Our results show that there is value in incorporating some assets from the decentralized financial world in a portfolio, provided that the risk-adjusted ratio increases, through both higher returns and higher potential risk. These results are based on incorporating certain parts of the new landscape, where more established assets such as Bitcoin and Ethereum have proven to perform well, while other, less traded assets show significantly worse results relative to risk.

    Download full text (pdf)
  • 62.
    Danielson, Oscar
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Hagéus, Tom
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Optimal Capital Structures under the Vasicek Stochastic Interest Rate Model (2023). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This study applies the Vasicek stochastic interest rate model in order to determine optimal capital structures for listed firms. A Swedish interest rate data set is used to estimate Vasicek model parameters that are reliable and independent of initial start values. These interest rate parameters are then used in a capital structure model, which is evaluated through a sensitivity analysis and a firm-specific analysis applied to listed Swedish firms. The tax benefits of debt must be balanced against transaction costs and bankruptcy costs when determining the optimal leverage ratio and optimal debt maturity. The results imply that firms should primarily focus on the long-term mean parameter of the interest rate process, the volatility of the firm value, the transaction cost of issuing debt and the effective corporate tax rate when choosing a capital structure. The capital structure model is well-founded in previous research and yields results which align quite well with empirical data. The conclusions of this study have implications for corporate finance, as the Vasicek model provides a better understanding of the stochastic nature of interest rates and their influence in determining optimal capital structures.

    Download full text (pdf)
  • 63.
    Darke, Felix
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Interpretable Machine Learning for Insurance Risk Pricing (2023). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This Master's Thesis project set out with the objective of proposing a machine learning model for predicting insurance risk at the level of an individual coverage, and comparing it to the existing models used by the project provider Gjensidige Försäkring. Due to interpretability constraints, it was found that this problem can be translated into a standard tabular regression task with well defined target distributions. However, it was identified early that the set of feasible models does not contain pure black-box models such as XGBoost, LightGBM and CatBoost, which are typical choices for tabular data regression. In the report, we explicitly formulate the interpretability constraints in sharp mathematical language. It is concluded that interpretability can be ensured by enforcing a particular structure on the Hilbert space across which we are looking for the model.

    Using this formalism, we consider two different approaches for fitting high performing models that maintain interpretability, where we conclude that gradient boosted regression tree based Generalized Additive Models in general, and the Explainable Boosting Machine in particular, is a promising model candidate consisting of functions within the Hilbert space of interest. The other approach considered is the basis expansion approach, which is currently used at the project provider. We make the argument that the gradient boosted regression tree approach used by the Explainable Boosting Machine is a more suitable model type for an automated, data driven modelling approach which is likely to generalize well outside of the training set.

    Finally, we perform an empirical study on three different internal datasets, where the Explainable Boosting Machine is compared to the current production models. We find that the Explainable Boosting Machine systematically outperforms the current models on unseen test data. There are many potential ways to explain this, but the main hypothesis brought forward in the report is that the sequential model fitting procedure allowed by the regression tree approach lets us effectively explore a larger portion of the Hilbert space containing all permitted models, in comparison to the basis expansion approach.

    Download full text (pdf)
  • 64.
    de la Bretèche, Régis
    et al.
    Institut de Mathématiques de Jussieu, UMR 7586, Université Paris-Diderot, UFR de Mathématiques, case 7012, Bâtiment Sophie Germain, 75205 Paris Cedex 13, France.
    Kurlberg, Pär
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Shparlinski, Igor
    Department of Pure Mathematics, University of New South Wales, Sydney, NSW 2052, Australia.
    On the number of products which form perfect powers and discriminants of multiquadratic extensions (2021). In: International Mathematics Research Notices, ISSN 1073-7928, E-ISSN 1687-0247, Vol. 2021, no. 22, p. 17140-17169. Article in journal (Refereed).
    Abstract [en]

    We study some counting questions concerning products of positive integers u1, …, un which form a nonzero perfect square, or more generally, a perfect k-th power. We obtain an asymptotic formula for the number of such integers of bounded size and in particular improve and generalize a result of D. I. Tolev (2011). We also use similar ideas to count the discriminants of number fields which are multiquadratic extensions of Q, and improve and generalize a result of N. Rome (2017).

  • 65.
    de Woul, Jonas
    et al.
    KTH, School of Engineering Sciences (SCI), Theoretical Physics, Mathematical Physics.
    Langmann, Edwin
    KTH, School of Engineering Sciences (SCI), Theoretical Physics, Mathematical Physics.
    Gauge invariance, correlated fermions, and Meissner effect in 2+1 dimensions. Article in journal (Other academic).
    Abstract [en]

    We present a 2+1 dimensional quantum gauge theory model with correlated fermions that is exactly solvable by bosonization. This model gives an effective description of partially gapped fermions on a square lattice that have density-density interactions and are coupled to photons. We show that the photons in this model are massive due to gauge-invariant normal-ordering, as in the Schwinger model. Moreover, the exact excitation spectrum of the model has two gapped and one gapless mode. We also compute the magnetic field induced by an external current and show that there is a Meissner effect. We find that the transverse photons have significant effects on the low-energy properties of the model even if the fermion-photon coupling is small.

  • 66.
    de Woul, Jonas
    et al.
    KTH, School of Engineering Sciences (SCI), Theoretical Physics, Mathematical Physics.
    Langmann, Edwin
    KTH, School of Engineering Sciences (SCI), Theoretical Physics, Mathematical Physics.
    Partial continuum limit of the 2D Hubbard model. Article in journal (Other academic).
  • 67.
    Deleplace, Adrien
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Constrained Gaussian Process Regression Applied to the Swaption Cube (2021). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This document is a Master Thesis report in financial mathematics at KTH, the product of an internship conducted at Nexialog Consulting in Paris. It concerns the innovative use of constrained Gaussian process regression to build an arbitrage-free swaption cube. The methodology introduced in the document is applied to a data set of out-of-the-money European swaptions.

    Download full text (pdf)
  • 68.
    Dernsjö, Axel
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Blom, Ebba
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A Gradient Boosting Tree Approach for Behavioural Credit Scoring (2023). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This report evaluates the possibility of using sequential learning in a material development setting to help predict material properties and speed up the development of new materials. To do this, a random forest model was built incorporating carefully calibrated prediction uncertainty estimates. The idea behind the model is to use the few data points available in this field and leverage that data to build a better representation of the input-output space as each experiment is performed. With both predictions and uncertainties to evaluate, several different strategies were developed to investigate performance. Promising results regarding feasibility and potential cost-cutting were found using these strategies. It was found that within a specific performance region of the output space, the mean difference in alloying component price between the cheapest and most expensive material could be as high as 100 %. Also, the model performed fast extrapolation to previously unknown output regions, meaning new, differently performing materials could be found even with very poor initial data.

    Download full text (pdf)
  • 69.
    Dong, Yiheng
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    One-Dimensional Dynamics: from Poincaré to Renormalization (2023). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Renormalization is a powerful tool showing up in different contexts of mathematics and physics. In the context of circle diffeomorphisms, the renormalization operator acts like a microscope and allows one to study the dynamics of a circle diffeomorphism on a small scale. The convergence of renormalization leads to a proof of the so-called rigidity theorem, which classifies the dynamics of circle diffeomorphisms geometrically: the conjugacy between a $C^3$ circle diffeomorphism with Diophantine rotation number and the corresponding rotation is $C^1$.

    In this thesis, we define the renormalization of circle diffeomorphisms and study its dynamics. In particular, we prove that the renormalization of orientation preserving $C^3$ circle diffeomorphisms with irrational rotation number of bounded type converges to rotations at exponential speed. We also introduce the necessary relevant concepts such as rotation number, distortion and non-linearity and discuss some of their properties.

    This thesis is a summary and supplement to the book One-Dimensional Dynamics: from Poincaré to Renormalization.

    Download full text (pdf)
  • 70.
    Dong, Yuanlin
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    A Study of Risk Factor Models: Theoretical Derivations and Practical Applications (2023). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis provides an end-to-end picture of the modelling of interest rates and Foreign Exchange (FX) rates. We start by defining the FX rates and the interest rates. After having a good understanding of the basics, we take a deep dive into the approaches commonly used to model interest rates and FX rates respectively. In particular, we present an interest rate model and an FX rate model that I have developed for managing Swedbank’s Counterparty Credit Risk (CCR). In addition to the mathematical derivations, we describe the theories underlying the models, discuss the model comparisons, and explain the model choices made in practical applications. Finally, we provide a prototype of model implementation to illustrate how theory can be put into practice.

    I had some doubts about the interest rate model and the FX rate model that I have developed for managing Swedbank’s CCR. These doubts have been cleared up through this thesis work. Both the doubts and the clarifications are described in this thesis.

    Download full text (pdf)
  • 71.
    Dougherty, Mark
    KTH, Superseded Departments (pre-2005), Aeronautical and Vehicle Engineering.
    What has literature to offer computer science? (2004). In: Human IT, ISSN 1402-1501, E-ISSN 1402-151X, Vol. 7, no. 1, p. 74-91. Article, review/survey (Refereed).
    Abstract [en]

    In this paper I ask the question: what has literature to offer computer science? Can a bilateral programme of research be started with the aim of discovering the same kind of deep intertwining of ideas between computer science and literature, as already exists between computer science and linguistics? What practical use could such results yield? I begin by studying a classic forum for some of the most unintelligible pieces of prose ever written, the computer manual. Why are these books so hard to understand? Could a richer diet of metaphor and onomatopoeia help me get my laser printer working? I then dig down a little deeper and explore computer programs themselves as literature. Do they exhibit aesthetics, emotion and all the other multifarious aspects of true literature? If so, does this support their purpose and understandability? Finally I explore the link between computer code and the human writer. Rather than write large amounts of code directly, we encourage students to write algorithms as pseudo-code as a first step. Pseudo-code tells a story within a semi-formalised framework of conventions. Is this the intertwining we should be looking for?

  • 72.
    Eidmann, Ludvig
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Error estimation for neural network approximations of convection problems (2022). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The problem of approximating a solution to the convection equation $\partial_t u +f\cdot\nabla_xu = h$ given data on the flux $f$, source function $h$ and a final condition $g$ is investigated. Specifically, two layer neural networks are used to approximate $f,h$ and $g$ and a solution is approximated using numerical integration. An upper bound to the expected square error of the approximated solution is derived which is dependent on the number of parameters in the approximating neural networks. The dependency of the error is investigated via numerical experiments concerning both synthetic and real world wind data. The neural networks used in the numerical experiments are trained first by the algorithm Adaptive Metropolis-Hastings and then by the SGD-type algorithm Adam. The rate of convergence of the approximation error is shown to be in line with the derived bound when approximating a solution close in time to the final condition $g$. The error is shown to decrease slower than what the derived bound suggests when approximating far away in time from the final condition $g$.

    Download full text (pdf)
    fulltext
  • 73.
    Ekelöf, Linus
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Using a multi-objective cuckoo search algorithm to solve the urban transit routing problem2021Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

The design of public transportation networks includes the problem of finding efficient transit routes. This problem is called the Urban Transit Routing Problem (UTRP), and it is a highly complex combinatorial optimization problem. Solving the UTRP and finding more efficient transit routes may lead to large cost savings as well as shorter average travel times for the passengers. The most common approach to solving it in the literature is with a metaheuristic algorithm. The purpose of this thesis is to solve the UTRP with such an algorithm, and to make the algorithm efficient. To this end, the multi-objective Discrete Cuckoo Search (MODCS) algorithm is introduced; it solves the UTRP with respect to both passenger and operator objectives. Two network instances are solved: Mandl's network, a common benchmark, and the Södertälje bus network. For Mandl's network, the results were compared to other algorithms in the literature. The results showed strong performance of the MODCS algorithm with respect to the passenger objective, but weaker performance with respect to the operator objective. The computation times of the MODCS were higher than those of the other algorithms. For the Södertälje bus network, the MODCS algorithm found route sets with significantly better objective values than those of a previous master thesis algorithm. Furthermore, the average computation times of the MODCS algorithm were much lower than those of the previous master thesis algorithm.
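The thesis's MODCS algorithm is multi-objective and discrete and is not reproduced here; as a rough illustration of the underlying metaheuristic only, the sketch below runs a minimal single-objective cuckoo search with Mantegna-style Lévy flights on a toy continuous test function. All names and parameter values are illustrative, not taken from the thesis.

```python
import math
import numpy as np

def sphere(x):
    """Toy objective: f(x) = sum x_i^2, minimized at the origin."""
    return float(np.sum(x ** 2))

def levy_step(rng, dim, beta=1.5):
    """Heavy-tailed step lengths via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim=2, n_nests=15, n_iter=200, pa=0.25, seed=0):
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-5.0, 5.0, (n_nests, dim))
    fitness = np.array([f(x) for x in nests])
    for _ in range(n_iter):
        best = nests[np.argmin(fitness)]
        for i in range(n_nests):
            # New candidate via a Levy flight scaled by distance to the best nest
            cand = nests[i] + 0.01 * levy_step(rng, dim) * (nests[i] - best)
            j = rng.integers(n_nests)
            fc = f(cand)
            if fc < fitness[j]:              # replace a randomly chosen nest if better
                nests[j], fitness[j] = cand, fc
        # Abandon a fraction pa of the worst nests and resample them uniformly
        n_abandon = int(pa * n_nests)
        worst = np.argsort(fitness)[-n_abandon:]
        nests[worst] = rng.uniform(-5.0, 5.0, (n_abandon, dim))
        fitness[worst] = [f(x) for x in nests[worst]]
    k = int(np.argmin(fitness))
    return nests[k], fitness[k]
```

The multi-objective version replaces the single fitness comparison with Pareto-dominance checks and maintains an archive of non-dominated route sets.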

    Download full text (pdf)
    fulltext
  • 74.
    Ekerå, Martin
    KTH. Swedish NCSA, Swedish Armed Forces, 107 85, Stockholm, Sweden.
    On post-processing in the quantum algorithm for computing short discrete logarithms2020In: Designs, Codes and Cryptography, ISSN 0925-1022, E-ISSN 1573-7586, Vol. 88, no 11, p. 2313-2335Article in journal (Refereed)
    Abstract [en]

    We revisit the quantum algorithm for computing short discrete logarithms that was recently introduced by Ekerå and Håstad. By carefully analyzing the probability distribution induced by the algorithm, we show its success probability to be higher than previously reported. Inspired by our improved understanding of the distribution, we propose an improved post-processing algorithm that is considerably more efficient, enables better tradeoffs to be achieved, and requires fewer runs, than the original post-processing algorithm. To prove these claims, we construct a classical simulator for the quantum algorithm by sampling the probability distribution it induces for given logarithms. This simulator is in itself a key contribution. We use it to demonstrate that Ekerå–Håstad achieves an advantage over Shor, not only in each individual run, but also overall, when targeting cryptographically relevant instances of RSA and Diffie–Hellman with short exponents.

  • 75.
    Ekström, Lukas
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Estimating fuel consumption using regression and machine learning2018Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis focuses on investigating the usage of statistical models for estimating fuel consumption of heavy duty vehicles. Several statistical models are assessed, along with machine learning using artificial neural networks.

Data recorded by sensors on board trucks in the EU describe the operational usage of the vehicle. The usage of this data for estimating the fuel consumption is assessed, and several variables originating from the operational data are modelled and tested as possible input parameters.

    The estimation model for real world fuel consumption uses 8 parameters describing the operational usage of the vehicles, and 8 parameters describing the vehicles themselves. The operational parameters describe the average speed, topography, variation of speed, idling, and more. This model has an average relative error of 5.75%, with a prediction error less than 11.14% for 95% of all tested vehicles.

    When only vehicle parameters are considered, it is possible to make predictions with an average relative error of 9.30%, with a prediction error less than 19.50% for 95% of all tested vehicles.

Furthermore, a computer software called the Vehicle Energy Consumption Calculation tool (VECTO) must be used to simulate the fuel consumption for all heavy duty vehicles, according to legislation by the EU. Running VECTO is a slow process, and this thesis also investigates how well statistical models can be used to quickly estimate the VECTO fuel consumption. The model estimates VECTO fuel consumption with an average relative error of 0.32% and with a prediction error less than 0.65% for 95% of all tested vehicles.
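As a hedged illustration of the error metrics the abstract reports (average relative error and a 95th-percentile bound), the sketch below fits an ordinary least-squares model to synthetic data and computes both figures. The data, dimensions and names are invented stand-ins, not the thesis's vehicle data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for 8 operational + 8 vehicle parameters
# (names and sizes are invented for illustration)
n, p = 500, 16
X = rng.normal(size=(n, p))
true_beta = rng.uniform(0.5, 2.0, p)
fuel = X @ true_beta + 30.0 + rng.normal(scale=1.0, size=n)  # l/100 km style target

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, fuel, rcond=None)
pred = A @ coef

# The two error figures the abstract reports
rel_err = np.abs(pred - fuel) / np.abs(fuel)
avg_rel = rel_err.mean()
p95 = np.quantile(rel_err, 0.95)
```

On real data one would of course evaluate these figures on held-out vehicles rather than the training set.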

    Download full text (pdf)
    fulltext
  • 76.
    Eliasson, Ebba
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Long Horizon Volatility Forecasting Using GARCH-LSTM Hybrid Models: A Comparison Between Volatility Forecasting Methods on the Swedish Stock Market2023Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

Time series forecasting and volatility forecasting are particularly active research fields within financial mathematics. More recent studies extend well-established forecasting methods with machine learning. This thesis evaluates and compares the standard Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model and some of its extensions to a proposed Long Short-Term Memory (LSTM) model on historic data from five Swedish stocks. It also explores hybrid models that combine the two techniques to increase prediction accuracy over longer horizons. The results show that predictability increases when switching from univariate GARCH and LSTM models to hybrid models combining them both. Combining GARCH, Glosten, Jagannathan and Runkle GARCH (GJR-GARCH), and Fractionally Integrated GARCH (FIGARCH) yields the most accurate results with regard to mean absolute error and mean squared error. The forecasting errors decreased by 10 to 50 percent using the hybrid models. Comparing standard GARCH to the hybrid models, the biggest gains were seen at the longest horizon, while comparing the LSTM to the hybrid models, the biggest gains were seen for the shorter horizons. In conclusion, prediction ability increases using the hybrid models compared to the regular models.
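A minimal sketch of the GARCH(1,1) building block referenced above, assuming typical textbook parameter values; in practice ω, α, β are fitted by maximum likelihood (e.g. via the `arch` package), and the thesis feeds such conditional-variance output into an LSTM, which is not reproduced here.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = returns.var()            # a common initialization choice
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

def garch11_forecast(last_r, last_sigma2, omega, alpha, beta, horizon):
    """Iterated h-step variance forecasts; as h grows they approach the
    unconditional variance omega / (1 - alpha - beta)."""
    f = omega + alpha * last_r ** 2 + beta * last_sigma2
    out = [f]
    for _ in range(horizon - 1):
        f = omega + (alpha + beta) * f
        out.append(f)
    return np.array(out)
```

With α + β close to 1, long-horizon forecasts revert only slowly to the unconditional level, which is one reason long-horizon volatility forecasting is hard and why hybrid corrections can help.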

    Download full text (pdf)
    fulltext
  • 77.
    Elvander, Filip
    et al.
    Lund Univ, Div Math Stat, Lund, Sweden..
    Haasler, Isabel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Jakobsson, Andreas
    Lund Univ, Div Math Stat, Lund, Sweden..
    Karlsson, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    NON-COHERENT SENSOR FUSION VIA ENTROPY REGULARIZED OPTIMAL MASS TRANSPORT2019In: 2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), IEEE , 2019, p. 4415-4419Conference paper (Refereed)
    Abstract [en]

    This work presents a method for information fusion in source localization applications. The method utilizes the concept of optimal mass transport in order to construct estimates of the spatial spectrum using a convex barycenter formulation. We introduce an entropy regularization term to the convex objective, which allows for low-complexity iterations of the solution algorithm and thus makes the proposed method applicable also to higher-dimensional problems. We illustrate the proposed method's inherent robustness to misalignment and miscalibration of the sensor arrays using numerical examples of localization in two dimensions.
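The paper's barycenter formulation is not reproduced here, but the core computational primitive it builds on, entropy-regularized optimal mass transport solved by Sinkhorn's matrix-scaling iterations, can be sketched in a few lines. The grid, cost and regularization values below are illustrative; this low-complexity kernel iteration is what makes the approach scale to higher-dimensional problems.

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.05, n_iter=2000):
    """Entropy-regularized optimal mass transport between histograms mu and nu
    with ground-cost matrix C, via Sinkhorn's matrix-scaling iterations."""
    K = np.exp(-C / eps)                   # Gibbs kernel
    u = np.ones_like(mu)
    v = np.ones_like(nu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]     # transport plan

# Two discrete "spectra" on a one-dimensional grid
x = np.linspace(0.0, 1.0, 50)
mu = np.exp(-(x - 0.3) ** 2 / 0.01); mu /= mu.sum()
nu = np.exp(-(x - 0.7) ** 2 / 0.01); nu /= nu.sum()
C = (x[:, None] - x[None, :]) ** 2         # squared-distance ground cost
P = sinkhorn(mu, nu, C)
```

The barycenter problem couples several such transport problems, one per sensor array, through a shared marginal updated inside the same scaling loop.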

  • 78.
    Engel, Eva
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Balancing Performance and Usage Cost: A Comparative Study of Language Models for Scientific Text Classification2023Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

The emergence of large language models, such as BERT and GPT-3, has revolutionized natural language processing tasks. However, the development and deployment of these models pose challenges, including concerns about computational resources and environmental impact. This study aims to compare discriminative language models for text classification based on their performance and usage cost. We evaluate the models on a hierarchical multi-label text classification task and assess their performance primarily using F1-score. Additionally, we analyze the usage cost by calculating the Floating Point Operations (FLOPs) required for inference. We compare a baseline model, which consists of a classifier chain with logistic regression models, with fine-tuned discriminative language models, including BERT with two different sequence lengths and DistilBERT, a distilled version of BERT. Results show that the DistilBERT model performs best, achieving an F1-score of 0.56 averaged over all classification layers. The baseline model and BERT with a maximal sequence length of 128 achieve F1-scores of 0.51. However, the baseline model outperforms the transformers at the most specific classification level with an F1-score of 0.33. Regarding usage cost, the baseline model requires significantly fewer FLOPs than the transformers. Furthermore, restricting BERT to a maximum sequence length of 128 tokens instead of 512 sacrifices some performance but offers substantial gains in usage cost. The code and dataset are available on GitHub.
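Back-of-the-envelope inference-FLOP estimates of the kind described above can be sketched as follows. Counting conventions vary (here one multiply-accumulate counts as two FLOPs, and embeddings, softmax and layer norms are ignored); the sizes are BERT-base-like defaults and tf-idf-style baseline dimensions chosen for illustration, not the study's exact models.

```python
# Rough inference-FLOP estimates; one multiply-accumulate is counted as
# 2 FLOPs, and embeddings, softmax and layer norms are ignored.

def linear_flops(d_in, d_out):
    return 2 * d_in * d_out

def logreg_flops(n_features, n_classes):
    # One dense layer: a classifier-chain baseline is roughly this per label set
    return linear_flops(n_features, n_classes)

def bert_like_flops(seq_len, d_model=768, n_layers=12, d_ff=3072):
    per_layer = (
        4 * seq_len * linear_flops(d_model, d_model)   # Q, K, V, output projections
        + 2 * 2 * seq_len * seq_len * d_model          # QK^T and attention-weighted sum
        + 2 * seq_len * linear_flops(d_model, d_ff)    # the two FFN projections
    )
    return n_layers * per_layer
```

Halving the sequence length cuts the linear-layer term proportionally and the attention term quadratically, which matches the abstract's observation that shorter sequences trade a little accuracy for a large cost saving.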

    Download full text (pdf)
    fulltext
  • 79.
    Englund, Olle
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    An approximate version of the Marchenko–Pastur equation2023Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

The Marchenko–Pastur equation is a self-consistent equation that describes the Stieltjes transform of the limiting spectral distribution of certain covariance-type random matrices. In this project, we use methods including basic resolvent identities and large deviation estimates to prove an approximate version of the Marchenko–Pastur equation. Furthermore, we investigate existence and uniqueness properties of the exact equation, and show that the solution to the exact equation is close to the solution to the approximate equation.

  • 80.
    Engvall Birr, Madeleine
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Lansryd, Lisette
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Periodical Maintenance Modelling and Optimisation Assuming Imperfect Preventive Maintenance and Perfect Corrective Maintenance2021Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

In this paper, a periodic maintenance model is formulated assuming continuous monitoring, imperfect preventive maintenance (PM) and perfect corrective maintenance (CM) using three decision variables, (I, N, Z). The model is derived in an infinite horizon context where the mean cost per unit time is modelled. PM actions are performed N − 1 times at time instants iT for i = 1, ..., N − 1, where T = ∆T · I and ∆T is a fixed positive number representing the minimum time allowed between PM actions, and I is a time interval multiple representing the decision of how often PM actions should be performed. The N:th maintenance activity is either a planned replacement (if Z = 0) or a corrective replacement from letting the component run to failure (if Z = 1). Imperfect PM is modelled using age reductions, either using a constant r or a factor γ. Previous research on assumptions of these types has been limited, as the assumptions yield models of high complexity which are not analytically tractable. However, assumptions of this type are considered more realistic than other more thoroughly researched assumptions, using e.g. minimal CM. Therefore, two complementary optimisation methods are proposed and evaluated, namely complete enumeration and a specially derived genetic algorithm, which can be used for different problem sizes respectively. Carefully determined solution bounds enabled complete enumeration to be applicable for many input parameter values, which is a great strength of the proposed model.

    Download full text (pdf)
    fulltext
  • 81. Enqvist, Per
    Spectrum estimation by interpolation of covariances and cepstrum parameters in an exponential class of spectral densities2006In: PROCEEDINGS OF THE 45TH IEEE CONFERENCE ON DECISION AND CONTROL, VOLS 1-14, 2006, p. 799-804Conference paper (Refereed)
    Abstract [en]

Given output data of a stationary stochastic process, estimates of the covariances and cepstrum parameters can be obtained. Methods of moments have been applied to these parameters for designing ARMA processes, and it has been shown that these two sets of parameters in fact form local coordinates for the set of ARMA processes, but that some combinations of cepstrum parameters and covariances cannot be matched exactly within this class of processes. Therefore, another class of processes is considered in this paper in order to be able to match any combination of covariances and cepstrum parameters. The main result is that a process with spectral density of the form $\phi(z) = \frac{\exp\left\{\sum_{k=0}^{m} p_k (z^k + z^{-k})\right\}}{\sum_{k=0}^{n} q_k (z^k + z^{-k})/2}$ can always match given covariances and cepstrum parameters. This is proven using a fixed-point argument, and a non-linear least-squares problem is proposed for determining a solution.

  • 82.
    Eriksson, John
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Holmberg, Jacob
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Portfolio Risk Modelling in Venture Debt2023Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

This thesis project is an experimental study on how to approach quantitative portfolio credit risk modelling in Venture Debt portfolios. Facing a lack of applicable default data from ArK and publicly available sets, as well as seeking to capture companies that fail to service debt obligations before defaulting per se, we present an approach to risk modelling based on trends in revenue. The main framework revolves around driving a Monte Carlo simulation with copulas to predict future revenue scenarios across a portfolio of early-stage technology companies. Three models, a random Gaussian walk, a Linear Dynamic System and an Autoregressive Integrated Moving Average (ARIMA) time series, are implemented and evaluated in terms of their portfolio Value-at-Risk influence. The model performance confirms that modelling portfolio risk in Venture Debt is challenging, especially due to the lack of sufficient data and thus a heavy reliance on assumptions. However, the empirical results for Value-at-Risk and Expected Shortfall are in line with expectations. The evaluated portfolio is still in an early stage, with a majority of assets not yet in their repayment period, and consequently the spread of potential losses within one year is very tight. It should further be recognized that the scope in terms of explanatory variables for sales and model complexities has been narrowed and simplified for computational benefits, transparency and communicability. The main conclusion drawn is that alternative approaches to modelling Venture Debt risk are fully possible, and should improve in reliability and accuracy with more data feeding the model. For future research it is recommended to incorporate macroeconomic variables as well as similar-company analysis to better capture macro, funding and sector conditions. Furthermore, it is suggested to extend the set of financial and operational explanatory variables for sales through machine learning or neural networks.
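A minimal sketch of the copula-driven simulation step described above: Gaussian-copula uniforms pushed through an arbitrary marginal. The correlation and marginal parameters are illustrative stand-ins, not calibrated values from the thesis.

```python
import statistics
import numpy as np

def gaussian_copula_sample(corr, n, rng):
    """Draw n samples of dependent U(0,1) variables via a Gaussian copula."""
    L = np.linalg.cholesky(corr)               # corr must be positive definite
    z = rng.standard_normal((n, corr.shape[0])) @ L.T
    cdf = np.vectorize(statistics.NormalDist().cdf)
    return cdf(z)

rng = np.random.default_rng(7)
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])
u = gaussian_copula_sample(corr, 20_000, rng)

# Push the uniforms through any marginal's inverse CDF, e.g. lognormal
# revenue-growth factors for two companies (parameters are illustrative)
inv = np.vectorize(statistics.NormalDist().inv_cdf)
revenue_factor = np.exp(0.05 + 0.30 * inv(u))
```

The copula separates the dependence structure from the marginals, so each company's revenue model (random walk, linear dynamic system, ARIMA) can be swapped in without changing the dependence machinery.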

    Download full text (pdf)
    fulltext
  • 83.
    Erlandsson, Adam
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Spectral sequences for composite functors2022Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Spectral sequences were developed during the mid-twentieth century as a way of computing (co)homology, and have wide uses in both algebraic topology and algebraic geometry. 

Grothendieck introduced in his Tôhoku paper the Grothendieck spectral sequence, which, given left exact functors $F$ and $G$ between abelian categories, uses the right-derived functors of $F$ and $G$ as initial data and converges to the right-derived functors of the composition $G\circ F.$

    This thesis focuses on instead constructing a spectral sequence that uses the derived functors of $G$ and $G\circ F$ as initial data and converges to the derived functors of $F.$ Our approach takes inspiration from the construction of the Eilenberg-Moore spectral sequence, which given a fibration of topological spaces can calculate the singular cohomology of the fiber from the singular cohomology of the base space and total space. The Eilenberg-Moore spectral sequence can be constructed through the use of differential graded algebras and their bar construction, since this defines a double complex for which the column-wise filtration of the corresponding total complex induces the spectral sequence.

    The correct analogue of this with respect to composite functors is the bar construction for monads. Specifically, we let $G$ have an exact left adjoint $H$, which makes $G\circ H$ into a monad. Then, we extend our adjunction so that the derived functor $RG$ has left adjoint $RH$ in the corresponding derived categories, making $RG\circ RH$ into a monad. This allows us to apply the bar construction in the derived category, but we show that there emerge issues in obtaining a double complex and subsequent total complex from this construction. 

    Additionally, we present the essential theory of spectral sequences in general, and of the Serre, Eilenberg-Moore and Grothendieck spectral sequences in particular.

    Download full text (pdf)
    fulltext
  • 84.
    Erlandsson, Kasper
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Exploring the Connection Between Mortality Intensity and Chosen Withdrawal Time of Occupational Pension2023Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

This study aims to investigate whether taking individuals' chosen time of withdrawing occupational pension into account can be used to better predict future mortality intensities, which describe the probability of death in a given time interval. The main underlying hypothesis for why there would be a difference lies in the fact that some individuals could receive more pension benefits by choosing a withdrawal time based on their own expectation of remaining lifespan. In particular, individuals who do not expect to live long after retirement could receive more benefits before death by choosing to withdraw their money as fast as possible.

To explore whether such a relationship exists and is significant, data provided by Alecta is analyzed. This data includes, among other things, the chosen withdrawal time and recorded deaths. Based on this dataset, three models are created, each with its own advantages and disadvantages.

The first model is a purely empirical model which simply shows the processed data after calculations have been made, presenting the exact data in a more comprehensible form. The second model is a Makeham model, which assumes that the mortality intensity is distributed according to the Makeham formula in the studied age range. This model allows random noise to be smoothed out and allows predictions of the mortality intensity at ages not studied. The third model is a logistic regression model. This model considers two cases, death or no death, and based on observed data it predicts the probability that an individual will die given an age and withdrawal time. The logistic regression model is a robust model which allows stable predictions even where data is scarce.

From the results of the three models it was concluded that for men there is a clear difference in mortality intensity between limited withdrawal times, where shorter withdrawal times have higher mortality intensity and longer withdrawal times have lower mortality intensity. For women there were signs that the same is also true; however, the evidence for this was weak.

    The difference in mortality intensity was more pronounced in the first years after retirement and in particular longer withdrawal times had a very low mortality intensity. This was seen for both men and women. 

When the mortality intensity was weighted based on the size of individuals' benefits, the differences were much more pronounced for both men and women, indicating that those with larger benefits to a greater extent chose their withdrawal time based on their own expectation of remaining lifespan.

Almost all results agreed with what would be expected based on the underlying hypothesis, with one exception: lifelong withdrawal. For lifelong withdrawal, both men and women had a significantly higher mortality intensity than would be expected if individuals aim to maximize their received benefits. This was likely explained by lifelong withdrawal serving as a default or safe option for many individuals, which would lead to the option being dominated by these individuals rather than by those aiming to maximize their received benefits.

    Download full text (pdf)
    fulltext
  • 85.
    Ernstsson, Hampus
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Börjes Liljesvan, Max
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    The Black-Litterman Asset Allocation Model - An Empirical Analysis of Its Practical Use2021Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

Modern portfolio theory has the attractive characteristic of promoting diversification in a portfolio and can be seen as an easy alternative for setting optimal weights for portfolio managers. Furthermore, as portfolio managers try to beat a defined benchmark for their portfolio, the Black-Litterman model allows them to include their own prospects on the future return of markets and securities. This thesis examines how the practical use of the Black-Litterman model can affect portfolios' performance. The analysis was done by calculating historical portfolio weights with investor views, without investor views, and with perfect investor views in the Black-Litterman model, and thereafter calculating historical return and volatility for six multi-asset portfolios between 2017-09-25 and 2021-01-31. This was then compared with benchmark portfolios, which reflect the practical use. These portfolios included historically used investor views and constraints in the mean-variance optimization. The results showed that investor views had a negative effect on total return (lower return) and a positive effect on volatility (lower risk), and overall an increased Sharpe ratio. The constraints in the mean-variance optimization used in the benchmark portfolios resulted in a lower total return. In conclusion, the Black-Litterman model showed robustness, did not generate unintuitive or unreasonable portfolios, and has great potential with increasing accuracy in the investor views.
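For reference, the Black-Litterman posterior expected returns can be sketched from the standard master formula; all numbers below form a toy three-asset illustration, not the thesis's portfolios or views.

```python
import numpy as np

def black_litterman_posterior(Sigma, w_mkt, P, Q, Omega, delta=2.5, tau=0.05):
    """Posterior expected returns from the Black-Litterman 'master formula'.
    Pi = delta * Sigma @ w_mkt are the equilibrium (implied) returns."""
    Pi = delta * Sigma @ w_mkt
    A = np.linalg.inv(tau * Sigma)             # precision of the equilibrium prior
    Oi = np.linalg.inv(Omega)                  # precision of the views
    return np.linalg.solve(A + P.T @ Oi @ P, A @ Pi + P.T @ Oi @ Q)

# Toy three-asset example; every number here is illustrative
Sigma = np.array([[0.040, 0.010, 0.005],
                  [0.010, 0.030, 0.008],
                  [0.005, 0.008, 0.020]])
w_mkt = np.array([0.5, 0.3, 0.2])
P = np.array([[1.0, -1.0, 0.0]])   # one view: asset 1 outperforms asset 2...
Q = np.array([0.02])               # ...by 2 percent
Omega = np.array([[0.0005]])       # view uncertainty
mu_bl = black_litterman_posterior(Sigma, w_mkt, P, Q, Omega)
```

The posterior mean then replaces the historical mean in a mean-variance optimization; the smaller Omega is, the harder the views tilt the result.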

    Download full text (pdf)
    fulltext
  • 86.
    Espahbodi, Kamyar
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Roumi, Roumi
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Allocation of Alternative Investments in Portfolio Management.: A Quantitative Study Considering Investors' Liquidity Preferences2021Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

Despite the fact that illiquid assets pose several difficulties regarding portfolio allocation problems for investors, more investors are increasing their allocation towards them. Alternative assets are characterized as being harder to value and trade because of their illiquidity, which raises the question of how they should be managed from an allocation optimization perspective. In an attempt to demystify the illiquidity conundrum, shadow allocations are attached to the classical mean-variance framework to account for liquidity activities. The framework is further improved by replacing the variance with the coherent risk measure conditional value at risk (CVaR). This framework is then used first to stress test and optimize a theoretical portfolio, and then to analyze real-world data in a case study. The investors' liquidity preferences are based on common institutional investors such as Foundations & Charities, Pension Funds, and Unions. The theoretical results support previous findings of the shadow allocations framework and decrease the allocation towards illiquid assets, while the results of the case study do not support the shadow allocations framework.
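A minimal empirical sketch of the risk measure involved: VaR and CVaR (expected shortfall) estimated from simulated losses. The thesis embeds CVaR in an optimization, in the Rockafellar-Uryasev spirit; only the measure itself is shown here, on stand-in Gaussian losses.

```python
import numpy as np

def var_cvar(losses, alpha=0.95):
    """Empirical Value-at-Risk and Conditional Value-at-Risk (expected
    shortfall): CVaR is the average loss in the worst (1 - alpha) tail."""
    losses = np.sort(np.asarray(losses))
    k = int(np.ceil(alpha * len(losses)))      # index where the tail starts
    return losses[k - 1], losses[k - 1:].mean()

rng = np.random.default_rng(3)
losses = rng.standard_normal(100_000)          # stand-in for simulated portfolio losses
var95, cvar95 = var_cvar(losses)
```

Because CVaR averages over the whole tail, it is coherent (in particular subadditive), which is the property that makes it preferable to variance or plain VaR in the optimization.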

    Download full text (pdf)
    fulltext
  • 87.
    Essinger, Hugo
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Kivelä, Alexander
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Object Based Image Retrieval Using Feature Maps of a YOLOv5 Network2022Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

As Machine Learning (ML) methods have gained traction in recent years, some problems regarding the construction of such methods have arisen. One such problem is the collection and labeling of data sets. Specifically when it comes to many applications of Computer Vision (CV), one needs a set of images, labeled as either being of some class or not. Creating such data sets can be very time consuming. This project sets out to tackle this problem by constructing an end-to-end system for searching for objects in images (i.e. an Object Based Image Retrieval (OBIR) method) using an object detection framework (You Only Look Once (YOLO) [16]). The goal of the project was to create a method that, given an image of an object of interest q, searches for that same or similar objects in a set of other images S. The core concept of the idea is to pass the image q through an object detection model (in this case YOLOv5 [16]), create a "fingerprint" (which can be seen as a sort of identity for an object) from a set of feature maps extracted from the YOLOv5 [16] model, and look for corresponding similar parts of a set of feature maps extracted from other images. An investigation regarding which values to select for a few different parameters was conducted, including a comparison of performance for a couple of different similarity metrics. In the table below, the parameter combination which resulted in the highest F_Top_300-score (a measure indicating the number of relevant images retrieved among the top 300 recommended images) in the parameter selection phase is presented.

Layer: 23; Pooling method: max; Similarity metric: Euclidean; Fingerprint kernel size: 4

    Evaluation of the method resulted in F_Top_300-scores as can be seen in the table below.

Mouse: 0.820; Duck: 0.640; Coin: 0.770; Jet ski: 0.443; Handgun: 0.807; Average: 0.696
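The fingerprint idea can be sketched roughly as follows: max-pool a (channels, height, width) feature map down to a small fingerprint and compare fingerprints by Euclidean distance. Real YOLOv5 feature maps are replaced by random arrays here, and the pooling and matching details are simplified relative to the thesis.

```python
import numpy as np

def fingerprint(fmap, k=4):
    """Max-pool a (C, H, W) feature map into a (C, k, k) fingerprint."""
    C, H, W = fmap.shape
    hs, ws = H // k, W // k
    return fmap[:, :hs * k, :ws * k].reshape(C, k, hs, k, ws).max(axis=(2, 4))

rng = np.random.default_rng(0)
query = rng.standard_normal((8, 20, 20))                 # stand-in feature map
same_object = query + 0.05 * rng.standard_normal((8, 20, 20))
other_image = rng.standard_normal((8, 20, 20))

fq = fingerprint(query)
d_same = np.linalg.norm(fingerprint(same_object) - fq)   # Euclidean similarity
d_other = np.linalg.norm(fingerprint(other_image) - fq)
```

A smaller distance means a more similar object, so ranking a gallery by this distance yields the retrieval list that the F_Top_300-score evaluates.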

    Download full text (pdf)
    fulltext
  • 88.
    Fagerlund, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A Comparative Study of Machine Learning Algorithms for Angular Position Estimation in Assembly Tools2023Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The threaded fastener is by far the most common method for securing components together and plays a significant role in determining the quality of a product. Atlas Copco offers industrial tools for tightening these fasteners, which are today suffering from errors in the applied torque. These errors have been found to behave in periodic patterns which indicate that the errors can be predicted and therefore compensated for. However, this is only possible by knowing the rotational position of the tool. Atlas Copco is interested in the possibility of acquiring this rotational position without installing sensors inside the tools.

    To address this challenge, the thesis explores the feasibility of estimating the rotational position by analysing the behaviour of the errors and finding periodicities in the data. The objective is to determine whether these periodicities can be used to accurately estimate the rotation of the torque errors of unknown data relative to errors of data where the rotational position is known. The tool analysed in this thesis exhibits a periodic pattern in the torque error with a period of 11 revolutions. 

Two methods for estimating the rotational position were evaluated: a simple nearest neighbour method that uses mean squared error (MSE) as a distance measure, and a more complex circular fully convolutional network (CFCN). The project involved data collection from a custom-built setup. However, the setup was not fully completed, and the models were therefore evaluated on a limited dataset.

The results showed that the CFCN method was not able to identify the rotational position of the signal; the insufficient size of the dataset is discussed as the likely cause. The nearest neighbour method, however, was able to estimate the rotational position correctly with 100% accuracy across 1000 iterations, even when looking at a fragment of a signal as small as 40%. Unfortunately, this method is computationally demanding and exhibits slow performance when applied to large datasets. Consequently, adjustments are required to enhance its practical applicability. In summary, the findings suggest that the nearest neighbour method is a promising approach for estimating the rotational position and could potentially contribute to improving the accuracy of the tools.
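A minimal sketch of the nearest-neighbour idea, assuming a synthetic periodic signal in place of real torque-error data: slide a noisy fragment over all circular shifts of a known reference and pick the shift with the lowest MSE.

```python
import numpy as np

def align_rotation(known, fragment):
    """Return the circular shift of `known` that minimizes the MSE against
    an observed fragment (the nearest-neighbour idea from the thesis)."""
    errors = [np.mean((np.roll(known, -s)[:len(fragment)] - fragment) ** 2)
              for s in range(len(known))]
    return int(np.argmin(errors))

# Synthetic periodic "torque error" over one full cycle of 440 samples
rng = np.random.default_rng(2)
t = np.linspace(0.0, 2 * np.pi, 440, endpoint=False)
known = np.sin(3 * t) + 0.3 * np.sin(7 * t)

true_shift = 123
frag_len = int(0.4 * len(known))            # only 40% of a cycle observed
fragment = (np.roll(known, -true_shift)[:frag_len]
            + 0.05 * rng.standard_normal(frag_len))
estimated = align_rotation(known, fragment)
```

The brute-force scan over all shifts is what makes the method slow on large datasets; FFT-based cross-correlation is one standard way to speed up the search.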

    Download full text (pdf)
    fulltext
  • 89.
    Fageräng, Lucas
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Thoursie, Hugo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
Modelling Proxy Credit Curves Using Recurrent Neural Networks2023Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

Since the global financial crisis of 2008, regulatory bodies worldwide have implemented increasingly stringent requirements for measuring and pricing default risk in financial derivatives. Counterparty Credit Risk (CCR) serves as the measure for default risk in financial derivatives, and Credit Valuation Adjustment (CVA) is the pricing method used to incorporate this default risk into derivatives prices. To calculate the CVA, one needs the risk-neutral Probability of Default (PD) for the counterparty, which is central in this type of derivative.

The traditional method for calculating risk-neutral probabilities of default involves constructing credit curves, calibrated using the credit derivative Credit Default Swap (CDS). However, liquidity issues in CDS trading present a major challenge, as the majority of counterparties lack liquid CDS spreads. This poses the difficult question of how to model risk-neutral PD without liquid CDS spreads.

The current method for generating proxy credit curves, introduced by the Japanese bank Nomura in 2013, involves a cross-sectional linear regression model. Although this model is sufficient in most cases, it often generates credit curves unsuitable for larger counterparties in more volatile times. In this thesis, we introduce two Long Short-Term Memory (LSTM) models trained on similar entities, which use CDS spreads as input. Our introduced models show some improvement in generating proxy credit curves compared to the Nomura model, especially during times of higher volatility. While the results were more in line with the traded CDS market, there remains room for improvement in the model structure by using a more extensive dataset.

  • 90.
    Faiqi, Shaida
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Index replication within Corporate Investment Grade - With implementation of Lasso regression in order to analyze the impact of key figures2021Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The fixed income market is not as well explored as other markets and has a more complex structure than the equity market. At the same time, demand for research on the fixed income market has increased, which in turn has created greater interest in studying the characteristics of holdings in the market. This work studies whether it is possible to replicate indices through requirements on credit rating, sectors, and mathematical key figures such as duration, convexity, duration times spread (DTS) and option-adjusted spread (OAS). The replication is implemented as a linear program in Python. By applying lasso regression, this study also examines whether it is possible to exceed the index return by relaxing the requirements on key figures that are not selected in the regression's variable selection. The investment company Alfred Berg has provided relevant data for this report. The data consist of information on all assets included in the index EUR Investment Grade (ER00) over the period 2017-2021. The replicated portfolio follows the index returns with small deviations, and the lasso regression selects the key figures DTS and OAS in its model. It is difficult to exceed the index return by focusing only on the key figures DTS and OAS. Analysis of other key figures and variables selected by the lasso regression could possibly yield better results, which is suggested as further work.
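    The variable selection described here relies on the lasso's soft-thresholding, which sets small coefficients exactly to zero. A minimal coordinate-descent sketch of that mechanism (illustrative only, not the thesis's implementation):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    # Coordinate-descent lasso: min_b 0.5/n * ||y - Xb||^2 + lam * ||b||_1.
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]        # partial residual excluding feature j
            rho = X[:, j] @ r / n
            z = (X[:, j] ** 2).sum() / n
            # Soft-thresholding: coefficients with |rho| < lam are set exactly to zero.
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return b
```

    With the penalty `lam` large enough, the coefficients of irrelevant key figures are driven exactly to zero, which is the selection effect exploited above.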

  • 91.
    Falgén Enqvist, Olle
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Cardinality estimation with a machine learning approach2020Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis investigates how three different machine learning models perform on cardinality estimation for SQL queries. All three models were evaluated on three different data sets. The models were tested both on estimating cardinalities when the query takes information from a single table and on a two-way join case. PostgreSQL's own cardinality estimator was used as a baseline. The evaluated models were artificial neural networks, random forests and extreme gradient boosted trees. The best-performing model was the extreme gradient boosted tree with a Tweedie regression loss function. To the author's knowledge, this is the first time an extreme gradient boosted tree has been used in this context.
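    Cardinality estimators of this kind are commonly scored with the multiplicative q-error rather than absolute error. A sketch of that metric (an assumption for illustration; the abstract does not state the thesis's exact evaluation metric):

```python
def q_error(estimated, actual):
    # q-error: multiplicative estimation error, always >= 1, and symmetric in
    # over- and under-estimation (clamped at 1 to avoid division by zero).
    e, a = max(estimated, 1.0), max(actual, 1.0)
    return max(e / a, a / e)
```

    A q-error of 2 means the estimate is off by a factor of two in either direction, which is more meaningful for query optimizers than an absolute row-count error.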

  • 92.
    Fenoaltea, Francesco
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Reliability Based Classification of Transitions in Complex Semi-Markov Models2022Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Markov processes have a long history of being used to model safety critical systems. However, with the development of autonomous vehicles and their increased complexity, Markov processes have been shown not to be sufficiently precise for reliability calculations. This has created the need to consider a more general stochastic process, namely the semi-Markov process (SMP). SMPs allow transitions with general distributions between states and can be used to model complex systems precisely. This comes at the cost of increased complexity when calculating system reliability. As such, methods to increase the interpretability of the system and to allow for appropriate approximations have been researched. In this thesis, a novel classification approach for transitions in SMPs is defined and complemented with different conjectures and properties. A transition is classified as good or bad by comparing the reliability of the original system with the reliability of any perturbed system in which the studied transition is more likely to occur. Cases are presented to illustrate the use of this classification technique. Multiple suggestions and conjectures for future work are also presented and discussed.
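    The reliability being compared in such a classification — survival of a semi-Markov model with general, non-exponential sojourn times — can be estimated by Monte Carlo. A toy sketch with invented states and distributions (entirely illustrative, not from the thesis):

```python
import random

def smp_reliability(t, n_sim=20000, seed=1):
    # Toy 3-state semi-Markov chain: ok -> degraded -> failed.
    # Sojourn times are non-exponential (uniform, then Weibull with shape 2),
    # which is exactly what an ordinary Markov model cannot represent.
    rng = random.Random(seed)
    survive = 0
    for _ in range(n_sim):
        t_fail = rng.uniform(1, 3) + rng.weibullvariate(1.0, 2.0)
        if t_fail > t:
            survive += 1
    return survive / n_sim  # estimated R(t) = P(system not failed by time t)
```

    Perturbing one sojourn distribution and re-estimating R(t) gives the kind of before/after reliability comparison the classification relies on.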

  • 93.
    Filinau, Alexej
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Optimization of the Onboard Computer2022Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    As the population grows, the need for sustainable and safe mobility solutions increases. The evolution of technology within the railway industry is crucial for a more connected world. Hence, it is important to optimize the Automatic Train Protection (ATP) system to ensure that high-speed trains remain fast and safe. To do this, a part of the real-world system is simulated on the Onboard Computer (CoHP), the main part of the ATP. Scheduling theory is then applied to create a flow shop model of the CPU, and the order in which the processes are executed is optimized. The results were that the most optimal ordering performed worse than the Default Scenario, while the least optimal ordering performed better than the Default Scenario. The Worst Scenario, where the order was least optimal, showed a decrease in load of between 1% and 20%. However, not enough testing was done in the project to draw proper conclusions. Further testing is needed to establish the effect of applying scheduling theory on the performance of the CoHP.
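    A flow shop model rests on a standard recurrence: a job can start on machine m only once that machine is free and the job has finished on machine m-1. A minimal sketch of the makespan computation and a brute-force search over orderings (illustrative only, not the thesis's CoHP model):

```python
from itertools import permutations

def flow_shop_makespan(order, proc):
    # proc[j][m] = processing time of job j on machine m.
    # Returns the completion time of the last job on the last machine.
    n_machines = len(proc[0])
    finish = [0] * n_machines  # completion time of the latest job on each machine
    for j in order:
        for m in range(n_machines):
            ready = finish[m - 1] if m > 0 else 0   # job done on previous machine
            finish[m] = max(finish[m], ready) + proc[j][m]
    return finish[-1]

def best_order(proc):
    # Exhaustive search; only feasible for small job counts.
    return min(permutations(range(len(proc))),
               key=lambda o: flow_shop_makespan(o, proc))
```

    For two jobs on two machines with times [[3, 2], [1, 4]], running the short job first reduces the makespan from 9 to 7.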

  • 94.
    Finnson, Anton
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Clinical dose feature extraction for prediction of dose mimicking parameters2021Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Treating cancer with radiotherapy requires precise planning. Several planning pipelines rely on reference dose mimicking, where one tries to find machine parameters best mimicking a given reference dose. Dose mimicking relies on having a function that quantifies dose similarity well, necessitating methods for feature extraction of dose images. In this thesis we investigate ways of extracting features from clinical dose images, and propose a few proof-of-concept dose mimicking functions using the extracted features. We extend current techniques and lay the foundation for new techniques for feature extraction, using mathematical frameworks developed in entirely different areas. In particular we give an introduction to wavelet theory, which provides signal decomposition techniques suitable for analysing local structure, and propose two different dose mimicking functions using wavelets. Furthermore, we extend ROI-based mimicking functions to use artificial ROIs, and we investigate variational autoencoders and their application to the clinical dose feature extraction problem. We conclude that the proposed functions have the potential to address certain shortcomings of current dose mimicking functions. The four methods all seem to approximately capture some notion of dose similarity. Used in combination with the current framework they have the potential of improving dose mimicking results. However, the numerical tests supporting this are brief, and more thorough numerical investigations are necessary to properly evaluate the usefulness of the new dose mimicking functions.
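    The local-structure analysis that wavelets provide is visible already in the simplest case: one level of the Haar transform splits a signal into pairwise averages (coarse structure) and pairwise differences (local detail) while preserving energy. A minimal sketch, unrelated to the thesis's actual dose data:

```python
import math

def haar_step(signal):
    # One level of the orthonormal Haar wavelet transform on an even-length signal.
    # Approximation coefficients capture coarse structure, detail coefficients
    # capture local variation; the 1/sqrt(2) scaling preserves total energy.
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail
```

    Constant regions produce zero detail coefficients, so the detail channel localizes exactly where a dose image varies — the property that makes wavelet coefficients usable as similarity features.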

  • 95.
    Franzén, Filip
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Nord, Karl Axel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Risk Evaluation in a ML-Approximated Portfolio Environment2022Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis explores and evaluates the forecasting application of the machine learning method Gradient Boosting Decision Trees. This method is used to forecast the demand of the online grocery market with a 7-day time horizon. The thesis was conducted in collaboration with the online grocery company Mathem. The model is applied and evaluated on three different periods representing the spring, summer and fall. The main evaluation metric is the mean absolute percentage error (MAPE), and clear differences were found depending on the predictability of the period. Apart from the model and its application to demand forecasting, the related risk was investigated. This was done by studying the Value-at-Risk and Expected Shortfall associated with discrepancies between the forecasted and actual values over the three periods. The most important conclusion of the case study at Mathem is that overestimation in the forecast is more costly in terms of monetary value than underestimation. It is also found that this is highly dependent on the cost structure of the company's operation and could therefore vary between companies. Thus, the study has contributed to understanding the applications of machine learning models in forecasting processes as well as the risks related to over/underestimating the demand of the online grocery market.
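    Value-at-Risk and Expected Shortfall over observed forecast errors can be estimated with a generic historical-simulation approach (an illustrative sketch, not the thesis's exact procedure):

```python
def var_es(losses, alpha=0.95):
    # Historical VaR and Expected Shortfall at confidence level alpha.
    # VaR is the empirical alpha-quantile of the losses; ES is the average
    # loss at or beyond that quantile (the tail mean).
    s = sorted(losses)
    k = min(int(alpha * len(s)), len(s) - 1)
    var = s[k]
    tail = s[k:]
    return var, sum(tail) / len(tail)
```

    Applying this separately to over- and under-forecast errors (converted to monetary losses) is one way to make the asymmetric cost conclusion above quantitative.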

  • 96.
    Fredriksson, Albin
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. RaySearch Labs, Stockholm, Sweden..
    Hårdemark, Björn
    RaySearch Labs, Stockholm, Sweden..
    ROBUST OPTIMIZATION ACCOUNTING FOR ORGAN MOTION, RANGE ERRORS, AND SETUP ERRORS IN IMPT2011In: Radiotherapy and Oncology, ISSN 0167-8140, E-ISSN 1879-0887, Vol. 99, p. S100-S100Article in journal (Other academic)
  • 97.
    Freiberg, Tristan
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Kurlberg, Pär
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    On the Average Exponent of Elliptic Curves Modulo p2014In: International mathematics research notices, ISSN 1073-7928, E-ISSN 1687-0247, Vol. 2014, no 8, p. 2265-2293Article in journal (Refereed)
    Abstract [en]

    Given an elliptic curve E defined over ℚ and a prime p of good reduction, let Ē(F_p) denote the group of F_p-points of the reduction of E modulo p, and let e(p) denote the exponent of this group. Assuming a certain form of the generalized Riemann hypothesis (GRH), we study the average of e(p) as p ranges over primes of good reduction, and find that the average exponent essentially equals p · c(E), where the constant c(E) > 0 depends on E. For E without complex multiplication (CM), c(E) can be written as a rational number (depending on E) times a universal constant, the product being over all primes q. Without assuming GRH, we can determine the average exponent when E has CM, as well as give an upper bound on the average in the non-CM case.
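    The group Ē(F_p) can be examined explicitly for small primes. A sketch that counts its order via the Legendre symbol (the exponent e(p) studied in the paper divides this order); the curve coefficients in the usage below are arbitrary examples, not curves from the paper:

```python
def ec_point_count(a, b, p):
    # Number of points on y^2 = x^3 + a*x + b over F_p, including the point at
    # infinity: #E(F_p) = p + 1 + sum_x chi(x^3 + a*x + b), where chi is the
    # Legendre symbol mod p (computed via Euler's criterion).
    def legendre(n):
        n %= p
        if n == 0:
            return 0
        return 1 if pow(n, (p - 1) // 2, p) == 1 else -1
    return p + 1 + sum(legendre(x**3 + a * x + b) for x in range(p))
```

    By the Hasse bound, the count always satisfies |#E(F_p) - (p + 1)| ≤ 2√p, a quick sanity check on the implementation.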

  • 98.
    Friberg Femling, Christin
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Development of Acoustic Simulation Methods for Exhaust Systems2021Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Noise pollution is a growing concern due to its harmfulness to human health. Heavy vehicles powered by internal combustion engines account for a major part of environmental noise, which is why noise reduction is an increasing priority in engine development. Within this study, an optimization problem is posed in order to minimize acoustic output without impairing the engine's overall performance.

    In our quest to diversify our noise reduction strategies, innovative ways of investigating this complex subject are essential. Here, we use simulations to investigate the possibility of reducing noise through component settings, as well as the methods available to achieve that.

    Regarding the methods, the results indicate that a built-in optimization tool within the simulation software works well, despite the high complexity of the problem. A significant noise reduction is achieved when adjusting the settings of two of the parameters studied.

    This is a first attempt to tackle noise reduction in internal combustion engines through component settings. Given the promising results, further improvements are expected as the simulation methods are refined and more components can be investigated accurately.

  • 99.
    Frid Dalarsson, Mariana
    et al.
    KTH, School of Electrical Engineering (EES), Electromagnetic Engineering.
    Norgren, Martin
    KTH, School of Electrical Engineering (EES), Electromagnetic Engineering.
    Two-dimensional boundary shape reconstructions in rectangular and coaxial waveguidesManuscript (preprint) (Other academic)
  • 100.
    Frimodig, Sara
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Radiation Therapy Patient Scheduling: An Operations Research Approach2023Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The manual scheduling of patients for radiation therapy is difficult and labor-intensive. With the increase in cancer patient numbers, efficient resource planning is an important tool to achieve short waiting times and equal right to care. This thesis studies an operations research approach to the radiation therapy scheduling problem. The four appended papers each provide incremental steps towards a clinical implementation of an automated scheduling algorithm.

    In Paper A, three models for the radiation therapy scheduling problem in a simplified clinical setup are proposed. It is shown that the two constraint programming models find feasible solutions more quickly, while the integer programming model proves optimality faster. However, none of the models can solve large problem instances in sufficient time. In Paper B a collaboration with a large cancer center with ten linear accelerators is initiated. The previous models are refined and adapted to a more realistic clinical setup. Moreover, a column generation approach is introduced. The models are compared using different objective function combinations designed to mimic the scheduling objectives at different cancer centers. The column generation approach outperforms the other methods on all problem instances, regardless of what objective is optimized. In Paper C the column generation approach is further developed to include additional medical and technical constraints. Different methods to ensure that there are available resources for high priority patients at arrival are compared. Finally, in Paper D the potential for clinical implementation of the column generation approach is evaluated. The schedules generated by the column generation model are clinically validated. Compared to manually constructed, historical schedules for a time period of one year, the automatically generated schedules are shown to decrease the average patient waiting time by 80%, improve the consistency in treatment times between appointments by 80%, and increase the number of treatments scheduled on the machine best suited for the treatment by more than 90%, without loss of performance in other quality metrics.

    Since constraints are similar across radiotherapy centers and multiple objective functions are presented, the column generation approach can be used generally for automated patient scheduling in radiation therapy. This would allow radiotherapy centers to save time in the scheduling process and improve the quality of their schedules.
