kth.se Publications
101 - 150 of 356
  • 101.
    Frimodig, Sara
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. RaySearch Laboratories, Stockholm, Sweden.
    Enqvist, Per
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Carlsson, Mats
    Department of Computer Science, RISE Research Institutes of Sweden.
    Mercier, Carole
    Department of Radiation Oncology, Iridium Netwerk, Antwerp, Belgium.
    Comparing Optimization Methods for Radiation Therapy Patient Scheduling using Different Objectives. Manuscript (preprint) (Other academic)
    Abstract [en]

    Radiation therapy (RT) is a medical treatment to kill cancer cells or shrink tumors. To manually schedule patients for RT is a time-consuming and challenging task. By the use of optimization, patient schedules for RT can be created automatically. This paper presents a study of different optimization methods for modeling and solving the RT patient scheduling problem, which can be used as decision support when implementing an automatic scheduling algorithm in practice. We introduce an Integer Programming (IP) model, a column generation IP model (CG-IP), and a Constraint Programming model. Patients are scheduled on multiple machine types considering their priority for treatment, session duration and allowed machines. Expected future patient arrivals are included in the models as placeholder patients. Since different cancer centers can have different scheduling objectives, the models are compared using multiple objective functions, including minimizing waiting times, and maximizing the fulfillment of patients’ preferences for treatment times. The test data is generated from historical data from Iridium Netwerk, Belgium’s largest cancer center with 10 linear accelerators. The results demonstrate that the CG-IP model can solve all the different problem instances to a mean optimality gap of less than 1% within one hour. The proposed methodology provides a tool for automated scheduling of RT treatments and can be generally applied to RT centers.

  • 102.
    Frimodig, Sara
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Enqvist, Per
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Carlsson, Mats
    Department of Computer Science, RISE Research Institutes of Sweden, Gothenburg, Sweden.
    Mercier, Carole
    Department of Radiation Oncology, Iridium Netwerk, Antwerp, Belgium.
    Comparing Optimization Methods for Radiation Therapy Patient Scheduling using Different Objectives. 2023. In: Operations Research Forum, E-ISSN 2662-2556, Vol. 4, no 4, article id 83. Article in journal (Refereed)
    Abstract [en]

    Radiation therapy (RT) is a medical treatment to kill cancer cells or shrink tumors. To manually schedule patients for RT is a time-consuming and challenging task. By the use of optimization, patient schedules for RT can be created automatically. This paper presents a study of different optimization methods for modeling and solving the RT patient scheduling problem, which can be used as decision support when implementing an automatic scheduling algorithm in practice. We introduce an Integer Programming (IP) model, a column generation IP model (CG-IP), and a Constraint Programming model. Patients are scheduled on multiple machine types considering their priority for treatment, session duration and allowed machines. Expected future arrivals of urgent patients are included in the models as placeholder patients. Since different cancer centers can have different scheduling objectives, the models are compared using multiple objective functions, including minimizing waiting times, and maximizing the fulfillment of patients’ preferences for treatment times. The test data is generated from historical data from Iridium Netwerk, Belgium’s largest cancer center with 10 linear accelerators. The results demonstrate that the CG-IP model can solve all the different problem instances to a mean optimality gap of less than 1 % within one hour. The proposed methodology provides a tool for automated scheduling of RT treatments and can be generally applied to RT centers.
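
    As a concrete illustration of the model class (not the paper's actual formulation), the toy integer program below schedules a handful of patients over a short planning horizon with the open-source PuLP solver; the patients, priorities, and capacity rule are invented placeholders, whereas the real models also handle machine types, session durations, and placeholder patients for future arrivals.

        from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

        patients = ["p1", "p2", "p3"]                          # toy patients
        days = list(range(1, 6))                               # short planning horizon
        priority = {"p1": 3, "p2": 1, "p3": 2}                 # hypothetical urgency weights
        allowed = {"p1": days, "p2": [2, 3, 4, 5], "p3": days} # allowed start days

        prob = LpProblem("rt_scheduling_toy", LpMinimize)
        x = {(p, d): LpVariable(f"x_{p}_{d}", cat=LpBinary) for p in patients for d in days}

        for p in patients:
            # every patient gets exactly one start day, chosen among the allowed days
            prob += lpSum(x[p, d] for d in allowed[p]) == 1
            disallowed = [d for d in days if d not in allowed[p]]
            if disallowed:
                prob += lpSum(x[p, d] for d in disallowed) == 0

        for d in days:
            # at most one start per day, a crude stand-in for machine capacity
            prob += lpSum(x[p, d] for p in patients) <= 1

        # minimize priority-weighted waiting time (start day used as a proxy for waiting)
        prob += lpSum(priority[p] * d * x[p, d] for p in patients for d in days)

        prob.solve()
        print({p: next(d for d in days if value(x[p, d]) > 0.5) for p in patients})

    In a column generation variant like CG-IP, candidate per-patient schedules would typically be generated as columns and priced against a restricted master problem rather than enumerated explicitly as above.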

  • 103.
    Frimodig, Sara
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Enqvist, Per
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Kronqvist, Jan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    A Column Generation Approach for Radiation Therapy Patient Scheduling with Planned Machine Unavailability and Uncertain Future Arrivals. Manuscript (preprint) (Other academic)
    Abstract [en]

    The number of cancer cases per year is rapidly increasing worldwide. In radiation therapy (RT), radiation from linear accelerators is used to kill malignant tumor cells. Scheduling patients for RT is difficult both due to the numerous medical and technical constraints, and because of the stochastic inflow of patients with different urgency levels. In this paper, a Column Generation (CG) approach is proposed for the RT patient scheduling problem. The model includes all the constraints necessary for the generated schedules to work in practice, including for example different machine compatibilities, individualized patient protocols, and multiple hospital sites. The model is the first to include planned interruptions in treatments due to maintenance on machines, which is an important aspect when scheduling patients in practice, as it can create bottlenecks in the patient flow. Different methods to ensure that there are available resources for high priority patients at arrival are compared, including static and dynamic time reservation. Data from Iridium Netwerk, the largest cancer center in Belgium, is used to evaluate the CG approach. The results show that the dynamic time reservation method outperforms the other methods used to handle uncertainty in future urgent patients. A sensitivity analysis also shows that the dynamic time reservation method is robust to fluctuations in arrival rates. The CG approach produces schedules that fulfill all the medical and technical constraints posed at Iridium Netwerk with acceptable computation times.

  • 104.
    Frimodig, Sara
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Mercier, Carole
    Department of Radiation Oncology, Iridium Netwerk, Antwerp, Belgium.
    De Kerf, Geert
    Department of Radiation Oncology, Iridium Netwerk, Antwerp, Belgium.
    Automated Radiation Therapy Patient Scheduling: A Case Study at a Belgian Hospital. Manuscript (preprint) (Other academic)
    Abstract [en]

    The predicted increase in the number of patients receiving radiation therapy (RT) to treat cancer calls for an optimized use of resources. To manually schedule patients on the linear accelerators delivering RT is a time-consuming and challenging task. Operations research (OR), a discipline in applied mathematics, uses a variety of analytical methods to improve decision-making. In this paper, we study the implementation of an OR method that automatically generates RT patient schedules at an RT center with ten linear accelerators. The OR method is designed to produce schedules that mimic the objectives used in the clinical scheduling while following the medical and technical constraints. The resulting schedules are clinically validated and compared to manually constructed, historical schedules for a time period of one year. It is shown that the use of OR to generate schedules decreases the average patient waiting time by 80%, improves the consistency in treatment times between appointments by 80%, and increases the number of treatments scheduled on the machine best suited for the treatment by more than 90% compared to the manually constructed clinical schedules, without loss of performance in other quality metrics. Furthermore, automatically creating patient schedules can save the clinic many hours of administrative work every week.

  • 105.
    Gan, William
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Modelling of Capital Requirements using LSTM and A-SA in CRR 3. 2022. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In response to the Great Financial Crisis of 2008, a handful of measures were taken to increase the resilience toward a similar disaster in the future. Global financial regulatory entities implemented several new directives with the intention to enhance global capital markets, leading to regulatory frameworks where financial participants (FPs) are regulated with own funds requirements for market risks. This thesis intends to investigate two different methods presented in the framework Capital Requirements Regulation 3 (CRR 3), a framework stemming from the Basel Committee and implemented in EU legislation for determining the capital requirements for an FP. The first method, the Alternative Standardised Approach (A-SA), looks at categorical data, whereas the second method, the Alternative Internal Model Approach (A-IMA), uses the risk measure Expected Shortfall (ES) for determining the capital requirement and therefore requires the FP to estimate ES using a proprietary/internal model based on time series data. The proprietary model in this thesis uses a recurrent neural network (RNN) with several long short-term memory (LSTM) layers to predict the next day's ES using the previous 20 days' returns. The data consisted of categorical and time series data of a portfolio with the Nasdaq 100 companies as positions. This thesis concludes that A-IMA, with an LSTM network as the proprietary model, gives a lower capital requirement compared to A-SA but is less reliable in real-life applications due to its behaviour as a "black box" and is, thus, less compliant from a regulatory standpoint. The LSTM model showed promising results for capturing the overall trend in the data, for example periods with high volatility, but underestimated the true ES.

    Download full text (pdf)
    fulltext
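
    Expected Shortfall, the risk measure at the core of A-IMA, can be estimated empirically as the mean loss beyond the Value-at-Risk quantile. A minimal numpy illustration of that estimate follows; this is not the thesis's LSTM model, and the simulated returns and the 97.5% level are placeholder assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        returns = rng.standard_t(df=5, size=10_000) * 0.01   # toy heavy-tailed daily returns
        losses = -returns
        alpha = 0.975                                         # assumed ES confidence level

        var = np.quantile(losses, alpha)                      # Value-at-Risk at level alpha
        es = losses[losses >= var].mean()                     # Expected Shortfall: mean loss beyond VaR
        print(f"VaR = {var:.4f}, ES = {es:.4f}")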
  • 106. Geyer, Anna
    et al.
    Quirchmayr, Ronald
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Shallow water equations for equatorial tsunami waves. 2018. In: Philosophical Transactions. Series A: Mathematical, physical, and engineering science, ISSN 1364-503X, E-ISSN 1471-2962, Vol. 376, no 2111, article id 20170100. Article in journal (Refereed)
    Abstract [en]

    We present derivations of shallow water model equations of Korteweg-de Vries and Boussinesq type for equatorial tsunami waves in the f-plane approximation and discuss their applicability. This article is part of the theme issue 'Nonlinear water waves'.

  • 107. Glav, R
    The null-field approach to dissipative silencers of arbitrary cross-section. 1996. In: Journal of Sound and Vibration, ISSN 0022-460X, E-ISSN 1095-8568, Vol. 189, no 4, p. 489-509. Article in journal (Refereed)
  • 108.
    Golshani, Kevin
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Ekberg, Elias
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Detection and Classification of Sparse Traffic Noise Events. 2023. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Noise pollution is a major health hazard for people living in urban areas, and its effects on humans are a growing field of research. One of the major contributors to urban noise pollution is the noise generated by traffic. Noise simulations can be made in order to build noise maps used for noise management action plans, but in order to test their accuracy real measurements need to be made, in this case in the form of noise measurements taken adjacent to a road. The aim of this project is to test machine learning-based methods in order to develop a robust way of detecting and classifying vehicle noise in sparse traffic conditions. The primary focus is to detect traffic noise events, and the secondary focus is to classify what kind of vehicle is producing the noise.

    The data used in this project comes from sensors installed on a testbed at a street in southern Stockholm. The sensors include a microphone that continuously measures the local noise environment, a radar that detects each time a vehicle passes by, and a camera that also detects a vehicle by capturing its license plate. Only sparse traffic noise is considered in this thesis; as such, the audio recordings used are those where the radar has detected only one vehicle in a 40-second window. This makes the gathered data weakly labeled.

    The resulting detection method is a two-step process: First, the unsupervised learning method k-means is implemented for the generation of strong labels. Second, the supervised learning method random forest or support vector machine uses the strong labels in order to classify audio features. The detection system of sparse traffic noise achieved satisfactory results. However, the unsupervised vehicle classification method produced inadequate results and the clustering could not differentiate different vehicle classes based on the noise data.

    Download full text (pdf)
    fulltext
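
    A minimal scikit-learn sketch of the two-step idea described above, clustering feature vectors to obtain strong labels and then training a supervised classifier on them; the synthetic features stand in for the thesis's audio features and this is not its actual pipeline.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        # toy stand-in for per-frame audio features, shape (n_frames, n_features)
        X = np.vstack([rng.normal(0, 1, (500, 8)), rng.normal(3, 1, (500, 8))])

        # step 1: unsupervised labelling of frames (e.g. vehicle noise vs background)
        strong_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

        # step 2: supervised classifier trained on the generated strong labels
        X_tr, X_te, y_tr, y_te = train_test_split(X, strong_labels, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))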
  • 109.
    Granström, Erika
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Detecting Risky Riding on Micromobility Vehicles Using Sensor Data and Machine Learning. 2023. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Micromobility has gained in popularity in recent years as a convenient alternative to cars and public transportation and offers advantages such as reduced air pollution and more efficient use of street space. There are, however, some concerns related to micromobility that have to be addressed, such as certain risky riding behaviours. This thesis focuses on a particular behaviour that poses an increased risk of falls and collisions and that is prohibited in several countries. The aim of the thesis is to investigate the feasibility of detecting this behaviour using machine learning methods and multivariate time series data from vehicle sensors. Three distinct methods are examined. In the first method, statistics are extracted from the time series and concatenated into one-dimensional feature vectors. These are then used as input to a ridge regression classifier. The second method is MiniROCKET, an improved version of the ROCKET classifier, which combines random convolutional kernel transforms with a linear classifier. The third investigated method is 1-Nearest Neighbour with Dynamic Time Warping as the distance metric. The methods are evaluated on sensor data from controlled experiments, and the results comprise evaluation metric scores as well as an analysis of feature importance for each method. While all three methods demonstrate the ability to successfully differentiate between risky and compliant riding, MiniROCKET outperforms the other two and achieves an accuracy of nearly 90%.
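
    A minimal sketch of the first of the three methods (summary statistics concatenated into feature vectors and fed to a ridge classifier); the sensor channels, window length, and labels below are invented placeholders, not the thesis's data.

        import numpy as np
        from sklearn.linear_model import RidgeClassifierCV

        rng = np.random.default_rng(2)
        X_ts = rng.normal(size=(200, 3, 400))      # toy rides: (n_rides, n_channels, n_timesteps)
        y = rng.integers(0, 2, size=200)           # toy labels: 1 = risky riding

        def summarize(series):
            # per-channel summary statistics concatenated into one feature vector
            return np.concatenate([series.mean(axis=1), series.std(axis=1),
                                   series.min(axis=1), series.max(axis=1)])

        X_feat = np.array([summarize(s) for s in X_ts])
        clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 13)).fit(X_feat, y)
        print("training accuracy:", clf.score(X_feat, y))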

  • 110.
    Grasso, Giulia
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Optimal process design with simulation-based optimization. 2023. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Nowadays, it has become crucial to transform company production processes in order to reduce carbon emissions. Therefore, the company LKAB is working to make the production process of iron ore pellets fossil-free. This thesis project focuses on the pellet induration stage and addresses the mathematical optimization of the process. In particular, the idea is to combine state-of-the-art optimization algorithms with simulation software.

    This thesis aims to address the problem of multi-objective optimization within the context of simulation-based scenarios, i.e., the aim is to study Simulation-Based Multi-Objective Optimization Problems.

    The primary focus of this thesis is to thoroughly investigate and compare three different algorithms applied to two distinct problem formulations. By doing so, we aim to gain valuable insights into the suitability of different approaches and evaluate the algorithms' performance in achieving the desired objectives.

    Download full text (pdf)
    fulltext
  • 111. Grava, T.
    et al.
    Kriecherbauer, T.
    Mazzuca, G.
    McLaughlin, K. D. T.-R.
    Correlation Functions for a Chain of Short Range Oscillators. 2021. In: Journal of statistical physics, ISSN 0022-4715, E-ISSN 1572-9613, Vol. 183, no 1, article id 1. Article in journal (Refereed)
  • 112. Grava, T.
    et al.
    Maspero, A.
    Mazzuca, Guido
    International School for Advanced Studies (SISSA), Via Bonomea 265, Trieste, 34136, Italy.
    Ponno, A.
    Adiabatic Invariants for the FPUT and Toda Chain in the Thermodynamic Limit. 2020. In: Communications in Mathematical Physics, ISSN 0010-3616, E-ISSN 1432-0916, Vol. 380, no 2, p. 811-851. Article in journal (Refereed)
    Abstract [en]

    We consider the Fermi–Pasta–Ulam–Tsingou (FPUT) chain composed of N ≫ 1 particles with periodic boundary conditions, and endow the phase space with the Gibbs measure at small temperature β⁻¹. Given a fixed 1 ≤ m ≪ N, we prove that the first m integrals of motion of the periodic Toda chain are adiabatic invariants of FPUT (namely they are approximately constant along the Hamiltonian flow of the FPUT) for times of order β, for initial data in a set of large measure. We also prove that special linear combinations of the harmonic energies are adiabatic invariants of the FPUT on the same time scale, whereas they become adiabatic invariants for all times for the Toda dynamics.

  • 113.
    Greberg, Felix
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Rylander, Andreas
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Using Gradient Boosting to Identify Pricing Errors in GLM-Based Tariffs for Non-life Insurance. 2022. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Most non-life insurers and many creditors use regressions, more specifically Generalized Linear Models (GLM), to price their liabilities. One limitation with GLMs is that interactions between predictors are handled manually, which makes finding interactions a tedious and time-consuming task. This increases the cost of rate making and, more importantly, actuaries can miss important interactions resulting in sub-optimal customer prices. Several papers have shown that Gradient Tree Boosting can outperform GLMs in insurance pricing since it handles interactions automatically. Insurers and creditors are however reluctant to use so-called ”Black-Box” solutions for both regulatory and technical reasons. Tree-based methods have been used to identify pricing errors in regressions, albeit only as ad-hoc solutions. The authors instead propose a systematic approach to automatically identify and evaluate interactions between predictors before adding them to a traditional GLM. The model can be used in three different ways: Firstly, it can create a table of statistically significant candidate interactions to add to a GLM. Secondly, it can automatically and iteratively add new interactions to an old GLM until no more statistically significant interactions can be found. Lastly, it can automatically create a new GLM without an existing pricing model. All approaches are tested on two motor insurance data sets from a Nordic P&C insurer and the results show that all methods outperform the original GLMs. Although the two iterative modes perform better than the first, insurers are recommended to mainly use the first mode since this results in a reasonable trade-off between automating processes and leveraging actuaries’ professional judgment.

    Download full text (pdf)
    fulltext
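
    A rough scikit-learn sketch of the underlying idea, using a gradient boosting model and two-way partial dependence to rank candidate feature pairs by how non-additive their joint effect is; the data, the scoring heuristic, and the Poisson toy response are assumptions for illustration, not the authors' method.

        import numpy as np
        from itertools import combinations
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.inspection import partial_dependence

        rng = np.random.default_rng(3)
        n = 5_000
        X = rng.normal(size=(n, 4))
        # toy claim counts with a genuine interaction between features 0 and 1
        y = rng.poisson(np.exp(0.3 * X[:, 0] + 0.2 * X[:, 1] + 0.4 * X[:, 0] * X[:, 1]))

        gbm = GradientBoostingRegressor(max_depth=3, n_estimators=200, random_state=0).fit(X, y)

        scores = {}
        for i, j in combinations(range(X.shape[1]), 2):
            pd2 = partial_dependence(gbm, X, features=[(i, j)])["average"][0]
            # additive approximation of the surface; large residuals suggest an interaction
            additive = pd2.mean(axis=1, keepdims=True) + pd2.mean(axis=0, keepdims=True) - pd2.mean()
            scores[(i, j)] = float(np.abs(pd2 - additive).mean())

        print(sorted(scores.items(), key=lambda kv: -kv[1]))   # pair (0, 1) should rank first

    Candidate pairs flagged this way could then be tested for statistical significance before being added as interaction terms to the GLM, which is the spirit of the approach described in the abstract.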
  • 114.
    Grägg, Sofia
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Isacson, Paula
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Optimization of Collateral Allocation for Corporate Loans: A nonlinear network problem minimizing the expected loss in case of default. 2022. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Collateral management has become an increasingly valuable aspect of credit risk. Managing collaterals and constructing accurate models for decision making can give any lender a competitive advantage and decrease overall risks. This thesis explores how to allocate securities on a set of loans such that the total expected loss is minimized in case of default. A nonlinear optimization problem is formulated and several factors that affect the expected loss are considered. In order to incorporate regulations on collateral allocation, the model is formulated as a network problem. Finally, to account for the risk of the portfolio of securities, the Markowitz approach to variance is adopted.

    The model calculates a loss that is less than the average historical loss for the same type of portfolio. In the case of the network problem with many-to-many relations, an equal or higher expected loss is obtained. Similarly, when the variance constraint is included, the expected loss increases. This is because some solutions are excluded when links are removed and the variance constraint is included, forcing the optimization problem to choose a less optimal solution. The model created has no limits on the number of collateral types and loans that can be included.

    An improvement of the model would be to account for the stochasticity of the collateral values. A remaining difficulty is validating the results, which is a consequence of the expected loss functions being based on a theoretical analysis. Nonetheless, the results of the model can act as an upper bound on the expected loss, with a certain variance, since the average of the expected loss lies above the average of the historical loss.

    Download full text (pdf)
    Optimization of Collateral Allocation for Corporate Loans
  • 115.
    Gröndahl, Erik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Pilot models in full missions simulation of JAS 39 Gripen. 2013. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]


    This master thesis was performed at the section of Flight Mechanics and Performance at SAAB Aeronautics in Linköping as a part of my Master of Science in Engineering Physics at KTH, Stockholm. The aim of the thesis is to enable desktop simulations of missions from take-off to landing of JAS 39 Gripen.

    The mission is set up as a series of tasks to be performed. Each task then links to a pilot model that controls the aircraft to perform the given task. The main part of the work has been to create these pilot models as an extension of the work by Ajdén and Backlund presented in [1].

    The tasks that are simulated are take-off, climb, turn, cruise, combat simulation, descent and landing at a given point. In order to perform these tasks both open- and closed-loop controls are used. To perform the landing, a path based on Dubins minimum-length paths is first planned, and then the nonlinear guidance logic presented by Park, Deyst and How in [Park4] is implemented and used for trajectory tracking.

    The results from a simulation of a test mission are presented and show that mission simulations are possible and that the pilot models perform the intended tasks.


    Download full text (pdf)
    fulltext
  • 116.
    Guidolin, Andrea
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics of Data and AI.
    Landi, Claudia
    Univ Modena & Reggio Emilia, Reggio Emilia, Italy..
    Morse inequalities for the Koszul complex of multi-persistence. 2023. In: Journal of Pure and Applied Algebra, ISSN 0022-4049, E-ISSN 1873-1376, Vol. 227, no 7, p. 107319, article id 107319. Article in journal (Refereed)
    Abstract [en]

    In this paper, we define the homological Morse numbers of a filtered cell complex in terms of relative homology of nested filtration pieces, and derive inequalities relating these numbers to the Betti tables of the multi-parameter persistence modules of the considered filtration. Using the Mayer-Vietoris spectral sequence we first obtain strong and weak Morse inequalities involving the above quantities, and then we improve the weak inequalities achieving a sharp lower bound for homological Morse numbers. Furthermore, we prove a sharp upper bound for homological Morse numbers, expressed again in terms of the Betti tables.

  • 117.
    Gustafsson, Axel
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hansen, Jacob
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Combined Actuarial Neural Networks in Actuarial Rate Making. 2021. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Insurance is built on the principle that a group of people contributes to a common pool of money which will be used to cover the costs for individuals who suffer from the insured event. In a competitive market, an insurance company will only be profitable if its pricing reflects the covered risks as well as possible. This thesis investigates the recently proposed Combined Actuarial Neural Network (CANN), a model nesting the traditional Generalised Linear Model (GLM) used in insurance pricing into a Neural Network (NN). The main idea of utilising NNs for insurance pricing is to model interactions between features that the GLM is unable to capture. The CANN model is analysed in a commercial insurance setting with respect to two research questions. The first research question, RQ 1, seeks to answer if the CANN model can outperform the underlying GLM with respect to error metrics and actuarial model evaluation tools. The second research question, RQ 2, seeks to identify existing interpretability methods that can be applied to the CANN model and also showcase how they can be applied. The results for RQ 1 show that CANN models are able to consistently outperform the GLM with respect to chosen model evaluation tools. A literature search is conducted to answer RQ 2, identifying interpretability methods that either are applicable or are possibly applicable to the CANN model. One interpretability method is also proposed in this thesis specifically for the CANN model, using model-fitted averages on two-dimensional segments of the data. Three interpretability methods from the literature search and the one proposed in this thesis are demonstrated, illustrating how these may be applied.

    Download full text (pdf)
    fulltext
  • 118.
    Gustafsson, Markus
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Copula modeling for Portfolio Return Analysis. 2023. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In this thesis, we investigate the advantages of using high-dimensional copula modeling to understand the riskiness of portfolio investments and to more realistically estimate future portfolio values. Our approach involves benchmarking some pre-determined fitted copulas to the 0.05-quantile, the Tail Conditional Expectation, and the probability of negative returns for each portfolio. We find that the two R-Vine copula models used in this study provide good estimations of the distribution of portfolio values for the 1-month time frame, the shortest we consider in this thesis, most probably due to their flexibility and ability to represent a diverse array of dependence structures. However, for longer time frames (1 year or more), the Clayton copula appears to be a more suitable model. It aligns more closely with market behaviour due to its capacity to capture lower tail dependence. In conclusion, we argue that by employing the right copula model, in our case the Clayton copula, we obtain a more realistic view of the distribution of the future portfolio values.

    Download full text (pdf)
    fulltext
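
    A minimal numpy/scipy sketch of the kind of quantities discussed above: sampling from a Clayton copula via the Marshall-Olkin (gamma frailty) construction and reading off the empirical 0.05-quantile, tail conditional expectation, and probability of a negative return for a toy equally weighted portfolio; the dependence parameter, marginals, and weights are placeholder assumptions, not the thesis's fitted models.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        theta, dim, n = 2.0, 5, 100_000            # Clayton parameter, assets, scenarios (all toy)

        # Marshall-Olkin sampling: V ~ Gamma(1/theta), U_i = (1 + E_i / V)^(-1/theta)
        V = rng.gamma(shape=1.0 / theta, scale=1.0, size=(n, 1))
        E = rng.exponential(size=(n, dim))
        U = (1.0 + E / V) ** (-1.0 / theta)        # uniforms with Clayton lower-tail dependence

        # map to toy Student-t marginal returns and form an equally weighted portfolio
        returns = stats.t.ppf(U, df=4) * 0.01
        portfolio = returns.mean(axis=1)

        q05 = np.quantile(portfolio, 0.05)                 # 0.05-quantile of the portfolio return
        tce = portfolio[portfolio <= q05].mean()           # tail conditional expectation
        print(f"5% quantile: {q05:.4f}, TCE: {tce:.4f}, P(return < 0): {(portfolio < 0).mean():.3f}")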
  • 119. Guzeltepe, Murat
    et al.
    Heden, Olof
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Perfect Mannheim, Lipschitz and Hurwitz weight codes. 2014. In: Mathematical Communications, ISSN 1331-0623, E-ISSN 1848-8013, Vol. 19, no 2, p. 253-276. Article in journal (Refereed)
    Abstract [en]

    The sets of residue classes modulo an element π in the rings of Gaussian integers, Lipschitz integers and Hurwitz integers, respectively, are used as alphabets to form the words of error correcting codes. An error occurs as the addition of an element in a set E to the letter in one of the positions of a word. If E is a group of units in the original rings, then we obtain the Mannheim, Lipschitz and Hurwitz metrics, respectively. Some new perfect 1-error-correcting codes in these metrics are constructed. The existence of perfect 2-error-correcting codes is investigated by computer search.

  • 120.
    Gäfvert, Oliver
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Topological and geometrical methods in data analysis. 2021. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis concerns two related data analysis pipelines, using topological and geometrical methods respectively, to extract relevant information. The first pipeline, referred to as the topological data analysis (TDA) pipeline, constructs a filtered simplicial complex on a given data set in order to describe its shape. The shape is described using a persistence module, which characterizes the topological features of the filtration, and the final step of the pipeline extracts algebraic invariants from this object. The second pipeline, referred to as the geometric data analysis (GDA) pipeline, associates an algebraic variety to a given data set and aims to describe the structure of this variety. Its structure is described using homology, an invariant which for most algebraic varieties can only be computed numerically using sampling methods.

    In Paper A we consider invariants on multi-parameter persistence modules. We explain how to convert discrete invariants into stable ones via what we call hierarchical stabilization. We illustrate this process by constructing stable invariants for multi-parameter persistence modules with respect to the interleaving distance and so called simple noise systems. For one parameter, we recover the standard barcode information. For more than one parameter we prove that the constructed invariants are in general NP-hard to calculate. A consequence is that computing the feature counting function, proposed by Scolamiero et al. (2016), is NP-hard.

    In Paper B we introduce an efficient algorithm to compute a minimal presentation of a multi-parameter persistent homology module, given a chain complex of free modules as input. Our approach extends previous work on this problem in the 2-parameter case, and draws on ideas underlying the F4 and F5 algorithms for Gröbner basis computation. In the r-parameter case, our algorithm computes a presentation for the homology of C → A → B (with morphisms F: C → A and G: A → B), with modules of rank l, n, m respectively, in O(r^2 n^(r+1) + n^r m + n^(r-1) m^2 + r n^2 l) arithmetic operations. We implement this approach in our new software Muphasa, written in C++. In preliminary computational experiments on synthetic TDA examples, we compare our approach to a version of a classical approach based on Schreyer's algorithm, and find that ours is substantially faster and more memory efficient. In the course of developing our algorithm for computing presentations, we also introduce algorithms for the closely related problems of computing Gröbner bases for the image and kernel of the morphism G. This algorithm runs in time O(n^r m + n^(r-1) m^2) and memory O(n^2 + mn + n^r + K), where K is the size of the output.

    Paper C analyzes the complexity of fitting a variety, coming from a class of varieties, to a configuration of points in R^N. The complexity measure, called the algebraic complexity, computes the Euclidean Distance Degree (EDD) of a certain variety called the hypothesis variety as the number of points in the configuration increases. Finally, we establish a connection to the complexity of architectures of polynomial neural networks. For the problem of fitting an (N-1)-sphere to a configuration of m points in R^N, we give a closed formula for the algebraic complexity of the hypothesis variety as m grows for the case of N=1. For the case N>1 we conjecture a generalization of this formula supported by numerical experiments.

    In Paper D we present an efficient algorithm to produce a provably dense sample of a smooth compact variety. The procedure is partly based on computing bottlenecks of the variety. Using geometric information such as the bottlenecks and the local reach we also provide bounds on the density of the sample needed in order to guarantee that the homology of the variety can be recovered from the sample. An implementation of the algorithm is provided together with numerical experiments and a computational comparison to the algorithm by Dufresne et al. (2019).

    Download full text (pdf)
    fulltext
  • 121.
    Gäfvert, Oliver
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Bender, Matías Rafael
    Lesnick, Michael
    Efficient computation of multiparameter persistence. Manuscript (preprint) (Other academic)
    Abstract [en]

    Motivated by applications to topological data analysis (TDA), we introduce an efficient algorithm to compute a minimal presentation of a multi-parameter persistent homology module, given a chain complex of free modules as input. Our approach extends previous work on this problem in the 2-parameter case, and draws on ideas underlying the F4 and F5 algorithms for Gröbner basis computation. In the r-parameter case, our algorithm computes a presentation for the homology of C → A → B (with morphisms F: C → A and G: A → B), with modules of rank l, n, m respectively, in O(r^2 n^(r+1) + n^r m + n^(r-1) m^2 + r n^2 l) arithmetic operations. We implement this approach in our new software Muphasa, written in C++. In preliminary computational experiments on synthetic TDA examples, we compare our approach to a version of a classical approach based on Schreyer's algorithm, and find that ours is substantially faster and more memory efficient. In the course of developing our algorithm for computing presentations, we also introduce algorithms for the closely related problems of computing a Gröbner basis for the image and kernel of the morphism G. This algorithm runs in time O(n^r m + n^(r-1) m^2) and memory O(n^2 + mn + n^r + K), where K is the size of the output.

  • 122.
    Gäfvert, Oliver
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Chachólski, Wojciech
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Stable invariants for multiparameter persistence. Manuscript (preprint) (Other academic)
    Abstract [en]

    In this paper we explain how to convert discrete invariants into stable ones via what we call hierarchical stabilization. We illustrate this process by constructing stable invariants for multi-parameter persistence modules with respect to the interleaving distance and so called simple noise systems. For one parameter, we recover the standard barcode information. For more than one parameter we prove that the constructed invariants are in general NP-hard to calculate. A consequence is that computing the feature counting function, proposed by Scolamiero et al. (2016), is NP-hard.

  • 123.
    Haasler, Isabel
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Chen, Y.
    Karlsson, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Optimal Steering of Ensembles with Origin-Destination Constraints. 2021. In: IEEE Control Systems Letters, E-ISSN 2475-1456, Vol. 5, no 3, p. 881-886, article id 9131801. Article in journal (Refereed)
    Abstract [en]

    We consider the optimal control problem of steering a collection of agents over a network. The group behavior of an ensemble is often modeled by a distribution, and thus the optimal control problem we study can be cast as a distribution steering problem. While most existing works for steering distributions require the agents in the ensemble to be indistinguishable, we consider the setting where agents have specified origin-destination constraints. This control problem also resembles a minimum cost network flow problem with a massive number of commodities. We propose a novel optimal transport based framework for this problem and derive an efficient algorithm for solving it. This framework extends multi-marginal optimal transport theory to settings with capacity and origin-destination constraints. The proposed method is illustrated on a numerical simulation for traffic planning.

  • 124.
    Hahn, Karin
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Backlund, Axel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Modelling Risk in Real-Life Multi-Asset Portfolios. 2023. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    We develop a risk factor model based on data from a large number of portfolios spanning multiple asset classes. The risk factors are selected based on economic theory through an analysis of the asset holdings, as well as statistical tests. As many assets have limited historical data available, we implement and analyse the impact of regularisation to handle sparsity. Based on the factor model, two parametric methods for calculating Value-at-Risk (VaR) for a portfolio are developed: one with constant volatility and one with a CCC-GARCH volatility updating scheme. These methods are evaluated through backtesting on daily and weekly returns of a selected set of portfolios whose contents reflect the larger majority well. A historical data approach for calculating VaR serves as a benchmark model. We find that under daily returns, the historical data method outperforms the factor models in terms of VaR violation rates. None yield independent violations however. Under weekly returns, both factor models produce more accurate violation rates than the historical data model, with the CCC-GARCH model also yielding independent VaR violations for almost all portfolios due to its ability to adjust up VaR estimates in periods of increased market volatility. We conclude that if weekly VaR estimates are acceptable, tailored risk factor models provide accurate measures of portfolio risk.

    Download full text (pdf)
    fulltext
  • 125. Hall, Jack
    et al.
    Rydh, David
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    The telescope conjecture for algebraic stacks. 2017. In: Journal of Topology, ISSN 1753-8416, E-ISSN 1753-8424, Vol. 10, no 3, p. 776-794. Article in journal (Refereed)
    Abstract [en]

    Using Balmer-Favi's generalized idempotents, we establish the telescope conjecture for many algebraic stacks. Along the way, we classify the thick tensor ideals of perfect complexes of stacks.

  • 126.
    Hallander, Elin
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Gutman, Per-Olof
    Active damping of longitudinal oscillations in a wheel loader. 2006. In: Computer Aided Control System Design, 2006, p. 1097-1102. Conference paper (Refereed)
    Abstract [en]

    A simplified model of the longitudinal motion around a constant velocity trajectory of a wheel loader is developed, for the purpose of finding a control that actively damps oscillations in the acceleration, following an up shifting gear change. Measurements from different drivers and different gear changes indicate similar oscillation frequencies for the investigated vehicle. The model parameters and the gear change induced disturbance are adjusted so that the model output closely fits the true measurements of the vehicle acceleration after a gear change, in the investigated frequency band. With the engine as a torque actuator, active damping of the oscillations in the acceleration is investigated through simulations. The possible improvements using feedback from measured engine speed seem to be limited, while a predefined feed-forward programme shows promising results.

  • 127.
    Hansson, Sven Ove
    KTH, School of Architecture and the Built Environment (ABE), Philosophy and History, Philosophy.
    Descriptor Revision. 2014. In: Studia Logica: An International Journal for Symbolic Logic, ISSN 0039-3215, E-ISSN 1572-8730, Vol. 102, no 5, p. 955-980. Article in journal (Refereed)
    Abstract [en]

    A descriptor is a set of sentences that are truth-functional combinations of expressions of the form Bp, where B is a metalinguistic belief predicate and p a sentence in the object language in which beliefs are expressed. Descriptor revision (denoted ∘) is an operation of belief change that takes us from a belief set K to a new belief set K ∘ Ψ, where Ψ is a descriptor representing the success condition. Previously studied operations of belief change are special cases of descriptor revision; hence sentential revision can be represented as K ∘ Bp, contraction as K ∘ ¬Bp, and similarly for multiple contraction and replacement. General models of descriptor revision are constructed and axiomatically characterized. The common selection mechanisms of AGM style belief change cannot be used, but they can be replaced by choice functions operating directly on the set of potential outcomes (available belief sets). The restrictions of this construction to sentential revision and sentential contraction give rise to operations with plausible properties that are also studied in some detail.

  • 128.
    Hansson, Sven Ove
    KTH, School of Architecture and the Built Environment (ABE), Philosophy and History of Technology, Philosophy.
    Eradication. 2012. In: Journal of Applied Logic, ISSN 1570-8683, E-ISSN 1570-8691, Vol. 10, no 1, p. 75-84. Article in journal (Refereed)
    Abstract [en]

    Eradication is a radical form of contraction that removes not only a sentence but also all of its non-tautological consequences from a belief set. Eradication of a single sentence that was included in the original belief set coincides with full meet contraction, but if the sentence is external to the belief set then the two operations differ. Multiple eradication, i.e. simultaneous eradication of several sentences, differs from full meet contraction even if the sentences to be contracted are all included in the original belief set. Eradication is axiomatically characterized and its properties investigated. It is shown to have close connections with the recovery postulate for multiple contraction. Based on these connections it is proposed that eradication rather than full meet contraction is the appropriate lower limiting case for multiple contraction operators.

  • 129.
    Hansson, Sven Ove
    KTH, School of Architecture and the Built Environment (ABE), Philosophy and History, Philosophy.
    Introduction. 2018. In: Technology and Mathematics, Springer Nature, 2018, p. 3-10. Chapter in book (Refereed)
    Abstract [en]

    This is a brief introduction to a multi-author book that provides both historical and philosophical perspectives on the relationship between technology and mathematics. It consists mainly of summaries of the chapters that follow. The book has three main parts: The Historical Connection, Technology in Mathematics, and Mathematics in Technology.

  • 130.
    Hansson, Sven Ove
    KTH, School of Architecture and the Built Environment (ABE), Philosophy and History, Philosophy.
    Mathematics and Technology Before the Modern Era. 2018. In: Technology and Mathematics, Springer Nature, 2018, p. 13-31. Chapter in book (Refereed)
    Abstract [en]

    The use of technology to support mathematics goes back to ancient tally sticks, khipus, counting boards, and abacuses. The reciprocal relationship, the use of mathematics to support technology, also has a long history. Preliterate weavers, most of them women, combined geometrical and arithmetical thinking to construct number series that give rise to intricate symmetrical patterns on the cloth. Egyptian scribes performed the technical calculations needed for large building projects. Islamic master builders covered walls and ceilings with complex geometric patterns, constructed with advanced ruler-and-compass methods. In Europe, medieval masons used the same tools to construct intricate geometrical patterns for instance in rose windows. These masters lacked formal mathematical schooling, but they developed advanced skills in constructive geometry. Even today, the practical mathematics of the crafts is often based on traditions that differ from school mathematics.

  • 131.
    Hansson, Sven Ove
    KTH, School of Architecture and the Built Environment (ABE), Philosophy and History, Philosophy.
    Preface. 2018. In: Technology and Mathematics / [ed] Sven Ove Hansson, Springer Nature, 2018, p. v. Chapter in book (Refereed)
  • 132.
    Hansson, Sven Ove
    KTH, School of Architecture and the Built Environment (ABE), Philosophy and History, Philosophy.
    The Rise and Fall of the Anti-Mathematical Movement. 2018. In: Technology and Mathematics, Springer Nature, 2018, p. 305-323. Chapter in book (Refereed)
    Abstract [en]

    Ever since the beginnings of modern engineering education at the end of the eighteenth century, mathematics has had a prominent place in its curricula. In the 1890s, a zealous “anti-mathematical” movement emerged among teachers in technological disciplines at German university colleges. The aim of this movement was to reduce the mathematical syllabus and reorient it towards more applied topics. Its members believed that this would improve engineering education, but many of them also had more ideological motives. They distrusted modern, rigorous mathematics, and demanded a more intuitive approach. For instance, they preferred to base calculus on infinitesimals rather than the modern (“epsilon delta”) definitions in terms of limits. Some of them even demanded that practically oriented engineers should replace mathematicians as teachers of the (reduced) mathematics courses for engineers. The anti-mathematical movement was short-lived, and hardly survived into the next century. However, calls for more intuitive and less formal mathematics reappeared in another, more sinister context, namely the Nazi campaign for an intuitive “German” form of mathematics that would replace the more abstract and rigorous “Jewish” mathematics.

  • 133.
    Hardin, Patrik
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Ingre, Robert
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    BNPL Probability of Default Modeling Including Macroeconomic Factors: A Supervised Learning Approach. 2021. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In recent years, the Buy Now Pay Later (BNPL) consumer credit industry associated with e-commerce has been rapidly emerging as an alternative to credit cards and traditional consumer credit products. In parallel, the regulation IFRS 9 was introduced in 2018 requiring creditors to become more proactive in forecasting their Expected Credit Losses and include the impact of macroeconomic factors.

    This study evaluates several methods of supervised statistical learning to model the Probability of Default (PD) for BNPL credit contracts. Furthermore, the study analyzes to what extent macroeconomic factors impact the prediction under the requirements in IFRS 9 and was carried out as a case study with the Swedish fintech firm Klarna.

    The results suggest that XGBoost produces the highest predictive power measured in Precision-Recall and ROC Area Under Curve, with ROC values between 0.80 and 0.91 in three modeled scenarios. Moreover, the inclusion of macroeconomic variables generally improves the Precision-Recall Area Under Curve. Real GDP growth, housing prices, and unemployment rate are frequently among the most important macroeconomic factors.

    The findings are in line with previous research on similar industries and contribute to the literature on PD modeling in the BNPL industry, where limited previous research was identified.

    Download full text (pdf)
    fulltext
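
    A minimal sketch of the model class evaluated in the thesis (a gradient-boosted classifier with contract-level and macroeconomic covariates, scored by ROC and Precision-Recall AUC); the feature names, the simulated default mechanism, and the hyperparameters are invented for illustration and are unrelated to Klarna's data.

        import numpy as np
        import pandas as pd
        from xgboost import XGBClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score, average_precision_score

        rng = np.random.default_rng(5)
        n = 20_000
        df = pd.DataFrame({
            "order_amount": rng.lognormal(5, 1, n),      # hypothetical contract feature
            "prior_defaults": rng.poisson(0.2, n),       # hypothetical contract feature
            "gdp_growth": rng.normal(0.5, 1.0, n),       # hypothetical macro feature
            "unemployment": rng.normal(7.0, 1.5, n),     # hypothetical macro feature
        })
        logit = -3 + 0.4 * df["prior_defaults"] - 0.3 * df["gdp_growth"] + 0.2 * (df["unemployment"] - 7)
        y = rng.binomial(1, 1 / (1 + np.exp(-logit)))    # simulated default indicator

        X_tr, X_te, y_tr, y_te = train_test_split(df, y, test_size=0.25, random_state=0)
        model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05).fit(X_tr, y_tr)
        p = model.predict_proba(X_te)[:, 1]
        print("ROC AUC:", roc_auc_score(y_te, p), "PR AUC:", average_precision_score(y_te, p))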
  • 134.
    Harting, Alice
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    MCMC estimation of causal VAE architectures with applications to Spotify user behavior. 2023. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    A common task in data science at internet companies is to develop metrics that capture aspects of the user experience. In this thesis, we are interested in systems of measurement variables without direct causal relations such that covariance is explained by unobserved latent common causes. A framework for modeling the data generating process is given by Neuro-Causal Factor Analysis (NCFA). The graphical model consists of a directed graph with edges pointing from the latent common causes to the measurement variables; its functional relations are approximated with a constrained Variational Auto-Encoder (VAE). We refine the estimation of the graphical model by developing an MCMC algorithm over Bayesian networks from which we read marginal independence relations between the measurement variables. Unlike standard independence testing, the method is guaranteed to yield an identifiable graphical model. Our algorithm is competitive with the benchmark, and it admits additional flexibility via hyperparameters that are natural to the approach. Tuning these parameters yields superior performance over the benchmark. We train the improved NCFA model on Spotify user behavior data. It is competitive with the standard VAE on data reconstruction with the benefit of causal interpretability and model identifiability. We use the learned latent space representation to characterize clusters of Spotify users. Additionally, we train an NCFA model on data from a randomized control trial and observe treatment effects in the latent space.

    Download full text (pdf)
    fulltext
  • 135.
    Harting, Ludvig
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Åkesson, Nils
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    A machine learning approach leveraging technical- and sentiment analysis to forecast price movements in major crypto currencies. 2022. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This paper uses a back-propagating neural network (BPN) to predict the price movements of major crypto currencies, leveraging technical factors as well as measurements of collective sentiment derived from the micro-blogging network Twitter. Our dataset consists of daily, hourly and minutely price levels for Bitcoin, Ether and Litecoin along with 8 popular technical indicators, as well as all tweets with the currencies' cash tags during the respective time periods. Inspired by previous research which suggests that artificial neural networks are superior forecasting models in this setting, we were able to create a system generating automated investment decisions on a daily, hourly and minutely time basis. The study concluded that price trends are indeed predictable, with a correct prediction rate above 50% for all models, and corresponding profitable trading strategies for all currencies on an hourly basis when neglecting trading fees, buy-sell spreads and order delays. The overall highest predictability is obtained on the hourly trading interval for Bitcoin, yielding an accuracy of 55.74% and a cumulative return of 175.1% between October 16, 2021 and December 31, 2021.

    Download full text (pdf)
    fulltext
  • 136.
    Hedblom, Nicole
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Random Edge is not faster than Random Facet on Linear Programs. 2023. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    A Linear Program is a problem where the goal is to maximize a linear function subject to a set of linear inequalities. Geometrically, this can be rephrased as finding the highest point on a polyhedron. The Simplex method is a commonly used algorithm to solve Linear Programs. It traverses the vertices of the polyhedron, and in each step, it selects one adjacent better vertex and moves there. There can be multiple vertices to choose from, and therefore the Simplex method has different variants deciding how the next vertex is selected. One of the most natural variants is Random Edge, which in each step of the Simplex method uniformly at random selects one of the better adjacent vertices.

    It is interesting and non-trivial to study the complexity of variants of the Simplex method in the number of variables, d, and inequalities, N. In 2011, Friedmann, Hansen, and Zwick found a class of Linear Programs for which the Random Edge algorithm is subexponential with complexity 2^Ω(N^(1/4)), where d=Θ(N). Previously all known lower bounds were polynomial. We give an improved lower bound of 2^Ω(N^(1/2)), for Random Edge on Linear Programs where d=Θ(N).

    Another well studied variant of the Simplex method is Random Facet. It is upper bounded by 2^O(N^(1/2)) when d=Θ(N). Thus we prove that Random Edge is not faster than Random Facet on Linear Programs where d=Θ(N).

    Our construction is very similar to the previous construction of Friedmann, Hansen and Zwick. We construct a Markov Decision Process which behaves like a binary counter with linearly many levels and linearly many nodes on each level. The new idea is a new type of delay gadget which can switch quickly from 0 to 1 in some circumstances, leading to fewer nodes needed on each level of the construction. The key idea is that it is worth taking a large risk of getting a small negative reward if the potential positive reward is large enough in comparison.

    Download full text (pdf)
    fulltext
  • 137.
    Hedvall, Paul
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Out-of-distribution Recognition and Classification of Time-Series Pulsed Radar Signals2022Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis investigates out-of-distribution recognition for time-series data of pulsed radar signals. The classifier is a naive Bayesian classifier based on Gaussian mixture models and Dirichlet process mixture models. In the mixture models, we model the distribution of three pulse features in the time series, namely radio-frequency in the pulse, duration of the pulse, and pulse repetition interval which is the time between pulses. We found that simple thresholds on the likelihood can effectively determine if samples are out-of-distribution or belong to one of the classes trained on. In addition, we present a simple method that can be used for deinterleaving/pulse classification and show that it can robustly classify 100 interleaved signals and simultaneously determine if pulses are out-of-distribution.
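    As a minimal sketch of the core mechanism described above (not the thesis code), one can fit one Gaussian mixture per known class on the pulse features and reject samples whose best class log-likelihood falls below a threshold; the feature arrays, class ids and threshold below are placeholders.

        # Per-class Gaussian mixtures plus a likelihood threshold for out-of-distribution detection.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        def fit_class_models(X_train_by_class, n_components=3):
            # One GMM per known emitter class on (RF, pulse duration, PRI) features.
            return {c: GaussianMixture(n_components).fit(X) for c, X in X_train_by_class.items()}

        def classify_or_reject(models, X_new, threshold):
            # Most likely class id per pulse; -1 where the pulse looks out-of-distribution.
            class_ids = np.array(list(models.keys()))             # class ids assumed to be integers
            scores = np.column_stack([m.score_samples(X_new) for m in models.values()])
            labels = class_ids[scores.argmax(axis=1)]
            return np.where(scores.max(axis=1) < threshold, -1, labels)  # low likelihood under every class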

    Download full text (pdf)
    fulltext
  • 138.
    Henningsson, Nils
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Copula Modelling of High-Dimensional Longitudinal Binary Response Data2022Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis treats the modelling of a high-dimensional data set of longitudinal binary responses. The data consists of default indicators from different nations around the world as well as some explanatory variables such as exposure to underlying assets. The data used for the modelling is an aggregated term which combines several of the default indicators in the data set into one. 

    The modelling sets out from a portfolio perspective and seeks to find underlying correlations between the nations in the data set, as well as to examine the extreme values produced by a portfolio with assets in those nations. The modelling takes a copula approach: Gaussian copulas are used to first formulate several different models mathematically and then optimize the model parameters to best fit the data set. Models A and B are optimized using standard stochastic gradient ascent on the likelihood function, while model C uses variational inference and stochastic gradient ascent on the evidence lower bound. Using the different Gaussian copulas obtained from the optimization process, a portfolio simulation is then carried out to examine the extreme values.
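    Once a correlation matrix and marginal default probabilities have been estimated, simulating from a Gaussian copula is straightforward; a minimal sketch follows (the correlation matrix and default probabilities are placeholders, not the fitted models A-C).

        # Gaussian-copula simulation of joint default indicators; all inputs are placeholders.
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        corr = np.array([[1.0, 0.3], [0.3, 1.0]])       # assumed latent correlation matrix
        default_prob = np.array([0.02, 0.05])           # assumed marginal default probabilities

        L = np.linalg.cholesky(corr)
        Z = rng.standard_normal((100_000, 2)) @ L.T     # correlated standard normals
        U = norm.cdf(Z)                                 # Gaussian copula: uniform marginals
        defaults = U < default_prob                     # binary default indicators per scenario

        losses = defaults.sum(axis=1)                   # toy portfolio loss per scenario
        print("99% VaR:", np.quantile(losses, 0.99))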

    The results show low correlations in models A and B, while model C, with its additional regional correlations, shows slightly higher correlations in three of the subgroups. The portfolio simulations show similar tail behaviour in all three models; however, model C produces more extreme risk measure outcomes in the form of higher VaR and ES.

    Download full text (pdf)
    fulltext
  • 139.
    Henriksson, Albin
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Using the discrete wavelet transform in stock index forecasting2023Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis investigates the use of the discrete wavelet transform of a stock index as a means to forecast intraday returns. This is done by using the discrete wavelet transform as input to a Transformer neural network with binary labels signifying a positive or negative next-day return. The input is limited to a time horizon of 30 days, since the entire history is unlikely to be necessary: the discrete wavelet transform from five years ago carries little information when predicting the next day's return. The network is evaluated in terms of accuracy and a "trading strategy" on the OMXS30 index, where the performance of the network is compared with that of the index itself. Overall, the performance of the discrete wavelet transform combined with the Transformer network was modest. It was slightly better than simply going long on the index, but not by much, and once transaction costs are factored in the setup is probably not a worthwhile strategy.
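    The Transformer itself is not sketched here; as an illustration of the input step only, the discrete wavelet transform of a rolling 30-day return window can be computed with PyWavelets (the wavelet family and decomposition level are assumptions, not the thesis's choices).

        # Feature step only: DWT coefficients of each rolling 30-day return window.
        import numpy as np
        import pywt

        def dwt_features(returns, window=30, wavelet="db4", level=2):
            # One row of concatenated DWT coefficients per window; these rows feed the classifier.
            rows = []
            for t in range(window, len(returns) + 1):
                coeffs = pywt.wavedec(returns[t - window:t], wavelet, level=level)
                rows.append(np.concatenate(coeffs))
            return np.array(rows)

        # X = dwt_features(daily_returns)   # daily_returns: hypothetical 1-D array of index returns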

    Download full text (pdf)
    fulltext
  • 140. Herman, Jon
    et al.
    Usher, William
    SALib: An open-source Python library for Sensitivity Analysis2017In: Journal of Open Source Software, E-ISSN 2475-9066, Vol. 2, no 9, p. 3873-3878Article in journal (Refereed)
    Abstract [en]

    SALib contains Python implementations of commonly used global sensitivity analysis methods, including Sobol (Sobol’ 2001, Andrea Saltelli (2002), Andrea Saltelli et al. (2010)), Morris (Morris 1991, Campolongo, Cariboni, and Saltelli (2007)), FAST (Cukier et al. 1973, A. Saltelli, Tarantola, and Chan (1999)), Delta Moment-Independent Measure (E. Borgonovo 2007, Plischke, Borgonovo, and Smith (2013)), Derivative-based Global Sensitivity Measure (DGSM) (Sobol’ and Kucherenko 2009), and Fractional Factorial Sensitivity Analysis (Andrea Saltelli et al. 2008) methods. SALib is useful in simulation, optimisation and systems modelling to calculate the influence of model inputs or exogenous factors on outputs of interest. SALib exposes a range of global sensitivity analysis techniques to the scientist, researcher and modeller, making it easy to incorporate these techniques into typical modelling workflows.
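    A minimal usage sketch following SALib's documented Sobol workflow (the model here is the standard Ishigami test function, used only as a stand-in):

        # Sobol sensitivity analysis with SALib on a stand-in test model.
        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {
            "num_vars": 3,
            "names": ["x1", "x2", "x3"],
            "bounds": [[-np.pi, np.pi]] * 3,
        }

        X = saltelli.sample(problem, 1024)                      # generate model inputs
        Y = (np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2
             + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0]))            # evaluate the stand-in model
        Si = sobol.analyze(problem, Y)                          # first-order and total-order indices
        print(Si["S1"], Si["ST"])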

  • 141.
    Hlöðver Friðriksson, Jón
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Ågren, Erik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Neural Ordinary Differential Equations for Anomaly Detection2021Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Today, a large amount of time series data is being produced from a variety of devices such as smart speakers, cell phones and vehicles. This data can be used to make inferences and predictions. Neural network-based methods are among the most popular ways to model time series data. The field of neural networks is constantly expanding, and new methods and model variants are frequently introduced. In 2018, a new family of neural networks was introduced: Neural Ordinary Differential Equations (Neural ODEs). Neural ODEs have shown great potential in modelling the dynamics of temporal data. Here we present an investigation into using Neural Ordinary Differential Equations for anomaly detection. We tested two model variants, LSTM-ODE and latent-ODE. The former utilises a neural ODE to model the continuous-time hidden state in between observations of an LSTM model, while the latter is a variational autoencoder that uses the LSTM-ODE as encoder and a Neural ODE as decoder. Both models are suited for modelling sparsely and irregularly sampled time series data. Here, we test their ability to detect anomalies at various levels of sparsity and irregularity of the data. The models are compared to a Gaussian mixture model, a vanilla LSTM model and an LSTM variational autoencoder. Experimental results using the Human Activity Recognition dataset showed that the Neural ODE-based models detected anomalies better than their LSTM-based counterparts. However, the computational training cost of the Neural ODE models was considerably higher than for the models that only utilise the LSTM architecture. The Neural ODE-based methods also consumed more memory than their LSTM counterparts.
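    The LSTM-ODE and latent-ODE models are not reproduced here; a minimal sketch of the basic building block, a neural ODE evolving a hidden state between irregular observation times with the torchdiffeq package, might look as follows (dimensions and observation times are placeholders).

        # Neural-ODE building block only, not the thesis's full anomaly-detection models.
        import torch
        import torch.nn as nn
        from torchdiffeq import odeint      # assumed dependency providing a differentiable ODE solver

        class ODEFunc(nn.Module):
            # Parameterizes dh/dt = f(h, t) with a small MLP.
            def __init__(self, hidden_dim=16):
                super().__init__()
                self.net = nn.Sequential(nn.Linear(hidden_dim, 64), nn.Tanh(),
                                         nn.Linear(64, hidden_dim))

            def forward(self, t, h):
                return self.net(h)

        func = ODEFunc()
        h0 = torch.zeros(1, 16)                          # hidden state after the latest observation
        t = torch.tensor([0.0, 0.7, 1.3])                # irregular observation times
        h_traj = odeint(func, h0, t)                     # hidden state evaluated at each time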

    Download full text (pdf)
    fulltext
  • 142. Honkonen, J.
    et al.
    Lučivjansky, T.
    Škultety, Viktor
    KTH, Centres, Nordic Institute for Theoretical Physics NORDITA.
    Critical behavior of directed percolation process in the presence of compressible velocity field2017In: CHAOS 2017 - Proceedings: 10th Chaotic Modeling and Simulation International Conference, ISAST: International Society for the Advancement of Science and Technology , 2017, p. 383-402Conference paper (Refereed)
    Abstract [en]

    Various systems exhibit universal behavior at the critical point. A typical example of non-equilibrium critical behavior is directed bond percolation, which exhibits an active-to-absorbing-state phase transition in the vicinity of the critical percolation probability. An interesting question is how turbulent mixing influences its critical behavior. In this work we assume that the turbulent mixing is generated by the compressible Navier-Stokes equation, where the compressibility is described by an additional field related to the density. Using field-theoretic models and renormalization group methods, we investigate the large-scale, long-time behavior.

  • 143.
    Hu, Jiangping
    et al.
    Univ Elect Sci & Technol China, Sch Automat Engn, Chengdu 611731, Sichuan, Peoples R China.;Univ Elect Sci & Technol China, Yangtze Delta Reg Inst Huzhou, Huzhou 313001, Peoples R China..
    Ghosh, Bijoy Kumar
    Univ Elect Sci & Technol China, Sch Automat Engn, Chengdu 611731, Sichuan, Peoples R China.;Texas Tech Univ, Dept Math & Stat, Lubbock, TX 79409 USA..
    Hu, Xiaoming
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Lynnyk, Volodymyr
    Czech Acad Sci, Inst Informat Theory & Automat, Pod Vodarenskou Vezi 4, Praha 8, Slovakia..
    Papacek, Stepan
    Czech Acad Sci, Inst Informat Theory & Automat, Pod Vodarenskou Vezi 4, Praha 8, Slovakia..
    Rehak, Branislav
    Czech Acad Sci, Inst Informat Theory & Automat, Pod Vodarenskou Vezi 4, Praha 8, Slovakia..
    Special issue on nonlinear and multi-agent systems: Modeling, control and optimization2023In: Kybernetika (Praha), ISSN 0023-5954, E-ISSN 1805-949X, Vol. 59, no 3, p. 339-341Article in journal (Other academic)
  • 144.
    Huang, Weiran
    et al.
    Tsinghua Univ, Beijing, Peoples R China..
    Ok, Jungseul
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Li, Liang
    Ant Financial Grp, Hangzhou, Peoples R China..
    Chen, Wei
    Microsoft Res, Beijing, Peoples R China..
    Combinatorial Pure Exploration with Continuous and Separable Reward Functions and Its Applications2018In: Proceedings Of The Twenty-Seventh International Joint Conference On Artificial Intelligence / [ed] Lang, J, IJCAI-INT JOINT CONF ARTIF INTELL , 2018, p. 2291-2297Conference paper (Refereed)
    Abstract [en]

    We study the Combinatorial Pure Exploration problem with Continuous and Separable reward functions (CPE-CS) in the stochastic multi-armed bandit setting. In a CPE-CS instance, we are given several stochastic arms with unknown distributions, as well as a collection of possible decisions. Each decision has a reward according to the distributions of arms. The goal is to identify the decision with the maximum reward, using as few arm samples as possible. The problem generalizes the combinatorial pure exploration problem with linear rewards, which has attracted significant attention in recent years. In this paper, we propose an adaptive learning algorithm for the CPE-CS problem, and analyze its sample complexity. In particular, we introduce a new hardness measure called the consistent optimality hardness, and give both the upper and lower bounds of sample complexity. Moreover, we give examples to demonstrate that our solution has the capacity to deal with non-linear reward functions.

  • 145.
    Hulst, Naomi
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Exploring persistent homology as a method for capturing functional connectivity differences in Parkinson’s Disease.2022Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Parkinson’s Disease (PD) is the fastest growing neurodegenerative disease, currently affecting two to three percent of the population over 65. Studying functional connectivity (FC) in PD patients may provide new insights into how the disease alters brain organization in different subjects. We explored persistent homology (PH) as a method for studying FC based on the functional magnetic resonance imaging (fMRI) recordings of 63 subjects, of which 56 were diagnosed with PD. 

    We used PH to translate each set of fMRI recordings into a stable rank. Stable ranks are homological invariants that are amenable for statistical analysis. The pipeline has multiple parameters, and we explored the effect of these parameters on the shape of the stable ranks. Moreover, we fitted functions to reduce the stable ranks to points in two or three dimensions. We clustered the stable ranks based on the fitted parameter values and based on the integral distance between them.
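    The stable-rank computation itself relies on specialised tooling and is not sketched here, but the persistent-homology step of such a pipeline, turning a functional-connectivity matrix into persistence diagrams, can be illustrated with the ripser package (the connectivity matrix below is a random placeholder).

        # Persistence diagrams from a (placeholder) functional-connectivity matrix.
        import numpy as np
        from ripser import ripser

        rng = np.random.default_rng(0)
        signals = rng.standard_normal((90, 200))         # stand-in for 90 regional fMRI time series
        corr = np.corrcoef(signals)
        dist = 1.0 - np.abs(corr)                        # one common correlation-to-distance choice

        dgms = ripser(dist, distance_matrix=True, maxdim=1)["dgms"]
        print(len(dgms[0]), "H0 bars,", len(dgms[1]), "H1 bars")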

    For some of the parameter combinations, not all clusters were located in the space covered by controls. These clusters correspond to patients with a topologically distinct connectivity structure, which may be clinically relevant. However, we found no relation between the clusters and the medication status or cognitive ability of the patients.

    It should be noted that this study was an exploration of applying persistent homology to PD data, and that statistical testing was not performed. Consequently, the presented results should be considered with care. Furthermore, we did not explore the full parameter space, as time was limited and the data set was small. In a follow-up study, a measurable desired outcome of the pipeline should be defined and the data set should be expanded to allow for optimizing over the full parameter space.

    Download full text (pdf)
    fulltext
  • 146.
    Hulström, Gabriella
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Consistent Projection of the Balance Sheet: A Holistic Approach to Modelling Interest Rate Risk in the Banking Book2021Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    When modelling risk in the banking book, a simple capital-level approach can fail to capture the interactions between different risk measures or risk classes, since they are modelled separately. In this thesis we propose a model for projecting the book value of a run-off balance sheet portfolio of fixed and variable rate loans, while also calculating net interest income, economic value of equity, capital requirement and capital cost within the same model. Using adjoint algorithmic differentiation, we also retrieve the sensitivities of each measure and of the balance sheet with respect to a term structure of zero rates, over the lifetime of the portfolio. The model is an attempt at a holistic approach to modelling interest rate risk in the banking book, and its design allows for extensions to other financial risk classes such as credit risk and liquidity risk.
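    As a small illustration of the adjoint/algorithmic differentiation idea (not the thesis model), reverse-mode automatic differentiation can return the sensitivity of a present value to every point on a zero-rate curve in a single sweep; the cash flows and rates below are placeholders.

        # Zero-rate sensitivities of a toy fixed-rate loan via reverse-mode autodiff (JAX).
        import jax
        import jax.numpy as jnp

        times = jnp.array([1.0, 2.0, 3.0, 4.0, 5.0])          # cash-flow times in years (assumed)
        cashflows = jnp.array([5.0, 5.0, 5.0, 5.0, 105.0])    # assumed fixed-rate loan cash flows

        def present_value(zero_rates):
            discount = jnp.exp(-zero_rates * times)           # continuously compounded discounting
            return jnp.sum(cashflows * discount)

        zero_rates = jnp.full(5, 0.02)
        pv = present_value(zero_rates)
        sensitivities = jax.grad(present_value)(zero_rates)   # dPV/d(zero rate) per tenor, one sweep
        print(pv, sensitivities)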

    Download full text (pdf)
    fulltext
  • 147.
    Hult, Henrik
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Kiessling, Jonas
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Algorithmic trading with Markov chainsManuscript (preprint) (Other academic)
    Abstract [en]

    An order book consists of a list of all buy and sell offers, represented by price and quantity, available to a market agent. The order book changes rapidly, within fractions of a second, due to new orders being entered into the book. The volume at a certain price level may increase due to limit orders, i.e. orders to buy or sell placed at the end of the queue, or decrease because of market orders or cancellations.

    In this paper a high-dimensional Markov chain is used to represent the state and evolution of the entire order book. The design and evaluation of optimal algorithmic strategies for buying and selling is studied within the theory of Markov decision processes. General conditions are provided that guarantee the existence of optimal strategies. Moreover, a value-iteration algorithm is presented that enables finding optimal strategies numerically.
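    The order-book chain itself is high-dimensional, but the value-iteration step referred to above is standard; a generic sketch for a finite, discounted MDP follows (the transition and reward arrays are placeholders).

        # Generic value iteration; P and R are placeholders, not the calibrated order-book model.
        import numpy as np

        def value_iteration(P, R, gamma=0.99, tol=1e-8):
            # P[a, s, s'] are transition probabilities, R[a, s] expected one-step rewards.
            n_actions, n_states, _ = P.shape
            V = np.zeros(n_states)
            while True:
                Q = R + gamma * (P @ V)              # Q[a, s] = R[a, s] + gamma * sum_s' P[a, s, s'] V[s']
                V_new = Q.max(axis=0)
                if np.max(np.abs(V_new - V)) < tol:
                    return V_new, Q.argmax(axis=0)   # optimal values and a greedy (optimal) policy
                V = V_new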

    As an illustration a simple version of the Markov chain model is calibrated to high-frequency observations of the order book in a foreign exchange market. In this model, using an optimally designed strategy for buying one unit provides a significant improvement, in terms of the expected buy price, over a naive buy-one-unit strategy.

  • 148.
    Hultin, Hanna
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.). SEB Group, Stockholm, Sweden.
    Hult, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Proutiere, Alexandre
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Samama, Samuel
    SEB Group, Stockholm, Sweden.
    Tarighati, Ala
    SEB Group, Stockholm, Sweden.
    A generative model of a limit order book using recurrent neural networks2023In: Quantitative finance (Print), ISSN 1469-7688, E-ISSN 1469-7696, Vol. 23, no 6, p. 931-958Article in journal (Refereed)
    Abstract [en]

    In this work, a generative model based on recurrent neural networks for the complete dynamics of a limit order book is developed. The model captures the dynamics of the limit order book by decomposing the probability of each transition into a product of conditional probabilities of order type, price level, order size and time delay. Each such conditional probability is modelled by a recurrent neural network. Several evaluation metrics for generative models related to trading execution are introduced. Using these metrics, it is demonstrated that the generative model can be successfully trained to fit both synthetic and real data from the Nasdaq Stockholm exchange.

  • 149.
    Hummelgren, Olof
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Financial Modelling Using Fractional Processes And The Wiener Chaos Expansion2022Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The aim of this thesis is to simulate stochastic models that are driven by a fractional Brownian motion process and to apply these methods to financial applications related to yield rate and asset price modelling. Several rough volatility processes are used to model the asset price and yield dynamics.

    Firstly, fractional processes of Cox-Ingersoll-Ross, CEV and Vasicek types are introduced as models for volatility and yield data. In this framework, the Hurst parameter, which determines the covariance structure of the fBM process, can be estimated directly from observed data series using a least squares log-periodogram approach. The remaining model parameters are estimated using a combination of maximum likelihood and expectation-based estimates.
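    The least squares log-periodogram idea can be illustrated compactly: regress the log periodogram of the increment series on log frequency near zero; the slope determines the memory parameter and hence a Hurst estimate (the low-frequency cutoff below is an assumption).

        # GPH-style log-periodogram estimate of the Hurst parameter from fBM increments.
        import numpy as np

        def hurst_log_periodogram(increments):
            n = len(increments)
            freqs = np.fft.rfftfreq(n)[1:]                                        # Fourier frequencies, zero excluded
            I = np.abs(np.fft.rfft(increments - increments.mean()))[1:] ** 2 / n  # periodogram
            m = int(np.sqrt(n))                                                   # assumed low-frequency cutoff
            slope, _ = np.polyfit(np.log(freqs[:m]), np.log(I[:m]), 1)
            return (1.0 - slope) / 2.0                                            # spectrum ~ freq**(1 - 2H) near zero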

    For the modelling and pricing of assets, one model that is studied is the fractional Heston model, which is used to model an asset price process using both observed asset and volatility data. Two other rough volatility models, constructed so as to have log-normal returns, are also studied. These processes, called exponential models 1 and 2 in the thesis, have rough volatility characterized by the CEV and Vasicek processes.

    Additionally, the first-order Wiener Chaos Expansion is implemented and explored in two ways. Firstly, the Chaos Expansion is applied to a parametric fractional stochastic model to generate a Wick product process, which is found to resemble the underlying process. Secondly, it is used to generate an approximate expansion of real yield rate data using a bootstrap sampling approach.

    Download full text (pdf)
    fulltext
  • 150.
    Hussein, Seif
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Minimum Cost Distributed Computing using Sparse Matrix Factorization2023Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Distributed computing is an approach where computationally heavy problems are broken down into more manageable sub-tasks, which can then be distributed across a number of different computers or servers, allowing for increased efficiency through parallelization. This thesis explores an established distributed computing setting, in which the computationally heavy task involves a number of users requesting a linearly separable function to be computed across several servers. This setting results in a condition for feasible computation and communication that can be described by a matrix factorization problem. Moreover, the associated costs with computation and communication are directly related to the number of nonzero elements of the matrix factors, making sparse factors desirable for minimal costs. The Alternating Direction Method of Multipliers (ADMM) is explored as a possible method of solving the sparse matrix factorization problem. To obtain convergence results, extensive convex analysis is conducted on the ADMM iterates, resulting in a theorem that characterizes the limiting points of the iterates as KKT points for the sparse matrix factorization problem. Using the results of the analysis, an algorithm is devised from the ADMM iterates, which can be applied to the sparse matrix factorization problem. Furthermore, an additional implementation is considered for a noisy scenario, in which existing theoretical results are used to justify convergence. Finally, numerical implementations of the devised algorithms are used to perform sparse matrix factorization.
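    The matrix-factorization ADMM derived in the thesis is not reproduced here; as a minimal illustration of the ADMM mechanics on the simplest sparsity-regularised problem (the lasso), where the l1 term is handled by soft-thresholding:

        # ADMM for the lasso: minimize 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z.
        # Illustration of ADMM only, not the sparse matrix factorization algorithm of the thesis.
        import numpy as np

        def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
            n = A.shape[1]
            x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
            AtA_inv = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cached for the x-update
            Atb = A.T @ b
            soft = lambda v, k: np.sign(v) * np.maximum(np.abs(v) - k, 0.0)
            for _ in range(n_iter):
                x = AtA_inv @ (Atb + rho * (z - u))              # quadratic x-update
                z = soft(x + u, lam / rho)                       # sparsity-inducing z-update
                u = u + x - z                                    # scaled dual update
            return z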

    Download full text (pdf)
    fulltext