  • 51.
    Bergroth, Jonas
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Performance and risk analysis of the Hodrick-Prescott filter (2011). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
  • 52.
    Bergroth, Magnus
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Carlsson, Anders
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Estimation of a Liquidity Premium for Swedish Inflation Linked Bonds (2014). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    It is well known that the inflation-linked breakeven inflation, defined as the difference between a nominal yield and an inflation-linked yield, is sometimes used as an approximation of the market's inflation expectation. D'Amico et al. (2009, [5]) show that this is a poor approximation for the US market. Based on their work, this thesis shows that the approximation is also poor for the Swedish bond market. This is done by modelling the Swedish bond market using a five-factor latent variable model, in which an inflation-linked bond-specific premium is introduced. Latent variables and parameters are estimated using a Kalman filter and maximum likelihood estimation. The conclusion is drawn that the modelling was successful and that the model-implied outputs gave plausible results.
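    The estimation route the abstract describes (latent factors filtered with a Kalman filter, parameters chosen by maximum likelihood) can be illustrated with a minimal linear-Gaussian filter that returns the log-likelihood to be maximized. This is a sketch only; the matrices, dimensions and names below are placeholders, not the thesis's five-factor specification.

    ```python
    import numpy as np

    def kalman_loglik(y, A, C, Q, R, x0, P0):
        """Log-likelihood of observations y (T x m) under the state-space model
        x_t = A x_{t-1} + w_t,  y_t = C x_t + v_t,  w ~ N(0,Q), v ~ N(0,R)."""
        x, P = x0, P0
        loglik = 0.0
        for yt in y:
            x, P = A @ x, A @ P @ A.T + Q          # predict
            e = yt - C @ x                         # innovation
            S = C @ P @ C.T + R                    # innovation covariance
            loglik -= 0.5 * (len(yt) * np.log(2 * np.pi)
                             + np.linalg.slogdet(S)[1]
                             + e @ np.linalg.solve(S, e))
            K = P @ C.T @ np.linalg.inv(S)         # Kalman gain
            x, P = x + K @ e, (np.eye(len(x)) - K @ C) @ P  # update
        return loglik
    ```

    A maximum-likelihood estimate is then obtained by maximizing this function over the free model parameters with a numerical optimizer.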

  • 53.
    Bergström, Sebastian
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Customer segmentation of retail chain customers using cluster analysis (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this thesis, cluster analysis was applied to data comprising customer spending habits at a retail chain in order to perform customer segmentation. The method used was a two-step cluster procedure. The first step consisted of feature engineering, a square-root transformation of the data in order to handle big spenders in the data set, and finally principal component analysis in order to reduce the dimensionality of the data set; this was done to reduce the effects of high dimensionality. The second step consisted of applying clustering algorithms to the transformed data. The methods used were K-means clustering, Gaussian mixture models in the MCLUST family, t-distributed mixture models in the tEIGEN family and non-negative matrix factorization (NMF). For the NMF clustering a slightly different data pre-processing step was taken; specifically, no PCA was performed. Clustering partitions were compared on the basis of the Silhouette index, the Davies-Bouldin index and subject matter knowledge, which revealed that K-means clustering with K = 3 produces the most reasonable clusters. This algorithm was able to separate the customers into different segments depending on how many purchases they made overall, and in these clusters some minor differences in spending habits are also evident. In other words, there is some support for the claim that the customer segments have some variation in their spending habits.
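    A compact sketch of the two-step procedure outlined above (square-root transform, PCA, then K-means scored with the silhouette and Davies-Bouldin indices). The sklearn calls are standard, but the gamma-distributed stand-in spend data and all sizes are illustrative assumptions, not the thesis's data.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score, davies_bouldin_score

    rng = np.random.default_rng(0)
    X = rng.gamma(shape=2.0, scale=50.0, size=(1000, 40))   # stand-in spend features

    Xt = np.sqrt(X)                            # square-root transform for big spenders
    Z = PCA(n_components=5).fit_transform(Xt)  # dimensionality reduction

    for k in (2, 3, 4, 5):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Z)
        print(k, silhouette_score(Z, labels), davies_bouldin_score(Z, labels))
    ```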

  • 54.
    Berlin, Daniel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Multi-class Supervised Classification Techniques for High-dimensional Data: Applications to Vehicle Maintenance at Scania (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In vehicle repair, locating the cause of an error can often be more time-consuming than the repair itself. Hence, a systematic way to accurately predict the fault-causing part would constitute a valuable tool, especially for errors that are difficult to diagnose. This thesis explores the predictive ability of Diagnostic Trouble Codes (DTCs), produced by the electronic system on Scania vehicles, as indicators of fault-causing parts. The statistical analysis is based on about 18,800 observations of vehicles where both DTCs and replaced parts could be identified during the period March 2016 - March 2017. Two different approaches to forming classes are evaluated. Many classes had only a few observations and, to give the classifiers a fair chance, observations were omitted from classes based on their frequency in the data. After processing, the resulting data comprised 1,547 observations of 4,168 features, demonstrating very high dimensionality and making it impossible to apply standard methods of large-sample statistical inference. Two procedures of supervised statistical learning that are able to cope with high dimensionality and multiple classes, Support Vector Machines and Neural Networks, are exploited and evaluated. The analysis showed that on data with 1,547 observations of 4,168 features (unique DTCs) and 7 classes, SVM yielded an average prediction accuracy of 79.4% compared to 75.4% using NN. The conclusion of the analysis is that DTCs hold potential to be used as indicators of fault-causing parts in a predictive model, but in order to increase prediction accuracy the learning data needs improvements. Scope for future research to improve and expand the model, along with practical suggestions for exploiting supervised classifiers at Scania, is provided. Keywords: Statistical learning, Machine learning, Neural networks, Deep learning, Supervised learning, High dimensionality

  • 55.
    Berntsson, Fredrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Methods of high-dimensional statistical analysis for the prediction and monitoring of engine oil quality (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Engine oils fill important functions in the operation of modern internal combustion engines. Many essential functions are provided by compounds that are either sacrificial or susceptible to degradation. The engine oil will eventually fail to provide these functions, with possibly irreparable damage as a result. To decide how often the oil should be changed, there are several laboratory tests to monitor the oil condition, e.g. FTIR (oxidation, nitration, soot, water), viscosity, TAN (acidity), TBN (alkalinity), ICP (elemental analysis) and GC (fuel dilution). These oil tests are, however, often labor-intensive and costly, and it would be desirable to supplement and/or replace some of them with simpler and faster methods. One way is to utilise the whole spectrum of the FTIR measurements already performed. FTIR is traditionally used to monitor chemical properties at specific wavelengths, but the spectrum also provides information, albeit in a more multivariate way, relevant for viscosity, TAN and TBN. In order to make use of the whole FTIR spectrum, methods capable of handling high-dimensional data have to be used. Partial Least Squares Regression (PLSR) is used to predict the relevant chemical properties.

    This survey also considers feature selection methods based on the second-order statistic Higher Criticism as well as hierarchical clustering. The feature selection methods are used to ease further research on how infrared data may be put to use as a tool for more automated oil analyses.

    Results show that PLSR may be utilised to provide reliable estimates of the mentioned chemical quantities. In addition, the mentioned feature selection methods may be applied without losing prediction power. The feature selection methods considered may also aid analysis of the engine oil itself and future work on how to utilise infrared properties in the analysis of engine oil in other situations.
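    A minimal sketch of PLSR on FTIR-like data as described above, using scikit-learn's PLSRegression. The synthetic spectrum matrix, the narrow "informative band", and the choice of 10 components are illustrative assumptions, not the thesis's setup.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    X = rng.standard_normal((80, 1500))            # 80 oil samples x 1500 FTIR points
    beta = np.zeros(1500)
    beta[100:110] = 1.0                            # a narrow informative band (assumed)
    y = X @ beta + 0.5 * rng.standard_normal(80)   # stand-in for e.g. TAN

    pls = PLSRegression(n_components=10).fit(X[:60], y[:60])
    pred = pls.predict(X[60:]).ravel()
    print(np.corrcoef(pred, y[60:])[0, 1] ** 2)    # out-of-sample fit check
    ```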

  • 56.
    Bisot, Clémence
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Spectral Data Processing for Steel Industry (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    For the steel industry, knowing and understanding the characteristics of a steel strip's surface at every step of the production process is a key element in controlling final product quality. Today, as quality requirements increase, this task becomes more and more important. The surfaces of new steel grades with complex chemical compositions exhibit behaviours that are especially hard to master. For those grades in particular, surface control is critical and difficult.

    One of the promising techniques for addressing the problem of surface quality control is spectral analysis. Over the last few years, ArcelorMittal, the world's leading integrated steel and mining company, has led several projects to investigate the possibility of using devices to measure the light spectrum of their products at different stages of production.

    The large amount of data generated by these devices makes it absolutely necessary to develop efficient data-treatment pipelines to get meaningful information out of the recorded spectra. In this thesis, we developed mathematical models and statistical tools to treat signals measured with spectrometers in the framework of different research projects.

  • 57. Bizjajeva, Svetlana
    et al.
    Olsson, Jimmy
    Antithetic sampling for sequential Monte Carlo methods with application to state-space models (2016). In: Annals of the Institute of Statistical Mathematics, ISSN 0020-3157, E-ISSN 1572-9052, Vol. 68, no. 5, pp. 1025-1053. Article in journal (Refereed)
    Abstract [en]

    In this paper, we cast the idea of antithetic sampling, widely used in standard Monte Carlo simulation, into the framework of sequential Monte Carlo methods. We propose a version of the standard auxiliary particle filter where the particles are mutated blockwise in such a way that all particles within each block are, first, offspring of a common ancestor and, second, negatively correlated conditionally on this ancestor. By deriving and examining the weak limit of a central limit theorem describing the convergence of the algorithm, we conclude that the asymptotic variance of the produced Monte Carlo estimates can be straightforwardly decreased by means of antithetic techniques when the particle filter is close to fully adapted, which involves approximation of the so-called optimal proposal kernel. As an illustration, we apply the method to optimal filtering in state-space models.
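    The paper's blockwise antithetic particle construction is involved; as background, here is the basic antithetic-variates idea in plain Monte Carlo, with a toy integrand. This is purely illustrative and is not the authors' algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000
    f = lambda z: np.exp(z)          # toy integrand; target is E[f(Z)], Z ~ N(0,1)

    z = rng.standard_normal(n)
    plain = f(z)                     # standard Monte Carlo draws
    anti = 0.5 * (f(z) + f(-z))      # each draw paired with its negatively
                                     # correlated mirror image

    print(plain.mean(), plain.var() / n)   # estimate, variance of the mean
    print(anti.mean(), anti.var() / n)     # same target, reduced variance
    ```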

  • 58.
    Bjarnadottir, Frida
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Implementation of CoVaR, A Measure for Systemic Risk (2012). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In recent years we have witnessed how distress can spread quickly through the financial system and threaten financial stability. Hence, there has been increased focus on developing systemic risk indicators that can be used by central banks and others as a monitoring tool. For Sveriges Riksbank it is of great value to be able to quantify the risks that can threaten the Swedish financial system, and CoVaR is a systemic risk measure implemented here with that purpose. CoVaR, which stands for conditional Value at Risk, measures a financial institution's contribution to systemic risk and its contribution to the risk of other financial institutions. The conclusion is that CoVaR, together with other systemic risk indicators, can help provide a better understanding of the risks threatening the stability of the Swedish financial system.
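    A minimal historical-simulation sketch of the conditioning idea behind CoVaR: the system's VaR computed on days when one institution sits at or below its own VaR. The simulated returns and the 5% level are illustrative, and the thesis's estimation approach (and the exact CoVaR variant it uses) may differ.

    ```python
    import numpy as np

    def covar(system, institution, alpha=0.05):
        """Historical-simulation CoVaR: the alpha-quantile of system returns,
        conditional on the institution being at or below its own alpha-VaR."""
        var_inst = np.quantile(institution, alpha)
        return np.quantile(system[institution <= var_inst], alpha)

    rng = np.random.default_rng(3)
    inst = 0.02 * rng.standard_t(df=4, size=5000)               # institution returns
    syst = 0.6 * inst + 0.01 * rng.standard_t(df=4, size=5000)  # system returns

    print(covar(syst, inst))            # conditional VaR of the system
    print(np.quantile(syst, 0.05))      # unconditional VaR, for comparison
    ```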

  • 59.
    Bjarnason, Jónas
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Optimized Transport Planning through Coordinated Collaboration between Transport Companies (2013). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis studies a specific transport planning problem, based on a realistic scenario in the transport industry, dealing with the delivery of goods by transport companies to their customers. The main aspect of the planning problem is whether each company should deliver the cargo on its own or through a collaboration of companies, in which the companies share the deliveries. In order to find out whether or not collaboration should take place, the transport planning problem is represented as a mathematical optimization problem, formulated using a column generation method and with an objective function that minimizes costs. Three different solution cases are considered, each taking into account different combinations of vehicles used for delivering the cargo as well as different maximum allowed driving times for the vehicles.

    The goal of the thesis is twofold: firstly, to see if the optimization problem can be solved and, secondly, if the problem is solvable, to investigate whether it is beneficial for transport companies to collaborate under the aforementioned circumstances in order to incur lower costs in all instances considered. It turns out that both goals are achieved. To achieve the first goal, a few simplifications need to be made. The simplifications pertain both to the formulation of the problem and to its implementation, as it is not only difficult to formulate a transport planning problem of this kind with respect to real-life situations, but the problem is also difficult to solve due to its computational complexity. As for the second goal, a numerical comparison between the different instances for the two scenarios demonstrates that the costs under collaborative transport planning turn out to be considerably lower, which suggests that, under the circumstances considered in the thesis, collaboration between transport companies is beneficial for the companies involved.

  • 60.
    Bjärkby, Sarah
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Grägg, Sofia
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A Cluster Analysis of Stocks to Define an Investment Strategy (2019). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    This thesis investigates the possibility of creating an investment strategy by performing a cluster analysis on stock returns. This is done to provide a diversified portfolio, which has multiple advantages, for instance that the risk of the investment decreases. The cluster analysis was performed using various methods - average linkage, centroid and Ward's method - for the purpose of determining a preferable method. Ward's method was the most appropriate method to use according to the results, since it was the only method providing an analysable result. The investment strategy was therefore based on the result of Ward's method. This resulted in a portfolio consisting of eight stocks from four different clusters, with the eight stocks representing four sectors. Most of the results were not interpretable, and some of the decision making regarding the number of clusters and the appropriate portfolio composition was not entirely scientific. Therefore, this thesis should be considered a first indication of the adequacy of using cluster analysis for the purpose of creating an investment strategy.
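    A minimal sketch of the Ward's-method step described above, using scipy's hierarchical clustering. The synthetic return matrix and the four-cluster cut are illustrative assumptions, not the thesis's data or choices.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(4)
    returns = rng.standard_normal((60, 252))   # 60 stocks x one year of daily returns

    Z = linkage(returns, method='ward')        # Ward's minimum-variance linkage
    labels = fcluster(Z, t=4, criterion='maxclust')  # cut the dendrogram: 4 clusters
    print(np.bincount(labels)[1:])             # cluster sizes
    ```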

  • 61.
    Björk, Tomas
    et al.
    Stockholm School of Economics.
    Hult, Henrik
    Dept. of Appl. Math. and Statistics, Universitetsparken 5, 2100 Copenhagen, Denmark.
    A note on Wick products and the fractional Black-Scholes model (2005). In: Finance and Stochastics, ISSN 0949-2984, E-ISSN 1432-1122, Vol. 9, no. 2, pp. 197-209. Article in journal (Refereed)
    Abstract [en]

    In some recent papers (Elliott and van der Hoek 2003; Hu and Oksendal 2003) a fractional Black-Scholes model has been proposed as an improvement of the classical Black-Scholes model (see also Benth 2003; Biagini et al. 2002; Biagini and Oksendal 2004). Common to these fractional Black-Scholes models is that the driving Brownian motion is replaced by a fractional Brownian motion and that the Ito integral is replaced by the Wick integral, and proofs have been presented that these fractional Black-Scholes models are free of arbitrage. These results on absence of arbitrage completely contradict a number of earlier results in the literature which prove that the fractional Black-Scholes model (and related models) will in fact admit arbitrage. The objective of the present paper is to resolve this contradiction by pointing out that the definition of the self-financing trading strategies and/or the definition of the value of a portfolio used in the above papers does not have a reasonable economic interpretation, and thus that the results in these papers are not economically meaningful. In particular we show that in the framework of Elliott and van der Hoek (2003), a naive buy-and-hold strategy does not in general qualify as "self-financing". We also show that in Hu and Oksendal (2003), a portfolio consisting of a positive number of shares of a stock with a positive price may, with positive probability, have a negative "value".

  • 62.
    Blanchet, Jose
    et al.
    Columbia University.
    Hult, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Leder, Kevin
    University of Minnesota.
    Importance sampling for stochastic recurrence equations with heavy-tailed increments (2011). In: Proceedings of the 2011 Winter Simulation Conference, 2011, pp. 3824-3831. Conference paper (Other academic)
    Abstract [en]

    Importance sampling in the setting of heavy-tailed random variables has generally focused on models with additive noise terms. In this work we extend this concept by considering importance sampling for the estimation of rare events in Markov chains of the form X_{n+1} = A_{n+1} X_n + B_{n+1}, X_0 = 0, where the B_n's and A_n's are independent sequences of independent and identically distributed (i.i.d.) random variables, the B_n's are regularly varying and the A_n's are suitably light-tailed relative to the B_n's. We focus on efficient estimation of the rare-event probability P(X_n > b) as b → ∞. In particular, we present a strongly efficient importance sampling algorithm for estimating these probabilities, and present a numerical example showcasing the strong efficiency.
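    To make the model concrete, here is a crude Monte Carlo sketch of the recursion and the naive estimator of P(X_n > b); the Pareto and uniform choices for B and A are illustrative. Crude sampling degenerates as b grows, which is precisely what the paper's strongly efficient importance sampling addresses; the algorithm itself is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def crude_mc(n, b, trials=200_000):
        """Crude Monte Carlo for P(X_n > b), with
        X_{k+1} = A_{k+1} X_k + B_{k+1}, X_0 = 0,
        light-tailed A and regularly varying (Pareto) B."""
        x = np.zeros(trials)
        for _ in range(n):
            A = rng.uniform(0.1, 0.9, size=trials)
            B = rng.pareto(2.5, size=trials) + 1.0
            x = A * x + B
        return (x > b).mean()

    print(crude_mc(n=10, b=50.0))
    ```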

  • 63.
    Blanchet, Jose
    et al.
    Columbia University.
    Hult, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Leder, Kevin
    University of Minnesota.
    Rare-Event Simulation for Stochastic Recurrence Equations with Heavy-Tailed Innovations (2013). In: ACM Transactions on Modeling and Computer Simulation, ISSN 1049-3301, E-ISSN 1558-1195, Vol. 23, no. 4, p. 22. Article in journal (Refereed)
    Abstract [en]

    In this article, rare-event simulation for stochastic recurrence equations of the form X_{n+1} = A_{n+1} X_n + B_{n+1}, X_0 = 0 is studied, where {A_n; n >= 1} and {B_n; n >= 1} are independent sequences consisting of independent and identically distributed real-valued random variables. It is assumed that the tail of the distribution of B_1 is regularly varying, whereas the distribution of A_1 has a suitably light tail. The problem of efficient estimation, via simulation, of quantities such as P{X_n > b} and P{sup_{k <= n} X_k > b} for large b and n is studied. Importance sampling strategies are investigated that provide unbiased estimators with bounded relative error as b and n tend to infinity.

  • 64.
    Blazevic, Darko
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Marcusson, Fredrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Volatility Evaluation Using Conditional Heteroscedasticity Models on Bitcoin, Ethereum and Ripple (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This study examines and compares the volatility in-sample fit and out-of-sample forecasts of four different conditional heteroscedasticity models, namely ARCH, GARCH, EGARCH and GJR-GARCH, applied to Bitcoin, Ethereum and Ripple. The models are fitted over the period from 2016-01-01 to 2019-01-01 and then used to obtain one-day rolling forecasts during the period from 2018-01-01 to 2019-01-01. The study investigates three different themes: the structure of the modelling framework, the complexity of the models, and the relation between a good in-sample fit and a good out-of-sample forecast. AIC and BIC are used to evaluate the in-sample fit, while MSE, MAE and R2LOG are used as loss functions when evaluating the out-of-sample forecast against the chosen Parkinson volatility proxy. The results show that a heavier-tailed reference distribution than the normal distribution generally improves the in-sample fit, while this generality is not found for the out-of-sample forecast. Furthermore, it is shown that GARCH-type models clearly outperform ARCH models in both in-sample fit and out-of-sample forecast. For Ethereum, it is shown that the best-fitted models also result in the best out-of-sample forecast for all loss functions, while for Bitcoin none of the best-fitted models result in the best out-of-sample forecast. Finally, for Ripple, no generality between in-sample fit and out-of-sample forecast is found.
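    A small sketch of the evaluation side described above: the Parkinson proxy computed from daily highs and lows, and the three loss functions. The R2LOG form shown is one common convention and is an assumption here; the thesis's exact definitions may differ, and the toy prices and flat forecast are invented.

    ```python
    import numpy as np

    def parkinson_var(high, low):
        """Parkinson variance proxy from daily highs/lows:
        sigma_t^2 ~ (ln(H_t / L_t))^2 / (4 ln 2)."""
        return np.log(np.asarray(high) / np.asarray(low)) ** 2 / (4.0 * np.log(2.0))

    def forecast_losses(proxy, forecast):
        """MSE, MAE and an R2LOG-style loss between proxy and variance forecasts."""
        proxy, forecast = np.asarray(proxy), np.asarray(forecast)
        mse = np.mean((forecast - proxy) ** 2)
        mae = np.mean(np.abs(forecast - proxy))
        r2log = np.mean(np.log(forecast / proxy) ** 2)   # assumed R2LOG convention
        return mse, mae, r2log

    proxy = parkinson_var([103.0, 105.5, 101.2], [99.0, 100.1, 97.8])
    print(forecast_losses(proxy, forecast=np.full(3, 1e-3)))
    ```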

  • 65.
    Blomberg, Niclas
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Higher Criticism Testing for Signal Detection in Rare and Weak Models (2012). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    examples - we need models for selecting a small subset of useful features from high-dimensional data, where the useful features are both rare and weak, this being crucial for e.g. supervised classification of sparse high-dimensional data. A preceding step is to detect the presence of useful features: signal detection. This problem is related to testing a very large number of hypotheses, where the proportion of false null hypotheses is assumed to be very small. However, reliable signal detection will only be possible in certain areas of the two-dimensional sparsity-strength parameter space, the phase space.

    In this report, we focus on two families of distributions, N and χ2. In the former case, features are supposed to be independent and normally distributed. In the latter, in search of a more sophisticated model, we suppose that features depend in blocks, whose empirical separation strength asymptotically follows the non-central χ2ν-distribution.

    Our search for informative features explores Tukey's higher criticism (HC), which is a second-level significance testing procedure for comparing the fraction of observed significances to the expected fraction under the global null.

    Throughout the phase space we investigate the estimated error rate,

    Err = (# falsely rejected H0 + # falsely rejected H1) / # simulations,

    where H0: absence of informative signals, and H1: presence of informative signals, in both the N-case and the χ2ν-case, for ν = 2, 10, 30. In particular, using a feature vector of approximately the same size as in genomic applications, we find that the analytically derived detection boundary is too optimistic in the sense that, close to it, signal detection still fails, and we need to move far from the boundary into the success region to ensure reliable detection. We demonstrate that Err grows fast and irregularly as we approach the detection boundary from the success region.

    In the χ2ν-case, ν > 2, no analytical detection boundary has been derived, but we show that the empirical success region there is smaller than in the N-case, especially as ν increases.
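    A minimal numpy sketch of the HC statistic explored above, computed from sorted p-values; the cap alpha0 on the fraction of smallest p-values used is an illustrative choice, and the uniform draws only exercise the global-null case.

    ```python
    import numpy as np

    def higher_criticism(pvals, alpha0=0.5):
        """Tukey's higher criticism over sorted p-values:
        HC = max_i sqrt(n) * (i/n - p_(i)) / sqrt(p_(i) * (1 - p_(i))),
        taken over the smallest alpha0 * n p-values."""
        p = np.sort(np.asarray(pvals))
        n = len(p)
        i = np.arange(1, n + 1)
        hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
        return hc[: int(alpha0 * n)].max()

    rng = np.random.default_rng(6)
    print(higher_criticism(rng.uniform(size=10_000)))   # behaviour under the global null
    ```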

  • 66.
    Blomberg, Renée
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Who is Granted Disability Benefit in Sweden?: Description of risk factors and the effect of the 2008 law reform (2013). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Disability benefit is a publicly funded benefit in Sweden that provides financial protection to individuals with permanent working-ability impairments due to disability, injury, or illness. The eligibility requirements for disability benefit were tightened June 1, 2008 to require that the working-ability impairment be permanent and that no other factors, such as age or local labor market conditions, can affect eligibility for the benefit. The goal of this paper is to investigate risk factors for the incidence of disability benefit and the effects of the 2008 reform. This is the first study to investigate the impact of the 2008 reform on the demographics of those that received disability benefit. A logistic regression model was used to study the effect of the 2008 law change. The regression results show that the 2008 reform did have a statistically significant effect on the demographics of the individuals who were granted disability benefit. After the reform, women were less overrepresented, the older age groups were more overrepresented, and people with short educations were more overrepresented. Although the variables for SKL regions together were jointly statistically significant, their coefficients were small and the group of variables had the least amount of explanatory value compared to the variables for age, education, gender and the interaction variables.

  • 67.
    Blomkvist, Oscar
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Smart Beta - index weighting (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This study is a thesis concluding a 120-credit master's programme in Mathematics with specialization in Financial Mathematics and Mathematical Statistics at the Royal Institute of Technology (KTH).

    The subject of smart beta is defined and studied in an index fund context. The portfolio weighting schemes tested are: equal weighting, maximum Sharpe ratio, maximum diversification, and fundamental weighting using P/E ratios. The outcome of the strategies is measured in performance (accumulated return), risk, and cost of trading, along with measures of the proportions of different assets in the portfolio.

    The thesis goes through the steps of collecting, ordering, and "cleaning" the data used in the process. A brief explanation of the historical simulation used to estimate stochastic variables such as expected returns and covariance matrices is included, as well as an analysis of the data's distribution.

    The process of optimization, and how the rules for UCITS compliance form optimization programs with constraints, is described.

    The results indicate that all but the most diversified portfolios tested outperform the market-cap-weighted portfolio. In all cases, the trading volumes and the market impact are increased in comparison with the cap-weighted portfolio. The Sharpe ratio maximizer yields a high level of return while keeping the risk low. The fundamentally weighted portfolio performs best, but with higher risk. A combination of the two gives the portfolio with the highest return and lowest risk.

  • 68.
    Bofeldt, Josefine
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Joon, Sara
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Pricing of a balance sheet option limited by a minimum solvency boundary (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Pension companies are required by law to remain above a certain solvency level. The main purpose of this thesis is to determine the cost of remaining above a lower solvency level for different pension companies. This is modelled by an option with a balance sheet as the underlying asset. The balance sheet is assumed to consist of bonds, stocks, liabilities and own funds. Both liabilities and bonds are modelled using forward rates. The data used in this thesis are historical stock prices and forward rates. Several potential models for the stock and forward rate processes are considered, for example the Bates model, the Libor market model and a discrete model based on normal log-normal mixture random variables, which have different properties and distributions. The discrete normal log-normal mixture model is concluded to be the model best suited for stocks and bonds, i.e. the assets, as well as for liabilities. The price of the balance sheet option is determined using quasi-Monte Carlo simulations. The price is determined relative to the initial value of the own funds for different portfolios with different initial solvency levels and different lower solvency bounds.

    The price as a function of the lower solvency bound appears to be exponential and varies with portfolio, initial solvency level and lower solvency bound. The price converges with sufficient accuracy. It is concluded that the model shows that remaining above a lower solvency level entails a significant cost for the pension company. A suggested further improvement is to validate the constructed model against other models.

  • 69.
    Bogren, Felix
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Estimating the Term Structure of Default Probabilities for Heterogeneous Credit Portfolios (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The aim of this thesis is to estimate the term structure of default probabilities for heterogeneous credit portfolios. The term structure is defined as the cumulative distribution function (CDF) of the time until default. Since the CDF is the complement of the survival function, survival analysis is applied to estimate the term structures. To manage long-term survivors and plateaued survival functions, the data are assumed to follow a parametric as well as a semi-parametric mixture cure model. Due to the general intractability of the maximum likelihood of mixture models, the parameters are estimated by the EM algorithm. A simulation study is conducted to assess the accuracy of the EM algorithm applied to the parametric mixture cure model with data characterized by a low default incidence. The simulation study reveals difficulties in estimating the parameters when the data are not gathered over a sufficiently long observational window. The estimated term structures are compared to empirical term structures, determined by the Kaplan-Meier estimator. The results indicate a good fit of the model for longer horizons when applied to each credit type separately, despite difficulties in capturing the dynamics of the term structure for the first one to two years. Both models performed poorly with few defaults, although the parametric model did not seem sensitive to low default rates. In conclusion, the class of mixture cure models is indeed viable for estimating the term structure of default probabilities for heterogeneous credit portfolios.
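    A minimal Kaplan-Meier sketch for the empirical term structures used as the benchmark above; the term structure (CDF) is then F(t) = 1 - S(t). The toy times and censoring flags are invented, not the thesis's portfolio data.

    ```python
    import numpy as np

    def kaplan_meier(times, events):
        """Kaplan-Meier survival estimate; events = 1 for default, 0 for censored.
        Returns (event time, survival just after that time) pairs."""
        t = np.asarray(times, dtype=float)
        d = np.asarray(events, dtype=int)
        s, curve = 1.0, []
        for u in np.unique(t[d == 1]):
            at_risk = (t >= u).sum()
            defaults = ((t == u) & (d == 1)).sum()
            s *= 1.0 - defaults / at_risk
            curve.append((u, s))
        return curve

    # term structure of default probabilities: F(t) = 1 - S(t)
    print(kaplan_meier([2, 3, 3, 5, 8, 9], [1, 1, 0, 1, 0, 1]))
    ```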

  • 70.
    Book, Emil
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Ekelöf, Linus
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A Multiple Linear Regression Model To Assess The Effects of Macroeconomic Factors On Small and Medium-Sized Enterprises (2019). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    Small and medium-sized enterprises (SMEs) have long been considered the backbone of any country's economy for their contribution to growth and prosperity. It is therefore of great importance that governments and legislators adopt policies that optimise the success of SMEs. Recent concerns about an impending recession have made this topic even more relevant, since small companies will have greater difficulty withstanding such an event. This thesis focuses on the effects of macroeconomic factors on SMEs in Sweden, using multiple linear regression. Data were collected for a 10-year period, from 2009 to 2019, at a monthly interval. The end result was a five-variable model with a coefficient of determination of 98%.
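    A self-contained least-squares sketch of the kind of five-variable model with intercept that the abstract mentions; the synthetic macro factors, coefficients and noise level are invented for illustration only.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 120                                    # ten years of monthly observations
    X = rng.standard_normal((n, 5))            # five macro factors (synthetic)
    y = 0.2 + X @ np.array([1.0, -0.5, 0.3, 0.0, 2.0]) + 0.1 * rng.standard_normal(n)

    Xd = np.column_stack([np.ones(n), X])      # design matrix with intercept
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    print(beta, r2)                            # coefficients and R^2
    ```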

  • 71.
    Boros, Daniel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    On Lapse risk factors in Solvency II (2014). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In the wake of the sub-prime crisis of 2008, the European Insurance and Occupational Pensions Authority issued the Solvency II directive, aiming to replace the obsolete Solvency I framework by 2016. Among the quantitative requirements of Solvency II is a measure of an insurance firm's solvency risk, the solvency risk capital (SCR). It aims at establishing the amount of equity the company needs to hold to be able to meet its insurance obligations with a probability of 0.995 over the coming year. The SCR of a company is essentially built up from the SCR induced by a set of quantifiable risks. Among these are risks originating from the take-up rate of contractual options: lapse risks.

    In this thesis, the contractual options of a life insurer have been identified and risk factors aiming at capturing the arising risks are suggested. It is concluded that a risk factor estimating the size of mass transfer events captures the risk arising through the resulting rescaling of the balance sheet. Further, a risk factor modelling the deviation from the company's assumption for the yearly transfer rate is introduced to capture the risks induced by the characteristics of traditional life insurance and unit-linked insurance contracts upon transfer. The risk factors are modelled in a manner that introduces co-dependence with equity returns as well as interest rates of various durations, and the model parameters are estimated using statistical methods on Norwegian transfer-frequency data obtained from Finans Norge.

    The univariate and multivariate properties of the models are investigated in a scenario setting, and it is concluded that the suggested models provide predominantly plausible results for the mass-lapse risk factors. However, the performance of the models for the risk factors aiming at capturing deviations in the transfer assumptions is questionable, and two means of increasing their validity have therefore been proposed.

  • 72.
    Borysov, Stanislav
    et al.
    KTH, School of Engineering Sciences (SCI), Applied Physics, Nanostructure Physics. KTH, Centres, Nordic Institute for Theoretical Physics NORDITA.
    Roudi, Yasser
    KTH, Centres, Nordic Institute for Theoretical Physics NORDITA. The Kavli Institute for Systems Neuroscience, NTNU, Trondheim, Norway.
    Balatsky, Alexander V.
    KTH, Centres, Nordic Institute for Theoretical Physics NORDITA. Institute for Materials Science, Los Alamos National Laboratory, Los Alamos, NM, United States.
    U.S. stock market interaction network as learned by the Boltzmann machine (2015). In: European Physical Journal B: Condensed Matter Physics, ISSN 1434-6028, E-ISSN 1434-6036, Vol. 88, no. 12, pp. 1-14. Article in journal (Refereed)
    Abstract [en]

    We study the historical dynamics of the joint equilibrium distribution of stock returns in the U.S. stock market using a Boltzmann distribution model parametrized by external fields and pairwise couplings. Within the Boltzmann learning framework for statistical inference, we analyze the historical behavior of the parameters inferred using exact and approximate learning algorithms. Since the model and inference methods require the use of binary variables, the effect of this mapping of continuous returns to the discrete domain is studied. The presented results show that binarization preserves the correlation structure of the market. Properties of the distributions of external fields and couplings, as well as the market interaction network and industry sector clustering structure, are studied for different historical dates and moving window sizes. We demonstrate that the observed positive heavy tail in the distribution of couplings is related to the sparse clustering structure of the market. We also show that discrepancies between the model's parameters might be used as a precursor of financial instabilities.

  • 73.
    Bramstång, Philip
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hermanson, Richard
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Extreme value theory with Markov chain Monte Carlo - an automated process for EVT in finance (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The purpose of this thesis was to create an automated procedure for estimating financial risk using extreme value theory (EVT).

    The "peaks over threshold" (POT) result from EVT was chosen for modelling the tails of the distribution of financial returns. The main difficulty with POT is choosing a convergence threshold above which the data points are regarded as extreme events and modelled using a limit distribution. It was investigated how risk measures are affected by variations in this threshold and it was deemed that fixed-threshold models are inadequate in the context of few relevant data points, as is often the case in EVT applications. A model for automatic threshold weighting was proposed and shows promise.

    Moreover, the choice of Bayesian vs. frequentist inference, with focus on Markov chain Monte Carlo (MCMC) vs. maximum likelihood estimation (MLE), was investigated with regard to EVT applications, favoring Bayesian inference and MCMC. Two MCMC algorithms, independence Metropolis (IM) and the automated factor slice sampler (AFSS), were analyzed and improved in order to increase the performance of the final procedure.

    Lastly, the effects of a reference prior and a prior based on expert opinion were compared and exemplified for practical applications in finance.
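    The thesis works with Bayesian MCMC and automatic threshold weighting; as a fixed-threshold baseline for contrast, here is a sketch of POT with a maximum-likelihood GPD fit via scipy. The 95% threshold, the Student-t stand-in losses and the 99% VaR level are assumptions, not the thesis's choices.

    ```python
    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(8)
    losses = rng.standard_t(df=3, size=10_000)      # heavy-tailed stand-in for losses

    u = np.quantile(losses, 0.95)                   # one fixed threshold choice
    excesses = losses[losses > u] - u
    c, _, scale = genpareto.fit(excesses, floc=0)   # MLE fit of the GPD tail

    q = 0.99                                        # POT estimate of VaR_q for q > 0.95
    p_u = (losses > u).mean()
    var_q = u + genpareto.ppf(1.0 - (1.0 - q) / p_u, c, loc=0, scale=scale)
    print(c, scale, var_q)
    ```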

  • 74.
    Brodin, Kristoffer
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Statistical Machine Learning from a Classification Perspective: Prediction of Household Ties for Economical Decision Making (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In modern society, many companies have large data records on their individual customers, containing information about attributes such as name, gender, marital status, address, etc. These attributes can be used to link customers together, depending on whether they share some sort of relationship with each other or not. In this thesis the goal is to investigate and compare methods to predict relationships between individuals in terms of what we define as a household relationship, i.e. we wish to identify which individuals are sharing living expenses with one another. The objective is to explore the ability of three supervised statistical machine learning methods, namely logistic regression (LR), artificial neural networks (ANN) and the support vector machine (SVM), to predict these household relationships and to evaluate their predictive performance for different settings of their corresponding tuning parameters. Data on a limited population of individuals, containing information about household affiliation and attributes, were available for this task. In order to apply these methods, the problem had to be formulated in a form enabling supervised learning, i.e. a target Y and input predictors X = (X1, …, Xp), based on the set of p attributes associated with each individual, had to be derived. We present a technique which forms pairs of individuals under the hypothesis H0 that they share a household relationship, and then a test of significance is constructed. This technique transforms the problem into a standard binary classification problem. A sample of observations could be generated by randomly pairing individuals and using the available data on each individual to code the corresponding outcome on Y and X for each random pair. For evaluation and tuning of the three supervised learning methods, the sample was split into a training set, a validation set and a test set.

    We have seen that the prediction error, in terms of misclassification rate, is very small for all three methods, since the two classes, H0 is true and H0 is false, are far away from each other and well separable. The data have shown pronounced linear separability, generally resulting in minor differences in misclassification rate as the tuning parameters are modified. However, some variations in the prediction results due to tuning have been observed, and when also considering computational time and requirements on computational power, optimal settings for the tuning parameters could be determined for each method. Comparing LR, ANN and SVM using optimal tuning settings, the test results show that there is no significant difference between the three methods' performances and that they all predict well. Nevertheless, due to differences in complexity between the methods, we conclude that SVM is the least suitable method to use, whereas LR is the most suitable. However, ANN handles complex and non-linear data better than LR; therefore, for future applications of the model, where data might not have such pronounced linear separability, we find it suitable to consider ANN as well.

    This thesis has been written at Svenska Handelsbanken, one of the large major banks in Sweden, with offices all around the world. Their headquarters are situated in Kungsträdgården, Stockholm. Computations have been performed using SAS software and data have been processed in SQL relational database management system.

  • 75.
    Brynolfsson Borg, Andreas
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Non-Contractual Churn Prediction with Limited User Information (2019). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This report compares the effectiveness of three statistical methods for predicting defecting viewers in SVT's video on demand (VOD) services: logistic regression, random forests, and long short-term memory recurrent neural networks (LSTMs). In particular, the report investigates whether or not sequential data consisting of users' weekly watch histories can be used with LSTMs to achieve better predictive performance than the two other methods. The study found that the best LSTM models did outperform the other methods in terms of precision, recall, F-measure and AUC – but not accuracy. Logistic regression and random forests offered comparable performance results. The models are however subject to several notable limitations, so further research is advised.

  • 76. Buckdahn, Rainer
    et al.
    Djehiche, Boualem
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Li, Juan
    A General Stochastic Maximum Principle for SDEs of Mean-field Type (2011). In: Applied Mathematics and Optimization, ISSN 0095-4616, E-ISSN 1432-0606, Vol. 64, no. 2, pp. 197-216. Article in journal (Refereed)
    Abstract [en]

    We study the optimal control of stochastic differential equations (SDEs) of mean-field type, in which the coefficients depend on the state of the solution process as well as on its expected value. Moreover, the cost functional is also of mean-field type. This makes the control problem time-inconsistent in the sense that the Bellman optimality principle does not hold. For a general action space a Peng-type stochastic maximum principle (Peng, S.: SIAM J. Control Optim. 28(4), 966-979, 1990) is derived, specifying the necessary conditions for optimality. This maximum principle differs from the classical one in the sense that here the first-order adjoint equation turns out to be a linear mean-field backward SDE, while the second-order adjoint equation remains the same as in Peng's stochastic maximum principle.

  • 77.
    Budai, Daniel
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Jallo, David
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    The Market Graph: A study of its characteristics, structure & dynamics (2011). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this thesis we consider three different market graphs: one based solely on stock returns, another based on stock returns with vertices weighted with a liquidity measure, and lastly one based on correlations of volume fluctuations. Research is conducted on two different markets: the Swedish and the American stock market. We introduce graph theory as a method for representing the stock market, in order to show that one can more fully understand the structural properties and dynamics of the stock market by studying the market graph. We found many signs of increased globalization by studying the clustering coefficient and the correlation distribution. The structure of the market graph is such that it pinpoints specific sectors when the correlation threshold is increased, and different sectors are found in the two markets. For low correlation thresholds we found groups of independent stocks that can be used as diversified portfolios. Furthermore, the dynamics revealed that it is possible to use the daily absolute change in edge density as an indicator of when the market is about to take a downturn. This could be an interesting topic for further studies. We had hoped to get additional results by considering volume correlations, but that did not turn out to be the case. Regardless, we think it would be interesting to study volume-based market graphs further.
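    A minimal sketch of constructing a return-correlation market graph and computing its edge density at a given threshold. The data sizes and the threshold value are illustrative; the liquidity-weighted and volume-correlation variants studied in the thesis are not shown.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    returns = rng.standard_normal((500, 30))        # 500 days x 30 stocks (synthetic)
    C = np.corrcoef(returns, rowvar=False)          # cross-correlation matrix

    theta = 0.4                                     # correlation threshold (assumed)
    A = (C >= theta) & ~np.eye(len(C), dtype=bool)  # market-graph adjacency matrix
    n = len(A)
    print(A.sum() / (n * (n - 1)))                  # edge density at this threshold
    ```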

  • 78.
    Budhiraja, Amarjit
    et al.
    University of North Carolina at Chapel Hill, United States.
    Nyquist, Pierre
    Brown University, United States.
    Large deviations for multidimensional state-dependent shot noise processes (2015). In: Journal of Applied Probability, ISSN 0021-9002, E-ISSN 1475-6072, Vol. 52, no. 4, pp. 1097-1114. Article in journal (Refereed)
    Abstract [en]

    Shot-noise processes are used in applied probability to model a variety of physical systems in, for example, teletraffic theory, insurance and risk theory, and in the engineering sciences. In this paper we prove a large deviation principle for the sample-paths of a general class of multidimensional state-dependent Poisson shot-noise processes. The result covers previously known large deviation results for one-dimensional state-independent shot-noise processes with light tails. We use the weak convergence approach to large deviations, which reduces the proof to establishing the appropriate convergence of certain controlled versions of the original processes together with relevant results on existence and uniqueness.

  • 79.
    Callert, Gustaf
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Halén Dahlström, Filip
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A performance investigation and evaluation of selected portfolio optimization methods with varying assets and market scenarios (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This study investigates and evaluates how different portfolio optimization methods perform when assets and financial market scenarios vary. The methods included are mean-variance, Conditional Value-at-Risk, utility-based, risk-factor-based and Monte Carlo optimization. Market scenarios are represented by stagnating, bull and bear market data from the Bloomberg database. In order to perform robust optimizations, the Bloomberg data has been resampled a hundred times. The evaluation of the methods has been done with respect to selected ratios and two benchmark portfolios, namely an equally weighted portfolio and an equal risk contributions portfolio. The study found that mean-variance and Conditional Value-at-Risk optimization performed best when using linear assets, in all the investigated cases. Considering non-linear assets such as options, an equally weighted portfolio performs best.

  • 80. Cappé, Olivier
    et al.
    Moulines, Eric
    Rydén, Tobias
    Lund University.
    Inference in Hidden Markov Models (2005). Book (Refereed)
  • 81.
    Carlqvist, Håkan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Multiscale analysis of multi-channel signals (2005). Doctoral thesis, comprehensive summary (Other scientific)
    Abstract [en]

    I: Amplitude and phase relationship between alpha and beta oscillations in the human EEG. We have studied the relation between two oscillatory patterns within EEG signals (oscillations with main frequencies 10 Hz and 20 Hz) with wavelet-based methods. For better comparison, a variant of the continuous wavelet transform was derived. In conclusion, the two patterns were closely related, and 70-90% of the activity in the 20 Hz pattern could be seen as a resonance phenomenon of the 10 Hz activity.

    II: A local discriminant basis algorithm using wavelet packets for discrimination between classes of multidimensional signals. We have improved and extended the local discriminant basis algorithm for application to multidimensional signals arising from multiple channels. The improvements include principal component analysis and leave-one-out cross-validation. The method is furthermore applied to two classes of EEG signals: one group of control subjects and one group of subjects with type I diabetes. There was a clear discrimination between the two groups, which follows known differences in the EEG between the two groups of subjects.

    III: Improved classification of multidimensional signals using orthogonality properties of a time-frequency library. We further improve and refine the method of paper II and apply it to four classes of EEG signals from subjects differing in age and/or sex, which are known factors of EEG alterations. As a method for deciding the best basis, we derive an orthogonal-basis-pursuit-like algorithm which performs statistically better (Tukey's test for simultaneous confidence intervals) than the basis selection method in the original local discriminant basis algorithm. Other methods included were Fisher's class separability, partial least squares and leave-one-subject-out cross-validation. The two groups of younger subjects were almost fully discriminated from each other and from the other groups, while the older subjects were harder to discriminate.

  • 82.
    Carlsson, Filip
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Can students' progress data be modeled using Markov chains? (2019). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    In this thesis a Markov chain model which can be used for analysing students' performance and academic progress is developed. Being able to evaluate students' progress is useful for any educational system. It gives a better understanding of how students reason, and it can be used as support for important decisions and planning. Such a tool can help managers of an educational institution to establish a more optimal educational policy, which ensures a better position in the educational market. To show that it is reasonable to use a Markov chain model for this purpose, a test of how well the data fit such a model is created and used. The test shows that we cannot reject the hypothesis that the data can be fitted to a Markov chain model.
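    A sketch of the basic estimation step such a model needs: the maximum-likelihood transition matrix from observed progress sequences. The state coding and toy paths are invented for illustration and are not the thesis's data.

    ```python
    import numpy as np

    def estimate_transition_matrix(paths, n_states):
        """Maximum-likelihood transition matrix from observed state sequences:
        P[i, j] = (# transitions i -> j) / (# transitions out of i)."""
        counts = np.zeros((n_states, n_states))
        for path in paths:
            for a, b in zip(path[:-1], path[1:]):
                counts[a, b] += 1
        rows = counts.sum(axis=1, keepdims=True)
        return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

    # states 0-3 might encode e.g. year of study, with 3 = graduated (assumed coding)
    paths = [[0, 1, 2, 3], [0, 1, 1, 2], [0, 0, 1, 2, 3]]
    print(estimate_transition_matrix(paths, n_states=4))
    ```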

  • 83.
    Chaqchaq, Othmane
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Fixed Income Modeling (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Besides financial analysis, quantitative tools play a major role in asset management. By managing the aggregation of large amounts of historical and prospective data on different asset classes, they can give portfolio allocation solutions with respect to risk and regulatory constraints.

    Asset class modeling requires three main steps. The first is to assess the product features (risk premium and risks) by considering historical and prospective data, which in the case of fixed income depend on spread and default levels. The second is choosing the quantitative model; in this study we introduce a new credit model which, unlike equity-like models, models default as a main feature of fixed income performance. The final step consists of calibrating the model.

    In this study we start with the modeling of bond classes and study their behavior in asset allocation; we then model the capital solution transaction as an example of a fixed income structured product.

  • 84. Charalambous, C. D.
    et al.
    Stavrou, Photios
    KTH, School of Electrical Engineering and Computer Science (EECS), Information Science and Engineering.
    Kourtellaris, C. K.
    Tzortzis, I.
    Directed Information Subject to a Fidelity. Applications to Conditionally Gaussian Processes (2018). In: 2018 European Control Conference, ECC 2018, Institute of Electrical and Electronics Engineers (IEEE), 2018, pp. 3071-3076, article id 8550054. Conference paper (Refereed)
    Abstract [en]

    This paper is concerned with the minimization of directed information over conditional distributions that satisfy a fidelity criterion for reconstructing a conditionally Gaussian random process by another process, causally. This information-theoretic extremum problem is directly linked, via bounds, to the optimal performance theoretically attainable by non-causal, causal and zero-delay codes of data compression. The application example includes the characterization of the causal rate distortion function for conditionally Gaussian random processes subject to a mean-square error fidelity.

  • 85.
    Chatall, Kim
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Johansson, Niklas
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    An Analysis of Asynchronous Data (2013). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Risk analysis and financial decision making require true and appropriate estimates of correlations today and of how they are expected to evolve in the future. If a portfolio consists of assets traded in markets with different trading hours, the correlation can be underestimated. This is due to asynchronous data: there exists an asynchronicity between the time series of the assets in the portfolio. The purpose of this paper is twofold. First, we suggest a modification of the synchronization model of Burns, Engle and Mezrich (1998) which replaces the first-order vector moving average with a first-order vector autoregressive process. Second, we study the time-varying dynamics and forecast the conditional variance-covariance and correlation through a DCC model. The performance of the DCC model is compared to the industry-standard RiskMetrics Exponentially Weighted Moving Average (EWMA) model. The analysis shows that the covariance of the DCC model is slightly lower than that of the RiskMetrics EWMA model. Our conclusion is that the DCC model is simple and powerful and therefore a promising tool. It provides good insight into how correlations are likely to evolve over a short-run time horizon.
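
    For reference, the RiskMetrics EWMA benchmark follows a one-line covariance recursion; a minimal sketch, with the classic daily-data decay factor of 0.94:

    ```python
    import numpy as np

    def ewma_cov(returns, lam=0.94):
        """RiskMetrics-style EWMA: Sigma_t = lam * Sigma_{t-1}
        + (1 - lam) * r_t r_t'; lam = 0.94 is the standard
        choice for daily data."""
        T, n = returns.shape
        sigma = np.cov(returns[:20], rowvar=False)  # warm-up estimate
        for t in range(20, T):
            r = returns[t:t + 1].T                  # column vector
            sigma = lam * sigma + (1 - lam) * (r @ r.T)
        d = np.sqrt(np.diag(sigma))
        return sigma, sigma / np.outer(d, d)        # covariance, correlation

    # Example with simulated returns for two assets
    rng = np.random.default_rng(0)
    cov, corr = ewma_cov(rng.normal(0, 0.01, (500, 2)))
    ```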

  • 86.
    Chen, Peng
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Modelling the Stochastic Correlation2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In this thesis, we mainly study the correlation between stocks, which has been receiving increasing attention. The correlation is usually treated as a constant, although it is observed to vary over time. We study the properties of correlations between Wiener processes and introduce a stochastic correlation model. Following the calibration methods of Zetocha, we implement the calibration for a new set of market data.
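
    To make the modeled object concrete: two Wiener processes with instantaneous correlation rho_t can be built from independent increments, and letting rho_t vary over time produces the effect studied here. A sketch with a deterministic rho path standing in for the thesis's stochastic correlation process:

    ```python
    import numpy as np

    def correlated_bm(rho_path, dt=1 / 252, seed=0):
        """Two Wiener processes with instantaneous correlation rho_t:
        dW2 = rho_t dW1 + sqrt(1 - rho_t**2) dZ, with Z independent of W1."""
        rng = np.random.default_rng(seed)
        rho = np.asarray(rho_path)
        dW1 = rng.normal(0.0, np.sqrt(dt), rho.size)
        dZ = rng.normal(0.0, np.sqrt(dt), rho.size)
        dW2 = rho * dW1 + np.sqrt(1.0 - rho**2) * dZ
        return np.cumsum(dW1), np.cumsum(dW2)

    # Example: correlation drifting from 0.2 to 0.8 over one trading year
    W1, W2 = correlated_bm(np.linspace(0.2, 0.8, 252))
    ```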

  • 87. Chhita, S.
    et al.
    Johansson, Kurt
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Young, B.
    Asymptotic domino statistics in the Aztec diamond2015In: The Annals of Applied Probability, ISSN 1050-5164, E-ISSN 2168-8737, Vol. 25, no 3, p. 1232-1278Article in journal (Refereed)
    Abstract [en]

    We study random domino tilings of the Aztec diamond with different weights for horizontal and vertical dominoes. A domino tiling of an Aztec diamond can also be described by a particle system which is a determinantal process. We give a relation between the correlation kernel for this process and the inverse Kasteleyn matrix of the Aztec diamond. This gives a formula for the inverse Kasteleyn matrix which generalizes a result of Helfgott. As an application, we investigate the asymptotics of the process formed by the southern dominoes close to the frozen boundary. We find that at the northern boundary, the southern domino process converges to a thinned Airy point process. At the southern boundary, the process of holes of the southern domino process converges to a multiple point process that we call the thickened Airy point process. We also study the convergence of the domino process in the unfrozen region to the limiting Gibbs measure.

  • 88.
    Choutri, Salah Eddine
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Topics in Mean-Field Control and Games for Pure Jump Processes2018Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis is the collection of four papers addressing topics in stochastic optimal control, zero-sum games, backward stochastic differential equations, Pontryagin stochastic maximum principle and relaxed stochastic optimal control.

    In the first two papers, we establish existence of Markov chains of mean-field type, with countable state space and unbounded jump intensities. We further show existence of nearly-optimal controls and, using a Markov chain backward SDE approach, we derive conditions for existence of an optimal control and a saddle-point for a zero-sum differential game associated with risk-neutral and risk-sensitive payoff functionals of mean-field type, under dynamics driven by Markov chains of mean-field type. Our formulation of the control problems is of weak type, where the dynamics are given in terms of a family of probability measures, under which the coordinate process is a pure jump process with controlled jump intensities.

    In the third paper, we characterize the optimal controls obtained in the first paper by deriving sufficient and necessary optimality conditions in terms of a stochastic maximum principle (SMP). Finally, within a completely different setup, in the fourth paper we establish existence of an optimal stochastic relaxed control for stochastic differential equations driven by a G-Brownian motion.

  • 89.
    Choutri, Salah Eddine
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Djehiche, Boualem
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Mean-Field Risk Sensitive Control and Zero-Sum Games for Markov Chains2018Manuscript (preprint) (Other academic)
    Abstract [en]

    We establish existence of controlled Markov chains of mean-field type with unbounded jump intensities by means of a fixed point argument using the Wasserstein distance. Using a Markov chain entropic backward SDE approach, we further suggest conditions for existence of an optimal control and a saddle-point for a control problem and a zero-sum differential game, respectively, associated with risk sensitive payoff functionals of mean-field type.

  • 90.
    Choutri, Salah eddine
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Djehiche, Boualem
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Tembine, Hamidou
    Optimal Control and Zero-Sum Games for Markov Chains of Mean-Field Type2018In: Mathematical Control and Related Fields, ISSN 2156-8472, E-ISSN 2156-8499Article in journal (Refereed)
    Abstract [en]

    We establish existence of Markov chains of mean-field type with unbounded jump intensities by means of a fixed point argument using the Total Variation distance. We further show existence of nearly-optimal controls and, using a Markov chain backward SDE approach, we suggest conditions for existence of an optimal control and a saddle-point for a control problem and a zero-sum differential game, respectively, associated with payoff functionals of mean-field type, under dynamics driven by such Markov chains of mean-field type.

  • 91.
    Choutri, Salah eddine
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hamidou, Tembine
    A Stochastic Maximum Principle for Markov Chains of Mean-Field Type2018In: Games, ISSN 2073-4336, E-ISSN 2073-4336, Vol. 9, no 4, article id 84Article in journal (Refereed)
    Abstract [en]

    We derive sufficient and necessary optimality conditions in terms of a stochastic maximum principle (SMP) for controls associated with cost functionals of mean-field type, under dynamics driven by a class of Markov chains of mean-field type which are pure jump processes obtained as solutions of a well-posed martingale problem. As an illustration, we apply the result to generic examples of control problems as well as some applications. 

  • 92.
    Clason Diop, Noah
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Forecasting Euro Area Inflation By Aggregating Sub-components2013Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The aim of this paper is to see whether one can improve on the naive forecast of Euro Area inflation, where by naive forecast we mean that the year-over-year inflation rate one year ahead will be the same as in the past year. Various model selection procedures are employed on an autoregressive-moving-average model and several Phillips curve based models. We also test whether we can improve on the Euro Area inflation forecast by first forecasting the sub-components and aggregating them. We manage to substantially improve on the forecast by using a Phillips curve based model. We also find further improvement by forecasting the sub-components first and aggregating them to Euro Area inflation.
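
    A sketch of the aggregation step (component names and weights below are purely illustrative; actual Euro Area HICP expenditure weights would be used in practice):

    ```python
    import numpy as np

    def aggregate_forecast(component_forecasts, weights):
        """Bottom-up inflation forecast: forecast each sub-component
        separately, then aggregate by expenditure weight."""
        w = np.asarray(weights, dtype=float)
        return float(np.dot(w / w.sum(), component_forecasts))

    # Illustrative components: energy, food, services, goods
    print(aggregate_forecast([0.8, 2.5, 1.9, 1.2], [0.11, 0.19, 0.42, 0.28]))

    # Naive benchmark: next year's YoY inflation equals this year's
    naive_forecast = lambda current_yoy: current_yoy
    ```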

  • 93.
    Combes, Richard
    et al.
    Centrale-Supelec, L2S, France.
    Magureanu, Stefan
    KTH, School of Electrical Engineering (EES), Automatic Control.
    Proutiere, Alexandre
    KTH, School of Electrical Engineering (EES), Automatic Control.
    Minimal Exploration in Structured Stochastic Bandits2017In: Advances in Neural Information Processing Systems, Neural information processing systems foundation , 2017, p. 1764-1772Conference paper (Refereed)
    Abstract [en]

    This paper introduces and addresses a wide class of stochastic bandit problems where the function mapping the arm to the corresponding reward exhibits some known structural properties. Most existing structures (e.g. linear, Lipschitz, unimodal, combinatorial, dueling, ...) are covered by our framework. We derive an asymptotic instance-specific regret lower bound for these problems, and develop OSSB, an algorithm whose regret matches this fundamental limit. OSSB is not based on the classical principle of "optimism in the face of uncertainty" or on Thompson sampling, but rather aims at matching the minimal exploration rates of sub-optimal arms as characterized in the derivation of the regret lower bound. We illustrate the efficiency of OSSB using numerical experiments in the case of the linear bandit problem and show that OSSB outperforms existing algorithms, including Thompson sampling.

  • 94.
    Corander, Jukka
    et al.
    University of Helsinki.
    Cui, Yaqiong
    University of Helsinki.
    Koski, Timo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Inductive Inference and Partition Exchangeability in Classification2013In: Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence: Papers from the Ray Solomonoff 85th Memorial Conference. / [ed] Dowe, David L., Springer Berlin/Heidelberg, 2013, p. 91-105Conference paper (Refereed)
    Abstract [en]

    Inductive inference has been a subject of intensive research efforts over several decades. In particular, for classification problems substantial advances have been made and the field has matured into a wide range of powerful approaches to inductive inference. However, a considerable challenge arises when deriving principles for an inductive supervised classifier in the presence of unpredictable or unanticipated events corresponding to unknown alphabets of observable features. Bayesian inductive theories based on de Finetti type exchangeability which have become popular in supervised classification do not apply to such problems. Here we derive an inductive supervised classifier based on partition exchangeability due to John Kingman. It is proven that, in contrast to classifiers based on de Finetti type exchangeability which can optimally handle test items independently of each other in the presence of infinite amounts of training data, a classifier based on partition exchangeability still continues to benefit from a joint prediction of labels for the whole population of test items. Some remarks about the relation of this work to generic convergence results in predictive inference are also given.

  • 95. Corander, Jukka
    et al.
    Gyllenberg, Mats
    Koski, Timo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Learning Genetic Population Structures Using Minimization of Stochastic Complexity2010In: Entropy, ISSN 1099-4300, E-ISSN 1099-4300, Vol. 12, no 5, p. 1102-1124Article in journal (Refereed)
    Abstract [en]

    Considerable research efforts have been devoted to probabilistic modeling of genetic population structures within the past decade. In particular, a wide spectrum of Bayesian models have been proposed for unlinked molecular marker data from diploid organisms. Here we derive a theoretical framework for learning genetic population structure of a haploid organism from bi-allelic markers for which potential patterns of dependence are a priori unknown and to be explicitly incorporated in the model. Our framework is based on the principle of minimizing stochastic complexity of an unsupervised classification under tree augmented factorization of the predictive data distribution. We discuss a fast implementation of the learning framework using deterministic algorithms.

  • 96.
    Cui, Titing
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Short term traffic speed prediction on a large road network2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Traffic flow speed prediction is an important element in applications of intelligent transportation systems (ITS). Timely and accurate traffic flow speed prediction can be used to support the control, management, and improvement of traffic conditions. In this project, we investigate short term traffic flow speed prediction on a large highway network. To eliminate vagueness, we first give a formal mathematical definition of the traffic flow speed prediction problem on a road network. Over the last decades, traffic flow prediction research has been advancing from theoretically well-established parametric methods to nonparametric data-driven algorithms, such as deep neural networks. In this research, we give a detailed review of the state-of-the-art prediction models that appear in the literature. However, we find that the road networks in most of the literature are rather small, usually hundreds of road segments. The highway network in our project is much larger, consisting of more than eighty thousand road segments, which makes it almost impossible to use the models from the literature directly. Therefore, we employ a time series clustering method to divide the road network into disjoint regions. After that, several prediction models, including historical average (HA), univariate and vector autoregressive integrated moving average (ARIMA) models, support vector regression (SVR), Gaussian process regression (GPR), stacked autoencoders (SAEs) and long short-term memory neural networks (LSTM), are selected to do the prediction on each region. We give a performance analysis of the selected models at the end of the thesis.
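
    A hedged sketch of the divide-and-predict idea, with k-means on average daily speed profiles standing in for the thesis's time series clustering method:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_segments(speed_profiles, n_regions=10, seed=0):
        """Group road segments by the shape of their average daily
        speed profile (rows = segments, columns = time-of-day bins),
        so a separate predictor can be trained per region."""
        km = KMeans(n_clusters=n_regions, n_init=10, random_state=seed)
        labels = km.fit_predict(speed_profiles)
        return labels, km.cluster_centers_

    # Example: 1000 segments, 96 fifteen-minute bins (synthetic stand-in)
    rng = np.random.default_rng(0)
    labels, centers = cluster_segments(rng.uniform(40, 110, (1000, 96)))
    ```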

  • 97. Cui, Y.
    et al.
    Sirén, J.
    Koski, Timo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Corander, J.
    Simultaneous Predictive Gaussian Classifiers2016In: Journal of Classification, ISSN 0176-4268, E-ISSN 1432-1343, p. 1-30Article in journal (Refereed)
    Abstract [en]

    The Gaussian distribution has for several decades been ubiquitous in the theory and practice of statistical classification. Despite the early proposals motivating the use of predictive inference to design a classifier, this approach has gained relatively little attention apart from certain specific applications, such as speech recognition where its optimality has been widely acknowledged. Here we examine statistical properties of different inductive classification rules under a generic Gaussian model and demonstrate the optimality of considering simultaneous classification of multiple samples under an attractive loss function. It is shown that the simpler independent classification of samples leads asymptotically to the same optimal rule as the simultaneous classifier when the amount of training data increases, if the dimensionality of the feature space is bounded in an appropriate manner. Numerical investigations suggest that the simultaneous predictive classifier can lead to higher classification accuracy than the independent rule in the low-dimensional case, whereas the simultaneous approach suffers more from noise when the dimensionality increases.
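
    For context, the independent rule discussed here reduces, in its plug-in form, to classifying each test item on its own under class-conditional Gaussians; a minimal sketch on synthetic data (the paper's fully predictive simultaneous rule is not reproduced):

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

    rng = np.random.default_rng(0)
    # Two Gaussian classes in 2D
    X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
    y = np.repeat([0, 1], 100)

    # Independent rule: each test point is classified on its own
    qda = QuadraticDiscriminantAnalysis().fit(X, y)
    print(qda.predict(rng.normal(1, 1, (5, 2))))
    ```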

  • 98.
    Dacke, Fredrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Non-local means denoising of projection images in cone beam computed tomography2013Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    A new edge-preserving denoising method is used to increase image quality in cone beam computed tomography. The reconstruction algorithm for cone beam computed tomography used by Elekta enhances high-frequency image details, e.g. noise, and we propose that denoising be done on the projection images before reconstruction. The denoising method is shown to have a connection with computational statistics, and some mathematical improvements to the method are considered. Comparisons are made with the state-of-the-art method on both artificial and physical objects. The results show that the smoothness of the images is enhanced at the cost of blurring out image details. Some results show how the settings of the method parameters influence the trade-off between smoothness and blurred image details.
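
    A minimal sketch of non-local means denoising of a single projection image using scikit-image (parameter values are illustrative, not the thesis's settings):

    ```python
    import numpy as np
    from skimage.restoration import denoise_nl_means, estimate_sigma

    rng = np.random.default_rng(0)
    proj = rng.normal(0.5, 0.05, (128, 128))       # stand-in projection image

    sigma = float(np.mean(estimate_sigma(proj)))   # estimate the noise level
    clean = denoise_nl_means(proj, h=1.15 * sigma, sigma=sigma,
                             patch_size=5, patch_distance=6, fast_mode=True)
    ```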

  • 99.
    Dahlin, Fredrik
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Storkitt, Samuel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Estimation of Loss Given Default for Low Default Portfolios2014Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The Basel framework allows banks to assess their credit risk using their own estimates of Loss Given Default (LGD). However, for a Low Default Portfolio (LDP), estimating LGD is difficult due to the shortage of default data. This study evaluates different LGD estimation approaches in an LDP setting using pooled industry data obtained from a subset of the PECDC LGD database. Based on the characteristics of an LDP, a workout LGD approach is suggested. Six estimation techniques are tested: OLS regression, Ridge regression, two techniques combining logistic regressions with OLS regressions, and two tree models. All tested models give similar error levels when tested against the data, but the tree models might produce rather different estimates for specific exposures compared to the other models. Using historical averages yields worse results than the tested models within and out of sample, but not considerably worse out of time.
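
    A hedged sketch of one of the tested techniques, Ridge regression on workout LGD data; the synthetic features below are placeholders for actual exposure characteristics:

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(1)
    # Hypothetical workout data: few observations, a handful of drivers
    # (e.g. collateralization, seniority, exposure size), LGD in [0, 1]
    X = rng.uniform(0, 1, (60, 3))
    y = np.clip(0.6 - 0.4 * X[:, 0] + rng.normal(0, 0.1, 60), 0, 1)

    # Ridge shrinks coefficients, which stabilizes estimates when
    # default observations are scarce (the LDP setting)
    model = Ridge(alpha=1.0).fit(X, y)
    print(model.coef_, model.intercept_)
    ```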

  • 100.
    Dahlkvist, Victor
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Wendt, Wilhelm
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Värdering av nordiska industribolag - en studie inom regressionsanalys2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Prior to a company being sold or acquired, it usually contacts an investment bank to support the valuation, execute the sale and act as advisor for the parties that wish to buy or sell. Investment banks act as a kind of company broker on either the buy or the sell side. When the company value is presented, several methods are usually used to calculate it. During the last decade, the frequency of transactions on the Nordic industrial market has increased significantly.

    To increase the precision in the valuation of a Nordic industrial company, the question was asked whether multiple regression analysis could be used as a valuation method, and how it compares against a classical valuation method like Precedent Transaction Analysis.

    These questions were analyzed and answered by creating a regression model built on data gathered from financial reports. The regression model was then compared to the PTA valuation, which builds on previous transactions involving companies with similar financial backgrounds.

    This study shows that regression analysis could be used as a complement to the different valuation methods. However, the model should not be used to value Nordic industrial companies with the choice of variables in this thesis, since the reliability of the model is unpredictable. Regression analysis as a stand-alone valuation method should be treated with great caution and should not replace the classical valuation methods.
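
    A minimal sketch of the kind of multiple regression valuation examined here, on synthetic fundamentals (the variable choice is illustrative, not the thesis's):

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    # Hypothetical fundamentals for 40 industrial companies
    ebitda = rng.uniform(10, 200, 40)
    growth = rng.uniform(-0.05, 0.15, 40)
    ev = 8 * ebitda * (1 + 2 * growth) + rng.normal(0, 30, 40)

    # Regress enterprise value on the chosen value drivers
    X = sm.add_constant(np.column_stack([ebitda, growth]))
    model = sm.OLS(ev, X).fit()
    print(model.params)  # fitted intercept and coefficients
    ```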
