  • 201.
    Johansson, Carl-Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Model risk in a hedging perspective (2011). Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
  • 202. Johansson, U.
    et al.
    Linusson, H.
    Löfström, T.
    Boström, Henrik
    KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS.
    Interpretable regression trees using conformal prediction (2018). In: Expert systems with applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 97, p. 394-404. Article in journal (Refereed).
    Abstract [en]

    A key property of conformal predictors is that they are valid, i.e., their error rate on novel data is bounded by a preset level of confidence. For regression, this is achieved by turning the point predictions of the underlying model into prediction intervals. Thus, the most important performance metric for evaluating conformal regressors is not the error rate, but the size of the prediction intervals, where models generating smaller (more informative) intervals are said to be more efficient. State-of-the-art conformal regressors typically utilize two separate predictive models: the underlying model providing the center point of each prediction interval, and a normalization model used to scale each prediction interval according to the estimated level of difficulty for each test instance. When using a regression tree as the underlying model, this approach may cause test instances falling into a specific leaf to receive different prediction intervals. This clearly deteriorates the interpretability of a conformal regression tree compared to a standard regression tree, since the path from the root to a leaf can no longer be translated into a rule explaining all predictions in that leaf. In fact, the model cannot even be interpreted on its own, i.e., without reference to the corresponding normalization model. Current practice effectively presents two options for constructing conformal regression trees: to employ a (global) normalization model, and thereby sacrifice interpretability; or to avoid normalization, and thereby sacrifice both efficiency and individualized predictions. In this paper, two additional approaches are considered, both employing local normalization: the first approach estimates the difficulty by the standard deviation of the target values in each leaf, while the second approach employs Mondrian conformal prediction, which results in regression trees where each rule (path from root node to leaf node) is independently valid. An empirical evaluation shows that the first approach is as efficient as current state-of-the-art approaches, thus eliminating the efficiency vs. interpretability trade-off present in existing methods. Moreover, it is shown that if a validity guarantee is required for each single rule, as provided by the Mondrian approach, a penalty with respect to efficiency has to be paid, but it is only substantial at very high confidence levels.
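
    The following minimal sketch (numpy only, synthetic data) illustrates the split/inductive conformal machinery the abstract builds on: nonconformity scores are normalized by a per-instance difficulty estimate sigma, which in the tree-based variants above would come from, e.g., the leaf-wise standard deviation. It is a generic illustration, not the authors' exact procedure.

        import numpy as np

        def conformal_intervals(y_cal, pred_cal, sigma_cal,
                                pred_test, sigma_test, confidence=0.95):
            """Normalized split-conformal prediction intervals.

            y_cal, pred_cal : targets and point predictions on a calibration set
            sigma_cal, sigma_test : positive difficulty estimates (normalizers)
            """
            # Normalized nonconformity scores on the calibration set.
            scores = np.abs(y_cal - pred_cal) / sigma_cal
            # Finite-sample-valid quantile index for split conformal.
            n = len(scores)
            k = int(np.ceil((n + 1) * confidence))
            alpha_hat = np.sort(scores)[min(k, n) - 1]
            # Interval half-widths scale with the estimated difficulty.
            half = alpha_hat * sigma_test
            return pred_test - half, pred_test + half

        # Toy usage with a constant model and unit difficulty:
        rng = np.random.default_rng(0)
        y = rng.normal(size=200)
        lo, hi = conformal_intervals(y[:100], np.zeros(100), np.ones(100),
                                     np.zeros(5), np.ones(5))
        print(lo, hi)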

  • 203.
    Jolin, Shan Williams
    et al.
    KTH, School of Engineering Sciences (SCI), Applied Physics, Nanostructure Physics. Royal Inst Technol, Nanostruct Phys, Stockholm, Sweden.
    Rosquist, Kjell
    Stockholm Univ, Dept Phys, Stockholm, Sweden.
    Analytic analysis of irregular discrete universes (2018). In: General Relativity and Gravitation, ISSN 0001-7701, E-ISSN 1572-9532, Vol. 50, no 9, article id 115. Article in journal (Refereed).
    Abstract [en]

    In this work we investigate the dynamics of cosmological models with spherical topology containing up to 600 Schwarzschild black holes arranged in an irregular manner. We solve the field equations by tessellating the 3-sphere into eight identical cells, each having a single edge which is shared by all cells. The shared edge is enforced to be locally rotationally symmetric, thereby allowing for solving the dynamics to high accuracy along this edge. Each cell will then carry an identical (up to parity) configuration which can however have an arbitrarily random distribution. The dynamics of such models is compared to that of previous works on regularly distributed black holes as well as with the standard isotropic dust models of the FLRW type. The irregular models are shown to have richer dynamics than that of the regular models. The randomization of the distribution of the black holes is done both without bias and also with a certain clustering bias. The geometry of the initial configuration of our models is shown to be qualitatively different from the regular case in the way it approaches the isotropic model.

  • 204.
    Jonsson, Sara
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Rönnlund, Beatrice
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    The New Standardized Approach for Measuring Counterparty Credit Risk (2014). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This study investigates the differences in calculation of exposure at default between the current exposure method (CEM) and the new standardized approach for measuring counterparty credit risk exposures (SA-CCR) for over the counter (OTC) derivatives. The study intends to analyze the consequence of the usage of different approaches for netting as well as the differences in EAD between asset classes. After implementing both models and calculating EAD on real trades of a Swedish commercial bank it was obvious that SA-CCR has a higher level of complexity than its predecessor. The results from this study indicate that SA-CCR gives a lower EAD than CEM because of the higher recognition of netting, but a higher EAD when netting is not allowed. Foreign exchange derivatives are affected to a higher extent than interest rate derivatives in this particular study. Foreign exchange derivatives got lower EAD both when netting was allowed and when netting was not allowed under SA-CCR. A change of method for calculating EAD from CEM to SA-CCR could result in lower minimum capital requirements.
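
    As a stylized illustration of why netting matters under the older method, the sketch below computes a CEM-style EAD with and without netting; the add-on formula with the net-to-gross ratio NGR is quoted from memory and simplified, and all trade data are invented. The SA-CCR calculation is considerably more involved and is not attempted here.

        import numpy as np

        # Invented inputs: mark-to-market, notional and add-on factor per trade.
        mtm      = np.array([ 1.2, -0.7,  0.4])
        notional = np.array([10.0, 15.0,  8.0])
        factor   = np.array([0.01, 0.05, 0.01])

        rc_gross = np.maximum(mtm, 0.0).sum()   # replacement cost, no netting
        rc_net   = max(mtm.sum(), 0.0)          # replacement cost with netting

        a_gross = (notional * factor).sum()     # gross potential-exposure add-on
        ngr = rc_net / rc_gross if rc_gross > 0 else 1.0

        ead_no_netting   = rc_gross + a_gross
        ead_with_netting = rc_net + (0.4 + 0.6 * ngr) * a_gross
        print(ead_no_netting, ead_with_netting)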

  • 205.
    Jägerhult Fjelberg, Marianne
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Predicting data traffic in cellular data networks (2015). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The exponential increase in cellular data usage in recent time is evident, which introduces challenges and opportunities for the telecom industry. From a Radio Resource Management perspective, it is therefore most valuable to be able to predict future events such as user load. The objective of this thesis is thus to investigate whether one can predict such future events based on information available in a base station. This is done by clustering data obtained from a simulated 4G network using Gaussian Mixture Models. Based on this, an evaluation of the cluster signatures is performed, in which heavy-load users appear to be identified. Furthermore, other evaluations of temporal aspects tied to the clusters and cluster transitions are performed. Secondly, supervised classification using Random Forest is performed, in order to investigate whether prediction of these cluster labels is possible. High accuracies for most of these classifications are obtained, suggesting that prediction based on these methods can be made.
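
    A minimal sketch of the two-stage pipeline described above, assuming scikit-learn is available and using random numbers as stand-ins for the simulated 4G base-station features: unsupervised Gaussian Mixture clustering followed by Random Forest prediction of the cluster labels.

        import numpy as np
        from sklearn.mixture import GaussianMixture
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 5))        # placeholder user-load features

        # Stage 1: unsupervised clustering into load regimes.
        labels = GaussianMixture(n_components=4, random_state=0).fit_predict(X)

        # Stage 2: supervised prediction of the cluster labels.
        X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))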

  • 206. Jääskinen, Väinö
    et al.
    Xiong, Jie
    Corander, Jukka
    Koski, Timo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Sparse Markov Chains for Sequence Data (2014). In: Scandinavian Journal of Statistics, ISSN 0303-6898, E-ISSN 1467-9469, Vol. 41, no 3, p. 639-655. Article in journal (Refereed).
    Abstract [en]

    Finite memory sources and variable-length Markov chains have recently gained popularity in data compression and mining, in particular, for applications in bioinformatics and language modelling. Here, we consider denser data compression and prediction with a family of sparse Bayesian predictive models for Markov chains in finite state spaces. Our approach lumps transition probabilities into classes composed of invariant probabilities, such that the resulting models need not have a hierarchical structure as in context tree-based approaches. This can lead to a substantially higher rate of data compression, and such non-hierarchical sparse models can be motivated for instance by data dependence structures existing in the bioinformatics context. We describe a Bayesian inference algorithm for learning sparse Markov models through clustering of transition probabilities. Experiments with DNA sequence and protein data show that our approach is competitive in both prediction and classification when compared with several alternative methods on the basis of variable memory length.

  • 207.
    Jöhnemark, Alexander
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Modeling Operational Risk (2012). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The Basel II accord requires banks to put aside a capital buffer against unexpected operational losses, resulting from inadequate or failed internal processes, people and systems or from external events. Under the sophisticated Advanced Measurement Approach, banks are given the opportunity to develop their own model to estimate operational risk. This report focuses on a loss distribution approach based on a set of real data.

    First a comprehensive data analysis was made, which suggested that the observations belonged to a heavy-tailed distribution. An evaluation of commonly used distributions was performed. The evaluation resulted in the choice of a compound Poisson distribution to model frequency and a piecewise defined distribution with an empirical body and a generalized Pareto tail to model severity. The frequency distribution and the severity distribution define the loss distribution, from which Monte Carlo simulations were made in order to estimate the 99.9% quantile, also known as the regulatory capital.

    Conclusions drawn along the way were that including all operational risks in a model is hard, but possible, and that extreme observations have a huge impact on the outcome.
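
    The loss distribution approach sketched in this abstract is easy to prototype; the snippet below (scipy/numpy, invented parameters) draws a compound Poisson number of losses per year with generalized Pareto severities above a threshold and reads off the 99.9% quantile by Monte Carlo.

        import numpy as np
        from scipy.stats import poisson, genpareto

        rng = np.random.default_rng(0)
        lam, xi, beta, u = 25.0, 0.6, 50.0, 10.0   # frequency and GPD severity
        n_sims = 20_000

        annual_losses = np.empty(n_sims)
        for i in range(n_sims):
            # Number of losses this year, then their severities above u.
            n = poisson.rvs(lam, random_state=rng)
            sev = u + genpareto.rvs(xi, scale=beta, size=n, random_state=rng)
            annual_losses[i] = sev.sum()

        print("99.9% quantile (regulatory capital proxy):",
              np.quantile(annual_losses, 0.999))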

  • 208.
    Kallur, Oskar
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    On the use of Value-at-Risk based models for the Fixed Income market as a risk measure for Central Counterparty clearing (2016). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this thesis the use of VaR based models is investigated for the purpose of setting margin requirements for Fixed Income portfolios. VaR based models have become one of the standard ways for Central Counterparties to determine the margin requirements for different types of portfolios. However, there are many different ways to implement a VaR based model in practice, especially for Fixed Income portfolios. The models presented in this thesis are based on Filtered Historical Simulation (FHS). Furthermore, a model that combines FHS with a Student's t copula to model the correlation between instruments in a portfolio is presented. All models are backtested using historical data from 1998 to 2016. The FHS models seem to produce reasonably accurate VaR estimates. However, there are other market related properties that must be fulfilled for a model to be used to set margin requirements. These properties are investigated and discussed.
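
    The core of filtered historical simulation can be sketched in a few lines. The version below uses a simple EWMA volatility filter rather than the GARCH-type filters and Student's t copula of the thesis, and synthetic returns; it only illustrates the devolatilize-resample-rescale idea.

        import numpy as np

        def fhs_var(returns, level=0.99, lam=0.94, n_draws=100_000, seed=0):
            r = np.asarray(returns, dtype=float)
            # EWMA variance: sigma_t^2 = lam*sigma_{t-1}^2 + (1-lam)*r_{t-1}^2
            var = np.empty_like(r)
            var[0] = r.var()
            for t in range(1, len(r)):
                var[t] = lam * var[t - 1] + (1 - lam) * r[t - 1] ** 2
            z = r / np.sqrt(var)              # filtered (devolatilized) returns
            sigma_now = np.sqrt(lam * var[-1] + (1 - lam) * r[-1] ** 2)
            rng = np.random.default_rng(seed)
            sims = sigma_now * rng.choice(z, size=n_draws, replace=True)
            return -np.quantile(sims, 1 - level)   # one-day VaR at `level`

        rng = np.random.default_rng(1)
        print(fhs_var(rng.standard_t(5, size=2000) * 0.01))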

  • 209.
    Karlsson, Johan
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Enqvist, Per
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Gattami, A.
    Confidence assessment for spectral estimation based on estimated covariances (2016). In: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 4343-4347. Conference paper (Refereed).
    Abstract [en]

    In probability theory, time series analysis, and signal processing, many identification and estimation methods rely on covariance estimates as intermediate statistics. Errors in estimated covariances propagate and degrade the quality of the estimation result. In particular, in large network systems where each system node of the network gathers and passes on results, it is important to know the reliability of the information so that informed decisions can be made. In this work, we design confidence regions based on covariance estimates and study how these can be used for spectral estimation. In particular, we consider three different confidence regions based on sets of unitarily invariant matrices and bound the eigenvalue distribution based on three principles: uniform bounds; arithmetic and harmonic means; and the Marcenko-Pastur law eigenvalue distribution for random matrices. Using these methodologies we robustly bound the energy in a selected frequency band, and compare the resulting spectral bounds from the respective confidence regions.

  • 210.
    Katzler, Sigrid
    KTH, School of Architecture and the Built Environment (ABE), Real Estate and Construction Management, Building and Real Estate Economics.
    Methods for comparing diversification strategies on the Swedish real estate market (2016). In: International Journal of Strategic Property Management, ISSN 1648-715X, E-ISSN 1648-9179, Vol. 20, no 1, p. 17-30. Article in journal (Refereed).
    Abstract [en]

    This paper compares the effectiveness of different property portfolio diversification strategies using five methods: (1) correlation matrices, (2) efficient frontiers, (3) Sharpe ratios, using three different sub-methods, (4) coefficients in equations explaining total returns and (5) R-square values in equations explaining total returns. The evaluation methods are applied to both value-weighted and equally weighted indices based on Swedish real estate return data. All methods show that, if any, diversifying over property types is a better strategy on the Swedish market than diversifying over regions. No test yields significant support for regional diversification. The support for the property type strategy is stronger when using equally weighted indices.

  • 211.
    Kiamehr, Ramin
    et al.
    Department of Geodesy and Geomatics, Zanjan University, Iran.
    Eshagh, Mehdi
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Estimating variance components of ellipsoidal, orthometric and geoidal heights through the GPS/levelling Network in Iran (2008). In: Journal of the Earth and Space Physics, ISSN 0378-1046, Vol. 34, no 3, p. 1-13. Article in journal (Refereed).
    Abstract [en]

    The Best Quadratic Unbiased Estimation (BQUE) of variance components in the Gauss-Helmert model is used to combine adjustment of GPS/levelling and geoid to determine the individual variance components for each of the three height types. Through the research, different reasons for achievement of the negative variance components were discussed and a new modified version of the Best Quadratic Unbiased Non-negative Estimator (MBQUNE) was successfully developed and applied. This estimation could be useful for estimating the absolute accuracy level which can be achieved using the GPS/levelling method. A general MATLAB function is presented for numerical estimation of variance components by using the different parametric models. The modified BQUNE and developed software was successfully applied for estimating the variance components through the sample GPS/levelling network in Iran. In the following research, we used the 75 outlier-free and well distributed GPS/levelling data. Three corrective surface models based on the 4, 5 and 7 parameter models were used through the combined adjustment of the GPS/levelling and geoidal heights. Using the 7-parameter model, the standard deviation indexes of the geoidal, geodetic and orthometric heights in Iran were estimated to be about 27, 39 and 35 cm, respectively.

  • 212.
    Kihlström, Gustav
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A self-normalizing neural network approach to bond liquidity classification (2018). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Bond liquidity risk is complex and something that every bond investor needs to take into account. In this paper we investigate how well a self-normalizing neural network (SNN) can be used to classify bonds with respect to their liquidity, and compare the results with those of a simpler logistic regression. This is done by analyzing the two algorithms' predictive capabilities on the Swedish bond market. Performing this analysis we find that the performance of the SNN and the logistic regression are broadly on the same level. However, the substantial overfitting to the training data in the case of the SNN suggests that a better performing model could be created by applying regularization techniques. The conclusion is therefore that more research is needed in order to determine whether neural networks are the premier method for modelling liquidity.

  • 213.
    Klingmann Rönnqvist, Max
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Numerical Instability of Particle Learning: a case study (2016). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This master's thesis is about a method called Particle Learning (PL) which can be used to analyze so-called hidden Markov models (HMM) or, with an alternative terminology, state-space models (SSM), which are very popular for modeling time series. The advantage of PL over more established methods is its capacity to process new data points with a constant demand on computational resources, but it has been suspected to suffer from a problem known as particle path degeneracy. The purpose of this report is to investigate the degeneracy of PL by testing it on two examples. The results suggest that the method may not work very well for long time series.
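
    The path degeneracy phenomenon is easy to reproduce with a toy bootstrap particle filter; the sketch below tracks how many distinct time-0 ancestors survive resampling in a linear-Gaussian state-space model. (Particle Learning itself propagates sufficient statistics alongside the particles, which this sketch omits; all model parameters are illustrative.)

        import numpy as np

        rng = np.random.default_rng(0)
        T, N = 200, 500

        # Simulate the model: x_t = 0.95 x_{t-1} + v_t,  y_t = x_t + e_t.
        x_true = np.zeros(T)
        for t in range(1, T):
            x_true[t] = 0.95 * x_true[t - 1] + rng.normal()
        y_obs = x_true + rng.normal(size=T)

        x = rng.normal(size=N)        # particles
        origin = np.arange(N)         # which initial particle each path came from

        for t in range(T):
            x = 0.95 * x + rng.normal(size=N)          # propagate
            w = np.exp(-0.5 * (y_obs[t] - x) ** 2)     # Gaussian observation weights
            w /= w.sum()
            idx = rng.choice(N, size=N, p=w)           # multinomial resampling
            x, origin = x[idx], origin[idx]

        # Collapse of this count toward 1 is exactly path degeneracy.
        print("distinct time-0 ancestors left:", np.unique(origin).size)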

  • 214.
    Koivusalo, Richard
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Statistical analysis of empirical pairwise copulas for the S&P 500 stocks (2012). Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    It is of great importance to find an analytical copula that will represent the empirical lower tail dependence. In this study, the pairwise empirical copulas are estimated using data for the S&P 500 stocks during the period 2007-2010. Different optimization methods and measures of dependence have been used to fit Gaussian, t and Clayton copulas to the empirical copulas, in order to represent the empirical lower tail dependence. These different measures of dependence and optimization methods, with their restrictions, point at different analytical copulas being optimal. In this study the t copula with 5 degrees of freedom gives the most fulfilling result when it comes to representing lower tail dependence. The t copula with 5 degrees of freedom gives the best representation of empirical lower tail dependence, whether one uses the 'Empirical maximum likelihood estimator' or 'Equal τ' as an approach.

  • 215. Koski, Timo
    Hidden Markov models for bioinformatics (2001). Book (Refereed).
  • 216.
    Koski, Timo
    KTH, Superseded Departments, Mathematics.
    Hidden Markov Models for Bioinformatics (2001). Book (Refereed).
  • 217.
    Koski, Timo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    The Likelihood Ratio Statistic for Testing Spatial Independence using a Separable Covariance Matrix (2009). Report (Other academic).
    Abstract [en]

    This paper deals with the problem of testing spatial independence for dependent observations. The sample observation matrix is assumed to follow a matrix normal distribution with a separable covariance matrix; in other words, it can be written as a Kronecker product of two positive definite matrices. Two cases are considered: when the temporal covariance is known and when it is unknown. When the temporal covariance is known, the maximum likelihood estimates are computed and the asymptotic null distribution is given. In the case when the temporal covariance is unknown, the maximum likelihood estimates of the parameters are found by an iterative alternating algorithm.

  • 218.
    Koski, Timo J. T.
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Noble, John M.
    Rios, Felix L.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    The Minimal Hoppe-Beta Prior Distribution for Directed Acyclic Graphs and Structure Learning. Manuscript (preprint) (Other academic).
    Abstract [en]

    The main contribution of this article is a new prior distribution over directed acyclic graphs intended for structured Bayesian networks, where the structure is given by an ordered block model. That is, the nodes of the graph are objects which fall into categories or blocks; the blocks have a natural ordering or ranking. The presence of a relationship between two objects is denoted by a directed edge, from the object of category of lower rank to the object of higher rank. The models considered here were introduced in Kemp et al. [7] for relational data and extended to multivariate data in Mansinghka et al. [12].

    We consider the situation where the nodes of the graph represent random variables whose joint probability distribution factorises along the DAG. We use a minimal layering of the DAG to express the prior. We describe Monte Carlo schemes, with a generative procedure similar to the one used for the prior, for finding the optimal a posteriori structure given a data matrix, and compare the performance with Mansinghka et al. and also with the uniform prior.

  • 219.
    Koski, Timo
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Noble, John
    University of Warsaw.
    A Review of Bayesian Networks and Structure Learning (2012). In: Mathematica Applicanda (Matematyka Stosowana), ISSN 2299-4009, Vol. 40, no 1, p. 51-103. Article in journal (Refereed).
    Abstract [en]

    This article reviews the topic of Bayesian networks. A Bayesian network is a factorisation of a probability distribution along a directed acyclic graph. The relation between graphical d-separation and independence is described. A short article from 1853 by Arthur Cayley [8] is discussed, which contains several ideas later used in Bayesian networks: factorisation, the noisy 'or' gate, applications of algebraic geometry to Bayesian networks. The ideas behind Pearl's intervention calculus when the DAG represents a causal dependence structure and the relation between the work of Cayley and Pearl is commented on. Most of the discussion is about structure learning, outlining the two main approaches, search and score versus constraint based. Constraint based algorithms often rely on the assumption of faithfulness, that the data to which the algorithm is applied is generated from distributions satisfying a faithfulness assumption where graphical d-separation and independence are equivalent. The article presents some considerations for constraint based algorithms based on recent data analysis, indicating a variety of situations where the faithfulness assumption does not hold. There is a short discussion about the causal discovery controversy, the idea that causal relations may be learned from data.

  • 220.
    Krebs, Daniel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Pricing a basket option when volatility is capped using affine jump-diffusion models (2013). Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis considers the price and characteristics of an exotic option called the Volatility-Cap-Target-Level (VCTL) option. The payoff function is of simple European option style, but the underlying value is a dynamic portfolio which is comprised of two components: a risky asset and a non-risky asset. The non-risky asset is a bond and the risky asset can be a fund or an index related to any asset category such as equities, commodities, real estate, etc.

    The main purpose of using a dynamic portfolio is to keep the realized volatility of the portfolio under control and preferably below a certain maximum level, denoted as the Volatility-Cap-Target-Level (VCTL). This is attained by a variable allocation between the risky asset and the non-risky asset during the maturity of the VCTL-option. The allocation is reviewed and if necessary adjusted every 15th day. Adjustment depends entirely upon the realized historical volatility of the risky asset.

    Moreover, it is assumed that the risky asset is governed by a certain group of stochastic differential equations called affine jump-diffusion models. All models will be calibrated using out-of-the-money European call options based on the Deutsche Aktien Index (DAX).

    The numerical implementation of the portfolio diffusions and the use of Monte Carlo methods will result in different VCTL-option prices. Thus, to price a nonstandard product and to comply with good risk management, it is advocated that the financial institution use several research models, such as the SVSJ and Sepp models, in addition to the Black-Scholes model.

    Keywords: exotic option, basket option, risk management, greeks, affine jump-diffusions, the Black-Scholes model, the Heston model, the Bates model with lognormal jumps, the Bates model with log-asymmetric double exponential jumps, the Stochastic-Volatility-Simultaneous-Jumps (SVSJ) model, the Sepp model.
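
    A volatility-cap allocation rule of the kind described above can be sketched in a few lines; the 15-day rebalancing frequency comes from the abstract, while the cap, window length, return data and annualization convention below are invented for illustration.

        import numpy as np

        def vctl_weights(risky_returns, vol_cap=0.10, window=30, rebalance=15):
            """Risky-asset weight path under a realized-volatility cap."""
            r = np.asarray(risky_returns, dtype=float)
            w = np.ones(len(r))
            current = 1.0
            for t in range(window, len(r)):
                if t % rebalance == 0:
                    realized = np.std(r[t - window:t], ddof=1) * np.sqrt(252)
                    # Scale down the risky weight when realized vol exceeds
                    # the cap; the remainder is allocated to the bond.
                    current = min(1.0, vol_cap / realized)
                w[t] = current
            return w

        rng = np.random.default_rng(0)
        ret = rng.normal(0.0003, 0.012, size=750)
        print("mean risky weight:", vctl_weights(ret).mean())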

  • 221.
    Kremer, Laura
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Assessment of a Credit Value-at-Risk for Corporate Credits (2013). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this thesis I describe the essential steps of developing a credit rating system. This comprises the credit scoring process that assigns a credit score to each credit, the forming of rating classes by the k-means algorithm and the assignment of a probability of default (PD) to the rating classes. The main focus is on the PD estimation, for which two approaches are presented. The first, simple approach, in the form of a calibration curve, assumes independence of the defaults of different corporate credits. The second approach, with mixture models, is more realistic as it takes default dependence into account. With these models we can use an estimate of a country's GDP to calculate an estimate for the Value-at-Risk of some credit portfolio.
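
    A compact sketch of the rating-system pipeline, assuming scikit-learn and synthetic scores and defaults: k-means forms the rating classes, and a PD per class is estimated as the class default frequency, i.e., the simple independence approach; the mixture-model refinement is not shown.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        scores = rng.normal(size=(2000, 1))           # credit scores
        # Synthetic defaults: higher score => higher default probability.
        pd_true = 1.0 / (1.0 + np.exp(2.0 - 1.5 * scores[:, 0]))
        defaults = rng.random(2000) < pd_true

        # Rating classes from k-means on the score, PD per class by frequency.
        classes = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scores)
        for c in range(5):
            mask = classes == c
            print(f"class {c}: n={mask.sum():4d}  PD={defaults[mask].mean():.3f}")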

  • 222.
    Köll, Joonas
    et al.
    KTH, School of Engineering Sciences (SCI), Aeronautical and Vehicle Engineering.
    Hallström, Stefan
    KTH, School of Engineering Sciences (SCI), Aeronautical and Vehicle Engineering.
    Influence from polydispersity on the morphology of Voronoi and equilibrium foams (2017). In: Journal of cellular plastics (Print), ISSN 0021-955X, E-ISSN 1530-7999, Vol. 53, no 2, p. 199-214. Article in journal (Refereed).
    Abstract [en]

    Stochastic foam models are generated from Voronoi spatial partitioning, using the centers of equi-sized hard spheres in random periodic distributions as seed points. Models with different levels of polydispersity are generated by varying the packing of the spheres. Subsequent relaxation is then performed with the Surface Evolver software which minimizes the surface area for better resemblance with real foam structures. The polydispersity of the Voronoi precursors is conserved when the models are converted into equilibrium models. The relation between the sphere packing fraction and the resulting degree of volumetric polydispersity is examined and the relations between the polydispersity and a number of associated morphology parameters are then investigated for both the Voronoi and the equilibrium models. Comparisons with data from real foams in the literature indicate that the used method is somewhat limited in terms of spread in cell volume but it provides a very controlled way of varying the foam morphology while keeping it periodic and truly stochastic. The study shows several strikingly consistent relations between the spread in cell volume and other geometric parameters, considering the stochastic nature of the models.

  • 223.
    Lamm, Ludvig
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Sunnegårdh, Erik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Efficient Sensitivity Analysis using Algorithmic Differentiation in Financial Applications (2015). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    One of the most essential tasks of a financial institution is to keep the financial risk the institution is facing down to an acceptable level. This risk can for example be incurred due to bought or sold financial contracts; however, it can usually be dealt with using some kind of hedging technique. Certain quantities referred to as "the Greeks" are often used to manage risk. The Greeks are usually determined using Monte Carlo simulation in combination with a finite difference approach, which can in some cases be very demanding in terms of computational cost. Because of this, alternative methods for determining the Greeks are of interest.

    In this report a method called Algorithmic Differentiation is evaluated. As will be described, there are two different modes of Algorithmic Differentiation, namely forward and adjoint mode. The evaluation is done by first introducing the theory of the method and applying it to a simple, non-financial example. Then the method is applied to three different situations often arising in financial applications. The first example covers the case where a grid of local volatilities is given and sensitivities of an option price with respect to all grid points are sought. The second example deals with the case of a basket option. Here sensitivities of the option with respect to all of the underlying assets are desired. The last example covers the case where sensitivities of a caplet with respect to all initial LIBOR rates, under the assumption of a LIBOR Market Model, are sought.

    It is shown that both forward and adjoint mode produce results aligning with those determined using a finite difference approach. It is also shown that by using the adjoint method, in all three cases, large savings in computational cost can be made compared to using forward mode or finite differences.
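
    Forward mode is the easier of the two settings to demonstrate; the self-contained sketch below implements it with dual numbers. Adjoint (reverse) mode, which the thesis finds much cheaper when many sensitivities are needed, requires recording a computational tape and is not shown.

        from dataclasses import dataclass
        import math

        @dataclass
        class Dual:
            val: float   # value
            dot: float   # derivative carried alongside the value

            def __add__(self, o): return Dual(self.val + o.val, self.dot + o.dot)
            def __sub__(self, o): return Dual(self.val - o.val, self.dot - o.dot)
            def __mul__(self, o): return Dual(self.val * o.val,
                                              self.val * o.dot + self.dot * o.val)

        def exp(d: Dual) -> Dual:
            e = math.exp(d.val)
            return Dual(e, e * d.dot)

        # d/dx of x*exp(x) at x=1 is (1+x)e^x = 2e; seed dot=1 on the input.
        x = Dual(1.0, 1.0)
        y = x * exp(x)
        print(y.dot, 2 * math.exp(1.0))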

  • 224. Larsson, Sara
    et al.
    Rydén, Tobias
    Lund University.
    Holst, Ulla
    Oredsson, Stina
    Johansson, Maria
    Estimating the distribution of the G2 phase duration from flow cytometric histograms (2008). In: Mathematical Biosciences, ISSN 0025-5564, E-ISSN 1879-3134, Vol. 211, no 1, p. 1-17. Article in journal (Refereed).
    Abstract [en]

    A mathematical model, based on branching processes, is proposed to interpret BrdUrd DNA FCM-derived data. Our main interest is in determining the distribution of the G2 phase duration. Two different model classes involving different assumptions on the distribution of the G2 phase duration are considered. Different assumptions on the G2 phase duration result in very similar distributions of the S phase duration, and the estimated means and standard deviations of the G2 phase duration are all in the same range.

  • 225. Larsson, Sara
    et al.
    Rydén, Tobias
    Lund University.
    Holst, Ulla
    Oredsson, Stina
    Johansson, Maria
    Estimating the Total Rate of DNA Replication Using Branching Processes (2008). In: Bulletin of Mathematical Biology, ISSN 0092-8240, E-ISSN 1522-9602, Vol. 70, no 8, p. 2177-2194. Article in journal (Refereed).
    Abstract [en]

    Increasing the knowledge of various cell cycle kinetic parameters, such as the length of the cell cycle and its different phases, is of considerable importance for several purposes including tumor diagnostics and treatment in clinical health care and a deepened understanding of tumor growth mechanisms. Of particular interest as a prognostic factor in different cancer forms is the S phase, during which DNA is replicated. In the present paper, we estimate the DNA replication rate and the S phase length from bromodeoxyuridine-DNA flow cytometry data. The mathematical analysis is based on a branching process model, paired with an assumed gamma distribution for the S phase duration, with which the DNA distribution of S phase cells can be expressed in terms of the DNA replication rate. Flow cytometry data typically contains rather large measurement variations, however, and we employ nonparametric deconvolution to estimate the underlying DNA distribution of S phase cells; an estimate of the DNA replication rate is then provided by this distribution and the mathematical model.

  • 226. Larsson, Sara
    et al.
    Rydén, Tobias
    Lund University.
    Holst, Ulla
    Oredsson, Stina
    Johansson, Maria
    Estimating the variation in S phase duration from flow cytometric histograms (2008). In: Mathematical Biosciences, ISSN 0025-5564, E-ISSN 1879-3134, Vol. 213, no 1, p. 40-49. Article in journal (Refereed).
    Abstract [en]

    A stochastic model for interpreting BrdUrd DNA FCM-derived data is proposed. The model is based on branching processes and describes the progression of the DNA distribution of BrdUrd-labelled cells through the cell cycle. With the main focus on estimating the S phase duration and its variation, the DNA replication rate is modelled by a piecewise linear function, while assuming a gamma distribution for the S phase duration. Estimation of model parameters was carried out using maximum likelihood for data from two different cell lines. The results provided quite a good fit to the data, suggesting that stochastic models may be a valuable tool for analysing this kind of data.

  • 227.
    Lauri, Linus
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Algorithmic evaluation of Parameter Estimation for Hidden Markov Models in Finance (2014). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Modeling financial time series is of great importance for being successful within the financial market. Hidden Markov Models are a great way to include the regime-shifting nature of financial data. This thesis focuses on gaining an in-depth knowledge of Hidden Markov Models in general and specifically the parameter estimation of the models. The objective is to evaluate if and how financial data can be fitted nicely with the model. The subject was requested by Nordea Markets with the purpose of gaining knowledge of HMMs for an eventual implementation of the theory by their index development group. The research chiefly consists of evaluating the algorithmic behavior of estimating model parameters. HMMs proved to be a good approach for modeling financial data, since much of the time series had properties that supported a regime-shifting approach. The most important factor for an effective algorithm is the number of states, easily explained as the distinguishable clusters of values. The suggested algorithm for continuously modeling financial data is to do an extensive monthly calculation of starting parameters that are used daily in a less time-consuming usage of the EM algorithm.
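
    A sketch of fitting a two-regime Gaussian HMM to synthetic returns, assuming the third-party hmmlearn package is available (its fit method runs EM/Baum-Welch); the two-state choice and all parameters are illustrative assumptions, not the thesis's setup.

        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        rng = np.random.default_rng(0)
        # Synthetic two-regime returns: calm and turbulent segments.
        calm = rng.normal(0.0005, 0.005, size=(500, 1))
        turb = rng.normal(-0.001, 0.02, size=(250, 1))
        X = np.vstack([calm, turb, calm])

        model = GaussianHMM(n_components=2, covariance_type="full",
                            n_iter=200, random_state=0).fit(X)
        states = model.predict(X)          # most likely regime per day
        print("log-likelihood:", model.score(X))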

  • 228.
    Leijonmarck, Eric
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Exploiting Temporal Difference for Energy Disaggregation via Discriminative Sparse Coding (2015). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis analyzes one-hour-based energy disaggregation using sparse coding by exploiting temporal differences. Energy disaggregation is the task of taking a whole-home energy signal and separating it into its component appliances. Studies have shown that having device-level energy information can cause users to conserve significant amounts of energy, but current electricity meters only report whole-home data. Thus, developing algorithmic methods for disaggregation presents a key technical challenge in the effort to maximize energy conservation. In energy disaggregation, sometimes called Non-Intrusive Load Monitoring (NILM), most approaches are based on appliances monitored at high frequency, while households only measure their consumption via smart meters, which only account for one-hour measurements. This thesis aims at implementing key algorithms from the paper "Energy Disaggregation via Discriminative Sparse Coding" by J. Zico Kolter, Siddharth Batra and Andrew Ng, and tries to replicate the results by exploiting temporal differences that occur when dealing with time series data. The implementation was successful, but the results were inconclusive when dealing with large datasets, as the algorithm was too computationally heavy for the resources available. The work was performed at the Swedish company Greenely, which develops visualizations based on gamification for energy bills via a mobile application.

  • 229. Li, Bo
    et al.
    Wu, Junfeng
    Qi, Hongsheng
    Proutiere, Alexandre
    KTH, School of Electrical Engineering and Computer Science (EECS), Automatic Control.
    Shi, Guodong
    Boolean Gossip Networks (2018). In: IEEE/ACM Transactions on Networking, ISSN 1063-6692, E-ISSN 1558-2566, Vol. 26, no 1, p. 118-130. Article in journal (Refereed).
    Abstract [en]

    This paper proposes and investigates a Boolean gossip model as a simplified but non-trivial probabilistic Boolean network. With positive node interactions, in view of standard theories from Markov chains, we prove that the node states asymptotically converge to an agreement at a binary random variable, whose distribution is characterized for large-scale networks by mean-field approximation. Using combinatorial analysis, we also successfully count the number of communication classes of the positive Boolean network explicitly in terms of the topology of the underlying interaction graph, where remarkably minor variation in local structures can drastically change the number of network communication classes. With general Boolean interaction rules, emergence of absorbing network Boolean dynamics is shown to be determined by the network structure, with necessary and sufficient conditions established regarding when the Boolean gossip process defines absorbing Markov chains. Particularly, it is shown that for the majority of the Boolean interaction rules, except for nine out of the total 2^16 - 1 possible nonempty sets of binary Boolean functions, whether the induced chain is absorbing has nothing to do with the topology of the underlying interaction graph, as long as connectivity is assumed. These results illustrate the possibilities of relating dynamical properties of Boolean networks to graphical properties of the underlying interactions.

  • 230.
    Liang, Zhimo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Forecasting Shanghai Composite Index using hidden Markov model (2017). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The aim of this thesis is to forecast the future trend of the Shanghai Composite Index and other securities. Our approach applies hidden Markov models to market-transaction data indirectly. Previous work has not considered the problem of dependence between training samples, which may result in inference bias. We therefore select samples that are not significantly dependent, and suppose those samples are independent of each other. Rather than forecasting the future trend by estimating the hidden state one day before the trend, we measure the probabilities of the trend directions by calculating the gaps between the likelihoods of two hidden Markov models over a period of time before the trends. As we have altered the target function of the optimization in the parameter-estimation process, the accuracy of our model is improved. Furthermore, the experimental results reveal that it is lucrative to select securities for portfolios by our method.

  • 231.
    Lidholm, Erik
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Industrial Economics and Management (Dept.), Entrepreneurship and innovation.
    Nudel, Benjamin
    KTH, School of Industrial Engineering and Management (ITM), Industrial Economics and Management (Dept.), Entrepreneurship and innovation.
    Implications of Multiple Curve Construction in the Swedish Swap Market (2014). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The global financial crisis of 2007 caused abrupt changes in the financial markets. Interest rates that were known to follow each other diverged. Furthermore, both regulation and an increased awareness of counterparty credit risks have fuelled a growth of collateralised contracts. As a consequence, pre-crisis swap pricing methods are no longer valid. In light of this, the purpose of this thesis is to apply a framework to the Swedish swap market that is able to consistently price interest rate and cross currency swaps in the presence of non-negligible cross currency basis spreads, and to investigate the pricing differences arising from the use and type of collateral. Through the implementation of a framework proposed by Fujii, Shimada and Takahashi (2010b), it is shown that the usage of collateral has a noticeable impact on the pricing. Ten year forward starting swaps are found to be priced at lower rates under collateral. Moreover, the results from pricing off-market swaps show that disregarding the impact of collateral would cause one to consistently underestimate the change in value of a contract, whether in or out of the money. The choice of collateral currency is also shown to matter, as pricing under SEK and USD as the collateral currencies yielded different results, in terms of constructed curves as well as in the pricing of spot starting, forward starting and off-market swaps. Based on the results from the pricing of off-market swaps, two scenarios are outlined that exemplify the importance of correct pricing methods when terminating and novating swaps. It is concluded that a market participant who fails to recognise the pricing implications from the usage and type of collateral could incur substantial losses.

  • 232.
    Ligai, Wolmir
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Statistisk analys av tågförseningar i Sverige [Statistical analysis of train delays in Sweden] (2017). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    In this work, data on train travel were analyzed to determine whether any factors could affect the delay time and the risk of delay. Data were provided by Trafikverket for all railway journeys in Sweden (n = 827087) for the year 2013, excluding metro and tram journeys. The models used were a multiple linear regression and two logistic regressions; the results of the latter were examined.

    The dependent variables in the models that were examined were which trains had delays to the final destination, and which trains had delays at all. Variables that showed correlation (p < 0.01) with delayed trains were non-holidays, planned travel time, and type of train. Number of days to the summer solstice, a variable for approximating the weather, showed a weak correlation with train delays. Reasons for delay during the journey were also positively correlated with delays to the final destination, and among these the greatest correlation belonged to "Accidents/danger and external factors" and "Infrastructure reasons".

    The results suggest that the factors that strongly affect delays are the train routes and the traffic capacity of the routes.

  • 233.
    Limin, Emelie
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Analysis of purchase behaviors of IKEA family card users, using Generalized linear model (2011). Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    At the end of 2009 IKEA had 267 stores in 25 countries with total sales of 21.5 billion Euros, and is one of the most successful home furnishing companies in the world. IKEA has its roots in Småland, a region in Sweden with a history of poverty. This has come to characterize IKEA's business concept, with quality furniture at low prices. With this business concept in mind IKEA strives to save money and avoid wasting resources, which also goes in line with IKEA's efforts to protect the environment. Market adaptation is key to succeeding with this concept. A better understanding of what the customer wants reduces the risk of producing too much and therefore decreases the amount of waste. This thesis studies the customers' purchase habits when it comes to chairs and tables. IKEA collects information on its customers through the IKEA family card, which stores all purchase information of the customer. With access to that database we are able to compare purchase habits on different markets. In this thesis we are interested in knowing which chairs the customer has bought on the same receipt as a certain table. With this information we can compare actual figures with IKEA's beliefs and build models of the purchase patterns.
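
    A minimal sketch of the kind of generalized linear model the title refers to, assuming the statsmodels package: purchase counts regressed on market and table indicators with a Poisson family. All data below are synthetic stand-ins; the actual IKEA variables are not public.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 500
        market = rng.integers(0, 3, size=n)    # 3 hypothetical markets
        table = rng.integers(0, 2, size=n)     # which table is on the receipt

        # Design matrix: intercept + market dummies + table indicator.
        X = sm.add_constant(
            np.column_stack([market == 1, market == 2, table]).astype(float))
        # Synthetic chair-purchase counts with a log-linear mean.
        mu = np.exp(0.2 + 0.3 * (market == 1) - 0.1 * (market == 2) + 0.5 * table)
        y = rng.poisson(mu)

        fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        print(fit.summary())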

  • 234.
    Lindblad, Kalle
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    How big is large?: A study of the limit for large insurance claims in case reserves (2011). Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    A company issuing an insurance will provide, in return for a monetary premium, acceptance of the liability to make certain payments to the insured person or company if some beforehand specified event occurs. There will always be a delay between the occurrence of this event and the actual payment from the insurance company. It is therefore necessary for the company to put aside money for this liability. This money is called the reserve. When a claim is reported, a claim handler will make an estimate of how much the company will have to pay to the claimant. This amount is booked as a liability. This type of reserve is called a "case reserve". When making the estimate, the claim handler has the option of giving the claim a standard reserve or a manual reserve. A standard reserve is a statistically calculated amount based on historical claim costs. This type of reserve is more often used for small claims. A manual reserve is a reserve subjectively decided by the claim handler. This type of reserve is more often used for large claims. This thesis proposes a theory to model and calculate an optimal limit above which a claim should be considered large. The method is also applied to some different types of claims.

  • 235.
    Lindskog, Filip
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hult, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hammarlid, Ola
    Rehn, Carl-Johan
    Risk and portfolio analysis: principles and methods (2012). Book (Refereed).
  • 236.
    Linusson, Svante
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Potka, Samu
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Sulzgruber, Robin
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    On random shifted standard Young tableaux and 132-avoiding sorting networks. Manuscript (preprint) (Other academic).
    Abstract [en]

    We study shifted standard Young tableaux (SYT). The limiting surface of uniformly random shifted SYT of staircase shape is determined, with the integers in the SYT as heights. This implies via properties of the Edelman-Greene bijection results about random 132-avoiding sorting networks, including limit shapes for trajectories and intermediate permutations. Moreover, the expected number of adjacencies in SYT is considered. It is shown that on average each row and each column of a shifted SYT of staircase shape contains precisely one adjacency.

  • 237.
    Liu, Du
    et al.
    KTH, School of Electrical Engineering (EES), Communication Theory.
    Flierl, Markus
    KTH, School of Electrical Engineering (EES), Communication Theory.
    Energy Compaction on Graphs for Motion-Adaptive Transforms (2015). In: Data Compression Conference Proceedings, 2015, p. 457. Conference paper (Refereed).
    Abstract [en]

    It is well known that the Karhunen-Loeve Transform (KLT) diagonalizes the covariance matrix and gives the optimal energy compaction. Since the real covariance matrix may not be obtained in video compression, we consider a covariance model that can be constructed without extra cost. In this work, a covariance model based on a graph is considered for temporal transforms of videos. The relation between the covariance matrix and the Laplacian is studied. We obtain an explicit expression of the relation for tree graphs, where the trees are defined by motion information. The proposed graph-based covariance is a good model for motion-compensated image sequences. In terms of energy compaction, our graph-based covariance model has the potential to outperform the classical Laplacian-based signal analysis.
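
    The connection the paper exploits can be checked numerically in a few lines: for an AR(1)-style covariance, the eigenvectors of a path-graph Laplacian nearly coincide with the covariance eigenvectors (the KLT basis). This numpy sketch compares the two bases; the path graph is only a stand-in for the motion-defined tree graphs of the paper.

        import numpy as np

        n, rho = 8, 0.95
        # Path-graph Laplacian L = D - A.
        A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
        L = np.diag(A.sum(axis=1)) - A
        # AR(1) covariance C[i,j] = rho^|i-j|; its eigenvectors give the KLT.
        C = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

        _, U_lap = np.linalg.eigh(L)          # ascending graph frequency
        _, U_klt = np.linalg.eigh(C)          # ascending variance
        # Compare bases via absolute inner products; KLT is reordered so both
        # run from smoothest/highest-variance to roughest/lowest-variance.
        M = np.abs(U_lap.T @ U_klt[:, ::-1])
        print(np.round(M.diagonal(), 3))      # entries near 1 => aligned bases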

  • 238.
    Ljung, Carl
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Copula selection and parameter estimation in market risk models (2017). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this thesis, the literature is reviewed for theory regarding elliptical copulas (Gaussian, Student's t, and Grouped t) and methods for calibrating parametric copulas to sets of observations. Theory regarding model diagnostics is also summarized in the thesis. Historical data of equity indices and government bond rates from several geographical regions along with U.S. corporate bond indices are used as proxies of the most significant stochastic variables in the investment portfolio of If P&C. These historical observations are transformed into pseudo-uniform observations, pseudo-observations, using parametric and non-parametric univariate models. The parametric models are fitted using both maximum likelihood and least squares of the quantile function. Elliptical copulas are then calibrated to the pseudo-observations using the well known methods Inference Function for Margins (IFM) and Semi-Parametric (SP) as well as compositions of these methods and a non-parametric estimator of Kendall's tau. The goodness-of-fit of the calibrated multivariate models is assessed in terms of general dependence, tail dependence and mean squared error, as well as by using universal measures such as the Akaike and Bayesian Information Criteria, AIC and BIC. The mean squared error is computed both using the empirical joint distribution and the empirical Kendall distribution function. General dependence is measured using the scale-invariant measures Kendall's tau, Spearman's rho, and Blomqvist's beta, while tail dependence is assessed using Krupskii's tail-weighted measures of dependence (see [16]). Monte Carlo simulation is used to estimate these measures for copulas where analytical calculation is not feasible. Gaussian copulas scored lower than Student's t and Grouped t copulas in every test conducted. However, not all tests produced conclusive results. Further, the obtained values of the tail-weighted measures of dependence imply a systematically lower tail dependence of Gaussian copulas compared to historical observations.
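
    One calibration step mentioned above, estimating the copula correlation from Kendall's tau, has a closed form for elliptical copulas: rho = sin(pi*tau/2). The sketch below (scipy, synthetic data in place of real pseudo-observations) applies it.

        import numpy as np
        from scipy.stats import kendalltau

        rng = np.random.default_rng(0)
        # Synthetic stand-ins for two dependent margins (true correlation 0.6).
        z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=5000)
        u, v = z[:, 0], z[:, 1]

        tau, _ = kendalltau(u, v)
        rho_hat = np.sin(np.pi * tau / 2)    # implied elliptical-copula correlation
        print(f"tau={tau:.3f}  rho_hat={rho_hat:.3f}  (true 0.6)")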

  • 239.
    Lorentz, Pär
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A Modified Sharpe Ratio Based Portfolio Optimization (2012). Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The performance of an optimal-weighted portfolio strategy is evaluated against an equal-weighted portfolio strategy when transaction costs are penalized. The optimal allocation weights are found by maximizing a modified Sharpe ratio measure each trading day, where modified refers to the expected return of an asset in this context. The leverage of the investment is determined by a conditional expectation estimate of the number of portfolio assets on the coming day. A moving window is used to historically measure the transition probabilities of moving from one state to another within this stochastic count process, and this is used as an input to the estimator. It is found that the most accurate estimate is the actual trading day's number of portfolio assets, obtained when the size of the moving window is one. Increasing the penalty parameter on transaction costs of selling and buying assets between trading days lowers the aggregated transaction cost and increases the performance of the optimal-weighted portfolio considerably. The best portfolio performance is obtained when at least 50% of the capital is invested equally among the assets when maximizing the modified Sharpe ratio. The optimal-weighted and equal-weighted portfolios are constructed on a daily basis, where the allowed VaR0.05 is €300 000 for each portfolio. This sets the limit on the amount of capital allowed to be invested each trading day, and is determined by empirical VaR0.05 simulations of these two portfolios.

  • 240.
    Lorenzo Varela, Juan Manuel
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Transport Science.
    Börjesson, Maria
    KTH.
    Daly, Andrew
    Measuring errors by latent variables in transport models (2017). Conference paper (Refereed).
  • 241.
    Loso, Jesper
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Forecasting of Self-Rated Health Using Hidden Markov Algorithm (2014). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this thesis, a model is developed for predicting a person's monthly average of self-rated health (SRH) during the following month. It is based on statistics from a form constructed by HealthWatch. The model is a Hidden Markov Algorithm based on Hidden Markov Models, where the hidden part is the future value of self-rated health. The emissions are based on five of the eleven questions that make up the HealthWatch form. The questions are answered on a scale from zero to one hundred. The model predicts in which of three intervals of SRH the responder will most likely answer, on average, during the following month. The final model has an accuracy of 80%.
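
    A minimal sketch of the prediction idea, assuming invented data: a transition matrix between the three SRH intervals is estimated from a responder's monthly state sequence, and next month's interval is predicted as the most probable transition from the current state. The thesis's emission modelling of the five questions is omitted.

        import numpy as np

        # Invented monthly SRH interval labels for one responder: 0 = low, 1 = medium, 2 = high.
        states = [2, 2, 1, 1, 2, 2, 2, 1, 0, 1, 1, 2]

        # Estimate the transition matrix by counting observed transitions.
        P = np.zeros((3, 3))
        for s, t in zip(states[:-1], states[1:]):
            P[s, t] += 1
        P = P / P.sum(axis=1, keepdims=True)

        # Predict the following month's interval from the current state.
        current = states[-1]
        print("predicted next interval:", int(np.argmax(P[current])))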

  • 242.
    Lundemo, Anna
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Detecting change points in remote sensing time series2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    We analyse methods for detecting change points in optical remote sensing time series of lake drainage. Change points are points in a data set where the statistical properties of the data change. The data that we look at represent drained lakes in the Arctic hemisphere. They are generally noisy, with observations missing due to difficult weather conditions. We evaluate a partitioning algorithm with five different approaches to modelling the data, based on least-squares regression and an assumption of normally distributed measurement errors. We also evaluate two computer programs called DBEST and TIMESAT and a MATLAB function called findchangepts(). We find that TIMESAT, DBEST, and the MATLAB function are not relevant for our purposes. We also find that the partitioning algorithm that models the data as normally distributed around a piecewise constant function is best suited for finding change points in our data.
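
    The least-squares, piecewise-constant idea behind the preferred partitioning approach can be sketched for a single change point: scan the candidate split indices and pick the one minimizing the summed squared deviations of each segment from its own mean. This is a simplification; the thesis's algorithm handles multiple segments and missing observations.

        import numpy as np

        def best_single_changepoint(y):
            # Return the split index minimizing total within-segment squared error.
            best_k, best_cost = None, np.inf
            for k in range(1, len(y)):
                left, right = y[:k], y[k:]
                cost = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
                if cost < best_cost:
                    best_k, best_cost = k, cost
            return best_k

        rng = np.random.default_rng(2)
        y = np.concatenate([rng.normal(1.0, 0.2, 60), rng.normal(0.3, 0.2, 40)])  # level drop, as for a drained lake
        print("estimated change point:", best_single_changepoint(y))              # expect a value near 60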

  • 243.
    Lundström, Ina
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Finding Risk Factors for Long-Term Sickness Absence Using Classification Trees2013Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In this thesis, a model is developed for predicting whether someone has an elevated risk of long-term sickness absence during the forthcoming year. The model is a classification tree that classifies subjects as having high or low risk of long-term sickness absence based on their answers to the HealthWatch form. The HealthWatch form is a questionnaire about health consisting of eleven questions, such as "How do you feel right now?", "How did you sleep last night?", and "How is your job satisfaction right now?". As measures of the risk of long-term sickness absence, the Oldenburg Burnout Inventory and a scale for performance-based self-esteem are used. Separate models are built for men and for women. The model for women performs well enough on a test set to be acceptable as a general model and can be used for prediction. Some conclusions can also be drawn from the additional information given by the classification tree: workload and work atmosphere do not seem to contribute much to an increased risk of long-term sickness absence, while job satisfaction seems to be one of the most important factors. The model for men performs poorly on a test set, and it is therefore not advisable to use it for prediction or to draw other conclusions from it.
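
    A hedged sketch of the modelling approach with scikit-learn; the feature names echo the HealthWatch questions, but the data and the rule generating the risk labels are invented for illustration:

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier, export_text

        rng = np.random.default_rng(3)
        # Invented answers on a 0-100 scale: [overall feeling, sleep, job satisfaction].
        X = rng.uniform(0, 100, size=(500, 3))
        # Invented labelling rule: low job satisfaction implies high risk, echoing the thesis's finding.
        y = (X[:, 2] < 35).astype(int)

        tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
        print(export_text(tree, feature_names=["feeling", "sleep", "job_satisfaction"]))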

  • 244.
    Löfdahl, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Stochastic modelling in disability insurance2013Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis consists of two papers related to the stochastic modelling of disability insurance. In the first paper, we propose a stochastic semi-Markovian framework for disability modelling in a multi-period discrete-time setting. The logistic transforms of disability inception and recovery probabilities are modelled by means of stochastic risk factors and basis functions, using counting processes and generalized linear models. The model for disability inception also takes IBNR claims into consideration. We fit various versions of the models to Swedish disability claims data.

    In the second paper, we consider a large, homogeneous portfolio of life or disability annuity policies. The policies are assumed to be independent conditional on an external stochastic process representing the economic environment. Using a conditional law of large numbers, we establish the connection between risk aggregation and claims reserving for large portfolios. Further, we derive a partial differential equation for moments of present values. Moreover, we show how statistical multi-factor intensity models can be approximated by one-factor models, which allows for solving the PDEs very efficiently. Finally, we give a numerical example where moments of present values of disability annuities are computed using finite difference methods.
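
    A minimal sketch of the GLM building block in the first paper: a logistic model for a disability-inception indicator, here fitted with statsmodels on synthetic data. The covariates, the quadratic basis function, and the coefficients are invented; the paper's stochastic risk factors and IBNR treatment are not reproduced.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        n = 2000
        age = rng.uniform(25, 64, n)
        X = sm.add_constant(np.column_stack([age, (age - 45)**2]))  # simple basis functions
        logit_p = -6 + 0.08 * age                                   # invented inception effect
        y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

        fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
        print(fit.params)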

  • 245.
    Löfdahl Grelsson, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Topics in life and disability insurance2015Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis consists of five papers, presented in Chapters A-E, on topics in life and disability insurance. It is naturally divided into two parts, where papers A and B discuss disability rates estimation based on historical claims data, and papers C-E discuss claims reserving, risk management and insurer solvency.

    In Paper A, disability inception and recovery probabilities are modelled in a generalized linear models (GLM) framework. For prediction of future disability rates, it is customary to combine GLMs with time series forecasting techniques into a two-step method involving parameter estimation from historical data and subsequent calibration of a time series model. This approach may in fact lead to both conceptual and numerical problems since any time trend components of the model are incoherently treated as both model parameters and realizations of a stochastic process.

    In Paper B, we suggest that this general two-step approach can be improved in the following way: First, we assume a stochastic process form for the time trend component. The corresponding transition densities are then incorporated into the likelihood, and the model parameters are estimated using the Expectation-Maximization algorithm.

    In Papers C and D, we consider a large portfolio of life or disability annuity policies. The policies are assumed to be independent conditional on an external stochastic process representing the economic-demographic environment. Using the Conditional Law of Large Numbers (CLLN), we establish the connection between claims reserving and risk aggregation for large portfolios. Moreover, we show how statistical multi-factor intensity models can be approximated by one-factor models, which allows for computing reserves and capital requirements efficiently. Paper C focuses on claims reserving and ultimate risk, whereas the focus of Paper D is on the one-year risks associated with the Solvency II directive.

    In Paper E, we consider claims reserving for life insurance policies with reserve-dependent payments driven by multi-state Markov chains. The associated prospective reserve is formulated as a recursive utility function using the framework of backward stochastic differential equations (BSDE). We show that the prospective reserve satisfies a nonlinear Thiele equation for Markovian BSDEs when the driver is a deterministic function of the reserve and the underlying Markov chain. Aggregation of prospective reserves for large and homogeneous insurance portfolios is considered through mean-field approximations. We show that the corresponding prospective reserve satisfies a BSDE of mean-field type and derive the associated nonlinear Thiele equation.
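
    For intuition about the reserve equations referenced above, here is a sketch of solving the classical (linear) Thiele differential equation backwards in time by Euler steps, for a simple life annuity. This is the textbook linear case, not the nonlinear mean-field BSDE version of Paper E; all rates are invented.

        import numpy as np

        # Thiele's ODE for a life annuity paying rate b while alive, no death benefit:
        # V'(t) = (r + mu(t)) * V(t) - b, with terminal condition V(T) = 0.
        r, b, T, dt = 0.02, 1.0, 20.0, 1 / 12
        mu = lambda t: 0.002 * np.exp(0.09 * (t + 45))   # invented Gompertz-style mortality

        V = 0.0
        for t in np.arange(T, 0, -dt):                   # integrate backwards from T to 0
            V -= dt * ((r + mu(t)) * V - b)              # Euler step from t to t - dt
        print(f"prospective reserve at t = 0: {V:.3f}")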

  • 246. Magnusson, M.
    et al.
    Jonsson, L.
    Villani, M.
    Broman, David
    KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS.
    Sparse Partially Collapsed MCMC for Parallel Inference in Topic Models2018In: Journal of Computational And Graphical Statistics, ISSN 1061-8600, E-ISSN 1537-2715, Vol. 27, no 2, p. 449-463Article in journal (Refereed)
    Abstract [en]

    Topic models, and more specifically the class of latent Dirichlet allocation (LDA), are widely used for probabilistic modeling of text. Markov chain Monte Carlo (MCMC) sampling from the posterior distribution is typically performed using a collapsed Gibbs sampler. We propose a parallel sparse partially collapsed Gibbs sampler and compare its speed and efficiency to state-of-the-art samplers for topic models on five well-known text corpora of differing sizes and properties. In particular, we propose and compare two different strategies for sampling the parameter block with latent topic indicators. The experiments show that the increase in statistical inefficiency from only partial collapsing is smaller than commonly assumed, and can be more than compensated by the speedup from parallelization and sparsity on larger corpora. We also prove that the partially collapsed samplers scale well with the size of the corpus. The proposed algorithm is fast, efficient, exact, and can be used in more modeling situations than the ordinary collapsed sampler. Supplementary materials for this article are available online.
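
    For reference, a compact sketch of the ordinary collapsed Gibbs sampler that the article's partially collapsed sampler is benchmarked against: each token's topic is resampled from its full conditional given the count statistics with that token removed. The corpus and hyperparameters are toy values.

        import numpy as np

        rng = np.random.default_rng(5)
        K, V, alpha, beta = 3, 50, 0.1, 0.01
        docs = [rng.integers(0, V, size=20) for _ in range(10)]   # toy corpus of word ids
        z = [rng.integers(0, K, size=len(d)) for d in docs]       # initial topic assignments

        ndk = np.zeros((len(docs), K)); nkw = np.zeros((K, V)); nk = np.zeros(K)
        for d, (words, topics) in enumerate(zip(docs, z)):
            for w, k in zip(words, topics):
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

        for sweep in range(50):
            for d, words in enumerate(docs):
                for i, w in enumerate(words):
                    k = z[d][i]                                   # remove the token's current topic
                    ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                    p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                    k = rng.choice(K, p=p / p.sum())              # draw from the full conditional
                    z[d][i] = k
                    ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1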

  • 247.
    Magureanu, Stefan
    et al.
    KTH, School of Electrical Engineering (EES), Automatic Control.
    Combes, Richard
    Supelec, France.
    Proutiere, Alexandre
    KTH, School of Electrical Engineering (EES), Automatic Control. INRIA, France.
    Lipschitz Bandits: Regret Lower Bounds and Optimal Algorithms2014Conference paper (Refereed)
    Abstract [en]

    We consider stochastic multi-armed bandit problems where the expected reward is a Lipschitz function of the arm, and where the set of arms is either discrete or continuous. For discrete Lipschitz bandits, we derive asymptotic problem-specific lower bounds for the regret satisfied by any algorithm, and propose OSLB and CKL-UCB, two algorithms that efficiently exploit the Lipschitz structure of the problem. In fact, we prove that OSLB is asymptotically optimal, as its asymptotic regret matches the lower bound. The regret analysis of our algorithms relies on a new concentration inequality for weighted sums of KL divergences between the empirical distributions of rewards and their true distributions. For continuous Lipschitz bandits, we propose to first discretize the action space, and then apply OSLB or CKL-UCB, algorithms that provably exploit the structure efficiently. This approach is shown, through numerical experiments, to significantly outperform existing algorithms that directly deal with the continuous set of arms. Finally, the results and algorithms are extended to contextual bandits with similarities.
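
    A sketch of the KL machinery such index algorithms build on, for Bernoulli rewards: the kl-UCB index of an arm is the largest mean whose KL divergence from the empirical mean stays within a log(t) budget, found by bisection. This illustrates the ingredient, not OSLB or CKL-UCB themselves.

        import numpy as np

        def kl_bernoulli(p, q, eps=1e-12):
            p = min(max(p, eps), 1 - eps); q = min(max(q, eps), 1 - eps)
            return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

        def kl_ucb_index(mean, pulls, t):
            # Largest q >= mean with pulls * KL(mean, q) <= log(t), via bisection.
            budget = np.log(t) / pulls
            lo, hi = mean, 1.0
            for _ in range(50):
                mid = (lo + hi) / 2
                if kl_bernoulli(mean, mid) <= budget:
                    lo = mid
                else:
                    hi = mid
            return lo

        print(kl_ucb_index(mean=0.4, pulls=25, t=1000))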

  • 248. Maire, Florian
    et al.
    Douc, Randal
    Olsson, Jimmy
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    COMPARISON OF ASYMPTOTIC VARIANCES OF INHOMOGENEOUS MARKOV CHAINS WITH APPLICATION TO MARKOV CHAIN MONTE CARLO METHODS2014In: Annals of Statistics, ISSN 0090-5364, E-ISSN 2168-8966, Vol. 42, no 4, p. 1483-1510Article in journal (Refereed)
    Abstract [en]

    In this paper, we study the asymptotic variance of sample path averages for inhomogeneous Markov chains that evolve alternatingly according to two different π-reversible Markov transition kernels P and Q. More specifically, our main result allows us to compare directly the asymptotic variances of two inhomogeneous Markov chains associated with different kernels P_i and Q_i, i ∈ {0, 1}, as soon as the kernels of each pair (P_0, P_1) and (Q_0, Q_1) can be ordered in the sense of lag-one autocovariance. As an important application, we use this result for comparing different data-augmentation-type Metropolis-Hastings algorithms. In particular, we compare some pseudo-marginal algorithms and propose a novel exact algorithm, referred to as the random refreshment algorithm, which is more efficient, in terms of asymptotic variance, than the Grouped Independence Metropolis-Hastings algorithm and has a computational complexity that does not exceed that of the Monte Carlo Within Metropolis algorithm.
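
    The quantity under comparison, the asymptotic variance of sample path averages, can be estimated empirically; a hedged sketch using the standard batch-means estimator on a toy AR(1) chain standing in for MCMC output:

        import numpy as np

        def batch_means_variance(x, n_batches=30):
            # Batch-means estimate of the asymptotic variance of the sample mean of x.
            b = len(x) // n_batches
            means = x[: b * n_batches].reshape(n_batches, b).mean(axis=1)
            return b * means.var(ddof=1)

        rng = np.random.default_rng(6)
        x = np.zeros(100_000)
        for i in range(1, len(x)):                  # toy AR(1) chain with autocorrelation 0.8
            x[i] = 0.8 * x[i - 1] + rng.normal()
        print(batch_means_variance(x))              # theory: (1 / (1 - 0.8**2)) * (1.8 / 0.2) = 25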

  • 249.
    Malgrat, Maxime
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Pricing of a “worst of” option using a Copula method2013Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In this thesis, we use a copula method to price basket options, especially “worst of” options. The dependence structure of the underlying assets is modeled using different families of copulas. The copula parameters are estimated via maximum likelihood from a sample of observed daily returns.

    The Monte Carlo method is revisited to generate the underlying assets’ daily returns from the fitted copula.

    Two baskets are priced: one composed of two correlated assets and one composed of two uncorrelated assets. The obtained prices are then compared with the prices obtained using the Pricing Partners software.
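
    A minimal sketch of the pricing loop described above, assuming a Gaussian copula for the dependence structure, lognormal marginals, and placeholder strike, rate, and volatilities (none of which are taken from the thesis):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        n, rho, r, T, K = 100_000, 0.5, 0.02, 1.0, 95.0
        S0, sigma = np.array([100.0, 100.0]), np.array([0.2, 0.3])

        # Sample from a Gaussian copula: correlated normals -> uniform pseudo-samples.
        L = np.linalg.cholesky([[1, rho], [rho, 1]])
        u = stats.norm.cdf(rng.standard_normal((n, 2)) @ L.T)

        # Push the uniforms through illustrative lognormal marginals for terminal prices.
        ST = S0 * np.exp((r - sigma**2 / 2) * T + sigma * np.sqrt(T) * stats.norm.ppf(u))

        payoff = np.maximum(K - ST.min(axis=1), 0.0)    # "worst of" put-style payoff
        print("Monte Carlo price:", np.exp(-r * T) * payoff.mean())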

  • 250.
    Malmberg, Emilie
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Sjöberg, Jonas
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Förklarande faktorer bakom statsobligationsspread mellan USA och Tyskland2014Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    This bachelor’s thesis in Mathematical Statistics and Industrial Economics aims to determine explanatory variables for the yield spread between U.S. and German government bonds. The bonds used in this thesis have maturities of five and ten years. To accomplish the task at hand, a multiple linear regression model is used. Regression models are commonly used to describe government bond spreads, and this bachelor’s thesis aims to create a basis for further modelling and to contribute to the improvement of existing models. The problem formulation and course of action have been developed in cooperation with a Swedish bank, not named for reasons of confidentiality. Two main parts constitute this bachelor’s thesis. The Industrial Economics part investigates which macroeconomic factors are of interest for building the model; the economics provide, in this case, the context for the statistical analysis, which underscores the importance of this part. For the mathematical part of the thesis, a multiple linear regression and related statistical tests are performed on the chosen variables. The results of these tests indicate that the policy rate spread between the countries is the most significant variable and by itself describes the government bond spread quite well. However, the policy rate does not seem to describe the bond spread as well over the last five years. This suggests that the importance of the policy rate spread is diminishing, while the importance of other factors is increasing.
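
    A hedged sketch of the regression setup, with synthetic placeholder series (the thesis's actual variables and data are confidential): the government bond spread is regressed on a policy rate spread and another candidate factor via OLS.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(8)
        n = 300
        policy_spread = rng.normal(0.5, 0.3, n)            # placeholder policy rate spread
        risk_factor = rng.normal(18, 5, n)                 # placeholder second macro factor
        bond_spread = 0.9 * policy_spread + 0.01 * risk_factor + rng.normal(0, 0.1, n)

        X = sm.add_constant(np.column_stack([policy_spread, risk_factor]))
        fit = sm.OLS(bond_spread, X).fit()
        print(fit.summary())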
