201 - 250 of 392
  • 201.
    Isaksson, Fredrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Quantifying effects of deformable CT-MR registration, 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Rigid image registration is an important part of many medical applications. In order to make correct decisions from the registration process, the uncertainty of the results should be taken into account. In this thesis a framework for estimating and visualising the spatial uncertainty of rigid image registration without ground-truth measurements is presented. The framework uses a deformable registration algorithm to estimate the errors and a groupwise registration for collocating multiple image sets to generate multiple realisations of the error field. A mean field and a covariance field are then generated, which allows a characterisation of the error field. The framework is used to evaluate errors in CT-MR registration, and a statistically significant bias field is detected using random field theory. It is also established that B-spline registration of CT images to themselves exhibits a bias.

  • 202.
    Ivert, Annica
    et al.
    KTH, School of Computer Science and Communication (CSC).
    Aranha, C.
    Iba, H.
    Feature selection and classification using ensembles of genetic programs and within-class and between-class permutations, 2015. In: 2015 IEEE Congress on Evolutionary Computation, CEC 2015, IEEE, 2015, p. 1121-1128. Conference paper (Refereed).
    Abstract [en]

    Many feature selection methods are based on the assumption that important features are highly correlated with their corresponding classes, but mainly uncorrelated with each other. Often, this assumption can help eliminate redundancies and produce good predictors using only a small subset of features. However, when the predictability depends on interactions between features, such methods will fail to produce satisfactory results. In this paper a method that can find important features, both independently and dependently discriminative, is introduced. This method works by performing two different types of permutation tests that classify each of the features as either irrelevant, independently predictive or dependently predictive. It was evaluated using a classifier based on an ensemble of genetic programs. The attributes chosen by the permutation tests were shown to yield classifiers at least as good as the ones obtained when all attributes were used during training, and often better. The proposed method also fared well when compared to other attribute selection methods such as RELIEFF and CFS. Furthermore, the ability to determine whether an attribute was independently or dependently predictive was confirmed using artificial datasets with known dependencies.
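
    The within-class versus between-class permutation idea can be illustrated in a few lines. The sketch below is not the paper's genetic-program ensemble: it uses a random forest and a toy XOR target (features 0 and 1 are only jointly predictive), and all names and data are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def permuted_accuracy(model, X, y, j, within_class, rng):
    """Accuracy after permuting feature j, globally or within each class."""
    Xp = X.copy()
    if within_class:
        for c in np.unique(y):
            idx = np.where(y == c)[0]
            Xp[idx, j] = Xp[rng.permutation(idx), j]
    else:
        Xp[:, j] = Xp[rng.permutation(len(y)), j]
    return model.score(Xp, y)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
# XOR-style target: features 0 and 1 are only *jointly* predictive.
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
base = model.score(X_te, y_te)

for j in range(X.shape[1]):
    between = base - permuted_accuracy(model, X_te, y_te, j, False, rng)
    within = base - permuted_accuracy(model, X_te, y_te, j, True, rng)
    # No drop in either test: irrelevant. Drop only under the global
    # permutation: independently predictive. Drop also under the
    # within-class permutation: predictive through interactions.
    print(f"feature {j}: between-drop {between:+.3f}, within-drop {within:+.3f}")
```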

  • 203.
    Jangenstål, Lovisa
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hedging Interest Rate Swaps, 2015. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis investigates hedging strategies for a book of interest rate swaps in the currencies EUR and SEK. The aim is to minimize the variance of the portfolio and keep the transaction costs down. The analysis is performed using historical simulation for two different cases: first with the real changes of the forward rate curve and the discount curve, and then with principal component analysis used to reduce the dimension of the changes in the curves. These methods are compared with a method that uses the principal component variance to randomize new principal components.
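
    As a rough illustration of the dimension-reduction step, the sketch below runs PCA on simulated daily curve changes via the SVD; the data and the choice of three components are assumptions made for the example, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy daily changes of a 20-tenor forward-rate curve: (n_days, n_tenors).
curve_changes = rng.normal(scale=1e-4, size=(250, 20)).cumsum(axis=1)

centered = curve_changes - curve_changes.mean(axis=0)
# PCA via the SVD: rows of Vt are the principal components
# (typically interpreted as shift, slope and curvature of the curve).
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained by first 3 PCs:", explained[:3].sum())

# Dimension reduction: represent each day's curve move by 3 scores.
k = 3
scores = centered @ Vt[:k].T        # (n_days, k)
approx = scores @ Vt[:k]            # low-dimensional approximation of the moves
```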

  • 204. Janson, Svante
    et al.
    Stefánsson, Sigurdur Örn
    KTH, Centres, Nordic Institute for Theoretical Physics NORDITA.
    Scaling limits of random planar maps with a unique large face, 2015. In: Annals of Probability, ISSN 0091-1798, E-ISSN 2168-894X, Vol. 43, no 3, p. 1045-1081. Article in journal (Refereed).
    Abstract [en]

    We study random bipartite planar maps defined by assigning nonnegative weights to each face of a map. We prove that for certain choices of weights a unique large face, having degree proportional to the total number of edges in the maps, appears when the maps are large. It is furthermore shown that as the number of edges n of the planar maps goes to infinity, the profile of distances to a marked vertex rescaled by n^(-1/2) is described by a Brownian excursion. The planar maps, with the graph metric rescaled by n^(-1/2), are then shown to converge in distribution toward Aldous' Brownian tree in the Gromov-Hausdorff topology. In the proofs, we rely on the Bouttier-di Francesco-Guitter bijection between maps and labeled trees and recent results on simply generated trees where a unique vertex of a high degree appears when the trees are large.

  • 205.
    Jeppsson, Johannes
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Modeling Natural Human Hand Motion for Grasp Animation, 2017. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This report was carried out at Gleechi, a Swedish start-up company working on implementing hand use in Virtual Reality. The thesis presents hand models used to generate natural-looking grasping motions. One model was made for each of the thirty-three different grasp types in Feix's The GRASP Taxonomy.

    Each model is based on functional principal component analysis, which was performed on data containing recorded joint angles of grasping motions from real subjects. Prior to functional principal component analysis, dynamic time warping was performed on the recorded joint angles in order to bring them to the same length and make them suitable for statistical analysis. The last step of the analysis was to project the data onto the functional principal components and train Gaussian mixture models on the weights obtained. New grasping motions could be generated by sampling weights from the Gaussian mixture models and attaching them to the functional principal components.

    The generated grasps were in general satisfactory, but not all of the thirty-three grasps were distinguishable from each other. This was most likely caused by the fact that each degree of freedom was modelled in isolation, so that no correlation between degrees of freedom was included in the model.

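    A minimal sketch of the pipeline described above, with ordinary PCA on discretized curves standing in for functional PCA, and a toy single-joint dataset replacing the recorded joint angles:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Toy stand-in for time-warped joint-angle trajectories: 200 grasps,
# each a curve sampled at 50 aligned time points (a single joint shown).
t = np.linspace(0, 1, 50)
ab = rng.normal([1.0, 0.5], 0.1, (200, 2))
curves = np.array([a * np.sin(np.pi * t) + b * t + rng.normal(0, 0.01, 50)
                   for a, b in ab])

# Functional PCA approximated by ordinary PCA on the discretized curves.
pca = PCA(n_components=3).fit(curves)
weights = pca.transform(curves)

# Model the component weights with a Gaussian mixture, then sample new
# weights and map them back to curves to generate new motions.
gmm = GaussianMixture(n_components=2, random_state=0).fit(weights)
new_weights, _ = gmm.sample(5)
new_curves = pca.inverse_transform(new_weights)  # synthetic trajectories
```
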
  • 206.
    Johannesson, Niclas
    et al.
    KTH, School of Electrical Engineering (EES), Electric Power and Energy Systems.
    Bogodorova, Tetiana
    KTH, School of Electrical Engineering (EES), Electric Power and Energy Systems.
    Vanfretti, Luigi
    Rensselaer Polytech Inst, Elect Comp & Syst Dept, Troy, NY USA.
    Identifying Low-Order Frequency-Dependent Transmission Line Model Parameters, 2017. In: 2017 IEEE PES Innovative Smart Grid Technologies Conference Europe, ISGT-Europe 2017 - Proceedings, Institute of Electrical and Electronics Engineers (IEEE), 2017. Conference paper (Refereed).
    Abstract [en]

    This paper describes the modeling and parameter identification of a frequency dependent transmission line model from time-domain data. To achieve this, a single-phase transmission line model was implemented in OpenModelica where the frequency dependent behavior of the line was realized by a series of rational functions using the Modelica language. Next, the developed line model was exported as a Functional Mock-up Unit (FMU). The RaPId toolbox was then used for automated parameter optimization of the model within the FMU that was interfaced to RaPId via the FMI Toolbox for MATLAB. Given a reasonable starting guess of the set of parameters, the toolbox improved the model's response significantly, resulting in a good approximation even though low-order representations were used for the identification process. It was found that even though the process was straightforward, it can be enhanced by exploiting the physical/numerical properties of this specific problem.

  • 207.
    Johansson, Annelie
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Claims Reserving on Macro- and Micro-Level, 2015. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Three methods for claims reserving are compared on two data sets. The first two methods are the commonly used chain ladder method, which uses aggregated payments, and the relatively new double chain ladder method, which apart from the payments data also uses the number of reported claims. The third method is more advanced: data on the micro level is needed, such as the reporting delay and the number of payment periods for every single claim. The two data sets that are used consist of claims with typically shorter and longer settlement times, respectively. The questions considered are whether anything is gained from using a method that is more advanced than the chain ladder method, and whether the gain differs between the two data sets. The methods are compared by simulating the reserve distributions as well as comparing the point estimates of the reserve with the real out-of-sample reserve. The results show that there is no gain in using the micro-level method considered. The double chain ladder method, on the other hand, performs better than the chain ladder method. The difference between the two data sets is that the reserve in the data set with longer settlement times is harder to estimate, but no difference can be seen when it comes to method choice.
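
    For readers unfamiliar with the chain ladder method, the sketch below estimates development factors from a small cumulative run-off triangle and computes the implied reserve; the triangle is invented for illustration.

```python
import numpy as np

# Cumulative run-off triangle: rows = accident years, columns = development
# years; np.nan marks the future (unobserved) cells.
tri = np.array([
    [100., 160., 190., 200.],
    [110., 175., 205., np.nan],
    [120., 190., np.nan, np.nan],
    [130., np.nan, np.nan, np.nan],
])
n = tri.shape[1]

# Development factors f_j: column sums of year j+1 over year j,
# taken over the rows where both entries are observed.
f = []
for j in range(n - 1):
    rows = ~np.isnan(tri[:, j + 1])
    f.append(tri[rows, j + 1].sum() / tri[rows, j].sum())

# Fill the lower triangle by successive multiplication with the factors.
full = tri.copy()
for i in range(tri.shape[0]):
    for j in range(n - 1):
        if np.isnan(full[i, j + 1]):
            full[i, j + 1] = full[i, j] * f[j]

# Reserve = estimated ultimates minus the latest observed diagonal.
latest = np.array([tri[i, n - 1 - i] for i in range(tri.shape[0])])
reserve = full[:, -1].sum() - latest.sum()
print("factors:", np.round(f, 3), " total reserve:", round(reserve, 1))
```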

  • 208.
    Johansson, Carl-Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Model risk in a hedging perspective, 2011. Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
  • 209. Johansson, U.
    et al.
    Linusson, H.
    Löfström, T.
    Boström, Henrik
    KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS.
    Interpretable regression trees using conformal prediction, 2018. In: Expert systems with applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 97, p. 394-404. Article in journal (Refereed).
    Abstract [en]

    A key property of conformal predictors is that they are valid, i.e., their error rate on novel data is bounded by a preset level of confidence. For regression, this is achieved by turning the point predictions of the underlying model into prediction intervals. Thus, the most important performance metric for evaluating conformal regressors is not the error rate, but the size of the prediction intervals, where models generating smaller (more informative) intervals are said to be more efficient. State-of-the-art conformal regressors typically utilize two separate predictive models: the underlying model providing the center point of each prediction interval, and a normalization model used to scale each prediction interval according to the estimated level of difficulty for each test instance. When using a regression tree as the underlying model, this approach may cause test instances falling into a specific leaf to receive different prediction intervals. This clearly deteriorates the interpretability of a conformal regression tree compared to a standard regression tree, since the path from the root to a leaf can no longer be translated into a rule explaining all predictions in that leaf. In fact, the model cannot even be interpreted on its own, i.e., without reference to the corresponding normalization model. Current practice effectively presents two options for constructing conformal regression trees: to employ a (global) normalization model, and thereby sacrifice interpretability; or to avoid normalization, and thereby sacrifice both efficiency and individualized predictions. In this paper, two additional approaches are considered, both employing local normalization: the first approach estimates the difficulty by the standard deviation of the target values in each leaf, while the second approach employs Mondrian conformal prediction, which results in regression trees where each rule (path from root node to leaf node) is independently valid. An empirical evaluation shows that the first approach is as efficient as current state-of-the-art approaches, thus eliminating the efficiency vs. interpretability trade-off present in existing methods. Moreover, it is shown that if a validity guarantee is required for each single rule, as provided by the Mondrian approach, a penalty with respect to efficiency has to be paid, but it is only substantial at very high confidence levels.
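
    A minimal sketch of the paper's first local-normalization approach (interval widths scaled by the standard deviation of training targets in each leaf), using split conformal prediction on simulated heteroscedastic data; the data and hyperparameters are assumptions made for the example.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(2000, 2))
# Heteroscedastic noise so that difficulty varies between leaves.
y = np.sin(X[:, 0]) + 0.3 * rng.normal(size=2000) * (1 + (X[:, 1] > 0))

X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)
tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X_tr, y_tr)

# Local difficulty estimate: std of the training targets in each leaf.
leaf_tr = tree.apply(X_tr)
leaf_std = {l: max(y_tr[leaf_tr == l].std(), 1e-6) for l in np.unique(leaf_tr)}

# Normalized nonconformity scores on the calibration set.
sigma_cal = np.array([leaf_std[l] for l in tree.apply(X_cal)])
scores = np.abs(y_cal - tree.predict(X_cal)) / sigma_cal

alpha = 0.1  # target 90% coverage
m = len(scores)
q = np.quantile(scores, np.ceil((1 - alpha) * (m + 1)) / m)

# All instances in a leaf share the width q * leaf_std, so each
# root-to-leaf path still reads as a single rule.
x_new = np.array([[0.5, -1.0]])
width = q * leaf_std[tree.apply(x_new)[0]]
pred = tree.predict(x_new)[0]
print(f"90% interval: [{pred - width:.2f}, {pred + width:.2f}]")
```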

  • 210.
    Jolin, Shan Williams
    et al.
    KTH, School of Engineering Sciences (SCI), Applied Physics, Nanostructure Physics. Royal Inst Technol, Nanostruct Phys, Stockholm, Sweden.
    Rosquist, Kjell
    Stockholm Univ, Dept Phys, Stockholm, Sweden.
    Analytic analysis of irregular discrete universes, 2018. In: General Relativity and Gravitation, ISSN 0001-7701, E-ISSN 1572-9532, Vol. 50, no 9, article id 115. Article in journal (Refereed).
    Abstract [en]

    In this work we investigate the dynamics of cosmological models with spherical topology containing up to 600 Schwarzschild black holes arranged in an irregular manner. We solve the field equations by tessellating the 3-sphere into eight identical cells, each having a single edge which is shared by all cells. The shared edge is enforced to be locally rotationally symmetric, thereby allowing for solving the dynamics to high accuracy along this edge. Each cell will then carry an identical (up to parity) configuration which can however have an arbitrarily random distribution. The dynamics of such models is compared to that of previous works on regularly distributed black holes as well as with the standard isotropic dust models of the FLRW type. The irregular models are shown to have richer dynamics than that of the regular models. The randomization of the distribution of the black holes is done both without bias and also with a certain clustering bias. The geometry of the initial configuration of our models is shown to be qualitatively different from the regular case in the way it approaches the isotropic model.

  • 211.
    Jonsson, Sara
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Rönnlund, Beatrice
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    The New Standardized Approach for Measuring Counterparty Credit Risk, 2014. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This study investigates the differences in calculation of exposure at default between the current exposure method (CEM) and the new standardized approach for measuring counterparty credit risk exposures (SA-CCR) for over the counter (OTC) derivatives. The study intends to analyze the consequence of the usage of different approaches for netting as well as the differences in EAD between asset classes. After implementing both models and calculating EAD on real trades of a Swedish commercial bank it was obvious that SA-CCR has a higher level of complexity than its predecessor. The results from this study indicate that SA-CCR gives a lower EAD than CEM because of the higher recognition of netting, but a higher EAD when netting is not allowed. Foreign exchange derivatives are affected to a higher extent than interest rate derivatives in this particular study. Foreign exchange derivatives got lower EAD both when netting was allowed and when netting was not allowed under SA-CCR. A change of method for calculating EAD from CEM to SA-CCR could result in lower minimum capital requirements.

  • 212.
    Jägerhult Fjelberg, Marianne
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Predicting data traffic in cellular data networks, 2015. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The exponential increase in cellular data usage in recent time is evident, which introduces challenges and opportunities for the telecom industry. From a Radio Resource Management perspective, it is therefore most valuable to be able to predict future events such as user load. The objective of this thesis is thus to investigate whether one can predict such future events based on information available in a base station. This is done by clustering data obtained from a simulated 4G network using Gaussian Mixture Models. Based on this, an evaluation of the cluster signatures is performed, where heavy-load users seem to be identified. Furthermore, evaluations of other temporal aspects tied to the clusters and cluster transitions are performed. Secondly, supervised classification using Random Forest is performed, in order to investigate whether prediction of these cluster labels is possible. High accuracies for most of these classifications are obtained, suggesting that predictions based on these methods can be made.
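
    The two-stage pipeline (unsupervised GMM clustering followed by supervised prediction of cluster labels) might look as follows; the features and the way "next" labels are formed here are placeholders, not the thesis's simulated 4G data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
# Placeholder per-user snapshots (e.g. throughput, buffer, session length).
X = np.vstack([rng.normal(0, 1, (500, 4)), rng.normal(3, 1, (200, 4))])

# Step 1: unsupervised clustering of user snapshots with a Gaussian mixture.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)

# Step 2: supervised prediction of the *next* cluster label from the
# current features, here faked by pairing each snapshot with its successor.
X_now, y_next = X[:-1], labels[1:]
X_tr, X_te, y_tr, y_te = train_test_split(X_now, y_next, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("next-cluster prediction accuracy:", rf.score(X_te, y_te))
```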

  • 213. Jääskinen, Väinö
    et al.
    Xiong, Jie
    Corander, Jukka
    Koski, Timo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Sparse Markov Chains for Sequence Data, 2014. In: Scandinavian Journal of Statistics, ISSN 0303-6898, E-ISSN 1467-9469, Vol. 41, no 3, p. 639-655. Article in journal (Refereed).
    Abstract [en]

    Finite memory sources and variable-length Markov chains have recently gained popularity in data compression and mining, in particular, for applications in bioinformatics and language modelling. Here, we consider denser data compression and prediction with a family of sparse Bayesian predictive models for Markov chains in finite state spaces. Our approach lumps transition probabilities into classes composed of invariant probabilities, such that the resulting models need not have a hierarchical structure as in context tree-based approaches. This can lead to a substantially higher rate of data compression, and such non-hierarchical sparse models can be motivated for instance by data dependence structures existing in the bioinformatics context. We describe a Bayesian inference algorithm for learning sparse Markov models through clustering of transition probabilities. Experiments with DNA sequence and protein data show that our approach is competitive in both prediction and classification when compared with several alternative methods on the basis of variable memory length.

  • 214.
    Jöhnemark, Alexander
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Modeling Operational Risk, 2012. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The Basel II accord requires banks to put aside a capital buffer against unexpected operational losses, resulting from inadequate or failed internal processes, people and systems or from external events. Under the sophisticated Advanced Measurement Approach, banks are given the opportunity to develop their own model to estimate operational risk. This report focuses on a loss distribution approach based on a set of real data.

    First a comprehensive data analysis was made, which suggested that the observations belonged to a heavy-tailed distribution. An evaluation of commonly used distributions was performed. The evaluation resulted in the choice of a compound Poisson distribution to model frequency and a piecewise defined distribution with an empirical body and a generalized Pareto tail to model severity. The frequency distribution and the severity distribution define the loss distribution, from which Monte Carlo simulations were made in order to estimate the 99.9% quantile, also known as the regulatory capital.

    Conclusions made along the way were that including all operational risks in a model is hard, but possible, and that extreme observations have a huge impact on the outcome.
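
    A minimal sketch of the loss distribution approach described above: compound Poisson frequency, a severity with an empirical body and a generalized Pareto tail, and Monte Carlo estimation of the 99.9% quantile. All parameter values are invented for illustration.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(5)

# Severity: empirical body below a threshold u, generalized Pareto tail above.
body = np.array([1e3, 5e3, 8e3, 2e4, 4e4, 7e4])  # observed "small" losses
u, xi, beta = 1e5, 0.5, 8e4                      # GPD threshold, shape, scale
p_tail = 0.05                                    # P(loss exceeds u)

def draw_severity(n):
    tail = rng.random(n) < p_tail
    out = rng.choice(body, size=n).astype(float)  # empirical body
    out[tail] = u + genpareto.rvs(xi, scale=beta, size=tail.sum(),
                                  random_state=rng)
    return out

lam = 25          # Poisson frequency: expected number of losses per year
years = 50_000
annual = np.zeros(years)
for i in range(years):
    annual[i] = draw_severity(rng.poisson(lam)).sum()

# Regulatory capital under the LDA: the 99.9% quantile of the annual loss.
print("99.9% quantile:", f"{np.quantile(annual, 0.999):,.0f}")
```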

  • 215.
    Kallur, Oskar
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    On the use of Value-at-Risk based models for the Fixed Income market as a risk measure for Central Counterparty clearing, 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this thesis the use of VaR-based models is investigated for the purpose of setting margin requirements for Fixed Income portfolios. VaR-based models have become one of the standard ways for Central Counterparties to determine the margin requirements for different types of portfolios. However, there are many different ways to implement a VaR-based model in practice, especially for Fixed Income portfolios. The models presented in this thesis are based on Filtered Historical Simulation (FHS). Furthermore, a model that combines FHS with a Student's t copula to model the correlation between instruments in a portfolio is presented. All models are backtested using historical data dating from 1998 to 2016. The FHS models seem to produce reasonably accurate VaR estimates. However, there are other market-related properties that must be fulfilled for a model to be used to set margin requirements. These properties are investigated and discussed.
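
    A sketch of the FHS idea: devolatilize historical returns with a volatility filter, then rescale the standardized residuals with the current volatility and read off the loss quantile. An EWMA filter stands in here for the GARCH-type filters typically used; nothing below is taken from the thesis's actual models.

```python
import numpy as np

def fhs_var(returns, alpha=0.99, lam=0.94):
    """One-day Filtered Historical Simulation VaR.

    An EWMA (RiskMetrics-style) volatility filter stands in for the
    GARCH filters commonly used in practice.
    """
    returns = np.asarray(returns, dtype=float)
    var = np.empty_like(returns)
    var[0] = returns.var()
    for t in range(1, len(returns)):
        var[t] = lam * var[t - 1] + (1 - lam) * returns[t - 1] ** 2
    sigma = np.sqrt(var)
    std_resid = returns / sigma                 # devolatilized innovations
    sigma_next = np.sqrt(lam * var[-1] + (1 - lam) * returns[-1] ** 2)
    simulated = sigma_next * std_resid          # refiltered with current vol
    return -np.quantile(simulated, 1 - alpha)   # loss quantile

rng = np.random.default_rng(6)
r = 0.01 * rng.standard_t(df=5, size=2000)      # toy heavy-tailed returns
print("1-day 99% FHS VaR:", round(fhs_var(r), 4))
```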

  • 216.
    Karlsson, Johan
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Enqvist, Per
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Gattami, A.
    Confidence assessment for spectral estimation based on estimated covariances, 2016. In: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 4343-4347. Conference paper (Refereed).
    Abstract [en]

    In probability theory, time series analysis, and signal processing, many identification and estimation methods rely on covariance estimates as intermediate statistics. Errors in estimated covariances propagate and degrade the quality of the estimation result. In particular, in large network systems where each system node of the network gathers and passes on results, it is important to know the reliability of the information so that informed decisions can be made. In this work, we design confidence regions based on covariance estimates and study how these can be used for spectral estimation. In particular, we consider three different confidence regions based on sets of unitarily invariant matrices and bound the eigenvalue distribution based on three principles: uniform bounds; arithmetic and harmonic means; and the Marcenko-Pastur law eigenvalue distribution for random matrices. Using these methodologies we robustly bound the energy in a selected frequency band, and compare the resulting spectral bounds from the respective confidence regions.

  • 217.
    Kiamehr, Ramin
    et al.
    Department of Geodesy and Geomatics, Zanjan University, Iran.
    Eshagh, Mehdi
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Estimating variance components of ellipsoidal, orthometric and geoidal heights through the GPS/levelling Network in Iran, 2008. In: Journal of the Earth and Space Physics, ISSN 0378-1046, Vol. 34, no 3, p. 1-13. Article in journal (Refereed).
    Abstract [en]

    The Best Quadratic Unbiased Estimation (BQUE) of variance components in the Gauss-Helmert model is used to combine adjustment of GPS/levelling and geoid to determine the individual variance components for each of the three height types. Through the research, different reasons for obtaining negative variance components were discussed, and a new modified version of the Best Quadratic Unbiased Non-negative Estimator (MBQUNE) was successfully developed and applied. This estimation could be useful for estimating the absolute accuracy level which can be achieved using the GPS/levelling method. A general MATLAB function is presented for numerical estimation of variance components by using the different parametric models. The modified BQUNE and the developed software were successfully applied for estimating the variance components through the sample GPS/levelling network in Iran. In the following research, we used 75 outlier-free and well-distributed GPS/levelling data points. Three corrective surface models based on the 4, 5 and 7 parameter models were used through the combined adjustment of the GPS/levelling and geoidal heights. Using the 7-parameter model, the standard deviation indexes of the geoidal, geodetic and orthometric heights in Iran were estimated to be about 27, 39 and 35 cm, respectively.

  • 218.
    Kihlström, Gustav
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A self-normalizing neural network approach to bond liquidity classification, 2018. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Bond liquidity risk is complex and something that every bond investor needs to take into account. In this paper we investigate how well a self-normalizing neural network (SNN) can be used to classify bonds with respect to their liquidity, and compare the results with those of a simpler logistic regression. This is done by analyzing the two algorithms' predictive capabilities on the Swedish bond market. Performing this analysis we find that the performance of the SNN and the logistic regression are broadly on the same level. However, the substantial overfitting to the training data in the case of the SNN suggests that a better-performing model could be created by applying regularization techniques. As such, the conclusion is that more research is needed to determine whether neural networks are the preferred method for modelling liquidity.
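
    A self-normalizing network in the sense of Klambauer et al. (2017) combines SELU activations, AlphaDropout and LeCun-normal initialization. A minimal PyTorch sketch follows; the architecture is an assumption for illustration, not the thesis's exact network.

```python
import torch
import torch.nn as nn

class SNN(nn.Module):
    """SELU + AlphaDropout + LeCun-normal init, per Klambauer et al."""

    def __init__(self, n_features, n_classes, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, width), nn.SELU(), nn.AlphaDropout(0.05),
            nn.Linear(width, width), nn.SELU(), nn.AlphaDropout(0.05),
            nn.Linear(width, n_classes),
        )
        for m in self.net:
            if isinstance(m, nn.Linear):
                # LeCun normal: std = 1 / sqrt(fan_in), zero bias.
                nn.init.normal_(m.weight, std=m.in_features ** -0.5)
                nn.init.zeros_(m.bias)

    def forward(self, x):
        return self.net(x)

model = SNN(n_features=10, n_classes=2)     # e.g. liquid vs illiquid
logits = model(torch.randn(32, 10))         # train with nn.CrossEntropyLoss
```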

  • 219.
    Klingmann Rönnqvist, Max
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Numerical Instability of Particle Learning: a case study, 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This master's thesis is about a method called Particle Learning (PL) which can be used to analyze so-called hidden Markov models (HMM) or, with an alternative terminology, state-space models (SSM), which are very popular for modeling time series. The advantage of PL over more established methods is its capacity to process new datapoints with a constant demand on computational resources, but it has been suspected to suffer from a problem known as particle path degeneracy. The purpose of this report is to investigate the degeneracy of PL by testing it on two examples. The results suggest that the method may not work very well for long time series.

  • 220.
    Koivusalo, Richard
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Statistical analysis of empirical pairwise copulas for the S&P 500 stocks, 2012. Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    It is of great importance to find an analytical copula that will represent the empirical lower tail dependence. In this study, the pairwise empirical copulas are estimated using data on the S&P 500 stocks during the period 2007-2010. Different optimization methods and measures of dependence have been used to fit Gaussian, t and Clayton copulas to the empirical copulas, in order to represent the empirical lower tail dependence. These different measures of dependence and optimization methods, with their restrictions, point at different analytical copulas being optimal. In this study the t copula with 5 degrees of freedom gives the best representation of empirical lower tail dependence, whether one uses the 'Empirical maximum likelihood estimator' or 'Equal τ' as an approach.

  • 221. Koski, Timo
    Hidden Markov models for bioinformatics, 2001. Book (Refereed).
  • 222.
    Koski, Timo
    KTH, Superseded Departments, Mathematics.
    Hidden Markov Models for Bioinformatics, 2001. Book (Refereed).
  • 223.
    Koski, Timo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    The Likelihood Ratio Statistic for Testing Spatial Independence using a Separable Covariance Matrix, 2009. Report (Other academic).
    Abstract [en]

    This paper deals with the problem of testing spatial independence for dependent observations. The sample observation matrix is assumed to follow a matrix normal distribution with a separable covariance matrix; in other words, it can be written as a Kronecker product of two positive definite matrices. Two cases are considered: when the temporal covariance is known and when it is unknown. When the temporal covariance is known, the maximum likelihood estimates are computed and the asymptotic null distribution is given. In the case when the temporal covariance is unknown, the maximum likelihood estimates of the parameters are found by an iterative alternating algorithm.

  • 224.
    Koski, Timo J. T.
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Noble, John M.
    Rios, Felix L.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    The Minimal Hoppe-Beta Prior Distribution for Directed Acyclic Graphs and Structure Learning. Manuscript (preprint) (Other academic).
    Abstract [en]

    The main contribution of this article is a new prior distribution over directed acyclic graphs intended for structured Bayesian networks, where the structure is given by an ordered block model. That is, the nodes of the graph are objects which fall into categories or blocks; the blocks have a natural ordering or ranking. The presence of a relationship between two objects is denoted by a directed edge, from the object of category of lower rank to the object of higher rank. The models considered here were introduced in Kemp et al. [7] for relational data and extended to multivariate data in Mansinghka et al. [12].

    We consider the situation where the nodes of the graph represent random variables, whose joint probability distribution factorises along the DAG. We use a minimal layering of the DAG to express the prior. We describe Monte Carlo schemes, based on a generative process similar to the one used for the prior, for finding the optimal a posteriori structure given a data matrix, and compare the performance with Mansinghka et al. and also with the uniform prior.

  • 225.
    Koski, Timo
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Noble, John
    University of Warsaw .
    A Review of Bayesian Networks and Structure Learning, 2012. In: Mathematica Applicanda (Matematyka Stosowana), ISSN 2299-4009, Vol. 40, no 1, p. 51-103. Article in journal (Refereed).
    Abstract [en]

    This article reviews the topic of Bayesian networks. A Bayesian network is a factorisation of a probability distribution along a directed acyclic graph. The relation between graphical d-separation and independence is described. A short article from 1853 by Arthur Cayley [8] is discussed, which contains several ideas later used in Bayesian networks: factorisation, the noisy 'or' gate, applications of algebraic geometry to Bayesian networks. The ideas behind Pearl's intervention calculus when the DAG represents a causal dependence structure, and the relation between the work of Cayley and Pearl, are commented on. Most of the discussion is about structure learning, outlining the two main approaches, search and score versus constraint based. Constraint based algorithms often rely on the assumption of faithfulness, that the data to which the algorithm is applied is generated from distributions satisfying a faithfulness assumption where graphical d-separation and independence are equivalent. The article presents some considerations for constraint based algorithms based on recent data analysis, indicating a variety of situations where the faithfulness assumption does not hold. There is a short discussion about the causal discovery controversy, the idea that causal relations may be learned from data.
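
    The d-separation notion discussed above can be checked mechanically via the classical ancestral-graph-and-moralization construction; a small sketch using networkx (the helper below is written for illustration, not taken from the article):

```python
import itertools
import networkx as nx

def d_separated(dag, xs, ys, zs):
    """True if node sets xs and ys are d-separated by zs in the DAG."""
    relevant = set(xs) | set(ys) | set(zs)
    # 1. Restrict to the relevant nodes and all of their ancestors.
    keep = relevant | {a for n in relevant for a in nx.ancestors(dag, n)}
    ancestral = dag.subgraph(keep)
    # 2. Moralize: drop directions and marry the parents of each node.
    moral = nx.Graph(ancestral.edges())
    moral.add_nodes_from(ancestral.nodes())
    for n in ancestral.nodes():
        for p, q in itertools.combinations(list(ancestral.predecessors(n)), 2):
            moral.add_edge(p, q)
    # 3. Remove the conditioning set; separation = no remaining path.
    moral.remove_nodes_from(zs)
    return not any(nx.has_path(moral, x, y)
                   for x in xs for y in ys
                   if x in moral and y in moral)

g = nx.DiGraph([("a", "c"), ("b", "c"), ("c", "d")])   # collider at c
print(d_separated(g, {"a"}, {"b"}, set()))  # True: a and b are independent
print(d_separated(g, {"a"}, {"b"}, {"d"}))  # False: conditioning opens c
```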

  • 226.
    Krebs, Daniel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Pricing a basket option when volatility is capped using affine jump-diffusion models, 2013. Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis considers the price and characteristics of an exotic option called the Volatility-Cap-Target-Level (VCTL) option. The payoff function is of a simple European option style, but the underlying value is a dynamic portfolio comprised of two components: a risky asset and a non-risky asset. The non-risky asset is a bond and the risky asset can be a fund or an index related to any asset category such as equities, commodities, real estate, etc.

    The main purpose of using a dynamic portfolio is to keep the realized volatility of the portfolio under control and preferably below a certain maximum level, denoted as the Volatility-Cap-Target-Level (VCTL). This is attained by a variable allocation between the risky asset and the non-risky asset during the maturity of the VCTL-option. The allocation is reviewed and if necessary adjusted every 15th day. Adjustment depends entirely upon the realized historical volatility of the risky asset.

    Moreover, it is assumed that the risky asset is governed by a certain group of stochastic differential equations called affine jump-diffusion models. All models will be calibrated using out-of-the-money European call options based on the Deutsche-Aktien-Index (DAX).

    The numerical implementation of the portfolio diffusions and the use of Monte Carlo methods will result in different VCTL-option prices. Thus, to price a nonstandard product and to comply with good risk management, it is advocated that the financial institution use several research models, such as the SVSJ and Sepp models, in addition to the Black-Scholes model.

    Keywords: Exotic option, basket option, risk management, greeks, affine jump-diffusions, the Black-Scholes model, the Heston model, the Bates model with lognormal jumps, the Bates model with log-asymmetric double exponential jumps, the Stochastic-Volatility-Simultaneous-Jumps (SVSJ) model, the Sepp model.
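
    The volatility-cap mechanism described above can be sketched as follows. The proportional allocation rule weight = min(1, cap / realized volatility) is a common choice assumed here for illustration, since the abstract does not spell out the exact rule.

```python
import numpy as np

def vctl_weights(risky_returns, vol_cap=0.10, window=15, periods=252):
    """Risky-asset weight, reviewed every `window` days:
    weight = min(1, cap / realized vol); the rest sits in the bond."""
    w = np.ones(len(risky_returns))
    for t in range(window, len(risky_returns), window):
        realized = risky_returns[t - window:t].std() * np.sqrt(periods)
        w[t:t + window] = min(1.0, vol_cap / max(realized, 1e-8))
    return w

rng = np.random.default_rng(7)
r = rng.normal(0, 0.02, 1000)          # toy daily risky-asset returns
portfolio = vctl_weights(r) * r        # bond leg assumed to earn ~0 here
print("realized portfolio vol:", round(portfolio.std() * np.sqrt(252), 3))
```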

  • 227.
    Kremer, Laura
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Assessment of a Credit Value at Risk for Corporate Credits, 2013. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this thesis I describe the essential steps of developing a credit rating system. This comprises the credit scoring process that assigns a credit score to each credit, the forming of rating classes by the k-means algorithm and the assignment of a probability of default (PD) for the rating classes. The main focus is on the PD estimation for which two approaches are presented. The first and simple approach in form of a calibration curve assumes independence of the defaults of different corporate credits. The second approach with mixture models is more realistic as it takes default dependence into account. With these models we can use an estimate of a country’s GDP to calculate an estimate for the Value-at-Risk of some credit portfolio.
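
    A minimal sketch of the rating-class step: cluster credit scores with k-means and estimate a per-class PD as the observed default frequency in each class. Scores and default flags are simulated, and the default-dependence modelling via mixture models is not shown.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
scores = rng.beta(2, 5, 1000).reshape(-1, 1)   # toy credit scores in [0, 1]
# Toy default flags: lower score => higher default probability.
defaults = rng.random(1000) < 0.3 * (1 - scores.ravel())

# Form rating classes by clustering the scores; PD per class is then the
# observed default frequency within the class.
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(scores)
for c in range(5):
    in_c = km.labels_ == c
    print(f"class {c}: mean score {scores[in_c].mean():.2f}, "
          f"PD {defaults[in_c].mean():.3f} ({in_c.sum()} credits)")
```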

  • 228.
    Köll, Joonas
    et al.
    KTH, School of Engineering Sciences (SCI), Aeronautical and Vehicle Engineering.
    Hallström, Stefan
    KTH, School of Engineering Sciences (SCI), Aeronautical and Vehicle Engineering.
    Influence from polydispersity on the morphology of Voronoi and equilibrium foams, 2017. In: Journal of cellular plastics (Print), ISSN 0021-955X, E-ISSN 1530-7999, Vol. 53, no 2, p. 199-214. Article in journal (Refereed).
    Abstract [en]

    Stochastic foam models are generated from Voronoi spatial partitioning, using the centers of equi-sized hard spheres in random periodic distributions as seed points. Models with different levels of polydispersity are generated by varying the packing of the spheres. Subsequent relaxation is then performed with the Surface Evolver software which minimizes the surface area for better resemblance with real foam structures. The polydispersity of the Voronoi precursors is conserved when the models are converted into equilibrium models. The relation between the sphere packing fraction and the resulting degree of volumetric polydispersity is examined and the relations between the polydispersity and a number of associated morphology parameters are then investigated for both the Voronoi and the equilibrium models. Comparisons with data from real foams in the literature indicate that the used method is somewhat limited in terms of spread in cell volume but it provides a very controlled way of varying the foam morphology while keeping it periodic and truly stochastic. The study shows several strikingly consistent relations between the spread in cell volume and other geometric parameters, considering the stochastic nature of the models.

  • 229.
    Lamm, Ludvig
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Sunnegårdh, Erik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Efficient Sensitivity Analysis using Algorithmic Differentiation in Financial Applications, 2015. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    One of the most essential tasks of a financial institution is to keep the financial risk it is facing down to an acceptable level. This risk can, for example, be incurred due to bought or sold financial contracts; however, it can usually be dealt with using some kind of hedging technique. Certain quantities referred to as "the Greeks" are often used to manage risk. The Greeks are usually determined using Monte Carlo simulation in combination with a finite difference approach; this can in some cases be very demanding in terms of computational cost. Because of this, alternative methods for determining the Greeks are of interest.

    In this report a method called Algorithmic differentiation is evaluated. As will be described, there are two different settings of Algorithmic differentiation, namely, forward and adjoint mode. The evaluation will be done by firstly introducing the theory of the method and applying it to a simple, non financial, example. Then the method is applied to three different situations often arising in financial applications. The first example covers the case where a grid of local volatilities is given and sensitivities of an option price with respect to all grid points are sought. The second example deals with the case of a basket option. Here sensitivities of the option with respect to all of the underlying assets are desired. The last example covers the case where sensitivities of a caplet with respect to all initial LIBOR rates, under the assumption of a LIBOR Market Model, are sought.

    It is shown that both forward and adjoint mode produce results that align with the ones determined using a finite difference approach. Also, it is shown that using the adjoint method in all these three cases, large savings in computational cost can be made compared to using forward mode or finite differences.
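
    Forward-mode algorithmic differentiation can be demonstrated with a tiny dual-number class that propagates exact derivatives alongside values (adjoint mode instead propagates sensitivities backwards from the output); the toy payoff below is illustrative only, not one of the thesis's three applications.

```python
import math

class Dual:
    """Forward-mode AD: carry a value and one directional derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)  # product rule
    __rmul__ = __mul__

# Sensitivity of a toy discounted payoff f(S) = S * exp(-r*T) w.r.t. S.
S = Dual(100.0, 1.0)            # seed the input we differentiate against
price = S * math.exp(-0.05)     # r*T = 0.05
print("value:", round(price.val, 4), "delta:", round(price.dot, 4))
```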

  • 230. Larsson, Sara
    et al.
    Rydén, Tobias
    Lund University.
    Holst, Ulla
    Oredsson, Stina
    Johansson, Maria
    Estimating the distribution of the G2 phase duration from flow cytometric histograms, 2008. In: Mathematical Biosciences, ISSN 0025-5564, E-ISSN 1879-3134, Vol. 211, no 1, p. 1-17. Article in journal (Refereed).
    Abstract [en]

    A mathematical model, based on branching processes, is proposed to interpret BrdUrd DNA FCM-derived data. Our main interest is in determining the distribution of the G2 phase duration. Two different model classes involving different assumptions on the distribution of the G2 phase duration are considered. Different assumptions of the G2 phase duration result in very similar distributions of the S phase duration, and the estimated means and standard deviations of the G2 phase duration are all in the same range.

  • 231. Larsson, Sara
    et al.
    Rydén, Tobias
    Lund University.
    Holst, Ulla
    Oredsson, Stina
    Johansson, Maria
    Estimating the Total Rate of DNA Replication Using Branching Processes, 2008. In: Bulletin of Mathematical Biology, ISSN 0092-8240, E-ISSN 1522-9602, Vol. 70, no 8, p. 2177-2194. Article in journal (Refereed).
    Abstract [en]

    Increasing the knowledge of various cell cycle kinetic parameters, such as the length of the cell cycle and its different phases, is of considerable importance for several purposes including tumor diagnostics and treatment in clinical health care and a deepened understanding of tumor growth mechanisms. Of particular interest as a prognostic factor in different cancer forms is the S phase, during which DNA is replicated. In the present paper, we estimate the DNA replication rate and the S phase length from bromodeoxyuridine-DNA flow cytometry data. The mathematical analysis is based on a branching process model, paired with an assumed gamma distribution for the S phase duration, with which the DNA distribution of S phase cells can be expressed in terms of the DNA replication rate. Flow cytometry data typically contains rather large measurement variations, however, and we employ nonparametric deconvolution to estimate the underlying DNA distribution of S phase cells; an estimate of the DNA replication rate is then provided by this distribution and the mathematical model.

  • 232. Larsson, Sara
    et al.
    Rydén, Tobias
    Lund University.
    Holst, Ulla
    Oredsson, Stina
    Johansson, Maria
    Estimating the variation in S phase duration from flow cytometric histograms, 2008. In: Mathematical Biosciences, ISSN 0025-5564, E-ISSN 1879-3134, Vol. 213, no 1, p. 40-49. Article in journal (Refereed).
    Abstract [en]

    A stochastic model for interpreting BrdUrd DNA FCM-derived data is proposed. The model is based on branching processes and describes the progression of the DNA distribution of BrdUrd-labelled cells through the cell cycle. With the main focus on estimating the S phase duration and its variation, the DNA replication rate is modelled by a piecewise linear function, while assuming a gamma distribution for the S phase duration. Estimation of model parameters was carried out using maximum likelihood for data from two different cell lines. The results provided quite a good fit to the data, suggesting that stochastic models may be a valuable tool for analysing this kind of data.

  • 233.
    Lauri, Linus
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Algorithmic evaluation of Parameter Estimation for Hidden Markov Models in Finance, 2014. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Modeling financial time series is of great importance for being successful within the financial market. Hidden Markov Models are a great way to include the regime-shifting nature of financial data. This thesis will focus on getting an in-depth knowledge of Hidden Markov Models in general, and specifically the parameter estimation of the models. The objective is to evaluate if and how financial data can be fitted nicely with the model. The subject was requested by Nordea Markets with the purpose of gaining knowledge of HMMs for an eventual implementation of the theory by their index development group. The research chiefly consists of evaluating the algorithmic behavior of estimating model parameters. HMMs proved to be a good approach to modeling financial data, since much of the time series had properties that supported a regime-shifting approach. The most important factor for an effective algorithm is the number of states, easily explained as the distinguishable clusters of values. The suggested algorithm for continuously modeling financial data is to do an extensive monthly calculation of starting parameters that are used daily in a less time-consuming usage of the EM algorithm.
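
    A minimal sketch of the estimation step using the hmmlearn package (an assumed dependency, not named in the thesis): fit a two-state Gaussian HMM to toy regime-switching returns with the EM (Baum-Welch) algorithm.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed dependency

rng = np.random.default_rng(9)
# Toy regime-switching returns: calm and volatile segments.
calm = rng.normal(0.0005, 0.005, (400, 1))
wild = rng.normal(-0.001, 0.02, (200, 1))
returns = np.vstack([calm, wild, calm])

# EM (Baum-Welch) estimation; the number of states is the key choice.
hmm = GaussianHMM(n_components=2, covariance_type="diag", n_iter=200,
                  random_state=0).fit(returns)
print("log-likelihood:", hmm.score(returns))
print("state means:", hmm.means_.ravel())
print("transition matrix:\n", hmm.transmat_.round(3))
```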

  • 234.
    Leijonmarck, Eric
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Exploiting Temporal Difference for Energy Disaggregation via Discriminative Sparse Coding, 2015. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis analyzes hour-based energy disaggregation using sparse coding by exploiting temporal differences. Energy disaggregation is the task of taking a whole-home energy signal and separating it into its component appliances. Studies have shown that having device-level energy information can cause users to conserve significant amounts of energy, but current electricity meters only report whole-home data. Thus, developing algorithmic methods for disaggregation presents a key technical challenge in the effort to maximize energy conservation. In energy disaggregation, sometimes called Non-Intrusive Load Monitoring (NILM), most approaches are based on high-frequency monitoring of appliances, while households only measure their consumption via smart meters, which only account for one-hour measurements. This thesis aims at implementing key algorithms from the paper "Energy Disaggregation via Discriminative Sparse Coding" by J. Zico Kolter, Siddharth Batra and Andrew Ng, and tries to replicate the results by exploiting temporal differences that occur when dealing with time series data. The implementation was successful, but the results were inconclusive when dealing with large datasets, as the algorithm was too computationally heavy for the resources available. The work was performed at the Swedish company Greenely, which develops visualizations based on gamification for energy bills via a mobile application.
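
    The non-discriminative core of the method (learning a sparse dictionary of load profiles and sparse activations) can be sketched with scikit-learn; the discriminative training step of Kolter et al. is omitted, and all data below are invented.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(10)
# Toy hourly load curves: two "appliances" mixed into 24-dim daily profiles.
morning = np.zeros(24); morning[7:9] = 1.0
evening = np.zeros(24); evening[18:22] = 1.0
days = np.array([a * morning + b * evening + rng.normal(0, 0.05, 24)
                 for a, b in rng.uniform(0, 2, (300, 2))])

# Learn a sparse dictionary of load-profile atoms and sparse activations.
dico = MiniBatchDictionaryLearning(n_components=4, alpha=0.5, random_state=0)
codes = dico.fit_transform(days)
print("mean nonzero activations per day:", (codes != 0).sum(axis=1).mean())
```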

  • 235. Li, Bo
    et al.
    Wu, Junfeng
    Qi, Hongsheng
    Proutiere, Alexandre
    KTH, School of Electrical Engineering and Computer Science (EECS), Automatic Control.
    Shi, Guodong
    Boolean Gossip Networks, 2018. In: IEEE/ACM Transactions on Networking, ISSN 1063-6692, E-ISSN 1558-2566, Vol. 26, no 1, p. 118-130. Article in journal (Refereed).
    Abstract [en]

    This paper proposes and investigates a Boolean gossip model as a simplified but non-trivial probabilistic Boolean network. With positive node interactions, in view of standard theories from Markov chains, we prove that the node states asymptotically converge to an agreement at a binary random variable, whose distribution is characterized for large-scale networks by mean-field approximation. Using combinatorial analysis, we also successfully count the number of communication classes of the positive Boolean network explicitly in terms of the topology of the underlying interaction graph, where remarkably minor variation in local structures can drastically change the number of network communication classes. With general Boolean interaction rules, emergence of absorbing network Boolean dynamics is shown to be determined by the network structure with necessary and sufficient conditions established regarding when the Boolean gossip process defines absorbing Markov chains. Particularly, it is shown that for the majority of the Boolean interaction rules, except for nine out of the total 2^16 - 1 possible nonempty sets of binary Boolean functions, whether the induced chain is absorbing has nothing to do with the topology of the underlying interaction graph, as long as connectivity is assumed. These results illustrate the possibilities of relating dynamical properties of Boolean networks to graphical properties of the underlying interactions.

  • 236.
    Liang, Zhimo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Forecasting Shanghai Composite Index using hidden Markov model, 2017. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The aim of this thesis is to forecast the future trend of the Shanghai Composite Index and other securities. Our approach applies hidden Markov models to market-transaction data indirectly. Previous work has not considered the dependence between training samples, which may result in inference bias, so we select samples that are not significantly dependent and suppose those samples are independent of each other. Rather than forecasting the future trend by estimating the hidden state one day before the trend, we measure the probabilities of the trend directions by calculating the gaps between the likelihoods of two hidden Markov models over a period of time before the trends. As we have altered the target function of the optimization in the parameter-estimation process, the accuracy of our model is improved. Furthermore, the experimental results reveal that it is lucrative to select securities for portfolios by our method.

  • 237.
    Lidholm, Erik
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Industrial Economics and Management (Dept.), Entrepreneurship and innovation.
    Nudel, Benjamin
    KTH, School of Industrial Engineering and Management (ITM), Industrial Economics and Management (Dept.), Entrepreneurship and innovation.
    Implications of Multiple Curve Construction in the Swedish Swap Market, 2014. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The global financial crisis of 2007 caused abrupt changes in the financial markets. Interest rates that were known to follow each other diverged. Furthermore, both regulation and an increased awareness of counterparty credit risks have fuelled a growth of collateralised contracts. As a consequence, pre-crisis swap pricing methods are no longer valid. In light of this, the purpose of this thesis is to apply a framework to the Swedish swap market that is able to consistently price interest rate and cross currency swaps in the presence of non-negligible cross currency basis spreads, and to investigate the pricing differences arising from the use and type of collateral. Through the implementation of a framework proposed by Fujii, Shimada and Takahashi (2010b), it is shown that the usage of collateral has a noticeable impact on the pricing. Ten year forward starting swaps are found to be priced at lower rates under collateral. Moreover, the results from pricing off-market swaps show that disregarding the impact of collateral would cause one to consistently underestimate the change in value of a contract, whether in or out of the money. The choice of collateral currency is also shown to matter, as pricing under SEK and USD as the collateral currencies yielded different results, in terms of constructed curves as well as in the pricing of spot starting, forward starting and off-market swaps. Based on the results from the pricing of off-market swaps, two scenarios are outlined that exemplify the importance of correct pricing methods when terminating and novating swaps. It is concluded that a market participant who fails to recognise the pricing implications from the usage and type of collateral could incur substantial losses.

  • 238.
    Ligai, Wolmir
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Statistisk analys av tågförseningar i Sverige (Statistical analysis of train delays in Sweden), 2017. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    In this work, data on train travel was analyzed to determine whether any factors could affect the delay time and the risk of delay. Data was provided by Trafikverket for all railway journeys in Sweden (n = 827087) for the year 2013, excluding metro and tram journeys. The models used were a multiple linear regression and two logistic regressions; the results of the latter were examined.

    The dependent variables in the models that were examined were which trains had delays to the final destination, and which trains had delays at all. Variables that showed correlation (p<0.01) with delayed trains were non-holidays, planned travel time, and type of train. Number of days to summer solstice, a variable for approximating the weather, showed a weak correlation with train delays. Reasons for delay during the travel were also positively correlated with delays to final destination, and among these the greatest correlation belonged to “Accidents/danger and external factors” and “Infrastructure reasons”.

    The results suggest that train routes and route traffic capacity are factors that strongly affect delays.

  • 239.
    Limin, Emelie
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Analysis of purchase behaviors of IKEA family card users, using Generalized linear model, 2011. Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    At the end of 2009 IKEA had 267 stores in 25 countries with total sales of 21.5 billion euros, and is one of the most successful home furnishing companies in the world. IKEA has its roots in Småland, a region in Sweden with a history of poverty. This has come to characterize IKEA's business concept, with quality furniture at low prices. With this business concept in mind, IKEA strives to save money and avoid wasting resources; this also goes in line with IKEA's efforts in saving the environment. Market adaptation is a key to succeeding with this concept. A better understanding of what the customer wants reduces the risk of producing too much and therefore decreases the amount of waste. This thesis studies customers' purchase habits when it comes to chairs and tables. IKEA collects information about its customers through the IKEA family card, which stores all purchase information of the customer. With access to that database we are able to compare the purchase habits on different markets. In this thesis we are interested in knowing which chairs the customer has bought on the same receipt as a certain table. With this information we can compare actual figures with IKEA's beliefs and build models of the purchase patterns.

  • 240.
    Lindblad, Kalle
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    How big is large?: A study of the limit for large insurance claims in case reserves, 2011. Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    A company issuing an insurance policy will provide, in return for a monetary premium, acceptance of the liability to make certain payments to the insured person or company if some event specified beforehand occurs. There will always be a delay between the occurrence of this event and the actual payment from the insurance company. It is therefore necessary for the company to put aside money for this liability. This money is called the reserve. When a claim is reported, a claim handler will estimate how much the company will have to pay to the claimant. This amount is booked as a liability. This type of reserve is called a "case reserve". When making the estimate, the claim handler has the option of giving the claim a standard reserve or a manual reserve. A standard reserve is a statistically calculated amount based on historical claim costs and is more often used for small claims. A manual reserve is decided subjectively by the claim handler and is more often used for large claims. This thesis proposes a theory to model and calculate an optimal limit above which a claim should be considered large. The method is also applied to several different types of claims.
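    The abstract does not specify the thesis's optimality criterion, but a common starting point for choosing a large-claim limit is extreme value theory: above a suitable threshold, the empirical mean excess function is roughly linear in the threshold under a generalized Pareto tail. A minimal, hypothetical sketch in Python:

        import numpy as np

        def mean_excess(claims, thresholds):
            """Empirical mean excess e(u) = E[X - u | X > u]."""
            claims = np.asarray(claims)
            return np.array([claims[claims > u].mean() - u for u in thresholds])

        rng = np.random.default_rng(2)
        claims = rng.pareto(2.5, 10_000) * 10_000    # synthetic heavy-tailed claim costs
        us = np.quantile(claims, np.linspace(0.50, 0.99, 50))
        e = mean_excess(claims, us)
        # A threshold above which e(u) grows roughly linearly in u is a
        # candidate "large claim" limit under a generalized Pareto tail.
        print(np.column_stack([us, e])[:5])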

  • 241.
    Lindskog, Filip
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hult, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hammarlid, Ola
    Rehn, Carl-Johan
    Risk and portfolio analysis: principles and methods. 2012. Book (Refereed).
  • 242.
    Linusson, Svante
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Potka, Samu
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Sulzgruber, Robin
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    On random shifted standard Young tableaux and 132-avoiding sorting networks. Manuscript (preprint) (Other academic).
    Abstract [en]

    We study shifted standard Young tableaux (SYT). The limiting surface of uniformly random shifted SYT of staircase shape is determined, with the integers in the SYT as heights. Via properties of the Edelman-Greene bijection, this implies results about random 132-avoiding sorting networks, including limit shapes for trajectories and intermediate permutations. Moreover, the expected number of adjacencies in SYT is considered. It is shown that on average each row and each column of a shifted SYT of staircase shape contains precisely one adjacency.
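    The adjacency claim can be checked by brute force on small cases. Here is a Python sketch that enumerates all shifted SYT of staircase shape by backtracking, taking an adjacency to be a pair of consecutive entries in horizontally or vertically neighbouring cells (our reading of the abstract):

        def cells_of(n):
            # Shifted staircase shape (n, n-1, ..., 1): row i occupies columns i..n-1
            return [(i, j) for i in range(n) for j in range(i, n)]

        def shifted_syt(n):
            cells, grid, out = cells_of(n), {}, []
            def place(k):
                if k > len(cells):
                    out.append(dict(grid))
                    return
                for (i, j) in cells:
                    if (i, j) in grid:
                        continue
                    if j > i and (i, j - 1) not in grid:
                        continue                     # left neighbour must come first
                    if i > 0 and (i - 1, j) not in grid:
                        continue                     # upper neighbour must come first
                    grid[(i, j)] = k
                    place(k + 1)
                    del grid[(i, j)]
            place(1)
            return out

        def adjacencies(t):
            # Consecutive entries in horizontally or vertically adjacent cells
            return sum(1 for (i, j), v in t.items()
                       if t.get((i, j + 1)) == v + 1 or t.get((i + 1, j)) == v + 1)

        for n in (2, 3, 4):
            ts = shifted_syt(n)
            print(n, len(ts), sum(map(adjacencies, ts)) / len(ts))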

  • 243.
    Liu, Du
    et al.
    KTH, School of Electrical Engineering (EES), Communication Theory.
    Flierl, Markus
    KTH, School of Electrical Engineering (EES), Communication Theory.
    Energy Compaction on Graphs for Motion-Adaptive Transforms. 2015. In: Data Compression Conference Proceedings, 2015, p. 457-. Conference paper (Refereed).
    Abstract [en]

    It is well known that the Karhunen-Loeve Transform (KLT) diagonalizes the covariance matrix and gives the optimal energy compaction. Since the real covariance matrix may not be obtained in video compression, we consider a covariance model that can be constructed without extra cost. In this work, a covariance model based on a graph is considered for temporal transforms of videos. The relation between the covariance matrix and the Laplacian is studied. We obtain an explicit expression of the relation for tree graphs, where the trees are defined by motion information. The proposed graph-based covariance is a good model for motion-compensated image sequences. In terms of energy compaction, our graph-based covariance model has the potential to outperform the classical Laplacian-based signal analysis.
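    To illustrate the generic computation (not the paper's specific motion-derived tree graphs), here is a short Python sketch that builds a path-graph Laplacian, forms a covariance model from its pseudo-inverse, and reads off energy compaction from the eigenvalue spectrum of the resulting KLT:

        import numpy as np

        # Path graph on N nodes as a stand-in temporal structure (hypothetical;
        # the paper derives its graphs from motion information)
        N = 8
        L = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
        L[0, 0] = L[-1, -1] = 1                      # Laplacian of a path graph

        # A graph-based covariance model: pseudo-inverse of the Laplacian
        Sigma = np.linalg.pinv(L)

        # KLT basis = eigenvectors of the covariance; energy compaction is read
        # off the sorted eigenvalue spectrum
        evals, evecs = np.linalg.eigh(Sigma)
        energy = np.sort(evals)[::-1]
        print(np.cumsum(energy) / energy.sum())      # energy in the top-k coefficients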

  • 244.
    Ljung, Carl
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Copula selection and parameter estimation in market risk models. 2017. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this thesis, the literature is reviewed for theory regarding elliptical copulas (Gaussian, Student's t, and Grouped t) and methods for calibrating parametric copulas to sets of observations. Theory regarding model diagnostics is also summarised. Historical data on equity indices and government bond rates from several geographical regions, along with U.S. corporate bond indices, are used as proxies for the most significant stochastic variables in the investment portfolio of If P&C. These historical observations are transformed into pseudo-uniform observations, pseudo-observations, using parametric and non-parametric univariate models. The parametric models are fitted using both maximum likelihood and least squares of the quantile function. Elliptical copulas are then calibrated to the pseudo-observations using the well-known methods Inference Function for Margins (IFM) and Semi-Parametric (SP), as well as compositions of these methods and a non-parametric estimator of Kendall's tau. The goodness-of-fit of the calibrated multivariate models is assessed in terms of general dependence, tail dependence and mean squared error, as well as by using universal measures such as the Akaike and Bayesian Information Criteria, AIC and BIC. The mean squared error is computed using both the empirical joint distribution and the empirical Kendall distribution function. General dependence is measured using the scale-invariant measures Kendall's tau, Spearman's rho, and Blomqvist's beta, while tail dependence is assessed using Krupskii's tail-weighted measures of dependence (see [16]). Monte Carlo simulation is used to estimate these measures for copulas where analytical calculation is not feasible. Gaussian copulas scored lower than Student's t and Grouped t copulas in every test conducted, although not all tests produced conclusive results. Further, the obtained values of the tail-weighted measures of dependence imply a systematically lower tail dependence of Gaussian copulas compared to historical observations.
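    As a small illustration of one calibration route mentioned above, Kendall's tau inversion for elliptical copulas uses the relation tau = (2/pi) arcsin(rho), so rho = sin(pi*tau/2). A sketch in Python on synthetic pseudo-observations:

        import numpy as np
        from scipy.stats import kendalltau, norm

        rng = np.random.default_rng(3)
        # Synthetic pseudo-observations from a Gaussian copula with rho = 0.6
        rho = 0.6
        z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=5000)
        u = norm.cdf(z)                              # pseudo-uniform margins

        tau, _ = kendalltau(u[:, 0], u[:, 1])
        rho_hat = np.sin(np.pi * tau / 2)            # tau inversion for elliptical copulas
        print(round(tau, 3), round(rho_hat, 3))      # rho_hat should be close to 0.6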

  • 245.
    Lorentz, Pär
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A Modified Sharpe Ratio Based Portfolio Optimization. 2012. Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The performance of an optimal-weighted portfolio strategy is evaluated, with transaction costs penalised, and compared to an equal-weighted portfolio strategy. The optimal allocation weights are found by maximising a modified Sharpe ratio measure each trading day, where 'modified' refers to the definition of an asset's expected return used in this context. The leverage of the investment is determined by a conditional-expectation estimate of the number of portfolio assets on the following day. A moving window is used to measure, historically, the transition probabilities of moving from one state to another within this stochastic count process, and this is used as an input to the estimator. It is found that the most accurate estimate is the current trading day's number of portfolio assets, obtained when the size of the moving window is one. Increasing the penalty parameter on the transaction costs of selling and buying assets between trading days lowers the aggregated transaction cost and increases the performance of the optimal-weighted portfolio considerably. The best portfolio performance is obtained when at least 50% of the capital is invested equally among the assets when maximising the modified Sharpe ratio. The optimal-weighted and equal-weighted portfolios are constructed on a daily basis, where the allowed VaR0.05 is €300 000 for each portfolio. This sets the limit on the amount of capital allowed to be invested each trading day, and is determined by empirical VaR0.05 simulations of these two portfolios.
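    A minimal sketch of the daily optimisation step in Python, with hypothetical inputs; the thesis's specific 'modified' expected-return definition, count-process leverage estimate and VaR-based sizing are not reproduced here.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(4)
        n_assets = 5
        mu = rng.normal(0.0005, 0.0002, n_assets)    # hypothetical expected daily returns
        A = rng.normal(size=(100, n_assets))
        Sigma = A.T @ A / 100 * 1e-4                 # hypothetical return covariance
        w_prev = np.full(n_assets, 1 / n_assets)     # previous day's weights
        lam = 0.01                                   # transaction-cost penalty

        def neg_objective(w):
            sharpe = (w @ mu) / np.sqrt(w @ Sigma @ w)
            return -(sharpe - lam * np.abs(w - w_prev).sum())

        cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1},)
        res = minimize(neg_objective, w_prev, bounds=[(0, 1)] * n_assets,
                       constraints=cons)
        print(res.x)

    The L1 turnover penalty is non-smooth, so a gradient-based solver is only adequate for a sketch; the parameter lam plays the role of the transaction-cost penalty discussed above.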

  • 246.
    Lorenzo Varela, Juan Manuel
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Transport Science.
    Börjesson, Maria
    KTH.
    Daly, Andrew
    Measuring errors by latent variables in transport models. 2017. Conference paper (Refereed).
  • 247.
    Lorenzo Varela, Juan Manuel
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, System Analysis and Economics.
    Parameter bias in misspecified Hybrid Choice Models: An empirical study. 2018. In: Transportation Research Procedia, Elsevier B.V., 2018, p. 99-106. Conference paper (Refereed).
    Abstract [en]

    Model misspecification is likely to occur when working with real datasets. However, previous studies showing the advantages of hybrid choice models have mostly used models where the structural and measurement equations match the functions employed in the data generating process, especially when parameter biases were discussed. The aim of this study is to investigate the extent of parameter bias in misspecified hybrid choice models, and to assess whether different modelling assumptions impact the parameter estimates of the choice model. For this task, a mode choice model is estimated on synthetic data, with efforts focused on mimicking the conditions present in real datasets, where the postulated structural and measurement equations are less flexible than the functions used to generate the data. Results show that hybrid choice models, even if misspecified, manage to recover better parameter estimates than a multinomial logit. However, hybrid choice models are not unbeatable: results also indicate that misspecified hybrid choice models might still yield biased parameter estimates. Moreover, results suggest that hybrid choice models successfully isolate the source of model bias, preventing its propagation to other parameter estimates. Finally, results show that the parameter estimates of hybrid choice models are sensitive to modelling assumptions, and that the parameter estimates of the utility function are robust provided that the errors are modelled.
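    For reference, the multinomial logit baseline that the hybrid models are compared against can be written down in a few lines. A Python sketch that generates synthetic mode choices and recovers the utility parameters by maximum likelihood (the attribute names are hypothetical):

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(5)
        n, J = 2000, 3
        # Synthetic level-of-service attributes for J modes
        time = rng.uniform(10, 60, (n, J))
        cost = rng.uniform(1, 10, (n, J))
        beta_true = np.array([-0.05, -0.2])
        V = beta_true[0] * time + beta_true[1] * cost
        p = np.exp(V) / np.exp(V).sum(axis=1, keepdims=True)
        choice = np.array([rng.choice(J, p=pi) for pi in p])

        def negll(beta):
            V = beta[0] * time + beta[1] * cost
            V = V - V.max(axis=1, keepdims=True)     # numerical stability
            P = np.exp(V) / np.exp(V).sum(axis=1, keepdims=True)
            return -np.log(P[np.arange(n), choice]).sum()

        print(minimize(negll, np.zeros(2)).x)        # should be near beta_true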

  • 248.
    Loso, Jesper
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Forecasting of Self-Rated Health Using Hidden Markov Algorithm. 2014. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this thesis, a model was developed for predicting a person's monthly average self-rated health (SRH) for the following month. It was based on statistics from a form constructed by HealthWatch. The model is a Hidden Markov Algorithm based on hidden Markov models, where the hidden part is the future value of self-rated health. The emissions were based on five of the eleven questions that make up the HealthWatch form. The questions are answered on a scale from zero to one hundred. The model predicts in which of three SRH intervals the respondent's answers will most likely fall, on average, during the following month. The final model has an accuracy of 80%.
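    A minimal sketch of the prediction step in Python: a discrete hidden Markov model's forward recursion followed by a one-step-ahead state forecast. The three states stand for the three SRH intervals; all parameter values are hypothetical placeholders, not the thesis's estimates.

        import numpy as np

        # Hypothetical parameters; the thesis estimates these from HealthWatch data
        P = np.array([[0.7, 0.2, 0.1],               # transitions between SRH intervals
                      [0.2, 0.6, 0.2],
                      [0.1, 0.3, 0.6]])
        B = np.array([[0.6, 0.3, 0.1],               # P(observed answer band | state)
                      [0.2, 0.6, 0.2],
                      [0.1, 0.3, 0.6]])
        pi0 = np.array([1.0, 1.0, 1.0]) / 3

        obs = [0, 1, 1, 2]                           # observed monthly answer bands
        alpha = pi0 * B[:, obs[0]]                   # forward recursion
        for o in obs[1:]:
            alpha = (alpha @ P) * B[:, o]
        alpha = alpha / alpha.sum()

        pred = alpha @ P                             # next month's SRH-interval distribution
        print(pred.argmax(), pred)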

  • 249.
    Lundemo, Anna
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Detecting change points in remote sensing time series. 2017. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    We analyse methods for detecting change points in optical remote sensing time series of lake drainage. Change points are points in a data set where the statistical properties of the data change. The data we examine represent drained lakes in the Arctic. The data are generally noisy, with observations missing due to difficult weather conditions. We evaluate a partitioning algorithm, with five different approaches to modelling the data, based on least-squares regression and an assumption of normally distributed measurement errors. We also evaluate two computer programs called DBEST and TIMESAT and a MATLAB function called findchangepts(). We find that TIMESAT, DBEST and the MATLAB function are not suitable for our purposes. We also find that the partitioning algorithm that models the data as normally distributed around a piecewise constant function is best suited for finding change points in our data.
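    The preferred approach, normally distributed data around a piecewise constant mean, can be sketched as penalised least-squares partitioning by dynamic programming. A self-contained Python sketch (the thesis's penalty choice and handling of missing observations are not reproduced):

        import numpy as np

        def optimal_partition(y, penalty):
            """Penalised least-squares change points for a piecewise constant
            mean, found by dynamic programming (optimal partitioning)."""
            n = len(y)
            s1 = np.concatenate([[0.0], np.cumsum(y)])
            s2 = np.concatenate([[0.0], np.cumsum(y ** 2)])
            def sse(a, b):                           # squared error of y[a:b] about its mean
                return s2[b] - s2[a] - (s1[b] - s1[a]) ** 2 / (b - a)
            F = np.full(n + 1, np.inf)
            F[0] = -penalty
            last = np.zeros(n + 1, dtype=int)
            for b in range(1, n + 1):
                cands = [F[a] + sse(a, b) + penalty for a in range(b)]
                a_best = int(np.argmin(cands))
                F[b], last[b] = cands[a_best], a_best
            cps, b = [], n
            while b > 0:
                b = last[b]
                if b > 0:
                    cps.append(b)
            return sorted(cps)

        rng = np.random.default_rng(6)
        y = np.concatenate([rng.normal(0, 1, 60), rng.normal(3, 1, 60)])
        print(optimal_partition(y, penalty=10.0))    # expect a change point near 60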

  • 250.
    Lundström, Ina
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Finding Risk Factors for Long-Term Sickness Absence Using Classification Trees. 2013. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this thesis, a model is developed for predicting whether someone has an elevated risk of long-term sickness absence during the forthcoming year. The model is a classification tree that classifies individuals as having a high or low risk of long-term sickness absence based on their answers to the HealthWatch form. The HealthWatch form is a questionnaire about health consisting of eleven questions, such as "How do you feel right now?", "How did you sleep last night?" and "How is your job satisfaction right now?". As a measure of the risk of long-term sickness absence, the Oldenburg Burnout Inventory and a scale for performance-based self-esteem are used. Separate models are built for men and for women. The model for women performs well enough on a test set to be acceptable as a general model and can be used for prediction. Some conclusions can also be drawn from the additional information given by the classification tree: workload and work atmosphere do not seem to contribute much to an increased risk of long-term sickness absence, while job satisfaction seems to be one of the most important factors. The model for men performs poorly on a test set, and it is therefore not advisable to use it for prediction or to draw other conclusions from it.
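    As an illustration of the model class, here is a minimal classification-tree sketch in Python on synthetic, HealthWatch-like answers; the column roles and the risk-label construction are hypothetical, not the thesis's data.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(7)
        n = 1000
        # Hypothetical HealthWatch-style answers: eleven questions on a 0-100 scale
        X = rng.uniform(0, 100, (n, 11))
        # Synthetic risk label driven mainly by a "job satisfaction" column (index 2)
        y = (X[:, 2] + rng.normal(0, 20, n) < 40).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
        print(tree.score(X_te, y_te))                # accuracy on the held-out set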
