  • 301.
    Ringh, Emil
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Low complexity algorithms for faster-than-Nyquist signaling: Using coding to avoid an NP-hard problem (2013). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis is an investigation of what happens when communication links are pushed towards their limits and the data-bearing pulses are packed tighter in time than previously done. This is called faster-than-Nyquist (FTN) signaling and it violates the Nyquist inter-symbol interference criterion, implying that the data pulses are no longer orthogonal and thus that the samples at the receiver depend on more than one of the transmitted symbols. Inter-symbol interference (ISI) has occurred, and its consequences are studied for the AWGN channel model. It is shown that in order to do maximum likelihood estimation on these samples the receiver faces an NP-hard problem. The standard algorithm for making good estimates in the ISI case is the Viterbi algorithm, but applied to a block of N bits with interference among K bits the complexity is O(N·2^K), which limits its practical applicability. Here, a precoding scheme is proposed, together with a decoding, that reduces the estimation complexity. By applying the proposed precoding/decoding to a data block of length N, the estimation can be done in O(N^2) operations, preceded by a single off-line O(N^3) calculation. The precoding itself is also done in O(N^2) operations, with a single off-line operation of O(N^3) complexity.

    The strength of the precoding is shown in simulations. First it was tested together with turbo codes of code rate 2/3 and block length of 6000 bits. When sending 25% more data (FTN), the non-precoded case needed about 2.5 dB higher signal-to-noise ratio (SNR) to achieve the same error rate as the precoded case. When the precoded case performed without any block errors, the non-precoded case still had a block error rate of almost 1.

    We also studied the scenario of transmission with low latency and high reliability. Here, 600 bits were transmitted with a code rate of 2/3, and hence the target was to communicate 400 bits of data. Applying FTN with double packing, that is, transmitting 1200 bits during the same amount of time, made it possible to lower the code rate to 1/3 since only 400 bits of data were to be communicated. This technique greatly improves the robustness. When the FTN case performed error free, the classical Nyquist case still had a block error rate of 0.19. To reach error-free performance the Nyquist case needed 1.25 dB higher SNR compared to the precoded FTN case with lower code rate.
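
    The complexity pattern described above can be sketched in a few lines: one off-line O(N^3) factorization of the ISI Gram matrix, after which each block is estimated in O(N^2) by triangular solves. A minimal numpy sketch, assuming a made-up exponentially decaying ISI autocorrelation and plain zero-forcing rather than the thesis's actual precoder/decoder:

    ```python
    import numpy as np
    from scipy.linalg import toeplitz, cho_factor, cho_solve

    N = 64                                # block length
    g = 0.6 ** np.arange(N)               # hypothetical ISI autocorrelation taps
    G = toeplitz(g) + 1e-6 * np.eye(N)    # Gram matrix of the FTN pulses (SPD)

    factor = cho_factor(G)                # off-line, O(N^3), done once

    def zf_estimate(y):
        """Per-block zero-forcing estimate: two triangular solves, O(N^2)."""
        return cho_solve(factor, y)

    rng = np.random.default_rng(0)
    bits = rng.choice([-1.0, 1.0], size=N)
    y = G @ bits + 0.1 * rng.standard_normal(N)   # matched-filter samples with ISI
    print(np.mean(np.sign(zf_estimate(y)) == bits))
    ```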

  • 302.
    Rios, Felix Leopold
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Personalized health care: Switching to a subpopulation in Phase III (2012). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Since different patients may have different causes of getting a disease, treating every patient having a certain disease in the same manner is not always the best way to go. A treatment having an effect in one type of patient may not have the same effect in a different type of patient. This makes it possible to partition a patient population into subpopulations in which a drug has distinct expected responses. In this thesis the patient population is partitioned into two subpopulations, where we have prior knowledge that one of them has a higher expected response to a drug than the other. Based on responses to the drug in Phase II, it is analyzed in which of the populations Phase III should continue. The results show that the decision is highly dependent on the utility function on which the analysis is based. One interesting case is when the vast majority of the patient population belongs to the subpopulation with the higher expected response and the utility function takes into account the prevalence of the populations. In that case the simulations show that when the difference in expected response between the subpopulations is large, it is a safer choice to continue Phase III in the subpopulation having the higher expected response than in the full population, even though the expected utility will be lower. This is an expected result, which indicates that the approach used to model the situation studied in this report is reasonable.

  • 303.
    Rios, Felix Leopoldo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Bayesian inference in probabilistic graphical models (2017). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    This thesis consists of four papers studying structure learning and Bayesian inference in probabilistic graphical models for both undirected and directed acyclic graphs (DAGs).

    Paper A presents a novel algorithm, called the Christmas tree algorithm (CTA), that incrementally constructs junction trees for decomposable graphs by adding one node at a time to the underlying graph. We prove that the CTA, with positive probability, is able to generate all junction trees on any given number of underlying nodes. Importantly for practical applications, we show that the transition probability of the CTA kernel has a computationally tractable expression. Applications of the CTA transition kernel are demonstrated in a sequential Monte Carlo (SMC) setting for counting the number of decomposable graphs.

    Paper B presents the SMC scheme in a more general setting, specifically designed for approximating distributions over decomposable graphs. The CTA transition kernel from Paper A is incorporated as the proposal kernel. To improve on the traditional SMC algorithm, a particle Gibbs sampler with a systematic refreshment step is further proposed. A simulation study of approximate graph posterior inference within both log-linear and decomposable Gaussian graphical models shows the efficiency of the suggested methodology in both cases.

    Paper C explores the particle Gibbs sampling scheme of Paper B for approximate posterior computations in the Bayesian predictive classification framework. Specifically, Bayesian model averaging (BMA) based on the posterior exploration of the class-specific model is incorporated into the predictive classifier to take full account of the model uncertainty. For each class, the dependence structure underlying the observed features is represented by a distribution over the space of decomposable graphs. Since an explicit expression is intractable, averaging is performed over the approximated graph posterior. The proposed BMA classifier reveals superior performance compared to the ordinary Bayesian predictive classifier that does not account for the model uncertainty, as well as to a number of out-of-the-box classifiers.

    Paper D develops a novel prior distribution over DAGs with the ability to express prior knowledge in terms of graph layerings. In conjunction with the prior, a stochastic optimization algorithm based on the layering property of DAGs is developed for performing structure learning in Bayesian networks. A simulation study shows that the algorithm along with the prior has superior performance compared with existing priors when used for learning graphs with a clearly layered structure.
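
    As a toy illustration of Paper A's theme, decomposable graphs can be grown one node at a time by attaching each new node to a subset of an existing maximal clique, a standard operation that preserves chordality. The sketch below is not the CTA itself; the uniform clique choice and sub-clique size are arbitrary assumptions:

    ```python
    import random
    import networkx as nx

    def grow_decomposable(n_nodes, seed=0):
        """Grow a decomposable (chordal) graph one node at a time by
        connecting each new node to a subset of an existing maximal clique."""
        rng = random.Random(seed)
        G = nx.Graph()
        G.add_node(0)
        for v in range(1, n_nodes):
            clique = rng.choice(list(nx.find_cliques(G)))   # pick a maximal clique
            k = rng.randint(0, len(clique))                 # attach to a sub-clique
            G.add_node(v)
            G.add_edges_from((v, u) for u in rng.sample(clique, k))
        return G

    G = grow_decomposable(12)
    print(nx.is_chordal(G))   # True: decomposability is preserved at every step
    ```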

  • 304.
    Rios, Felix Leopoldo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Bayesian structure learning in graphical models (2016). Licentiate thesis, comprehensive summary (Other academic).
    Abstract [en]

    This thesis consists of two papers studying structure learning in probabilistic graphical models for both undirected graphs and directed acyclic graphs (DAGs).

    Paper A presents a novel family of graph-theoretical algorithms, called the junction tree expanders, that incrementally construct junction trees for decomposable graphs. Due to their Markovian property, the junction tree expanders are shown to be suitable as proposal kernels in a sequential Monte Carlo (SMC) sampling scheme for approximating a graph posterior distribution. A simulation study for the case of Gaussian decomposable graphical models shows the efficiency of the suggested unified approach for both structural and parametric Bayesian inference.

    Paper B develops a novel prior distribution over DAGs with the ability to express prior knowledge in terms of graph layerings. In conjunction with the prior, a search-and-score algorithm based on the layering property of DAGs is developed for performing structure learning in Bayesian networks. A simulation study shows that the search-and-score algorithm along with the prior has superior performance compared with other priors when learning graphs with a clearly layered structure.

  • 305. Roueff, Francois
    et al.
    Rydén, Tobias
    Lund University.
    Non-parametric estimation of mixing densities for discrete distributions (2005). In: Annals of Statistics, ISSN 0090-5364, E-ISSN 2168-8966, Vol. 33, no 5, p. 2066-2108. Article in journal (Refereed).
    Abstract [en]

    By a mixture density is meant a density of the form π_μ(·) = ∫ π_θ(·) μ(dθ), where (π_θ)_{θ∈Θ} is a family of probability densities and μ is a probability measure on Θ. We consider the problem of identifying the unknown part of this model, the mixing distribution μ, from a finite sample of independent observations from π_μ. Assuming that the mixing distribution has a density function, we wish to estimate this density within appropriate function classes. A general approach is proposed and its scope of application is investigated in the case of discrete distributions. Mixtures of power series distributions are more specifically studied. Standard methods for density estimation, such as kernel estimators, are available in this context, and it has been shown that these methods are rate optimal or almost rate optimal in balls of various smoothness spaces. For instance, these results apply to mixtures of the Poisson distribution parameterized by its mean. Estimators based on orthogonal polynomial sequences have also been proposed and shown to achieve similar rates. The general approach of this paper extends and simplifies such results. For instance, it allows us to prove asymptotic minimax efficiency over certain smoothness classes of the above-mentioned polynomial estimator in the Poisson case. We also study discrete location mixtures, or discrete deconvolution, and mixtures of discrete uniform distributions.

  • 306. Rubenthaler, Sylvain
    et al.
    Rydén, Tobias
    Lund University.
    Wiktorsson, Magnus
    Fast simulated annealing in R^d with an application to maximum likelihood estimation in state-space models (2009). In: Stochastic Processes and their Applications, ISSN 0304-4149, E-ISSN 1879-209X, Vol. 119, no 6, p. 1912-1931. Article in journal (Refereed).
    Abstract [en]

    We study simulated annealing algorithms to maximise a function ψ on a subset of R^d. In classical simulated annealing, given a current state θ_n in stage n of the algorithm, the probability to accept a proposed state z at which ψ is smaller is exp(−β_{n+1}(ψ(θ_n) − ψ(z))), where (β_n) is the inverse temperature. With the standard logarithmic increase of (β_n), the probability P(ψ(θ_n) ≤ ψ_max − ε), with ψ_max the maximal value of ψ, tends to zero at a logarithmic rate as n increases. We examine variations of this scheme in which (β_n) is allowed to grow faster, but also consider other functions than the exponential for determining acceptance probabilities. The main result shows that faster rates of convergence can be obtained, both with the exponential and with other acceptance functions. We also show how the algorithm may be applied to functions that cannot be computed exactly but only approximated, and give an example of maximising the log-likelihood function for a state-space model.
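
    A minimal sketch of the scheme in this abstract: a generic annealing loop with a pluggable acceptance function and inverse-temperature sequence, so a logarithmic schedule can be compared against a faster-growing one. The toy target, Gaussian proposal and the n^0.8 rate are illustrative assumptions, not the paper's choices:

    ```python
    import numpy as np

    def anneal(psi, x0, betas, accept=np.exp, step=0.5, seed=0):
        """Generic simulated annealing maximiser. `accept` maps
        -beta * (decrease in psi) to an acceptance probability."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        fx = psi(x)
        for beta in betas:
            z = x + step * rng.standard_normal(x.shape)   # Gaussian proposal
            fz = psi(z)
            if fz >= fx or rng.random() < accept(-beta * (fx - fz)):
                x, fx = z, fz
        return x, fx

    psi = lambda x: -np.sum(x ** 2)                 # toy target, maximum at 0
    n = np.arange(1, 20001)
    x_log, _ = anneal(psi, [3.0, -2.0], betas=np.log(1 + n))   # classical schedule
    x_fast, _ = anneal(psi, [3.0, -2.0], betas=n ** 0.8)       # faster growth
    print(x_log, x_fast)
    ```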

  • 307.
    Rydén, Otto
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Statistical learning procedures for analysis of residential property price indexes (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Residential Property Price Indexes (RPPIs) are used to study the price development of residential property over time. Modeling and analysing an RPPI is not straightforward, since residential property is a heterogeneous good. This thesis focuses on analysing the properties of the two most common hedonic index modeling approaches, the hedonic time dummy method and the hedonic imputation method. These two methods are analysed with statistical learning procedures from a regression perspective: ordinary least squares regression and a number of more advanced regression approaches, namely Huber regression, lasso regression, ridge regression and principal component regression. The analysis is based on data from 56 000 apartment transactions in Stockholm during the period 2013-2016 and results in several models of an RPPI. These suggested models are then validated using both qualitative and quantitative methods, specifically bootstrap re-sampling to construct empirical confidence intervals for the index values and a mean squared error analysis of the different index periods. The main results of this thesis show that the hedonic time dummy method produces indexes with smaller variance and is more robust for smaller datasets. It is further shown that modeling RPPIs with robust regression generally results in a more stable index that is less affected by outliers in the underlying transaction data. This type of robust regression strategy is therefore recommended for a commercial implementation of an RPPI.
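
    The hedonic time dummy method analysed in the thesis amounts to regressing log price on characteristics plus period dummies and exponentiating the dummy coefficients. A minimal sketch under assumed synthetic data (one hypothetical characteristic, twelve monthly periods), using plain OLS rather than the robust and regularized variants the thesis compares:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, T = 2000, 12                         # transactions, monthly periods
    area = rng.uniform(20, 120, n)          # hypothetical hedonic characteristic
    month = rng.integers(0, T, n)
    true_index = np.linspace(0.0, 0.15, T)  # 15% log-price drift over the year
    logp = 8 + 0.02 * area + true_index[month] + 0.1 * rng.standard_normal(n)

    # Hedonic time dummy regression: log price ~ characteristics + month dummies
    D = np.zeros((n, T - 1))
    D[np.arange(n)[month > 0], month[month > 0] - 1] = 1.0
    X = np.column_stack([np.ones(n), area, D])
    beta, *_ = np.linalg.lstsq(X, logp, rcond=None)

    index = 100 * np.exp(np.r_[0.0, beta[2:]])   # period 0 normalised to 100
    print(index.round(1))
    ```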

  • 308.
    Rydén, Tobias
    Lund University.
    EM versus Markov chain Monte Carlo for estimation of hidden Markov models: a computational perspective (2008). In: Bayesian Analysis, ISSN 1931-6690, Vol. 3, no 4, p. 659-688. Article in journal (Refereed).
    Abstract [en]

    Hidden Markov models (HMMs) and related models have become standard in statistics during the last 15-20 years, with applications in diverse areas like speech and other statistical signal processing, hydrology, financial statistics and econometrics, bioinformatics etc. Inference in HMMs is traditionally often carried out using the EM algorithm, but examples of Bayesian estimation, generally implemented through Markov chain Monte Carlo (MCMC) sampling, are also frequent in the HMM literature. The purpose of this paper is to compare the EM and MCMC approaches in three cases of different complexity; the examples include model order selection, continuous-time HMMs and variants of HMMs in which the observed data depend on many hidden variables in an overlapping fashion. All these examples originate, in some way or another, from real-data applications. Neither EM nor MCMC analysis of HMMs is a black-box methodology free of the need for user interaction, and we illustrate some of the problems one may expect to encounter, like poor mixing and long computation times.
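
    The EM side of the comparison rests on the forward algorithm, which evaluates the HMM likelihood that both EM and MCMC work with. A minimal sketch for a discrete two-state HMM with made-up parameters:

    ```python
    import numpy as np

    def forward_loglik(obs, A, B, pi):
        """Log-likelihood of a discrete HMM via the scaled forward algorithm.
        A: transition matrix, B: emission matrix, pi: initial distribution."""
        alpha = pi * B[:, obs[0]]
        loglik = np.log(alpha.sum())
        alpha /= alpha.sum()
        for y in obs[1:]:
            alpha = (alpha @ A) * B[:, y]
            loglik += np.log(alpha.sum())
            alpha /= alpha.sum()
        return loglik

    A = np.array([[0.9, 0.1], [0.2, 0.8]])
    B = np.array([[0.7, 0.3], [0.1, 0.9]])
    pi = np.array([0.5, 0.5])
    print(forward_loglik([0, 1, 1, 0, 1], A, B, pi))
    ```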

  • 309.
    Rydén, Tobias
    Lund University.
    Hidden Markov Models (2004). In: Encyclopedia of Actuarial Science: vol 2 / [ed] Teugels, J., and Sundt, B., Wiley-Blackwell, 2004, p. 821-827. Chapter in book (Refereed).
  • 310.
    Röhss, Josefine
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A Statistical Framework for Classification of Tumor Type from microRNA Data (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Hepatocellular carcinoma (HCC) is a type of liver cancer with a low survival rate, not least due to the difficulty of diagnosing it at an early stage. The objective of this thesis is to build a random forest classification method based on microRNA (and messenger RNA) expression profiles from patients with HCC. The main purpose is to be able to distinguish between tumor samples and normal samples by measuring the miRNA expression. If successful, this method can be used to detect HCC at an earlier stage and to design new therapeutics. The microRNAs and messenger RNAs which have a significant difference in expression between tumor samples and normal samples are selected for building random forest classification models. These models are then tested on paired samples of tumor and surrounding normal tissue from patients with HCC. The results show that the classification models built for classifying tumor and normal samples have high prediction accuracy and hence show high potential for using microRNA and messenger RNA expression levels for diagnosis of HCC.
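
    A minimal sketch of the classification step, assuming synthetic expression data in place of the real miRNA profiles (the five shifted features below stand in for differentially expressed miRNAs):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    n, p = 200, 50                          # samples x expression features
    X = rng.standard_normal((n, p))
    y = rng.integers(0, 2, n)               # 1 = tumor, 0 = normal (synthetic)
    X[y == 1, :5] += 1.0                    # five differentially expressed features

    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    print(cross_val_score(rf, X, y, cv=5).mean())   # out-of-sample accuracy
    ```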

  • 311. Seita, D.
    et al.
    Pokorny, Florian T.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Mahler, J.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Franklin, M.
    Canny, J.
    Goldberg, K.
    Large-scale supervised learning of the grasp robustness of surface patch pairs (2017). In: 2016 IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots, SIMPAR 2016, Institute of Electrical and Electronics Engineers Inc., 2017, p. 216-223. Conference paper (Refereed).
    Abstract [en]

    The robustness of a parallel-jaw grasp can be estimated by Monte Carlo sampling of perturbations in pose and friction but this is not computationally efficient. As an alternative, we consider fast methods using large-scale supervised learning, where the input is a description of a local surface patch at each of two contact points. We train and test with disjoint subsets of a corpus of 1.66 million grasps where robustness is estimated by Monte Carlo sampling using Dex-Net 1.0. We use the BIDMach machine learning toolkit to compare the performance of two supervised learning methods: Random Forests and Deep Learning. We find that both of these methods learn to estimate grasp robustness fairly reliably in terms of Mean Absolute Error (MAE) and ROC Area Under Curve (AUC) on a held-out test set. Speedups over Monte Carlo sampling are approximately 7500x for Random Forests and 1500x for Deep Learning.

  • 312.
    Serpeka, Rokas
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Analyzing and modelling exchange rate data using VAR framework (2012). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this report, analyses of foreign exchange rate time series are performed. First, triangular arbitrage is detected and eliminated from the data series using linear algebra tools. Then Vector Autoregressive processes are calibrated and used to replicate the dynamics of exchange rates as well as to forecast the time series. Finally, an optimal portfolio of currencies with minimal Expected Shortfall is formed using one-period-ahead forecasts.
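
    The VAR calibration and forecasting step can be sketched with statsmodels; the simulated series and the fixed lag order of 2 are assumptions for the sake of the example:

    ```python
    import numpy as np
    from statsmodels.tsa.api import VAR

    rng = np.random.default_rng(3)
    T = 500
    log_rates = np.cumsum(0.001 * rng.standard_normal((T, 3)), axis=0)  # fake FX logs
    returns = np.diff(log_rates, axis=0)

    res = VAR(returns).fit(2)                        # fixed lag order for the sketch
    forecast = res.forecast(returns[-2:], steps=5)   # five steps ahead
    print(res.aic, forecast.shape)
    ```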

  • 313.
    Shahrabi Farahani, Hossein
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Computational Modeling of Cancer Progression (2013). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Cancer is a multi-stage process resulting from accumulation of genetic mutations. Data obtained from assaying a tumor only contain the set of mutations in the tumor and lack information about their temporal order. Learning the chronological order of the genetic mutations is an important step towards understanding the disease. The probability of introduction of a mutation to a tumor increases if certain mutations that promote it have already happened. Such dependencies induce what we call the monotonicity property in cancer progression. A realistic model of cancer progression should take this property into account.

    In this thesis, we present two models for cancer progression and algorithms for learning them. In the first model, we propose Progression Networks (PNs), which are a special class of Bayesian networks. In learning PNs the issue of monotonicity is taken into consideration. The problem of learning PNs is reduced to Mixed Integer Linear Programming (MILP), an NP-hard problem for which very good heuristics exist. We also developed a program, DiProg, for learning PNs.

    In the second model, the problem of noise in the biological experiments is addressed by introducing hidden variables. We call this model the Hidden variable Oncogenetic Network (HON). In a HON, two variables are assigned to each node: a hidden variable that represents the progression of cancer to the node and an observable random variable that represents the observation of the mutation corresponding to the node. We devised a structural Expectation Maximization (EM) algorithm for learning HONs. In the M-step of the structural EM algorithm, we need to perform a considerable number of inference tasks. Because exact inference is tractable only on Bayesian networks with bounded treewidth, we also developed an algorithm for learning bounded treewidth Bayesian networks by reducing the problem to a MILP.

    Our algorithms performed well on synthetic data. We also tested them on cytogenetic data from renal cell carcinoma. The learned progression networks from both algorithms are in agreement with the previously published results.

    MicroRNAs are short non-coding RNAs that are involved in post-transcriptional regulation. A-to-I editing of microRNAs converts adenosine to inosine in the double-stranded RNA. We developed a method for determining editing levels in mature microRNAs from high-throughput RNA sequencing data from the mouse brain. Here, for the first time, we showed that the level of editing increases with development.

  • 314.
    Shahrabi Farahani, Hossein
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Lagergren, Jens
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    A structural EM algorithm for learning hidden variable oncogenetic networks. Manuscript (preprint) (Other academic).
  • 315.
    Shahrabi Farahani, Hossein
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Parviainen, Pekka
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    Lagergren, Jens
    KTH, School of Computer Science and Communication (CSC), Computational Biology, CB.
    A Linear Programming Approach for Learning Bounded Treewidth Bayesian Networks (2013). Manuscript (preprint) (Other academic).
    Abstract [en]

    In many applications, one wants to compute conditional probabilities from a Bayesian network. This inference problem is NP-hard in general but becomes tractable when the network has bounded treewidth. Motivated by the needs of applications, we study learning bounded treewidth Bayesian networks. We formulate this problem as a mixed integer linear program (MILP) which can be solved by an anytime algorithm. 

  • 316. Shi, Guodong
    et al.
    Proutiere, Alexandre
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Johansson, Mikael
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Baras, John S.
    Johansson, Karl H.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    The Evolution of Beliefs over Signed Social Networks (2016). In: Operations Research, ISSN 0030-364X, E-ISSN 1526-5463, Vol. 64, no 3, p. 585-604. Article in journal (Refereed).
    Abstract [en]

    We study the evolution of opinions (or beliefs) over a social network modeled as a signed graph. The sign attached to an edge in this graph characterizes whether the corresponding individuals or end nodes are friends (positive links) or enemies (negative links). Pairs of nodes are randomly selected to interact over time, and when two nodes interact, each of them updates its opinion based on the opinion of the other node and the sign of the corresponding link. This model generalizes the DeGroot model to account for negative links: when two adversaries interact, their opinions go in opposite directions. We provide conditions for convergence and divergence in expectation, in mean-square, and in almost sure sense and exhibit phase transition phenomena for these notions of convergence depending on the parameters of the opinion update model and on the structure of the underlying graph. We establish a no-survivor theorem, stating that the difference in opinions of any two nodes diverges whenever opinions in the network diverge as a whole. We also prove a live-or-die lemma, indicating that almost surely, the opinions either converge to an agreement or diverge. Finally, we extend our analysis to cases where opinions have hard lower and upper limits. In these cases, we study when and how opinions may become asymptotically clustered to the belief boundaries and highlight the crucial influence of (strong or weak) structural balance of the underlying network on this clustering phenomenon.
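
    One plausible instantiation of the pairwise update rule described above (the attraction/repulsion parameters and the dense random sign pattern are assumptions, not the paper's general setup):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n, alpha, beta = 20, 0.3, 0.05
    S = np.sign(rng.standard_normal((n, n)))     # random edge signs
    S = np.triu(S, 1); S = S + S.T               # symmetric signed adjacency

    x = rng.standard_normal(n)                   # initial opinions
    for _ in range(2000):
        i, j = rng.choice(n, size=2, replace=False)
        if S[i, j] > 0:      # friends: opinions attract (DeGroot-style averaging)
            x[i], x[j] = x[i] + alpha * (x[j] - x[i]), x[j] + alpha * (x[i] - x[j])
        else:                # enemies: opinions move in opposite directions
            x[i], x[j] = x[i] - beta * (x[j] - x[i]), x[j] - beta * (x[i] - x[j])
    print(np.ptp(x))         # opinion spread after repeated pairwise interactions
    ```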

  • 317.
    Singh, Alex
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A risk-transaction cost trade-off model for index tracking (2014). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This master thesis considers and evaluates a few different risk models for stock portfolios, including an ordinary sample covariance matrix, factor models and an approach inspired by random matrix theory. The risk models are evaluated by simulating minimum variance portfolios and employing cross-validation. The Bloomberg+ transaction cost model is investigated and used to optimize portfolios of stocks with respect to a trade-off between the active risk of the portfolio and transaction costs. Further, a few different simulations are performed while using the optimizer to rebalance long-only portfolios. The optimization problem is solved using an active-set algorithm. A couple of approaches are shown that may be used to visually decide a value for the risk aversion parameter λ in the objective function of the optimization problem.

    The thesis concludes that there is a practical difference between the different risk models that are evaluated. The ordinary sample covariance matrix is shown to not perform as well as the other models. It also shows that more frequent rebalancing is preferable to less frequent. Further the thesis goes on to show a peculiar behavior of the optimization problem, which is that the optimizer does not rebalance all the way to 0 in simulations, even if enough time is provided, unless it is explicitly required by the constraints.
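
    For the unconstrained case, the minimum variance portfolio the thesis simulates has the closed form w = Σ^{-1}1 / (1'Σ^{-1}1); the long-only and transaction-cost versions require the active-set QP instead. A minimal sketch on synthetic returns:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    R = rng.standard_normal((500, 10)) * 0.01    # daily returns, 10 stocks
    Sigma = np.cov(R, rowvar=False)              # sample covariance estimate

    ones = np.ones(Sigma.shape[0])
    w = np.linalg.solve(Sigma, ones)
    w /= w.sum()                                 # unconstrained min-variance weights
    print(w.round(3), w @ Sigma @ w)             # weights and portfolio variance
    ```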

  • 318.
    Singh, Ravi Shankar
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Electric Power and Energy Systems.
    Hooshyar, Hossein
    KTH, School of Electrical Engineering and Computer Science (EECS), Electric Power and Energy Systems.
    Vanfretti, Luigi
    KTH.
    Experimental Real-Time Testing of a Decentralized PMU Data-Based Power Systems Mode Estimator (2017). In: 2017 IEEE Power & Energy Society General Meeting, IEEE, 2017. Conference paper (Refereed).
    Abstract [en]

    This paper presents the results of testing a Phasor Measurement Unit (PMU) data-based mode estimation application deployed within a decentralized architecture using a real-time test platform. This work is a continuation of that in [1], which described a decentralized mode estimation architecture that enables the application to better detect local modes whose observability is affected by other, more observable modes. The tests in this paper were carried out using an active distribution network (ADN) comprising a high voltage network connected to a distribution grid including renewable energy resources (RES). The developed application was run in a decentralized architecture where each PMU was associated with its own processing unit running the application to estimate modes from the time-series data. The results of the decentralized mode estimation architecture are analyzed and compared with its centralized counterpart.

  • 319.
    Singull, Martin
    et al.
    Linköpings universitet.
    Koski, Timo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    On the Distribution of Matrix Quadratic Forms (2012). In: Communications in Statistics - Theory and Methods, ISSN 0361-0926, E-ISSN 1532-415X, Vol. 41, no 18, p. 3403-3415. Article in journal (Refereed).
    Abstract [en]

    A characterization of the distribution of the multivariate quadratic form given by XAX', where X is a p x n normally distributed matrix and A is an n x n symmetric real matrix, is presented. We show that the distribution of the quadratic form is the same as the distribution of a weighted sum of noncentral Wishart distributed matrices. This is applied to derive the distribution of the sample covariance between the rows of X when the expectation is the same for every column and is estimated with the regular mean.
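
    A quick Monte Carlo sanity check of the first moment in the centered case, consistent with the weighted-Wishart characterization: with zero mean, E[XAX'] = tr(A)·Σ, which equals the sum of the Wishart weights (the eigenvalues of A) times Σ. The specific Σ and A below are arbitrary:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    p, n, reps = 3, 6, 20000
    Sigma = np.array([[2.0, 0.5, 0.0], [0.5, 1.0, 0.3], [0.0, 0.3, 1.5]])
    L = np.linalg.cholesky(Sigma)
    A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # symmetric real A
    lam = np.linalg.eigvalsh(A)                          # Wishart weights

    qf = np.zeros((p, p))
    for _ in range(reps):
        X = L @ rng.standard_normal((p, n))    # X: p x n, N(0, Sigma) columns
        qf += X @ A @ X.T / reps
    print(qf.round(2))                         # Monte Carlo E[XAX'] ...
    print((lam.sum() * Sigma).round(2))        # ... matches tr(A) * Sigma
    ```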

  • 320.
    Sjöberg, Lars Erik
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geoinformatics and Geodesy.
    On the Best Quadratic Minimum Bias Non-Negative Estimator of a Two-Variance Component Model (2011). In: Journal of Geodetic Science, ISSN 2081-9943, Vol. 1, no 3, p. 280-285. Article in journal (Refereed).
    Abstract [en]

    Variance components (VCs) in linear adjustment models are usually successfully computed by unbiased estimators. However, for many unbiased VC techniques the estimated variance components might be negative, a result that cannot be tolerated by the user. This is, for example, the case with the simple additive VC model a·σ_1^2 + b·σ_2^2 with known coefficients a and b, where either of the unbiasedly estimated variance components σ_1^2 and σ_2^2 may frequently come out negative. This fact calls for so-called non-negative VC estimators. Here the Best Quadratic Minimum Bias Non-negative Estimator (BQMBNE) of a two-variance component model is derived. A special case with independent observations is explicitly presented.

  • 321.
    Skanke, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Analysis of Pension Strategies (2014). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In a time when people tend to retire earlier and live longer, combined with increased personal responsibility for allocating, or at least choosing, adequately composed pension funds, the importance of a deeper understanding of long-term investment strategies is inevitably accentuated. Against the background of discrepancies between the pension fund strategies suggested by influential fund providers, professional advisers and previous literature, this thesis addresses foremost one particular research question: how should an investor optimally allocate between risky and risk-less assets in a pension fund depending on age? In order to answer the question, the sum of human wealth, defined as the present value of all expected future incomes, and ordinary financial wealth is maximized by applying a mean-variance and an expected utility approach. The latter, mathematically sounder, method yields a strategy suggesting that 100% of available capital be invested in risky assets until the age of 47, after which the portion should be gradually reduced to reach 32% in the last period before retirement. The strategy is clearly favorable to solely holding a risk-free asset, and it just outperforms the commonly applied "100 minus age" strategy.

  • 322. Sköld, Martin
    et al.
    Rydén, Tobias
    Lund University.
    Samuelsson, Viktoria
    Bratt, Charlotte
    Ekblad, Lars
    Olsson, Håkan
    Baldetorp, Bo
    Regression analysis and modelling of data acquisition for SELDI-TOF mass spectrometry (2007). In: Bioinformatics, ISSN 1367-4803, E-ISSN 1367-4811, Vol. 23, no 11, p. 1401-1409. Article in journal (Refereed).
    Abstract [en]

    Motivation: Pre-processing of SELDI-TOF mass spectrometry data is currently performed on a largely ad hoc basis. This makes comparison of results from independent analyses troublesome and does not provide a framework for distinguishing different sources of variation in data. Results: In this article, we consider the task of pooling a large number of single-shot spectra, a task commonly performed automatically by the instrument software. By viewing the underlying statistical problem as one of heteroscedastic linear regression, we provide a framework for introducing robust methods and for dealing with missing data resulting from the limited span of recordable intensity values provided by the instrument. Our framework provides an interpretation of currently used methods as a maximum-likelihood estimator and allows theoretical derivation of its variance. We observe that this variance depends crucially on the total number of ionic species, which can vary considerably between different pooled spectra. This variation in variance can potentially invalidate the results from naive methods of discrimination/classification, and we outline appropriate data transformations. Introducing methods from robust statistics did not improve the standard errors of the pooled samples. Imputing missing values using the EM algorithm, however, had a notable effect on the result; for our data, the pooled height of peaks which were frequently truncated increased by up to 30%.

  • 323.
    Stattin, Oskar
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Large scale inference under sparse and weak alternatives: non-asymptotic phase diagram for CsCsHM statistics (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    High-throughput measurement technology allows huge numbers of features to be generated and stored, of which very few may be useful for any one single problem at hand. Examples include genomics, proteomics and astronomy, where massive multiple testing often needs to be performed, expecting a few significant effects and an essentially null background. A number of new test procedures have been developed for detecting these so-called sparse and weak effects in large-scale statistical inference. The most widely used is Higher Criticism, HC (see e.g. Donoho and Jin (2004)). A new class of goodness-of-fit test statistics, called CsCsHM, has recently been derived (see Stepanova and Pavlenko (2017)) for the same type of multiple testing and is shown to achieve better asymptotic properties than the traditional HC approach. This report empirically investigates the behavior of both test procedures in the neighborhood of the detection boundary, i.e. the threshold for the detectability of sparse and weak effects. This theoretical boundary sharply separates the phase space, spanned by the sparsity and weakness parameters, into two subregions: the region of detectability and the region of undetectability. The statistics of both methodologies are also applied and compared for feature selection in high-dimensional binary classification problems. Besides the study of the methods and simulations, applications of both methods to realistic data are carried out. It is found that the statistics are comparable in performance accuracy.
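
    A minimal sketch of the Higher Criticism statistic against which CsCsHM is compared, evaluated under a synthetic sparse-and-weak alternative and under the global null (the restriction to the smallest half of the p-values is one common convention):

    ```python
    import numpy as np
    from scipy.stats import norm

    def higher_criticism(pvals):
        """HC statistic of Donoho & Jin (2004), using the smallest half of
        the ordered p-values."""
        n = len(pvals)
        p = np.sort(pvals)
        i = np.arange(1, n + 1) / n
        hc = np.sqrt(n) * (i - p) / np.sqrt(p * (1 - p))
        return hc[: n // 2].max()

    rng = np.random.default_rng(7)
    n, eps, mu = 10**5, 0.001, 3.0                # sparsity and signal strength
    z = rng.standard_normal(n)
    z[: int(eps * n)] += mu                       # 100 weak signals among 10^5 nulls
    pv = 2 * norm.sf(np.abs(z))
    print(higher_criticism(pv))                   # large under the sparse alternative
    print(higher_criticism(rng.uniform(size=n)))  # moderate under the global null
    ```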

  • 324.
    Steffen, Richard
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Risk premia implied by derivative prices (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The thesis investigates the potential to recover the real-world probabilities of an underlying asset from derivative prices by using the recovery approach developed in (Carr & Yu, 2012) and (Ross, 2011). For this purpose the VIX Index and US Treasury bills are used to recover the VIX dynamics and the short rate dynamics under the real-world probability measure. The approach implies that VIX and its derivatives have a risk premium equal to zero, contradicting empirical evidence of a substantial negative risk premium. In fact, we show that for any asset unrelated to the short rate the risk premium is zero. In the case of recovering the short rate, the CIR model is calibrated to the US zero coupon Treasury yield curve. The predictions of the recovered CIR process are benchmarked against the risk neutral CIR process and a naive predictor. The recovered process is found to outperform the risk neutral process, suggesting that the recovery step was successful. However, it underperforms the naive process in its predictions.

  • 325.
    Stenberg, Kristoffer
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Wikerman, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Evaluating Regime Switching in Dynamic Conditional Correlation (2013). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This paper provides a comparative study of the Dynamic Conditional Correlation model introduced by Engle (2002) and the Independent Switching Dynamic Conditional Correlation model introduced by Lee (2010) by evaluating the models for a set of known correlation processes. The evaluation is also extended to cover empirical data to assess the practical performance of the models. The data include the price of gold and oil, the yield on benchmark 10 year U.S. Treasury notes and the Euro-U.S. dollar exchange rate from January 2007 to December 2009. In addition, a general description of the difficulties of estimating correlations is presented to give the reader a better understanding of the limitations of the models. From the results, it is concluded that neither the IS-DCC model nor the DCC model is generally superior, except for very short-lived correlation shifts, where the IS-DCC model outperforms in both detecting and measuring correlations. However, this paper recommends that these models be used in combination with a qualitative study in empirical situations to better understand the underlying correlation dynamics.

  • 326. Stjernqvist, Susann
    et al.
    Rydén, Tobias
    Lund University.
    A continuous-index hidden Markov jump process for modeling DNA copy number data (2009). In: Biostatistics, ISSN 1465-4644, E-ISSN 1468-4357, Vol. 10, no 4, p. 773-778. Article in journal (Refereed).
    Abstract [en]

    The number of copies of DNA in human cells can be measured using array comparative genomic hybridization (aCGH), which provides intensity ratios of sample to reference DNA at genomic locations corresponding to probes on a microarray. In the present paper, we devise a statistical model, based on a latent continuous-index Markov jump process, that is aimed to capture certain features of aCGH data, including probes that are unevenly long, unevenly spaced, and overlapping. The model has a continuous state space, with 1 state representing a normal copy number of 2, and the rest of the states being either amplifications or deletions. We adopt a Bayesian approach and apply Markov chain Monte Carlo (MCMC) methods for estimating the parameters and the Markov process. The model can be applied to data from both tiling bacterial artificial chromosome arrays and oligonucleotide arrays. We also compare a model with normal distributed noise to a model with t-distributed noise, showing that the latter is more robust to outliers.

  • 327. Stjernqvist, Susann
    et al.
    Rydén, Tobias
    Lund University.
    Sköld, Martin
    Staaf, Johan
    Continuous-index hidden Markov modelling of array CGH copy number data (2007). In: Bioinformatics, ISSN 1367-4803, E-ISSN 1367-4811, Vol. 23, no 8, p. 1006-1014. Article in journal (Refereed).
  • 328.
    Styrud, Lovisa
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Risk Premium Prediction of Car Damage Insurance using Artificial Neural Networks and Generalized Linear Models (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Over the last few years the interest in statistical learning methods, in particular artificial neural networks, has reawakened due to increasing computing capacity, available data and a drive towards automatization of different tasks. Artificial neural networks have numerous applications, which is why they appear in various contexts. Using artificial neural networks in insurance rate making is an area in which a few pioneering studies have been conducted, with promising results. This thesis suggests using a multilayer perceptron (MLP) neural network for pricing car damage insurance. The MLP is compared with two traditionally used methods within the framework of generalized linear models (GLMs). The MLP was selected by cross-validation of a set of candidate models. For the comparison models, a log-link GLM with Tweedie's compound Poisson distribution modeling the risk premium as dependent variable was set up, as well as a two-part GLM with a log-link Poisson GLM for claim frequency and a log-link Gamma GLM for claim severity. Predictions on an independent test set showed that the Tweedie GLM had the lowest prediction error, followed by the MLP model and last the Poisson-Gamma GLM. Analysis of risk ratios for the different explanatory variables showed that the Tweedie GLM was also the least discriminatory model, followed by the Poisson-Gamma GLM and the MLP. The MLP had the highest bootstrap estimate of variance in prediction error on the test set. Overall, however, the MLP performed roughly in line with the GLM models, and given the basic model configurations that were cross-validated and the restricted computing power, the MLP results should be seen as successful for the use of artificial neural networks in car damage insurance rate making. Nevertheless, practical aspects argue in favor of using GLMs.

    This thesis is written at If P&C Insurance, a property and casualty insurance company active in Scandinavia, Finland and the Baltic countries. The headquarters are situated in Bergshamra, Stockholm.
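
    The two-part frequency/severity benchmark can be sketched with statsmodels on synthetic data; the rating factors and coefficients below are invented, and this is of course not If P&C's tariff model:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(8)
    n = 5000
    X = sm.add_constant(rng.standard_normal((n, 2)))          # two rating factors
    freq = rng.poisson(np.exp(X @ [-2.0, 0.3, -0.2]))         # claim counts
    sev = rng.gamma(2.0, np.exp(X @ [7.0, 0.1, 0.05]) / 2.0)  # claim sizes

    f_mod = sm.GLM(freq, X, family=sm.families.Poisson()).fit()
    pos = freq > 0                                 # severity fitted on claims only
    s_mod = sm.GLM(sev[pos], X[pos],
                   family=sm.families.Gamma(link=sm.families.links.Log())).fit()

    risk_premium = f_mod.predict(X) * s_mod.predict(X)   # frequency x severity
    print(risk_premium[:5].round(1))
    ```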

  • 329.
    Su, Xun
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Cheung, Mei Ting
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Day-of-the-week effects in stock market data (2012). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The purpose of this thesis is to investigate day-of-the-week effects for stock index returns. The investigations include analysis of means and variances as well as return-distribution properties such as skewness and tail behavior. Moreover, the existence of conditional day-of-the-week effects, depending on the outcome of returns from the previous week, is analyzed. Particular emphasis is put on determining useful testing procedures for differences in variance in return data from different weekdays. Two time series models, AR and GARCH(1,1), are used to find out if any weekday's mean return is different from the other days'. The investigations are repeated for two-day returns and for returns of diversified portfolios made up of several stock index returns.
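
    One standard option for the variance-testing question emphasized above is Levene's test across weekday groups, sketched below on synthetic returns with an injected Monday effect; the thesis investigates which such procedures are actually appropriate:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)
    n_weeks = 300
    returns = 0.01 * rng.standard_normal((n_weeks, 5))   # Mon..Fri returns
    returns[:, 0] *= 1.2                                 # inject higher Monday variance

    by_day = [returns[:, d] for d in range(5)]
    print(stats.levene(*by_day))      # robust test for equal variances
    print(stats.f_oneway(*by_day))    # ANOVA for equal mean returns
    ```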

  • 330.
    Sundberg, Victor
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Application and Bootstrapping of the Munich Chain Ladder Method (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Point estimates of the standard Chain Ladder method (CLM) and of the more complex Munich Chain Ladder method (MCL) are compared to real data on 38 different datasets in order to evaluate whether MCL produces better predictions on average for a dataset from an arbitrary insurance portfolio. MCL is also examined to determine whether the future paid and incurred claims converge as time progresses. A bootstrap model based on MCL (BMCL) is examined in order to evaluate its ability to estimate the probability density function (PDF) of future claims and observable claim development results (OCDR). The results show that the paid and incurred predictions by MCL converge. The results also show that, considering all datasets, MCL on average produces better estimations than CLM with paid data, but no improvement can be seen with incurred data. Further, the results show that by considering a subset of datasets fulfilling certain criteria, or by only considering accident years after 1999, the percentage of datasets in which MCL produces superior estimations increases. When examining BMCL one finds that it can produce estimated PDFs of ultimate reserves and OCDRs; however, the mean of the estimated ultimate reserves does not converge to the MCL estimates, nor does the mean of the OCDRs converge to zero. In order to get the right convergence, the estimated OCDR PDFs are centered and the mean of the BMCL-estimated ultimate reserve is set to the MCL estimate by multiplication.
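
    For reference, the standard CLM baseline that MCL extends: volume-weighted development factors on a cumulative triangle, with ultimates obtained by multiplying out the remaining factors. The triangle below is invented; MCL itself additionally couples the paid and incurred triangles:

    ```python
    import numpy as np

    # Cumulative paid-claims triangle (rows: accident years, synthetic numbers)
    tri = np.array([
        [100.0, 160.0, 190.0, 200.0],
        [110.0, 170.0, 205.0, np.nan],
        [120.0, 185.0, np.nan, np.nan],
        [130.0, np.nan, np.nan, np.nan],
    ])

    n = tri.shape[1]
    f = np.ones(n - 1)
    for j in range(n - 1):
        rows = ~np.isnan(tri[:, j + 1])
        f[j] = tri[rows, j + 1].sum() / tri[rows, j].sum()   # volume-weighted factor

    latest = np.array([tri[i, ~np.isnan(tri[i])][-1] for i in range(n)])
    ultimate = latest * np.array([np.prod(f[n - 1 - i:]) for i in range(n)])
    print(f.round(3), (ultimate - latest).round(1))          # factors and reserves
    ```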

  • 331.
    Sundin, Jesper
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Risk contribution and its application in asset and risk management for life insurance (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In risk management one important aspect is the allocation of total portfolio risk into its components. This can be done by measuring each component's risk contribution relative to the total risk, taking into account the covariance between components. The measurement procedure is straightforward under assumptions of elliptical distributions but not under the commonly used multivariate log-normal distributions. Two portfolio strategies are considered: the "buy and hold" and the "constant mix" strategy. The profits and losses of the components of a generic portfolio strategy are defined in order to enable a proper definition of risk contribution for the constant mix strategy. Kernel estimation of risk contribution is then performed for both portfolio strategies using Monte Carlo simulation. Further, applications of risk contributions to asset and risk management are discussed in the context of life insurance.
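
    Under an elliptical (e.g. Gaussian) model the allocation is indeed straightforward: Euler contributions w_i(Σw)_i/σ_p sum exactly to the total portfolio volatility, which is the point of departure before the thesis turns to kernel estimation for the log-normal case. Numbers below are arbitrary:

    ```python
    import numpy as np

    w = np.array([0.4, 0.3, 0.2, 0.1])                 # portfolio weights
    Sigma = np.array([[0.04, 0.01, 0.00, 0.00],
                      [0.01, 0.09, 0.02, 0.00],
                      [0.00, 0.02, 0.16, 0.01],
                      [0.00, 0.00, 0.01, 0.01]])

    sigma_p = np.sqrt(w @ Sigma @ w)                   # total portfolio volatility
    rc = w * (Sigma @ w) / sigma_p                     # Euler risk contributions
    print(rc, rc.sum(), sigma_p)                       # contributions add to the total
    ```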

  • 332.
    Sundqvist, Greger
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Model risk in a hedging perspective (2011). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
  • 333.
    Svensson Depraetere, Xavier
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Application of new particle-based solutions to the Simultaneous Localization and Mapping (SLAM) problem (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this thesis, we explore novel solutions to the Simultaneous Localization and Mapping (SLAM) problem based on particle filtering and smoothing methods. In essence, the SLAM problem consists of two interdependent tasks: map building and tracking. Three solution methods utilizing different smoothing techniques are explored. The smoothing methods used are fixed lag smoothing (FLS), forward-only forward-filtering backward-smoothing (forward-only FFBSm) and the particle-based, rapid incremental smoother (PaRIS). In conjunction with these smoothing techniques, the well-established Expectation-Maximization (EM) algorithm is used to produce maximum-likelihood estimates of the map. The three solution methods are then evaluated and compared in a simulated setting.
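
    All three smoothers build on a particle filter for the tracking task. A minimal bootstrap particle filter for a one-dimensional linear-Gaussian toy model (an assumption made so the sketch stays self-contained; it is not the SLAM model itself):

    ```python
    import numpy as np

    def bootstrap_pf(y, n_part, rng):
        """Bootstrap particle filter for x_t = 0.9 x_{t-1} + v_t, y_t = x_t + e_t,
        with v_t, e_t standard normal."""
        x = rng.standard_normal(n_part)
        means = []
        for yt in y:
            x = 0.9 * x + rng.standard_normal(n_part)    # propagate particles
            logw = -0.5 * (yt - x) ** 2                  # Gaussian observation weight
            w = np.exp(logw - logw.max()); w /= w.sum()
            means.append(w @ x)                          # filtered mean estimate
            x = rng.choice(x, size=n_part, p=w)          # multinomial resampling
        return np.array(means)

    rng = np.random.default_rng(10)
    T = 100
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = 0.9 * x[t - 1] + rng.standard_normal()
    y = x + rng.standard_normal(T)
    print(np.abs(bootstrap_pf(y, 1000, rng) - x).mean())  # average tracking error
    ```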

  • 334.
    Svensson, Jens
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    On Importance Sampling and Dependence Modeling (2009). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    This thesis consists of four papers.

    In the first paper, Monte Carlo simulation for tail probabilities of heavy-tailed random walks is considered. Importance sampling algorithms are constructed by using mixtures of the original distribution with some other state-dependent distributions. Sufficient conditions under which the relative error of such algorithms is bounded are found, and the bound is calculated. A new mixture algorithm based on scaling of the original distribution is presented and compared to existing algorithms.

    In the second paper, Monte Carlo simulation of quantiles is treated. It is shown that by using importance sampling algorithms developed for tail probability estimation, efficient quantile estimators can be obtained. A functional limit of the quantile process under the importance sampling measure is found, and the variance of the limit process is calculated for regularly varying distributions. The procedure is also applied to the calculation of expected shortfall. The algorithms are illustrated numerically for a heavy-tailed random walk.

    In the third paper, large deviation probabilities for a sum of dependent random variables are derived. The dependence stems from a few underlying random variables, so-called factors. Each summand is composed of two parts: an idiosyncratic part and a part given by the factors. Conditions under which both factors and idiosyncratic components contribute to the large deviation behavior are found, and the resulting approximation is evaluated in a simple example.

    In the fourth paper, the asymptotic eigenvalue distribution of the exponentially weighted moving average covariance estimator is studied. Equations for the asymptotic spectral density and the boundaries of its support are found using the Marchenko-Pastur theorem.
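
    The mixture idea of the first paper can be sketched in a simplified, state-independent form: sample each increment from a mixture of the original Pareto density and a scaled copy, and reweight by the likelihood ratio, which is bounded by p^{-N} by construction. The tail index, threshold and mixture weight below are arbitrary, and the thesis's actual algorithms use state-dependent mixtures:

    ```python
    import numpy as np
    from scipy.stats import pareto

    alpha, N, b = 1.5, 5, 100.0     # tail index, walk length, threshold
    p, c = 0.5, b                   # mixture weight, scale of the heavy component
    rng = np.random.default_rng(11)

    def mixture_is(n_sim):
        """Estimate P(X_1 + ... + X_N > b) for Pareto(alpha) steps by sampling
        each step from p*f(x) + (1-p)*(1/c)*f(x/c) and reweighting."""
        est = np.empty(n_sim)
        for k in range(n_sim):
            big = rng.random(N) >= p
            x = pareto.rvs(alpha, size=N, random_state=rng)
            x[big] *= c                                   # scaled heavy component
            logw = (pareto.logpdf(x, alpha)
                    - np.log(p * pareto.pdf(x, alpha)
                             + (1 - p) * pareto.pdf(x / c, alpha) / c)).sum()
            est[k] = np.exp(logw) * (x.sum() > b)
        return est

    e = mixture_is(20000)
    print(e.mean(), e.std() / np.sqrt(e.size))            # estimate and std error
    ```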

  • 335.
    Svensson, Jens
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Some asymptotic results in dependence modelling (2007). Licentiate thesis, comprehensive summary (Other scientific).
    Abstract [en]

    This thesis consists of two papers, both devoted to the study of asymptotics in dependence modelling.

    The first paper studies large deviation probabilities for a sum of dependent random variables, where the dependence stems from a few underlying random variables, so-called factors. Each summand is composed of two parts: an idiosyncratic part and a part given by the factors. Conditions under which both factors and idiosyncratic components contribute to the large deviation behaviour are found and the resulting approximation is evaluated in a simple special case. The results are then applied to stochastic processes with the same structure. Based on the results of the first part of the paper, it is concluded that large deviations on a finite time interval are due to one large jump that can come from either the factor or the idiosyncratic part of the process.

    The second paper studies the asymptotic eigenvalue distribution of the exponentially weighted moving average (EWMA) covariance estimator. Equations for the limiting eigenvalue density and the boundaries of its support are found using the Marchenko-Pastur theorem.

  • 336.
    Svensson, Jens
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    The asymptotic spectrum of the EWMA covariance estimator (2007). In: Physica A: Statistical Mechanics and its Applications, ISSN 0378-4371, E-ISSN 1873-2119, no 385, p. 621-630. Article in journal (Refereed).
    Abstract [en]

    The exponentially weighted moving average (EWMA) covariance estimator is a standard estimator for financial time series, and its spectrum can be used for so-called random matrix filtering. Random matrix filtering using the spectrum of the sample covariance matrix is an established tool in finance and signal detection, and the EWMA spectrum can be used analogously. In this paper, the asymptotic spectrum of the EWMA covariance estimator is calculated using the Marchenko-Pastur theorem. Equations for the spectrum and the boundaries of the support of the spectrum are obtained and solved numerically. The spectrum is compared with covariance estimates using simulated i.i.d. data and log-returns from a subset of stocks from the S&P 500. The behaviour of the EWMA estimator in this limited empirical study is similar to the results in previous studies of sample covariance matrices. Correlations in the data are found to affect only a small part of the EWMA spectrum, suggesting that a large part may be filtered out.
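
    The estimator itself is a one-line recursion, and its eigenvalue spread under i.i.d. data (true covariance I) is what the Marchenko-Pastur-type analysis describes. A small simulation sketch with an assumed RiskMetrics-style decay factor:

    ```python
    import numpy as np

    rng = np.random.default_rng(12)
    T, d, lam = 2000, 200, 0.94          # sample size, dimension, EWMA decay
    X = rng.standard_normal((T, d))      # i.i.d. data with true covariance I

    S = np.eye(d)
    for t in range(T):
        S = lam * S + (1 - lam) * np.outer(X[t], X[t])   # EWMA covariance recursion

    eig = np.linalg.eigvalsh(S)
    print(eig.min(), eig.max())          # dispersion around 1 despite identity truth
    ```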

  • 337.
    Svensson, Jens
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Djehiche, Boualem
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Large deviations for heavy-tailed factor models (2009). In: Statistics and Probability Letters, ISSN 0167-7152, E-ISSN 1879-2103, Vol. 79, no 3, p. 304-311. Article in journal (Refereed).
    Abstract [en]

    We study large deviation probabilities for a sum of dependent random variables from a heavy-tailed factor model, assuming that the components are regularly varying. Depending on the regions considered, probabilities are determined by different parts of the model.

  • 338.
    Säterbrink, Filip
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hedonic House Price Index (2013). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Nasdaq OMX Valueguard-KTH Housing Index (HOX) is a hedonic price index that illustrates the price development of condominiums in Sweden, and that is obtained by using regression techniques. Concerns have been raised regarding the influence of the monthly fee on the index. Low-fee condominiums could be more popular because of the low monthly cost, while high-fee condominiums tend to sell for a lower price due to the high monthly cost. As the price of a condominium rises, the importance of the monthly fee decreases. Because of this, the monthly fee might affect the regression that produces the index. Furthermore, housing cooperatives are usually indebted. These loans are paid off through the monthly fee, which can thus be considered to finance a debt that few are aware of.

    This issue has been investigated by iteratively estimating the importance of the level of debt in order to find a model that better takes into account the possible impact of the monthly fee on the price development.

    Because the somewhat simplified model produces index values with high standard deviation in many cases, no conclusive evidence has been found that confirms the initial hypothesis. Nevertheless, converting part of the monthly fee into debt has shown a general improvement in fitting a regression equation to the data. It is therefore recommended that real data on debt in housing cooperatives be tested in Valueguard's model.
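
    Editor's note: a hypothetical sketch on simulated data of the idea described above: capitalise a share theta of the monthly fee into implied cooperative debt, add it to the transaction price, and pick the theta that gives the best hedonic fit. The capitalisation rate, effect sizes and regression specification are all invented, not the thesis's model.

        import numpy as np

        # Capitalise a share theta of the monthly fee into implied cooperative
        # debt, add it to the price, and pick the theta with the best hedonic fit.
        rng = np.random.default_rng(2)
        n, r = 500, 0.04                               # sample size, capitalisation rate
        area = rng.uniform(25, 120, n)                 # living area, m^2
        fee = 300 + 35 * area + rng.normal(0, 300, n)  # monthly fee, SEK
        price = 40_000 * area - 150 * fee + rng.normal(0, 5e4, n)

        def r2(theta):
            y = price + 12 * theta * fee / r           # debt-adjusted price
            X = np.column_stack([np.ones(n), area])    # simple hedonic regressor
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            return 1 - resid.var() / y.var()

        for theta in (0.0, 0.25, 0.5, 0.75, 1.0):
            print(theta, round(r2(theta), 4))          # best fit near theta = 0.5 here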

  • 339.
    Teneberg, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Pricing Contingent Convertibles using an Equity Derivatives Jump Diffusion Approach2012Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This paper familiarizes the reader with contingent convertibles and their role in the current financial landscape. A contingent convertible is a security behaving like a bond in normal times, but that converts into equity or is written down in times of turbulence. The paper presents a few existing pricing approaches and introduces an extension to one of these, the equity derivatives approach, by letting the underlying asset follow a jump-diffusion process instead of a standard geometric Brownian motion. The extension requires sophisticated computational techniques in order for the pricing to stay within reasonable time frames. Since market data is sparse and incomplete in this area, the validation of the model is not performed quantitatively, but is instead supported by qualitative arguments.
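
    Editor's note: a simplified Monte Carlo sketch of the equity-derivatives view of a CoCo under a Merton-style jump-diffusion; the contract terms, trigger rule and parameters are invented, coupons are omitted, and the discretization assumes at most one jump per step, so this illustrates the approach rather than the thesis's implementation.

        import numpy as np

        # Equity-derivatives view of a CoCo: Merton-style jump-diffusion share
        # price; the note converts into shares when the price hits a barrier.
        rng = np.random.default_rng(3)
        S0, r, sig = 100.0, 0.02, 0.25
        lam, mu_j, sig_j = 0.5, -0.2, 0.1            # jump intensity, lognormal jumps
        T, steps, npaths = 5.0, 5 * 252, 20_000
        dt = T / steps

        face, barrier, conv_shares = 100.0, 35.0, 2.0
        kappa = np.exp(mu_j + 0.5 * sig_j**2) - 1    # E[e^J] - 1, drift compensator
        drift = (r - lam * kappa - 0.5 * sig**2) * dt

        logS = np.full(npaths, np.log(S0))
        payoff = np.zeros(npaths)
        alive = np.ones(npaths, dtype=bool)
        for k in range(steps):
            dW = rng.standard_normal(npaths)
            # at most one jump per small step is a reasonable approximation
            J = rng.poisson(lam * dt, npaths) * rng.normal(mu_j, sig_j, npaths)
            logS += drift + sig * np.sqrt(dt) * dW + J
            hit = alive & (np.exp(logS) <= barrier)  # conversion trigger
            payoff[hit] = conv_shares * np.exp(logS[hit]) * np.exp(-r * (k + 1) * dt)
            alive &= ~hit
        payoff[alive] = face * np.exp(-r * T)        # survivors redeem at par

        print("CoCo value (coupons omitted):", payoff.mean().round(2))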

  • 340.
    Tewolde Berhan, Damr
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Pricing Inflation Derivatives: A survey of short rate- and market models2012Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis presents an overview of strategies for pricing inflation derivatives. The paper is structured as follows. First, basic definitions and concepts such as nominal rates, real rates and inflation rates are introduced. We introduce the benchmark contracts of the inflation derivatives market and, using standard results from no-arbitrage pricing theory, derive pricing formulas for linear contracts on inflation. In addition, the risk profile of inflation contracts is illustrated, and we highlight how it is captured in the models studied in the paper.

    We then move on to the main objective of the thesis and present three approaches for pricing inflation derivatives, focusing in particular on two popular models. The first is a so-called HJM approach that models the nominal and real forward curves and relates the two through an analogy with domestic and foreign exchange rates. By the choice of volatility functions in the HJM framework, we produce nominal and real term structures similar to the popular interest-rate derivatives model of Hull-White. This approach was first suggested by Jarrow and Yildirim [1], and its main attraction is that it yields analytic pricing formulas for both linear and non-linear benchmark inflation derivatives.

    The second approach is a so-called market model, independently proposed by Mercurio [2] and Belgrade, Benhamou and Koehler [4]. Just as in the famous Libor Market Model, the modelled quantities are observable market entities, namely the respective forward inflation indices. It is shown how this model as well, by the use of certain approximations, can produce analytic formulas for both linear and non-linear benchmark inflation derivatives.

    The advantages and shortcomings of the respective models are evaluated. In particular, we focus on how well the models calibrate to market data. To this end, model parameters are calibrated to market prices of year-on-year inflation floors, and it is then examined how well market prices can be recovered by theoretical pricing with the calibrated model parameters. The thesis is concluded with suggestions for possible extensions and improvements.
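
    Editor's note: a worked example of the standard no-arbitrage valuation of the simplest linear contract mentioned above: the fair fixed rate K of a zero-coupon inflation swap satisfies (1 + K)^T = P_real(0,T) / P_nominal(0,T). The discount factors below are invented.

        import numpy as np

        # Fair fixed rate of a zero-coupon inflation swap from nominal and
        # real discount factors: (1 + K)^T = P_real(0,T) / P_nom(0,T).
        T = 5.0
        y_nom, y_real = 0.030, 0.008                 # illustrative zero rates
        P_nom, P_real = np.exp(-y_nom * T), np.exp(-y_real * T)

        K = (P_real / P_nom) ** (1.0 / T) - 1.0      # break-even inflation rate
        print(f"fair ZCIS rate: {K:.4%}")            # close to y_nom - y_real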

  • 341.
    Tillman, Måns
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    On-Line Market Microstructure Prediction Using Hidden Markov Models2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Over the last decades, financial markets have undergone dramatic changes. With the advent of arbitrage pricing theory, along with new technology, markets have become more efficient. In particular, the new high-frequency markets, with algorithmic trading operating on the microsecond level, make it possible to translate "information" into price almost instantaneously. Such phenomena are studied in the field of market microstructure theory, which aims to explain and predict them.

    In this thesis, we model the dynamics of high frequency markets using non-linear hidden Markov models (HMMs). Such models feature an intuitive separation between observations and dynamics, and are therefore highly convenient tools in financial settings, where they allow a precise application of domain knowledge. HMMs can be formulated based on only a few parameters, yet their inherently dynamic nature can be used to capture well-known intra-day seasonality effects that many other models fail to explain.

    Due to recent breakthroughs in Monte Carlo methods, HMMs can now be efficiently estimated in real-time. In this thesis, we develop a holistic framework for performing both real-time inference and learning of HMMs, by combining several particle-based methods. Within this framework, we also provide methods for making accurate predictions from the model, as well as methods for assessing the model itself.

    In this framework, a sequential Monte Carlo bootstrap filter is adopted to make on-line inference and predictions. Coupled with a backward smoothing filter, this provides a forward filtering/backward smoothing scheme. This is then used in the sequential Monte Carlo expectation-maximization algorithm for finding the optimal hyper-parameters for the model.

    To design an HMM specifically for capturing information translation, we adapt the observable volume imbalance to a dynamic setting. Volume imbalance has previously been used in market microstructure theory to study, for example, price impact. Through careful selection of key model assumptions, we define a slightly modified observable as a process that we call scaled volume imbalance. The outcomes of this process retain the key features of volume imbalance (that is, its relationship to price impact and information) and allow an efficient evaluation of the framework, while providing a promising platform for future studies. This is demonstrated through a test on actual financial trading data, where we obtain high-performance predictions. Our results demonstrate that the proposed framework can successfully be applied to the field of market microstructure.
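
    Editor's note: a minimal sequential Monte Carlo bootstrap filter on a toy Gaussian HMM, sketching the inference step described above; the thesis combines this with backward smoothing and SMC-EM, which are omitted here, and all model parameters are illustrative.

        import numpy as np

        # Bootstrap particle filter for a toy HMM: Gaussian AR(1) state,
        # Gaussian observation noise (all parameters illustrative).
        rng = np.random.default_rng(4)
        T, P = 200, 1000                             # time steps, particles
        phi, q, rvar = 0.95, 0.1, 0.5                # AR coeff, state/obs variances

        x = np.zeros(T)                              # simulate data from the model
        for t in range(1, T):
            x[t] = phi * x[t - 1] + rng.normal(0, np.sqrt(q))
        y = x + rng.normal(0, np.sqrt(rvar), T)

        part = rng.normal(0, 1, P)                   # initial particle cloud
        est = np.zeros(T)
        for t in range(T):
            part = phi * part + rng.normal(0, np.sqrt(q), P)  # propagate
            logw = -0.5 * (y[t] - part) ** 2 / rvar           # likelihood weights
            w = np.exp(logw - logw.max()); w /= w.sum()
            est[t] = w @ part                                 # filtering mean
            part = part[rng.choice(P, P, p=w)]                # resample

        print("filtering RMSE:", np.sqrt(np.mean((est - x) ** 2)).round(3))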

  • 342.
    Tingström, Victor
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Sequential parameter and state learning in continuous time stochastic volatility models using the SMC² algorithm2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In this Master's thesis, joint sequential inference of both parameters and states of stochastic volatility models is carried out using the SMC² algorithm of Chopin, Jacob and Papaspiliopoulos (SMC²: an efficient algorithm for sequential analysis of state-space models). The models under study are the continuous-time stochastic volatility models (i) Heston, (ii) Bates and (iii) SVCJ, where inference is based on option prices. It is found that SMC² performs well for the simpler models (i) and (ii), whereas filtering in (iii) performs worse. Furthermore, it is found that the FFT option price evaluation is the most computationally demanding step, and it is suggested to explore other avenues of computation, such as GPGPU-based computing.
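
    Editor's note: a full-truncation Euler simulation of the Heston model, the simplest of the three models listed; this is a generic sketch with invented parameters, not the thesis's FFT-based option price evaluation.

        import numpy as np

        # Full-truncation Euler scheme for the Heston model; prices a
        # European call by plain Monte Carlo (parameters illustrative).
        rng = np.random.default_rng(5)
        S0, v0, kappa, theta, xi, rho, r = 100.0, 0.04, 2.0, 0.04, 0.3, -0.7, 0.01
        T, steps, npaths = 1.0, 252, 50_000
        dt = T / steps

        S = np.full(npaths, S0)
        v = np.full(npaths, v0)
        for _ in range(steps):
            z1 = rng.standard_normal(npaths)
            z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(npaths)
            vp = np.maximum(v, 0.0)                  # full truncation of variance
            S *= np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
            v += kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2

        K = 100.0
        print("MC call price:", (np.exp(-r * T) * np.maximum(S - K, 0).mean()).round(3))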

  • 343.
    Torell, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Name Concentration Risk and Pillar 2 Compliance: The Granularity Adjustment2013Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    A credit portfolio where each obligor contributes infinitesimally to the risk is said to be infinitely granular. The risk arising from the fact that no real credit portfolio is infinitely granular is called name concentration risk.

    Under Basel II, banks are required to hold a capital buffer for credit risk in order to keep the probability of default at an acceptable level. Credit risk capital charges computed under pillar 1 of Basel II have been calibrated for a specific level of name concentration. If a bank deviates from this benchmark, it is expected to address the deviation under pillar 2, which may involve increased capital charges.

    Here, we look at some of the difficulties that a bank may encounter when computing a name concentration risk add-on under pillar 2. In particular, we study the granularity adjustment for the Vasicek and CreditRisk+ models. An advantage of this approach is that no vendor software products are necessary. We also address the questions of when the granularity adjustment is a coherent risk measure and how to allocate the add-on to exposures in order to optimize the credit portfolio. Finally, the models discussed are applied to real data.
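
    Editor's note: a simulation sketch of name concentration in the Vasicek one-factor model: the 99.9% loss quantile of a finite homogeneous portfolio is compared with the infinitely granular (asymptotic) quantile, the difference playing the role of a granularity add-on. Parameters are invented and SciPy is assumed; this is not the paper's analytic granularity adjustment formula.

        import numpy as np
        from scipy.stats import norm

        # Name concentration in the Vasicek one-factor model: finite-n loss
        # quantile vs. the infinitely granular quantile (parameters invented).
        rng = np.random.default_rng(6)
        n, pd_, rho, alpha, M = 50, 0.01, 0.2, 0.999, 200_000

        Z = rng.standard_normal(M)[:, None]          # systematic factor
        E = rng.standard_normal((M, n))              # idiosyncratic factors
        default = np.sqrt(rho) * Z + np.sqrt(1 - rho) * E < norm.ppf(pd_)
        L = default.mean(axis=1)                     # loss fraction, equal exposures

        q_mc = np.quantile(L, alpha)
        q_asym = norm.cdf((norm.ppf(pd_) + np.sqrt(rho) * norm.ppf(alpha))
                          / np.sqrt(1 - rho))
        print("finite-n:", q_mc, "asymptotic:", q_asym, "add-on:", q_mc - q_asym)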

  • 344.
    Trost, Johanna
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Tail Dependence Considerations for Cross-Asset Portfolios2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Extreme events, heaviness of log-return distribution tails and bivariate asymptotic dependence are important aspects of cross-asset tail risk hedging and diversification. They are investigated in this thesis with the help of threshold copulas, scalar tail dependence measures and bivariate Value-at-Risk. The theory is applied to a global equity portfolio extended with various other asset classes, as proxied by different market indices. The asset class indices are shown to possess so-called stylised facts of financial asset returns, such as heavy-tailedness, clustered volatility and aggregational Gaussianity. The results on the tail dependence structure indicate a lack of strong joint tail dependence, but suitable bivariate dependence models can nonetheless be found and fitted to the data. These dependence structures are then used to draw conclusions about tail hedging opportunities, defined as highly tail-correlated long versus short positions, and about diversification benefits, in the form of lower estimated Value-at-Risk for cross-asset portfolios than for univariate portfolios.
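
    Editor's note: one of the scalar tail dependence measures referred to above can be estimated empirically as lambda_U(q) = P(U > q | V > q) on pseudo-observations; the sketch below, on simulated heavy-tailed data, is an editorial illustration with an invented helper function, not the thesis's estimator.

        import numpy as np

        # Empirical upper tail dependence at threshold q:
        # lambda_U(q) = P(U > q | V > q) on pseudo-observations.
        def upper_tail_dep(x, y, q=0.95):
            u = np.argsort(np.argsort(x)) / (len(x) + 1.0)   # ranks -> (0, 1)
            v = np.argsort(np.argsort(y)) / (len(y) + 1.0)
            return np.mean(u[v > q] > q)                     # conditional exceedance

        rng = np.random.default_rng(7)
        z = rng.standard_t(3, 5000)                          # common heavy-tailed driver
        x = z + 0.5 * rng.standard_t(3, 5000)
        y = z + 0.5 * rng.standard_t(3, 5000)
        print(upper_tail_dep(x, y))                          # clearly above zero here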

  • 345.
    Vallin, Simon
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Small Cohort Population Forecasting via Bayesian Learning2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    A set of distributional assumptions regarding the demographic processes of birth, death, emigration and immigration have been assembled to form a probabilistic model framework of population dynamics. This framework is summarized as a Bayesian network, and Bayesian inference techniques are exploited to infer the posterior distributions of the model parameters from observed data. The birth, death and emigration processes are modelled using a hierarchical beta-binomial model, for which inference of the posterior parameter distribution is analytically tractable. The immigration process is modelled with a Poisson-type regression model, where the posterior distribution of the parameters has to be estimated numerically; this thesis suggests an implementation of the Metropolis-Hastings algorithm for that task. Classification of incomers into subpopulations by age and gender is subsequently made using a hierarchical Dirichlet-multinomial model, for which parameter inference is analytically tractable. The model framework is used to generate forecasts of demographic data, which can be validated against observed outcomes. A key feature of the Bayesian framework is that it estimates the full posterior distributions of demographic data, which makes it possible to account for the full amount of uncertainty when forecasting population growth.
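
    Editor's note: the analytically tractable step referred to above is conjugate updating; for, say, the death process with a Beta(a, b) prior on the event probability and k events among n exposed, the posterior is Beta(a + k, b + n - k). The numbers below are invented.

        import numpy as np

        # Conjugate beta-binomial update, e.g. for the death process:
        # Beta(a, b) prior, k events among n exposed -> Beta(a + k, b + n - k).
        a, b = 1.0, 99.0                             # prior with mean 1%
        n, k = 1200, 15                              # exposed individuals, deaths

        a_post, b_post = a + k, b + n - k
        rng = np.random.default_rng(8)
        draws = rng.beta(a_post, b_post, 10_000)     # posterior samples
        print(a_post / (a_post + b_post), np.quantile(draws, [0.05, 0.95]))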

  • 346.
    Vignon, Marc
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Implementing Sensitivity Calculations for Long Interest Rate Futures2011Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE creditsStudent thesis
  • 347.
    Viktorsson, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    The GARCH-copula model for gauging time conditional dependence in the risk management of electricity derivatives2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In the risk management of electricity derivatives, time to delivery can be divided into a time grid, with the assumption that within each cell of the grid, volatility is more or less constant. This setup, however, does not take into account dependence between the different cells in the time grid.

    This thesis tries to develop a way to gauge the dependence between electricity derivatives at different places in the time grid and with different delivery periods. More specifically, the aim is to estimate the ratio of the quantile of the sum of price changes to the sum of the marginal quantiles of the price changes.

    The approach used is a combination of Generalised Autoregressive Conditional Heteroscedasticity (GARCH) processes and copulas. The GARCH process is used to filter out heteroscedasticity in the price data. Copulas are fitted to the filtered data using pseudo maximum likelihood and the fitted copulas are evaluated using a goodness of fit test.

    GARCH processes alone are found to be insufficient to capture the dynamics of the price data; combining GARCH with autoregressive moving average (ARMA) processes provides a better fit. The resulting dependence is then found to be best captured by elliptical copulas. The estimated ratio is found to be quite small in the cases studied. In general, the ARMA-GARCH filtering gives a better copula fit when applied to financial data. A time dependency in the dependence can also be observed.
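
    Editor's note: a sketch of the two-step procedure on simulated data, assuming the third-party arch and scipy packages: each series is filtered with an AR(1)-GARCH(1,1) model, standardized residuals are transformed to pseudo-observations, and a Gaussian-copula correlation is estimated from their normal scores. Model orders and data are illustrative, not the thesis's fitted specification.

        import numpy as np
        from arch import arch_model
        from scipy import stats

        # Filter each series with AR(1)-GARCH(1,1), map standardized residuals
        # to pseudo-observations, and estimate a Gaussian-copula correlation.
        rng = np.random.default_rng(9)
        z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], 2000)

        U = []
        for j in range(2):
            am = arch_model(100 * z[:, j], mean="AR", lags=1, vol="Garch", p=1, q=1)
            res = am.fit(disp="off")
            e = res.resid / res.conditional_volatility   # standardized residuals
            e = e[~np.isnan(e)]                          # drop pre-sample values
            U.append(stats.rankdata(e) / (len(e) + 1))   # pseudo-observations

        zn = stats.norm.ppf(np.column_stack(U))          # normal scores
        print("copula correlation:", np.corrcoef(zn.T)[0, 1].round(3))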

  • 348.
    Villaume, Erik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Predicting customer level risk patterns in non-life insurance2012Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Several models for predicting future customer profitability early in customer life-cycles in the property and casualty business are constructed and studied. The objective is to model risk at the customer level with input data available early in a private consumer's lifespan. Two retained models, one using a generalized linear model and another using a multilayer perceptron (a special form of artificial neural network), are evaluated on actual data. Numerical results show that differentiation on estimated future risk is most effective for customers with the highest claim frequencies.
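
    Editor's note: a minimal claim-frequency GLM of the kind compared in the thesis, assuming statsmodels; the covariates, effect sizes and exposure handling are invented for illustration.

        import numpy as np
        import statsmodels.api as sm

        # Poisson GLM for claim frequency with a log-exposure offset
        # (covariates and effect sizes invented for illustration).
        rng = np.random.default_rng(10)
        n = 2000
        age = rng.uniform(18, 80, n)
        urban = rng.integers(0, 2, n).astype(float)
        exposure = rng.uniform(0.1, 1.0, n)              # policy years
        lam = np.exp(-2.0 - 0.01 * age + 0.4 * urban) * exposure
        claims = rng.poisson(lam)

        X = sm.add_constant(np.column_stack([age, urban]))
        glm = sm.GLM(claims, X, family=sm.families.Poisson(),
                     offset=np.log(exposure)).fit()
        print(glm.params)                                # recovers the true effects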


  • 349.
    von Feilitzen, Helena
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Modeling non-maturing liabilities2011Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Non-maturing liabilities, such as savings accounts, lack both a predetermined maturity and reset dates, since the depositor is free to withdraw funds at any time and the depository institution is free to change the rate. These attributes complicate the risk management of such products, and no standardized solution exists. The problem is important, however, since non-maturing liabilities typically make up a considerable part of the funding of a bank. In this report, different modeling approaches to the risk management are described, and a method for managing the interest rate risk is implemented. It is a replicating portfolio approach used to approximate the non-maturing liabilities with a portfolio of fixed income instruments. The search for a replicating portfolio is formulated as an optimization problem based on regression between the deposit rate and market rates, separated by a fixed margin. In the report, two different optimization criteria for the replicating portfolio are compared: minimizing the standard deviation of the margin versus maximizing the risk-adjusted margin represented by the Sharpe ratio, of which the latter is found to yield superior results. The choice of the historical sample interval over which the portfolio is optimized seems to have a rather big impact on the outcome, but recalculating the portfolio weights at regular intervals is found to stabilize the results somewhat. All in all, despite the fact that this type of method cannot fully capture the most advanced dynamics of the non-maturing liabilities, a replicating portfolio still appears to be a feasible approach to the interest rate risk management.
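
    Editor's note: a sketch of the replicating-portfolio optimization described above, choosing bucket weights that sum to one either by minimizing the standard deviation of the margin or by maximizing its Sharpe ratio; the rate histories, bucket structure and SciPy optimizer choice are illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize

        # Replicating-portfolio weights over three rate buckets: minimize the
        # margin's standard deviation vs. maximize its Sharpe ratio.
        rng = np.random.default_rng(11)
        T = 120                                          # months of history
        rates = np.column_stack([
            0.020 + 0.004 * rng.standard_normal(T),      # short bucket
            0.025 + 0.003 * rng.standard_normal(T),      # medium bucket
            0.030 + 0.002 * rng.standard_normal(T)])     # long bucket
        deposit = (0.6 * rates[:, 0] + 0.4 * rates[:, 2] - 0.01
                   + 0.001 * rng.standard_normal(T))     # deposit rate history

        margin = lambda w: rates @ w - deposit           # portfolio rate - deposit rate
        cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
        bnds, w0 = [(0, 1)] * 3, np.ones(3) / 3

        min_std = minimize(lambda w: margin(w).std(), w0, bounds=bnds, constraints=cons)
        max_shp = minimize(lambda w: -margin(w).mean() / margin(w).std(),
                           w0, bounds=bnds, constraints=cons)
        print("min-std weights:   ", min_std.x.round(3))
        print("max-Sharpe weights:", max_shp.x.round(3))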

  • 350.
    von Mentzer, Simon
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Risks and scenarios in the Swedish income-based pension system2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In this master thesis, the risks and scenarios in the Swedish income-based pension system are investigated. To investigate the risks, a vector autoregressive (VAR) model for three variables (AP-fund returns, average wage returns and inflation) is considered, and a bootstrap is used to simulate from the VAR model. The simulated values are then substituted into equations that describe the real average wage return, the real return of the AP-funds, the average wage and the income index. Lastly, the pension balance is calculated from the simulated data.

    Scenarios are created by changing one variable at a time in the VAR model, and it is then investigated how the different scenarios affect the indexation and the pension balance.

    The results show a cross-correlation structure between average wage returns and inflation in the VAR model, whereas AP-fund returns can simply be modelled as an exogenous white-noise random variable. The largest changes in indexation and pension balance are seen in the scenario where the average wage return is altered.
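
    Editor's note: a sketch of a VAR(1) least-squares fit and residual bootstrap of the kind described above; the coefficient matrix, dimensions and data are simulated stand-ins for the historical series used in the thesis.

        import numpy as np

        # Least-squares VAR(1) fit and residual bootstrap for three series
        # (stand-ins for AP-fund returns, wage growth and inflation).
        rng = np.random.default_rng(12)
        A = np.array([[0.0, 0.0, 0.0],               # AP-fund returns ~ white noise
                      [0.0, 0.5, 0.2],               # wage growth
                      [0.0, 0.1, 0.6]])              # inflation
        T = 200
        y = np.zeros((T, 3))
        for t in range(1, T):
            y[t] = A @ y[t - 1] + rng.normal(0, 0.01, 3)

        Ylag, Y = y[:-1], y[1:]                      # fit y_t = A_hat y_{t-1} + e_t
        A_hat = np.linalg.lstsq(Ylag, Y, rcond=None)[0].T
        resid = Y - Ylag @ A_hat.T

        idx = rng.integers(0, len(resid), T - 1)     # resample residuals
        yb = np.zeros_like(y)                        # one bootstrap trajectory
        for t in range(1, T):
            yb[t] = A_hat @ yb[t - 1] + resid[idx[t - 1]]
        print(A_hat.round(2))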
