  • 251.
    Löfdahl, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Stochastic modelling in disability insurance, 2013. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis consists of two papers related to the stochastic modelling of disability insurance. In the first paper, we propose a stochastic semi-Markovian framework for disability modelling in a multi-period discrete-time setting. The logistic transforms of disability inception and recovery probabilities are modelled by means of stochastic risk factors and basis functions, using counting processes and generalized linear models. The model for disability inception also takes IBNR claims into consideration. We fit various versions of the models to Swedish disability claims data.

    In the second paper, we consider a large, homogeneous portfolio of life or disability annuity policies. The policies are assumed to be independent conditional on an external stochastic process representing the economic environment. Using a conditional law of large numbers, we establish the connection between risk aggregation and claims reserving for large portfolios. Further, we derive a partial differential equation for moments of present values. Moreover, we show how statistical multi-factor intensity models can be approximated by one-factor models, which allows for solving the PDEs very efficiently. Finally, we give a numerical example where moments of present values of disability annuities are computed using finite difference methods.

  • 252.
    Löfdahl Grelsson, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Topics in life and disability insurance, 2015. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis consists of five papers, presented in Chapters A-E, on topics in life and disability insurance. It is naturally divided into two parts, where papers A and B discuss disability rates estimation based on historical claims data, and papers C-E discuss claims reserving, risk management and insurer solvency.

    In Paper A, disability inception and recovery probabilities are modelled in a generalized linear models (GLM) framework. For prediction of future disability rates, it is customary to combine GLMs with time series forecasting techniques into a two-step method involving parameter estimation from historical data and subsequent calibration of a time series model. This approach may in fact lead to both conceptual and numerical problems since any time trend components of the model are incoherently treated as both model parameters and realizations of a stochastic process. In Paper B, we suggest that this general two-step approach can be improved in the following way: First, we assume a stochastic process form for the time trend component. The corresponding transition densities are then incorporated into the likelihood, and the model parameters are estimated using the Expectation-Maximization algorithm.

    In Papers C and D, we consider a large portfolio of life or disability annuity policies. The policies are assumed to be independent conditional on an external stochastic process representing the economic-demographic environment. Using the Conditional Law of Large Numbers (CLLN), we establish the connection between claims reserving and risk aggregation for large portfolios. Moreover, we show how statistical multi-factor intensity models can be approximated by one-factor models, which allows for computing reserves and capital requirements efficiently. Paper C focuses on claims reserving and ultimate risk, whereas the focus of Paper D is on the one-year risks associated with the Solvency II directive.

    In Paper E, we consider claims reserving for life insurance policies with reserve-dependent payments driven by multi-state Markov chains. The associated prospective reserve is formulated as a recursive utility function using the framework of backward stochastic differential equations (BSDE). We show that the prospective reserve satisfies a nonlinear Thiele equation for Markovian BSDEs when the driver is a deterministic function of the reserve and the underlying Markov chain. Aggregation of prospective reserves for large and homogeneous insurance portfolios is considered through mean-field approximations. We show that the corresponding prospective reserve satisfies a BSDE of mean-field type and derive the associated nonlinear Thiele equation.

  • 253. Magnusson, M.
    et al.
    Jonsson, L.
    Villani, M.
    Broman, David
    KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS.
    Sparse Partially Collapsed MCMC for Parallel Inference in Topic Models, 2018. In: Journal of Computational and Graphical Statistics, ISSN 1061-8600, E-ISSN 1537-2715, Vol. 27, no 2, p. 449-463. Article in journal (Refereed)
    Abstract [en]

    Topic models, and more specifically the class of latent Dirichlet allocation (LDA), are widely used for probabilistic modeling of text. Markov chain Monte Carlo (MCMC) sampling from the posterior distribution is typically performed using a collapsed Gibbs sampler. We propose a parallel sparse partially collapsed Gibbs sampler and compare its speed and efficiency to state-of-the-art samplers for topic models on five well-known text corpora of differing sizes and properties. In particular, we propose and compare two different strategies for sampling the parameter block with latent topic indicators. The experiments show that the increase in statistical inefficiency from only partial collapsing is smaller than commonly assumed, and can be more than compensated by the speedup from parallelization and sparsity on larger corpora. We also prove that the partially collapsed samplers scale well with the size of the corpus. The proposed algorithm is fast, efficient, exact, and can be used in more modeling situations than the ordinary collapsed sampler. Supplementary materials for this article are available online.
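
    The record above concerns a partially collapsed, parallel Gibbs sampler for LDA. As a point of reference only, the sketch below (not part of the record; the toy corpus and hyperparameters are invented) shows the standard fully collapsed Gibbs update for the topic indicators that such samplers build on.

        import numpy as np

        # Toy corpus: each document is a list of word ids from a vocabulary of size V.
        docs = [[0, 1, 2, 2, 3], [3, 4, 4, 5], [0, 2, 5, 5, 1]]
        V, K = 6, 2                 # vocabulary size, number of topics
        alpha, beta = 0.5, 0.1      # symmetric Dirichlet hyperparameters
        rng = np.random.default_rng(0)

        # Count matrices and a random initial topic assignment for every token.
        ndk = np.zeros((len(docs), K))   # topic counts per document
        nkw = np.zeros((K, V))           # word counts per topic
        nk = np.zeros(K)                 # total tokens per topic
        z = [[rng.integers(K) for _ in doc] for doc in docs]
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

        for sweep in range(200):         # collapsed Gibbs sweeps
            for d, doc in enumerate(docs):
                for i, w in enumerate(doc):
                    k = z[d][i]          # remove the current assignment
                    ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                    # Full conditional p(z = k | everything else), up to a constant.
                    p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                    k = int(rng.choice(K, p=p / p.sum()))
                    z[d][i] = k          # add the token back with its new topic
                    ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

        print("topic-word counts:\n", nkw)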

  • 254.
    Magureanu, Stefan
    et al.
    KTH, School of Electrical Engineering (EES), Automatic Control.
    Combes, Richard
    Supelec, France.
    Proutiere, Alexandre
    KTH, School of Electrical Engineering (EES), Automatic Control. INRIA, France.
    Lipschitz Bandits: Regret Lower Bounds and Optimal Algorithms, 2014. Conference paper (Refereed)
    Abstract [en]

    We consider stochastic multi-armed bandit problems where the expected reward is a Lipschitz function of the arm, and where the set of arms is either discrete or continuous. For discrete Lipschitz bandits, we derive asymptotic problem-specific lower bounds for the regret satisfied by any algorithm, and propose OSLB and CKL-UCB, two algorithms that efficiently exploit the Lipschitz structure of the problem. In fact, we prove that OSLB is asymptotically optimal, as its asymptotic regret matches the lower bound. The regret analysis of our algorithms relies on a new concentration inequality for weighted sums of KL divergences between the empirical distributions of rewards and their true distributions. For continuous Lipschitz bandits, we propose to first discretize the action space, and then apply OSLB or CKL-UCB, algorithms that provably exploit the structure efficiently. This approach is shown, through numerical experiments, to significantly outperform existing algorithms that directly deal with the continuous set of arms. Finally, the results and algorithms are extended to contextual bandits with similarities.
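
    For orientation only, the sketch below (not from the paper) implements a plain KL-UCB index for Bernoulli rewards, computed by bisection; it illustrates the KL-divergence-based indices the abstract refers to, but not the Lipschitz-exploiting OSLB or CKL-UCB algorithms themselves. The arm means and horizon are invented.

        import numpy as np

        def kl_bernoulli(p, q, eps=1e-12):
            # KL divergence between Bernoulli(p) and Bernoulli(q).
            p = min(max(p, eps), 1 - eps)
            q = min(max(q, eps), 1 - eps)
            return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

        def kl_ucb_index(mean, pulls, t, iters=50):
            # Largest q >= mean with pulls * KL(mean, q) <= log(t), found by bisection.
            lo, hi = mean, 1.0
            for _ in range(iters):
                mid = (lo + hi) / 2
                if pulls * kl_bernoulli(mean, mid) <= np.log(t):
                    lo = mid
                else:
                    hi = mid
            return lo

        # Toy run on Bernoulli arms whose means vary smoothly over the arm index.
        rng = np.random.default_rng(1)
        arm_means = np.array([0.3, 0.45, 0.6, 0.55, 0.4])
        pulls = np.ones(len(arm_means))
        rewards = rng.binomial(1, arm_means).astype(float)   # pull each arm once
        for t in range(len(arm_means) + 1, 2000):
            idx = [kl_ucb_index(rewards[a] / pulls[a], pulls[a], t)
                   for a in range(len(arm_means))]
            a = int(np.argmax(idx))
            rewards[a] += rng.binomial(1, arm_means[a])
            pulls[a] += 1
        print("pulls per arm:", pulls.astype(int))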

  • 255. Maire, Florian
    et al.
    Douc, Randal
    Olsson, Jimmy
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    COMPARISON OF ASYMPTOTIC VARIANCES OF INHOMOGENEOUS MARKOV CHAINS WITH APPLICATION TO MARKOV CHAIN MONTE CARLO METHODS, 2014. In: Annals of Statistics, ISSN 0090-5364, E-ISSN 2168-8966, Vol. 42, no 4, p. 1483-1510. Article in journal (Refereed)
    Abstract [en]

    In this paper, we study the asymptotic variance of sample path averages for inhomogeneous Markov chains that evolve alternatingly according to two different π-reversible Markov transition kernels P and Q. More specifically, our main result allows us to compare directly the asymptotic variances of two inhomogeneous Markov chains associated with different kernels P_i and Q_i, i ∈ {0, 1}, as soon as the kernels of each pair (P_0, P_1) and (Q_0, Q_1) can be ordered in the sense of lag-one autocovariance. As an important application, we use this result for comparing different data-augmentation-type Metropolis-Hastings algorithms. In particular, we compare some pseudo-marginal algorithms and propose a novel exact algorithm, referred to as the random refreshment algorithm, which is more efficient, in terms of asymptotic variance, than the Grouped Independence Metropolis-Hastings algorithm and has a computational complexity that does not exceed that of the Monte Carlo Within Metropolis algorithm.

  • 256.
    Malgrat, Maxime
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Pricing of a “worst of” option using a Copula method, 2013. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In this thesis, we use a Copula Method in order to price basket options and especially “worst of” options. The dependence structure of the underlying assets will be modeled using different families of copulas. The copula parameters are estimated via the Maximum Likelihood Method from a sample of observed daily returns.

    The Monte Carlo method is then used to generate daily returns of the underlying assets from the fitted copula.

    Two baskets are priced: one composed of two correlated assets and one composed of two uncorrelated assets. The obtained prices are then compared with the price obtained using the Pricing Partners software.
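
    An illustrative sketch of the general approach described above, under assumptions not taken from the thesis: a Gaussian copula fitted to two synthetic return series, empirical marginals, and Monte Carlo pricing of a worst-of call. All figures (returns, strike, horizon) are invented.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(2)

        # Hypothetical daily log-returns of the two underlying assets (stand-ins for data).
        r1 = rng.normal(0.0003, 0.010, 500)
        r2 = 0.6 * r1 + rng.normal(0.0002, 0.008, 500)

        # Fit a Gaussian copula: map returns to uniforms by ranks, then to normal scores.
        def to_uniform(x):
            return (np.argsort(np.argsort(x)) + 0.5) / len(x)

        scores = norm.ppf(np.column_stack([to_uniform(r1), to_uniform(r2)]))
        rho = np.corrcoef(scores.T)[0, 1]          # estimated copula correlation

        # Monte Carlo pricing: sample the copula, map uniforms back through the
        # empirical marginal quantiles, build price paths and average the payoff.
        n_paths, horizon = 10000, 60               # 60 trading days
        L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
        S0, strike = np.array([100.0, 100.0]), 95.0
        payoffs = np.empty(n_paths)
        for p in range(n_paths):
            u = norm.cdf(rng.standard_normal((horizon, 2)) @ L.T)
            sim = np.column_stack([np.quantile(r1, u[:, 0]), np.quantile(r2, u[:, 1])])
            ST = S0 * np.exp(sim.sum(axis=0))
            payoffs[p] = max(ST.min() - strike, 0.0)    # "worst of" call payoff
        print("worst-of call price (zero rate):", round(payoffs.mean(), 3))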

  • 257.
    Malmberg, Emilie
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Sjöberg, Jonas
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Förklarande faktorer bakom statsobligationsspread mellan USA och Tyskland [Explanatory factors behind the government bond spread between the USA and Germany], 2014. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

     

    This bachelor’s thesis in Mathematical Statistics and Industrial Economics aims to determine explanatory variables of the yield spread between U.S. and German government bonds. The bonds used in this thesis have maturities of five and ten years. To accomplish the task at hand, a multiple linear regression model is used. Regression models are commonly used to describe government bond spreads, and this bachelor’s thesis aims to create a basis for further modeling and contribute to the improvement of existing models. The problem formulation and course of action have been developed in cooperation with a Swedish bank, not named for reasons of confidentiality. Two main parts constitute this bachelor’s thesis. The Industrial Economics part investigates which macroeconomic factors are of interest in order to create the model. The economics provide, in this case, the statistical context, which emphasizes the importance of this part. For the mathematical part of the thesis, a multiple linear regression and related statistical tests are performed on the chosen variables. The results of these tests indicate that the policy rate spread between the countries is the most significant variable and in itself describes the government bond spread quite well. However, the policy rate does not seem to describe the bond spread as well over the last five years. This suggests that the importance of the policy spread variable is diminishing, while the importance of other factors is increasing.
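
    A minimal sketch of the kind of multiple linear regression described above, on synthetic data; the regressors (policy rate spread, inflation differential, a volatility index) are hypothetical stand-ins, not the thesis's actual variable set.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 200   # synthetic weekly observations

        # Hypothetical explanatory variables for the US-German yield spread.
        policy_spread = rng.normal(0.5, 0.3, n)     # policy rate spread
        inflation_diff = rng.normal(0.2, 0.2, n)    # inflation differential
        vix = rng.normal(20.0, 5.0, n)              # equity volatility index
        bond_spread = (0.8 * policy_spread + 0.1 * inflation_diff
                       + 0.01 * vix + rng.normal(0, 0.1, n))

        X = sm.add_constant(np.column_stack([policy_spread, inflation_diff, vix]))
        model = sm.OLS(bond_spread, X).fit()
        print(model.summary())    # t-tests indicate which regressors are significant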

  • 258.
    Martinsson Engshagen, Jan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Nothing is normal in finance!: On Tail Correlations and Robust Higher Order Moments in Normal Portfolio Frameworks, 2012. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This thesis project is divided in two parts. The first part examines the possibility that correlation matrix estimates based on an outlier sample would contain information about extreme events. According to my findings, such methods do not perform better than simple shrinkage methods where robust shrinkage targets are used. The method tested is especially outperformed when it comes to the extreme events, where a shrinkage of the correlation matrix towards the identity matrix seems to give the best result.

    The second part is about valuation of skewness in marginal distributions and the penalizing of heavy tails. I argue that it is reasonable to use a degrees of freedom parameter instead of kurtosis, and a certain regression parameter that I develop instead of skewness, due to robustness issues. When minimizing the one-period drawdown is our target, the "value" of skewness seems to have a linear relationship with expected returns. Re-valuing expected returns in terms of skewness in the standard Markowitz framework will tend to lower expected shortfall (ES), increase skewness and lower the realized portfolio variance. Penalizing heavy tails will in most cases similarly lower ES, kurtosis and the realized portfolio variance. The results indicate that the parameters representing higher order moments in some way characterize the assets and also reflect their future behavior. These properties can be used in a simple optimization framework and seem to have a positive impact even on portfolio level.

  • 259. Maruotti, Antonello
    et al.
    Rydén, Tobias
    Lund University.
    A semiparametric approach to hidden Markov models under longitudinal observations, 2009. In: Statistics and Computing, ISSN 0960-3174, E-ISSN 1573-1375, Vol. 19, no 4, p. 381-393. Article in journal (Refereed)
    Abstract [en]

    We propose a hidden Markov model for longitudinal count data where sources of unobserved heterogeneity arise, making data overdispersed. The observed process, conditionally on the hidden states, is assumed to follow an inhomogeneous Poisson kernel, where the unobserved heterogeneity is modeled in a generalized linear model (GLM) framework by adding individual-specific random effects in the link function. Due to the complexity of the likelihood within the GLM framework, model parameters may be estimated by numerical maximization of the log-likelihood function or by simulation methods; we propose a more flexible approach based on the Expectation Maximization (EM) algorithm. Parameter estimation is carried out using a non-parametric maximum likelihood (NPML) approach in a finite mixture context. Simulation results and two empirical examples are provided.

  • 260.
    Mazhar, Othmane
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Automatic Control.
    Rojas, Cristian R.
    KTH, School of Electrical Engineering and Computer Science (EECS), Automatic Control.
    Fischione, Carlo
    KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre.
    Hesamzadeh, Mohammad Reza
    KTH, School of Electrical Engineering and Computer Science (EECS), Electric Power and Energy Systems.
    Bayesian model selection for change point detection and clustering, 2018. In: 35th International Conference on Machine Learning, ICML 2018, International Machine Learning Society (IMLS), 2018, p. 5497-5520. Conference paper (Refereed)
    Abstract [en]

    We address a generalization of change point detection with the purpose of detecting the change locations and the levels of clusters of a piecewise constant signal. Our approach is to model it as a nonparametric penalized least square model selection on a family of models indexed over the collection of partitions of the design points and propose a computationally efficient algorithm to approximately solve it. Statistically, minimizing such a penalized criterion yields an approximation to the maximum a-posteriori probability (MAP) estimator. The criterion is then analyzed and an oracle inequality is derived using a Gaussian concentration inequality. The oracle inequality is used to derive on one hand conditions for consistency and on the other hand an adaptive upper bound on the expected square risk of the estimator, which statistically motivates our approximation. Finally, we apply our algorithm to simulated data to experimentally validate the statistical guarantees and illustrate its behavior.

  • 261.
    Mhitarean, Ecaterina
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Marketing Mix Modelling from the multiple regression perspective, 2017. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The optimal allocation of the marketing budget has become a difficult issue that each company is facing. With the appearance of new marketing techniques, such as online advertising and social media advertising, the complexity of data has increased, making this problem even more challenging. Statistical tools for explanatory and predictive modelling have commonly been used to tackle the problem of budget allocation. Marketing Mix Modelling involves the use of a range of statistical methods which are suitable for modelling the variable of interest (in this thesis it is sales) in terms of advertising strategies and external variables, with the aim to construct an optimal combination of marketing strategies that would maximize the profit.

    The purpose of this thesis is to investigate a number of regression-based model building strategies, with the focus on advanced regularization methods of linear regression, with the analysis of advantages and disadvantages of each method. Several crucial problems that modern marketing mix modelling is facing are discussed in the thesis. These include the choice of the most appropriate functional form that describes the relationship between the set of explanatory variables and the response, modelling the dynamical structure of marketing environment by choosing the optimal decays for each marketing advertising strategy, evaluating the seasonality effects and collinearity of marketing instruments.

    To efficiently tackle two common challenges when dealing with marketing data, which are multicollinearity and selection of informative variables, regularization methods are exploited. In particular, the performance accuracy of ridge regression, the lasso, the naive elastic net and elastic net is compared using cross-validation approach for the selection of tuning parameters. Specific practical recommendations for modelling and analyzing Nepa marketing data are provided.
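
    A small illustration of the regularization methods named above (ridge, lasso, elastic net) with cross-validated tuning parameters, on synthetic collinear data; it is not the thesis's analysis and the Nepa data are not used.

        import numpy as np
        from sklearn.linear_model import RidgeCV, LassoCV, ElasticNetCV

        rng = np.random.default_rng(4)

        # Synthetic marketing-mix data: sales driven by a few of many correlated
        # media variables (stand-ins for the real data analysed in the thesis).
        n, p = 150, 20
        X = rng.normal(size=(n, p))
        X[:, 1] = 0.9 * X[:, 0] + 0.1 * rng.normal(size=n)   # collinear channels
        beta = np.zeros(p); beta[[0, 3, 7]] = [2.0, -1.0, 0.5]
        y = X @ beta + rng.normal(0, 1.0, n)

        # Regularized regressions with tuning parameters chosen by cross-validation.
        models = {
            "ridge": RidgeCV(alphas=np.logspace(-3, 3, 25)).fit(X, y),
            "lasso": LassoCV(cv=5).fit(X, y),
            "elastic net": ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8], cv=5).fit(X, y),
        }
        for name, m in models.items():
            print(f"{name:12s} non-zero coefficients: {int(np.sum(np.abs(m.coef_) > 1e-6))}")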

  • 262. Millán, P.
    et al.
    Vivas, C.
    Fischione, Carlo
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Distributed event-based observers for LTI systems, 2015. In: Asynchronous Control for Networked Systems, Springer Publishing Company, 2015, p. 181-191. Chapter in book (Other academic)
    Abstract [en]

    This chapter is concerned with the networked distributed estimation problem. A set of agents (observers) are assumed to be estimating the state of a large-scale process. Each of them must provide a reliable estimate of the state of the plant, but each has access only to some of the plant outputs. Local observability is not assumed, so the agents need to communicate and collaborate to obtain their estimates. This chapter proposes a structure for the observers, which merges local Luenberger-like estimators with consensus matrices.

  • 263.
    Molavipour, Sina
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre.
    Bassi, German
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre.
    Skoglund, Mikael
    KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Testing for Directed Information Graphs, 2017. In: 2017 55th Annual Allerton Conference on Communication, Control, and Computing (Allerton), IEEE, 2017, p. 212-219. Conference paper (Refereed)
    Abstract [en]

    In this paper, we study a hypothesis test to determine the underlying directed graph structure of nodes in a network, where the nodes represent random processes and the direction of the links indicate a causal relationship between said processes. Specifically, a k-th order Markov structure is considered for them, and the chosen metric to determine a connection between nodes is the directed information. The hypothesis test is based on the empirically calculated transition probabilities which are used to estimate the directed information. For a single edge, it is proven that the detection probability can be chosen arbitrarily close to one, while the false alarm probability remains negligible. When the test is performed on the whole graph, we derive bounds for the false alarm and detection probabilities, which show that the test is asymptotically optimal by properly setting the threshold test and using a large number of samples. Furthermore, we study how the convergence of the measures relies on the existence of links in the true graph.

  • 264.
    Mollaret, Sébastian
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Collateral choice option valuation, 2015. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    A bank borrowing money has to post securities to the lender; these securities are called collateral. Different kinds of collateral can be posted, like cash in different currencies or a stock portfolio, depending on the terms of the contract, which is called a Credit Support Annex (CSA). These contracts specify eligible collateral, interest rate, frequency of collateral posting, minimum transfer amounts, etc. This guarantee reduces the counterparty risk associated with this type of transaction.

    If a CSA allows for posting cash in different currencies as collateral, then the party posting collateral can, now and at each future point in time, choose which currency to post. This choice leads to optionality that needs to be accounted for when valuing even the most basic of derivatives such as forwards or swaps.

    In this thesis, we deal with the valuation of embedded optionality in collateral contracts. We consider the case when collateral can be posted in two different currencies, which seems sufficient since collateral contracts are soon going to be simplified.

    This study is based on the conditional independence approach proposed by Piterbarg [8]. This method is compared to both Monte-Carlo simulation and the finite-difference method.

    A practical application is finally presented with the example of a contract between Natixis and Barclays.

     

  • 265.
    Monin Nylund, Jean-Alexander
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Semi-Markov modelling in a Gibbs sampling algorithm for NIALM, 2014. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Residential households in the EU are estimated to have a savings potential of around 27% [1]. The question yet remains on how to realize this savings potential. Non-Intrusive Appliance Load Monitoring (NIALM) aims to disaggregate the combination of household appliance energy signals with only measurements of the total household power load.

    The core of this thesis has been the implementation of an extension to a Gibbs sampling model with Hidden Markov Models for energy disaggregation. The goal has been to improve overall performance, by including the duration times of electrical appliances in the probabilistic model.

    The final algorithm was evaluated in comparison to the base algorithm, but the results remained at best inconclusive, due to the model's inherent limitations.

    The work was performed at the Swedish company Watty. Watty develops the first energy data analytic tool that can automate the energy efficiency process in buildings.

  • 266.
    Mumm, Lennart
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Reject Inference in Online Purchases, 2012. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    As accurately as possible, creditors wish to determine if a potential debtor will repay the borrowed sum. To achieve this, mathematical models known as credit scorecards, quantifying the risk of default, are used. In this study it is investigated whether the scorecard can be improved by using reject inference and thereby including the characteristics of the rejected population when refining the scorecard. The reject inference method used is parcelling. Logistic regression is used to estimate the probability of default based on applicant characteristics. Two models, one with and one without reject inference, are compared using the Gini coefficient and estimated profitability. The results show that, when comparing the two models, the model with reject inference both has a slightly higher Gini coefficient and shows an increase in profitability. Thus, this study suggests that reject inference does improve the predictive power of the scorecard, but in order to verify the results additional testing on a larger calibration set is needed.
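
    A rough sketch of the workflow described above, on synthetic data: a scorecard fitted on accepted applicants, a simple probabilistic variant of parcelling for the rejects, and a Gini comparison. The acceptance rule and data are invented, and the parcelling variant is a simplification of the thesis's method.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(5)

        # Synthetic applicant data: two characteristics and a default indicator.
        n = 5000
        X = rng.normal(size=(n, 2))
        p_default = 1 / (1 + np.exp(-(-1.0 + 1.2 * X[:, 0] - 0.8 * X[:, 1])))
        y = rng.binomial(1, p_default)
        accepted = X[:, 0] < 0.5            # crude prior acceptance rule

        # Scorecard 1: logistic regression fitted on accepted applicants only.
        m1 = LogisticRegression().fit(X[accepted], y[accepted])

        # Scorecard 2: reject inference - score the rejects, draw good/bad labels
        # from the predicted default probabilities (a simple parcelling-like step),
        # and refit on the combined population.
        p_rej = m1.predict_proba(X[~accepted])[:, 1]
        y_rej = rng.binomial(1, p_rej)
        X_all = np.vstack([X[accepted], X[~accepted]])
        y_all = np.concatenate([y[accepted], y_rej])
        m2 = LogisticRegression().fit(X_all, y_all)

        # Compare Gini coefficients (2 * AUC - 1) on the full population.
        for name, m in [("accepted only", m1), ("reject inference", m2)]:
            gini = 2 * roc_auc_score(y, m.predict_proba(X)[:, 1]) - 1
            print(name, "Gini:", round(gini, 3))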

  • 267. Munkhammar, J.
    et al.
    Widén, J.
    Grahn, Pia
    KTH, School of Electrical Engineering (EES), Electric Power Systems.
    Rydén, J.
    A Bernoulli distribution model for plug-in electric vehicle charging based on time-use data for driving patterns, 2014. In: 2014 IEEE International Electric Vehicle Conference, IEVC 2014, IEEE conference proceedings, 2014. Conference paper (Refereed)
    Abstract [en]

    This paper presents a Bernoulli distribution model for plug-in electric vehicle (PEV) charging based on high resolution activity data for Swedish driving patterns. Based on the activity 'driving vehicle' from a time diary study, a Monte Carlo simulation is made of PEV state of charge, which is then condensed down to Bernoulli distributions representing charging for each hour during weekdays and weekend days. These distributions are then used as a basis for simulations of PEV charging patterns. Results regarding charging patterns for a number of different PEV parameters are shown along with a comparison with results from a different stochastic model for PEV charging. A convergence test for Monte Carlo simulations of the distributions is also provided. In addition to this, we show that multiple PEV charging patterns are represented by Binomial distributions via convolution of Bernoulli distributions. Also, the distribution for aggregate charging of many PEVs is shown to be normally distributed. Finally, a few remarks regarding the applicability of the model are given along with a discussion on potential extensions.
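
    To illustrate the Bernoulli/Binomial structure mentioned above (not the paper's calibrated model), the sketch below uses invented hourly charging probabilities and compares the simulated aggregate load with its normal approximation.

        import numpy as np

        rng = np.random.default_rng(6)

        # Hypothetical hourly charging probabilities for one PEV on a weekday
        # (stand-ins for Bernoulli parameters estimated from time-use data).
        p_hour = np.array([0.05, 0.03, 0.02, 0.02, 0.02, 0.03, 0.05, 0.10,
                           0.08, 0.06, 0.05, 0.05, 0.06, 0.06, 0.07, 0.10,
                           0.20, 0.35, 0.40, 0.35, 0.25, 0.15, 0.10, 0.07])
        n_vehicles, charge_kw = 1000, 3.7

        # Vehicles charging in each hour: a sum of independent Bernoulli variables,
        # i.e. Binomial(n_vehicles, p_hour), as noted in the abstract.
        charging = rng.binomial(n_vehicles, p_hour)
        load_kw = charging * charge_kw

        # Normal approximation of the aggregate load, for comparison.
        mean = n_vehicles * p_hour * charge_kw
        std = np.sqrt(n_vehicles * p_hour * (1 - p_hour)) * charge_kw
        for h in (8, 18):
            print(f"hour {h}: simulated {load_kw[h]:.0f} kW, "
                  f"normal approx {mean[h]:.0f} +/- {std[h]:.0f} kW")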

  • 268.
    Murase, Takeo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Interest Rate Risk – Using Benchmark Shifts in a Multi Hierarchy Paradigm, 2013. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This master thesis investigates the generic benchmark approach to measuring interest rate risk. First, the background and market situation are described, followed by an outline of the concept and meaning of measuring interest rate risk with generic benchmarks. Finally, a single yield curve in an arbitrary currency is analyzed in the cases where a linear interpolation and a cubic interpolation technique are utilized. It is shown that in the single yield curve setting with linear or cubic interpolation, the problem of finding interest rate scenarios can be formulated as a convex optimization problem, implying properties such as convexity and monotonicity. The analysis also sheds light on the differences between the linear and cubic interpolation techniques with respect to which scenarios are generated, and on how to solve for the scenarios generated by the views imposed on the generic benchmark instruments. Further research on the topic of the generic benchmark approach that would advance the understanding of the model is suggested at the end of the paper. At this stage, however, using generic benchmark instruments for measuring interest rate risk seems to be a consistent and computationally viable option which not only measures the interest rate risk exposure but also provides guidance on how to act in order to manage interest rate risk in a multi hierarchy paradigm.

  • 269.
    Muratov, Anton
    et al.
    KTH, School of Electrical Engineering (EES).
    Zuyev, Sergei
    Neighbour-dependent point shifts and random exchange models: Invariance and attractors, 2017. In: Bernoulli, ISSN 1350-7265, E-ISSN 1573-9759, Vol. 23, no 1, p. 539-551. Article in journal (Refereed)
    Abstract [en]

    Consider a partition of the real line into intervals by the points of a stationary renewal point process. Subdivide the intervals in proportions given by i.i.d. random variables with distribution G supported by [0, 1]. We ask ourselves for what interval length distribution F and what division distribution G, the subdivision points themselves form a renewal process with the same F? An evident case is that of degenerate F and G. As we show, the only other possibility is when F is Gamma and G is Beta with related parameters. In particular, the process of division points of a Poisson process is again Poisson, if the division distribution is Beta: B(r, 1 - r) for some 0 < r < 1. We show a similar behaviour of random exchange models when a countable number of "agents" exchange randomly distributed parts of their "masses" with neighbours. More generally, a Dirichlet distribution arises in these models as a fixed point distribution preserving independence of the masses at each step. We also show that for each G there is a unique attractor, a distribution of the infinite sequence of masses, which is a fixed point of the random exchange and to which iterations of a non-equilibrium configuration of masses converge weakly. In particular, iteratively applying B(r, 1 - r)-divisions to a realisation of any renewal process with finite second moment of F yields a Poisson process of the same intensity in the limit.
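
    A quick numerical check of the invariance stated above, in the Poisson/Beta case only: exponential gaps are subdivided at Beta(r, 1-r) fractions and the gaps between the division points are tested against the same exponential law. The parameters are arbitrary.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        lam, r, n = 1.0, 0.3, 200000

        # Poisson (renewal) process: i.i.d. exponential interval lengths.
        gaps = rng.exponential(1 / lam, n)
        # Subdivide every interval at an independent Beta(r, 1 - r) fraction.
        v = rng.beta(r, 1 - r, n)
        # Intervals between consecutive division points: the right part of one
        # original interval plus the left part of the next one.
        new_gaps = (1 - v[:-1]) * gaps[:-1] + v[1:] * gaps[1:]

        # The division points should again form a Poisson process of intensity lam,
        # so the new gaps should again be Exp(lam).
        print("mean of new gaps (expect ~1):", round(float(new_gaps.mean()), 3))
        print("KS p-value against Exp(1):",
              round(float(stats.kstest(new_gaps, "expon").pvalue), 3))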

  • 270.
    Möllberg, Martin
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    On Calibrating an Extension of the Chen Model, 2015. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    There are many ways of modeling stochastic processes of short-term interest rates. One way is to use one-factor models, which may be easy to use and easy to calibrate. Another way is to use a three-factor model in striving for a higher degree of congruency with real-world market data. Calibrating such models may, however, take much more effort. One of the main questions here is which models fit the data in question better. Another question is whether the use of a three-factor model can result in a better fit compared to one-factor models.

    This is investigated by using the Efficient Method of Moments to calibrate a three-factor model with a Lévy process. This model is an extension of the Chen Model. The calibration is done with Euribor 6-month interest rates and these rates are also used with the Vasicek and Cox-Ingersoll-Ross (CIR) models. These two models are calibrated by using Maximum Likelihood Estimation and they are one-factor models. Chi-square goodness-of-fit tests are also performed for all models.

    The findings indicate that the Vasicek and CIR models fail to describe the stochastic process of the Euribor 6-month rate. However, the result from the goodness-of-fit test of the three-factor model gives support for that model.
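
    As a minimal illustration of the one-factor calibrations mentioned above, the sketch below runs maximum likelihood estimation for the Vasicek model on a simulated path (not the Euribor data used in the thesis), exploiting the exact Gaussian AR(1) transition.

        import numpy as np

        rng = np.random.default_rng(8)

        # Simulate a Vasicek short-rate path (a stand-in for the Euribor series):
        # dr = kappa * (theta - r) dt + sigma dW, with exact AR(1) discretization.
        kappa, theta, sigma, dt, n = 1.5, 0.02, 0.01, 1 / 252, 2000
        b = np.exp(-kappa * dt)
        sd = sigma * np.sqrt((1 - b**2) / (2 * kappa))
        r = np.empty(n); r[0] = 0.02
        for t in range(1, n):
            r[t] = theta + (r[t - 1] - theta) * b + sd * rng.standard_normal()

        # Maximum likelihood: the exact transition is Gaussian AR(1), so the MLE
        # reduces to an ordinary regression of r[t] on r[t-1].
        x, y = r[:-1], r[1:]
        b_hat = np.cov(x, y, bias=True)[0, 1] / np.var(x)
        a_hat = y.mean() - b_hat * x.mean()
        resid_var = np.mean((y - a_hat - b_hat * x) ** 2)
        kappa_hat = -np.log(b_hat) / dt
        theta_hat = a_hat / (1 - b_hat)
        sigma_hat = np.sqrt(resid_var * 2 * kappa_hat / (1 - b_hat**2))
        print("kappa, theta, sigma estimates:",
              round(kappa_hat, 3), round(theta_hat, 4), round(sigma_hat, 4))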

  • 271.
    Nguyen Andersson, Peter
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Liquidity and corporate bond pricing on the Swedish market, 2014. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In this thesis a corporate bond valuation model based on Dick-Nielsen, Feldhütter, and Lando (2011) and Chen, Lesmond, and Wei (2007) is examined. The aim is for the model to price corporate bond spreads and in particular capture the price effects of liquidity as well as credit risk. The valuation model is based on linear regression and is conducted on the Swedish market with data provided by Handelsbanken. Two measures of liquidity are analyzed: the bid-ask spread and the zero-trading days. The investigation shows that the bid-ask spread outperforms the zero-trading days in both significance and robustness. The valuation model with the bid-ask spread explains 59% of the cross-sectional variation and has a standard error of 56 bps in its pricing predictions of corporate spreads. A reduced version of the valuation model is also developed to address simplicity and target a larger group of users. The reduced model is shown to maintain a large proportion of the explanation power while including fewer and simpler variables.

     

  • 272.
    Nilsson, Hans-Erik
    et al.
    KTH, Superseded Departments, Microelectronics and Information Technology, IMIT.
    Martinez, Antonio B.
    KTH, Superseded Departments, Microelectronics and Information Technology, IMIT.
    Hjelm, Mats
    KTH, Superseded Departments, Microelectronics and Information Technology, IMIT.
    Full band Monte Carlo simulation-beyond the semiclassical approach, 2004. In: Monte Carlo Methods and Applications, ISSN 0929-9629, Vol. 10, no 3-4, p. 481-490. Article in journal (Refereed)
    Abstract [en]

    A quantum mechanical extension of the full band ensemble Monte Carlo (MC) simulation method is presented. The new approach goes beyond the traditional semi-classical method generally used in MC simulations of charge transport in semiconductor materials and devices. The extension is necessary in high-field simulations of semiconductor materials with a complex unit cell, such as the hexagonal SiC polytypes or wurtzite GaN. Instead of complex unit cells the approach can also be used for super-cells, in order to understand charge transport at surfaces, around point defects, or in quantum wells.

  • 273.
    Nordling, Torbjörn E. M.
    KTH, School of Electrical Engineering (EES), Automatic Control.
    Robust inference of gene regulatory networks: System properties, variable selection, subnetworks, and design of experiments, 2013. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    In this thesis, inference of biological networks from in vivo data generated by perturbation experiments is considered, i.e. deduction of causal interactions that exist among the observed variables. Knowledge of such regulatory influences is essential in biology.

    A system property–interampatteness–is introduced that explains why the variation in existing gene expression data is concentrated to a few “characteristic modes” or “eigengenes”, and why previously inferred models have a large number of false positive and false negative links. An interampatte system is characterized by strong INTERactions enabling simultaneous AMPlification and ATTEnuation of different signals and we show that perturbation of individual state variables, e.g. genes, typically leads to ill-conditioned data with both characteristic and weak modes. The weak modes are typically dominated by measurement noise due to poor excitation and their existence hampers network reconstruction.

    The excitation problem is solved by iterative design of correlated multi-gene perturbation experiments that counteract the intrinsic signal attenuation of the system. The next perturbation should be designed such that the expected response practically spans an additional dimension of the state space. The proposed design is numerically demonstrated for the Snf1 signalling pathway in S. cerevisiae.

    The impact of unperturbed and unobserved latent state variables, that exist in any real biological system, on the inferred network and required set-up of the experiments for network inference is analysed. Their existence implies that a subnetwork of pseudo-direct causal regulatory influences, accounting for all environmental effects, in general is inferred. In principle, the number of latent states and different paths between the nodes of the network can be estimated, but their identity cannot be determined unless they are observed or perturbed directly.

    Network inference is recognized as a variable/model selection problem and solved by considering all possible models of a specified class that can explain the data at a desired significance level, and by classifying only the links present in all of these models as existing. As shown, these links can be determined without any parameter estimation by reformulating the variable selection problem as a robust rank problem. Solution of the rank problem enables assignment of confidence to individual interactions, without resorting to any approximation or asymptotic results. This is demonstrated by reverse engineering of the synthetic IRMA gene regulatory network from published data. A previously unknown activation of transcription of SWI5 by CBF1 in the IRMA strain of S. cerevisiae is proven to exist, which serves to illustrate that even the accumulated knowledge of well studied genes is incomplete.

  • 274.
    Nykvist, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Topics in importance sampling and derivatives pricing, 2015. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis consists of four papers, presented in Chapters 2-5, on the topics of derivatives pricing and importance sampling for stochastic processes.

    In the first paper a model for the evolution of the forward density of the future value of an asset is proposed. The model is constructed with the aim of being both simple and realistic, and avoiding the need for frequent re-calibration. The model is calibrated to liquid options on the S&P 500 index and an empirical study illustrates that the model provides a good fit to option price data.

    In the last three papers of this thesis efficient importance sampling algorithms are designed for computing rare-event probabilities in the setting of stochastic processes. The algorithms are based on subsolutions of partial differential equations of Hamilton-Jacobi type and the construction of appropriate subsolutions is facilitated by a minmax representation involving the Mañé potential.

    In the second paper, a general framework is provided for the case of one-dimensional diffusions driven by Brownian motion. An analytical formula for the Mañé potential is provided and the performance of the algorithm is analyzed in detail for geometric Brownian motion and for the Cox-Ingersoll-Ross process. Depending on the choice of the parameters of the models, the importance sampling algorithm is either proven to be asymptotically optimal or its good performance is demonstrated in numerical investigations.
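
    A toy version of the idea, not the thesis's subsolution-based construction: for geometric Brownian motion the rare event reduces to a Gaussian tail, and importance sampling by shifting the Brownian endpoint (with explicit likelihood-ratio weights) sharply reduces variance relative to plain Monte Carlo. All parameters are invented.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(9)

        # Rare event for geometric Brownian motion: P(S_T < b) with b far below S_0.
        S0, mu, sigma, T, b = 100.0, 0.05, 0.2, 1.0, 40.0
        n = 100000
        c = (np.log(b / S0) - (mu - 0.5 * sigma**2) * T) / sigma   # event {W_T < c}

        # Plain Monte Carlo: the event is almost never hit.
        w = rng.normal(0.0, np.sqrt(T), n)
        p_mc = np.mean(w < c)

        # Importance sampling: simulate the endpoint under N(c, T) so the event is
        # typical, and reweight by the likelihood ratio of N(0, T) against N(c, T).
        w_is = rng.normal(c, np.sqrt(T), n)
        weights = np.exp(c**2 / (2 * T) - c * w_is / T)
        est = (w_is < c) * weights
        p_is = est.mean()

        print("exact   :", norm.cdf(c / np.sqrt(T)))
        print("plain MC:", p_mc)
        print("IS      :", p_is, "rel. std err:", round(est.std() / (p_is * np.sqrt(n)), 4))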

    The third paper extends the results from the previous paper to the setting of high-dimensional stochastic processes. Using the method of characteristics, the partial differential equation for the Mañé potential is rewritten as a system of ordinary differential equations which can be efficiently solved. The methodology is used to estimate loss probabilities of large portfolios in the Black-Scholes model and in the stochastic volatility model proposed by Heston. Numerical experiments indicate that the algorithm yields significant variance reduction when compared with standard Monte-Carlo simulation.

    In the final paper, an importance sampling algorithm is proposed for computing the probability of voltage collapse in a power system. The power load is modeled by a high-dimensional stochastic process and the sought probability is formulated as an exit problem for the diffusion. A particular challenge is that the boundary of the domain cannot be characterized explicitly. Simulations for two power systems show that the algorithm can be effectively implemented and provides a viable alternative to existing system risk indices.

    The thesis begins with a historical review of mathematical finance, followed by an introduction to importance sampling for stochastic processes.

  • 275.
    Nyquist, Pierre
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Large deviations for weighted empirical measures and processes arising in importance sampling, 2013. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis consists of two papers related to large deviation results associated with importance sampling algorithms. As the need for efficient computational methods increases, so does the need for theoretical analysis of simulation algorithms. This thesis is mainly concerned with algorithms using importance sampling. Both papers make theoretical contributions to the development of a new approach for analyzing efficiency of importance sampling algorithms by means of large deviation theory.

    In the first paper of the thesis, the efficiency of an importance sampling algorithm is studied using a large deviation result for the sequence of weighted empirical measures that represent the output of the algorithm. The main result is stated in terms of the Laplace principle for the weighted empirical measure arising in importance sampling and it can be viewed as a weighted version of Sanov's theorem. This result is used to quantify the performance of an importance sampling algorithm over a collection of subsets of a given target set as well as quantile estimates. The method of proof is the weak convergence approach to large deviations developed by Dupuis and Ellis.

    The second paper studies moderate deviations of the empirical process analogue of the weighted empirical measure arising in importance sampling. Using moderate deviation results for empirical processes the moderate deviation principle is proved for weighted empirical processes that arise in importance sampling. This result can be thought of as the empirical process analogue of the main result of the first paper and the proof is established using standard techniques for empirical processes and Banach space valued random variables. The moderate deviation principle for the importance sampling estimator of the tail of a distribution follows as a corollary. From this, moderate deviation results are established for importance sampling estimators of two risk measures: The quantile process and Expected Shortfall. The results are proved using a delta method for large deviations established by Gao and Zhao (2011) together with more classical results from the theory of large deviations.

    The thesis begins with an informal discussion of stochastic simulation, in particular importance sampling, followed by short mathematical introductions to large deviations and importance sampling.

  • 276.
    Nyquist, Pierre
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics. Brown Univ, USA.
    MODERATE DEVIATION PRINCIPLES FOR IMPORTANCE SAMPLING ESTIMATORS OF RISK MEASURES, 2017. In: Journal of Applied Probability, ISSN 0021-9002, E-ISSN 1475-6072, Vol. 54, no 2, p. 490-506. Article in journal (Refereed)
    Abstract [en]

    Importance sampling has become an important tool for the computation of extreme quantiles and tail-based risk measures. For estimation of such nonlinear functionals of the underlying distribution, the standard efficiency analysis is not necessarily applicable. In this paper we therefore study importance sampling algorithms by considering moderate deviations of the associated weighted empirical processes. Using a delta method for large deviations, combined with classical large deviation techniques, the moderate deviation principle is obtained for importance sampling estimators of two of the most common risk measures: value at risk and expected shortfall.

  • 277.
    Nyquist, Pierre
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Moderate deviation principles for importance sampling estimators of risk measures, 2017. In: Journal of Applied Probability, ISSN 0021-9002, E-ISSN 1475-6072. Article in journal (Refereed)
    Abstract [en]

    Importance sampling has become an important tool for the computation of tail-based risk measures. Since such quantities are often determined mainly by rare events, standard Monte Carlo can be inefficient and importance sampling provides a way to speed up computations. This paper considers moderate deviations for the weighted empirical process, the process analogue of the weighted empirical measure, arising in importance sampling. The moderate deviation principle is established as an extension of existing results. Using a delta method for large deviations established by Gao and Zhao (Ann. Statist., 2011) together with classical large deviation techniques, the moderate deviation principle for the weighted empirical process is extended to functionals of the weighted empirical process which correspond to risk measures. The main results are moderate deviation principles for importance sampling estimators of the quantile function of a distribution and Expected Shortfall.

  • 278.
    Nyquist, Pierre
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    On large deviations and design of efficient importance sampling algorithms, 2014. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis consists of four papers, presented in Chapters 2-5, on the topics of large deviations and stochastic simulation, particularly importance sampling. The four papers make theoretical contributions to the development of a new approach for analyzing efficiency of importance sampling algorithms by means of large deviation theory, and to the design of efficient algorithms using the subsolution approach developed by Dupuis and Wang (2007).

    In the first two papers of the thesis, the random output of an importance sampling algorithm is viewed as a sequence of weighted empirical measures and weighted empirical processes, respectively. The main theoretical results are a Laplace principle for the weighted empirical measures (Paper 1) and a moderate deviation result for the weighted empirical processes (Paper 2). The Laplace principle for weighted empirical measures is used to propose an alternative measure of efficiency based on the associated rate function. The moderate deviation result for weighted empirical processes is an extension of what can be seen as the empirical process version of Sanov's theorem. Together with a delta method for large deviations, established by Gao and Zhao (2011), we show moderate deviation results for importance sampling estimators of the risk measures Value-at-Risk and Expected Shortfall.

    The final two papers of the thesis are concerned with the design of efficient importance sampling algorithms using subsolutions of partial differential equations of Hamilton-Jacobi type (the subsolution approach).

    In Paper 3 we show a min-max representation of viscosity solutions of Hamilton-Jacobi equations. In particular, the representation suggests a general approach for constructing subsolutions to equations associated with terminal value problems and exit problems. Since the design of efficient importance sampling algorithms is connected to such subsolutions, the min-max representation facilitates the construction of efficient algorithms.

    In Paper 4 we consider the problem of constructing efficient importance sampling algorithms for a certain type of Markovian intensity model for credit risk. The min-max representation of Paper 3 is used to construct subsolutions to the associated Hamilton-Jacobi equation and the corresponding importance sampling algorithms are investigated both theoretically and numerically.

    The thesis begins with an informal discussion of stochastic simulation, followed by brief mathematical introductions to large deviations and importance sampling. 

  • 279. Nyström, Kaj
    et al.
    Önskog, Thomas
    Remarks on the Skorohod problem and reflected Lévy driven SDEs in time-dependent domains, 2015. In: Stochastics: An International Journal of Probability and Stochastic Processes, ISSN 1744-2508, E-ISSN 1744-2516, Vol. 87, no 5, p. 747-765. Article in journal (Refereed)
    Abstract [en]

    We consider the Skorohod problem for càdlàg functions, and the subsequent construction of solutions to normally reflected stochastic differential equations driven by Lévy processes, in the setting of non-smooth and time-dependent domains.

  • 280. Nyström, Kaj
    et al.
    Önskog, Thomas
    The Skorohod oblique reflection problem in time-dependent domains, 2010. In: Annals of Probability, ISSN 0091-1798, E-ISSN 2168-894X, Vol. 38, no 6, p. 2170-2223. Article in journal (Refereed)
    Abstract [en]

    The deterministic Skorohod problem plays an important role in the construction and analysis of diffusion processes with reflection. In the form studied here, the multidimensional Skorohod problem was introduced, in time-independent domains, by H. Tanaka [61] and further investigated by P.-L. Lions and A.-S. Sznitman [42] in their celebrated article. Subsequent results of several researchers have resulted in a large literature on the Skorohod problem in time-independent domains. In this article we conduct a thorough study of the multidimensional Skorohod problem in time-dependent domains. In particular, we prove the existence of càdlàg solutions (x, λ) to the Skorohod problem, with oblique reflection, for (D, Γ, w) assuming, in particular, that D is a time-dependent domain (Theorem 1.2). In addition, we prove that if w is continuous, then x is continuous as well (Theorem 1.3). Subsequently, we use the established existence results to construct solutions to stochastic differential equations with oblique reflection (Theorem 1.9) in time-dependent domains. In the process of proving these results we establish a number of estimates for solutions to the Skorohod problem with bounded jumps and, in addition, several results concerning the convergence of sequences of solutions to Skorohod problems in the setting of time-dependent domains.

  • 281. Nyström, Kaj
    et al.
    Önskog, Thomas
    Weak approximation of obliquely reflected diffusions in time-dependent domains, 2010. In: Journal of Computational Mathematics, ISSN 0254-9409, E-ISSN 1991-7139, Vol. 28, no 5, p. 579-605. Article in journal (Refereed)
    Abstract [en]

    In an earlier paper, we proved the existence of solutions to the Skorohod problem with oblique reflection in time-dependent domains and, subsequently, applied this result to the problem of constructing solutions, in time-dependent domains, to stochastic differential equations with oblique reflection. In this paper we use these results to construct weak approximations of solutions to stochastic differential equations with oblique reflection, in time-dependent domains in R^d, by means of a projected Euler scheme. We prove that the constructed method has, as is the case for normal reflection and time-independent domains, an order of convergence equal to 1/2 and we evaluate the method empirically by means of two numerical examples. Furthermore, using a well-known extension of the Feynman-Kac formula, to stochastic differential equations with reflection, our method gives, in addition, a Monte Carlo method for solving second order parabolic partial differential equations with Robin boundary conditions in time-dependent domains.
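
    A heavily simplified sketch of a projected Euler scheme, assuming normal (not oblique) reflection and a one-dimensional time-dependent interval invented for illustration; it conveys the step-then-project structure rather than the paper's construction.

        import numpy as np

        rng = np.random.default_rng(10)

        # Projected Euler scheme for a diffusion normally reflected in the
        # time-dependent interval D(t) = [0, 1 + 0.5 * sin(2 * pi * t)].
        def upper(t):
            return 1.0 + 0.5 * np.sin(2 * np.pi * t)

        def simulate(x0=0.5, T=1.0, n_steps=500, mu=0.1, sigma=0.4):
            dt = T / n_steps
            x = x0
            for k in range(n_steps):
                t_next = (k + 1) * dt
                # Unconstrained Euler step ...
                x = x + mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
                # ... followed by projection onto the domain at the new time.
                x = min(max(x, 0.0), upper(t_next))
            return x

        # Weak approximation: estimate E[X_T] by Monte Carlo over independent paths.
        samples = np.array([simulate() for _ in range(2000)])
        half_width = 1.96 * samples.std() / np.sqrt(len(samples))
        print("E[X_T] estimate:", round(samples.mean(), 4), "+/-", round(half_width, 4))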

  • 282.
    Näsman, P
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Centres, Centre for Transport Studies, CTS. KTH, School of Architecture and the Built Environment (ABE), Transport Science, Transport and Location Analysis.
    Thedéen, T
    Valdeltagande, bebyggelsetyp och röstandelar i storstadsområdena Stockholm, Göteborg och Malmö vid riksdagsvalen 1982, 1985 och 1988 [Voter turnout, settlement type and vote shares in the metropolitan areas of Stockholm, Gothenburg and Malmö in the parliamentary elections of 1982, 1985 and 1988], 1990. Report (Other academic)
  • 283.
    Näsman, Per
    KTH, School of Architecture and the Built Environment (ABE), Centres, Centre for Transport Studies, CTS. KTH, School of Architecture and the Built Environment (ABE), Transport Science, Transport and Location Analysis.
    Jan Gustavsson, mentor och vän: Festskrift med anledning av att Jan Gustavsson, Statistiska Institutionen, går i pension [Jan Gustavsson, mentor and friend: Festschrift on the occasion of Jan Gustavsson, Department of Statistics, retiring], 1998. Other (Other (popular science, discussion, etc.))
  • 284.
    Olsson, Jimmy
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Pavlenko, Tatjana
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Rios, Felix
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Bayesian structure learning in graphical models using sequential Monte Carlo. Manuscript (preprint) (Other academic)
    Abstract [en]

    In this paper we present a family of algorithms, the junction tree expanders, for expanding junction trees in the sense that the number of nodes in the underlying decomposable graph is increased by one. The family of junction tree expanders is equipped with a number of theoretical results including a characterization stating that every junction tree and consequently every decomposable graph can be constructed by iteratively using a junction tree expander. Further, an important feature of a stochastic implementation of a junction tree expander is the Markovian property inherent to the tree propagation dynamics. Using this property, a sequential Monte Carlo algorithm for approximating a probability distribution defined on the space of decomposable graphs is developed with the junction tree expander as a proposal kernel. Specifically, we apply the sequential Monte Carlo algorithm for structure learning in decomposable Gaussian graphical models where the target distribution is a junction tree posterior distribution. In this setting, posterior parametric inference on the underlying decomposable graph is a direct by-product of the suggested methodology; working with the G-Wishart family of conjugate priors, we derive a closed form expression for the Bayesian estimator of the precision matrix of Gaussian graphical models Markov with respect to a decomposable graph. Performance accuracy of the graph and parameter estimators is illustrated through a collection of numerical examples demonstrating the feasibility of the suggested approach in high-dimensional domains.

  • 285.
    Olsson, Jimmy
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Pavlenko, Tatjana
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Rios, Felix Leopoldo
    Bayesian inference in decomposable graphical models using sequential Monte Carlo methodsManuscript (preprint) (Other academic)
    Abstract [en]

    In this study we present a sequential sampling methodology for Bayesian inference in decomposable graphical models. We recast the problem of graph estimation, which in general lacks a natural sequential interpretation, into a sequential setting. Specifically, we propose a recursive Feynman-Kac model which generates a flow of junction tree distributions over a space of increasing dimensions and develop an efficient sequential Monte Carlo sampler. As a key ingredient of the proposal kernel in our sampler we use the Christmas tree algorithm developed in the companion paper Olsson et al. [2017]. We focus on particle MCMC methods, in particular particle Gibbs (PG), as it allows for generating MCMC chains with global moves on an underlying space of decomposable graphs. To further improve the mixing properties of this PG algorithm, we incorporate a systematic refreshment step implemented through direct sampling from a backward kernel. The theoretical properties of the algorithm are investigated, showing in particular that the refreshment step improves the algorithm's performance in terms of asymptotic variance of the estimated distribution. The performance accuracy of the graph estimators is illustrated through a collection of numerical examples demonstrating the feasibility of the suggested approach in both discrete and continuous graphical models.

  • 286.
    Olsson, Jimmy
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Pavlenko, Tatjana
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Rios, Felix Leopoldo
    Generating junction trees of decomposable graphs with the Christmas tree algorithmManuscript (preprint) (Other academic)
    Abstract [en]

    The junction tree representation provides an attractive structural property for organizing a decomposable graph. In this study, we present a novel stochastic algorithm, which we call the Christmas tree algorithm, for building junction trees sequentially by adding one node at a time to the underlying decomposable graph. The algorithm has two important theoretical properties. Firstly, every junction tree and hence every decomposable graph has positive probability of being generated. Secondly, the transition probability from one tree to another has a tractable expression. These two properties, along with the reversed version of the proposed algorithm, are key ingredients in the construction of a sequential Monte Carlo sampling scheme for approximating distributions over decomposable graphs, see Olsson et al. [2016]. As an illustrative example, we specify a distribution over the space of junction trees and estimate the number of decomposable graphs through the normalizing constant.
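    The following sketch illustrates the general idea of growing a decomposable graph one node at a time: attaching each new vertex to a subset of an existing clique keeps the graph chordal. It is a generic construction for illustration only, not the Christmas tree algorithm itself, whose transition probabilities are tractable by design; networkx is assumed to be available.

        # Sequentially grow a decomposable (chordal) graph by attaching each new
        # vertex to a subset of one existing maximal clique, so that the new vertex's
        # neighbourhood is complete and chordality is preserved.
        import networkx as nx
        import random

        def grow_decomposable_graph(n_nodes, rng):
            g = nx.Graph()
            g.add_node(0)
            for v in range(1, n_nodes):
                cliques = list(nx.find_cliques(g))        # maximal cliques of current graph
                clique = rng.choice(cliques)
                k = rng.randint(0, len(clique))           # k = 0 gives an isolated new node
                anchors = rng.sample(clique, k)
                g.add_node(v)
                g.add_edges_from((v, u) for u in anchors) # neighbourhood of v is complete
            return g

        if __name__ == "__main__":
            rng = random.Random(3)
            g = grow_decomposable_graph(12, rng)
            print(nx.is_chordal(g), g.number_of_edges())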

  • 287. Olsson, Jimmy
    et al.
    Rydén, Tobias
    Lund University.
    Asymptotic properties of particle filter-based maximum likelihood estimators for state space models2008In: Stochastic Processes and their Applications, ISSN 0304-4149, E-ISSN 1879-209X, Vol. 118, no 4, p. 649-680Article in journal (Refereed)
    Abstract [en]

    We study the asymptotic performance of approximate maximum likelihood estimators for state space models obtained via sequential Monte Carlo methods. The state space of the latent Markov chain and the parameter space are assumed to be compact. The approximate estimates are computed by, firstly, running possibly dependent particle filters on a fixed grid in the parameter space, yielding a pointwise approximation of the log-likelihood function. Secondly, extensions of this approximation to the whole parameter space are formed by means of piecewise constant functions or B-spline interpolation, and approximate maximum likelihood estimates are obtained through maximization of the resulting functions. In this setting we formulate criteria for how to increase the number of particles and the resolution of the grid in order to produce estimates that are consistent and asymptotically normal.
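    A hedged sketch of the grid-based procedure described above: a bootstrap particle filter is run at each point of a parameter grid to approximate the log-likelihood pointwise, and the grid point with the largest estimate is taken as the approximate maximum likelihood estimate. The linear-Gaussian model, grid, and particle numbers below are illustrative assumptions; the paper additionally treats interpolation between grid points and the joint grid/particle asymptotics.

        # Bootstrap particle filter log-likelihood on a parameter grid, then maximise
        # over the grid. Model: X_t = phi * X_{t-1} + N(0, sigma_x^2), Y_t = X_t + N(0, sigma_y^2).
        import numpy as np

        def pf_loglik(y, phi, sigma_x=1.0, sigma_y=1.0, n_part=500, rng=None):
            rng = rng or np.random.default_rng(0)
            x = rng.normal(0.0, sigma_x, n_part)
            ll = 0.0
            for obs in y:
                x = phi * x + rng.normal(0.0, sigma_x, n_part)       # propagate particles
                logw = -0.5 * ((obs - x) / sigma_y) ** 2             # observation log-weights
                m = logw.max()
                w = np.exp(logw - m)
                ll += m + np.log(w.mean()) - 0.5 * np.log(2 * np.pi * sigma_y**2)
                x = x[rng.choice(n_part, n_part, p=w / w.sum())]     # resample
            return ll

        if __name__ == "__main__":
            rng = np.random.default_rng(42)
            true_phi, T = 0.7, 200
            x, y = 0.0, []
            for _ in range(T):                                       # simulate data
                x = true_phi * x + rng.normal()
                y.append(x + rng.normal())
            grid = np.linspace(0.1, 0.95, 18)
            # common seed across grid points = dependent particle filters
            estimates = [pf_loglik(y, phi, rng=np.random.default_rng(1)) for phi in grid]
            print("approximate MLE of phi:", grid[int(np.argmax(estimates))])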

  • 288. Olsson, Jimmy
    et al.
    Rydén, Tobias
    Lund University.
    Particle filter-based approximate maximum likelihood inference asymptotics in state-space models2007In: ESAIM: Proc. Volume 19, 2007, Conference Oxford sur les méthodes de Monte Carlo séquentielles / [ed] Andrieu, C. and Crisan, D., 2007, p. 115-120Conference paper (Refereed)
    Abstract [en]

    To implement maximum likelihood estimation in state-space models, the log-likelihood function must be approximated. We study such approximations based on particle filters, and in particular conditions for consistency of the corresponding approximate maximum likelihood estimator. Numerical results illustrate the theory.

  • 289.
    Olsson, Jimmy
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Westerborn, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    An efficient particle-based online EM algorithm for general state-space modelsManuscript (preprint) (Other academic)
  • 290.
    Olsson, Jimmy
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Westerborn, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Efficient particle-based online smoothing in general hidden Markov models: the PaRIS algorithmManuscript (preprint) (Other academic)
  • 291.
    Olsson, Jimmy
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Westerborn, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Particle-based adaptive-lag online marginal smoothing in general state-space modelsManuscript (preprint) (Other academic)
  • 292.
    Olsson, Jimmy
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Westerborn, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Particle-based, online estimation of tangent filters with application to parameter estimation in nonlinear state-space modelsManuscript (preprint) (Other academic)
  • 293.
    Olsén, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Logistic regression modelling for STHR analysis2014Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Coronary artery disease (CAD) is a common condition which can impair quality of life and lead to cardiac infarction. Traditional criteria during exercise tests are good but far from perfect, and many patients with inconclusive tests are referred to radiological examinations. By finding better evaluation criteria for the exercise test we could save considerable resources and spare patients unnecessary examinations.

    Computers record large amounts of numerical data during the exercise test. In this retrospective study, 267 patients with inconclusive exercise tests and subsequent radiological examinations were included. The purpose was to use clinical considerations as well as mathematical statistics to find new diagnostic criteria.

    We created a few new parameters and evaluated them together with previously used parameters. For women we found some interesting univariable results where new parameters discriminated better than the formerly used ones. However, the number of females with observed CAD was small (14), which made it impossible to obtain strong significance. For men we fitted a multivariable model, using logistic regression, which discriminates considerably better than the traditional parameters for these patients. The area under the ROC curve was 0.90 (95% CI: 0.83-0.97), which corresponds to excellent to outstanding discrimination in a group initially included because of their inconclusive results.

    If the model can be shown to hold for another population, it could contribute substantially to the diagnostics of this common medical condition.
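    As an illustration of the workflow described in the abstract, the sketch below fits a multivariable logistic regression and evaluates its discrimination with the area under the ROC curve, using synthetic placeholder features rather than the clinical ST/HR parameters of the thesis.

        # Fit a logistic regression on synthetic "exercise-test" features and assess
        # discrimination with the area under the ROC curve.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 267
        X = rng.normal(size=(n, 4))                       # hypothetical exercise-test parameters
        logit = 1.5 * X[:, 0] - 1.0 * X[:, 2] - 0.5
        y = rng.binomial(1, 1 / (1 + np.exp(-logit)))     # simulated CAD outcome

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        model = LogisticRegression().fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"out-of-sample AUC: {auc:.2f}")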

  • 294.
    Orrenius, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Optimal mass transport: a viable alternative to copulas in financial risk modeling?2018Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Copulas are today a common way of describing joint probability distributions when modeling financial risk. The optimal mass transport problem also describes dependence structures, although this use is not well explored. This thesis explores the dependence structures of the entropy regularized optimal mass transport problem. The basic copula properties are replicated for the optimal mass transport problem. Estimation of the problem's parameters is attempted using a maximum likelihood analogy, but is only successful in capturing general tendencies on a grid of the parameters.
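    A minimal sketch of the entropy-regularised optimal mass transport problem referred to above, solved with Sinkhorn iterations: the resulting transport plan is a joint distribution with prescribed marginals, which is what motivates the comparison with copulas. The discretisation, marginals, and regularisation level are illustrative assumptions.

        # Entropy-regularised optimal transport between two discretised marginals via
        # Sinkhorn iterations; the returned plan is a joint distribution whose row and
        # column sums match the given marginals.
        import numpy as np

        def sinkhorn(mu, nu, cost, eps=0.1, n_iter=2000):
            K = np.exp(-cost / eps)
            u = np.ones_like(mu)
            v = np.ones_like(nu)
            for _ in range(n_iter):
                u = mu / (K @ v)
                v = nu / (K.T @ u)
            return u[:, None] * K * v[None, :]

        if __name__ == "__main__":
            grid = np.linspace(-3, 3, 60)
            mu = np.exp(-0.5 * grid**2); mu /= mu.sum()          # discretised N(0, 1) marginal
            nu = np.exp(-0.5 * (grid - 1)**2); nu /= nu.sum()    # discretised N(1, 1) marginal
            cost = (grid[:, None] - grid[None, :]) ** 2          # quadratic transport cost
            plan = sinkhorn(mu, nu, cost)
            print("total mass:", plan.sum())
            print("row-marginal error:", np.abs(plan.sum(axis=1) - mu).max())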

  • 295.
    Osika, Anton
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Statistical analysis of online linguistic sentiment measures with financial applications2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Gavagai is a company that uses different methods to aggregate sentiment towards specific topics from a large stream of real-time published documents. Gavagai wants a procedure for deciding which way of measuring sentiment (sentiment measure) towards a topic is most useful in a given context. This work discusses what criteria are desirable for aggregating sentiment and derives and evaluates procedures to select "optimal" sentiment measures.

    Three novel models for selecting a set of sentiment measures that describe independent attributes of the aggregated data are evaluated. The models can be summarized as: maximizing variance of the last principal component of the data, maximizing the differential entropy of the data and, in the special case of selecting an additional sentiment measure, maximizing the unexplained variance conditional on the previous sentiment measures.

    When exogenous time-varying data concerning a topic are available, the data can be used to select the sentiment measure that best explains the data. With this goal in mind, the hypothesis that sentiment data can be used to predict financial volatility and political poll data is tested. The null hypothesis cannot be rejected.

    A framework for aggregating sentiment measures in a mathematically coherent way is summarized in a road map.
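    The sketch below illustrates one of the selection criteria mentioned in the abstract: among candidate subsets of sentiment measures (columns), prefer the subset whose last principal component retains the most variance, i.e. whose sample covariance matrix has the largest smallest eigenvalue. The data and subset size are synthetic assumptions.

        # Pick the subset of measures whose last principal component has the largest
        # variance, i.e. the subset that is "least redundant".
        import itertools
        import numpy as np

        def last_pc_variance(X):
            """Variance along the last principal component = smallest covariance eigenvalue."""
            return np.linalg.eigvalsh(np.cov(X, rowvar=False)).min()

        rng = np.random.default_rng(0)
        base = rng.normal(size=(1000, 3))
        # fourth measure is nearly a copy of the first, hence redundant
        data = np.column_stack([base, base[:, 0] + 0.1 * rng.normal(size=1000)])

        best = max(itertools.combinations(range(data.shape[1]), 3),
                   key=lambda cols: last_pc_variance(data[:, cols]))
        print("most 'independent' subset of 3 measures:", best)   # avoids the redundant pair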


  • 296.
    Owrang, Arash
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Information Science and Engineering. KTH Royal Inst Technol, Dept Informat Sci & Engn, SE-10044 Stockholm, Sweden.;KTH Royal Inst Technol, ACCESS Linnaeus Ctr, SE-10044 Stockholm, Sweden..
    Jansson, Magnus
    KTH, School of Electrical Engineering and Computer Science (EECS), Information Science and Engineering.
    A Model Selection Criterion for High-Dimensional Linear Regression2018In: IEEE Transactions on Signal Processing, ISSN 1053-587X, E-ISSN 1941-0476, Vol. 66, no 13, p. 3436-3446Article in journal (Refereed)
    Abstract [en]

    Statistical model selection is a great challenge when the number of accessible measurements is much smaller than the dimension of the parameter space. We study the problem of model selection in the context of subset selection for high-dimensional linear regressions. Accordingly, we propose a new model selection criterion with the Fisher information that leads to the selection of a parsimonious model from all the combinatorial models up to some maximum level of sparsity. We analyze the performance of our criterion as the number of measurements grows to infinity, as well as when the noise variance tends to zero. In each case, we prove that our proposed criterion gives the true model with a probability approaching one. Additionally, we devise a computationally affordable algorithm to conduct model selection with the proposed criterion in practice. Interestingly, as a side product, our algorithm can provide the ideal regularization parameter for the Lasso estimator such that Lasso selects the true variables. Finally, numerical simulations are included to support our theoretical findings.
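    For illustration, the sketch below performs exhaustive subset selection for a sparse linear regression up to a maximum sparsity level, which is the setting the abstract describes. BIC is used here only as a placeholder score; the paper proposes a different, Fisher-information-based criterion and a computationally cheaper algorithm.

        # Score every support up to sparsity k_max and keep the best one (placeholder
        # criterion: BIC computed from the least-squares fit on each support).
        import itertools
        import numpy as np

        def bic(y, X, support):
            Xs = X[:, support]
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = np.sum((y - Xs @ beta) ** 2)
            n = len(y)
            return n * np.log(rss / n) + len(support) * np.log(n)

        rng = np.random.default_rng(0)
        n, p, k_max = 50, 10, 3
        X = rng.normal(size=(n, p))
        y = 2.0 * X[:, 1] - 1.5 * X[:, 6] + 0.5 * rng.normal(size=n)   # true support {1, 6}

        supports = [s for k in range(1, k_max + 1)
                    for s in itertools.combinations(range(p), k)]
        best = min(supports, key=lambda s: bic(y, X, list(s)))
        print("selected support:", best)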

  • 297.
    Paajanen, Sara
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Model Risk in Economic Capital Models2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    With increasingly complex financial markets, many financial institutions rely on mathematical models to estimate their risk exposure. These models are subject to a relatively unexplored risk type known as model risk. This study aims to quantify the model risk associated with the top-down aggregation of different risk types when computing the economic capital of a financial institution. The types of aggregation models considered combine the risks of a firm into a final economic capital value through the use of a joint distribution function or some other summation method. Specifically, the variance-covariance method and some common elliptical and Archimedean copulas are considered.

    The scope of this study is limited to estimating the parameter estimation risk and the misspecification risk of these aggregation models. Seven model risk measures are presented that are intended to measure the sensitivity of the models to model risk. These risk measures are based on existing approaches to model risk and also utilize the Rearrangement Algorithm developed by Embrechts et al. (2013).

    The study shows that the variance-covariance method, the Gaussian copula and the Student's t copulas with many degrees of freedom tend to carry the highest parameter estimation risk of the models tested. The Cauchy copula and the Archimedean copulas have significantly lower parameter estimation risk and are thus less sensitive to their input parameters. When testing for misspecification risk the heavy-tailed Cauchy and Gumbel copulas carry the least amount of risk while the variance-covariance method and the lighter tailed copulas are more risky. The study also shows that none of the models considered come close to the theoretical upper bound of the economic capital, putting into question the common assumption that a Gaussian copula with perfect correlation between all of the risk types of a firm will yield a conservative value of the economic capital.
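    As a concrete example of the kind of top-down aggregation model whose model risk is studied above, the sketch below aggregates three risk types with a Gaussian copula: dependent uniforms from the copula are pushed through marginal quantile functions, summed, and the economic capital is read off as a high quantile of the aggregate loss. The marginals, correlation matrix, and confidence level are illustrative assumptions.

        # Gaussian copula aggregation of three risk types by Monte Carlo simulation.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n_sim = 200_000
        corr = np.array([[1.0, 0.5, 0.3],
                         [0.5, 1.0, 0.4],
                         [0.3, 0.4, 1.0]])
        z = rng.multivariate_normal(np.zeros(3), corr, size=n_sim)
        u = stats.norm.cdf(z)                                # Gaussian copula sample

        losses = np.column_stack([
            stats.lognorm(s=0.6, scale=100).ppf(u[:, 0]),    # e.g. market risk
            stats.lognorm(s=0.9, scale=40).ppf(u[:, 1]),     # e.g. credit risk
            stats.gamma(a=2.0, scale=15).ppf(u[:, 2]),       # e.g. operational risk
        ])
        aggregate = losses.sum(axis=1)
        # economic capital as unexpected loss at the 99.5% level
        print("99.5% economic capital:", np.quantile(aggregate, 0.995) - aggregate.mean())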

  • 298.
    Palikuca, Aleksandar
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Seidl, Timo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Predicting High Frequency Exchange Rates using Machine Learning2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis applies a committee of Artificial Neural Networks and Support Vector Machines on high-dimensional, high-frequency EUR/USD exchange rate data in an effort to predict directional market movements on up to a 60 second prediction horizon. The study shows that combining multiple classifiers into a committee produces improved precision relative to the best individual committee members and outperforms previously reported results. A trading simulation implementing the committee classifier yields promising results and highlights the possibility of developing a profitable trading strategy based on the limit order book and historical transactions alone.
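    A minimal sketch of a committee classifier in the spirit of the thesis above: an artificial neural network and a support vector machine combined by soft voting, evaluated by precision. The data are synthetic stand-ins for limit order book features and directional labels.

        # Combine an MLP and an SVM into a soft-voting committee and report precision.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import VotingClassifier
        from sklearn.metrics import precision_score
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=4000, n_features=30, n_informative=8,
                                   random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

        committee = VotingClassifier(
            estimators=[("ann", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                                              random_state=0)),
                        ("svm", SVC(probability=True, random_state=0))],
            voting="soft",                               # average predicted probabilities
        )
        committee.fit(X_tr, y_tr)
        print("committee precision:", precision_score(y_te, committee.predict(X_te)))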

  • 299.
    Palmborg, Lina
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    On Constructing a Market Consistent Economic Scenario Generator2011Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE creditsStudent thesis
  • 300.
    Pavlenko, Tatjana
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Björkström, Anders
    Stockholm Univ, Stockholm, Sweden.
    Tillander, Annika
    Stockholm Univ, Stockholm, Sweden.
    Covariance structure approximation via gLasso in high-dimensional supervised classification2012In: Journal of Applied Statistics, ISSN 0266-4763, E-ISSN 1360-0532, Vol. 39, no 8, p. 1643-1666Article in journal (Refereed)
    Abstract [en]

    Recent work has shown that Lasso-based regularization is very useful for estimating the high-dimensional inverse covariance matrix. A particularly useful scheme is based on penalizing the ℓ1 norm of the off-diagonal elements to encourage sparsity. We embed this type of regularization into high-dimensional classification. A two-stage estimation procedure is proposed which first recovers structural zeros of the inverse covariance matrix and then enforces block sparsity by moving non-zeros closer to the main diagonal. We show that the block-diagonal approximation of the inverse covariance matrix leads to an additive classifier, and demonstrate that accounting for the structure can yield better performance accuracy. The effect of the block size on classification is explored, and a class of asymptotically equivalent structure approximations in a high-dimensional setting is specified. We suggest variable selection at the block level and investigate properties of this procedure in growing-dimension asymptotics. We present a consistency result on the feature selection procedure, establish asymptotic lower and upper bounds for the fraction of separative blocks, and specify constraints under which reliable classification with block-wise feature selection can be performed. The relevance and benefits of the proposed approach are illustrated on both simulated and real data.
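    The sketch below illustrates the first stage of the approach described above: a sparse inverse covariance matrix is estimated with the graphical lasso and plugged into a Gaussian (linear) discriminant rule. The block-diagonal approximation and block-wise feature selection of the paper are not reproduced; the data and class means are synthetic assumptions.

        # Graphical-lasso estimate of the precision matrix, used in a Gaussian
        # discriminant rule on synthetic two-class data.
        import numpy as np
        from sklearn.covariance import GraphicalLasso

        rng = np.random.default_rng(0)
        p, n = 20, 200
        mu0, mu1 = np.zeros(p), np.full(p, 0.4)
        X0 = rng.normal(size=(n, p)) + mu0
        X1 = rng.normal(size=(n, p)) + mu1

        gl = GraphicalLasso(alpha=0.1).fit(np.vstack([X0 - mu0, X1 - mu1]))
        omega = gl.precision_                              # sparse inverse covariance estimate

        def discriminant(x):
            """Gaussian discriminant score: positive -> class 1, negative -> class 0."""
            return (x - 0.5 * (mu0 + mu1)) @ omega @ (mu1 - mu0)

        x_new = rng.normal(size=p) + mu1
        print("classified as class 1:", discriminant(x_new) > 0)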
