301 - 350 of 464
  • 301. Lundström, Niklas L. P.
    et al.
    Önskog, Thomas
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Stochastic and partial differential equations on non-smooth time-dependent domains (2019). In: Stochastic Processes and their Applications, ISSN 0304-4149, E-ISSN 1879-209X, Vol. 129, no 4, p. 1097-1131. Article in journal (Refereed)
    Abstract [en]

    In this article, we consider non-smooth time-dependent domains whose boundary is W^{1,p} in time, equipped with single-valued, smoothly varying directions of reflection at the boundary. In this setting, we first prove existence and uniqueness of strong solutions to stochastic differential equations with oblique reflection. Secondly, we prove, using the theory of viscosity solutions, a comparison principle for fully nonlinear second-order parabolic partial differential equations with oblique derivative boundary conditions. As a consequence, we obtain uniqueness, and, by barrier construction and Perron’s method, we also conclude existence of viscosity solutions. Our results generalize two articles by Dupuis and Ishii to the setting of time-dependent domains.

  • 302.
    Löfdahl, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Stochastic modelling in disability insurance (2013). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis consists of two papers related to the stochastic modelling of disability insurance. In the first paper, we propose a stochastic semi-Markovian framework for disability modelling in a multi-period discrete-time setting. The logistic transforms of disability inception and recovery probabilities are modelled by means of stochastic risk factors and basis functions, using counting processes and generalized linear models. The model for disability inception also takes IBNR claims into consideration. We fit various versions of the models to Swedish disability claims data.

    In the second paper, we consider a large, homogeneous portfolio of life or disability annuity policies. The policies are assumed to be independent conditional on an external stochastic process representing the economic environment. Using a conditional law of large numbers, we establish the connection between risk aggregation and claims reserving for large portfolios. Further, we derive a partial differential equation for moments of present values. Moreover, we show how statistical multi-factor intensity models can be approximated by one-factor models, which allows for solving the PDEs very efficiently. Finally, we give a numerical example where moments of present values of disability annuities are computed using finite difference methods.

  • 303.
    Löfdahl Grelsson, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Topics in life and disability insurance (2015). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis consists of five papers, presented in Chapters A-E, on topics in life and disability insurance. It is naturally divided into two parts, where papers A and B discuss disability rates estimation based on historical claims data, and papers C-E discuss claims reserving, risk management and insurer solvency.

    In Paper A, disability inception and recovery probabilities are modelled in a generalized linear models (GLM) framework. For prediction of future disability rates, it is customary to combine GLMs with time series forecasting techniques into a two-step method involving parameter estimation from historical data and subsequent calibration of a time series model. This approach may in fact lead to both conceptual and numerical problems since any time trend components of the model are incoherently treated as both model parameters and realizations of a stochastic process. In Paper B, we suggest that this general two-step approach can be improved in the following way: First, we assume a stochastic process form for the time trend component. The corresponding transition densities are then incorporated into the likelihood, and the model parameters are estimated using the Expectation-Maximization algorithm.

    In Papers C and D, we consider a large portfolio of life or disability annuity policies. The policies are assumed to be independent conditional on an external stochastic process representing the economic-demographic environment. Using the Conditional Law of Large Numbers (CLLN), we establish the connection between claims reserving and risk aggregation for large portfolios. Moreover, we show how statistical multi-factor intensity models can be approximated by one-factor models, which allows for computing reserves and capital requirements efficiently. Paper C focuses on claims reserving and ultimate risk, whereas the focus of Paper D is on the one-year risks associated with the Solvency II directive.

    In Paper E, we consider claims reserving for life insurance policies with reserve-dependent payments driven by multi-state Markov chains. The associated prospective reserve is formulated as a recursive utility function using the framework of backward stochastic differential equations (BSDE). We show that the prospective reserve satisfies a nonlinear Thiele equation for Markovian BSDEs when the driver is a deterministic function of the reserve and the underlying Markov chain. Aggregation of prospective reserves for large and homogeneous insurance portfolios is considered through mean-field approximations. We show that the corresponding prospective reserve satisfies a BSDE of mean-field type and derive the associated nonlinear Thiele equation.

  • 304.
    Madsen, Christopher
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Clustering of the Stockholm County housing market (2019). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In this thesis, a clustering of the Stockholm county housing market has been performed using different clustering methods. Data has been derived and different geographical constraints have been used. DeSO areas (Demographic statistical areas), developed by SCB, have been used to divide the housing market into smaller regions for which the derived variables have been calculated. Hierarchical clustering methods, SKATER and Gaussian mixture models have been applied. Methods using different kinds of geographical constraints have also been applied in an attempt to create more geographically contiguous clusters. The different methods are then compared with respect to performance and stability. The best performing method is the Gaussian mixture model EII, also known as the K-means algorithm. The most stable method when applied to bootstrapped samples is the ClustGeo method.
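
As a brief illustration of the kind of clustering the abstract describes, the sketch below fits a Gaussian mixture with spherical components (the "EII" parameterization, which behaves like K-means) and compares the resulting partition with K-means on synthetic two-feature data. The features and parameters are assumptions for illustration, not the thesis's DeSO-level variables.

```python
# Sketch: Gaussian mixture clustering vs. K-means on synthetic area data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# synthetic areas: price per m2 (kSEK) and distance to city centre (km)
X = np.vstack([rng.normal([60, 5], [8, 2], (100, 2)),
               rng.normal([35, 20], [6, 5], (100, 2))])

# spherical, equal-volume components ("EII") behave like K-means
gmm = GaussianMixture(n_components=2, covariance_type="spherical",
                      random_state=0).fit(X)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
# agreement of the two partitions, invariant to label permutation
print(adjusted_rand_score(gmm.predict(X), km.labels_))
```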

  • 305. Magnusson, M.
    et al.
    Jonsson, L.
    Villani, M.
    Broman, David
    KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS.
    Sparse Partially Collapsed MCMC for Parallel Inference in Topic Models (2018). In: Journal of Computational And Graphical Statistics, ISSN 1061-8600, E-ISSN 1537-2715, Vol. 27, no 2, p. 449-463. Article in journal (Refereed)
    Abstract [en]

    Topic models, and more specifically the class of latent Dirichlet allocation (LDA), are widely used for probabilistic modeling of text. Markov chain Monte Carlo (MCMC) sampling from the posterior distribution is typically performed using a collapsed Gibbs sampler. We propose a parallel sparse partially collapsed Gibbs sampler and compare its speed and efficiency to state-of-the-art samplers for topic models on five well-known text corpora of differing sizes and properties. In particular, we propose and compare two different strategies for sampling the parameter block with latent topic indicators. The experiments show that the increase in statistical inefficiency from only partial collapsing is smaller than commonly assumed, and can be more than compensated by the speedup from parallelization and sparsity on larger corpora. We also prove that the partially collapsed samplers scale well with the size of the corpus. The proposed algorithm is fast, efficient, exact, and can be used in more modeling situations than the ordinary collapsed sampler. Supplementary materials for this article are available online.
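
For readers unfamiliar with the baseline the paper improves upon, the following is a minimal collapsed Gibbs sampler for LDA in plain numpy. It is a sketch of the standard serial, non-sparse sampler, not the parallel sparse partially collapsed sampler proposed in the paper; the toy corpus and hyperparameters are illustrative assumptions.

```python
# Minimal collapsed Gibbs sampler for LDA (illustrative baseline).
import numpy as np

def lda_collapsed_gibbs(docs, V, K=2, alpha=0.1, beta=0.01, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), K))      # document-topic counts
    nkw = np.zeros((K, V))              # topic-word counts
    nk = np.zeros(K)                    # topic totals
    z = []                              # topic indicator per token
    for d, doc in enumerate(docs):
        zd = rng.integers(K, size=len(doc))
        z.append(zd)
        for w, k in zip(doc, zd):
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]             # remove the token's assignment
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # full conditional p(z = k | rest) of the collapsed sampler
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return ndk, nkw

docs = [[0, 1, 2, 1], [3, 4, 3, 4, 4], [0, 2, 1], [4, 3, 3]]  # token ids
ndk, nkw = lda_collapsed_gibbs(docs, V=5)
print(np.argmax(ndk, axis=1))  # dominant topic per document
```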

  • 306.
    Magureanu, Stefan
    et al.
    KTH, School of Electrical Engineering (EES), Automatic Control.
    Combes, Richard
    Supelec, France.
    Proutiere, Alexandre
    KTH, School of Electrical Engineering (EES), Automatic Control. INRIA, France.
    Lipschitz Bandits: Regret Lower Bounds and Optimal Algorithms (2014). Conference paper (Refereed)
    Abstract [en]

    We consider stochastic multi-armed bandit problems where the expected reward is a Lipschitz function of the arm, and where the set of arms is either discrete or continuous. For discrete Lipschitz bandits, we derive asymptotic problem-specific lower bounds for the regret satisfied by any algorithm, and propose OSLB and CKL-UCB, two algorithms that efficiently exploit the Lipschitz structure of the problem. In fact, we prove that OSLB is asymptotically optimal, as its asymptotic regret matches the lower bound. The regret analysis of our algorithms relies on a new concentration inequality for weighted sums of KL divergences between the empirical distributions of rewards and their true distributions. For continuous Lipschitz bandits, we propose to first discretize the action space, and then apply OSLB or CKL-UCB, algorithms that provably exploit the structure efficiently. This approach is shown, through numerical experiments, to significantly outperform existing algorithms that directly deal with the continuous set of arms. Finally, the results and algorithms are extended to contextual bandits with similarities.
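
The sketch below computes a KL-UCB-style index for Bernoulli rewards by bisection, the kind of KL-divergence-based upper confidence index that algorithms such as OSLB and CKL-UCB build on. It is an illustration of the index computation, not the paper's exact algorithm.

```python
# KL-UCB-style index for a Bernoulli arm, computed by bisection.
import math

def kl_bernoulli(p, q, eps=1e-12):
    p = min(max(p, eps), 1 - eps); q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb_index(mean, n_pulls, t, c=0.0):
    # largest q >= mean with n_pulls * KL(mean, q) <= log t + c log log t
    bound = (math.log(t) + c * math.log(max(math.log(t), 1.0))) / n_pulls
    lo, hi = mean, 1.0
    for _ in range(50):                  # bisection on q
        mid = 0.5 * (lo + hi)
        if kl_bernoulli(mean, mid) <= bound:
            lo = mid
        else:
            hi = mid
    return lo

print(kl_ucb_index(mean=0.4, n_pulls=25, t=1000))
```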

  • 307. Maire, Florian
    et al.
    Douc, Randal
    Olsson, Jimmy
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Comparison of asymptotic variances of inhomogeneous Markov chains with application to Markov chain Monte Carlo methods (2014). In: Annals of Statistics, ISSN 0090-5364, E-ISSN 2168-8966, Vol. 42, no 4, p. 1483-1510. Article in journal (Refereed)
    Abstract [en]

    In this paper, we study the asymptotic variance of sample path averages for inhomogeneous Markov chains that evolve alternatingly according to two different π-reversible Markov transition kernels P and Q. More specifically, our main result allows us to compare directly the asymptotic variances of two inhomogeneous Markov chains associated with different kernels P_i and Q_i, i ∈ {0, 1}, as soon as the kernels of each pair (P_0, P_1) and (Q_0, Q_1) can be ordered in the sense of lag-one autocovariance. As an important application, we use this result for comparing different data-augmentation-type Metropolis-Hastings algorithms. In particular, we compare some pseudo-marginal algorithms and propose a novel exact algorithm, referred to as the random refreshment algorithm, which is more efficient, in terms of asymptotic variance, than the Grouped Independence Metropolis-Hastings algorithm and has a computational complexity that does not exceed that of the Monte Carlo Within Metropolis algorithm.

  • 308.
    Malgrat, Maxime
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Pricing of a “worst of” option using a Copula method (2013). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In this thesis, we use a copula method in order to price basket options and especially “worst of” options. The dependence structure of the underlying assets is modeled using different families of copulas. The copula parameters are estimated via the maximum likelihood method from a sample of observed daily returns.

    The Monte Carlo method is revisited when it comes to generating daily returns of the underlying assets from the fitted copula.

    Two baskets are priced: one composed of two correlated assets and one composed of two uncorrelated assets. The obtained prices are then compared with the prices obtained using the Pricing Partners software.
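
A minimal sketch of the general approach, assuming a Gaussian copula with lognormal (Black-Scholes) margins and illustrative parameters; the thesis also considers other copula families and estimates parameters from observed returns.

```python
# "Worst of" call priced by Monte Carlo with a Gaussian copula.
import numpy as np
from scipy import stats

def worst_of_call_mc(s0, sigma, rho, r=0.02, T=1.0, K=95.0, n=200_000, seed=1):
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n)
    u = stats.norm.cdf(z)          # correlated uniforms: the Gaussian copula
    st = np.empty_like(u)
    for j in range(2):
        # lognormal margin: the cdf/ppf round trip makes the margins
        # explicit, so other marginal distributions could be swapped in
        zj = stats.norm.ppf(u[:, j])
        st[:, j] = s0[j] * np.exp((r - 0.5 * sigma[j] ** 2) * T
                                  + sigma[j] * np.sqrt(T) * zj)
    payoff = np.maximum(st.min(axis=1) - K, 0.0)   # worst-of call payoff
    return np.exp(-r * T) * payoff.mean()

print(worst_of_call_mc(s0=[100, 100], sigma=[0.2, 0.3], rho=0.5))
```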

  • 309.
    Malmberg, Emilie
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Sjöberg, Jonas
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Förklarande faktorer bakom statsobligationsspread mellan USA och Tyskland [Explanatory factors behind the government bond spread between the USA and Germany] (2014). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    This bachelor’s thesis in Mathematical Statistics and Industrial Economics aims to determine explanatory variables for the yield spread between U.S. and German government bonds. The bonds used in this thesis have maturities of five and ten years. To accomplish the task at hand, a multiple linear regression model is used. Regression models are commonly used to describe government bond spreads, and this bachelor’s thesis aims to create a basis for further modeling and contribute to the improvement of existing models. The problem formulation and course of action have been developed in cooperation with a Swedish bank, not named for reasons of confidentiality. Two main parts constitute this bachelor’s thesis. The Industrial Economics part investigates which macroeconomic factors are of interest in order to create the model. The economics provide, in this case, the context for the statistical analysis, which emphasizes the importance of this part. For the mathematical part of the thesis, a multiple linear regression and related statistical tests are performed on the chosen variables. The results of these tests indicate that the policy rate spread between the countries is the most significant variable, and in itself describes the government bond spread quite well. However, the policy rate does not seem to describe the bond spread as well over the last five years. This suggests that the importance of the policy spread variable is diminishing, while the importance of other factors is increasing.
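
A minimal sketch of the modelling step, assuming synthetic data and hypothetical regressor names (a policy rate spread and two other macro factors); the thesis's actual variable set is determined in its Industrial Economics part.

```python
# Multiple linear regression of a yield spread on macro factors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
policy_spread = rng.normal(0.0, 1.0, n)    # policy rate differential
infl_spread = rng.normal(0.0, 0.5, n)      # inflation differential
vix = rng.normal(20.0, 5.0, n)             # risk-sentiment proxy
y = 0.8 * policy_spread + 0.3 * infl_spread + 0.01 * vix \
    + rng.normal(0, 0.2, n)                # synthetic bond spread

X = sm.add_constant(np.column_stack([policy_spread, infl_spread, vix]))
fit = sm.OLS(y, X).fit()
print(fit.summary())                       # t-tests per coefficient
```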

  • 310.
    Martinsson Engshagen, Jan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Nothing is normal in finance!: On Tail Correlations and Robust Higher Order Moments in Normal Portfolio Frameworks (2012). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This thesis project is divided in two parts. The first part examines the possibility that correlation matrix estimates based on an outlier sample would contain information about extreme events. According to my findings, such methods do not perform better than simple shrinkage methods where robust shrinkage targets are used. The method tested is especially outperformed when it comes to the extreme events, where a shrinkage of the correlation matrix towards the identity matrix seems to give the best result.

    The second part is about valuation of skewness in marginal distributions and the penalizing of heavy tails. I argue that it is reasonable to use a degrees-of-freedom parameter instead of kurtosis, and a certain regression parameter that I develop instead of skewness, due to robustness issues. When minimizing the one-period drawdown is our target, the "value" of skewness seems to have a linear relationship with expected returns. Re-valuing expected returns in terms of skewness in the standard Markowitz framework will tend to lower expected shortfall (ES), increase skewness and lower the realized portfolio variance. Penalizing heavy tails will, in most cases, similarly lower ES, kurtosis and realized portfolio variance. The results indicate that the parameters representing higher order moments in some way characterize the assets and also reflect their future behavior. These properties can be used in a simple optimization framework and seem to have a positive impact even at the portfolio level.

  • 311. Maruotti, Antonello
    et al.
    Rydén, Tobias
    Lund University.
    A semiparametric approach to hidden Markov models under longitudinal observations (2009). In: Statistics and computing, ISSN 0960-3174, E-ISSN 1573-1375, Vol. 19, no 4, p. 381-393. Article in journal (Refereed)
    Abstract [en]

    We propose a hidden Markov model for longitudinal count data where sources of unobserved heterogeneity arise, making data overdispersed. The observed process, conditionally on the hidden states, is assumed to follow an inhomogeneous Poisson kernel, where the unobserved heterogeneity is modeled in a generalized linear model (GLM) framework by adding individual-specific random effects in the link function. Due to the complexity of the likelihood within the GLM framework, model parameters may be estimated by numerical maximization of the log-likelihood function or by simulation methods; we propose a more flexible approach based on the Expectation Maximization (EM) algorithm. Parameter estimation is carried out using a non-parametric maximum likelihood (NPML) approach in a finite mixture context. Simulation results and two empirical examples are provided.

  • 312.
    Mattsson, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Constructing Residential Price Property Indices Using Robust and Shrinkage Regression Modelling (2019). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This thesis intends to construct and compare multiple Residential Price Property Indices (RPPI) with the aim to express the price development of houses in Stockholm county from January 2013 to September 2018. The index method used is the hedonic time dummy variable method. Different methods of imputation of missing data are applied, and new variables are derived from the available data in order to develop various regression models. Observations judged as not part of the index's target population are excluded to improve the quality of the training data. The indices are computed by fitting the final model with OLS regression (as a benchmark), Huber regression, Tukey regression, Ridge regression as well as least-angle regression. Lastly, the obtained indices are assessed by analyzing different measures of performance when included in Booli's valuation engine. The main result of this thesis is that a specific regression model is produced and that Huber regression is found to slightly outperform the other methods.
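
A sketch of the hedonic time-dummy index computation with Huber regression, on synthetic data: the index for period t is obtained as the exponential of the t-th time-dummy coefficient relative to a base period. Variable names and parameters are assumptions, not Booli's data.

```python
# Hedonic time-dummy price index fitted with Huber regression.
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(0)
n, n_periods = 500, 12
period = rng.integers(0, n_periods, n)
log_area = np.log(rng.uniform(30, 150, n))
true_index = np.linspace(0.0, 0.15, n_periods)      # 15% drift over a year
log_price = 10 + 1.0 * log_area + true_index[period] + rng.normal(0, 0.1, n)

# design matrix: hedonic characteristic + time dummies (base period dropped)
dummies = np.eye(n_periods)[period][:, 1:]
X = np.column_stack([log_area, dummies])
fit = HuberRegressor().fit(X, log_price)
index = np.r_[1.0, np.exp(fit.coef_[1:])]           # base period = 1.0
print(np.round(index, 3))
```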
  • 313.
    Maupin, Thomas
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Can Bitcoin, and other cryptocurrencies, be modeled effectively with a Markov-Switching approach? (2019). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This research is an attempt at deepening the understanding of hyped cryptocurrencies. A deductive approach is used in which we attempt to estimate the linear dependencies of cryptocurrencies with four different time series models. Investigating linear dependencies of univariate time series offers the reader an understanding of how previous prices of cryptocurrencies affect future prices. The linear interdependencies in a multivariate setting provide an understanding of how, and whether, the cryptocurrency market is correlated. The dataset used consists of the prices between January 1, 2016 and March 31, 2019 of the four cryptocurrency rivals: Bitcoin, Ethereum, Ripple and Litecoin. The modeling is performed using autoregression, fitting on 80% of the data. Thereafter, the models are used to forecast the last 20% of the data in order to test their accuracy. Four types of models are used in this thesis, denoted by the abbreviations AR(p), MSAR(p), VAR(p) and MSVAR(p): AR(p) is an autoregressive model of order p; MSAR(p) is a Markov-Switching autoregressive model of order p; VAR(p) is the multivariate counterpart of AR(p), also known as the vector autoregressive model of order p; finally, MSVAR(p) is a Markov-Switching vector autoregressive model of order p. As cryptocurrencies are said to be very volatile, we hope that the Markov-Switching approach will help to classify the level of volatility into different regimes. Further, we anticipate that the fitted time series for each regime will offer greater accuracy than the regular AR(p) and VAR(p) models. Using scale-dependent error estimators, the thesis concludes that the Markov-Switching approach does in fact improve the performance of the chosen time series models for our cryptocurrencies.
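
A sketch of the model comparison on synthetic returns, assuming statsmodels for the AR and Markov-switching AR fits; the thesis applies the corresponding models to Bitcoin, Ethereum, Ripple and Litecoin prices.

```python
# AR vs. Markov-switching AR on synthetic two-regime returns.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
# two-regime synthetic returns: calm and volatile periods
states = (rng.random(600) < 0.3).astype(int)
returns = rng.normal(0, np.where(states == 1, 0.06, 0.01))

ar = sm.tsa.AutoReg(returns, lags=2).fit()
msar = sm.tsa.MarkovAutoregression(
    returns, k_regimes=2, order=2, switching_variance=True
).fit()
print(ar.aic, msar.aic)                          # compare in-sample fit
print(msar.smoothed_marginal_probabilities[:5])  # regime probabilities
```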

  • 314.
    Mazhar, Othmane
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Automatic Control.
    Rojas, Cristian R.
    KTH, School of Electrical Engineering and Computer Science (EECS), Automatic Control.
    Fischione, Carlo
    KTH, School of Electrical Engineering and Computer Science (EECS), Network and Systems engineering. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre.
    Hesamzadeh, Mohammad Reza
    KTH, School of Electrical Engineering and Computer Science (EECS), Electric Power and Energy Systems.
    Bayesian model selection for change point detection and clustering (2018). In: 35th International Conference on Machine Learning, ICML 2018, International Machine Learning Society (IMLS), 2018, p. 5497-5520. Conference paper (Refereed)
    Abstract [en]

    We address a generalization of change point detection with the purpose of detecting the change locations and the levels of clusters of a piecewise constant signal. Our approach is to model it as a nonparametric penalized least squares model selection problem on a family of models indexed over the collection of partitions of the design points, and we propose a computationally efficient algorithm to approximately solve it. Statistically, minimizing such a penalized criterion yields an approximation to the maximum a-posteriori probability (MAP) estimator. The criterion is then analyzed and an oracle inequality is derived using a Gaussian concentration inequality. The oracle inequality is used to derive, on the one hand, conditions for consistency and, on the other hand, an adaptive upper bound on the expected square risk of the estimator, which statistically motivates our approximation. Finally, we apply our algorithm to simulated data to experimentally validate the statistical guarantees and illustrate its behavior.
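
To make the model class concrete, here is a minimal dynamic-programming solver for penalized least-squares change-point detection on a piecewise constant signal. It illustrates the penalized criterion being minimized, not the paper's clustering algorithm or its MAP approximation.

```python
# Penalized least-squares change-point detection by dynamic programming.
import numpy as np

def changepoints(y, penalty):
    n = len(y)
    cs, cs2 = np.r_[0.0, np.cumsum(y)], np.r_[0.0, np.cumsum(y ** 2)]

    def seg_cost(i, j):                  # SSE of y[i:j] around its mean
        s, s2, m = cs[j] - cs[i], cs2[j] - cs2[i], j - i
        return s2 - s * s / m

    best = np.zeros(n + 1)
    last = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        cands = [best[i] + seg_cost(i, j) + penalty for i in range(j)]
        last[j], best[j] = int(np.argmin(cands)), np.min(cands)
    bps, j = [], n                       # backtrack segment boundaries
    while j > 0:
        bps.append(j); j = last[j]
    return sorted(bps)[:-1]

y = np.r_[np.zeros(50), 3 * np.ones(50), np.ones(50)] + \
    np.random.default_rng(0).normal(0, 0.3, 150)
print(changepoints(y, penalty=5.0))      # expect breaks near 50 and 100
```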

  • 315.
    Mhitarean, Ecaterina
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Marketing Mix Modelling from the multiple regression perspective (2017). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The optimal allocation of the marketing budget has become a difficult issue that each company is facing. With the appearance of new marketing techniques, such as online advertising and social media advertising, the complexity of data has increased, making this problem even more challenging. Statistical tools for explanatory and predictive modelling have commonly been used to tackle the problem of budget allocation. Marketing Mix Modelling involves the use of a range of statistical methods which are suitable for modelling the variable of interest (in this thesis it is sales) in terms of advertising strategies and external variables, with the aim to construct an optimal combination of marketing strategies that would maximize the profit.

    The purpose of this thesis is to investigate a number of regression-based model building strategies, with the focus on advanced regularization methods of linear regression, with the analysis of advantages and disadvantages of each method. Several crucial problems that modern marketing mix modelling is facing are discussed in the thesis. These include the choice of the most appropriate functional form that describes the relationship between the set of explanatory variables and the response, modelling the dynamical structure of marketing environment by choosing the optimal decays for each marketing advertising strategy, evaluating the seasonality effects and collinearity of marketing instruments.

    To efficiently tackle two common challenges when dealing with marketing data, multicollinearity and the selection of informative variables, regularization methods are exploited. In particular, the performance of ridge regression, the lasso, the naive elastic net and the elastic net is compared using a cross-validation approach for the selection of tuning parameters. Specific practical recommendations for modelling and analyzing Nepa marketing data are provided.
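
A minimal sketch of a regularized marketing-mix fit: a geometric adstock transform to capture advertising carry-over, followed by elastic net with cross-validated tuning parameters. Data, decay rates and channel names are assumptions for illustration.

```python
# Geometric adstock + elastic-net regression with cross-validation.
import numpy as np
from sklearn.linear_model import ElasticNetCV

def adstock(x, decay):
    out = np.zeros_like(x, dtype=float)
    carry = 0.0
    for t, xt in enumerate(x):           # carry-over advertising effect
        carry = xt + decay * carry
        out[t] = carry
    return out

rng = np.random.default_rng(0)
T = 200
tv, online = rng.gamma(2.0, 10.0, T), rng.gamma(2.0, 5.0, T)
sales = 50 + 0.4 * adstock(tv, 0.6) + 0.7 * adstock(online, 0.3) \
        + rng.normal(0, 5, T)

X = np.column_stack([adstock(tv, 0.6), adstock(online, 0.3)])
model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5).fit(X, sales)
print(model.coef_, model.alpha_, model.l1_ratio_)
```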

  • 316. Millán, P.
    et al.
    Vivas, C.
    Fischione, Carlo
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Distributed event-based observers for LTI systems (2015). In: Asynchronous Control for Networked Systems, Springer Publishing Company, 2015, p. 181-191. Chapter in book (Other academic)
    Abstract [en]

    This chapter is concerned with the networked distributed estimation problem. A set of agents (observers) is assumed to be estimating the state of a large-scale process. Each of them must provide a reliable estimate of the state of the plant, but each has access to only some of the plant outputs. Local observability is not assumed, so the agents need to communicate and collaborate to obtain their estimates. This chapter proposes an observer structure that merges local Luenberger-like estimators with consensus matrices.

  • 317.
    Molavipour, Sina
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre.
    Bassi, German
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre.
    Skoglund, Mikael
    KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Testing for Directed Information Graphs (2017). In: 2017 55th Annual Allerton Conference on Communication, Control, and Computing (Allerton), IEEE, 2017, p. 212-219. Conference paper (Refereed)
    Abstract [en]

    In this paper, we study a hypothesis test to determine the underlying directed graph structure of nodes in a network, where the nodes represent random processes and the direction of the links indicates a causal relationship between said processes. Specifically, a k-th order Markov structure is considered for them, and the chosen metric to determine a connection between nodes is the directed information. The hypothesis test is based on the empirically calculated transition probabilities, which are used to estimate the directed information. For a single edge, it is proven that the detection probability can be chosen arbitrarily close to one, while the false alarm probability remains negligible. When the test is performed on the whole graph, we derive bounds for the false alarm and detection probabilities, which show that the test is asymptotically optimal by properly setting the test threshold and using a large number of samples. Furthermore, we study how the convergence of the measures relies on the existence of links in the true graph.
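
The sketch below estimates the directed information rate I(X → Y) for binary first-order Markov processes from empirical transition counts, the basic quantity such a test thresholds. It is an illustration only; the paper's test also covers k-th order structures and whole graphs.

```python
# Directed information rate estimate for binary order-1 processes.
import numpy as np

def directed_information(x, y):
    # I(X -> Y) rate ~ I(Y_t ; X_{t-1} | Y_{t-1}) for order-1 processes,
    # estimated from joint counts of (y_prev, x_prev, y_t)
    counts = np.zeros((2, 2, 2)) + 1e-9
    for t in range(1, len(x)):
        counts[y[t - 1], x[t - 1], y[t]] += 1
    p = counts / counts.sum()
    di = 0.0
    for yp in (0, 1):
        for xp in (0, 1):
            for yt in (0, 1):
                p_joint = p[yp, xp, yt]
                p_cond = counts[yp, xp, yt] / counts[yp, xp].sum()
                p_marg = counts[yp, :, yt].sum() / counts[yp].sum()
                di += p_joint * np.log(p_cond / p_marg)
    return di

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y = np.r_[0, x[:-1] ^ (rng.random(4999) < 0.1)]   # y copies x with noise
print(directed_information(x, y))                 # clearly positive
```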

  • 318.
    Mollaret, Sébastian
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Collateral choice option valuation (2015). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    A bank borrowing money has to provide securities to the lender, which is called collateral. Different kinds of collateral can be posted, such as cash in different currencies or a stock portfolio, depending on the terms of the contract, which is called a Credit Support Annex (CSA). Such contracts specify eligible collateral, interest rate, frequency of collateral posting, minimum transfer amounts, etc. This guarantee reduces the counterparty risk associated with this type of transaction.

    If a CSA allows for posting cash in different currencies as collateral, then the party posting collateral can, now and at each future point in time, choose which currency to post. This choice leads to optionality that needs to be accounted for when valuing even the most basic of derivatives such as forwards or swaps.

    In this thesis, we deal with the valuation of embedded optionality in collateral contracts. We consider the case when collateral can be posted in two different currencies, which seems sufficient since collateral contracts are soon going to be simplified.

    This study is based on the conditional independence approach proposed by Piterbarg [8]. This method is compared to both Monte Carlo simulation and the finite-difference method.

    A practical application is finally presented with the example of a contract between Natixis and Barclays.

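A hedged sketch of why the currency choice has value: with two eligible collateral currencies, the effective collateral rate is the pathwise maximum of the two rates. The toy valuation below uses plain Monte Carlo with illustrative Brownian spread dynamics, not Piterbarg's conditional independence method.

```python
# Toy Monte Carlo value of the cheapest-to-deliver collateral choice.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, T = 100_000, 50, 5.0
dt = T / n_steps
# collateral rate spreads in two currencies as correlated Brownian walks
rho, vol1, vol2 = 0.4, 0.004, 0.006
z1 = rng.normal(size=(n_paths, n_steps))
z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.normal(size=(n_paths, n_steps))
s1 = np.cumsum(vol1 * np.sqrt(dt) * z1, axis=1)
s2 = np.cumsum(vol2 * np.sqrt(dt) * z2, axis=1)

# benefit of always posting the better currency vs. posting currency 1
pickup = np.maximum(s1, s2) - s1
print(pickup.mean() * T)     # rough value of the choice option per unit
```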
  • 319.
    Monin Nylund, Jean-Alexander
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Semi-Markov modelling in a Gibbs sampling algorithm for NIALM (2014). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Residential households in the EU are estimated to have a savings potential of around 27% [1]. The question remains, however, how to realize this savings potential. Non-Intrusive Appliance Load Monitoring (NIALM) aims to disaggregate the combination of household appliance energy signals using only measurements of the total household power load.

    The core of this thesis has been the implementation of an extension to a Gibbs sampling model with Hidden Markov Models for energy disaggregation. The goal has been to improve overall performance, by including the duration times of electrical appliances in the probabilistic model.

    The final algorithm was evaluated in comparison to the base algorithm, but results remained at best inconclusive, due to the model's inherent limitations.

    The work was performed at the Swedish company Watty. Watty develops the first energy data analytic tool that can automate the energy efficiency process in buildings.

  • 320.
    Mozayyan Esfahani, Sina
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Algorithmic Trading and Prediction of Foreign Exchange Rates Based on the Option Expiration Effect (2019). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The equity option expiration effect is a well observed phenomenon and is explained by delta hedge rebalancing and pinning risk, which makes the strike price of an option work as a magnet for the underlying price. The FX option expiration effect has not previously been explored to the same extent. In this paper the FX option expiration effect is investigated with the aim of finding out whether it provides valuable information for predicting FX rate movements. New models are created based on the concept of the option relevance coefficient that determines which options are at higher risk of being in the money or out of the money at a specified future time and thus have an attraction effect. An algorithmic trading strategy is created to evaluate these models. The new models based on the FX option expiration effect strongly outperform time series models used as benchmarks. The best results are obtained when the information about the FX option expiration effect is included as an exogenous variable in a GARCH-X model. However, despite promising and consistent results, more scientific research is required to be able to draw significant conclusions.

  • 321.
    Mumm, Lennart
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Reject Inference in Online Purchases (2012). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    As accurately as possible, creditors wish to determine whether a potential debtor will repay the borrowed sum. To achieve this, mathematical models known as credit scorecards, quantifying the risk of default, are used. In this study it is investigated whether the scorecard can be improved by using reject inference, thereby including the characteristics of the rejected population when refining the scorecard. The reject inference method used is parcelling. Logistic regression is used to estimate the probability of default based on applicant characteristics. Two models, one with and one without reject inference, are compared using the Gini coefficient and estimated profitability. The results show that the model with reject inference has both a slightly higher Gini coefficient and an increase in estimated profitability. Thus, this study suggests that reject inference does improve the predictive power of the scorecard, but in order to verify the results, additional testing on a larger calibration set is needed.
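
A sketch of parcelling on synthetic data: a scorecard is first fitted on accepted applicants, rejects are then assigned good/bad labels in proportion to their predicted risk within score bands, and the scorecard is refitted on the combined sample. Band edges and data are assumptions.

```python
# Reject inference by parcelling with a logistic-regression scorecard.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 3))
p_default = 1 / (1 + np.exp(-(x @ [1.2, -0.8, 0.5] - 1.0)))
default = rng.random(n) < p_default
accepted = x[:, 0] < 0.5                     # a biased accept rule

base = LogisticRegression().fit(x[accepted], default[accepted])

# parcelling: bucket rejects by predicted risk, sample labels per bucket
rej = ~accepted
scores = base.predict_proba(x[rej])[:, 1]
bands = np.quantile(scores, [0.25, 0.5, 0.75])
labels = np.empty(rej.sum(), dtype=bool)
for b in range(4):
    lo = -np.inf if b == 0 else bands[b - 1]
    hi = np.inf if b == 3 else bands[b]
    idx = (scores > lo) & (scores <= hi)
    labels[idx] = rng.random(idx.sum()) < scores[idx].mean()

x_all = np.vstack([x[accepted], x[rej]])
y_all = np.r_[default[accepted], labels]
refit = LogisticRegression().fit(x_all, y_all)   # scorecard with rejects
print(refit.coef_)
```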

  • 322. Munkhammar, J.
    et al.
    Widén, J.
    Grahn, Pia
    KTH, School of Electrical Engineering (EES), Electric Power Systems.
    Rydén, J.
    A Bernoulli distribution model for plug-in electric vehicle charging based on time-use data for driving patterns (2014). In: 2014 IEEE International Electric Vehicle Conference, IEVC 2014, IEEE conference proceedings, 2014. Conference paper (Refereed)
    Abstract [en]

    This paper presents a Bernoulli distribution model for plug-in electric vehicle (PEV) charging based on high-resolution activity data for Swedish driving patterns. Based on the activity 'driving vehicle' from a time diary study, a Monte Carlo simulation is made of the PEV state of charge, which is then condensed down to Bernoulli distributions representing charging for each hour during weekdays and weekend days. These distributions are then used as a basis for simulations of PEV charging patterns. Results regarding charging patterns for a number of different PEV parameters are shown, along with a comparison with results from a different stochastic model for PEV charging. A convergence test for Monte Carlo simulations of the distributions is also provided. In addition, we show that multiple PEV charging patterns are represented by Binomial distributions via convolution of Bernoulli distributions, and that the distribution of the aggregate charging of many PEVs is approximately normal. Finally, a few remarks regarding the applicability of the model are given, along with a discussion of potential extensions.
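
A sketch of the aggregation argument: if each PEV charges in a given hour with probability p, the number charging simultaneously is Binomial(n, p) (a convolution of Bernoulli variables), which is approximately normal for large fleets. Parameters are illustrative.

```python
# Bernoulli charging model aggregated over a fleet of PEVs.
import numpy as np
from scipy import stats

p_hour = 0.18            # assumed probability a PEV charges in this hour
n_pev = 2000             # fleet size
rng = np.random.default_rng(0)

# Monte Carlo aggregate vs. the Binomial description
sims = rng.binomial(1, p_hour, size=(2000, n_pev)).sum(axis=1)
binom_mean = n_pev * p_hour
binom_sd = np.sqrt(n_pev * p_hour * (1 - p_hour))
print(sims.mean(), binom_mean)               # close
print(sims.std(), binom_sd)                  # close
# normal approximation of a 99th-percentile load (vehicles charging)
print(stats.norm.ppf(0.99, binom_mean, binom_sd))
```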

  • 323.
    Murase, Takeo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Interest Rate Risk – Using Benchmark Shifts in a Multi Hierarchy Paradigm (2013). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This master thesis investigates the generic benchmark approach to measuring interest rate risk. First, the background and market situation are described, followed by an outline of the concept and meaning of measuring interest rate risk with generic benchmarks. Finally, a single yield curve in an arbitrary currency is analyzed in the cases where the linear interpolation and cubic interpolation techniques are utilized. It is shown that in the single yield curve setting with linear or cubic interpolation, the problem of finding interest rate scenarios can be formulated as a convex optimization problem, implying properties such as convexity and monotonicity. The analysis also sheds light on the differences between the linear and cubic interpolation techniques, both in which scenarios are generated and in how to solve for the scenarios implied by the views imposed on the generic benchmark instruments. Further research on the generic benchmark approach that would advance the understanding of the model is suggested at the end of the paper. At this stage, however, using generic benchmark instruments for measuring interest rate risk seems to be a consistent and computationally viable option which not only measures the interest rate risk exposure but also provides guidance on how to act in order to manage interest rate risk in a multi-hierarchy paradigm.
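
A sketch of the interpolation step the thesis analyzes: a yield curve defined on benchmark pillars, evaluated with linear and cubic interpolation, and a bump applied to one benchmark. Pillar data are assumptions, and scipy's CubicSpline stands in for the cubic technique.

```python
# Yield curve from benchmark pillars: linear vs. cubic interpolation.
import numpy as np
from scipy.interpolate import CubicSpline

pillars = np.array([0.25, 1, 2, 5, 10, 30])          # maturities (years)
yields = np.array([1.2, 1.4, 1.7, 2.1, 2.5, 2.8])    # in percent

t = np.linspace(0.25, 30, 200)
linear = np.interp(t, pillars, yields)
cubic = CubicSpline(pillars, yields)(t)

# a benchmark shift: bump the 5y pillar by +10 bps and recompute
bumped = yields + np.where(pillars == 5, 0.10, 0.0)
shift_effect = np.interp(t, pillars, bumped) - linear
print(shift_effect.max())    # localized effect under linear interpolation
```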

  • 324.
    Muratov, Anton
    et al.
    KTH, School of Electrical Engineering (EES).
    Zuyev, Sergei
    Neighbour-dependent point shifts and random exchange models: Invariance and attractors (2017). In: Bernoulli, ISSN 1350-7265, E-ISSN 1573-9759, Vol. 23, no 1, p. 539-551. Article in journal (Refereed)
    Abstract [en]

    Consider a partition of the real line into intervals by the points of a stationary renewal point process. Subdivide the intervals in proportions given by i.i.d. random variables with distribution G supported on [0, 1]. We ask for which interval length distribution F and which division distribution G the subdivision points themselves form a renewal process with the same F. An evident case is that of degenerate F and G. As we show, the only other possibility is when F is Gamma and G is Beta with related parameters. In particular, the process of division points of a Poisson process is again Poisson if the division distribution is Beta: B(r, 1 - r) for some 0 < r < 1. We show a similar behaviour of random exchange models, where a countable number of "agents" exchange randomly distributed parts of their "masses" with neighbours. More generally, a Dirichlet distribution arises in these models as a fixed-point distribution preserving independence of the masses at each step. We also show that for each G there is a unique attractor, a distribution of the infinite sequence of masses, which is a fixed point of the random exchange and to which iterations of a non-equilibrium configuration of masses converge weakly. In particular, iteratively applying B(r, 1 - r)-divisions to a realisation of any renewal process with finite second moment of F yields a Poisson process of the same intensity in the limit.
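
The Poisson invariance is easy to check numerically: dividing the gaps of a rate-1 Poisson process at independent Beta(r, 1 - r) fractions yields division points whose gaps are again exponential with the same rate. A minimal simulation:

```python
# Numerical check of the Beta(r, 1-r) division invariance for Poisson.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
r, n = 0.3, 200_000
points = np.cumsum(rng.exponential(1.0, n))      # rate-1 Poisson process
gaps = np.diff(points)
u = rng.beta(r, 1 - r, n - 1)                    # division fractions
division_points = points[:-1] + u * gaps

new_gaps = np.diff(division_points)
print(new_gaps.mean())                            # ~ 1, same intensity
# Kolmogorov-Smirnov test against Exp(1): a high p-value is expected
print(stats.kstest(new_gaps, "expon", args=(0, 1.0)))
```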

  • 325.
    Möllberg, Martin
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    On Calibrating an Extension of the Chen Model (2015). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    There are many ways of modeling stochastic processes of short-term interest rates. One way is to use one-factor models, which may be easy to use and easy to calibrate. Another way is to use a three-factor model in the pursuit of a higher degree of congruency with real-world market data. Calibrating such models may, however, take much more effort. One of the main questions is which models fit the data in question better, and whether the use of a three-factor model can result in a better fit compared to one-factor models.

    This is investigated by using the Efficient Method of Moments to calibrate a three-factor model with a Lévy process. This model is an extension of the Chen Model. The calibration is done with Euribor 6-month interest rates and these rates are also used with the Vasicek and Cox-Ingersoll-Ross (CIR) models. These two models are calibrated by using Maximum Likelihood Estimation and they are one-factor models. Chi-square goodness-of-fit tests are also performed for all models.

    The findings indicate that the Vasicek and CIR models fail to describe the stochastic process of the Euribor 6-month rate. However, the result from the goodness-of-fit test of the three-factor model gives support for that model.
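
For the one-factor benchmarks, maximum likelihood is straightforward because the Vasicek transition density is Gaussian in closed form. The sketch below simulates a Vasicek path and recovers the parameters by exact MLE (synthetic data and illustrative parameters; the thesis uses Euribor 6-month rates).

```python
# Exact maximum-likelihood calibration of the Vasicek model
# dr = kappa (theta - r) dt + sigma dW.
import numpy as np
from scipy.optimize import minimize

def simulate_vasicek(kappa, theta, sigma, r0, dt, n, rng):
    r = np.empty(n); r[0] = r0
    for t in range(1, n):
        m = theta + (r[t - 1] - theta) * np.exp(-kappa * dt)
        v = sigma ** 2 * (1 - np.exp(-2 * kappa * dt)) / (2 * kappa)
        r[t] = rng.normal(m, np.sqrt(v))
    return r

def neg_loglik(params, r, dt):
    kappa, theta, sigma = params
    if kappa <= 0 or sigma <= 0:
        return np.inf
    m = theta + (r[:-1] - theta) * np.exp(-kappa * dt)
    v = sigma ** 2 * (1 - np.exp(-2 * kappa * dt)) / (2 * kappa)
    return 0.5 * np.sum(np.log(2 * np.pi * v) + (r[1:] - m) ** 2 / v)

rng = np.random.default_rng(0)
r = simulate_vasicek(0.8, 0.03, 0.01, 0.02, dt=1 / 252, n=2000, rng=rng)
fit = minimize(neg_loglik, x0=[0.5, 0.02, 0.02], args=(r, 1 / 252),
               method="Nelder-Mead")
print(fit.x)        # estimates of (kappa, theta, sigma)
```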

  • 326.
    Nguyen Andersson, Peter
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Liquidity and corporate bond pricing on the Swedish market (2014). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In this thesis a corporate bond valuation model based on Dick-Nielsen, Feldhütter, and Lando (2011) and Chen, Lesmond, and Wei (2007) is examined. The aim is for the model to price corporate bond spreads and in particular capture the price effects of liquidity as well as credit risk. The valuation model is based on linear regression and is conducted on the Swedish market with data provided by Handelsbanken. Two measures of liquidity are analyzed: the bid-ask spread and the zero-trading days. The investigation shows that the bid-ask spread outperforms the zero-trading days in both significance and robustness. The valuation model with the bid-ask spread explains 59% of the cross-sectional variation and has a standard error of 56 bps in its pricing predictions of corporate spreads. A reduced version of the valuation model is also developed to address simplicity and target a larger group of users. The reduced model is shown to maintain a large proportion of the explanatory power while including fewer and simpler variables.

  • 327.
    Nilsson, Hans-Erik
    et al.
    KTH, Superseded Departments, Microelectronics and Information Technology, IMIT.
    Martinez, Antonio B.
    KTH, Superseded Departments, Microelectronics and Information Technology, IMIT.
    Hjelm, Mats
    KTH, Superseded Departments, Microelectronics and Information Technology, IMIT.
    Full band Monte Carlo simulation - beyond the semiclassical approach (2004). In: Monte Carlo Methods and Applications, ISSN 0929-9629, Vol. 10, no 3-4, p. 481-490. Article in journal (Refereed)
    Abstract [en]

    A quantum mechanical extension of the full band ensemble Monte Carlo (MC) simulation method is presented. The new approach goes beyond the traditional semi-classical method generally used in MC simulations of charge transport in semiconductor materials and devices. The extension is necessary in high-field simulations of semiconductor materials with a complex unit cell, such as the hexagonal SiC polytypes or wurtzite GaN. Instead of complex unit cells the approach can also be used for super-cells, in order to understand charge transport at surfaces, around point defects, or in quantum wells.

  • 328.
    Nordling, Torbjörn E. M.
    KTH, School of Electrical Engineering (EES), Automatic Control.
    Robust inference of gene regulatory networks: System properties, variable selection, subnetworks, and design of experiments (2013). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    In this thesis, inference of biological networks from in vivo data generated by perturbation experiments is considered, i.e. deduction of causal interactions that exist among the observed variables. Knowledge of such regulatory influences is essential in biology.

    A system property, interampatteness, is introduced that explains why the variation in existing gene expression data is concentrated to a few “characteristic modes” or “eigengenes”, and why previously inferred models have a large number of false positive and false negative links. An interampatte system is characterized by strong INTERactions enabling simultaneous AMPlification and ATTEnuation of different signals, and we show that perturbation of individual state variables, e.g. genes, typically leads to ill-conditioned data with both characteristic and weak modes. The weak modes are typically dominated by measurement noise due to poor excitation, and their existence hampers network reconstruction.

    The excitation problem is solved by iterative design of correlated multi-gene perturbation experiments that counteract the intrinsic signal attenuation of the system. The next perturbation should be designed such that the expected response practically spans an additional dimension of the state space. The proposed design is numerically demonstrated for the Snf1 signalling pathway in S. cerevisiae.

    The impact of unperturbed and unobserved latent state variables, which exist in any real biological system, on the inferred network and on the required set-up of the experiments for network inference is analysed. Their existence implies that, in general, a subnetwork of pseudo-direct causal regulatory influences, accounting for all environmental effects, is inferred. In principle, the number of latent states and different paths between the nodes of the network can be estimated, but their identity cannot be determined unless they are observed or perturbed directly.

    Network inference is recognized as a variable/model selection problem and solved by considering all possible models of a specified class that can explain the data at a desired significance level, and by classifying only the links present in all of these models as existing. As shown, these links can be determined without any parameter estimation by reformulating the variable selection problem as a robust rank problem. Solution of the rank problem enable assignment of confidence to individual interactions, without resorting to any approximation or asymptotic results. This is demonstrated by reverse engineering of the synthetic IRMA gene regulatory network from published data. A previously unknown activation of transcription of SWI5 by CBF1 in the IRMA strain of S. cerevisiae is proven to exist, which serves to illustrate that even the accumulated knowledge of well studied genes is incomplete.

  • 329.
    Norgren, Lee
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Segmenting Observed Time Series Using Comovement and Complexity Measures (2019). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Society depends on unbiased, efficient and replicable measurement tools to tell us more truthfully what is happening when our senses would otherwise fool us. A new approach is made to consistently detect the start and end of historic recessions as defined by the US Federal Reserve. To do this, three measures, correlation (Spearman and Pearson), Baur comovement and Kolmogorov complexity, are used to quantify market behaviour to detect recessions. To compare the effectiveness of each measure, the normalized correct Area Under Curve (AUC) fraction is introduced. It is found that, for all three measures, the performance depends mostly on the type of data, and that financial market data does not perform as well as fundamental economic data for detecting recessions. Furthermore, comovement is found to be the most efficient individual measure, and also the most efficient of all measures when compared against several measures merged together.

  • 330.
    Nykvist, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Topics in importance sampling and derivatives pricing (2015). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis consists of four papers, presented in Chapters 2-5, on the topics of derivatives pricing and importance sampling for stochastic processes.

    In the first paper a model for the evolution of the forward density of the future value of an asset is proposed. The model is constructed with the aim of being both simple and realistic, and avoid the need for frequent re-calibration. The model is calibrated to liquid options on the S&P 500 index and an empirical study illustrates that the model provides a good fit to option price data.

    In the last three papers of this thesis efficient importance sampling algorithms are designed for computing rare-event probabilities in the setting of stochastic processes. The algorithms are based on subsolutions of partial differential equations of Hamilton-Jacobi type and the construction of appropriate subsolutions is facilitated by a minmax representation involving the Mañé potential.

    In the second paper, a general framework is provided for the case of one-dimensional diffusions driven by Brownian motion. An analytical formula for the Mañé potential is provided and the performance of the algorithm is analyzed in detail for geometric Brownian motion and for the Cox-Ingersoll-Ross process. Depending on the choice of the parameters of the models, the importance sampling algorithm is either proven to be asymptotically optimal or its good performance is demonstrated in numerical investigations.

    The third paper extends the results from the previous paper to the setting of high-dimensional stochastic processes. Using the method of characteristics, the partial differential equation for the Mañé potential is rewritten as a system of ordinary differential equations which can be efficiently solved. The methodology is used to estimate loss probabilities of large portfolios in the Black-Scholes model and in the stochastic volatility model proposed by Heston. Numerical experiments indicate that the algorithm yields significant variance reduction when compared with standard Monte-Carlo simulation.

    In the final paper, an importance sampling algorithm is proposed for computing the probability of voltage collapse in a power system. The power load is modeled by a high-dimensional stochastic process and the sought probability is formulated as an exit problem for the diffusion. A particular challenge is that the boundary of the domain cannot be characterized explicitly. Simulations for two power systems shows that the algorithm can be effectively implemented and provides a viable alternative to existing system risk indices.

    The thesis begins with a historical review of mathematical finance, followed by an introduction to importance sampling for stochastic processes.
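
As a toy illustration of the importance sampling idea running through these papers, the sketch below estimates a Gaussian tail probability by exponential tilting: sampling from a shifted distribution and reweighting by the likelihood ratio. It is a one-dimensional analogue, not the subsolution-based schemes of the thesis.

```python
# Importance sampling for P(X > a), X ~ N(0,1), via exponential tilting.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a, n = 4.0, 100_000

# standard Monte Carlo: almost no hits, high relative error
x = rng.normal(0, 1, n)
print((x > a).mean())

# importance sampling: tilt the sampling mean to the rare region
y = rng.normal(a, 1, n)
weights = np.exp(-a * y + 0.5 * a ** 2)   # dN(0,1)/dN(a,1) at y
est = np.mean((y > a) * weights)
print(est, stats.norm.sf(a))              # matches the exact tail
```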

  • 331.
    Nyquist, Pierre
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Large deviations for weighted empirical measures and processes arising in importance sampling (2013). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis consists of two papers related to large deviation results associated with importance sampling algorithms. As the need for efficient computational methods increases, so does the need for theoretical analysis of simulation algorithms. This thesis is mainly concerned with algorithms using importance sampling. Both papers make theoretical contributions to the development of a new approach for analyzing efficiency of importance sampling algorithms by means of large deviation theory.

    In the first paper of the thesis, the efficiency of an importance sampling algorithm is studied using a large deviation result for the sequence of weighted empirical measures that represent the output of the algorithm. The main result is stated in terms of the Laplace principle for the weighted empirical measure arising in importance sampling and it can be viewed as a weighted version of Sanov's theorem. This result is used to quantify the performance of an importance sampling algorithm over a collection of subsets of a given target set as well as quantile estimates. The method of proof is the weak convergence approach to large deviations developed by Dupuis and Ellis.

    The second paper studies moderate deviations of the empirical process analogue of the weighted empirical measure arising in importance sampling. Using moderate deviation results for empirical processes the moderate deviation principle is proved for weighted empirical processes that arise in importance sampling. This result can be thought of as the empirical process analogue of the main result of the first paper and the proof is established using standard techniques for empirical processes and Banach space valued random variables. The moderate deviation principle for the importance sampling estimator of the tail of a distribution follows as a corollary. From this, moderate deviation results are established for importance sampling estimators of two risk measures: The quantile process and Expected Shortfall. The results are proved using a delta method for large deviations established by Gao and Zhao (2011) together with more classical results from the theory of large deviations.

    The thesis begins with an informal discussion of stochastic simulation, in particular importance sampling, followed by short mathematical introductions to large deviations and importance sampling.

  • 332.
    Nyquist, Pierre
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics. Brown Univ, USA.
    Moderate deviation principles for importance sampling estimators of risk measures (2017). In: Journal of Applied Probability, ISSN 0021-9002, E-ISSN 1475-6072, Vol. 54, no 2, p. 490-506. Article in journal (Refereed)
    Abstract [en]

    Importance sampling has become an important tool for the computation of extreme quantiles and tail-based risk measures. For estimation of such nonlinear functionals of the underlying distribution, the standard efficiency analysis is not necessarily applicable. In this paper we therefore study importance sampling algorithms by considering moderate deviations of the associated weighted empirical processes. Using a delta method for large deviations, combined with classical large deviation techniques, the moderate deviation principle is obtained for importance sampling estimators of two of the most common risk measures: value at risk and expected shortfall.

  • 333.
    Nyquist, Pierre
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Moderate deviation principles for importance sampling estimators of risk measures (2017). In: Journal of Applied Probability, ISSN 0021-9002, E-ISSN 1475-6072. Article in journal (Refereed)
    Abstract [en]

    Importance sampling has become an important tool for the computation of tail-based risk measures. Since such quantities are often determined mainly by rare events, standard Monte Carlo can be inefficient and importance sampling provides a way to speed up computations. This paper considers moderate deviations for the weighted empirical process, the process analogue of the weighted empirical measure, arising in importance sampling. The moderate deviation principle is established as an extension of existing results. Using a delta method for large deviations established by Gao and Zhao (Ann. Statist., 2011), together with classical large deviation techniques, the moderate deviation principle for the weighted empirical process is extended to functionals of the weighted empirical process which correspond to risk measures. The main results are moderate deviation principles for importance sampling estimators of the quantile function of a distribution and of Expected Shortfall.

  • 334.
    Nyquist, Pierre
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    On large deviations and design of efficient importance sampling algorithms (2014). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis consists of four papers, presented in Chapters 2-5, on the topics of large deviations and stochastic simulation, particularly importance sampling. The four papers make theoretical contributions to the development of a new approach for analyzing the efficiency of importance sampling algorithms by means of large deviation theory, and to the design of efficient algorithms using the subsolution approach developed by Dupuis and Wang (2007).

    In the first two papers of the thesis, the random output of an importance sampling algorithm is viewed as a sequence of weighted empirical measures and weighted empirical processes, respectively. The main theoretical results are a Laplace principle for the weighted empirical measures (Paper 1) and a moderate deviation result for the weighted empirical processes (Paper 2). The Laplace principle for weighted empirical measures is used to propose an alternative measure of efficiency based on the associated rate function. The moderate deviation result for weighted empirical processes is an extension of what can be seen as the empirical process version of Sanov's theorem. Together with a delta method for large deviations, established by Gao and Zhao (2011), we show moderate deviation results for importance sampling estimators of the risk measures Value-at-Risk and Expected Shortfall.

    The final two papers of the thesis are concerned with the design of efficient importance sampling algorithms using subsolutions of partial differential equations of Hamilton-Jacobi type (the subsolution approach).

    In Paper 3 we show a min-max representation of viscosity solutions of Hamilton-Jacobi equations. In particular, the representation suggests a general approach for constructing subsolutions to equations associated with terminal value problems and exit problems. Since the design of efficient importance sampling algorithms is connected to such subsolutions, the min-max representation facilitates the construction of efficient algorithms.

    In Paper 4 we consider the problem of constructing efficient importance sampling algorithms for a certain type of Markovian intensity model for credit risk. The min-max representation of Paper 3 is used to construct subsolutions to the associated Hamilton-Jacobi equation and the corresponding importance sampling algorithms are investigated both theoretically and numerically.
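
    To make the subsolution approach concrete in its simplest instance, the sketch below estimates P(S_n/n >= a) for a Gaussian random walk using the constant exponential tilt that an affine subsolution of the associated Hamilton-Jacobi equation prescribes. The model, threshold and tilt are a standard textbook example, assumed here for illustration; they are not the credit risk model of Paper 4.

        import numpy as np

        # Rare-event probability P(S_n / n >= a), S_n a sum of n N(0,1) increments.
        # An affine subsolution has constant gradient, which prescribes the
        # constant exponential tilt theta = a (textbook special case).
        rng = np.random.default_rng(3)
        n, a, reps = 100, 0.5, 10_000
        theta = a                                  # tilt from the subsolution gradient

        z = rng.normal(loc=theta, size=(reps, n))  # increments under the tilted law
        s = z.sum(axis=1)
        # Path likelihood ratio dP/dQ = exp(-theta * S_n + n * theta**2 / 2).
        lr = np.exp(-theta * s + n * theta**2 / 2)
        print(np.mean(lr * (s >= n * a)))          # exact: 1 - Phi(5) ~ 2.9e-7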

    The thesis begins with an informal discussion of stochastic simulation, followed by brief mathematical introductions to large deviations and importance sampling. 

  • 335. Nyström, Kaj
    et al.
    Önskog, Thomas
    Remarks on the Skorohod problem and reflected Lévy driven SDEs in time-dependent domains2015In: Stochastics: An International Journal of Probability and Stochastic Processes, ISSN 1744-2508, E-ISSN 1744-2516, Vol. 87, no 5, p. 747-765Article in journal (Refereed)
    Abstract [en]

    We consider the Skorohod problem for càdlàg functions, and the subsequent construction of solutions to normally reflected stochastic differential equations driven by Lévy processes, in the setting of non-smooth and time-dependent domains.
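
    For context, in the classical one-dimensional, time-independent special case (normal reflection at 0 of a path w with w(0) >= 0) the Skorohod problem has the explicit solution below; the time-dependent setting of the paper admits no such closed form, so this is orientation only.

        \[
          \lambda(t) = \sup_{0 \le s \le t} \max\{-w(s),\, 0\},
          \qquad
          x(t) = w(t) + \lambda(t) \ge 0 .
        \]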

  • 336. Nyström, Kaj
    et al.
    Önskog, Thomas
    The Skorohod oblique reflection problem in time-dependent domains2010In: Annals of Probability, ISSN 0091-1798, E-ISSN 2168-894X, Vol. 38, no 6, p. 2170-2223Article in journal (Refereed)
    Abstract [en]

    The deterministic Skorohod problem plays an important role in the construction and analysis of diffusion processes with reflection. In the form studied here, the multidimensional Skorohod problem was introduced, in time-independent domains, by H. Tanaka [61] and further investigated by P.-L. Lions and A.-S. Sznitman [42] in their celebrated article. Subsequent work by several researchers has produced a large literature on the Skorohod problem in time-independent domains. In this article we conduct a thorough study of the multidimensional Skorohod problem in time-dependent domains. In particular, we prove the existence of càdlàg solutions (x, λ) to the Skorohod problem, with oblique reflection, for (D, Γ, w), assuming that D is a time-dependent domain (Theorem 1.2). In addition, we prove that if w is continuous, then x is continuous as well (Theorem 1.3). Subsequently, we use the established existence results to construct solutions to stochastic differential equations with oblique reflection (Theorem 1.9) in time-dependent domains. In the process of proving these results we establish a number of estimates for solutions to the Skorohod problem with bounded jumps and, in addition, several results concerning the convergence of sequences of solutions to Skorohod problems in the setting of time-dependent domains.

  • 337. Nyström, Kaj
    et al.
    Önskog, Thomas
    Weak approximation of obliquely reflected diffusions in time-dependent domains.2010In: Journal of Computational Mathematics, ISSN 0254-9409, E-ISSN 1991-7139, Vol. 28, no 5, p. 579-605Article in journal (Refereed)
    Abstract [en]

    In an earlier paper, we proved the existence of solutions to the Skorohod problem with oblique reflection in time-dependent domains and, subsequently, applied this result to the problem of constructing solutions, in time-dependent domains, to stochastic differential equations with oblique reflection. In this paper we use these results to construct weak approximations of solutions to stochastic differential equations with oblique reflection, in time-dependent domains in R^d, by means of a projected Euler scheme. We prove that the constructed method has, as is the case for normal reflection and time-independent domains, an order of convergence equal to 1/2, and we evaluate the method empirically by means of two numerical examples. Furthermore, using a well-known extension of the Feynman-Kac formula to stochastic differential equations with reflection, our scheme also yields a Monte Carlo method for solving second order parabolic partial differential equations with Robin boundary conditions in time-dependent domains.
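
    A minimal sketch of a projected Euler scheme of the kind analyzed here, specialized, as an illustrative assumption, to normal reflection of a drifted Brownian motion on the moving half-line [g(t), infinity); the paper's setting, with oblique reflection in general time-dependent domains in R^d, is substantially more general.

        import numpy as np

        def projected_euler(x0, mu, sigma, g, T, n_steps, rng):
            """Euler step, then project back onto the moving domain [g(t), inf)."""
            dt = T / n_steps
            x = x0
            for k in range(n_steps):
                t = (k + 1) * dt
                x += mu * dt + sigma * rng.normal(scale=np.sqrt(dt))  # free step
                x = max(x, g(t))           # projection onto the domain at time t
            return x

        rng = np.random.default_rng(4)
        g = lambda t: 0.2 * np.sin(2 * np.pi * t)   # assumed moving lower boundary
        vals = [projected_euler(1.0, -0.5, 1.0, g, 1.0, 200, rng)
                for _ in range(5_000)]
        print(np.mean(vals))                        # Monte Carlo estimate of E[X_T]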

  • 338.
    Näsman, P
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Centres, Centre for Transport Studies, CTS. KTH, School of Architecture and the Built Environment (ABE), Transport Science, Transport and Location Analysis.
    Thedéen, T
    Voter turnout, housing type and vote shares in the metropolitan areas of Stockholm, Göteborg and Malmö in the parliamentary elections of 1982, 1985 and 19881990Report (Other academic)
  • 339.
    Näsman, Per
    KTH, School of Architecture and the Built Environment (ABE), Centres, Centre for Transport Studies, CTS. KTH, School of Architecture and the Built Environment (ABE), Transport Science, Transport and Location Analysis.
    Jan Gustavsson, mentor and friend: Festschrift on the occasion of the retirement of Jan Gustavsson, Department of Statistics.1998Other (Other (popular science, discussion, etc.))
  • 340.
    Okanovic, Mirza
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    An inquiry into the efficacy of convolutional neural networks in low-resolution video feeds for object detection2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In this thesis, several well-known models are investigated and compared to a custom model for people detection in low-resolution video feeds. YOLOv3 and SSD in particular are models which, in their time, produced state-of-the-art results in competitions such as ImageNet and COCO. The performance of all models was compared on speed and accuracy, where YOLOv3 was found to be the slowest and SSD the fastest. The proposed model was superior in accuracy to both of the aforementioned architectures, which can be attributed to the addition of newer techniques from research, such as leaving out activations and carefully balancing the loss function. The results suggest that the proposed model is implementable for real-time inference on cheap hardware, such as a Raspberry Pi 3B+ coupled with one or more AI accelerator sticks such as the Intel Neural Compute Stick 2, and that the networks are usable for detection even in poor video streams.

  • 341.
    Olsson, Jimmy
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Pavlenko, Tatjana
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Rios, Felix
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Bayesian structure learning in graphical models using sequential Monte CarloManuscript (preprint) (Other academic)
    Abstract [en]

    In this paper we present a family of algorithms, the junction tree expanders, for expanding junction trees in the sense that the number of nodes in the underlying decomposable graph is increased by one. The family of junction tree expanders is equipped with a number of theoretical results, including a characterization stating that every junction tree, and consequently every decomposable graph, can be constructed by iteratively applying a junction tree expander. Further, an important feature of a stochastic implementation of a junction tree expander is the Markovian property inherent to the tree propagation dynamics. Using this property, a sequential Monte Carlo algorithm for approximating a probability distribution defined on the space of decomposable graphs is developed with the junction tree expander as a proposal kernel. Specifically, we apply the sequential Monte Carlo algorithm to structure learning in decomposable Gaussian graphical models, where the target distribution is a junction tree posterior distribution. In this setting, posterior parametric inference on the underlying decomposable graph is a direct by-product of the suggested methodology; working with the G-Wishart family of conjugate priors, we derive a closed form expression for the Bayesian estimator of the precision matrix of Gaussian graphical models Markov with respect to a decomposable graph. Performance accuracy of the graph and parameter estimators is illustrated through a collection of numerical examples demonstrating the feasibility of the suggested approach in high-dimensional domains.
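
    The generic sequential Monte Carlo skeleton below indicates how a proposal kernel with tractable transition probabilities, such as a junction tree expander, slots into such a sampler. The Gaussian toy target and random-walk kernel are placeholders chosen for the sketch, not the graph-valued objects of the paper.

        import numpy as np
        from scipy import stats

        def smc(n_particles, n_steps, rng):
            # Toy target pi_n proportional to prod_k N(x_k; 0, 1); Gaussian
            # random-walk proposal standing in for a structured kernel whose
            # transition probability is likewise tractable.
            x = np.zeros(n_particles)
            logw = np.zeros(n_particles)
            for _ in range(n_steps):
                xp = x + rng.normal(size=n_particles)       # propose from kernel K
                logw += stats.norm.logpdf(xp) - stats.norm.logpdf(xp, loc=x)
                x = xp
                w = np.exp(logw - logw.max()); w /= w.sum()
                if 1.0 / np.sum(w ** 2) < n_particles / 2:  # resample on low ESS
                    x = x[rng.choice(n_particles, size=n_particles, p=w)]
                    logw[:] = 0.0
            w = np.exp(logw - logw.max()); w /= w.sum()
            return x, w

        rng = np.random.default_rng(5)
        xs, ws = smc(1_000, 20, rng)
        print(np.sum(ws * xs))   # self-normalized estimate of E[X_n] (0 for this toy)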

  • 342.
    Olsson, Jimmy
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Pavlenko, Tatjana
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Rios, Felix Leopoldo
    Bayesian inference in decomposable graphical models using sequential Monte Carlo methodsManuscript (preprint) (Other academic)
    Abstract [en]

    In this study we present a sequential sampling methodology for Bayesian inference in decomposable graphical models. We recast the problem of graph estimation, which in general lacks a natural sequential interpretation, into a sequential setting. Specifically, we propose a recursive Feynman-Kac model which generates a flow of junction tree distributions over a space of increasing dimensions and develop an efficient sequential Monte Carlo sampler. As a key ingredient of the proposal kernel in our sampler we use the Christmas tree algorithm developed in the companion paper Olsson et al. [2017]. We focus on particle MCMC methods, in particular particle Gibbs (PG), as it allows for generating MCMC chains with global moves on an underlying space of decomposable graphs. To further improve the mixing properties of this PG algorithm, we incorporate a systematic refreshment step implemented through direct sampling from a backward kernel. The theoretical properties of the algorithm are investigated, showing in particular that the refreshment step improves the algorithm's performance in terms of the asymptotic variance of the estimated distribution. Performance accuracy of the graph estimators is illustrated through a collection of numerical examples demonstrating the feasibility of the suggested approach in both discrete and continuous graphical models.
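
    The core PG move is a conditional sequential Monte Carlo sweep in which one particle is pinned to the retained trajectory. The sketch below shows this move for the same kind of Gaussian toy target as above; it is a schematic of plain particle Gibbs and omits the paper's backward-kernel refreshment step.

        import numpy as np
        from scipy import stats

        def csmc(ref, n_particles, rng):
            """One conditional SMC sweep: particle 0 follows the reference path."""
            T = len(ref)
            paths = np.zeros((n_particles, T))
            for t in range(T):
                prev = paths[:, t - 1] if t > 0 else np.zeros(n_particles)
                prop = prev + rng.normal(size=n_particles)   # random-walk proposal
                prop[0] = ref[t]                             # pin the reference
                paths[:, t] = prop
                logw = stats.norm.logpdf(prop) - stats.norm.logpdf(prop, loc=prev)
                w = np.exp(logw - logw.max()); w /= w.sum()
                idx = rng.choice(n_particles, size=n_particles, p=w)
                idx[0] = 0                                   # keep reference lineage
                paths = paths[idx]
            return paths[rng.integers(n_particles)]          # new retained path

        rng = np.random.default_rng(6)
        path = np.zeros(10)
        for _ in range(100):                                 # particle Gibbs iterations
            path = csmc(path, n_particles=100, rng=rng)
        print(path.mean())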

  • 343.
    Olsson, Jimmy
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Pavlenko, Tatjana
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Rios, Felix Leopoldo
    Generating junction trees of decomposable graphs with the Christmas tree algorithmManuscript (preprint) (Other academic)
    Abstract [en]

    The junction tree representation provides an attractive structural property for organizing a decomposable graph. In this study, we present a novel stochastic algorithm, which we call the Christmas tree algorithm, for building junction trees sequentially by adding one node at a time to the underlying decomposable graph. The algorithm has two important theoretical properties. Firstly, every junction tree, and hence every decomposable graph, has positive probability of being generated. Secondly, the transition probability from one tree to another has a tractable expression. These two properties, along with the reversed version of the proposed algorithm, are key ingredients in the construction of a sequential Monte Carlo sampling scheme for approximating distributions over decomposable graphs, see Olsson et al. [2016]. As an illustrating example, we specify a distribution over the space of junction trees and estimate the number of decomposable graphs through the normalizing constant.
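
    The second property is what makes the algorithm usable as a proposal kernel: a sequential Monte Carlo sampler needs the kernel's transition probability in its importance weights. Schematically, with target sequence gamma_n and proposal kernel K_n (generic SMC bookkeeping, not a formula quoted from the paper):

        \[
          w_n(x_{n-1}, x_n) =
          \frac{\gamma_n(x_n)}{\gamma_{n-1}(x_{n-1})\, K_n(x_{n-1}, x_n)} ,
        \]
        % so a tractable K_n makes the weight computable in closed form.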

  • 344. Olsson, Jimmy
    et al.
    Rydén, Tobias
    Lund University.
    Asymptotic properties of particle filter-based maximum likelihood estimators for state space models2008In: Stochastic Processes and their Applications, ISSN 0304-4149, E-ISSN 1879-209X, Vol. 118, no 4, p. 649-680Article in journal (Refereed)
    Abstract [en]

    We study the asymptotic performance of approximate maximum likelihood estimators for state space models obtained via sequential Monte Carlo methods. The state space of the latent Markov chain and the parameter space are assumed to be compact. The approximate estimates are computed by, firstly, running possibly dependent particle filters on a fixed grid in the parameter space, yielding a pointwise approximation of the log-likelihood function. Secondly, extensions of this approximation to the whole parameter space are formed by means of piecewise constant functions or B-spline interpolation, and approximate maximum likelihood estimates are obtained through maximization of the resulting functions. In this setting we formulate criteria for how to increase the number of particles and the resolution of the grid in order to produce estimates that are consistent and asymptotically normal.
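
    A minimal bootstrap particle filter returning the log-likelihood approximation at each point of a fixed parameter grid, in the spirit of the first step described above; the linear-Gaussian toy model, the grid and the synthetic data are assumptions for the sketch, and the piecewise constant or B-spline extension step is omitted.

        import numpy as np
        from scipy import stats

        def pf_loglik(theta, y, n_particles, rng):
            # Bootstrap particle filter log-likelihood for the assumed toy model
            #   X_t = theta * X_{t-1} + V_t,   Y_t = X_t + W_t,   V, W ~ N(0, 1).
            x = rng.normal(size=n_particles)
            ll = 0.0
            for yt in y:
                x = theta * x + rng.normal(size=n_particles)   # propagate
                w = stats.norm.pdf(yt, loc=x)                  # observation weights
                ll += np.log(np.mean(w))                       # likelihood factor
                x = x[rng.choice(n_particles, size=n_particles, p=w / w.sum())]
            return ll

        rng = np.random.default_rng(7)
        y = rng.normal(size=50)                 # stand-in observation sequence
        grid = np.linspace(-0.9, 0.9, 19)       # fixed grid in the parameter space
        lls = [pf_loglik(th, y, 500, rng) for th in grid]
        print(grid[int(np.argmax(lls))])        # approximate (grid-based) MLE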

  • 345. Olsson, Jimmy
    et al.
    Rydén, Tobias
    Lund University.
    Particle filter-based approximate maximum likelihood inference asymptotics in state-space models2007In: ESAIM: Proc. Volume 19, 2007, Conference Oxford sur les méthodes de Monte Carlo séquentielles / [ed] Andrieu, C. and Crisan, D., 2007, p. 115-120Conference paper (Refereed)
    Abstract [en]

    To implement maximum likelihood estimation in state-space models, the log-likelihood function must be approximated. We study such approximations based on particle filters, and in particular conditions for consistency of the corresponding approximate maximum likelihood estimator. Numerical results illustrate the theory.

  • 346.
    Olsson, Jimmy
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Westerborn, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    An efficient particle-based online EM algorithm for general state-space modelsManuscript (preprint) (Other academic)
  • 347.
    Olsson, Jimmy
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Westerborn, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Efficient particle-based online smoothing in general hidden Markov models: the PaRIS algorithmManuscript (preprint) (Other academic)
  • 348.
    Olsson, Jimmy
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Westerborn, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Particle-based adaptive-lag online marginal smoothing in general state-space modelsManuscript (preprint) (Other academic)
  • 349.
    Olsson, Jimmy
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Westerborn, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Particle-based, online estimation of tangent filters with application to parameter estimation in nonlinear state-space modelsManuscript (preprint) (Other academic)
  • 350.
    Olsson, Kevin
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Ivinskiy, Valeriy
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Predicting runners’ oxygen consumption on flat terrain using accelerometer data2019Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    This project aimed to use accelerometer data and KPIs to predict the oxygen consumption of runners during exercise on flat terrain. Based on a number of studies on the relationship between oxygen consumption and running economy, and on a small data set, a model was constructed that achieved a prediction accuracy of 81.1% for one individual. Problems encountered during the research include issues with comparing data from different systems, model nonlinearity and data noise. These problems were solved by transforming the data in the R software, re-specifying the model and identifying outlying observations that could be treated as noise. The results of this project should be seen as a proof of concept for further studies, showing that it is possible to predict oxygen consumption using a set of accelerometer data and KPIs. With a larger sample set, the model could be validated and, furthermore, implemented in Racefox's current service as a calibration method for individual results and as an early-warning system for deficient running economy.
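
    A schematic Python re-implementation of the kind of workflow described (transform, fit a linear model, drop outlying observations, refit); the data, the log transform and all thresholds are placeholders, not the thesis's actual variables or model.

        import numpy as np

        rng = np.random.default_rng(8)
        accel = rng.lognormal(size=200)              # stand-in accelerometer feature
        vo2 = 3.5 + 2.0 * np.log(accel) + rng.normal(scale=0.3, size=200)

        X = np.column_stack([np.ones_like(accel), np.log(accel)])  # transformation
        beta, *_ = np.linalg.lstsq(X, vo2, rcond=None)             # initial fit

        resid = vo2 - X @ beta
        keep = np.abs(resid) < 2.5 * resid.std()     # treat large residuals as noise
        beta, *_ = np.linalg.lstsq(X[keep], vo2[keep], rcond=None) # refit
        print(beta)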
