101 - 150 of 392
  • 101.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hamdi, Ali
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A Two-modes Mean-field Optimal Switching Problem for The Full Balance Sheet. 2014. In: International Journal of Stochastic Analysis, ISSN 2090-3332, E-ISSN 2090-3340, article id 159519. Article in journal (Refereed)
    Abstract [en]

    We consider the problem of switching a large number of production lines between two modes, high-production and low-production. The switching is based on the optimal expected profit and cost yields of the respective production lines, and considers both sides of the balance sheet. Furthermore, the production lines are all assumed to be interconnected through a coupling term, which is the average of all optimal expected yields. Intuitively, this means that each individual production line is compared to the average of all its peers which acts as a benchmark.

    Due to the complexity of the problem, we consider the aggregated optimal expected yields, where the coupling term is approximated with the mean of the optimal expected yields. This turns the problem into a two-modes optimal switching problem of mean-field type, which can be described by a system of Snell envelopes where the obstacles are interconnected and nonlinear.

    The main result of the paper is a proof of a continuous minimal solution to the system of Snell envelopes, as well as the full characterization of the optimal switching strategy.

  • 102.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hult, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Nyquist, Pierre
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Importance sampling for a Markovian intensity model with applications to credit risk. Manuscript (preprint) (Other academic)
    Abstract [en]

    This paper considers importance sampling for estimation of rare-event probabilities in a Markovian intensity model for credit risk. The main contribution is the design of efficient importance sampling algorithms using subsolutions of a certain Hamilton-Jacobi equation. For certain instances of the credit risk model the proposed algorithm is proved to be asymptotically optimal. The computational gain compared to standard Monte Carlo is illustrated by numerical experiments.
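
    The subsolution-based algorithms of the paper are specific to the credit risk model, but the general mechanism of importance sampling for rare events — simulate under a change of measure and reweight by the likelihood ratio — can be sketched on a Gaussian toy problem (everything below is an illustrative example, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def rare_event_is(a, n=100_000):
    """Estimate P(X > a) for X ~ N(0,1) by exponential tilting:
    sample from N(a,1) and reweight with the likelihood ratio."""
    y = rng.normal(a, 1.0, n)            # proposal centered on the rare region
    logw = -a * y + 0.5 * a * a          # log of dN(0,1)/dN(a,1) at y
    return np.mean((y > a) * np.exp(logw))

est = rare_event_is(4.0)                 # true value is about 3.17e-5
```

    Plain Monte Carlo with the same sample size would see only a handful of hits beyond a = 4; the tilted estimator places every sample in the relevant region and corrects with the weights.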

  • 103.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hult, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Nyquist, Pierre
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Min-max representations of viscosity solutions of Hamilton-Jacobi equations and applications in rare-event simulation. Manuscript (preprint) (Other academic)
    Abstract [en]

    In this paper a duality relation between the Mañé potential and Mather's action functional is derived in the context of convex and state-dependent Hamiltonians. The duality relation is used to obtain min-max representations of viscosity solutions of first order Hamilton-Jacobi equations. These min-max representations naturally suggest classes of subsolutions of Hamilton-Jacobi equations that arise in the theory of large deviations. The subsolutions, in turn, are good candidates for designing efficient rare-event simulation algorithms.

  • 104.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Löfdahl, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A hidden Markov approach to disability insurance. Manuscript (preprint) (Other academic)
    Abstract [en]

    Point and interval estimation of future disability inception and recovery rates is predominantly carried out by combining generalized linear models (GLMs) with time series forecasting techniques into a two-step method involving parameter estimation from historical data and subsequent calibration of a time series model. This approach may in fact lead to both conceptual and numerical problems, since any time trend components of the model are incoherently treated as both model parameters and realizations of a stochastic process. We suggest that this general two-step approach can be improved in the following way: first, we assume a stochastic process form for the time trend component. The corresponding transition densities are then incorporated into the likelihood, and the model parameters are estimated using the Expectation-Maximization algorithm. We illustrate the modelling procedure by fitting the model to Swedish disability claims data.

  • 105.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Löfdahl, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Aggregation of 1-year risks in life and disability insurance. 2016. In: Annals of Actuarial Science, ISSN 1748-4995, E-ISSN 1748-5002, Vol. 10, no 2, p. 203-221. Article in journal (Refereed)
    Abstract [en]

    We consider large insurance portfolios consisting of life or disability insurance policies that are assumed independent, conditional on a stochastic process representing the economic-demographic environment. Using the conditional law of large numbers, we show that when the portfolio of liabilities becomes large enough, its value on a delta-year horizon can be approximated by a functional of the environment process. Based on this representation, we derive a semi-analytical approximation of the systematic risk quantiles of the future liability value for a homogeneous portfolio when the environment is represented by a one-factor diffusion process. For the multi-factor diffusion case, we propose two different risk aggregation techniques for a portfolio consisting of large, homogeneous pools. We give numerical results comparing the resulting capital charges with the Solvency II standard formula, based on disability claims data from the Swedish insurance company Folksam.

  • 106.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Löfdahl, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Aggregation of one-year risks in life and disability insurance. Manuscript (preprint) (Other academic)
  • 107.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Löfdahl, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Nonlinear reserving in life insurance: aggregation and mean-field approximation. Manuscript (preprint) (Other academic)
  • 108.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Löfdahl, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Risk aggregation and stochastic claims reserving in disability insurance. 2014. In: Insurance, Mathematics & Economics, ISSN 0167-6687, E-ISSN 1873-5959, Vol. 59, p. 100-108. Article in journal (Refereed)
    Abstract [en]

    We consider a large, homogeneous portfolio of life or disability annuity policies. The policies are assumed to be independent conditional on an external stochastic process representing the economic-demographic environment. Using a conditional law of large numbers, we establish the connection between claims reserving and risk aggregation for large portfolios. Further, we derive a partial differential equation for moments of present values. Moreover, we show how statistical multi-factor intensity models can be approximated by one-factor models, which allows for solving the PDEs very efficiently. Finally, we give a numerical example where moments of present values of disability annuities are computed using finite-difference methods and Monte Carlo simulations.

  • 109.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Rinne, Jonas
    Can stocks help mend the asset and liability mismatch? 2010. In: Scandinavian Actuarial Journal, ISSN 0346-1238, E-ISSN 1651-2030, no 2, p. 148-160. Article in journal (Refereed)
    Abstract [en]

    Stocks are generally used to provide higher returns in the long run. But the dramatic fall in equity prices at the beginning of this century, triggering large underfundings in pension plans, raised the question as to whether stocks can really help mend the asset and liability mismatch. To understand some aspects of this topical issue, we examine whether existing major equity indexes can close this gap, given the liability profile of a typical pension fund. We also compare the non-market capitalization weighted equity indexes recently introduced as Research Affiliates Fundamental Indexes® (RAFI®) with traditional market capitalization weighted equity indexes from an asset and liability management perspective. The analysis of the behavior of the solvency ratio clearly indicates that interest rate sensitive stocks have a large potential to improve the link between assets and liabilities. Compared with market capitalization weighted equity indexes, RAFI® shows a substantially better potential to mend the asset and liability mismatch, while also improving returns.

  • 110. Doll, Jim
    et al.
    Dupuis, Paul
    Nyquist, Pierre
    A large deviation analysis of certain qualitative properties of parallel tempering and infinite swapping algorithms. 2018. In: Applied Mathematics and Optimization, ISSN 0095-4616, E-ISSN 1432-0606, Vol. 78, no 1, p. 103-144. Article in journal (Refereed)
    Abstract [en]

    Parallel tempering, or replica exchange, is a popular method for simulating complex systems. The idea is to run parallel simulations at different temperatures, and at a given swap rate exchange configurations between the parallel simulations. From the perspective of large deviations it is optimal to let the swap rate tend to infinity and it is possible to construct a corresponding simulation scheme, known as infinite swapping. In this paper we propose a novel use of large deviations for empirical measures for a more detailed analysis of the infinite swapping limit in the setting of continuous time jump Markov processes. Using the large deviations rate function and associated stochastic control problems we consider a diagnostic based on temperature assignments, which can be easily computed during a simulation. We show that the convergence of this diagnostic to its a priori known limit is a necessary condition for the convergence of infinite swapping. The rate function is also used to investigate the impact of asymmetries in the underlying potential landscape, and where in the state space poor sampling is most likely to occur.
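
    The paper's analysis concerns the infinite swap-rate limit; for orientation, here is a minimal parallel tempering sketch on a double-well potential with two temperatures (all parameter choices below are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(6)

def U(x):
    """Double-well potential with minima near x = -1 and x = +1."""
    return 4.0 * (x**2 - 1.0) ** 2

betas = np.array([4.0, 0.5])   # inverse temperatures: cold chain, hot chain
x = np.array([-1.0, -1.0])     # both chains start in the left well
visits = []
for it in range(20000):
    # Metropolis move at each temperature
    prop = x + 0.5 * rng.standard_normal(2)
    acc = np.log(rng.random(2)) < -betas * (U(prop) - U(x))
    x = np.where(acc, prop, x)
    # swap move: exchange configurations between the two temperatures
    if rng.random() < 0.1:
        d = (betas[0] - betas[1]) * (U(x[0]) - U(x[1]))
        if np.log(rng.random()) < d:
            x = x[::-1].copy()
    visits.append(x[0] > 0.0)
frac_right = np.mean(visits[5000:])   # cold chain's share of time in the right well
```

    Without the swap moves the cold chain (β = 4) essentially never crosses the barrier at x = 0 in this many steps; with them it samples both wells.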

  • 111. Doll, Jim
    et al.
    Dupuis, Paul
    Nyquist, Pierre
    Thermodynamic integration methods, infinite swapping and the calculation of generalized averages. 2017. In: Journal of Chemical Physics, ISSN 0021-9606, E-ISSN 1089-7690, Vol. 146. Article in journal (Refereed)
    Abstract [en]

    In the present paper we examine the risk-sensitive and sampling issues associated with the problem of calculating generalized averages. By combining thermodynamic integration and Stationary Phase Monte Carlo techniques, we develop an approach for such problems and explore its utility for a prototypical class of applications.

  • 112. Doorn, N.
    et al.
    Hansson, Sven Ove
    KTH, School of Architecture and the Built Environment (ABE), Philosophy and History of Technology, Philosophy.
    Design for the value of safety. 2015. In: Handbook of Ethics, Values, and Technological Design: Sources, Theory, Values and Application Domains, Springer Netherlands, p. 491-511. Chapter in book (Other academic)
    Abstract [en]

    Two major methods for achieving safety in engineering design are compared: safety engineering and probabilistic risk analysis. Safety engineering employs simple design principles or rules of thumb such as inherent safety, multiple barriers, and numerical safety margins to reduce the risk of accidents. Probabilistic risk analysis combines the probabilities of individual events in event chains leading to accidents in order to identify design elements in need of improvement and often also to optimize the use of resources. It is proposed that the two methodologies should be seen as complementary rather than as competitors. Probabilistic risk analysis is at its advantage when meaningful probability estimates are available for most of the major events that may contribute to an accident. Safety engineering principles are more suitable to deal with uncertainties that defy quantification. In many design tasks, the combined use of both methodologies is preferable.

  • 113. Douc, Randal
    et al.
    Moulines, Eric
    Olsson, Jimmy
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Long-term stability of sequential Monte Carlo methods under verifiable conditions. 2014. In: The Annals of Applied Probability, ISSN 1050-5164, E-ISSN 2168-8737, Vol. 24, no 5, p. 1767-1802. Article in journal (Refereed)
    Abstract [en]

    This paper discusses particle filtering in general hidden Markov models (HMMs) and presents novel theoretical results on the long-term stability of bootstrap-type particle filters. More specifically, we establish that the asymptotic variance of the Monte Carlo estimates produced by the bootstrap filter is uniformly bounded in time. In contrast to most previous results of this type, which in general presuppose that the state space of the hidden state process is compact (an assumption that is rarely satisfied in practice), our very mild assumptions are satisfied for a large class of HMMs with possibly non-compact state space. In addition, we derive a similar time-uniform bound on the asymptotic L-p error. Importantly, our results hold for misspecified models; that is, we do not assume that the data entering the particle filter originate from the model governing the dynamics of the particles, or even from an HMM at all.
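
    As a concrete reference point for what a bootstrap particle filter is, here is a minimal version for a toy linear-Gaussian HMM (parameters are illustrative; the paper's results concern far more general, possibly misspecified models):

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 100, 1000
phi, sw, sv = 0.9, 0.5, 1.0

# simulate an AR(1) hidden state observed in Gaussian noise
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t-1] + sw * rng.standard_normal()
y = x + sv * rng.standard_normal(T)

# bootstrap filter: propagate with the dynamics, weight by the likelihood
p = rng.standard_normal(N)
means = np.zeros(T)
for t in range(T):
    p = phi * p + sw * rng.standard_normal(N)     # mutation step
    logw = -0.5 * ((y[t] - p) / sv) ** 2          # observation log-weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    means[t] = np.sum(w * p)                      # filtered mean estimate
    p = rng.choice(p, size=N, p=w)                # multinomial resampling
rmse = np.sqrt(np.mean((means - x) ** 2))
```

    The filtered means track the hidden state markedly better than the raw observations do; the paper's contribution is that such Monte Carlo estimates stay uniformly accurate over arbitrarily long time horizons.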

  • 114. Douc, Randal
    et al.
    Moulines, Eric
    Rydén, Tobias
    Lund University.
    Asymptotic properties of the maximum likelihood estimator in autoregressive models with Markov regime. 2004. In: Annals of Statistics, ISSN 0090-5364, E-ISSN 2168-8966, Vol. 32, no 5, p. 2254-2304. Article in journal (Refereed)
    Abstract [en]

    An autoregressive process with Markov regime is an autoregressive process for which the regression function at each time point is given by a nonobservable Markov chain. In this paper we consider the asymptotic properties of the maximum likelihood estimator in a possibly nonstationary process of this kind for which the hidden state space is compact but not necessarily finite. Consistency and asymptotic normality are shown to follow from uniform exponential forgetting of the initial distribution for the hidden Markov chain conditional on the observations.

  • 115.
    Drugge, Daniel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Allocation Methods for Alternative Risk Premia Strategies. 2014. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    We use regime switching and regression tree methods to evaluate performance in the risk premia strategies provided by Deutsche Bank and constructed from U.S. research data from the Fama French library. The regime switching method uses the Baum-Welch algorithm at its core and splits return data into a normal and a turbulent regime. Each regime is independently evaluated for risk and the estimates are then weighted together according to the expected value of the proceeding regime. The regression tree methods identify macro-economic states in which the risk premia perform well or poorly and use these results to allocate between risk premia strategies. The regime switching method proves to be mostly unimpressive but has its results boosted by investing less into risky assets as the probability of an upcoming turbulent regime becomes larger. This proves to be highly effective for all time periods and for both data sources. The regression tree method proves the most effective when making the assumption that we know all macro-economic data the same month as it is valid for. Since this is an unrealistic assumption the best method seems to be to evaluate the performance of the risk premia strategy using macro-economic data from the previous quarter.
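
    The thesis estimates the regimes with the Baum-Welch algorithm; the filtering step that assigns regime probabilities to each return can be sketched as follows, assuming a two-state Gaussian HMM with known, made-up parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

# two-regime returns: calm (std 1%) and turbulent (std 4%)
A = np.array([[0.95, 0.05],    # transition matrix: calm -> {calm, turbulent}
              [0.10, 0.90]])   #                    turbulent -> {calm, turbulent}
sig = np.array([0.01, 0.04])
states = np.concatenate([np.zeros(200, int), np.ones(200, int)])
r = rng.normal(0.0, sig[states])

def filter_probs(r, A, sig):
    """Forward (filtering) recursion of an HMM: P(state_t | r_1..t)."""
    p = np.array([0.5, 0.5])
    out = np.zeros((len(r), 2))
    for t, rt in enumerate(r):
        lik = np.exp(-0.5 * (rt / sig) ** 2) / sig   # Gaussian emission densities
        p = lik * (A.T @ p)                          # predict, then update
        p /= p.sum()
        out[t] = p
    return out

probs = filter_probs(r, A, sig)
```

    Here probs[t, 1] is the filtered probability of the turbulent regime at time t — the kind of quantity the thesis uses to scale down allocations to risky assets when a turbulent regime becomes likely.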

  • 116.
    Dufour Partanen, Bianca
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    On the Valuation of Contingent Convertibles (CoCos): Analytically Tractable First Passage Time Model for Pricing AT1 CoCos. 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Contingent Convertibles (CoCos) are a new type of hybrid debt instrument characterized by forced equity conversion or write-down under a specified trigger event, usually indicating a state of near non-viability of the issuer; CoCos in the Additional Tier 1 capital category have additional features such as possible coupon cancellation. In this thesis, the structure of CoCos is presented and different pricing approaches are introduced. A special focus is put on structural models with the Analytically Tractable First Passage Time (AT1P) Model and its extensions. Two models are applied to the write-down CoCo issued by Svenska Handelsbanken, starting with the equity derivative model and followed by the AT1P model.

  • 117.
    Edberg, Erik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Prediktering av VD-löner i svenska onoterade aktiebolag (Prediction of CEO remuneration in unlisted Swedish limited companies). 2015. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]


    The CEO's remuneration is, in contrast to that of unionized labour, set individually and independently of union agreements. The company board determines the remuneration, basing it on an estimated valuation of variables such as job characteristics, personal qualities of the CEO, market valuation of similar tasks, and the availability of possible candidates.

    The purpose of this thesis is to create a model to predict the market remuneration for a current or forthcoming CEO. Further, the compensation structure will be examined, aiming to find the compensation structure that maximizes the CEO’s performance.

    This thesis shows that it is possible to predict the remuneration of employed CEOs in unlisted corporations with an explanation rate of 64 percent. The variance is explained by six covariates: four representing job characteristics and two related to company performance. The highest explanation rate comes from the covariate turnover, which alone explains just below 40 percent of the remuneration variance.

    This study shows that the optimal compensation structure differs between companies. Further, recommendations are given for what the variable remuneration should be based on in order to maximize the CEO's performance.

  • 118.
    Ekeberg, Magnus
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Detecting contacts in protein folds by solving the inverse Potts problem - a pseudolikelihood approach. 2012. Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]


    Spatially proximate amino acid positions in a protein tend to co-evolve, so a protein's 3D-structure leaves an echo of correlations in the evolutionary record. Reverse engineering 3D-structures from such correlations is an open problem in structural biology, pursued with increasing vigor as new protein sequences continue to fill the data banks. Within this task lies a statistical stumbling block, rooted in the following: correlation between two amino acid positions can arise from firsthand interaction, but also be network-propagated via intermediate positions; observed correlation is not enough to guarantee proximity. The remedy, and the focus of this thesis, is to mathematically untangle the crisscross of correlations and extract direct interactions, which enables a clean depiction of co-evolution among the positions.

    Recently, analysts have used maximum-entropy modeling to recast this cause-and-effect puzzle as parameter learning in a Potts model (a kind of Markov random field). Unfortunately, a computationally expensive partition function puts this out of reach of straightforward maximum-likelihood estimation. Mean-field approximations have been used, but an arsenal of other approximate schemes exists. In this work, we re-implement an existing contact-detection procedure and replace its mean-field calculations with pseudo-likelihood maximization. We then feed both routines real protein data and highlight differences between their respective outputs. Our new program seems to offer a systematic boost in detection accuracy.
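
    The thesis applies pseudolikelihood maximization to Potts models on protein data; the idea can be illustrated on the smallest nontrivial case, a 3-spin Ising model with made-up couplings, where exact sampling is possible by enumerating all states:

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

# true couplings of a 3-spin Ising model (J13 = 0: no direct 1-3 interaction)
J = np.array([[0.0, 0.8, 0.0],
              [0.8, 0.0, 0.4],
              [0.0, 0.4, 0.0]])

# exact sampling by enumerating all 2^3 spin configurations
states = np.array(list(itertools.product([-1, 1], repeat=3)), float)
energy = -0.5 * np.einsum('si,ij,sj->s', states, J, states)
prob = np.exp(-energy)
prob /= prob.sum()
data = states[rng.choice(len(states), size=20000, p=prob)]

# pseudolikelihood maximization: gradient ascent on sum_i log P(s_i | s_-i)
Jhat = np.zeros((3, 3))
for _ in range(500):
    H = data @ Jhat                 # local fields h_i = sum_j J_ij s_j
    resid = data - np.tanh(H)       # s_i - E[s_i | s_-i]
    grad = (resid.T @ data + data.T @ resid) / len(data)
    np.fill_diagonal(grad, 0.0)
    Jhat += 0.05 * grad
```

    The direct couplings are recovered, including the absence of a direct 1-3 interaction, even though spins 1 and 3 are correlated through spin 2 — exactly the direct-versus-propagated distinction the thesis is after.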

  • 119.
    El Menouni, Zakaria
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Pricing Interest Rate Derivatives in the Multi-Curve Framework with a Stochastic Basis. 2015. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The financial crisis of 2007/2008 brought about many changes in the interest rate market in particular, as it forced practitioners to review and modify the former pricing procedures and methodologies. As a consequence, the Multi-Curve framework has been adopted to deal with the inconsistencies of the frameworks used so far, namely the single-curve method.

    We propose to study this new framework in detail by focusing on a set of interest rate derivatives such as deposits, swaps and caplets. We then explore a stochastic approach to model the Libor-OIS basis spread, which has appeared since the beginning of the crisis and is now the quantity of interest to which many researchers dedicate their work (F. Mercurio, M. Bianchetti and others).

    A discussion follows this study to shed light on the challenges and difficulties related to the modeling of the basis spread.


  • 120.
    Eliasson, Daniel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Game contingent claims. 2012. Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]


    Game contingent claims (GCCs), as introduced by Kifer (2000), are a generalization of American contingent claims where the writer has the opportunity to terminate the contract, and must then pay the intrinsic option value plus a penalty. In complete markets, GCCs are priced using no-arbitrage arguments as the value of a zero-sum stochastic game of the type described in Dynkin (1969). In incomplete markets, the neutral pricing approach of Kallsen and Kühn (2004) can be used.

    In Part I of this thesis, we introduce GCCs and their pricing, and also cover some basics of mathematical finance. In Part II, we present a new algorithm for valuing game contingent claims. This algorithm generalises the least-squares Monte-Carlo method for pricing American options of Longstaff and Schwartz (2001). Convergence proofs are obtained, and the algorithm is tested against certain GCCs. A more efficient algorithm is derived from the first one using the computational complexity analysis technique of Chen and Shen (2003).

    The algorithms were found to give good results with reasonable time requirements. Reference implementations of both algorithms are available for download from the author's Github page: https://github.com/del/Game-option-valuation-library
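
    For reference, the Longstaff and Schwartz (2001) least-squares Monte Carlo method that the thesis generalises can be sketched in its base form, pricing a plain American put (market parameters below are illustrative; this is not the thesis's game-option algorithm):

```python
import numpy as np

rng = np.random.default_rng(4)
S0, K, r, sigma, T = 100.0, 100.0, 0.06, 0.2, 1.0
n_steps, n_paths = 50, 20000
dt = T / n_steps
disc = np.exp(-r * dt)

# simulate GBM paths under the risk-neutral measure
z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1))
S = np.hstack([np.full((n_paths, 1), S0), S])

def payoff(s):
    return np.maximum(K - s, 0.0)

# backward induction with a quadratic regression basis on in-the-money paths
cash = payoff(S[:, -1])
for t in range(n_steps - 1, 0, -1):
    cash *= disc                                   # discount one step back
    itm = payoff(S[:, t]) > 0
    x = S[itm, t]
    A = np.column_stack([np.ones_like(x), x, x**2])
    coef, *_ = np.linalg.lstsq(A, cash[itm], rcond=None)
    cont = A @ coef                                # estimated continuation value
    cash[itm] = np.where(payoff(x) > cont, payoff(x), cash[itm])
amer = disc * cash.mean()
euro = np.exp(-r * T) * payoff(S[:, -1]).mean()    # same paths, no early exercise
```

    The regression replaces the unknown continuation value at each exercise date; the game-option extension in the thesis adds the writer's cancellation decision (payoff plus penalty) on top of this recursion.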

  • 121.
    Engsner, Hampus
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A PIT-Based Approach to Validation of Electricity Spot Price Models. 2015. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The modeling of electricity spot prices is still in its early stages, with various competing models being proposed by different researchers. This makes model evaluation and comparison an important research area, for practitioners and researchers alike. However, the literature shows a distinct lack of consensus regarding tools to assess model validity, with different researchers using methods of varying suitability for validation. In this thesis the current landscape of electricity spot price models, and how they are currently evaluated, is mapped out. Then, as the main contribution this research aims to make, a general and flexible framework for model validation is proposed, based on the Probability Integral Transform (PIT). The probability integral transform, which can be seen as a generalization of analyzing residuals in simple time series and regression models, transforms the realizations of a time series into independent and identically distributed U(0,1) variables using the conditional distributions of the time series. With this method, testing model validity reduces to testing whether the PIT values are independent and identically distributed U(0,1) variables. The thesis concludes by testing spot price models of varying validity, according to previous research, against actual spot price data using this framework. These empirical tests suggest that PIT-based model testing does indeed point us toward the more suitable models, with especially unsuitable models being rejected by a large margin.
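
    A minimal version of the PIT check can be sketched on a toy AR(1) model: transform each observation with its conditional model CDF and measure how far the resulting values are from U(0,1) (the Kolmogorov-Smirnov distance below is hand-rolled; all parameters are illustrative):

```python
import math
import numpy as np

rng = np.random.default_rng(5)
phi, s, n = 0.8, 1.0, 2000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t-1] + s * rng.standard_normal()

def pit(x, phi, s):
    """PIT values: the conditional model CDF evaluated at the realizations."""
    z = (x[1:] - phi * x[:-1]) / s
    return 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))

def ks_uniform(u):
    """Kolmogorov-Smirnov distance between the PIT values and U(0,1)."""
    u = np.sort(u)
    k = np.arange(1, len(u) + 1)
    return np.max(np.maximum(k / len(u) - u, u - (k - 1) / len(u)))

d_true = ks_uniform(pit(x, 0.8, 1.0))   # correctly specified model
d_bad = ks_uniform(pit(x, 0.8, 2.0))    # volatility misspecified by 2x
```

    The correctly specified model yields near-uniform PIT values, while the model with misspecified volatility is rejected by a wide margin, which is the pattern the thesis reports for unsuitable spot price models.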

  • 122.
    Eriksson, André
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Anomaly Detection in Machine-Generated Data: A Structured Approach. 2013. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Anomaly detection is an important issue in data mining and analysis, with applications in almost every area in science, technology and business that involves data collection. The development of general anomaly detection techniques can therefore have a large impact on data analysis across many domains. In spite of this, little work has been done to consolidate the different approaches to the subject.

    In this report, this deficiency is addressed in the target domain of temporal machine-generated data. To this end, new theory for comparing and reasoning about anomaly detection tasks and methods is introduced, which facilitates a problem-oriented rather than a method-oriented approach to the subject. Using this theory as a basis, the possible approaches to anomaly detection in the target domain are discussed, and a set of interesting anomaly detection tasks is highlighted.

    One of these tasks is selected for further study: the detection of subsequences that are anomalous with regards to their context within long univariate real-valued sequences. A framework for relating methods derived from this task is developed, and is used to derive new methods and an algorithm for solving a large class of derived problems. Finally, a software implementation of this framework along with a set of evaluation utilities is discussed and demonstrated.

  • 123. Eriksson, Kimmo
    et al.
    Jansson, Fredrik
    Sjöstrand, Jonas
    Stockholms universitet.
    Bentley's conjecture on popularity toplist turnover under random copying. 2010. In: The Ramanujan Journal, ISSN 1382-4090, E-ISSN 1572-9303, Vol. 23, p. 371-396. Article in journal (Refereed)
    Abstract [en]

    Bentley et al. studied the turnover rate in popularity toplists in a 'random copying' model of cultural evolution. Based on simulations of a model with population size N, list length ℓ and invention rate μ, they conjectured a remarkably simple formula for the turnover rate: ℓ√μ. Here we study an overlapping generations version of the random copying model, which can be interpreted as a random walk on the integer partitions of the population size. In this model we show that the conjectured formula, after a slight correction, holds asymptotically.

  • 124.
    Eriksson, Kimmo
    et al.
    Mälardalen University, School of Education, Culture and Communication.
    Sjöstrand, Jonas
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Limiting shapes of birth-and-death processes on Young diagrams. 2012. In: Advances in Applied Mathematics, ISSN 0196-8858, E-ISSN 1090-2074, Vol. 48, no 4, p. 575-602. Article in journal (Refereed)
    Abstract [en]

    We consider a family of birth processes and birth-and-death processes on Young diagrams of integer partitions of n. This family incorporates three famous models from very different fields: Rost's totally asymmetric particle model (in discrete time), Simon's urban growth model, and Moran's infinite alleles model. We study stationary distributions and limit shapes as n tends to infinity, and present a number of results and conjectures.

  • 125. Ezquiaga, J. M.
    et al.
    Zumalacárregui, Miguel
    KTH, Centres, Nordic Institute for Theoretical Physics NORDITA.
    Dark Energy after GW170817: Dead Ends and the Road Ahead. 2017. In: Physical Review Letters, ISSN 0031-9007, E-ISSN 1079-7114, Vol. 119, no 25, article id 251304. Article in journal (Refereed)
    Abstract [en]

    Multimessenger gravitational-wave (GW) astronomy has commenced with the detection of the binary neutron star merger GW170817 and its associated electromagnetic counterparts. The almost coincident observation of both signals places an exquisite bound on the GW speed, |c_g/c - 1| ≤ 5×10^-16. We use this result to probe the nature of dark energy (DE), showing that a large class of scalar-tensor theories and DE models are highly disfavored. As an example we consider the covariant Galileon, a cosmologically viable, well motivated gravity theory which predicts a variable GW speed at low redshift. Our results eliminate any late-universe application of these models, as well as their Horndeski and most of their beyond-Horndeski generalizations. Three alternatives (and their combinations) emerge as the only possible scalar-tensor DE models: (1) restricting Horndeski's action to its simplest terms, (2) applying a conformal transformation which preserves the causal structure, and (3) compensating the different terms that modify the GW speed (to be robust, the compensation has to be independent of the background on which GWs propagate). Our conclusions extend to any other gravity theory predicting a varying c_g, such as Einstein-Aether, Hořava gravity, Generalized Proca, tensor-vector-scalar gravity (TeVeS), and other MOND-like gravities.

  • 126.
    Forsman, Mikael
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A Model Implementation of Incremental Risk Charge2012Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

In 2009 the Basel Committee on Banking Supervision released the final guidelines for computing capital for the Incremental Risk Charge, which is a complement to the traditional Value at Risk intended to measure the migration risk and the default risk in the trading book. Before Basel III, banks will have to develop their own Incremental Risk Charge models following these guidelines. The development of such a model, which computes the capital charge for a portfolio of corporate bonds, is described in this thesis. Essential input parameters like the credit ratings of the underlying issuers, credit spreads, recovery rates at default, liquidity horizons and correlations among the positions in the portfolio will be discussed. Also required in the model is the transition matrix with probabilities of migrating between different credit states, which is estimated from historical data from Moody's rating institute. Several sensitivity analyses and stress tests are then made by generating different scenarios, running them in the model and comparing the results of these tests to a base case. As it turns out, the default risk accounts for the largest part of the Incremental Risk Charge.
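The rating-migration simulation at the core of such a model can be sketched as follows. The transition matrix below is purely illustrative (not Moody's estimates), with rating states A, B, C and an absorbing default state D:

```python
import random

# Hypothetical one-year rating transition matrix (illustrative numbers only).
STATES = ["A", "B", "C", "D"]
P = {
    "A": [0.90, 0.07, 0.02, 0.01],
    "B": [0.05, 0.85, 0.07, 0.03],
    "C": [0.01, 0.09, 0.80, 0.10],
    "D": [0.00, 0.00, 0.00, 1.00],  # default is absorbing
}

def migrate(state, rng):
    """Draw next-year rating from the transition row of `state`."""
    u = rng.random()
    cum = 0.0
    for s, p in zip(STATES, P[state]):
        cum += p
        if u < cum:
            return s
    return STATES[-1]

def default_prob(start, horizon, n_paths=100_000, seed=1):
    """Estimate the probability of default within `horizon` years by simulation."""
    rng = random.Random(seed)
    defaults = 0
    for _ in range(n_paths):
        s = start
        for _ in range(horizon):
            s = migrate(s, rng)
        if s == "D":
            defaults += 1
    return defaults / n_paths
```

In a full IRC model the simulated rating paths would then be combined with credit spreads, recovery rates and liquidity horizons to revalue the bond portfolio at the capital horizon.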

  • 127. Fosgerau, M.
    et al.
    Lindberg, P. O.
    Mattsson, Lars-Göran
    KTH, School of Architecture and the Built Environment (ABE), Transport Science. KTH, School of Architecture and the Built Environment (ABE), Centres, Centre for Transport Studies, CTS.
    Weibull, Jörgen
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.). Stockholm School of Economics, Sweden.
    A note on the invariance of the distribution of the maximum2018In: Journal of Mathematical Economics, ISSN 0304-4068, E-ISSN 1873-1538, Vol. 74, p. 56-61Article in journal (Refereed)
    Abstract [en]

    Many models in economics involve discrete choices where a decision-maker selects the best alternative from a finite set. Viewing the array of values of the alternatives as a random vector, the decision-maker draws a realization and chooses the alternative with the highest value. The analyst is then interested in the choice probabilities and in the value of the best alternative. The random vector has the invariance property if the distribution of the value of a specific alternative, conditional on that alternative being chosen, is the same, regardless of which alternative is considered. This note shows that the invariance property holds if and only if the marginal distributions of the random components are positive powers of each other, even when allowing for quite general statistical dependence among the random components. We illustrate the analytical power of the invariance property by way of examples.
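For additive random utilities with i.i.d. Gumbel noise (the logit case, in which all marginals are indeed positive powers of each other), the invariance property can be checked by simulation; this is an illustrative sketch and the function names are our own:

```python
import random, math

def gumbel(rng):
    """Standard Gumbel draw via the inverse CDF."""
    return -math.log(-math.log(rng.random()))

def conditional_max_means(mus, n=200_000, seed=7):
    """Mean of the maximum value, conditional on each alternative being chosen."""
    rng = random.Random(seed)
    sums = [0.0] * len(mus)
    counts = [0] * len(mus)
    for _ in range(n):
        vals = [m + gumbel(rng) for m in mus]
        i = max(range(len(vals)), key=vals.__getitem__)
        sums[i] += vals[i]
        counts[i] += 1
    return [s / c for s, c in zip(sums, counts)]
```

The conditional means agree up to Monte Carlo error even though the alternatives have very different choice probabilities, which is exactly the invariance property; in the Gumbel case each equals log(Σ exp μᵢ) plus the Euler-Mascheroni constant.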

  • 128.
    Fransson, Viktor
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Graphical lasso for covariance structure learning in the high dimensional setting2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis considers the estimation of undirected Gaussian graphical models especially in the high dimensional setting where the true observations are assumed to be non-Gaussian distributed.

The first aim is to present and compare the performances of existing Gaussian graphical model estimation methods. Furthermore, since the models rely heavily on the normality assumption, various methods for relaxing that assumption are presented. In addition to the existing methods, a modified version of the joint graphical lasso method is introduced which capitalizes on the strengths of the community Bayes method. The community Bayes method is used to partition the features (or variables) of datasets consisting of several classes into several communities which are estimated to be mutually independent within each class, which allows the calculations of the joint graphical lasso method to be split into several smaller parts. The method is also inspired by the cluster graphical lasso and is applicable to both Gaussian and non-Gaussian data, provided that the normality assumption is relaxed.

Results show that the introduced cluster joint graphical lasso method outperforms competing methods, producing graphical models which are easier to comprehend due to the added information obtained from the clustering step of the method. The cluster joint graphical lasso is applied to a real dataset consisting of p = 12582 features, which resulted in a computational gain of a factor of 35 compared to the competing method, which is very significant when analysing large datasets. The method also allows for parallelization, where computations can be spread across several computers, greatly increasing the computational efficiency.
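The block-splitting idea behind the reported speed-up can be illustrated with the exact-thresholding result for the graphical lasso (Witten et al.; Mazumder and Hastie): the connected components of the sample covariance thresholded at the penalty level coincide with the block-diagonal structure of the solution, so each block can be estimated independently. A minimal stdlib sketch (function names are our own):

```python
import random

def sample_cov(data):
    """Sample covariance matrix of `data` (rows = observations)."""
    n, p = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(p)]
    cov = [[0.0] * p for _ in range(p)]
    for row in data:
        for j in range(p):
            for k in range(p):
                cov[j][k] += (row[j] - means[j]) * (row[k] - means[k]) / (n - 1)
    return cov

def connected_blocks(cov, lam):
    """Connected components of the graph with an edge where |cov[j][k]| > lam.

    By the exact-thresholding theorem these components match the block-diagonal
    structure of the graphical-lasso solution at penalty `lam`, so each block
    can be solved separately -- the source of the computational gain.
    """
    p = len(cov)
    parent = list(range(p))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for j in range(p):
        for k in range(j + 1, p):
            if abs(cov[j][k]) > lam:
                parent[find(j)] = find(k)
    groups = {}
    for j in range(p):
        groups.setdefault(find(j), []).append(j)
    return sorted(groups.values())
```

For two independent pairs of strongly correlated variables, the blocks found at a moderate penalty are exactly the two pairs, and each pair's precision matrix can then be estimated on its own.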

  • 129.
    Fröling, Anton
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Lahdo, Sandy
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A smoother and more up-to-date development of the income pension2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

For an apparatus as big as the pension system, financial stability is essential. An important feature of the existing pension system is the balance mechanism, which secures the stability of the system. The balance ratio is obtained by dividing the assets by the liabilities. When this ratio drops below 1.0000, it triggers the so-called automatic balancing.

While the existing pension system has achieved its goal of being financially stable, it has become clear that the indexation of the pensions during balancing periods has properties that are not optimal. In a short-term perspective, the income pension system is exposed to the risk of reacting with a lag, or reacting unnecessarily strongly. This gave rise to a new legislative proposal, issued by the government. The goal of the proposal is to obtain a smoother and more up-to-date development of the income pension, i.e. a shorter lag period, without jeopardizing the financial stability. In addition, it is also desirable to simplify and improve the existing calculation methods. In order to compare the existing calculation methods in the pension system with the new legislative proposal, a simplified model of the existing pension system and a modified version of it are created.

The results of this study show that the new legislative proposal decreases the volatility of the pensions and avoids the deepest valleys in the balance ratio. The development of the pension disbursements in the new system has a higher correlation with the development of the average pension-qualifying income than in the current system. Moreover, the results show that the new system has a shorter lag period, which makes the income pension system more up-to-date with the current economic and demographic situation.

The financial stability is still maintained, and the new system also handles variations in the inflation better than the current system.
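The balancing mechanism described above can be sketched in a stylized way (the exact indexation rules in the legislation are more involved; this is a minimal illustration with hypothetical numbers):

```python
def balance_ratio(assets, liabilities):
    """Balance ratio of the income pension system: assets over liabilities."""
    return assets / liabilities

def indexation(income_index_growth, ratio):
    """Stylized automatic balancing: during a balancing period the pensions
    follow the income index scaled down by the balance ratio."""
    if ratio < 1.0:
        return income_index_growth * ratio
    return income_index_growth
```

With assets of 9900 and liabilities of 10000 the ratio is 0.99, so a 2% income-index growth is dampened to roughly 1.0% until the system recovers above 1.0000.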

  • 130.
    Gallais, Arnaud
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    CPPI Structures on Funds Derivatives2011Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

With the ever-increasing complexity of financial markets and financial products, many investors now choose to benefit from a manager's expertise by investing in a fund. This fueled a rapid growth of the fund industry over the past decades, and the recent emergence of complex derivatives products written on underlying funds. The diversity (hedge funds, mutual funds, funds of funds, managed accounts…) and the particularities (liquidity, specific risks) of funds call for adapted models and suited risk management. This thesis aims at understanding the issues and difficulties met when dealing with such products. In particular, we will deal to a great extent with CPPI (Constant Proportion Portfolio Insurance) structures written on funds, which combine the specificities of funds with the particularities of such structures. Correctly assessing the corresponding market risks is a challenging issue, and is the subject of many investigations.
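The core CPPI allocation rule can be sketched as follows; this is a minimal illustration of the mechanism, not the full fund-linked structure discussed in the thesis:

```python
def cppi_allocation(portfolio, floor, multiplier):
    """CPPI rule: risky exposure is a multiple of the cushion above the floor,
    capped at the portfolio value and floored at zero.

    Returns (risky_exposure, safe_exposure)."""
    cushion = max(portfolio - floor, 0.0)
    risky = min(multiplier * cushion, portfolio)
    return risky, portfolio - risky
```

At each rebalancing date the risky exposure is the multiplier times the cushion; gap risk arises when the underlying fund falls faster than the structure can rebalance, for instance because of limited fund liquidity, which is one of the fund-specific risks the thesis highlights.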

  • 131.
    Georgelis, Nikos
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Nyberg, Mikael
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
A Scenario Based Allocation Model Using Entropy Pooling for Computing the Scenario Probabilities2013Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    We introduce a scenario based allocation model (SBAM) that uses entropy pooling for computing scenario probabilities. Compared to most other models that allow the investor to blend historical data with subjective views about the future, the SBAM does not require the investor to quantify a level of confidence in the subjective views.

     A quantitative test is performed on a simulated systematic fund offered by the fund company Informed Portfolio Management in Stockholm, Sweden. The simulated fund under study consists of four individual systematic trading strategies and the test is simulated on a monthly basis during the years 1986-2010.

We study how the selection of views might affect the SBAM portfolios by creating three systematic views and combining them in different variations into seven SBAM portfolios. We also compare how the size of the sample data affects the results.

     Furthermore, the SBAM is compared to more common allocation methods, namely an equally weighted portfolio and a portfolio optimization based only on historical data.

We find that the SBAM portfolios produced higher annual returns and information ratios than the equally weighted portfolio or the portfolio optimized only on historical data.

  • 132.
    Giertz Jonsson, Fredrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
Analysis and Optimization of a Portfolio of Catastrophe Bonds2014Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

This Master's Thesis in mathematical statistics has two major purposes: (i) to model and measure the risk associated with a special type of reinsurance contract, the catastrophe bond, and (ii) to analyze and develop methods of portfolio optimization suitable for a portfolio of catastrophe bonds. Two pathways of modeling potential catastrophe bond losses are analyzed: one method directly modeling potential contract losses and one method modeling the underlying contract-loss-governing variables. The first method is simple in structure but has the disadvantage that a dependence structure between the losses of different contracts cannot be introduced in a simple and flexible way. The second method uses a representation with a stochastic number of stochastic events, connected into a multivariate dependence structure using the theory of copulas.

Results show that the choice of risk measure is of great importance when analyzing catastrophe bonds and their related risks. As an example, the measure Value at Risk often fails to capture the essence of catastrophe bond risk, which in turn means that portfolio optimization with respect to it might systematically obscure risk. Two coherent risk measures were investigated: the spectral risk measure and Expected Shortfall. Both measures provide a good representation of the risk of a portfolio consisting of catastrophe bonds.

This thesis extends and applies a well-known optimization method for Conditional Value at Risk to obtain a method of optimization of spectral risk measures. The optimization results show that Expected Shortfall optimization leads to portfolios that are advantageous at the specific point at which they are optimized, but whose characteristics may be disadvantageous at other parts of the loss distribution. Portfolios optimized for the spectral risk measure were shown to possess good characteristics across the entire loss distribution. Optimization results were compared to the popular mean-variance portfolio optimization approach. The comparison shows that the mean-variance approach handles the special distribution of catastrophe bond losses in an over-simplistic way, and that it severely lacks the flexibility to focus on different aspects of risk. The spectral risk measure optimization procedure was demonstrated to be the most flexible and possibly the most appropriate way to optimize a portfolio of catastrophe bonds.
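The risk measures compared above can be computed empirically as below (a minimal sketch; the thesis's optimization machinery is not reproduced). The example also illustrates the abstract's point that VaR can miss binary, low-probability catastrophe losses that Expected Shortfall captures:

```python
def value_at_risk(losses, alpha):
    """Empirical VaR at level alpha: the alpha-quantile of the loss sample."""
    xs = sorted(losses)
    return xs[min(len(xs) - 1, int(alpha * len(xs)))]

def expected_shortfall(losses, alpha):
    """Empirical Expected Shortfall at level alpha: the mean of the worst
    (1 - alpha) fraction of the losses."""
    xs = sorted(losses, reverse=True)
    k = max(1, int(round((1 - alpha) * len(xs))))
    return sum(xs[:k]) / k

def spectral_risk(losses, weights):
    """Spectral risk measure: weights (summing to 1 and nondecreasing toward
    the worst outcomes) applied to the ascending-sorted losses."""
    xs = sorted(losses)  # ascending: the last entries are the worst
    return sum(w * x for w, x in zip(weights, xs))
```

For a stylized catastrophe bond that loses 100 with probability 1% and nothing otherwise, the empirical 95% VaR is 0 while the 95% Expected Shortfall is strictly positive, which is exactly the failure mode described above.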

  • 133.
    Gobeljic, Persa
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
Classification of Probability of Default and Rating Philosophies2012Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

Basel II consists of international recommendations on banking regulations, mainly concerning how much capital banks and other financial institutions should be made to set aside in order to protect themselves from various types of risks. Implementing Basel II involves estimating risks; one of the main measurements is the Probability of Default. Firm-specific and macroeconomic risks cause obligors to default, and separating the two risk factors makes it possible to define which of them affects the Probability of Default through the years. The aim of this thesis is to enable a separation of the risk variables in the structure of the Probability of Default in order to classify the rating philosophy.

  • 134.
    Grandell, Jan
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Schmidli, Hanspeter
    Ruin probabilities in a diffusion environment2011In: Journal of Applied Probability, ISSN 0021-9002, E-ISSN 1475-6072, Vol. 48A, p. 39-50Article in journal (Refereed)
    Abstract [en]

We consider an insurance model where the underlying point process is a Cox process. Using a martingale approach applied to diffusion processes, finite-time Lundberg inequalities are obtained. By change-of-measure techniques, Cramér-Lundberg approximations are derived.

  • 135.
    Grossman, Mikael
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Proposal networks in object detection2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

Locating and extracting useful data from images is a task that has been revolutionized in the last decade as computing power has risen to a level where deep neural networks can be used with success. A type of neural network that uses the convolution operation, called a convolutional neural network (CNN), is suited for image-related tasks. Using the convolution operation creates opportunities for the network to learn its own filters, which previously had to be hand-engineered. For locating objects in an image, the state-of-the-art Faster R-CNN model predicts objects in two parts. Firstly, the region proposal network (RPN) extracts regions from the picture where it is likely to find an object. Secondly, a detector verifies the likelihood of an object being in that region. For this thesis, we review the current literature on artificial neural networks, object detection methods and proposal methods, and present our new way of generating proposals. By replacing the RPN with our network, the multiscale proposal network (MPN), we increase the average precision (AP) by 12% and reduce the computation time per image by 10%.

  • 136.
    Gruselius, Hanna
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Generative Models and Feature Extraction on Patient Images and Structure Data in Radiation Therapy2018Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

This Master thesis focuses on generative models for medical patient data for radiation therapy. The objective of the project is to implement and investigate the characteristics of a Variational Autoencoder (VAE) applied to this diverse and versatile data. The questions this thesis aims to answer are: (i) whether the VAE can capture salient features of medical image data, and (ii) if these features can be used to compare similarity between patients. Furthermore, (iii) if the VAE network can successfully reconstruct its input and lastly (iv) if the VAE can generate artificial data having a reasonable anatomical appearance. The experiments carried out conveyed that the VAE is a promising method for feature extraction, since it appeared to ascertain similarity between patient images. Moreover, the reconstruction of training inputs demonstrated that the method is capable of identifying and preserving anatomical details. Regarding the generative abilities, the artificial samples generally conveyed fairly realistic anatomical structures. Future work could be to investigate the VAE's ability to generalize, with respect to both the amount of data and the probabilistic assumptions made.

  • 137.
    Grönberg, Fredrik
    et al.
    KTH, School of Engineering Sciences (SCI), Physics, Physics of Medical Imaging.
    Danielsson, Mats
    KTH, School of Engineering Sciences (SCI), Physics, Physics of Medical Imaging.
    Sjölin, Martin
    KTH, School of Engineering Sciences (SCI), Physics, Physics of Medical Imaging.
    Count statistics of nonparalyzable photon-counting detectors with nonzero pulse length2018In: Medical physics (Lancaster), ISSN 0094-2405, Vol. 45, no 8, p. 3800-3811Article in journal (Refereed)
    Abstract [en]

Purpose: Photon-counting detectors are expected to be the next big step in the development of medical computed tomography (CT). Accurate modeling of the behavior of photon-counting detectors in both low and high count rate regimes is important for accurate image reconstruction and detector performance evaluations. The commonly used ideal nonparalyzable (delta pulse) model is built on crude assumptions that make it unsuitable for predicting the behavior of photon-counting detectors at high count rates. The aim of this work is to present an analytical count statistics model that better describes the behavior of photon-counting detectors with nonzero pulse length. Methods: An analytical statistical count distribution model for nonparalyzable detectors with nonzero pulse length is derived using tools from statistical analysis. To validate the model, a nonparalyzable photon-counting detector is simulated using Monte Carlo methods for comparison. Image performance metrics are computed using the Fisher information metric, and a comparison between the proposed model, approximations of the proposed model, and those made by the ideal nonparalyzable model is presented and analyzed. Results: It is shown that the presented model agrees well with the results from the Monte Carlo simulation and is stable for varying x-ray beam qualities. It is also shown that a simple Gaussian approximation of the distribution can be used to accurately model the behavior and performance of nonparalyzable detectors with nonzero pulse length. Furthermore, the comparison of performance metrics shows that the proposed model predicts a very different behavior than the ideal nonparalyzable detector model, suggesting that the proposed model can fill an important gap in the understanding of pileup effects. Conclusions: An analytical model for the count statistics of a nonparalyzable photon-counting detector with nonzero pulse length is presented. The model agrees well with results obtained from Monte Carlo simulations and can be used to improve, speed up and simplify the modeling of photon-counting detectors.
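The nonparalyzable dead-time behavior discussed above can be reproduced with a small Monte Carlo sketch of our own. For Poisson arrivals of rate n, the classical steady-state registered rate of a nonparalyzable counter with dead time τ is m = n/(1 + nτ), and the simulation should agree with it:

```python
import random, math

def simulate_counts(rate, dead_time, duration, seed=3):
    """Simulate a nonparalyzable counter: after each registered count the
    detector is dead for `dead_time`; photons arriving meanwhile are lost."""
    rng = random.Random(seed)
    t, last, counts = 0.0, -math.inf, 0
    while True:
        t += rng.expovariate(rate)  # Poisson arrivals: exponential gaps
        if t > duration:
            return counts
        if t - last >= dead_time:
            counts += 1
            last = t

def classical_rate(rate, dead_time):
    """Classical nonparalyzable dead-time formula for the registered rate."""
    return rate / (1.0 + rate * dead_time)
```

At an incident rate of 1000 events per second and a dead time of 1 ms, roughly half the events are lost, which is the pileup regime where the delta-pulse idealization above breaks down for real detectors with nonzero pulse length.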

  • 138.
    Gudmundsson, Thorbjörn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Markov chain Monte Carlo for rare-event simulation in heavy-tailed settings2013Licentiate thesis, monograph (Other academic)
  • 139.
    Gudmundsson, Thorbjörn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Rare-event simulation with Markov chain Monte Carlo2015Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

Stochastic simulation is a popular method for computing probabilities or expectations where analytical answers are difficult to derive. It is well known that standard methods of simulation are inefficient for computing rare-event probabilities, and therefore more advanced methods are needed for those problems.

This thesis presents a new method based on a Markov chain Monte Carlo (MCMC) algorithm to effectively compute the probability of a rare event. The conditional distribution of the underlying process given that the rare event occurs has the probability of the rare event as its normalising constant. Using the MCMC methodology, a Markov chain is simulated, with that conditional distribution as its invariant distribution, and information about the normalising constant is extracted from its trajectory.

In the first two papers of the thesis, the algorithm is described in full generality and applied to four problems of computing rare-event probability in the context of heavy-tailed distributions. The assumption of heavy tails allows us to propose distributions which approximate the conditional distribution conditioned on the rare event. The first problem considers a random walk Y1 + · · · + Yn exceeding a high threshold, where the increments Y are independent and identically distributed and heavy-tailed. The second problem is an extension of the first one to a heavy-tailed random sum Y1 + · · · + YN exceeding a high threshold, where the number of increments N is random and independent of Y1, Y2, . . .. The third problem considers the solution Xm to a stochastic recurrence equation, Xm = AmXm−1 + Bm, exceeding a high threshold, where the innovations B are independent and identically distributed and heavy-tailed and the multipliers A satisfy a moment condition. The fourth problem is closely related to the third and considers the ruin probability for an insurance company with risky investments.

In the last two papers of this thesis, the algorithm is extended to the context of light-tailed distributions and applied to four problems. The light-tail assumption ensures the existence of a large deviation principle or Laplace principle, which in turn allows us to propose distributions which approximate the conditional distribution conditioned on the rare event. The first problem considers a random walk Y1 + · · · + Yn exceeding a high threshold, where the increments Y are independent and identically distributed and light-tailed. The second problem considers discrete-time Markov chains and the computation of general rare-event expectations of their sample paths. The third problem extends the discrete-time setting to Markov chains in continuous time. The fourth problem is closely related to the third and considers a birth-and-death process with spatial intensities and the computation of first passage probabilities.

An unbiased estimator of the reciprocal probability, with efficient rare-event properties, is constructed for each corresponding problem. The algorithms are illustrated numerically and compared to existing importance sampling algorithms.
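A minimal instance of this construction, for a walk of two Pareto increments, can be sketched as follows. A Gibbs-style chain keeps the conditional law given the rare event invariant (coordinate refreshes that leave the event are rejected), and the auxiliary density is taken as the law of the increments conditioned on the maximum exceeding the threshold, which is supported inside the event; all parameter choices here are illustrative:

```python
import random

def pareto(rng, alpha):
    """Pareto(alpha) draw on [1, inf) via the inverse CDF."""
    return (1.0 - rng.random()) ** (-1.0 / alpha)

def mcmc_rare_event_prob(b, alpha, n_steps=200_000, seed=11):
    """Estimate p = P(Y1 + Y2 > b) for iid Pareto(alpha) increments.

    The chain has the law of (Y1, Y2) given {Y1 + Y2 > b} as its invariant
    distribution.  Since {max(Y1, Y2) > b} lies inside the event, the chain
    average of 1{max > b} estimates P(max > b) / p, from which p is recovered.
    """
    rng = random.Random(seed)
    p1 = b ** (-alpha)                 # P(Y > b), known in closed form
    p_max = 1.0 - (1.0 - p1) ** 2      # P(max(Y1, Y2) > b)
    y = [b + 1.0, b + 1.0]             # start inside the event
    hits = 0
    for _ in range(n_steps):
        i = rng.randrange(2)
        prop = pareto(rng, alpha)
        if prop + y[1 - i] > b:        # reject proposals that leave the event
            y[i] = prop
        if max(y) > b:
            hits += 1
    return p_max * n_steps / hits
```

This mirrors the heavy-tailed "one big jump" heuristic: conditioned on the sum being large, one increment typically exceeds the threshold on its own, so the indicator is hit often and the estimator has low variance where crude Monte Carlo would need millions of samples.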

  • 140.
    Gudmundsson, Thorbjörn
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hult, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
Markov chain Monte Carlo for computing rare-event probabilities for a heavy-tailed random walk2014In: Journal of Applied Probability, ISSN 0021-9002, E-ISSN 1475-6072, Vol. 51, no 2, p. 359-376Article in journal (Refereed)
    Abstract [en]

    In this paper a method based on a Markov chain Monte Carlo (MCMC) algorithm is proposed to compute the probability of a rare event. The conditional distribution of the underlying process given that the rare event occurs has the probability of the rare event as its normalizing constant. Using the MCMC methodology, a Markov chain is simulated, with the aforementioned conditional distribution as its invariant distribution, and information about the normalizing constant is extracted from its trajectory. The algorithm is described in full generality and applied to the problem of computing the probability that a heavy-tailed random walk exceeds a high threshold. An unbiased estimator of the reciprocal probability is constructed whose normalized variance vanishes asymptotically. The algorithm is extended to random sums and its performance is illustrated numerically and compared to existing importance sampling algorithms.

  • 141.
    Gudmundsson, Thorbjörn
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hult, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Markov chain Monte Carlo for rare-event simulation for light-tailed random walkManuscript (preprint) (Other academic)
  • 142.
    Gudmundsson, Thorbjörn
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hult, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Markov chain Monte Carlo for rare-event simulation for Markov chainsManuscript (preprint) (Other academic)
  • 143.
    Gudmundsson, Thorbjörn
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hult, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Markov chain Monte Carlo for rare-event simulation for stochastic recurrence equations with heavy-tailed innovationsManuscript (preprint) (Other academic)
  • 144.
    Guinaudeau, Alexandre
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Estimating the probability of event occurrence2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

In complex systems, anomalous behaviors can occur intermittently and stochastically, making it hard to diagnose real errors among spurious ones. These errors are often hard to troubleshoot and require close attention, but troubleshooting each occurrence is time-consuming and not always an option.

    In this thesis, we define two different models to estimate the underlying probability of occurrence of an error, one based on binary segmentation and null hypothesis testing, and the other one based on hidden Markov models. Given a threshold level of confidence, these models are tuned to trigger alerts when a change is detected with sufficiently high probability.

We generated events drawn from Bernoulli distributions emulating these anomalous behaviors to benchmark the two candidate models. Both models have the same sensitivity, δp ≈ 10%, and delay, δt ≈ 100 observations, to detect change points. However, they do not generalize in the same way to broader problems and therefore provide two complementary solutions.
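The binary-segmentation side of such a detector can be sketched with a two-proportion z-statistic maximized over candidate split points (an illustrative sketch, not the thesis's exact models):

```python
import math

def best_split(xs):
    """One binary-segmentation step: find the split k maximizing the
    two-proportion z-statistic for a change in the Bernoulli rate of `xs`."""
    n, total = len(xs), sum(xs)
    prefix = [0]
    for x in xs:
        prefix.append(prefix[-1] + x)
    best_z, best_k = 0.0, None
    for k in range(10, n - 10):            # keep both segments non-trivial
        p1 = prefix[k] / k                 # rate before the candidate split
        p2 = (total - prefix[k]) / (n - k)  # rate after it
        p = total / n                      # pooled rate under the null
        se = math.sqrt(p * (1 - p) * (1 / k + 1 / (n - k)))
        if se > 0 and abs(p1 - p2) / se > best_z:
            best_z, best_k = abs(p1 - p2) / se, k
    return best_k, best_z
```

Alerting then amounts to recursing on each segment while the best z exceeds a threshold chosen from the desired level of confidence, e.g. z > 3 for roughly 99.7% confidence per test.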

  • 145.
    Gunnarsson, Simon
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
Curve Building and Swap Pricing in the Presence of Collateral and Basis Spreads2013Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

The eruption of the financial crisis in 2008 caused immense widening of both domestic and cross-currency basis spreads. Also, as a majority of all fixed income contracts are now collateralized, the funding cost of a financial institution may deviate substantially from the domestic Libor. In this thesis, a framework for pricing collateralized interest rate derivatives that accounts for the existence of non-negligible basis spreads is implemented. It is found that losses corresponding to several percent of the outstanding notional may arise as a consequence of not adapting to the new market conditions.

  • 146.
    Gunnvald, Patrik
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Joelsson, Viktor
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Analysis of Hedging Strategies for Hydro Power on the Nordic Power Market2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

Hydro power is the largest source of electricity generation in the Nordic region today.

This production is heavily dependent on the weather, since the weather dictates the terms for the availability and the amount of power to be produced. Vattenfall as a company has an incentive to avoid volatile revenue streams, as this facilitates economic planning and induces a positive effect on its credit rating, and thus also on its bottom line. Vattenfall is a large producer of hydro power with the possibility to move the power market, which adds further complexity to the problem. In this thesis the authors develop new hedging strategies which hedge more efficiently. By efficiency is meant the same risk, or standard deviation, at a lower cost, or, alternatively formulated, lower risk at the same cost. In order to enable comparison and make claims about efficiency, a reference solution is developed that reflects Vattenfall's current hedging strategy. To achieve higher efficiency we focus on finding dynamic hedging strategies. First a prototype model is suggested to facilitate the construction of the solution methods and to determine whether a further investigation is worthwhile. As the prototype model results showed substantial room for efficiency improvement, a larger main model with parameters estimated from data is constructed, which captures the real-world scenario much better. Four different solution methods are developed and applied to this main model setup. The results are then compared to the reference strategy. We find that, even though the efficiency gain was smaller than first expected from the prototype model results, using these new hedging strategies could reduce costs by 1.5%-5%. Although the final choice of hedging strategy might be down to the end user, we suggest the strategy called BW to reduce costs and improve efficiency. The paper also discusses, among other things, the solution methods and hedging strategies, the term optimality, and the impact of parameters in the model.

  • 147.
    Gustafsson, Alexander
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Wogenius, Sebastian
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Modelling Apartment Prices with the Multiple Linear Regression Model2014Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    This thesis examines the factors that are of greatest statistical significance for the sales prices of apartments in the Stockholm City Centre. The factors examined are address, area, balcony, construction year, elevator, fireplace, floor number, maisonette, monthly fee, penthouse and number of rooms. On the basis of this examination, a model for predicting the prices of apartments is constructed. In order to evaluate how the factors influence the price, this thesis analyses sales statistics; the mathematical method used is the multiple linear regression model. In a minor case study and literature review, included in this thesis, the relationship between proximity to public transport and the prices of apartments in Stockholm is examined.

    The result of this thesis is that it is possible to construct a model, from the factors analysed, which can predict the prices of apartments in the Stockholm City Centre with a degree of explanation of 91% and a 95% confidence interval of two million SEK. Furthermore, the conclusion can be drawn that the model predicts lower-priced apartments more accurately. In the case study and literature review, the result supports the hypothesis that proximity to public transport has a positive effect on the price of an apartment. However, such a variable should be regarded with caution, since the purpose of the modelling differs between an individual application and a socioeconomic application.
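The multiple linear regression approach described in the abstract can be sketched as follows. This is an illustrative example only: the data, the choice of regressors and their values are made up and are not taken from the thesis.

```python
# Hypothetical sketch of multiple linear regression for apartment prices:
# price is modelled as a linear combination of apartment attributes.
# All data below are synthetic, for illustration only.
import numpy as np

# Design matrix: [intercept, area (m^2), floor, monthly fee (SEK), elevator (0/1)]
X = np.array([
    [1.0, 45.0, 2.0, 2500.0, 0.0],
    [1.0, 60.0, 4.0, 3100.0, 1.0],
    [1.0, 72.0, 1.0, 3600.0, 0.0],
    [1.0, 38.0, 5.0, 2200.0, 1.0],
    [1.0, 95.0, 3.0, 4800.0, 1.0],
    [1.0, 55.0, 6.0, 2900.0, 0.0],
])
# Observed sale prices (SEK), also synthetic
y = np.array([3.2e6, 4.6e6, 5.0e6, 3.1e6, 7.4e6, 4.0e6])

# Ordinary least squares: beta minimises ||X beta - y||^2
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)

# Degree of explanation (R^2) on the training data
y_hat = X @ beta
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

With real data one would of course validate out of sample; the in-sample R² reported here corresponds to the "degree of explanation" quoted in the abstract.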

  • 148. Gustavsson, J
    et al.
    Näsman, P
    KTH, School of Architecture and the Built Environment (ABE), Centres, Centre for Transport Studies, CTS. KTH, School of Architecture and the Built Environment (ABE), Transport Science, Transport and Location Analysis.
    Some information about the activities at the Department of Statistics, Stockholm University, Sweden, 1990. Conference paper (Other academic)
  • 149. Haimi, Antti
    et al.
    Wennman, Aron
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    A central limit theorem for fluctuations in polyanalytic Ginibre ensembles, 2017. In: International Mathematics Research Notices, ISSN 1073-7928, E-ISSN 1687-0247, Vol. rnx147. Article in journal (Refereed)
    Abstract [en]

    We study fluctuations of linear statistics in polyanalytic Ginibre ensembles, a family of point processes describing planar free fermions in a uniform magnetic field at higher Landau levels. Our main result is asymptotic normality of fluctuations, extending a result of Rider and Virág. As in the analytic case, the variance is composed of independent terms from the bulk and the boundary. Our methods rely on a structural formula for polyanalytic polynomial Bergman kernels which separates out the different pure q-analytic kernels corresponding to different Landau levels. The fluctuations with respect to these pure q-analytic Ginibre ensembles are also studied, and a central limit theorem is proved. The results suggest a stabilizing effect on the variance when the different Landau levels are combined together.
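As a concrete point of reference (not from the paper itself), the classical analytic case q = 1 is the ordinary Ginibre ensemble, which can be sampled as the eigenvalues of a matrix with iid complex Gaussian entries. The sketch below samples it and evaluates one linear statistic; the choice f(z) = |z|² and the matrix size are illustrative assumptions.

```python
# Illustrative Monte Carlo sketch of the analytic (q = 1) Ginibre ensemble:
# eigenvalues of an n x n matrix with iid complex Gaussian entries, scaled so
# that they fill the unit disk roughly uniformly (the circular law).
import numpy as np

rng = np.random.default_rng(0)
n = 400

G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)
eigs = np.linalg.eigvals(G)

# A linear statistic: sum of f(lambda) over the eigenvalues, here f(z) = |z|^2.
# For points roughly uniform on the unit disk, E|z|^2 = 1/2, so the per-point
# average should be close to 0.5, with small (CLT-scale) fluctuations.
stat = np.sum(np.abs(eigs) ** 2)
mean_mod_sq = stat / n
```

The small variance of such statistics (they do not grow with n for smooth f) is exactly the phenomenon behind Rider–Virág-type central limit theorems that the paper extends to the polyanalytic setting.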

  • 150.
    Halberg, Oscar
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Wärmlös Helmrich, Mattias
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Operational Risk Modeling: Addressing the Reporting Threshold Problem, 2015. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    External loss data are typically left-truncated at a reporting threshold. Ignoring this truncation level leads to biased capital charge estimations. This thesis addresses the challenge of recreating the truncated part of the distribution. By predicting the continuation of a probability density function, the unobserved body of an external operational risk loss distribution is estimated. The prediction is based on internally collected losses and the tail of the external loss distribution. Using a semiparametric approach to generate sets of internal losses and applying the Best Linear Unbiased Predictor results in an enriched external dataset that resembles the internal dataset. By avoiding any parametric assumptions, this study proposes a new and unique way to address the reporting threshold problem. Financial institutions will benefit from these findings, as they permit the use of the semiparametric approach developed by Bolancé et al. (2012) and thereby eliminate the well-known difficulty of determining the breaking point beyond which the tail domain is defined when using the Loss Distribution Approach. The main conclusion of this thesis is that predicting the continuation of a function using the Best Linear Unbiased Predictor can be successfully applied in an operational risk setting. This thesis has predicted the continuation of a probability density function, resulting in a full external loss distribution.
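The bias that motivates this thesis is easy to demonstrate. The sketch below is not the thesis' semiparametric BLUP method; it is a simpler, standard parametric illustration of the same reporting-threshold problem, with an assumed lognormal loss distribution and made-up parameter values: a naive fit that ignores the threshold overestimates the location parameter, while a truncation-aware maximum-likelihood fit recovers it.

```python
# Hedged sketch of the reporting threshold problem (not the thesis' method):
# losses are lognormal, but only losses above a threshold H are reported.
# Fitting the reported data as if it were complete biases the estimates.
# All parameter values are synthetic, chosen for illustration.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(42)
mu_true, sigma_true, H = 10.0, 1.2, 25_000.0

losses = rng.lognormal(mu_true, sigma_true, size=20_000)
reported = losses[losses > H]          # left-truncated external data

log_x, log_H = np.log(reported), np.log(H)

# Naive estimate: pretend no truncation happened (biased upwards)
mu_naive = log_x.mean()

# Truncation-aware MLE: the log-losses are normal, renormalised by the
# probability of exceeding the threshold, so the log-likelihood of each
# observation is log phi - log P(log X > log H).
def nll(params):
    mu, s = params
    if s <= 0:
        return np.inf
    return -(stats.norm.logpdf(log_x, mu, s).sum()
             - log_x.size * stats.norm.logsf(log_H, mu, s))

res = optimize.minimize(nll, x0=[log_x.mean(), log_x.std()], method="Nelder-Mead")
mu_hat, sigma_hat = res.x
```

Here the naive estimate of the location parameter is visibly inflated, while the truncation-aware fit lands near the true value. The thesis replaces this kind of parametric reconstruction of the body with a semiparametric, BLUP-based one.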
