  • 151.
    Hallgren, Jonas
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Structure Learning and Mixed Radix representation in Continuous Time Bayesian Networks. Manuscript (preprint) (Other academic)
    Abstract [en]

    Continuous time Bayesian Networks (CTBNs) are graphical representations of the dependence structures between continuous time random processes with finite state spaces. We propose a method for learning the structure of CTBNs using a causality measure based on Kullback-Leibler divergence. We introduce the causality matrix, which can be seen as a generalized version of the covariance matrix. We give a mixed radix representation of the process that greatly facilitates learning and simulation. A new graphical model for tick-by-tick financial data is proposed and estimated. Our approach yields encouraging results on both the tick data and a simulated example.

  • 152.
    Hamdi, Ali
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    On the Snell envelope approach to optimal switching and pricing Bermudan options. 2011. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis consists of two papers related to systems of Snell envelopes. The first paper uses a system of Snell envelopes to formulate the problem of two-modes optimal switching for the full balance sheet in finite horizon. This means that the switching problem is formulated in terms of trade-off strategies between expected profit and cost yields, which act as obstacles to each other. Existence of a minimal solution of this system is obtained by using an approximation scheme. Furthermore, the optimal switching strategies are fully characterized.

    The second paper uses the Snell envelope to formulate the fair price of Bermudan options. To evaluate this formulation of the price, the optimal stopping strategy for such a contract must be estimated. This may be done recursively if some method of estimating conditional expectations is available. The paper focuses on nonparametric estimation of such expectations, by using regularization of a least-squares minimization, with a Tikhonov-type smoothing put on the partial differential equation which characterizes the underlying price processes. This approach can hence be viewed as a combination of the Monte Carlo method and the PDE method for the estimation of conditional expectations. The estimation method turns out to be robust with regard to the size of the smoothing parameter.

  • 153.
    Hamdi, Ali
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    PDE-regularization for pricing multi-dimensional Bermudan options with Monte Carlo simulation. Manuscript (preprint) (Other academic)
    Abstract [en]

    This paper considers the problem of pricing multi-dimensional Bermudan derivatives using Monte Carlo simulation. A new method for computing conditional expectations is proposed, which combined with the dynamic programming principle provides a way of pricing the derivatives.

    The method is a non-parametric projection with regularization. The regularization penalizes deviations from the PDE that the true conditional expectation satisfies. The point is that it is less costly to compute the norm of the PDE than to solve it, thus avoiding the curse of dimensionality.

    The method is shown to produce accurate numerical results in multi-dimensional settings, given a good choice of the regularization parameter. It is illustrated with the multi-dimensional Black-Scholes model and compared to the Longstaff-Schwartz approach.

  • 154.
    Hamdi, Ali
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Some aspects of optimal switching and pricing Bermudan options. 2013. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis consists of four papers that are all related to the Snell envelope. In the first paper, the Snell envelope is used as a formulation of a two-modes optimal switching problem. The obstacles are interconnected, take both profit and cost yields into account, and switching is based on both sides of the balance sheet. The main result is a proof of existence of a continuous minimal solution to a system of Snell envelopes, which fully characterizes the optimal switching strategy. A counter-example is provided to show that uniqueness does not hold.

    The second paper considers the problem of having a large number of production lines with two modes of production, high-production and low-production. As in the first paper, we consider both expected profit and cost yields and switching based on both sides of the balance sheet. The production lines are assumed to be interconnected through a coupling term, which is the average of the optimal expected yields. The corresponding system of Snell envelopes is highly complex, so we consider the aggregated yields, where a mean-field approximation is used for the coupling term. The main result is a proof of existence of a continuous minimal solution to a system of Snell envelopes, which fully characterizes the optimal switching strategy. Furthermore, existence and uniqueness are proven for the mean-field reflected backward stochastic differential equations (MF-RBSDEs) we consider, and a comparison theorem and a uniform bound for the MF-RBSDEs are provided.

    The third paper concerns pricing of Bermudan type options. The Snell envelope is used as a representation of the price, which is determined using Monte Carlo simulation combined with the dynamic programming principle. For this approach, it is necessary to estimate the conditional expectation of the future optimally exercised payoff. We formulate a projection on a grid which is ill-posed due to overfitting, and regularize with the PDE which characterizes the underlying process. The method is illustrated with numerical examples, where accurate results are demonstrated in one dimension.

    In the fourth paper, the idea of the third paper is extended to the multi-dimensional setting. This is necessary because in one dimension it is more efficient to solve the PDE than to use Monte Carlo simulation. We relax the use of a grid in the projection, and add local weights for stability. Using the multi-dimensional Black-Scholes model, the method is illustrated in settings ranging from one to 30 dimensions. The method is shown to produce accurate results in all examples, given a good choice of the regularization parameter.

  • 155.
    Hamdi, Ali
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Marcus, Mårten
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Pricing Bermudan options: A nonparametric estimation approach. Manuscript (preprint) (Other academic)
    Abstract [en]

    A nonparametric alternative to the Longstaff-Schwartz estimation of conditional expectations is suggested for pricing of Bermudan options. The method is based on regularization of a least-squares minimization, with a Tikhonov-type smoothing put on the partial differential equation which characterizes the underlying price processes. This approach can hence be viewed as a combination of the Monte Carlo method and the PDE method for the estimation of conditional expectations. The estimation method turns out to be robust with regard to the size of the smoothing parameter.
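
    The Tikhonov-type regularized least squares described in this abstract can be sketched in its generic closed form. Note that the sketch below penalizes the plain parameter norm, whereas the paper penalizes deviation from the pricing PDE; the helper name and the data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def tikhonov_ls(X, y, lam):
    """Closed-form solution of min_w ||Xw - y||^2 + lam * ||w||^2.

    Generic Tikhonov (ridge) smoothing; the paper instead penalizes
    deviation from the PDE satisfied by the true conditional expectation.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Illustrative data: noisy linear responses (hypothetical example).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=200)
w_hat = tikhonov_ls(X, y, lam=1e-3)
```

    Consistent with the abstract's robustness claim, moderate changes of the smoothing parameter `lam` move the estimate only slightly in this kind of well-conditioned example.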

  • 156.
    Hansson, Fredrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A pricing and performance study on auto-callable structured products. 2012. Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    We propose an algorithm to price and analyze the performance of auto-callable structured financial products. The algorithm uses Monte Carlo simulations in order to reproduce, as accurately as possible, a future product. This model is then compared with other, previously presented models. A study of the different in-data parameters, together with a time-dependency study, is then performed to evaluate what one might expect when investing in these products. Numerical results conclude that the risks taken by the investor closely reflect the potential return of each product. When constructing these products for the near future, one must closely evaluate the demand from the investors, i.e. the level of risk that the investors are willing to take.

  • 157.
    Hansén, Rasmus
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Allocation of Risk Capital to Contracts in Catastrophe Reinsurance. 2013. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This thesis is the result of a project aimed at developing a tool for allocation of risk capital in catastrophe excess-of-loss reinsurance. Allocation of risk capital is an important tool for measuring portfolio performance and optimizing the capital requirement. Here, two allocation rules are described and analyzed: Euler allocation and capital layer allocation. The rules are applied to two different portfolios. The main conclusion is that the two methods can be used together to get a better picture of how the dependence structure between the contracts affects the portfolio result. It is also illustrated how the RORAC of one of the portfolios can be increased by 1% using the outcome of the analyses.
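
    For the standard-deviation risk measure, Euler allocation has a simple closed form, which the sketch below illustrates on simulated losses. This is a generic textbook version on hypothetical Gaussian data, not the thesis' capital model; the function name and the loss distribution are assumptions made for illustration.

```python
import numpy as np

def euler_allocation_std(losses):
    """Euler-allocate the standard-deviation risk measure rho(S) = std(S).

    losses: (n_scenarios, n_contracts) array of simulated losses.
    The Euler contribution of contract i is cov(X_i, S) / std(S);
    by Euler's theorem the contributions sum to the portfolio risk.
    """
    S = losses.sum(axis=1)                       # portfolio loss per scenario
    total = S.std(ddof=0)                        # portfolio risk rho(S)
    contribs = np.array([np.cov(losses[:, i], S, ddof=0)[0, 1] / total
                         for i in range(losses.shape[1])])
    return contribs, total

# Three hypothetical independent contracts with different volatilities.
rng = np.random.default_rng(7)
losses = rng.normal(size=(10_000, 3)) * np.array([1.0, 2.0, 0.5])
contribs, total = euler_allocation_std(losses)
```

    The full-allocation property (the contributions summing exactly to the total risk) is what makes Euler allocation usable for the RORAC-style performance comparison discussed in the abstract.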

  • 158.
    Hededal Klincov, Lazar
    et al.
    KTH, School of Technology and Health (STH), Medical Engineering, Computer and Electronic Engineering.
    Symeri, Ali
    KTH, School of Technology and Health (STH), Medical Engineering, Computer and Electronic Engineering.
    Devising a Trend-break-detection Algorithm of stored Key Performance Indicators for Telecom Equipment. 2017. Independent thesis, Basic level (university diploma), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    A problem that is prevalent for testers at Ericsson is that performance test results are continuously generated but not analyzed. The time between the occurrence of a problem and information about the occurrence is long and variable, due to the manual analysis of log files, which is time consuming and tedious. The requested solution is automation with an algorithm that analyzes the performance and notifies when problems occur. A binary classifier algorithm, based on statistical methods, was developed and evaluated as a solution to the stated problem. The algorithm was evaluated with simulated data and achieved an accuracy of 97.54% in detecting trend breaks. Furthermore, correlation analysis was carried out between performance and hardware to gain insight into how hardware configurations affect test runs.
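
    A minimal statistical binary classifier of the kind the abstract describes can be sketched as a rolling z-score test on the KPI series. This is an illustrative stand-in only: the thesis' actual algorithm (and its 97.54% figure) is not reproducible from the abstract, and the window size and threshold below are assumptions.

```python
import numpy as np

def trend_breaks(kpi, window=20, threshold=3.0):
    """Flag observations deviating more than `threshold` rolling standard
    deviations from the rolling mean of the preceding `window` points."""
    flags = np.zeros(len(kpi), dtype=bool)
    for t in range(window, len(kpi)):
        ref = kpi[t - window:t]
        mu, sigma = ref.mean(), ref.std(ddof=1)
        if sigma > 0 and abs(kpi[t] - mu) > threshold * sigma:
            flags[t] = True
    return flags

# Simulated KPI: a stable level with an abrupt shift at index 60.
rng = np.random.default_rng(1)
kpi = np.concatenate([rng.normal(100, 1, 60), rng.normal(110, 1, 40)])
flags = trend_breaks(kpi)
```

    Such a detector can run automatically on each new test result, replacing the manual log-file analysis the abstract mentions.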

  • 159.
    Heimbürger, Hjalmar
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Modelling of Stochastic Volatility using Partially Observed Markov Models. 2016. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In this thesis, calibration of stochastic volatility models that allow correlation between the volatility and the returns has been considered. To achieve this, the dynamics have been modelled as an extension of hidden Markov models, and a special case of partially observed Markov models. This thesis shows that such models can be calibrated using sequential Monte Carlo methods, and that a model with correlation provides a better fit to the observed data. However, the results are not conclusive, and more research is needed in order to confirm this for other data sets and models.

  • 160.
    Hellander, Martin
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Credit Value Adjustment: The Aspects of Pricing Counterparty Credit Risk on Interest Rate Swaps. 2015. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In this thesis, the pricing of counterparty credit risk on an OTC plain vanilla interest rate swap is investigated. Counterparty credit risk can be defined as the risk that a counterparty in a financial contract might not be able or willing to fulfil their obligations. This risk has to be taken into account in the valuation of an OTC derivative. The market price of the counterparty credit risk is known as the Credit Value Adjustment (CVA). In a bilateral contract, such as a swap, the party’s own creditworthiness also has to be taken into account, leading to another adjustment known as the Debit Value Adjustment (DVA). Since 2013, the international accounting standards (IFRS) state that these adjustments have to be made in order to reflect the fair value of an OTC derivative.

    A short background and the derivation of CVA and DVA are presented, including related topics such as various risk mitigation techniques, hedging of CVA, and regulations. Four different pricing frameworks are compared: two more sophisticated frameworks and two approximative approaches. The most complex framework includes an interest rate model in the form of the LIBOR Market Model and a credit model in the form of the Cox-Ingersoll-Ross model. In this framework, the impact of dependencies between credit and market risk factors (leading to wrong-way/right-way risk) and the dependence between the default times of different parties are investigated.

  • 161.
    Henrikson, Fredrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Characteristics of high-frequency trading. 2011. Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis
  • 162.
    Henter, Gustav Eje
    et al.
    KTH, School of Electrical Engineering (EES), Communication Theory. The University of Edinburgh, United Kingdom.
    Leijon, Arne
    KTH, School of Electrical Engineering (EES), Communication Theory.
    Kleijn, W. Bastiaan
    KTH, School of Electrical Engineering (EES), Communication Theory. Victoria University of Wellington, New Zealand.
    Kernel Density Estimation-Based Markov Models with Hidden State. Manuscript (preprint) (Other academic)
    Abstract [en]

    We consider Markov models of stochastic processes where the next-step conditional distribution is defined by a kernel density estimator (KDE), similar to certain time-series bootstrap schemes from the economic forecasting literature. The KDE Markov models (KDE-MMs) we discuss are nonlinear, nonparametric, fully probabilistic representations of stationary processes with strong asymptotic convergence properties. The models generate new data simply by concatenating points from the training data sequences in a context-sensitive manner, with some added noise. We present novel EM-type maximum-likelihood algorithms for data-driven bandwidth selection in KDE-MMs. Additionally, we augment the KDE-MMs with a hidden state, yielding a new model class, KDE-HMMs. The added state-variable enables long-range memory and signal structure representation, complementing the short-range correlations captured by the Markov process. This is compelling for modelling complex real-world processes such as speech and language data. The paper presents guaranteed-ascent EM-update equations for model parameters in the case of Gaussian kernels, as well as relaxed update formulas that greatly accelerate training in practice. Experiments demonstrate increased held-out set probability for KDE-HMMs on several challenging natural and synthetic data series, compared to traditional techniques such as autoregressive models, HMMs, and their combinations.

  • 163.
    Hoel, Håkon
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis, NA (closed 2012-06-30).
    Complexity and Error Analysis of Numerical Methods for Wireless Channels, SDE, Random Variables and Quantum Mechanics. 2012. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis consists of four papers that consider different aspects of stochastic process modeling, error analysis, and minimization of computational cost.

         In Paper I, we construct a Multipath Fading Channel (MFC) model for wireless channels with noise introduced through scatterers flipping on and off. By coarse graining the MFC model a Gaussian process channel model is developed. Complexity and accuracy comparisons of the models are conducted.

         In Paper II, we generalize a multilevel Forward Euler Monte Carlo method introduced by Mike Giles for the approximation of expected values depending on solutions of Ito stochastic differential equations. Giles' work proposed and analyzed a Forward Euler Multilevel Monte Carlo (MLMC) method based on realizations on a hierarchy of uniform time discretizations and a coarse graining based control variates idea to reduce the computational cost required by a standard single level Forward Euler Monte Carlo method. This work is an extension of Giles' MLMC method from uniform to adaptive time grids. It has the same improvement in computational cost and is applicable to a larger set of problems.

         In Paper III, we consider the problem of estimating the mean of a random variable by a sequential stopping rule Monte Carlo method. The performance of a typical second-moment-based sequential stopping rule is shown to be unreliable, both by numerical examples and by analytical arguments. Based on analysis and approximation of error bounds, we construct a higher-moment-based stopping rule which performs more reliably.

         In Paper IV, Born-Oppenheimer dynamics is shown to provide an accurate approximation of time-independent Schrödinger observables for a molecular system with an electron spectral gap, in the limit of a large ratio of nuclei and electron masses, without assuming that the nuclei are localized to vanishing domains. The derivation, based on a Hamiltonian system interpretation of the Schrödinger equation and stability of the corresponding hitting time Hamilton-Jacobi equation for non-ergodic dynamics, bypasses the usual separation of nuclei and electron wave functions, includes caustic states, and gives a different perspective on the Born-Oppenheimer approximation, Schrödinger Hamiltonian systems, and numerical simulation in molecular dynamics modeling at constant energy.

  • 164.
    Hoel, Håkon
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis, NA.
    Nyberg, Henrik
    Ericsson Research.
    Gaussian Coarse Graining of a Master Equation Extension of Clarke's Model. 2012. Report (Other academic)
    Abstract [en]

    We study the error and computational cost of generating output signal realizations for the channel model of a moving receiver in a scattering environment, as in Clarke’s model, with the extension that scatterers randomly flip on and off. At micro scale, the channel is modeled by a Multipath Fading Channel (MFC) model, and by coarse graining the micro scale model we derive a macro scale Gaussian process model. Four algorithms are presented for generating stochastic signal realizations, one for the MFC model and three for the Gaussian process model. A computational cost comparison of the presented algorithms indicates that the Gaussian process algorithms generate signal realizations more efficiently than the MFC algorithm does. Numerical examples of generating signal realizations in time independent and time dependent scattering environments are given, and the problem of estimating model parameters from real life signal measurements is also studied.

  • 165.
    Hoel, Håkon
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis, NA (closed 2012-06-30).
    von Schwerin, Erik
    King Abdullah University of Science and Technology.
    Szepessy, Anders
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis, NA (closed 2012-06-30).
    Tempone, Raul
    King Abdullah University of Science and Technology.
    Implementation and Analysis of an Adaptive Multilevel Monte Carlo Algorithm. 2012. Report (Other academic)
    Abstract [en]

    This work generalizes a multilevel Monte Carlo (MLMC) method introduced in [7] for the approximation of expected values of functions depending on the solution to an Ito stochastic differential equation. The work [7] proposed and analyzed a forward Euler MLMC method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single level, forward Euler Monte Carlo method from O(TOL^(−3)) to O(TOL^(−2) log(TOL^(−1))^2) for a mean square error of size TOL^2. This work instead uses a hierarchy of adaptively refined, non-uniform time discretizations, generated by an adaptive algorithm introduced in [20, 19, 5]. Given a prescribed accuracy TOL in the weak error, this adaptive algorithm generates time discretizations based on a posteriori expansions of the weak error first developed in [24]. A theoretical analysis gives results on the stopping, the accuracy, and the complexity of the resulting adaptive MLMC algorithm. In particular, it is shown that: the adaptive refinements stop after a finite number of steps; the probability of the error being smaller than TOL is, under certain assumptions, controlled by a given confidence parameter, asymptotically as TOL → 0; the complexity is essentially the one expected for MLMC methods, but with better control of the constant factors. We also show that the multilevel estimator is asymptotically normal using the Lindeberg-Feller Central Limit Theorem. These theoretical results are based on previously developed single level estimates, and on results on Monte Carlo stopping from [3]. Our numerical tests include cases, one with singular drift and one with stopped diffusion, where the complexity of the uniform single level method is O(TOL^(−4)). In both these cases the results confirm the theory by exhibiting savings in the computational cost to achieve an accuracy of O(TOL), from O(TOL^(−3)) for the adaptive single level algorithm to essentially O(TOL^(−2) log(TOL^(−1))^2) for the adaptive MLMC.
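
    The uniform-grid forward Euler MLMC estimator that this work takes as its starting point can be sketched as follows for geometric Brownian motion. The adaptive, non-uniform refinement that is the report's actual contribution is not reproduced here; the model, parameters, and sample sizes are illustrative assumptions.

```python
import numpy as np

def mlmc_gbm(mu, sigma, x0, T, levels, n_per_level, rng):
    """Uniform-grid forward Euler MLMC estimate of E[X_T] for
    dX = mu*X dt + sigma*X dW, using 2^l time steps on level l.

    Level l > 0 estimates E[X_fine - X_coarse] with coupled paths
    (the coarse path reuses the fine path's Brownian increments),
    which is the control-variate idea behind MLMC.
    """
    est = 0.0
    for l in range(levels + 1):
        n_fine = 2 ** l
        dt = T / n_fine
        acc = 0.0
        for _ in range(n_per_level[l]):
            dW = rng.normal(0.0, np.sqrt(dt), n_fine)
            xf = x0
            for w in dW:                      # fine Euler path
                xf += mu * xf * dt + sigma * xf * w
            if l == 0:
                acc += xf
            else:
                xc = x0
                for k in range(n_fine // 2):  # coarse path, same noise
                    w2 = dW[2 * k] + dW[2 * k + 1]
                    xc += mu * xc * (2 * dt) + sigma * xc * w2
                acc += xf - xc
        est += acc / n_per_level[l]
    return est

# For GBM, E[X_1] = exp(mu); hypothetical parameter choices below.
rng = np.random.default_rng(42)
est = mlmc_gbm(0.05, 0.2, 1.0, 1.0, 3, [2000, 1000, 500, 250], rng)
```

    Fewer samples are needed on finer levels because the corrections X_fine − X_coarse have small variance, which is where the O(TOL^(−2) log(TOL^(−1))^2) cost saving comes from.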

  • 166.
    Holmberg, Jan-Erik
    et al.
    VTT Statens tekniska forskningscentral.
    Bladh, Kent
    Vattenfall Power Consultant.
    Oxstrand, Johanna
    Ringhals AB.
    Pyy, Pekka
    Teollisuuden Voima Oy.
    Enhanced Bayesian THERP: Lessons learnt from HRA benchmarking. 2010. In: Proc. of PSAM 10 — International Probabilistic Safety Assessment & Management Conference, 7–11 June 2010, Seattle, Washington, USA. IAPSAM — International Association for Probabilistic Safety Assessment and Management, 2010, p. 52. Conference paper (Refereed)
    Abstract [en]

    The Enhanced Bayesian THERP (Technique for Human Reliability Analysis) method uses as its basis the time-reliability curve introduced in Swain’s human reliability analysis (HRA) handbook. It differs from Swain’s handbook via a transparent adjustment of the time-dependent human error probabilities by use of five performance shaping factors (PSFs): (1) support from procedures, (2) support from training, (3) feedback from process, (4) need for co-ordination and communication, and (5) mental load and decision burden. In order to better understand the characteristics of the Enhanced Bayesian THERP from a more international perspective, the method has been evaluated within the framework of the international “HRA Methods Empirical Study Using Simulator Data”. Without knowledge of the crews’ performances, several HRA analysis teams from different countries, using different methods, performed predictive analyses of four scenarios. This paper gives an overview of the method with major findings from the benchmarking. The empirical comparison gives confidence that the time-reliability curve is a feasible and cost-effective method to estimate human error probabilities when the time window is well defined and relatively short. The comparison of empirical observations with predictions was found to be a useful exercise for identifying areas of improvement in the HRA method.

  • 167.
    Holmberg, Jan-Erik
    et al.
    VTT Statens tekniska forskningscentral.
    Hellström, Per
    Scandpower AB.
    Development of methods for risk follow-up and handling of CCF events in PSA applications. 2010. In: Proc. of PSAM 10 — International Probabilistic Safety Assessment & Management Conference, 7–11 June 2010, Seattle, Washington, USA. IAPSAM — International Association for Probabilistic Safety Assessment and Management, 2010, p. 53. Conference paper (Refereed)
    Abstract [en]

    Risk follow-up aims at analysis of operational events from their risk point of view, using probabilistic safety assessment (PSA) as the basis. Risk follow-up provides additional insight into operational experience feedback compared to deterministic event analysis. Even though this application of PSA is internationally widespread and has been tried out for more than a decade at many nuclear power plants, there are several problematic issues in the performance of a retrospective risk analysis as well as in the interpretation of the results. An R&D project sponsored by the Nordic PSA Group (NPSAG) has focused on selected issues in this topic. The main development needs were seen in the handling of CCF and in the reference levels for result presentation. CCF events can be difficult to assess due to the possibility of interpreting the events differently; therefore a sensitivity study with varying assumptions is recommended as a general approach. Reference levels for indicators are proposed based on a survey of criteria used internationally. The paper summarizes the results.

  • 168.
    Holmberg, Jan-Erik
    et al.
    VTT Statens tekniska forskningscentral.
    Hukki, Kristiina
    VTT Statens tekniska forskningscentral.
    Interdisciplinary expert collaboration method (IECM) for supporting human reliability analysis in fire PSA. 2005. In: Proc. of International Topical Meeting on Probabilistic Safety Analysis, San Francisco, CA, 11–15 Sept. 2005. American Nuclear Society, 2005, p. 1056-1061. Conference paper (Refereed)
    Abstract [en]

    The Interdisciplinary Expert Collaboration Method (IECM) has been developed to contribute to risk-informed management of fire situations at nuclear power plants. The IECM provides a systematic practice and a shared conceptual tool for the co-operation of experts from different domains related to the management of fire situations. It is used for improving support for the management, but it can also contribute to the fire risk analysis by improving the realism of the human reliability analysis (HRA). The paper compares the IECM with the recommendations of the NUREG-1792 document (Good Practices for Implementing Human Reliability Analysis). Even though the method does not yet support the quantification of human error probabilities, it can be utilized in HRA in several ways.

  • 169.
    Holmberg, Jan-Erik
    et al.
    VTT Statens tekniska forskningscentral.
    Kuusela, Pirkko
    VTT Statens tekniska forskningscentral.
    Analysis of Probability of Defects in the Disposal Canisters. 2011. Report (Other academic)
  • 170.
    Holmberg, Jan-Erik
    et al.
    VTT Statens tekniska forskningscentral.
    Männistö, Ilkka
    VTT Statens tekniska forskningscentral.
    Risk-informed classification of systems, structures and components. 2008. In: Rakenteiden mekaniikka (Journal of Structural Mechanics), ISSN 0783-6104, Vol. 41, no 2, p. 90-98. Article in journal (Refereed)
  • 171.
    Holmberg, Jan-Erik
    et al.
    VTT Statens tekniska forskningscentral.
    Nirmark, Jan
    Vattenfall Power Consultant.
    Risk-informed assessment of defence in depth, LOCA example: Phase 1: Mapping of conditions and definition of quantitative measures for the defence in depth levels. 2008. Report (Other academic)
  • 172.
    Holmsäter, Sara
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Malmberg, Emelie
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Applying Multivariate Expected Shortfall on High Frequency Foreign Exchange Data. 2016. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This thesis aims at implementing and evaluating the performance of multivariate Expected Shortfall models on high frequency foreign exchange data. The implementation is conducted with a unique portfolio consisting of five foreign exchange rates: EUR/SEK, EUR/NOK, EUR/USD, USD/SEK and USD/NOK. High frequency is in this context defined as observations with time intervals from second by second up to minute by minute. The thesis consists of three main parts. In the first part, the exchange rates are modelled individually with time series models for returns and realized volatility. In the second part, the dependence between the exchange rates is modelled with copulas. In the third part, Expected Shortfall is calculated, the risk contribution of each exchange rate is derived, and the models are backtested.

    The results of the thesis indicate that three of the five final models can be rejected at a 5% significance level if the risk is measured by Expected Shortfall (ES0.05). The two models that cannot be rejected are based on the Clayton and Student’s t copulas, the only two copulas with heavy left tails. The rejected models are based on the Gaussian, Gumbel-Hougaard and Frank copulas. The fact that some of the copula models are rejected emphasizes the importance of choosing an appropriate dependence structure. The risk contribution calculations show that the risk contributions are highest from EUR/NOK and USD/NOK, and that EUR/USD has the lowest risk contribution and even decreases the portfolio risk in some cases. Regarding the underlying models, it is concluded that for the data used in this thesis, the final combined time series and copula models perform quite well, given that the purpose is to measure the risk. However, the most important parts to capture seem to be the fluctuations in the volatilities as well as the tail dependencies between the exchange rates. Thus, the predictions of the return mean values play a less significant role, even though they still improve the results and are necessary in order to proceed with other parts of the modelling. As future research, we first and foremost recommend including the liquidity aspect in the models.
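
    The backtested risk measure itself has a simple empirical estimator: the average of the worst α-fraction of losses. The sketch below shows only this generic estimator; the thesis' time-series and copula machinery that feeds it is not reproduced, and the loss sample is a hypothetical example.

```python
import numpy as np

def expected_shortfall(losses, alpha=0.05):
    """Empirical Expected Shortfall at level alpha: the mean of the
    worst ceil(alpha * n) losses (losses given as positive numbers)."""
    srt = np.sort(np.asarray(losses, dtype=float))[::-1]  # largest first
    k = max(1, int(np.ceil(alpha * len(srt))))
    return srt[:k].mean()

# Hypothetical loss sample: ES at the 20% level averages the two worst.
es = expected_shortfall([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0],
                        alpha=0.2)
```

    Because ES averages over the whole tail rather than reading off a single quantile (as VaR does), it is the heavy left tails of the Clayton and Student's t copulas that matter in the backtest above.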

  • 173.
    Holst, Lars
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    The number of two consecutive successes in a Hoppe-Pólya urn2008In: Journal of Applied Probability, ISSN 0021-9002, E-ISSN 1475-6072, Vol. 45, no 3, p. 901-906Article in journal (Refereed)
    Abstract [en]

    In a sequence of independent Bernoulli trials the probability of success in the kth trial is p_k = a/(a + b + k - 1). An explicit formula for the binomial moments of the number of two consecutive successes in the first n trials is obtained and some consequences of it are derived.
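
    Because the trial probabilities are explicit, the expected number of two consecutive successes is easy to check by direct simulation. A hedged Monte Carlo sketch (not the paper's exact binomial-moment formula):

```python
import numpy as np

def count_double_successes(a, b, n, rng):
    """One realisation of the number of indices k such that trials k and
    k + 1 both succeed, where trial k succeeds with probability
    a / (a + b + k - 1), independently across trials."""
    k = np.arange(1, n + 1)
    successes = rng.random(n) < a / (a + b + k - 1.0)
    return int(np.sum(successes[:-1] & successes[1:]))

rng = np.random.default_rng(1)
mean_count = np.mean([count_double_successes(1.0, 1.0, 10, rng)
                      for _ in range(20_000)])
```

    For a = b = 1 and n = 10 we have p_k = 1/(k + 1), and the exact expectation telescopes: sum over k = 1,...,9 of 1/((k + 1)(k + 2)) = 1/2 - 1/11 = 9/22, roughly 0.409, which the simulation reproduces.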

  • 174.
    Holst, Lars
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Konstantopoulos, Takis
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.). Uppsala univ.
    RUNS IN COIN TOSSING: A GENERAL APPROACH FOR DERIVING DISTRIBUTIONS FOR FUNCTIONALS2015In: Journal of Applied Probability, ISSN 0021-9002, E-ISSN 1475-6072, Vol. 52, no 3, p. 752-770Article in journal (Refereed)
    Abstract [en]

    We take a fresh look at the classical problem of runs in a sequence of independent and identically distributed coin tosses and derive a general identity/recursion which can be used to compute (joint) distributions of functionals of run types. This generalizes and unifies already existing approaches. We give several examples, derive asymptotics, and pose some further questions.

  • 175.
    Huang, Sheng
    et al.
    KTH, School of Electrical Engineering (EES), Communication Theory.
    Skoglund, Mikael
    KTH, School of Electrical Engineering (EES), Communication Theory.
    Induced transformations of recurrent a.m.s. dynamical systems2015In: Stochastics and Dynamics, ISSN 0219-4937, Vol. 15, no 02Article in journal (Refereed)
    Abstract [en]

    This note proves that an induced transformation with respect to a finite measure set of a recurrent asymptotically mean stationary dynamical system with a sigma-finite measure is asymptotically mean stationary. Consequently, the Shannon–McMillan–Breiman theorem, as well as the Shannon–McMillan theorem, holds for all reduced processes of any finite-state recurrent asymptotically mean stationary random process.

    As a by-product, a ratio ergodic theorem for asymptotically mean stationary dynamical systems is presented.

  • 176.
    Hult, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Approximating some Volterra type stochastic integrals with applications to parameter estimation2003In: Stochastic Processes and their Applications, ISSN 0304-4149, E-ISSN 1879-209X, Vol. 105, no 1, p. 1-32Article in journal (Refereed)
  • 177.
    Hult, Henrik
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Lindskog, Filip
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Extremal behavior of regularly varying stochastic processes2005In: Stochastic Processes and their Applications, ISSN 0304-4149, E-ISSN 1879-209X, Vol. 115, no 2, p. 249-274Article in journal (Refereed)
    Abstract [en]

    We study a formulation of regular variation for multivariate stochastic processes on the unit interval with sample paths that are almost surely right-continuous with left limits and we provide necessary and sufficient conditions for such stochastic processes to be regularly varying. A version of the Continuous Mapping Theorem is proved that enables the derivation of the tail behavior of rather general mappings of the regularly varying stochastic process. For a wide class of Markov processes with increments satisfying a condition of weak dependence in the tails we obtain simplified sufficient conditions for regular variation. For such processes we show that the possible regular variation limit measures concentrate on step functions with one step, from which we conclude that the extremal behavior of such processes is due to one big jump or an extreme starting point. By combining this result with the Continuous Mapping Theorem, we are able to give explicit results on the tail behavior of various vectors of functionals acting on such processes. Finally, using the Continuous Mapping Theorem we derive the tail behavior of filtered regularly varying Levy processes.

  • 178.
    Hult, Henrik
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Lindskog, Filip
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Heavy-tailed insurance portfolios: buffer capital and ruin probabilities2006Report (Other academic)
  • 179.
    Hult, Henrik
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Lindskog, Filip
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Multivariate extremes, aggregation and dependence in elliptical distributions2002In: Advances in Applied Probability, ISSN 0001-8678, E-ISSN 1475-6064, Vol. 32, no 3, p. 587-608Article in journal (Refereed)
  • 180.
    Hult, Henrik
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Lindskog, Filip
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    On Kesten's counterexample to the Cramer-Wold device for regular variation2006In: Bernoulli, ISSN 1350-7265, E-ISSN 1573-9759, Vol. 12, no 1, p. 133-142Article in journal (Refereed)
    Abstract [en]

    In 2002 Basrak, Davis and Mikosch showed that an analogue of the Cramer-Wold device holds for regular variation of random vectors if the index of regular variation is not an integer. This characterization is of importance when studying stationary solutions to stochastic recurrence equations. In this paper we construct counterexamples showing that for integer-valued indices, regular variation of all linear combinations does not imply that the vector is regularly varying. The construction is based on unpublished notes by Harry Kesten.

  • 181. Hult, Henrik
    et al.
    Lindskog, Filip
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    On regular variation for infinitely divisible random vectors and additive processes2006In: Advances in Applied Probability, ISSN 0001-8678, E-ISSN 1475-6064, Vol. 38, no 1, p. 134-148Article in journal (Refereed)
    Abstract [en]

    We study the tail behavior of regularly varying infinitely divisible random vectors and additive processes, i.e. stochastic processes with independent but not necessarily stationary increments. We show that the distribution of an infinitely divisible random vector is tail equivalent to its Levy measure and we study the asymptotic decay of the probability for an additive process to hit sets far away from the origin. The results are extensions of known univariate results to the multivariate setting; we exemplify some of the difficulties that arise in the multivariate case.

  • 182.
    Hult, Henrik
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Lindskog, Filip
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Ruin probabilities under general investments and heavy-tailed claims2011In: Finance and Stochastics, ISSN 0949-2984, E-ISSN 1432-1122, Vol. 15, no 2, p. 243-265Article in journal (Refereed)
    Abstract [en]

    In this paper, the asymptotic decay of finite time ruin probabilities is studied. An insurance company is considered that faces heavy-tailed claims and makes investments in risky assets whose prices evolve according to quite general semimartingales. In this setting, the ruin problem corresponds to determining hitting probabilities for the solution to a randomly perturbed stochastic integral equation. A large deviation result for the hitting probabilities is derived that holds uniformly over a family of semimartingales. This result gives the asymptotic decay of finite time ruin probabilities under sufficiently conservative investment strategies, including ruin-minimizing strategies. In particular, as long as the insurance company invests sufficiently conservatively, the investment strategy has only a moderate impact on the asymptotics of the ruin probability.

  • 183.
    Hult, Henrik
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Lindskog, Filip
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Nykvist, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A simple time-consistent model for the forward density process2013In: International Journal of Theoretical and Applied Finance, ISSN 0219-0249, Vol. 16, no 8, p. 13500489-Article in journal (Refereed)
    Abstract [en]

    In this paper, a simple model for the evolution of the forward density of the future value of an asset is proposed. The model allows for a straightforward initial calibration to option prices and has dynamics that are consistent with empirical findings from option price data. The model is constructed with the aim of being both simple and realistic and of avoiding the need for frequent re-calibration. The model prices of n options and a forward contract are expressed as time-varying functions of an (n + 1)-dimensional Brownian motion and it is investigated how the Brownian trajectory can be determined from the trajectories of the price processes. An approach based on particle filtering is presented for determining the location of the driving Brownian motion from option prices observed in discrete time. A simulation study and an empirical study of call options on the S&P 500 index illustrate that the model provides a good fit to option price data.

  • 184.
    Hult, Henrik
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Lindskog, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Regular variation for measures on metric spaces2006In: Publications de l'Institut Mathématique (Beograd), ISSN 0350-1302, E-ISSN 1820-7405, Vol. 80, no 94, p. 121-140Article in journal (Refereed)
    Abstract [en]

    The foundations of regular variation for Borel measures on a complete separable metric space S, that is closed under multiplication by nonnegative real numbers, are reviewed. For such measures an appropriate notion of convergence is presented and the basic results such as a Portmanteau theorem, a mapping theorem and a characterization of relative compactness are derived. Regular variation is defined in this general setting and several statements that are equivalent to this definition are presented. This extends the notion of regular variation for Borel measures on the Euclidean space Rd to more general metric spaces. Some examples, including regular variation for Borel measures on Rd, the space of continuous functions C and the Skorohod space D, are provided.

  • 185.
    Hult, Henrik
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Nykvist, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A note on efficient importance sampling for one-dimensional diffusionsManuscript (preprint) (Other academic)
  • 186.
    Hult, Henrik
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Nykvist, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Efficient importance sampling to assess the risk of voltage collapse in power systemsManuscript (preprint) (Other academic)
  • 187.
    Hult, Henrik
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Nykvist, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Efficient importance sampling to compute loss probabilities in financial portfoliosManuscript (preprint) (Other academic)
  • 188.
    Hult, Henrik
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Nyquist, Pierre
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Large deviations for weighted empirical measures arising in importance sampling2016In: Stochastic Processes and their Applications, ISSN 0304-4149, E-ISSN 1879-209X, Vol. 126, no 1Article in journal (Refereed)
    Abstract [en]

    Importance sampling is a popular method for efficient computation of various properties of a distribution such as probabilities, expectations, quantiles etc. The output of an importance sampling algorithm can be represented as a weighted empirical measure, where the weights are given by the likelihood ratio between the original distribution and the sampling distribution. In this paper the efficiency of an importance sampling algorithm is studied by means of large deviations for the weighted empirical measure. The main result, which is stated as a Laplace principle for the weighted empirical measure arising in importance sampling, can be viewed as a weighted version of Sanov's theorem. The main theorem is applied to quantify the performance of an importance sampling algorithm over a collection of subsets of a given target set as well as quantile estimates. The proof of the main theorem relies on the weak convergence approach to large deviations developed by Dupuis and Ellis.

  • 189.
    Hult, Henrik
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Samorodnitsky, Gennady
    Large deviations for point processes based on stationary sequences with heavy tails2010In: Journal of Applied Probability, ISSN 0021-9002, E-ISSN 1475-6072, Vol. 47, no 1, p. 1-40Article in journal (Refereed)
    Abstract [en]

    In this paper we propose a framework that facilitates the study of large deviations for point processes based on stationary sequences with regularly varying tails. This framework allows us to keep track both of the magnitude of the extreme values of a process and the order in which these extreme values appear. Particular emphasis is put on (infinite) linear processes with random coefficients. The proposed framework provides a fairly complete description of the joint asymptotic behavior of the large values of the stationary sequence. We apply the general result on large deviations for point processes to derive the asymptotic decay of certain probabilities related to partial sum processes as well as ruin probabilities.

  • 190.
    Hult, Henrik
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Samorodnitsky, Gennady
    Cornell University, ORIE.
    Tail probabilities for infinite series of regularly varying random vectors2008In: Bernoulli, ISSN 1350-7265, E-ISSN 1573-9759, Vol. 14, no 3, p. 838-864Article in journal (Refereed)
    Abstract [en]

    A random vector X with representation X = ∑_{j≥0} A_j Z_j is considered. Here, (Z_j) is a sequence of independent and identically distributed random vectors and (A_j) is a sequence of random matrices, 'predictable' with respect to the sequence (Z_j). The distribution of Z_1 is assumed to be multivariate regularly varying. Moment conditions on the matrices (A_j) are determined under which the distribution of X is regularly varying and, in fact, 'inherits' its regular variation from that of the (Z_j)'s. We compute the associated limiting measure. Examples include linear processes, random coefficient linear processes such as stochastic recurrence equations, random sums and stochastic integrals.

  • 191.
    Hult, Henrik
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Svensson, Jens
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Efficient calculation of risk measures by importance sampling -- the heavy tailed caseManuscript (preprint) (Other academic)
    Abstract [en]

    Computation of extreme quantiles and tail-based risk measures using standard Monte Carlo simulation can be inefficient. A method to speed up computations is provided by importance sampling. We show that importance sampling algorithms, designed for efficient tail probability estimation, can significantly improve Monte Carlo estimators of tail-based risk measures. In the heavy-tailed setting, when the random variable of interest has a regularly varying distribution, we provide sufficient conditions for the asymptotic relative error of importance sampling estimators of risk measures, such as Value-at-Risk and expected shortfall, to be small. The results are illustrated by some numerical examples.
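
    The gain from importance sampling shows up already in a toy heavy-tailed setting. The following is a hedged sketch of the standard heavier-tailed-proposal scheme for a Pareto tail probability, not the specific estimators analysed in the paper:

```python
import numpy as np

def pareto_tail_is(alpha, alpha_prop, x, n, rng):
    """Estimate P(X > x) for X ~ Pareto(alpha) on [1, inf) (survival
    function x**-alpha) by sampling from a heavier-tailed Pareto(alpha_prop)
    proposal and reweighting with the likelihood ratio f/g."""
    y = rng.random(n) ** (-1.0 / alpha_prop)              # inverse-CDF sampling
    w = (alpha / alpha_prop) * y ** (alpha_prop - alpha)  # f(y) / g(y)
    return float(np.mean((y > x) * w))

rng = np.random.default_rng(2)
est = pareto_tail_is(alpha=2.0, alpha_prop=0.5, x=50.0, n=200_000, rng=rng)
# True value: 50**-2 = 4e-4.
```

    The proposal Pareto(0.5) puts far more mass beyond x = 50 than the target Pareto(2), so the rare event is sampled frequently while the likelihood ratio keeps the estimator unbiased; crude Monte Carlo would see the event only about 4 times in 10,000 draws.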

  • 192.
    Hult, Henrik
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Svensson, Jens
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    On Importance Sampling with Mixtures for Random Walks with Heavy Tails2012In: ACM Transactions on Modeling and Computer Simulation, ISSN 1049-3301, E-ISSN 1558-1195, Vol. 22, no 2, p. 8-Article in journal (Refereed)
    Abstract [en]

    State-dependent importance sampling algorithms based on mixtures are considered. The algorithms are designed to compute tail probabilities of a heavy-tailed random walk. The increments of the random walk are assumed to have a regularly varying distribution. Sufficient conditions for obtaining bounded relative error are presented for rather general mixture algorithms. Two new examples, called the generalized Pareto mixture and the scaling mixture, are introduced. Both examples have good asymptotic properties and, in contrast to some of the existing algorithms, they are very easy to implement. Their performance is illustrated by numerical experiments. Finally, it is proved that mixture algorithms of this kind can be designed to have vanishing relative error.

  • 193.
    Hultin, Hanna
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Evaluation of Massively Scalable Gaussian Processes2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Gaussian process methods are flexible non-parametric Bayesian methods used for regression and classification. They allow for explicit handling of uncertainty and are able to learn complex structures in the data. Their main limitation is their scaling characteristics: for n training points the complexity is O(n³) for training and O(n²) for prediction per test data point.

    This makes full Gaussian process methods prohibitive to use on training sets larger than a few thousand data points. There has been recent research on approximation methods to make Gaussian processes scalable without severely affecting the performance. Some of these new approximation techniques are still not fully investigated and in a practical situation it is hard to know which method to choose. This thesis examines and evaluates scalable GP methods, especially focusing on the framework Massively Scalable Gaussian Processes introduced by Wilson et al. in 2016, which reduces the training complexity to nearly O(n) and the prediction complexity to O(1). The framework involves inducing point methods, local covariance function interpolation, exploitations of structured matrices and projections to low-dimensional spaces. The properties of the different approximations are studied and the possibilities of making improvements are discussed.
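
    The O(n³) bottleneck described above is the factorisation of the n × n kernel matrix. A minimal exact-GP sketch (squared-exponential kernel with assumed hyperparameters, posterior mean only) makes this step explicit:

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.1):
    """Squared-exponential kernel matrix between 1-D input arrays."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale**2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """Exact GP posterior mean. The Cholesky factorisation of the n x n
    kernel matrix is the O(n^3) training step; given alpha, each test
    mean costs O(n) (the predictive variance would cost O(n^2))."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    L = np.linalg.cholesky(K)                              # O(n^3)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    return rbf_kernel(x_test, x_train) @ alpha

x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x)
mean = gp_predict(x, y, np.array([0.25, 0.75]))
```

    The scalable methods evaluated in the thesis replace the dense Cholesky step with inducing points, interpolation and structured-matrix algebra to reach near-O(n) training.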


  • 194.
    Hveem, Markus
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Portfolio management using structured products - The capital guarantee puzzle2011Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE creditsStudent thesis
  • 195. Hyodo, M.
    et al.
    Shutoh, N.
    Nishiyama, T.
    Pavlenko, Tetyana
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Testing block-diagonal covariance structure for high-dimensional data2015In: Statistica neerlandica (Print), ISSN 0039-0402, E-ISSN 1467-9574, Vol. 69, no 4, p. 460-482Article in journal (Refereed)
    Abstract [en]

    A test statistic is developed for making inference about a block-diagonal structure of the covariance matrix when the dimensionality p exceeds n, where n = N - 1 and N denotes the sample size. The suggested procedure extends the complete independence results. Because the classical hypothesis testing methods based on the likelihood ratio degenerate when p > n, the main idea is to turn instead to a distance function between the null and alternative hypotheses. The test statistic is then constructed using a consistent estimator of this function, where consistency is considered in an asymptotic framework that allows p to grow together with n. The suggested statistic is also shown to have an asymptotic normality under the null hypothesis. Some auxiliary results on the moments of products of multivariate normal random vectors and higher-order moments of the Wishart matrices, which are important for our evaluation of the test statistic, are derived. We perform empirical power analysis for a number of alternative covariance structures.

  • 196. Hyodo, Masashi
    et al.
    Shutoh, Nobumichi
    Seo, Takashi
    Pavlenko, Tatjana
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics. Tokyo Univ Sci, Fac Sci, Dept Math Informat Sci, Japan.
    Estimation of the covariance matrix with two-step monotone missing data2016In: Communications in Statistics - Theory and Methods, ISSN 0361-0926, E-ISSN 1532-415X, Vol. 45, no 7, p. 1910-1922Article in journal (Refereed)
    Abstract [en]

    We suggest a shrinkage-based technique for estimating the covariance matrix in the high-dimensional normal model with missing data. Our approach is based on the monotone missing scheme assumption, meaning that missing-value patterns occur completely at random. Our asymptotic framework allows the dimensionality p to grow to infinity together with the sample size, N, and extends the methodology of Ledoit and Wolf (2004) to the case of two-step monotone missing data. Two new shrinkage-type estimators are derived and their dominance properties over the Ledoit and Wolf (2004) estimator are shown under the expected quadratic loss. We perform a simulation study and conclude that the proposed estimators are successful for a range of missing data scenarios.
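
    The shrinkage idea referenced above (Ledoit and Wolf, 2004) forms a convex combination of the sample covariance and a scaled-identity target. A hedged sketch with a fixed shrinkage intensity (the papers derive data-driven optimal intensities, omitted here):

```python
import numpy as np

def shrink_covariance(X, intensity=0.2):
    """Convex combination of the sample covariance S and a scaled-identity
    target: (1 - rho) * S + rho * mu * I with mu = tr(S) / p. The intensity
    rho is fixed here; Ledoit and Wolf derive a data-driven optimum."""
    S = np.cov(X, rowvar=False)
    p = S.shape[0]
    mu = np.trace(S) / p
    return (1 - intensity) * S + intensity * mu * np.eye(p)

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 50))   # n = 30 samples in p = 50 dimensions (p > n)
S_shrunk = shrink_covariance(X)
```

    Unlike the sample covariance, which is singular whenever p > n (rank at most n - 1), the shrunk estimator is positive definite.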

  • 197.
    Isaksson, Fredrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Quantifying effects of deformable CT-MR registration2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Rigid image registration is an important part of many medical applications. In order to make correct decisions from the registration process, the uncertainty of the results should be included. In this thesis a framework for estimating and visualising the spatial uncertainty of rigid image registration without ground-truth measurements is presented. The framework uses a deformable registration algorithm to estimate the errors and a groupwise registration for collocating multiple image sets to generate multiple realisations of the error field. A mean and covariance field are then generated, which allows a characterisation of the error field. The framework is used to evaluate errors in CT-MR registration and a statistically significant bias field is detected using random field theory. It is also established that B-spline registration of CT images to themselves exhibits a bias.

  • 198.
    Ivert, Annica
    et al.
    KTH, School of Computer Science and Communication (CSC).
    Aranha, C.
    Iba, H.
    Feature selection and classification using ensembles of genetic programs and within-class and between-class permutations2015In: 2015 IEEE Congress on Evolutionary Computation, CEC 2015, IEEE , 2015, p. 1121-1128Conference paper (Refereed)
    Abstract [en]

    Many feature selection methods are based on the assumption that important features are highly correlated with their corresponding classes, but mainly uncorrelated with each other. Often, this assumption can help eliminate redundancies and produce good predictors using only a small subset of features. However, when the predictability depends on interactions between features, such methods will fail to produce satisfactory results. In this paper a method that can find important features, both independently and dependently discriminative, is introduced. This method works by performing two different types of permutation tests that classify each of the features as either irrelevant, independently predictive or dependently predictive. It was evaluated using a classifier based on an ensemble of genetic programs. The attributes chosen by the permutation tests were shown to yield classifiers at least as good as the ones obtained when all attributes were used during training, and often better. The proposed method also fared well when compared to other attribute selection methods such as RELIEFF and CFS. Furthermore, the ability to determine whether an attribute was independently or dependently predictive was confirmed using artificial datasets with known dependencies.
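
    A simplified cousin of the permutation tests above is single-feature permutation importance: permute one column and measure the drop in accuracy. This sketch is illustrative only; it implements neither the paper's within-class/between-class permutation scheme nor the genetic-program ensemble:

```python
import numpy as np

def permutation_importance(predict, X, y, feature, n_perm, rng):
    """Mean drop in accuracy when one feature column is randomly permuted;
    a feature whose permutation hurts accuracy carries predictive signal."""
    base = np.mean(predict(X) == y)
    drops = []
    for _ in range(n_perm):
        Xp = X.copy()
        Xp[:, feature] = rng.permutation(Xp[:, feature])
        drops.append(base - np.mean(predict(Xp) == y))
    return float(np.mean(drops))

# Toy data: the class is the sign of feature 0; feature 1 is pure noise.
rng = np.random.default_rng(4)
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)
predict = lambda Z: (Z[:, 0] > 0).astype(int)
imp0 = permutation_importance(predict, X, y, 0, 20, rng)  # large drop
imp1 = permutation_importance(predict, X, y, 1, 20, rng)  # no drop
```

    A plain marginal test like this can miss features that are only dependently predictive, which is precisely the gap the paper's two-permutation design addresses.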

  • 199.
    Jangenstål, Lovisa
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hedging Interest Rate Swaps2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis investigates hedging strategies for a book of interest rate swaps of the currencies EUR and SEK. The aim is to minimize the variance of the portfolio and keep the transaction costs down. The analysis is performed using historical simulation for two different cases. First, with the real changes of the forward rate curve and the discount curve. Then, with principal component analysis to reduce the dimension of the changes in the curves. These methods are compared with a method using the principal component variance to randomize new principal components.
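
    The dimension-reduction step described above can be sketched with SVD-based PCA on curve changes. The data here are synthetic, dominated by a parallel-shift factor, and are not the thesis's EUR/SEK market data:

```python
import numpy as np

def pca_components(changes, k):
    """First k principal components of a matrix of daily curve changes
    (rows = days, columns = tenors), via SVD of the centred data, together
    with the fraction of variance each component explains."""
    Xc = changes - changes.mean(axis=0)
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    return Vt[:k], explained[:k]

# Synthetic changes dominated by a parallel shift plus small noise.
rng = np.random.default_rng(5)
n_days, n_tenors = 500, 10
shift = rng.normal(size=(n_days, 1)) @ np.ones((1, n_tenors))
noise = 0.1 * rng.normal(size=(n_days, n_tenors))
components, explained = pca_components(shift + noise, k=3)
```

    The first component then recovers the near-uniform shift direction and accounts for almost all of the variance, which is why a handful of components suffice to describe the curve moves being hedged.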

  • 200. Janson, Svante
    et al.
    Stefánsson, Sigurdur Örn
    KTH, Centres, Nordic Institute for Theoretical Physics NORDITA.
    Scaling limits of random planar maps with a unique large face2015In: Annals of Probability, ISSN 0091-1798, E-ISSN 2168-894X, Vol. 43, no 3, p. 1045-1081Article in journal (Refereed)
    Abstract [en]

    We study random bipartite planar maps defined by assigning nonnegative weights to each face of a map. We prove that for certain choices of weights a unique large face, having degree proportional to the total number of edges in the maps, appears when the maps are large. It is furthermore shown that as the number of edges n of the planar maps goes to infinity, the profile of distances to a marked vertex rescaled by n^(-1/2) is described by a Brownian excursion. The planar maps, with the graph metric rescaled by n^(-1/2), are then shown to converge in distribution toward Aldous' Brownian tree in the Gromov-Hausdorff topology. In the proofs, we rely on the Bouttier-di Francesco-Guitter bijection between maps and labeled trees and recent results on simply generated trees where a unique vertex of a high degree appears when the trees are large.
