51 - 100 of 375
  • 51.
    Blanchet, Jose
    et al.
    Columbia University.
    Hult, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Leder, Kevin
    University of Minnesota.
    Rare-Event Simulation for Stochastic Recurrence Equations with Heavy-Tailed Innovations. 2013. In: ACM Transactions on Modeling and Computer Simulation, ISSN 1049-3301, E-ISSN 1558-1195, Vol. 23, no. 4, p. 22. Article in journal (Refereed)
    Abstract [en]

    In this article, rare-event simulation for stochastic recurrence equations of the form X_{n+1} = A_{n+1} X_n + B_{n+1}, X_0 = 0, is studied, where {A_n; n >= 1} and {B_n; n >= 1} are independent sequences consisting of independent and identically distributed real-valued random variables. It is assumed that the tail of the distribution of B_1 is regularly varying, whereas the distribution of A_1 has a suitably light tail. The problem of efficient estimation, via simulation, of quantities such as P{X_n > b} and P{sup_{k <= n} X_k > b} for large b and n is studied. Importance sampling strategies are investigated that provide unbiased estimators with bounded relative error as b and n tend to infinity.
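    The recurrence lends itself to a direct Monte Carlo illustration. The sketch below estimates P{X_n > b} with crude Monte Carlo, assuming a Pareto-distributed innovation B and a uniformly distributed multiplier A; both choices are illustrative stand-ins, not taken from the article, whose point is precisely that crude Monte Carlo becomes inefficient for large b:

```python
import numpy as np

def simulate_tail_prob(n, b, n_paths=100_000, seed=0):
    """Crude Monte Carlo estimate of P{X_n > b} for the recurrence
    X_{k+1} = A_{k+1} X_k + B_{k+1}, X_0 = 0, where A is light-tailed
    (uniform) and B is heavy-tailed (Pareto, regularly varying tail)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths)
    for _ in range(n):
        a = rng.uniform(0.0, 0.9, size=n_paths)      # light-tailed multipliers
        b_inn = rng.pareto(2.5, size=n_paths) + 1.0  # heavy-tailed innovations
        x = a * x + b_inn
    return float(np.mean(x > b))
```

    For large b the hit probability becomes tiny and the relative error of this crude estimator blows up, which is what motivates the importance sampling strategies investigated in the article.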

  • 52.
    Blomberg, Niclas
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Higher Criticism Testing for Signal Detection in Rare and Weak Models. 2012. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    We need models for selecting a small subset of useful features from high-dimensional data, where the useful features are both rare and weak; this is crucial for, e.g., supervised classification of sparse high-dimensional data. A preceding step is to detect the presence of useful features: signal detection. This problem is related to testing a very large number of hypotheses, where the proportion of false null hypotheses is assumed to be very small. However, reliable signal detection will only be possible in certain areas of the two-dimensional sparsity-strength parameter space, the phase space.

    In this report, we focus on two families of distributions, N and χ². In the former case, features are supposed to be independent and normally distributed. In the latter, in search of a more sophisticated model, we suppose that features depend in blocks, whose empirical separation strength asymptotically follows the non-central χ²_ν-distribution.

    Our search for informative features explores Tukey's higher criticism (HC), which is a second-level significance testing procedure for comparing the fraction of observed significances to the expected fraction under the global null.

    Throughout the phase space we investigate the estimated error rate,

    Err = (#Falsely rejected H0 + #Falsely rejected H1) / #Simulations,

    where H0: absence of informative signals, and H1: presence of informative signals, in both the N-case and the χ²_ν-case, for ν = 2, 10, 30. In particular, we find, using a feature vector of approximately the same size as in genomic applications, that the analytically derived detection boundary is too optimistic in the sense that close to it, signal detection still fails, and we need to move far from the boundary into the success region to ensure reliable detection. We demonstrate that Err grows fast and irregularly as we approach the detection boundary from the success region.

    In the χ²_ν-case, ν > 2, no analytical detection boundary has been derived, but we show that the empirical success region there is smaller than in the N-case, especially as ν increases.
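    For readers unfamiliar with higher criticism, a minimal sketch of the HC statistic in its standard (Donoho-Jin) form follows; the thesis's exact variant may differ:

```python
import numpy as np

def higher_criticism(pvals, alpha0=0.5):
    """Tukey's higher criticism statistic: the largest standardized
    discrepancy between the observed and expected fractions of small
    p-values, maximized over the smallest alpha0-fraction of them."""
    p = np.sort(np.clip(np.asarray(pvals, float), 1e-12, 1 - 1e-12))
    n = len(p)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    k = max(1, int(alpha0 * n))
    return float(np.max(hc[:k]))
```

    Under the global null the statistic stays moderate; a small fraction of very small p-values (rare and weak signals) drives it up sharply.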

  • 53.
    Blomberg, Renée
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Who is Granted Disability Benefit in Sweden?: Description of risk factors and the effect of the 2008 law reform. 2013. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Disability benefit is a publicly funded benefit in Sweden that provides financial protection to individuals with permanent working ability impairments due to disability, injury, or illness. The eligibility requirements for disability benefit were tightened June 1, 2008 to require that the working ability impairment be permanent and that no other factors, such as age or local labor market conditions, can affect eligibility for the benefit. The goal of this paper is to investigate risk factors for the incidence of disability benefit and the effects of the 2008 reform. This is the first study to investigate the impact of the 2008 reform on the demographics of those that received disability benefit. A logistic regression model was used to study the effect of the 2008 law change. The regression results show that the 2008 reform did have a statistically significant effect on the demographics of the individuals who were granted disability benefit. After the reform, women were less overrepresented, the older age groups were more overrepresented, and people with short educations were more overrepresented. Although the variables for SKL regions together were jointly statistically significant, their coefficients were small and the group of variables had the least amount of explanatory value compared to the variables for age, education, gender and the interaction variables.
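    As an illustration of the method, a minimal logistic regression fitted by gradient ascent on the log-likelihood is sketched below; the covariates and data are synthetic stand-ins, not the registry data used in the thesis:

```python
import numpy as np

def fit_logistic(X, y, n_iter=200, lr=0.1):
    """Fit logistic regression P(y=1|x) = sigmoid(beta0 + x @ beta)
    by gradient ascent on the average log-likelihood (illustrative;
    no regularization, no standard errors)."""
    X = np.column_stack([np.ones(len(X)), np.asarray(X, float)])  # add intercept
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # predicted probabilities
        beta += lr * X.T @ (y - p) / len(y)   # score-function step
    return beta
```

    In the thesis's setting, a reform-period indicator and its interactions with the demographic covariates would be additional columns of X, and their fitted coefficients measure the demographic shift after 2008.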

  • 54.
    Blomkvist, Oscar
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Smart Beta - index weighting. 2015. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This thesis concludes a 120-credit master's program in Mathematics with specialization in Financial Mathematics and Mathematical Statistics at the Royal Institute of Technology (KTH).

    The subject of smart beta is defined and studied in an index fund context. The portfolio weighting schemes tested are: equal weighting, maximum Sharpe ratio, maximum diversification, and fundamental weighting using P/E ratios. The outcome of the strategies is measured in performance (accumulated return), risk, and cost of trading, along with measures of the proportions of different assets in the portfolio.

    The thesis goes through the steps of collecting, ordering, and "cleaning" the data used in the process. A brief explanation of the historical simulation used in estimating stochastic variables such as expected returns and covariance matrices is included, as well as analysis of the data's distribution.

    The process of optimization is described, along with how the rules for UCITS compliance give rise to optimization programs with constraints.

    The results indicate that all but the most diversified portfolios tested outperform the market cap weighted portfolio. In all cases, the trading volumes and the market impact are increased in comparison with the cap weighted portfolio. The Sharpe ratio maximizer yields a high level of return while keeping the risk low. The fundamentally weighted portfolio performs best, but with higher risk. A combination of the two yields the portfolio with the highest return and lowest risk.
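    For reference, the unconstrained maximum-Sharpe-ratio weights have the closed form w ∝ Σ⁻¹μ; a minimal sketch, ignoring the UCITS constraints the thesis adds to its optimization programs, is:

```python
import numpy as np

def max_sharpe_weights(mu, sigma):
    """Unconstrained tangency-portfolio weights, w proportional to
    inv(Sigma) @ mu, normalized to sum to one (assumes a zero
    risk-free rate and an invertible covariance matrix)."""
    w = np.linalg.solve(np.asarray(sigma, float), np.asarray(mu, float))
    return w / w.sum()
```

    With constraints (long-only, UCITS concentration limits) the closed form no longer applies and a numerical optimizer is needed, which is the situation treated in the thesis.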

  • 55.
    Bogren, Felix
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Estimating the Term Structure of Default Probabilities for Heterogeneous Credit Portfolios. 2015. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The aim of this thesis is to estimate the term structure of default probabilities for heterogeneous credit portfolios. The term structure is defined as the cumulative distribution function (CDF) of the time until default. Since the CDF is the complement of the survival function, survival analysis is applied to estimate the term structures. To manage long-term survivors and plateaued survival functions, the data is assumed to follow a parametric as well as a semi-parametric mixture cure model. Due to the general intractability of the maximum likelihood of mixture models, the parameters are estimated by the EM algorithm. A simulation study is conducted to assess the accuracy of the EM algorithm applied to the parametric mixture cure model with data characterized by a low default incidence. The simulation study identifies difficulties in estimating the parameters when the data is not gathered over a sufficiently long observational window. The estimated term structures are compared to empirical term structures determined by the Kaplan-Meier estimator. The results indicate a good fit of the model for longer horizons when applied to each credit type separately, despite difficulties capturing the dynamics of the term structure for the first one to two years. Both models performed poorly with few defaults; the parametric model did, however, not seem sensitive to low default rates. In conclusion, the class of mixture cure models is indeed viable for estimating the term structure of default probabilities for heterogeneous credit portfolios.
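    The empirical benchmark mentioned, the Kaplan-Meier estimator, can be sketched as follows (a textbook implementation, not the thesis's code); the term structure of default probabilities is then 1 - S(t):

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the survival function from observed
    times and event indicators (1 = default observed, 0 = censored).
    Returns the distinct event times and the survival estimates there."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    t_uniq = np.unique(times[events == 1])
    surv, s = [], 1.0
    for t in t_uniq:
        at_risk = np.sum(times >= t)               # still under observation at t
        d = np.sum((times == t) & (events == 1))   # defaults at t
        s *= 1.0 - d / at_risk
        surv.append(s)
    return t_uniq, np.array(surv)
```

    Censored observations (the long-term survivors motivating the cure-model extension) enter only through the risk sets, never as events.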

  • 56.
    Boros, Daniel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    On Lapse risk factors in Solvency II. 2014. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In the wake of the sub-prime crisis of 2008, the European Insurance and Occupational Pensions Authority issued the Solvency II directive, aiming to replace the obsolete Solvency I framework by 2016. Among the quantitative requirements of Solvency II is a measure of an insurance firm's solvency risk, the solvency capital requirement (SCR). It aims at establishing the amount of equity the company needs to hold to be able to meet its insurance obligations with a probability of 0.995 over the coming year. The SCR of a company is essentially built up by the SCR induced by a set of quantifiable risks. Among these, risks originating from the take-up rate of contractual options, lapse risks, are included.

    In this thesis, the contractual options of a life insurer have been identified and risk factors aiming at capturing the risks arising are suggested. It has been concluded that a risk factor estimating the size of mass transfer events captures the risk arising through the resulting rescaling of the balance sheet. Further, a risk factor modeling the deviation from the company's assumption for the yearly transfer rate is introduced to capture the risks induced by the characteristics of traditional life insurance and unit-linked insurance contracts upon transfer. The risk factors are modeled in a manner that introduces co-dependence with equity returns as well as interest rates of various durations, and the model parameters are estimated using statistical methods for Norwegian transfer-frequency data obtained from Finans Norge.

    The univariate and multivariate properties of the models are investigated in a scenario setting, and it is concluded that the suggested models provide predominantly plausible results for the mass-lapse risk factors. However, the performance of the models for the risk factors aiming at capturing deviations in the transfer assumptions is questionable, which is why two means of increasing their validity have been proposed.

  • 57.
    Borysov, Stanislav
    et al.
    KTH, School of Engineering Sciences (SCI), Applied Physics, Nanostructure Physics. KTH, Centres, Nordic Institute for Theoretical Physics NORDITA.
    Roudi, Yasser
    KTH, Centres, Nordic Institute for Theoretical Physics NORDITA. The Kavli Institute for Systems Neuroscience, NTNU, Trondheim, Norway.
    Balatsky, Alexander V.
    KTH, Centres, Nordic Institute for Theoretical Physics NORDITA. Institute for Materials Science, Los Alamos National Laboratory, Los Alamos, NM, United States.
    U.S. stock market interaction network as learned by the Boltzmann machine. 2015. In: European Physical Journal B: Condensed Matter Physics, ISSN 1434-6028, E-ISSN 1434-6036, Vol. 88, no. 12, p. 1-14. Article in journal (Refereed)
    Abstract [en]

    We study the historical dynamics of the joint equilibrium distribution of stock returns in the U.S. stock market using a Boltzmann distribution model parametrized by external fields and pairwise couplings. Within the Boltzmann learning framework for statistical inference, we analyze the historical behavior of the parameters inferred using exact and approximate learning algorithms. Since the model and inference methods require the use of binary variables, the effect of this mapping of continuous returns to the discrete domain is studied. The presented results show that binarization preserves the correlation structure of the market. Properties of the distributions of external fields and couplings, as well as the market interaction network and industry sector clustering structure, are studied for different historical dates and moving window sizes. We demonstrate that the observed positive heavy tail in the distribution of couplings is related to the sparse clustering structure of the market. We also show that discrepancies between the model's parameters might be used as a precursor of financial instabilities.
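    The binarization step can be illustrated directly. The sketch below maps returns to +/-1 spins (the discretization required by Boltzmann-machine inference) and measures how closely the binary pairwise correlations track the continuous ones; the factor-model returns in the test are synthetic stand-ins for the S&P 500 data used in the paper:

```python
import numpy as np

def binarization_fidelity(returns):
    """Map continuous returns to +/-1 spins and return the Pearson
    correlation between the off-diagonal entries of the continuous
    and the binarized correlation matrices (1.0 = structure fully
    preserved in this summary sense)."""
    s = np.where(returns >= 0, 1.0, -1.0)     # spin variables
    c_cont = np.corrcoef(returns.T)           # continuous correlations
    c_bin = np.corrcoef(s.T)                  # binary correlations
    iu = np.triu_indices_from(c_cont, k=1)    # distinct pairs only
    return float(np.corrcoef(c_cont[iu], c_bin[iu])[0, 1])
```

    For jointly Gaussian returns the two are linked by the arcsine law, corr(sign) = (2/π) arcsin(ρ), a monotone map, which is one way to rationalize the paper's observation that binarization preserves the correlation structure.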

  • 58.
    Bramstång, Philip
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hermanson, Richard
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Extreme value theory with Markov chain Monte Carlo - an automated process for EVT in finance. 2015. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The purpose of this thesis was to create an automated procedure for estimating financial risk using extreme value theory (EVT).

    The "peaks over threshold" (POT) result from EVT was chosen for modelling the tails of the distribution of financial returns. The main difficulty with POT is choosing a convergence threshold above which the data points are regarded as extreme events and modelled using a limit distribution. It was investigated how risk measures are affected by variations in this threshold and it was deemed that fixed-threshold models are inadequate in the context of few relevant data points, as is often the case in EVT applications. A model for automatic threshold weighting was proposed and shows promise.

    Moreover, the choice of Bayesian vs frequentist inference, with focus on Markov chain Monte Carlo (MCMC) vs maximum likelihood estimation (MLE), was investigated with regards to EVT applications, favoring Bayesian inference and MCMC. Two MCMC algorithms, independence Metropolis (IM) and automated factor slice sampler (AFSS), were analyzed and improved in order to increase performance of the final procedure.

    Lastly, the effects of a reference prior and a prior based on expert opinion were compared and exemplified for practical applications in finance.
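    A minimal POT sketch, assuming a fixed threshold and a method-of-moments fit of the generalized Pareto distribution (deliberately simpler than the MLE and MCMC approaches compared in the thesis), is:

```python
import numpy as np

def pot_excesses(returns, u):
    """Peaks over threshold: losses exceeding the threshold u,
    shifted to excesses over u."""
    losses = -np.asarray(returns, float)
    return losses[losses > u] - u

def gpd_fit_mom(excesses):
    """Method-of-moments fit of the generalized Pareto distribution
    to the excesses: xi is the shape (tail index), beta the scale."""
    m, v = np.mean(excesses), np.var(excesses)
    xi = 0.5 * (1.0 - m * m / v)
    beta = 0.5 * m * (m * m / v + 1.0)
    return xi, beta
```

    The fitted (xi, beta) depend strongly on the chosen u when exceedances are few, which is the sensitivity that motivates the thesis's threshold-weighting proposal.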

  • 59.
    Brodin, Kristoffer
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Statistical Machine Learning from Classification Perspective: Prediction of Household Ties for Economical Decision Making. 2017. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In modern society, many companies have large data records over their individual customers, containing information about attributes such as name, gender, marital status, address, etc. These attributes can be used to link customers together, depending on whether they share some sort of relationship with each other or not. In this thesis the goal is to investigate and compare methods to predict relationships between individuals in terms of what we define as a household relationship, i.e. we wish to identify which individuals are sharing living expenses with one another. The objective is to explore the ability of three supervised statistical machine learning methods, namely logistic regression (LR), artificial neural networks (ANN) and the support vector machine (SVM), to predict these household relationships and evaluate their predictive performance for different settings of their corresponding tuning parameters. Data over a limited population of individuals, containing information about household affiliation and attributes, were available for this task. In order to apply these methods, the problem had to be formulated in a form enabling supervised learning, i.e. a target Y and input predictors X = (X1, …, Xp), based on the set of p attributes associated with each individual, had to be derived. We have presented a technique which forms pairs of individuals under the hypothesis H0 that they share a household relationship, and then a test of significance is constructed. This technique transforms the problem into a standard binary classification problem. A sample of observations could be generated by randomly pairing individuals and using the available data over each individual to code the corresponding outcome on Y and X for each random pair. For evaluation and tuning of the three supervised learning methods, the sample was split into a training set, a validation set and a test set.

    We have seen that the prediction error, in terms of misclassification rate, is very small for all three methods, since the two classes, H0 true and H0 false, are far apart and well separable. The data have shown pronounced linear separability, generally resulting in minor differences in misclassification rate as the tuning parameters are modified. However, some variations in the prediction results due to tuning have been observed, and when also considering computational time and requirements on computational power, optimal settings of the tuning parameters could be determined for each method. Comparing LR, ANN and SVM using optimal tuning settings, the results from testing have shown that there is no significant difference between the three methods' performances and they all predict well. Nevertheless, due to the difference in complexity between the methods, we have concluded that SVM is the least suitable method to use, whereas LR is the most suitable. However, the ANN handles complex and non-linear data better than LR; therefore, for future application of the model, where data might not have such pronounced linear separability, we find it suitable to consider ANN as well.

    This thesis has been written at Svenska Handelsbanken, one of the major banks in Sweden, with offices all around the world. Its headquarters are situated in Kungsträdgården, Stockholm. Computations have been performed using SAS software and data have been processed in an SQL relational database management system.

  • 60. Buckdahn, Rainer
    et al.
    Djehiche, Boualem
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Li, Juan
    A General Stochastic Maximum Principle for SDEs of Mean-field Type. 2011. In: Applied Mathematics and Optimization, ISSN 0095-4616, E-ISSN 1432-0606, Vol. 64, no. 2, p. 197-216. Article in journal (Refereed)
    Abstract [en]

    We study the optimal control of stochastic differential equations (SDEs) of mean-field type, in which the coefficients depend on the state of the solution process as well as on its expected value. Moreover, the cost functional is also of mean-field type. This makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. For a general action space a Peng-type stochastic maximum principle (Peng, S.: SIAM J. Control Optim. 28(4), 966-979, 1990) is derived, specifying the necessary conditions for optimality. This maximum principle differs from the classical one in the sense that here the first-order adjoint equation turns out to be a linear mean-field backward SDE, while the second-order adjoint equation remains the same as in Peng's stochastic maximum principle.
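    In symbols, a standard formulation of controlled mean-field dynamics and cost of the kind described is (the coefficient names b, σ, f, g are generic, not taken from the article):

```latex
dX_t = b\bigl(t, X_t, \mathbb{E}[X_t], u_t\bigr)\,dt
     + \sigma\bigl(t, X_t, \mathbb{E}[X_t], u_t\bigr)\,dW_t, \qquad X_0 = x_0,
\qquad
J(u) = \mathbb{E}\!\left[\int_0^T f\bigl(t, X_t, \mathbb{E}[X_t], u_t\bigr)\,dt
     + g\bigl(X_T, \mathbb{E}[X_T]\bigr)\right].
```

    The dependence of b, σ, f and g on E[X_t] is what makes both the dynamics and the cost "mean-field type" and breaks the Bellman principle.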

  • 61.
    Budai, Daniel
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Jallo, David
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    The Market Graph: A study of its characteristics, structure & dynamics. 2011. Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In this thesis we have considered three different market graphs: one solely based on stock returns, another based on stock returns with vertices weighted with a liquidity measure, and lastly one based on correlations of volume fluctuations. Research is conducted on two different markets: the Swedish and the American stock market. We want to introduce graph theory as a method for representing the stock market in order to show that one can more fully understand the structural properties and dynamics of the stock market by studying the market graph. We found many signs of increased globalization by studying the clustering coefficient and the correlation distribution. The structure of the market graph is such that it pinpoints specific sectors when the correlation threshold is increased, and different sectors are found in the two different markets. For low correlation thresholds we found groups of independent stocks that can be used as diversified portfolios. Furthermore, the dynamics revealed that it is possible to use the daily absolute change in edge density as an indicator for when the market is about to make a downturn. This could be an interesting topic for further studies. We had hoped to get additional results by considering volume correlations, but that did not turn out to be the case. Regardless of that, we think that it would be interesting to study volume based market graphs further.
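    Constructing a market graph from a return matrix is straightforward; a minimal sketch (correlation threshold only, without the liquidity weighting or volume variants also studied in the thesis) is:

```python
import numpy as np

def market_graph(returns, theta):
    """Build the market graph: vertices are stocks, and an edge joins
    two stocks whose return correlation exceeds the threshold theta.
    Returns the boolean adjacency matrix and the edge density."""
    corr = np.corrcoef(returns.T)                # returns: (days, stocks)
    n = corr.shape[0]
    adj = (corr > theta) & ~np.eye(n, dtype=bool)
    density = adj.sum() / (n * (n - 1))          # fraction of possible edges
    return adj, float(density)
```

    Sweeping theta upward and watching which connected components survive is how the sector structure mentioned in the abstract is revealed; tracking the daily change in the returned density gives the downturn indicator.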

  • 62.
    Budhiraja, Amarjit
    et al.
    University of North Carolina at Chapel Hill United States.
    Nyquist, Pierre
    Brown University, United States.
    Large deviations for multidimensional state-dependent shot noise processes. 2015. In: Journal of Applied Probability, ISSN 0021-9002, E-ISSN 1475-6072, Vol. 52, no. 4, p. 1097-1114. Article in journal (Refereed)
    Abstract [en]

    Shot-noise processes are used in applied probability to model a variety of physical systems in, for example, teletraffic theory, insurance and risk theory, and in the engineering sciences. In this paper we prove a large deviation principle for the sample-paths of a general class of multidimensional state-dependent Poisson shot-noise processes. The result covers previously known large deviation results for one-dimensional state-independent shot-noise processes with light tails. We use the weak convergence approach to large deviations, which reduces the proof to establishing the appropriate convergence of certain controlled versions of the original processes together with relevant results on existence and uniqueness.

  • 63.
    Callert, Gustaf
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Halén Dahlström, Filip
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A performance investigation and evaluation of selected portfolio optimization methods with varying assets and market scenarios. 2016. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This study investigates and evaluates how different portfolio optimization methods perform when varying assets and financial market scenarios. Methods included are mean variance, Conditional Value-at-Risk, utility based, risk factor based and Monte Carlo optimization. Market scenarios are represented by stagnating, bull and bear market data from the Bloomberg database. In order to perform robust optimizations, resampling of the Bloomberg data has been done a hundred times. The evaluation of the methods has been done with respect to selected ratios and two benchmark portfolios: an equally weighted portfolio and an equally weighted risk contributions portfolio. The study found that mean variance and Conditional Value-at-Risk optimization performed best when using linear assets in all the investigated cases. Considering non-linear assets such as options, an equally weighted portfolio performs best.

  • 64. Cappé, Olivier
    et al.
    Moulines, Eric
    Rydén, Tobias
    Lund University.
    Inference in Hidden Markov Models. 2005. Book (Refereed)
  • 65.
    Carlqvist, Håkan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Multiscale analysis of multi-channel signals. 2005. Doctoral thesis, comprehensive summary (Other scientific)
    Abstract [en]

    I: Amplitude and phase relationship between alpha and beta oscillations in the human EEG. We have studied the relation between two oscillatory patterns within EEG signals (oscillations with main frequencies 10 Hz and 20 Hz) with wavelet-based methods. For better comparison, a variant of the continuous wavelet transform was derived. In conclusion, the two patterns were closely related, and 70-90% of the activity in the 20 Hz pattern could be seen as a resonance phenomenon of the 10 Hz activity.

    II: A local discriminant basis algorithm using wavelet packets for discrimination between classes of multidimensional signals. We have improved and extended the local discriminant basis algorithm for application to multidimensional signals arising from multiple channels. The improvements include principal-component analysis and leave-one-out cross-validation. The method is furthermore applied to two classes of EEG signals, one group of control subjects and one group of subjects with type I diabetes. There was a clear discrimination between the two groups. The discrimination follows known differences in the EEG between the two groups of subjects.

    III: Improved classification of multidimensional signals using orthogonality properties of a time-frequency library. We further improve and refine the method in paper II and apply it to 4 classes of EEG signals from subjects differing in age and/or sex, which are known factors of EEG alterations. As a method for deciding the best basis, we derive an orthogonal-basis-pursuit-like algorithm which works statistically better (Tukey's test for simultaneous confidence intervals) than the basis selection method in the original local discriminant basis algorithm. Other methods included were Fisher's class separability, partial least squares and leave-one-subject-out cross-validation. The two groups of younger subjects were almost fully discriminated from each other and from the other groups, while the older subjects were harder to discriminate.
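    As a flavor of the wavelet machinery involved, one level of the Haar transform (the simplest wavelet, used here purely for illustration; the thesis works with wavelet packets and a continuous-transform variant) can be written as:

```python
import numpy as np

def haar_dwt_level(x):
    """One level of the orthonormal Haar wavelet transform of an
    even-length signal: approximation (low-pass) and detail
    (high-pass) coefficients. Energy is preserved exactly."""
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # pairwise averages, scaled
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # pairwise differences, scaled
    return a, d
```

    Recursing on the approximation coefficients gives the usual multilevel decomposition; recursing on both branches gives the wavelet-packet tree from which a local discriminant basis is selected.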

  • 66.
    Chaqchaq, Othmane
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Fixed Income Modeling. 2017. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Besides financial analysis, quantitative tools play a major role in asset management. By managing the aggregation of large amounts of historical and prospective data on different asset classes, they can give portfolio allocation solutions with respect to risk and regulatory constraints.

    Asset class modeling requires three main steps. The first is to assess the product features (risk premium and risks) by considering historical and prospective data, which in the case of fixed income depend on spread and default levels. The second is choosing the quantitative model; in this study we introduce a new credit model which, unlike equity-like models, models default as a main feature of fixed income performance. The final step consists of calibrating the model.

    We start in this study with the modeling of bond classes and study their behavior in asset allocation; we then model the capital solution transaction as an example of a fixed income structured product.

  • 67.
    Chatall, Kim
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Johansson, Niklas
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    An Analysis of Asynchronous Data. 2013. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Risk analysis and financial decision making require true and appropriate estimates of correlations today and of how they are expected to evolve in the future. If a portfolio consists of assets traded in markets with different trading hours, the correlation could potentially be underestimated. This is due to asynchronous data: there exists an asynchronicity within the asset time series in the portfolio. The purpose of this paper is twofold. First, we suggest a modified synchronization model of Burns, Engle and Mezrich (1998) which replaces the first-order vector moving average with a first-order vector autoregressive process. Second, we study the time-varying dynamics along with forecasting the conditional variance-covariance and correlation through a DCC model. The performance of the DCC model is compared to the industry-standard RiskMetrics Exponentially Weighted Moving Average (EWMA) model. The analysis shows that the covariance of the DCC model is slightly lower than that of the RiskMetrics EWMA model. Our conclusion is that the DCC model is simple and powerful and therefore a promising tool. It provides good insight into how correlations are likely to evolve over a short-run time horizon.
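    The RiskMetrics benchmark is easy to state concretely; a sketch of the EWMA covariance recursion S_t = λ S_{t-1} + (1 - λ) r_t r_t' with the conventional λ = 0.94 follows (the 20-day warm-up window is an arbitrary choice, not part of the standard):

```python
import numpy as np

def ewma_cov(returns, lam=0.94, warmup=20):
    """RiskMetrics-style EWMA covariance estimate: initialize with a
    sample covariance over the warm-up window, then update recursively
    with decay factor lam on each subsequent return vector."""
    r = np.asarray(returns, float)   # shape (T, n)
    S = np.cov(r[:warmup].T)         # warm-up estimate
    for t in range(warmup, len(r)):
        S = lam * S + (1.0 - lam) * np.outer(r[t], r[t])
    return S
```

    The DCC model generalizes this by giving the correlation dynamics their own estimated parameters instead of a single fixed decay factor.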

  • 68.
    Chen, Peng
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Modelling the Stochastic Correlation. 2016. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In this thesis, we mainly study the correlation between stocks, which has been receiving increasing attention. Usually the correlation is considered to be constant, although it is observed to vary over time. We study the properties of correlations between Wiener processes and introduce a stochastic correlation model. Following the calibration methods of Zetocha, we implement the calibration for a new set of market data.
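    One common way to keep a correlation process inside (-1, 1) is to pass a mean-reverting Ornstein-Uhlenbeck process through tanh; the sketch below is such a generic construction for illustration, not necessarily the model calibrated in the thesis:

```python
import numpy as np

def simulate_stoch_corr(T=1000, dt=1.0 / 252, kappa=2.0, rho_bar=0.3,
                        s=1.0, seed=0):
    """Simulate a stochastic correlation path rho_t = tanh(Y_t), where
    Y follows an Ornstein-Uhlenbeck process mean-reverting to
    arctanh(rho_bar), so rho_t stays strictly inside (-1, 1)."""
    rng = np.random.default_rng(seed)
    y_bar = np.arctanh(rho_bar)
    y = np.empty(T)
    y[0] = y_bar
    for t in range(1, T):
        y[t] = y[t - 1] + kappa * (y_bar - y[t - 1]) * dt \
               + s * np.sqrt(dt) * rng.standard_normal()
    return np.tanh(y)
```

    The path can then be plugged into the instantaneous correlation of two Wiener processes, dW1 dW2 = rho_t dt, which is the setting the thesis studies.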

  • 69. Chhita, S.
    et al.
    Johansson, Kurt
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Young, B.
    Asymptotic domino statistics in the Aztec diamond2015In: The Annals of Applied Probability, ISSN 1050-5164, E-ISSN 2168-8737, Vol. 25, no 3, p. 1232-1278Article in journal (Refereed)
    Abstract [en]

    We study random domino tilings of the Aztec diamond with different weights for horizontal and vertical dominoes. A domino tiling of an Aztec diamond can also be described by a particle system which is a determinantal process. We give a relation between the correlation kernel for this process and the inverse Kasteleyn matrix of the Aztec diamond. This gives a formula for the inverse Kasteleyn matrix which generalizes a result of Helfgott. As an application, we investigate the asymptotics of the process formed by the southern dominoes close to the frozen boundary. We find that at the northern boundary, the southern domino process converges to a thinned Airy point process. At the southern boundary, the process of holes of the southern domino process converges to a multiple point process that we call the thickened Airy point process. We also study the convergence of the domino process in the unfrozen region to the limiting Gibbs measure.

  • 70.
    Clason Diop, Noah
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Forecasting Euro Area Inflation By Aggregating Sub-components2013Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The aim of this paper is to see whether one can improve on the naive forecast of Euro Area inflation, where by naive forecast we mean that the year-over-year inflation rate one year ahead will be the same as in the past year. Various model selection procedures are employed on an autoregressive-moving-average model and several Phillips-curve-based models. We also test whether we can improve on the Euro Area inflation forecast by first forecasting the sub-components and aggregating them. We manage to substantially improve on the forecast by using a Phillips curve based model. We also find further improvement by forecasting the sub-components first and aggregating them to Euro Area inflation.

  • 71.
    Combes, Richard
    et al.
    Centrale-Supelec, L2S, France.
    Magureanu, Stefan
    KTH, School of Electrical Engineering (EES), Automatic Control.
    Proutiere, Alexandre
    KTH, School of Electrical Engineering (EES), Automatic Control.
    Minimal Exploration in Structured Stochastic Bandits2017In: Advances in Neural Information Processing Systems, Neural information processing systems foundation , 2017, p. 1764-1772Conference paper (Refereed)
    Abstract [en]

    This paper introduces and addresses a wide class of stochastic bandit problems where the function mapping the arm to the corresponding reward exhibits some known structural properties. Most existing structures (e.g. linear, Lipschitz, unimodal, combinatorial, dueling, ...) are covered by our framework. We derive an asymptotic instance-specific regret lower bound for these problems, and develop OSSB, an algorithm whose regret matches this fundamental limit. OSSB is not based on the classical principle of "optimism in the face of uncertainty" or on Thompson sampling, and rather aims at matching the minimal exploration rates of sub-optimal arms as characterized in the derivation of the regret lower bound. We illustrate the efficiency of OSSB using numerical experiments in the case of the linear bandit problem and show that OSSB outperforms existing algorithms, including Thompson sampling.

  • 72.
    Corander, Jukka
    et al.
    University of Helsinki .
    Cui, Yaqiong
    University of Helsinki .
    Koski, Timo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Inductive Inference and Partition Exchangeability in Classification2013In: Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence: Papers from the Ray Solomonoff 85th Memorial Conference. / [ed] Dowe, David L., Springer Berlin/Heidelberg, 2013, p. 91-105Conference paper (Refereed)
    Abstract [en]

    Inductive inference has been a subject of intensive research efforts over several decades. In particular, for classification problems substantial advances have been made and the field has matured into a wide range of powerful approaches to inductive inference. However, a considerable challenge arises when deriving principles for an inductive supervised classifier in the presence of unpredictable or unanticipated events corresponding to unknown alphabets of observable features. Bayesian inductive theories based on de Finetti type exchangeability which have become popular in supervised classification do not apply to such problems. Here we derive an inductive supervised classifier based on partition exchangeability due to John Kingman. It is proven that, in contrast to classifiers based on de Finetti type exchangeability which can optimally handle test items independently of each other in the presence of infinite amounts of training data, a classifier based on partition exchangeability still continues to benefit from a joint prediction of labels for the whole population of test items. Some remarks about the relation of this work to generic convergence results in predictive inference are also given.

  • 73. Corander, Jukka
    et al.
    Gyllenberg, Mats
    Koski, Timo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Learning Genetic Population Structures Using Minimization of Stochastic Complexity2010In: Entropy, ISSN 1099-4300, E-ISSN 1099-4300, Vol. 12, no 5, p. 1102-1124Article in journal (Refereed)
    Abstract [en]

    Considerable research efforts have been devoted to probabilistic modeling of genetic population structures within the past decade. In particular, a wide spectrum of Bayesian models have been proposed for unlinked molecular marker data from diploid organisms. Here we derive a theoretical framework for learning genetic population structure of a haploid organism from bi-allelic markers for which potential patterns of dependence are a priori unknown and to be explicitly incorporated in the model. Our framework is based on the principle of minimizing stochastic complexity of an unsupervised classification under tree augmented factorization of the predictive data distribution. We discuss a fast implementation of the learning framework using deterministic algorithms.

  • 74. Cui, Y.
    et al.
    Sirén, J.
    Koski, Timo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Corander, J.
    Simultaneous Predictive Gaussian Classifiers2016In: Journal of Classification, ISSN 0176-4268, E-ISSN 1432-1343, p. 1-30Article in journal (Refereed)
    Abstract [en]

    Gaussian distribution has for several decades been ubiquitous in the theory and practice of statistical classification. Despite the early proposals motivating the use of predictive inference to design a classifier, this approach has gained relatively little attention apart from certain specific applications, such as speech recognition where its optimality has been widely acknowledged. Here we examine statistical properties of different inductive classification rules under a generic Gaussian model and demonstrate the optimality of considering simultaneous classification of multiple samples under an attractive loss function. It is shown that the simpler independent classification of samples leads asymptotically to the same optimal rule as the simultaneous classifier when the amount of training data increases, if the dimensionality of the feature space is bounded in an appropriate manner. Numerical investigations suggest that the simultaneous predictive classifier can lead to higher classification accuracy than the independent rule in the low-dimensional case, whereas the simultaneous approach suffers more from noise when the dimensionality increases.

  • 75.
    Dacke, Fredrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Non-local means denoising of projection images in cone beam computed tomography2013Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    A new edge-preserving denoising method is used to increase image quality in cone beam computed tomography. The reconstruction algorithm for cone beam computed tomography used by Elekta enhances high-frequency image details, e.g. noise, and we propose that denoising is done on the projection images before reconstruction. The denoising method is shown to have a connection with computational statistics and some mathematical improvements to the method are considered. Comparisons are made with the state-of-the-art method on both artificial and physical objects. The results show that the smoothness of the images is enhanced at the cost of blurring out image details. Some results show how the setting of the method parameters influences the trade-off between smoothness and blurred image details.

  • 76.
    Dahlin, Fredrik
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Storkitt, Samuel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Estimation of Loss Given Default for Low Default Portfolios2014Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The Basel framework allows banks to assess their credit risk by using their own estimates of Loss Given Default (LGD). However, for a Low Default Portfolio (LDP), estimating LGD is difficult due to shortage of default data. This study evaluates different LGD estimation approaches in an LDP setting by using pooled industry data obtained from a subset of the PECDC LGD database. Based on the characteristics of a LDP a Workout LGD approach is suggested. Six estimation techniques, including OLS regression, Ridge regression, two techniques combining logistic regressions with OLS regressions and two tree models, are tested. All tested models give similar error levels when tested against the data but the tree models might produce rather different estimates for specific exposures compared to the other models. Using historical averages yield worse results than the tested models within and out of sample but are not considerably worse out of time.

  • 77. Damasso, M.
    et al.
    Del Sordo, Fabio
    KTH, Centres, Nordic Institute for Theoretical Physics NORDITA.
    Proxima Centauri reloaded: Unravelling the stellar noise in radial velocities2017In: Astronomy and Astrophysics, ISSN 0004-6361, E-ISSN 1432-0746, Vol. 599, article id A126Article in journal (Refereed)
    Abstract [en]

    Context. The detection and characterisation of Earth-like planets with Doppler signals of the order of 1 m s-1 currently represent one of the greatest challenges for extrasolar-planet hunters. As results for such findings are often controversial, it is desirable to provide independent confirmations of the discoveries. Testing different models for the suppression of non-Keplerian stellar signals usually plaguing radial velocity data is essential to ensuring findings are robust and reproducible. Aims. Using an alternative treatment of the stellar noise to that discussed in the discovery paper, we re-analyse the radial velocity dataset that led to the detection of a candidate terrestrial planet orbiting the star Proxima Centauri. We aim to confirm the existence of this outstanding planet, and test the existence of a second planetary signal. Methods. Our technique jointly modelled Keplerian signals and residual correlated signals in radial velocities using Gaussian processes. We analysed only radial velocity measurements without including other ancillary data in the fitting procedure. In a second step, we compared our outputs with results coming from photometry, to provide a consistent physical interpretation. Our analysis was performed in a Bayesian framework to quantify the robustness of our findings. Results. We show that the correlated noise can be successfully modelled as a Gaussian process regression, and contains a periodic term modulated on the stellar rotation period and characterised by an evolutionary timescale of the order of one year. Both findings appear to be robust when compared with results obtained from archival photometry, thus providing a reliable description of the noise properties. We confirm the existence of a coherent signal described by a Keplerian orbit equation that can be attributed to the planet Proxima b, and provide an independent estimate of the planetary parameters.
Our Bayesian analysis dismisses the existence of a second planetary signal in the present dataset.

  • 78.
    Dastmard, Benjamin
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A statistical analysis of the connection between test results and field claims for ECUs in vehicles2013Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The objective of this thesis is to analyse the connection between test results and field claims of ECUs (electronic control units) at Scania in order to improve the acceptance criteria and evaluate software testing strategies. The connection is examined through computation of different measures of dependence such as Pearson's correlation, Spearman's rank correlation and Kendall's tau. The correlations are computed from test results in different ECU projects and considered in a predictive model based on logistic regression. Numerical results indicate a weak connection between test results and field claims. This is partly due to an insufficient number of ECU projects and the lack of traceability of field claims and test results. The main conclusion confirms the present software testing strategy. Continuous software release and testing results in fewer field claims and thus a better product.

  • 79.
    Datye, Shlok
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Money Management Principles for Mechanical Traders2012Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In his five books during 1990-2009, starting with Portfolio Management Formulas, Ralph Vince made accessible to mechanical traders with limited background in mathematics various important concepts in the field of money management. During this process, he coined and popularized the terms “optimal f” and “leverage space trading model.”

    This thesis provides a sound mathematical understanding of these concepts, and adds various extensions and insights of its own. It also provides practical examples of how mechanical traders can use these concepts to their advantage. Although beneficial to all mechanical traders, the examples involve trading futures contracts, and practical details such as the back-adjustment of futures prices are provided along the way.
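
    Vince's “optimal f” admits a compact illustration: it is the fixed fraction of capital, committed to each trade, that maximizes the terminal wealth relative (TWR) over a historical sequence of trade outcomes. A hedged sketch via grid search (our code, not the thesis's):

```python
def optimal_f(trades, grid=1000):
    """Grid search for Vince's optimal f: maximise
    TWR(f) = prod_i (1 + f * trade_i / |largest loss|) over f in (0, 1)."""
    worst = min(trades)
    if worst >= 0:
        raise ValueError("optimal f requires at least one losing trade")
    best_f, best_twr = 0.0, 1.0
    for k in range(1, grid):
        f = k / grid
        twr = 1.0
        for t in trades:
            twr *= 1.0 + f * t / abs(worst)   # holding-period return per trade
        if twr > best_twr:
            best_f, best_twr = f, twr
    return best_f, best_twr
```

    For the classic textbook example of trades that win 2 units or lose 1 unit with equal frequency, TWR(f) = (1 + 2f)(1 - f), which is maximized at f = 0.25.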

  • 80.
    de Sauvage Vercour, Héloïse
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Analysis and comparison of capital allocation techniques in an insurance context2013Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Companies issuing insurance cover, in return for insurance premiums, face the payment of claims occurring according to a loss distribution. Hence, capital must be held by the companies so that they can guarantee the fulfilment of the claims of each line of insurance. The increased incidence of insurance insolvency motivates new legislation such as the European Solvency II Directive. Companies have to determine the required amount of capital and the optimal capital allocation across the different lines of insurance in order to keep the risk of insolvency at an adequate level. The capital allocation problem may be treated in different ways, starting from the insurance company balance sheet. Here, the running process and efficiency of four methods are evaluated and compared so as to point out the characteristics of each of the methods. The Value-at-Risk technique is straightforward and can be easily generated for any loss distribution. The insolvency put option principle is easily implementable and is sensitive to the degree of default. The capital asset pricing model is one of the oldest reliable methods and still provides very helpful intermediate results. The Myers and Read marginal capital allocation approach encourages diversification and introduces the concept of default value. Applications of the four methods to some fictive and real insurance companies are provided. The thesis further analyses the sensitivity of those methods to changes in the economic context and comments on how insurance companies can anticipate those changes.

  • 81.
    Dellner, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Can a simple model for the interaction between value and momentum traders explain how equity futures react to earnings announcements?2011Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE creditsStudent thesis
  • 82. Dermoune, Azzouz
    et al.
    Djehiche, Boualem
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Rahmania, Nadji
    Estimation of the smoothing parameters in the HPMV filter2011In: ANALELE STIINT UNIV AL I CUZA, ISSN 1221-8421, Vol. 57, no 1, p. 61-75Article in journal (Refereed)
    Abstract [en]

    We suggest an optimality criterion, for choosing the best smoothing parameters for an extension of the so-called Hodrick-Prescott Multivariate (HPMV) filter. We show that this criterion admits a whole set of optimal smoothing parameters, to which belong the widely used noise-to-signal ratios. We also propose explicit consistent estimators of these noise-to-signal ratios, which in turn yield a new performant method to estimate the output gap.

  • 83.
    Dimoulkas, Ilias
    et al.
    KTH, School of Electrical Engineering (EES), Electric Power and Energy Systems.
    Amelin, Mikael
    KTH, School of Electrical Engineering (EES), Electric Power and Energy Systems.
    Hesamzadeh, Mohammad Reza
    KTH, School of Electrical Engineering (EES), Electric Power and Energy Systems.
    Forecasting Balancing Market Prices Using Hidden Markov Models2016In: 2016 13TH INTERNATIONAL CONFERENCE ON THE EUROPEAN ENERGY MARKET (EEM), IEEE conference proceedings, 2016Conference paper (Refereed)
    Abstract [en]

    This paper presents a Hidden Markov Model (HMM) based method to predict the prices and trading volumes in the electricity balancing markets. HMMs are quite powerful in modelling stochastic processes where the underlying dynamics are not apparent. The proposed method provides both one-hour and 12-36 hour ahead forecasts. The first is mostly useful to wind/solar producers in order to compensate for their production imbalances, while the second is important when submitting offers to the day-ahead markets. The results are compared to those from a Markov-autoregressive model.
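
    The one-step-ahead forecast from an HMM follows from the forward (filtering) recursion: propagate the filtered state distribution through the transition matrix and average the state-conditional means. A minimal Gaussian-emission sketch (all parameter values are toy examples, not the paper's estimates):

```python
import numpy as np

def forward_filter(x, A, means, stds, pi):
    """Normalised forward recursion: returns P(state_t | x_1..x_t)."""
    def gauss(v, m, s):
        return np.exp(-0.5 * ((v - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    alpha = pi * gauss(x[0], means, stds)
    alpha /= alpha.sum()
    for obs in x[1:]:
        alpha = (alpha @ A) * gauss(obs, means, stds)   # predict, then update
        alpha /= alpha.sum()
    return alpha

def one_step_forecast(x, A, means, stds, pi):
    """E[x_{t+1} | x_1..x_t]: the predicted state distribution alpha @ A
    weighted against the state-conditional means."""
    alpha = forward_filter(np.asarray(x, dtype=float), A, means, stds, pi)
    return float((alpha @ A) @ means)
```

    With two states interpreted as, say, low- and high-price regimes, the forecast shrinks from the current regime's mean toward the other regime according to the transition probabilities.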

  • 84.
    Dizdarevic, Goran
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Data Fusion for Consumer Behaviour2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis analyses different methods of data fusion by fitting a chosen number of statistical models to empirical consumer data and evaluating their performance in terms of a selection of performance measures. The main purpose of the models is to predict business-related consumer variables. Conventional methods such as decision trees, linear models and K-nearest neighbours have been suggested, as well as single-layered neural networks and the naive Bayesian classifier. Furthermore, ensemble methods for both classification and regression have been investigated by minimizing the cross-entropy and RMSE of predicted outcomes using the iterative non-linear BFGS optimization algorithm. Time consumption of the models and methods for feature selection are also discussed in this thesis. Data regarding consumer drinking habits, transaction and purchase history and social demographic background is provided by Nepa. Evaluation of the performance measures indicates that the naive Bayesian classifier predicts consumer drinking habits most accurately, whereas the random forest, although the most time consuming, is preferred when classifying the Consumer Satisfaction Index (CSI). Regression of CSI yields similar performance across all models. Moreover, the ensemble methods increased the prediction accuracy slightly, in addition to increasing the time consumption.
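
    The ensemble step described in this abstract (minimizing the error of combined predictions with BFGS) can be sketched as follows. We reparametrise the weights through a softmax so they stay positive and sum to one, and minimize the mean squared error, which has the same minimiser as the RMSE; `scipy.optimize.minimize` supplies BFGS with finite-difference gradients. This is our illustration, not the thesis code:

```python
import numpy as np
from scipy.optimize import minimize

def ensemble_weights(preds, y):
    """Weights for a regression ensemble: softmax(theta) minimising the
    mean squared error of the weighted prediction, fitted with BFGS."""
    preds = np.asarray(preds, dtype=float)    # shape (n_models, n_samples)
    y = np.asarray(y, dtype=float)

    def mse(theta):
        w = np.exp(theta - theta.max())       # stable softmax
        w /= w.sum()
        return np.mean((w @ preds - y) ** 2)

    res = minimize(mse, np.zeros(preds.shape[0]), method="BFGS")
    w = np.exp(res.x - res.x.max())
    return w / w.sum()
```

    In practice the optimizer shifts weight toward the less noisy models, which is why such ensembles tend to improve accuracy only slightly when one base model already dominates.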

  • 85.
    Djehiche, Boualem
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Actuarial mathematics for life contingent risks2011In: Scandinavian Actuarial Journal, ISSN 0346-1238, E-ISSN 1651-2030, no 4, p. 318-318Article, book review (Refereed)
  • 86.
    Djehiche, Boualem
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Nonlife actuarial models, theory, methods and evaluation2011In: Scandinavian Actuarial Journal, ISSN 0346-1238, E-ISSN 1651-2030, no 4, p. 319-320Article, book review (Refereed)
  • 87.
    Djehiche, Boualem
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Regression modeling with actuarial and financial applications2011In: Scandinavian Actuarial Journal, ISSN 0346-1238, E-ISSN 1651-2030, no 4, p. 319-319Article, book review (Refereed)
  • 88.
    Djehiche, Boualem
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Statistical estimation techniques in life and disability insurance—a short overview2016In: Springer Proceedings in Mathematics and Statistics, Springer, 2016, p. 127-147Conference paper (Refereed)
    Abstract [en]

    This is a short introduction to some basic aspects of statistical estimation techniques known as graduation techniques in life and disability insurance.

  • 89.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hamadene, Said
    Hdhiri, Ibtissam
    Stochastic Impulse Control of Non-Markovian Processes2010In: Applied mathematics and optimization, ISSN 0095-4616, E-ISSN 1432-0606, Vol. 61, no 1, p. 1-26Article in journal (Refereed)
    Abstract [en]

    We consider a class of stochastic impulse control problems of general stochastic processes, i.e. not necessarily Markovian. Under fairly general conditions we establish existence of an optimal impulse control. We also prove existence of combined optimal stochastic and impulse control of a fairly general class of diffusions with random coefficients. Unlike in the Markovian framework, we cannot apply quasi-variational inequality techniques. We rather derive the main results using techniques involving reflected BSDEs and the Snell envelope.

  • 90.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hamdi, Ali
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A full balance sheet two-mode optimal switching problem2015In: Stochastics: An International Journal of Probablitiy and Stochastic Processes, ISSN 1744-2508, E-ISSN 1744-2516, Vol. 87, no 4, p. 604-622Article in journal (Refereed)
    Abstract [en]

    We formulate and solve a finite horizon, full balance sheet, two-mode optimal switching problem related to trade-off strategies between expected profit and cost yields. Given the current mode, this model allows for either a switch to the other mode or termination of the project, and this happens for both sides of the balance sheet. A novelty in this model is that the related obstacles are nonlinear in the underlying yields, whereas they are linear in the standard optimal switching problem. The optimal switching problem is formulated in terms of a system of Snell envelopes for the profit and cost yields, which act as obstacles to each other. We prove the existence of a continuous minimal solution of this system using an approximation scheme and fully characterize the optimal switching strategy.

  • 91.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hamdi, Ali
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A Full Balance Sheet Two-modes Optimal Switching problemIn: Mathematical Methods of Operations Research, ISSN 1432-2994, E-ISSN 1432-5217Article in journal (Other academic)
    Abstract [en]

    We formulate and solve a finite horizon full balance sheet two-modes optimal switching problem related to trade-off strategies between expected profit and cost yields. The optimal switching problem is formulated in terms of a system of Snell envelopes for the profit and cost yields which act as obstacles to each other. We prove existence of a continuous minimal solution of this system using an approximation scheme and fully characterize the optimal switching strategy.

  • 92.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hamdi, Ali
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A Two-modes Mean-field Optimal Switching Problem for The Full Balance Sheet2014In: International Journal of Stochastic Analysis, ISSN 2090-3332, E-ISSN 2090-3340, article id 159519Article in journal (Refereed)
    Abstract [en]

    We consider the problem of switching a large number of production lines between two modes, high-production and low-production. The switching is based on the optimal expected profit and cost yields of the respective production lines, and considers both sides of the balance sheet. Furthermore, the production lines are all assumed to be interconnected through a coupling term, which is the average of all optimal expected yields. Intuitively, this means that each individual production line is compared to the average of all its peers which acts as a benchmark.

    Due to the complexity of the problem, we consider the aggregated optimal expected yields, where the coupling term is approximated with the mean of the optimal expected yields. This turns the problem into a two-modes optimal switching problem of mean-field type, which can be described by a system of Snell envelopes where the obstacles are interconnected and nonlinear.

    The main result of the paper is a proof of a continuous minimal solution to the system of Snell envelopes, as well as the full characterization of the optimal switching strategy.

  • 93.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hult, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Nyquist, Pierre
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Importance sampling for a Markovian intensity model with applications to credit riskManuscript (preprint) (Other academic)
    Abstract [en]

    This paper considers importance sampling for estimation of rare-event probabilities in a Markovian intensity model for credit risk. The main contribution is the design of efficient importance sampling algorithms using subsolutions of a certain Hamilton-Jacobi equation. For certain instances of the credit risk model the proposed algorithm is proved to be asymptotically optimal. The computational gain compared to standard Monte Carlo is illustrated by numerical experiments.

  • 94.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hult, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Nyquist, Pierre
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Min-max representations of viscosity solutions of Hamilton-Jacobi equations and applications in rare-event simulationManuscript (preprint) (Other academic)
    Abstract [en]

    In this paper a duality relation between the Mañé potential and Mather's action functional is derived in the context of convex and state-dependent Hamiltonians. The duality relation is used to obtain min-max representations of viscosity solutions of first order Hamilton-Jacobi equations. These min-max representations naturally suggest classes of subsolutions of Hamilton-Jacobi equations that arise in the theory of large deviations. The subsolutions, in turn, are good candidates for designing efficient rare-event simulation algorithms.

  • 95.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Löfdahl, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A hidden Markov approach to disability insuranceManuscript (preprint) (Other academic)
    Abstract [en]

    Point and interval estimation of future disability inception and recovery rates are predominantly carried out by combining generalized linear models (GLM) with time series forecasting techniques into a two-step method involving parameter estimation from historical data and subsequent calibration of a time series model. This approach may in fact lead to both conceptual and numerical problems since any time trend components of the model are incoherently treated as both model parameters and realizations of a stochastic process. We suggest that this general two-step approach can be improved in the following way: First, we assume a stochastic process form for the time trend component. The corresponding transition densities are then incorporated into the likelihood, and the model parameters are estimated using the Expectation-Maximization algorithm. We illustrate the modelling procedure by fitting the model to Swedish disability claims data.

  • 96.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Löfdahl, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Aggregation of 1-year risks in life and disability insurance2016In: Annals of Actuarial Science, ISSN 1748-4995, E-ISSN 1748-5002, Vol. 10, no 2, p. 203-221Article in journal (Refereed)
    Abstract [en]

    We consider large insurance portfolios consisting of life or disability insurance policies that are assumed independent, conditional on a stochastic process representing the economic-demographic environment. Using the conditional law of large numbers, we show that when the portfolio of liabilities becomes large enough, its value on a delta-year horizon can be approximated by a functional of the environment process. Based on this representation, we derive a semi-analytical approximation of the systematic risk quantiles of the future liability value for a homogeneous portfolio when the environment is represented by a one-factor diffusion process. For the multi-factor diffusion case, we propose two different risk aggregation techniques for a portfolio consisting of large, homogeneous pools. We give numerical results comparing the resulting capital charges with the Solvency II standard formula, based on disability claims data from the Swedish insurance company Folksam.

  • 97.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Löfdahl, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Aggregation of one-year risks in life and disability insurance. Manuscript (preprint) (Other academic)
  • 98.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Löfdahl, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Nonlinear reserving in life insurance: aggregation and mean-field approximation. Manuscript (preprint) (Other academic)
  • 99.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Löfdahl, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Risk aggregation and stochastic claims reserving in disability insurance. 2014. In: Insurance: Mathematics & Economics, ISSN 0167-6687, E-ISSN 1873-5959, Vol. 59, p. 100-108. Article in journal (Refereed)
    Abstract [en]

    We consider a large, homogeneous portfolio of life or disability annuity policies. The policies are assumed to be independent conditional on an external stochastic process representing the economic-demographic environment. Using a conditional law of large numbers, we establish the connection between claims reserving and risk aggregation for large portfolios. Further, we derive a partial differential equation for moments of present values. Moreover, we show how statistical multi-factor intensity models can be approximated by one-factor models, which allows for solving the PDEs very efficiently. Finally, we give a numerical example where moments of present values of disability annuities are computed using finite-difference methods and Monte Carlo simulations.
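The moments discussed in this abstract can also be sketched by plain Monte Carlo rather than the paper's PDE/finite-difference route: simulate the environment diffusion, accumulate the conditional present value of an annuity along each path, and average. The log-intensity dynamics, payoff, and parameters below are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo sketch of the first two moments of the present value of a
# disability annuity whose termination intensity is driven by a one-factor
# diffusion (an OU process for the log-intensity; parameters are assumed).
T, steps, n_paths = 30.0, 600, 20_000
dt = T / steps
r = 0.02                               # discount rate (assumption)
kappa, theta, sigma = 0.5, -2.0, 0.2   # OU parameters (assumptions)

x = np.full(n_paths, -2.0)             # log-intensity, started at theta
integral_mu = np.zeros(n_paths)        # accumulated hazard along each path
pv = np.zeros(n_paths)                 # conditional present value per path
t = 0.0
for _ in range(steps):
    mu = np.exp(x)                     # termination/recovery intensity
    # annuity of 1 per year paid while the policy remains in payment,
    # discounted and weighted by the conditional survival probability:
    pv += np.exp(-r * t - integral_mu) * dt
    integral_mu += mu * dt
    # Euler step for dX_t = kappa (theta - X_t) dt + sigma dW_t:
    x += kappa * (theta - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    t += dt

mean_pv = pv.mean()                    # first moment of the present value
var_pv = pv.var()                      # variance (systematic risk only)
print(f"E[PV] = {mean_pv:.3f}, Var[PV] = {var_pv:.4f}")
```

The variance here reflects only the systematic (environment-driven) risk, consistent with the conditional-independence setup: given the environment path, the pooled idiosyncratic risk has been averaged out by the law of large numbers.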

  • 100.
    Djehiche, Boualem
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Rinne, Jonas
    Can stocks help mend the asset and liability mismatch? 2010. In: Scandinavian Actuarial Journal, ISSN 0346-1238, E-ISSN 1651-2030, no. 2, p. 148-160. Article in journal (Refereed)
    Abstract [en]

    Stocks are generally used to provide higher returns in the long run. But the dramatic fall in equity prices at the beginning of this century, triggering large underfundings in pension plans, raised the question as to whether stocks can really help mend the asset and liability mismatch. To understand some aspects of this topical issue, we examine whether existing major equity indexes can close this gap, given the liability profile of a typical pension fund. We also compare the non-market-capitalization-weighted equity indexes recently introduced as Research Affiliates Fundamental Indexes® (RAFI®) with traditional market-capitalization-weighted equity indexes from an asset and liability management perspective. The analysis of the behavior of the solvency ratio clearly indicates that interest-rate-sensitive stocks have a large potential to improve the link between assets and liabilities. Compared with market-capitalization-weighted equity indexes, RAFI® shows a substantially better potential to mend the asset and liability mismatch, while also improving returns.
