  • 1.
    Abbaszadeh Shahri, Abbas
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering.
    Larsson, Stefan
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Soil and Rock Mechanics.
    Johansson, Fredrik
    KTH, School of Architecture and the Built Environment (ABE), Civil and Architectural Engineering, Soil and Rock Mechanics.
    Updated relations for the uniaxial compressive strength of marlstones based on P-wave velocity and point load index test (2016). In: Innovative Infrastructure Solutions, ISSN 2364-4176, Vol. 1, no 1, UNSP 17. Article in journal (Refereed)
    Abstract [en]

    Although there are many proposed relations for different rock types to predict the uniaxial compressive strength (UCS) as a function of P-wave velocity (V-P) and point load index (Is), only a few of them focus on marlstones, and those studies have limited applicability since they are mainly based on local data. In this paper, an attempt is therefore made to present updated relations for two previously proposed correlations for marlstones in Iran. The modification process is executed through multivariate regression analysis techniques using a comprehensive database of marlstones in Iran, comprising 119 datasets of UCS, V-P and Is values collected from publications and validated relevant sources. The accuracy, appropriateness and applicability of the obtained modifications were tested by means of different statistical criteria and graph analyses. The comparison between the updated and previously proposed relations highlighted the better applicability of the updated correlations introduced in this study for predicting UCS. However, the derived predictive models remain dependent on rock type and test conditions, as they are in this study.

  • 2.
    Abdalmoaty, Mohamed
    KTH, School of Electrical Engineering (EES), Automatic Control.
    Learning Stochastic Nonlinear Dynamical Systems Using Non-stationary Linear Predictors (2017). Licentiate thesis, monograph (Other academic)
    Abstract [en]

    The estimation problem of stochastic nonlinear parametric models is recognized to be very challenging due to the intractability of the likelihood function. Recently, several methods have been developed to approximate the maximum likelihood estimator and the optimal mean-square error predictor using Monte Carlo methods. Albeit asymptotically optimal, these methods come with several computational challenges and fundamental limitations.

    The contributions of this thesis can be divided into two main parts. In the first part, approximate solutions to the maximum likelihood problem are explored. Both analytical and numerical approaches, based on the expectation-maximization algorithm and the quasi-Newton algorithm, are considered. While analytic approximations are difficult to analyze, asymptotic guarantees can be established for methods based on Monte Carlo approximations. Yet, Monte Carlo methods come with their own computational difficulties; sampling in high-dimensional spaces requires an efficient proposal distribution to reduce the number of required samples to a reasonable value.

    In the second part, relatively simple prediction error method estimators are proposed. They are based on non-stationary one-step ahead predictors which are linear in the observed outputs, but are nonlinear in the (assumed known) input. These predictors rely only on the first two moments of the model and the computation of the likelihood function is not required. Consequently, the resulting estimators are defined via analytically tractable objective functions in several relevant cases. It is shown that, under mild assumptions, the estimators are consistent and asymptotically normal. In cases where the first two moments are analytically intractable due to the complexity of the model, it is possible to resort to vanilla Monte Carlo approximations. Several numerical examples demonstrate a good performance of the suggested estimators in several cases that are usually considered challenging.

  • 3.
    Ahlgren, Markus
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Internal Market Risk Modelling for Power Trading Companies (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Since the financial crisis of 2008, risk awareness has increased in the financial sector. Companies are regulated with regard to risk exposure. These regulations are driven by the Basel Committee, which formulates broad supervisory standards and guidelines and recommends statements of best practice in banking supervision. Under these regulations, companies are subject to own funds requirements for market risks.

    This thesis constructs an internal model for risk management that computes the regulatory capital requirements for market risks according to the Capital Requirements Regulation (CRR) and the Fundamental Review of the Trading Book (FRTB), respectively. The capital requirements according to CRR and FRTB are compared to show how the suggested move to an expected shortfall (ES) based model in FRTB will affect the capital requirements. All computations are performed with data provided by a power trading company to make the results fit reality. When comparing the risk capital requirements according to CRR and FRTB for a power portfolio with only linear assets, the results show that the risk capital is higher using the value-at-risk (VaR) based model. This study shows that the changes in risk capital depend mainly on the different methods of calculating the risk capital according to CRR and FRTB, respectively, and only to a minor extent on the change of risk measure.
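
    The two risk measures compared above can be illustrated empirically. A minimal sketch on simulated losses (the function name is hypothetical, and using the 99% level for both measures is an illustrative choice, not the thesis's actual CRR/FRTB specification, which also involves liquidity horizons and stressed calibration):

    ```python
    import numpy as np

    def var_es(losses, alpha=0.99):
        """Empirical value-at-risk and expected shortfall at level alpha.

        losses: 1-D array of portfolio losses (positive = loss).
        """
        losses = np.asarray(losses, dtype=float)
        # VaR: the alpha-quantile of the loss distribution.
        var = np.quantile(losses, alpha)
        # ES: the average of losses at or beyond the VaR level.
        es = losses[losses >= var].mean()
        return var, es

    rng = np.random.default_rng(0)
    sample = rng.normal(0.0, 1.0, 100_000)  # hypothetical loss sample
    var99, es99 = var_es(sample, 0.99)
    # ES is always at least as large as VaR at the same level.
    ```

    On real trading-book data the two measures are computed at different levels (VaR at 99% under CRR, ES at 97.5% under FRTB), which is part of why the capital comparison in the thesis is non-trivial.
    
    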

  • 4.
    Ahmed, Ilyas
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Importance Sampling for Least-Square Monte Carlo Methods (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Pricing American-style options is challenging due to early exercise opportunities. The conditional expectation in the Snell envelope, known as the continuation value, is approximated by basis functions in the Least-Square Monte Carlo algorithm, giving a robust estimate of the option's price. By a change of measure in the underlying geometric Brownian motion using Importance Sampling, the variance of the option price estimate can be reduced by up to a factor of nine. Finding the optimal estimator that gives the minimal variance requires careful consideration of the reference price so as not to add bias to the estimator. A stochastic algorithm is used to find the optimal drift that minimizes the second moment in the expression of the variance after the change of measure. The use of Importance Sampling shows significant variance reduction in comparison with the standard Least-Square Monte Carlo. Moreover, the Importance Sampling method may be an even better alternative for more complex instruments with early exercise opportunities.
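
    The drift-shift idea described above can be sketched in a few lines. This is a hedged illustration for a European put under geometric Brownian motion, not the thesis's American-option Least-Square Monte Carlo setup; the function name, parameter values and the drift shift theta are all hypothetical:

    ```python
    import numpy as np

    def put_price_is(s0, k, r, sigma, t, theta=0.0, n=200_000, seed=1):
        """European put by Monte Carlo with an importance-sampling drift shift.

        Sampling Z ~ N(theta, 1) instead of N(0, 1) and reweighting each
        payoff by the likelihood ratio exp(-theta*Z + theta**2/2) leaves the
        estimator unbiased while concentrating paths where the payoff is
        nonzero. Returns (price estimate, standard error).
        """
        rng = np.random.default_rng(seed)
        z = rng.standard_normal(n) + theta
        st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
        payoff = np.exp(-r * t) * np.maximum(k - st, 0.0)
        weights = np.exp(-theta * z + 0.5 * theta**2)
        est = payoff * weights
        return est.mean(), est.std(ddof=1) / np.sqrt(n)

    # Deep out-of-the-money put: a negative drift pushes paths into the money.
    plain, se_plain = put_price_is(100, 70, 0.02, 0.2, 1.0, theta=0.0)
    shifted, se_shift = put_price_is(100, 70, 0.02, 0.2, 1.0, theta=-2.0)
    ```

    Both estimators target the same price; the shifted one should show a visibly smaller standard error, which is the variance reduction the thesis quantifies.
    
    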

  • 5.
    Ali, Dana
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Kap, Goran
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Statistical Analysis of Computer Network Security (2013). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In this thesis it is shown how to measure the annual loss expectancy of computer networks due to the risk of cyber attacks. With the development of metrics for measuring the exploitation difficulty of identified software vulnerabilities, it is possible to measure the annual loss expectancy for computer networks using Bayesian networks. To enable the computations, computer network vulnerability data in the form of vulnerability model descriptions, vulnerable data connectivity relations and intrusion detection system measurements are transformed into vector-based numerical form. This data is then used to generate a probabilistic attack graph, which is a Bayesian network representation of an attack graph. The probabilistic attack graph forms the basis for computing the annualized loss expectancy of a computer network. Further, it is shown how to compute an optimized order of vulnerability patching to mitigate the annual loss expectancy. An example computation of the annual loss expectancy is provided for a small invented example network.
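
    The final annual-loss-expectancy step can be illustrated with a toy single-path computation. This is a deliberately simplified stand-in for the probabilistic attack graph described above; the function name, step probabilities and asset values are invented:

    ```python
    def annual_loss_expectancy(attack_steps, asset_value, attempts_per_year):
        """Toy annual loss expectancy for a single attack path.

        attack_steps: per-step success probabilities derived from
        vulnerability-difficulty metrics; the path succeeds only if
        every step in the chain succeeds.
        """
        p_success = 1.0
        for p in attack_steps:
            p_success *= p
        # Expected yearly loss: breach probability per attempt, times
        # the attempt rate, times the single-loss expectancy.
        return p_success * attempts_per_year * asset_value

    ale = annual_loss_expectancy([0.8, 0.5, 0.25],
                                 asset_value=100_000,
                                 attempts_per_year=12)
    # roughly 0.1 * 12 * 100_000, i.e. about 120 000 per year
    ```

    A Bayesian-network attack graph generalizes this by letting steps share preconditions instead of forming one independent chain.
    
    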

  • 6. Alm, Sven Erick
    et al.
    Janson, Svante
    Linusson, Svante
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    First critical probability for a problem on random orientations in G(n,p) (2014). In: Electronic Journal of Probability, ISSN 1083-6489, E-ISSN 1083-6489, Vol. 19, paper 69. Article in journal (Refereed)
    Abstract [en]

    We study the random graph G(n,p) with a random orientation. For three fixed vertices s, a, b in G(n,p) we study the correlation of the events {a -> s} (there exists a directed path from a to s) and {s -> b}. We prove that asymptotically the correlation is negative for small p, p < C_1/n, where C_1 ≈ 0.3617, and positive for C_1/n < p < 2/n and up to p = p_2(n). Computer-aided computations suggest that p_2(n) = C_2/n, with C_2 ≈ 7.5. We conjecture that the correlation then stays negative for p up to the previously known zero at 1/2; for larger p it is positive.
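
    The correlation of the two path events can be explored numerically. A rough Monte Carlo sketch using only the standard library (small n and trial count; the function names are hypothetical and this is not the authors' computer-aided computation, which estimates the constants far more precisely):

    ```python
    import random
    from collections import defaultdict, deque

    def reachable(adj, src, dst):
        """BFS reachability in a directed graph given as an adjacency dict."""
        seen, queue = {src}, deque([src])
        while queue:
            u = queue.popleft()
            if u == dst:
                return True
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return False

    def correlation_estimate(n, p, trials=2000, seed=0):
        """Estimate Cov(1{a->s}, 1{s->b}) in a randomly oriented G(n,p)."""
        rng = random.Random(seed)
        s, a, b = 0, 1, 2
        hits_as = hits_sb = hits_both = 0
        for _ in range(trials):
            adj = defaultdict(list)
            for u in range(n):
                for v in range(u + 1, n):
                    if rng.random() < p:
                        # Each present edge is oriented uniformly at random.
                        if rng.random() < 0.5:
                            adj[u].append(v)
                        else:
                            adj[v].append(u)
            e_as = reachable(adj, a, s)
            e_sb = reachable(adj, s, b)
            hits_as += e_as
            hits_sb += e_sb
            hits_both += e_as and e_sb
        t = trials
        return hits_both / t - (hits_as / t) * (hits_sb / t)

    cov = correlation_estimate(n=30, p=3.0 / 30)
    ```

    At this sample size the Monte Carlo noise is comparable to the covariance itself, which is exactly why the paper's fine-grained questions need analytic and high-precision computational work.
    
    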

  • 7.
    Almgren, Lars
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Evaluation of HYDRA - A risk model for hydropower plants (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Vattenfall Hydro AB operates more than 50 large-scale power plants, containing over 130 power generating units. Planning renewals of these units is important to minimize the risk of major breakdowns that cause long downtimes. Because all power plants are different, Vattenfall Hydro AB started using a self-developed risk model in 2003 to improve comparisons between power plants. Since then the model has been used without larger improvements or validation.

    The purpose of this study is to evaluate and analyse how well the risk model has performed and is performing. This thesis is divided into five subsections where analyses are made on the input to the model, adverse events used in the model, the probabilities used in the model, risk forecasts from the model and finally trends for the periods the model has been used. In each subsection different statistical methods are used for the analyses.

    From the analyses it is clear that the low number of adverse events in power plants makes the usage of statistical methods for evaluating performance of Vattenfall Hydro AB’s risk model imprecise. Based on the results of this thesis the conclusion is made that if the risk model is to be used in the future it needs further improvements to generate more accurate results.

  • 8. Ameur, Yacin
    et al.
    Hedenmalm, Håkan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Makarov, Nikolai
    Random normal matrices and Ward identities (2015). In: Annals of Probability, ISSN 0091-1798, E-ISSN 2168-894X, Vol. 43, no 3, pp. 1157-1201. Article in journal (Refereed)
    Abstract [en]

    We consider the random normal matrix ensemble associated with a potential in the plane of sufficient growth near infinity. It is known that, asymptotically as the order of the random matrix increases indefinitely, the eigenvalues approach a certain equilibrium density, given in terms of Frostman's solution to the minimum energy problem of weighted logarithmic potential theory. At a finer scale, we may consider fluctuations of eigenvalues about the equilibrium. In the present paper, we give the correction to the expectation of the fluctuations, and we show that the potential field of the corrected fluctuations converges, on smooth test functions, to a Gaussian free field with free boundary conditions on the droplet associated with the potential.

  • 9.
    Amsköld, Daniel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    A comparison between different volatility models (2011). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis
  • 10.
    Andersson, Alexandra
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Smart Beta Investering Baserad på Makroekonomiska Indikatorer [Smart Beta Investing Based on Macroeconomic Indicators] (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This thesis examines the possibility of finding a relationship between the Nasdaq Nordea Smart Beta Indices and a series of macroeconomic indicators. This relationship is used as a signal-value and implemented in a portfolio consisting of all six smart beta indices. To investigate the impact of the signal-value on portfolio performance, three portfolio strategies are examined, with the equally weighted portfolio as a benchmark. The portfolio weights are re-evaluated monthly, and the portfolios examined are the mean-variance portfolio, the mean-variance portfolio based on the signal-value and the equally weighted portfolio based on the signal-value.

    In order to forecast the performance of the portfolio, a multivariate GARCH model with time-varying correlations is fitted to the data and three different error-distributions are considered. The performances of the portfolios are studied both in- and out-of-sample and the analysis is based on the Sharpe ratio.

    The results indicate that a mean-variance portfolio based on the relationship with the macroeconomic indicators outperforms the other portfolios for the in-sample period, with respect to the Sharpe ratio. In the out-of-sample period however, none of the portfolio strategies has Sharpe ratios that are statistically different from that of an equally weighted portfolio.
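
    The Sharpe-ratio comparison underlying the analysis above can be written down directly. A minimal sketch with invented monthly returns (the function name is hypothetical and the annualization factor assumes monthly data):

    ```python
    import numpy as np

    def sharpe_ratio(returns, rf=0.0, periods_per_year=12):
        """Annualized Sharpe ratio from periodic portfolio returns.

        rf is the per-period risk-free rate; the ratio is scaled by
        sqrt(periods_per_year) to annualize.
        """
        excess = np.asarray(returns, dtype=float) - rf
        return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

    monthly = np.array([0.012, -0.004, 0.008, 0.015, -0.002, 0.006])
    sr = sharpe_ratio(monthly)
    ```

    Testing whether two such ratios differ statistically, as the thesis does out-of-sample, additionally requires an estimate of the sampling variability of the ratio itself.
    
    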

  • 11.
    Andersson, Daniel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A mixed relaxed singular maximum principle for linear SDEs with random coefficients. Article in journal (Refereed)
    Abstract [en]

    We study singular stochastic control of a two dimensional stochastic differential equation, where the first component is linear with random and unbounded coefficients. We derive existence of an optimal relaxed control and necessary conditions for optimality in the form of a mixed relaxed-singular maximum principle in a global form. A motivating example is given in the form of an optimal investment and consumption problem with transaction costs, where we consider a portfolio with a continuum of bonds and where the portfolio weights are modeled as measure-valued processes on the set of times to maturity.

  • 12.
    Andersson, Daniel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Contributions to the Stochastic Maximum Principle (2009). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis consists of four papers treating the maximum principle for stochastic control problems.

    In the first paper we study the optimal control of a class of stochastic differential equations (SDEs) of mean-field type, where the coefficients are allowed to depend on the law of the process. Moreover, the cost functional of the control problem may also depend on the law of the process. Necessary and sufficient conditions for optimality are derived in the form of a maximum principle, which is also applied to solve the mean-variance portfolio problem.

    In the second paper, we study the problem of controlling a linear SDE where the coefficients are random and not necessarily bounded. We consider relaxed control processes, i.e. the control is defined as a process taking values in the space of probability measures on the control set. The main motivation is a bond portfolio optimization problem. The relaxed control processes are then interpreted as the portfolio weights corresponding to different maturity times of the bonds. We establish existence of an optimal control and necessary conditions for optimality in the form of a maximum principle, extended to include the family of relaxed controls.

    The third paper generalizes the second one by adding a singular control process to the SDE. That is, the control is singular with respect to the Lebesgue measure and its influence on the state is thus not continuous in time. In terms of the portfolio problem, this allows us to consider two investment possibilities - bonds (with a continuum of maturities) and stocks - and incur transaction costs between the two accounts.

    In the fourth paper we consider a general singular control problem. The absolutely continuous part of the control is relaxed in the classical way, i.e. the generator of the corresponding martingale problem is integrated with respect to a probability measure, guaranteeing the existence of an optimal control. This is shown to correspond to an SDE driven by a continuous orthogonal martingale measure. A maximum principle which describes necessary conditions for optimal relaxed singular control is derived.

  • 13.
    Andersson, Daniel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Necessary Optimality Conditions for Two Stochastic Control Problems (2008). Licentiate thesis, comprehensive summary (Other scientific)
    Abstract [en]

    This thesis consists of two papers concerning necessary conditions in stochastic control problems. In the first paper, we study the problem of controlling a linear stochastic differential equation (SDE) where the coefficients are random and not necessarily bounded. We consider relaxed control processes, i.e. the control is defined as a process taking values in the space of probability measures on the control set. The main motivation is a bond portfolio optimization problem. The relaxed control processes are then interpreted as the portfolio weights corresponding to different maturity times of the bonds. We establish existence of an optimal control and necessary conditions for optimality in the form of a maximum principle, extended to include the family of relaxed controls.

    In the second paper we consider the so-called singular control problem where the control consists of two components, one absolutely continuous and one singular. The absolutely continuous part of the control is allowed to enter both the drift and diffusion coefficient. The absolutely continuous part is relaxed in the classical way, i.e. the generator of the corresponding martingale problem is integrated with respect to a probability measure, guaranteeing the existence of an optimal control. This is shown to correspond to an SDE driven by a continuous orthogonal martingale measure. A maximum principle which describes necessary conditions for optimal relaxed singular control is derived.

  • 14.
    Andersson, Daniel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    The relaxed general maximum principle for singular optimal control of diffusions (2009). In: Systems & Control Letters, ISSN 0167-6911, E-ISSN 1872-7956, Vol. 58, no 1, pp. 76-82. Article in journal (Refereed)
    Abstract [en]

    In this paper we study optimality in stochastic control problems where the state process is a stochastic differential equation (SDE) and the control variable has two components, the first being absolutely continuous and the second singular. A control is defined as a solution to the corresponding martingale problem. To obtain existence of an optimal control Haussmann and Suo [U.G. Haussmann, W. Suo, Singular optimal stochastic controls I: Existence, SIAM J. Control Optim. 33 (3) (1995) 916-936] relaxed the martingale problem by extending the absolutely continuous control to the space of probability measures on the control set. Bahlali et al. [S. Bahlali, B. Djehiche, B. Mezerdi, The relaxed stochastic maximum principle in singular optimal control of diffusions, SIAM J. Control Optim. 46 (2) (2007) 427-444] established a maximum principle for relaxed singular control problems with uncontrolled diffusion coefficient. The main goal of this paper is to extend their results to the case where the control enters the diffusion coefficient. The proof is based on necessary conditions for near optimality of a sequence of ordinary controls which approximate the optimal relaxed control. The necessary conditions for near optimality are obtained by Ekeland's variational principle and the general maximum principle for (strict) singular control problems obtained in Bahlali and Mezerdi [S. Bahlali, B. Mezerdi, A general stochastic maximum principle for singular control problems, Electron J. Probab. 10 (2005) 988-1004. Paper no 30]. © 2008 Elsevier B.V. All rights reserved.

  • 15.
    Andersson, Daniel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    The relaxed stochastic maximum principle in singular optimal control of diffusions with controlled diffusion coefficient. Manuscript (Other academic)
  • 16.
    Andersson, Daniel
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Djehiche, Boualem
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A maximum principle for relaxed stochastic control of linear SDEs with application to bond portfolio optimization (2010). In: Mathematical Methods of Operations Research, ISSN 1432-2994, E-ISSN 1432-5217, Vol. 72, no 2, pp. 273-310. Article in journal (Refereed)
    Abstract [en]

    We study relaxed stochastic control problems where the state equation is a one dimensional linear stochastic differential equation with random and unbounded coefficients. The two main results are existence of an optimal relaxed control and necessary conditions for optimality in the form of a relaxed maximum principle. The main motivation is an optimal bond portfolio problem in a market where there exists a continuum of bonds and the portfolio weights are modeled as measure-valued processes on the set of times to maturity.

  • 17.
    Andersson, Daniel
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Djehiche, Boualem
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A maximum principle for SDEs of mean-field type (2011). In: Applied Mathematics and Optimization, ISSN 0095-4616, E-ISSN 1432-0606, Vol. 63, no 3, pp. 341-356. Article in journal (Refereed)
    Abstract [en]

    We study the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state of the process. Moreover the cost functional is also of mean-field type, which makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. Under the assumption of a convex action space a maximum principle of local form is derived, specifying the necessary conditions for optimality. These are also shown to be sufficient under additional assumptions. This maximum principle differs from the classical one, where the adjoint equation is a linear backward SDE, since here the adjoint equation turns out to be a linear mean-field backward SDE. As an illustration, we apply the result to the mean-variance portfolio selection problem.

  • 18.
    Andersson, Gabriella
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Karlsson, Louise
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Factors affecting the proportion of smartphone usage at Flygresor.se (2017). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Digitization has changed the way people access the internet. The smartphone will soon be the preferred internet access device, leading us into a new generation of e-commerce, namely mobile commerce or m-commerce. The ongoing transition from desktop to smartphone has created a growing problem for companies within the area of e-commerce: visitors coming from a smartphone device tend not to go through with the purchase. With this transition in mind, the thesis aimed to identify the factors that affect the proportion of smartphone visitors on a website, more specifically at the flight comparison site Flygresor.se. The method used was multiple linear regression analysis. To see whether the chosen factors affected the proportion of smartphone transactions or just the proportion of smartphone sessions, two regressions were performed: one with response variable Sessions and one with response variable Transactions, where Sessions refers to the number of visitors on the website and Transactions refers to the number of visitors moving on to the final booking website. The explanatory variables used were divided into four categories: Marketing, Channels, Season and Other, where the category Other contained the variables Total number of visitors and Amount of MB used per smartphone subscription. The study showed that all categories contained variables with a significant impact on both response variables. Only one variable, the Total number of visitors, had a different impact on the two models. The results indicate that smartphone users tend, in comparison with desktop users, to continue to the final booking website to a lesser extent. Since no other variables had an impact only on Transactions, it was assumed that there exist other factors with a greater impact on smartphone users' tendency to finalize a booking.

  • 19.
    Andersson, Joacim
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Falk, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Missing Data in Value-at-Risk Analysis: Conditional Imputation in Optimal Portfolios Using Regression (2013). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    A regression-based method is presented in order to regenerate missing data points in stock return time series. The method uses only complete time series of assets in optimal portfolios, in which the returns of the underlying tend to correlate inadequately with each other. The study shows that the method is able to replicate empirical VaR-backtesting results where all data are available, even when up to 90% of the time series in half of the assets in the portfolios have been removed.
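
    The regression-based regeneration of missing points can be sketched as a simple conditional-mean imputation. This assumes a single complete predictor series and a linear model, a simplification of the thesis's multi-asset setting; the function name and data are invented:

    ```python
    import numpy as np

    def regression_impute(y, x):
        """Fill NaNs in y by OLS regression on a complete series x.

        Fits y = a + b*x on the observed pairs, then replaces each
        missing y[i] with the conditional mean a + b*x[i].
        """
        y = np.asarray(y, dtype=float).copy()
        x = np.asarray(x, dtype=float)
        obs = ~np.isnan(y)
        b, a = np.polyfit(x[obs], y[obs], 1)  # slope, intercept
        y[~obs] = a + b * x[~obs]
        return y

    rng = np.random.default_rng(2)
    x = rng.normal(0, 0.01, 500)              # complete return series
    y = 0.8 * x + rng.normal(0, 0.002, 500)   # correlated series with gaps
    y[::5] = np.nan                           # knock out 20% of the points
    filled = regression_impute(y, x)
    ```

    A conditional imputation as in the thesis would additionally add residual noise to the fitted means, so that the imputed series does not understate volatility in the VaR backtest.
    
    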

  • 20.
    Andersson, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Locating Multiple Change-Points Using a Combination of Methods (2014). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The aim of this study is to find a method that is able to locate multiple change-points in a time series with unknown properties. The methods that are investigated are the CUSUM and CUSUM of squares tests, the CUSUM test with OLS residuals, the Mann-Whitney test and Quandt's log likelihood ratio. Since all methods detect single change-points, the binary segmentation technique is used to find multiple change-points. The study shows that the CUSUM test with OLS residuals, the Mann-Whitney test and Quandt's log likelihood ratio work well on most samples, while the CUSUM and CUSUM of squares tests are not able to detect the location of the change-points. Furthermore, the study shows that the binary segmentation technique works well with all methods and is able to detect multiple change-points in most circumstances. The study also shows that the results can, most of the time, be improved by using a combination of the methods.
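
    The binary segmentation technique described above can be sketched with a simple standardized mean-shift statistic standing in for the thesis's battery of single-change-point tests (function names, threshold and data are invented):

    ```python
    import numpy as np

    def cusum_split(x):
        """Best single split by the maximal standardized mean-shift statistic."""
        n = len(x)
        best_k, best_stat = None, 0.0
        for k in range(2, n - 1):
            left, right = x[:k], x[k:]
            # Simple pooled scale from the whole segment; conservative
            # because a real shift inflates it.
            pooled = np.sqrt(x.var(ddof=1) * (1 / k + 1 / (n - k)))
            stat = abs(left.mean() - right.mean()) / pooled
            if stat > best_stat:
                best_k, best_stat = k, stat
        return best_k, best_stat

    def binary_segmentation(x, threshold=5.0, offset=0):
        """Recursively split the series wherever the statistic exceeds threshold."""
        if len(x) < 4:
            return []
        k, stat = cusum_split(np.asarray(x, dtype=float))
        if k is None or stat < threshold:
            return []
        return (binary_segmentation(x[:k], threshold, offset)
                + [offset + k]
                + binary_segmentation(x[k:], threshold, offset + k))

    rng = np.random.default_rng(3)
    series = np.concatenate([rng.normal(0, 1, 100),
                             rng.normal(3, 1, 100),
                             rng.normal(-2, 1, 100)])
    points = binary_segmentation(series, threshold=5.0)
    ```

    Any of the single-change-point tests in the study can be dropped in for `cusum_split`; binary segmentation only needs a best split location and a decision on whether it is significant.
    
    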

  • 21.
    Andersson, Markus
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Multivariate Financial Time Series and Volatility Models with Applications to Tactical Asset Allocation (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The financial markets have a complex structure, and modelling techniques have recently become more and more sophisticated. For a portfolio manager it is therefore very important to find better and more refined modelling techniques, especially after the 2007-2008 banking crisis. The idea in this thesis is to find the connection between the components of the macroeconomic environment and portfolios consisting of assets from OMX Stockholm 30, and to use these relationships to perform Tactical Asset Allocation (TAA). The more specific aim of the project is to show that dynamic modelling techniques outperform static models in portfolio theory.

  • 22.
    Andersson, Sofia
    et al.
    AstraZeneca R and D.
    Rydén, Tobias
    Lund University.
    Subspace estimation and prediction methods for hidden Markov models (2009). In: Annals of Statistics, ISSN 0090-5364, E-ISSN 2168-8966, Vol. 37, no 6B, pp. 4131-4152. Article in journal (Refereed)
    Abstract [en]

    Hidden Markov models (HMMs) are probabilistic functions of finite Markov chains or, put in other words, state space models with finite state space. In this paper, we examine subspace estimation methods for HMMs whose output lies in a finite set as well. In particular, we study the geometric structure arising from the non-minimality of the linear state space representation of HMMs, and the consistency of a subspace algorithm arising from a certain factorization of the singular value decomposition of the estimated linear prediction matrix. For this algorithm, we show that the estimates of the transition and emission probability matrices are consistent up to a similarity transformation, and that the m-step linear predictor computed from the estimated system matrices is consistent, i.e., converges to the true optimal linear m-step predictor.

  • 23.
    Armerin, Fredrik
    KTH, Superseded Departments, Mathematics.
    Aspects of cash-flow valuation (2004). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis consists of five papers. In the first two papers we consider a general approach to cash flow valuation, focusing on dynamic properties of the value of a stream of cash flows. The third paper discusses immunization theory, where old results are shown to hold in general deterministic models, but often fail to be true in stochastic models. In the fourth paper we comment on the connection between arbitrage opportunities and an immunized position. Finally, in the last paper we study coherent and convex measures of risk applied to portfolio optimization and insurance.

  • 24. Aro, Helena
    et al.
    Djehiche, Boualem
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Löfdahl, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Stochastic modelling of disability insurance in a multi-period framework2015In: Scandinavian Actuarial Journal, ISSN 0346-1238, E-ISSN 1651-2030, no 1, 88-106 p.Article in journal (Refereed)
    Abstract [en]

    We propose a stochastic semi-Markovian framework for disability modelling in a multi-period discrete-time setting. The logistic transforms of disability inception and recovery probabilities are modelled by means of stochastic risk factors and basis functions, using counting processes and generalized linear models. The model for disability inception also takes IBNR claims into consideration. We fit various versions of the models to Swedish disability claims data.

  • 25. Asmussen, Sören
    et al.
    Rydén, Tobias
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A Note on Skewness in Regenerative Simulation2011In: Communications in statistics. Simulation and computation, ISSN 0361-0918, E-ISSN 1532-4141, Vol. 40, no 1, 45-57 p.Article in journal (Refereed)
    Abstract [en]

    The purpose of this article is to show, empirically and theoretically, that performance evaluation by means of regenerative simulation often involves random variables with distributions that are heavy tailed and heavily skewed. This, in turn, leads to the variance of estimators being poorly estimated, and confidence intervals having actual coverage quite different from (typically lower than) the nominal one. We illustrate these general ideas by estimating the mean occupancy and tail probabilities in M/G/1 queues, comparing confidence intervals computed from batch means to various intervals computed from regenerative cycles. In addition, we provide theoretical results on skewness to support the empirical findings.

  • 26.
    Aurell, Erik
    et al.
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST). Depts of Information and Computer Science and Applied Physics, Aalto University, Finland.
    Del Ferraro, Gino
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Causal analysis, Correlation-Response, and Dynamic cavity2016In: International Meeting on High-Dimensional Data-Driven Science (HD3-2015), Institute of Physics (IOP), 2016, 012002Conference paper (Refereed)
    Abstract [en]

    The purpose of this note is to point out analogies between causal analysis in statistics and the correlation-response theory in statistical physics. It is further shown that for some systems the dynamic cavity offers a way to compute the stationary state of a non-equilibrium process effectively, which could then be taken as an alternative starting point for causal analysis.

  • 27.
    Baldvindsdottir, Ebba-Kristin
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    On Constructing a Market Consistent Economic Scenario Generator2011Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE creditsStudent thesis
  • 28. Bao, Z.
    et al.
    Erdős, L.
    Schnelli, Kevin
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Local Law of Addition of Random Matrices on Optimal Scale2017In: Communications in Mathematical Physics, ISSN 0010-3616, E-ISSN 1432-0916, Vol. 349, no 3, 947-990 p.Article in journal (Refereed)
    Abstract [en]

    The eigenvalue distribution of the sum of two large Hermitian matrices, when one of them is conjugated by a Haar distributed unitary matrix, is asymptotically given by the free convolution of their spectral distributions. We prove that this convergence also holds locally in the bulk of the spectrum, down to the optimal scales larger than the eigenvalue spacing. The corresponding eigenvectors are fully delocalized. Similar results hold for the sum of two real symmetric matrices, when one is conjugated by a Haar orthogonal matrix.

  • 29.
    Batres-Estrada, Bilberto
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Deep learning for multivariate financial time series2015Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Deep learning is a framework for training and modelling neural networks, which have recently surpassed all conventional methods in many learning tasks, most prominently image and voice recognition. This thesis uses deep learning algorithms to forecast financial data. The deep learning framework is used to train a neural network, a Deep Belief Network (DBN) coupled to a Multilayer Perceptron (MLP), which is used to choose stocks to form portfolios. The portfolios have better returns than the median of the stocks forming the list. The stocks forming the S&P 500 are included in the study. The results obtained from the deep neural network are compared to benchmarks from a logistic regression network, a multilayer perceptron and a naive benchmark. The results obtained from the deep neural network are better and more stable than the benchmarks. The findings support that deep learning methods will find their way into finance due to their reliability and good performance.
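
    The neural-network-versus-logistic-benchmark comparison can be sketched in miniature (synthetic features and labels, a plain scikit-learn MLP rather than a DBN, and no S&P 500 data): a nonlinear signal that a linear benchmark cannot capture is where the network should pull ahead.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(10)

# Synthetic direction-classification task: the "up/down" label depends on an
# interaction between two features, invisible to a linear model.
n, p = 4000, 20
X = rng.standard_normal((n, p))
y = (X[:, 0] * X[:, 1] + 0.5 * rng.standard_normal(n)) > 0

X_tr, X_te = X[:3000], X[3000:]
y_tr, y_te = y[:3000], y[3000:]

mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)
logit = LogisticRegression().fit(X_tr, y_tr)

acc_mlp = mlp.score(X_te, y_te)      # network exploits the interaction
acc_logit = logit.score(X_te, y_te)  # linear benchmark stays near chance
print(acc_mlp, acc_logit)
```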

  • 30.
    Bayer, Christian
    et al.
    Weierstrass Institute for Applied Analysis and Stochastics.
    Hoel, Håkon
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis, NA.
    von Schwerin, Erik
    King Abdullah University of Science and Technology.
    Tempone, Raul
    King Abdullah University of Science and Technology.
    On non-asymptotic optimal stopping criteria in Monte Carlo simulations2012Report (Other academic)
    Abstract [en]

    We consider the setting of estimating the mean of a random variable by a sequential stopping rule Monte Carlo (MC) method. The performance of a typical second moment based sequential stopping rule MC method is shown to be unreliable in such settings both by numerical examples and through analysis. By analysis and approximations, we construct a higher moment based stopping rule which is shown in numerical examples to perform more reliably and only slightly less efficiently than the second moment based stopping rule.
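
    The second-moment-based rule analysed above can be sketched as follows (a textbook version of the rule whose reliability the report questions, not the higher-moment rule it constructs; the Exp(1) target is our illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)

def sequential_mc_mean(sample, eps, z=1.96, batch=1000, max_n=10**6):
    """Second-moment sequential stopping rule: keep sampling until the
    normal-approximation confidence interval for the mean has
    half-width below eps."""
    xs = sample(batch)
    while len(xs) < max_n:
        half_width = z * xs.std(ddof=1) / np.sqrt(len(xs))
        if half_width < eps:
            break
        xs = np.concatenate([xs, sample(batch)])
    return xs.mean(), len(xs)

# Example: estimate the mean of an Exp(1) random variable (true mean 1).
est, n_used = sequential_mc_mean(lambda k: rng.exponential(size=k), eps=0.01)
print(est, n_used)
```

For heavy-tailed or heavily skewed summands, the sample standard deviation in this rule is itself poorly estimated, which is exactly the failure mode the report addresses.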

  • 31. Bengtsson, Göran
    et al.
    Nilsson, Elna
    Rydén, Tobias
    Lund University.
    Wiktorsson, Magnus
    Irregular walks and loops combines in small-scale movement of a soil insect: implications for dispersal biology2004In: Journal of Theoretical Biology, ISSN 0022-5193, E-ISSN 1095-8541, Vol. 231, no 2, 299-306 p.Article in journal (Refereed)
    Abstract [en]

    Analysis of small-scale movement patterns of animals may help us to understand and predict movement at a larger scale, such as dispersal, which is a key parameter in spatial population dynamics. We have chosen to study the movement of a soil-dwelling Collembola, Protaphorura armata, in an experimental system consisting of a clay surface with or without physical obstacles. A combination of video recordings, descriptive statistics, and walking simulations was used to evaluate the movement pattern. Individuals were found to link periods of irregular walk with those of looping, in a homogeneous environment as well as in one structured to heterogeneity by physical obstacles. The number of loops varied between 0 and 44 per hour from one individual to another, and some individuals preferred to make loops by turning right and others by turning left. P. armata spent less time at the boundary of small obstacles compared to large ones, presumably because of a lower probability of tracking the steepness of the curvature as the individual walks along a highly curved surface. Food-deprived P. armata had a more winding movement and made more circular loops than those that were well fed. The observed looping behaviour is interpreted in the context of systematic search strategies and compared with similar movement patterns found in other species.

  • 32.
    Berg, Edvin
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Orrsveden, Magnus
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A regression analysis of the factors affecting the ticket price in the travel industry2017Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    This bachelor thesis in applied mathematics and industrial engineering and management investigates which factors affect the price of tickets in the travel industry. This has been done by performing different multiple linear regression analyses based on theory from mathematical statistics and econometrics. The analyses were made with data provided by MTR Express, containing departures in 2016 for the main operators in the railway and airline industries. The route analysed is Stockholm - Gothenburg, since this is the route where MTR Express has established its business in the Swedish railway market. The results of the linear regression analysis show that the variables "Days before departure" and the weekday of travel have the most significant impact on prices for both train and flight tickets. The final models have an explanation degree of 50% for the railway and 51% for the airline industry. The results show many similarities and correlations between the railway and airline industries. Furthermore, some interesting differences between these subindustries appeared in the final regression models, and these have been one of the aspects in the discussion. The conclusion of the thesis is that there are several different aspects affecting the price in the travel industry.

  • 33.
    Berglund, David
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Analysis of Swedish pollutants2012Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Today's environmental reports contain flaws in the acquired data. This master thesis aims to improve the estimation of those flaws. The data in question originates from Swedish industrial facilities.

    The thesis involves data-treatment by statistical analysis, which is done through fitting a model by the means of analysis of variance and multilevel modeling. The thesis also involves gathering and work with data from databases, as well as systematic treatment, sorting, categorization and evaluation of the data material.

    Calculations are made through the SAS statistical analysis program, which rendered estimates of fixed, linear and random effects. The results are presented through graphs and numerical estimates in the later part of the report. Calculations for estimations of the grand pollutant totals are conducted. These are compared to the observed data for relevance. Alternative ways of working on the problem at hand are discussed, as well as problems that have appeared during the work on the master thesis. The relevant code and calculations are attached towards the end.

  • 34.
    BERGROTH, JONAS
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Performance and risk analysis of the Hodrick-Prescott filter2011Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE creditsStudent thesis
  • 35.
    Bergroth, Magnus
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Carlsson, Anders
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Estimation of a Liquidity Premium for Swedish Inflation Linked Bonds2014Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    It is well known that the inflation linked breakeven inflation, defined as the difference between a nominal yield and an inflation linked yield, sometimes is used as an approximation of the market’s inflation expectation. D’Amico et al. (2009, [5]) show that this is a poor approximation for the US market. Based on their work, this thesis shows that the approximation also is poor for the Swedish bond market. This is done by modelling the Swedish bond market using a five-factor latent variable model, where an inflation linked bond specific premium is introduced. Latent variables and parameters are estimated using a Kalman filter and a maximum likelihood estimation. The conclusion is drawn that the modelling was successful and that the model implied outputs gave plausible results.
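
    The filtering step underlying the estimation can be shown on a toy one-factor state space model (all parameters invented, and a single scalar factor rather than the thesis's five-factor model):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model:
#   x_t = phi * x_{t-1} + w_t,  w_t ~ N(0, q)     (latent factor)
#   y_t = x_t + v_t,            v_t ~ N(0, r)     (observed yield)
phi, q, r = 0.95, 0.1, 0.5
T = 2000
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal(0.0, np.sqrt(q))
    y[t] = x[t] + rng.normal(0.0, np.sqrt(r))

# Scalar Kalman filter: predict, then update with the Kalman gain.
m, P = 0.0, 1.0
mf = np.zeros(T)
for t in range(T):
    m, P = phi * m, phi**2 * P + q        # predict
    K = P / (P + r)                       # Kalman gain
    m = m + K * (y[t] - m)                # update mean
    P = (1.0 - K) * P                     # update variance
    mf[t] = m

rmse_filter = np.sqrt(np.mean((mf - x) ** 2))
rmse_naive = np.sqrt(np.mean((y - x) ** 2))   # using the noisy observation itself
print(rmse_filter, rmse_naive)
```

In the full estimation, the filter's prediction-error decomposition also supplies the likelihood that the maximum likelihood step maximizes over the model parameters.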

  • 36.
    Berlin, Daniel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Multi-class Supervised Classification Techniques for High-dimensional Data: Applications to Vehicle Maintenance at Scania2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In vehicle repairs, locating the cause of error can often turn out to be more time-consuming than the repair itself. Hence a systematic way to accurately predict a fault-causing part would constitute a valuable tool, especially for errors that are difficult to diagnose. This thesis explores the predictive ability of Diagnostic Trouble Codes (DTCs), produced by the electronic system on Scania vehicles, as indicators for fault-causing parts. The statistical analysis is based on about 18800 observations of vehicles where both DTCs and replaced parts could be identified during the period March 2016 to March 2017. Two different approaches to forming classes are evaluated. Many classes had only few observations and, to give the classifiers a fair chance, it was decided to omit observations of classes based on their frequency in the data. After processing, the resulting data comprised 1547 observations on 4168 features, demonstrating very high dimensionality and making it impossible to apply standard methods of large-sample statistical inference. Two procedures of supervised statistical learning that are able to cope with high dimensionality and multiple classes, Support Vector Machines (SVM) and Neural Networks (NN), are exploited and evaluated. The analysis showed that on data with 1547 observations of 4168 features (unique DTCs) and 7 classes, SVM yielded an average prediction accuracy of 79.4% compared to 75.4% using NN. The conclusion of the analysis is that DTCs hold potential to be used as indicators for fault-causing parts in a predictive model, but in order to increase prediction accuracy the learning data needs improvement. Scope for future research to improve and expand the model, along with practical suggestions for exploiting supervised classifiers at Scania, is provided. Keywords: Statistical learning, Machine learning, Neural networks, Deep learning, Supervised learning, High dimensionality
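
    The p >> n multiclass setting can be sketched in miniature (entirely synthetic binary "DTC" indicators and a hypothetical linear scoring model for the labels, not Scania data; a linear SVM stands in for the kernel machines evaluated in the thesis):

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)

# Sparse binary indicator features, several fault classes.
n, p, n_classes = 1500, 500, 7
X = (rng.uniform(size=(n, p)) < 0.05).astype(float)   # sparse indicator matrix
# Each class is driven by its own subset of indicative features (invented).
W = rng.normal(size=(p, n_classes)) * (rng.uniform(size=(p, n_classes)) < 0.2)
y = (X @ W + 0.05 * rng.normal(size=(n, n_classes))).argmax(axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
svm = LinearSVC(max_iter=5000).fit(X_tr, y_tr)   # one-vs-rest linear SVM
acc = svm.score(X_te, y_te)                      # well above the 1/7 chance level
print(acc)
```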

  • 37.
    Berntsson, Fredrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Methods of high-dimensional statistical analysis for the prediction and monitoring of engine oil quality2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Engine oils fill important functions in the operation of modern internal combustion engines. Many essential functions are provided by compounds that are either sacrificial or susceptible to degradation. The engine oil will eventually fail to provide these functions, with possibly unrepairable damage as a result. To decide how often the oil should be changed, there are several laboratory tests to monitor the oil condition, e.g. FTIR (oxidation, nitration, soot, water), viscosity, TAN (acidity), TBN (alkalinity), ICP (elemental analysis) and GC (fuel dilution). These oil tests are however often labor-intensive and costly, and it would be desirable to supplement and/or replace some of them with simpler and faster methods. One way is to utilise the whole spectrum of the FTIR measurements already performed. FTIR is traditionally used to monitor chemical properties at specific wavelengths, but also provides information, though in a more multivariate way, relevant for viscosity, TAN, and TBN. In order to make use of the whole FTIR spectrum, methods capable of handling high-dimensional data have to be used. Partial Least Squares Regression (PLSR) will be used in order to predict the relevant chemical properties.

    This survey also considers feature selection methods based on the second order statistic Higher Criticism as well as Hierarchical Clustering. The Feature Selection methods are used in order to ease further research on how infrared data may be put into usage as a tool for more automated oil analyses.

    Results show that PLSR may be utilised to provide reliable estimates of the mentioned chemical quantities. In addition, the mentioned feature selection methods may be applied without losing prediction power. The feature selection methods considered may also aid analysis of the engine oil itself and future work on how to utilise infrared properties in the analysis of engine oil in other situations.

  • 38.
    Bisot, Clémence
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Spectral Data Processing for Steel Industry2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    For the steel industry, knowing and understanding the characteristics of a steel strip surface at every step of the production process is a key element in controlling final product quality. Today, as quality requirements increase, this task becomes more and more important. The surface of new steel grades with complex chemical compositions has behaviours that are especially hard to master. For those grades in particular, surface control is critical and difficult.

    One of the promising techniques for assessing the problem of surface quality control is spectral analysis. Over the last few years, ArcelorMittal, the world's leading integrated steel and mining company, has led several projects to investigate the possibility of using devices to measure the light spectrum of their products at different stages of the production.

    The large amount of data generated by these devices makes it absolutely necessary to develop efficient data treatment pipelines to get meaningful information out of the recorded spectra. In this thesis, we developed mathematical models and statistical tools to treat signals measured with spectrometers in the framework of different research projects.

  • 39. Bizjajeva, Svetlana
    et al.
    Olsson, Jimmy
    Antithetic sampling for sequential Monte Carlo methods with application to state-space models2016In: Annals of the Institute of Statistical Mathematics, ISSN 0020-3157, E-ISSN 1572-9052, Vol. 68, no 5, 1025-1053 p.Article in journal (Refereed)
    Abstract [en]

    In this paper, we cast the idea of antithetic sampling, widely used in standard Monte Carlo simulation, into the framework of sequential Monte Carlo methods. We propose a version of the standard auxiliary particle filter where the particles are mutated blockwise in such a way that all particles within each block are, first, offspring of a common ancestor and, second, negatively correlated conditionally on this ancestor. By deriving and examining the weak limit of a central limit theorem describing the convergence of the algorithm, we conclude that the asymptotic variance of the produced Monte Carlo estimates can be straightforwardly decreased by means of antithetic techniques when the particle filter is close to fully adapted, which involves approximation of the so-called optimal proposal kernel. As an illustration, we apply the method to optimal filtering in state-space models.
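
    The building block the paper transplants into the particle filter is classical antithetic sampling in plain Monte Carlo, which can be sketched as follows (the toy target E[exp(U)], U ~ Uniform(0,1), is our illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(4)

# Pair each draw U with 1 - U; since exp is monotone, the pair (exp(U),
# exp(1-U)) is negatively correlated and the pair average has reduced variance.
n = 20_000
u = rng.uniform(size=n)

plain = np.exp(rng.uniform(size=2 * n))        # 2n independent evaluations
anti = 0.5 * (np.exp(u) + np.exp(1.0 - u))     # n pairs, also 2n evaluations

true_value = np.e - 1.0                        # E[exp(U)] = e - 1
print(plain.mean(), anti.mean())
print(plain.var() / (2 * n), anti.var() / n)   # estimator variances, same budget
```

Both estimators spend 2n function evaluations; the variance comparison on the last line is therefore at equal cost.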

  • 40.
    Bjarnadottir, Frida
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Implementation of CoVaR, A Measure for Systemic Risk2012Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In recent years we have witnessed how distress can spread quickly through the financial system and threaten financial stability. Hence there has been increased focus on developing systemic risk indicators that can be used by central banks and others as a monitoring tool. For Sveriges Riksbank it is of great value to be able to quantify the risks that can threaten the Swedish financial system, and CoVaR is a systemic risk measure implemented here with that purpose. CoVaR, which stands for conditional Value at Risk, measures a financial institution's contribution to systemic risk and its contribution to the risk of other financial institutions. The conclusion is that CoVaR, together with other systemic risk indicators, can help provide a better understanding of the risks threatening the stability of the Swedish financial system.
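
    The CoVaR idea can be sketched on simulated returns (hypothetical data, not the Riksbank's; this conditions on the distress event "institution at or below its VaR", the Girardi-Ergün-style variant, whereas Adrian-Brunnermeier condition on equality, often via quantile regression):

```python
import numpy as np

rng = np.random.default_rng(5)

n = 200_000
inst = rng.standard_normal(n)                         # institution return
system = 0.6 * inst + 0.8 * rng.standard_normal(n)    # system return, unit variance

q = 0.05
var_inst = np.quantile(inst, q)           # institution's 5% VaR (return quantile)
distress = inst <= var_inst               # institution in distress
covar = np.quantile(system[distress], q)  # CoVaR: system VaR given distress
var_system = np.quantile(system, q)       # unconditional system VaR
delta_covar = covar - var_system          # the institution's systemic contribution
print(var_system, covar, delta_covar)
```

Because the institution and the system are positively dependent here, the conditional quantile sits deeper in the tail than the unconditional one, and the (negative) delta CoVaR quantifies that contribution.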

  • 41.
    Bjarnason, Jónas
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Optimized Transport Planning through Coordinated Collaboration between Transport Companies2013Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis studies a specific transport planning problem, which is based on a realistic scenario in the transport industry and deals with the delivery of goods by transport companies to their customers. The main aspect of the planning problem is to consider whether each company should deliver the cargo on its own or through a collaboration of companies, in which the companies share the deliveries. In order to find out whether or not collaboration should take place, the transport planning problem is represented as a mathematical optimization problem, which is formulated using a column generation method and whose objective function involves minimization of costs. Three different solution cases are considered, each of which takes into account different combinations of vehicles used for delivering the cargo as well as different maximum allowed driving times of the vehicles.

    The goal of the thesis is twofold; firstly, to see if the optimization problem can be solved and, secondly, in case the problem is solvable, to investigate whether it is beneficial for transport companies to collaborate under the aforementioned circumstances in order to incur lower costs in all instances considered. It turns out that both goals are achieved. To achieve the first goal, a few simplifications need to be made. The simplifications pertain both to the formulation of the problem and to its implementation, as it is not only difficult to formulate a transport planning problem of this kind with respect to real-life situations, but the problem is also difficult to solve due to its computational complexity. As for the second goal of the thesis, a numerical comparison between the different instances for the two scenarios demonstrates that the costs under collaborative transport planning turn out to be considerably lower, which suggests that, under the circumstances considered in the thesis, collaboration between transport companies is beneficial for the companies involved.

  • 42.
    Björk, Tomas
    et al.
    Stockholm School of Economics.
    Hult, Henrik
    Dept. of Appl. Math. and Statistics, Universitetsparken 5, 2100 Copenhagen, Denmark.
    A note on Wick products and the fractional Black-Scholes model2005In: Finance and Stochastics, ISSN 0949-2984, E-ISSN 1432-1122, Vol. 9, no 2, 197-209 p.Article in journal (Refereed)
    Abstract [en]

    In some recent papers (Elliott and van der Hoek 2003; Hu and Oksendal 2003) a fractional Black-Scholes model has been proposed as an improvement of the classical Black-Scholes model (see also Benth 2003; Biagini et al. 2002; Biagini and Oksendal 2004). Common to these fractional Black-Scholes models is that the driving Brownian motion is replaced by a fractional Brownian motion and that the Ito integral is replaced by the Wick integral, and proofs have been presented that these fractional Black-Scholes models are free of arbitrage. These results on absence of arbitrage completely contradict a number of earlier results in the literature which prove that the fractional Black-Scholes model (and related models) will in fact admit arbitrage. The objective of the present paper is to resolve this contradiction by pointing out that the definition of the self-financing trading strategies and/or the definition of the value of a portfolio used in the above papers does not have a reasonable economic interpretation, and thus that the results in these papers are not economically meaningful. In particular we show that in the framework of Elliott and van der Hoek (2003), a naive buy-and-hold strategy does not in general qualify as "self-financing". We also show that in Hu and Oksendal (2003), a portfolio consisting of a positive number of shares of a stock with a positive price may, with positive probability, have a negative "value".

  • 43.
    Blanchet, Jose
    et al.
    Columbia University.
    Hult, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Leder, Kevin
    University of Minnesota.
    Importance sampling for stochastic recurrence equations with heavy-tailed increments2011In: Proceedings of the 2011 Winter Simulation Conference, 2011, 3824-3831 p.Conference paper (Other academic)
    Abstract [en]

    Importance sampling in the setting of heavy-tailed random variables has generally focused on models with additive noise terms. In this work we extend this concept by considering importance sampling for the estimation of rare events in Markov chains of the form X_{n+1} = A_{n+1} X_n + B_{n+1}, X_0 = 0, where the B_n's and A_n's are independent sequences of independent and identically distributed (i.i.d.) random variables, the B_n's are regularly varying, and the A_n's are suitably light-tailed relative to the B_n's. We focus on efficient estimation of the rare-event probability P(X_n > b) as b → ∞. In particular we present a strongly efficient importance sampling algorithm for estimating these probabilities, and present a numerical example showcasing the strong efficiency.

  • 44.
    Blanchet, Jose
    et al.
    Columbia University.
    Hult, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Leder, Kevin
    University of Minnesota.
    Rare-Event Simulation for Stochastic Recurrence Equations with Heavy-Tailed Innovations2013In: ACM Transactions on Modeling and Computer Simulation, ISSN 1049-3301, E-ISSN 1558-1195, Vol. 23, no 4, 22- p.Article in journal (Refereed)
    Abstract [en]

    In this article, rare-event simulation for stochastic recurrence equations of the form X_{n+1} = A_{n+1} X_n + B_{n+1}, X_0 = 0 is studied, where {A_n; n >= 1} and {B_n; n >= 1} are independent sequences consisting of independent and identically distributed real-valued random variables. It is assumed that the tail of the distribution of B_1 is regularly varying, whereas the distribution of A_1 has a suitably light tail. The problem of efficient estimation, via simulation, of quantities such as P{X_n > b} and P{sup_{k <= n} X_k > b} for large b and n is studied. Importance sampling strategies are investigated that provide unbiased estimators with bounded relative error as b and n tend to infinity.
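
    The crude Monte Carlo baseline that this line of work improves upon can be sketched as follows (all parameter choices are illustrative; the plain estimator's relative error degrades as b grows, which is what motivates the importance sampling schemes of the two papers above):

```python
import numpy as np

rng = np.random.default_rng(6)

# Crude Monte Carlo for P(X_n > b) in X_{k+1} = A_{k+1} X_k + B_{k+1}, X_0 = 0,
# with light-tailed A_k and regularly varying (Pareto-type) B_k.
n_steps, n_paths, b = 10, 200_000, 50.0
A = rng.uniform(0.0, 0.5, size=(n_paths, n_steps))    # light-tailed A_k
B = rng.pareto(2.0, size=(n_paths, n_steps)) + 1.0    # P(B > x) = x**-2, x >= 1

X = np.zeros(n_paths)
for k in range(n_steps):
    X = A[:, k] * X + B[:, k]

p_hat = np.mean(X > b)                                # plain estimator
rel_err = np.sqrt(p_hat * (1.0 - p_hat) / n_paths) / p_hat
print(p_hat, rel_err)
```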

  • 45.
    Blomberg, Niclas
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Higher Criticism Testing for Signal Detection in Rare And Weak Models2012Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    We need models for selecting a small subset of useful features from high-dimensional data, where the useful features are both rare and weak; this is crucial for e.g. supervised classification of sparse high-dimensional data. A preceding step is to detect the presence of useful features, signal detection. This problem is related to testing a very large number of hypotheses, where the proportion of false null hypotheses is assumed to be very small. However, reliable signal detection will only be possible in certain areas of the two-dimensional sparsity-strength parameter space, the phase space.

    In this report, we focus on two families of distributions, N and χ². In the former case, features are supposed to be independent and normally distributed. In the latter, in search of a more sophisticated model, we suppose that features depend in blocks, whose empirical separation strength asymptotically follows the non-central χ²_ν-distribution.

    Our search for informative features explores Tukey's higher criticism (HC), which is a second-level significance testing procedure for comparing the fraction of observed significances to the expected fraction under the global null.

    Throughout the phase space we investigate the estimated error rate,

    Err = (#Falsely rejected H0 + #Falsely rejected H1) / #Simulations,

    where H0: absence of informative signals, and H1: presence of informative signals, in both the N-case and the χ²_ν-case, for ν = 2, 10, 30. In particular, we find, using a feature vector of approximately the same size as in genomic applications, that the analytically derived detection boundary is too optimistic in the sense that, close to it, signal detection still fails, and we need to move far from the boundary into the success region to ensure reliable detection. We demonstrate that Err grows fast and irregularly as we approach the detection boundary from the success region.

    In the χ²_ν-case, ν > 2, no analytical detection boundary has been derived, but we show that the empirical success region there is smaller than in the N-case, especially as ν increases.
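
    The HC statistic in the normal case can be sketched as follows (a standard Donoho-Jin-style formulation with our own illustrative sparsity and signal strength, not the thesis's exact simulation setup):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def higher_criticism(pvals):
    """Higher criticism statistic: the maximal standardized excess of
    observed significances over what the global null predicts."""
    n = len(pvals)
    p = np.sort(pvals)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1.0 - p))
    keep = (p > 1.0 / n) & (i <= n // 2)   # common convention: avoid p ~ 0
    return hc[keep].max()

n = 10_000
z_null = rng.standard_normal(n)          # global null: pure noise
z_alt = z_null.copy()
z_alt[:200] += 3.0                       # rare (2%) and moderately weak signals

hc_null = higher_criticism(2 * norm.sf(np.abs(z_null)))
hc_alt = higher_criticism(2 * norm.sf(np.abs(z_alt)))
print(hc_null, hc_alt)
```

Under the global null HC stays moderate; the rare/weak signals inflate the fraction of small p-values and push the statistic up sharply.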

  • 46.
    Blomberg, Renée
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Who is Granted Disability Benefit in Sweden?: Description of risk factors and the effect of the 2008 law reform2013Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Disability benefit is a publicly funded benefit in Sweden that provides financial protection to individuals with permanently impaired working ability due to disability, injury, or illness. The eligibility requirements for disability benefit were tightened on June 1, 2008, to require that the working ability impairment be permanent and that no other factors, such as age or local labor market conditions, can affect eligibility for the benefit. The goal of this paper is to investigate risk factors for the incidence of disability benefit and the effects of the 2008 reform. This is the first study to investigate the impact of the 2008 reform on the demographics of those that received disability benefit. A logistic regression model was used to study the effect of the 2008 law change. The regression results show that the 2008 reform did have a statistically significant effect on the demographics of the individuals who were granted disability benefit. After the reform women were less overrepresented, the older age groups were more overrepresented, and people with short educations were more overrepresented. Although the variables for SKL regions together were jointly statistically significant, their coefficients were small and the group of variables had the least explanatory value compared to the variables for age, education, gender and the interaction variables.
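
    The logistic-regression-with-interaction design can be sketched on synthetic data (invented coefficients and simulated individuals, not the paper's microdata): a pre/post-reform dummy interacted with gender, so the interaction coefficient captures how the reform changed women's overrepresentation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)

n = 50_000
female = rng.integers(0, 2, n)
post = rng.integers(0, 2, n)             # 1 = after the June 2008 reform
# True model: women overrepresented before the reform, less so after.
log_odds = -2.0 + 0.8 * female + 0.1 * post - 0.5 * female * post
y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-log_odds))

X = np.column_stack([female, post, female * post])
fit = LogisticRegression(C=1e6, max_iter=1000).fit(X, y)  # ~unpenalized MLE
print(fit.coef_)   # columns: female, post, female x post
```

The negative interaction coefficient is the reform effect: the female log-odds advantage shrinks after 2008.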

  • 47.
    Blomkvist, Oscar
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Smart Beta - index weighting. 2015. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This study is a thesis concluding a 120-credit master's program in Mathematics with specialization in Financial Mathematics and Mathematical Statistics at the Royal Institute of Technology (KTH).

    The subject of Smart beta is defined and studied in an index-fund context. The portfolio weighting schemes tested are: equal weighting, maximum Sharpe ratio, maximum diversification, and fundamental weighting using P/E ratios. The outcome of the strategies is measured in performance (accumulated return), risk, and cost of trading, along with measures of the proportions of different assets in the portfolio.

    The thesis goes through the steps of collecting, ordering, and "cleaning" the data used in the process. A brief explanation of the historical simulation used to estimate stochastic variables such as expected returns and covariance matrices is included, as well as an analysis of the data's distribution.

    The optimization process is described, along with how the rules for UCITS compliance give rise to constrained optimization programs.

    The results indicate that all but the most diversified of the tested portfolios outperform the market-cap-weighted portfolio. In all cases, trading volumes and market impact increase in comparison with the cap-weighted portfolio. The Sharpe ratio maximizer yields a high level of return while keeping risk low. The fundamentally weighted portfolio performs best, but with higher risk. A combination of the two yields the portfolio with the highest return and lowest risk.
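    One of the steps this abstract describes, estimating inputs by historical simulation and forming maximum-Sharpe (tangency) weights, can be sketched as follows. The returns are synthetic, and the long-only and UCITS constraints discussed in the thesis are omitted here.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic daily returns for four assets (all numbers invented).
    T, n_assets = 1000, 4
    mu_true = np.array([0.0004, 0.0006, 0.0003, 0.0005])
    R = mu_true + 0.01 * rng.standard_normal((T, n_assets))

    # Historical simulation: estimate expected returns and the
    # covariance matrix directly from the observed sample.
    mu_hat = R.mean(axis=0)
    cov_hat = np.cov(R, rowvar=False)

    # Unconstrained maximum-Sharpe (tangency) weights: w proportional
    # to inv(Sigma) @ mu, rescaled to be fully invested (sum to one).
    raw = np.linalg.solve(cov_hat, mu_hat)
    w = raw / raw.sum()
    sharpe = (w @ mu_hat) / np.sqrt(w @ cov_hat @ w)
    ```

    Adding the UCITS concentration limits (e.g. the 5/10/40 rule) turns this closed-form solution into the constrained quadratic programs the thesis describes.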

  • 48.
    Bogren, Felix
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Estimating the Term Structure of Default Probabilities for Heterogeneous Credit Portfolios. 2015. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The aim of this thesis is to estimate the term structure of default probabilities for heterogeneous credit portfolios. The term structure is defined as the cumulative distribution function (CDF) of the time until default. Since the CDF is the complement of the survival function, survival analysis is applied to estimate the term structures. To manage long-term survivors and plateaued survival functions, the data are assumed to follow a parametric as well as a semi-parametric mixture cure model. Due to the general intractability of the maximum likelihood of mixture models, the parameters are estimated by the EM algorithm. A simulation study is conducted to assess the accuracy of the EM algorithm applied to the parametric mixture cure model with data characterized by a low default incidence. The simulation study reveals difficulties in estimating the parameters when the data are not gathered over a sufficiently long observational window. The estimated term structures are compared to empirical term structures determined by the Kaplan-Meier estimator. The results indicate a good fit of the model for longer horizons when applied to each credit type separately, despite difficulties capturing the dynamics of the term structure for the first one to two years. Both models performed poorly with few defaults; the parametric model did, however, not seem sensitive to low default rates. In conclusion, the class of mixture cure models is indeed viable for estimating the term structure of default probabilities for heterogeneous credit portfolios.
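    The Kaplan-Meier estimator used as the empirical benchmark in this abstract can be sketched as follows. This is a generic textbook implementation, not the thesis code; the term structure of default probabilities is then the complement PD(t) = 1 − S(t).

    ```python
    import numpy as np

    def kaplan_meier(times, events):
        """Kaplan-Meier estimate of the survival function S(t).

        times  : observed time of default or censoring per obligor
        events : 1 if default was observed, 0 if the time is censored
        Returns the distinct default times and S(t) just after each.
        """
        times = np.asarray(times, dtype=float)
        events = np.asarray(events, dtype=int)
        t_uniq = np.unique(times[events == 1])
        surv, s = [], 1.0
        for t in t_uniq:
            at_risk = np.sum(times >= t)              # still observed at t
            d = np.sum((times == t) & (events == 1))  # defaults at t
            s *= 1.0 - d / at_risk                    # product-limit step
            surv.append(s)
        return t_uniq, np.array(surv)
    ```

    For example, `kaplan_meier([1, 2, 3, 4], [1, 1, 0, 1])` gives S = (0.75, 0.5, 0.0) at times 1, 2, 4, so the three-year cumulative default probability is PD(3) = 1 − 0.5 = 0.5.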

  • 49.
    Boros, Daniel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    On Lapse risk factors in Solvency II. 2014. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In the wake of the sub-prime crisis of 2008, the European Insurance and Occupational Pensions Authority issued the Solvency II directive, aiming at replacing the obsolete Solvency I framework by 2016. Among the quantitative requirements of Solvency II is a measure of an insurance firm's solvency risk, the solvency capital requirement (SCR). It aims at establishing the amount of equity the company needs to hold to be able to meet its insurance obligations with a probability of 0.995 over the coming year. The SCR of a company is essentially built up from the SCRs induced by a set of quantifiable risks. Among these are risks originating from the take-up rate of contractual options, known as lapse risks.

    In this thesis, the contractual options of a life insurer are identified, and risk factors aiming to capture the arising risks are suggested. It is concluded that a risk factor estimating the size of mass transfer events captures the risk arising through the resulting rescaling of the balance sheet. Further, a risk factor modeling deviations from the company's assumption for the yearly transfer rate is introduced to capture the risks induced by the characteristics of traditional life insurance and unit-linked insurance contracts upon transfer. The risk factors are modeled so as to introduce co-dependence with equity returns as well as with interest rates of various durations, and the model parameters are estimated using statistical methods on Norwegian transfer-frequency data obtained from Finans Norge.

    The univariate and multivariate properties of the models are investigated in a scenario setting, and it is concluded that the suggested models provide predominantly plausible results for the mass-lapse risk factors. However, the performance of the models for the risk factors aiming to capture deviations in the transfer assumptions is questionable, which is why two means of increasing their validity are proposed.
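    The SCR concept underlying this thesis, equity sufficient to absorb the one-year loss at the 99.5th percentile, can be sketched in a few lines. The loss distribution below is purely illustrative (a shifted lognormal), not a calibrated insurance model.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Simulate one-year losses from an arbitrary heavy-tailed
    # distribution (invented for illustration only).
    losses = rng.lognormal(mean=0.0, sigma=0.8, size=100_000) - 1.0

    # The capital requirement is the 99.5th percentile of the one-year
    # loss distribution: holding this much equity, the insurer meets
    # its obligations with probability 0.995 over the coming year.
    scr = np.quantile(losses, 0.995)
    ```

    In the full standard formula, per-risk capital charges like the lapse-risk SCRs discussed above are computed this way and then aggregated with a correlation matrix.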

  • 50.
    Borysov, Stanislav
    et al.
    KTH, School of Engineering Sciences (SCI), Applied Physics, Nanostructure Physics. KTH, Centres, Nordic Institute for Theoretical Physics NORDITA.
    Roudi, Yasser
    KTH, Centres, Nordic Institute for Theoretical Physics NORDITA. The Kavli Institute for Systems Neuroscience, NTNU, Trondheim, Norway.
    Balatsky, Alexander V.
    KTH, Centres, Nordic Institute for Theoretical Physics NORDITA. Institute for Materials Science, Los Alamos National Laboratory, Los Alamos, NM, United States.
    U.S. stock market interaction network as learned by the Boltzmann machine. 2015. In: European Physical Journal B: Condensed Matter Physics, ISSN 1434-6028, E-ISSN 1434-6036, Vol. 88, no 12, pp. 1-14. Article in journal (Refereed).
    Abstract [en]

    We study the historical dynamics of the joint equilibrium distribution of stock returns in the U.S. stock market using a Boltzmann distribution model parametrized by external fields and pairwise couplings. Within the Boltzmann learning framework for statistical inference, we analyze the historical behavior of the parameters inferred using exact and approximate learning algorithms. Since the model and the inference methods require binary variables, the effect of mapping continuous returns to the discrete domain is studied. The presented results show that binarization preserves the correlation structure of the market. The properties of the distributions of external fields and couplings, as well as the market interaction network and the industry sector clustering structure, are studied for different historical dates and moving window sizes. We demonstrate that the observed positive heavy tail in the distribution of couplings is related to the sparse clustering structure of the market. We also show that discrepancies between the model's parameters might be used as a precursor of financial instabilities.
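    The binarization step this abstract refers to can be sketched as follows, using synthetic Gaussian returns rather than actual market data. For Gaussian variables the continuous and binarized correlation matrices are linked by the arcsine law, which is one way to see why the sign-based mapping preserves the correlation structure.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic Gaussian "returns" with a random correlation structure
    # (illustrative stand-in for the paper's market data).
    T, n = 5000, 5
    A = rng.standard_normal((n, n))
    cov = A @ A.T + n * np.eye(n)        # a well-conditioned covariance
    R = rng.multivariate_normal(np.zeros(n), cov, size=T)

    S = np.sign(R)                        # binary +/-1 "spin" variables

    corr_cont = np.corrcoef(R, rowvar=False)
    corr_bin = np.corrcoef(S, rowvar=False)

    # For Gaussian data, corr_bin = (2/pi) * arcsin(corr_cont), a
    # monotone map: binarization preserves the sign and ordering of
    # the correlations, only shrinking their magnitudes.
    ```

    The binary spins S are then the input to Boltzmann learning of the fields and pairwise couplings.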
