Results 351 - 384 of 384
  • 351.
    Tingström, Victor
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Sequential parameter and state learning in continuous time stochastic volatility models using the SMC² algorithm (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this Master’s thesis, joint sequential inference of both parameters and states of stochastic volatility models is carried out using the SMC² algorithm of Chopin, Jacob and Papaspiliopoulos ("SMC²: an efficient algorithm for sequential analysis of state-space models"). The models under study are the continuous-time stochastic volatility models (i) Heston, (ii) Bates, and (iii) SVCJ, where inference is based on option prices. It is found that SMC² performs well for the simpler models (i) and (ii), whereas filtering in (iii) performs worse. Furthermore, it is found that the FFT option price evaluation is the most computationally demanding step, and it is suggested to explore other avenues of computation, such as GPGPU-based computing.
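
    A minimal, hedged sketch of the filtering layer that SMC² nests inside its parameter-level sampler: a bootstrap particle filter for a simple discretized log-volatility model with hypothetical parameters and simulated log-return data (not the Heston/Bates/SVCJ models priced from options in the thesis).

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical discretized log-volatility model (illustrative only):
        #   x_t = mu + phi * (x_{t-1} - mu) + sigma * eps_t,   y_t ~ N(0, exp(x_t))
        mu, phi, sigma = -1.0, 0.95, 0.2
        T, N = 200, 1000                      # time steps, particles

        # Simulate synthetic data.
        x = np.empty(T)
        x[0] = mu
        for t in range(1, T):
            x[t] = mu + phi * (x[t - 1] - mu) + sigma * rng.normal()
        y = rng.normal(0.0, np.exp(0.5 * x))

        # Bootstrap particle filter: propagate, weight by the observation density,
        # resample; the filtered mean of x_t is the weighted particle average.
        particles = rng.normal(mu, sigma / np.sqrt(1 - phi ** 2), size=N)
        filtered_mean = np.empty(T)
        for t in range(T):
            particles = mu + phi * (particles - mu) + sigma * rng.normal(size=N)
            log_w = -0.5 * (np.log(2 * np.pi) + particles + y[t] ** 2 / np.exp(particles))
            w = np.exp(log_w - log_w.max())
            w /= w.sum()
            filtered_mean[t] = np.sum(w * particles)
            particles = particles[rng.choice(N, size=N, p=w)]   # multinomial resampling

        print(filtered_mean[-5:])

    In SMC² proper, many such filters (one per parameter particle) are run jointly and their likelihood estimates drive a parameter-level resampling and rejuvenation step.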

  • 352.
    Torell, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Name Concentration Risk and Pillar 2 Compliance: The Granularity Adjustment (2013). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    A credit portfolio where each obligor contributes infinitesimally to the risk is said to be infinitely granular. The risk arising from the fact that no real credit portfolio is infinitely granular is called name concentration risk.

    Under Basel II, banks are required to hold a capital buffer for credit risk in order to keep the probability of default at an acceptable level. Credit risk capital charges computed under Pillar 1 of Basel II have been calibrated for a specific level of name concentration. If a bank deviates from this benchmark, it is expected to address this under Pillar 2, which may involve increased capital charges.

    Here, we look at some of the difficulties that a bank may encounter when computing a name concentration risk add-on under Pillar 2. In particular, we study the granularity adjustment for the Vasicek and CreditRisk+ models. An advantage of this approach is that no vendor software products are necessary. We also address the questions of when the granularity adjustment is a coherent risk measure and how to allocate the add-on to exposures in order to optimize the credit portfolio. Finally, the discussed models are applied to real data.
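
    As a hedged illustration of the name-concentration ingredient discussed above, the snippet below computes the exposure-share Herfindahl-Hirschman index for a hypothetical portfolio; granularity adjustments in the Vasicek and CreditRisk+ models scale a quantity of this kind by model-dependent factors that are not reproduced here.

        import numpy as np

        # Hypothetical exposures (any real portfolio would be read from bank data).
        exposures = np.array([120.0, 80.0, 75.0, 40.0, 25.0, 10.0, 5.0, 5.0])

        shares = exposures / exposures.sum()
        hhi = np.sum(shares ** 2)              # 1/n for equal exposures, -> 0 as n grows
        effective_names = 1.0 / hhi            # "effective" number of equally sized obligors

        print(f"HHI = {hhi:.4f}, effective number of obligors = {effective_names:.1f}")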

  • 353.
    Trost, Johanna
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Tail Dependence Considerations for Cross-Asset Portfolios (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Extreme events, heaviness of log return distribution tails and bivariate asymptotic dependence are important aspects of cross-asset tail risk hedging and diversification. These are in this thesis investigated with the help of threshold copulas, scalar tail dependence measures and bivariate Value-at-Risk. The theory is applied to a global equity portfolio extended with various other asset classes as proxied by different market indices. The asset class indices are shown to possess so-called stylised facts of financial asset returns such as heavy-tailedness, clustered volatility and aggregational Gaussianity. The results on the tail dependence structure show a lack of strong joint tail dependence, but suitable bivariate dependence models can nonetheless be found and fitted to the data. These dependence structures are then used to draw conclusions about tail hedging opportunities, defined as highly tail-correlated long versus short positions, as well as the diversification benefit of a lower estimated Value-at-Risk for cross-asset portfolios than for univariate portfolios.
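
    A minimal sketch of the kind of scalar tail dependence measure referred to above: an empirical upper tail dependence coefficient estimated from ranks at a high threshold. The data, threshold and variable names are hypothetical.

        import numpy as np

        def upper_tail_dependence(x, y, u=0.95):
            """Empirical estimate of P(V > u | U > u), with U, V the rank-transformed
            margins of x and y; the upper tail dependence coefficient is its limit as u -> 1."""
            n = len(x)
            ux = (np.argsort(np.argsort(x)) + 1) / (n + 1)   # pseudo-observations
            uy = (np.argsort(np.argsort(y)) + 1) / (n + 1)
            joint = np.mean((ux > u) & (uy > u))
            marginal = np.mean(ux > u)
            return joint / marginal if marginal > 0 else np.nan

        rng = np.random.default_rng(1)
        # Hypothetical equity and commodity log returns sharing a common factor.
        z = rng.standard_t(df=4, size=5000)
        equity = 0.8 * z + 0.6 * rng.standard_t(df=4, size=5000)
        commodity = 0.5 * z + 0.9 * rng.standard_t(df=4, size=5000)

        print(upper_tail_dependence(equity, commodity, u=0.95))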

  • 354.
    Vallin, Simon
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Small Cohort Population Forecasting via Bayesian Learning (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    A set of distributional assumptions regarding the demographic processes of birth, death, emigration and immigration has been assembled to form a probabilistic model framework of population dynamics. This framework was summarized as a Bayesian network, and Bayesian inference techniques are exploited to infer the posterior distributions of the model parameters from observed data. The birth, death and emigration processes are modelled using a hierarchical beta-binomial model, for which inference of the posterior parameter distribution is analytically tractable. The immigration process is modelled with a Poisson-type regression model, where the posterior distribution of the parameters has to be estimated numerically. This thesis suggests an implementation of the Metropolis-Hastings algorithm for this task. Classification of incoming individuals into subpopulations of age and gender is subsequently made using a Dirichlet-multinomial hierarchical model, for which parameter inference is analytically tractable. This model framework is used to generate forecasts of demographic data, which can be validated using the observed outcomes. A key feature of the Bayesian model framework used is that it estimates the full posterior distributions of demographic data, which can take into account the full amount of uncertainty when forecasting population growth.
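
    A hedged sketch of the analytically tractable part of such a framework: the conjugate beta-binomial update for, say, the death probability in one cohort cell, followed by posterior predictive simulation. Prior hyperparameters and counts are hypothetical.

        import numpy as np
        from scipy import stats

        # Hypothetical prior Beta(a0, b0) for the death probability in one cohort cell,
        # and observed counts for one year.
        a0, b0 = 1.0, 99.0                 # weak prior centred near 1 %
        deaths, population = 7, 850

        # Conjugacy: posterior is Beta(a0 + deaths, b0 + population - deaths).
        posterior = stats.beta(a0 + deaths, b0 + population - deaths)
        print("posterior mean:", posterior.mean())
        print("95% credible interval:", posterior.ppf([0.025, 0.975]))

        # Posterior predictive draws of next year's deaths for a cohort of the same
        # size, obtained by compounding Beta posterior draws with a Binomial.
        rng = np.random.default_rng(0)
        p = posterior.rvs(size=10_000, random_state=0)
        predictive = rng.binomial(population, p)
        print("predictive 5th/95th percentiles:", np.percentile(predictive, [5, 95]))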

  • 355.
    Vignon, Marc
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Implementing Sensitivity Calculations for Long Interest Rate Futures (2011). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
  • 356.
    Viktorsson, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    The GARCH-copula model for gauging time conditional dependence in the risk management of electricity derivatives (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In the risk management of electricity derivatives, time to delivery can be divided into a time grid, with the assumption that within each cell of the grid volatility is more or less constant. This setup, however, does not take into account dependence between the different cells in the time grid.

    This thesis tries to develop a way to gauge the dependence between electricity derivatives at the different places in the time grid and different delivery periods. More specifically, the aim is to estimate the size of the ratio of the quantile of the sum of price changes against the sum of the marginal quantiles of the price changes.

    The approach used is a combination of Generalised Autoregressive Conditional Heteroscedasticity (GARCH) processes and copulas. The GARCH process is used to filter out heteroscedasticity in the price data. Copulas are fitted to the filtered data using pseudo maximum likelihood and the fitted copulas are evaluated using a goodness of fit test.

    GARCH processes alone are found to be insufficient to capture the dynamics of the price data. It is found that combining GARCH with Autoregressive Moving Average (ARMA) processes provides a better fit to the data. The resulting dependence is then found to be best captured by elliptical copulas. The estimated ratio is found to be quite small in the cases studied. The use of ARMA-GARCH filtering gives in general a better fit for copulas when applied to financial data. A time dependency in the dependence can also be observed.
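
    A hedged sketch of the filtering-then-copula step described above, assuming the third-party arch package for an AR-GARCH fit and a Gaussian copula fitted to the standardized residuals by the pseudo maximum likelihood (rank) approach. Series names and data are hypothetical, and the thesis' richer ARMA-GARCH specifications and grid of delivery periods are not reproduced.

        import numpy as np
        from arch import arch_model
        from scipy import stats

        rng = np.random.default_rng(0)
        # Hypothetical daily price changes for two cells of the time grid.
        common = rng.standard_t(df=5, size=1500)
        cell_a = 0.7 * common + 0.7 * rng.standard_t(df=5, size=1500)
        cell_b = 0.5 * common + 0.9 * rng.standard_t(df=5, size=1500)

        def standardized_residuals(x):
            # AR(1) mean with GARCH(1,1) variance; only the simplest specification.
            res = arch_model(x, mean="AR", lags=1, vol="GARCH", p=1, q=1).fit(disp="off")
            z = res.resid / res.conditional_volatility
            return z[~np.isnan(z)]

        za, zb = standardized_residuals(cell_a), standardized_residuals(cell_b)
        n = min(len(za), len(zb))
        za, zb = za[-n:], zb[-n:]

        # Pseudo maximum likelihood for a Gaussian copula: map ranks to normal
        # scores and estimate their correlation.
        ua = stats.rankdata(za) / (n + 1)
        ub = stats.rankdata(zb) / (n + 1)
        rho = np.corrcoef(stats.norm.ppf(ua), stats.norm.ppf(ub))[0, 1]
        print(f"Gaussian copula correlation of filtered residuals: {rho:.3f}")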

  • 357.
    Villaume, Erik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Predicting customer level risk patterns in non-life insurance (2012). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Several models for predicting future customer profitability early into customer life-cycles in the property and casualty business are constructed and studied. The objective is to model risk at a customer level with input data available early into a private consumer’s lifespan. Two retained models, one using a Generalized Linear Model and another using a multilayer perceptron (a special form of Artificial Neural Network), are evaluated using actual data. Numerical results show that differentiation on estimated future risk is most effective for customers with the highest claim frequencies.

     

  • 358.
    von Feilitzen, Helena
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Modeling non-maturing liabilities (2011). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Non-maturing liabilities, such as savings accounts, lack both predetermined maturity and reset dates due to the fact that the depositor is free to withdraw funds at any time and that the depository institution is free to change the rate. These attributes complicate the risk management of such products and no standardized solution exists. The problem is important, however, since non-maturing liabilities typically make up a considerable part of the funding of a bank. In this report different modeling approaches to the risk management are described and a method for managing the interest rate risk is implemented. It is a replicating portfolio approach used to approximate the non-maturing liabilities with a portfolio of fixed income instruments. The search for a replicating portfolio is formulated as an optimization problem based on regression between the deposit rate and market rates separated by a fixed margin. In the report two different optimization criteria are compared for the replicating portfolio: minimizing the standard deviation of the margin versus maximizing the risk-adjusted margin represented by the Sharpe ratio, of which the latter is found to yield superior results. The choice of historical sample interval over which the portfolio is optimized seems to have a rather big impact on the outcome, but recalculating the portfolio weights at regular intervals is found to stabilize the results somewhat. All in all, despite the fact that this type of method cannot fully capture the most advanced dynamics of the non-maturing liabilities, a replicating portfolio still appears to be a feasible approach for the interest rate risk management.
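
    A hedged sketch of the second optimization criterion described above: choosing replicating-portfolio weights over a small set of market rates so that the margin between the portfolio rate and the deposit rate has maximal Sharpe ratio. Instrument choice, rates and data are hypothetical.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        T = 120  # months of hypothetical history

        # Hypothetical monthly market rates (e.g. 1M, 12M, 5Y tenors) and a deposit
        # rate that follows the short rate sluggishly plus noise.
        market = np.column_stack([
            0.020 + 0.004 * np.cumsum(rng.normal(0, 0.1, T)) / np.sqrt(T),
            0.025 + 0.004 * np.cumsum(rng.normal(0, 0.1, T)) / np.sqrt(T),
            0.030 + 0.004 * np.cumsum(rng.normal(0, 0.1, T)) / np.sqrt(T),
        ])
        deposit = 0.6 * market[:, 0] + 0.002 * rng.normal(size=T)

        def neg_sharpe(w):
            margin = market @ w - deposit           # margin earned by the bank
            return -margin.mean() / margin.std(ddof=1)

        k = market.shape[1]
        res = minimize(neg_sharpe, x0=np.full(k, 1.0 / k),
                       bounds=[(0.0, 1.0)] * k,
                       constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
        print("replicating weights:", np.round(res.x, 3), "Sharpe:", -res.fun)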

  • 359.
    von Mentzer, Simon
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Risks and scenarios in the Swedish income-based pension system (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this master thesis the risks and scenarios in the Swedish income-based pension system are investigated. To investigate the risks, a vector autoregressive (VAR) model for three variables (AP-fund returns, average wage returns and inflation) is considered. Bootstrapping is used to simulate the VAR model. The simulated values are then put back into the equations that describe the real average wage return, the real return from the AP-funds, the average wage and the income index. Lastly the pension balance is calculated with the simulated data.

    Scenarios are created by changing one variable at the time in the VAR model. Then it is investigated how different scenarios affect the indexation and pension balance.

    The results show a cross-correlation structure between average wage return and inflation in the VAR model, but AP-fund returns can simply be modelled as an exogenous white noise random variable. In the scenario where the average wage return is altered, the largest changes in indexation and pension balance are observed.
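
    A minimal numpy-only sketch of the residual-bootstrap idea described above, using a VAR(1) fitted by least squares to hypothetical data for the three variables; the thesis' subsequent pension-balance equations are not reproduced.

        import numpy as np

        rng = np.random.default_rng(0)
        T, k = 300, 3                     # observations; AP-return, wage return, inflation
        A_true = np.array([[0.1, 0.0, 0.0],
                           [0.0, 0.5, 0.2],
                           [0.0, 0.3, 0.6]])
        y = np.zeros((T, k))
        for t in range(1, T):
            y[t] = A_true @ y[t - 1] + rng.normal(0, 0.01, k)

        # Fit VAR(1) by multivariate least squares: y_t = c + A y_{t-1} + e_t.
        X = np.hstack([np.ones((T - 1, 1)), y[:-1]])
        B, *_ = np.linalg.lstsq(X, y[1:], rcond=None)     # (k+1) x k
        c, A = B[0], B[1:].T
        resid = y[1:] - X @ B

        # Residual bootstrap: resample residuals with replacement and re-simulate.
        def bootstrap_path(steps=120):
            path = np.zeros((steps, k))
            path[0] = y[-1]
            idx = rng.integers(0, len(resid), size=steps)
            for t in range(1, steps):
                path[t] = c + A @ path[t - 1] + resid[idx[t]]
            return path

        sims = np.stack([bootstrap_path() for _ in range(1000)])
        print("simulated 10-year mean inflation, 5th/95th pct:",
              np.percentile(sims[:, :, 2].mean(axis=1), [5, 95]))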

  • 360.
    Väljamets, Sara
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Peptide Retention Time Prediction using Artificial Neural Networks (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis describes the development and evaluation of an artificial neural network trained to predict the chromatographic retention times of peptides based on their amino acid sequence. The purpose of accurately predicting retention times is to increase the number of protein identifications in shotgun proteomics and to improve targeted mass spectrometry experiments. The model presented in this thesis is a branched convolutional neural network (CNN) consisting of two convolutional layers followed by three fully connected layers, all with the leaky rectifier as the activation function. Each amino acid sequence is represented by a 20-by-20 matrix X, with each row corresponding to a certain amino acid and the columns representing the position of the amino acid in the peptide. This model achieves an RMSE corresponding to 3.8% of the total running time of the liquid chromatography and a 95% confidence interval proportional to 14% of the running time, when trained on 20,000 unique peptides from a yeast sample. The CNN predicts retention times slightly more accurately than the software ELUDE when trained on a larger dataset, yet ELUDE performs better on smaller datasets. The CNN does, however, have a considerably shorter training time.
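
    A hedged PyTorch sketch of a simplified, non-branched variant of the architecture described above: a one-hot 20-by-20 peptide encoding, two convolutional layers and three fully connected layers with leaky ReLU activations. The layer sizes and the encoding helper are hypothetical and are not claimed to match the thesis' exact configuration.

        import torch
        import torch.nn as nn

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"          # 20 residues -> 20 rows
        MAX_LEN = 20                                   # 20 position columns

        def encode(peptide: str) -> torch.Tensor:
            """One-hot 20x20 encoding: row = amino acid, column = position."""
            x = torch.zeros(len(AMINO_ACIDS), MAX_LEN)
            for pos, aa in enumerate(peptide[:MAX_LEN]):
                x[AMINO_ACIDS.index(aa), pos] = 1.0
            return x

        class RetentionTimeCNN(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.LeakyReLU(),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.LeakyReLU(),
                )
                self.regressor = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(32 * 20 * 20, 128), nn.LeakyReLU(),
                    nn.Linear(128, 32), nn.LeakyReLU(),
                    nn.Linear(32, 1),                  # predicted retention time
                )

            def forward(self, x):                      # x: (batch, 1, 20, 20)
                return self.regressor(self.features(x)).squeeze(-1)

        model = RetentionTimeCNN()
        batch = torch.stack([encode("ACDEFGHIK"), encode("LLNMPQRST")]).unsqueeze(1)
        print(model(batch).shape)                      # torch.Size([2])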

  • 361.
    Wahlström, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Operational Risk Modeling: Theory and Practice (2013). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis studies the Loss Distribution Approach for modeling of Operational Risk under Basel II from a practical and general perspective. Initial analysis supports the use of the Peaks over Threshold method for modeling the severity distributions of individual cells.

    A method for weighting loss data subject to data capture bias is implemented and discussed. The idea of the method is that each loss event is registered if and only if it exceeds an outcome of a stochastic threshold. The method is shown to be very useful, but poses some challenges demanding the employment of qualitative reasoning.

    The best-known estimators of both the extreme value threshold and the parameters of the Generalized Pareto Distribution are reviewed and studied from a theoretical perspective. We also introduce a GPD estimator which uses the Method-of-Moments estimate of the shape parameter while estimating the scale parameter by fitting a specific high quantile to empirical data. All estimators are then applied to available data sets and evaluated with respect to robustness and data fit.
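
    A minimal sketch of a Peaks-over-Threshold fit of the kind discussed above, using scipy's Generalized Pareto distribution for the excesses over a hypothetical threshold; the thesis' own quantile-matching estimator and the data-capture weighting are not reproduced.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # Hypothetical operational-loss amounts with a heavy right tail.
        losses = stats.lognorm(s=1.2, scale=50_000).rvs(size=5000, random_state=rng)

        threshold = np.quantile(losses, 0.95)          # hypothetical threshold choice
        excesses = losses[losses > threshold] - threshold

        # Fit a Generalized Pareto Distribution to the excesses (location fixed at 0),
        # i.e. the standard Peaks-over-Threshold severity model for the tail.
        shape, loc, scale = stats.genpareto.fit(excesses, floc=0.0)
        print(f"xi = {shape:.3f}, beta = {scale:.0f}")

        # Tail quantile of the severity distribution implied by the POT fit:
        # F(x) ~ 1 - (n_u/n) * (1 - G(x - u)) for x above the threshold u.
        p, n, n_u = 0.999, len(losses), len(excesses)
        q = threshold + stats.genpareto.ppf(1 - (1 - p) * n / n_u, shape, scale=scale)
        print(f"estimated 99.9% loss quantile: {q:,.0f}")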

    We further review an analytical approximation of the regulatory capital for each cell and apply this to our model. The validity of the approximation is evaluated by using Monte Carlo estimates as a benchmark. This also leads us to study how the rate of convergence of the Monte Carlo estimates depends on the "heavy-tailedness" of the loss distribution.

    A standard model for correlation between cells is discussed and explicit expressions limiting the actual correlation between the aggregated loss distributions in the model are presented. These bounds are then numerically estimated from data.

  • 362.
    Wallnerström, Carl Johan
    KTH, School of Electrical Engineering (EES), Electromagnetic Engineering.
    On Incentives affecting Risk and Asset Management of Power Distribution (2011). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    The introduction of performance based tariff regulations along with higher media and political pressure have increased the need for well-performed risk and asset management applied to electric power distribution systems (DS), an infrastructure considered a natural monopoly. Compared to other technical systems, DS have special characteristics which are important to consider. The Swedish regulation of DS tariffs between 1996 and 2012 is described together with complementary laws such as customer compensation for long outages. The regulator’s role is to provide incentives for cost efficient operation with acceptable reliability and reasonable tariff levels. Another difficult task for the regulator is to settle on the model complexity, i.e. the balance between considering many details and maintaining manageability. Two performed studies of the former regulatory model, included in this thesis, were part of the criticism that led to its fall. Furthermore, based on results from a project included here, initiated by the regulator to review a model for judging effectible costs, the regulator changed some initial plans concerning the upcoming regulation.

     

    A classification of the risk management divided into separate categories is proposed, partly based on a study investigating investment planning and risk management at a distribution system operator (DSO). A vulnerability analysis method using quantitative reliability analyses is introduced, aimed at indicating how available resources could be better utilized and at evaluating whether additional security should be deployed for certain forecasted events. To evaluate the method, an application study has been performed based on hourly weather measurements and detailed failure reports over eight years for two DS. Months, weekdays and hours have been compared and the vulnerability to several weather phenomena has been evaluated. Of the weather phenomena studied, heavy snowfall and strong winds significantly affect the reliability, while frost, rain and snow depth have low or no impact. The main conclusion is that there is a need to implement new, more advanced analysis methods. The thesis also provides a statistical validation method and introduces a new category of reliability indices, RT.

  • 363.
    Walås, Gustav
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Modeling deposit prices (2013). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This report investigates whether there are sufficient differences between a bank's depositors to motivate price discrimination. This is done by looking at time series of individual depositors to try to find predictors by a regression analysis. To be able to conclude on the value of more stable deposits for the bank and hence deduce a price, one also needs to look at regulatory aspects of deposits and different depositors. Once these qualities of a deposit have been assigned by both the bank and the regulator, they need to be transformed into a price. This is done by replication with market funding instruments.

  • 364.
    Wargentin, Robin
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Long-term and Short-term Forecasting Techniques for Regional Airport Planning (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The aim of this thesis is to forecast passenger demand in long-term and short-term perspectives at the Airport of Bologna, a regional airport in Italy with a high mix of low-cost traffic and conventional airline traffic. In the long-term perspective, time series models are applied to forecast a significant growth of passenger volumes at the airport in the period 2016-2026. In the short-term perspective, time-of-week passenger demand is estimated using two non-parametric techniques: local regression (LOESS) and a simple method of averaging observations. Using cross-validation to estimate the accuracy of the estimates, the simple averaging method and the more complex LOESS method are concluded to perform equally well. Peak-hour passenger volumes at the airport are observed in historical data and, by use of bootstrapping, these are shown to contain little variability and can be considered stable.
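
    A minimal sketch of the two short-term estimators compared above, applied to hypothetical time-of-week passenger counts: a simple average per time slot versus statsmodels' LOWESS smoother. The demand profile and noise level are invented for illustration.

        import numpy as np
        from statsmodels.nonparametric.smoothers_lowess import lowess

        rng = np.random.default_rng(0)
        hours = np.arange(168)                              # one week, hourly slots
        weeks = 30
        true_profile = 200 + 150 * np.sin(2 * np.pi * (hours % 24) / 24) ** 2
        observed = true_profile + rng.normal(0, 40, size=(weeks, 168))   # hypothetical counts

        # Estimator 1: simple average per time-of-week slot.
        simple_avg = observed.mean(axis=0)

        # Estimator 2: LOESS/LOWESS over the pooled (hour, count) observations.
        x = np.tile(hours, weeks)
        y = observed.ravel()
        smoothed = lowess(y, x, frac=0.05, return_sorted=True)

        rmse_avg = np.sqrt(np.mean((simple_avg - true_profile) ** 2))
        rmse_loess = np.sqrt(np.mean((np.interp(hours, smoothed[:, 0], smoothed[:, 1])
                                      - true_profile) ** 2))
        print(f"RMSE simple averaging: {rmse_avg:.1f}, LOESS: {rmse_loess:.1f}")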

  • 365.
    Wennman, Aron
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Ameur, Yacin
    Kang, Nam-Gyu
    Makarov, Nikolai
    Scaling limits of random normal matrix processes at singular boundary points. Manuscript (preprint) (Other academic).
  • 366.
    Wennström, Amadeus
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Volatility Forecasting Performance: Evaluation of GARCH type volatility models on Nordic equity indices (2014). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis examines the volatility forecasting performance of six commonly used forecasting models: the simple moving average, the exponentially weighted moving average, the ARCH model, the GARCH model, the EGARCH model and the GJR-GARCH model. The dataset used in this report consists of three different Nordic equity indices: OMXS30, OMXC20 and OMXH25. The objective of this paper is to compare the volatility models in terms of in-sample and out-of-sample fit. The results were very mixed. In terms of the in-sample fit, the result was clear and unequivocally implied that assuming a heavier-tailed error distribution than the normal distribution and modeling the conditional mean significantly improve the fit. Moreover, a main conclusion is that the more complex models do provide a better in-sample fit than the more parsimonious models. In terms of the out-of-sample forecasting performance, however, the result was inconclusive. There is not a single volatility model that is preferred based on all the loss functions. An important finding, however, is not only that the ranking differs when using different loss functions but how dramatically it can differ. This highlights the importance of choosing an adequate loss function for the intended purpose of the forecast. Moreover, it is not necessarily the model with the best in-sample fit that produces the best out-of-sample forecast. Since the out-of-sample forecast performance is so vital to the objective of the analysis, one can question whether the in-sample fit should be used at all to support the choice of a specific volatility model.

  • 367.
    Westerborn, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    On particle-based online smoothing and parameter inference in general hidden Markov models (2015). Licentiate thesis, comprehensive summary (Other academic).
    Abstract [en]

    This thesis consists of two papers studying online inference in general hidden Markov models using sequential Monte Carlo methods.

    The first paper presents a novel algorithm, the particle-based, rapid incremental smoother (PaRIS), aimed at efficiently performing online approximation of smoothed expectations of additive state functionals in general hidden Markov models. The algorithm has, under weak assumptions, linear computational complexity and very limited memory requirements. The algorithm is also furnished with a number of convergence results, including a central limit theorem.

    The second paper focuses on the problem of online estimation of parameters in a general hidden Markov model. The method is based on a forward implementation of the classical expectation-maximization algorithm and uses the PaRIS algorithm to obtain an efficient implementation.

  • 368.
    Westerborn, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    On particle-based online smoothing and parameter inference in general state-space models (2017). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    This thesis consists of four papers, Papers A-D, on particle-based online smoothing and parameter inference in general state-space hidden Markov models.

    In Paper A a novel algorithm, the particle-based, rapid incremental smoother (PaRIS), aimed at efficiently performing online approximation of smoothed expectations of additive state functionals in general hidden Markov models, is presented. The algorithm has, under weak assumptions, linear computational complexity and very limited memory requirements. The algorithm is also furnished with a number of convergence results, including a central limit theorem.

    In Paper B the problem of marginal smoothing in general hidden Markov models is tackled. A novel, PaRIS-based algorithm is presented where the marginal smoothing distributions are approximated using a lagged estimator where the lag is set adaptively.

    In Paper C an estimator of the tangent filter is constructed, yielding in turn an estimator of the score function. The resulting algorithm is furnished with theoretical results, including a central limit theorem with a uniformly bounded variance. The resulting estimator is applied to online parameter estimation via recursive maximum likelihood.

    Paper D focuses on the problem of online estimation of parameters in general hidden Markov models. The method is based on a forward implementation of the classical expectation-maximization algorithm and uses the PaRIS algorithm to obtain an efficient implementation.

  • 369.
    Westerlund, Per
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Electromagnetic Engineering.
    Dimoulkas, Ilias
    KTH, School of Electrical Engineering and Computer Science (EECS), Electric Power and Energy Systems.
    Prediction of Current by Artificial Neural Networks in a Substation in order to Schedule Thermography (2018). In: ITISE, Granada, 2018. Conference paper (Other academic).
    Abstract [en]

    Thermography or infra-red imaging is a method that measures the temperature of a surface by receiving the infra-red radiation that the surface emits. Thermography is used in, for example, condition measuring of electrical equipment. It shows which parts are heated more than normally due to a higher resistance. Those parts will need maintenance.

    In order to get accurate values, thermography needs a high current in the equipment. Thus it is necessary to predict when the current will be high throughout the year. Here a neural network with two layers is used for the prediction. The data set consists of the hourly currents at a point in a Swedish substation from a period of ten years.


    The purpose is to plan when to go to a substation to do thermography. As the prediction is made several months ahead, the outdoor temperature cannot be used. Hence only the time, expressed as week, day and hour at different resolutions of the discretization, is used as an explanatory variable. With increasing resolution of the discretization, the prediction error decreases. Adding inputs based on interactions does not improve the prediction. The results are, however, not satisfactory, as the prediction error is large in comparison with the predicted values of the current and the prediction is biased. One reason is that the prediction must be made several months ahead, so the actual temperature cannot be used.

  • 370.
    Westerlund, Per
    et al.
    KTH, School of Electrical Engineering (EES), Electromagnetic Engineering.
    Hilber, Patrik
    KTH, School of Electrical Engineering (EES), Electromagnetic Engineering.
    Lindquist, Tommie
    Combining risk and uncertainty in technical systems. Manuscript (preprint) (Other academic).
    Abstract [en]

    The risk matrix is a tool for making decisions about technical systems, such as prioritising maintenance. It is used in methods such as FMEA (failure mode and effect analysis) and is based on the definition of risk as the product of the probability of a certain failure and its consequence.

    The problem with the standard formulation is that the probability is not always completely known. The uncertainty of the probability can be estimated by its variance.

    Instead of a specific value for the probability of failure, a beta distribution is used for the probability. The main point is to find a trade-off between mean and variance. In this case we want to avoid probabilities larger than the mean. We use a loss function taking into account only the right tail starting at a factor times the mean. The exponent of the deviation is 0. We have calculated how much a decreasing variance should compensate for an increasing mean. We get an approximate relation between the quotient of variances and the quotient of means.

    The conclusion is that this model should be investigated further.

  • 371.
    Westerlund, Per
    et al.
    KTH, School of Electrical Engineering (EES), Electromagnetic Engineering.
    Hilber, Patrik
    KTH, School of Electrical Engineering (EES), Electromagnetic Engineering.
    Lindquist, Tommie
    Prediction of time for preventive maintenance (2016). In: The Nordic Conference in Mathematical Statistics, Köpenhamn, 2016. Conference paper (Refereed).
    Abstract [en]

    In maintenance planning a crucial question is when some asset should be maintained. As preventive maintenance often needs outages it is necessary to predict the condition of an asset until the next possible outage. The degradation of the condition can be modelled by a linear function. One method of estimating the condition is linear regression, which requires a number of measured values for different times and gives an interval within which the asset will reach a condition when it should be maintained [1]. A more sophisticated calculation of the uncertainty of the regression is presented based on [2, section 9.1].

     

    Another method is martingale theory [3, chapter 24], which serves to deduce a formula for the time such that there is a probability of less than a given $\alpha$ that the condition has reached 0 before that time. The formula contains an integral, which is evaluated numerically for different values of the measurement variance and the variance of the Brownian motion, which must be estimated by knowing the maximum and the minimum degradation per time interval. Then just one measured value is needed together with an estimate of the variance.

     

    The two methods are compared, especially with regard to the size of the confidence interval of the time when the condition reaches a predefined level. The application for the methods is the development of so-called health indices for the assets in an engineering system, which should tell which asset needs maintenance first. We present some requirements for a health index and check how the different predictions fulfil these requirements.

    References

    [1] S.E. Rudd, V.M. Catterson, S.D.J. McArthur, and C. Johnstone. Circuit breaker prognostics using SF6 data. In IEEE Power and Energy Society General Meeting, Detroit, MI, United States, 2011.

    [2] Bernard W. Lindgren. Statistical theory. Macmillan, New York, 2nd edition, 1968.

    [3] Jean Jacod and Philip Protter. Probability essentials. Springer-Verlag, Berlin, 2000.

  • 372.
    Westerlund, Per
    et al.
    KTH, School of Electrical Engineering (EES), Electromagnetic Engineering.
    Rydén, Jesper
    Sveriges lantbruksuniversitet.
    Hilber, Patrik
    KTH, School of Electrical Engineering (EES), Electromagnetic Engineering.
    Lindquist, Tommie
    Prediction of high current for thermography in maintenance of electrical networks (2018). In: Nordstat, Tartu, 2018. Conference paper (Other academic).
  • 373.
    Wiklund, Erik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Asian Option Pricing and Volatility (2012). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    An Asian option is a path-dependent exotic option, which means that either the settlement price or the strike of the option is formed by some aggregation of underlying asset prices during the option lifetime. This thesis will focus on European-style Arithmetic Asian options where the settlement price at maturity is formed by the arithmetic average price of the last seven days of the underlying asset.

    For this type of option there exists no closed-form analytical formula for calculating the theoretical option value. Closed-form approximation formulas do exist, however. One such formula, used in this thesis, approximates the value of an Arithmetic Asian option by conditioning the valuation on the geometric mean price. To evaluate the accuracy of this approximation, and to see if it is possible to use the well-known Black-Scholes formula for valuing Asian options, this thesis examines the bias between Monte Carlo simulation pricing and these closed-form approximate pricings. The bias examination is done for several different volatility schemes.

    In general the Asian approximation formula works very well for valuing Asian options. For volatility scenarios where there is a drastic volatility shift and the period with higher volatility is before the average period of the option, the Asian approximation formula will underestimate the option value. These underestimates are very significant for OTM options, decrease for ATM options and are small, although significant, for ITM options.

    The Black-Scholes formula will in general overestimate the Asian option value. This is expected since the Black-Scholes formula applies to standard European options, which only implicitly consider the underlying asset price at maturity of the option as the settlement price. This price is on average higher than the Asian option settlement price when the underlying asset price has a positive drift. However, for some volatility scenarios where there is a drastic volatility shift and the period with higher volatility is before the average period of the option, even the Black-Scholes formula will underestimate the option value. As for the Asian approximation formula, these over- and underestimates are very large for OTM options and decrease for ATM and ITM options.
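
    A hedged Monte Carlo sketch of the comparison described above: an arithmetic-average Asian call (averaging the last seven daily fixings) priced by simulation under Black-Scholes dynamics, next to the plain Black-Scholes price used as a crude benchmark. The geometric-conditioning approximation itself is not reproduced and all parameter values are hypothetical.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        S0, K, r, sigma, T_mat = 100.0, 100.0, 0.02, 0.25, 0.5   # hypothetical inputs
        n_days, avg_days, n_paths = 126, 7, 50_000
        dt = T_mat / n_days

        # Simulate GBM paths and average the last seven daily fixings.
        z = rng.normal(size=(n_paths, n_days))
        log_paths = np.cumsum((r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z, axis=1)
        S = S0 * np.exp(log_paths)
        settle = S[:, -avg_days:].mean(axis=1)
        asian_mc = np.exp(-r * T_mat) * np.maximum(settle - K, 0.0).mean()

        # Plain Black-Scholes call on the terminal price, for comparison.
        d1 = (np.log(S0 / K) + (r + 0.5 * sigma ** 2) * T_mat) / (sigma * np.sqrt(T_mat))
        d2 = d1 - sigma * np.sqrt(T_mat)
        bs_call = S0 * norm.cdf(d1) - K * np.exp(-r * T_mat) * norm.cdf(d2)

        print(f"Asian call (MC): {asian_mc:.3f}   European call (BS): {bs_call:.3f}")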

     

  • 374. Wiktorsson, Magnus
    et al.
    Rydén, Tobias
    Lund University.
    Nilsson, Elna
    Bengtsson, Göran
    Modelling the movement of a soil insect (2004). In: Journal of Theoretical Biology, ISSN 0022-5193, E-ISSN 1095-8541, Vol. 231, no. 4, p. 497-513. Article in journal (Refereed).
    Abstract [en]

    We use a linear autoregressive model to describe the movement of a soil-living insect, Protaphorura armata (Collembola). Models of this kind can be viewed as extensions of a random walk, but unlike a correlated random walk, in which the speed and turning angles are independent, our model identifies and expresses the correlations between the turning angles and a variable speed. Our model uses data in x- and y-coordinates rather than in polar coordinates, which is useful for situations in which the resolution of the observations is limited. The movement of the insect was characterized by (i) looping behaviour due to autocorrelation and cross correlation in the velocity process and (ii) occurrence of periods of inactivity, which we describe with a Poisson random effects model. We also introduce obstacles to the environment to add structural heterogeneity to the movement process. We compare aspects such as loop shape, inter-loop time, holding angles at obstacles, net squared displacement, and number and duration of inactive periods between observed and predicted movement. The comparison demonstrates that our approach is relevant as a starting point for predicting behaviourally complex movement, e.g. systematic searching, in a heterogeneous landscape.

  • 375.
    Wirenhammar, Andreas
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Modeling Downturn LGD for a Retail Portfolio (2011). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
  • 376.
    Wu, Junfeng
    et al.
    KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Meng, Z.
    Yang, Tao
    Shi, G.
    Johansson, Karl Henrik
    KTH, School of Electrical Engineering (EES), Automatic Control.
    Critical sampling rate for sampled-data consensus over random networks (2016). In: Proceedings of the IEEE Conference on Decision and Control, IEEE conference proceedings, 2016, p. 412-417. Conference paper (Refereed).
    Abstract [en]

    In this paper, we consider the consensus problem for a network of nodes with random interactions and sampled-data control actions. Each node independently samples its neighbors in a random manner over a directed graph underlying the information exchange of different nodes. The relationship between the sampling rate and the achievement of consensus is studied. We first establish a sufficient condition, in terms of the inter-sampling interval, such that consensus in expectation, in mean square, and in the almost sure sense is simultaneously achieved under a mild connectivity assumption on the underlying graph. Necessary and sufficient conditions for mean-square consensus are derived in terms of the spectral radius of the corresponding state transition matrix. These conditions are then interpreted as the existence of a critical value of the inter-sampling interval, below which global mean-square consensus is achieved and above which the system diverges in the mean-square sense for some initial states. Finally, we establish an upper bound on the inter-sampling interval, below which almost sure consensus is reached, and a lower bound, above which almost sure divergence is reached. A numerical example is given to validate the theoretical results.

  • 377.
    Yousefi, Sepehr
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Credit Risk Management in Absence of Financial and Market Data (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Credit risk management is a significant fragment in financial institutions' security precautions against the downside of their investments. A major quandary within the subject of credit risk is the modeling of simultaneous defaults. Globalization causes economies to be affected by innumerable external factors and companies to become interdependent, which in turn increases the complexity of establishing reliable mathematical models. The precarious situation is exacerbated by the fact that managers often suffer from a lack of data. The default correlations are most often calibrated using financial and/or market information. However, there exist circumstances where these types of data are inaccessible or unreliable. The problem of scarce data also induces difficulties in the estimation of default probabilities. The frequency of insolvencies and changes in credit ratings are usually updated on an annual basis and historical information covers 20-25 years at best. From a mathematical perspective, this is considered a small sample, and standard statistical models are inferior in such situations.

    The first part of this thesis specifies the so-called entropy model which estimates the impact of macroeconomic fluctuations on the probability of defaults, and aims to outperform standard statistical models for small samples. The second part specifies the CIMDO, a framework for modeling correlated defaults without financial and market data. The last part submits a risk analysis framework for calculating the uncertainty in the simulated losses.

    It is shown that the entropy model reduces the variance of the regression coefficients but increases their bias compared to OLS and Maximum Likelihood. Furthermore, there is a significant difference between the Student's t CIMDO and the t-Copula. The former appears to reduce the model uncertainty, although not to such an extent that definite conclusions could be drawn.

  • 378.
    Zetoun, Mirella
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Pricing With Uncertainty: The impact of uncertainty in the valuation models of Dupire and Black & Scholes (2013). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The aim of this master's thesis is to study the impact of uncertainty in the local and implied volatility surfaces when pricing certain structured products such as capital protected notes and autocalls. Due to their long maturities, limited availability of data and liquidity issues, the uncertainty may have a crucial impact on the choice of valuation model. The degree of sensitivity and reliability of two different valuation models is studied. The valuation models chosen for this thesis are the local volatility model of Dupire and the implied volatility model of Black & Scholes. The two models are stress tested with varying volatilities within an uncertainty interval chosen to be the volatilities obtained from Bid and Ask market prices. The volatility surface of the Mid market prices is set as the relative reference and then successively scaled up and down to measure the uncertainty. The results indicate that the uncertainty in the chosen interval for the Dupire model is of higher order than in the Black & Scholes model, i.e. the local volatility model is more sensitive to volatility changes. Also, the price derived in the Black & Scholes model is closer to the market price of the issued CPN and the Dupire price is closer to the issued Autocall. This might be an indication of uncertainty in the calibration method, the size of the chosen uncertainty interval or the constant extrapolation assumption. A further observation is that the prices derived from the Black & Scholes model are overall higher than the prices from the Dupire model. Another observation of interest is that the uncertainty between the models is significantly greater than within each model itself.
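
    A minimal sketch of the kind of implied-volatility stress described above: a Black-Scholes call revalued on a Mid volatility scaled up and down over a hypothetical Bid/Ask-style uncertainty band. The Dupire local-volatility leg of the comparison and the actual CPN/Autocall payoffs are not reproduced.

        import numpy as np
        from scipy.stats import norm

        def bs_call(S0, K, r, sigma, T):
            d1 = (np.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
            d2 = d1 - sigma * np.sqrt(T)
            return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

        S0, K, r, T = 100.0, 110.0, 0.01, 3.0        # hypothetical long-dated option leg
        sigma_mid = 0.22                              # Mid implied volatility
        band = 0.05                                   # hypothetical Bid/Ask relative width

        low = bs_call(S0, K, r, sigma_mid * (1 - band), T)
        mid = bs_call(S0, K, r, sigma_mid, T)
        high = bs_call(S0, K, r, sigma_mid * (1 + band), T)
        print(f"price {mid:.2f}, uncertainty interval [{low:.2f}, {high:.2f}] "
              f"({(high - low) / mid:.1%} of Mid)")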

  • 379. Zhang, C.
    et al.
    Kjellström, Hedvig
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Mandt, S.
    Determinantal point processes for mini-batch diversification (2017). In: Uncertainty in Artificial Intelligence - Proceedings of the 33rd Conference, UAI 2017, AUAI Press, Corvallis, 2017. Conference paper (Refereed).
    Abstract [en]

    We study a mini-batch diversification scheme for stochastic gradient descent (SGD). While classical SGD relies on uniformly sampling data points to form a mini-batch, we propose a non-uniform sampling scheme based on the Determinantal Point Process (DPP). The DPP relies on a similarity measure between data points and gives low probabilities to mini-batches which contain redundant data, and higher probabilities to mini-batches with more diverse data. This simultaneously balances the data and leads to stochastic gradients with lower variance. We term this approach Diversified Mini-Batch SGD (DM-SGD). We show that regular SGD and a biased version of stratified sampling emerge as special cases. Furthermore, DM-SGD generalizes stratified sampling to cases where no discrete features exist to bin the data into groups. We show experimentally that our method results in more interpretable and diverse features in unsupervised setups, and in better classification accuracies in supervised setups.

  • 380.
    Zhong, Liang
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Betting on Volatility: A Delta Hedging Approach (2011). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
  • 381.
    Zickert, Gustav
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Furstenberg's conjecture and measure rigidity for some classes of non-abelian affine actions on tori (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In 1967 Furstenberg proved that the set {2^n 3^m α (mod 1) | n, m ∈ ℕ} is dense in the circle for any irrational α. He also made the following famous measure rigidity conjecture: the only ergodic measures on the circle invariant under both x → 2x and x → 3x are the Lebesgue measure and measures supported on a finite set. In this thesis we discuss both Furstenberg’s theorem and his conjecture, as well as the partial solution of the latter given by Rudolph. Following Matheus’ presentation of Avila’s ideas for a proof of a weak version of Rudolph’s theorem, we prove a result on extending measure preservation from a semigroup action to a larger semigroup action. Using this result we obtain restrictions on the set of invariant measures for certain classes of non-abelian affine actions on tori. We also study some general properties of abelian and non-abelian affine actions, and we show that analogues of Furstenberg’s theorem hold for affine actions on the circle.

  • 382.
    Zimmermann, Maelle
    et al.
    Univ Montreal, Dept Comp Sci & Operat Res, Montreal, PQ, Canada; CIRRELT Interuniv Res Ctr Entreprise Networks Log, Montreal, PQ, Canada.
    Västberg, Oskar Blom
    KTH, School of Architecture and the Built Environment (ABE), Centres, Centre for Transport Studies, CTS.
    Frejinger, Emma
    Univ Montreal, Dept Comp Sci & Operat Res, Montreal, PQ, Canada; CIRRELT Interuniv Res Ctr Entreprise Networks Log, Montreal, PQ, Canada.
    Karlström, Anders
    KTH, School of Architecture and the Built Environment (ABE), Centres, Centre for Transport Studies, CTS.
    Capturing correlation with a mixed recursive logit model for activity-travel scheduling (2018). In: Transportation Research Part C: Emerging Technologies, ISSN 0968-090X, E-ISSN 1879-2359, Vol. 93, p. 273-291. Article in journal (Refereed).
    Abstract [en]

    Representing activity-travel scheduling decisions as path choices in a time-space network is an emerging approach in the literature. In this paper, we model choices of activity, location, timing and transport mode using such an approach and seek to estimate utility parameters of recursive logit models. Relaxing the independence from irrelevant alternatives (IIA) property of the logit model in this setting raises a number of challenges. First, overlap in the network may not fully characterize perceptual correlation between paths, due to their interpretation as activity schedules. Second, the large number of states that are needed to represent all possible locations, times and activity combinations imposes major computational challenges when estimating the model. We combine recent methodological developments to build on previous work by Blom Vastberg et al. (2016), allowing complex and realistic correlation patterns to be modelled in this type of network. We use sampled choice sets in order to estimate a mixed recursive logit model in reasonable time for large-scale, dense time-space networks. Importantly, the model retains the advantage of fast predictions without sampling choice sets. In addition to estimation results, we present an extensive empirical analysis which highlights the different substitution patterns when the IIA property is relaxed, and a cross-validation study which confirms improved out-of-sample fit.

  • 383.
    Önskog, Thomas
    Uppsala University.
    Existence of pathwise unique Langevin processes on polytopes with perfect reflection at the boundary (2013). In: Statistics and Probability Letters, ISSN 0167-7152, E-ISSN 1879-2103, Vol. 83, p. 2211-2219. Article in journal (Refereed).
  • 384.
    Östlund, Simon
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Imputation of Missing Data with Application to Commodity Futures (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In recent years additional requirements have been imposed on financial institutions, including Central Counterparty clearing houses (CCPs), as an attempt to assess quantitative measures of their exposure to different types of risk. One of these requirements results in a need to perform stress tests to check the resilience in case of a stressed market or crisis. However, financial markets develop over time, and this leads to a situation where some instruments traded today are not present at the chosen date because they were introduced after the considered historical event. Based on current routines, the main goal of this thesis is to provide a more sophisticated method to impute (fill in) historical missing data as preparatory work in the context of stress testing. The models considered in this paper include two methods currently regarded as state-of-the-art techniques, based on maximum likelihood estimation (MLE) and multiple imputation (MI), together with a third alternative approach involving copulas. The different methods are applied to historical return data of commodity futures contracts from the Nordic energy market. By using conventional error metrics and out-of-sample log-likelihood, the conclusion is that it is very hard (in general) to distinguish the performance of each method, or to draw any conclusion about how good the models are in comparison to each other. Even if the Student’s t-distribution seems (in general) to be a more adequate assumption for the data than the normal distribution, all the models show quite poor performance. However, by analysing the conditional distributions more thoroughly, and evaluating how well each model performs by extracting certain quantile values, the performance of each method is increased significantly. By comparing the different models (when imputing more extreme quantile values) it can be concluded that all methods produce satisfying results, even if the g-copula and t-copula models seem to be more robust than the respective linear models.
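
    A hedged sketch of one simple imputation approach in the same spirit as those compared above: a Gaussian conditional-expectation (regression) imputation of a missing return series given the observed contracts, fitted to hypothetical multivariate data. The thesis' MLE, MI and copula models are richer than this.

        import numpy as np

        rng = np.random.default_rng(0)
        # Hypothetical daily returns for three commodity futures contracts.
        cov_true = np.array([[1.0, 0.7, 0.5],
                             [0.7, 1.0, 0.6],
                             [0.5, 0.6, 1.0]]) * 1e-4
        returns = rng.multivariate_normal(np.zeros(3), cov_true, size=1000)

        # Pretend the third contract did not exist for the first 200 days.
        observed = returns.copy()
        observed[:200, 2] = np.nan

        # Fit mean and covariance on the complete rows, then impute the missing
        # column by its conditional expectation given the observed columns.
        complete = observed[~np.isnan(observed).any(axis=1)]
        mu, cov = complete.mean(axis=0), np.cov(complete, rowvar=False)

        obs_idx, mis_idx = [0, 1], [2]
        beta = cov[np.ix_(mis_idx, obs_idx)] @ np.linalg.inv(cov[np.ix_(obs_idx, obs_idx)])
        cond_mean = mu[mis_idx] + (observed[:200, obs_idx] - mu[obs_idx]) @ beta.T

        imputed = observed.copy()
        imputed[:200, 2] = cond_mean.ravel()
        print("RMSE of imputed vs true returns:",
              np.sqrt(np.mean((imputed[:200, 2] - returns[:200, 2]) ** 2)))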
