201 - 250 of 370
  • 201.
    Jöhnemark, Alexander
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Modeling Operational Risk (2012). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The Basel II accord requires banks to put aside a capital buffer against unexpected operational losses, resulting from inadequate or failed internal processes, people and systems or from external events. Under the sophisticated Advanced Measurement Approach, banks are given the opportunity to develop their own model to estimate operational risk. This report focuses on a loss distribution approach based on a set of real data.

    First, a comprehensive data analysis was made, which suggested that the observations belonged to a heavy-tailed distribution. An evaluation of commonly used distributions was performed. The evaluation resulted in the choice of a compound Poisson distribution to model frequency and a piecewise defined distribution with an empirical body and a generalized Pareto tail to model severity. The frequency distribution and the severity distribution define the loss distribution, from which Monte Carlo simulations were made in order to estimate the 99.9% quantile, also known as the regulatory capital.

    The main conclusions were that including all operational risks in a single model is hard, but possible, and that extreme observations have a large impact on the outcome.
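
    A minimal sketch in Python of the loss distribution approach described above, assuming illustrative parameter values rather than the thesis's fitted ones: annual claim counts are drawn from a Poisson distribution, severities from an empirical body spliced with a generalized Pareto tail, and the 99.9% quantile of the simulated aggregate loss stands in for the regulatory capital.

        import numpy as np
        from scipy.stats import genpareto

        rng = np.random.default_rng(0)

        # Hypothetical inputs: historical losses, tail threshold, GPD parameters.
        body_losses = rng.lognormal(10, 1.5, size=500)   # stand-in for real loss data
        u = np.quantile(body_losses, 0.9)                # tail threshold
        xi, beta = 0.5, u * 0.5                          # assumed GPD shape and scale
        lam = 120                                        # assumed annual claim frequency
        p_tail = np.mean(body_losses > u)

        def draw_severities(n):
            """Empirical body below u, generalized Pareto tail above u."""
            tail = rng.random(n) < p_tail
            sev = rng.choice(body_losses[body_losses <= u], size=n)
            sev[tail] = u + genpareto.rvs(xi, scale=beta, size=tail.sum(), random_state=rng)
            return sev

        years = 100_000
        annual_loss = np.array([draw_severities(rng.poisson(lam)).sum() for _ in range(years)])
        print("99.9% quantile (regulatory capital proxy):", np.quantile(annual_loss, 0.999))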

  • 202.
    Kallur, Oskar
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    On the use of Value-at-Risk based models for the Fixed Income market as a risk measure for Central Counterparty clearing (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this thesis the use of VaR based models is investigated for the purpose of setting margin requirements for Fixed Income portfolios. VaR based models have become one of the standard ways for Central Counterparties to determine the margin requirements for different types of portfolios. However, there are many different ways to implement a VaR based model in practice, especially for Fixed Income portfolios. The models presented in this thesis are based on Filtered Historical Simulation (FHS). Furthermore, a model that combines FHS with a Student’s t copula to model the correlation between instruments in a portfolio is presented. All models are backtested using historical data from 1998 to 2016. The FHS models seem to produce reasonably accurate VaR estimates. However, there are other market-related properties that must be fulfilled for a model to be used to set margin requirements. These properties are investigated and discussed.
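
    The filtered historical simulation idea can be illustrated compactly. In the sketch below (Python, simulated data), an EWMA volatility filter stands in for the GARCH-type filter typically used: returns are devolatilized, the standardized residuals are resampled, and they are rescaled by the current volatility estimate to obtain a one-day VaR.

        import numpy as np

        rng = np.random.default_rng(1)
        returns = rng.standard_t(df=5, size=2000) * 0.01   # stand-in for historical returns

        # EWMA variance filter (RiskMetrics-style lambda), simplifying the GARCH filtering.
        lam = 0.94
        var = np.empty_like(returns)
        var[0] = returns.var()
        for t in range(1, len(returns)):
            var[t] = lam * var[t - 1] + (1 - lam) * returns[t - 1] ** 2

        z = returns / np.sqrt(var)                 # standardized (filtered) residuals
        sigma_now = np.sqrt(lam * var[-1] + (1 - lam) * returns[-1] ** 2)

        sims = sigma_now * rng.choice(z, size=100_000, replace=True)
        var_99 = -np.quantile(sims, 0.01)          # one-day 99% VaR, as a positive number
        print("FHS one-day 99% VaR:", var_99)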

  • 203.
    Karlsson, Johan
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Enqvist, Per
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Gattami, A.
    Confidence assessment for spectral estimation based on estimated covariances (2016). In: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 4343-4347. Conference paper (Refereed).
    Abstract [en]

    In probability theory, time series analysis, and signal processing, many identification and estimation methods rely on covariance estimates as intermediate statistics. Errors in estimated covariances propagate and degrade the quality of the estimation result. In particular, in large network systems where each system node of the network gathers and passes on results, it is important to know the reliability of the information so that informed decisions can be made. In this work, we design confidence regions based on covariance estimates and study how these can be used for spectral estimation. In particular, we consider three different confidence regions based on sets of unitarily invariant matrices and bound the eigenvalue distribution based on three principles: uniform bounds; arithmetic and harmonic means; and the Marcenko-Pastur law eigenvalue distribution for random matrices. Using these methodologies we robustly bound the energy in a selected frequency band, and compare the resulting spectral bounds from the respective confidence regions.

  • 204.
    Katzler, Sigrid
    KTH, School of Architecture and the Built Environment (ABE), Real Estate and Construction Management, Building and Real Estate Economics.
    Methods for comparing diversification strategies on the Swedish real estate market (2016). In: International Journal of Strategic Property Management, ISSN 1648-715X, E-ISSN 1648-9179, Vol. 20, no 1, p. 17-30. Article in journal (Refereed).
    Abstract [en]

    This paper compares the effectiveness of different property portfolio diversification strategies using five methods: (1) correlation matrices, (2) efficient frontiers, (3) Sharpe ratios, using three different sub-methods, (4) coefficients in equations explaining total returns and (5) R-square values in equations explaining total returns. The evaluation methods are applied to both value-weighted and equally weighted indices based on Swedish real estate return data. All methods show that diversifying over property types, if anything, is a better strategy on the Swedish market than diversifying over regions. No test yields significant support for regional diversification. The support for the property type strategy is stronger when using equally weighted indices.

  • 205.
    Kiamehr, Ramin
    et al.
    Department of Geodesy and Geomatics, Zanjan University, Iran.
    Eshagh, Mehdi
    KTH, School of Architecture and the Built Environment (ABE), Urban Planning and Environment, Geodesy and Geoinformatics.
    Estimating variance components of ellipsoidal, orthometric and geoidal heights through the GPS/levelling network in Iran (2008). In: Journal of the Earth and Space Physics, ISSN 0378-1046, Vol. 34, no 3, p. 1-13. Article in journal (Refereed).
    Abstract [en]

    The Best Quadratic Unbiased Estimation (BQUE) of variance components in the Gauss-Helmert model is used to combine adjustment of GPS/levelling and geoid data to determine the individual variance components for each of the three height types. Different reasons for obtaining negative variance components are discussed, and a new modified version of the Best Quadratic Unbiased Non-negative Estimator (MBQUNE) was successfully developed and applied. This estimator could be useful for estimating the absolute accuracy level that can be achieved using the GPS/levelling method. A general MATLAB function is presented for numerical estimation of variance components using the different parametric models. The modified BQUNE and the developed software were successfully applied for estimating the variance components in the sample GPS/levelling network in Iran. In this research we used 75 outlier-free and well distributed GPS/levelling data points. Three corrective surface models, based on the 4, 5 and 7 parameter models, were used in the combined adjustment of the GPS/levelling and geoidal heights. Using the 7-parameter model, the standard deviation indexes of the geoidal, geodetic and orthometric heights in Iran were estimated to be about 27, 39 and 35 cm, respectively.

  • 206.
    Kihlström, Gustav
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A self-normalizing neural network approach to bond liquidity classification (2018). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Bond liquidity risk is complex and something that every bond investor needs to take into account. In this paper we investigate how well a self-normalizing neural network (SNN) can be used to classify bonds with respect to their liquidity, and compare the results with those of a simpler logistic regression. This is done by analyzing the two algorithms' predictive capabilities on the Swedish bond market. Performing this analysis, we find that the SNN and the logistic regression perform at broadly the same level. However, the substantial overfitting to the training data in the case of the SNN suggests that a better-performing model could be created by applying regularization techniques. The conclusion is therefore that more research is needed to determine whether neural networks are the preferred method for modelling liquidity.
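
    The "self-normalizing" property that motivates SNNs is easy to check numerically: with the SELU activation and LeCun-normal weights, activations keep approximately zero mean and unit variance through many layers. A small illustrative sketch (Python, random data; not the thesis's classifier):

        import numpy as np

        # SELU constants from Klambauer et al. (2017).
        ALPHA, SCALE = 1.6732632423543772, 1.0507009873554805

        def selu(x):
            return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1))

        rng = np.random.default_rng(2)
        x = rng.standard_normal((1024, 256))
        for layer in range(16):
            w = rng.standard_normal((256, 256)) / np.sqrt(256)  # LeCun-normal init
            x = selu(x @ w)
            if layer % 5 == 0:
                print(f"layer {layer:2d}: mean={x.mean():+.3f}, var={x.var():.3f}")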

  • 207.
    Klingmann Rönnqvist, Max
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Numerical Instability of Particle Learning: a case study (2016). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This master's thesis concerns a method called Particle Learning (PL), which can be used to analyze so-called hidden Markov models (HMM) or, in alternative terminology, state-space models (SSM), which are very popular for modeling time series. The advantage of PL over more established methods is its capacity to process new data points with a constant demand on computational resources, but it has been suspected to suffer from a problem known as particle path degeneracy. The purpose of this report is to investigate the degeneracy of PL by testing it on two examples. The results suggest that the method may not work very well for long time series.
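
    Path degeneracy is easy to exhibit in any resampling-based particle method: repeated resampling collapses the ancestry of the particle paths, so at early times the smoothed trajectory is supported on very few distinct values. A minimal sketch (Python, bootstrap particle filter on a linear Gaussian state-space model; not the PL algorithm itself), tracking how many distinct time-0 ancestors survive:

        import numpy as np

        rng = np.random.default_rng(3)
        T, N = 200, 500                      # time steps, particles
        phi, q, r = 0.9, 1.0, 1.0            # AR(1) state and noise variances

        # Simulate data from x_t = phi*x_{t-1} + N(0,q), y_t = x_t + N(0,r).
        x = np.zeros(T)
        for t in range(1, T):
            x[t] = phi * x[t - 1] + rng.normal(0, np.sqrt(q))
        y = x + rng.normal(0, np.sqrt(r), T)

        particles = rng.normal(0, 1, N)
        origin = np.arange(N)                # index of each particle's time-0 ancestor
        for t in range(T):
            particles = phi * particles + rng.normal(0, np.sqrt(q), N)
            logw = -0.5 * (y[t] - particles) ** 2 / r
            w = np.exp(logw - logw.max()); w /= w.sum()
            idx = rng.choice(N, size=N, p=w) # multinomial resampling
            particles, origin = particles[idx], origin[idx]
            if t % 50 == 0:
                print(f"t={t:3d}: distinct time-0 ancestors = {len(np.unique(origin))}")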

  • 208.
    Koivusalo, Richard
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Statistical analysis of empirical pairwise copulas for the S&P 500 stocks (2012). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    It is of great importance to find an analytical copula that will represent the empirical lower tail dependence. In this study, the pairwise empirical copulas are estimated using data for the S&P 500 stocks during the period 2007-2010. Different optimization methods and measures of dependence have been used to fit Gaussian, t and Clayton copulas to the empirical copulas, in order to represent the empirical lower tail dependence. These different measures of dependence and optimization methods, with their restrictions, point to different analytical copulas as optimal. In this study the t copula with 5 degrees of freedom gives the most satisfactory result when it comes to representing lower tail dependence: it gives the best representation of the empirical lower tail dependence whether one uses the 'Empirical maximum likelihood estimator' or 'Equal τ' as an approach.
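
    Two of the quantities used in this kind of study are cheap to compute from pseudo-observations: the empirical lower tail dependence at level q, i.e. P(U1 <= q, U2 <= q) / q, and the elliptical-copula calibration rho = sin(pi*tau/2) from Kendall's tau. A hedged sketch (Python, simulated returns standing in for the S&P 500 data):

        import numpy as np
        from scipy.stats import kendalltau, rankdata

        rng = np.random.default_rng(4)
        # Stand-in for two stocks' daily returns (replace with real data).
        z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=1000)

        # Pseudo-observations: ranks scaled to (0, 1).
        u = rankdata(z[:, 0]) / (len(z) + 1)
        v = rankdata(z[:, 1]) / (len(z) + 1)

        tau, _ = kendalltau(u, v)
        rho = np.sin(np.pi * tau / 2)        # implied correlation for elliptical copulas
        print("Kendall tau:", tau, " implied rho:", rho)

        for q in (0.01, 0.05, 0.10):
            lam_L = np.mean((u <= q) & (v <= q)) / q
            print(f"empirical lower tail dependence at q={q}: {lam_L:.3f}")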


  • 209. Koski, Timo
    Hidden Markov models for bioinformatics (2001). Book (Refereed).
  • 210.
    Koski, Timo
    KTH, Superseded Departments, Mathematics.
    Hidden Markov Models for Bioinformatics (2001). Book (Refereed).
  • 211.
    Koski, Timo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    The Likelihood Ratio Statistic for Testing Spatial Independence using a Separable Covariance Matrix (2009). Report (Other academic).
    Abstract [en]

    This paper deals with the problem of testing spatial independence for dependent observations. The sample observation matrix is assumed to follow a matrix normal distribution with a separable covariance matrix; in other words, it can be written as a Kronecker product of two positive definite matrices. Two cases are considered: when the temporal covariance is known and when it is unknown. When the temporal covariance is known, the maximum likelihood estimates are computed and the asymptotic null distribution is given. In the case when the temporal covariance is unknown, the maximum likelihood estimates of the parameters are found by an iterative alternating algorithm.

  • 212.
    Koski, Timo J. T.
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Noble, John M.
    Rios, Felix L.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    The Minimal Hoppe-Beta Prior Distribution for Directed Acyclic Graphs and Structure Learning. Manuscript (preprint) (Other academic).
    Abstract [en]

    The main contribution of this article is a new prior distribution over directed acyclic graphs intended for structured Bayesian networks, where the structure is given by an ordered block model. That is, the nodes of the graph are objects which fall into categories or blocks; the blocks have a natural ordering or ranking. The presence of a relationship between two objects is denoted by a directed edge, from the object of category of lower rank to the object of higher rank. The models considered here were introduced in Kemp et al. [7] for relational data and extended to multivariate data in Mansinghka et al. [12].

    We consider the situation where the nodes of the graph represent random variables, whose joint probability distribution factorises along the DAG. We use a minimal layering of the DAG to express the prior. We describe Monte Carlo schemes, with a generative procedure similar to the one used for the prior, for finding the optimal a posteriori structure given a data matrix, and compare the performance with Mansinghka et al. and also with the uniform prior.

  • 213.
    Koski, Timo
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Noble, John
    University of Warsaw.
    A Review of Bayesian Networks and Structure Learning (2012). In: Mathematica Applicanda (Matematyka Stosowana), ISSN 2299-4009, Vol. 40, no 1, p. 51-103. Article in journal (Refereed).
    Abstract [en]

    This article reviews the topic of Bayesian networks. A Bayesian network is a factorisation of a probability distribution along a directed acyclic graph. The relation between graphical d-separation and independence is described. A short article from 1853 by Arthur Cayley [8] is discussed, which contains several ideas later used in Bayesian networks: factorisation, the noisy ‘or’ gate, and applications of algebraic geometry to Bayesian networks. The ideas behind Pearl’s intervention calculus when the DAG represents a causal dependence structure, and the relation between the work of Cayley and Pearl, are commented on. Most of the discussion is about structure learning, outlining the two main approaches: search and score versus constraint based. Constraint-based algorithms often rely on the assumption of faithfulness, i.e. that the data to which the algorithm is applied is generated from distributions in which graphical d-separation and independence are equivalent. The article presents some considerations for constraint-based algorithms based on recent data analysis, indicating a variety of situations where the faithfulness assumption does not hold. There is a short discussion about the causal discovery controversy, the idea that causal relations may be learned from data.

  • 214.
    Krebs, Daniel
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Pricing a basket option when volatility is capped using affine jump-diffusion models (2013). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis considers the price and characteristics of an exotic option called the Volatility-Cap-Target-Level (VCTL) option. The payoff function is of simple European option style, but the underlying is a dynamic portfolio comprised of two components: a risky asset and a non-risky asset. The non-risky asset is a bond, and the risky asset can be a fund or an index related to any asset category, such as equities, commodities, real estate, etc.

    The main purpose of using a dynamic portfolio is to keep the realized volatility of the portfolio under control, and preferably below a certain maximum level, denoted the Volatility-Cap-Target-Level (VCTL). This is attained by a variable allocation between the risky asset and the non-risky asset during the maturity of the VCTL-option. The allocation is reviewed and, if necessary, adjusted every 15th day. The adjustment depends entirely upon the realized historical volatility of the risky asset.

    Moreover, it is assumed that the risky asset is governed by a certain group of stochastic differential equations called affine jump-diffusion models. All models are calibrated using out-of-the-money European call options based on the Deutsche-Aktien-Index (DAX).

    The numerical implementation of the portfolio diffusions and the use of Monte Carlo methods result in different VCTL-option prices. Thus, to price a non-standard product and to comply with good risk management, it is advocated that the financial institution use several research models, such as the SVSJ and the Sepp model, in addition to the Black-Scholes model.

    Keywords: Exotic option, basket option, risk management, greeks, affine jump-diffusions, the Black-Scholes model, the Heston model, the Bates model with lognormal jumps, the Bates model with log-asymmetric double exponential jumps, the Stochastic-Volatility-Simultaneous-Jumps (SVSJ) model, the Sepp model.
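
    A stripped-down sketch of the volatility-cap mechanism (Python): the risky weight is rebalanced every 15 days to w = min(1, VCTL / realized vol), and a European call on the resulting dynamic portfolio is priced by Monte Carlo. Plain Black-Scholes dynamics stand in for the affine jump-diffusions calibrated in the thesis, and all parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(5)
        S0, K, T, r, sigma = 100.0, 100.0, 1.0, 0.01, 0.30
        vctl = 0.20                          # volatility cap target level
        n_paths, n_steps, dt = 50_000, 252, 1.0 / 252

        port = np.full(n_paths, S0)
        w = np.full(n_paths, min(1.0, vctl / sigma))
        log_ret = np.zeros((n_paths, n_steps))
        for t in range(n_steps):
            dW = rng.standard_normal(n_paths) * np.sqrt(dt)
            risky = (r - 0.5 * sigma**2) * dt + sigma * dW
            log_ret[:, t] = risky
            port *= 1 + w * (np.exp(risky) - 1) + (1 - w) * r * dt
            if (t + 1) % 15 == 0:            # review the allocation every 15th day
                window = log_ret[:, max(0, t - 14):t + 1]
                realized = window.std(axis=1) * np.sqrt(252)
                w = np.minimum(1.0, vctl / np.maximum(realized, 1e-8))

        price = np.exp(-r * T) * np.maximum(port - K, 0).mean()
        print("MC price of a European call on the vol-capped portfolio:", price)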

  • 215.
    Kremer, Laura
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Assessment of a Credit Value at Risk for Corporate Credits (2013). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this thesis I describe the essential steps of developing a credit rating system. This comprises the credit scoring process that assigns a credit score to each credit, the forming of rating classes by the k-means algorithm, and the assignment of a probability of default (PD) to each rating class. The main focus is on the PD estimation, for which two approaches are presented. The first, simpler approach, in the form of a calibration curve, assumes independence of the defaults of different corporate credits. The second approach, with mixture models, is more realistic as it takes default dependence into account. With these models we can use an estimate of a country’s GDP to calculate an estimate of the Value-at-Risk of a credit portfolio.
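
    The rating-class construction can be sketched directly (Python, using scikit-learn's k-means on simulated scores in place of real credit data): scores are clustered into rating classes, and each class's PD is estimated as its historical default rate.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(6)
        scores = rng.normal(600, 80, size=5000).reshape(-1, 1)   # hypothetical credit scores
        # Hypothetical defaults: lower score -> higher default probability.
        pd_true = 1 / (1 + np.exp((scores[:, 0] - 500) / 40))
        defaults = rng.random(5000) < pd_true

        km = KMeans(n_clusters=7, n_init=10, random_state=0).fit(scores)
        order = np.argsort(km.cluster_centers_[:, 0])            # rank classes by score
        for rating, cls in enumerate(order, start=1):
            mask = km.labels_ == cls
            print(f"class {rating}: mean score {scores[mask].mean():6.1f}, "
                  f"PD estimate {defaults[mask].mean():.3%} ({mask.sum()} credits)")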

  • 216.
    Köll, Joonas
    et al.
    KTH, School of Engineering Sciences (SCI), Aeronautical and Vehicle Engineering.
    Hallström, Stefan
    KTH, School of Engineering Sciences (SCI), Aeronautical and Vehicle Engineering.
    Influence from polydispersity on the morphology of Voronoi and equilibrium foams (2017). In: Journal of cellular plastics (Print), ISSN 0021-955X, E-ISSN 1530-7999, Vol. 53, no 2, p. 199-214. Article in journal (Refereed).
    Abstract [en]

    Stochastic foam models are generated from Voronoi spatial partitioning, using the centers of equi-sized hard spheres in random periodic distributions as seed points. Models with different levels of polydispersity are generated by varying the packing of the spheres. Subsequent relaxation is then performed with the Surface Evolver software, which minimizes the surface area for better resemblance with real foam structures. The polydispersity of the Voronoi precursors is conserved when the models are converted into equilibrium models. The relation between the sphere packing fraction and the resulting degree of volumetric polydispersity is examined, and the relations between the polydispersity and a number of associated morphology parameters are then investigated for both the Voronoi and the equilibrium models. Comparisons with data from real foams in the literature indicate that the method used is somewhat limited in terms of spread in cell volume, but it provides a very controlled way of varying the foam morphology while keeping it periodic and truly stochastic. The study shows several strikingly consistent relations between the spread in cell volume and other geometric parameters, considering the stochastic nature of the models.

  • 217.
    Lamm, Ludvig
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Sunnegårdh, Erik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Efficient Sensitivity Analysis using Algorithmic Differentiation in Financial Applications (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    One of the most essential tasks of a financial institution is to keep the financial risk it is facing down to an acceptable level. This risk can, for example, be incurred due to bought or sold financial contracts; however, it can usually be dealt with using some kind of hedging technique. Certain quantities referred to as "the Greeks" are often used to manage risk. The Greeks are usually determined using Monte Carlo simulation in combination with a finite difference approach, which can in some cases be very demanding in terms of computational cost. Because of this, alternative methods for determining the Greeks are of interest.

    In this report a method called algorithmic differentiation is evaluated. As will be described, there are two different modes of algorithmic differentiation, namely forward and adjoint mode. The evaluation is done by first introducing the theory of the method and applying it to a simple, non-financial example. Then the method is applied to three different situations often arising in financial applications. The first example covers the case where a grid of local volatilities is given and sensitivities of an option price with respect to all grid points are sought. The second example deals with the case of a basket option, where sensitivities of the option with respect to all of the underlying assets are desired. The last example covers the case where sensitivities of a caplet with respect to all initial LIBOR rates, under the assumption of a LIBOR Market Model, are sought.

    It is shown that both forward and adjoint mode produce results aligning with those determined using a finite difference approach. It is also shown that using the adjoint method, in all three cases, large savings in computational cost can be made compared to using forward mode or finite differences.
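
    The forward mode evaluated here can be demonstrated with dual numbers: each value carries its derivative, and the chain rule is applied operation by operation. A self-contained toy sketch (Python; real AD libraries handle this transparently and also provide the adjoint mode):

        import math

        class Dual:
            """Number a + b*eps with eps**2 = 0; b propagates the derivative."""
            def __init__(self, a, b=0.0):
                self.a, self.b = a, b
            def __add__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.a + o.a, self.b + o.b)
            __radd__ = __add__
            def __mul__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
            __rmul__ = __mul__

        def exp(x):
            return Dual(math.exp(x.a), math.exp(x.a) * x.b)

        # d/dx [x * exp(x) + 3x] at x = 2, seeded with derivative 1.
        x = Dual(2.0, 1.0)
        y = x * exp(x) + 3 * x
        print("value:", y.a, " derivative:", y.b)   # derivative = 3*exp(2) + 3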

  • 218. Larsson, Sara
    et al.
    Rydén, Tobias
    Lund University.
    Holst, Ulla
    Oredsson, Stina
    Johansson, Maria
    Estimating the distribution of the G2 phase duration from flow cytometric histograms (2008). In: Mathematical Biosciences, ISSN 0025-5564, E-ISSN 1879-3134, Vol. 211, no 1, p. 1-17. Article in journal (Refereed).
    Abstract [en]

    A mathematical model, based on branching processes, is proposed to interpret BrdUrd DNA FCM-derived data. Our main interest is in determining the distribution of the G2 phase duration. Two different model classes involving different assumptions on the distribution of the G2 phase duration are considered. Different assumptions on the G2 phase duration result in very similar distributions of the S phase duration, and the estimated means and standard deviations of the G2 phase duration are all in the same range.

  • 219. Larsson, Sara
    et al.
    Rydén, Tobias
    Lund University.
    Holst, Ulla
    Oredsson, Stina
    Johansson, Maria
    Estimating the Total Rate of DNA Replication Using Branching Processes (2008). In: Bulletin of Mathematical Biology, ISSN 0092-8240, E-ISSN 1522-9602, Vol. 70, no 8, p. 2177-2194. Article in journal (Refereed).
    Abstract [en]

    Increasing the knowledge of various cell cycle kinetic parameters, such as the length of the cell cycle and its different phases, is of considerable importance for several purposes including tumor diagnostics and treatment in clinical health care and a deepened understanding of tumor growth mechanisms. Of particular interest as a prognostic factor in different cancer forms is the S phase, during which DNA is replicated. In the present paper, we estimate the DNA replication rate and the S phase length from bromodeoxyuridine-DNA flow cytometry data. The mathematical analysis is based on a branching process model, paired with an assumed gamma distribution for the S phase duration, with which the DNA distribution of S phase cells can be expressed in terms of the DNA replication rate. Flow cytometry data typically contains rather large measurement variations, however, and we employ nonparametric deconvolution to estimate the underlying DNA distribution of S phase cells; an estimate of the DNA replication rate is then provided by this distribution and the mathematical model.

  • 220. Larsson, Sara
    et al.
    Rydén, Tobias
    Lund University.
    Holst, Ulla
    Oredsson, Stina
    Johansson, Maria
    Estimating the variation in S phase duration from flow cytometric histograms (2008). In: Mathematical Biosciences, ISSN 0025-5564, E-ISSN 1879-3134, Vol. 213, no 1, p. 40-49. Article in journal (Refereed).
    Abstract [en]

    A stochastic model for interpreting BrdUrd DNA FCM-derived data is proposed. The model is based on branching processes and describes the progression of the DNA distribution of BrdUrd-labelled cells through the cell cycle. With the main focus on estimating the S phase duration and its variation, the DNA replication rate is modelled by a piecewise linear function, while assuming a gamma distribution for the S phase duration. Estimation of model parameters was carried out using maximum likelihood for data from two different cell lines. The results provided quite a good fit to the data, suggesting that stochastic models may be a valuable tool for analysing this kind of data.

  • 221.
    Lauri, Linus
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Algorithmic evaluation of Parameter Estimation for Hidden Markov Models in Finance (2014). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Modeling financial time series is of great importance for being successful within the financial market. Hidden Markov models are a great way to include the regime-shifting nature of financial data. This thesis focuses on getting in-depth knowledge of hidden Markov models in general, and specifically the parameter estimation of the models. The objective is to evaluate if and how well financial data can be fitted with the model. The subject was requested by Nordea Markets with the purpose of gaining knowledge of HMMs for an eventual implementation of the theory by their index development group. The research chiefly consists of evaluating the algorithmic behavior of estimating model parameters. HMMs proved to be a good approach to modeling financial data, since much of the time series had properties that supported a regime-shifting approach. The most important factor for an effective algorithm is the number of states, easily explained as the distinguishable clusters of values. The suggested algorithm for continuously modeling financial data is to do an extensive monthly calculation of starting parameters, which are then used daily in a less time-consuming run of the EM algorithm.
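
    In practice such a regime-switching fit can be run with the third-party hmmlearn package (an assumption; the thesis does not name its implementation). A minimal sketch on simulated returns, where the number of states is the key choice discussed above:

        import numpy as np
        from hmmlearn.hmm import GaussianHMM   # assumes the hmmlearn package is installed

        rng = np.random.default_rng(7)
        # Simulated two-regime returns: calm and turbulent.
        calm = rng.normal(0.0005, 0.005, 700)
        turb = rng.normal(-0.001, 0.02, 300)
        returns = np.concatenate([calm, turb]).reshape(-1, 1)

        for n_states in (2, 3, 4):
            model = GaussianHMM(n_components=n_states, n_iter=200, random_state=0)
            model.fit(returns)                 # EM (Baum-Welch) parameter estimation
            print(f"{n_states} states: log-likelihood = {model.score(returns):.1f}")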

  • 222.
    Leijonmarck, Eric
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Exploiting Temporal Difference for Energy Disaggregation via Discriminative Sparse Coding (2015). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis analyzes one-hour-based energy disaggregation using sparse coding by exploiting temporal differences. Energy disaggregation is the task of taking a whole-home energy signal and separating it into its component appliances. Studies have shown that having device-level energy information can cause users to conserve significant amounts of energy, but current electricity meters only report whole-home data. Thus, developing algorithmic methods for disaggregation presents a key technical challenge in the effort to maximize energy conservation. In energy disaggregation, sometimes called Non-Intrusive Load Monitoring (NILM), most approaches are based on high-frequency monitoring of appliances, while households only measure their consumption via smart meters, which only account for one-hour measurements. This thesis implements key algorithms from the paper "Energy Disaggregation via Discriminative Sparse Coding" by J. Zico Kolter, Siddharth Batra and Andrew Y. Ng, and tries to replicate their results by exploiting temporal differences that occur when dealing with time series data. The implementation was successful, but the results were inconclusive for large datasets, as the algorithm was too computationally heavy for the resources available. The work was performed at the Swedish company Greenely, which develops visualizations based on gamification for energy bills via a mobile application.

  • 223. Li, Bo
    et al.
    Wu, Junfeng
    Qi, Hongsheng
    Proutiere, Alexandre
    KTH, School of Electrical Engineering and Computer Science (EECS), Automatic Control.
    Shi, Guodong
    Boolean Gossip Networks (2018). In: IEEE/ACM Transactions on Networking, ISSN 1063-6692, E-ISSN 1558-2566, Vol. 26, no 1, p. 118-130. Article in journal (Refereed).
    Abstract [en]

    This paper proposes and investigates a Boolean gossip model as a simplified but non-trivial probabilistic Boolean network. With positive node interactions, in view of standard theories from Markov chains, we prove that the node states asymptotically converge to an agreement at a binary random variable, whose distribution is characterized for large-scale networks by mean-field approximation. Using combinatorial analysis, we also successfully count the number of communication classes of the positive Boolean network explicitly in terms of the topology of the underlying interaction graph, where remarkably minor variation in local structures can drastically change the number of network communication classes. With general Boolean interaction rules, emergence of absorbing network Boolean dynamics is shown to be determined by the network structure with necessary and sufficient conditions established regarding when the Boolean gossip process defines absorbing Markov chains. Particularly, it is shown that for the majority of the Boolean interaction rules, except for nine out of the total 2^16 - 1 possible nonempty sets of binary Boolean functions, whether the induced chain is absorbing has nothing to do with the topology of the underlying interaction graph, as long as connectivity is assumed. These results illustrate the possibilities of relating dynamical properties of Boolean networks to graphical properties of the underlying interactions.
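
    The positive-interaction case is easy to simulate: at each step a random edge is activated and both endpoints update with a positive Boolean rule (here OR, one such rule), after which the node states converge to an agreement. A small illustrative sketch (Python; graph and parameters are arbitrary choices, not from the paper):

        import numpy as np

        rng = np.random.default_rng(8)
        n = 30
        # Interaction graph: a ring plus some random chords.
        edges = [(i, (i + 1) % n) for i in range(n)] + \
                [tuple(rng.choice(n, 2, replace=False)) for _ in range(20)]

        state = rng.integers(0, 2, size=n).astype(bool)
        for step in range(1, 100_001):
            i, j = edges[rng.integers(len(edges))]
            state[i] = state[j] = state[i] | state[j]   # positive (OR) interaction
            if state.all() or not state.any():
                print(f"consensus on {int(state[0])} after {step} gossip steps")
                break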

  • 224.
    Liang, Zhimo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Forecasting Shanghai Composite Index using hidden Markov model (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The aim of this thesis is to forecast the future trend of the Shanghai Composite Index and other securities. Our approach applies hidden Markov models to market-transaction data indirectly. Previous work has not considered the dependence between training samples, which may result in inference bias, so we select samples that are not significantly dependent and treat them as independent of each other. Rather than forecasting the future trend by estimating the hidden state one day before the trend, we measure the probabilities of the trend directions by calculating the gaps between the likelihoods of two hidden Markov models over a period of time before the trends. As we have altered the target function of the optimization in the parameter-estimation process, the accuracy of our model is improved. Furthermore, the experimental results reveal that it is lucrative to select securities for portfolios by our method.

  • 225.
    Lidholm, Erik
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Industrial Economics and Management (Dept.), Entrepreneurship and innovation.
    Nudel, Benjamin
    KTH, School of Industrial Engineering and Management (ITM), Industrial Economics and Management (Dept.), Entrepreneurship and innovation.
    Implications of Multiple Curve Construction in the Swedish Swap Market (2014). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The global financial crisis of 2007 caused abrupt changes in the financial markets. Interest rates that were known to follow each other diverged. Furthermore, both regulation and an increased awareness of counterparty credit risks have fuelled a growth of collateralised contracts. As a consequence, pre-crisis swap pricing methods are no longer valid. In light of this, the purpose of this thesis is to apply a framework to the Swedish swap market that is able to consistently price interest rate and cross currency swaps in the presence of non-negligible cross currency basis spreads, and to investigate the pricing differences arising from the use and type of collateral. Through the implementation of a framework proposed by Fujii, Shimada and Takahashi (2010b), it is shown that the usage of collateral has a noticeable impact on the pricing. Ten year forward starting swaps are found to be priced at lower rates under collateral. Moreover, the results from pricing off-market swaps show that disregarding the impact of collateral would cause one to consistently underestimate the change in value of a contract, whether in or out of the money. The choice of collateral currency is also shown to matter, as pricing under SEK and USD as the collateral currencies yielded different results, in terms of constructed curves as well as in the pricing of spot starting, forward starting and off-market swaps. Based on the results from the pricing of off-market swaps, two scenarios are outlined that exemplify the importance of correct pricing methods when terminating and novating swaps. It is concluded that a market participant who fails to recognise the pricing implications from the usage and type of collateral could incur substantial losses.

  • 226.
    Ligai, Wolmir
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Statistisk analys av tågförseningar i Sverige [Statistical analysis of train delays in Sweden] (2017). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    In this work, data on train journeys were analyzed to determine whether any factors could affect the delay time and the risk of delay. Data were provided by Trafikverket for all railway journeys in Sweden (n = 827087) in 2013, excluding metro and tram journeys. The models used were a multiple linear regression and two logistic regressions; the results of the latter were examined.

    The dependent variables in the examined models were whether a train was delayed at the final destination, and whether it was delayed at all. Variables that showed correlation (p < 0.01) with delayed trains were non-holidays, planned travel time, and type of train. The number of days to the summer solstice, a variable approximating the weather, showed a weak correlation with train delays. Reasons for delay during the journey were also positively correlated with delays at the final destination, and among these the greatest correlation belonged to “Accidents/danger and external factors” and “Infrastructure reasons”.

    The results suggest that factors that strongly affect delays are the train route and the route's traffic capacity.

  • 227.
    Limin, Emelie
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Analysis of purchase behaviors of IKEA family card users, using Generalized linear model (2011). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    At the end of 2009, IKEA had 267 stores in 25 countries with total sales of 21.5 billion euros, and it is one of the most successful home furnishing companies in the world. IKEA has its roots in Småland, a region in Sweden with a history of poverty. This has come to characterize IKEA's business concept, with quality furniture at low prices. With this business concept in mind, IKEA strives to save money and avoid wasting resources, which also goes in line with IKEA's efforts in saving the environment. Market adaptation is a key to succeeding with this concept. A better understanding of what the customer wants reduces the risk of producing too much and therefore decreases the amount of waste. This thesis studies customers' purchase habits when it comes to chairs and tables. IKEA collects information about its customers through the IKEA family card, which stores all purchase information of the customer. With access to that database we are able to compare purchase habits on different markets. In this thesis we are interested in knowing which chairs the customer has bought on the same receipt as a certain table. With this information we can compare actual figures with IKEA's beliefs and build models of the purchase patterns.

  • 228.
    Lindblad, Kalle
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    How big is large?: A study of the limit for large insurance claims in case reserves (2011). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    A company issuing an insurance policy will provide, in return for a monetary premium, acceptance of the liability to make certain payments to the insured person or company if some beforehand-specified event occurs. There will always be a delay between the occurrence of this event and the actual payment from the insurance company. It is therefore necessary for the company to put aside money for this liability. This money is called the reserve. When a claim is reported, a claim handler will make an estimate of how much the company will have to pay to the claimant. This amount is booked as a liability. This type of reserve is called a "case reserve". When making the estimate, the claim handler has the option of giving the claim a standard reserve or a manual reserve. A standard reserve is a statistically calculated amount based on historical claim costs. This type of reserve is more often used for small claims. A manual reserve is a reserve subjectively decided by the claim handler. This type of reserve is more often used for large claims. This thesis proposes a method to model and calculate an optimal limit above which a claim should be considered large. The method is also applied to some different types of claims.

  • 229.
    Lindskog, Filip
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hult, Henrik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Hammarlid, Ola
    Rehn, Carl-Johan
    Risk and portfolio analysis: principles and methods (2012). Book (Refereed).
  • 230.
    Liu, Du
    et al.
    KTH, School of Electrical Engineering (EES), Communication Theory.
    Flierl, Markus
    KTH, School of Electrical Engineering (EES), Communication Theory.
    Energy Compaction on Graphs for Motion-Adaptive Transforms (2015). In: Data Compression Conference Proceedings, 2015, p. 457. Conference paper (Refereed).
    Abstract [en]

    It is well known that the Karhunen-Loeve Transform (KLT) diagonalizes the covariance matrix and gives the optimal energy compaction. Since the real covariance matrix may not be obtained in video compression, we consider a covariance model that can be constructed without extra cost. In this work, a covariance model based on a graph is considered for temporal transforms of videos. The relation between the covariance matrix and the Laplacian is studied. We obtain an explicit expression of the relation for tree graphs, where the trees are defined by motion information. The proposed graph-based covariance is a good model for motion-compensated image sequences. In terms of energy compaction, our graph-based covariance model has the potential to outperform the classical Laplacian-based signal analysis.

  • 231.
    Ljung, Carl
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Copula selection and parameter estimation in market risk models (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this thesis, the literature is reviewed for theory regarding elliptical copulas (Gaussian, Student’s t, and Grouped t) and methods for calibrating parametric copulas to sets of observations. Theory regarding model diagnostics is also summarized in the thesis. Historical data on equity indices and government bond rates from several geographical regions, along with U.S. corporate bond indices, are used as proxies for the most significant stochastic variables in the investment portfolio of If P&C. These historical observations are transformed into pseudo-uniform observations, pseudo-observations, using parametric and non-parametric univariate models. The parametric models are fitted using both maximum likelihood and least squares of the quantile function. Elliptical copulas are then calibrated to the pseudo-observations using the well-known methods Inference Function for Margins (IFM) and Semi-Parametric (SP), as well as compositions of these methods and a non-parametric estimator of Kendall’s tau. The goodness-of-fit of the calibrated multivariate models is assessed in terms of general dependence, tail dependence and mean squared error, as well as by using universal measures such as the Akaike and Bayesian Information Criterion, AIC and BIC. The mean squared error is computed both using the empirical joint distribution and the empirical Kendall distribution function. General dependence is measured using the scale-invariant measures Kendall’s tau, Spearman’s rho, and Blomqvist’s beta, while tail dependence is assessed using Krupskii’s tail-weighted measures of dependence (see [16]). Monte Carlo simulation is used to estimate these measures for copulas where analytical calculation is not feasible. Gaussian copulas scored lower than Student’s t and Grouped t copulas in every test conducted. However, not all tests produced conclusive results. Further, the obtained values of the tail-weighted measures of dependence imply a systematically lower tail dependence of Gaussian copulas compared to historical observations.

  • 232.
    Lorentz, Pär
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    A Modified Sharpe Ratio Based Portfolio Optimization (2012). Independent thesis, Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The performance of an optimal-weighted portfolio strategy is evaluated against an equal-weighted portfolio strategy when transaction costs are penalized. The optimal allocation weights are found by maximizing a modified Sharpe ratio measure each trading day, where "modified" refers to the expected return of an asset in this context. The leverage of the investment is determined by a conditional expectation estimate of the number of portfolio assets on the coming day. A moving window is used to measure historically the transition probabilities of moving from one state to another within this stochastic count process, and this is used as an input to the estimator. It is found that the most accurate estimate is the actual trading day's number of portfolio assets, obtained when the size of the moving window is one. Increasing the penalty parameter on transaction costs of selling and buying assets between trading days lowers the aggregated transaction cost and increases the performance of the optimal-weighted portfolio considerably. The best portfolio performance is obtained when at least 50% of the capital is invested equally among the assets when maximizing the modified Sharpe ratio. The optimal-weighted and equal-weighted portfolios are constructed on a daily basis, where the allowed VaR0.05 is €300,000 for each portfolio. This sets the limit on the amount of capital allowed to be invested each trading day, and is determined by empirical VaR0.05 simulations of these two portfolios.
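
    The daily optimization step can be sketched as follows (Python with scipy; the expected returns, covariance, and penalty parameter are illustrative): maximize a Sharpe-type ratio minus a penalty on turnover relative to the previous day's weights.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(9)
        n = 5
        mu = rng.normal(0.0005, 0.0002, n)          # assumed expected daily returns
        A = rng.normal(size=(n, n))
        cov = A @ A.T / 100 + np.eye(n) * 1e-4      # assumed covariance matrix
        w_prev = np.full(n, 1 / n)                  # yesterday's weights
        kappa = 0.05                                # transaction cost penalty parameter

        def neg_objective(w):
            sharpe = (w @ mu) / np.sqrt(w @ cov @ w)
            turnover = np.abs(w - w_prev).sum()     # penalized buying and selling
            return -(sharpe - kappa * turnover)

        cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
        bounds = [(0, 1)] * n
        res = minimize(neg_objective, w_prev, method="SLSQP", bounds=bounds, constraints=cons)
        print("optimal weights:", np.round(res.x, 3))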

  • 233.
    Lorenzo Varela, Juan Manuel
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Transport Science.
    Börjesson, Maria
    KTH.
    Daly, Andrew
    Measuring errors by latent variables in transport models (2017). Conference paper (Refereed).
  • 234.
    Loso, Jesper
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Forecasting of Self-Rated Health Using Hidden Markov Algorithm (2014). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this thesis a model was developed for predicting a person's monthly average of self-rated health (SRH) for the following month. It was based on statistics from a form constructed by HealthWatch. The model used is a hidden Markov algorithm based on hidden Markov models, where the hidden part is the future value of self-rated health. The emissions were based on five of the eleven questions that make up the HealthWatch form. The questions are answered on a scale from zero to one hundred. The model predicts in which of three intervals of SRH the respondent will most likely answer on average during the following month. The final model has an accuracy of 80%.

  • 235.
    Lundemo, Anna
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Detecting change points in remote sensing time series (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    We analyse methods for detecting change points in optical remote sensing lake drainage time series. Change points are points in a data set where the statistical properties of the data change. The data that we look at represent drained lakes in the Arctic hemisphere. It is generally noisy, with observations missing due to difficult weather conditions. We evaluate a partitioning algorithm, with five different approaches to model the data, based on least-squares regression and an assumption of normally distributed measurement errors. We also evaluate two computer programs called DBEST and TIMESAT and a MATLAB function called findchangepts(). We find that TIMESAT, DBEST and the MATLAB function are not relevant for our purposes. We also find that the partitioning algorithm that models the data as normally distributed around a piecewise constant function, is best suited for finding change points in our data.
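
    The preferred model, a piecewise-constant mean with Gaussian noise, can be fitted greedily by binary segmentation: repeatedly split the segment whose best split most reduces the residual sum of squares. A compact sketch (Python; the thesis's partitioning algorithm is exact rather than greedy, so this is a simplification, and the gain threshold is an arbitrary choice):

        import numpy as np

        def best_split(y):
            """Return (gain, index) of the split minimizing total squared error."""
            n = len(y)
            if n < 4:
                return 0.0, None
            total = ((y - y.mean()) ** 2).sum()
            best = (0.0, None)
            for k in range(2, n - 2):
                cost = ((y[:k] - y[:k].mean()) ** 2).sum() + ((y[k:] - y[k:].mean()) ** 2).sum()
                if total - cost > best[0]:
                    best = (total - cost, k)
            return best

        def binary_segmentation(y, min_gain):
            gain, k = best_split(y)
            if k is None or gain < min_gain:
                return []
            left = binary_segmentation(y[:k], min_gain)
            right = [k + cp for cp in binary_segmentation(y[k:], min_gain)]
            return left + [k] + right

        rng = np.random.default_rng(10)
        y = np.concatenate([rng.normal(5, 1, 60), rng.normal(2, 1, 40), rng.normal(4, 1, 50)])
        print("detected change points:", binary_segmentation(y, min_gain=30.0))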

  • 236.
    Lundström, Ina
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Finding Risk Factors for Long-Term Sickness Absence Using Classification Trees (2013). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this thesis a model is developed for predicting whether someone has an elevated risk of long-term sickness absence during the forthcoming year. The model is a classification tree that classifies individuals as having high or low risk of long-term sickness absence based on their answers on the HealthWatch form. The HealthWatch form is a questionnaire about health consisting of eleven questions, such as "How do you feel right now?", "How did you sleep last night?", "How is your job satisfaction right now?" etc. As a measure of risk of long-term sickness absence, the Oldenburg Burnout Inventory and a scale for performance-based self-esteem are used. Separate models are made for men and for women. The model for women shows good enough performance on a test set to be acceptable as a general model and can be used for prediction. Some conclusions can also be drawn from the additional information given by the classification tree: workload and work atmosphere do not seem to contribute much to an increased risk of long-term sickness absence, while job satisfaction seems to be one of the most important factors. The model for men performs poorly on a test set, and therefore it is not advisable to use it for prediction or to draw other conclusions from it.
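
    The model family used here is directly available in scikit-learn. A hedged sketch on synthetic questionnaire-style data (the HealthWatch answers themselves are not public, so the features and risk rule below are entirely made up):

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier, export_text

        rng = np.random.default_rng(11)
        n = 2000
        # Synthetic stand-ins for form answers on a 0-100 scale.
        X = rng.uniform(0, 100, size=(n, 3))    # e.g. mood, sleep, job satisfaction
        risk = (X[:, 2] < 40) & (X[:, 1] < 50)  # hypothetical high-risk rule
        y = np.where(rng.random(n) < 0.15, ~risk, risk).astype(int)  # plus label noise

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
        print("test accuracy:", tree.score(X_te, y_te))
        print(export_text(tree, feature_names=["mood", "sleep", "job_satisfaction"]))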

  • 237.
    Löfdahl, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Stochastic modelling in disability insurance (2013). Licentiate thesis, comprehensive summary (Other academic).
    Abstract [en]

    This thesis consists of two papers related to the stochastic modelling of disability insurance. In the first paper, we propose a stochastic semi-Markovian framework for disability modelling in a multi-period discrete-time setting. The logistic transforms of disability inception and recovery probabilities are modelled by means of stochastic risk factors and basis functions, using counting processes and generalized linear models. The model for disability inception also takes IBNR claims into consideration. We fit various versions of the models to Swedish disability claims data.

    In the second paper, we consider a large, homogeneous portfolio of life or disability annuity policies. The policies are assumed to be independent conditional on an external stochastic process representing the economic environment. Using a conditional law of large numbers, we establish the connection between risk aggregation and claims reserving for large portfolios. Further, we derive a partial differential equation for moments of present values. Moreover, we show how statistical multi-factor intensity models can be approximated by one-factor models, which allows for solving the PDEs very efficiently. Finally, we give a numerical example where moments of present values of disability annuities are computed using finite difference methods.

  • 238.
    Löfdahl Grelsson, Björn
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Topics in life and disability insurance (2015). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    This thesis consists of five papers, presented in Chapters A-E, on topics in life and disability insurance. It is naturally divided into two parts, where papers A and B discuss disability rates estimation based on historical claims data, and papers C-E discuss claims reserving, risk management and insurer solvency. In Paper A, disability inception and recovery probabilities are modelled in a generalized linear models (GLM) framework. For prediction of future disability rates, it is customary to combine GLMs with time series forecasting techniques into a two-step method involving parameter estimation from historical data and subsequent calibration of a time series model. This approach may in fact lead to both conceptual and numerical problems since any time trend components of the model are incoherently treated as both model parameters and realizations of a stochastic process. In Paper B, we suggest that this general two-step approach can be improved in the following way: First, we assume a stochastic process form for the time trend component. The corresponding transition densities are then incorporated into the likelihood, and the model parameters are estimated using the Expectation-Maximization algorithm. In Papers C and D, we consider a large portfolio of life or disability annuity policies. The policies are assumed to be independent conditional on an external stochastic process representing the economic-demographic environment. Using the Conditional Law of Large Numbers (CLLN), we establish the connection between claims reserving and risk aggregation for large portfolios. Moreover, we show how statistical multi-factor intensity models can be approximated by one-factor models, which allows for computing reserves and capital requirements efficiently. Paper C focuses on claims reserving and ultimate risk, whereas the focus of Paper D is on the one-year risks associated with the Solvency II directive. In Paper E, we consider claims reserving for life insurance policies with reserve-dependent payments driven by multi-state Markov chains. The associated prospective reserve is formulated as a recursive utility function using the framework of backward stochastic differential equations (BSDE). We show that the prospective reserve satisfies a nonlinear Thiele equation for Markovian BSDEs when the driver is a deterministic function of the reserve and the underlying Markov chain. Aggregation of prospective reserves for large and homogeneous insurance portfolios is considered through mean-field approximations. We show that the corresponding prospective reserve satisfies a BSDE of mean-field type and derive the associated nonlinear Thiele equation.

  • 239.
    Magureanu, Stefan
    et al.
    KTH, School of Electrical Engineering (EES), Automatic Control.
    Combes, Richard
    Supelec, France.
    Proutiere, Alexandre
    KTH, School of Electrical Engineering (EES), Automatic Control. INRIA, France.
    Lipschitz Bandits: Regret Lower Bounds and Optimal Algorithms (2014). Conference paper (Refereed).
    Abstract [en]

    We consider stochastic multi-armed bandit problems where the expected reward is a Lipschitz function of the arm, and where the set of arms is either discrete or continuous. For discrete Lipschitz bandits, we derive asymptotic problem-specific lower bounds for the regret satisfied by any algorithm, and propose OSLB and CKL-UCB, two algorithms that efficiently exploit the Lipschitz structure of the problem. In fact, we prove that OSLB is asymptotically optimal, as its asymptotic regret matches the lower bound. The regret analysis of our algorithms relies on a new concentration inequality for weighted sums of KL divergences between the empirical distributions of rewards and their true distributions. For continuous Lipschitz bandits, we propose to first discretize the action space, and then apply OSLB or CKL-UCB, algorithms that provably exploit the structure efficiently. This approach is shown, through numerical experiments, to significantly outperform existing algorithms that directly deal with the continuous set of arms. Finally the results and algorithms are extended to contextual bandits with similarities.
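
    As a point of reference for the discretize-then-index approach, here is a plain UCB1 baseline on a discretized Lipschitz bandit (Python). It does not exploit the Lipschitz structure the way OSLB and CKL-UCB do; it only illustrates the discrete bandit loop those algorithms plug into, on an arbitrary 1-Lipschitz reward function.

        import numpy as np

        rng = np.random.default_rng(12)
        arms = np.linspace(0, 1, 50)                 # discretized action space
        mu = 1 - np.abs(arms - 0.3)                  # a 1-Lipschitz mean-reward function

        T = 20_000
        counts = np.zeros(len(arms)); sums = np.zeros(len(arms))
        for t in range(1, T + 1):
            if t <= len(arms):
                a = t - 1                            # play each arm once
            else:
                ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
                a = int(np.argmax(ucb))
            reward = mu[a] + rng.normal(0, 0.1)
            counts[a] += 1; sums[a] += reward

        # Realized-reward proxy for the regret over T rounds.
        regret = T * mu.max() - sums.sum()
        print("UCB1 regret over", T, "rounds:", round(regret, 1))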

  • 240. Maire, Florian
    et al.
    Douc, Randal
    Olsson, Jimmy
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Comparison of asymptotic variances of inhomogeneous Markov chains with application to Markov chain Monte Carlo methods (2014). In: Annals of Statistics, ISSN 0090-5364, E-ISSN 2168-8966, Vol. 42, no 4, p. 1483-1510. Article in journal (Refereed).
    Abstract [en]

    In this paper, we study the asymptotic variance of sample path averages for inhomogeneous Markov chains that evolve alternatingly according to two different π-reversible Markov transition kernels P and Q. More specifically, our main result allows us to compare directly the asymptotic variances of two inhomogeneous Markov chains associated with different kernels P_i and Q_i, i ∈ {0, 1}, as soon as the kernels of each pair (P_0, P_1) and (Q_0, Q_1) can be ordered in the sense of lag-one autocovariance. As an important application, we use this result for comparing different data-augmentation-type Metropolis-Hastings algorithms. In particular, we compare some pseudo-marginal algorithms and propose a novel exact algorithm, referred to as the random refreshment algorithm, which is more efficient, in terms of asymptotic variance, than the Grouped Independence Metropolis-Hastings algorithm and has a computational complexity that does not exceed that of the Monte Carlo Within Metropolis algorithm.

  • 241.
    Malgrat, Maxime
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Pricing of a “worst of” option using a Copula method2013Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In this thesis, we use a copula method to price basket options, in particular “worst of” options. The dependence structure of the underlying assets is modelled using different families of copulas. The copula parameters are estimated via the maximum likelihood method from a sample of observed daily returns.

    The Monte Carlo method is then used to generate daily returns of the underlying assets from the fitted copula.

    Two baskets are priced: one composed of two correlated assets and one composed of two uncorrelated assets. The obtained prices are then compared with the prices obtained using the Pricing Partners software.
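
    As a rough illustration of the workflow, here is a minimal sketch with a Gaussian copula: synthetic returns, rank-based estimation of the copula correlation (a common shortcut standing in for full maximum likelihood), simulation from the fitted copula with empirical marginals, and Monte Carlo pricing of a worst-of put. None of the inputs are the thesis's data.

    ```python
    # Hedged sketch of copula-based "worst of" pricing with a Gaussian copula.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)

    # Synthetic "observed" daily returns for two dependent assets.
    z = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=750)
    returns = 0.01 * z

    # Pseudo-observations (ranks scaled to (0, 1)), normal scores, and the
    # copula correlation estimated from the scores.
    ranks = np.argsort(np.argsort(returns, axis=0), axis=0)
    u = (ranks + 0.5) / len(returns)
    rho = np.corrcoef(norm.ppf(u).T)[0, 1]

    # Simulate 60 days of joint returns: Gaussian copula + empirical marginals.
    n_paths, horizon = 20_000, 60
    g = rng.multivariate_normal([0, 0], [[1.0, rho], [rho, 1.0]],
                                size=(n_paths, horizon))
    sim_u = norm.cdf(g)
    sim_r = np.stack([np.quantile(returns[:, j], sim_u[..., j])
                      for j in range(2)], axis=-1)

    # Worst-of put: payoff on the worse of the two performances, strike 100%.
    perf = np.exp(sim_r.sum(axis=1))          # terminal performance per asset
    payoff = np.maximum(1.0 - perf.min(axis=1), 0.0)
    print(f"worst-of put price (zero rates assumed): {payoff.mean():.4f}")
    ```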

  • 242.
    Malmberg, Emilie
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Sjöberg, Jonas
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Förklarande faktorer bakom statsobligationsspread mellan USA och Tyskland2014Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    This bachelor’s thesis in Mathematical Statistics and Industrial Economics aims to determine explanatory variables for the yield spread between U.S. and German government bonds. The bonds used in this thesis have maturities of five and ten years. To accomplish this, a multiple linear regression model is used. Regression models are commonly used to describe government bond spreads, and this thesis aims to create a basis for further modelling and to contribute to the improvement of existing models. The problem formulation and course of action were developed in cooperation with a Swedish bank, not named for reasons of confidentiality. The thesis consists of two main parts. The Industrial Economics part investigates which macroeconomic factors are of interest for building the model; the economics provide, in this case, the statistical context, which emphasizes the importance of this part. In the mathematical part, a multiple linear regression and related statistical tests are performed on the chosen variables. The results of these tests indicate that the policy rate spread between the countries is the most significant variable and in itself describes the government bond spread quite well. However, the policy rate spread does not describe the bond spread as well over the last five years. This suggests that the importance of the policy rate spread is diminishing, while the importance of other factors is increasing.
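
    The modelling approach amounts to an ordinary least-squares regression of the spread on macro variables. The sketch below shows the mechanics with synthetic placeholders (the regressor names and data are assumptions, not the thesis's variables).

    ```python
    # Hedged sketch: multiple linear regression of a bond spread on macro
    # variables, fit by ordinary least squares. Synthetic data throughout.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 250  # e.g. weekly observations

    policy_spread = rng.normal(0.5, 0.3, n)   # policy rate spread (pp)
    infl_diff = rng.normal(0.2, 0.4, n)       # inflation differential
    vix = rng.normal(18, 5, n)                # risk sentiment proxy
    noise = rng.normal(0, 0.1, n)
    bond_spread = (0.1 + 0.8 * policy_spread + 0.2 * infl_diff
                   + 0.005 * vix + noise)

    X = np.column_stack([np.ones(n), policy_spread, infl_diff, vix])
    beta, res, *_ = np.linalg.lstsq(X, bond_spread, rcond=None)
    r2 = 1 - res[0] / ((bond_spread - bond_spread.mean()) ** 2).sum()
    print("coefficients [const, policy, inflation, vix]:", beta.round(3))
    print(f"R^2: {r2:.3f}")
    ```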

  • 243.
    Martinsson Engshagen, Jan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Nothing is normal in finance!: On Tail Correlations and Robust Higher Order Moments in Normal Portfolio Frameworks2012Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis project is divided into two parts. The first part examines the possibility that correlation matrix estimates based on an outlier sample contain information about extreme events. According to my findings, such methods do not perform better than simple shrinkage methods with robust shrinkage targets. In particular, the tested method is outperformed when it comes to extreme events, where shrinking the correlation matrix towards the identity matrix seems to give the best result.
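
    The shrinkage estimator referred to here is a convex combination of the sample correlation matrix and a target, in this case the identity. A tiny sketch (the shrinkage intensity is a free tuning choice, an assumption):

    ```python
    # Minimal sketch: shrink the sample correlation matrix towards identity.
    import numpy as np

    def shrink_to_identity(returns: np.ndarray, lam: float = 0.3) -> np.ndarray:
        """Convex combination of sample correlation and the identity target."""
        corr = np.corrcoef(returns, rowvar=False)
        return (1.0 - lam) * corr + lam * np.eye(corr.shape[0])

    rng = np.random.default_rng(5)
    sample = rng.standard_t(df=4, size=(500, 3))   # heavy-tailed returns
    print(shrink_to_identity(sample).round(3))
    ```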

    The second part is about the valuation of skewness in marginal distributions and the penalization of heavy tails. I argue that, for robustness reasons, it is reasonable to use a degrees-of-freedom parameter instead of kurtosis, and a certain regression parameter that I develop instead of skewness. When minimizing the one-period drawdown is the target, the "value" of skewness seems to have a linear relationship with expected returns. Re-valuing expected returns in terms of skewness in the standard Markowitz framework tends to lower expected shortfall (ES), increase skewness and lower the realized portfolio variance. Penalizing heavy tails likewise tends to lower ES, kurtosis and realized portfolio variance. The results indicate that the parameters representing higher-order moments in some way characterize the assets and also reflect their future behavior. These properties can be used in a simple optimization framework and seem to have a positive impact even at the portfolio level.

  • 244. Maruotti, Antonello
    et al.
    Rydén, Tobias
    Lund University.
    A semiparametric approach to hidden Markov models under longitudinal observations2009In: Statistics and computing, ISSN 0960-3174, E-ISSN 1573-1375, Vol. 19, no 4, p. 381-393Article in journal (Refereed)
    Abstract [en]

    We propose a hidden Markov model for longitudinal count data where sources of unobserved heterogeneity arise, making data overdispersed. The observed process, conditionally on the hidden states, is assumed to follow an inhomogeneous Poisson kernel, where the unobserved heterogeneity is modeled in a generalized linear model (GLM) framework by adding individual-specific random effects in the link function. Due to the complexity of the likelihood within the GLM framework, model parameters may be estimated by numerical maximization of the log-likelihood function or by simulation methods; we propose a more flexible approach based on the Expectation Maximization (EM) algorithm. Parameter estimation is carried out using a non-parametric maximum likelihood (NPML) approach in a finite mixture context. Simulation results and two empirical examples are provided.
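
    As a hedged illustration of the model class (not the paper's NPML estimator, which additionally includes individual random effects), the sketch below evaluates the log-likelihood of a two-state Poisson hidden Markov model with the scaled forward algorithm, the basic building block of any EM fit. All parameter values are assumptions.

    ```python
    # Minimal sketch: log-likelihood of a 2-state Poisson HMM via the scaled
    # forward recursion.
    import numpy as np
    from scipy.stats import poisson

    def poisson_hmm_loglik(y, trans, rates, init):
        """Scaled forward recursion for a Poisson HMM."""
        alpha = init * poisson.pmf(y[0], rates)
        c = alpha.sum()
        alpha /= c
        loglik = np.log(c)
        for obs in y[1:]:
            alpha = (alpha @ trans) * poisson.pmf(obs, rates)
            c = alpha.sum()
            alpha /= c
            loglik += np.log(c)
        return loglik

    trans = np.array([[0.9, 0.1], [0.2, 0.8]])   # hidden-state transitions
    rates = np.array([2.0, 7.0])                 # Poisson means per state
    init = np.array([0.5, 0.5])

    rng = np.random.default_rng(6)
    # Simulate a short count series from the model, then score it.
    states = [0]
    for _ in range(99):
        states.append(rng.choice(2, p=trans[states[-1]]))
    y = rng.poisson(rates[np.array(states)])
    print(f"log-likelihood: {poisson_hmm_loglik(y, trans, rates, init):.2f}")
    ```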

  • 245.
    Mhitarean, Ecaterina
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Marketing Mix Modelling from the multiple regression perspective2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The optimal allocation of the marketing budget has become a difficult issue that every company faces. With the appearance of new marketing techniques, such as online advertising and social media advertising, the complexity of data has increased, making this problem even more challenging. Statistical tools for explanatory and predictive modelling are commonly used to tackle the problem of budget allocation. Marketing Mix Modelling involves a range of statistical methods suitable for modelling the variable of interest (in this thesis, sales) in terms of advertising strategies and external variables, with the aim of constructing the combination of marketing strategies that maximizes profit.

    The purpose of this thesis is to investigate a number of regression-based model-building strategies, focusing on advanced regularization methods for linear regression, and to analyze the advantages and disadvantages of each method. Several crucial problems that modern marketing mix modelling faces are discussed in the thesis. These include the choice of the most appropriate functional form for the relationship between the explanatory variables and the response, modelling the dynamics of the marketing environment by choosing optimal decay rates for each advertising strategy, and evaluating seasonality effects and the collinearity of marketing instruments.

    To efficiently tackle two common challenges with marketing data, multicollinearity and the selection of informative variables, regularization methods are exploited. In particular, the predictive performance of ridge regression, the lasso, the naive elastic net and the elastic net is compared, using a cross-validation approach for the selection of tuning parameters. Specific practical recommendations for modelling and analyzing Nepa marketing data are provided.
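
    The comparison described here maps directly onto standard tooling. A hedged sketch with scikit-learn, using synthetic stand-ins for media spend and sales (the collinear channel and sparse effects are assumptions):

    ```python
    # Hedged sketch: ridge, lasso and elastic net with cross-validated tuning.
    import numpy as np
    from sklearn.linear_model import RidgeCV, LassoCV, ElasticNetCV
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(7)
    n, p = 200, 12                       # weeks x media/exogenous variables
    X = rng.normal(size=(n, p))
    X[:, 1] = 0.9 * X[:, 0] + 0.1 * rng.normal(size=n)   # collinear channels
    beta = np.zeros(p)
    beta[[0, 3, 7]] = [2.0, -1.0, 0.5]   # sparse true effects
    y = X @ beta + rng.normal(scale=1.0, size=n)          # "sales"

    Xs = StandardScaler().fit_transform(X)
    models = {
        "ridge": RidgeCV(alphas=np.logspace(-3, 3, 25)),
        "lasso": LassoCV(cv=5, random_state=0),
        "elastic net": ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8], cv=5,
                                    random_state=0),
    }
    for name, model in models.items():
        model.fit(Xs, y)
        print(f"{name:12s} nonzero coefs: {np.sum(model.coef_ != 0)}")
    ```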

  • 246. Millán, P.
    et al.
    Vivas, C.
    Fischione, Carlo
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Distributed event-based observers for LTI systems2015In: Asynchronous Control for Networked Systems, Springer Publishing Company, 2015, p. 181-191Chapter in book (Other academic)
    Abstract [en]

    This chapter is concerned with the networked distributed estimation problem. A set of agents (observers) is assumed to estimate the state of a large-scale process. Each of them must provide a reliable estimate of the state of the plant, but each has access to only some of the plant outputs. Local observability is not assumed, so the agents need to communicate and collaborate to obtain their estimates. This chapter proposes an observer structure that merges local Luenberger-like estimators with consensus matrices.
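
    A hedged sketch of this observer structure: each agent corrects a consensus-weighted mix of the neighbours' estimates with its own output measurement. The plant, output matrices, gains and consensus weights below are toy assumptions, not taken from the chapter.

    ```python
    # Minimal sketch: two distributed observers, each seeing one output of an
    # LTI plant, combining Luenberger correction with consensus averaging.
    import numpy as np

    A = np.array([[0.95, 0.10], [0.0, 0.90]])                 # stable LTI plant
    C = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]      # one output each
    L = [np.array([[0.5], [0.0]]), np.array([[0.0], [0.5]])]  # local gains
    W = np.array([[0.5, 0.5], [0.5, 0.5]])                    # consensus weights

    x = np.array([1.0, -1.0])
    xhat = [np.zeros(2), np.zeros(2)]

    for _ in range(50):
        y = [Ci @ x for Ci in C]                  # local measurements
        mix = [sum(W[i, j] * xhat[j] for j in range(2)) for i in range(2)]
        xhat = [A @ mix[i] + (L[i] @ (y[i] - C[i] @ mix[i])).ravel()
                for i in range(2)]                # predict + correct
        x = A @ x

    print("estimation errors:",
          [float(np.linalg.norm(x - xh).round(4)) for xh in xhat])
    ```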

  • 247.
    Molavipour, Sina
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre.
    Bassi, German
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre.
    Skoglund, Mikael
    KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Testing for Directed Information Graphs2017In: 2017 55TH ANNUAL ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING (ALLERTON), IEEE , 2017, p. 212-219Conference paper (Refereed)
    Abstract [en]

    In this paper, we study a hypothesis test to determine the underlying directed graph structure of nodes in a network, where the nodes represent random processes and the direction of the links indicates a causal relationship between said processes. Specifically, a k-th order Markov structure is considered for them, and the chosen metric to determine a connection between nodes is the directed information. The hypothesis test is based on empirically calculated transition probabilities, which are used to estimate the directed information. For a single edge, it is proven that the detection probability can be chosen arbitrarily close to one while the false alarm probability remains negligible. When the test is performed on the whole graph, we derive bounds for the false alarm and detection probabilities, which show that the test is asymptotically optimal when the test threshold is set properly and a large number of samples is used. Furthermore, we study how the convergence of the measures relies on the existence of links in the true graph.
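
    A rough flavor of the plug-in approach, simplified to a first-order, two-node, binary version (the paper handles k-th order structures and a test over the whole graph): estimate the directed-information rate from empirical transition counts and compare a linked pair against a shuffled one. The data-generating setup is an assumption.

    ```python
    # Hedged sketch: plug-in estimate of a first-order directed information
    # rate I(X -> Y) = H(Y_t | Y_{t-1}) - H(Y_t | Y_{t-1}, X_{t-1})
    # for binary processes, from empirical counts.
    import numpy as np

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    def directed_info_rate(x, y):
        counts = np.zeros((2, 2, 2))          # (y_prev, x_prev, y_next)
        for t in range(1, len(y)):
            counts[y[t - 1], x[t - 1], y[t]] += 1
        p = counts / counts.sum()
        p_yy = p.sum(axis=1)                  # (y_prev, y_next)
        h1 = entropy(p_yy.ravel()) - entropy(p_yy.sum(axis=1))
        h2 = entropy(p.ravel()) - entropy(p.sum(axis=2).ravel())
        return h1 - h2

    rng = np.random.default_rng(9)
    n = 50_000
    x = rng.integers(0, 2, n)
    flip = rng.random(n) < 0.1                # Y_t is X_{t-1} through a noisy channel
    y = np.empty(n, dtype=int)
    y[0] = 0
    y[1:] = np.where(flip[1:], 1 - x[:-1], x[:-1])

    print(f"I(X->Y), causal link present: {directed_info_rate(x, y):.3f} bits")
    print(f"I(X->Y), link destroyed:      "
          f"{directed_info_rate(rng.permutation(x), y):.3f} bits")
    ```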

  • 248.
    Mollaret, Sébastian
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Collateral choice option valuation2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    A bank borrowing money has to post securities to the lender, known as collateral. Different kinds of collateral can be posted, such as cash in different currencies or a stock portfolio, depending on the terms of the contract, which is called a Credit Support Annex (CSA). These contracts specify eligible collateral, interest rates, the frequency of collateral posting, minimum transfer amounts, etc. This guarantee reduces the counterparty risk associated with this type of transaction.

    If a CSA allows for posting cash in different currencies as collateral, then the party posting collateral can, now and at each future point in time, choose which currency to post. This choice leads to optionality that needs to be accounted for when valuing even the most basic of derivatives such as forwards or swaps.

    In this thesis, we deal with the valuation of embedded optionality in collateral contracts. We consider the case when collateral can be posted in two different currencies, which seems sufficient since collateral contracts are soon going to be simplified.

    This study is based on the conditional independence approach proposed by Piterbarg [8]. This method is compared to both Monte Carlo simulation and the finite-difference method.

    A practical application is finally presented with the example of a contract between Natixis and Barclays.
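
    To convey why the currency choice has value, here is a toy Monte Carlo benchmark of the kind the thesis compares against (not Piterbarg's conditional independence method and not the thesis's model): the posting party picks, at each step, the currency whose collateral rate spread is highest, so the choice option shows up as a running maximum of the two spreads. Dynamics and parameters are assumptions.

    ```python
    # Hedged sketch: Monte Carlo value of always posting the better of two
    # collateral currencies, relative to always posting currency 1.
    import numpy as np

    rng = np.random.default_rng(10)
    T, steps, n_paths = 5.0, 250, 10_000
    dt = T / steps

    # Collateral rate spreads q1, q2 as correlated arithmetic Brownian motions.
    q0, vol, corr = 0.002, 0.004, 0.5
    cov = vol**2 * dt * np.array([[1.0, corr], [corr, 1.0]])
    dq = rng.multivariate_normal([0.0, 0.0], cov, size=(n_paths, steps))
    q = q0 + dq.cumsum(axis=1)                 # shape (paths, steps, 2)

    # Accumulated benefit of the cheapest-to-deliver choice over the horizon.
    gain = np.exp((q.max(axis=2) - q[:, :, 0]).sum(axis=1) * dt)
    print(f"collateral choice option factor: {gain.mean():.5f}")
    ```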

  • 249.
    Monin Nylund, Jean-Alexander
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Semi-Markov modelling in a Gibbs sampling algorithm for NIALM2014Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Residential households in the EU are estimated to have an energy savings potential of around 27% [1]. The question remains how to realize this potential. Non-Intrusive Appliance Load Monitoring (NIALM) aims to disaggregate the energy signals of individual household appliances using only measurements of the total household power load.

    The core of this thesis has been the implementation of an extension to a Gibbs sampling model with hidden Markov models for energy disaggregation. The goal has been to improve overall performance by including the duration times of electrical appliances in the probabilistic model.

    The final algorithm was evaluated against the base algorithm, but the results remained at best inconclusive, due to the model's inherent limitations.

    The work was performed at the Swedish company Watty. Watty develops the first energy data analytics tool that can automate the energy efficiency process in buildings.

  • 250.
    Mumm, Lennart
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Reject Inference in Online Purchases2012Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    As accurately as possible, creditors wish to determine whether a potential debtor will repay the borrowed sum. To achieve this, mathematical models known as credit scorecards, quantifying the risk of default, are used. This study investigates whether the scorecard can be improved by using reject inference, thereby including the characteristics of the rejected population when refining the scorecard. The reject inference method used is parcelling. Logistic regression is used to estimate the probability of default based on applicant characteristics. Two models, one with and one without reject inference, are compared using the Gini coefficient and estimated profitability. The results show that the model with reject inference has both a slightly higher Gini coefficient and increased profitability. Thus, this study suggests that reject inference does improve the predictive power of the scorecard, but additional testing on a larger calibration set is needed to verify the results.
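
    A hedged sketch of reject inference by parcelling: score the rejected applicants with a model fit on accepted ones, bucket them by predicted default risk, assign good/bad labels within each bucket at the bucket's predicted bad rate, and refit on the combined sample. The data are synthetic, and the thesis's actual features and parcelling details may differ.

    ```python
    # Minimal sketch: logistic scorecard with and without parcelling-based
    # reject inference.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(11)

    def make_applicants(n):
        X = rng.normal(size=(n, 3))
        p_default = 1 / (1 + np.exp(-(-1.0 + X @ np.array([1.2, -0.8, 0.5]))))
        return X, rng.binomial(1, p_default)

    X_acc, y_acc = make_applicants(5_000)   # accepted: outcome observed
    X_rej, _ = make_applicants(2_000)       # rejected: outcome unobserved

    base = LogisticRegression().fit(X_acc, y_acc)

    # Parcelling: bucket rejects by predicted risk; label them randomly
    # within each bucket according to the bucket's mean predicted bad rate.
    scores = base.predict_proba(X_rej)[:, 1]
    edges = np.quantile(scores, np.linspace(0.1, 0.9, 9))
    buckets = np.digitize(scores, edges)
    y_inferred = np.empty(len(scores), dtype=int)
    for b in np.unique(buckets):
        idx = buckets == b
        y_inferred[idx] = rng.binomial(1, scores[idx].mean(), idx.sum())

    X_all = np.vstack([X_acc, X_rej])
    y_all = np.concatenate([y_acc, y_inferred])
    refit = LogisticRegression().fit(X_all, y_all)
    print("coef without / with reject inference:")
    print(base.coef_.round(3), refit.coef_.round(3), sep="\n")
    ```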
