251 - 300 of 373
  • 251.
    Mollaret, Sébastian
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Collateral choice option valuation (2015). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    A bank borrowing money has to provide the lender with securities, which is called collateral. Different kinds of collateral can be posted, such as cash in different currencies or a stock portfolio, depending on the terms of the contract, which is called a Credit Support Annex (CSA). These contracts specify eligible collateral, interest rates, the frequency of collateral posting, minimum transfer amounts, etc. This guarantee reduces the counterparty risk associated with this type of transaction.

    If a CSA allows for posting cash in different currencies as collateral, then the party posting collateral can, now and at each future point in time, choose which currency to post. This choice leads to optionality that needs to be accounted for when valuing even the most basic of derivatives such as forwards or swaps.

    In this thesis, we deal with the valuation of embedded optionality in collateral contracts. We consider the case when collateral can be posted in two different currencies, which seems sufficient since collateral contracts are soon going to be simplified.

    This study is based on the conditional independence approach proposed by Piterbarg [8]. This method is compared to both Monte Carlo simulation and the finite-difference method.

    A practical application is finally presented with the example of a contract between Natixis and Barclays.

     

  • 252.
    Monin Nylund, Jean-Alexander
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Semi-Markov modelling in a Gibbs sampling algorithm for NIALM (2014). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Residential households in the EU are estimated to have a savings potential of around 27% [1]. The question yet remains on how to realize this savings potential. Non-Intrusive Appliance Load Monitoring (NIALM) aims to disaggregate the combination of household appliance energy signals with only measurements of the total household power load.

    The core of this thesis has been the implementation of an extension to a Gibbs sampling model with Hidden Markov Models for energy disaggregation. The goal has been to improve overall performance, by including the duration times of electrical appliances in the probabilistic model.

    The final algorithm was evaluated in comparison to the base algorithm, but results remained at the very best inconclusive, due to the model's inherent limitations.

    The work was performed at the Swedish company Watty. Watty develops the first energy data analytic tool that can automate the energy efficiency process in buildings.

  • 253.
    Mumm, Lennart
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Reject Inference in Online Purchases (2012). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    As accurately as possible, creditors wish to determine whether a potential debtor will repay the borrowed sum. To achieve this, mathematical models known as credit scorecards, which quantify the risk of default, are used. This study investigates whether the scorecard can be improved by using reject inference, thereby including the characteristics of the rejected population when refining the scorecard. The reject inference method used is parcelling. Logistic regression is used to estimate the probability of default based on applicant characteristics. Two models, one with and one without reject inference, are compared using the Gini coefficient and estimated profitability. The results show that the model with reject inference has both a slightly higher Gini coefficient and higher estimated profitability. Thus, this study suggests that reject inference does improve the predictive power of the scorecard, but in order to verify the results additional testing on a larger calibration set is needed.
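    As an illustration of the scorecard evaluation described above, the following sketch fits a logistic regression on synthetic applicant data and computes the Gini coefficient from the ROC area (Gini = 2*AUC - 1). The data, features and parameters are hypothetical and not taken from the thesis.

```python
# Hypothetical example: fit a logistic-regression scorecard on synthetic
# applicant data and report the Gini coefficient, Gini = 2*AUC - 1.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))                    # synthetic applicant characteristics
logit = 0.8 * X[:, 0] - 1.2 * X[:, 1] + 0.3 * X[:, 2] - 2.0
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # 1 = default, 0 = repaid

scorecard = LogisticRegression().fit(X, y)
pd_hat = scorecard.predict_proba(X)[:, 1]      # estimated probability of default
gini = 2 * roc_auc_score(y, pd_hat) - 1
print(f"Gini coefficient: {gini:.3f}")
```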

  • 254. Munkhammar, J.
    et al.
    Widén, J.
    Grahn, Pia
    KTH, School of Electrical Engineering (EES), Electric Power Systems.
    Rydén, J.
    A Bernoulli distribution model for plug-in electric vehicle charging based on time-use data for driving patterns (2014). In: 2014 IEEE International Electric Vehicle Conference, IEVC 2014, IEEE conference proceedings, 2014. Conference paper (Refereed).
    Abstract [en]

    This paper presents a Bernoulli distribution model for plug-in electric vehicle (PEV) charging based on high resolution activity data for Swedish driving patterns. Based on the activity 'driving vehicle' from a time diary study, a Monte Carlo simulation is made of the PEV state of charge, which is then condensed down to Bernoulli distributions representing charging for each hour during weekdays and weekend days. These distributions are then used as a basis for simulations of PEV charging patterns. Results regarding charging patterns for a number of different PEV parameters are shown, along with a comparison with results from a different stochastic model for PEV charging. A convergence test for Monte Carlo simulations of the distributions is also provided. In addition, we show that multiple PEV charging patterns are represented by Binomial distributions via convolution of Bernoulli distributions. The distribution for aggregate charging of many PEVs is also shown to be normally distributed. Finally, a few remarks regarding the applicability of the model are given along with a discussion on potential extensions.
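    The convolution argument in the abstract can be checked with a small simulation: if each of N vehicles charges in a given hour independently with probability p (a Bernoulli variable), the number charging is Binomial(N, p) and, for large N, approximately normal. The values of N and p below are assumptions for illustration only, not the paper's calibrated distributions.

```python
# Assumed values for illustration: each of N PEVs charges in a given hour with
# probability p, independently; the number charging is then Binomial(N, p),
# i.e. a convolution of Bernoulli variables, and close to normal for large N.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
p, N, n_sim = 0.12, 500, 10_000

charging = rng.binomial(1, p, size=(n_sim, N)).sum(axis=1)  # sum of Bernoullis
mu, sigma = N * p, np.sqrt(N * p * (1 - p))
print("simulated mean/std:", charging.mean(), charging.std())
print("Binomial mean/std: ", mu, sigma)
# normal approximation of the probability that more than 75 PEVs charge at once
print("P(X > 75):", (charging > 75).mean(), "normal approx:", 1 - stats.norm.cdf(75, mu, sigma))
```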

  • 255.
    Murase, Takeo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Interest Rate Risk – Using Benchmark Shifts in a Multi Hierarchy Paradigm (2013). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This master thesis investigates the generic benchmark approach to measuring interest rate risk. First the background and market situation are described, followed by an outline of the concept and meaning of measuring interest rate risk with generic benchmarks. Finally, a single yield curve in an arbitrary currency is analyzed in the cases where the linear interpolation and cubic interpolation techniques are utilized. It is shown that in the single yield curve setting with linear or cubic interpolation, the problem of finding interest rate scenarios can be formulated as a convex optimization problem, implying properties such as convexity and monotonicity. The analysis also sheds light on the differences between the linear and cubic interpolation techniques, both in terms of which scenarios are generated and in how to solve for the scenarios implied by the views imposed on the generic benchmark instruments. Further research on the generic benchmark approach that would advance the understanding of the model is suggested at the end of the paper. At this stage, however, using generic benchmark instruments for measuring interest rate risk appears to be a consistent and computationally viable option which not only measures the interest rate risk exposure but also provides guidance on how to act in order to manage interest rate risk in a multi hierarchy paradigm.

  • 256.
    Muratov, Anton
    et al.
    KTH, School of Electrical Engineering (EES).
    Zuyev, Sergei
    Neighbour-dependent point shifts and random exchange models: Invariance and attractors (2017). In: Bernoulli, ISSN 1350-7265, E-ISSN 1573-9759, Vol. 23, no 1, p. 539-551. Article in journal (Refereed).
    Abstract [en]

    Consider a partition of the real line into intervals by the points of a stationary renewal point process. Subdivide the intervals in proportions given by i.i.d. random variables with distribution G supported by [0, 1]. We ask ourselves for what interval length distribution F and what division distribution G, the subdivision points themselves form a renewal process with the same F? An evident case is that of degenerate F and G. As we show, the only other possibility is when F is Gamma and G is Beta with related parameters. In particular, the process of division points of a Poisson process is again Poisson, if the division distribution is Beta: B(r, 1 - r) for some 0 < r < 1. We show a similar behaviour of random exchange models when a countable number of "agents" exchange randomly distributed parts of their "masses" with neighbours. More generally, a Dirichlet distribution arises in these models as a fixed point distribution preserving independence of the masses at each step. We also show that for each G there is a unique attractor, a distribution of the infinite sequence of masses, which is a fixed point of the random exchange and to which iterations of a non-equilibrium configuration of masses converge weakly. In particular, iteratively applying B(r, 1 - r)-divisions to a realisation of any renewal process with finite second moment of F yields a Poisson process of the same intensity in the limit.
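    The Poisson invariance stated above lends itself to a quick numerical check (this simulation is an illustration added here, not part of the paper): divide each interval of a rate-1 Poisson process at a Beta(r, 1-r)-distributed proportion and test whether the gaps between the division points are again Exp(1).

```python
# Simulation check (not from the paper): subdivide each interval of a rate-1
# Poisson process at a Beta(r, 1-r) proportion; the gaps between the division
# points should again be Exp(1), which a Kolmogorov-Smirnov test should accept.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
r, n = 0.3, 200_000
gaps = rng.exponential(1.0, size=n)        # inter-point distances of the original process
t = np.cumsum(gaps)                        # points of the original (Poisson) process
u = rng.beta(r, 1 - r, size=n - 1)         # division proportions G = Beta(r, 1-r)
s = t[:-1] + u * (t[1:] - t[:-1])          # division points, one per interval
print(stats.kstest(np.diff(s), "expon"))   # large p-value: consistent with Exp(1)
```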

  • 257.
    Möllberg, Martin
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    On Calibrating an Extension of the Chen Model (2015). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    There are many ways of modeling stochastic processes of short-term interest rates. One way is to use one-factor models, which may be easy to use and easy to calibrate. Another way is to use a three-factor model in striving for a higher degree of congruency with real-world market data. Calibrating such models may, however, take much more effort. One of the main questions here is which models provide a better fit to the data in question. Another question is whether the use of a three-factor model can result in a better fit compared to one-factor models.

    This is investigated by using the Efficient Method of Moments to calibrate a three-factor model with a Lévy process. This model is an extension of the Chen Model. The calibration is done with Euribor 6-month interest rates and these rates are also used with the Vasicek and Cox-Ingersoll-Ross (CIR) models. These two models are calibrated by using Maximum Likelihood Estimation and they are one-factor models. Chi-square goodness-of-fit tests are also performed for all models.

    The findings indicate that the Vasicek and CIR models fail to describe the stochastic process of the Euribor 6-month rate. However, the result from the goodness-of-fit test of the three-factor model gives support for that model.
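    To make the one-factor calibration step concrete, the sketch below estimates Vasicek parameters by maximum likelihood using the model's exact Gaussian AR(1) transition. It uses simulated data in place of the Euribor series and is not the thesis' implementation; all parameter values are illustrative.

```python
# Sketch of Vasicek calibration by exact-discretization maximum likelihood,
# dr = kappa*(theta - r) dt + sigma dW; simulated data stands in for the
# Euribor series used in the thesis, and all parameter values are illustrative.
import numpy as np

def vasicek_mle(r, dt):
    """MLE of (kappa, theta, sigma) from the exact AR(1) transition
    r_{t+dt} = a + b*r_t + eps, eps ~ N(0, s2)."""
    x, y = r[:-1], r[1:]
    b, a = np.polyfit(x, y, 1)             # OLS slope/intercept = Gaussian MLE
    s2 = np.var(y - (a + b * x))
    kappa = -np.log(b) / dt
    theta = a / (1 - b)
    sigma = np.sqrt(s2 * 2 * kappa / (1 - b**2))
    return kappa, theta, sigma

# simulate a path with known parameters and recover them
rng = np.random.default_rng(3)
kappa, theta, sigma, dt, n = 0.8, 0.02, 0.01, 1 / 252, 5000
b = np.exp(-kappa * dt)
r = np.empty(n)
r[0] = 0.015
for i in range(n - 1):
    mean = theta + (r[i] - theta) * b
    var = sigma**2 * (1 - b**2) / (2 * kappa)
    r[i + 1] = rng.normal(mean, np.sqrt(var))
print(vasicek_mle(r, dt))                  # should be close to (0.8, 0.02, 0.01)
```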

  • 258.
    Nguyen Andersson, Peter
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Liquidity and corporate bond pricing on the Swedish market (2014). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    In this thesis a corporate bond valuation model based on Dick-Nielsen, Feldhütter, and Lando (2011) and Chen, Lesmond, and Wei (2007) is examined. The aim is for the model to price corporate bond spreads and in particular capture the price effects of liquidity as well as credit risk. The valuation model is based on linear regression and is applied to the Swedish market with data provided by Handelsbanken. Two measures of liquidity are analyzed: the bid-ask spread and the number of zero-trading days. The investigation shows that the bid-ask spread outperforms the zero-trading days measure in both significance and robustness. The valuation model with the bid-ask spread explains 59% of the cross-sectional variation and has a standard error of 56 bps in its pricing predictions of corporate spreads. A reduced version of the valuation model is also developed to address simplicity and target a larger group of users. The reduced model is shown to maintain a large proportion of the explanatory power while including fewer and simpler variables.

     

  • 259.
    Nilsson, Hans-Erik
    et al.
    KTH, Superseded Departments, Microelectronics and Information Technology, IMIT.
    Martinez, Antonio B.
    KTH, Superseded Departments, Microelectronics and Information Technology, IMIT.
    Hjelm, Mats
    KTH, Superseded Departments, Microelectronics and Information Technology, IMIT.
    Full band Monte Carlo simulation - beyond the semiclassical approach (2004). In: Monte Carlo Methods and Applications, ISSN 0929-9629, Vol. 10, no 3-4, p. 481-490. Article in journal (Refereed).
    Abstract [en]

    A quantum mechanical extension of the full band ensemble Monte Carlo (MC) simulation method is presented. The new approach goes beyond the traditional semi-classical method generally used in MC simulations of charge transport in semiconductor materials and devices. The extension is necessary in high-field simulations of semiconductor materials with a complex unit cell, such as the hexagonal SiC polytypes or wurtzite GaN. Instead of complex unit cells the approach can also be used for super-cells, in order to understand charge transport at surfaces, around point defects, or in quantum wells.

  • 260.
    Nordling, Torbjörn E. M.
    KTH, School of Electrical Engineering (EES), Automatic Control.
    Robust inference of gene regulatory networks: System properties, variable selection, subnetworks, and design of experiments (2013). Doctoral thesis, monograph (Other academic).
    Abstract [en]

    In this thesis, inference of biological networks from in vivo data generated by perturbation experiments is considered, i.e. deduction of causal interactions that exist among the observed variables. Knowledge of such regulatory influences is essential in biology.

    A system property–interampatteness–is introduced that explains why the variation in existing gene expression data is concentrated to a few “characteristic modes” or “eigengenes”, and why previously inferred models have a large number of false positive and false negative links. An interampatte system is characterized by strong INTERactions enabling simultaneous AMPlification and ATTEnuation of different signals and we show that perturbation of individual state variables, e.g. genes, typically leads to ill-conditioned data with both characteristic and weak modes. The weak modes are typically dominated by measurement noise due to poor excitation and their existence hampers network reconstruction.

    The excitation problem is solved by iterative design of correlated multi-gene perturbation experiments that counteract the intrinsic signal attenuation of the system. The next perturbation should be designed such that the expected response practically spans an additional dimension of the state space. The proposed design is numerically demonstrated for the Snf1 signalling pathway in S. cerevisiae.

    The impact of unperturbed and unobserved latent state variables, that exist in any real biological system, on the inferred network and required set-up of the experiments for network inference is analysed. Their existence implies that a subnetwork of pseudo-direct causal regulatory influences, accounting for all environmental effects, in general is inferred. In principle, the number of latent states and different paths between the nodes of the network can be estimated, but their identity cannot be determined unless they are observed or perturbed directly.

    Network inference is recognized as a variable/model selection problem and solved by considering all possible models of a specified class that can explain the data at a desired significance level, and by classifying only the links present in all of these models as existing. As shown, these links can be determined without any parameter estimation by reformulating the variable selection problem as a robust rank problem. Solution of the rank problem enables assignment of confidence to individual interactions, without resorting to any approximation or asymptotic results. This is demonstrated by reverse engineering of the synthetic IRMA gene regulatory network from published data. A previously unknown activation of transcription of SWI5 by CBF1 in the IRMA strain of S. cerevisiae is proven to exist, which serves to illustrate that even the accumulated knowledge of well studied genes is incomplete.

  • 261.
    Nykvist, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Topics in importance sampling and derivatives pricing (2015). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    This thesis consists of four papers, presented in Chapters 2-5, on the topics of derivatives pricing and importance sampling for stochastic processes.

    In the first paper a model for the evolution of the forward density of the future value of an asset is proposed. The model is constructed with the aim of being both simple and realistic, and of avoiding the need for frequent re-calibration. The model is calibrated to liquid options on the S&P 500 index and an empirical study illustrates that the model provides a good fit to option price data.

    In the last three papers of this thesis efficient importance sampling algorithms are designed for computing rare-event probabilities in the setting of stochastic processes. The algorithms are based on subsolutions of partial differential equations of Hamilton-Jacobi type and the construction of appropriate subsolutions is facilitated by a min-max representation involving the Mañé potential.

    In the second paper, a general framework is provided for the case of one-dimensional diffusions driven by Brownian motion. An analytical formula for the Mañé potential is provided and the performance of the algorithm is analyzed in detail for geometric Brownian motion and for the Cox-Ingersoll-Ross process. Depending on the choice of the parameters of the models, the importance sampling algorithm is either proven to be asymptotically optimal or its good performance is demonstrated in numerical investigations.

    The third paper extends the results from the previous paper to the setting of high-dimensional stochastic processes. Using the method of characteristics, the partial differential equation for the Mañé potential is rewritten as a system of ordinary differential equations which can be efficiently solved. The methodology is used to estimate loss probabilities of large portfolios in the Black-Scholes model and in the stochastic volatility model proposed by Heston. Numerical experiments indicate that the algorithm yields significant variance reduction when compared with standard Monte Carlo simulation.

    In the final paper, an importance sampling algorithm is proposed for computing the probability of voltage collapse in a power system. The power load is modeled by a high-dimensional stochastic process and the sought probability is formulated as an exit problem for the diffusion. A particular challenge is that the boundary of the domain cannot be characterized explicitly. Simulations for two power systems show that the algorithm can be effectively implemented and provides a viable alternative to existing system risk indices.

    The thesis begins with a historical review of mathematical finance, followed by an introduction to importance sampling for stochastic processes.

  • 262.
    Nyquist, Pierre
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Large deviations for weighted empirical measures and processes arising in importance sampling (2013). Licentiate thesis, comprehensive summary (Other academic).
    Abstract [en]

    This thesis consists of two papers related to large deviation results associated with importance sampling algorithms. As the need for efficient computational methods increases, so does the need for theoretical analysis of simulation algorithms. This thesis is mainly concerned with algorithms using importance sampling. Both papers make theoretical contributions to the development of a new approach for analyzing efficiency of importance sampling algorithms by means of large deviation theory.

    In the first paper of the thesis, the efficiency of an importance sampling algorithm is studied using a large deviation result for the sequence of weighted empirical measures that represent the output of the algorithm. The main result is stated in terms of the Laplace principle for the weighted empirical measure arising in importance sampling and it can be viewed as a weighted version of Sanov's theorem. This result is used to quantify the performance of an importance sampling algorithm over a collection of subsets of a given target set as well as quantile estimates. The method of proof is the weak convergence approach to large deviations developed by Dupuis and Ellis.

    The second paper studies moderate deviations of the empirical process analogue of the weighted empirical measure arising in importance sampling. Using moderate deviation results for empirical processes the moderate deviation principle is proved for weighted empirical processes that arise in importance sampling. This result can be thought of as the empirical process analogue of the main result of the first paper and the proof is established using standard techniques for empirical processes and Banach space valued random variables. The moderate deviation principle for the importance sampling estimator of the tail of a distribution follows as a corollary. From this, moderate deviation results are established for importance sampling estimators of two risk measures: The quantile process and Expected Shortfall. The results are proved using a delta method for large deviations established by Gao and Zhao (2011) together with more classical results from the theory of large deviations.

    The thesis begins with an informal discussion of stochastic simulation, in particular importance sampling, followed by short mathematical introductions to large deviations and importance sampling.

  • 263.
    Nyquist, Pierre
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics. Brown Univ, USA.
    Moderate deviation principles for importance sampling estimators of risk measures (2017). In: Journal of Applied Probability, ISSN 0021-9002, E-ISSN 1475-6072, Vol. 54, no 2, p. 490-506. Article in journal (Refereed).
    Abstract [en]

    Importance sampling has become an important tool for the computation of extreme quantiles and tail-based risk measures. For estimation of such nonlinear functionals of the underlying distribution, the standard efficiency analysis is not necessarily applicable. In this paper we therefore study importance sampling algorithms by considering moderate deviations of the associated weighted empirical processes. Using a delta method for large deviations, combined with classical large deviation techniques, the moderate deviation principle is obtained for importance sampling estimators of two of the most common risk measures: value at risk and expected shortfall.

  • 264.
    Nyquist, Pierre
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Moderate deviation principles for importance sampling estimators of risk measures (2017). In: Journal of Applied Probability, ISSN 0021-9002, E-ISSN 1475-6072. Article in journal (Refereed).
    Abstract [en]

    Importance sampling has become an important tool for the computation of tail-based risk measures. Since such quantities are often determined mainly by rare events, standard Monte Carlo can be inefficient, and importance sampling provides a way to speed up computations. This paper considers moderate deviations for the weighted empirical process, the process analogue of the weighted empirical measure, arising in importance sampling. The moderate deviation principle is established as an extension of existing results. Using a delta method for large deviations established by Gao and Zhao (Ann. Statist., 2011) together with classical large deviation techniques, the moderate deviation principle for the weighted empirical process is extended to functionals of the weighted empirical process which correspond to risk measures. The main results are moderate deviation principles for importance sampling estimators of the quantile function of a distribution and Expected Shortfall.
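    The speed-up referred to above can be seen in a generic textbook example (added here for illustration; it is unrelated to the paper's weighted-empirical-process analysis): estimating a Gaussian tail probability by plain Monte Carlo versus importance sampling with a mean shift.

```python
# Textbook importance-sampling example (unrelated to the paper's analysis):
# estimate P(X > 4) for X ~ N(0, 1) with plain Monte Carlo and with a mean shift.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
a, n = 4.0, 100_000
print("true value:          ", stats.norm.sf(a))

x = rng.normal(size=n)                               # plain Monte Carlo
print("plain Monte Carlo:   ", (x > a).mean())       # almost never hits the event

y = rng.normal(a, 1.0, size=n)                       # sample from N(a, 1) instead
w = stats.norm.pdf(y) / stats.norm.pdf(y, loc=a)     # likelihood ratio weights
print("importance sampling: ", np.mean((y > a) * w))
```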

  • 265.
    Nyquist, Pierre
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    On large deviations and design of efficient importance sampling algorithms (2014). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    This thesis consists of four papers, presented in Chapters 2-5, on the topics large deviations and stochastic simulation, particularly importance sampling. The four papers make theoretical contributions to the development of a new approach for analyzing efficiency of importance sampling algorithms by means of large deviation theory, and to the design of efficient algorithms using the subsolution approach developed by Dupuis and Wang (2007).

    In the first two papers of the thesis, the random output of an importance sampling algorithm is viewed as a sequence of weighted empirical measures and weighted empirical processes, respectively. The main theoretical results are a Laplace principle for the weighted empirical measures (Paper 1) and a moderate deviation result for the weighted empirical processes (Paper 2). The Laplace principle for weighted empirical measures is used to propose an alternative measure of efficiency based on the associated rate function. The moderate deviation result for weighted empirical processes is an extension of what can be seen as the empirical process version of Sanov's theorem. Together with a delta method for large deviations, established by Gao and Zhao (2011), we show moderate deviation results for importance sampling estimators of the risk measures Value-at-Risk and Expected Shortfall.

    The final two papers of the thesis are concerned with the design of efficient importance sampling algorithms using subsolutions of partial differential equations of Hamilton-Jacobi type (the subsolution approach).

    In Paper 3 we show a min-max representation of viscosity solutions of Hamilton-Jacobi equations. In particular, the representation suggests a general approach for constructing subsolutions to equations associated with terminal value problems and exit problems. Since the design of efficient importance sampling algorithms is connected to such subsolutions, the min-max representation facilitates the construction of efficient algorithms.

    In Paper 4 we consider the problem of constructing efficient importance sampling algorithms for a certain type of Markovian intensity model for credit risk. The min-max representation of Paper 3 is used to construct subsolutions to the associated Hamilton-Jacobi equation and the corresponding importance sampling algorithms are investigated both theoretically and numerically.

    The thesis begins with an informal discussion of stochastic simulation, followed by brief mathematical introductions to large deviations and importance sampling. 

  • 266. Nyström, Kaj
    et al.
    Önskog, Thomas
    Remarks on the Skorohod problem and reflected Lévy driven SDEs in time-dependent domains (2015). In: Stochastics: An International Journal of Probability and Stochastic Processes, ISSN 1744-2508, E-ISSN 1744-2516, Vol. 87, no 5, p. 747-765. Article in journal (Refereed).
    Abstract [en]

    We consider the Skorohod problem for cadlag functions, and the subsequent construction of solutions to normally reflected stochastic differential equations driven by Lévy processes, in the setting of non-smooth and time-dependent domains.

  • 267. Nyström, Kaj
    et al.
    Önskog, Thomas
    The Skorohod oblique reflection problem in time-dependent domains (2010). In: Annals of Probability, ISSN 0091-1798, E-ISSN 2168-894X, Vol. 38, no 6, p. 2170-2223. Article in journal (Refereed).
    Abstract [en]

    The deterministic Skorohod problem plays an important role in the construction and analysis of diffusion processes with reflection. In the form studied here, the multidimensional Skorohod problem was introduced, in time-independent domains, by H. Tanaka [61] and further investigated by P.-L. Lions and A.-S. Sznitman [42] in their celebrated article. Subsequent results of several researchers have resulted in a large literature on the Skorohod problem in time-independent domains. In this article we conduct a thorough study of the multidimensional Skorohod problem in time-dependent domains. In particular, we prove the existence of cadlag solutions (x, lambda) to the Skorohod problem, with oblique reflection, for (D, Gamma, w) assuming, in particular, that D is a time-dependent domain (Theorem 1.2). In addition, we prove that if w is continuous, then x is continuous as well (Theorem 1.3). Subsequently, we use the established existence results to construct solutions to stochastic differential equations with oblique reflection (Theorem 1.9) in time-dependent domains. In the process of proving these results we establish a number of estimates for solutions to the Skorohod problem with bounded jumps and, in addition, several results concerning the convergence of sequences of solutions to Skorohod problems in the setting of time-dependent domains.

  • 268. Nyström, Kaj
    et al.
    Önskog, Thomas
    Weak approximation of obliquely reflected diffusions in time-dependent domains (2010). In: Journal of Computational Mathematics, ISSN 0254-9409, E-ISSN 1991-7139, Vol. 28, no 5, p. 579-605. Article in journal (Refereed).
    Abstract [en]

    In an earlier paper, we proved the existence of solutions to the Skorohod problem with oblique reflection in time-dependent domains and, subsequently, applied this result to the problem of constructing solutions, in time-dependent domains, to stochastic differential equations with oblique reflection. In this paper we use these results to construct weak approximations of solutions to stochastic differential equations with oblique reflection, in time-dependent domains in R-d, by means of a projected Euler scheme. We prove that the constructed method has, as is the case for normal reflection and time-independent domains, an order of convergence equal to 1/2 and we evaluate the method empirically by means of two numerical examples. Furthermore, using a well-known extension of the Feynman-Kac formula, to stochastic differential equations with reflection, our method gives, in addition, a Monte Carlo method for solving second order parabolic partial differential equations with Robin boundary conditions in time-dependent domains.
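    The projected Euler idea can be illustrated with a one-dimensional toy example (a sketch added here under assumed dynamics; the paper treats oblique reflection in time-dependent domains in R^d): take an unconstrained Euler-Maruyama step and then project the state back onto the current domain.

```python
# Toy sketch of a projected Euler scheme in one dimension (normal reflection by
# projection); the paper treats oblique reflection in time-dependent domains in
# R^d, so this only illustrates the "Euler step, then project" idea.
import numpy as np

def projected_euler(x0, drift, vol, lower, upper, T, n_steps, rng):
    """Euler-Maruyama step followed by projection onto [lower(t), upper(t)]."""
    dt = T / n_steps
    x = x0
    for k in range(n_steps):
        t = (k + 1) * dt
        x = x + drift(x) * dt + vol(x) * np.sqrt(dt) * rng.normal()
        x = min(max(x, lower(t)), upper(t))    # projection = reflection in 1-D
    return x

rng = np.random.default_rng(4)
terminal = [projected_euler(0.5,
                            drift=lambda x: -x,
                            vol=lambda x: 0.4,
                            lower=lambda t: 0.1 * np.sin(2 * np.pi * t),  # moving lower boundary
                            upper=lambda t: 1.0 + 0.1 * t,                # moving upper boundary
                            T=1.0, n_steps=500, rng=rng)
            for _ in range(2000)]
print("mean terminal value:", np.mean(terminal))
```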

  • 269.
    Näsman, P
    et al.
    KTH, School of Architecture and the Built Environment (ABE), Centres, Centre for Transport Studies, CTS. KTH, School of Architecture and the Built Environment (ABE), Transport Science, Transport and Location Analysis.
    Thedéen, T
    Valdeltagande, bebyggelsetyp och röstandelar i storstadsområdena Stockholm, Göteborg och Malmö vid riksdagsvalen 1982, 1985 och 1988 [Voter turnout, settlement type and vote shares in the metropolitan areas of Stockholm, Gothenburg and Malmö in the 1982, 1985 and 1988 parliamentary elections] (1990). Report (Other academic).
  • 270.
    Näsman, Per
    KTH, School of Architecture and the Built Environment (ABE), Centres, Centre for Transport Studies, CTS. KTH, School of Architecture and the Built Environment (ABE), Transport Science, Transport and Location Analysis.
    Jan Gustavsson, mentor och vän: Festskrift med anledning av att Jan Gustavsson, Statistiska Institutionen, går i pension [Jan Gustavsson, mentor and friend: Festschrift on the occasion of the retirement of Jan Gustavsson, Department of Statistics] (1998). Other (Other (popular science, discussion, etc.)).
  • 271.
    Olsson, Jimmy
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Pavlenko, Tatjana
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Rios, Felix
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Bayesian structure learning in graphical models using sequential Monte Carlo. Manuscript (preprint) (Other academic).
    Abstract [en]

    In this paper we present a family of algorithms, the junction tree expanders, for expanding junction trees in the sense that the number of nodes in the underlying decomposable graph is increased by one. The family of junction tree expanders is equipped with a number of theoretical results including a characterization stating that every junction tree, and consequently every decomposable graph, can be constructed by iteratively using a junction tree expander. Further, an important feature of a stochastic implementation of a junction tree expander is the Markovian property inherent to the tree propagation dynamics. Using this property, a sequential Monte Carlo algorithm for approximating a probability distribution defined on the space of decomposable graphs is developed with the junction tree expander as a proposal kernel. Specifically, we apply the sequential Monte Carlo algorithm for structure learning in decomposable Gaussian graphical models where the target distribution is a junction tree posterior distribution. In this setting, posterior parametric inference on the underlying decomposable graph is a direct by-product of the suggested methodology; working with the G-Wishart family of conjugate priors, we derive a closed form expression for the Bayesian estimator of the precision matrix of Gaussian graphical models Markov with respect to a decomposable graph. Performance accuracy of the graph and parameter estimators is illustrated through a collection of numerical examples demonstrating the feasibility of the suggested approach in high-dimensional domains.

  • 272.
    Olsson, Jimmy
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Pavlenko, Tatjana
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Rios, Felix Leopoldo
    Bayesian inference in decomposable graphical models using sequential Monte Carlo methods. Manuscript (preprint) (Other academic).
    Abstract [en]

    In this study we present a sequential sampling methodology for Bayesian inference in decomposable graphical models. We recast the problem of graph estimation, which in general lacks a natural sequential interpretation, into a sequential setting. Specifically, we propose a recursive Feynman-Kac model which generates a flow of junction tree distributions over a space of increasing dimensions and develop an efficient sequential Monte Carlo sampler. As a key ingredient of the proposal kernel in our sampler we use the Christmas tree algorithm developed in the companion paper Olsson et al. [2017]. We focus on particle MCMC methods, in particular particle Gibbs (PG), as it allows for generating MCMC chains with global moves on an underlying space of decomposable graphs. To further improve the mixing properties of this PG algorithm, we incorporate a systematic refreshment step implemented through direct sampling from a backward kernel. The theoretical properties of the algorithm are investigated, showing in particular that the refreshment step improves the algorithm performance in terms of asymptotic variance of the estimated distribution. Performance accuracy of the graph estimators is illustrated through a collection of numerical examples demonstrating the feasibility of the suggested approach in both discrete and continuous graphical models.

  • 273.
    Olsson, Jimmy
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Pavlenko, Tatjana
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Rios, Felix Leopoldo
    Generating junction trees of decomposable graphs with the Christmas tree algorithm. Manuscript (preprint) (Other academic).
    Abstract [en]

    The junction tree representation provides an attractive structural property for organizing a decomposable graph. In this study, we present a novel stochastic algorithm, which we call the Christmas tree algorithm, for building junction trees sequentially by adding one node at a time to the underlying decomposable graph. The algorithm has two important theoretical properties. Firstly, every junction tree, and hence every decomposable graph, has positive probability of being generated. Secondly, the transition probability from one tree to another has a tractable expression. These two properties, along with the reversed version of the proposed algorithm, are key ingredients in the construction of a sequential Monte Carlo sampling scheme for approximating distributions over decomposable graphs, see Olsson et al. [2016]. As an illustrating example, we specify a distribution over the space of junction trees and estimate the number of decomposable graphs through the normalizing constant.

  • 274. Olsson, Jimmy
    et al.
    Rydén, Tobias
    Lund University.
    Asymptotic properties of particle filter-based maximum likelihood estimators for state space models (2008). In: Stochastic Processes and their Applications, ISSN 0304-4149, E-ISSN 1879-209X, Vol. 118, no 4, p. 649-680. Article in journal (Refereed).
    Abstract [en]

    We study the asymptotic performance of approximate maximum likelihood estimators for state space models obtained via sequential Monte Carlo methods. The state space of the latent Markov chain and the parameter space are assumed to be compact. The approximate estimates are computed by, firstly, running possibly dependent particle filters on a fixed grid in the parameter space, yielding a pointwise approximation of the log-likelihood function. Secondly, extensions of this approximation to the whole parameter space are formed by means of piecewise constant functions or B-spline interpolation, and approximate maximum likelihood estimates are obtained through maximization of the resulting functions. In this setting we formulate criteria for how to increase the number of particles and the resolution of the grid in order to produce estimates that are consistent and asymptotically normal.
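    The estimator studied in the paper can be pictured with a small toy example (a sketch added here; the linear-Gaussian model and all settings are illustrative, not the paper's): a bootstrap particle filter approximates the log-likelihood on a grid of parameter values, and the approximate MLE is the maximizer of the gridded approximation.

```python
# Toy version of the two-stage estimator: approximate the log-likelihood with a
# bootstrap particle filter on a grid of parameter values, then maximize the
# gridded approximation. The scalar linear-Gaussian model is illustrative only.
import numpy as np

def pf_loglik(y, phi, sigma_x=1.0, sigma_y=1.0, n_part=500, seed=0):
    """Bootstrap particle filter estimate of log p(y_{1:T} | phi) for
    x_t = phi*x_{t-1} + N(0, sigma_x^2), y_t = x_t + N(0, sigma_y^2)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_part)
    ll = 0.0
    for yt in y:
        x = phi * x + rng.normal(0.0, sigma_x, n_part)        # propagate particles
        logw = -0.5 * ((yt - x) / sigma_y) ** 2               # log-weights (up to constant)
        w = np.exp(logw - logw.max())
        ll += logw.max() + np.log(w.mean()) - 0.5 * np.log(2 * np.pi * sigma_y**2)
        x = rng.choice(x, size=n_part, p=w / w.sum())         # multinomial resampling
    return ll

rng = np.random.default_rng(5)
T, phi_true = 300, 0.7
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi_true * x[t - 1] + rng.normal()
y = x + rng.normal(size=T)

grid = np.linspace(0.3, 0.95, 14)                             # fixed parameter grid
ll = [pf_loglik(y, phi) for phi in grid]
print("approximate MLE of phi:", grid[int(np.argmax(ll))])    # close to 0.7
```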

  • 275. Olsson, Jimmy
    et al.
    Rydén, Tobias
    Lund University.
    Particle filter-based approximate maximum likelihood inference asymptotics in state-space models (2007). In: ESAIM: Proc. Volume 19, 2007, Conference Oxford sur les méthodes de Monte Carlo séquentielles / [ed] Andrieu, C. and Crisan, D., 2007, p. 115-120. Conference paper (Refereed).
    Abstract [en]

    To implement maximum likelihood estimation in state-space models, the log-likelihood function must be approximated. We study such approximations based on particle filters, and in particular conditions for consistency of the corresponding approximate maximum likelihood estimator. Numerical results illustrate the theory.

  • 276.
    Olsson, Jimmy
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Westerborn, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    An efficient particle-based online EM algorithm for general state-space models. Manuscript (preprint) (Other academic).
  • 277.
    Olsson, Jimmy
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Westerborn, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Efficient particle-based online smoothing in general hidden Markov models: the PaRIS algorithm. Manuscript (preprint) (Other academic).
  • 278.
    Olsson, Jimmy
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Westerborn, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Particle-based adaptive-lag online marginal smoothing in general state-space models. Manuscript (preprint) (Other academic).
  • 279.
    Olsson, Jimmy
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Westerborn, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Particle-based, online estimation of tangent filters with application to parameter estimation in nonlinear state-space models. Manuscript (preprint) (Other academic).
  • 280.
    Olsén, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Logistic regression modelling for STHR analysis (2014). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Coronary artery disease (CAD) is a common condition which can impair the quality of life and lead to cardiac infarctions. Traditional criteria during exercise tests are good but far from perfect. Many patients with inconclusive tests are referred to radiological examinations. By finding better evaluation criteria for the exercise test we can save a lot of money and spare patients unnecessary examinations.

    Computers record large amounts of numerical data during the exercise test. In this retrospective study, 267 patients with inconclusive exercise tests and performed radiological examinations were included. The purpose was to use clinical considerations as well as mathematical statistics to find new diagnostic criteria.

    We created a few new parameters and evaluated them together with previously used parameters. For women we found some interesting univariable results where new parameters discriminated better than the formerly used ones. However, the number of females with observed CAD was small (14), which made it impossible to obtain strong significance. For men we computed a multivariable model, using logistic regression, which discriminates considerably better than the traditional parameters for these patients. The area under the ROC curve was 0.90 (95% CI: 0.83-0.97), which corresponds to excellent to outstanding discrimination in a group initially included due to their inconclusive results.

    If the model can be shown to hold for another population, it could contribute substantially to the diagnostics of this common medical condition.

  • 281.
    Orrenius, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Optimal mass transport: a viable alternative to copulas in financial risk modeling? (2018). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Copulas are today a common way of describing joint probability distributions when modeling financial risk. The optimal mass transport problem also describes dependence structures, although it is not well explored in this context. This thesis explores the dependence structures of the entropy-regularized optimal mass transport problem. The basic copula properties are replicated for the optimal mass transport problem. Estimation of the parameters of the optimal mass transport problem is attempted using a maximum likelihood analogy, but is only successful when observing general tendencies on a grid of the parameters.

  • 282.
    Osika, Anton
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Statistical analysis of online linguistic sentiment measures with financial applications (2015). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Gavagai is a company that uses different methods to aggregate sentiment towards specific topics from a large stream of real time published documents. Gavagai wants to find a procedure to decide which way of measuring sentiment (sentiment measure) towards a topic is most useful in a given context. This work discusses what criteria are desirable for aggregating sentiment and derives and evaluates procedures to select "optimal" sentiment measures.

    Three novel models for selecting a set of sentiment measures that describe independent attributes of the aggregated data are evaluated. The models can be summarized as: maximizing the variance of the last principal component of the data, maximizing the differential entropy of the data and, in the special case of selecting an additional sentiment measure, maximizing the unexplained variance conditional on the previous sentiment measures.

    When exogenous time-varying data concerning a topic is available, the data can be used to select the sentiment measure that best explains the data. With this goal in mind, the hypothesis that sentiment data can be used to predict financial volatility and political poll data is tested. The null hypothesis cannot be rejected.

    A framework for aggregating sentiment measures in a mathematically coherent way is summarized in a road map.

     

  • 283.
    Owrang, Arash
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Information Science and Engineering. KTH Royal Inst Technol, Dept Informat Sci & Engn, SE-10044 Stockholm, Sweden; KTH Royal Inst Technol, ACCESS Linnaeus Ctr, SE-10044 Stockholm, Sweden.
    Jansson, Magnus
    KTH, School of Electrical Engineering and Computer Science (EECS), Information Science and Engineering.
    A Model Selection Criterion for High-Dimensional Linear Regression (2018). In: IEEE Transactions on Signal Processing, ISSN 1053-587X, E-ISSN 1941-0476, Vol. 66, no 13, p. 3436-3446. Article in journal (Refereed).
    Abstract [en]

    Statistical model selection is a great challenge when the number of accessible measurements is much smaller than the dimension of the parameter space. We study the problem of model selection in the context of subset selection for high-dimensional linear regressions. Accordingly, we propose a new model selection criterion with the Fisher information that leads to the selection of a parsimonious model from all the combinatorial models up to some maximum level of sparsity. We analyze the performance of our criterion as the number of measurements grows to infinity, as well as when the noise variance tends to zero. In each case, we prove that our proposed criterion gives the true model with a probability approaching one. Additionally, we devise a computationally affordable algorithm to conduct model selection with the proposed criterion in practice. Interestingly, as a side product, our algorithm can provide the ideal regularization parameter for the Lasso estimator such that Lasso selects the true variables. Finally, numerical simulations are included to support our theoretical findings.

  • 284.
    Paajanen, Sara
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Model Risk in Economic Capital Models (2016). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    With increasingly complex financial markets, many financial institutions rely on mathematical models to estimate their risk exposure. These models are subject to a relatively unexplored risk type known as model risk. This study aims to quantify the model risk associated with the top-down aggregation of different risk types when computing the economic capital of a financial institution. The types of aggregation models considered combines the risks of a firm into a final economic capital value through the use of a joint distribution function or some other summation method. Specifically, the variance-covariance method and some common elliptical and Archimedean copulas are considered.

    The scope of this study is limited to estimating the parameter estimation risk and the misspecification risk of these aggregation models. Seven model risk measures are presented that are intended to measure the sensitivity of the models to model risk. These risk measures are based on existing approaches to model risk and also utilize the Rearrangement Algorithm developed by Embrechts et al. (2013).

    The study shows that the variance-covariance method, the Gaussian copula and the Student's t copulas with many degrees of freedom tend to carry the highest parameter estimation risk of the models tested. The Cauchy copula and the Archimedean copulas have significantly lower parameter estimation risk and are thus less sensitive to their input parameters. When testing for misspecification risk the heavy-tailed Cauchy and Gumbel copulas carry the least amount of risk while the variance-covariance method and the lighter tailed copulas are more risky. The study also shows that none of the models considered come close to the theoretical upper bound of the economic capital, putting into question the common assumption that a Gaussian copula with perfect correlation between all of the risk types of a firm will yield a conservative value of the economic capital.
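    As a concrete picture of the aggregation models compared above, the sketch below contrasts the variance-covariance formula with simulation from a Gaussian copula for two risk types. The margins, correlation and confidence level are assumptions for illustration; the Rearrangement Algorithm and the thesis' risk measures are not reproduced.

```python
# Illustrative aggregation of two risk types: variance-covariance formula versus
# simulation from a Gaussian copula. Margins, correlation and confidence level
# are assumptions for this sketch; the Rearrangement Algorithm is not reproduced.
import numpy as np
from scipy import stats

rho = np.array([[1.0, 0.5], [0.5, 1.0]])           # assumed correlation between risk types
alpha, n = 0.995, 200_000
margins = [stats.lognorm(s=0.4), stats.lognorm(s=0.6)]

# stand-alone capital per risk type and variance-covariance aggregation
var_alone = np.array([m.ppf(alpha) for m in margins])
var_cov = np.sqrt(var_alone @ rho @ var_alone)

# Gaussian-copula aggregation by simulation
rng = np.random.default_rng(6)
z = rng.multivariate_normal(np.zeros(2), rho, size=n)
u = stats.norm.cdf(z)
total = sum(m.ppf(u[:, i]) for i, m in enumerate(margins))
var_copula = np.quantile(total, alpha)

print(f"variance-covariance: {var_cov:.3f}, Gaussian copula: {var_copula:.3f}")
```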

  • 285.
    Palikuca, Aleksandar
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Seidl, Timo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Predicting High Frequency Exchange Rates using Machine Learning (2016). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    This thesis applies a committee of Artificial Neural Networks and Support Vector Machines on high-dimensional, high-frequency EUR/USD exchange rate data in an effort to predict directional market movements on up to a 60 second prediction horizon. The study shows that combining multiple classifiers into a committee produces improved precision relative to the best individual committee members and outperforms previously reported results. A trading simulation implementing the committee classifier yields promising results and highlights the possibility of developing a profitable trading strategy based on the limit order book and historical transactions alone.
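    A committee in the above sense can be put together directly in scikit-learn; the sketch below combines an MLP and an SVM by soft voting on synthetic data. The features, network architecture and kernel settings are placeholders, not the thesis' order-book setup.

```python
# Committee of an MLP and an SVM combined by soft voting in scikit-learn;
# synthetic features stand in for the limit-order-book data, and the network
# and kernel settings are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=4000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

committee = VotingClassifier(
    estimators=[("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)),
                ("svm", SVC(kernel="rbf", probability=True, random_state=0))],
    voting="soft")                                  # average the predicted probabilities
committee.fit(X_tr, y_tr)
print("precision:", precision_score(y_te, committee.predict(X_te)))
```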

  • 286.
    Palmborg, Lina
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    On Constructing a Market Consistent Economic Scenario Generator (2011). Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
  • 287.
    Pavlenko, Tatjana
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Björkström, Anders
    Stockholm Univ, Stockholm, Sweden.
    Tillander, Annika
    Stockholm Univ, Stockholm, Sweden.
    Covariance structure approximation via gLasso in high-dimensional supervised classification (2012). In: Journal of Applied Statistics, ISSN 0266-4763, E-ISSN 1360-0532, Vol. 39, no 8, p. 1643-1666. Article in journal (Refereed).
    Abstract [en]

    Recent work has shown that Lasso-based regularization is very useful for estimating the high-dimensional inverse covariance matrix. A particularly useful scheme is based on penalizing the l1 norm of the off-diagonal elements to encourage sparsity. We embed this type of regularization into high-dimensional classification. A two-stage estimation procedure is proposed which first recovers structural zeros of the inverse covariance matrix and then enforces block sparsity by moving non-zeros closer to the main diagonal. We show that the block-diagonal approximation of the inverse covariance matrix leads to an additive classifier, and demonstrate that accounting for the structure can yield better performance accuracy. The effect of the block size on classification is explored, and a class of asymptotically equivalent structure approximations in a high-dimensional setting is specified. We suggest a variable selection at the block level and investigate properties of this procedure in growing dimension asymptotics. We present a consistency result on the feature selection procedure, establish asymptotic lower and upper bounds for the fraction of separative blocks and specify constraints under which reliable classification with block-wise feature selection can be performed. The relevance and benefits of the proposed approach are illustrated on both simulated and real data.
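    The first stage of such a scheme can be sketched with scikit-learn's graphical lasso: estimate a sparse precision matrix per class and classify by the resulting Gaussian log-densities. This is an illustration only; the block-sparsity refinement and asymptotic analysis of the paper are not reproduced, and the data and penalty value are synthetic.

```python
# Graphical-lasso-based Gaussian classifier on synthetic data: estimate a sparse
# precision matrix per class and classify by the Gaussian log-densities.
# The block-sparsity refinement of the paper is not reproduced here.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(7)
p, n = 20, 300
mu = {0: np.zeros(p), 1: np.full(p, 0.4)}
X = {c: rng.normal(mu[c], 1.0, size=(n, p)) for c in (0, 1)}   # training data per class

models = {}
for c in (0, 1):
    gl = GraphicalLasso(alpha=0.1).fit(X[c])                   # sparse inverse covariance
    _, logdet = np.linalg.slogdet(gl.precision_)
    models[c] = (X[c].mean(axis=0), gl.precision_, logdet)

def classify(x):
    """Assign x to the class with the largest Gaussian log-density (up to a constant)."""
    scores = {c: 0.5 * logdet - 0.5 * (x - m) @ K @ (x - m)
              for c, (m, K, logdet) in models.items()}
    return max(scores, key=scores.get)

x_new = rng.normal(mu[1], 1.0, size=p)
print("predicted class:", classify(x_new))                      # expected: 1
```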

  • 288.
    Pavlenko, Tatjana
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Rios, Felix Leopoldo
    Graphical posterior predictive classifier: Bayesian model averaging with particle Gibbs. Manuscript (preprint) (Other academic).
    Abstract [en]

    In this study, we present a multi-class graphical Bayesian predictive classifier that incorporates the uncertainty in the model selection into the standard Bayesian formalism. For each class, the dependence structure underlying the observed features is represented by a set of decomposable Gaussian graphical models. Emphasis is then placed on the Bayesian model averaging which takes full account of the class-specific model uncertainty by averaging over the posterior graph model probabilities. An explicit evaluation of the model probabilities is well known to be infeasible. To address this issue, we consider the particle Gibbs strategy of Olsson et al. (2016) for posterior sampling from decomposable graphical models, which utilizes the Christmas tree algorithm of Olsson et al. (2017) as proposal kernel. We also derive a strong hyper Markov law, which we call the hyper normal Wishart law, that allows the resultant Bayesian calculations to be performed locally. The proposed predictive graphical classifier reveals superior performance compared to the ordinary Bayesian predictive rule that does not account for the model uncertainty, as well as to a number of out-of-the-box classifiers.

  • 289.
    Perninge, Magnus
    et al.
    Department of Automatic Control, Lund University.
    Söder, Lennart
    KTH, School of Electrical Engineering (EES), Electric Power Systems.
    Irreversible Investments with Delayed Reaction: An Application to Generation Re-Dispatch in Power System Operation (2014). In: Mathematical Methods of Operations Research, ISSN 1432-2994, E-ISSN 1432-5217, Vol. 79, no 2, p. 195-224. Article in journal (Refereed).
    Abstract [en]

    In this article we consider how the operator of an electric power system should activate bids on the regulating power market in order to minimize the expected operation cost. Important characteristics of the problem are the reaction times of actors on the regulating market and ramp-rates for production changes in power plants. Neglecting these will in general lead to major underestimation of the operation cost. Including reaction times and ramp-rates leads to an impulse control problem with delayed reaction. Two numerical schemes to solve this problem are proposed. The first scheme is based on the least-squares Monte Carlo method developed by Longstaff and Schwartz (Rev Financ Stud 14:113-148, 2001). The second scheme, which turns out to be more efficient when solving problems with delays, is based on the regression Monte Carlo method developed by Tsitsiklis and van Roy (IEEE Trans Autom Control 44(10):1840-1851, 1999) and (IEEE Trans Neural Netw 12(4):694-703, 2001). The main contribution of the article is the idea of using stochastic control to find an optimal strategy for power system operation and the numerical solution schemes proposed to solve impulse control problems with delayed reaction.
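    The least-squares Monte Carlo regression step referred to above (Longstaff-Schwartz) is illustrated below on a toy Bermudan put rather than the power-system problem; all market parameters and the polynomial regression basis are assumptions made only for this sketch.

```python
# Toy Longstaff-Schwartz (least-squares Monte Carlo) valuation of a Bermudan put,
# illustrating the regression step the paper builds on; market parameters and the
# quadratic regression basis are assumptions made only for this sketch.
import numpy as np

rng = np.random.default_rng(8)
S0, K, rate, sigma, T, n_steps, n_paths = 1.0, 1.0, 0.03, 0.2, 1.0, 50, 20_000
dt = T / n_steps
disc = np.exp(-rate * dt)

# simulate geometric Brownian motion paths
z = rng.normal(size=(n_paths, n_steps))
S = S0 * np.exp(np.cumsum((rate - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
S = np.hstack([np.full((n_paths, 1), S0), S])

payoff = lambda s: np.maximum(K - s, 0.0)
V = payoff(S[:, -1])                                   # value at maturity
for t in range(n_steps - 1, 0, -1):
    itm = payoff(S[:, t]) > 0                          # regress only on in-the-money paths
    coeffs = np.polyfit(S[itm, t], disc * V[itm], 2)   # continuation value ~ quadratic in S
    cont = np.polyval(coeffs, S[:, t])
    exercise = itm & (payoff(S[:, t]) > cont)
    V = np.where(exercise, payoff(S[:, t]), disc * V)
print("Bermudan put value:", disc * V.mean())
```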

  • 290.
    Philip, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    The Area of a Random Convex Polygon2004Report (Other academic)
    Abstract [en]

    We consider the area of the convex hull of n random points in a square. We give the distribution function of the area for three and four points. We also present some results on the number of vertices of the convex hull. Results from Monte Carlo tests with large n are presented and compared with asymptotic estimates.
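    A minimal Monte Carlo check in the spirit of the large-n experiments mentioned above (the report's exact distribution functions are not reproduced): estimate the mean hull area and the mean number of vertices for n uniform points in the unit square.

import numpy as np
from scipy.spatial import ConvexHull

def hull_stats(n, trials=20000, seed=0):
    rng = np.random.default_rng(seed)
    areas = np.empty(trials)
    n_vertices = np.empty(trials)
    for i in range(trials):
        hull = ConvexHull(rng.random((n, 2)))
        areas[i] = hull.volume            # in 2D, .volume is the enclosed area
        n_vertices[i] = len(hull.vertices)
    return areas.mean(), n_vertices.mean()

# For n = 3 the exact mean area is 11/144 (about 0.0764), which the
# simulation should approximate.
print(hull_stats(3))
print(hull_stats(100))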

  • 291.
    Philip, Johan
    KTH, School of Engineering Sciences (SCI).
    The area of a random triangle in a regular hexagon2010Report (Other academic)
    Abstract [en]

    We determine the distribution function for the area of a random triangle in a regular hexagon.

  • 292.
    Philip, Johan
    KTH, School of Engineering Sciences (SCI).
    The area of a random triangle in a regular pentagon and the golden ratio2012Report (Other academic)
    Abstract [en]

    We determine the distribution function for the area of a random triangle in a regular pentagon. It turns out that the golden ratio is intimately related to the pentagon calculations.

  • 293.
    Philip, Johan
    KTH, School of Engineering Sciences (SCI).
    The area of a random triangle in a square2010Report (Other academic)
    Abstract [en]

    We determine the distribution function for the area of a random triangle in a unit square. The result is not new; the method presented here is worked out to shed more light on the problem.

  • 294.
    Pokorny, Florian T.
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kjellström, Hedvig
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Persistent Homology for Learning Densities with Bounded Support2012In: Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012 / [ed] P. Bartlett, F.C.N. Pereira, C.J.C. Burges, L. Bottou and K.Q. Weinberger, Curran Associates, Inc., 2012, p. 1817-1825Conference paper (Refereed)
    Abstract [en]

    We present a novel method for learning densities with bounded support which enables us to incorporate 'hard' topological constraints. In particular, we show how emerging techniques from computational algebraic topology and the notion of persistent homology can be combined with kernel-based methods from machine learning for the purpose of density estimation. The proposed formalism facilitates learning of models with bounded support in a principled way, and - by incorporating persistent homology techniques in our approach - we are able to encode algebraic-topological constraints which are not addressed in current state-of-the-art probabilistic models. We study the behaviour of our method on two synthetic examples for various sample sizes and exemplify the benefits of the proposed approach on a real-world dataset by learning a motion model for a race car. We show how to learn a model which respects the underlying topological structure of the racetrack, constraining the trajectories of the car.
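    A rough sketch of the bounded-support idea only, assuming a plain Gaussian KDE (scipy's gaussian_kde) truncated to a union of balls around the samples and renormalised by Monte Carlo; the persistent-homology-guided choice of support and bandwidth developed in the paper is not reproduced here.

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
samples = rng.normal(size=(2, 300))       # gaussian_kde expects shape (dim, n)
kde = gaussian_kde(samples)
radius = 0.5                              # support = union of radius-balls

def in_support(points):
    # points has shape (dim, m); keep points within `radius` of some sample
    d2 = ((points[:, :, None] - samples[:, None, :]) ** 2).sum(axis=0)
    return d2.min(axis=1) <= radius**2

# Monte Carlo renormalisation over a bounding box around the support
lo = samples.min(axis=1) - radius
hi = samples.max(axis=1) + radius
box = rng.uniform(lo[:, None], hi[:, None], size=(2, 20000))
mass_inside = np.mean(kde(box) * in_support(box)) * np.prod(hi - lo)

def bounded_density(points):
    # Truncated, renormalised density: zero outside the estimated support
    return np.where(in_support(points), kde(points) / mass_inside, 0.0)

print(bounded_density(np.zeros((2, 1))))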

  • 295.
    Pokorny, Florian T.
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kjellström, Hedvig
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Topological Constraints and Kernel-Based Density Estimation2012Conference paper (Refereed)
    Abstract [en]

    This extended abstract explores the question of how to estimate a probability distribution from a finite number of samples when information about the topology of the support region of an underlying density is known. This workshop contribution is a continuation of our recent work [1], which combined persistent homology and kernel-based density estimation for the first time and explored an approach capable of incorporating topological constraints in bandwidth selection. We report on some recent experiments with high-dimensional motion capture data which show that our method is applicable even in high dimensions and develop our ideas for potential future applications of this framework.

  • 296.
    Prevost, Quentin
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Measurement and valuation of country risk: how to get a right value?2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The purpose of this master thesis is to focus on country risk and its quantification as a premium. Country risk is an important parameter for investors willing to invest abroad, especially in emerging countries, since investing in such countries carries additional risk for numerous reasons. It is thus imperative to be able to quantify it. The current state of the art on this topic is still in its early stages.

    In this master thesis, I have developed two lines of approach for obtaining a country risk premium. The first derives from the Capital Asset Pricing Model and related corporate finance theory. The second is based on a more mathematical approach.

    In the end, I have obtained quantified results with both approaches, and the two methods converge.

    I have applied my results in case studies on two countries: Sweden and Mexico.
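    As an illustration of how such a premium can be quantified (not necessarily the method used in the thesis), a common textbook approach scales the sovereign default spread by the relative volatility of the local equity market versus the sovereign bond market and adds the result to a CAPM-style required return; all figures below are invented.

# Illustrative numbers only
default_spread = 0.025    # sovereign yield spread over a risk-free benchmark
sigma_equity = 0.30       # annualised volatility of the local equity index
sigma_bond = 0.15         # annualised volatility of the sovereign bond

country_risk_premium = default_spread * (sigma_equity / sigma_bond)
print(f"country risk premium: {country_risk_premium:.2%}")        # 5.00%

# CAPM-style required return with the country premium added on top
risk_free, beta, mature_market_erp = 0.02, 1.1, 0.05
required_return = risk_free + beta * mature_market_erp + country_risk_premium
print(f"required return: {required_return:.2%}")                  # 12.50%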

  • 297.
    Pärlstrand, Erik
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Comparing fast- and slow-acting features for short-term price predictions2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis compares two groups of features for short-term price predictions of futures contracts: fast- and slow-acting features. The fast-acting group is based on limit-order-book-derived features and technical indicators that react to price changes quickly. The slow-acting features consist of technical indicators that react to price changes slowly.

    The comparison is done through two methods, group importance and a mean cost calculation, evaluated for different forecast horizons and contracts. Two years of data were provided for the analysis. The comparison is modelled with an ensemble method called random forest, and the response is constructed using rolling quantiles and a volume-weighted price.

    The findings imply that fast-acting features are superior at predicting price changes on smaller time scales, while slow-acting features are better at predicting price changes on larger time scales. Furthermore, the multivariate model results were similar to the univariate ones. However, the results are not clear-cut, and more investigation ought to be done in order to confirm them.
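    A generic sketch of the group-importance idea (the thesis's actual features, response construction and cost measure are not reproduced): fit a random forest, then permute each feature group jointly on validation data and measure the increase in prediction error.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def group_importance(model, X_val, y_val, cols, rng, repeats=10):
    """Mean increase in validation MSE when the columns in `cols` are permuted jointly."""
    base = np.mean((model.predict(X_val) - y_val) ** 2)
    losses = []
    for _ in range(repeats):
        Xp = X_val.copy()
        Xp[:, cols] = X_val[rng.permutation(len(X_val))][:, cols]
        losses.append(np.mean((model.predict(Xp) - y_val) ** 2))
    return np.mean(losses) - base

rng = np.random.default_rng(0)
# Toy data: columns 0-4 stand in for "fast" features, columns 5-9 for "slow" ones
X = rng.normal(size=(2000, 10))
y = X[:, :5].sum(axis=1) + 0.2 * X[:, 5:].sum(axis=1) + rng.normal(size=2000)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("fast group:", group_importance(model, X_val, y_val, list(range(5)), rng))
print("slow group:", group_importance(model, X_val, y_val, list(range(5, 10)), rng))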

  • 298. Radhakrishnan, A.
    et al.
    Solus, Liam
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Uhler, C.
    Counting Markov equivalence classes by number of immoralities2017In: Uncertainty in Artificial Intelligence - Proceedings of the 33rd Conference, UAI 2017, AUAI Press Corvallis , 2017Conference paper (Refereed)
    Abstract [en]

    Two directed acyclic graphs (DAGs) are called Markov equivalent if and only if they have the same underlying undirected graph (i.e. skeleton) and the same set of immoralities. When using observational data alone and typical identifiability assumptions, such as faithfulness, a DAG model can only be determined up to Markov equivalence. Therefore, it is desirable to understand the size and number of Markov equivalence classes (MECs) combinatorially. In this paper, we address this enumerative question using a pair of generating functions that encode the number and size of MECs on a skeleton G, and in doing so we connect this problem to classical problems in combinatorial optimization. The first generating function is a graph polynomial that counts the number of MECs on G by their number of immoralities. Using connections to the independent set problem, we show that computing a DAG on G with the maximum possible number of immoralities is NP-hard. The second generating function counts the MECs on G according to their size. Via computer enumeration, we show that this generating function is distinct for every connected graph on p nodes for all p < 10.
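    A small sketch of the immorality count that the first generating function above is indexed by (this is not the paper's enumeration machinery): an immorality is an induced subgraph a -> c <- b with a and b non-adjacent.

from itertools import combinations

def count_immoralities(parents):
    """Count v-structures a -> c <- b with non-adjacent a and b.

    `parents` maps each node of the DAG to the set of its parents."""
    def adjacent(a, b):
        return b in parents[a] or a in parents[b]
    return sum(1
               for c, pa in parents.items()
               for a, b in combinations(sorted(pa), 2)
               if not adjacent(a, b))

# Example: 1 -> 3 <- 2 with 1 and 2 non-adjacent is the only immorality here.
dag = {1: set(), 2: set(), 3: {1, 2}, 4: {3}}
print(count_immoralities(dag))   # prints 1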

  • 299.
    Rehn, Rasmus
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
    Stochastic modeling of yield curve shifts using functional data analysis2014Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis approaches the problem of modeling the multivariate distribution of interest rates by implementing a novel statistical tool known as functional data analysis (FDA). This is done by viewing yield curve shifts as distinct but continuous stochastic objects defined over a continuum of maturities. Based on these techniques, we provide two stochastic models with different assumptions regarding the temporal dependence of yield curve shifts and compare their performance against empirical data. The study finds that both models replicate the distributions of yield changes for medium- and long-term maturities, whereas neither model performs satisfactorily at the short end of the yield curve. Both models, however, appear to capture the cross-sectional dependence accurately.
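    One typical FDA-style step can be sketched as follows, on simulated data and without reproducing the thesis's two stochastic models: treat daily yield curve shifts as curves over maturity and extract the leading principal shift shapes by an SVD of the centred shift matrix.

import numpy as np

rng = np.random.default_rng(0)
maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 20, 30])
days, m = 500, len(maturities)

# Simulated daily shifts: a level factor, a slope factor and noise
level = rng.normal(0, 0.03, size=(days, 1)) * np.ones((1, m))
slope = rng.normal(0, 0.02, size=(days, 1)) * (maturities / 30)[None, :]
shifts = level + slope + rng.normal(0, 0.005, size=(days, m))

centred = shifts - shifts.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained by the first three components:", explained[:3].round(3))
# The rows of Vt are the principal shift shapes evaluated on the maturity grid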

  • 300.
    Ringh, Emil
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Low complexity algorithms for faster-than-Nyquist signaling: Using coding to avoid an NP-hard problem2013Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This thesis is an investigation of what happens when communication links are pushed towards their limits and the data-bearing pulses are packed tighter in time than previously done. This is called faster-than-Nyquist (FTN) signaling, and it will violate the Nyquist inter-symbol interference criterion, implying that the data pulses are no longer orthogonal and thus that the samples at the receiver will depend on more than one of the transmitted symbols. Inter-symbol interference (ISI) has occurred, and its consequences are studied for the AWGN channel model. Here it is shown that, in order to do maximum likelihood estimation on these samples, the receiver will face an NP-hard problem. The standard algorithm for making good estimates in the ISI case is the Viterbi algorithm, but applied to a block with N bits and interference among K bits the complexity is O(N * 2^K), hence limiting the practical applicability. Here, a precoding scheme is proposed, together with a decoding scheme, that reduces the estimation complexity. By applying the proposed precoding/decoding to a data block of length N, the estimation can be done in O(N^2) operations preceded by a single off-line O(N^3) calculation. The precoding itself is also done in O(N^2) operations, with a single off-line operation of O(N^3) complexity.

    The strength of the precoding is shown in simulations. In the first, it was tested together with turbo codes of code rate 2/3 and block length of 6000 bits. When sending 25% more data (FTN), the non-precoded case needed about 2.5 dB higher signal-to-noise ratio (SNR) to achieve the same error rate as the precoded case. When the precoded case performed without any block errors, the non-precoded case still had a block error rate of almost 1.

    We also studied the scenario of transmission with low latency and high reliability. Here, 600 bits were transmitted with a code rate of 2/3, and hence the target was to communicate 400 bits of data. Applying FTN with double packing, that is, transmitting 1200 bits during the same amount of time, it was possible to lower the code rate to 1/3 since only 400 bits of data were to be communicated. This technique greatly improves the robustness. When the FTN case performed error-free, the classical Nyquist case still had a block error rate of 0.19. To reach error-free performance, the Nyquist case needed 1.25 dB higher SNR compared to the precoded FTN case with the lower code rate.
