1 - 11 of 11
  • 1. Douc, Randal; Maire, Florian; Olsson, Jimmy (KTH, School of Engineering Sciences (SCI), Mathematics (Dept.)).
    On the use of Markov chain Monte Carlo methods for the sampling of mixture models: a statistical perspective. 2015. In: Statistics and Computing, ISSN 0960-3174, E-ISSN 1573-1375, Vol. 25, no. 1, p. 95-110. Article in journal (Refereed).
    Abstract [en]

    In this paper we study asymptotic properties of different data-augmentation-type Markov chain Monte Carlo algorithms sampling from mixture models comprising discrete as well as continuous random variables. Of particular interest to us is the situation where sampling from the conditional distribution of the continuous component given the discrete component is infeasible. In this context, we advance Carlin & Chib's pseudo-prior method as an alternative way of inferring mixture models and discuss and compare different algorithms based on this scheme. We propose a novel algorithm, the Frozen Carlin & Chib sampler, which is computationally less demanding than any Metropolised Carlin & Chib-type algorithm. This significant gain in computational efficiency is, however, obtained at the cost of some additional asymptotic variance. The performance of the algorithm vis-à-vis alternative schemes is investigated theoretically as well as numerically, using some recent results obtained in Maire et al. (Ann. Stat. 42:1483-1510, 2014) for inhomogeneous Markov chains evolving alternately according to two different π-reversible Markov transition kernels.
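
As background, the vanilla data-augmentation scheme that this work departs from alternates between drawing the discrete allocations given the continuous parameters and vice versa. Below is a minimal sketch for a two-component Gaussian mixture with known weights and variance; the model, priors, and all names are illustrative assumptions of this sketch, and the Frozen Carlin & Chib sampler itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data from a two-component Gaussian mixture (weights and sigma known).
sigma, w = 1.0, np.array([0.4, 0.6])
y = np.concatenate([rng.normal(-2.0, sigma, 40), rng.normal(3.0, sigma, 60)])

def gibbs_mixture(y, n_iter=2000, prior_var=100.0):
    """Data-augmentation Gibbs sampler alternating between the discrete
    allocations z and the continuous component means mu."""
    mu = np.array([-1.0, 1.0])
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        # z_i | mu, y: probabilities proportional to w_k * N(y_i; mu_k, sigma^2).
        logp = np.log(w) - 0.5 * ((y[:, None] - mu) / sigma) ** 2
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        z = (rng.random(len(y)) < p[:, 1]).astype(int)
        # mu_k | z, y: conjugate normal update (zero-mean prior).
        for k in range(2):
            yk = y[z == k]
            prec = len(yk) / sigma**2 + 1.0 / prior_var
            mu[k] = rng.normal(yk.sum() / sigma**2 / prec, prec**-0.5)
        draws[t] = mu
    return draws

print(gibbs_mixture(y)[-1000:].mean(axis=0))  # approximate posterior means
```

Label switching is ignored here since the toy components are well separated; the paper's interest is precisely the case where the mu-update above is infeasible and must be replaced by a pseudo-prior construction.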

  • 2. Douc, Randal; Moulines, Eric; Olsson, Jimmy (KTH, School of Engineering Sciences (SCI), Mathematics (Dept.)).
    Long-term stability of sequential Monte Carlo methods under verifiable conditions. 2014. In: The Annals of Applied Probability, ISSN 1050-5164, E-ISSN 2168-8737, Vol. 24, no. 5, p. 1767-1802. Article in journal (Refereed).
    Abstract [en]

    This paper discusses particle filtering in general hidden Markov models (HMMs) and presents novel theoretical results on the long-term stability of bootstrap-type particle filters. More specifically, we establish that the asymptotic variance of the Monte Carlo estimates produced by the bootstrap filter is uniformly bounded in time. Contrary to most previous results of this type, which in general presuppose that the state space of the hidden state process is compact (an assumption that is rarely satisfied in practice), our very mild assumptions are satisfied for a large class of HMMs with possibly non-compact state space. In addition, we derive a similar time-uniform bound on the asymptotic L^p error. Importantly, our results hold for misspecified models; that is, we do not assume that the data entering the particle filter originate from the model governing the dynamics of the particles, or even from an HMM.
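
For reference, here is a minimal sketch of the bootstrap particle filter under study, instantiated for a scalar linear-Gaussian HMM with multinomial resampling at every step (the model and parameter values are assumptions of this sketch, not taken from the paper).

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_filter(y, n_part=1000, phi=0.9, sx=1.0, sy=1.0):
    """Bootstrap filter for X_t = phi X_{t-1} + sx V_t, Y_t = X_t + sy W_t:
    mutate through the state dynamics, reweight by the observation
    density, estimate, then resample."""
    x = rng.normal(0.0, sx / np.sqrt(1 - phi**2), n_part)  # stationary init
    means = []
    for yt in y:
        x = phi * x + sx * rng.standard_normal(n_part)     # mutate
        logw = -0.5 * ((yt - x) / sy) ** 2                  # reweight
        w = np.exp(logw - logw.max()); w /= w.sum()
        means.append(np.sum(w * x))                         # filter mean of X_t
        x = rng.choice(x, size=n_part, p=w)                 # multinomial resampling
    return np.array(means)

# Simulate data and run the filter.
T, phi = 200, 0.9
xs = np.zeros(T)
for t in range(1, T):
    xs[t] = phi * xs[t-1] + rng.standard_normal()
print(bootstrap_filter(xs + rng.standard_normal(T))[:5])
```

The paper's stability results concern the behavior, as T grows, of the asymptotic variance of estimates like `means` above.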

  • 3. Maire, Florian; Douc, Randal; Olsson, Jimmy (KTH, School of Engineering Sciences (SCI), Mathematics (Dept.)).
    Comparison of asymptotic variances of inhomogeneous Markov chains with application to Markov chain Monte Carlo methods. 2014. In: Annals of Statistics, ISSN 0090-5364, E-ISSN 2168-8966, Vol. 42, no. 4, p. 1483-1510. Article in journal (Refereed).
    Abstract [en]

    In this paper, we study the asymptotic variance of sample path averages for inhomogeneous Markov chains that evolve alternately according to two different π-reversible Markov transition kernels P and Q. More specifically, our main result allows us to compare directly the asymptotic variances of two inhomogeneous Markov chains associated with different kernels P_i and Q_i, i ∈ {0, 1}, as soon as the kernels of each pair (P_0, P_1) and (Q_0, Q_1) can be ordered in the sense of lag-one autocovariance. As an important application, we use this result for comparing different data-augmentation-type Metropolis-Hastings algorithms. In particular, we compare some pseudo-marginal algorithms and propose a novel exact algorithm, referred to as the random refreshment algorithm, which is more efficient, in terms of asymptotic variance, than the Grouped Independence Metropolis-Hastings algorithm and has a computational complexity that does not exceed that of the Monte Carlo Within Metropolis algorithm.
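
The orderings in the paper concern asymptotic variances of sample path averages, which can also be probed numerically. Below is a minimal batch-means sketch (a standard estimator, not the paper's machinery) that one might use to compare two kernels on the same target.

```python
import numpy as np

def batch_means_avar(path, n_batches=50):
    """Batch-means estimate of the asymptotic variance of the sample
    path average (1/n) sum_t f(X_t) of a stationary Markov chain."""
    path = np.asarray(path, dtype=float)
    n = len(path) // n_batches * n_batches   # truncate to a multiple
    b = n // n_batches                       # batch length
    means = path[:n].reshape(n_batches, b).mean(axis=1)
    return b * means.var(ddof=1)
```

Running two chains driven by kernels P and Q and comparing these estimates gives a crude empirical counterpart of the lag-one autocovariance ordering.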

  • 4. Olsson, Jimmy (KTH, School of Engineering Sciences (SCI), Mathematics (Dept.)); Douc, Randal (TELECOM SudParis, Dept. CITI, 9 Rue Charles Fourier, F-91000 Evry, France).
    Numerically stable online estimation of variance in particle filters. 2019. In: Bernoulli, ISSN 1350-7265, E-ISSN 1573-9759, Vol. 25, no. 2, p. 1504-1535. Article in journal (Refereed).
    Abstract [en]

    This paper discusses variance estimation in sequential Monte Carlo methods, alternatively termed particle filters. The variance estimator that we propose is a natural modification of that suggested by H.P. Chan and T.L. Lai [Ann. Statist. 41 (2013) 2877-2904], which allows the variance to be estimated in a single run of the particle filter by tracing the genealogical history of the particles. However, due to particle lineage degeneracy, the estimator of the mentioned work becomes numerically unstable as the number of sequential particle updates increases. Thus, by tracing only a part of the particles' genealogy rather than the full one, our estimator gains long-term numerical stability at the cost of a bias. The scope of the genealogical tracing is regulated by a lag, and under mild, easily checked model assumptions, we prove that the bias tends to zero geometrically fast as the lag increases. As confirmed by our numerical results, this allows the bias to be tightly controlled also for moderate particle sample sizes.
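
The truncated-genealogy idea can be sketched as follows for a scalar linear-Gaussian model: each particle's ancestor is traced only `lag` resampling steps back, and the Chan & Lai grouping formula is applied with those ancestors in place of the time-zero Eve indices. This is a hypothetical reconstruction of the general mechanism, not the paper's estimator verbatim.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(2)

def filter_with_lag_variance(y, lag=20, n_part=500, phi=0.9, sx=1.0, sy=1.0):
    """Bootstrap filter returning, per time step, the filter mean of
    h(x) = x together with a Chan & Lai-style variance estimate based
    on the genealogy truncated at `lag` resampling steps."""
    x = rng.normal(0.0, sx / np.sqrt(1 - phi**2), n_part)
    history = deque(maxlen=lag)  # most recent resampling maps, newest first
    out = []
    for yt in y:
        x = phi * x + sx * rng.standard_normal(n_part)
        logw = -0.5 * ((yt - x) / sy) ** 2
        w = np.exp(logw - logw.max()); w /= w.sum()
        est = np.sum(w * x)
        # Ancestor of each particle `lag` steps back (time 0 early on).
        anc = np.arange(n_part)
        for idx in history:
            anc = idx[anc]
        # Group weighted discrepancies by truncated-genealogy ancestor.
        disc = np.zeros(n_part)
        np.add.at(disc, anc, w * (x - est))
        out.append((est, n_part * np.sum(disc**2)))
        idx = rng.choice(n_part, size=n_part, p=w)  # resample
        x = x[idx]
        history.appendleft(idx)
    return out
```

With a large `lag` the estimate approaches the full-genealogy one, while a small `lag` keeps the ancestor groups from collapsing onto a single lineage, which is the stability-versus-bias trade-off the abstract describes.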

  • 5. Olsson, Jimmy; Pavlenko, Tatjana; Rios, Felix (all KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics).
    Bayesian structure learning in graphical models using sequential Monte Carlo. Manuscript (preprint) (Other academic).
    Abstract [en]

    In this paper we present a family of algorithms, the junction tree expanders, for expanding junction trees in the sense that the number of nodes in the underlying decomposable graph is increased by one. The family of junction tree expanders is equipped with a number of theoretical results, including a characterization stating that every junction tree, and consequently every decomposable graph, can be constructed by iteratively applying a junction tree expander. Further, an important feature of a stochastic implementation of a junction tree expander is the Markovian property inherent to the tree propagation dynamics. Using this property, a sequential Monte Carlo algorithm for approximating a probability distribution defined on the space of decomposable graphs is developed with the junction tree expander as a proposal kernel. Specifically, we apply the sequential Monte Carlo algorithm for structure learning in decomposable Gaussian graphical models where the target distribution is a junction tree posterior distribution. In this setting, posterior parametric inference on the underlying decomposable graph is a direct by-product of the suggested methodology; working with the G-Wishart family of conjugate priors, we derive a closed form expression for the Bayesian estimator of the precision matrix of Gaussian graphical models Markov with respect to a decomposable graph. The performance accuracy of the graph and parameter estimators is illustrated through a collection of numerical examples demonstrating the feasibility of the suggested approach in high-dimensional domains.
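
The junction tree expander acts as a proposal kernel that grows the latent structure by one node per SMC step. The junction tree combinatorics are beyond a short sketch, but the same sequential pattern (extend each particle, reweight by the target increment, resample) can be illustrated on a toy space of binary strings grown one bit at a time; the target, proposal, and every name below are assumptions of this illustration, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(6)

def smc_growing_space(n_steps=10, n_part=2000, beta=0.5):
    """SMC over a space that grows by one element per step: extend each
    particle with the proposal, reweight, resample when the effective
    sample size drops. Toy target: pi(x) prop. to exp(beta * sum(x))."""
    x = np.zeros((n_part, 0), dtype=int)
    logw = np.zeros(n_part)
    for _ in range(n_steps):
        bits = rng.integers(0, 2, n_part)         # uniform expansion proposal
        x = np.column_stack([x, bits])
        logw += beta * bits                       # incremental weight (up to const)
        w = np.exp(logw - logw.max()); w /= w.sum()
        if 1.0 / np.sum(w**2) < n_part / 2:       # ESS below half: resample
            i = rng.choice(n_part, size=n_part, p=w)
            x, logw = x[i], np.zeros(n_part)
    w = np.exp(logw - logw.max()); w /= w.sum()
    return x, w

x, w = smc_growing_space()
# Under pi each bit is Bernoulli(e^beta / (1 + e^beta)), about 0.622 here.
print(np.sum(w[:, None] * x) / x.shape[1])
```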

  • 6. Olsson, Jimmy; Westerborn, Johan (both KTH, School of Engineering Sciences (SCI), Mathematics (Dept.)).
    An efficient particle-based online EM algorithm for general state-space models. 2015. In: IFAC-PapersOnLine, ISSN 2405-8963, Vol. 48, no. 28, p. 963-968. Article in journal (Refereed).
    Abstract [en]

    Estimating the parameters of general state-space models is a topic of importance for many scientific and engineering disciplines. In this paper we present an online parameter estimation algorithm obtained by casting our recently proposed particle-based, rapid incremental smoother (PaRIS) into the framework of online expectation-maximization (EM) for state-space models proposed by Cappé (2011). Previous such particle-based implementations of online EM typically suffer from either the well-known degeneracy of the genealogical particle paths or a quadratic complexity in the number of particles. However, by using the computationally efficient and numerically stable PaRIS algorithm for estimating smoothed expectations of time-averaged sufficient statistics of the model, we obtain a fast algorithm with very limited memory requirements and a computational complexity that grows only linearly with the number of particles. The efficiency of the algorithm is illustrated in a simulation study.
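
Cappé's online EM framework, into which PaRIS is cast here, updates a running average of smoothed sufficient statistics with a decreasing step size and re-maximizes at each step. The sketch below isolates that stochastic-approximation structure on a fully observed AR(1) model, where the statistics are exact; in the paper they are instead supplied by PaRIS for a hidden state. The step-size schedule, burn-in, and model are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

# Fully observed AR(1) toy: the smoothed statistics are exact here.
phi_true, T = 0.8, 5000
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi_true * x[t-1] + rng.standard_normal()

phi, S = 0.0, np.zeros(2)  # S tracks (E[x_{t-1} x_t], E[x_{t-1}^2])
for t in range(1, T):
    gamma = t ** -0.6                              # stochastic-approximation step
    s_new = np.array([x[t-1] * x[t], x[t-1] ** 2])
    S = (1 - gamma) * S + gamma * s_new            # online E-step
    if t > 20:                                     # short burn-in
        phi = S[0] / S[1]                          # M-step: maximizer given S
print(phi)  # close to 0.8
```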

  • 7. Olsson, Jimmy; Westerborn, Johan (both KTH, School of Engineering Sciences (SCI), Mathematics (Dept.)).
    An efficient particle-based online EM algorithm for general state-space models. Manuscript (preprint) (Other academic).
  • 8. Olsson, Jimmy; Westerborn, Johan (both KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics).
    Efficient parameter inference in general hidden Markov models using the filter derivatives. 2016. In: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 3984-3988. Conference paper (Refereed).
    Abstract [en]

    Estimating online the parameters of general state-space hidden Markov models is a topic of importance in many scientific and engineering disciplines. In this paper we present an online parameter estimation algorithm obtained by casting our recently proposed particle-based, rapid incremental smoother (PaRIS) into the framework of recursive maximum likelihood estimation for general hidden Markov models. Previous such particle implementations suffer from either quadratic complexity in the number of particles or from the well-known degeneracy of the genealogical particle paths. By using the computationally efficient and numerically stable PaRIS algorithm for estimating the needed prediction filter derivatives, we obtain a fast algorithm with a computational complexity that grows only linearly with the number of particles. The efficiency and stability of the proposed algorithm are illustrated in a simulation study.
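
Recursive maximum likelihood ascends the log-likelihood with stochastic gradient steps; in a genuine HMM the score increments require the prediction filter derivatives that the paper estimates with PaRIS. In the fully observed toy below the score is available in closed form, which isolates the recursive update itself (the model and step-size schedule are assumptions of this sketch).

```python
import numpy as np

rng = np.random.default_rng(4)

# Fully observed AR(1): x_t = phi x_{t-1} + v_t, v_t ~ N(0, 1).
phi_true, T = 0.8, 5000
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi_true * x[t-1] + rng.standard_normal()

phi = 0.0
for t in range(1, T):
    gamma = t ** -0.6                           # Robbins-Monro step size
    score = x[t-1] * (x[t] - phi * x[t-1])      # d/dphi log p(x_t | x_{t-1}; phi)
    phi += gamma * score                        # recursive ML update
print(phi)  # close to 0.8
```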

  • 9. Olsson, Jimmy; Westerborn, Johan (both KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics).
    Efficient particle-based online smoothing in general hidden Markov models: The PaRIS algorithm. 2017. In: Bernoulli, ISSN 1350-7265, E-ISSN 1573-9759, Vol. 23, no. 3, p. 1951-1996. Article in journal (Refereed).
    Abstract [en]

    This paper presents a novel algorithm, the particle-based, rapid incremental smoother (PaRIS), for efficient online approximation of smoothed expectations of additive state functionals in general hidden Markov models. The algorithm, which has a linear computational complexity under weak assumptions and very limited memory requirements, is furnished with a number of convergence results, including a central limit theorem. An interesting feature of PaRIS, which samples on-the-fly from the retrospective dynamics induced by the particle filter, is that it requires two or more backward draws per particle in order to cope with degeneracy of the sampled trajectories and to stay numerically stable in the long run with an asymptotic variance that grows only linearly with time.
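
The core PaRIS recursion can be sketched as follows for the additive functional h = sum of the states in a scalar linear-Gaussian HMM (all model choices below are illustrative assumptions). For readability the backward indices are drawn by direct categorical sampling, which costs O(N^2) per step; the linear complexity claimed in the abstract is obtained in the paper by an accept-reject implementation of the same draws.

```python
import numpy as np

rng = np.random.default_rng(5)

def paris(y, n_part=300, n_tilde=2, phi=0.9, sx=1.0, sy=1.0):
    """PaRIS-style online smoother for E[sum_t X_t | Y_{0:T}] in the model
    X_t = phi X_{t-1} + sx V_t, Y_t = X_t + sy W_t. n_tilde >= 2 backward
    draws per particle keep the sampled trajectories from degenerating."""
    x = rng.normal(0.0, sx / np.sqrt(1 - phi**2), n_part)
    logw = -0.5 * ((y[0] - x) / sy) ** 2
    w = np.exp(logw - logw.max()); w /= w.sum()
    tau = x.copy()  # tau_0^i = x_0^i for the functional h = sum_t x_t
    for yt in y[1:]:
        idx = rng.choice(n_part, size=n_part, p=w)          # resample
        x_new = phi * x[idx] + sx * rng.standard_normal(n_part)
        tau_new = np.empty(n_part)
        for i in range(n_part):
            # Backward probabilities: prop. to w_j * N(x_new_i; phi x_j, sx^2).
            logb = np.log(w) - 0.5 * ((x_new[i] - phi * x) / sx) ** 2
            b = np.exp(logb - logb.max()); b /= b.sum()
            draws = rng.choice(n_part, size=n_tilde, p=b)   # backward draws
            tau_new[i] = tau[draws].mean() + x_new[i]       # update statistic
        x, tau = x_new, tau_new
        logw = -0.5 * ((yt - x) / sy) ** 2
        w = np.exp(logw - logw.max()); w /= w.sum()
    return np.sum(w * tau)
```

Setting n_tilde = 1 reproduces the degenerate single-draw scheme; two or more draws per particle are what keep the long-run behavior stable, in line with the abstract.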

  • 10. Olsson, Jimmy; Westerborn, Johan (both KTH, School of Engineering Sciences (SCI), Mathematics (Dept.)).
    Efficient particle-based online smoothing in general hidden Markov models: the PaRIS algorithm. Manuscript (preprint) (Other academic).
  • 11. Westerborn, Johan; Olsson, Jimmy (both KTH, School of Engineering Sciences (SCI), Mathematics (Dept.)).
    Efficient particle-based online smoothing in general hidden Markov models. 2014. In: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, ISSN 1520-6149. Article in journal (Refereed).
    Abstract [en]

    This paper deals with the problem of estimating expectations of sums of additive functionals under the joint smoothing distribution in general hidden Markov models. Computing such expectations is a key ingredient in any kind of expectation-maximization-based parameter inference in models of this sort. The paper presents a computationally efficient algorithm for online estimation of these expectations in a forward manner. The proposed algorithm has a linear computational complexity in the number of particles and does not require old particles and weights to be stored during the computations. The algorithm completely avoids the well-known particle path degeneracy problem of the standard forward smoother, which makes it highly applicable within the framework of online expectation-maximization methods. The simulations show that the proposed algorithm provides the same precision as existing algorithms at a considerably lower computational cost.
