We consider a system of diffusing particles on the real line in a quadratic external potential and with a logarithmic interaction potential. The empirical measure process is known to converge weakly to a deterministic measure-valued process as the number of particles tends to infinity. Provided the initial fluctuations are small, the rescaled linear statistics of the empirical measure process converge in distribution to a Gaussian limit for sufficiently smooth test functions. For a large class of analytic test functions, we derive explicit general formulae for the mean and covariance in this central limit theorem by analyzing a partial differential equation characterizing the limiting fluctuations.
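Below is a minimal Euler-Maruyama sketch, in Python, of an interacting particle system of this type: a quadratic confining potential, a logarithmic pairwise interaction, and a linear statistic of the empirical measure. The drift normalisation, noise scaling, time step, and test function are illustrative assumptions, not the exact scaling analysed in the paper.

```python
import numpy as np

# Euler-Maruyama sketch of N diffusing particles on the real line with a
# quadratic confining potential and logarithmic pairwise repulsion
# (illustrative drift and noise scalings, not the exact ones in the paper).
def simulate(n_particles=200, n_steps=2000, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n_particles)                    # initial configuration
    for _ in range(n_steps):
        diff = x[:, None] - x[None, :]                  # pairwise differences x_i - x_j
        np.fill_diagonal(diff, np.inf)                  # exclude self-interaction
        interaction = np.sum(1.0 / diff, axis=1) / n_particles
        drift = -x + interaction                        # quadratic potential + log interaction
        x = x + drift * dt + np.sqrt(2.0 * dt / n_particles) * rng.normal(size=n_particles)
    return x

def linear_statistic(x, f=np.cos):
    """Linear statistic sum_i f(x_i) of the empirical measure for a test function f."""
    return np.sum(f(x))

if __name__ == "__main__":
    positions = simulate()
    print("linear statistic for f = cos:", linear_statistic(positions))
```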

Ergodic theorems for random clusters (2010). In: Stochastic Processes and their Applications, ISSN 0304-4149, E-ISSN 1879-209X, Vol. 120, no. 3, pp. 296–305. Article in journal (Refereed).

We prove pointwise ergodic theorems for a class of random measures which occur in Laplacian growth models, most notably in the anisotropic Hastings-Levitov random cluster models. The proofs are based on the theory of quasi-orthogonal functions and uniform Wiener-Wintner theorems.

We establish posterior consistency for parametric, partially observed, fully dominated Markov models. The prior is assumed to assign positive probability to every neighborhood of the true parameter, with respect to a distance induced by the expected Kullback-Leibler divergence between the Markov transition densities of the parametric family. This assumption is, in general, easily checked. In addition, we show that posterior consistency is implied by consistency of the maximum likelihood estimator. The result is extended to possibly improper priors and non-stationary observations. Finally, we verify our assumptions for a linear Gaussian model and a well-known stochastic volatility model.

We study a formulation of regular variation for multivariate stochastic processes on the unit interval with sample paths that are almost surely right-continuous with left limits, and we provide necessary and sufficient conditions for such stochastic processes to be regularly varying. A version of the Continuous Mapping Theorem is proved that enables the derivation of the tail behavior of rather general mappings of the regularly varying stochastic process. For a wide class of Markov processes with increments satisfying a condition of weak dependence in the tails we obtain simplified sufficient conditions for regular variation. For such processes we show that the possible regular variation limit measures concentrate on step functions with one step, from which we conclude that the extremal behavior of such processes is due to one big jump or an extreme starting point. By combining this result with the Continuous Mapping Theorem, we are able to give explicit results on the tail behavior of various vectors of functionals acting on such processes. Finally, using the Continuous Mapping Theorem we derive the tail behavior of filtered regularly varying Lévy processes.

Importance sampling is a popular method for efficient computation of various properties of a distribution such as probabilities, expectations, quantiles etc. The output of an importance sampling algorithm can be represented as a weighted empirical measure, where the weights are given by the likelihood ratio between the original distribution and the sampling distribution. In this paper the efficiency of an importance sampling algorithm is studied by means of large deviations for the weighted empirical measure. The main result, which is stated as a Laplace principle for the weighted empirical measure arising in importance sampling, can be viewed as a weighted version of Sanov's theorem. The main theorem is applied to quantify the performance of an importance sampling algorithm over a collection of subsets of a given target set as well as quantile estimates. The proof of the main theorem relies on the weak convergence approach to large deviations developed by Dupuis and Ellis.
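As a concrete illustration of the weighted empirical measure, the following Python sketch estimates a Gaussian tail probability by sampling from a shifted (exponentially tilted) distribution and reweighting by the likelihood ratio. The target probability and the choice of sampling distribution are illustrative assumptions, not the collections of sets studied in the paper.

```python
import numpy as np
from scipy.stats import norm

# Importance-sampling estimate of P(X > c) for X ~ N(0, 1), drawing from the
# shifted density q = N(c, 1) and weighting by the likelihood ratio p/q.
# The output is an integral against the weighted empirical measure.
def importance_sample_tail(c=4.0, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(loc=c, scale=1.0, size=n)                # samples from q
    weights = norm.pdf(x) / norm.pdf(x, loc=c, scale=1.0)   # likelihood ratios p/q
    return np.mean(weights * (x > c))                       # weighted empirical estimate

if __name__ == "__main__":
    print("IS estimate of P(X > 4):", importance_sample_tail())
    print("exact value            :", norm.sf(4.0))
```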

In this article, we consider non-smooth time-dependent domains whose boundary is W^{1,p} in time, with single-valued, smoothly varying directions of reflection at the boundary. In this setting, we first prove existence and uniqueness of strong solutions to stochastic differential equations with oblique reflection. Secondly, using the theory of viscosity solutions, we prove a comparison principle for fully nonlinear second-order parabolic partial differential equations with oblique derivative boundary conditions. As a consequence, we obtain uniqueness, and, by barrier construction and Perron's method, we also conclude existence of viscosity solutions. Our results generalize two articles by Dupuis and Ishii to time-dependent domains.
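The following Python sketch gives a heuristic Euler-type discretisation of an obliquely reflected diffusion in a time-dependent half-plane, to illustrate the kind of dynamics considered above. The domain, the moving boundary g, the reflection direction gamma, and the push-back rule are illustrative assumptions, not the construction used in the paper.

```python
import numpy as np

# Heuristic Euler scheme for a diffusion with oblique reflection in the moving
# half-plane D_t = {x in R^2 : x_2 > g(t)}.  A step that exits the domain is
# pushed back to the boundary along the fixed oblique direction gamma.
def reflected_sde(T=1.0, n_steps=10_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    g = lambda t: 0.2 * np.sin(2.0 * np.pi * t)          # moving boundary (smooth in time)
    gamma = np.array([0.3, 1.0])                         # oblique reflection direction
    x = np.array([0.0, 1.0])                             # start inside the domain
    path = [x.copy()]
    for k in range(n_steps):
        t = (k + 1) * dt
        x = x + np.sqrt(dt) * rng.normal(size=2)         # zero drift, identity diffusion
        if x[1] < g(t):                                  # the step left the domain:
            x = x + gamma * (g(t) - x[1]) / gamma[1]     # push back along gamma onto the boundary
        path.append(x.copy())
    return np.array(path)

if __name__ == "__main__":
    print("final state:", reflected_sde()[-1])
```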

We study the asymptotic performance of approximate maximum likelihood estimators for state space models obtained via sequential Monte Carlo methods. The state space of the latent Markov chain and the parameter space are assumed to be compact. The approximate estimates are computed by, firstly, running possibly dependent particle filters on a fixed grid in the parameter space, yielding a pointwise approximation of the log-likelihood function. Secondly, extensions of this approximation to the whole parameter space are formed by means of piecewise constant functions or B-spline interpolation, and approximate maximum likelihood estimates are obtained through maximization of the resulting functions. In this setting we formulate criteria for how to increase the number of particles and the resolution of the grid in order to produce estimates that are consistent and asymptotically normal.
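The sketch below illustrates this grid-based procedure in Python for a simple one-dimensional linear Gaussian state-space model: a bootstrap particle filter, run with a fixed seed so that the filters at different grid points are dependent, approximates the log-likelihood on a fixed parameter grid; a spline extension of these values is then formed and maximised. The model, the grid, the particle numbers, and the use of a cubic spline in place of B-spline interpolation are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Grid-based approximate maximum likelihood for the illustrative model
#   X_t = phi * X_{t-1} + V_t,   Y_t = X_t + W_t,   V_t, W_t ~ N(0, 1).
def simulate_data(phi=0.7, T=200, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = phi * x[t - 1] + rng.normal()
    return x + rng.normal(size=T)

def particle_loglik(y, phi, n_particles=500, seed=1):
    """Bootstrap particle filter estimate of the log-likelihood at phi."""
    rng = np.random.default_rng(seed)                    # fixed seed: dependent filters across phi
    particles = rng.normal(size=n_particles)
    loglik = 0.0
    for obs in y:
        particles = phi * particles + rng.normal(size=n_particles)        # propagate
        logw = -0.5 * (obs - particles) ** 2 - 0.5 * np.log(2.0 * np.pi)  # observation density
        loglik += np.log(np.mean(np.exp(logw)))
        w = np.exp(logw - logw.max())
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())    # multinomial resampling
        particles = particles[idx]
    return loglik

if __name__ == "__main__":
    y = simulate_data()
    grid = np.linspace(0.2, 0.95, 16)                    # fixed grid in the parameter space
    values = np.array([particle_loglik(y, phi) for phi in grid])
    spline = CubicSpline(grid, values)                   # spline extension of the grid values
    fine = np.linspace(grid[0], grid[-1], 2001)
    print("approximate maximum likelihood estimate of phi:", fine[np.argmax(spline(fine))])
```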

We study simulated annealing algorithms to maximise a function ψ on a subset of R^d. In classical simulated annealing, given a current state θ_n in stage n of the algorithm, the probability to accept a proposed state z at which ψ is smaller is exp(β_{n+1}(ψ(z) - ψ(θ_n))), where (β_n) is the inverse temperature. With the standard logarithmic increase of (β_n), the probability P(ψ(θ_n) ≤ ψ_max - ε), where ψ_max is the maximal value of ψ, tends to zero at a logarithmic rate as n increases. We examine variations of this scheme in which (β_n) is allowed to grow faster, but also consider other functions than the exponential for determining acceptance probabilities. The main result shows that faster rates of convergence can be obtained, both with the exponential and other acceptance functions. We also show how the algorithm may be applied to functions that cannot be computed exactly but only approximated, and give an example of maximising the log-likelihood function for a state-space model.
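A minimal Python sketch of such a scheme follows, with the inverse-temperature schedule (β_n) and the acceptance function supplied as parameters so that faster-growing schedules and non-exponential acceptance functions can be plugged in. The objective, the random-walk proposal, and the polynomial cooling schedule are illustrative assumptions, not the specific choices analysed in the paper.

```python
import numpy as np

# Simulated annealing for maximising psi, with the inverse-temperature schedule
# (beta_n) and the acceptance function a supplied as parameters.  A worse
# proposal z is accepted with probability a(beta_{n+1} * (psi(theta_n) - psi(z)));
# a(u) = exp(-u) recovers the classical exponential acceptance rule.
def anneal(psi, theta0, n_iter=20_000, step=0.2,
           beta=lambda n: (1.0 + n) ** 0.75,             # faster-than-logarithmic schedule
           accept=lambda u: np.exp(-u),                  # classical acceptance function
           seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    best = theta
    for n in range(n_iter):
        z = theta + step * rng.normal(size=theta.shape)  # random-walk proposal
        diff = psi(z) - psi(theta)
        if diff >= 0 or rng.random() < accept(beta(n + 1) * (-diff)):
            theta = z
            if psi(theta) > psi(best):
                best = theta
    return best

if __name__ == "__main__":
    # Illustrative multimodal objective on R^2 with global maximum at the origin.
    psi = lambda x: -np.sum(x ** 2) + 2.0 * np.sum(np.cos(3.0 * x))
    print("approximate maximiser:", anneal(psi, theta0=np.array([2.0, -2.0])))
```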