1 - 4 of 4
  • 1.
    Dzougoutov, Anna
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Moon, Kyoung-Sook
    Department of Mathematics, University of Maryland.
    von Schwerin, Erik
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Szepessy, Anders
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Tempone, Raul
    ICES, The University of Texas at Austin.
    Adaptive Monte Carlo Algorithms for Stopped Diffusion. 2005. In: Multiscale Methods in Science and Engineering, Berlin: Springer-Verlag, 2005, 44, p. 59-88. Chapter in book (Other academic)
    Abstract [en]

    We present adaptive algorithms for weak approximation of stopped diffusion using the Monte Carlo Euler method. The goal is to compute an expected value of a given function g depending on the solution X of an Itô stochastic differential equation and on the first exit time τ from a given domain.

    The main steps in the extension to stopped diffusion processes are to use a conditional probability to estimate the first exit time error and to introduce difference quotients to approximate the initial data of the dual solutions. (A minimal Monte Carlo Euler sketch for a stopped diffusion is given after the result list.)

  • 2. Moon, Kyoung-Sook
    Szepessy, Anders
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Tempone Olariaga, Raul
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Zouraris, Georgios
    Convergence rates for adaptive weak approximation of stochastic differential equations. 2005. In: Stochastic Analysis and Applications, ISSN 0736-2994, E-ISSN 1532-9356, Vol. 23, no. 3, p. 511-558. Article in journal (Refereed)
    Abstract [en]

    Convergence rates of adaptive algorithms for weak approximations of Itô stochastic differential equations are proved for the Monte Carlo Euler method. Two algorithms, based either on optimal stochastic time steps or on optimal deterministic time steps, are studied. The analysis of their computational complexity combines the error expansions with a posteriori leading order term introduced in Szepessy et al. [Szepessy, A., R. Tempone, and G. Zouraris. 2001. Comm. Pure Appl. Math. 54:1169-1214] and an extension of the convergence results for adaptive algorithms approximating deterministic ordinary differential equations, derived in Moon et al. [Moon, K.-S., A. Szepessy, R. Tempone, and G. Zouraris. 2003. Numer. Math. 93:99-129]. The main step in the extension is the proof of the almost sure convergence of the error density. Both adaptive algorithms are proven to stop with an asymptotically optimal number of steps, up to a problem independent factor defined in the algorithm. Numerical examples illustrate the behavior of the adaptive algorithms, motivating when stochastic and deterministic adaptive time steps are more efficient than constant time steps and when adaptive stochastic steps are more efficient than adaptive deterministic steps. (An illustrative per-path adaptive time-stepping sketch is given after the result list.)

  • 3.
    Moon, Kyoung-Sook
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis, NA (closed 2012-06-30).
    Szepessy, Anders
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis, NA (closed 2012-06-30).
    Tempone, Raul
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis, NA (closed 2012-06-30).
    Zouraris, Georgios
    Div of Applied Math - Statistics, Univ of Crete.
    Stochastic Differential Equations: Model and Numerics. 2008. Other (Refereed)
  • 4.
    Moon, Kyoung-Sook
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    von Schwerin, Erik
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Szepessy, Anders
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Tempone, Raul
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    An Adaptive Algorithm for Ordinary, Stochastic and Partial Differential Equations. 2005. In: Recent Advances in Adaptive Computation, Providence: American Mathematical Society, 2005, p. 325-343. Chapter in book (Other academic)
    Abstract [en]

    The theory of a posteriori error estimates suitable for adaptive refinement is well established. This work focuses on the fundamental, but less studied, issue of convergence rates of adaptive algorithms. In particular, this work describes a simple and general adaptive algorithm applied to ordinary, stochastic and partial differential equations with proven convergence rates. The presentation has three parts: The error approximations used to build error indicators for the adaptive algorithm are based on error expansions with computable leading order terms. It is explained how to measure optimal convergence rates for approximation of functionals of the solution, and why convergence of the error density is always useful and subtle in the case of stochastic and partial differential equations. The adaptive algorithm, performing successive mesh refinements, either reduces the maximal error indicator by a factor or stops with the error asymptotically bounded by the prescribed accuracy requirement. Furthermore, the algorithm stops using the optimal number of degrees of freedom, up to a problem independent factor. (A generic sketch of this refine-until-tolerance loop is given after the result list.)

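The stopped-diffusion chapter in entry 1 computes an expected value E[g(X_tau, tau)] for an Itô SDE stopped at the first exit from a domain, using the Monte Carlo Euler method with adaptive time steps. The sketch below shows only the plain forward computation with uniform time steps, assuming a hypothetical scalar SDE dX = a(X) dt + b(X) dW on an interval (x_lo, x_hi); the drift a, diffusion b, payoff g, domain bounds, and all numerical parameters are illustrative choices, not taken from the chapter, and none of the adaptive error control or exit-time correction described there is implemented.

    import numpy as np

    def mc_euler_stopped_diffusion(a, b, g, x0, t_final, dt, x_lo, x_hi,
                                   n_samples, rng=None):
        # Crude Monte Carlo Euler estimate of E[g(X_tau, tau)] for the scalar
        # Ito SDE dX = a(X) dt + b(X) dW, stopped at the first exit from
        # (x_lo, x_hi) or at t_final.  Uniform steps only; this is a sketch,
        # not the adaptive algorithm of the chapter.
        rng = np.random.default_rng() if rng is None else rng
        total = 0.0
        for _ in range(n_samples):
            x, t = x0, 0.0
            while t < t_final and x_lo < x < x_hi:
                dw = rng.normal(0.0, np.sqrt(dt))   # Brownian increment
                x += a(x) * dt + b(x) * dw          # Euler-Maruyama step
                t += dt
            total += g(x, t)                        # payoff at exit/final time
        return total / n_samples

    # Illustrative use: geometric Brownian motion stopped at the barrier x = 2.
    estimate = mc_euler_stopped_diffusion(
        a=lambda x: 0.05 * x, b=lambda x: 0.2 * x, g=lambda x, t: x,
        x0=1.0, t_final=1.0, dt=0.01, x_lo=0.0, x_hi=2.0, n_samples=2000)
    print("estimated E[g(X_tau, tau)] ~", estimate)

Because the barrier is only checked at discrete times, the computed exit time overshoots the true one; that discrete exit-time error is exactly what the conditional-probability estimate in the chapter is designed to control.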
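Entry 2 compares adaptive stochastic (per-path) and deterministic time steps with constant steps for weak approximation. As a rough illustration of per-path adaptive stepping, the sketch below shrinks the Euler-Maruyama step where a crude local indicator is large; the indicator |a(x)| + b(x)^2, the tolerance, and all parameter names are placeholders invented for this sketch and are not the a posteriori error density analysed in the article.

    import numpy as np

    def euler_adaptive_path(a, b, x0, t_final, tol, dt_max=0.05, rng=None):
        # One sample path of dX = a(X) dt + b(X) dW with a per-path adaptive
        # step: dt shrinks where the placeholder indicator |a(x)| + b(x)**2
        # is large.  Returns the endpoint and the number of steps used.
        rng = np.random.default_rng() if rng is None else rng
        x, t, n_steps = x0, 0.0, 0
        while t < t_final:
            dt = min(dt_max, tol / (abs(a(x)) + b(x) ** 2 + 1e-12), t_final - t)
            x += a(x) * dt + b(x) * rng.normal(0.0, np.sqrt(dt))
            t += dt
            n_steps += 1
        return x, n_steps

    # Illustrative use: one path of geometric Brownian motion.
    x_end, n = euler_adaptive_path(a=lambda x: 0.05 * x, b=lambda x: 0.2 * x,
                                   x0=1.0, t_final=1.0, tol=1e-3)
    print("endpoint", x_end, "using", n, "adaptive steps")

Averaging a payoff of the endpoints over many such paths gives the weak approximation; the article's question is when this stochastic adaptivity, or a single deterministic adaptive step sequence shared by all paths, uses fewer steps than constant time stepping for a given accuracy.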
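Entry 4 describes an adaptive algorithm that performs successive mesh refinements, driven by error indicators built from computable leading-order error expansions, until the prescribed accuracy is met. The loop below is a generic sketch of that refine-until-tolerance pattern on a 1-D mesh; the crude second-difference indicator, the function u, and the tolerance are placeholders for illustration only and do not reproduce the error expansions of the paper.

    import numpy as np

    def adaptive_refine(error_indicator, mesh, tol, reduction=0.5, max_iter=30):
        # Generic refinement loop: split every cell whose indicator is at
        # least `reduction` times the current maximal indicator, and stop
        # once the summed indicators fall below `tol`.
        for _ in range(max_iter):
            r = error_indicator(mesh)            # one value per cell
            if r.sum() <= tol:                   # accuracy requirement met
                break
            refine_mask = r >= reduction * r.max()
            new_points = 0.5 * (mesh[:-1] + mesh[1:])[refine_mask]
            mesh = np.sort(np.concatenate([mesh, new_points]))
        return mesh

    # Illustrative use: resolve the sharp layer of u(x) = tanh(50 (x - 1/2)).
    u = lambda x: np.tanh(50.0 * (x - 0.5))

    def indicator(mesh):
        # placeholder indicator: cell size squared times a second difference
        h = np.diff(mesh)
        mid = 0.5 * (mesh[:-1] + mesh[1:])
        return h ** 2 * np.abs(u(np.clip(mid + h, 0.0, 1.0))
                               - 2.0 * u(mid)
                               + u(np.clip(mid - h, 0.0, 1.0)))

    final_mesh = adaptive_refine(indicator, np.linspace(0.0, 1.0, 11), tol=1e-3)
    print(len(final_mesh), "mesh points, clustered near x = 0.5")

Loosely, each pass either drives down the largest indicators or terminates with the estimated error below the requested tolerance, so refinement concentrates degrees of freedom where the indicators are largest, in the spirit of the stopping rule described in the abstract.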