In this paper, we take a quasi-Newton approach to nonlinear eigenvalue problems (NEPs) of the type M(λ)v = 0, where M: ℂ → ℂ^{n×n} is a holomorphic function. We investigate which types of approximations of the Jacobian matrix lead to competitive algorithms, and provide convergence theory. The convergence analysis is based on theory for quasi-Newton methods and Keldysh's theorem for NEPs. We derive new algorithms and also show that several well-established methods for NEPs can be interpreted as quasi-Newton methods, thereby providing insight into their convergence behavior. In particular, we establish quasi-Newton interpretations of Neumaier's residual inverse iteration and Ruhe's method of successive linear problems.
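As a minimal illustration of the quasi-Newton viewpoint (a sketch with invented data, not any of the paper's specific methods), one can apply Newton's method to the augmented system F(v, λ) = [M(λ)v; cᵀv − 1] and freeze the Jacobian at the starting point, which gives linear instead of quadratic convergence:

```python
import numpy as np

# Hypothetical toy NEP (all data invented here):
#   M(lam) = A - lam*I + 0.1*exp(-lam)*I
n = 3
A = np.diag([1.0, 2.0, 3.0])
I = np.eye(n)
M  = lambda lam: A - lam * I + 0.1 * np.exp(-lam) * I
dM = lambda lam: -I - 0.1 * np.exp(-lam) * I   # M'(lam)

c = np.zeros(n); c[0] = 1.0          # normalization c^T v = 1
v, lam = c.copy(), 1.2               # starting guess

# Quasi-Newton: Jacobian of the augmented system, frozen at the start.
J0 = np.block([[M(lam), (dM(lam) @ v)[:, None]],
               [c[None, :], np.zeros((1, 1))]])
for _ in range(30):
    F = np.concatenate([M(lam) @ v, [c @ v - 1.0]])
    d = np.linalg.solve(J0, -F)      # reuse the frozen Jacobian
    v, lam = v + d[:n], lam + d[n]

residual = np.linalg.norm(M(lam) @ v)   # small at an eigenpair
```

With a good starting guess, the frozen-Jacobian iteration converges linearly; the paper's analysis concerns exactly which Jacobian approximations retain fast convergence.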

The equations of motion of a single particle subject to an arbitrary electric and a static magnetic field form a Poisson system. We present a second-order time integration method which preserves the Poisson structure well, and compare it to commonly used algorithms such as the Boris scheme. All methods are presented in a general framework of splitting methods. We use the so-called phi functions, which give efficient ways of both analyzing and implementing the algorithms. Numerical experiments show excellent long-term stability for the proposed method.
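The splitting viewpoint can be sketched in a toy setting (an assumed setup with a unit magnetic field along the z-axis, not the paper's scheme): the magnetic substep is an exact rotation of the velocity, composed symmetrically with electric "kicks" and position drifts:

```python
import numpy as np

def step(x, v, h, E):
    """One Strang-splitting step for x' = v, v' = E + v x e_z (B = e_z)."""
    x = x + 0.5 * h * v                      # half drift
    v = v + 0.5 * h * E                      # half electric kick
    c, s = np.cos(h), np.sin(h)
    v = np.array([c * v[0] + s * v[1],       # exact magnetic rotation:
                  -s * v[0] + c * v[1],      # v' = v x e_z is a rotation
                  v[2]])                     # about the z-axis
    v = v + 0.5 * h * E                      # half electric kick
    x = x + 0.5 * h * v                      # half drift
    return x, v

E = np.zeros(3)                              # static electric field (zero here)
x = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
h, n = 2 * np.pi / 1000, 1000                # one gyration period
for _ in range(n):
    x, v = step(x, v, h, E)

speed_drift = abs(np.linalg.norm(v) - 1.0)   # rotation is orthogonal: ~0
```

Because the magnetic substep is an orthogonal rotation, the kinetic energy is conserved exactly when E = 0, which is the kind of structural property the paper's comparison focuses on.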

In this paper we give first results for the approximation of e^A b, i.e., the action of the matrix exponential on a vector, using the incomplete orthogonalization method. The benefits compared to the Arnoldi iteration are clear: shorter orthogonalization recurrences make the algorithm faster, and large memory savings are possible. For the case of three-term orthogonalization recursions, simple error bounds are derived using the norm and the field of values of the projected operator. In addition, an a posteriori error estimate is given, which numerical examples show to work well for the approximation. In the numerical examples we particularly consider the case where the operator A arises from the spatial discretization of an advection-diffusion operator.
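A minimal sketch of the incomplete orthogonalization idea (with an invented symmetric test operator; the paper treats more general operators): each new Krylov vector is orthogonalized only against the last k basis vectors, where k = 2 gives a three-term recursion:

```python
import numpy as np

def iom_expm(A, b, m, k=2):
    """Approximate exp(A) b with incomplete orthogonalization (window k)."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(max(0, j - k + 1), j + 1):   # truncated window
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                     # happy breakdown
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m, :m]
    # exp(Hm) e1 via eigendecomposition (adequate for this small example)
    w_, S = np.linalg.eig(Hm)
    e1 = np.zeros(m); e1[0] = 1.0
    y = (S @ (np.exp(w_) * np.linalg.solve(S, e1))).real
    return beta * V[:, :m] @ y

# Hypothetical test operator: scaled 1D Laplacian. It is symmetric, so the
# three-term recursion (k = 2) loses nothing compared with full Arnoldi.
n = 100
A = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
b = np.ones(n)
approx = iom_expm(0.5 * A, b, m=30)
```

For nonsymmetric A (e.g., advection-diffusion), the truncation discards genuine coupling, and the paper's bounds via the field of values quantify the resulting error.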

The Neumann expansion of a function g: ℂ → ℂ in Bessel functions of integer order corresponds to representing g as a linear combination of basis functions φ_0, φ_1, …, i.e., g(s) = ∑_{ℓ=0}^{∞} w_ℓ φ_ℓ(s), where φ_ℓ(s) = J_ℓ(s), ℓ = 0, 1, …, are the Bessel functions. In this work, we study an expansion for a more general class of basis functions. More precisely, we assume that the basis functions satisfy an infinite-dimensional linear ordinary differential equation associated with a Hessenberg matrix, motivated by the fact that such basis functions occur in certain iterative methods. A procedure to compute the basis functions as well as the coefficients is proposed. Theoretical properties of the expansion are studied. We illustrate that non-standard basis functions can give faster convergence than the Bessel functions.
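The Hessenberg-ODE structure can be illustrated with the Bessel case itself: the recurrences J_0′ = −J_1 and J_ℓ′ = (J_{ℓ−1} − J_{ℓ+1})/2 form an infinite-dimensional linear ODE with a banded coefficient matrix. Below is a crude truncated sketch (classical RK4 integration and a hard truncation, invented for illustration; not the paper's procedure):

```python
import numpy as np

# Truncate the infinite ODE phi'(s) = L phi(s) for phi_l = J_l to N terms:
#   row 0:  J0' = -J1
#   row l:  Jl' = 0.5*J_{l-1} - 0.5*J_{l+1}
N = 60
L = np.zeros((N, N))
L[0, 1] = -1.0
for l in range(1, N - 1):
    L[l, l - 1] = 0.5
    L[l, l + 1] = -0.5
L[N - 1, N - 2] = 0.5               # crude truncation of the last row

phi = np.zeros(N); phi[0] = 1.0     # initial values J_l(0) = delta_{l0}
h, steps = 0.01, 100                # integrate from s = 0 to s = 1
for _ in range(steps):              # classical RK4
    k1 = L @ phi
    k2 = L @ (phi + 0.5 * h * k1)
    k3 = L @ (phi + 0.5 * h * k2)
    k4 = L @ (phi + h * k3)
    phi = phi + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

J0_at_1 = phi[0]                    # should approximate J_0(1) = 0.76519...
```

The truncation is harmless here because J_ℓ(1) decays extremely fast in ℓ; other Hessenberg matrices in the same framework give the non-standard basis functions studied in the paper.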

We propose a new numerical method to solve linear ordinary differential equations of the type ∂u/∂t(t, ϵ) = A(ϵ) u(t, ϵ), where A: ℂ → ℂ^{n×n} is a matrix polynomial with large and sparse matrix coefficients. The algorithm computes an explicit parameterization of approximations of u(t, ϵ), such that approximations for many different values of ϵ and t can be obtained with very small additional computational effort. The derivation of the algorithm is based on a reformulation of the parameterization as a linear parameter-free ordinary differential equation and on approximating the product of the matrix exponential and a vector with a Krylov method. The Krylov approximation is generated with Arnoldi's method, and the structure of the coefficient matrix turns out to be independent of the truncation parameter, so that the method can also be interpreted as Arnoldi's method applied to an infinite-dimensional matrix. We prove superlinear convergence of the algorithm and provide a posteriori error estimates to be used as termination criteria. The behavior of the algorithm is illustrated with examples stemming from spatial discretizations of partial differential equations.
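The parameter-free reformulation can be sketched in a toy setting (invented data, and plain RK4 integration in place of the paper's Krylov machinery): for A(ϵ) = A_0 + ϵA_1, expanding u(t, ϵ) = ∑_k ϵ^k u_k(t) yields a block lower-bidiagonal, ϵ-independent system for the coefficients u_k, which is integrated once and then evaluated cheaply for any ϵ:

```python
import numpy as np

# Toy parameterized ODE u' = (A0 + eps*A1) u with invented data. Expanding
# u(t, eps) = sum_k eps^k u_k(t) gives the parameter-free coupled system
#   u_k' = A0 u_k + A1 u_{k-1},  u_0(0) = u0,  u_k(0) = 0 for k >= 1.
n, K = 4, 8                                  # state dim, truncation order
rng = np.random.default_rng(1)
A0 = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
A1 = 0.1 * rng.standard_normal((n, n))
u0 = np.ones(n)

# Block lower-bidiagonal, eps-independent coefficient matrix.
C = np.kron(np.eye(K + 1), A0) + np.kron(np.eye(K + 1, k=-1), A1)
U = np.zeros((K + 1) * n); U[:n] = u0

def rk4(M, y, t, steps=200):                 # classical RK4 for y' = M y
    h = t / steps
    for _ in range(steps):
        k1 = M @ y
        k2 = M @ (y + 0.5 * h * k1)
        k3 = M @ (y + 0.5 * h * k2)
        k4 = M @ (y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

t = 1.0
U = rk4(C, U, t)                             # ONE parameter-free integration
uk = U.reshape(K + 1, n)                     # u_0(t), ..., u_K(t)

eps = 0.3                                    # now evaluate for any eps
u_param = sum(eps**k * uk[k] for k in range(K + 1))
u_direct = rk4(A0 + eps * A1, u0.copy(), t)  # reference for this one eps
```

Evaluating u_param for another ϵ costs only a weighted sum of the stored u_k, which is the "very small additional effort" the abstract refers to.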

The action of the matrix exponential and related phi functions on vectors plays an important role in the application of exponential integrators to ordinary differential equations. For the efficient evaluation of linear combinations of such actions, we consider a new Krylov subspace algorithm. By employing Cauchy's integral formula, we give an error representation of the numerical approximation, which is used to derive a priori error bounds that describe the convergence behavior of the algorithm well. Further, an efficient a posteriori error estimate is constructed. Numerical experiments in MATLAB illustrate the convergence behavior.
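As background on the phi functions (a toy dense sketch with an invented test matrix, not the paper's Krylov algorithm): φ_1(z) = (e^z − 1)/z appears in the exact solution u(t) = e^{tA}u_0 + tφ_1(tA)b of u′ = Au + b, and the action φ_1(tA)b can be read off from the exponential of an augmented matrix:

```python
import numpy as np

def expm_dense(M, terms=60):
    """Scaling-and-squaring Taylor expm, adequate for this small example."""
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(M, 1), 1.0)))) + 1)
    X = M / 2**s
    E, T = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        T = T @ X / k                # X^k / k!
        E = E + T
    for _ in range(s):               # undo the scaling by squaring
        E = E @ E
    return E

n = 5
A = -np.diag(np.arange(1.0, n + 1))  # invented test matrix
b = np.ones(n)
t = 0.5

# Standard augmented-matrix identity:
#   expm([[tA, tb], [0, 0]]) = [[e^{tA}, t*phi_1(tA) b], [0, 1]]
Aug = np.zeros((n + 1, n + 1))
Aug[:n, :n] = t * A
Aug[:n, n] = t * b
phi1_b = expm_dense(Aug)[:n, n] / t  # = phi_1(tA) b
```

The paper's algorithm replaces such dense evaluations with a single Krylov subspace that handles linear combinations of several phi-function actions at once.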