kth.se Publications
  • 1. Araujo-Cabarcas, J. C.
    et al.
    Engström, C.
    Jarlebring, Elias
    KTH, Centres, SeRC - Swedish e-Science Research Centre. KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Efficient resonance computations for Helmholtz problems based on a Dirichlet-to-Neumann map. 2018. In: Journal of Computational and Applied Mathematics, ISSN 0377-0427, E-ISSN 1879-1778, Vol. 330, p. 177-192. Article in journal (Refereed)
    Abstract [en]

    We present an efficient procedure for computing resonances and resonant modes of Helmholtz problems posed in exterior domains. The problem is formulated as a nonlinear eigenvalue problem (NEP), where the nonlinearity arises from the use of a Dirichlet-to-Neumann map, which accounts for the modeling of the unbounded domain. We consider a variational formulation and show that the spectrum consists of isolated eigenvalues of finite multiplicity that can only accumulate at infinity. The proposed method is based on a high-order finite element discretization combined with a specialization of the tensor infinite Arnoldi method (TIAR). Using Toeplitz matrices, we show how to specialize this method to our specific structure. In particular, we introduce a pole cancellation technique in order to increase the radius of convergence for computation of eigenvalues that lie close to the poles of the matrix-valued function. The solution scheme can be applied to multiple resonators with a varying refractive index that is not necessarily piecewise constant. We present two test cases to show the stability, performance and numerical accuracy of the method. In particular, the use of a high-order finite element discretization together with TIAR results in an efficient and reliable method to compute resonances.

  • 2. Beeumen, R. V.
    et al.
    Jarlebring, Elias
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA.
    Michiels, W.
    A rank-exploiting infinite Arnoldi algorithm for nonlinear eigenvalue problems. 2016. In: Numerical Linear Algebra with Applications, ISSN 1070-5325, E-ISSN 1099-1506. Article in journal (Refereed)
    Abstract [en]

    We consider the nonlinear eigenvalue problem M(λ)x = 0, where M(λ) is a large parameter-dependent matrix. In several applications, M(λ) has a structure where the higher-order terms of its Taylor expansion have a particular low-rank structure. We propose a new Arnoldi-based algorithm that can exploit this structure. More precisely, the proposed algorithm is equivalent to Arnoldi's method applied to an operator whose reciprocal eigenvalues are solutions to the nonlinear eigenvalue problem. The iterates in the algorithm are functions represented in a particular structured vector-valued polynomial basis similar to the construction in the infinite Arnoldi method [Jarlebring, Michiels, and Meerbergen, Numer. Math., 122 (2012), pp. 169-195]. In this paper, the low-rank structure is exploited by applying an additional operator and by using a more compact representation of the functions. This reduces the computational cost associated with orthogonalization, as well as the required memory resources. The structure exploitation also provides a natural way of carrying out implicit restarting and locking without the need to impose structure in every restart. The efficiency and properties of the algorithm are illustrated with two large-scale problems.
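    Schematically, the assumed structure can be written as follows (the split index p, the rank r and the factor names U_k, V_k are illustrative and not notation taken from the paper):

    \[
      M(\lambda) \;=\; \sum_{k=0}^{p} \frac{\lambda^k}{k!}\, M^{(k)}(0)
                 \;+\; \sum_{k=p+1}^{\infty} \frac{\lambda^k}{k!}\, U_k V_k^{H},
      \qquad U_k, V_k \in \mathbb{C}^{n\times r},\quad r \ll n,
    \]

    i.e., every Taylor coefficient beyond degree p is a rank-r matrix; this is the property that the more compact representation of the iterates is built on.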

  • 3. Claes, Rob
    et al.
    Jarlebring, Elias
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA.
    Meerbergen, Karl
    Upadhyaya, Parikshit
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA.
    Linearizability of eigenvector nonlinearities. In: SIAM Journal on Matrix Analysis and Applications, ISSN 0895-4798, E-ISSN 1095-7162. Article in journal (Other academic)
    Abstract [en]

    We present a method to linearize, without approximation, a specific class of eigenvalue problems with eigenvector nonlinearities (NEPv), where the nonlinearities are expressed by scalar functions that are defined by a quotient of linear functions of the eigenvector. The exact linearization relies on an equivalent multiparameter eigenvalue problem (MEP) that contains the exact solutions of the NEPv. Due to the characterization of MEPs in terms of a generalized eigenvalue problem, this provides a direct way to compute all NEPv solutions for small problems, and it opens up the possibility to develop locally convergent iterative methods for larger problems. Moreover, the linear formulation allows us to easily determine the number of solutions of the NEPv. We propose two numerical schemes that exploit the structure of the linearization: inverse iteration and residual inverse iteration. We show how symmetry in the MEP can be used to improve reliability and reduce computational cost of both methods. Two numerical examples verify the theoretical results, and a third example shows the potential of a hybrid scheme that is based on a combination of the two proposed methods.
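    A schematic member of this problem class, written with illustrative names A_0, A_i, b_i, c_i that are not taken from the paper, is

    \[
      A(v)\,v = \lambda v,
      \qquad
      A(v) = A_0 + \sum_{i=1}^{m} \frac{b_i^{T} v}{c_i^{T} v}\, A_i,
    \]

    where each scalar nonlinearity is a quotient of two linear functions of the eigenvector v; note that A(\alpha v) = A(v) for every nonzero scalar \alpha, so the normalization of v is free.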

  • 4.
    Claes, Rob
    et al.
    Katholieke Univ Leuven, Dept Comp Sci, B-3001 Leuven, Belgium..
    Jarlebring, Elias
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.). KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Meerbergen, Karl
    Katholieke Univ Leuven, Dept Comp Sci, B-3001 Leuven, Belgium..
    Upadhyaya, Parikshit
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.). KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Linearizable Eigenvector Nonlinearities. 2022. In: SIAM Journal on Matrix Analysis and Applications, ISSN 0895-4798, E-ISSN 1095-7162, Vol. 43, no 2, p. 764-786. Article in journal (Refereed)
    Abstract [en]

    We present a method to linearize, without approximation, a specific class of eigenvalue problems with eigenvector nonlinearities (NEPv), where the nonlinearities are expressed by scalar functions that are defined by a quotient of linear functions of the eigenvector. The exact linearization relies on an equivalent multiparameter eigenvalue problem (MEP) that contains the exact solutions of the NEPv. Due to the characterization of MEPs in terms of a generalized eigenvalue problem this provides a direct way to compute all NEPv solutions for small problems, and it opens up the possibility to develop locally convergent iterative methods for larger problems. Moreover, the linear formulation allows us to easily determine the number of solutions of the NEPv. We propose two numerical schemes that exploit the structure of the linearization: inverse iteration and residual inverse iteration. We show how symmetry in the MEP can be used to improve reliability and reduce computational cost of both methods. Two numerical examples verify the theoretical results, and a third example shows the potential of a hybrid scheme that is based on a combination of the two proposed methods.

  • 5.
    Correnty, Siobhan
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA. KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Jarlebring, Elias
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA. KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Szyld, Daniel B.
    Department of Mathematics, Temple University, 1805 N. Broad Street, Philadelphia, PA 19122-6094.
    Preconditioned Chebyshev BiCG method for parameterized linear systems. 2023. In: Electronic Transactions on Numerical Analysis, E-ISSN 1068-9613, Vol. 58, p. 629-656. Article in journal (Refereed)
    Abstract [en]

    We consider the problem of approximating the solution to A(μ)x(μ) = b for many different values of the parameter μ. Here, A(μ) is large, sparse, and nonsingular with a nonlinear dependence on μ. Our method is based on a companion linearization derived from an accurate Chebyshev interpolation of A(μ) on the interval [-a; a], a ∈ R+, inspired by Effenberger and Kressner [BIT, 52 (2012), pp. 933-951]. The solution to the linearization is approximated in a preconditioned BiCG setting for shifted systems, as proposed in Ahmad et al. [SIAM J. Matrix Anal. Appl., 38 (2017), pp. 401-424], where the Krylov basis matrix is formed once. This process leads to a short-term recurrence method, where one execution of the algorithm produces the approximation of x(μ) for many different values of the parameter μ ∈ [-a; a] simultaneously. In particular, this work proposes one algorithm which applies a shift-and-invert preconditioner exactly as well as an algorithm which applies the preconditioner inexactly based on the work by Vogel [Appl. Math. Comput., 188 (2007), pp. 226-233]. The competitiveness of the algorithms is illustrated with large-scale problems arising from a finite element discretization of a Helmholtz equation with a parameterized material coefficient. The software used in the simulations is publicly available online, and thus all our experiments are reproducible.
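    The reuse of a single Krylov basis for all parameter values rests on a standard shift-invariance property, stated here for a generic matrix A and right-hand side b:

    \[
      \mathcal{K}_m(A, b) = \operatorname{span}\{b, Ab, \dots, A^{m-1}b\}
                          = \mathcal{K}_m(A - \mu I,\, b)
      \qquad \text{for every shift } \mu \in \mathbb{C},
    \]

    so one basis serves every shifted system (A − μI)x = b; after the companion linearization the parameter enters only as such a shift, which is what allows the Krylov basis matrix to be formed only once.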

  • 6. Diehl, Moritz
    et al.
    Glineur, François
    Jarlebring, Elias
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis, NA.
    Michiels, Wim
    Recent Advances in Optimization and its Applications in Engineering. 2010. Collection (editor) (Refereed)
    Abstract [en]

    Mathematical optimization encompasses both a rich and rapidly evolving body of fundamental theory, and a variety of exciting applications in science and engineering. The present book contains a careful selection of articles on recent advances in optimization theory, numerical methods, and their applications in engineering. It features in particular new methods and applications in the fields of optimal control, PDE-constrained optimization, nonlinear optimization, and convex optimization. The authors of this volume took part in the 14th Belgian-French-German Conference on Optimization (BFG09) organized in Leuven, Belgium, on September 14-18, 2009. The volume contains a selection of reviewed articles contributed by the conference speakers as well as three survey articles by plenary speakers and two papers authored by the winners of the best talk and best poster prizes awarded at BFG09. Researchers and graduate students in applied mathematics, computer science, and many branches of engineering will find in this book an interesting and useful collection of recent ideas on the methods and applications of optimization.

  • 7.
    Gaaf, Sarah W.
    et al.
    TU Eindhoven.
    Jarlebring, Elias
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA. KTH, Centres, SeRC - Swedish e-Science Research Centre.
    The infinite Bi-Lanczos method for nonlinear eigenvalue problems. 2017. In: SIAM Journal on Scientific Computing, ISSN 1064-8275, E-ISSN 1095-7197, Vol. 39, p. S898-S919. Article in journal (Refereed)
  • 8.
    Jarlebring, Elias
    Institut Computational Mathematics, TU Braunschweig.
    A quadratic eigenproblem in the analysis of a time delay system. 2006. In: Proceedings in Applied Mathematics and Mechanics: PAMM, E-ISSN 1617-7061, Vol. 6, no 1, p. 63-66. Article in journal (Refereed)
    Abstract [en]

    In this work we solve a quadratic eigenvalue problem occurring in a method to compute the set of delays of a linear time delay system (TDS) such that the system has an imaginary eigenvalue. The computationally dominating part of the method is to find all eigenvalues z of modulus one of the quadratic eigenvalue problem

    Because of its origin in the vectorization of a Lyapunov type matrix equation, the quadratic eigenvalue problem is, even for moderate size problems, of very large size. We show one way to treat this problem by exploiting the Lyapunov type structure of the quadratic eigenvalue problem when constructing an iterative solver. More precisely, we show that the shift-invert operation for the companion form of the quadratic eigenvalue problem can be efficiently computed by solving a Sylvester equation. The usefulness of this exploitation is demonstrated with an example.

  • 9.
    Jarlebring, Elias
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.). KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Broyden's method for nonlinear eigenproblems. 2019. In: SIAM Journal on Scientific Computing, ISSN 1064-8275, E-ISSN 1095-7197, Vol. 41, no 2, p. A989-A1012. Article in journal (Refereed)
    Abstract [en]

    Broyden's method is a general method commonly used for nonlinear systems of equations when very little information is available about the problem. We develop an approach based on Broyden's method for the structure appearing in nonlinear eigenvalue problems. Our approach is designed for problems where the evaluation of a matrix vector product is computationally expensive, essentially as expensive as solving the corresponding linear system of equations. We show how the structure of the Jacobian matrix can be incorporated into the algorithm to improve convergence. The algorithm exhibits local superlinear convergence for simple eigenvalues, and we characterize the convergence. We show how deflation can be integrated and combined such that the method can be used to compute several eigenvalues. A specific problem in machine tool milling, coupled with a PDE, is used to illustrate the approach. The simulations were carried out using the Julia programming language, and the simulation software is provided publicly for reproducibility.
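    As a minimal, self-contained illustration of the quasi-Newton idea (a plain Broyden iteration applied to the augmented system F(z) = [M(λ)v; c'v - 1], not the structured algorithm developed in the paper), the Julia sketch below uses a small artificial delay-type NEP; the matrices, the normalization vector c and the starting guess are illustrative, and a reasonable starting guess may be needed for convergence.

    using LinearAlgebra

    # Small artificial delay-type NEP for illustration: M(λ) = -λI + A0 + A1*exp(-λ)
    n  = 4
    A0 = diagm(0 => fill(-2.0, n), 1 => ones(n - 1), -1 => ones(n - 1))
    A1 = 0.3 * Matrix(1.0I, n, n)
    M(λ)  = -λ * Matrix(1.0I, n, n) + A0 + exp(-λ) * A1
    Mp(λ) = -Matrix(1.0I, n, n) - exp(-λ) * A1        # M'(λ), used only for the initial Jacobian

    # Augmented system F(z) = [M(λ)v; c'v - 1] with z = [v; λ]; c fixes the scaling of v
    c = ones(n)
    F(z) = (v = z[1:n]; λ = z[n+1]; vcat(M(λ) * v, dot(c, v) - 1.0))

    # Plain Broyden iteration: rank-one secant updates of an approximate Jacobian
    function broyden(F, z0, J0; tol = 1e-12, maxit = 200)
        z, J = copy(z0), copy(J0)
        for _ in 1:maxit
            Fz = F(z)
            norm(Fz) < tol && break
            s  = -(J \ Fz)                             # quasi-Newton step
            z += s
            J += (F(z) - Fz - J * s) * s' / dot(s, s)  # Broyden's "good" update
        end
        return z
    end

    v0, λ0 = normalize(ones(n)), -1.0
    J0 = [M(λ0) Mp(λ0)*v0; c' 0.0]                     # exact Jacobian at the starting guess
    z  = broyden(F, vcat(v0, λ0), J0)
    v, λ = z[1:n], z[n+1]
    println("λ ≈ ", λ, "   residual ‖M(λ)v‖ = ", norm(M(λ) * v))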

  • 10.
    Jarlebring, Elias
    TU Braunschweig, Germany.
    Computing critical delays for time delay systems with multiple delays. 2006. Conference paper (Refereed)
    Abstract [en]

    In this work we present a method to analyze the robustness of stability of a time-delay system (TDS) with respect to the delays. This is done by computing the delays for which the system has a purely imaginary eigenvalue. These delays, called critical delays, generate potential points for a stability switch, i.e., the points where the system switches from stable to unstable. To derive the method, we find a Lyapunov-type equation equivalent to the characteristic equation of the TDS. Unlike the characteristic equation, the Lyapunov-type equation does not have any exponential terms if the eigenvalue is imaginary. This allows us to solve the Lyapunov-type equation by rewriting it as a quadratic eigenvalue problem, for which there are efficient numerical methods. For the scalar case, we find a new explicit expression for the curves in the stability chart. The method is applied to previously solved examples as well as previously unsolved problems of larger dimension.

  • 11.
    Jarlebring, Elias
    KTH, School of Computer Science and Communication (CSC).
    Computing the stability region in delay-space of a TDS using polynomial eigenproblems. 2006. In: Proceedings of the 6th IFAC Workshop on Time-Delay Systems, 2006. Conference paper (Refereed)
    Abstract [en]

    In this work we analyze stability properties of retarded linear time-invariant multi-dimensional, multi-delay time-delay systems with respect to perturbations in the delay parameters. We analyze two methods which allow the computation of the critical delays, i.e., the points in delay-space which cause the system to have a purely imaginary eigenvalue. The critical delays are potential stability boundaries, as the boundary of the stability region is necessarily a subset of the critical delays. The two methods originate from a Lyapunov-type condition, whose derivation is completely self-contained in this work. The first method corresponds to the case of commensurate delays, for which the Lyapunov-type condition reduces to a polynomial eigenvalue problem whose first companion form is exactly the eigenvalue problem occurring in Chen et al. (1995). The second method is the result of a simple substitution which allows the computation of the critical delays of an incommensurate system by solving a quadratic eigenvalue problem. For the scalar multi-delay case we find a closed expression for the critical curves using this method. We confirm the methods by comparing them to previous work and published examples.

  • 12.
    Jarlebring, Elias
    Department of Computer Science, K.U. Leuven, Celestijnenlaan 200 A, 3001 Leuven-Heverlee, Belgium.
    Convergence factors of Newton methods for nonlinear eigenvalue problems. 2012. In: Linear Algebra and its Applications, ISSN 0024-3795, E-ISSN 1873-1856, Vol. 436, no 10, p. 3943-3953. Article in journal (Refereed)
    Abstract [en]

    Consider a complex sequence (λ_k) convergent to λ ∈ C with order p ∈ N. The convergence factor is typically defined as the fraction c_k := (λ_{k+1} − λ)/(λ_k − λ)^p in the limit k → ∞. In this paper, we prove formulas characterizing c_k in the limit k → ∞ for two different Newton-type methods for nonlinear eigenvalue problems. The formulas are expressed in terms of the left and right eigenvectors.

    The two treated methods are called the method of successive linear problems (MSLP) and augmented Newton and are widely used in the literature. We prove several explicit formulas for c_k for both methods. Formulas for both methods are found for simple as well as double eigenvalues. In some cases, we observe in examples that the limit of c_k as k → ∞ does not exist. For cases where this limit does not appear to exist, we prove other limiting expressions such that a characterization of c_k in the limit is still possible.
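    The quantity c_k is easy to observe numerically. The Julia sketch below applies Newton's method to a scalar equation with a simple root (so p = 2) and compares the observed ratios with the classical limit f''(λ)/(2f'(λ)); the function and root are illustrative and unrelated to the eigenvalue setting of the paper.

    # Observing c_k = (λ_{k+1} - λ)/(λ_k - λ)^p for scalar Newton iteration (p = 2)
    f(x)   = x^3 - 2x - 5
    fp(x)  = 3x^2 - 2
    fpp(x) = 6x
    λexact = 2.0945514815423265          # simple real root of f
    let λk = 3.0
        for k in 1:5
            λnext = λk - f(λk) / fp(λk)                  # Newton step
            ck = (λnext - λexact) / (λk - λexact)^2      # observed convergence factor
            println("k = $k:  c_k ≈ $(round(ck, digits = 4))")
            λk = λnext
        end
    end
    println("limit f''(λ)/(2f'(λ)) ≈ ", round(fpp(λexact) / (2 * fp(λexact)), digits = 4))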

  • 13.
    Jarlebring, Elias
    Tech Univ Carolo Wilhelmina Braunschweig, Inst Computat Math, D-38023 Braunschweig, Germany.
    Critical delays and polynomial eigenvalue problems. 2009. In: Journal of Computational and Applied Mathematics, ISSN 0377-0427, E-ISSN 1879-1778, Vol. 224, no 1, p. 296-306. Article in journal (Refereed)
    Abstract [en]

    In this work we present a new method to compute the delays of delay-differential equations (DDEs), such that the DDE has a purely imaginary eigenvalue. For delay-differential equations with multiple delays, the critical curves or critical surfaces in delay space (that is, the set of delays where the DDE has a purely imaginary eigenvalue) are parameterized. We show how the method is related to other works in the field by treating the case where the delays are integer multiples of some delay value, i.e., commensurate delays. The parameterization is done by solving a quadratic eigenvalue problem which is constructed from the vectorization of a matrix equation and hence typically of large size. For commensurate delay-differential equations, the corresponding equation is a polynomial eigenvalue problem. As a special case of the proposed method, we find a closed form for a parameterization of the critical surface for the scalar case. We provide several examples with visualizations where the computation is done with some exploitation of the structure of eigenvalue problems.

  • 14.
    Jarlebring, Elias
    KTH, School of Computer Science and Communication (CSC).
    On critical delays for linear neutral delay systems. 2007. In: Proceedings of the European Control Conference, 2007. Conference paper (Refereed)
    Abstract [en]

    In this work we address the problem of finding the critical delays of a linear neutral delay system, i.e., the delays such that the system has a purely imaginary eigenvalue. Even though neutral delay systems exhibit some discontinuity properties with respect to changes in the delays, an essential part of a non-conservative stability analysis with respect to the delays is the computation of the critical delays. We generalize previous results on critical delays and stability switches for retarded time-delay systems, under some minor assumptions on the delay system. The work starts by stating a general equivalence theorem between the spectrum and a matrix function condition. We show how this theorem can be applied to the commensurate time-delay system to compute the critical delays. It turns out that the resulting method is closely related to parts of the results of Fu, Niculescu and Chen [6]. For the incommensurate case we present a scheme which allows the computation of the critical curves, i.e., the points in delay-space for which the system has a purely imaginary eigenvalue. We apply the method to previously investigated examples, in order to provide a verification of the results, as well as to an example for which the stability picture is, to our knowledge, not yet known.

  • 15.
    Jarlebring, Elias
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA. KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Correnty, Siobhan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA. KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Infinite GMRES for Parameterized Linear Systems. 2022. In: SIAM Journal on Matrix Analysis and Applications, ISSN 0895-4798, E-ISSN 1095-7162, Vol. 43, no 3, p. 1382-1405. Article in journal (Refereed)
    Abstract [en]

    We consider linear parameterized systems A(μ)x(μ) = b for many different μ, where A is large and sparse and depends nonlinearly on μ. Solving such systems individually for each μ would require great computational effort. In this work we propose to compute a partial parameterization x̃(μ) ≈ x(μ), where x̃(μ) is cheap to evaluate for many μ. Our methods are based on the observation that a companion linearization can be formed where the dependence on μ is only linear. In particular, methods are presented that combine the well-established Krylov subspace method for linear systems, GMRES, with algorithms for nonlinear eigenvalue problems (NEPs) to generate a basis for the Krylov subspace. Within this new approach, the basis matrix is constructed in three different ways, using a tensor structure and exploiting that certain problems have low-rank properties. The methods are analyzed analogously to the standard convergence theory for the method GMRES for linear systems. More specifically, the error is estimated based on the magnitude of the parameter μ and the spectrum of the linear companion matrix, which corresponds to the reciprocal solutions to the corresponding NEP. Numerical experiments illustrate the competitiveness of the methods for large-scale problems. The simulations are reproducible and publicly available online.

  • 16.
    Jarlebring, Elias
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis, NA.
    Damm, Tobias
    The Lambert W function and the spectrum of some multidimensional time-delay systems. 2007. In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 43, no 12, p. 2124-2128. Article in journal (Refereed)
    Abstract [en]

    In this note we find an explicit expression for the eigenvalues of a retarded time-delay system with one delay, for the special case that the system matrices are simultaneously triangularizable, which includes the case where they commute. Using matrix function definitions we define a matrix version of the Lambert W function, from which we form the expression. We prove by counter-example that some expressions in other publications on Lambert W for time-delay systems do not always hold.
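    For orientation, the classical scalar case behind this type of result: for the scalar delay equation ẋ(t) = a x(t) + b x(t − τ), the characteristic equation s − a − b e^{−sτ} = 0 is solved branch by branch by the Lambert W function,

    \[
      s_k = a + \frac{1}{\tau} W_k\!\bigl(b\,\tau\, e^{-a\tau}\bigr), \qquad k \in \mathbb{Z},
    \]

    where W_k denotes the k-th branch of W; the paper treats the matrix analogue of this formula for simultaneously triangularizable system matrices.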

  • 17.
    Jarlebring, Elias
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA.
    Damm, Tobias
    Michiels, Wim
    Model reduction of time-delay systems using position balancing and delay Lyapunov equations. 2013. In: MCSS. Mathematics of Control, Signals and Systems, ISSN 0932-4194, E-ISSN 1435-568X, Vol. 25, no 2, p. 147-166. Article in journal (Refereed)
    Abstract [en]

    Balanced truncation is a standard and very natural approach to approximate dynamical systems. We present a version of balanced truncation for model order reduction of linear time-delay systems. The procedure is based on a coordinate transformation of the position and preserves the delay structure of the system. We therefore call it (structure-preserving) position balancing. To every position, we associate quantities representing energies for the controllability and observability of the position. We show that these energies can be expressed explicitly in terms of the solutions to corresponding delay Lyapunov equations. Apart from characterizing the energies, we show that one block of the (operator) controllability and observability Gramians in the operator formulation of the time-delay system can also be characterized with the delay Lyapunov equation. The delay Lyapunov equation undergoes a contragredient transformation when we apply the position coordinate transformation and we propose to truncate it in a classical fashion, such that positions which are only weakly connected to the input and the output in the sense of the energy concepts are removed.

  • 18.
    Jarlebring, Elias
    et al.
    KTH, Centres, SeRC - Swedish e-Science Research Centre. KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Fasi, Massimiliano
    Univ Durham, Dept Comp Sci, Stockton Rd, Durham DH1 3LE, England..
    Ringh, Emil
    Ericsson Res, SE-10044 Stockholm, Sweden..
    Computational Graphs for Matrix Functions. 2022. In: ACM Transactions on Mathematical Software, ISSN 0098-3500, E-ISSN 1557-7295, Vol. 48, no 4, p. 1-35, article id 39. Article in journal (Refereed)
    Abstract [en]

    Many numerical methods for evaluating matrix functions can be naturally viewed as computational graphs. Rephrasing these methods as directed acyclic graphs (DAGs) is a particularly effective approach to study existing techniques, improve them, and eventually derive new ones. The accuracy of these matrix techniques can be characterized by the accuracy of their scalar counterparts, and thus designing algorithms for matrix functions can be regarded as a scalar-valued optimization problem. The derivatives needed during the optimization can be calculated automatically by exploiting the structure of the DAG in a fashion analogous to backpropagation. This article describes GraphMatFun.jl, a Julia package that offers the means to generate and manipulate computational graphs, optimize their coefficients, and generate Julia, MATLAB, and C code to evaluate them efficiently at a matrix argument. The software also provides tools to estimate the accuracy of a graph-based algorithm and thus obtain numerically reliable methods. For the exponential, for example, using a particular form (degree-optimal) of polynomials produces implementations that in many cases are cheaper, in terms of computational cost, than the Padé-based techniques typically used in mathematical software. The optimized graphs and the corresponding generated code are available online.
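    As a generic illustration of the computational-graph viewpoint (this does not use the GraphMatFun.jl API; the degree and the function name below are arbitrary choices): a truncated Taylor polynomial of the exponential, evaluated at a matrix argument by Horner's scheme, is a small DAG in which every matrix multiplication and addition is one node.

    using LinearAlgebra

    # Degree-m Taylor polynomial of exp evaluated at a matrix argument via Horner's scheme.
    # Each multiplication/addition below corresponds to one node of the computational graph.
    function exp_taylor_horner(A::AbstractMatrix, m::Integer = 8)
        n = size(A, 1)
        c = [1 / factorial(k) for k in 0:m]      # Taylor coefficients of exp
        P = c[end] * Matrix(1.0I, n, n)
        for k in m-1:-1:0
            P = A * P + c[k+1] * I               # Horner step: P ← A·P + c_k·I
        end
        return P
    end

    A = 0.1 * randn(4, 4)
    println("error vs exp(A): ", norm(exp_taylor_horner(A) - exp(A)))

    The package described in the abstract additionally optimizes the coefficients of such graphs and generates code from them; the sketch above only illustrates the graph structure itself.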

  • 19.
    Jarlebring, Elias
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA.
    Guettel, Stefan
    A spatially adaptive iterative method for a class of nonlinear operator eigenproblems. 2014. In: Electronic Transactions on Numerical Analysis, E-ISSN 1068-9613, Vol. 41, p. 21-41. Article in journal (Refereed)
    Abstract [en]

    We present a new algorithm for the iterative solution of nonlinear operator eigenvalue problems arising from partial differential equations (PDEs). This algorithm combines automatic spatial resolution of linear operators with the infinite Arnoldi method for nonlinear matrix eigenproblems proposed by Jarlebring et al. [Numer. Math., 122 (2012), pp. 169-195]. The iterates in this infinite Arnoldi method are functions, and each iteration requires the solution of an inhomogeneous differential equation. This formulation is independent of the spatial representation of the functions, which allows us to employ a dynamic representation with an accuracy of about the level of machine precision at each iteration, similar to what is done in the Chebfun system with its chebop functionality, although our function representation is entirely based on coefficients instead of function values. Our approach also allows nonlinearity in the boundary conditions of the PDE. The algorithm is illustrated with several examples, e.g., the study of eigenvalues of a vibrating string with delayed boundary feedback control.

  • 20.
    Jarlebring, Elias
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA.
    Güttel, Stefan
    A spatially adaptive iterative method for a class of nonlinear operator eigenvalue problems. 2014. In: Electronic Transactions on Numerical Analysis, E-ISSN 1068-9613, Vol. 41, p. 21-42. Article in journal (Refereed)
    Abstract [en]

    We present a new algorithm for the iterative solution of nonlinear operator eigenvalue problems arising from partial differential equations (PDEs). This algorithm combines automatic spatial resolution of linear operators with the infinite Arnoldi method for nonlinear matrix eigenproblems proposed by Jarlebring et al. [Numer. Math., 122 (2012), pp. 169–195]. The iterates in this infinite Arnoldi method are functions, and each iteration requires the solution of an inhomogeneous differential equation. This formulation is independent of the spatial representation of the functions, which allows us to employ a dynamic representation with an accuracy of about the level of machine precision at each iteration, similar to what is done in the Chebfun system with its chebop functionality, although our function representation is entirely based on coefficients instead of function values. Our approach also allows nonlinearity in the boundary conditions of the PDE. The algorithm is illustrated with several examples, e.g., the study of eigenvalues of a vibrating string with delayed boundary feedback control.

  • 21.
    Jarlebring, Elias
    et al.
    Katholieke Univ Leuven, Dept Comp Sci, B-3001 Heverlee, Belgium.
    Hochstenbach, M.E.
    Eindhoven Univ Technol, Dept Math & Comp Sci, NL-5600 MB Eindhoven, Netherlands .
    Polynomial two-parameter eigenvalue problems and matrix pencil methods for stability of delay-differential equations. 2009. In: Linear Algebra and its Applications, ISSN 0024-3795, E-ISSN 1873-1856, Vol. 431, no 3-4, p. 369-380. Article in journal (Refereed)
    Abstract [en]

    Several recent methods used to analyze asymptotic stability of delay-differential equations (DDEs) involve determining the eigenvalues of a matrix, a matrix pencil or a matrix polynomial constructed by Kronecker products. Despite some similarities between the different types of these so-called matrix pencil methods, the general ideas used as well as the proofs differ considerably. Moreover, the available theory hardly reveals the relations between the different methods. In this work, a different derivation of various matrix pencil methods is presented using a unifying framework of a new type of eigenvalue problem: the polynomial two-parameter eigenvalue problem, of which the quadratic two-parameter eigenvalue problem is a special case. This framework makes it possible to establish relations between various seemingly different methods and provides further insight into the theory of matrix pencil methods. We also recognize a few new matrix pencil variants to determine DDE stability. Finally, the recognition of the new types of eigenvalue problem opens a door to efficient computation of DDE stability.

  • 22.
    Jarlebring, Elias
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA. KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Koskela, Antti
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA.
    Mele, Giampaolo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA. KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Disguised and new quasi-Newton methods for nonlinear eigenvalue problems. 2018. In: Numerical Algorithms, ISSN 1017-1398, E-ISSN 1572-9265, Vol. 79, no 1, p. 311-335. Article in journal (Refereed)
    Abstract [en]

    In this paper, we take a quasi-Newton approach to nonlinear eigenvalue problems (NEPs) of the type M(λ)v = 0, where M is a holomorphic matrix-valued function. We investigate which types of approximations of the Jacobian matrix lead to competitive algorithms, and provide convergence theory. The convergence analysis is based on theory for quasi-Newton methods and Keldysh’s theorem for NEPs. We derive new algorithms and also show that several well-established methods for NEPs can be interpreted as quasi-Newton methods, and thereby we provide insight into their convergence behavior. In particular, we establish quasi-Newton interpretations of Neumaier’s residual inverse iteration and Ruhe’s method of successive linear problems.

  • 23.
    Jarlebring, Elias
    et al.
    K.U. Leuven, Department of Computer Science, 3001 Heverlee, Belgium.
    Kvaal, S
    CMA, P.O. Box 1053 Blindern, NO-0316 Oslo, Norway,.
    Michiels, W
    K.U. Leuven, Department of Computer Science, 3001 Heverlee, Belgium.
    Computing all pairs (λ, μ) such that λ is a double eigenvalue of A + μB. 2011. In: SIAM Journal on Matrix Analysis and Applications, ISSN 0895-4798, E-ISSN 1095-7162, Vol. 32, no 3, p. 902-927. Article in journal (Refereed)
    Abstract [en]

    Double eigenvalues are not generic for matrices without any particular structure. A matrix depending linearly on a scalar parameter, A + μB, will, however, generically have double eigenvalues for some values of the parameter μ. In this paper, we consider the problem of finding those values. More precisely, we construct a method to accurately find all scalar pairs (λ, μ) such that A + μB has a double eigenvalue λ, where A and B are given arbitrary complex matrices. The general idea of the globally convergent method is that if μ is close to a solution, then A + μB has two eigenvalues which are close to each other. We fix the relative distance between these two eigenvalues and construct a method to solve and study it by observing that the resulting problem can be stated as a two-parameter eigenvalue problem, which is already studied in the literature. The method, which we call the method of fixed relative distance (MFRD), involves solving a two-parameter eigenvalue problem which returns approximations of all solutions. It is unfortunately not possible to get full accuracy with MFRD. In order to compute solutions with full accuracy, we present an iterative method which returns a very accurate solution, for a sufficiently good starting value. The approach is illustrated with one academic example and one application to a simple problem in computational quantum mechanics.

  • 24.
    Jarlebring, Elias
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA.
    Kvaal, Simen
    Michiels, Wim
    An Inverse Iteration Method for Eigenvalue Problems with Eigenvector Nonlinearities. 2014. In: SIAM Journal on Scientific Computing, ISSN 1064-8275, E-ISSN 1095-7197, Vol. 36, no 4, p. A1978-A2001. Article in journal (Refereed)
    Abstract [en]

    Consider a symmetric matrix A(v) ∈ R^{n×n} depending on a vector v ∈ R^n and satisfying the property A(αv) = A(v) for any α ∈ R\{0}. We study here the problem of finding (λ, v) ∈ R × R^n\{0} such that (λ, v) is an eigenpair of the matrix A(v), and we propose a generalization of inverse iteration for eigenvalue problems with this type of eigenvector nonlinearity. The convergence of the proposed method is studied and several convergence properties are shown to be analogous to inverse iteration for standard eigenvalue problems, including local convergence properties. The algorithm is also shown to be equivalent to a particular discretization of an associated ordinary differential equation, if the shift is chosen in a particular way. The algorithm is adapted to a variant of the Schrödinger equation known as the Gross-Pitaevskii equation. We use numerical simulations to illustrate the convergence properties, as well as the efficiency of the algorithm and the adaptation.
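    A stripped-down sketch of the underlying idea for a toy problem is given below (the matrix A(v), the shift and the simple fixed-shift update are illustrative choices, not the exact algorithm of the paper): freeze A at the current iterate, take one shifted inverse-iteration step, and recompute the Rayleigh quotient.

    using LinearAlgebra

    # Toy eigenvector-dependent symmetric matrix satisfying A(αv) = A(v)
    n  = 6
    A0 = diagm(0 => fill(-2.0, n), 1 => ones(n - 1), -1 => ones(n - 1))
    A(v) = Symmetric(A0 + 0.5 * Diagonal(v .^ 2) / dot(v, v))

    # Fixed-shift inverse iteration for A(v)v = λv (simplified sketch)
    function nepv_invit(A, v0; σ = -0.2, tol = 1e-12, maxit = 100)
        v = normalize(v0)
        λ = dot(v, A(v) * v)
        for _ in 1:maxit
            v = normalize((A(v) - σ * I) \ v)   # inverse-iteration step with A frozen at v
            λ = dot(v, A(v) * v)                # Rayleigh quotient update
            norm(A(v) * v - λ * v) < tol && break
        end
        return λ, v
    end

    λ, v = nepv_invit(A, ones(n))
    println("λ ≈ ", λ, "   residual = ", norm(A(v) * v - λ * v))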

  • 25.
    Jarlebring, Elias
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis, NA.
    Meerbergen, K
    Michiels, W
    An Arnoldi method with structured starting vectors for the delay eigenvalue problem. 2010. In: Proceedings of the 9th IFAC Workshop on Time Delay Systems, 2010. Conference paper (Refereed)
    Abstract [en]

    The Arnoldi method is currently a very popular method to solve large-scale eigenvalue problems. The general purpose of this paper is to generalize Arnoldi to the characteristic equation of a time-delay system, here called a delay eigenvalue problem. The presented generalization is mathematically equivalent to Arnoldi applied to the problem corresponding to a Taylor approximation of the exponential. Even though the derivation of the result uses a Taylor approximation, the constructed method can be implemented in such a way that it is independent of the Taylor truncation parameter N. This is achieved by exploiting properties of vectors with a special structure: the vectorization of a rank-one matrix plus the vectorization of a matrix whose right-most columns are zero. It turns out that this set of vectors is closed under the matrix-vector product as well as orthogonalization. Moreover, both operations can be efficiently computed. Since Arnoldi only consists of these operations, if Arnoldi is started with the special vector structure, the method can be efficiently executed. The presented numerical experiments indicate that the method is very efficient in comparison to methods in the literature.

  • 26.
    Jarlebring, Elias
    et al.
    Katholieke Univ Leuven, Dept Comp Sci, B-3001 Heverlee, Belgium .
    Meerbergen, Karl
    Katholieke Univ Leuven, Dept Comp Sci, B-3001 Heverlee, Belgium .
    Michiels, Wim
    Katholieke Univ Leuven, Dept Comp Sci, B-3001 Heverlee, Belgium .
    A Krylov method for the delay eigenvalue problem. 2010. In: SIAM Journal on Scientific Computing, ISSN 1064-8275, E-ISSN 1095-7197, Vol. 32, no 6, p. 3278-3300. Article in journal (Refereed)
    Abstract [en]

    The Arnoldi method is currently a very popular algorithm to solve large-scale eigenvalue problems. The main goal of this paper is to generalize the Arnoldi method to the characteristic equation of a delay-differential equation (DDE), here called a delay eigenvalue problem (DEP). The DDE can equivalently be expressed with a linear infinite-dimensional operator whose eigenvalues are the solutions to the DEP. We derive a new method by applying the Arnoldi method to the generalized eigenvalue problem associated with a spectral discretization of the operator and by exploiting the structure. The result is a scheme where we expand a subspace not only in the traditional way done in the Arnoldi method. The subspace vectors are also expanded with one block of rows in each iteration. More important, the structure is such that if the Arnoldi method is started in an appropriate way, it has the (somewhat remarkable) property that it is in a sense independent of the number of discretization points. It is mathematically equivalent to an Arnoldi method with an infinite matrix, corresponding to the limit where we have an infinite number of discretization points. We also show an equivalence with the Arnoldi method in an operator setting. It turns out that with an appropriately defined operator over a space equipped with scalar product with respect to which Chebyshev polynomials are orthonormal, the vectors in the Arnoldi iteration can be interpreted as the coefficients in a Chebyshev expansion of a function. The presented method yields the same Hessenberg matrix as the Arnoldi method applied to the operator.
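    In the single-delay case considered in many of these works, the delay eigenvalue problem takes the explicit form

    \[
      \bigl(-\lambda I + A_0 + A_1 e^{-\tau\lambda}\bigr) v = 0, \qquad v \neq 0,
    \]

    for the DDE ẋ(t) = A_0 x(t) + A_1 x(t − τ); the exponential term makes this a nonlinear (non-polynomial) eigenvalue problem, which is what the spectral discretization and the structured Arnoldi expansion described above are designed to handle.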

  • 27.
    Jarlebring, Elias
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA.
    Meerbergen, Karl
    KU Leuven.
    Michiels, Wim
    KU Leuven.
    Computing a partial Schur factorization of nonlinear eigenvalue problems using the infinite Arnoldi method. 2014. In: SIAM Journal on Matrix Analysis and Applications, ISSN 0895-4798, E-ISSN 1095-7162, Vol. 35, no 2, p. 411-436. Article in journal (Refereed)
    Abstract [en]

    The partial Schur factorization can be used to represent several eigenpairs of a matrix in a numerically robust way. Different adaptions of the Arnoldi method are often used to compute partial Schur factorizations. We propose here a technique to compute a partial Schur factorization of a nonlinear eigenvalue problem (NEP). The technique is an extension of our algorithm from [E. Jarlebring, W. Michiels, and K. Meerbergen, Numer. Math., 122 (2012), pp. 169-195], now called the infinite Arnoldi method. The infinite Arnoldi method is a method designed for NEPs, and can be interpreted as Arnoldi's method applied to a linear infinite-dimensional operator, whose reciprocal eigenvalues are the solutions to the NEP. As a first result we show that the invariant pairs of the operator are equivalent to invariant pairs of the NEP. We characterize the structure of the invariant pairs of the operator and show how one can carry out a modification of the infinite Arnoldi method by respecting the structure. This also allows us to naturally add the feature known as locking. We nest this algorithm with an outer iteration, where the infinite Arnoldi method for a particular type of structured functions is appropriately restarted. The restarting exploits the structure and is inspired by the well-known implicitly restarted Arnoldi method for standard eigenvalue problems. The final algorithm is applied to examples from a benchmark collection, showing that both processing time and memory consumption can be considerably reduced with the restarting technique.

  • 28.
    Jarlebring, Elias
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.). KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Mele, Giampaolo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA. KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Palitta, Davide
    Univ Bologna, Dipartimento Matemat, Piazza Porta S Donato,5, I-40127 Bologna, Italy..
    Ringh, Emil
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.). KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Krylov methods for low-rank commuting generalized Sylvester equations. 2018. In: Numerical Linear Algebra with Applications, ISSN 1070-5325, E-ISSN 1099-1506, Vol. 25, no 6, article id e2176. Article in journal (Refereed)
    Abstract [en]

    We consider generalizations of the Sylvester matrix equation, consisting of the sum of a Sylvester operator and a linear operator π with a particular structure. More precisely, the commutators of the matrix coefficients of the operator π and the Sylvester operator coefficients are assumed to be matrices with low rank. We show (under certain additional conditions) low-rank approximability of this problem, that is, the solution to this matrix equation can be approximated with a low-rank matrix. Projection methods have successfully been used to solve other matrix equations with low-rank approximability. We propose a new projection method for this class of matrix equations. The choice of the subspace is a crucial ingredient for any projection method for matrix equations. Our method is based on an adaptation and extension of the extended Krylov subspace method for Sylvester equations. A constructive choice of the starting vector/block is derived from the low-rank commutators. We illustrate the effectiveness of our method by solving large-scale matrix equations arising from applications in control theory and the discretization of PDEs. The advantages of our approach in comparison to other methods are also illustrated.

  • 29.
    Jarlebring, Elias
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA. KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Mele, Giampaolo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA. KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Runborg, Olof
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA. KTH, Centres, SeRC - Swedish e-Science Research Centre.
    The waveguide eigenvalue problem and the tensor infinite Arnoldi method. 2017. In: SIAM Journal on Scientific Computing, ISSN 1064-8275, E-ISSN 1095-7197, Vol. 39, no 3, p. A1062-A1088. Article in journal (Refereed)
    Abstract [en]

    We present a new computational approach for a class of large-scale nonlinear eigenvalue problems (NEPs) that are nonlinear in the eigenvalue. The contribution of this paper is twofold. We derive a new iterative algorithm for NEPs, the tensor infinite Arnoldi method (TIAR), which is applicable to a general class of NEPs, and we show how to specialize the algorithm to a specific NEP: the waveguide eigenvalue problem. The waveguide eigenvalue problem arises from a finite-element discretization of a partial differential equation used in the study of waves propagating in a periodic medium. The algorithm is successfully applied to accurately solve benchmark problems as well as complicated waveguides. We study the complexity of the specialized algorithm with respect to the number of iterations "m" and the size of the problem "n", both from a theoretical perspective and in practice. For the waveguide eigenvalue problem, we establish that the computationally dominating part of the algorithm has complexity O(nm^2 + sqrt(n) m^3). Hence, the asymptotic complexity of TIAR applied to the waveguide eigenvalue problem, for n → ∞, is the same as for Arnoldi’s method for standard eigenvalue problems.

  • 30.
    Jarlebring, Elias
    et al.
    Katholieke Univ Leuven, B-3001 Heverlee, Belgium .
    Michiels, Wim
    Katholieke Univ Leuven, B-3001 Heverlee, Belgium .
    Analyzing the convergence factor of residual inverse iteration. 2011. In: BIT Numerical Mathematics, ISSN 0006-3835, E-ISSN 1572-9125, Vol. 51, no 4, p. 937-957. Article in journal (Refereed)
    Abstract [en]

    We will establish here a formula for the convergence factor of the method called residual inverse iteration, which is a method for nonlinear eigenvalue problems and a generalization of the well-known inverse iteration. The formula for the convergence factor is explicit and involves quantities associated with the eigenvalue to which the iteration converges, in particular the eigenvalue and eigenvector. Residual inverse iteration allows for some freedom in the choice of a vector w^(k), and we can use the formula for the convergence factor to analyze how it depends on the choice of w^(k). We also use the formula to illustrate the convergence when the shift is close to the eigenvalue. Finally, we explain the slow convergence for double eigenvalues by showing that under generic conditions, the convergence factor is one, unless the eigenvalue is semisimple. If the eigenvalue is semisimple, it turns out that we can expect convergence similar to the simple case.

  • 31.
    Jarlebring, Elias
    et al.
    Katholieke Univ Leuven, Dept Comp Sci, B-3001 Heverlee, Belgium .
    Michiels, Wim
    Katholieke Univ Leuven, Dept Comp Sci, B-3001 Heverlee, Belgium .
    Invariance properties in the root sensitivity of time-delay systems with double imaginary roots. 2010. In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 46, no 6, p. 1112-1115. Article in journal (Refereed)
    Abstract [en]

    If iω ∈ iR is an eigenvalue of a time-delay system for the delay τ_0, then iω is also an eigenvalue for the delays τ_k := τ_0 + 2kπ/ω, for any k ∈ Z. We investigate the sensitivity, periodicity and invariance properties of the root iω for the case that iω is a double eigenvalue for some τ_k. It turns out that under natural conditions (the condition that the root exhibits the completely regular splitting property if the delay is perturbed), the presence of a double imaginary root iω for some delay τ_0 implies that iω is a simple root for the other delays τ_k, k ≠ 0. Moreover, we show how to characterize the root locus around iω. The entire local root locus picture can be completely determined from the square root splitting of the double root. We separate the general picture into two cases depending on the sign of a single scalar constant: the imaginary part of the first coefficient in the square root expansion of the double eigenvalue.

  • 32.
    Jarlebring, Elias
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis, NA.
    Michiels, Wim
    Meerbergen, Karl
    A linear eigenvalue algorithm for the nonlinear eigenvalue problem. 2012. In: Numerische Mathematik, ISSN 0029-599X, E-ISSN 0945-3245, Vol. 122, no 1, p. 169-195. Article in journal (Refereed)
    Abstract [en]

    The Arnoldi method for standard eigenvalue problems possesses several attractive properties making it robust, reliable and efficient for many problems. The first result of this paper is a characterization of the solutions to an arbitrary (analytic) nonlinear eigenvalue problem (NEP) as the reciprocal eigenvalues of an infinite-dimensional operator. We consider the Arnoldi method for the operator and show that with a particular choice of starting function and a particular choice of scalar product, the structure of the operator can be exploited in a very effective way. The structure of the operator is such that when the Arnoldi method is started with a constant function, the iterates will be polynomials. For a large class of NEPs, we show that we can carry out the infinite-dimensional Arnoldi algorithm for the operator in arithmetic based on standard linear algebra operations on vectors and matrices of finite size. This is achieved by representing the polynomials by vector coefficients. The resulting algorithm is by construction such that it is completely equivalent to the standard Arnoldi method and also inherits many of its attractive properties, which are illustrated with examples.

  • 33.
    Jarlebring, Elias
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.). KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Poloni, F.
    Iterative methods for the delay Lyapunov equation with T-Sylvester preconditioning. 2019. In: Applied Numerical Mathematics, ISSN 0168-9274, E-ISSN 1873-5460, Vol. 135, p. 173-185. Article in journal (Refereed)
    Abstract [en]

    The delay Lyapunov equation is an important matrix boundary-value problem which arises as an analogue of the Lyapunov equation in the study of time-delay systems ẋ(t) = A_0 x(t) + A_1 x(t − τ) + B_0 u(t). We propose a new algorithm for the solution of the delay Lyapunov equation. Our method is based on the fact that the delay Lyapunov equation can be expressed as a linear system of equations, whose unknown is the value U(τ/2) ∈ R^{n×n}, i.e., the delay Lyapunov matrix at time τ/2. This linear matrix equation with n^2 unknowns is solved by adapting a preconditioned iterative method such as GMRES. The action of the n^2 × n^2 matrix associated to this linear system can be computed by solving a coupled matrix initial-value problem. A preconditioner for the iterative method is proposed based on solving a T-Sylvester equation MX + X^T N = C, for which there are methods available in the literature. We prove that the preconditioner is effective under certain assumptions. The efficiency of the approach is illustrated by applying it to a time-delay system stemming from the discretization of a partial differential equation with delay. Approximate solutions to this problem can be obtained for problems of size up to n ≈ 1000, i.e., a linear system with n^2 ≈ 10^6 unknowns, a dimension which is outside of the capabilities of the other existing methods for the delay Lyapunov equation.

  • 34.
    Jarlebring, Elias
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA. KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Upadhyaya, Parikshit
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.). KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Implicit algorithms for eigenvector nonlinearities. 2021. In: Numerical Algorithms, ISSN 1017-1398, E-ISSN 1572-9265. Article in journal (Refereed)
    Abstract [en]

    We study and derive algorithms for nonlinear eigenvalue problems, where the system matrix depends on the eigenvector, or several eigenvectors (or their corresponding invariant subspace). The algorithms are derived from an implicit viewpoint. More precisely, we change the Newton update equation in such a way that the next iterate does not only appear linearly in the update equation. Although the modifications of the update equation make the methods implicit, we show how corresponding iterates can be computed explicitly. Therefore, we can carry out steps of the implicit method using explicit procedures. In several cases, these procedures involve the solution of standard eigenvalue problems. We propose two modifications: one leads directly to a well-established method (the self-consistent field iteration), whereas the other is, to our knowledge, new and has several attractive properties. Convergence theory is provided along with several simulations which illustrate the properties of the algorithms.

  • 35.
    Jarlebring, Elias
    et al.
    Katholieke Univ Leuven, B-3001 Louvain, Belgium .
    Vanbiervliet, J
    Katholieke Univ Leuven, B-3001 Louvain, Belgium .
    Michiels, Wim
    Katholieke Univ Leuven, B-3001 Louvain, Belgium .
    Characterizing and Computing the H2 Norm of Time-Delay Systems by Solving the Delay Lyapunov Equation. 2011. In: IEEE Transactions on Automatic Control, ISSN 0018-9286, E-ISSN 1558-2523, Vol. 56, no 4, p. 814-825. Article in journal (Refereed)
    Abstract [en]

    It is widely known that the solutions of Lyapunov equations can be used to compute the H2 norm of linear time-invariant (LTI) dynamical systems. In this paper, we show how this theory extends to dynamical systems with delays. The first result is that the H2 norm can be computed from the solution of a generalization of the Lyapunov equation, which is known as the delay Lyapunov equation. From the relation with the delay Lyapunov equation we can prove an explicit formula for the H2 norm if the system has commensurate delays, here meaning that the delays are all integer multiples of a basic delay. The formula is explicit and contains only elementary linear algebra operations applied to matrices of finite dimension. The delay Lyapunov equations are matrix boundary value problems. We show how to apply a spectral discretization scheme to these equations for the general, not necessarily commensurate, case. The convergence of spectral methods typically depends on the smoothness of the solution. To this end we describe the smoothness of the solution to the delay Lyapunov equations, for the commensurate as well as for the non-commensurate case. The smoothness properties allow us to completely predict the convergence order of the spectral method.

  • 36.
    Jarlebring, Elias
    et al.
    Department of Computer Science, Katholieke Universiteit Leuven, Belgium.
    Vanbiervliet, J
    Michiels, Wim
    Explicit expressions for the H2 norm of time-delay systems based on the delay Lyapunov equation2010In: Proceedings of the 49th IEEE Conference on Decision and Control, 2010Conference paper (Refereed)
    Abstract [en]

    It is widely known that the solutions of Lyapunov equations can be used to compute the H2 norm of linear time-invariant (LTI) dynamical systems. In this paper, we show how this theory extends to dynamical systems with delays. We establish, in the main result, that the H2 norm can be computed from the solution of a generalization of the Lyapunov equation, which is known as the delay Lyapunov equation. From the relation with the delay Lyapunov equation we can prove an explicit formula for the H2 norm if the system has commensurate delays, here meaning that the delays are all integer multiples of a basic delay. The formula is explicit and contains only elementary linear algebra operations applied to matrices of finite dimension.

  • 37.
    Jarlebring, Elias
    et al.
    KTH, School of Computer Science and Communication (CSC).
    Voss, Heinrich
    Rational Krylov for Nonlinear Eigenproblems, an Iterative Projection Method2005In: Applications of Mathematics, ISSN 0862-7940, E-ISSN 1572-9109, Vol. 50, no 6, p. 543-554Article in journal (Refereed)
    Abstract [en]

    In recent papers, Ruhe [10], [12] suggested a rational Krylov method for nonlinear eigenproblems, knitting together a secant method for linearizing the nonlinear problem and a Krylov method for the linearized problem. In this note we point out that the method can be understood as an iterative projection method. Similarly to the Arnoldi method presented in [13], [14], the search space is expanded by the direction obtained from residual inverse iteration. Numerical experiments demonstrate that the rational Krylov method can be accelerated considerably by replacing the inner iteration by an explicit solver for the projected problems.
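
    A minimal Python sketch of an iterative projection method of the kind discussed above, applied to an illustrative quadratic eigenvalue problem: the projected problem is solved by companion linearization and the search space is expanded with the residual-inverse-iteration direction. The data and the quadratic form of T(λ) are assumptions for the sketch, not taken from the paper.

    ```python
    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    n = 200
    # Illustrative quadratic NEP  T(lam) = lam^2*M + lam*C + K  (damped 1D Laplacian)
    M = np.eye(n)
    K = 2.0 * np.eye(n) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
    C = 0.1 * np.eye(n)
    def T(lam):
        return lam**2 * M + lam * C + K

    sigma = -0.05 + 0.2j                      # shift / target
    Tsig = lu_factor(T(sigma))

    rng = np.random.default_rng(3)
    V = rng.standard_normal((n, 1)); V /= np.linalg.norm(V)
    for it in range(40):
        k = V.shape[1]
        Mp, Cp, Kp = (V.conj().T @ X @ V for X in (M, C, K))
        # Companion linearization of the projected quadratic problem
        L0 = np.block([[-Cp, -Kp], [np.eye(k), np.zeros((k, k))]])
        L1 = np.block([[Mp, np.zeros((k, k))], [np.zeros((k, k)), np.eye(k)]])
        evals, evecs = np.linalg.eig(np.linalg.solve(L1, L0))
        j = np.argmin(np.abs(evals - sigma))  # Ritz value closest to the target
        theta, y = evals[j], evecs[k:, j]     # lower block of the companion eigenvector
        x = V @ y; x /= np.linalg.norm(x)
        r = T(theta) @ x                      # NEP residual
        if np.linalg.norm(r) < 1e-10:
            break
        d = lu_solve(Tsig, r)                 # residual-inverse-iteration direction
        d -= V @ (V.conj().T @ d)             # orthogonalize against the current basis
        V = np.column_stack([V, d / np.linalg.norm(d)])
    print("iterations:", it + 1, "eigenvalue estimate:", theta)
    ```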

  • 38.
    Koskela, Antti
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA.
    Jarlebring, Elias
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA.
    On a generalization of Neumann series of Bessel functions using Hessenberg matrices and matrix exponentials2019In: European Conference on Numerical Mathematics and Advanced Applications, ENUMATH 2017, Springer, 2019, Vol. 126, p. 205-214Conference paper (Refereed)
    Abstract [en]

    The Neumann expansion of Bessel functions (of integer order) of a function g: ℂ → ℂ corresponds to representing g as a linear combination of basis functions φ_0, φ_1, …, i.e., g(s) = ∑_{ℓ=0}^{∞} w_ℓ φ_ℓ(s), where φ_i(s) = J_i(s), i = 0, …, are the Bessel functions. In this work, we study an expansion for a more general class of basis functions. More precisely, we assume that the basis functions satisfy an infinite dimensional linear ordinary differential equation associated with a Hessenberg matrix, motivated by the fact that these basis functions occur in certain iterative methods. A procedure to compute the basis functions as well as the coefficients is proposed. Theoretical properties of the expansion are studied. We illustrate that non-standard basis functions can give faster convergence than the Bessel functions.
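
    A small Python sketch of the kind of expansion described above: a function is approximated in the Bessel basis J_0, J_1, …, here simply by least squares on a grid. The paper instead derives the basis functions and coefficients from an infinite-dimensional Hessenberg ODE structure, which is not reproduced.

    ```python
    import numpy as np
    from scipy.special import jv

    g = np.sin                       # example function to expand
    N = 15                           # number of basis functions J_0, ..., J_{N-1}
    s = np.linspace(0.0, 10.0, 400)  # sample points
    Phi = np.column_stack([jv(ell, s) for ell in range(N)])
    w, *_ = np.linalg.lstsq(Phi, g(s), rcond=None)
    print("maximal approximation error on the grid:", np.max(np.abs(Phi @ w - g(s))))
    ```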

  • 39.
    Koskela, Antti
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA. KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Jarlebring, Elias
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA. KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Hochstenbach, M. E.
    Krylov approximation of linear ODEs with polynomial parameterization2016In: SIAM Journal on Matrix Analysis and Applications, ISSN 0895-4798, E-ISSN 1095-7162, Vol. 37, no 2, p. 519-538Article in journal (Refereed)
    Abstract [en]

    We propose a new numerical method to solve linear ordinary differential equations of the type ∂u/∂t(t, ϵ) = A(ϵ) u(t, ϵ), where A: ℂ → ℂ^{n×n} is a matrix polynomial with large and sparse matrix coefficients. The algorithm computes an explicit parameterization of approximations of u(t, ϵ) such that approximations for many different values of ϵ and t can be obtained with a very small additional computational effort. The derivation of the algorithm is based on a reformulation of the parameterization as a linear parameter-free ordinary differential equation and on approximating the product of the matrix exponential and a vector with a Krylov method. The Krylov approximation is generated with Arnoldi's method, and the structure of the coefficient matrix turns out to be independent of the truncation parameter, so that the method can also be interpreted as Arnoldi's method applied to an infinite dimensional matrix. We prove superlinear convergence of the algorithm and provide a posteriori error estimates to be used as termination criteria. The behavior of the algorithm is illustrated with examples stemming from spatial discretizations of partial differential equations.
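
    A minimal Python sketch of the Krylov building block used above: u(t) = exp(tA)u₀ is approximated from a small Arnoldi factorization. The parameterization in ϵ and the error estimates of the paper are not reproduced; the matrix A below is an illustrative sparse example.

    ```python
    import numpy as np
    from scipy.linalg import expm
    from scipy.sparse import diags
    from scipy.sparse.linalg import expm_multiply

    n, m, t = 400, 30, 1.0
    A = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)).tocsr()   # 1D Laplacian stencil
    u0 = np.ones(n)

    # Arnoldi factorization  A V_m = V_{m+1} H_{m+1,m}
    V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    beta = np.linalg.norm(u0); V[:, 0] = u0 / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]

    u_krylov = beta * (V[:, :m] @ expm(t * H[:m, :m])[:, 0])   # V_m exp(t H_m) e_1
    u_ref = expm_multiply(t * A, u0)                           # reference solution
    print("relative error:", np.linalg.norm(u_krylov - u_ref) / np.linalg.norm(u_ref))
    ```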

  • 40. Kvaal, Simen
    et al.
    Jarlebring, Elias
    Departement Computerwetenschappen, Katholieke Universiteit Leuven.
    Michiels, Wim
    Computing singularities of perturbation series2011In: Physical Review A. Atomic, Molecular, and Optical Physics, ISSN 1050-2947, E-ISSN 1094-1622, Vol. 83, no 3, p. 032505-Article in journal (Refereed)
    Abstract [en]

    Many properties of current ab initio approaches to the quantum many-body problem, both perturbational and otherwise, are related to the singularity structure of the Rayleigh-Schrödinger perturbation series. A numerical procedure is presented that in principle computes the complete set of singularities, including the dominant singularity which limits the radius of convergence. The method approximates the singularities as eigenvalues of a certain generalized eigenvalue equation which is solved using iterative techniques. It relies on computation of the action of the Hamiltonian matrix on a vector and does not rely on the terms in the perturbation series. The method can be useful for studying perturbation series of typical systems of moderate size, for fundamental development of resummation schemes, and for understanding the structure of singularities for typical systems. Some illustrative model problems are studied, including a helium-like model with delta-function interactions for which Møller-Plesset perturbation theory is considered and the radius of convergence found.
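
    A minimal Python sketch of the computational pattern mentioned above, namely an iterative (Arnoldi-type) eigensolver that only needs the action of a large sparse matrix on a vector. The matrices A and B below are placeholders, not the operators the paper builds from the Hamiltonian and the perturbation; the generalized problem A x = μ B x is simply folded into B⁻¹A via a sparse factorization.

    ```python
    import numpy as np
    from scipy.sparse import diags, random as sprandom
    from scipy.sparse.linalg import LinearOperator, eigs, splu

    n = 3000
    A = diags(np.arange(1.0, n + 1)) + 0.01 * sprandom(n, n, density=1e-3, random_state=5)
    B = diags(np.full(n, 2.0)) + 0.001 * sprandom(n, n, density=1e-3, random_state=6)

    Blu = splu(B.tocsc())                                            # factor B once
    op = LinearOperator((n, n), matvec=lambda x: Blu.solve(A @ x))   # action of B^{-1} A

    # A few eigenvalues of largest magnitude of the generalized problem A x = mu B x
    mu = eigs(op, k=4, which='LM', return_eigenvectors=False)
    print("eigenvalue estimates:", np.sort_complex(mu))
    ```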

  • 41.
    Mele, Giampaolo
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA. KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Jarlebring, Elias
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA. KTH, Centres, SeRC - Swedish e-Science Research Centre.
    On restarting the tensor infinite Arnoldi method2018In: BIT Numerical Mathematics, ISSN 0006-3835, E-ISSN 1572-9125, Vol. 58, no 1, p. 133-162Article in journal (Refereed)
    Abstract [en]

    An efficient and robust restart strategy is important for any Krylov-based method for eigenvalue problems. The tensor infinite Arnoldi method (TIAR) is a Krylov-based method for solving nonlinear eigenvalue problems (NEPs). This method can be interpreted as an Arnoldi method applied to a linear and infinite dimensional eigenvalue problem where the Krylov basis consists of polynomials. We propose new restart techniques for TIAR and analyze their efficiency and robustness. More precisely, we consider an extension of TIAR which corresponds to generating the Krylov space using not only polynomials, but also structured functions, which are sums of exponentials and polynomials, while maintaining a memory-efficient tensor representation. We propose two restarting strategies, both derived from the specific structure of the infinite dimensional Arnoldi factorization. One restarting strategy, which we call semi-explicit TIAR restart, provides the possibility to carry out locking in a compact way. The other strategy, which we call implicit TIAR restart, is based on the Krylov–Schur restart method for the linear eigenvalue problem and preserves its robustness. Both restarting strategies involve approximations of the tensor structured factorization in order to reduce the complexity and the required memory resources. We bound the error introduced by some of the approximations in the infinite dimensional Arnoldi factorization, showing that those approximations do not substantially influence the robustness of the restart approach. We illustrate the effectiveness of the approaches by applying them to solve large-scale NEPs that arise from a delay differential equation and a wave propagation problem. The advantages in comparison to other restart methods are also illustrated.
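
    A minimal Python sketch of one compression step of a Krylov–Schur restart for a standard linear eigenvalue problem, i.e. the building block that the implicit restart described above is based on; the tensor-structured TIAR factorization itself is not reproduced, and the matrix and selection rule are illustrative.

    ```python
    import numpy as np
    from scipy.linalg import schur

    rng = np.random.default_rng(7)
    n, m, k = 500, 30, 10
    A = rng.standard_normal((n, n)) / np.sqrt(n)

    # Arnoldi factorization  A V_m = V_m H_m + h_{m+1,m} v_{m+1} e_m^T
    V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    v0 = rng.standard_normal(n); V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w; w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w); V[:, j + 1] = w / H[j + 1, j]

    # Sorted complex Schur form of H_m: keep the k Ritz values of largest magnitude
    Hm = H[:m, :m]
    thr = np.sort(np.abs(np.linalg.eigvals(Hm)))[-k]
    T, Q, sdim = schur(Hm + 0j, sort=lambda z: abs(z) >= thr)
    k = sdim                                       # may differ from k in case of magnitude ties
    W = V[:, :m] @ Q[:, :k]                        # compressed basis
    b = H[m, m - 1] * Q[m - 1, :k]                 # "spike" of the Krylov-Schur relation
    # Verify the compressed relation  A W = W T_k + v_{m+1} b
    R = A @ W - W @ T[:k, :k] - np.outer(V[:, m], b)
    print("kept Ritz values:", np.diag(T)[:k])
    print("Krylov-Schur relation residual:", np.linalg.norm(R))
    ```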

  • 42.
    Mele, Giampaolo
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA. KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Ringh, Emil
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Ek, David
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Izzo, Federico
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA.
    Upadhyaya, Parikshit
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA.
    Jarlebring, Elias
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA. KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Preconditioning for linear systems2020Book (Other academic)
  • 43. Michiels, W.
    et al.
    Jarlebring, Elias
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis, NA.
    Meerbergen, K.
    A projection approach for model reduction of large-scale time-delay systems2011In: Proceedings of the 50th IEEE Conference on Decision and Control and European Control Conference, 2011, p. 1-8Conference paper (Refereed)
  • 44. Michiels, W.
    et al.
    Jarlebring, Elias
    Department of Computer Science, K.U. Leuven, 3001 Heverlee, Belgium.
    Meerbergen, K.
    Krylov-Based Model Order Reduction of Time-delay Systems2011In: SIAM Journal on Matrix Analysis and Applications, ISSN 0895-4798, E-ISSN 1095-7162, Vol. 32, no 4, p. 1399-1421Article in journal (Refereed)
    Abstract [en]

    We present a model order reduction method which allows the construction of a reduced, delay-free model of a given dimension for linear time-delay systems, whose characteristic matrix is nonlinear due to the presence of exponential functions. The method builds on the equivalent representation of the time-delay system as an infinite-dimensional linear problem. It combines ideas from a finite-dimensional approximation via a spectral discretization, on the one hand, and a Krylov–Padé model reduction approach, on the other hand. The method exhibits a good spectral approximation of the original model, in the sense that the smallest characteristic roots are well approximated and the nonconverged eigenvalues of the reduced model have a favorable location, and it preserves moments at zero and at infinity. The spectral approximation is due to an underlying Arnoldi process that relies on building an appropriate Krylov space for the linear infinite-dimensional problem. The preservation of moments is guaranteed, because the chosen finite-dimensional approximation preserves moments and, in addition, the space on which one projects is constructed in such a way that the preservation of moments carries over to the reduced model. The implementation of the method is dynamic, since the number of grid points in the spectral discretization does not need to be chosen beforehand and the accuracy of the reduced model can always be improved by doing more iterations. It relies on a reformulation of the problem involving a companion-like system matrix and a highly structured input matrix, whose structures are fully exploited.
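
    A minimal Python sketch of the delay-free building block (one-sided moment matching at s₀ = 0 by projection onto a Krylov space); the paper applies this idea to an infinite-dimensional reformulation of the time-delay system, which is not reproduced here, and the data below is random and illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n, k = 300, 20
    A = rng.standard_normal((n, n)) / np.sqrt(n) - 2.0 * np.eye(n)   # (very likely) stable
    b = rng.standard_normal(n); c = rng.standard_normal(n)

    # Orthonormal basis of K_k(A^{-1}, A^{-1} b): matches k moments of H(s) at s0 = 0
    Ainv = np.linalg.inv(A)                     # fine for a small dense sketch
    V = np.zeros((n, k)); w = Ainv @ b; V[:, 0] = w / np.linalg.norm(w)
    for j in range(1, k):
        w = Ainv @ V[:, j - 1]
        for i in range(j):
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)

    Ar, br, cr = V.T @ A @ V, V.T @ b, V.T @ c  # projected (reduced) model
    s = 0.3j                                    # compare transfer functions near s0
    H = c @ np.linalg.solve(s * np.eye(n) - A, b)
    Hr = cr @ np.linalg.solve(s * np.eye(k) - Ar, br)
    print("reduced order:", k, "transfer-function error at s:", abs(H - Hr))
    ```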

  • 45.
    Ringh, Emil
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Jarlebring, Elias
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA. KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Nonlinearizing two-parameter eigenvalue problems2021In: SIAM Journal on Matrix Analysis and Applications, ISSN 0895-4798, E-ISSN 1095-7162, Vol. 42, no 2, p. 775-799Article in journal (Refereed)
    Abstract [en]

    We investigate a technique to transform a linear two-parameter eigenvalue problem into a nonlinear eigenvalue problem (NEP). The transformation stems from eliminating one of the equations of the two-parameter eigenvalue problem by considering it as a (standard) generalized eigenvalue problem. We characterize the equivalence between the original and the nonlinearized problem theoretically and show how to use the transformation computationally. Special cases of the transformation can be interpreted as a reversed companion linearization for polynomial eigenvalue problems, as well as a reversed (less well-known) linearization technique for certain algebraic eigenvalue problems with square-root terms. Moreover, by exploiting the structure of the NEP we present algorithm specializations for NEP methods, although the technique also allows general solution methods for NEPs to be applied directly. The nonlinearization is illustrated in examples and simulations, with a focus on problems where the eliminated equation is of much smaller size than the other two-parameter eigenvalue equation. This situation arises naturally in domain decomposition techniques. A general error analysis is also carried out under the assumption that a backward stable eigensolver is used to solve the eliminated problem, leading to the conclusion that the error is benign in this situation.
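
    A minimal Python sketch of the elimination idea: the second (smaller) equation of a two-parameter eigenvalue problem is treated as a generalized eigenvalue problem in μ, which turns the first equation into a NEP M(λ)x = 0. The data below is random with a planted solution (λ*, μ*) for illustration; no NEP solver is included.

    ```python
    import numpy as np
    from scipy.linalg import eig, svdvals

    rng = np.random.default_rng(9)
    n1, n2 = 8, 3                       # the eliminated equation is much smaller
    lam_s, mu_s = 0.7, -1.3             # planted eigenvalue pair
    xs, ys = rng.standard_normal(n1), rng.standard_normal(n2)

    def plant(A, B, C, lam, mu, v):     # adjust A so that (A + lam*B + mu*C) v = 0
        r = (A + lam * B + mu * C) @ v
        return A - np.outer(r, v) / (v @ v)

    A1, B1, C1 = (rng.standard_normal((n1, n1)) for _ in range(3))
    A2, B2, C2 = (rng.standard_normal((n2, n2)) for _ in range(3))
    A1 = plant(A1, B1, C1, lam_s, mu_s, xs)
    A2 = plant(A2, B2, C2, lam_s, mu_s, ys)

    def mu_branches(lam):               # eigenvalues mu of (A2 + lam*B2 + mu*C2) y = 0
        return eig(A2 + lam * B2, -C2, right=False)

    def smallest_sv_of_M(lam):          # min over branches of sigma_min(A1 + lam*B1 + mu*C1)
        return min(svdvals(A1 + lam * B1 + mu * C1)[-1] for mu in mu_branches(lam))

    print("mu branches at lam*:", mu_branches(lam_s))          # one branch equals mu* = -1.3
    print("sigma_min of M at lam*:", smallest_sv_of_M(lam_s))  # ~0: lam* solves the NEP
    print("sigma_min of M nearby :", smallest_sv_of_M(lam_s + 0.2))
    ```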

  • 46.
    Ringh, Emil
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Mele, Giampaolo
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA.
    Karlsson, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Jarlebring, Elias
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA.
    Sylvester-based preconditioning for the waveguide eigenvalue problem2018In: Linear Algebra and its Applications, ISSN 0024-3795, E-ISSN 1873-1856, Vol. 542, no 1, p. 441-463Article in journal (Refereed)
    Abstract [en]

    We consider a nonlinear eigenvalue problem (NEP) arising from absorbing boundary conditions in the study of a partial differential equation (PDE) describing a waveguide. We propose a new computational approach for this large-scale NEP based on residual inverse iteration (Resinv) with preconditioned iterative solves. Similar to many preconditioned iterative methods for discretized PDEs, this approach requires the construction of an accurate and efficient preconditioner. For the waveguide eigenvalue problem, the associated linear system can be formulated as a generalized Sylvester equation AX + XB + A₁XB₁ + A₂XB₂ + K∘X = C, where ∘ denotes the Hadamard product. The equation is approximated by a low-rank correction of a Sylvester equation, which we use as a preconditioner. The action of the preconditioner is efficiently computed by using the matrix equation version of the Sherman-Morrison-Woodbury (SMW) formula. We show how the preconditioner can be integrated into Resinv. The results are illustrated by applying the method to large-scale problems.
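
    A minimal Python sketch of the preconditioning idea: a generalized Sylvester equation AX + XB + A₁XB₁ = C is solved with GMRES, preconditioned by an exact plain Sylvester solve. The matrices are random stand-ins and the low-rank SMW correction of the paper is not included.

    ```python
    import numpy as np
    from scipy.linalg import solve_sylvester
    from scipy.sparse.linalg import LinearOperator, gmres

    rng = np.random.default_rng(10)
    n = 60
    A  = rng.standard_normal((n, n)) / np.sqrt(n) + 2.0 * np.eye(n)
    B  = rng.standard_normal((n, n)) / np.sqrt(n) + 2.0 * np.eye(n)
    A1 = 0.2 * rng.standard_normal((n, n)) / np.sqrt(n)
    B1 = 0.2 * rng.standard_normal((n, n)) / np.sqrt(n)
    C  = rng.standard_normal((n, n))

    def gen_sylv(X):                 # generalized Sylvester operator (illustrative terms)
        return A @ X + X @ B + A1 @ X @ B1

    op   = LinearOperator((n * n, n * n),
                          matvec=lambda x: gen_sylv(x.reshape(n, n)).reshape(-1))
    prec = LinearOperator((n * n, n * n),   # preconditioner: solve A X + X B = R exactly
                          matvec=lambda r: solve_sylvester(A, B, r.reshape(n, n)).reshape(-1))

    x, info = gmres(op, C.reshape(-1), M=prec)
    X = x.reshape(n, n)
    print("GMRES flag:", info,
          "relative residual:", np.linalg.norm(gen_sylv(X) - C) / np.linalg.norm(C))
    ```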

  • 47. Rott, Oliver
    et al.
    Jarlebring, Elias
    An iterative method for the multipliers of periodic delay-differential equations and the analysis of a PDE milling model2010In: Proceedings of the 9th IFAC Workshop on Time Delay Systems / [ed] Vyhlidal, Tomas; Zitek, Pavel, 2010, p. 1-6Conference paper (Refereed)
    Abstract [en]

    Locally convergent iterative schemes have turned out to be very useful in the analysis of the characteristic roots of delay-differential equations (DDEs) with constant coefficients. In this work we present a locally convergent iterative scheme for the characteristic multipliers of periodic-coefficient DDEs. The method is an adaptation of an iterative method called residual inverse iteration. The possibility of using this method stems from the observation that the characteristic matrix can be expressed in terms of the fundamental solution of a differential equation. We apply the method to a coupled milling model containing a partial and an ordinary differential equation. The conclusion of the numerical results is that the stability diagram of the coupled model differs significantly from the combined stability diagrams for each subsystem.
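
    A minimal Python sketch of the underlying object in the delay-free case: for a periodic ODE ẋ = A(t)x with period T, the characteristic multipliers are the eigenvalues of the monodromy matrix Φ(T), obtained by integrating the fundamental solution over one period. The paper works with the DDE analogue; the coefficient matrix below is an illustrative damped Mathieu-type example.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    T = 2 * np.pi
    def A(t):                                  # illustrative periodic coefficient matrix
        return np.array([[0.0, 1.0], [-(1.0 + 0.3 * np.cos(t)), -0.1]])

    def rhs(t, phi):                           # integrate the matrix ODE column-wise
        return (A(t) @ phi.reshape(2, 2)).reshape(-1)

    sol = solve_ivp(rhs, (0.0, T), np.eye(2).reshape(-1), rtol=1e-10, atol=1e-12)
    Phi_T = sol.y[:, -1].reshape(2, 2)         # monodromy (fundamental solution at t = T)
    multipliers = np.linalg.eigvals(Phi_T)
    print("characteristic multipliers:", multipliers)
    print("all |mu| < 1 (asymptotic stability):", np.all(np.abs(multipliers) < 1))
    ```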

  • 48. Saadvandi, M.
    et al.
    Meerbergen, K.
    Jarlebring, Elias
    On dominant poles and model reduction of second order time-delay systems2012In: Applied Numerical Mathematics, ISSN 0168-9274, E-ISSN 1873-5460, Vol. 62, no 1, p. 21-34Article in journal (Refereed)
    Abstract [en]

    The method known as the dominant pole algorithm (DPA) has previously been successfully used in combination with model order reduction techniques to approximate standard linear time-invariant dynamical systems and second order dynamical systems. In this paper, we show how this approach can be adapted to a class of second order delay systems, which are large-scale nonlinear problems whose transfer functions have an infinite number of simple poles. Deflation is a very important ingredient for this type of method. Because of the nonlinearity, many deflation approaches for linear systems are not applicable. We therefore propose an alternative technique that essentially removes computed poles from the system's input and output vectors. In general, this technique changes the residues, and hence, modifies the order of dominance of the poles, but we prove that, under certain conditions, the residues stay near the original residues. The new algorithm is illustrated by numerical examples.
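
    A minimal Python sketch of the basic dominant pole algorithm iteration for a delay-free system H(s) = cᵀ(sE − A)⁻¹b, i.e. Newton's method applied to 1/H(s) = 0. The second-order delay setting and the deflation strategy of the paper are not reproduced, and the system data below is a random illustrative assumption.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    n = 200
    A = np.diag(-np.linspace(0.5, 50.0, n)) + 0.01 * rng.standard_normal((n, n))
    E = np.eye(n)
    b = rng.standard_normal(n); c = rng.standard_normal(n)

    s = 1.0j                                        # initial shift
    for k in range(100):
        v = np.linalg.solve(s * E - A, b)           # (sE - A)^{-1} b
        w = np.linalg.solve((s * E - A).conj().T, c)
        s_new = s - (c @ v) / (w.conj() @ (E @ v))  # Newton step on 1/H(s)
        if abs(s_new - s) < 1e-12 * abs(s_new):
            break
        s = s_new
    print("pole estimate:", s_new, "after", k + 1, "iterations")
    print("distance to nearest eigenvalue of A:",
          np.min(np.abs(np.linalg.eigvals(A) - s_new)))
    ```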

  • 49.
    Upadhyaya, Parikshit
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA. KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Jarlebring, Elias
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA. KTH, Centres, SeRC - Swedish e-Science Research Centre.
    Rubensson, Emanuel
    A density matrix approach to the convergence of the self-consistent field iteration2019In: Numerical Algebra, Control and Optimization, ISSN 2155-3289, E-ISSN 2155-3297, Vol. 0, no 0, p. 0-0Article in journal (Refereed)
    Abstract [en]

    In this paper, we present a local convergence analysis of the self-consistent field (SCF) iteration using the density matrix as the state of a fixed-point iteration. Conditions for local convergence are formulated in terms of the spectral radius of the Jacobian of a fixed-point map. The relationship between convergence and certain properties of the problem is explored by deriving upper bounds expressed in terms of higher gaps. This gives more information regarding how the gaps between eigenvalues of the problem affect the convergence, and hence these bounds give more insight into the convergence behaviour than standard convergence results. We also provide a detailed analysis to describe the difference between the bounds and the exact convergence factor for an illustrative example. Finally, we present numerical examples and compare the exact value of the convergence factor with the observed behaviour of SCF, along with our new bounds and the characterization using the higher gaps. We provide heuristic convergence factor estimates in situations where the bounds fail to capture the convergence well.

  • 50.
    Upadhyaya, Parikshit
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA.
    Jarlebring, Elias
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Numerical Analysis, NA.
    Tudisco, Francesco
    The self-consistent field iteration for p-spectral clustering2021Manuscript (preprint) (Other academic)
    Abstract [en]

    The self-consistent field (SCF) iteration, combined with its variants, is one of the most widely used algorithms in quantum chemistry. We propose a procedure to adapt the SCF iteration for the p-Laplacian eigenproblem, which is an important problem in the field of unsupervised learning. We formulate the p-Laplacian eigenproblem as a type of nonlinear eigenproblem with one eigenvector nonlinearity, which then allows us to adapt the SCF iteration for its solution after the application of suitable regularization techniques. The results of our numerical experiments confirm the viability of our approach.
