Publications (10 of 51)
Correnty, S., Jarlebring, E. & Szyld, D. B. (2023). Preconditioned Chebyshev BiCG method for parameterized linear systems. Electronic Transactions on Numerical Analysis, 58, 629-656
2023 (English). In: Electronic Transactions on Numerical Analysis, E-ISSN 1068-9613, Vol. 58, p. 629-656. Article in journal (Refereed). Published.
Abstract [en]

We consider the problem of approximating the solution to A(μ)x(μ) = b for many different values of the parameter μ. Here, A(μ) is large, sparse, and nonsingular with a nonlinear dependence on μ. Our method is based on a companion linearization derived from an accurate Chebyshev interpolation of A(μ) on the interval [-a, a], a ∈ ℝ+, inspired by Effenberger and Kressner [BIT, 52 (2012), pp. 933-951]. The solution to the linearization is approximated in a preconditioned BiCG setting for shifted systems, as proposed in Ahmad et al. [SIAM J. Matrix Anal. Appl., 38 (2017), pp. 401-424], where the Krylov basis matrix is formed once. This process leads to a short-term recurrence method, where one execution of the algorithm produces the approximation of x(μ) for many different values of the parameter μ ∈ [-a, a] simultaneously. In particular, this work proposes one algorithm which applies a shift-and-invert preconditioner exactly as well as an algorithm which applies the preconditioner inexactly based on the work by Vogel [Appl. Math. Comput., 188 (2007), pp. 226-233]. The competitiveness of the algorithms is illustrated with large-scale problems arising from a finite element discretization of a Helmholtz equation with a parameterized material coefficient. The software used in the simulations is publicly available online, and thus all our experiments are reproducible.
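As a rough illustration of the interpolation ingredient (a sketch only, not the authors' code; the helper names cheb_matrix_interp and cheb_eval and the toy 2x2 matrix function are assumptions), the following Julia snippet builds a matrix-valued Chebyshev interpolant of A(μ) on [-a, a] from samples at Chebyshev points and evaluates it with the three-term recurrence:

```julia
using LinearAlgebra

# Chebyshev interpolation of a matrix-valued function A(μ) on [-a, a] from
# samples at N Chebyshev points of the first kind.  Returns the matrix-valued
# coefficients C_j of  P(μ) = Σ_j C_j T_j(μ/a).
function cheb_matrix_interp(A::Function, a::Real, N::Integer)
    θ = [π * (k + 0.5) / N for k in 0:N-1]
    nodes = a .* cos.(θ)
    vals = [A(x) for x in nodes]
    C = [(2 / N) * sum(vals[k+1] * cos(j * θ[k+1]) for k in 0:N-1) for j in 0:N-1]
    C[1] /= 2                       # the j = 0 coefficient is halved
    return C
end

# Evaluate the interpolant with the three-term Chebyshev recurrence.
function cheb_eval(C, a::Real, μ::Real)
    t = μ / a
    Tprev, Tcur = one(t), t
    P = C[1] + (length(C) > 1 ? C[2] * Tcur : zero(C[1]))
    for j in 3:length(C)
        Tprev, Tcur = Tcur, 2t * Tcur - Tprev
        P += C[j] * Tcur
    end
    return P
end

# Toy parameterized matrix (an assumption, not the Helmholtz problem of the paper):
A(μ) = [2.0+μ^2 1.0; 1.0 3.0+sin(μ)]
C = cheb_matrix_interp(A, 1.0, 12)
x = cheb_eval(C, 1.0, 0.3) \ [1.0, 0.0]   # approximate solve of A(0.3) x = b
```

The paper goes beyond this step: the interpolant is turned into a companion linearization in which the μ-dependence is linear, and the resulting shifted systems are handled by a single preconditioned BiCG run.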

Place, publisher, year, edition, pages
Österreichische Akademie der Wissenschaften, Verlag, 2023
Keywords
Chebyshev interpolation, companion linearization, inexact preconditioning, Krylov subspace methods, parameterized Helmholtz equation, parameterized linear systems, shifted linear systems, short-term recurrence methods, time-delay systems
National Category
Computational Mathematics; Computer Sciences
Identifiers
urn:nbn:se:kth:diva-341929 (URN), 10.1553/etna_vol58s629 (DOI), 2-s2.0-85180534256 (Scopus ID)
Note

QC 20240108

Available from: 2024-01-08. Created: 2024-01-08. Last updated: 2024-02-27. Bibliographically approved
Jarlebring, E., Fasi, M. & Ringh, E. (2022). Computational Graphs for Matrix Functions. ACM Transactions on Mathematical Software, 48(4), 1-35, Article ID 39.
2022 (English). In: ACM Transactions on Mathematical Software, ISSN 0098-3500, E-ISSN 1557-7295, Vol. 48, no 4, p. 1-35, article id 39. Article in journal (Refereed). Published.
Abstract [en]

Many numerical methods for evaluating matrix functions can be naturally viewed as computational graphs. Rephrasing these methods as directed acyclic graphs (DAGs) is a particularly effective approach to study existing techniques, improve them, and eventually derive new ones. The accuracy of these matrix techniques can be characterized by the accuracy of their scalar counterparts; thus, designing algorithms for matrix functions can be regarded as a scalar-valued optimization problem. The derivatives needed during the optimization can be calculated automatically by exploiting the structure of the DAG in a fashion analogous to backpropagation. This article describes GraphMatFun.jl, a Julia package that offers the means to generate and manipulate computational graphs, optimize their coefficients, and generate Julia, MATLAB, and C code to evaluate them efficiently at a matrix argument. The software also provides tools to estimate the accuracy of a graph-based algorithm and thus obtain numerically reliable methods. For the exponential, for example, using a particular form (degree-optimal) of polynomials produces implementations that in many cases are cheaper, in terms of computational cost, than the Padé-based techniques typically used in mathematical software. The optimized graphs and the corresponding generated code are available online.
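As a minimal illustration of the computational-graph viewpoint (this is not the GraphMatFun.jl API; the function horner_matpoly and the degree-7 Taylor coefficients are an assumed example), the Julia snippet below evaluates a polynomial at a matrix argument with Horner's scheme, where each loop iteration corresponds to one node, i.e., one matrix multiplication, of the implicit DAG:

```julia
using LinearAlgebra

# Horner evaluation of a scalar-coefficient polynomial at a matrix argument.
# Each loop iteration is one node of the implicit computational graph and
# costs one matrix-matrix multiplication.
function horner_matpoly(coeffs::AbstractVector, A::AbstractMatrix)
    n = size(A, 1)
    P = coeffs[end] * Matrix{eltype(A)}(I, n, n)
    for c in reverse(coeffs[1:end-1])
        P = A * P + c * I          # one matmul per coefficient
    end
    return P
end

# Degree-7 Taylor approximation of exp(A) on a small test matrix;
# coeffs[1] is the constant term.
A = [0.0 1.0; -1.0 0.0] / 10
coeffs = [1 / factorial(k) for k in 0:7]
println(norm(horner_matpoly(coeffs, A) - exp(A)))
```

GraphMatFun.jl represents such evaluation schemes explicitly as graphs, optimizes their coefficients, and generates code; the snippet only shows the kind of scheme being represented.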

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2022
Keywords
Polynomials of matrices, functions of matrices, computational graphs
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-325600 (URN), 10.1145/3568991 (DOI), 000952026200003 (), 2-s2.0-85151501445 (Scopus ID)
Note

QC 20230412

Available from: 2023-04-12. Created: 2023-04-12. Last updated: 2023-04-12. Bibliographically approved
Jarlebring, E. & Correnty, S. (2022). Infinite GMRES for Parameterized Linear Systems. SIAM Journal on Matrix Analysis and Applications, 43(3), 1382-1405
2022 (English). In: SIAM Journal on Matrix Analysis and Applications, ISSN 0895-4798, E-ISSN 1095-7162, Vol. 43, no 3, p. 1382-1405. Article in journal (Refereed). Published.
Abstract [en]

We consider linear parameterized systems A(μ)x(μ) = b for many different μ, where A is large and sparse and depends nonlinearly on μ. Solving such systems individually for each μ would require great computational effort. In this work we propose to compute a partial parameterization x̃ ≈ x(μ), where x̃(μ) is cheap to evaluate for many μ. Our methods are based on the observation that a companion linearization can be formed where the dependence on μ is only linear. In particular, methods are presented that combine the well-established Krylov subspace method for linear systems, GMRES, with algorithms for nonlinear eigenvalue problems (NEPs) to generate a basis for the Krylov subspace. Within this new approach, the basis matrix is constructed in three different ways, using a tensor structure and exploiting that certain problems have low-rank properties. The methods are analyzed analogously to the standard convergence theory for the method GMRES for linear systems. More specifically, the error is estimated based on the magnitude of the parameter μ and the spectrum of the linear companion matrix, which corresponds to the reciprocal solutions to the corresponding NEP. Numerical experiments illustrate the competitiveness of the methods for large-scale problems. The simulations are reproducible and publicly available online.
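For reference, the textbook building block that the paper extends is the Arnoldi iteration underlying GMRES; a minimal Julia sketch is given below (the function names arnoldi and gmres_from_arnoldi are hypothetical, and nothing of the paper's companion linearization, tensor structure, or low-rank exploitation is shown):

```julia
using LinearAlgebra

# Plain Arnoldi iteration: builds an orthonormal basis Q of the Krylov subspace
# K_m(A, b) and an (m+1) x m Hessenberg matrix H with A*Q[:, 1:m] = Q*H.
# (Real arithmetic, no breakdown handling; textbook version only.)
function arnoldi(A, b::AbstractVector, m::Integer)
    n = length(b)
    Q = zeros(n, m + 1)
    H = zeros(m + 1, m)
    Q[:, 1] = b / norm(b)
    for k in 1:m
        w = A * Q[:, k]
        for j in 1:k                         # modified Gram-Schmidt
            H[j, k] = dot(Q[:, j], w)
            w -= H[j, k] * Q[:, j]
        end
        H[k+1, k] = norm(w)
        Q[:, k+1] = w / H[k+1, k]
    end
    return Q, H
end

# GMRES approximation from the Arnoldi data: x_m = Q_m * argmin_y ||β e1 - H y||.
function gmres_from_arnoldi(A, b, m)
    Q, H = arnoldi(A, b, m)
    e1 = [norm(b); zeros(m)]
    y = H \ e1                               # small least-squares problem
    return Q[:, 1:m] * y
end
```

In the paper, the Krylov basis is instead built for the companion linearization, so that the computed approximation can be evaluated cheaply for many values of μ.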

Place, publisher, year, edition, pages
Society for Industrial & Applied Mathematics (SIAM), 2022
Keywords
parameterized linear systems, Krylov methods, companion linearization, shifted linear systems, infinite Arnoldi, low-rank matrices
National Category
Computational Mathematics
Identifiers
urn:nbn:se:kth:diva-320420 (URN), 10.1137/21M1410324 (DOI), 000861196300015 (), 2-s2.0-85138468029 (Scopus ID)
Note

QC 20221107

Available from: 2022-11-07. Created: 2022-11-07. Last updated: 2023-12-05. Bibliographically approved
Claes, R., Jarlebring, E., Meerbergen, K. & Upadhyaya, P. (2022). Linearizable Eigenvector Nonlinearities. SIAM Journal on Matrix Analysis and Applications, 43(2), 764-786
2022 (English). In: SIAM Journal on Matrix Analysis and Applications, ISSN 0895-4798, E-ISSN 1095-7162, Vol. 43, no 2, p. 764-786. Article in journal (Refereed). Published.
Abstract [en]

We present a method to linearize, without approximation, a specific class of eigenvalue problems with eigenvector nonlinearities (NEPv), where the nonlinearities are expressed by scalar functions that are defined by a quotient of linear functions of the eigenvector. The exact linearization relies on an equivalent multiparameter eigenvalue problem (MEP) that contains the exact solutions of the NEPv. Due to the characterization of MEPs in terms of a generalized eigenvalue problem this provides a direct way to compute all NEPv solutions for small problems, and it opens up the possibility to develop locally convergent iterative methods for larger problems. Moreover, the linear formulation allows us to easily determine the number of solutions of the NEPv. We propose two numerical schemes that exploit the structure of the linearization: inverse iteration and residual inverse iteration. We show how symmetry in the MEP can be used to improve reliability and reduce computational cost of both methods. Two numerical examples verify the theoretical results, and a third example shows the potential of a hybrid scheme that is based on a combination of the two proposed methods.
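As a point of reference only (not the structured algorithms of the paper), a plain shifted inverse iteration for a standard eigenvalue problem Ax = λx reads as follows in Julia; the function name, the Rayleigh-quotient update, and the toy tridiagonal example are generic choices:

```julia
using LinearAlgebra

# Textbook shifted inverse iteration for A x = λ x: factor (A - σI) once and
# reuse it every step; the Rayleigh quotient tracks the eigenvalue estimate.
function inverse_iteration(A::AbstractMatrix, σ::Number, x0::AbstractVector;
                           maxit = 50, tol = 1e-10)
    F = lu(A - σ * I)
    x = x0 / norm(x0)
    λ = σ
    for _ in 1:maxit
        x = F \ x
        x /= norm(x)
        λ = dot(x, A * x)
        norm(A * x - λ * x) < tol && break
    end
    return λ, x
end

# Toy usage: eigenvalue of a discrete 1D Laplacian closest to the shift 0.001.
L = Matrix(SymTridiagonal(fill(2.0, 100), fill(-1.0, 99)))
λ, x = inverse_iteration(L, 0.001, randn(100))
```

The paper's versions work on the multiparameter linearization and exploit its symmetry; only the overall iteration pattern is shared with this sketch.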

Place, publisher, year, edition, pages
Society for Industrial & Applied Mathematics (SIAM), 2022
Keywords
nonlinear eigenvalue problem, multiparameter eigenvalue problem, linearization
National Category
Computational Mathematics
Identifiers
urn:nbn:se:kth:diva-323340 (URN), 10.1137/21M142931X (DOI), 000903762900010 (), 2-s2.0-85130555762 (Scopus ID)
Note

QC 20230127

Available from: 2023-01-27. Created: 2023-01-27. Last updated: 2023-01-27. Bibliographically approved
Jarlebring, E. & Upadhyaya, P. (2021). Implicit algorithms for eigenvector nonlinearities. Numerical Algorithms
2021 (English). In: Numerical Algorithms, ISSN 1017-1398, E-ISSN 1572-9265. Article in journal (Refereed). Published.
Abstract [en]

We study and derive algorithms for nonlinear eigenvalue problems where the system matrix depends on the eigenvector, on several eigenvectors, or on their corresponding invariant subspace. The algorithms are derived from an implicit viewpoint. More precisely, we change the Newton update equation so that the next iterate no longer appears only linearly in the update equation. Although these modifications make the methods implicit, we show how the corresponding iterates can be computed explicitly. Therefore, we can carry out steps of the implicit method using explicit procedures. In several cases, these procedures involve the solution of standard eigenvalue problems. We propose two modifications: one leads directly to a well-established method (the self-consistent field iteration), whereas the other is, to our knowledge, new and has several attractive properties. Convergence theory is provided, along with several simulations which illustrate the properties of the algorithms.
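A bare-bones Julia version of the first of these two methods, the self-consistent field (SCF) iteration for a problem A(v)v = λv, is sketched below; the symmetric toy matrix function, the choice of the smallest eigenvalue, and the stopping rule are assumptions for illustration, not the paper's setting:

```julia
using LinearAlgebra

# Basic SCF iteration for an eigenvalue problem with eigenvector nonlinearity,
# A(v) v = λ v: freeze the eigenvector in the matrix, solve the resulting
# linear symmetric eigenproblem, and repeat.
function scf(A::Function, v0::AbstractVector; maxit = 100, tol = 1e-10)
    v = v0 / norm(v0)
    λ = 0.0
    for _ in 1:maxit
        E = eigen(Symmetric(A(v)))
        λ, v = E.values[1], E.vectors[:, 1]    # track the smallest eigenvalue
        norm(A(v) * v - λ * v) < tol && break
    end
    return λ, v
end

# Toy nonlinearity (an assumption): A(v) = L + Diagonal(v.^2), L a 1D Laplacian.
n = 50
L = Matrix(SymTridiagonal(fill(2.0, n), fill(-1.0, n - 1)))
λ, v = scf(w -> L + Diagonal(w .^ 2), ones(n))
```

The implicit methods of the paper modify the underlying Newton update rather than the fixed-point map itself; the sketch only shows the classical baseline they recover as a special case.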

Place, publisher, year, edition, pages
Springer Nature, 2021
Keywords
Eigenvector nonlinearity, Inexact Newton, Implicit Newton, SCF
National Category
Computational Mathematics
Identifiers
urn:nbn:se:kth:diva-305171 (URN), 10.1007/s11075-021-01189-4 (DOI), 000692056000001 (), 2-s2.0-85114113879 (Scopus ID)
Funder
Swedish Research Council
Note

QC 20220302

Available from: 2021-11-22. Created: 2021-11-22. Last updated: 2022-06-25. Bibliographically approved
Ringh, E. & Jarlebring, E. (2021). Nonlinearizing two-parameter eigenvalue problems. SIAM Journal on Matrix Analysis and Applications, 42(2), 775-799
2021 (English). In: SIAM Journal on Matrix Analysis and Applications, ISSN 0895-4798, E-ISSN 1095-7162, Vol. 42, no 2, p. 775-799. Article in journal (Refereed). Published.
Abstract [en]

We investigate a technique to transform a linear two-parameter eigenvalue problem into a nonlinear eigenvalue problem (NEP). The transformation stems from eliminating one of the equations in the two-parameter eigenvalue problem by considering it as a (standard) generalized eigenvalue problem. We characterize the equivalence between the original and the nonlinearized problem theoretically and show how to use the transformation computationally. Special cases of the transformation can be interpreted as a reversed companion linearization for polynomial eigenvalue problems, as well as a reversed (less known) linearization technique for certain algebraic eigenvalue problems with square-root terms. Moreover, by exploiting the structure of the NEP we present algorithm specializations for NEP methods, although the technique also allows general solution methods for NEPs to be applied directly. The nonlinearization is illustrated in examples and simulations, with focus on problems where the eliminated equation is of much smaller size than the other two-parameter eigenvalue equation. This situation arises naturally in domain decomposition techniques. A general error analysis is also carried out under the assumption that a backward stable eigensolver is used to solve the eliminated problem, leading to the conclusion that the error is benign in this situation.
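The elimination idea can be sketched in a few lines of Julia (a heavily simplified illustration: the matrices are placeholders, only one μ-branch is selected, and the paper's characterization of all solutions is not reproduced):

```julia
using LinearAlgebra

# Two-parameter eigenvalue problem:
#   (A1 + λ B1 + μ C1) x = 0,   (A2 + λ B2 + μ C2) y = 0.
# For fixed λ, the second equation is a generalized eigenvalue problem in μ.
# Selecting one of its eigenvalues μ(λ) turns the first equation into a
# nonlinear eigenvalue problem M(λ) x = 0.
function mu_of_lambda(A2, B2, C2, λ)
    μs = eigvals(A2 + λ * B2, -C2)       # (A2 + λ B2) y = -μ C2 y
    return μs[argmin(abs.(μs))]          # pick one branch (a simplification)
end

M(A1, B1, C1, A2, B2, C2, λ) = A1 + λ * B1 + mu_of_lambda(A2, B2, C2, λ) * C1

# Placeholder data; in the paper the eliminated (second) equation is much smaller.
n1, n2 = 5, 3
A1, B1, C1 = randn(n1, n1), randn(n1, n1), randn(n1, n1)
A2, B2, C2 = randn(n2, n2), randn(n2, n2), randn(n2, n2)
M0 = M(A1, B1, C1, A2, B2, C2, 0.7)      # the NEP evaluated at λ = 0.7
```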

Place, publisher, year, edition, pages
Society for Industrial & Applied Mathematics (SIAM), 2021
Keywords
two-parameter eigenvalue problem, nonlinear eigenvalue problem, multiparameter eigenvalue problem, iterative algorithms, implicit function theorem
National Category
Computational Mathematics
Research subject
Applied and Computational Mathematics, Numerical Analysis; Applied and Computational Mathematics
Identifiers
urn:nbn:se:kth:diva-291995 (URN), 10.1137/19M1274316 (DOI), 000674142000015 (), 2-s2.0-85108326265 (Scopus ID)
Funder
Swedish Research Council
Note

QC 20210407

Available from: 2021-03-23. Created: 2021-03-23. Last updated: 2022-06-25. Bibliographically approved
Upadhyaya, P., Jarlebring, E. & Tudisco, F. (2021). The self-consistent field iteration for p-spectral clustering.
2021 (English). Manuscript (preprint) (Other academic).
Abstract [en]

The self-consistent field (SCF) iteration, combined with its variants, is one of the most widely used algorithms in quantum chemistry. We propose a procedure to adapt the SCF iteration for the p-Laplacian eigenproblem, which is an important problem in the field of unsupervised learning. We formulate the p-Laplacian eigenproblem as a type of nonlinear eigenproblem with one eigenvector nonlinearity, which then allows us to adapt the SCF iteration for its solution after the application of suitable regularization techniques. The results of our numerical experiments confirm the viability of our approach.
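For orientation, the classical p = 2 case is ordinary spectral clustering via the Fiedler vector of the graph Laplacian; a small Julia baseline is sketched below (the toy graph and the sign-based split are assumptions, and the p-Laplacian nonlinearity treated in the manuscript is not shown):

```julia
using LinearAlgebra

# Baseline p = 2 spectral clustering: build the combinatorial graph Laplacian,
# compute the Fiedler vector (eigenvector of the second-smallest eigenvalue),
# and split the vertices by its sign.
function fiedler_partition(W::AbstractMatrix)
    d = vec(sum(W, dims = 2))
    L = Diagonal(d) - W
    E = eigen(Symmetric(Matrix(L)))
    return E.vectors[:, 2] .>= 0          # two clusters from the sign pattern
end

# Toy graph: two triangles joined by a single bridge edge (assumed example).
W = zeros(6, 6)
for (i, j) in [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)]
    W[i, j] = W[j, i] = 1.0
end
println(fiedler_partition(W))
```

For p ≠ 2 the eigenproblem acquires an eigenvector nonlinearity, which is where the adapted SCF iteration of the manuscript comes in.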

Keywords
spectral clustering, scf, nonlinear eigenproblems, data science, p-Laplacian
National Category
Computational Mathematics
Research subject
Applied and Computational Mathematics, Numerical Analysis
Identifiers
urn:nbn:se:kth:diva-305183 (URN)
Note

QC 20211125

Available from: 2021-11-22. Created: 2021-11-22. Last updated: 2024-03-15. Bibliographically approved
Mele, G., Ringh, E., Ek, D., Izzo, F., Upadhyaya, P. & Jarlebring, E. (2020). Preconditioning for linear systems. KD Publishing
2020 (English). Book (Other academic).
Place, publisher, year, edition, pages
KD Publishing, 2020
National Category
Computational Mathematics
Identifiers
urn:nbn:se:kth:diva-273402 (URN), 9798646306334 (ISBN)
Note

QC 20200523

Available from: 2020-05-17. Created: 2020-05-17. Last updated: 2024-03-18. Bibliographically approved
Upadhyaya, P., Jarlebring, E. & Rubensson, E. (2019). A density matrix approach to the convergence of the self-consistent field iteration. Numerical Algebra, Control and Optimization, 0(0), 0-0
2019 (English). In: Numerical Algebra, Control and Optimization, ISSN 2155-3289, E-ISSN 2155-3297, Vol. 0, no 0, p. 0-0. Article in journal (Refereed). Published.
Abstract [en]

In this paper, we present a local convergence analysis of the self-consistent field (SCF) iteration using the density matrix as the state of a fixed-point iteration. Conditions for local convergence are formulated in terms of the spectral radius of the Jacobian of a fixed-point map. The relationship between convergence and certain properties of the problem is explored by deriving upper bounds expressed in terms of higher gaps. This gives more information regarding how the gaps between eigenvalues of the problem affect the convergence, and hence these bounds give more insight into the convergence behaviour than standard convergence results. We also provide a detailed analysis to describe the difference between the bounds and the exact convergence factor for an illustrative example. Finally, we present numerical examples and compare the exact value of the convergence factor with the observed behaviour of SCF, along with our new bounds and the characterization using the higher gaps. We provide heuristic estimates of the convergence factor in situations where the bounds fail to capture the convergence well.
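The fixed-point map analyzed here can be written down compactly; the Julia sketch below only fixes notation (the affine model H(P) = H0 + αP and all names are placeholders, not the paper's problem):

```julia
using LinearAlgebra

# Density matrix built from the eigenvectors of the k lowest eigenvalues
# ("occupied" states) of a symmetric matrix H: an orthogonal projector of trace k.
function density_matrix(H::AbstractMatrix, k::Integer)
    E = eigen(Symmetric(H))
    V = E.vectors[:, 1:k]
    return V * V'
end

# SCF as a fixed-point iteration in the density matrix: P -> density_matrix(H(P), k).
# Local convergence is governed by the spectral radius of the Jacobian of this map.
scf_map(P, H0, k; α = 0.1) = density_matrix(H0 + α * P, k)   # toy H(P) = H0 + αP

n, k = 20, 3
H0 = Matrix(Symmetric(randn(n, n)))
P = density_matrix(H0, k)
P = scf_map(P, H0, k)        # one SCF step in the density-matrix formulation
```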

Place, publisher, year, edition, pages
American Institute of Mathematical Sciences (AIMS), 2019
National Category
Computational Mathematics
Identifiers
urn:nbn:se:kth:diva-273579 (URN), 10.3934/naco.2020018 (DOI), 000594844400008 (), 2-s2.0-85101269567 (Scopus ID)
Note

QC 20201119

Available from: 2020-05-19. Created: 2020-05-19. Last updated: 2022-06-26. Bibliographically approved
Jarlebring, E. (2019). Broyden's method for nonlinear eigenproblems. SIAM Journal on Scientific Computing, 41(2), A989-A1012
2019 (English). In: SIAM Journal on Scientific Computing, ISSN 1064-8275, E-ISSN 1095-7197, Vol. 41, no 2, p. A989-A1012. Article in journal (Refereed). Published.
Abstract [en]

Broyden's method is a general method commonly used for nonlinear systems of equations when very little information is available about the problem. We develop an approach based on Broyden's method for the structure appearing in nonlinear eigenvalue problems. Our approach is designed for problems where the evaluation of a matrix-vector product is computationally expensive, essentially as expensive as solving the corresponding linear system of equations. We show how the structure of the Jacobian matrix can be incorporated into the algorithm to improve convergence. The algorithm exhibits local superlinear convergence for simple eigenvalues, and we characterize the convergence. We show how deflation can be integrated and combined so that the method can be used to compute several eigenvalues. A specific problem in machine tool milling, coupled with a PDE, is used to illustrate the approach. The simulations were carried out in the Julia programming language, and the simulation software is provided publicly for reproducibility.
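For comparison, the unstructured textbook method that serves as the starting point is Broyden's rank-one update for a general nonlinear system F(x) = 0; a minimal Julia sketch follows (the function broyden, the eigenpair formulation of F, and the initial Jacobian are illustrative choices, not the structured algorithm of the paper):

```julia
using LinearAlgebra

# Textbook Broyden's method ("good" rank-one update) for a general system
# F(x) = 0; the paper adapts the idea to the structure of nonlinear
# eigenproblems, which is not reflected here.
function broyden(F::Function, x0::AbstractVector, J0::AbstractMatrix;
                 maxit = 100, tol = 1e-10)
    x, J = copy(x0), copy(J0)
    Fx = F(x)
    for _ in 1:maxit
        norm(Fx) < tol && break
        Δx = -(J \ Fx)
        x += Δx
        Fnew = F(x)
        J += ((Fnew - Fx - J * Δx) * Δx') / dot(Δx, Δx)   # rank-one update
        Fx = Fnew
    end
    return x
end

# Illustrative eigenpair formulation: F([v; λ]) = [A v - λ v; v'v - 1].
A = [2.0 1.0; 1.0 3.0]
F(z) = vcat(A * z[1:2] - z[3] * z[1:2], dot(z[1:2], z[1:2]) - 1)
v0, λ0 = [1.0, 0.0], 1.0
J0 = vcat(hcat(A - λ0 * I, -v0), hcat(2 * v0', 0.0))      # Jacobian of F at start
z = broyden(F, vcat(v0, λ0), J0)
```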

Place, publisher, year, edition, pages
SIAM PUBLICATIONS, 2019
Keywords
nonlinear eigenproblems, Broyden methods, Newton methods, time-delay systems, eigenvalue problems
National Category
Computational Mathematics
Identifiers
urn:nbn:se:kth:diva-254045 (URN), 10.1137/18M1173150 (DOI), 000469225300013 (), 2-s2.0-85065566542 (Scopus ID)
Note

QC 20190813

Available from: 2019-08-13. Created: 2019-08-13. Last updated: 2022-06-26. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0001-9443-8772
