- 1 Introduction
- 1.1 What makes eigenvalues interesting?
- 1.2 Example 1: The vibrating string
- 1.2.1 Problem setting
- 1.2.2 The method of separation of variables
- 1.3.3 Global functions
- 1.3.4 A numerical comparison
- 1.4 Example 2: The heat equation
- 1.5 Example 3: The wave equation
- 1.6 The 2D Laplace eigenvalue problem
- 1.6.3 A numerical example
- 1.7 Cavity resonances in particle accelerators
- 1.8 Spectral clustering
- 1.8.1 The graph Laplacian
- 1.8.2 Spectral clustering
- 1.8.3 Normalized graph Laplacians
- 1.9 Other sources of eigenvalue problems
- Bibliography
- 2 Basics
- 2.1 Notation
- 2.2 Statement of the problem
- 2.3 Similarity transformations
- 2.4 Schur decomposition
- 2.5 The real Schur decomposition
- 2.6 Normal matrices
- 2.7 Hermitian matrices
- 2.8 Cholesky factorization
- 2.9 The singular value decomposition (SVD)
- 2.10 Projections
- 2.11 Angles between vectors and subspaces
- Bibliography
- 3 The QR Algorithm
- 3.1 The basic QR algorithm
- 3.1.1 Numerical experiments
- 3.2 The Hessenberg QR algorithm
- 3.2.1 A numerical experiment
- 3.2.2 Complexity
- 3.3 The Householder reduction to Hessenberg form
- 3.3.2 Reduction to Hessenberg form
- 3.4 Improving the convergence of the QR algorithm
- 3.4.1 A numerical example
- 3.4.2 QR algorithm with shifts
- 3.4.3 A numerical example
- 3.5 The double shift QR algorithm
- 3.5.1 A numerical example
- 3.5.2 The complexity
- 3.6 The symmetric tridiagonal QR algorithm
- 3.6.1 Reduction to tridiagonal form
- 3.6.2 The tridiagonal QR algorithm
- 3.7 Research
- 3.8 Summary
- Bibliography
- 4.1 The divide and conquer idea
- 4.2 Partitioning the tridiagonal matrix
- 4.3 Solving the small systems
- 4.4 Deflation
- 4.4.1 Numerical examples
- 4.6 Solving the secular equation
- 4.7 A first algorithm
- 4.7.1 A numerical example
- 4.8 The algorithm of Gu and Eisenstat
- 4.8.1 A numerical example [continued]
- Bibliography
- 5 LAPACK and the BLAS
- 5.1 LAPACK
- 5.2 BLAS
- 5.2.1 Typical performance numbers for the BLAS
- 5.3 Blocking
- 5.4 LAPACK solvers for the symmetric eigenproblems
- 5.6 An example of a LAPACK routine
- Bibliography
- 6 Vector iteration (power method)
- 6.1 Simple vector iteration
- 6.2 Convergence analysis
- 6.3 A numerical example
- 6.4 The symmetric case
- 6.5 Inverse vector iteration
- 6.6 The generalized eigenvalue problem
- 6.7 Computing higher eigenvalues
- 6.8 Rayleigh quotient iteration
- 6.8.1 A numerical example
- Bibliography
- 7 Simultaneous vector or subspace iterations
- 7.1 Basic subspace iteration
- 7.2 Convergence of basic subspace iteration
- 7.3 Accelerating subspace iteration
- 7.4 Relation between subspace iteration and QR algorithm
- 7.5 Addendum
- Bibliography
- 8 Krylov subspaces
- 8.1 Introduction
- 8.3 Polynomial representation of Krylov subspaces
- 8.4 Error bounds of Saad
- Bibliography
- 9 Arnoldi and Lanczos algorithms
- 9.2 Arnoldi algorithm with explicit restarts
- 9.3 The Lanczos basis
- 9.4 The Lanczos process as an iterative method
- 9.5 An error analysis of the unmodified Lanczos algorithm
- 9.6 Partial reorthogonalization
- 9.7 Block Lanczos
- 9.8 External selective reorthogonalization
- Bibliography
- 10 Restarting Arnoldi and Lanczos algorithms
- 10.2 Implicit restart
- 10.3 Convergence criterion
- 10.4 The generalized eigenvalue problem
- 10.5 A numerical example
- 10.6 Another numerical example
- 10.7 The Lanczos algorithm with thick restarts
- 10.8 Krylov–Schur algorithm
- 10.9 The rational Krylov space method
- Bibliography
- 11 The Jacobi-Davidson Method
- 11.1 The Davidson algorithm
- 11.2 The Jacobi orthogonal component correction
- 11.2.1 Restarts
- 11.2.2 The computation of several eigenvalues
- 11.2.3 Spectral shifts
- 11.3 The generalized Hermitian eigenvalue problem
- 11.4 A numerical example
- 11.6 Harmonic Ritz values and vectors
- 11.7 Refined Ritz vectors
- 11.8 The generalized Schur decomposition
- 11.9.1 Restart
- 11.9.3 Algorithm
- Bibliography
- 12 Rayleigh quotient and trace minimization
- 12.1 Introduction
- 12.2 The method of steepest descent
- 12.3 The conjugate gradient algorithm
- 12.4 Locally optimal PCG (LOPCG)
- 12.5 The block Rayleigh quotient minimization algorithm (BRQMIN)
- 12.7 A numerical example
- 12.8 Trace minimization
- Bibliography
CHAPTER 11. THE JACOBI-DAVIDSON METHOD
As $u_j \perp Q_k$ we have

$$(I - u_j u_j^*)(I - Q_k Q_k^*) = I - \tilde{Q}_k \tilde{Q}_k^*, \qquad \tilde{Q}_k = [\tilde{x}_1, \ldots, \tilde{x}_k, u_j].$$
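The projector identity above is easy to check numerically. The following is a small sketch (the matrix sizes and random data are illustrative assumptions), verifying that the two projectors agree once $u_j \perp Q_k$ is enforced:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3

# Orthonormal Q_k (the k converged Schur vectors) and a unit vector u_j ⊥ Q_k
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
u = rng.standard_normal(n)
u -= Q @ (Q.T @ u)            # enforce u_j ⊥ Q_k
u /= np.linalg.norm(u)

I = np.eye(n)
Qt = np.column_stack([Q, u])  # Q̃_k = [x̃_1, ..., x̃_k, u_j]

lhs = (I - np.outer(u, u)) @ (I - Q @ Q.T)
rhs = I - Qt @ Qt.T
print(np.allclose(lhs, rhs))  # True
```

The identity holds because $u_j^* Q_k = 0$ kills the cross term when the product is expanded.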
Thus, we can write (11.19) in the form

$$(11.20)\qquad (I - \tilde{Q}_k \tilde{Q}_k^*)(A - \vartheta_j I)(I - \tilde{Q}_k \tilde{Q}_k^*)\, t = -r_j, \qquad \tilde{Q}_k^* t = 0.$$
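One way to make (11.20) concrete is to solve the projected system directly. The sketch below is an illustrative assumption, not the solver the text has in mind (which applies a preconditioned iterative method): it takes the minimum-norm least-squares solution, which lies in the range of the projector and therefore satisfies the constraint automatically. The right-hand side is projected for consistency.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 10, 2

A = rng.standard_normal((n, n))
A = (A + A.T) / 2                          # symmetric test matrix

Qt, _ = np.linalg.qr(rng.standard_normal((n, k + 1)))  # Q̃_k = [Q_k, u_j]
u = Qt[:, -1]                              # current eigenvector approximation u_j
theta = u @ A @ u                          # Rayleigh quotient ϑ_j
r = A @ u - theta * u                      # residual r_j

P = np.eye(n) - Qt @ Qt.T                  # I − Q̃_k Q̃_k^*
M = P @ (A - theta * np.eye(n)) @ P        # projected operator of (11.20)

# M is singular on span(Q̃_k); the minimum-norm least-squares solution lies in
# range(M) ⊆ range(P), so the constraint Q̃_k^* t = 0 holds automatically.
t = np.linalg.lstsq(M, -P @ r, rcond=None)[0]
print(np.linalg.norm(Qt.T @ t))            # t ⊥ Q̃_k up to roundoff
```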
The preconditioner becomes

$$(11.21)\qquad \tilde{K} = (I - \tilde{Q}_k \tilde{Q}_k^*)\, K \,(I - \tilde{Q}_k \tilde{Q}_k^*), \qquad K \approx A - \vartheta_j I.$$
Similarly as earlier, for solving

$$\tilde{K} z = \tilde{A} v, \qquad \tilde{Q}_k^* z = \tilde{Q}_k^* v = 0,$$

we execute the following steps. Since

$$\tilde{A} v = (I - \tilde{Q}_k \tilde{Q}_k^*) \underbrace{(A - \vartheta_j I) v}_{y} = (I - \tilde{Q}_k \tilde{Q}_k^*) y = y - \tilde{Q}_k \tilde{Q}_k^* y,$$

we have to solve

$$\tilde{K} z = (I - \tilde{Q}_k \tilde{Q}_k^*) K z = (I - \tilde{Q}_k \tilde{Q}_k^*) y, \qquad z \perp \tilde{Q}_k.$$

The second equation states that $K z - y \in \mathcal{R}(\tilde{Q}_k)$, i.e., $K z = y - \tilde{Q}_k a$ for some vector $a$. Thus,

$$z = K^{-1} y - K^{-1} \tilde{Q}_k a.$$

Similarly as earlier, we determine $a$ by means of the constraint $\tilde{Q}_k^* z = 0$,

$$a = (\tilde{Q}_k^* K^{-1} \tilde{Q}_k)^{-1} \tilde{Q}_k^* K^{-1} y.$$
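The solution steps just derived can be sketched numerically. In the sketch below the preconditioner K is a hypothetical toy choice (the shifted diagonal of A); only the formulas for y, a, and z come from the derivation above:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 12, 3

A = rng.standard_normal((n, n))
A = (A + A.T) / 2
theta = 0.5                                   # spectral shift ϑ_j
K = np.diag(np.diag(A)) - theta * np.eye(n)   # toy preconditioner K ≈ A − ϑ_j I

Qt, _ = np.linalg.qr(rng.standard_normal((n, k + 1)))  # Q̃_k
P = np.eye(n) - Qt @ Qt.T

v = P @ rng.standard_normal(n)                # some v with Q̃_k^* v = 0
y = (A - theta * np.eye(n)) @ v               # y := (A − ϑ_j I) v

Kinv_y = np.linalg.solve(K, y)                # K⁻¹ y
Kinv_Qt = np.linalg.solve(K, Qt)              # K⁻¹ Q̃_k  (k+1 extra solves)

# a from the constraint Q̃_k^* z = 0
a = np.linalg.solve(Qt.T @ Kinv_Qt, Qt.T @ Kinv_y)
z = Kinv_y - Kinv_Qt @ a                      # z = K⁻¹ y − K⁻¹ Q̃_k a

Kt = P @ K @ P                                # K̃ of (11.21)
print(np.linalg.norm(Qt.T @ z), np.linalg.norm(Kt @ z - P @ y))
```

Both printed numbers are at roundoff level: z satisfies the constraint, and the projected equation K̃z = Ãv holds because the extra terms are annihilated by the projector.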
If the iteration has converged to the vector $\tilde{x}_{k+1}$, we can extend the partial Schur decomposition (11.17). Setting

$$Q_{k+1} := [Q_k, \tilde{x}_{k+1}],$$

we get

$$(11.22)\qquad A Q_{k+1} = Q_{k+1} T_{k+1} \qquad \text{with} \qquad T_{k+1} = \begin{pmatrix} T_k & Q_k^* A \tilde{x}_{k+1} \\ 0 & \tilde{x}_{k+1}^* A \tilde{x}_{k+1} \end{pmatrix}.$$
11.2.3 Spectral shifts
In the correction equation (11.9), and implicitly in the preconditioner (11.15), a spectral shift ϑj appears. Experiments show that it is not wise to always choose the Rayleigh quotient of the most recent eigenvector approximation u as the shift. In particular, far away from convergence, i.e., in the first few iteration steps, the Rayleigh quotient may be far from the (desired) eigenvalue and may in fact direct the JD iteration toward an unwanted solution. So one proceeds similarly as in the plain Rayleigh quotient iteration, cf. Remark 6.8 on page 129. Initially, the shift is held fixed, usually equal to the target value τ. As soon as the norm of the residual is small enough, the Rayleigh quotient of the current approximation is chosen as the spectral shift in the correction equation. For efficiency
Algorithm 11.2 The Jacobi–Davidson QR algorithm to compute p of the eigenvalues closest to a target value τ
1: Q_0 := []; k := 0. /* Initializations */
2: Choose v_1 with ‖v_1‖ = 1.
3: w_1 = A v_1; H_1 := v_1^* w_1; V_1 := [v_1]; W_1 := [w_1];
4: q̃ := v_1; ϑ̃ := v_1^* w_1; r := w_1 − ϑ̃ q̃.
5: j := 1;
6: while k < p do
7:   /* Compute Schur vectors one after the other */
8:   Approximately solve the correction equation for t,
       (I − Q̃_k Q̃_k^*)(A − ϑ̃ I)(I − Q̃_k Q̃_k^*) t = −r,   Q̃_k^* t = 0,
     where Q̃_k = [Q_k, q̃].
9:   v_j = (I − V_{j−1} V_{j−1}^*) t / ‖(I − V_{j−1} V_{j−1}^*) t‖; V_j := [V_{j−1}, v_j].
10:  w_j = A v_j; H_j := [H_{j−1}, V_{j−1}^* w_j ; v_j^* W_{j−1}, v_j^* w_j]; W_j := [W_{j−1}, w_j].
11:  Compute the Schur decomposition H_j =: S_j R_j S_j^* with the eigenvalues r_{ii}^{(j)} sorted according to their distance to τ.
12:  /* Test for convergence */
13:  repeat
14:    ϑ̃ = λ_1^{(j)}; q̃ = V_j s_1; w̃ = W_j s_1; r = w̃ − ϑ̃ q̃
15:    found := ‖r‖ < ε
16:    if found then
17:      Q_{k+1} = [Q_k, q̃]; k := k + 1;
18:    end if
19:  until not found
20:  /* Restart */
21:  if j = j_max then
22:    V_{j_min} := V_j [s_1, . . . , s_{j_min}]; T_{j_min} := T_j (1 : j_min, 1 : j_min);
23:    H_{j_min} := T_{j_min}; S_{j_min} := I_{j_min}; j := j_min
24:  end if
25: end while
reasons, the spectral shift in the preconditioner K is always held fixed. In this way it has to be computed just once. Notice that K̃, however, changes with each correction equation.
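The shift-selection rule just described might be sketched as follows; the function name and the threshold switch_tol are hypothetical illustrations, not values from the text:

```python
import numpy as np

def choose_shift(A, u, tau, r_norm, switch_tol=1e-2):
    """Shift for the correction equation: hold the target tau fixed far from
    convergence; switch to the Rayleigh quotient once the residual is small.
    (switch_tol is a hypothetical tuning parameter, not from the text.)"""
    if r_norm >= switch_tol:
        return tau
    return (u @ A @ u) / (u @ u)   # Rayleigh quotient of current approximation

# tiny illustration
A = np.diag([1.0, 2.0, 10.0])
u = np.array([1.0, 0.1, 0.0])
u /= np.linalg.norm(u)
r = A @ u - (u @ A @ u) * u
print(choose_shift(A, u, tau=0.9, r_norm=np.linalg.norm(r)))
```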
Remark 11.2. As long as the shift is held fixed, Jacobi–Davidson actually performs a shift-and-invert Arnoldi iteration.
Algorithm 11.2 gives the framework for an algorithm to compute the partial Schur decomposition of a matrix A. Qk stores the converged Schur vectors; Vj stores the ‘active’ search space. This algorithm does not take into account some of the issues just mentioned. In particular, the shift is always taken to be the Rayleigh quotient of the most recent approximation q̃.
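To make the framework concrete, here is a compact sketch of the whole loop, under simplifying assumptions not in the text: A is symmetric (so the Schur form is the spectral decomposition), the correction equation is solved exactly by a least-squares solve instead of a preconditioned iterative solver, and deflation and thick restart are done by explicit orthogonalization.

```python
import numpy as np

def jdqr(A, p, tau, tol=1e-8, jmax=8, jmin=3, maxit=300, seed=0):
    """Sketch of the Algorithm 11.2 framework for symmetric A (illustrative,
    simplified: exact correction solves, Ritz-value shift throughout)."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    I = np.eye(n)
    Q = np.zeros((n, 0))                        # converged Schur vectors
    v = rng.standard_normal(n)
    V = (v / np.linalg.norm(v))[:, None]        # 'active' search space basis
    for _ in range(maxit):
        if V.shape[1] == 0:                     # reseed an empty search space
            t = rng.standard_normal(n)
            t -= Q @ (Q.T @ t)
            V = (t / np.linalg.norm(t))[:, None]
        H = V.T @ A @ V                         # projected matrix
        theta, S = np.linalg.eigh(H)
        order = np.argsort(np.abs(theta - tau)) # sort by distance to target τ
        theta, S = theta[order], S[:, order]
        th, q = theta[0], V @ S[:, 0]
        r = A @ q - th * q
        if np.linalg.norm(r) < tol:             # Ritz pair converged: deflate
            Q = np.column_stack([Q, q])
            if Q.shape[1] == p:
                return np.array([x @ A @ x for x in Q.T]), Q
            V = V @ S[:, 1:]                    # purge converged direction
            if V.shape[1] > 0:
                V, _ = np.linalg.qr(V - Q @ (Q.T @ V))
            continue
        if V.shape[1] == jmax:                  # thick restart
            V = V @ S[:, :jmin]
        Qt = np.column_stack([Q, q])            # Q̃_k = [Q_k, q̃]
        P = I - Qt @ Qt.T
        M = P @ (A - th * I) @ P                # projected correction operator
        t = np.linalg.lstsq(M, -P @ r, rcond=None)[0]
        for _ in range(2):                      # twice-is-enough reorthogonalization
            t -= V @ (V.T @ t) + Q @ (Q.T @ t)
        nt = np.linalg.norm(t)
        if nt < 1e-12 * np.linalg.norm(A):      # breakdown: random direction
            t = rng.standard_normal(n)
            t -= V @ (V.T @ t) + Q @ (Q.T @ t)
            nt = np.linalg.norm(t)
        V = np.column_stack([V, t / nt])
    raise RuntimeError("Jacobi-Davidson did not converge")

# tiny demonstration on a 12x12 symmetric matrix with eigenvalues 1, ..., 12
rng = np.random.default_rng(42)
O, _ = np.linalg.qr(rng.standard_normal((12, 12)))
B = O @ np.diag(np.arange(1.0, 13.0)) @ O.T
vals, Qc = jdqr(B, p=3, tau=6.2)
print(np.sort(vals))   # should be (close to) the eigenvalues nearest 6.2
```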