1 Introduction
1.1 What makes eigenvalues interesting?
1.2 Example 1: The vibrating string
1.2.1 Problem setting
1.2.2 The method of separation of variables
1.3.3 Global functions
1.3.4 A numerical comparison
1.4 Example 2: The heat equation
1.5 Example 3: The wave equation
1.6 The 2D Laplace eigenvalue problem
1.6.3 A numerical example
1.7 Cavity resonances in particle accelerators
1.8 Spectral clustering
1.8.1 The graph Laplacian
1.8.2 Spectral clustering
1.8.3 Normalized graph Laplacians
1.9 Other sources of eigenvalue problems
Bibliography
2 Basics
2.1 Notation
2.2 Statement of the problem
2.3 Similarity transformations
2.4 Schur decomposition
2.5 The real Schur decomposition
2.6 Normal matrices
2.7 Hermitian matrices
2.8 Cholesky factorization
2.9 The singular value decomposition (SVD)
2.10 Projections
2.11 Angles between vectors and subspaces
Bibliography
3 The QR Algorithm
3.1 The basic QR algorithm
3.1.1 Numerical experiments
3.2 The Hessenberg QR algorithm
3.2.1 A numerical experiment
3.2.2 Complexity
3.3 The Householder reduction to Hessenberg form
3.3.2 Reduction to Hessenberg form
3.4 Improving the convergence of the QR algorithm
3.4.1 A numerical example
3.4.2 QR algorithm with shifts
3.4.3 A numerical example
3.5 The double shift QR algorithm
3.5.1 A numerical example
3.5.2 The complexity
3.6 The symmetric tridiagonal QR algorithm
3.6.1 Reduction to tridiagonal form
3.6.2 The tridiagonal QR algorithm
3.7 Research
3.8 Summary
Bibliography
4.1 The divide and conquer idea
4.2 Partitioning the tridiagonal matrix
4.3 Solving the small systems
4.4 Deflation
4.4.1 Numerical examples
4.6 Solving the secular equation
4.7 A first algorithm
4.7.1 A numerical example
4.8 The algorithm of Gu and Eisenstat
4.8.1 A numerical example [continued]
Bibliography
5 LAPACK and the BLAS
5.1 LAPACK
5.2 BLAS
5.2.1 Typical performance numbers for the BLAS
5.3 Blocking
5.4 LAPACK solvers for the symmetric eigenproblem
5.6 An example of a LAPACK routine
Bibliography
6 Vector iteration (power method)
6.1 Simple vector iteration
6.2 Convergence analysis
6.3 A numerical example
6.4 The symmetric case
6.5 Inverse vector iteration
6.6 The generalized eigenvalue problem
6.7 Computing higher eigenvalues
6.8 Rayleigh quotient iteration
6.8.1 A numerical example
Bibliography
7 Simultaneous vector or subspace iterations
7.1 Basic subspace iteration
7.2 Convergence of basic subspace iteration
7.3 Accelerating subspace iteration
7.4 Relation between subspace iteration and QR algorithm
7.5 Addendum
Bibliography
8 Krylov subspaces
8.1 Introduction
8.3 Polynomial representation of Krylov subspaces
8.4 Error bounds of Saad
Bibliography
9 Arnoldi and Lanczos algorithms
9.2 Arnoldi algorithm with explicit restarts
9.3 The Lanczos basis
9.4 The Lanczos process as an iterative method
9.5 An error analysis of the unmodified Lanczos algorithm
9.6 Partial reorthogonalization
9.7 Block Lanczos
9.8 External selective reorthogonalization
Bibliography
10 Restarting Arnoldi and Lanczos algorithms
10.2 Implicit restart
10.3 Convergence criterion
10.4 The generalized eigenvalue problem
10.5 A numerical example
10.6 Another numerical example
10.7 The Lanczos algorithm with thick restarts
10.8 Krylov–Schur algorithm
10.9 The rational Krylov space method
Bibliography
11 The Jacobi-Davidson Method
11.1 The Davidson algorithm
11.2 The Jacobi orthogonal component correction
11.2.1 Restarts
11.2.2 The computation of several eigenvalues
11.2.3 Spectral shifts
11.3 The generalized Hermitian eigenvalue problem
11.4 A numerical example
11.6 Harmonic Ritz values and vectors
11.7 Refined Ritz vectors
11.8 The generalized Schur decomposition
11.9.1 Restart
11.9.3 Algorithm
Bibliography
12 Rayleigh quotient and trace minimization
12.1 Introduction
12.2 The method of steepest descent
12.3 The conjugate gradient algorithm
12.4 Locally optimal PCG (LOPCG)
12.5 The block Rayleigh quotient minimization algorithm (BRQMIN)
12.7 A numerical example
12.8 Trace minimization
Bibliography
If σn = O(ε) then
\[
  (A - \sigma I)z = U \Sigma V^* z = y.
\]
Thus,
\[
  z = V \Sigma^{-1} U^* y
    = \sum_{i=1}^{n} v_i\, \frac{u_i^* y}{\sigma_i}
    \approx v_n\, \frac{u_n^* y}{\sigma_n},
  \qquad \text{since } \sigma_n \ll \sigma_{n-1}.
\]
The tiny σn blows up the component in the direction of vn. So, the vector z points in the desired ‘most singular’ direction.
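As a quick numerical sanity check (a made-up example, not from the text: the test matrix, shift, and right-hand side are all chosen arbitrarily here), one can verify that the solution of (A − σI)z = y aligns with v_n whenever σ is accurate to roughly machine precision:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    A = rng.standard_normal((n, n))
    A = (A + A.T) / 2                       # symmetric test matrix

    lam, Q = np.linalg.eigh(A)
    sigma = lam[3] + 1e-14                  # shift within O(eps) of an eigenvalue

    y = rng.standard_normal(n)              # arbitrary right-hand side
    z = np.linalg.solve(A - sigma * np.eye(n), y)
    z /= np.linalg.norm(z)

    # For A - sigma*I, v_n is (up to sign) the eigenvector belonging to lam[3].
    print(abs(Q[:, 3] @ z))                 # ~ 1: z points in the 'most singular' direction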
6.6 The generalized eigenvalue problem
Applying the vector iteration (6.1) to the generalized eigenvalue problem Ax = λBx leads
to the iteration
\[
  x^{(k)} := B^{-1} A x^{(k-1)}, \qquad k = 1, 2, \ldots
\]
Since the solution of a linear system is required in each iteration step anyway, we can just as well execute an inverse iteration right away,
\[
  (6.24)\qquad (A - \sigma B)\, x^{(k)} := B x^{(k-1)}, \qquad k = 1, 2, \ldots
\]
This iteration amounts to an ordinary vector iteration for the eigenvalue problem
\[
  (6.25)\qquad (A - \sigma B)^{-1} B\, x = \mu x, \qquad \mu = \frac{1}{\lambda - \sigma}.
\]
Thus, the iteration (6.24) converges to the eigenvector of (6.25) corresponding to its largest eigenvalue in modulus, i.e., to the eigenvector of Ax = λBx whose eigenvalue is closest to the shift σ.
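As an illustration, here is a minimal sketch of iteration (6.24) in Python (the function name, tolerances, and stopping test are made up here; A and B are assumed symmetric with B positive definite, and σ is assumed not to be an eigenvalue). The matrix A − σB is factored once, so each step costs only one forward/back substitution; the eigenvalue is recovered from the generalized Rayleigh quotient:

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def gen_inverse_iteration(A, B, sigma, x0, maxit=100, tol=1e-10):
        lu = lu_factor(A - sigma * B)             # factor once, reuse every step
        x = x0 / np.linalg.norm(x0)
        for _ in range(maxit):
            x = lu_solve(lu, B @ x)               # (A - sigma*B) x_new = B x_old, cf. (6.24)
            x /= np.linalg.norm(x)
            lam = (x @ (A @ x)) / (x @ (B @ x))   # Rayleigh quotient for A x = lam B x
            if np.linalg.norm(A @ x - lam * (B @ x)) <= tol * np.linalg.norm(B @ x):
                break                             # residual small enough: converged
        return lam, x

As in the standard case, the closer the shift σ is to the desired eigenvalue, the faster the iteration converges.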
6.7 Computing higher eigenvalues
In order to compute higher eigenvalues λ2, λ3, . . . , we make use of the mutual orthogonality of the eigenvectors of symmetric matrices; see Theorem 2.14. (In the case of Schur vectors we can proceed in a similar way.)
So, in order to be able to compute the second eigenpair (λ2, u2), we have to know the eigenvector u1 corresponding to the lowest eigenvalue. Most probably it has been computed previously. If this is the case, we can execute an inverse iteration orthogonal to u1.
More generally, we can compute the j-th eigenpair (λj , uj) by inverse iteration, keeping the iterated vector x^{(k)} orthogonal to the already known or computed eigenvectors u1, . . . , uj−1.
In exact arithmetic, the condition $u_1^* x^{(0)} = \cdots = u_{j-1}^* x^{(0)} = 0$ implies that all $x^{(k)}$ are orthogonal to u1, . . . , uj−1. In general, however, one has to expect rounding errors that introduce components in the directions of already computed eigenvectors. Therefore, it is necessary to enforce the orthogonality conditions during the iteration.
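A minimal sketch of this orthogonalized inverse iteration (again with made-up names and tolerances, not the text's code; U holds the already computed eigenvectors u1, . . . , uj−1 as columns, and A is assumed symmetric):

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def deflated_inverse_iteration(A, sigma, U, x0, maxit=200, tol=1e-10):
        n = A.shape[0]
        lu = lu_factor(A - sigma * np.eye(n))
        x = x0 - U @ (U.T @ x0)              # start orthogonal to u_1, ..., u_{j-1}
        x /= np.linalg.norm(x)
        for _ in range(maxit):
            x = lu_solve(lu, x)              # inverse iteration step
            x -= U @ (U.T @ x)               # re-enforce orthogonality every step
            x /= np.linalg.norm(x)
            lam = x @ (A @ x)                # Rayleigh quotient (A symmetric)
            if np.linalg.norm(A @ x - lam * x) <= tol * abs(lam):
                break
        return lam, x

The re-orthogonalization in every step is cheap compared with the linear solve, and it is exactly what keeps the rounding-error components in the directions of u1, . . . , uj−1 from taking over.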
Assuming exact arithmetic, Theorem 6.7 immediately implies that
\[
  \sin\angle(x^{(k)}, u_j) \le c_1 \left|\frac{\lambda_j}{\lambda_{j'}}\right|^{k},
  \qquad
  |\lambda^{(k)} - \lambda_j| \le c_2 \left|\frac{\lambda_j}{\lambda_{j'}}\right|^{2k},
\]
where j′ is the smallest index for which λ_{j′} > λj.
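For a concrete feel for these rates (illustrative numbers only, not from the text): if |λj/λ_{j′}| = 1/2, the bounds become
\[
  \sin\angle(x^{(k)}, u_j) \le c_1\, 2^{-k},
  \qquad
  |\lambda^{(k)} - \lambda_j| \le c_2\, 4^{-k},
\]
i.e., each iteration at least halves the angle to uj while the eigenvalue error shrinks by a factor of four, the familiar squaring of the convergence rate for eigenvalues in the symmetric case.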