- 1 Introduction
- 1.1 What makes eigenvalues interesting?
- 1.2 Example 1: The vibrating string
- 1.2.1 Problem setting
- 1.2.2 The method of separation of variables
- 1.3.3 Global functions
- 1.3.4 A numerical comparison
- 1.4 Example 2: The heat equation
- 1.5 Example 3: The wave equation
- 1.6 The 2D Laplace eigenvalue problem
- 1.6.3 A numerical example
- 1.7 Cavity resonances in particle accelerators
- 1.8 Spectral clustering
- 1.8.1 The graph Laplacian
- 1.8.2 Spectral clustering
- 1.8.3 Normalized graph Laplacians
- 1.9 Other sources of eigenvalue problems
- Bibliography
- 2 Basics
- 2.1 Notation
- 2.2 Statement of the problem
- 2.3 Similarity transformations
- 2.4 Schur decomposition
- 2.5 The real Schur decomposition
- 2.6 Normal matrices
- 2.7 Hermitian matrices
- 2.8 Cholesky factorization
- 2.9 The singular value decomposition (SVD)
- 2.10 Projections
- 2.11 Angles between vectors and subspaces
- Bibliography
- 3 The QR Algorithm
- 3.1 The basic QR algorithm
- 3.1.1 Numerical experiments
- 3.2 The Hessenberg QR algorithm
- 3.2.1 A numerical experiment
- 3.2.2 Complexity
- 3.3 The Householder reduction to Hessenberg form
- 3.3.2 Reduction to Hessenberg form
- 3.4 Improving the convergence of the QR algorithm
- 3.4.1 A numerical example
- 3.4.2 QR algorithm with shifts
- 3.4.3 A numerical example
- 3.5 The double shift QR algorithm
- 3.5.1 A numerical example
- 3.5.2 The complexity
- 3.6 The symmetric tridiagonal QR algorithm
- 3.6.1 Reduction to tridiagonal form
- 3.6.2 The tridiagonal QR algorithm
- 3.7 Research
- 3.8 Summary
- Bibliography
- 4.1 The divide and conquer idea
- 4.2 Partitioning the tridiagonal matrix
- 4.3 Solving the small systems
- 4.4 Deflation
- 4.4.1 Numerical examples
- 4.6 Solving the secular equation
- 4.7 A first algorithm
- 4.7.1 A numerical example
- 4.8 The algorithm of Gu and Eisenstat
- 4.8.1 A numerical example [continued]
- Bibliography
- 5 LAPACK and the BLAS
- 5.1 LAPACK
- 5.2 BLAS
- 5.2.1 Typical performance numbers for the BLAS
- 5.3 Blocking
- 5.4 LAPACK solvers for the symmetric eigenproblems
- 5.6 An example of a LAPACK routine
- Bibliography
- 6 Vector iteration (power method)
- 6.1 Simple vector iteration
- 6.2 Convergence analysis
- 6.3 A numerical example
- 6.4 The symmetric case
- 6.5 Inverse vector iteration
- 6.6 The generalized eigenvalue problem
- 6.7 Computing higher eigenvalues
- 6.8 Rayleigh quotient iteration
- 6.8.1 A numerical example
- Bibliography
- 7 Simultaneous vector or subspace iterations
- 7.1 Basic subspace iteration
- 7.2 Convergence of basic subspace iteration
- 7.3 Accelerating subspace iteration
- 7.4 Relation between subspace iteration and QR algorithm
- 7.5 Addendum
- Bibliography
- 8 Krylov subspaces
- 8.1 Introduction
- 8.3 Polynomial representation of Krylov subspaces
- 8.4 Error bounds of Saad
- Bibliography
- 9 Arnoldi and Lanczos algorithms
- 9.2 Arnoldi algorithm with explicit restarts
- 9.3 The Lanczos basis
- 9.4 The Lanczos process as an iterative method
- 9.5 An error analysis of the unmodified Lanczos algorithm
- 9.6 Partial reorthogonalization
- 9.7 Block Lanczos
- 9.8 External selective reorthogonalization
- Bibliography
- 10 Restarting Arnoldi and Lanczos algorithms
- 10.2 Implicit restart
- 10.3 Convergence criterion
- 10.4 The generalized eigenvalue problem
- 10.5 A numerical example
- 10.6 Another numerical example
- 10.7 The Lanczos algorithm with thick restarts
- 10.8 Krylov–Schur algorithm
- 10.9 The rational Krylov space method
- Bibliography
- 11 The Jacobi-Davidson Method
- 11.1 The Davidson algorithm
- 11.2 The Jacobi orthogonal component correction
- 11.2.1 Restarts
- 11.2.2 The computation of several eigenvalues
- 11.2.3 Spectral shifts
- 11.3 The generalized Hermitian eigenvalue problem
- 11.4 A numerical example
- 11.6 Harmonic Ritz values and vectors
- 11.7 Refined Ritz vectors
- 11.8 The generalized Schur decomposition
- 11.9.1 Restart
- 11.9.3 Algorithm
- Bibliography
- 12 Rayleigh quotient and trace minimization
- 12.1 Introduction
- 12.2 The method of steepest descent
- 12.3 The conjugate gradient algorithm
- 12.4 Locally optimal PCG (LOPCG)
- 12.5 The block Rayleigh quotient minimization algorithm (BRQMIN)
- 12.7 A numerical example
- 12.8 Trace minimization
- Bibliography
3.3 The Householder reduction to Hessenberg form
In the previous section we discovered that it is a good idea to perform the QR algorithm with Hessenberg matrices instead of full matrices. But we have not discussed how we transform a full matrix (by means of similarity transformations) into Hessenberg form. We catch up on this issue in this section.
3.3.1 Householder reflectors
Givens rotations are designed to zero a single element in a vector. Householder reflectors are more efficient if a number of elements of a vector are to be zeroed at once. Here, we follow the presentation given in [5].
Definition 3.3 A matrix of the form
\[
P = I - 2uu^*, \qquad \|u\| = 1,
\]
is called a Householder reflector.
It is easy to verify that Householder reflectors are Hermitian and that \(P^2 = I\). From this we deduce that \(P\) is unitary. It is clear that we only have to store the Householder vector \(u\) to be able to multiply a vector (or a matrix) with \(P\),
\[
Px = x - u(2u^*x). \tag{3.6}
\]
This multiplication costs only \(4n\) flops, where \(n\) is the length of the vectors.
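As a quick illustration (the variable names below are our own, not from the text), formula (3.6) can be checked against the explicitly formed reflector \(P = I - 2uu^*\); forming \(P\) costs \(O(n^2)\) work and storage, whereas (3.6) needs only the vector \(u\):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
u = rng.standard_normal(n)
u /= np.linalg.norm(u)                    # Householder vector with ||u|| = 1
x = rng.standard_normal(n)

P = np.eye(n) - 2.0 * np.outer(u, u)      # explicit reflector: O(n^2)
Px_cheap = x - u * (2.0 * np.dot(u, x))   # formula (3.6): about 4n flops
```

Both products agree, and one can also verify the properties \(P = P^*\) and \(P^2 = I\) stated above.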
A task that we repeatedly want to carry out with Householder reflectors is to transform a vector \(x\) onto a multiple of \(e_1\),
\[
Px = x - u(2u^*x) = \alpha e_1.
\]
Since \(P\) is unitary, we must have \(\alpha = \rho\|x\|\), where \(\rho \in \mathbb{C}\) has absolute value one. Therefore,
\[
u = \frac{x - \rho\|x\| e_1}{\|x - \rho\|x\| e_1\|}
  = \frac{1}{\|x - \rho\|x\| e_1\|}
    \begin{pmatrix} x_1 - \rho\|x\| \\ x_2 \\ \vdots \\ x_n \end{pmatrix}.
\]
We can freely choose \(\rho\) provided that \(|\rho| = 1\). Let \(x_1 = |x_1|e^{i\phi}\). To avoid numerical cancellation in the first component \(x_1 - \rho\|x\|\) we set \(\rho = -e^{i\phi}\).
In the real case, one commonly sets \(\rho = -\operatorname{sign}(x_1)\). If \(x_1 = 0\), \(\rho\) can be chosen arbitrarily.
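A minimal numpy sketch of this construction (`householder_vector` and `apply_reflector` are hypothetical helper names, not from the text) computes \(u\) and \(\alpha = \rho\|x\|\) with the cancellation-avoiding choice of \(\rho\):

```python
import numpy as np

def householder_vector(x):
    """Return (u, alpha) with ||u|| = 1 and (I - 2uu*)x = alpha*e1."""
    x = np.asarray(x, dtype=complex)
    normx = np.linalg.norm(x)
    if normx == 0.0:
        e1 = np.zeros_like(x)
        e1[0] = 1.0
        return e1, 0.0            # x = 0: any reflector maps it to 0*e1
    if x[0] != 0:
        rho = -x[0] / abs(x[0])   # rho = -exp(i*phi): avoids cancellation
    else:
        rho = 1.0                 # x1 = 0: any rho with |rho| = 1 works
    u = x.copy()
    u[0] -= rho * normx           # u = x - rho*||x||*e1, then normalize
    u /= np.linalg.norm(u)
    return u, rho * normx

def apply_reflector(u, x):
    """P x = x - u (2 u^* x), cf. (3.6)."""
    return x - u * (2 * np.vdot(u, x))
```

Note that `np.vdot` conjugates its first argument, so `np.vdot(u, x)` is exactly \(u^*x\).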
3.3.2 Reduction to Hessenberg form
Now we show how to use Householder reflectors to reduce an arbitrary square matrix to Hessenberg form. We show the idea by means of a 5 × 5 example. In the first step of the reduction we introduce zeros in the first column below the second element,
\[
A = \begin{pmatrix}
\times & \times & \times & \times & \times\\
\times & \times & \times & \times & \times\\
\times & \times & \times & \times & \times\\
\times & \times & \times & \times & \times\\
\times & \times & \times & \times & \times
\end{pmatrix}
\xrightarrow{P_1}
\begin{pmatrix}
\times & \times & \times & \times & \times\\
\times & \times & \times & \times & \times\\
0 & \times & \times & \times & \times\\
0 & \times & \times & \times & \times\\
0 & \times & \times & \times & \times
\end{pmatrix}
\xrightarrow{P_1}
\begin{pmatrix}
\times & \times & \times & \times & \times\\
\times & \times & \times & \times & \times\\
0 & \times & \times & \times & \times\\
0 & \times & \times & \times & \times\\
0 & \times & \times & \times & \times
\end{pmatrix}
= P_1^* A P_1.
\]
Notice that \(P_1 = P_1^*\), since it is a Householder reflector! It has the structure
\[
P_1 = \begin{pmatrix}
1 & 0 & 0 & 0 & 0\\
0 & \times & \times & \times & \times\\
0 & \times & \times & \times & \times\\
0 & \times & \times & \times & \times\\
0 & \times & \times & \times & \times
\end{pmatrix}
= \begin{pmatrix} 1 & 0^T\\ 0 & I_4 - 2u_1 u_1^* \end{pmatrix}.
\]
The Householder vector \(u_1\) is determined such that
\[
(I - 2u_1 u_1^*)\begin{pmatrix} a_{21}\\ a_{31}\\ a_{41}\\ a_{51} \end{pmatrix}
= \begin{pmatrix} \alpha\\ 0\\ 0\\ 0 \end{pmatrix}
\qquad\text{with}\qquad
u_1 = \begin{pmatrix} u_1\\ u_2\\ u_3\\ u_4 \end{pmatrix}.
\]
The multiplication of \(P_1\) from the left inserts the desired zeros in column 1 of \(A\). The multiplication from the right is necessary in order to have similarity. Because of the nonzero structure of \(P_1\), the first column of \(P_1 A\) is not affected. Hence, the zeros stay there.
The reduction continues in a similar way:
\[
P_1 A P_1 = \begin{pmatrix}
\times & \times & \times & \times & \times\\
\times & \times & \times & \times & \times\\
0 & \times & \times & \times & \times\\
0 & \times & \times & \times & \times\\
0 & \times & \times & \times & \times
\end{pmatrix}
\xrightarrow{P_2\,/\,P_2}
\begin{pmatrix}
\times & \times & \times & \times & \times\\
\times & \times & \times & \times & \times\\
0 & \times & \times & \times & \times\\
0 & 0 & \times & \times & \times\\
0 & 0 & \times & \times & \times
\end{pmatrix}
\xrightarrow{P_3\,/\,P_3}
\begin{pmatrix}
\times & \times & \times & \times & \times\\
\times & \times & \times & \times & \times\\
0 & \times & \times & \times & \times\\
0 & 0 & \times & \times & \times\\
0 & 0 & 0 & \times & \times
\end{pmatrix}
= P_3 P_2 P_1 A \underbrace{P_1 P_2 P_3}_{U}.
\]
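The elimination pattern above can be turned into a short numpy sketch (our own illustrative code, not the book's Algorithm 3.3), restricted to real matrices for brevity: each step applies \(P_k\) from the left and from the right, and the reflectors are accumulated into \(U = P_1 \cdots P_{n-2}\).

```python
import numpy as np

def to_hessenberg(A):
    """Reduce A to Hessenberg form H = U^T A U with Householder reflectors."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    U = np.eye(n)
    for k in range(n - 2):
        x = A[k+1:, k].copy()
        normx = np.linalg.norm(x)
        if normx == 0.0:
            continue                      # column already reduced
        rho = -np.sign(x[0]) if x[0] != 0 else 1.0
        u = x
        u[0] -= rho * normx               # u = x - rho*||x||*e1
        u /= np.linalg.norm(u)
        # P_k A: only rows k+1..n-1 and columns k..n-1 change
        A[k+1:, k:] -= 2.0 * np.outer(u, u @ A[k+1:, k:])
        # (P_k A) P_k: only columns k+1..n-1 change, so the zeros
        # just created in column k are preserved
        A[:, k+1:] -= 2.0 * np.outer(A[:, k+1:] @ u, u)
        U[:, k+1:] -= 2.0 * np.outer(U[:, k+1:] @ u, u)   # U <- U P_k
    return A, U
```

Each reflector is applied via rank-one updates as in (3.6), never formed explicitly; the returned \(U\) is orthogonal and \(U^T A U\) is Hessenberg up to roundoff.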
Algorithm 3.3 gives the details for the general \(n \times n\) case. In step 4 of this algorithm, the Householder reflector is generated such that
\[
(I - 2u_k u_k^*)\begin{pmatrix} a_{k+1,k}\\ a_{k+2,k}\\ \vdots\\ a_{n,k} \end{pmatrix}
= \begin{pmatrix} \alpha\\ 0\\ \vdots\\ 0 \end{pmatrix}
\qquad\text{with}\qquad
u_k = \begin{pmatrix} u_1\\ u_2\\ \vdots\\ u_{n-k} \end{pmatrix}
\quad\text{and}\quad
\alpha = \rho\|x\|
\]
according to the considerations of the previous subsection. The Householder vectors are stored at the locations of the zeros. Therefore the matrix \(U = P_1 \cdots P_{n-2}\) that effects the similarity transformation from the full \(A\) to the Hessenberg \(H\) is computed only after all Householder vectors have been generated, thus saving \((2/3)n^3\) flops. The overall complexity of the reduction is:
• Application of \(P_k\) from the left: \(\displaystyle\sum_{k=1}^{n-2} 4(n-k-1)(n-k) \approx \tfrac{4}{3}n^3\)
• Application of \(P_k\) from the right: \(\displaystyle\sum_{k=1}^{n-2} 4(n)(n-k) \approx 2n^3\)
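As an illustrative sanity check on these leading-order counts (our own computation, not from the text), summing the per-step costs for a moderately large \(n\) reproduces the factors \(4/3\) and \(2\):

```python
# Sum the per-step flop counts over k = 1, ..., n-2 and compare with
# the leading-order estimates (4/3)n^3 and 2n^3.
n = 2000
left = sum(4 * (n - k - 1) * (n - k) for k in range(1, n - 1))
right = sum(4 * n * (n - k) for k in range(1, n - 1))
ratio_left = left / n**3      # close to 4/3
ratio_right = right / n**3    # close to 2
```

For \(n = 2000\) the ratios differ from \(4/3\) and \(2\) only by lower-order terms of size \(O(1/n)\).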