- 1 Introduction
- 1.1 What makes eigenvalues interesting?
- 1.2 Example 1: The vibrating string
- 1.2.1 Problem setting
- 1.2.2 The method of separation of variables
- 1.3.3 Global functions
- 1.3.4 A numerical comparison
- 1.4 Example 2: The heat equation
- 1.5 Example 3: The wave equation
- 1.6 The 2D Laplace eigenvalue problem
- 1.6.3 A numerical example
- 1.7 Cavity resonances in particle accelerators
- 1.8 Spectral clustering
- 1.8.1 The graph Laplacian
- 1.8.2 Spectral clustering
- 1.8.3 Normalized graph Laplacians
- 1.9 Other sources of eigenvalue problems
- Bibliography
- 2 Basics
- 2.1 Notation
- 2.2 Statement of the problem
- 2.3 Similarity transformations
- 2.4 Schur decomposition
- 2.5 The real Schur decomposition
- 2.6 Normal matrices
- 2.7 Hermitian matrices
- 2.8 Cholesky factorization
- 2.9 The singular value decomposition (SVD)
- 2.10 Projections
- 2.11 Angles between vectors and subspaces
- Bibliography
- 3 The QR Algorithm
- 3.1 The basic QR algorithm
- 3.1.1 Numerical experiments
- 3.2 The Hessenberg QR algorithm
- 3.2.1 A numerical experiment
- 3.2.2 Complexity
- 3.3 The Householder reduction to Hessenberg form
- 3.3.2 Reduction to Hessenberg form
- 3.4 Improving the convergence of the QR algorithm
- 3.4.1 A numerical example
- 3.4.2 QR algorithm with shifts
- 3.4.3 A numerical example
- 3.5 The double shift QR algorithm
- 3.5.1 A numerical example
- 3.5.2 The complexity
- 3.6 The symmetric tridiagonal QR algorithm
- 3.6.1 Reduction to tridiagonal form
- 3.6.2 The tridiagonal QR algorithm
- 3.7 Research
- 3.8 Summary
- Bibliography
- 4.1 The divide and conquer idea
- 4.2 Partitioning the tridiagonal matrix
- 4.3 Solving the small systems
- 4.4 Deflation
- 4.4.1 Numerical examples
- 4.6 Solving the secular equation
- 4.7 A first algorithm
- 4.7.1 A numerical example
- 4.8 The algorithm of Gu and Eisenstat
- 4.8.1 A numerical example [continued]
- Bibliography
- 5 LAPACK and the BLAS
- 5.1 LAPACK
- 5.2 BLAS
- 5.2.1 Typical performance numbers for the BLAS
- 5.3 Blocking
- 5.4 LAPACK solvers for the symmetric eigenproblems
- 5.6 An example of a LAPACK routine
- Bibliography
- 6 Vector iteration (power method)
- 6.1 Simple vector iteration
- 6.2 Convergence analysis
- 6.3 A numerical example
- 6.4 The symmetric case
- 6.5 Inverse vector iteration
- 6.6 The generalized eigenvalue problem
- 6.7 Computing higher eigenvalues
- 6.8 Rayleigh quotient iteration
- 6.8.1 A numerical example
- Bibliography
- 7 Simultaneous vector or subspace iterations
- 7.1 Basic subspace iteration
- 7.2 Convergence of basic subspace iteration
- 7.3 Accelerating subspace iteration
- 7.4 Relation between subspace iteration and QR algorithm
- 7.5 Addendum
- Bibliography
- 8 Krylov subspaces
- 8.1 Introduction
- 8.3 Polynomial representation of Krylov subspaces
- 8.4 Error bounds of Saad
- Bibliography
- 9 Arnoldi and Lanczos algorithms
- 9.2 Arnoldi algorithm with explicit restarts
- 9.3 The Lanczos basis
- 9.4 The Lanczos process as an iterative method
- 9.5 An error analysis of the unmodified Lanczos algorithm
- 9.6 Partial reorthogonalization
- 9.7 Block Lanczos
- 9.8 External selective reorthogonalization
- Bibliography
- 10 Restarting Arnoldi and Lanczos algorithms
- 10.2 Implicit restart
- 10.3 Convergence criterion
- 10.4 The generalized eigenvalue problem
- 10.5 A numerical example
- 10.6 Another numerical example
- 10.7 The Lanczos algorithm with thick restarts
- 10.8 Krylov–Schur algorithm
- 10.9 The rational Krylov space method
- Bibliography
- 11 The Jacobi-Davidson Method
- 11.1 The Davidson algorithm
- 11.2 The Jacobi orthogonal component correction
- 11.2.1 Restarts
- 11.2.2 The computation of several eigenvalues
- 11.2.3 Spectral shifts
- 11.3 The generalized Hermitian eigenvalue problem
- 11.4 A numerical example
- 11.6 Harmonic Ritz values and vectors
- 11.7 Refined Ritz vectors
- 11.8 The generalized Schur decomposition
- 11.9.1 Restart
- 11.9.3 Algorithm
- Bibliography
- 12 Rayleigh quotient and trace minimization
- 12.1 Introduction
- 12.2 The method of steepest descent
- 12.3 The conjugate gradient algorithm
- 12.4 Locally optimal PCG (LOPCG)
- 12.5 The block Rayleigh quotient minimization algorithm (BRQMIN)
- 12.7 A numerical example
- 12.8 Trace minimization
- Bibliography
CHAPTER 5. LAPACK AND THE BLAS
dsyr2k. Directly after the loop there is a call to the 'unblocked dsytrd', named dsytd2, to deal with the first/last few (< NB) rows/columns of the matrix. The excerpt below concerns the situation when the lower triangle of the matrix A is stored; in that routine the loop in question looks very much like the formulae we derived.
      ELSE
*
*        Reduce the lower triangle of A
*
         DO 20 I = 1, N - 1
*
*           Generate elementary reflector H(i) = I - tau * v * v'
*           to annihilate A(i+2:n,i)
*
            CALL DLARFG( N-I, A( I+1, I ), A( MIN( I+2, N ), I ), 1,
     $                   TAUI )
            E( I ) = A( I+1, I )
*
            IF( TAUI.NE.ZERO ) THEN
*
*              Apply H(i) from both sides to A(i+1:n,i+1:n)
*
               A( I+1, I ) = ONE
*
*              Compute  x := tau * A * v  storing y in TAU(i:n-1)
*
               CALL DSYMV( UPLO, N-I, TAUI, A( I+1, I+1 ), LDA,
     $                     A( I+1, I ), 1, ZERO, TAU( I ), 1 )
*
*              Compute  w := x - 1/2 * tau * (x'*v) * v
*
               ALPHA = -HALF*TAUI*DDOT( N-I, TAU( I ), 1, A( I+1, I ),
     $                 1 )
               CALL DAXPY( N-I, ALPHA, A( I+1, I ), 1, TAU( I ), 1 )
*
*              Apply the transformation as a rank-2 update:
*                 A := A - v * w' - w * v'
*
               CALL DSYR2( UPLO, N-I, -ONE, A( I+1, I ), 1, TAU( I ), 1,
     $                     A( I+1, I+1 ), LDA )
*
               A( I+1, I ) = E( I )
            END IF
            D( I ) = A( I, I )
            TAU( I ) = TAUI
   20    CONTINUE
         D( N ) = A( N, N )
      END IF
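The loop above can be mirrored in a few lines of NumPy. The following is a minimal, illustrative sketch of the unblocked reduction (the role played by dsytd2), not the LAPACK routine itself: the function name `tridiagonalize` and the explicit accumulation of Q are conveniences added here, and it is written in Python rather than Fortran for brevity. Each step is annotated with the BLAS/LAPACK call it corresponds to.

```python
import numpy as np

def tridiagonalize(A):
    """Unblocked Householder reduction of a symmetric matrix to
    tridiagonal form, mirroring the lower-triangle branch above.
    Returns (d, e, Q) with Q.T @ A @ Q tridiagonal: diagonal d,
    off-diagonal e.  Illustrative sketch only."""
    A = A.astype(float).copy()
    n = A.shape[0]
    Q = np.eye(n)
    for i in range(n - 1):
        # Generate elementary reflector H(i) = I - tau * v * v'
        # annihilating A(i+2:n, i)                    (cf. DLARFG)
        x = A[i+1:, i].copy()
        alpha, normx = x[0], np.linalg.norm(x)
        if normx == 0.0:
            continue                    # column already in final form
        beta = -np.sign(alpha) * normx if alpha != 0.0 else -normx
        tau = (beta - alpha) / beta
        v = x / (alpha - beta)
        v[0] = 1.0
        # x := tau * A * v                            (cf. DSYMV)
        A22 = A[i+1:, i+1:]             # view: updates land in A
        xv = tau * (A22 @ v)
        # w := x - 1/2 * tau * (x'*v) * v             (cf. DDOT/DAXPY)
        w = xv - 0.5 * tau * (xv @ v) * v
        # Rank-2 update  A := A - v*w' - w*v'         (cf. DSYR2)
        A22 -= np.outer(v, w) + np.outer(w, v)
        # Store the annihilated column's surviving entry
        A[i+1, i] = A[i, i+1] = beta
        A[i+2:, i] = A[i, i+2:] = 0.0
        # Accumulate Q (done by a separate routine in LAPACK)
        H = np.eye(n)
        H[i+1:, i+1:] -= tau * np.outer(v, v)
        Q = Q @ H
    return np.diag(A).copy(), np.diag(A, -1).copy(), Q
```

The in-place update of `A22` is exactly the DSYR2 call; in the blocked routine these rank-2 updates are saved up over NB columns and then applied all at once as a single rank-2NB update with dsyr2k, which is where the level-3 BLAS performance comes from.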
Bibliography

[1] E. ANDERSON, Z. BAI, C. BISCHOF, J. DEMMEL, J. DONGARRA, J. DU CROZ, A. GREENBAUM, S. HAMMARLING, A. MCKENNEY, S. OSTROUCHOV, AND D. SORENSEN, LAPACK Users' Guide – Release 2.0, SIAM, Philadelphia, PA, 1994. (Software and guide are available from Netlib at URL http://www.netlib.org/lapack/).

[2] L. S. BLACKFORD, J. CHOI, A. CLEARY, E. D'AZEVEDO, J. DEMMEL, I. DHILLON, J. DONGARRA, S. HAMMARLING, G. HENRY, A. PETITET, K. STANLEY, D. WALKER, AND R. C. WHALEY, ScaLAPACK Users' Guide, SIAM, Philadelphia, PA, 1997. (Software and guide are available at URL http://www.netlib.org/scalapack/).

[3] J. J. DONGARRA, J. R. BUNCH, C. B. MOLER, AND G. W. STEWART, LINPACK Users' Guide, SIAM, Philadelphia, PA, 1979.

[4] J. J. DONGARRA, J. DU CROZ, I. DUFF, AND S. HAMMARLING, A proposal for a set of level 3 basic linear algebra subprograms, ACM SIGNUM Newsletter, 22 (1987).

[5] J. J. DONGARRA, J. DU CROZ, I. DUFF, AND S. HAMMARLING, A set of level 3 basic linear algebra subprograms, ACM Trans. Math. Softw., 16 (1990), pp. 1–17.

[6] J. J. DONGARRA, J. DU CROZ, S. HAMMARLING, AND R. J. HANSON, An extended set of Fortran basic linear algebra subprograms, ACM Trans. Math. Softw., 14 (1988), pp. 1–17.

[7] J. J. DONGARRA, J. DU CROZ, S. HAMMARLING, AND R. J. HANSON, An extended set of Fortran basic linear algebra subprograms: Model implementation and test programs, ACM Trans. Math. Softw., 14 (1988), pp. 18–32.

[8] B. S. GARBOW, J. M. BOYLE, J. J. DONGARRA, AND C. B. MOLER, Matrix Eigensystem Routines – EISPACK Guide Extension, Lecture Notes in Computer Science 51, Springer-Verlag, Berlin, 1977.

[9] C. LAWSON, R. HANSON, D. KINCAID, AND F. KROGH, Basic linear algebra subprograms for Fortran usage, ACM Trans. Math. Softw., 5 (1979), pp. 308–325.

[10] B. T. SMITH, J. M. BOYLE, J. J. DONGARRA, B. S. GARBOW, Y. IKEBE, V. C. KLEMA, AND C. B. MOLER, Matrix Eigensystem Routines – EISPACK Guide, Lecture Notes in Computer Science 6, Springer-Verlag, Berlin, 2nd ed., 1976.