
Linear algebra
<eigs> - A few eigenvalues, using ARPACK.
EIGS Find a few eigenvalues and eigenvectors of a matrix using ARPACK
D = EIGS(A) returns a vector of A's 6 largest magnitude eigenvalues.
A must be square and should be large and sparse.
[V,D] = EIGS(A) returns a diagonal matrix D of A's 6 largest magnitude
eigenvalues and a matrix V whose columns are the corresponding
eigenvectors.
[V,D,FLAG] = EIGS(A) also returns a convergence flag. If FLAG is 0 then
all the eigenvalues converged; otherwise not all converged.
EIGS(A,B) solves the generalized eigenvalue problem A*V == B*V*D. B must be
the same size as A. EIGS(A,[],...) indicates the standard eigenvalue problem
A*V == V*D.
EIGS(A,K) and EIGS(A,B,K) return the K largest magnitude eigenvalues.
EIGS(A,K,SIGMA) and EIGS(A,B,K,SIGMA) return K eigenvalues. If SIGMA is:
'LM' or 'SM' - Largest or Smallest Magnitude
For real symmetric problems, SIGMA may also be:
'LA' or 'SA' - Largest or Smallest Algebraic
'BE' - Both Ends, one more from high end if K is odd
For nonsymmetric and complex problems, SIGMA may also be:
'LR' or 'SR' - Largest or Smallest Real part
'LI' or 'SI' - Largest or Smallest Imaginary part
If sigma is a real or complex scalar including 0, eigs finds the
eigenvalues closest to SIGMA.
EIGS(A,K,SIGMA,OPTS) and EIGS(A,B,K,SIGMA,OPTS) specify options:
OPTS.issym: symmetry of A or A-SIGMA*B represented by AFUN [{false} | true]
OPTS.isreal: complexity of A or A-SIGMA*B represented by AFUN [false | {true}]
OPTS.tol: convergence: Ritz estimate residual <= tol*NORM(A) [scalar | {eps}]
OPTS.maxit: maximum number of iterations [integer | {300}]
OPTS.p: number of Lanczos vectors: K+1<p<=N [integer | {2K}]
OPTS.v0: starting vector [N-by-1 vector | {randomly generated}]
OPTS.disp: diagnostic information display level [{0} | 1 | 2]
OPTS.cholB: B is actually its Cholesky factor CHOL(B) [{false} | true]
OPTS.permB: sparse B is actually CHOL(B(permB,permB)) [permB | {1:N}]
Use CHOL(B) instead of B when SIGMA is a string other than 'SM'.
EIGS(AFUN,N) accepts the function AFUN instead of the matrix A. AFUN is
a function handle and Y = AFUN(X) should return
A*X if SIGMA is unspecified, or a string other than 'SM'
A\X if SIGMA is 0 or 'SM'
(A-SIGMA*I)\X if SIGMA is a nonzero scalar (standard problem)
(A-SIGMA*B)\X if SIGMA is a nonzero scalar (generalized problem)
N is the size of A. The matrix A, A-SIGMA*I or A-SIGMA*B represented by
AFUN is assumed to be real and nonsymmetric unless specified otherwise
by OPTS.isreal and OPTS.issym. In all these EIGS syntaxes, EIGS(A,...)
may be replaced by EIGS(AFUN,N,...).
Example:
A = delsq(numgrid('C',15)); d1 = eigs(A,5,'SM');
Equivalently, if dnRk is the following one-line function:
%----------------------------%
function y = dnRk(x,R,k)
y = (delsq(numgrid(R,k))) \ x;
%----------------------------%
n = size(A,1); opts.issym = 1;
d2 = eigs(@(x)dnRk(x,'C',15),n,5,'SM',opts);
See also eig, svds, arpackc, function_handle.
Overloaded methods:
codistributed/eigs
Reference page in Help browser
doc eigs
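As an illustrative sketch of the OPTS structure described above (the test matrix comes from MATLAB's delsq/numgrid utilities; the tolerance and iteration values are arbitrary choices, not recommendations):

```matlab
% Sketch: a few smallest-magnitude eigenvalues of a sparse symmetric
% matrix, with explicit convergence options and a flag check.
A = delsq(numgrid('S',30));        % sparse 2-D Laplacian (symmetric)
opts.tol   = 1e-8;                 % Ritz residual tolerance (illustrative)
opts.maxit = 500;                  % iteration cap (illustrative)
opts.disp  = 0;                    % suppress diagnostic output
[V,D,flag] = eigs(A,4,'SM',opts);  % 4 eigenvalues closest to 0
if flag ~= 0
    warning('Not all eigenvalues converged.');
end
```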
<svds> - A few singular values, using eigs.
SVDS Find a few singular values and vectors.
If A is M-by-N, SVDS(A,...) manipulates a few eigenvalues and vectors
returned by EIGS(B,...), where B = [SPARSE(M,M) A; A' SPARSE(N,N)],
to find a few singular values and vectors of A. The positive
eigenvalues of the symmetric matrix B are the same as the singular
values of A.
S = SVDS(A) returns the 6 largest singular values of A.
S = SVDS(A,K) computes the K largest singular values of A.
S = SVDS(A,K,SIGMA) computes the K singular values closest to the
scalar shift SIGMA. For example, S = SVDS(A,K,0) computes the K
smallest singular values.
S = SVDS(A,K,'L') computes the K largest singular values (the default).
S = SVDS(A,K,SIGMA,OPTIONS) sets some parameters (see EIGS):
Field name       Parameter                                     Default
OPTIONS.tol      Convergence tolerance:                        1e-10
                   NORM(A*V-U*S,1) <= tol * NORM(A,1).
OPTIONS.maxit    Maximum number of iterations.                 300
OPTIONS.disp     Number of values displayed each iteration.    0
[U,S,V] = SVDS(A,...) computes the singular vectors as well.
If A is M-by-N and K singular values are computed, then U is M-by-K
with orthonormal columns, S is K-by-K diagonal, and V is N-by-K with
orthonormal columns.
[U,S,V,FLAG] = SVDS(A,...) also returns a convergence flag.
If EIGS converged then NORM(A*V-U*S,1) <= TOL * NORM(A,1) and
FLAG is 0. If EIGS did not converge, then FLAG is 1.
Note: SVDS is best used to find a few singular values of a large,
sparse matrix. To find all the singular values of such a matrix,
SVD(FULL(A)) will usually perform better than SVDS(A,MIN(SIZE(A))).
Example:
load west0479
sf = svd(full(west0479))
sl = svds(west0479,10)
ss = svds(west0479,10,0)
s2 = svds(west0479,10,2)
sl will be a vector of the 10 largest singular values, ss will be a
vector of the 10 smallest singular values, and s2 will be a vector
of the 10 singular values of west0479 which are closest to 2.
See also svd, eigs.
Reference page in Help browser
doc svds
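A minimal sketch of the four-output form, using a random sparse test matrix and checking the residual against the convergence criterion quoted above (the matrix size and density are arbitrary):

```matlab
% Sketch: a few largest singular triplets of a sparse matrix,
% with the residual from the documented convergence test.
A = sprand(1000,800,0.01);          % random sparse test matrix
[U,S,V,flag] = svds(A,6);           % 6 largest singular triplets
res = norm(A*V - U*S,1);            % convergence test uses res <= tol*norm(A,1)
if flag == 0
    fprintf('converged, residual %g\n', res);
end
```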
<ilu> - Incomplete LU factorization.
ILU Sparse Incomplete LU factorization
The factors given by this factorization may be useful as
preconditioners for a system of linear equations being solved by
iterative methods such as BICG (BiConjugate Gradients) and GMRES
(Generalized Minimum Residual Method).
ILU(A,SETUP) performs the incomplete LU factorization of A. SETUP is
a structure with up to five fields:
type --- type of factorization
droptol --- the drop tolerance of incomplete LU
milu --- modified incomplete LU
udiag --- replace zeros on the diagonal of U
thresh --- the pivot threshold
type may be 'nofill', which is the ILU factorization with zero level of
fill-in, known as ILU(0); 'crout', which is the Crout version of ILU,
known as ILUC; or 'ilutp', which is the ILU factorization with
threshold and pivoting. If type is not specified, 'nofill' is used by
default. Pivoting is performed only with type 'ilutp'; it is never
performed with type 'nofill' or type 'crout'.
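A sketch of how the SETUP structure and the resulting factors might feed an iterative solver such as GMRES (the test matrix follows a common gallery construction; the drop tolerance and solver tolerances are illustrative):

```matlab
% Sketch: ILU with threshold and pivoting used as a GMRES preconditioner.
A = gallery('neumann',1600) + speye(1600);  % sparse test matrix
b = ones(1600,1);
setup.type    = 'ilutp';    % ILU with threshold and pivoting
setup.droptol = 1e-5;       % drop tolerance (illustrative)
[L,U] = ilu(A,setup);
x = gmres(A,b,[],1e-10,100,L,U);  % L,U supplied as the preconditioner
```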
droptol is a non-negative scalar used as the drop tolerance: all
entries that are smaller in magnitude than the local drop tolerance
(droptol times the norm of the corresponding column of A for a column,
and droptol times the norm of the corresponding row of A for a row)
are "dropped" from L or U. The only exception to this dropping rule is
the diagonal of the upper triangular factor U, which is never dropped.
Note that entries of the lower triangular factor L are tested before
being scaled by the pivot. Setting droptol = 0 produces the complete
LU factorization, which is the default.
milu stands for modified incomplete LU factorization. Its value can
be 'row' (row-sum), 'col' (column-sum), or 'off'. When milu is equal
to 'row', the diagonal element of the upper triangular factor U is
compensated in such a way as to preserve row sums. That is, the
product A*e is equal to L*U*e, where e is the vector of ones. When
milu is equal to 'col', the diagonal of the upper triangular factor U