
values, sqrt(diag(C'*C)./diag(S'*S)).

The nonzero elements of S are always on its main diagonal. If

m >= p the nonzero elements of C are also on its main diagonal.

But if m < p, the nonzero diagonal of C is diag(C,p-m). This

allows the diagonal elements to be ordered so that the generalized

singular values are nondecreasing.

GSVD(A,B,0), with three input arguments and either m or n >= p,

produces the "economy-sized" decomposition where the resulting

U and V have at most p columns, and C and S have at most p rows.

The generalized singular values are diag(C)./diag(S).

When I = eye(size(A)), the generalized singular values, gsvd(A,I),

are equal to the ordinary singular values, svd(A), but they are

sorted in the opposite order. Their reciprocals are gsvd(I,A).
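The relationship above can be checked numerically. As an illustrative sketch in Python (not part of the original help), the generalized singular values of (A,B) are the square roots of the generalized eigenvalues of the pencil (A'*A, B'*B) when B'*B is positive definite; with B = I they reduce to the ordinary singular values of A:

```python
import numpy as np
from scipy.linalg import eigh, svdvals

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
B = np.eye(3)

# Generalized singular values of (A, B): square roots of the generalized
# eigenvalues of the pencil (A'*A, B'*B), valid when B'*B is positive
# definite. clip() guards against tiny negative rounding errors.
w = eigh(A.T @ A, B.T @ B, eigvals_only=True)
gsv = np.sqrt(np.clip(w, 0, None))

# With B = I these coincide with the ordinary singular values of A.
print(np.allclose(np.sort(gsv), np.sort(svdvals(A))))  # True
```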

In this formulation of the GSVD, no assumptions are made about the

individual ranks of A or B. The matrix X has full rank if and only

if the matrix [A; B] has full rank. In fact, svd(X) and cond(X) are

equal to svd([A; B]) and cond([A; B]). Other formulations, e.g.

G. Golub and C. Van Loan, "Matrix Computations", require that null(A)

and null(B) do not overlap and replace X by inv(X) or inv(X').

Note, however, that when null(A) and null(B) do overlap, the nonzero

elements of C and S are not uniquely determined.

Class support for inputs A,B:

float: double, single

See also svd.

Reference page in Help browser

doc gsvd

<eigs> - A few eigenvalues.

EIGS Find a few eigenvalues and eigenvectors of a matrix using ARPACK

D = EIGS(A) returns a vector of A's 6 largest magnitude eigenvalues.

A must be square and should be large and sparse.

[V,D] = EIGS(A) returns a diagonal matrix D of A's 6 largest magnitude

eigenvalues and a matrix V whose columns are the corresponding

eigenvectors.

[V,D,FLAG] = EIGS(A) also returns a convergence flag. If FLAG is 0 then

all the eigenvalues converged; otherwise not all converged.

EIGS(A,B) solves the generalized eigenvalue problem A*V == B*V*D. B must be

the same size as A. EIGS(A,[],...) indicates the standard eigenvalue problem

A*V == V*D.

EIGS(A,K) and EIGS(A,B,K) return the K largest magnitude eigenvalues.

EIGS(A,K,SIGMA) and EIGS(A,B,K,SIGMA) return K eigenvalues. If SIGMA is:

'LM' or 'SM' - Largest or Smallest Magnitude

For real symmetric problems, SIGMA may also be:

'LA' or 'SA' - Largest or Smallest Algebraic

'BE' - Both Ends, one more from high end if K is odd

For nonsymmetric and complex problems, SIGMA may also be:

'LR' or 'SR' - Largest or Smallest Real part

'LI' or 'SI' - Largest or Smallest Imaginary part

If sigma is a real or complex scalar including 0, eigs finds the

eigenvalues closest to SIGMA.

EIGS(A,K,SIGMA,OPTS) and EIGS(A,B,K,SIGMA,OPTS) specify options:

OPTS.issym: symmetry of A or A-SIGMA*B represented by AFUN [{false} | true]

OPTS.isreal: complexity of A or A-SIGMA*B represented by AFUN [false | {true}]

OPTS.tol: convergence: Ritz estimate residual <= tol*NORM(A) [scalar | {eps}]

OPTS.maxit: maximum number of iterations [integer | {300}]

OPTS.p: number of Lanczos vectors: K+1<p<=N [integer | {2K}]

OPTS.v0: starting vector [N-by-1 vector | {randomly generated}]

OPTS.disp: diagnostic information display level [{0} | 1 | 2]

OPTS.cholB: B is actually its Cholesky factor CHOL(B) [{false} | true]

OPTS.permB: sparse B is actually CHOL(B(permB,permB)) [permB | {1:N}]

Use CHOL(B) instead of B when SIGMA is a string other than 'SM'.

EIGS(AFUN,N) accepts the function AFUN instead of the matrix A. AFUN is

a function handle and Y = AFUN(X) should return

A*X if SIGMA is unspecified, or a string other than 'SM'

A\X if SIGMA is 0 or 'SM'

(A-SIGMA*I)\X if SIGMA is a nonzero scalar (standard problem)

(A-SIGMA*B)\X if SIGMA is a nonzero scalar (generalized problem)

N is the size of A. The matrix A, A-SIGMA*I or A-SIGMA*B represented by

AFUN is assumed to be real and nonsymmetric unless specified otherwise

by OPTS.isreal and OPTS.issym. In all these EIGS syntaxes, EIGS(A,...)

may be replaced by EIGS(AFUN,N,...).
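SciPy's eigs offers the same matrix-free interface through a LinearOperator, which supplies only the matrix-vector product Y = A*X. A sketch (a Python analogue, not from the MATLAB help), using a simple sparse tridiagonal matrix:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, eigs

# A stand-in sparse matrix; the operator below exposes only A @ x,
# analogous to passing AFUN instead of A.
n = 50
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
op = LinearOperator((n, n), matvec=lambda x: A @ x, dtype=float)

v0 = np.ones(n)  # fixed starting vector, so both runs are reproducible
d1 = np.sort(eigs(A,  k=4, v0=v0, return_eigenvectors=False).real)
d2 = np.sort(eigs(op, k=4, v0=v0, return_eigenvectors=False).real)
print(np.allclose(d1, d2))  # True
```

Explicit matrix and operator give the same largest-magnitude eigenvalues; the operator form is what you use when A is too expensive to store.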

Example:

A = delsq(numgrid('C',15)); d1 = eigs(A,5,'SM');

Equivalently, if dnRk is the following one-line function:

%----------------------------%

function y = dnRk(x,R,k)

y = (delsq(numgrid(R,k))) \ x;

%----------------------------%

n = size(A,1); opts.issym = 1;

d2 = eigs(@(x)dnRk(x,'C',15),n,5,'SM',opts);
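An analogous smallest-magnitude computation in Python (a sketch with an assumed 1-D Laplacian standing in for delsq(numgrid('C',15)); sigma=0 triggers shift-invert, the counterpart of 'SM'):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

# Sparse 1-D Laplacian tridiag(-1, 2, -1).
n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

# Five eigenvalues closest to 0 via shift-invert about sigma=0,
# the analogue of d1 = eigs(A,5,'SM') in MATLAB.
vals = np.sort(eigs(A, k=5, sigma=0, return_eigenvectors=False).real)

# This matrix has known eigenvalues 2 - 2*cos(k*pi/(n+1)).
exact = 2 - 2 * np.cos(np.arange(1, 6) * np.pi / (n + 1))
print(np.allclose(vals, exact))  # True
```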

See also eig, svds, arpackc, function_handle.

Overloaded methods:

codistributed/eigs

Reference page in Help browser

doc eigs

<svds> - A few singular values.

SVDS Find a few singular values and vectors.

If A is M-by-N, SVDS(A,...) manipulates a few eigenvalues and vectors

returned by EIGS(B,...), where B = [SPARSE(M,M) A; A' SPARSE(N,N)],

to find a few singular values and vectors of A. The positive

eigenvalues of the symmetric matrix B are the same as the singular

values of A.
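The augmented-matrix identity is easy to verify directly. A small dense sketch in Python (not part of the original help):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
m, n = A.shape

# The symmetric augmented matrix B = [0 A; A' 0] used by SVDS.
B = np.block([[np.zeros((m, m)), A], [A.T, np.zeros((n, n))]])

# Its eigenvalues come in +/- pairs (plus zeros when m != n); the
# positive ones are exactly the singular values of A.
evals = np.linalg.eigvalsh(B)
pos = np.sort(evals[evals > 1e-10])[::-1]
print(np.allclose(pos, np.linalg.svd(A, compute_uv=False)))  # True
```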

S = SVDS(A) returns the 6 largest singular values of A.

S = SVDS(A,K) computes the K largest singular values of A.

S = SVDS(A,K,SIGMA) computes the K singular values closest to the

scalar shift SIGMA. For example, S = SVDS(A,K,0) computes the K

smallest singular values.

S = SVDS(A,K,'L') computes the K largest singular values (the default).

S = SVDS(A,K,SIGMA,OPTIONS) sets some parameters (see EIGS):

Field name      Parameter                                    Default

OPTIONS.tol     Convergence tolerance:                       1e-10
                NORM(A*V-U*S,1) <= tol * NORM(A,1).

OPTIONS.maxit   Maximum number of iterations.                300

OPTIONS.disp    Number of values displayed each iteration.   0

[U,S,V] = SVDS(A,...) computes the singular vectors as well.

If A is M-by-N and K singular values are computed, then U is M-by-K

with orthonormal columns, S is K-by-K diagonal, and V is N-by-K with

orthonormal columns.

[U,S,V,FLAG] = SVDS(A,...) also returns a convergence flag.

If EIGS converged then NORM(A*V-U*S,1) <= TOL * NORM(A,1) and

FLAG is 0. If EIGS did not converge, then FLAG is 1.

Note: SVDS is best used to find a few singular values of a large,

sparse matrix. To find all the singular values of such a matrix,

SVD(FULL(A)) will usually perform better than SVDS(A,MIN(SIZE(A))).

Example:

load west0479

sf = svd(full(west0479))

sl = svds(west0479,10)

ss = svds(west0479,10,0)

s2 = svds(west0479,10,2)

sl will be a vector of the 10 largest singular values, ss will be a

vector of the 10 smallest singular values, and s2 will be a vector

of the 10 singular values of west0479 which are closest to 2.

See also svd, eigs.

Reference page in Help browser

doc svds

<poly> - Characteristic polynomial.

POLY Convert roots to polynomial.

POLY(A), when A is an N by N matrix, is a row vector with

N+1 elements which are the coefficients of the

characteristic polynomial, DET(lambda*EYE(SIZE(A)) - A) .

POLY(V), when V is a vector, is a vector whose elements are

the coefficients of the polynomial whose roots are the

elements of V . For vectors, ROOTS and POLY are inverse

functions of each other, up to ordering, scaling, and

roundoff error.

ROOTS(POLY(1:20)) generates Wilkinson's famous example.
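numpy.poly and numpy.roots mirror POLY and ROOTS, so both the inverse relationship and the Wilkinson example can be reproduced in Python (a sketch, not part of the original help):

```python
import numpy as np

# roots and poly are inverses up to ordering and roundoff.
v = np.array([1.0, 2.0, 3.0])
p = np.poly(v)                  # coefficients of (x-1)(x-2)(x-3)
print(np.allclose(np.sort(np.roots(p)), v))  # True

# Wilkinson's example: recovering 1..20 from poly(1:20) is badly
# conditioned, so the round trip shows error far above roundoff.
w = np.sort(np.roots(np.poly(np.arange(1.0, 21.0))).real)
err = np.max(np.abs(w - np.arange(1.0, 21.0)))
print(err > 1e-8)  # True
```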

Class support for inputs A,V:

float: double, single

See also roots, conv, residue, polyval.

Overloaded methods:

sym/poly

Reference page in Help browser

doc poly

<polyeig> - Polynomial eigenvalue problem.

POLYEIG Polynomial eigenvalue problem.

[X,E] = POLYEIG(A0,A1,..,Ap) solves the polynomial eigenvalue problem

of degree p:

(A0 + lambda*A1 + ... + lambda^p*Ap)*x = 0.

The input is p+1 square matrices, A0, A1, ..., Ap, all of the same

order, n. The output is an n-by-n*p matrix, X, whose columns

are the eigenvectors, and a vector of length n*p, E, whose

elements are the eigenvalues.

for j = 1:n*p

lambda = E(j)

x = X(:,j)

(A0 + lambda*A1 + ... + lambda^p*Ap)*x is approximately 0.

end
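The standard way such solvers work is companion linearization: the degree-p problem becomes an n*p-by-n*p generalized eigenproblem. A Python sketch for the quadratic case p = 2 (an illustration of the technique, not MATLAB's actual implementation), including the residual check from the loop above:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(2)
n = 3
A0, A1, A2 = (rng.standard_normal((n, n)) for _ in range(3))

# Companion linearization of (A0 + lam*A1 + lam^2*A2)*x = 0:
# the pencil (L, M) below has eigenvectors [x; lam*x].
I, Z = np.eye(n), np.zeros((n, n))
L = np.block([[Z, I], [-A0, -A1]])
M = np.block([[I, Z], [Z, A2]])
lams, W = eig(L, M)

# Verify each finite eigenpair: x is the top half of the pencil vector.
ok = True
for lam, w in zip(lams, W.T):
    if np.isfinite(lam):
        x = w[:n]
        r = (A0 + lam * A1 + lam**2 * A2) @ x
        ok = ok and np.linalg.norm(r) < 1e-8 * (1 + abs(lam)) ** 2
print(ok)  # True
```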

E = POLYEIG(A0,A1,..,Ap) is a vector of length n*p whose

elements are the eigenvalues of the polynomial eigenvalue problem.

Special cases:

p = 0, polyeig(A), the standard eigenvalue problem, eig(A).

p = 1, polyeig(A,B), the generalized eigenvalue problem, eig(A,-B).

n = 1, polyeig(a0,a1,..,ap), for scalars a0, ..., ap,

is the standard polynomial problem, roots([ap .. a1 a0]).

If both A0 and Ap are singular the problem is potentially ill-posed.

Theoretically, the solutions might not exist or might not be unique.

Computationally, the computed solutions may be inaccurate.

If one, but not both, of A0 and Ap is singular, the problem is well

posed, but some of the eigenvalues may be zero or "infinite".

[X,E,S] = POLYEIG(A0,A1,..,AP) also returns a P*N length vector S of

condition numbers for the eigenvalues. At least one of A0 and AP must

be nonsingular. Large condition numbers imply that the problem is

near one with multiple eigenvalues.

See also eig, cond, condeig.

Reference page in Help browser

doc polyeig

<condeig> - Condition number with respect to eigenvalues.

CONDEIG Condition number with respect to eigenvalues.

CONDEIG(A) is a vector of condition numbers for the eigenvalues

of A. These condition numbers are the reciprocals of the cosines

of the angles between the left and right eigenvectors.

[V,D,s] = CONDEIG(A) is equivalent to:

[V,D] = EIG(A); s = CONDEIG(A);

Large condition numbers imply that A is near a matrix with

multiple eigenvalues.
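This definition is short enough to spell out. A Python sketch of the same quantity (a hypothetical helper, not MATLAB's code), using SciPy's matched left/right eigenvector output:

```python
import numpy as np
from scipy.linalg import eig

# Each eigenvalue's condition number is 1/|y'*x|, the reciprocal of the
# cosine of the angle between its unit left eigenvector y and unit
# right eigenvector x.
def condeig(A):
    _, Y, X = eig(A, left=True, right=True)  # matched left/right pairs
    return 1.0 / np.abs(np.sum(Y.conj() * X, axis=0))

# A real symmetric matrix is normal, so every eigenvalue is perfectly
# conditioned: all condition numbers equal 1.
S = np.array([[2.0, 1.0], [1.0, 3.0]])
print(np.allclose(condeig(S), 1.0))  # True
```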

Class support for input A:

float: double, single

See also cond.

Reference page in Help browser

doc condeig

<hess> - Hessenberg form.

HESS Hessenberg form.

H = HESS(A) is the Hessenberg form of the matrix A.

The Hessenberg form of a matrix is zero below the first

subdiagonal and has the same eigenvalues as A. If the matrix is symmetric or Hermitian, the form is tridiagonal.
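Both defining properties can be checked with SciPy's hessenberg (a Python sketch, not part of the original help):

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))
H = hessenberg(A)

# Zero below the first subdiagonal ...
print(np.allclose(np.tril(H, -2), 0.0))  # True
# ... and the same eigenvalues as A (sorted, up to roundoff).
print(np.allclose(np.sort(np.linalg.eigvals(A)),
                  np.sort(np.linalg.eigvals(H))))  # True
```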
