

10 Eigenvalue problems and quadratic forms
In this chapter, we deal with an application of homogeneous systems of linear equations, namely eigenvalue problems. Moreover, we introduce so-called quadratic forms and investigate their sign. Quadratic forms play an important role when determining extreme points and values of functions of several variables, which are discussed in Chapter 11.
10.1 EIGENVALUES AND EIGENVECTORS
An eigenvalue problem can be defined as follows.
Definition 10.1 Let A be an n × n matrix. Then the scalar λ is called an eigenvalue of matrix A if there exists a non-trivial solution x ∈ Rn, x ≠ 0, of the matrix equation
Ax = λx.    (10.1)
The solution xT = (x1, x2, . . . , xn) ≠ (0, 0, . . . , 0) is called an eigenvector of A (associated with scalar λ).
Equation (10.1) is equivalent to a homogeneous linear system of equations. From the matrix equation
Ax = λx = λI x
we obtain
(A − λI)x = 0,    (10.2)
where I is the identity matrix of order n × n. Hence, system (10.2) includes n linear equations with n variables x1, x2, . . . , xn.
According to Definition 10.1, eigenvalue problems are defined only for square matrices A. In an eigenvalue problem, we look for all (real or complex) values λ such that the image of a non-zero vector x given by the linear mapping described by matrix A is a multiple λx of this vector x. It is worth noting that the value zero is possible as an eigenvalue, while the zero vector is not possible as an eigenvector. Although eigenvalue problems arise mainly in engineering
sciences, they also have some importance in economics. For example, as we show in the following two chapters, they are useful for deciding whether a function has an extreme point or for solving certain types of differential and difference equations.
The following theorem gives a necessary and sufficient condition for the existence of nontrivial (i.e. different from the zero vector) solutions of problem (10.1).
THEOREM 10.1 Problem (10.1) has a non-trivial solution x ≠ 0 if and only if the determinant of matrix A − λI is equal to zero, i.e. |A − λI| = 0.
The validity of the above theorem can easily be seen by taking into account that a homogeneous system (10.2) of linear equations has non-trivial solutions if and only if the rank of the coefficient matrix of the system (i.e. the rank of matrix A − λI ) is less than the number of variables. The latter condition is equivalent to the condition that the determinant of the coefficient matrix is equal to zero, which means that matrix A − λI has no inverse matrix.
We rewrite the determinant of matrix A − λI as a function of the variable λ. Letting P(λ) = |A − λI |, where A = (aij ) is a matrix of order n × n, we get the following equation in λ:
$$
P(\lambda) =
\begin{vmatrix}
a_{11} - \lambda & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} - \lambda & \cdots & a_{2n} \\
\vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn} - \lambda
\end{vmatrix}
= 0,
$$
which is known as the characteristic equation (or eigenvalue equation) of matrix A.
From the definition of a determinant, it follows that P(λ) is a polynomial in λ which has degree n for a matrix A of order n × n. The zeroes of this characteristic polynomial
$P(\lambda) = (-1)^n \lambda^n + b_{n-1}\lambda^{n-1} + \cdots + b_1 \lambda + b_0$
of degree n are the eigenvalues of matrix A. Thus, in order to find all eigenvalues of an n × n matrix A, we have to determine all zeroes of polynomial P(λ) of degree n (i.e. all roots of the characteristic equation P(λ) = 0). Here we often have to apply numerical methods (described in Chapter 4) to find them (approximately). In general, the eigenvalues of a real matrix A can be complex numbers (and also the eigenvectors may contain complex components). The following theorem describes a case when all eigenvalues of matrix A are real.
THEOREM 10.2 If matrix A of order n × n is a symmetric matrix (i.e. A = AT), then all eigenvalues of A are real numbers.
For each eigenvalue λi, i = 1, 2, . . . , n, we have to find the general solution of the homogeneous system of linear equations
(A − λiI)x = 0    (10.3)
in order to get the corresponding eigenvectors. Since the rank of matrix A − λiI is smaller than n, the solution of the corresponding system of equations is not uniquely determined, and for each eigenvalue λi, i = 1, 2, . . . , n, there is indeed a solution of system (10.3) in which not all variables are equal to zero.
We continue with some properties of the set of eigenvectors belonging to the same eigenvalue.
THEOREM 10.3 Let λ be an eigenvalue of multiplicity k (i.e. λ is k times a root of the characteristic equation P(λ) = 0) of a matrix A of order n × n. Then:
(1)The number of linearly independent eigenvectors associated with eigenvalue λ is at least one and at most k.
(2)If A is a symmetric matrix, then there exist k linearly independent eigenvectors associated with λ. The eigenvectors associated with eigenvalue λ, together with the zero vector, form a vector space (the eigenspace of λ).
As a consequence of Theorem 10.3, we mention that, if x1 and x2 are eigenvectors associated with eigenvalue λ, then vector sx1 + tx2 with s ∈ R and t ∈ R is also an eigenvector associated with λ, provided it is not the zero vector. It also follows from Theorem 10.3 that for an arbitrary square matrix A there always exists exactly one linearly independent eigenvector associated with an eigenvalue of multiplicity one.
THEOREM 10.4 Let A be a matrix of order n × n. Then:
(1)Eigenvectors associated with different eigenvalues of matrix A are linearly independent.
(2)If matrix A is symmetric, then eigenvectors associated with different eigenvalues are orthogonal.
Let us consider the following three examples to determine all eigenvalues and eigenvectors of a given matrix.
Example 10.1 We determine the eigenvalues and eigenvectors of matrix
$$A = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}.$$
The characteristic equation is given by
$$P(\lambda) = |A - \lambda I| = \begin{vmatrix} 1-\lambda & 2 \\ 2 & 1-\lambda \end{vmatrix} = (1-\lambda)(1-\lambda) - 4 = \lambda^2 - 2\lambda - 3 = 0$$
with the solutions
$$\lambda_1 = 1 + \sqrt{1+3} = 3 \qquad \text{and} \qquad \lambda_2 = 1 - \sqrt{1+3} = -1.$$
To determine the corresponding eigenvectors, we have to solve the matrix equation
(A − λI )x = 0 for λ = λ1 and λ = λ2. Thus, we get the following system for λ = λ1 = 3:
−2x1 + 2x2 = 0
2x1 − 2x2 = 0.
The second equation may be obtained from the first equation by multiplying by −1 (i.e. both row vectors of the left-hand side are linearly dependent) and can therefore be dropped. The
coefficient matrix of the above system has rank one and so we can choose one variable arbitrarily, say x2 = t, t ∈ R. This yields x1 = t, and each eigenvector associated with the eigenvalue λ1 = 3 can be described in the form
$$x^1 = t \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad t \in \mathbb{R}.$$
Analogously, for λ = λ2 = −1, we get the system:
2x1 + 2x2 = 0
2x1 + 2x2 = 0.
Again, we can drop one of the two identical equations and can choose one variable arbitrarily, say x2 = s, s ∈ R. Then we get x1 = −s. Thus, all eigenvectors associated with λ2 = −1 can be represented as
$$x^2 = s \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \qquad s \in \mathbb{R}.$$
This example illustrates Theorem 10.4. For any choice of s, t ∈ R with s ≠ 0 and t ≠ 0, the eigenvectors x^1 and x^2 are linearly independent and orthogonal (i.e. the scalar product of vectors x^1 and x^2 is equal to zero).
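These results can also be checked numerically. A minimal NumPy sketch (an illustration only) for the matrix of Example 10.1:

```python
import numpy as np

# Matrix from Example 10.1
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)   # columns of `eigenvectors` are eigenvectors
order = np.argsort(eigenvalues)[::-1]          # sort so that 3 is listed before -1
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

print(np.round(eigenvalues, 6))                # [ 3. -1.]
v1, v2 = eigenvectors[:, 0], eigenvectors[:, 1]
print(np.allclose(A @ v1, 3 * v1))             # True: A x = lambda x
print(np.allclose(v1 @ v2, 0.0))               # True: orthogonal, as Theorem 10.4 predicts
```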
Example 10.2 We determine all eigenvalues and eigenvectors of matrix
$$A = \begin{pmatrix} 0 & -1 & 1 \\ -7 & 0 & 5 \\ -5 & -2 & 5 \end{pmatrix}.$$
To find the eigenvalues, we consider the characteristic equation P(λ) = 0:
$$P(\lambda) = |A - \lambda I| =
\begin{vmatrix} -\lambda & -1 & 1 \\ -7 & -\lambda & 5 \\ -5 & -2 & 5-\lambda \end{vmatrix}
=
\begin{vmatrix} -\lambda & -1 & 0 \\ -7 & -\lambda & 5-\lambda \\ -5 & -2 & 3-\lambda \end{vmatrix}
= 0.$$
The above transformation is obtained by adding columns 2 and 3 to get the third element equal to zero in row 1. Expanding the latter determinant by row 1, we obtain
$$P(\lambda) = |A - \lambda I| = -\lambda \cdot
\begin{vmatrix} -\lambda & 5-\lambda \\ -2 & 3-\lambda \end{vmatrix}
+ 1 \cdot
\begin{vmatrix} -7 & 5-\lambda \\ -5 & 3-\lambda \end{vmatrix}$$
$$= \lambda^2(3-\lambda) - 2\lambda(5-\lambda) + \bigl[(-7)\cdot(3-\lambda) + 5(5-\lambda)\bigr]$$
$$= (3\lambda^2 - \lambda^3 - 10\lambda + 2\lambda^2) + (-21 + 7\lambda + 25 - 5\lambda)$$
$$= -\lambda^3 + 5\lambda^2 - 8\lambda + 4.$$
Considering now the characteristic equation P(λ) = 0, we try to find a first root and use Horner’s scheme (see Chapter 3.3.3) for the computation of the function value.
Checking λ1 = 1, we get
λ1 = 1 |  −1    5   −8    4
       |       −1    4   −4
       |  −1    4   −4    0
i.e. λ1 = 1 is a root of the characteristic equation P(λ) = 0. From Horner’s scheme (see last row), we obtain that dividing P(λ) by the linear factor λ − λ1 = λ − 1 gives the polynomial P2(λ) of degree two:
P2(λ) = −λ² + 4λ − 4.
Setting P2(λ) = 0, we obtain
λ2 = 2 + √(4 − 4) = 2 and λ3 = 2 − √(4 − 4) = 2,
i.e. λ2 = λ3 = 2 is an eigenvalue of multiplicity two.
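Horner's scheme is easy to automate. The following short Python sketch (an illustration only; `horner` is a hypothetical helper, not a library routine) evaluates P(λ) at a candidate root and returns the coefficients of the quotient polynomial:

```python
def horner(coeffs, x0):
    """Evaluate the polynomial with coefficients `coeffs` (highest power first)
    at x0 and return (value, coefficients of the quotient polynomial)."""
    row = []
    value = 0
    for c in coeffs:
        value = value * x0 + c
        row.append(value)
    # the last entry is P(x0); the preceding entries are the coefficients of P(x)/(x - x0)
    return row[-1], row[:-1]

value, quotient = horner([-1, 5, -8, 4], 1)
print(value)     # 0  -> lambda = 1 is a root
print(quotient)  # [-1, 4, -4]  -> P2(lambda) = -lambda^2 + 4*lambda - 4
```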
In order to determine the eigenvectors associated with λ1 = 1, we get the homogeneous system of linear equations
−x1 − x2 + x3 = 0
−7x1 − x2 + 5x3 = 0
−5x1 − 2x2 + 4x3 = 0.
Applying Gaussian elimination, we get the following tableaus:
Row |  x1   x2   x3 |  b | Operation
----+---------------+----+--------------------
 1  |  −1   −1    1 |  0 |
 2  |  −7   −1    5 |  0 |
 3  |  −5   −2    4 |  0 |
 4  |  −1   −1    1 |  0 | row 1
 5  |   0    6   −2 |  0 | row 2 − 7 row 1
 6  |   0    3   −1 |  0 | row 3 − 5 row 1
 7  |  −1   −1    1 |  0 | row 4
 8  |   0    6   −2 |  0 | row 5
 9  |   0    0    0 |  0 | row 6 − (1/2) row 5
Since the rank of the coefficient matrix is equal to two, we can choose one variable arbitrarily. Setting x3 = 3, we get x2 = 1 and x1 = 2 (here we set x3 = 3 in order to get integer solutions for the other two variables), i.e. each eigenvector associated with λ1 = 1 has the form
$$x^1 = s \begin{pmatrix} 2 \\ 1 \\ 3 \end{pmatrix}, \qquad s \in \mathbb{R}.$$
Considering the eigenvalue λ2 = λ3 = 2, we get the following system of linear equations:
−2x1 − x2 + x3 = 0
−7x1 − 2x2 + 5x3 = 0
−5x1 − 2x2 + 3x3 = 0.
After applying Gaussian elimination or pivoting, we find that the coefficient matrix of this system of linear equations has rank two. Hence we can choose one variable arbitrarily. If we choose x3 = 1, we get x2 = −1 and x1 = 1. Therefore, each eigenvector associated with eigenvalue λ2 = λ3 = 2 has the form
$$x^2 = t \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}, \qquad t \in \mathbb{R}.$$
In particular, for the latter eigenvalue of multiplicity two, there exists only one linearly independent eigenvector.
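This behaviour can be checked numerically as well. In the following NumPy sketch (an illustration only), the rank of A − 2I confirms that the eigenspace of the double eigenvalue has dimension one:

```python
import numpy as np

A = np.array([[ 0., -1.,  1.],
              [-7.,  0.,  5.],
              [-5., -2.,  5.]])

print(np.round(np.linalg.eigvals(A), 6))               # approximately 1, 2, 2 (up to rounding errors)

# rank(A - 2I) = 2, hence only 3 - 2 = 1 linearly independent eigenvector for lambda = 2
print(3 - np.linalg.matrix_rank(A - 2 * np.eye(3)))    # 1
```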
Example 10.3 We determine the eigenvalues and eigenvectors of matrix
$$A = \begin{pmatrix} -4 & -3 & 3 \\ 2 & 3 & -6 \\ -1 & -3 & 0 \end{pmatrix}.$$
The characteristic equation is given by
$$P(\lambda) = |A - \lambda I| =
\begin{vmatrix} -4-\lambda & -3 & 3 \\ 2 & 3-\lambda & -6 \\ -1 & -3 & -\lambda \end{vmatrix}
=
\begin{vmatrix} -4-\lambda & 0 & 3 \\ 2 & -3-\lambda & -6 \\ -1 & -3-\lambda & -\lambda \end{vmatrix}$$
$$=
\begin{vmatrix} -4-\lambda & 0 & 3 \\ 3 & 0 & -6+\lambda \\ -1 & -3-\lambda & -\lambda \end{vmatrix}
= (-3-\lambda) \cdot
\begin{vmatrix} -4-\lambda & 0 & 3 \\ 3 & 0 & -6+\lambda \\ -1 & 1 & -\lambda \end{vmatrix}$$
$$= -(-3-\lambda) \cdot
\begin{vmatrix} -4-\lambda & 3 \\ 3 & -6+\lambda \end{vmatrix}
= (3+\lambda)\cdot\bigl[(-4-\lambda)(-6+\lambda) - 9\bigr]$$
$$= (3 + \lambda)\cdot(-\lambda^2 + 2\lambda + 15) = 0.$$
In the transformations above, we have first added column 3 to column 2, and then we have added row 3 multiplied by −1 to row 2. In the next step, the term (−3 − λ) has been factored
out from the second column and finally, the resulting determinant has been expanded by column 2. From equation 3 + λ = 0, we obtain the first eigenvalue
λ1 = −3,
and from equation −λ² + 2λ + 15 = 0, we obtain the two eigenvalues
λ2 = −3 and λ3 = 5.
Next, we determine the maximal number of linearly independent eigenvectors for each eigenvalue. We first consider the eigenvalue λ1 = λ2 = −3 of multiplicity two and obtain the following system of linear equations:
−x1 − 3x2 + 3x3 = 0
2x1 + 6x2 − 6x3 = 0
−x1 − 3x2 + 3x3 = 0.
Since equations one and three coincide and since equation two corresponds to equation one multiplied by −2, the rank of the coefficient matrix of the above system is equal to one, and therefore we can choose two variables arbitrarily. Consequently, there exist two linearly independent eigenvectors associated with this eigenvalue. Using our knowledge about the general solution of homogeneous systems of linear equations, we get linearly independent solutions by choosing for the first vector x^1 the values x2 = 1, x3 = 0 and for the second vector x^2 the values x2 = 0, x3 = 1 (i.e. we have taken x2 and x3 as the variables that can be chosen arbitrarily). Then the remaining variable x1 is uniquely determined, and from the first equation of the above system we obtain x1 = −3 for x^1 and x1 = 3 for x^2. Therefore, the set of all eigenvectors associated with λ1 = λ2 = −3 is given by
$$\left\{ x \in \mathbb{R}^3 \;\middle|\; x = s\begin{pmatrix} -3 \\ 1 \\ 0 \end{pmatrix} + t\begin{pmatrix} 3 \\ 0 \\ 1 \end{pmatrix}, \; s \in \mathbb{R},\; t \in \mathbb{R} \right\}.$$
While in Example 10.2 only one linearly independent eigenvector was associated with the eigenvalue of multiplicity two, in this example there are two linearly independent eigenvectors associated with an eigenvalue of multiplicity two. This is the maximal possible number, since we know from Theorem 10.3 that at most k linearly independent eigenvectors are associated with an eigenvalue of multiplicity k. To finish this example, we still have to find the eigenvectors associated with λ3 = 5 by solving the following system of linear equations:
−9x1 − 3x2 + 3x3 = 0
2x1 − 2x2 − 6x3 = 0
−x1 − 3x2 − 5x3 = 0.
By applying Gaussian elimination or pivoting, we find that the coefficient matrix has rank two, and therefore one variable can be chosen arbitrarily. Choosing x3 = 1, we finally get
x2 = −2x3 = −2 and x1 = −3x2 − 5x3 = 1.
Therefore, an eigenvector associated with λ3 = 5 can be written in the form
$$x^3 = u \begin{pmatrix} 1 \\ -2 \\ 1 \end{pmatrix}, \qquad u \in \mathbb{R}.$$
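As before, a short NumPy sketch (an illustration only) confirms that the eigenspace of λ = −3 is two-dimensional:

```python
import numpy as np

A = np.array([[-4., -3.,  3.],
              [ 2.,  3., -6.],
              [-1., -3.,  0.]])

print(np.round(np.linalg.eigvals(A), 6))                # approximately -3, -3, 5

# rank(A + 3I) = 1, hence the eigenspace of lambda = -3 has dimension 3 - 1 = 2
print(3 - np.linalg.matrix_rank(A + 3 * np.eye(3)))     # 2
```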
In the next chapters, we show how eigenvalues can be used for solving certain optimization problems as well as differential and difference equations. The problem of determining eigenvalues and the corresponding eigenvectors often arises in economic problems dealing with processes of proportionate growth or decline. We demonstrate this by the following example.
Example 10.4 Let x_t^M be the number of men and x_t^W the number of women in some population at time t. The relationship between the populations at two successive times t and t + 1 has been found to be as follows:
x_{t+1}^M = 0.8 x_t^M + 0.4 x_t^W
x_{t+1}^W = 0.3 x_t^M + 0.9 x_t^W.
Letting x_t = (x_t^M, x_t^W)^T, we obtain the following relationship between the populations x_{t+1} and x_t at successive times:
$$\begin{pmatrix} x_{t+1}^M \\ x_{t+1}^W \end{pmatrix} = \begin{pmatrix} 0.8 & 0.4 \\ 0.3 & 0.9 \end{pmatrix} \begin{pmatrix} x_t^M \\ x_t^W \end{pmatrix}.$$
Moreover, we assume that the ratio of men and women is constant over time, i.e.
$$\frac{x_{t+1}^W}{x_{t+1}^M} = \frac{x_t^W}{x_t^M}, \qquad \text{i.e.} \qquad x_{t+1} = \lambda\, x_t.$$
Now, the question is: do there exist such values λ ∈ R+ and vectors x_t satisfying the above equations, i.e. can we find numbers λ and vectors x_t such that
Ax_t = λx_t?
To answer this question, we have to find the eigenvalues of matrix
$$A = \begin{pmatrix} 0.8 & 0.4 \\ 0.3 & 0.9 \end{pmatrix}$$
and then for an appropriate eigenvalue the corresponding eigenvector. We obtain the characteristic equation
$$P(\lambda) = |A - \lambda I| = \begin{vmatrix} 0.8-\lambda & 0.4 \\ 0.3 & 0.9-\lambda \end{vmatrix} = (0.8-\lambda)(0.9-\lambda) - 0.3 \cdot 0.4 = 0.$$
This yields
P(λ) = λ² − 1.7λ + 0.6 = 0
and the eigenvalues
λ1 = 0.85 + √(0.7225 − 0.6) = 0.85 + 0.35 = 1.2 and λ2 = 0.85 − 0.35 = 0.5.
Since we are looking for a proportionate growth in the population, only the eigenvalue greater than one is of interest, i.e. we have to consider λ1 = 1.2 and determine the corresponding eigenvector from the system
−0.4 x_t^M + 0.4 x_t^W = 0
0.3 x_t^M − 0.3 x_t^W = 0.
The coefficient matrix has rank one, so we can choose one variable arbitrarily and get the eigenvector
$$x_t = u \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad u \in \mathbb{R}.$$
In order to have a proportionate growth in the population, the initial population must consist of the same number of men and women, and the population grows by 20 per cent until the next time it is considered. This means that if initially at time t = 1, the population is given by vector x1 = (1, 000; 1, 000)T, then at time t = 2 the population is given by x2 = (1, 200; 1, 200)T, at time t = 3 the population is given by x3 = (1, 440; 1, 440)T, and so on.
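The proportionate growth can also be illustrated by simply iterating the population equation; a minimal Python sketch (an illustration only, using the initial population from above):

```python
import numpy as np

A = np.array([[0.8, 0.4],
              [0.3, 0.9]])

x = np.array([1000.0, 1000.0])   # equal numbers of men and women at t = 1
for t in range(1, 4):
    print(t, x)                   # (1000, 1000), (1200, 1200), (1440, 1440)
    x = A @ x                     # population in the next period grows by 20 per cent
```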
10.2 QUADRATIC FORMS AND THEIR SIGN
We start with the following definition.
Definition 10.2 If A = (aij ) is a matrix of order n × n and xT = (x1, x2, . . . , xn), then the term
Q(x) = xTAx    (10.4)
is called a quadratic form.
Writing equation (10.4) explicitly, we have:
$$Q(x) = Q(x_1, x_2, \ldots, x_n) = (x_1, x_2, \ldots, x_n)
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}$$
$$= a_{11}x_1x_1 + a_{12}x_1x_2 + \cdots + a_{1n}x_1x_n + a_{21}x_2x_1 + a_{22}x_2x_2 + \cdots + a_{2n}x_2x_n + \cdots + a_{n1}x_nx_1 + a_{n2}x_nx_2 + \cdots + a_{nn}x_nx_n$$
$$= \sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}\,x_i x_j.$$
THEOREM 10.5 Let A be a matrix of order n × n. Then the quadratic form xTAx can be written as a quadratic form xTA^s x with a symmetric matrix A^s of order n × n, i.e. we have xTAx = xTA^s x, where
$$A^s = \frac{1}{2}\cdot\bigl(A + A^T\bigr).$$
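As a small illustration of Theorem 10.5 (a NumPy sketch; the matrix and vector below are arbitrarily chosen example values), the symmetric matrix A^s yields the same values of the quadratic form as A:

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [0.0, 3.0]])          # an arbitrary non-symmetric matrix
A_s = 0.5 * (A + A.T)               # its symmetric part, as in Theorem 10.5

x = np.array([2.0, -1.0])
print(x @ A @ x, x @ A_s @ x)       # identical values of the quadratic form
```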
As a result of Theorem 10.5, we can restrict ourselves to the consideration of quadratic forms of symmetric matrices, where all eigenvalues are real numbers (see Theorem 10.2). In the following definition, the sign of a quadratic form xTAx is considered.
Definition 10.3 A square matrix A of order n × n and its associated quadratic form Q(x) are said to be
(1)positive definite if Q(x) = xTAx > 0 for all xT = (x1, x2, . . . , xn) ≠ (0, 0, . . . , 0);
(2)positive semi-definite if Q(x) = xTAx ≥ 0 for all x ∈ Rn;
(3)negative definite if Q(x) = xTAx < 0 for all xT = (x1, x2, . . . , xn) ≠ (0, 0, . . . , 0);
(4)negative semi-definite if Q(x) = xTAx ≤ 0 for all x ∈ Rn;
(5)indefinite if they are neither positive semi-definite nor negative semi-definite.
The following example illustrates Definition 10.3.
Example 10.5 Let
$$A = \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}.$$
We determine the sign of the quadratic form Q(x) = xTAx by applying Definition 10.3. Then
$$Q(x) = x^TAx = (x_1, x_2)\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = x_1(x_1 - x_2) + x_2(-x_1 + x_2)$$
$$= x_1(x_1 - x_2) - x_2(x_1 - x_2) = (x_1 - x_2)^2 \ge 0.$$
Therefore, matrix A is positive semi-definite. However, matrix A is not positive definite since there exist vectors xT = (x1, x2) ≠ (0, 0) such that Q(x) = xTAx = 0, namely all vectors with x1 = x2 ≠ 0.
The following theorem shows how we can decide by means of the eigenvalues of a symmetric matrix whether the matrix is positive or negative (semi-)definite.
THEOREM 10.6 Let A be a symmetric matrix of order n × n with the eigenvalues λ1, λ2, . . . , λn ∈ R. Then:
(1)A is positive definite if and only if all eigenvalues of A are positive (i.e. λi > 0 for i = 1, 2, . . . , n).
(2)A is positive semi-definite if and only if all eigenvalues of A are non-negative (i.e. λi ≥ 0 for i = 1, 2, . . . , n).
(3)A is negative definite if and only if all eigenvalues of A are negative (i.e. λi < 0 for i = 1, 2, . . . , n).
(4)A is negative semi-definite if and only if all eigenvalues of A are non-positive (i.e. λi ≤ 0 for i = 1, 2, . . . , n).
(5)A is indefinite if and only if A has at least two eigenvalues with opposite signs.
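Theorem 10.6 translates directly into a numerical test. The following Python sketch (an illustration; the helper `definiteness` is hypothetical, not a library routine) classifies a symmetric matrix by the signs of its eigenvalues:

```python
import numpy as np

def definiteness(A, tol=1e-12):
    """Classify a symmetric matrix via the signs of its eigenvalues (Theorem 10.6)."""
    ev = np.linalg.eigvalsh(A)          # eigenvalues of a symmetric matrix, all real
    if np.all(ev > tol):
        return "positive definite"
    if np.all(ev >= -tol):
        return "positive semi-definite"
    if np.all(ev < -tol):
        return "negative definite"
    if np.all(ev <= tol):
        return "negative semi-definite"
    return "indefinite"

print(definiteness(np.array([[1.0, -1.0], [-1.0, 1.0]])))   # positive semi-definite (Example 10.5)
```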
Example 10.6 Let us consider matrix A with
$$A = \begin{pmatrix} 0 & 2 \\ 1 & -1 \end{pmatrix}.$$
We determine the eigenvalues of A and obtain the characteristic equation
$$P(\lambda) = |A - \lambda I| = \begin{vmatrix} -\lambda & 2 \\ 1 & -1-\lambda \end{vmatrix} = 0.$$
This yields
$$-\lambda(-1-\lambda) - 2 = \lambda^2 + \lambda - 2 = 0.$$
The above quadratic equation has the solutions
$$\lambda_1 = -\frac{1}{2} + \sqrt{\frac{1}{4} + 2} = -\frac{1}{2} + \frac{3}{2} = 1 > 0 \qquad \text{and} \qquad \lambda_2 = -\frac{1}{2} - \sqrt{\frac{1}{4} + 2} = -\frac{1}{2} - \frac{3}{2} = -2 < 0.$$
Since both eigenvalues have opposite signs, matrix A is indefinite according to part (5) of Theorem 10.6.
Next, we present another criterion to decide whether a given matrix A is positive or negative definite. To apply this criterion, we have to investigate the sign of certain minors introduced in the following definition.
Definition 10.4 The leading principal minors of matrix A = (aij ) of order n × n are
the determinants
$$D_k = \begin{vmatrix}
a_{11} & a_{12} & \cdots & a_{1k} \\
a_{21} & a_{22} & \cdots & a_{2k} \\
\vdots & \vdots & & \vdots \\
a_{k1} & a_{k2} & \cdots & a_{kk}
\end{vmatrix}, \qquad k = 1, 2, \ldots, n,$$
i.e. Dk is obtained from |A| by crossing out the last n − k columns and rows.
By means of the leading principal minors, we can give a criterion to decide whether a matrix A is positive or negative definite.
THEOREM 10.7 Let matrix A be a symmetric matrix of order n×n with the leading principal minors Dk , k = 1, 2, . . . , n. Then:
(1)A is positive definite if and only if Dk > 0 for k = 1, 2, . . . , n.
(2)A is negative definite if and only if (−1)k · Dk > 0 for k = 1, 2, . . . , n.
For a symmetric matrix A of order n × n, we have to check the sign of n determinants to find out whether A is positive (negative) definite. If all leading principal minors are greater than zero, matrix A is positive definite according to part (1) of Theorem 10.7. If the signs of the n leading principal minors alternate, where the first minor is negative (i.e. element a11 is smaller than zero), then matrix A is negative definite according to part (2) of Theorem 10.7. The following two examples illustrate the use of Theorem 10.7.
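The leading principal minors are also easy to compute numerically. The following sketch (an illustration; the helper `leading_principal_minors` is hypothetical) applies the sign tests of Theorem 10.7 to the matrix considered in Example 10.7 below:

```python
import numpy as np

def leading_principal_minors(A):
    """D_1, ..., D_n of a square matrix A as in Definition 10.4."""
    return [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]

A = np.array([[-3.0,  2.0,  0.0],
              [ 2.0, -3.0,  0.0],
              [ 0.0,  0.0, -5.0]])      # the matrix of Example 10.7 below

D = leading_principal_minors(A)
print(np.round(D, 6))                                            # [ -3.   5. -25.]
print(all(d > 0 for d in D))                                     # positive definite?  False
print(all((-1) ** (k + 1) * d > 0 for k, d in enumerate(D)))     # negative definite?  True
```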
Example 10.7 Let
$$A = \begin{pmatrix} -3 & 2 & 0 \\ 2 & -3 & 0 \\ 0 & 0 & -5 \end{pmatrix}.$$
For matrix A, we get the leading principal minors
$$D_1 = a_{11} = -3 < 0, \qquad
D_2 = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = \begin{vmatrix} -3 & 2 \\ 2 & -3 \end{vmatrix} = 9 - 4 = 5 > 0,$$
$$D_3 = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = \begin{vmatrix} -3 & 2 & 0 \\ 2 & -3 & 0 \\ 0 & 0 & -5 \end{vmatrix} = -5\,D_2 = -25 < 0.$$
Since the leading principal minors Dk, k ∈ {1, 2, 3}, alternate in sign, starting with a negative sign of D1, matrix A is negative definite according to part (2) of Theorem 10.7.
Example 10.8 We check for which values a ∈ R the matrix
$$A = \begin{pmatrix} 2 & 1 & 0 & 0 \\ 1 & a & 1 & 0 \\ 0 & 1 & 3 & 1 \\ 0 & 0 & 1 & 2 \end{pmatrix}$$
is positive definite. We apply Theorem 10.7 and investigate for which values of a all leading principal minors are greater than zero. We obtain:
$$D_1 = 2 > 0, \qquad D_2 = \begin{vmatrix} 2 & 1 \\ 1 & a \end{vmatrix} = 2a - 1 > 0.$$
The latter inequality holds for a > 1/2. Next, we obtain
$$D_3 = \begin{vmatrix} 2 & 1 & 0 \\ 1 & a & 1 \\ 0 & 1 & 3 \end{vmatrix} = 6a - 2 - 3 > 0,$$
which holds for a > 5/6. For calculating D4 = |A|, we can expand |A| by row 4:
$$D_4 = |A| = -\begin{vmatrix} 2 & 1 & 0 \\ 1 & a & 0 \\ 0 & 1 & 1 \end{vmatrix} + 2D_3 = -(2a - 1) + 12a - 10 = 10a - 9 > 0,$$
which holds for a > 9/10. Due to 1/2 < 5/6 < 9/10, and since all leading principal minors must be positive, matrix A is positive definite for a > 9/10.
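The boundary value a = 9/10 can also be probed numerically. In the following sketch (an illustration with two arbitrarily chosen values of a), the smallest eigenvalue of the symmetric matrix changes sign at a = 9/10:

```python
import numpy as np

def A_of(a):
    return np.array([[2.0, 1.0, 0.0, 0.0],
                     [1.0,   a, 1.0, 0.0],
                     [0.0, 1.0, 3.0, 1.0],
                     [0.0, 0.0, 1.0, 2.0]])

for a in (0.85, 0.95):
    smallest = np.linalg.eigvalsh(A_of(a)).min()
    print(a, smallest > 0)     # False for a = 0.85, True for a = 0.95
```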
THEOREM 10.8 Let A be a symmetric matrix of order n × n. Then:
(1) If matrix A is positive semi-definite, then each leading principal minor Dk , k = 1, 2, . . . , n, is non-negative.
(2)If matrix A is negative semi-definite, then each leading principal minor Dk is either zero or has the same sign as (−1)k , k = 1, 2, . . . , n.
It is worth noting that Theorem 10.8 is only a necessary condition for a positive (negative) semi-definite matrix A. If all leading principal minors of a matrix A are non-negative, we cannot conclude that this matrix must be positive semi-definite. We only note that, in order to get a sufficient condition for a positive (negative) semi-definite matrix, we have to check
all principal minors of matrix A (i.e. all minors obtained by deleting rows and columns with the same indices), not only the leading ones, and they have to satisfy the sign conditions of Theorem 10.8.
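For instance, the matrix
$$A = \begin{pmatrix} 0 & 0 \\ 0 & -1 \end{pmatrix}$$
has the leading principal minors D1 = 0 and D2 = 0, which are both non-negative, yet A is not positive semi-definite, since Q(x) = xTAx = −x2² < 0 for xT = (0, 1). The negative sign only shows up in the principal minor obtained by deleting the first row and column, which is equal to −1.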
EXERCISES
10.1 Given are the matrices
$$A = \begin{pmatrix} 2 & 1 \\ 4 & -1 \end{pmatrix}; \qquad B = \begin{pmatrix} 2 & 1 \\ -2 & 4 \end{pmatrix};$$
$$C = \begin{pmatrix} -3 & -2 & 4 \\ -2 & -2 & 3 \\ -3 & 2 & 3 \end{pmatrix} \qquad \text{and} \qquad D = \begin{pmatrix} 2 & 0 & 0 \\ -1 & 0 & 2 \\ 2 & 1 & -2 \end{pmatrix}.$$
Find the eigenvalues and the eigenvectors of each of these matrices.
10.2 Let xt be the consumption value of a national economy in period t and yt the capital investment of the economy in this period. For the following period t + 1, we have
xt+1 = 0.7xt + 0.6yt
which describes the change in consumption from one period to the subsequent one depending on consumption and capital investment in the current period. Consumption increases by 70 per cent of consumption and by 60 per cent of the capital investment. The capital investment follows the same type of strategy:
yt+1 = 0.6xt + 0.2yt .
Thus, we have the system
ut+1 = Aut with ut = (xt, yt)T, t = 1, 2, . . . .
(a)Find the greatest eigenvalue λ of the matrix A and the eigenvectors associated with this value.
(b)Interpret the result above with λ as a factor of proportionate growth.
(c)Let 10, 000 units be the sum of consumption value and capital investment in the first period. How does it have to be split for proportionate growth? Assume you have the same growth rate λ for the following two periods, what are the values of consumption and capital investment?
10.3 Given are the matrices
$$A = \begin{pmatrix} 1 & 2 & 0 & 1 \\ 0 & 3 & 2 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 5 \end{pmatrix} \qquad \text{and} \qquad B = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 3 & 1 \\ 0 & 1 & 3 \end{pmatrix}.$$
(a)Find the eigenvalues and the eigenvectors of each of these matrices.
(b)Verify that the eigenvectors of matrix A form a basis of the space R4 and that the eigenvectors of matrix B are linearly independent and orthogonal.
10.4 Verify that the quadratic form xTBx with matrix B from Exercise 10.1 is positive definite.
10.5 Given are the matrices:
$$A = \begin{pmatrix} 1 & -1 \\ -1 & 3 \end{pmatrix}; \qquad B = \begin{pmatrix} 2 & 1 \\ 2 & 2 \end{pmatrix};$$
$$C = \begin{pmatrix} 1 & 5 & 1 \\ 0 & 0 & -8 \\ 1 & 5 & 0 \end{pmatrix} \qquad \text{and} \qquad D = \begin{pmatrix} 1 & 0 & 2 \\ 2 & 0 & 5 \\ 0 & 1 & 0 \end{pmatrix}.$$
(a)Find the eigenvalues of each of these matrices.
(b)Determine by the given criterion (see Theorem 10.7) which of the matrices A, B, C, D (and their associated quadratic forms, respectively) are positive definite and which are negative definite.
(c)Compare the results of (b) with the results of (a).
10.6 Let x = (1, 1, 0)T be an eigenvector associated with the eigenvalue λ1 = 3 of the matrix
$$A = \begin{pmatrix} a_1 & 0 & 1 \\ 2 & a_2 & 0 \\ 1 & -1 & a_3 \end{pmatrix}.$$
(a)What can you conclude about the values of a1, a2 and a3?
(b)Find another eigenvector associated with λ1.
(c)Is it possible in addition to the answers concerning part (a) to find further conditions for a1, a2 and a3 when A is positive definite?
(d)If your answer is affirmative for part (c), do you see a way to find a1, a2 and a3 exactly when λ2 = −3 is also an eigenvalue of the matrix A?