
VECTOR SPACES 169
respectively. Of course, we could also represent this equation using a row vector for the vector of coefficients, in which case the equation becomes aᵗ = bᵗTᵗ.
We next consider the subspaces of finite-dimensional vector spaces. As we would intuitively suspect, the dimension of a subspace cannot be larger than the dimension of the original space.
4.2.20. Theorem. Let A be a finite-dimensional vector space over a field F and let B be a subspace of A. Then B is finite dimensional and dimF(B) ≤ dimF(A). Furthermore, dimF(B) = dimF(A) if and only if B = A.
Proof. If B = {0A}, then dimF(B) = 0 ≤ dimF(A). Therefore, we will assume that B is a nonzero subspace. Suppose that B does not have a finite basis and let 0A ≠ a1 ∈ B. As we have already seen, {a1} is linearly independent so, by our assumption, Le({a1}) ≠ B. Therefore, we can choose an element a2 ∉ Le({a1}). By Lemma 4.2.8, the subset {a1, a2} is linearly independent, and again, Le({a1, a2}) ≠ B. In this way, using the same argument, we construct an infinite subset {an | n ∈ N} such that, for each n ∈ N, the subset {a1, ..., an} is linearly independent and Le({a1, ..., an}) ≠ B. If S is a finite subset of {an | n ∈ N}, then there exists a positive integer k such that S ⊆ {a1, ..., ak}. By Proposition 4.2.7, the subset S is linearly independent and, again using Proposition 4.2.7, we see that the set {an | n ∈ N} is also linearly independent. Since the space A is finite dimensional, Theorem 4.2.11 shows that there exists a finite linearly independent subset K such that {an | n ∈ N} ∪ K is a basis of A. However, Theorem 4.2.14 shows that each basis of A is finite. This contradiction shows that B has a finite basis {b1, ..., bt}. Theorem 4.2.11 shows that the subset {b1, ..., bt} can be extended to a basis of the entire space A, and so it follows that dimF(B) ≤ dimF(A).
Finally, suppose that dimF(B) = dimF(A) = n and let {b1, ..., bn} be a basis of B. If B ≠ A then, since B = Le({b1, ..., bn}), Lemma 4.2.8 implies that {b1, ..., bn, c} is linearly independent for each element c ∉ B. Theorem 4.2.11 shows that the subset {b1, ..., bn, c} can be extended to a basis U of the entire space A. Consequently, the space A has a basis which contains at least n + 1 elements, and we obtain a contradiction with Theorem 4.2.14. This proves that B = A.
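For a concrete check of the inequality, the theorem can be tested numerically over F = ℚ. The sketch below is our own helper, not part of the text: it computes dimF(Le(M)) for a finite subset M of ℚ^n by exact Gaussian elimination, a method justified only later by the rank results of Section 4.3.

```python
from fractions import Fraction

def rank_of_vectors(vectors):
    """Dimension of Le(M) for a finite set M of vectors over Q,
    computed by exact Gaussian elimination (no rounding errors)."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    n = len(rows[0]) if rows else 0
    rank, col = 0, 0
    while rank < len(rows) and col < n:
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col] != 0), None)
        if pivot is None:
            col += 1          # no pivot in this column, move on
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(rank + 1, len(rows)):
            f = rows[r][col] / rows[rank][col]
            rows[r] = [x - f * y for x, y in zip(rows[r], rows[rank])]
        rank, col = rank + 1, col + 1
    return rank

# B = Le({(1, 0, 2), (0, 1, 3)}) is a subspace of A = Q^3.
dim_B = rank_of_vectors([(1, 0, 2), (0, 1, 3)])
dim_A = 3  # dim_F(F^n) = n, as shown later in this section
print(dim_B <= dim_A, dim_B < dim_A)  # True True: dim(B) <= dim(A), and B != A
```

Here dim_B < dim_A, so by the second half of the theorem B is a proper subspace of A.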
Next we consider the question of the dimension of direct products.
4.2.21. Lemma. Let A be a vector space over a field F and let C, B be subspaces of A. Suppose that B ∩ C = {0A}. If M (respectively S) is a linearly independent subset of B (respectively C), then M ∪ S is linearly independent.
Proof. By Proposition 4.2.7, it is sufficient to prove that every finite subset of M ∪ S is linearly independent. If K is a finite subset of M ∪ S, then K = M1 ∪ S1, where M1 (respectively S1) is a finite subset of M (respectively S). Therefore,
170 ALGEBRA AND NUMBER THEORY: AN INTEGRATED APPROACH
we may assume that the subsets M and S are finite. Let M = {a1, ..., an} and S = {b1, ..., bk}. Choose α1, ..., αn, β1, ..., βk ∈ F such that

α1a1 + ··· + αnan + β1b1 + ··· + βkbk = 0A.

Then α1a1 + ··· + αnan = (-β1)b1 + ··· + (-βk)bk, where α1a1 + ··· + αnan is an element of the subspace B while (-β1)b1 + ··· + (-βk)bk is an element of the subspace C. Since B ∩ C = {0A}, it follows that

α1a1 + ··· + αnan = 0A and β1b1 + ··· + βkbk = 0A.

The subsets M, S are linearly independent, so Proposition 4.2.7 implies that

α1 = ··· = αn = β1 = ··· = βk = 0F.

Again by Proposition 4.2.7, the subset M ∪ S is linearly independent.
4.2.22. Proposition. Let A be a vector space over a field F, let A1, ..., An be subspaces of A and let M1, ..., Mn be linearly independent subsets of A1, ..., An, respectively. If C = A1 + ··· + An is the internal direct sum of A1, ..., An, then M1 ∪ ··· ∪ Mn is a linearly independent subset.
Proof. We use induction on n. If n = 2, the assertion follows from Lemma 4.2.21. Suppose inductively that we have already proved that M1 ∪ ··· ∪ Mn-1 is linearly independent. By Proposition 4.1.14, (A1 + ··· + An-1) ∩ An = {0A}, and Lemma 4.2.21 applies again to give the result.
4.2.23. Corollary. Let A be a vector space over a field F and let A1, ..., An be finite-dimensional subspaces of A. If C = A1 + ··· + An is the internal direct sum of A1, ..., An, then dimF(C) = dimF(A1) + ··· + dimF(An).
Proof. Let Mj be a basis of Aj, for 1 ≤ j ≤ n. By Proposition 4.2.22, M1 ∪ ··· ∪ Mn is linearly independent. Also, if c ∈ C, then c = c1 + ··· + cn, where cj ∈ Aj, for 1 ≤ j ≤ n. Since Mj is a basis of Aj, cj is a linear combination of the elements of Mj, for 1 ≤ j ≤ n. It follows that c is a linear combination of the elements from M1 ∪ ··· ∪ Mn, so M1 ∪ ··· ∪ Mn is a subset of generators for C. However, M1 ∪ ··· ∪ Mn is linearly independent, so it is a basis of the subspace C. The result now follows easily.
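A worked instance of the corollary, under our own illustrative assumptions (A = F^4, with e1, ..., e4 the tuples having e in one position and 0F elsewhere, as constructed at the end of this section):

```latex
% Take A_1 = Le(\{e_1, e_2\}) and A_2 = Le(\{e_3\}) inside A = F^4.
% Then A_1 \cap A_2 = \{0_A\}, so C = A_1 + A_2 is an internal direct sum, and
\dim_F(C) = \dim_F(A_1) + \dim_F(A_2) = 2 + 1 = 3,
% with basis M_1 \cup M_2 = \{e_1, e_2\} \cup \{e_3\} = \{e_1, e_2, e_3\},
% exactly as the proof of the corollary constructs it.
```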
4.2.24. Definition. Let A be a vector space over a field F and let B be a subspace of A. A subspace C is called a complement to B if A = B ⊕ C.
The following assertion is very useful.
4.2.25. Proposition. Let A be a finite-dimensional vector space over a field F. Then every subspace of A has a complement.
Proof. Let B be an arbitrary subspace of A. By Theorem 4.2.20, B is finite dimensional, so let {b1, ..., bk} be a basis for B. Since M = {b1, ..., bk} is linearly independent, Theorem 4.2.11 shows that M can be extended to a basis of the entire space A. Thus, there exists a finite subset S = {c1, ..., ct} such that M ∪ S is a basis of A, and we set C = Le(S). Let a be an arbitrary element of A. By Proposition 4.2.16,

a = β1b1 + ··· + βkbk + γ1c1 + ··· + γtct

for certain elements β1, ..., βk, γ1, ..., γt ∈ F. Clearly, β1b1 + ··· + βkbk ∈ B and γ1c1 + ··· + γtct ∈ C, so that a ∈ B + C. It follows that A = B + C.
Next, let y ∈ B ∩ C. Then y = λ1b1 + ··· + λkbk, where λ1, ..., λk ∈ F. On the other hand, y = μ1c1 + ··· + μtct, where μ1, ..., μt ∈ F. We have

λ1b1 + ··· + λkbk = y = μ1c1 + ··· + μtct

or

λ1b1 + ··· + λkbk + (-μ1)c1 + ··· + (-μt)ct = 0A.

Since {b1, ..., bk, c1, ..., ct} is a basis, Proposition 4.2.7 shows that

λ1 = ··· = λk = μ1 = ··· = μt = 0F.

Consequently, y = 0A, so that B ∩ C = {0A} and it follows that A = B ⊕ C.
We note that a subspace usually has more than one complement. For example, let A be the vector space having basis {a1, a2} and let B be the subspace generated by a1. We observe that the subset {a1, a1 + a2} is also a basis of A. If C (respectively D) is the subspace generated by a2 (respectively by a1 + a2), then C and D are both complements to the subspace B.
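The example can be verified computationally. In the sketch below (our own, assuming F = ℚ and identifying A with ℚ^2 via the basis {a1, a2}), a one-dimensional subspace Le(v) is a complement to B = Le(a1) precisely when {a1, v} is a basis of A, that is, when the 2 × 2 determinant with columns a1 and v is nonzero.

```python
from fractions import Fraction

def det2(u, v):
    """Determinant of the 2x2 matrix with columns u and v."""
    return Fraction(u[0]) * Fraction(v[1]) - Fraction(u[1]) * Fraction(v[0])

# Coordinates with respect to the basis {a1, a2} of A.
a1, a2 = (1, 0), (0, 1)
c = a2                                 # C = Le(a2)
d = (a1[0] + a2[0], a1[1] + a2[1])     # D = Le(a1 + a2)

# {a1, a2} and {a1, a1 + a2} are both bases of A, so C and D are
# both complements to B = Le(a1).
print(det2(a1, c) != 0, det2(a1, d) != 0)  # True True
```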
Finally, we would like to consider some important examples of finite-dimensional spaces and subspaces. First, we consider the most important example for us, namely the space F^n. Put
e1 = (e, 0F, 0F, ..., 0F, 0F),
e2 = (0F, e, 0F, ..., 0F, 0F), ...,
ej = (0F, 0F, ..., 0F, e, 0F, ..., 0F, 0F) (with e in the jth position), ...,
en-1 = (0F, 0F, 0F, ..., 0F, e, 0F), en = (0F, 0F, 0F, ..., 0F, e).
These elements are linearly independent. For, let α1, ..., αn be elements of F such that α1e1 + α2e2 + ··· + αnen = 0A. We have

α1e1 + ··· + αnen = α1(e, 0F, ..., 0F) + α2(0F, e, 0F, ..., 0F) + ··· + αn(0F, 0F, ..., 0F, e)
= (α1, 0F, ..., 0F) + (0F, α2, 0F, ..., 0F) + ··· + (0F, 0F, ..., 0F, αn) = (α1, α2, ..., αn).

It follows that α1 = α2 = ··· = αn = 0F and, by Proposition 4.2.7, the elements e1, ..., en are linearly independent. Furthermore, for an arbitrary element (γ1, γ2, ..., γn) of F^n, we have

(γ1, ..., γn) = (γ1, 0F, ..., 0F) + (0F, γ2, 0F, ..., 0F) + ··· + (0F, 0F, ..., 0F, γn)
= γ1(e, 0F, ..., 0F) + γ2(0F, e, 0F, ..., 0F) + ··· + γn(0F, ..., 0F, e)
= γ1e1 + γ2e2 + ··· + γnen.
This proves that {e1, ..., en} generates the vector space F^n and is linearly independent. Consequently, this subset is a basis of F^n, called the standard or canonical basis of F^n. Thus, the space F^n is finite dimensional and dimF(F^n) = n.
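The computation above is easy to mirror in code. The following sketch (ours, taking F = ℚ so that e = 1) builds the canonical basis of F^n and checks that an arbitrary tuple (γ1, ..., γn) equals γ1e1 + ··· + γnen.

```python
from fractions import Fraction

def standard_basis(n):
    """The canonical basis e1, ..., en of F^n (here F = Q, so e = 1)."""
    return [tuple(Fraction(1 if j == i else 0) for j in range(n))
            for i in range(n)]

def combine(coeffs, vectors):
    """The linear combination coeffs[0]*vectors[0] + ... in F^n."""
    n = len(vectors[0])
    return tuple(sum(c * v[j] for c, v in zip(coeffs, vectors)) for j in range(n))

e = standard_basis(4)
gammas = [Fraction(5), Fraction(-1), Fraction(0), Fraction(7, 2)]
# The coordinates of a tuple relative to {e1, ..., e4} are its own entries.
print(combine(gammas, e) == tuple(gammas))  # True
```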
Now consider the space F^N of all sequences over F and its elements

ej = (vjn)n∈N, where vjj = e and vjn = 0F whenever j ≠ n.

Arguing as above, we can show that the subset {ej | 1 ≤ j ≤ k} is linearly independent for every k ∈ N. Proposition 4.2.7 then implies that {en | n ∈ N} is linearly independent. Hence, the vector space F^N contains an infinite linearly independent subset and therefore cannot be finite dimensional. In fact, the arguments stated above allow us to prove that {en | n ∈ N} is a basis for the subspace F^(N) of sequences with only finitely many nonzero terms, but not of F^N, which has an uncountable basis.
The vector space Mkxn(F) is also finite dimensional. Indeed, it is possible, using arguments similar to those given above, to show that the subset {Etj | 1 ≤ t ≤ k, 1 ≤ j ≤ n} is a basis of this vector space, called the standard or canonical basis of the vector space Mkxn(F). Hence dimF(Mkxn(F)) = kn and, in particular, dimF(Mn(F)) = n².
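A similar sketch (again ours, with F = ℚ and e = 1) lists the canonical basis {Etj} of Mkxn(F), where Etj denotes the k × n matrix whose (t, j) entry is e and whose other entries are 0F; counting its elements recovers dimF(Mkxn(F)) = kn.

```python
def matrix_unit(k, n, t, j):
    """E_tj: the k x n matrix with e = 1 in position (t, j) (1-based)
    and 0F = 0 elsewhere, represented as a tuple of row tuples."""
    return tuple(tuple(1 if (r, c) == (t - 1, j - 1) else 0 for c in range(n))
                 for r in range(k))

def canonical_basis(k, n):
    """The canonical basis {E_tj | 1 <= t <= k, 1 <= j <= n} of M_{k x n}(Q)."""
    return [matrix_unit(k, n, t, j)
            for t in range(1, k + 1) for j in range(1, n + 1)]

print(len(canonical_basis(2, 3)))  # 6 = kn, so dim of M_{2x3}(Q) is 6
```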
EXERCISE SET 4.2
Justify your work, providing a proof or counterexample where necessary.
4.2.1. Prove that the subset {(a11, a12, a13, ..., a1n), (0, a22, a23, ..., a2n), (0, 0, a33, ..., a3n), ..., (0, 0, 0, ..., akk, ak,k+1, ..., akn)} of the vector space A = ℂ^n is linearly independent if and only if the numbers a11, a22, a33, ..., akk are nonzero.
4.2.2. Let {a1, a2, a3, ..., am} be a linearly independent subset of a vector space A. Is the subset {a1, a1 + a2, a2 + a3, ..., am-1 + am} linearly independent?
4.2.3. Let A be a finite set and let |A| = n. On the Boolean 𝔅(A), we introduce the operation of addition and the operation of scalar multiplication by elements of the field F2 = {0, 1} using the following rules: X + Y = (X ∪ Y)\(X ∩ Y), 1X = X, 0X = ∅. Prove that the Boolean 𝔅(A) is a vector space under these operations. Prove that if X1 ⊂ X2 ⊂ ··· ⊂ Xn and Xk ≠ Xj whenever k ≠ j, then X1, X2, ..., Xn are linearly independent. Find a basis and the dimension of 𝔅(A).
4.2.4. Let B be the subset of the vector space M2(ℝ) consisting of all matrices of the form
( a   0 )
( 2a  3a ).
Is B a subspace? If yes, find dimℝ(B).
4.2.5. Let A = ℚ^22, B = {(a1, a2, ..., a22) | a1 + a2 + ··· + a22 = 0}. Is B a subspace? If yes, find dimℚ(B).
4.2.6. Let A = ℝ^221, B = {(a1, a2, ..., a221) | 2a1 = a221}. Is B a subspace? If yes, find dimℝ(B).
4.2.7. Let A = ℚ^23, B = {(a1, a2, ..., a23) | a1² = a2, a2² = a3, ..., a22² = a23}. Is B a subspace? If yes, find dimℚ(B).
4.2.8. Give an example of a nonstandard basis in M3(ℝ).
4.2.9. Prove that the subset of all symmetric matrices is a subspace of the vector space M13(ℝ). Find a basis and the dimension of this subspace.
4.2.10. Is the set of all skew-symmetric matrices a subspace of the vector space M41(ℝ)? If yes, find a basis and the dimension of this subspace.
4.2.11. Let A = ℚ^4. Do the vectors (1, 2, 3, 4), (0, 1, -1, 3), (1, 2, 4, 3), (-1, -1, -4, 1) form a basis of this space? If yes, find the transition matrix from the standard basis.
4.2.12. Is the subset { (~ -~), G~), (~ ~), (~ ~) } a basis of the vector space M2(ℚ)? If yes, find the coordinates of the matrix ( -; ~ ) relative to this basis.
4.2.13. Is the subset { (~ ~), G~), (~ ~), G-u) } a basis of the vector space M2(ℚ)? If yes, find the coordinates of the matrix ( _i ~ ) relative to this basis.
4.2.14. In the vector space M2(ℚ), find the transition matrix from the basis { G-~), G~), G~), G~) } to the basis { (~ n), G n), (~ ~), G ~) }.
4.2.15. In the vector space ℝ^4, find a basis containing the vector (0, 1, 4, 3).
4.2.16. In the vector space ℝ^4, find a basis containing the vector (2, 1, 1, 0).
4.2.17. Let A = ℚ^4. Is the matrix
( ~   !   -!  i )
( -1  -1   4  1 )
a transition matrix from the standard basis to another one? If yes, find the new basis.
4.2.18. Let A = ℚ^3, let B be the linear envelope of the subset {(1, 2, 1), (1, 1, -1), (1, 3, 3)} and let C be the linear envelope of the subset {(2, 3, -1), (1, 1, 2), (1, 1, -3)}. Find the bases of the sum and of the intersection of the spaces B and C.
4.2.19. Let A = NT19(ℚ) be the subspace of all zero-triangular matrices. Find a basis and the dimension of this subspace.
4.3 THE RANK OF A MATRIX
Matrices are very important tools in linear algebra. In this section, we consider a concept known as the rank of a matrix. This concept is based on the dimension of a space.
4.3.1. Definition. Let A be a vector space over a field F and let M be a finite subset of A. Then dimF(Le(M)) is called the rank of the subset M and is denoted by rank(M).
From Corollary 4.2.13, we know that M contains some basis R of the subspace Le(M). By Theorem 4.2.10, R is a maximal linearly independent subset of Le(M), and hence R is also a maximal linearly independent subset of M. Thus, we obtain the following characterization of the rank of a subset.
4.3.2. Proposition. Let F be a field and let A be a vector space over F. Suppose that M is a finite subset of A. Then rank(M) is equal to the number of elements in every maximal linearly independent subset of M.
Proof. Let S = {a1, ..., ak} be an arbitrary maximal linearly independent subset of M. Clearly, Le(M) is finite dimensional and we claim that S is indeed a basis of it, from which the result will follow by the definition. If x ∈ M\S, then S ∪ {x} is linearly dependent so, by Proposition 4.2.7, there are scalars λ1, ..., λk, β ∈ F, not all 0F, such that

λ1a1 + ··· + λkak + βx = 0A.    (4.1)

If β = 0F, then the fact that S is linearly independent and Proposition 4.2.7 imply that λ1 = λ2 = ··· = λk = 0F, contrary to the choice of the scalars λ1, ..., λk, β. Thus, β ≠ 0F so, multiplying Equation 4.1 by β⁻¹, gives

x = -β⁻¹λ1a1 - ··· - β⁻¹λkak.

Thus, every element of M is a linear combination of the elements of S. Since every element of Le(M) is a linear combination of the elements of M, it follows that every element of Le(M) is a linear combination of the elements of S, so that S is indeed a basis of Le(M), as required.
The following corollary is immediate.
4.3.3. Corollary. Let F be a field and let A be a vector space over F. Suppose that M is a finite subset of A. Then rank(M) = |M| if and only if M is linearly independent.
There is a very nice way of applying these results to matrices. Let F be a field and consider the k × n matrix A ∈ Mkxn(F), where

A = ( α11  α12  α13  ...  α1,n-1  α1n )
    ( α21  α22  α23  ...  α2,n-1  α2n )
    ( ................................ )
    ( αk1  αk2  αk3  ...  αk,n-1  αkn ).

Every row of this matrix is an n-tuple consisting of elements of F, so we may consider each row as an element of the vector space F^n. Similarly, every column of this matrix is a k-tuple with entries in F, and so each column can be considered as an element of the vector space F^k.
4.3.4. Definition. Let F be a field and let A = [αtj] be a k × n matrix over the field F. Let R(A) (respectively C(A)) denote the set of all rows (respectively columns) of the matrix A. Then R(A) (respectively C(A)) is a subset of the vector space F^n (respectively F^k). The numbers rank(R(A)) and rank(C(A)) are called the row rank and the column rank of A, respectively.
We are going to prove that these ranks coincide and exhibit a method for computing them.
4.3.5. Theorem. Let F be a field and let A = [αtj] be a k × n matrix over the field F. Suppose that t is a positive integer satisfying the conditions:

(i) the matrix A has a nonzero minor of degree t;
(ii) each minor of degree s > t is equal to 0F.
Then rank(C(A)) = t.
Proof. We suppose first that minor{1, 2, ..., t; 1, 2, ..., t} is nonzero; this minor will be denoted by Δ. As we will see later, this will not affect the generality of the result but will significantly simplify the notation. Let aj denote the jth column of the matrix A and consider the matrix
B = ( α11  α12  ...  α1t  α1j )
    ( α21  α22  ...  α2t  α2j )
    ( ......................... )
    ( αt1  αt2  ...  αtt  αtj )
    ( αm1  αm2  ...  αmt  αmj ),

where t + 1 ≤ j ≤ n and 1 ≤ m ≤ k. If m ≤ t, then the matrix B has two identical rows, those numbered m and t + 1, so, by Corollary 2.3.8, det(B) = 0F. If m > t, then

det(B) = minor{1, 2, ..., t, m; 1, 2, ..., t, j}.

This minor has degree t + 1 and, by hypothesis (ii), det(B) = 0F; so, in any case, det(B) = 0F. Using Theorem 2.4.3, we may expand the determinant of B about the last row. The cofactor corresponding to αmj is ±minor{1, 2, ..., t; 1, 2, ..., t} = Δ, whereas the minor corresponding to αms
is the determinant of
Bs = ( α11  ...  α1,s-1  α1,s+1  ...  α1t  α1j )
     ( α21  ...  α2,s-1  α2,s+1  ...  α2t  α2j )
     ( ........................................ )
     ( αt1  ...  αt,s-1  αt,s+1  ...  αtt  αtj ).
We denote the corresponding cofactor by Δs. As we can see, the elements of the row numbered m do not belong to the matrix Bs. Therefore, det(Bs), and consequently Δs, is independent of m. By Theorem 2.4.3,

αm1Δ1 + ··· + αmtΔt + αmjΔ = 0F.
Since Δ is a nonzero element of the field F, Δ has a multiplicative inverse Δ⁻¹, so we have

αmj = (-Δ⁻¹Δ1)αm1 + ··· + (-Δ⁻¹Δt)αmt.

Since this equation is valid for each m, where 1 ≤ m ≤ k, we obtain the following linear combination of the columns, considered as elements of the vector space F^k:

aj = (-Δ⁻¹Δ1)a1 + ··· + (-Δ⁻¹Δt)at.

It follows that Le(C(A)) is generated by the columns a1, ..., at. We next show that the set {a1, ..., at} is linearly independent, which implies that it is a basis of Le(C(A)). Suppose that the contrary is true. Then there exists an index q such that the column aq is a linear combination of the other columns, say aq = Σ_{1≤j≤t, j≠q} λj aj. Let āj denote the "shortened" column

āj = ( α1j )
     ( α2j )
     ( ... )
     ( αtj ).

For these shortened columns, the same linear combination āq = Σ_{1≤j≤t, j≠q} λj āj holds. However, Corollary 2.3.10 shows that, in this case, the matrix

( α11  α12  ...  α1t )
( α21  α22  ...  α2t )
( .................... )
( αt1  αt2  ...  αtt )

has determinant zero, a contradiction which proves that {a1, ..., at} is a basis of Le(C(A)). Thus, dimF(Le(C(A))) = t, which proves the result.
Computation of the rank of a matrix appears to require the computation of a possibly very large (but finite!) number of minors of the matrix. However, if we look carefully at the proof of the previous theorem, we see that we did not use the fact that all minors of degree s > t are equal to zero. We actually used only the fact that the minors of degree t + 1 containing the given nonzero minor of degree t are equal to zero. From this, we may infer that t is the number of columns in a maximal linearly independent subset of the set of all columns, and this fact in turn implies that all other minors of degree s > t are equal to zero.
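The remark can be made concrete in code. The sketch below (our own, over F = ℚ) computes the rank of a matrix in two ways: as the largest degree of a nonzero minor, by enumerating all minors as in Theorem 4.3.5, and by Gaussian elimination; the results agree, and the enumeration of minors is visibly the far more expensive route.

```python
from fractions import Fraction
from itertools import combinations

def det(m):
    """Determinant by expansion about the first row (fine for small matrices)."""
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j in range(len(m)):
        sub = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(sub)
    return total

def rank_by_minors(a):
    """Largest t such that some t x t minor of a is nonzero (Theorem 4.3.5)."""
    k, n = len(a), len(a[0])
    for t in range(min(k, n), 0, -1):
        for rows in combinations(range(k), t):
            for cols in combinations(range(n), t):
                if det([[a[r][c] for c in cols] for r in rows]) != 0:
                    return t
    return 0

def rank_by_elimination(a):
    """The same rank, via exact Gaussian elimination over Q."""
    rows = [[Fraction(x) for x in row] for row in a]
    rank, col = 0, 0
    while rank < len(rows) and col < len(rows[0]):
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col] != 0), None)
        if pivot is None:
            col += 1
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(rank + 1, len(rows)):
            f = rows[r][col] / rows[rank][col]
            rows[r] = [x - f * y for x, y in zip(rows[r], rows[rank])]
        rank, col = rank + 1, col + 1
    return rank

A = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]  # second row is twice the first
print(rank_by_minors(A), rank_by_elimination(A))  # 2 2
```

Since the row rank equals the column rank (Corollary 4.3.6 below), applying either function to the transpose of A returns the same value.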
4.3.6. Corollary. Let F be a field and let A = [αtj] be a k × n matrix over the field F. Then the row rank of this matrix coincides with its column rank.
Proof. Suppose that the column rank of A is denoted by w. By Theorem 4.3.5, there exists a nonzero minor

Δ = minor{p(1), p(2), ..., p(w); j(1), j(2), ..., j(w)}

of degree w. Let Aᵗ = [βij] ∈ Mnxk(F) be the transpose of A, so that βij = αji. Then R(Aᵗ) (respectively C(Aᵗ)) is the set of all columns (respectively rows) of the matrix A. Therefore, the column rank of Aᵗ is equal to the row rank of A, and conversely. We will find the column rank of the matrix Aᵗ. By Proposition 2.3.3, the minor of the matrix Aᵗ corresponding to the rows numbered j(1), j(2), ..., j(w) and the columns numbered p(1), p(2), ..., p(w) is nonzero. Next, choose s arbitrary columns and rows in Aᵗ, where s > w. We suppose that the chosen rows are rows m(1), ..., m(s) and the chosen columns are d(1), ..., d(s). By Proposition 2.3.3, the minor of the matrix Aᵗ corresponding to these rows and columns is equal to the minor of the matrix A consisting of the rows numbered d(1), ..., d(s) and the columns numbered m(1), ..., m(s), and therefore it is equal to zero. Hence, every minor of the matrix Aᵗ of degree s > w is equal to zero. By Theorem 4.3.5, the column rank of Aᵗ is therefore w. Hence, the row rank of A is also w and, in particular, the column and row ranks of A are equal.
Because of Corollary 4.3.6, we call the common value of the row rank and the column rank of a matrix A simply the rank of A and denote it by rank(A). It will normally be clear from the context which rank is meant.
The rank of a matrix has important applications in the solution of systems of linear equations. To see this, we consider the system of linear equations

α11x1 + α12x2 + ··· + α1nxn = β1
α21x1 + α22x2 + ··· + α2nxn = β2
···                                  (4.2)
αk-1,1x1 + αk-1,2x2 + ··· + αk-1,nxn = βk-1
αk1x1 + αk2x2 + ··· + αknxn = βk.
The coefficients αtj, for 1 ≤ t ≤ k, 1 ≤ j ≤ n, and the elements βt, for 1 ≤ t ≤ k, belong to F.
4.3.7. Definition. An n-tuple (γ1, ..., γn) consisting of elements of a field F is called a solution of the system (Eq. 4.2) if every equation of Equation 4.2 becomes an identity after replacing the variables xj by the corresponding elements γj, for 1 ≤ j ≤ n; that is, Σ_{1≤j≤n} αtj γj = βt for all t, where 1 ≤ t ≤ k.
Note that the elements γ1, ..., γn form only one solution (γ1, ..., γn) of the given system, not n solutions. Also, a system of linear equations need not have a solution. We next consider the question of the existence of a solution to such a system.
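Before turning to existence, note that Definition 4.3.7 is directly checkable by machine. In this sketch (ours, over ℚ), is_solution tests whether a candidate n-tuple (γ1, ..., γn) satisfies Σj αtj γj = βt for every equation of a system given by its coefficients and right-hand sides.

```python
from fractions import Fraction

def is_solution(coeffs, consts, candidate):
    """True if sum_j coeffs[t][j] * candidate[j] == consts[t] for every t,
    i.e. if candidate is a solution in the sense of Definition 4.3.7."""
    return all(sum(Fraction(a) * Fraction(y) for a, y in zip(row, candidate)) == b
               for row, b in zip(coeffs, consts))

# The system  x1 + x2 = 3,  x1 - x2 = 1  over Q:
A = [[1, 1], [1, -1]]
b = [3, 1]
print(is_solution(A, b, (2, 1)), is_solution(A, b, (3, 0)))  # True False
```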
The matrix

A = ( α11  α12  ...  α1,n-1  α1n )
    ( α21  α22  ...  α2,n-1  α2n )
    ( ............................ )
    ( αk1  αk2  ...  αk,n-1  αkn ),

consisting of the coefficients of the variables xj, where 1 ≤ j ≤ n, is called the coefficient matrix of the system (Eq. 4.2). The matrix

A* = ( α11  α12  ...  α1,n-1  α1n  β1 )
     ( α21  α22  ...  α2,n-1  α2n  β2 )
     ( ................................ )
     ( αk1  αk2  ...  αk,n-1  αkn  βk )

is called the extended or augmented matrix of the system (Eq. 4.2).