
MATRICES AND DETERMINANTS
Here is a summary of all the properties we have obtained so far, using our previously established notation.
A + B = B + A,
A + (B + C) = (A + B) + C,
A + O = A,
A + (−A) = O,
A(B + C) = AB + AC, (A + B)C = AC + BC, A(BC) = (AB)C,
AI = IA = A, (α + β)A = αA + βA, α(A + B) = αA + αB,
α(βA) = (αβ)A,
1A = A,
α(AB) = (αA)B = A(αB).
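These identities can be checked numerically. The following Python sketch (representing matrices as nested lists; the helper names `add`, `mul`, and `scale` are our own, not the book's) tests a few of them on sample 2×2 matrices:

```python
def add(X, Y):
    # Entrywise sum of two matrices of the same size
    return [[X[i][j] + Y[i][j] for j in range(len(X[0]))] for i in range(len(X))]

def mul(X, Y):
    # Row-by-column matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def scale(a, X):
    # Scalar multiple aX
    return [[a * X[i][j] for j in range(len(X[0]))] for i in range(len(X))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [0, 2]]

assert mul(A, add(B, C)) == add(mul(A, B), mul(A, C))   # A(B + C) = AB + AC
assert mul(add(A, B), C) == add(mul(A, C), mul(B, C))   # (A + B)C = AC + BC
assert mul(A, mul(B, C)) == mul(mul(A, B), C)           # A(BC) = (AB)C
assert scale(3, mul(A, B)) == mul(scale(3, A), B) == mul(A, scale(3, B))
```

Of course, a check on particular matrices illustrates the identities rather than proves them; the proofs are the entrywise computations carried out earlier in the section.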
With the aid of these arithmetic operations on matrices, we can define some additional operations on the set M_n(ℝ). The following two are the most useful: commutation and transposition.
2.1.8. Definition. Let A, B ∈ M_n(ℝ). The matrix [A, B] = AB − BA is called the commutator of A and B.
The operation of commutation is anticommutative in the sense that [A, B] = −[B, A]. Note also that [A, A] = O. This operation is not associative. In fact, by applying Definition 2.1.8, we have
[[A, B], C] = [AB − BA, C] = ABC − BAC − CAB + CBA, whereas
[A, [B, C]] = [A, BC − CB] = ABC − ACB − BCA + CBA.
ALGEBRA AND NUMBER THEORY: AN INTEGRATED APPROACH

Thus, if the associative law were to hold for commutation, then it would follow that BAC + CAB = ACB + BCA. However, suitably chosen 2×2 matrices A, B, and C show that this is not true: a direct computation with them gives BAC + CAB ≠ ACB + BCA, so we can see that [[A, B], C] ≠ [A, [B, C]].
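The entry-by-entry computation can be reproduced with matrices of our own choosing (a hypothetical substitute for the book's particular example, which makes the failure of associativity just as visible):

```python
def mul(X, Y):
    # Product of two 2x2 matrices
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(2)] for i in range(2)]

def comm(X, Y):
    # The commutator [X, Y] = XY - YX
    return sub(mul(X, Y), mul(Y, X))

A = [[0, 1], [0, 0]]
B = [[0, 0], [1, 0]]
C = [[1, 0], [0, 0]]

left  = comm(comm(A, B), C)   # [[A, B], C] = [[0, 0], [0, 0]]
right = comm(A, comm(B, C))   # [A, [B, C]] = [[1, 0], [0, -1]]
assert left != right          # commutation is not associative
```

With this choice the left-hand bracket collapses to the zero matrix while the right-hand one does not, so a single triple of matrices settles the question.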
|
|
|
|
|
|
However, for commutation there is a weakened form of associativity, known as the Jacobi identity, which states
[[A, B], C] + [[C, A], B] + [[B, C], A]= 0.
Indeed
[[A, B], C] + [[C, A], B] + [[B, C], A] = [AB − BA, C] + [CA − AC, B] + [BC − CB, A]
= ABC − BAC − CAB + CBA + CAB − ACB − BCA + BAC + BCA − CBA − ABC + ACB = O.
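The twelve-term cancellation above can also be confirmed numerically; a minimal Python sketch (with matrices of our own choosing) evaluates the Jacobi sum:

```python
def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(2)] for i in range(2)]

def comm(X, Y):
    # The commutator [X, Y] = XY - YX
    return sub(mul(X, Y), mul(Y, X))

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
C = [[2, 0], [1, 3]]

# [[A, B], C] + [[C, A], B] + [[B, C], A] should be the zero matrix
J = add(add(comm(comm(A, B), C), comm(comm(C, A), B)), comm(comm(B, C), A))
assert J == [[0, 0], [0, 0]]
```

Since the cancellation in the proof is purely formal, the assertion holds for any square matrices, not just these.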
The second operation of interest is transposition which we now define.
2.1.9. Definition. Let A = [a_ij] be a matrix from the set M_{k×n}(ℝ). The transpose of A is the matrix A^t = [b_ij] from the set M_{n×k}(ℝ) whose entries are b_ij = a_ji. Thus, the rows of A^t are the columns of A, and the columns of A^t are the rows of A. We will say that we obtain A^t by transposition of A.
Here are the main properties of transposition.
2.1.10. Theorem. Transposition has the following properties:
(i) (A^t)^t = A, for all matrices A.
(ii) (A + B)^t = A^t + B^t, if A, B ∈ M_{k×n}(ℝ).
(iii) (AB)^t = B^t A^t, if A ∈ M_{k×n}(ℝ) and B ∈ M_{n×t}(ℝ).
(iv) (A^{-1})^t = (A^t)^{-1}, for all invertible square matrices A. Thus, if A^{-1} exists, so does (A^t)^{-1}.
(v) (αA)^t = αA^t, for all matrices A and real numbers α.
Proof. Assertions (i) and (ii) are quite easy to show and the method of proof will be seen in the remaining cases.
(iii) Let A = [a_ij] ∈ M_{k×n}(ℝ) and B = [b_ij] ∈ M_{n×t}(ℝ). Put
AB = C = [c_ij] ∈ M_{k×t}(ℝ), A^t = [u_ij] ∈ M_{n×k}(ℝ),
B^t = [v_ij] ∈ M_{t×n}(ℝ), B^t A^t = [w_ij] ∈ M_{t×k}(ℝ).
Then
c_ji = Σ_{1≤m≤n} a_jm b_mi and w_ij = Σ_{1≤m≤n} v_im u_mj = Σ_{1≤m≤n} b_mi a_jm = c_ji.
It follows that (AB)^t = B^t A^t.
(iv) If A^{-1} exists then we have A^{-1}A = AA^{-1} = I. Using (iii) we obtain
(A^{-1})^t A^t = (AA^{-1})^t = I^t = I and A^t (A^{-1})^t = (A^{-1}A)^t = I^t = I.
Thus, (A^{-1})^t is the inverse of A^t, which is to say that A^t is invertible and (A^t)^{-1} = (A^{-1})^t.
(v) is also easily shown.
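The index computation in (iii) is easy to mirror in code. A short Python sketch (the helper names are ours) checks properties (i) and (iii) entrywise for rectangular matrices:

```python
def transpose(X):
    # Swap rows and columns: entry (i, j) of X becomes entry (j, i)
    return [[X[i][j] for i in range(len(X))] for j in range(len(X[0]))]

def mul(X, Y):
    return [[sum(X[i][m] * Y[m][j] for m in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 2, 3], [4, 5, 6]]        # a 2x3 matrix
B = [[1, 0], [0, 1], [2, 2]]      # a 3x2 matrix

assert transpose(transpose(A)) == A                              # property (i)
assert transpose(mul(A, B)) == mul(transpose(B), transpose(A))   # property (iii)
```

Note that the order of the factors reverses on the right-hand side of (iii); with rectangular matrices the product A^t B^t is not even defined here, so the reversal is forced by the shapes alone.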
2.1.11. Definition. The matrix A = [a_ij] ∈ M_n(ℝ) is called symmetric if A = A^t. In this case a_ij = a_ji for every pair of indices (i, j), where 1 ≤ i, j ≤ n.
The matrix A = [a_ij] ∈ M_n(ℝ) is called skew-symmetric if A = −A^t. Then a_ij = −a_ji for every pair of indices (i, j), where 1 ≤ i, j ≤ n.
We note that if a matrix A has the property that A = A^t, then A is necessarily square. Also, the elements of the main diagonal of a skew-symmetric matrix must be 0 since, in this case, we have a_ii = −a_ii, so that a_ii = 0 for each i, where 1 ≤ i ≤ n. If A is skew-symmetric, then clearly A^t = −A.
The following remarkable result illustrates the role that symmetric and skew-symmetric matrices play.
2.1.12. Theorem. Every square matrix A can be represented in the form A = S + K, where S is a symmetric matrix and K is a skew-symmetric matrix. This representation is unique.
Proof. Let S = (1/2)(A + A^t) and K = (1/2)(A − A^t) and note, using Theorem 2.1.10, that
S^t = (1/2)(A + A^t)^t = (1/2)(A^t + A) = S.
Thus S is symmetric. Also, again by Theorem 2.1.10,
K^t = (1/2)(A − A^t)^t = (1/2)(A^t − A) = −(1/2)(A − A^t) = −K,
so that K is skew-symmetric. Furthermore, S + K = (1/2)(A + A^t) + (1/2)(A − A^t) = A. Consequently, it is always possible to write the matrix A as a sum of a symmetric and a skew-symmetric matrix. To show uniqueness, let S₁ be symmetric and let K₁ be skew-symmetric such that also A = S₁ + K₁ = S + K.
Then S₁ − S = K − K₁. However, X = S₁ − S is symmetric and X = K − K₁ is skew-symmetric. Then X = X^t = −X and it follows that X = O.
Therefore, S = S₁ and K = K₁ and the uniqueness of the expression follows.
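The decomposition in Theorem 2.1.12 is entirely constructive, so it can be computed directly. A small Python sketch (using exact rational arithmetic to avoid the division by 2 introducing floating-point noise):

```python
from fractions import Fraction

def transpose(X):
    return [[X[i][j] for i in range(len(X))] for j in range(len(X[0]))]

A = [[1, 2], [5, 3]]
At = transpose(A)

# S = (A + A^t)/2 is the symmetric part, K = (A - A^t)/2 the skew-symmetric part
S = [[Fraction(A[i][j] + At[i][j], 2) for j in range(2)] for i in range(2)]
K = [[Fraction(A[i][j] - At[i][j], 2) for j in range(2)] for i in range(2)]

assert S == transpose(S)                                   # S is symmetric
assert K == [[-x for x in row] for row in transpose(K)]    # K is skew-symmetric
assert [[S[i][j] + K[i][j] for j in range(2)] for i in range(2)] == A
```

For this A the symmetric part is [[1, 7/2], [7/2, 3]] and the skew-symmetric part is [[0, −3/2], [3/2, 0]]; by the theorem, no other such pair sums to A.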
EXERCISE SET 2.1
2.1.1. Prove that there are no matrices A and B for which the equation [A, B] = I is valid. Hint: just show that the sum of the elements of the principal diagonal of the matrix [A, B] is equal to 0.
2.1.2. Let A be a diagonal matrix whose diagonal entries are all different. Let B be a matrix such that AB = BA. Prove that B is diagonal.
2.1.3. Find all matrices A ∈ M₂(ℝ) with the property that A² = O.
2.1.4.If we interchange rows j and k of a matrix A, what changes does this imply in the matrix AB?
2.1.5.If we interchange columns j and k of a matrix A, what changes does this imply in the matrix AB?
2.1.6. If we add a times row k to row j in the matrix A, what changes does this imply in the matrix AB?
2.1.7. Find A³, where A is the given 3×3 matrix with entries 0 and 1.
2.1.8.–2.1.12. Find the indicated power (Aⁿ, A⁷, A⁸, and so on) of each of the given matrices.
2.1.13. Prove that for any square matrix A the product AA^t is a symmetric matrix.
2.1.14. Prove that a product AB of two symmetric matrices A, B is a symmetric matrix if and only if AB = BA.
2.1.15. Prove that a product of two skew-symmetric matrices A, B is a symmetric matrix if and only if AB = BA.
2.1.16. Prove that for every pair of symmetric (respectively, skew-symmetric) matrices A, B the commutator [A, B] is a skew-symmetric matrix.
2.1.17. Prove that for every pair of symmetric matrices A, B the product ABAB···ABA is also a symmetric matrix.
2.1.18. Let A ∈ M_n(ℝ). A matrix A is called nilpotent if A^k = O for some positive integer k. Let A, B be nilpotent matrices. Prove that AB = BA implies the nilpotency of A + B.
2.1.19. Let A ∈ M_n(ℝ). A matrix A is called nilpotent if A^k = O for some positive integer k. The minimal such number k is called the nilpotency class of A. Prove that every triangular matrix whose diagonal entries are all zero is nilpotent.
2.1.20. Let A be a nilpotent matrix. Prove that the matrices I - A and I + A are invertible.
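A worked instance of Exercise 2.1.20 (the matrix below is our own choice, not the book's): for a nilpotent A of class k, the finite geometric series I + A + A² + ⋯ + A^{k−1} is an explicit inverse of I − A, since multiplying it by I − A telescopes to I − A^k = I.

```python
def mul(X, Y):
    return [[sum(X[i][m] * Y[m][j] for m in range(3)) for j in range(3)]
            for i in range(3)]

A = [[0, 1, 2], [0, 0, 3], [0, 0, 0]]   # strictly upper triangular, so A^3 = O
A2 = mul(A, A)
assert mul(A2, A) == [[0] * 3 for _ in range(3)]   # nilpotent of class 3

I = [[1 if i == j else 0 for j in range(3)] for i in range(3)]
# Geometric-series inverse: (I - A)^(-1) = I + A + A^2
inv = [[I[i][j] + A[i][j] + A2[i][j] for j in range(3)] for i in range(3)]
IminusA = [[I[i][j] - A[i][j] for j in range(3)] for i in range(3)]
assert mul(IminusA, inv) == I and mul(inv, IminusA) == I
```

The same telescoping with alternating signs, I − A + A² − ⋯, inverts I + A, which is the other half of the exercise.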
2.2 PERMUTATIONS OF FINITE SETS
There is a key numerical characteristic of a matrix called the determinant of the matrix which requires some ideas from permutations of finite sets. The properties of determinants are therefore closely connected with properties of permutations and, for this reason, in this section we shall study some basic properties of permutations. The properties that we discuss now will often be used in the next section when we study determinants.
Let A be a finite set, say A = {a_1, a_2, ..., a_n}. In the case of sets, the order in which the elements are written is not important, as we saw in Definition 1.1.1. However, there are cases when the order of the elements is important. One such case arose when we considered the Cartesian product. As we saw, the elements of the Cartesian nth power of a set are ordered n-tuples. This means, for example, that the n-tuples (a_1, a_2, a_3, ..., a_n) and (a_2, a_1, a_3, ..., a_n) are different. An n-tuple consisting of all elements of a finite set A = {a_1, a_2, ..., a_n} that contains each element of A once and only once is called a permutation of the elements a_1, a_2, ..., a_n. The elements of an n-tuple appear in some order: the tuple has a first element (unless it is empty), a second element (unless its length is less than 2), and so on. For example, if A = {1, 2, 3}, then (1, 2, 3) and (3, 2, 1) are two different ways to list the elements of A in some order; they constitute two different permutations of the numbers 1, 2, 3.
We have already used the term permutation to mean a bijective transformation of a set. The term is also widely used in combinatorics, with a different meaning. Such overloading often happens in mathematics and can cause confusion. In this case, however, the two concepts are closely related, and the context should make clear which meaning is in use.
To justify some of these remarks, let A be a set with n elements, say A = {a_1, a_2, ..., a_n}, and let π denote a permutation of A. For 1 ≤ j ≤ n, let π(a_j) = a_k, where k depends on j. Then π induces a mapping π_0 : {1, 2, ..., n} → {1, 2, ..., n} defined by
π_0(j) = k whenever π(a_j) = a_k.
Thus, π(a_j) = a_{π_0(j)} for all j such that 1 ≤ j ≤ n. The mapping π_0 is a permutation of {1, ..., n}. To see this, note that if π_0(j) = π_0(i) then π(a_j) = a_{π_0(j)} = a_{π_0(i)} = π(a_i). However, π is a permutation of A, so this implies that a_j = a_i and hence j = i. Thus, π_0 is injective and hence is bijective by Corollary 1.2.10. Conversely, every permutation σ of {1, 2, ..., n} gives rise to a permutation φ_σ of A. We simply define φ_σ(a_j) = a_{σ(j)}, for each j such that 1 ≤ j ≤ n. Then, if φ_σ(a_j) = φ_σ(a_i), we have a_{σ(j)} = a_{σ(i)} and hence σ(j) = σ(i). Since σ is a permutation of {1, 2, ..., n}, it follows that j = i and hence φ_σ is injective. Corollary 1.2.10 further implies that φ_σ is bijective and hence a permutation of A. Furthermore, if π ≠ φ are two permutations of A, then there is an index r such that π(a_r) ≠ φ(a_r). It follows that π_0(r) ≠ φ_0(r) and therefore π_0 ≠ φ_0.
Consequently, every permutation π of the set A corresponds to precisely one permutation π_0 of {1, 2, ..., n}, and the mapping π ↦ π_0 is bijective.
Every algebraic permutation π of the set A is equivalent to a combinatorial permutation, since informally both involve some type of listing of the elements of {a_1, a_2, ..., a_n}. More formally, let σ denote a mapping from {1, ..., n} to itself and let (a_{σ(1)}, a_{σ(2)}, ..., a_{σ(n)}) be a combinatorial permutation of the elements a_1, a_2, ..., a_n. This implies that σ is a bijection and hence a permutation of the set {1, ..., n}. By our analysis above, this means that the transformation π of A defined by the rule π(a_j) = a_{σ(j)}, where 1 ≤ j ≤ n, is a bijection and hence an (algebraic) permutation of A. Thus, every combinatorial permutation of A gives rise to an algebraic permutation of A.
Conversely, let π be a permutation of the set A. Then π(a_j) is an element of A and hence π(a_j) = a_{σ(j)}, where 1 ≤ j ≤ n and σ is a mapping from {1, ..., n} to itself. Since π is an injective mapping, the elements a_{σ(1)}, a_{σ(2)}, ..., a_{σ(n)} are distinct. It follows that {a_{σ(1)}, a_{σ(2)}, ..., a_{σ(n)}} = {a_1, a_2, ..., a_n}. Hence (a_{σ(1)}, a_{σ(2)}, ..., a_{σ(n)}) is a combinatorial permutation of the elements a_1, a_2, ..., a_n. Thus, every algebraic permutation gives rise to a combinatorial permutation.
For our purposes we do not need to focus on the nature of the elements of the given set A. When we study permutations of the elements of A we actually only need to work with their indices, which means that we only work with the set {1, 2, ... , n}.
These arguments show that in order to study permutations of the set A = {a_1, a_2, ..., a_n}, we can study permutations of {1, 2, ..., n} (notice that the two sets have the same number of elements). Earlier we used the notation S(A) for the set of permutations of A. However, the notation S({1, 2, ..., n}) is cumbersome, so we shall instead use the notation S_n for the set of all permutations of the
set {1, 2, ..., n}, which is in accord with standard usage. If π ∈ S_n, then we will say that π is a permutation of degree n. Every permutation of degree n can conveniently be written as a matrix consisting of two rows, where the first row has the entries 1, 2, ..., n and π(m) is written in the second row under the entry m in the first row. The permutation π can be written as
( 1      2     ...   n    )
( π(1)   π(2)  ...   π(n) )
which we will call the tabular form of the permutation. We note that this is just a notational device; we shall not be adding or multiplying such tabular forms in the manner usually reserved for matrices. Since π is a permutation of the set {1, 2, ..., n}, we see that
{1, 2, ..., n} = {π(1), π(2), ..., π(n)}.
Thus the second row of a tabular form is a permutation of the numbers 1, 2, ... , n. It is not necessary to write all elements of the first row in the natural order from 1 to n, although this is often the way such permutations are written. Sometimes it is convenient to write the first row in a different order. What is most important is that every element of the second row is the image of the corresponding element
of the first row situated just above. For example,

( 1 2 3 4 5 6 7 8 9 )       ( 2 5 7 1 9 3 6 4 8 )
( 4 9 1 7 8 3 5 2 6 )  and  ( 9 8 5 4 6 1 3 7 2 )
are the same permutation. For beginners, in order to better understand permutations, it may be worthwhile to write the permutation with arrows connecting each element of the first row to its image in the second row, as in
1 2 3 4 5 6 7 8 9
↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓
4 9 1 7 8 3 5 2 6
This way of writing a permutation is useful only at the beginning, and one soon feels no need for it.
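Since a tabular form is just a function written out column by column, it is naturally modeled as a dictionary. The following Python sketch (our own illustration) checks that the two nine-element tables above define the same permutation:

```python
# Two tabular forms of the same permutation: top rows in different orders
top1    = [1, 2, 3, 4, 5, 6, 7, 8, 9]
bottom1 = [4, 9, 1, 7, 8, 3, 5, 2, 6]
top2    = [2, 5, 7, 1, 9, 3, 6, 4, 8]
bottom2 = [9, 8, 5, 4, 6, 1, 3, 7, 2]

# A tabular form is a mapping: each top entry goes to the entry below it
pi1 = dict(zip(top1, bottom1))
pi2 = dict(zip(top2, bottom2))
assert pi1 == pi2   # two tabular forms, one permutation
```

Reordering the columns changes neither the dictionary nor the permutation, which is exactly the point of the example.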
We will multiply permutations by using the general rule of multiplication of mappings, namely composition of functions, introduced in Section 1.3. According to that rule, the product of the two permutations π and σ is the permutation

π ∘ σ = ( 1         2         ...   n       )
        ( π(σ(1))   π(σ(2))   ...   π(σ(n)) )
Thus, to multiply two permutations in tabular form, in the first row of the table corresponding to the permutation σ we choose an arbitrary element i. We locate σ(i) in the second row of σ corresponding to i and then find this number σ(i) in
the first row of the table corresponding to the permutation π. In the second row of the table corresponding to π, just under the number σ(i), we find the number π(σ(i)). This is the image of i under the product permutation π ∘ σ. A diagram conveniently illustrates this process:
1         2         ...   n
↓         ↓               ↓
σ(1)      σ(2)      ...   σ(n)
↓         ↓               ↓
π(σ(1))   π(σ(2))   ...   π(σ(n))
Given a set A, we next obtain some elements of S(A).

2.2.1. Lemma. Let A be a set, let f be a fixed but arbitrary element of S(A), and let g ∈ S(A) be arbitrary. The following mappings are permutations of the set S(A):
(i) Ω_1 : g ↦ g^{-1};
(ii) Ω_2 : g ↦ f ∘ g;
(iii) Ω_3 : g ↦ g ∘ f.
Proof. (i) Note that if g ∈ S(A), then g has an inverse which is also an element of S(A), so that Ω_1 is a mapping from S(A) to itself. We show that Ω_1 is injective and, to this end, suppose that there are permutations g_1, g_2 ∈ S(A) such that Ω_1(g_1) = Ω_1(g_2). Then g_1^{-1} = g_2^{-1}. Since (g^{-1})^{-1} = g, it follows that g_1 = (g_1^{-1})^{-1} = (g_2^{-1})^{-1} = g_2, and this implies that Ω_1 is injective. Also, if g ∈ S(A), then Ω_1(g^{-1}) = (g^{-1})^{-1} = g, so that Ω_1 is surjective. Thus, Ω_1 is bijective.
(ii) Note that when f, g ∈ S(A), then f ∘ g ∈ S(A), so that Ω_2 is a mapping from S(A) to itself. To prove that Ω_2 is injective, let g_1, g_2 ∈ S(A) and suppose that Ω_2(g_1) = Ω_2(g_2). Then we have f ∘ g_1 = f ∘ g_2. Since f^{-1} exists, we may multiply both sides of this equation by f^{-1}. We have
g_1 = ε_A ∘ g_1 = (f^{-1} ∘ f) ∘ g_1 = f^{-1} ∘ (f ∘ g_1) = f^{-1} ∘ (f ∘ g_2) = (f^{-1} ∘ f) ∘ g_2 = ε_A ∘ g_2 = g_2,
which shows that Ω_2 is injective. Furthermore, the equation Ω_2(f^{-1} ∘ g) = f ∘ (f^{-1} ∘ g) = g implies that Ω_2 is surjective. Hence Ω_2 is bijective.
(iii) A similar proof to that in (ii) shows that the mapping Ω_3 : g ↦ g ∘ f is also bijective.
Permutations interchanging just two integers from the set {1, 2, ... , n} and leaving all others fixed have special significance.
2.2.2. Definition. The permutation τ of the set A is called a transposition (more precisely, the transposition of the symbols k, t ∈ A) if τ(k) = t, τ(t) = k, and τ(j) = j for all other elements j ∈ A.
The transposition of k and t will be denoted by τ_{kt} or (k t). Thus, a transposition is a permutation that interchanges two selected symbols and leaves all other symbols fixed.
Consider τ_{ij}² = τ_{ij} ∘ τ_{ij}. We have
τ_{ij}(τ_{ij}(i)) = τ_{ij}(j) = i and τ_{ij}(τ_{ij}(j)) = τ_{ij}(i) = j.
Also, if k ∉ {i, j}, then
τ_{ij}(τ_{ij}(k)) = τ_{ij}(k) = k.
Thus, τ_{ij}²(k) = k for all k ∈ A, so that τ_{ij}² = e is the identity permutation.
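The same three-case check can be run mechanically. A short Python sketch (the function name `transposition` is our own):

```python
def transposition(k, t, A):
    # The transposition tau_{kt} on the set A, as a dictionary
    tau = {j: j for j in A}      # fix every symbol...
    tau[k], tau[t] = t, k        # ...then swap k and t
    return tau

A = {1, 2, 3, 4, 5}
tau = transposition(2, 5, A)

# tau composed with itself is the identity permutation e
square = {j: tau[tau[j]] for j in A}
assert square == {j: j for j in A}
```

In other words, every transposition is its own inverse, a fact used repeatedly when permutations are factored into transpositions later on.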
We recall that the number of different permutations of the elements of a set A consisting of n elements is equal to n! = 1 · 2 · 3 · ... · (n − 1) · n. So we have the following result.
2.2.3. Theorem. |S_n| = 1 · 2 · 3 · ... · (n − 1) · n = n!.
Proof. The tabular form of a permutation π ∈ S_n consists of two rows. We can suppose here that the upper row of the tabular form of π is 1, 2, ..., n in this order. The lower row of π is then a permutation of the numbers 1, 2, ..., n. Hence, the order of S_n is equal to the number of different permutations of the numbers 1, 2, ..., n, and this is n!.
We now consider all differences (t − k), where 1 ≤ k < t ≤ n, and let V_n denote the product of all such expressions. Then

V_n = ∏_{1≤k<t≤n} (t − k).

If π ∈ S_n, then let

π(V_n) = ∏_{1≤k<t≤n} (π(t) − π(k)).
||
For every pair t, k, where 1 :::; k < t |
:::; n, there |
are natural numbers m, j |
such |
that t = Tr(m) and k = Tr(j) so that t - k = Tr(m)- Tr(j). Two cases now occur:
(i) |
If m > j, |
then (t - k) is a factor in the decomposition of Tr(Vn). |
(ii) |
If m < j, |
then Tr(m) = t > k = Tr(j). |
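The two cases say that π(V_n) contains the same differences as V_n, each either as is or with its sign flipped; so |π(V_n)| = V_n for every π, and the quotient π(V_n)/V_n is ±1. A brief Python check of this (anticipating the sign of a permutation, which this argument is building toward):

```python
from itertools import permutations
from math import prod

n = 4
# V_n = product of (t - k) over 1 <= k < t <= n; V_4 = 12
V = prod(t - k for t in range(1, n + 1) for k in range(1, t))

# For every permutation pi, pi(V_n) carries the same factors up to sign
for p in permutations(range(1, n + 1)):
    pi = dict(zip(range(1, n + 1), p))
    piV = prod(pi[t] - pi[k] for t in range(1, n + 1) for k in range(1, t))
    assert abs(piV) == V
```

Whether the quotient is +1 or −1 depends only on how many pairs fall into case (ii), which is the parity count developed in the rest of the section.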