L E C T U R E 7
Transformation of coordinates at transition to a new basis
In a linear space a basis can be chosen in more than one way, and therefore the rule for changing the coordinates of an element of the space at transition from one basis to another is of practical interest.
Let there be two bases $e_1, e_2, \dots, e_n$ (old) and $e_1', e_2', \dots, e_n'$ (new) in an $n$-dimensional linear space $V$, and let the relations expressing each vector of the new basis through the vectors of the old basis be the following:

$$e_1' = a_{11}e_1 + a_{21}e_2 + \dots + a_{n1}e_n,$$
$$e_2' = a_{12}e_1 + a_{22}e_2 + \dots + a_{n2}e_n, \qquad (*)$$
$$\dots\dots\dots\dots\dots\dots\dots\dots$$
$$e_n' = a_{1n}e_1 + a_{2n}e_2 + \dots + a_{nn}e_n.$$

The matrix

$$A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \dots & \dots & \dots & \dots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{pmatrix}$$

is called the transition matrix from the old basis to the new one.
Theorem. Every transition matrix $A$ is regular, i.e. $\det A \neq 0$.
Proof: We prove the theorem for $n = 2$. Let $\{e_1, e_2\}$ and $\{e_1', e_2'\}$ be the "old" and "new" bases of a linear space $V$. Let

$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$$

be the transition matrix from $\{e_1, e_2\}$ to $\{e_1', e_2'\}$, i.e.

$$e_1' = a_{11}e_1 + a_{21}e_2, \qquad e_2' = a_{12}e_1 + a_{22}e_2. \qquad (**)$$

Assume the contrary: $\det A = 0$, i.e. $a_{11}a_{22} - a_{12}a_{21} = 0$. Suppose that $a_{21} \neq 0$ or $a_{22} \neq 0$ (if both $a_{21}$ and $a_{22}$ are equal to zero, then $e_1' = a_{11}e_1$ and $e_2' = a_{12}e_1$ are obviously linearly dependent). Multiply both parts of the first equation of (**) by $a_{22}$, and multiply both parts of the second equation of (**) by $-a_{21}$. Then add the obtained expressions. We have

$$a_{22}e_1' - a_{21}e_2' = (a_{11}a_{22} - a_{12}a_{21})e_1 = 0,$$

i.e. $e_1'$ and $e_2'$ are linearly dependent, contradicting the hypothesis that they form a basis.
Take an arbitrary vector $x$. Let $x_1, x_2, \dots, x_n$ be the coordinates of this vector in the old basis, and $x_1', x_2', \dots, x_n'$ be its coordinates in the new basis.
Theorem. The coordinates $x_1, \dots, x_n$ and $x_1', \dots, x_n'$ are connected by the following formulas:

$$x_1 = a_{11}x_1' + a_{12}x_2' + \dots + a_{1n}x_n',$$
$$x_2 = a_{21}x_1' + a_{22}x_2' + \dots + a_{2n}x_n',$$
$$\dots\dots\dots\dots\dots\dots\dots\dots$$
$$x_n = a_{n1}x_1' + a_{n2}x_2' + \dots + a_{nn}x_n',$$

which are called the formulas of transformation of coordinates.
It is easy to see that the columns of the matrix $A$ contain the coefficients in the formulas (*) of transition from the old basis to the new one, while the rows of this matrix contain the coefficients in the formulas expressing the old coordinates through the new ones.
Proof. By (*) we have

$$x = x_1'e_1' + x_2'e_2' + \dots + x_n'e_n' = x_1'(a_{11}e_1 + \dots + a_{n1}e_n) + \dots + x_n'(a_{1n}e_1 + \dots + a_{nn}e_n).$$

On the other hand, $x = x_1e_1 + x_2e_2 + \dots + x_ne_n$. Consequently,

$$(x_1 - a_{11}x_1' - \dots - a_{1n}x_n')e_1 + \dots + (x_n - a_{n1}x_1' - \dots - a_{nn}x_n')e_n = 0.$$

We have a linear combination of linearly independent elements that is equal to zero. It must be trivial. Then we obtain:

$$x_i = a_{i1}x_1' + a_{i2}x_2' + \dots + a_{in}x_n', \qquad i = 1, 2, \dots, n.$$

In matrix form: if $X = (x_1, x_2, \dots, x_n)^T$ and $X' = (x_1', x_2', \dots, x_n')^T$, then $X = AX'$.
Example. Let a vector $x = x_1e_1 + x_2e_2$ be given. Express this vector through the new basis $\{e_1', e_2'\}$ if $e_1' = a_{11}e_1 + a_{21}e_2$ and $e_2' = a_{12}e_1 + a_{22}e_2$.

Solution:

Method I. Write the transition matrix from the old basis to the new one:

$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}.$$

The rows of the matrix give the coefficients in the formulas of transformation of coordinates:

$$x_1 = a_{11}x_1' + a_{12}x_2', \qquad x_2 = a_{21}x_1' + a_{22}x_2'.$$

Since $x_1$ and $x_2$ are known, by solving this system of equations we find $x_1'$ and $x_2'$.

Method II. Since $A$ is regular, the old basis vectors $e_1$ and $e_2$ can be expressed through $e_1'$ and $e_2'$. Substituting these expressions into $x = x_1e_1 + x_2e_2$ and collecting the coefficients of $e_1'$ and $e_2'$, we obtain the required decomposition. Consequently, $x = x_1'e_1' + x_2'e_2'$.
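As a computational illustration of Method I, here is a minimal sketch in Python using numpy; the transition matrix and the old coordinates below are made-up values chosen only for the demonstration.

```python
import numpy as np

# Hypothetical transition matrix A: its columns are the coordinates of the
# new basis vectors e1', e2' in the old basis {e1, e2} (made-up values).
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])

# A transition matrix must be regular (see the theorem above): det A != 0.
assert not np.isclose(np.linalg.det(A), 0.0)

# Coordinates of x in the old basis (made-up values).
X_old = np.array([3.0, 5.0])

# Formulas of transformation of coordinates in matrix form: X = A X'.
# To pass from old coordinates to new ones, solve the linear system.
X_new = np.linalg.solve(A, X_old)
print(X_new)   # coordinates x1', x2' in the new basis

# Check: returning to the old basis reproduces the original coordinates.
assert np.allclose(A @ X_new, X_old)
```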
Example. The system of coordinates $xOy$ has been turned around the origin of coordinates by an angle $\alpha$. Express the coordinates of a vector $a = xi + yj$ in the new system through its coordinates in the old system.
Solution:

Decompose the new orts $i'$ and $j'$ on the orts $i$ and $j$:

$$i' = \cos\alpha \cdot i + \sin\alpha \cdot j,$$
$$j' = -\sin\alpha \cdot i + \cos\alpha \cdot j.$$

Since the coordinates of $i'$ and $j'$ form the columns of the transition matrix, the matrix of transition from the old basis $\{i, j\}$ to the new basis $\{i', j'\}$ is the following:

$$A = \begin{pmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{pmatrix}.$$

Then we obtain

$$x = x'\cos\alpha - y'\sin\alpha,$$
$$y = x'\sin\alpha + y'\cos\alpha,$$

i.e., solving this system for the new coordinates,

$$x' = x\cos\alpha + y\sin\alpha,$$
$$y' = -x\sin\alpha + y\cos\alpha.$$
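A quick numerical sanity check of these formulas; the angle and the vector are arbitrary made-up values.

```python
import numpy as np

alpha = np.pi / 6   # made-up rotation angle
x, y = 2.0, 1.0     # made-up old coordinates of the vector a

# Transition matrix from {i, j} to the rotated basis {i', j'}.
A = np.array([[np.cos(alpha), -np.sin(alpha)],
              [np.sin(alpha),  np.cos(alpha)]])

# New coordinates from the derived formulas.
x_new = x * np.cos(alpha) + y * np.sin(alpha)
y_new = -x * np.sin(alpha) + y * np.cos(alpha)

# They must satisfy X = A X' (old coordinates through new ones).
assert np.allclose(A @ np.array([x_new, y_new]), np.array([x, y]))
```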
Subspace of a linear space
A non-empty set $L$ formed of elements of a linear space $V$ is called a subspace of the linear space $V$ if for all $x, y \in L$ and every number $\alpha$ we have $x + y \in L$ and $\alpha x \in L$.

Remark. Obviously, the set $L$ is a linear space itself since all the axioms of a linear space hold in it.
Example. The set of all vectors that are parallel to the same plane is a subspace of the linear space of all geometric vectors.
Example. Determine whether the set $L$ of square matrices of the $n$-th order with zero first row in the linear space $M_n$ of all square matrices of the $n$-th order is a linear subspace, and if yes, then find its dimension.

Solution:

Indeed, if we take two matrices $A, B \in L$ with zero first row, then the sum $A + B$ and the product $\alpha A$ by any number $\alpha$ again have zero first row, i.e. $A + B \in L$ and $\alpha A \in L$. Thus, $L$ is a subspace of $M_n$. Obviously, $\dim L = n(n - 1) = n^2 - n$: a basis is formed by the matrices having 1 in one of the positions of the last $n - 1$ rows and zeros elsewhere.
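A small sketch checking the closure properties numerically for randomly generated matrices; the order $n = 3$ is an arbitrary choice.

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)

def zero_first_row(M):
    """True if the matrix has a zero first row."""
    return np.allclose(M[0], 0.0)

# Two random matrices of the set L: random entries, first row zeroed out.
A = rng.standard_normal((n, n)); A[0] = 0.0
B = rng.standard_normal((n, n)); B[0] = 0.0

# Closure under addition and under multiplication by a number.
assert zero_first_row(A + B)
assert zero_first_row(2.5 * A)

# Dimension count: the free entries occupy n - 1 rows of n entries each.
print((n - 1) * n)   # = n**2 - n
```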
Example. Determine whether the set $L$ of singular square matrices of the 2-nd order (i.e. having zero determinant) in the linear space $M_2$ of all square matrices of the 2-nd order is a linear subspace.

Solution:

Let

$$A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.$$

Obviously, $A \in L$ and $B \in L$ since $\det A = 0$, $\det B = 0$. But

$$\det(A + B) = \det\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = 1 \neq 0,$$

i.e. $A + B \notin L$, and $L$ is not a subspace of $M_2$.
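The counterexample is easy to verify numerically; this sketch uses the two matrices written above.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 0.0]])
B = np.array([[0.0, 0.0],
              [0.0, 1.0]])

# Both matrices are singular ...
print(np.linalg.det(A), np.linalg.det(B))   # 0.0 0.0
# ... but their sum is not, so the set is not closed under addition.
print(np.linalg.det(A + B))                 # 1.0
```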
If $x, y, z, \dots, u$ are vectors of a linear space $V$, then all vectors $\alpha x + \beta y + \gamma z + \dots + \lambda u$, where $\alpha, \beta, \gamma, \dots, \lambda$ are all possible real numbers, form a subspace of the space $V$. The set of all linear combinations of the vectors $x, y, z, \dots, u$ is called a linear hull of the vectors $x, y, z, \dots, u$ and is denoted by $L(x, y, z, \dots, u)$.
Example. Find the dimension and a basis of the linear hull $L = L(a_1, a_2, a_3)$ of vectors $a_1$, $a_2$, $a_3$ in the space $\mathbb{R}^3$, where $a_3 = a_1 + a_2$ and $a_1$, $a_2$ are not proportional.

Solution:

Obviously, $a_3 = a_1 + a_2$, i.e. these vectors are linearly dependent. Since $a_1$ and $a_2$ are not proportional, the vectors $a_1$ and $a_2$ are linearly independent. Then $a_1$ and $a_2$ form a basis of $L$, and $\dim L = 2$.
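In coordinates, the dimension of a linear hull is the rank of the matrix whose rows are the given vectors; a sketch with made-up vectors satisfying $a_3 = a_1 + a_2$:

```python
import numpy as np

# Made-up vectors in R^3 with a3 = a1 + a2, so the hull is 2-dimensional.
a1 = np.array([1.0, 0.0, 2.0])
a2 = np.array([0.0, 1.0, 1.0])
a3 = a1 + a2

M = np.vstack([a1, a2, a3])
print(np.linalg.matrix_rank(M))   # 2 = dim L(a1, a2, a3)
```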
If $L$ is a subspace of a linear space $V$, then $\dim L \le \dim V$.
Let $L_1$ and $L_2$ be subspaces of a linear space $V$.

The union of $L_1$ and $L_2$ is the set of all elements $x$ such that $x \in L_1$ or $x \in L_2$. The union of $L_1$ and $L_2$ is denoted by $L_1 \cup L_2$.

The intersection of $L_1$ and $L_2$ is the set of all elements simultaneously belonging to $L_1$ and $L_2$. The intersection of $L_1$ and $L_2$ is denoted by $L_1 \cap L_2$.

The sum of $L_1$ and $L_2$ is the set of all elements of the kind $x + y$, where $x \in L_1$ and $y \in L_2$. The sum of $L_1$ and $L_2$ is denoted by $L_1 + L_2$.

The direct sum of $L_1$ and $L_2$ is the set of all elements of the kind $x + y$, where $x \in L_1$ and $y \in L_2$, in the case when $L_1 \cap L_2 = \{0\}$. The direct sum of $L_1$ and $L_2$ is denoted by $L_1 \oplus L_2$.
Theorem. Both the intersection $L_1 \cap L_2$ and the sum $L_1 + L_2$ of subspaces $L_1$ and $L_2$ are subspaces of $V$.

Theorem. The dimension of the sum of subspaces $L_1$ and $L_2$ is equal to

$$\dim(L_1 + L_2) = \dim L_1 + \dim L_2 - \dim(L_1 \cap L_2).$$
Proof. Let the subspace $L_1 \cap L_2$ have a basis $e_1, \dots, e_k$, i.e. $\dim(L_1 \cap L_2) = k$. Complete this basis by elements $f_1, \dots, f_l$ to a basis in $L_1$ and by elements $g_1, \dots, g_m$ to a basis in $L_2$, so that $\dim L_1 = k + l$ and $\dim L_2 = k + m$. Then every element of $L_1 + L_2$ can be decomposed on the system of elements

$$e_1, \dots, e_k, f_1, \dots, f_l, g_1, \dots, g_m.$$

Show that this system is linearly independent in $L_1 + L_2$. Consider some linear combination of these elements that is equal to zero:

$$\alpha_1 e_1 + \dots + \alpha_k e_k + \beta_1 f_1 + \dots + \beta_l f_l + \gamma_1 g_1 + \dots + \gamma_m g_m = 0. \qquad (**)$$

Observe that by our construction the element $z = \gamma_1 g_1 + \dots + \gamma_m g_m$ belongs to $L_2$. But on the other hand, by (**), $z = -(\alpha_1 e_1 + \dots + \alpha_k e_k + \beta_1 f_1 + \dots + \beta_l f_l) \in L_1$. This means that $z \in L_1 \cap L_2$, i.e. $z = \delta_1 e_1 + \dots + \delta_k e_k$ for some numbers $\delta_i$. Since $e_1, \dots, e_k, g_1, \dots, g_m$ is a basis of $L_2$, the equality $\delta_1 e_1 + \dots + \delta_k e_k - \gamma_1 g_1 - \dots - \gamma_m g_m = 0$ implies that all $\delta_i = 0$ and consequently all $\gamma_j = 0$ in (**). And since $e_1, \dots, e_k, f_1, \dots, f_l$ is a basis of $L_1$, all $\alpha_i = 0$ and $\beta_j = 0$, and the linear combination (**) is trivial. Consequently, $e_1, \dots, e_k, f_1, \dots, f_l, g_1, \dots, g_m$ is a linearly independent system of elements. Then it is a basis in $L_1 + L_2$. Thus,

$$\dim(L_1 + L_2) = k + l + m = (k + l) + (k + m) - k = \dim L_1 + \dim L_2 - \dim(L_1 \cap L_2).$$
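A numerical illustration of the theorem; this is a sketch with two made-up subspaces of $\mathbb{R}^3$ given by spanning vectors, whose intersection is known by construction.

```python
import numpy as np

# Made-up subspaces of R^3: L1 = span{(1,0,0), (0,1,0)},
# L2 = span{(0,1,0), (0,0,1)}, so L1 ∩ L2 = span{(0,1,0)}.
L1 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]]).T   # columns span L1
L2 = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]]).T   # columns span L2

dim_L1 = np.linalg.matrix_rank(L1)                    # 2
dim_L2 = np.linalg.matrix_rank(L2)                    # 2
dim_sum = np.linalg.matrix_rank(np.hstack([L1, L2]))  # 3: L1 + L2 = R^3

# The intersection is known directly here: it is spanned by (0, 1, 0).
dim_int = 1

# dim(L1 + L2) = dim L1 + dim L2 - dim(L1 ∩ L2)
assert dim_sum == dim_L1 + dim_L2 - dim_int
```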