Patterson, Bailey – Solid State Physics: Introduction to Theory


650 Appendices

elements are given by υ_ij (in the usual notation), then the eigenvalues of V are ω², determined by (B.4). V is a real symmetric matrix; hence it is Hermitian; hence its eigenvalues must be real.

Let us suppose that the eigenvalues ω² determined by (B.4) are denoted by Ω_k. There will be the same number of eigenvalues as there are coordinates x_i. Let a_jk be the value of a_j, with normalization determined by (B.7), when the system is in the mode corresponding to the kth eigenvalue Ω_k. In this situation we can write

\[
\sum_j \upsilon_{ij}\, a_{jk} = \Omega_k \sum_j \delta_{ij}\, a_{jk} . \tag{B.5}
\]

Let A stand for the matrix with elements a_jk and Ω the matrix with elements Ω_lk = Ω_k δ_lk. Since Ω_k ∑_j δ_ij a_jk = Ω_k a_ik = a_ik Ω_k = ∑_l a_il Ω_k δ_lk = ∑_l a_il Ω_lk, we can write (B.5) in matrix notation as

\[
VA = A\Omega . \tag{B.6}
\]

It can be shown [2] that the matrix A that is constructed from the eigenvectors is an orthogonal matrix, so that

\[
\tilde{A}A = A\tilde{A} = I . \tag{B.7}
\]

Ã means the transpose of A. Combining (B.6) and (B.7) we have

 

\[
\tilde{A}VA = \Omega . \tag{B.8}
\]

This equation shows how V is diagonalized by the use of the matrix that is constructed from the eigenvectors.

We still must indicate how the new eigenvectors are related to the old coordinates. If a column matrix a is constructed from the aj as defined by (B.3), then the eigenvectors E (also a column vector, each element of which is an eigenvector) are defined by

\[
E = \tilde{A}a , \tag{B.9a}
\]

or

\[
a = AE . \tag{B.9b}
\]

That (B.9) does define the eigenvectors is easy to see because substituting (B.9b) into the Hamiltonian reduces the Hamiltonian to diagonal form. The kinetic energy is already diagonal, so we need consider only the potential energy

\[
\sum_{i,j} \upsilon_{ij}\, a_i a_j = \tilde{a}Va = \tilde{E}\tilde{A}VAE = \tilde{E}\Omega E
= \sum_{j,k} \tilde{E}_j\, \Omega_{jk} E_k
= \sum_{j,k} \tilde{E}_j\, \Omega_k \delta_{jk} E_k
= \sum_j \omega_j^2 E_j^2 ,
\]

which tells us that the substitution reduces V to diagonal form. For our purposes, the essential thing is to notice that a substitution of the form (B.9) reduces the Hamiltonian to a much simpler form.

Normal Coordinates 651

An example should clarify these ideas. Suppose the eigenvalue condition yielded

 

 

 

\[
\det\begin{pmatrix} 1-\omega^2 & 2 \\ 2 & 3-\omega^2 \end{pmatrix} = 0 . \tag{B.10}
\]

This implies the two eigenvalues

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

\[
\omega_1^2 = 2 + \sqrt{5} , \tag{B.11a}
\]

\[
\omega_2^2 = 2 - \sqrt{5} . \tag{B.11b}
\]

Equation (B.4) for each of the eigenvalues gives for

 

 

\[
\omega = \omega_1 : \quad a_1 = \frac{2a_2}{1+\sqrt{5}} , \tag{B.12a}
\]

 

 

 

 

 

 

 

 

 

 

 

 

 

and for

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

\[
\omega = \omega_2 : \quad a_1 = \frac{2a_2}{1-\sqrt{5}} . \tag{B.12b}
\]

 

 

 

 

 

 

 

 

 

 

 

 

 

From (B.12) we then obtain the matrix A

 

 

 

 

 

 

 

 

 

 

 

 

 

\[
A = \begin{pmatrix} \dfrac{2N_1}{1+\sqrt{5}} & \dfrac{2N_2}{1-\sqrt{5}} \\[2mm] N_1 & N_2 \end{pmatrix} , \tag{B.13}
\]

 

 

 

 

 

 

where

 

 

 

 

 

 

 

 

 

 

 

 

 

 

\[
(N_1)^{-1} = \left[ \frac{4}{(\sqrt{5}+1)^2} + 1 \right]^{1/2} , \tag{B.14a}
\]

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

and

 

 

 

 

 

 

 

 

 

 

 

 

 

 

\[
(N_2)^{-1} = \left[ \frac{4}{(\sqrt{5}-1)^2} + 1 \right]^{1/2} . \tag{B.14b}
\]

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

The normal coordinates of this system are given by

 

\[
E = \begin{pmatrix} E_1 \\ E_2 \end{pmatrix}
= \begin{pmatrix} \dfrac{2N_1}{1+\sqrt{5}} & N_1 \\[2mm] \dfrac{2N_2}{1-\sqrt{5}} & N_2 \end{pmatrix}
\begin{pmatrix} a_1 \\ a_2 \end{pmatrix} . \tag{B.15}
\]

 

 

 

 

 

 

 

 


Problems

B.1 Show that (B.13) satisfies (B.7).

B.2 Show for A defined by (B.13) that

\[
\tilde{A}\begin{pmatrix} 1 & 2 \\ 2 & 3 \end{pmatrix} A
= \begin{pmatrix} 2+\sqrt{5} & 0 \\ 0 & 2-\sqrt{5} \end{pmatrix} .
\]

 

 

 

 

 

 

This result checks (B.8).
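Readers who want to check the worked example numerically can do so in a few lines. The sketch below (assuming NumPy is available) verifies that the eigenvalues of the matrix V = [[1, 2], [2, 3]] from the eigenvalue condition (B.10) are 2 ± √5 as in (B.11), that the eigenvector matrix is orthogonal as in (B.7), and that it diagonalizes V as in (B.8). Note the column order and signs of A returned by the library may differ from (B.13).

```python
import numpy as np

# Potential-energy matrix of the worked example, from (B.10)
V = np.array([[1.0, 2.0],
              [2.0, 3.0]])

Om, A = np.linalg.eigh(V)   # eigenvalues Omega_k (ascending) and orthogonal A

# (B.11): the eigenvalues are 2 -/+ sqrt(5)
assert np.allclose(np.sort(Om), [2 - np.sqrt(5), 2 + np.sqrt(5)])

# (B.7): A built from the eigenvectors is orthogonal
assert np.allclose(A.T @ A, np.eye(2))

# (B.8): A~ V A = Omega (diagonal)
assert np.allclose(A.T @ V @ A, np.diag(Om))
```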

C Derivations of Bloch’s Theorem

Bloch’s theorem concerns itself with the classifications of eigenfunctions and eigenvalues of Schrödinger-like equations with a periodic potential. It applies equally well to electrons or lattice vibrations. In fact, Bloch’s theorem holds for any wave going through a periodic structure. We start with a simple onedimensional derivation.

C.1 Simple One-Dimensional Derivation^{3,4,5}

This derivation is particularly applicable to the Kronig–Penney model. We will write the Schrödinger wave equation as

\[
\frac{d^2\psi(x)}{dx^2} + U(x)\psi(x) = 0 , \tag{C.1}
\]

 

where U(x) is periodic with period a, i.e.,

\[
U(x + na) = U(x) , \tag{C.2}
\]

with n an integer. Equation (C.1) is a second-order differential equation, so that there are two linearly independent solutions ψ1 and ψ2:

\[
\psi_1'' + U\psi_1 = 0 , \tag{C.3}
\]
\[
\psi_2'' + U\psi_2 = 0 . \tag{C.4}
\]

3 See Ashcroft and Mermin [A.3].

4 See Jones [A.10].

5 See Dekker [A.4].

Derivations of Bloch’s Theorem 653

From (C.3) and (C.4) we can write

ψ2ψ1′′+Uψ2ψ1 = 0 ,

ψ1ψ2′′ +Uψ1ψ2 = 0 . Subtracting these last two equations, we obtain

\[
\psi_2\psi_1'' - \psi_1\psi_2'' = 0 . \tag{C.5}
\]

This last equation is equivalent to writing

 

 

 

 

 

 

\[
\frac{dW}{dx} = 0 , \tag{C.6}
\]

 

 

 

 

 

 

 

 

where

 

 

 

 

 

 

 

 

\[
W = \begin{vmatrix} \psi_1 & \psi_2 \\ \psi_1' & \psi_2' \end{vmatrix} \tag{C.7}
\]

 

is called the Wronskian. For linearly independent solutions, the Wronskian is a constant not equal to zero.

It is easy to prove one result from the periodicity of the potential. By dummy variable change (x) → (x + a) in (C.1) we can write

\[
\frac{d^2\psi(x + a)}{dx^2} + U(x + a)\psi(x + a) = 0 .
\]

The periodicity of the potential implies

\[
\frac{d^2\psi(x + a)}{dx^2} + U(x)\psi(x + a) = 0 . \tag{C.8}
\]

 

 

Equations (C.1) and (C.8) imply that if ψ(x) is a solution, then so is ψ(x + a). Since there are only two linearly independent solutions ψ1 and ψ2, we can write

 

 

 

\[
\psi_1(x + a) = A\psi_1(x) + B\psi_2(x) , \tag{C.9}
\]
\[
\psi_2(x + a) = C\psi_1(x) + D\psi_2(x) . \tag{C.10}
\]

The Wronskian W is a constant ≠ 0, so W(x + a) = W(x), and we can write

 

\[
\begin{vmatrix} A\psi_1 + B\psi_2 & C\psi_1 + D\psi_2 \\ A\psi_1' + B\psi_2' & C\psi_1' + D\psi_2' \end{vmatrix}
= \begin{vmatrix} \psi_1 & \psi_2 \\ \psi_1' & \psi_2' \end{vmatrix}
\begin{vmatrix} A & C \\ B & D \end{vmatrix}
= \begin{vmatrix} \psi_1 & \psi_2 \\ \psi_1' & \psi_2' \end{vmatrix} ,
\]

 

 

or

\[
\begin{vmatrix} A & C \\ B & D \end{vmatrix} = 1 ,
\]


or

\[
AD - BC = 1 . \tag{C.11}
\]

We can now prove that it is possible to choose solutions ψ(x) so that

\[
\psi(x + a) = \lambda\psi(x) , \tag{C.12}
\]

where λ is a constant ≠ 0. We want ψ(x) to be a solution, so that

\[
\psi(x) = \alpha\psi_1(x) + \beta\psi_2(x) , \tag{C.13a}
\]

or

\[
\psi(x + a) = \alpha\psi_1(x + a) + \beta\psi_2(x + a) . \tag{C.13b}
\]

Using (C.9), (C.10), (C.12), and (C.13), we can write

\[
\psi(x + a) = (\alpha A + \beta C)\psi_1(x) + (\alpha B + \beta D)\psi_2(x)
= \lambda\alpha\psi_1(x) + \lambda\beta\psi_2(x) .
\]

In other words, we have a solution of the form (C.12), provided that

\[
\alpha A + \beta C = \lambda\alpha ,
\]

and

\[
\alpha B + \beta D = \lambda\beta . \tag{C.14}
\]

For nontrivial solutions for α and β, we must have

 

 

\[
\begin{vmatrix} A - \lambda & C \\ B & D - \lambda \end{vmatrix} = 0 . \tag{C.15}
\]

 

 

 

Equation (C.15) is equivalent to, using (C.11),

 

 

\[
\lambda + \lambda^{-1} = A + D . \tag{C.16}
\]

If we let λ₊ and λ₋ be the eigenvalues of the matrix with rows (A, C) and (B, D), and use the fact that the trace of a matrix is the sum of its eigenvalues, then we readily find from (C.16) and the trace condition

\[
\lambda_+ + (\lambda_+)^{-1} = A + D ,
\]
\[
\lambda_- + (\lambda_-)^{-1} = A + D , \tag{C.17}
\]

and

\[
\lambda_+ + \lambda_- = A + D .
\]

 


 

 

Equations (C.17) imply that we can write

 

\[
\lambda_+ = (\lambda_-)^{-1} . \tag{C.18}
\]

If we set

 

\[
\lambda_+ = e^{b} , \tag{C.19}
\]

and

 

\[
\lambda_- = e^{-b} , \tag{C.20}
\]

the above implies that we can find linearly independent solutions ψ_i^1 that satisfy

\[
\psi_1^1(x + a) = e^{b}\psi_1^1(x) , \tag{C.21}
\]

and

 

\[
\psi_2^1(x + a) = e^{-b}\psi_2^1(x) . \tag{C.22}
\]

Real b is ruled out by the requirement that the wave functions remain finite (as x → ±∞), so we can write b = ika, where k is real. Dropping the superscripts, we can write

\[
\psi(x + a) = e^{\pm ika}\psi(x) . \tag{C.23}
\]

Finally, we note that if

 

\[
\psi(x) = e^{ikx}u(x) , \tag{C.24}
\]

where

 

\[
u(x + a) = u(x) , \tag{C.25}
\]

then (C.23) is satisfied. Equation (C.23), or (C.24) together with (C.25), are different forms of Bloch’s theorem.
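The one-dimensional argument can be illustrated numerically. The sketch below is a hypothetical example, assuming a piecewise-constant periodic U(x) (so each segment of (C.1) has a closed-form transfer matrix); it builds the matrix that carries (ψ, ψ′) across one period and checks the unimodularity condition (C.11) and the reciprocal-eigenvalue relation (C.18).

```python
import numpy as np

def segment_matrix(U, d):
    """Transfer matrix for psi'' + U*psi = 0 over a constant-U segment of length d.
    Maps (psi, psi') at the left edge to (psi, psi') at the right edge."""
    if U > 0:
        q = np.sqrt(U)     # oscillatory solutions cos(q x), sin(q x)
        return np.array([[np.cos(q*d),     np.sin(q*d)/q],
                         [-q*np.sin(q*d),  np.cos(q*d)]])
    k = np.sqrt(-U)        # growing/decaying solutions cosh(k x), sinh(k x)
    return np.array([[np.cosh(k*d),    np.sinh(k*d)/k],
                     [k*np.sinh(k*d),  np.cosh(k*d)]])

# One period a = d1 + d2 of a Kronig-Penney-like cell (illustrative values):
# U = 4 over length 0.7, then U = -1 over length 0.3
M = segment_matrix(-1.0, 0.3) @ segment_matrix(4.0, 0.7)

# (C.11): the period transfer matrix is unimodular, AD - BC = 1
assert abs(np.linalg.det(M) - 1.0) < 1e-9

# (C.18): the two eigenvalues (Floquet multipliers) satisfy lam_plus * lam_minus = 1
lam = np.linalg.eigvals(M)
assert abs(lam[0]*lam[1] - 1.0) < 1e-9

# |A + D| <= 2 corresponds to lam = exp(+-ika) with real k (an allowed band)
print("trace A + D =", M.trace())
```

When |A + D| exceeds 2 the multipliers are real and reciprocal, corresponding to the exponentially growing/decaying solutions ruled out above by the finiteness requirement.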

C.2 Simple Derivation in Three Dimensions

 

Let

 

\[
H\psi(x_1, \ldots, x_N) = E\psi(x_1, \ldots, x_N) \tag{C.26}
\]

be the usual Schrödinger wave equation. Let Tl be a translation operator that translates the lattice by l1a1 + l2a2 + l3a3, where the li are integers and the ai are the primitive translation vectors of the lattice.

Since the Hamiltonian is invariant with respect to the translations Tl, we have

\[
[H, T_l] = 0 , \tag{C.27}
\]

and

\[
[T_l, T_{l'}] = 0 . \tag{C.28}
\]

Now we know that we can always find simultaneous eigenfunctions of commuting observables, and observables are represented by Hermitian operators. The Tl are unitary rather than Hermitian; fortunately, the same theorem applies to them (we shall not prove this here). Thus we can write

\[
H\psi_{E,l} = E\psi_{E,l} , \tag{C.29}
\]
\[
T_l\psi_{E,l} = t_l\psi_{E,l} . \tag{C.30}
\]

Now certainly we can find a vector k such that

\[
t_l = e^{ik\cdot l} . \tag{C.31}
\]

Further,

\[
\int_{\text{all space}} |\psi(r)|^2\, d\tau
= \int_{\text{all space}} |\psi(r + l)|^2\, d\tau
= |t_l|^2 \int_{\text{all space}} |\psi(r)|^2\, d\tau ,
\]

so that

\[
|t_l|^2 = 1 . \tag{C.32}
\]

This implies that k must be a vector over the real field.

We thus arrive at Bloch’s theorem

\[
T_l\psi(r) = \psi(r + l) = e^{ik\cdot l}\psi(r) . \tag{C.33}
\]

The theorem says we can always choose the eigenfunctions to satisfy (C.33). It does not say the eigenfunction must be of this form. If periodic boundary conditions are applied, the usual restrictions on the k are obtained.

C.3 Derivation of Bloch’s Theorem by Group Theory

The derivation here is relatively easy once the appropriate group theoretic knowledge is acquired. We have already discussed in Chaps. 1 and 7 the needed results from group theory. We simply collect together here the needed facts to establish Bloch’s theorem.

1. It is clear that the group of the Tl is abelian (i.e. all the Tl commute).

2. In an abelian group each element forms a class by itself. Therefore the number of classes is O(G), the order of the group.

3. The number of irreducible representations (of dimension ni) is the number of classes.


 

 

4. ∑_i n_i² = O(G), and thus by the above

\[
n_1^2 + n_2^2 + \cdots + n_{O(G)}^2 = O(G) .
\]

This can be satisfied only if each n_i = 1. Thus the dimensions of the irreducible representations of the Tl are all one.

5. In general,

\[
T_l\psi_i^k = \sum_j A_{ij}^{l,k}\psi_j^k ,
\]

where the A_{ij}^{l,k} are the matrix elements of T_l for the kth representation and the sum over j runs over the dimensionality of the kth representation. The ψ_i^k are the basis functions selected as eigenfunctions of H (which is possible since [H, Tl] = 0). In our case the sum over j is not necessary, and so

\[
T_l\psi^k = A^{l,k}\psi^k .
\]

As before, the A^{l,k} can be chosen to be e^{il·k}. Also, in one dimension we could use the fact that {Tl} is a cyclic group, so that the A^{l,k} are automatically roots of unity.
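The one-dimensional irreducible representations and the roots-of-unity property are easy to exhibit concretely. The sketch below assumes a cyclic translation group of illustrative order N = 6 with periodic boundary conditions; the phases exp(2πikl/N) play the role of the A^{l,k}.

```python
import numpy as np

N = 6   # order of the cyclic translation group {T_l} (illustrative)
k = 2   # label of one of the N one-dimensional irreducible representations

def A(l, k=k, N=N):
    """One-dimensional irrep: translation by l cells maps to the phase exp(2*pi*i*k*l/N)."""
    return np.exp(2j*np.pi*k*l/N)

# Representation property: A(l1) * A(l2) = A(l1 + l2)
assert np.isclose(A(1)*A(3), A(4))

# Each A(l) is a root of unity: A(l)**N = 1
assert all(np.isclose(A(l)**N, 1.0) for l in range(N))
```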

D Density Matrices and Thermodynamics

A few results will be collected here. The proofs of these results can be found in any of several books on statistical mechanics.

If ψⁱ(x, t) is the wave function of the ith system (in an ensemble of N systems, where 1 ≤ i ≤ N) and if |n⟩ is a complete orthonormal set, then

\[
\psi^i(x,t) = \sum_n c_n^i(t)\, |n\rangle .
\]

The density matrix is defined by

\[
\rho_{nm} = \frac{1}{N}\sum_{i=1}^{N} c_n^i(t)\, c_m^{i*}(t) \equiv \overline{c_n c_m^*} .
\]

It has the following properties:

\[
\mathrm{Tr}(\rho) \equiv \sum_n \rho_{nn} = 1 ,
\]

the ensemble average (denoted by a bar) of the quantum-mechanical expectation value of an operator A is

\[
\overline{\langle A\rangle} = \mathrm{Tr}(\rho A) ,
\]


and the equation of motion of the density operator ρ is given by

\[
i\hbar\,\frac{\partial\rho}{\partial t} = [H, \rho] ,
\]

 

 

where the density operator is defined in such a way that ⟨n|ρ|m⟩ ≡ ρ_nm. For a canonical ensemble in equilibrium,

 

\[
\rho = \exp\!\left(\frac{F - H}{kT}\right) .
\]

 

 

Thus we can readily link the idea of a density matrix to thermodynamics and hence to measurable quantities. For example, the internal energy for a system in equilibrium is given by

\[
U = \overline{\langle H\rangle}
= \mathrm{Tr}\!\left[H\exp\!\left(\frac{F - H}{kT}\right)\right]
= \frac{\mathrm{Tr}[H\exp(-H/kT)]}{\mathrm{Tr}[\exp(-H/kT)]} .
\]

Alternatively, the internal energy can be calculated from the free energy F where for a system in equilibrium,

\[
F = -kT\,\ln\mathrm{Tr}[\exp(-H/kT)] .
\]

It is fairly common to leave the bar off ⟨A⟩ so long as the meaning is clear. For further properties and references see Patterson [A.19]; see also Huang [A.8].
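The link between the density matrix and thermodynamics can be checked directly on a small Hamiltonian. The sketch below uses an illustrative two-level H; it builds the canonical ρ = exp[(F − H)/kT] in the eigenbasis and verifies Tr(ρ) = 1 and that U = Tr(ρH) agrees with the ratio-of-traces expression above.

```python
import numpy as np

kT = 0.5                            # temperature in energy units (illustrative)
H = np.array([[0.0, 0.2],
              [0.2, 1.0]])          # toy two-level Hamiltonian (illustrative)

E, P = np.linalg.eigh(H)            # H = P diag(E) P^T

Z = np.sum(np.exp(-E/kT))           # partition function Tr exp(-H/kT)
F = -kT*np.log(Z)                   # free energy F = -kT ln Z

# Canonical density matrix rho = exp[(F - H)/kT], assembled in the eigenbasis
rho = P @ np.diag(np.exp((F - E)/kT)) @ P.T

assert abs(np.trace(rho) - 1.0) < 1e-12   # Tr(rho) = 1

# Internal energy two ways: U = Tr(rho H) and Tr[H exp(-H/kT)] / Tr[exp(-H/kT)]
U1 = np.trace(rho @ H)
U2 = np.sum(E*np.exp(-E/kT))/Z
assert abs(U1 - U2) < 1e-12
print("U =", U1)
```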

E Time-Dependent Perturbation Theory

A common problem in solid-state physics (as in other areas of physics) is to find the transition rate between energy levels of a system induced by a small time-dependent perturbation. More precisely, we want to be able to calculate the time development of a system described by a Hamiltonian that has a small time-dependent part. This is a standard problem in quantum mechanics and is solved by time-dependent perturbation theory. However, since there are many different aspects of time-dependent perturbation theory, it seems appropriate to give a brief review without derivations. For further details any good quantum mechanics book such as Merzbacher⁶ can be consulted.

6 See Merzbacher [A.15 Chap. 18].

Time-Dependent Perturbation Theory 659

Fig. E.1. f(t, ω) versus ω (zeros at ω = 2π/t, 4π/t, 6π/t). The area under the curve is 2πt

Let

\[
H(t) = H^0 + V(t) , \tag{E.1}
\]
\[
H^0 |l\rangle = E_l^0 |l\rangle , \tag{E.2}
\]
\[
V_{kl}(t) = \langle k|V(t)|l\rangle , \tag{E.3}
\]
\[
\omega_{kl} = \frac{E_k^0 - E_l^0}{\hbar} . \tag{E.4}
\]

 

In first order in V, for V turned on at t = 0 and constant otherwise, the probability per unit time of a discrete i → f transition for t > 0 is

\[
P_{fi} = \frac{2\pi}{\hbar}\,|V_{if}|^2\,\delta(E_i^0 - E_f^0) . \tag{E.5}
\]

 

In deriving (E.5) we have assumed that the f (t, ω) in Fig. E.1 can be replaced by a Dirac delta function via the equation

\[
\delta(E_i^0 - E_f^0)
= \lim_{t\to\infty}\frac{1}{\pi\hbar t}\,\frac{1 - \cos(\omega_{if}t)}{(\omega_{if})^2}
= \lim_{t\to\infty}\frac{f(t,\omega)}{2\pi\hbar t} . \tag{E.6}
\]
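The delta-function replacement rests on the area statement in Fig. E.1. Assuming the explicit form f(t, ω) = 2[1 − cos(ωt)]/ω² (an assumption here, chosen to be consistent with the quoted area 2πt), a short numerical check:

```python
import numpy as np

t = 50.0
w = np.linspace(-200.0, 200.0, 400001)

# Assumed form f(t, w) = 2*(1 - cos(w*t))/w**2, written via sinc to avoid the
# 0/0 at w = 0 (the limit there is t**2); np.sinc(x) = sin(pi*x)/(pi*x)
f = t**2 * np.sinc(w*t/(2*np.pi))**2

# Rectangle-rule integral; the finite window only cuts off O(1/w) tails
area = np.sum(f) * (w[1] - w[0])

# Area under the curve approaches 2*pi*t, as stated in the Fig. E.1 caption
assert abs(area - 2*np.pi*t) / (2*np.pi*t) < 1e-3
print("area =", area, " 2*pi*t =", 2*np.pi*t)
```

As t grows the peak at ω = 0 narrows while the area stays 2πt, which is exactly the δ-function limit used in (E.6).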

 

 

 

 

 

 

If we have transitions to a group of states with final density of states pf(Ef), a similar calculation gives

\[
P_{fi} = \frac{2\pi}{\hbar}\,|V_{if}|^2\, p_f(E_f) . \tag{E.7}
\]

 

 

 

 

 

 

 

 

 

In the same approximation, if we deal with periodic perturbations represented by

\[
V(t) = g e^{i\omega t} + g^{\dagger} e^{-i\omega t} , \tag{E.8}
\]
