
12.9. SLIDING BLOCK SOURCE AND CHANNEL CODING

if $y^N \in S \times W_i$, $i = 1, \cdots, M'$. Otherwise set $\psi_N(y^N)$ equal to an arbitrary reference vector. Choose $L$ so large that the conditions and conclusions of Lemma 12.9.1 hold for $C$ and $\gamma_N$. The sliding block decoder $g_m : B^m \to G$, $m = (L+1)N$, yielding the decoded process $\hat{U}_k = g_m(Y^m_{k-NL})$, is defined as follows: If $s(y_{k-NL}, \cdots, y_{k-1}) = \theta$, form $\hat{b}^N = \psi_N(y_{k-\theta}, \cdots, y_{k-\theta+N})$ and set $\hat{U}_k(y) = g_m(y_{k-NL}, \cdots, y_{k+N}) = \hat{b}_\theta$, the appropriate symbol of the appropriate block.

The sliding block encoder $f$ will send very long sequences of block words with random spacing to make the code stationary. Let $K$ be a large number satisfying $K\epsilon \ge L + 1$, so that $m \le \epsilon KN$, and recall that $N \ge 3$ and $L \ge 1$ (so that $1/(KN) \le 1/(3K)$ and $1/K \le \epsilon/2$). We then have that

$$\frac{1}{KN} \le \frac{1}{3K} \le \frac{\epsilon}{6}. \qquad (12.41)$$
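To make the window bookkeeping of $g_m$ concrete, here is a minimal sketch of a decoder with this structure. Everything in it is a toy stand-in invented for illustration: `psi_N` and `sync_offset` play the roles of the block decoder $\psi_N$ and sync locator $s$ of Lemma 12.9.1, with each block carrying one reserved sync symbol followed by data symbols.

```python
# Minimal sketch of a sliding block decoder built from a block code.
# psi_N and sync_offset are illustrative stand-ins for psi_N and s of
# Lemma 12.9.1; only the sliding-window bookkeeping is exercised.

N = 4        # block length: 1 sync symbol + 3 data symbols
L = 2        # the decoder looks back NL symbols to locate the sync
SYNC = 9     # reserved sync symbol

def psi_N(block):
    """Toy block decoder: strip the sync symbol, keep the data symbols."""
    return block[1:]

def sync_offset(window):
    """Toy sync locator s: steps back from the window's end to a sync."""
    for theta in range(len(window)):
        if window[-1 - theta] == SYNC:
            return theta
    return None

def sliding_block_decode(y):
    """Produce one decoded symbol per time k, as g_m does."""
    out = {}
    for k in range(N * L, len(y) - N):
        theta = sync_offset(y[k - N * L:k])   # s(y_{k-NL}, ..., y_{k-1})
        if theta is None:
            continue                          # no sync found: skip time k
        t = k - 1 - theta                     # time of the sync symbol
        data = psi_N(y[t:t + N])              # decode the enclosing block
        if theta < len(data):                 # time k sits at offset theta
            out[k] = data[theta]
    return out

# example: blocks [SYNC, d1, d2, d3] seen over a noiseless channel
y = [SYNC, 1, 2, 3, SYNC, 4, 5, 6, SYNC, 7, 8, 0, SYNC, 2, 3, 4]
print(sliding_block_decode(y))   # {9: 7, 10: 8, 11: 0}
```

Note how the same fixed-length window rule is applied at every shift, which is exactly what makes the decoder a stationary sliding block map.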

Use Corollary 9.4.2 to produce a $(KN, \epsilon)$ punctuation sequence $Z_n$ using a finite length sliding block code of the input sequence. The punctuation process is stationary and ergodic, has a ternary output, and can produce only isolated 0's followed by $KN$ 1's, or individual 2's. The punctuation sequence is then used to convert the block encoder $\gamma_N$ into a sliding block coder: Suppose that the encoder views an input sequence $u = \cdots, u_{-1}, u_0, u_1, \cdots$ and is to produce a single encoded symbol $x_0$. If $Z_0$ is a 2, then the encoder produces an arbitrary channel symbol, say $a$. If $Z_0$ is not a 2, then the encoder inspects $Z_0$, $Z_{-1}$, $Z_{-2}$, and so on into the past until it locates the first 0. This must happen within $KN$ input symbols by construction of the punctuation sequence. Given that this first 0 occurs at, say, $Z_l = 0$, the encoder then uses the block code $\gamma_N$ to encode successive blocks of input $N$-tuples until the block including the symbol at time 0 is encoded. The sliding block encoder then produces the corresponding channel symbol $x_0$. Thus if $Z_l = 0$, then for some $J < K$, $x_0 = (\gamma_N(u^N_{l+JN}))_{l \bmod N}$, where the subscript denotes that the $(l \bmod N)$th coordinate of the block codeword is put out. The final sliding block code has a finite length given by the maximum of the lengths of the code producing the punctuation sequence and the code imbedding the block code $\gamma_N$ into the sliding block code.
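The indexing in this conversion can be sketched in code. All names here are invented stand-ins for the objects of Corollary 9.4.2: `gamma_N` is a toy block encoder, `FILLER` stands for the arbitrary channel symbol $a$, and `Z` is a hand-built punctuation sequence rather than the sliding-block-coded one of the corollary.

```python
# Sketch: converting a block encoder gamma_N into a sliding block encoder
# driven by a ternary punctuation sequence Z (a 0 starts a run of KN
# slots, 1s continue it, isolated 2s are filler).

FILLER = 0   # stands for the arbitrary channel symbol "a"

def gamma_N(block):
    """Toy block encoder: the identity on N-tuples."""
    return list(block)

def sliding_block_encode(u, Z, N, K):
    """x[t] is a function of a finite window of (u, Z) around time t."""
    KN = K * N
    x = []
    for t in range(KN, len(u)):          # start late enough to look back
        if Z[t] == 2:
            x.append(FILLER)             # filler slot
            continue
        for back in range(KN):           # find the 0 that began this run
            if Z[t - back] == 0:
                break
        else:
            x.append(FILLER)             # no run start within KN steps
            continue
        start = t - back                 # the run began here
        blk_start = start + (back // N) * N   # N-block containing time t
        codeword = gamma_N(u[blk_start:blk_start + N])
        x.append(codeword[back % N])     # emit the coordinate for time t
    return x

# example: one run of length KN starting at time 6, filler elsewhere
N, K = 2, 3
u = list(range(20))
Z = [2] * 6 + [0] + [1] * (K * N - 1) + [2] * (20 - 6 - K * N)
print(sliding_block_encode(u, Z, N, K))
```

Because the output at each time depends only on a window of fixed finite length, the resulting encoder inherits stationarity from the punctuation process, which is the point of the construction.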

 

We now proceed to compute the probability of the error event $E = \{u, y : \hat{U}_0(y) \ne U_0(u)\}$. Let $E_u$ denote the section $\{y : \hat{U}_0(y) \ne U_0(u)\}$, let $\bar{f}$ be the sequence coder induced by $f$, and let $F = \{u : Z_0(u) = 0\}$. Note that if $u \in T^{-1}F$, then $Tu \in F$ and hence $Z_0(Tu) = Z_1(u)$ since the coding is stationary. More generally, if $u \in T^{-i}F$, then $Z_i = 0$. By construction any 0 must be followed by $KN$ 1's, and hence the sets $T^{-i}F$ are disjoint for $i = 0, 1, \cdots, KN - 1$, and hence we can write

$$P_e = \Pr(\hat{U}_0 \ne U_0) = \mu\nu(E) = \int d\mu(u)\,\nu_{\bar{f}(u)}(E_u)$$

$$= \sum_{i=0}^{LN-1} \int_{T^{-i}F} d\mu(u)\,\nu_{\bar{f}(u)}(E_u) + \sum_{i=LN}^{KN-1} \int_{T^{-i}F} d\mu(u)\,\nu_{\bar{f}(u)}(E_u) + \int_{(\bigcup_{i=0}^{KN-1} T^{-i}F)^c} d\mu(u)$$

$$\le LN\,\mu(F) + \sum_{i=LN}^{KN-1} \int_{T^{-i}F} d\mu(u)\,\nu_{\bar{f}(u)}(E_u) + \epsilon$$

$$\le 2\epsilon + \sum_{i=LN}^{KN-1} \sum_{a^{KN}\in G^{KN}} \int_{u'\in T^{-i}(F\cap c(a^{KN}))} d\mu(u')\,\nu_{\bar{f}(u')}\bigl(y' : U_0(u') \ne \hat{U}_0(y')\bigr), \qquad (12.42)$$

where we have used the fact that $\mu(F) \le (KN)^{-1}$ (from Corollary 9.4.2) and hence $LN\,\mu(F) \le L/K \le \epsilon$.

Fix $i = kN + j$, $0 \le j \le N - 1$, and define $u = T^{j+LN}u'$ and $y = T^{j+LN}y'$; the integrals become

 

$$\int_{u'\in T^{-i}(F\cap c(a^{KN}))} d\mu(u')\,\nu_{\bar{f}(u')}\bigl(y' : U_0(u') \ne g_m(Y^m_{-NL}(y'))\bigr)$$

$$= \int_{u\in T^{-(k-L)N}(F\cap c(a^{KN}))} d\mu(u)\,\nu_{\bar{f}(T^{-(j+LN)}u)}\bigl(y : U_0(T^{j+LN}u) \ne g_m(Y^m_{-NL}(T^{j+LN}y))\bigr)$$

$$= \int_{u\in T^{-(k-L)N}(F\cap c(a^{KN}))} d\mu(u)\,\nu_{\bar{f}(T^{-(j+LN)}u)}\bigl(y : u_{j+LN} \ne g_m(y^m_j)\bigr)$$

$$\le \int_{u\in T^{-(k-L)N}(F\cap c(a^{KN}))} d\mu(u)\,\nu_{\bar{f}(T^{-(j+LN)}u)}\bigl(y : u^N_{LN} \ne \psi_N(y^N_{LN})\ \text{or}\ s(y^{LN}_j) \ne j\bigr).$$

If $u^N_{LN} \in G^N$, then $u^N_{LN} = \psi_N(y^N_{LN})$ if $y^N_{LN} \in S \times W_i$.

If $u \in T^{-(k-L)N}c(a^{KN})$, then $u^m = a^m_{(k-L)N}$, and hence from Lemma 12.9.1 and stationarity we have for $i = kN + j$ that

$$\sum_{a^{KN}\in G^{KN}} \int_{T^{-i}(c(a^{KN})\cap F)} d\mu(u)\,\nu_{\bar{f}(u)}(E_u)$$

$$\le 3\epsilon \sum_{a^{KN}\in G^{KN}:\,a^m_{(k-L)N}\in\varphi\cap(G^{LN}\times G^N)} \mu\bigl(T^{-(k-L)N}(c(a^{KN})\cap F)\bigr) + \sum_{a^{KN}\in G^{KN}:\,a^m_{(k-L)N}\notin\varphi\cap(G^{LN}\times G^N)} \mu\bigl(T^{-(k-L)N}(c(a^{KN})\cap F)\bigr) \qquad (12.43)$$

$$\le 3\epsilon\,\mu(F) + \mu(c(\varphi^c)\cap F) + \mu(c((G^N)^c)\cap F). \qquad (12.44)$$

Choose the partition in Lemmas 9.5.1–9.5.2 to be that generated by the sets $c(\varphi^c)$ and $c(G^N)$ (the partition with all four possible intersections of these sets or their complements). Then the above expression is bounded above by

$$\frac{3\epsilon}{NK} + \frac{\epsilon}{NK} + \frac{\epsilon}{NK} = \frac{5\epsilon}{NK},$$

and hence from (12.42)

$$P_e \le 2\epsilon + KN\,\frac{5\epsilon}{NK} = 7\epsilon \le \delta, \qquad (12.45)$$

which completes the proof. □
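For intuition, the error probability analyzed above can be estimated by simulation for toy codes. The sketch below assumes a binary symmetric channel and uses identity maps in place of the actual encoder and decoder; it illustrates only the event $\{\hat{U}_0 \ne U_0\}$ whose probability the lemma bounds, not the bounds themselves.

```python
import random

# Monte Carlo sketch of Pe = Pr(U_0hat != U_0) for a toy sliding-block
# system over an assumed binary symmetric channel BSC(p).

def bsc(x, p):
    """Flip each channel symbol independently with probability p."""
    return [b ^ 1 if random.random() < p else b for b in x]

def trial(n=101, p=0.01):
    u = [random.randint(0, 1) for _ in range(n)]
    y = bsc(u, p)          # identity encoder, then the channel
    uhat = y               # identity decoder
    k = n // 2             # examine the "time 0" symbol mid-sequence
    return u[k] != uhat[k]

trials = 20000
pe = sum(trial() for _ in range(trials)) / trials
print(f"estimated Pe = {pe:.4f}")   # close to p = 0.01 here
```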

The lemma immediately yields the following corollary.

Corollary 12.9.1: If $\nu$ is a stationary $\bar{d}$-continuous totally ergodic channel with Shannon capacity $C$, then any totally ergodic source $[G, \mu, U]$ with $H(\mu) < C$ is admissible.

Ergodic Sources

If a prefixed blocklength $N$ block code of Corollary 12.9.1 is used to block encode a general ergodic source $[G, \mu, U]$, then successive $N$-tuples from the source may not form an ergodic process, and hence the previous analysis does not apply. From the Nedoma ergodic decomposition [106] (see, e.g., [50], p. 232), any ergodic source can be represented as a mixture of $N$-ergodic sources, all of which are shifted versions of each other. Given an ergodic measure $\mu$ and an integer $N$, there exists a decomposition of $\mu$ into $M$ $N$-ergodic, $N$-stationary components, where $M$ divides $N$; that is, there is a set $\Lambda \in B_G^\infty$ such that

 

 

 

$$T^M\Lambda = \Lambda \qquad (12.46)$$

$$\mu(T^i\Lambda \cap T^j\Lambda) = 0, \quad i, j \le M,\ i \ne j \qquad (12.47)$$

$$\mu\Bigl(\bigcup_{i=0}^{M-1} T^i\Lambda\Bigr) = 1$$

$$\mu(\Lambda) = \frac{1}{M},$$

such that the sources $[G, \mu_i, U]$, where $\mu_i(W) = \mu(W \mid T^i\Lambda) = M\,\mu(W \cap T^i\Lambda)$, are $N$-ergodic and $N$-stationary and

$$\mu(W) = \frac{1}{M}\sum_{i=0}^{M-1}\mu_i(W) = \sum_{i=0}^{M-1}\mu(W \cap T^i\Lambda). \qquad (12.48)$$
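As a small concrete instance of this decomposition: the two-phase alternating binary source (sequences $\cdots 0101 \cdots$ with a uniformly random phase) is ergodic, and for $N = 2$ it splits into $M = 2$ modes indexed by the phase set $\Lambda$. The sketch below just verifies the bookkeeping of (12.46)-(12.48) by brute force; identifying a sequence with its phase is the simplification that makes this checkable.

```python
# Sketch of the Nedoma decomposition for the alternating binary source:
# mu puts mass 1/2 on each phase of ...0101...; for N = 2 it splits
# into M = 2 N-ergodic components indexed by Lambda.

phases = [0, 1]                      # a sequence is determined by its phase
def T(phase):                        # the shift advances the phase
    return (phase + 1) % 2

Lambda = {0}                         # component set: phase-0 sequences
T_Lambda = {T(p) for p in Lambda}

# (12.46): T^M Lambda = Lambda with M = 2
assert {T(T(p)) for p in Lambda} == Lambda
# (12.47): the shifted copies are disjoint
assert Lambda.isdisjoint(T_Lambda)
# the shifted copies cover everything, each with mu-measure 1/M
assert Lambda | T_Lambda == set(phases)
print("mu(Lambda) =", len(Lambda) / len(phases))   # 1/M = 1/2
```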

 

 

 

 

 

 

 

 

 

This decomposition provides a method of generalizing the results for totally ergodic sources to ergodic sources. Since $\mu(\cdot|\Lambda)$ is $N$-ergodic, Lemma 12.9.2 is valid if $\mu$ is replaced by $\mu(\cdot|\Lambda)$. If an infinite length sliding block encoder $f$ is used, it can determine the ergodic component in effect by testing for $T^{-i}\Lambda$ in the base of the tower, insert $i$ dummy symbols, and then encode using the length $N$ prefixed block code. In other words, the encoder can line up the block code with a prespecified one of the possible $N$-ergodic modes. A finite length encoder can then be obtained by approximating the infinite length encoder by a finite length encoder. Making these ideas precise yields the following result.

Theorem 12.9.1: If $\nu$ is a stationary $\bar{d}$-continuous totally ergodic channel with Shannon capacity $C$, then any ergodic source $[G, \mu, U]$ with $H(\mu) < C$ is admissible.

Proof: Assume that $N$ is large enough for Corollary 12.8.1 and (12.38)–(12.40) to hold. From the Nedoma decomposition

$$\frac{1}{M}\sum_{i=0}^{M-1} \mu^N(G^N \mid T^i\Lambda) = \mu^N(G^N) \ge 1 - \epsilon,$$

and hence there exists at least one $i$ for which

$$\mu^N(G^N \mid T^i\Lambda) \ge 1 - \epsilon;$$

that is, at least one $N$-ergodic mode must put high probability on the set $G^N$ of typical $N$-tuples for $\mu$. For convenience relabel the indices so that this good mode is $\mu(\cdot|\Lambda)$ and call it the design mode. Since $\mu(\cdot|\Lambda)$ is $N$-ergodic and $N$-stationary, Lemma 12.9.1 holds with $\mu$ replaced by $\mu(\cdot|\Lambda)$; that is, there is a source/channel block code $(\gamma_N, \psi_N)$ and a sync locating function $s : B^{LN} \to \{0, 1, \cdots, N-1\}$ such that there is a set $\varphi \in G^m$, $m = (L+1)N$, for which (12.31) holds and

$$\mu^m(\varphi \mid \Lambda) \ge 1 - \epsilon.$$

The sliding block decoder is exactly as in Lemma 12.9.1. The sliding block encoder, however, is somewhat different. Consider a punctuation sequence or tower as in Lemma 9.5.2, but now consider the partition generated by $\varphi$, $G^N$, and $T^{-i}\Lambda$, $i = 0, 1, \cdots, M-1$. The infinite length sliding block code is defined as follows: If $u \notin \bigcup_{k=0}^{KN-1} T^{-k}F$, then $f(u) = a$, an arbitrary channel symbol. If $u \in T^{-i}(F \cap T^{-j}\Lambda)$ and if $i < j$, set $f(u) = a$ (these are spacing symbols to force alignment with the proper $N$-ergodic mode). If $j \le i \le KN - (M - j)$, then $i = j + kN + r$ for some $0 \le k \le K - 1$ and $0 \le r \le N - 1$; form $a^N = \gamma_N(u^N_{j+kN})$ and set $f(u) = a_r$. This is the same encoder as before, except that if $u \in T^j\Lambda$, then block encoding is postponed for $j$ symbols (at which time the shifted sequence is in $\Lambda$). Lastly, if $KN - (M - j) \le i \le KN - 1$, then $f(u) = a$.
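The effect of the spacing symbols can be sketched as follows; the cell layout, `gamma_N`, and the filler symbol are illustrative stand-ins chosen only to show how the phase $j$ delays block encoding within a $KN$-cell so that blocks start inside the design mode.

```python
# Sketch of the mode-aligned encoder: within each cell of length KN that
# starts at a punctuation 0, the first j outputs are spacing symbols,
# after which N-blocks are encoded starting at phase j.

A = 0   # stands for the arbitrary channel symbol "a"

def gamma_N(block):          # toy block encoder (identity)
    return list(block)

def encode_cell(u_cell, j, N, M):
    """Encode one KN-cell of input whose ergodic mode has phase j."""
    KN = len(u_cell)
    out = [A] * j                           # spacing: align with the mode
    t = j
    while t + N <= KN - (M - j):            # successive N-blocks
        out.extend(gamma_N(u_cell[t:t + N]))
        t += N
    out.extend([A] * (KN - len(out)))       # trailing filler
    return out

# example: KN = 12, N = 3, M = 3, phase j = 1
print(encode_cell(list(range(12)), j=1, N=3, M=3))
```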

As in the proof of Lemma 12.9.2,

$$P_e(\mu, \nu, f, g_m) = \int d\mu(u)\,\nu_{\bar{f}(u)}\bigl(y : U_0(u) \ne g_m(Y^m_{-LN}(y))\bigr)$$

$$\le 2\epsilon + \sum_{i=LN}^{KN-1} \int_{u\in T^{-i}F} d\mu(u)\,\nu_{\bar{f}(u)}\bigl(y : U_0(u) \ne \hat{U}_0(y)\bigr)$$

$$= 2\epsilon + \sum_{i=LN}^{KN-1}\sum_{j=0}^{M-1}\sum_{a^{KN}\in G^{KN}} \int_{u\in T^{-i}(c(a^{KN})\cap F\cap T^{-j}\Lambda)} d\mu(u)\,\nu_{\bar{f}(u)}\bigl(y : U_0(u) \ne \hat{U}_0(y)\bigr)$$

$$\le 2\epsilon + \sum_{j=0}^{M-1}\sum_{i=LN+j}^{KN-(M-j)}\sum_{a^{KN}\in G^{KN}} \int_{u\in T^{-i}(c(a^{KN})\cap F\cap T^{-j}\Lambda)} d\mu(u)\,\nu_{\bar{f}(u)}\bigl(y : U_0(u) \ne \hat{U}_0(y)\bigr) + \sum_{j=0}^{M-1} M\,\mu(F\cap T^{-j}\Lambda), \qquad (12.49)$$

 

 

 

 

 

 

 

 

j=0

 

 

 

 

 

 

 

 

 

 

 

 

 

 

where the rightmost term is

$$\sum_{j=0}^{M-1} M\,\mu(F \cap T^{-j}\Lambda) = M\,\mu(F) \le \frac{M}{KN} \le \frac{1}{K} \le \epsilon.$$

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

Thus

$$P_e(\mu, \nu, f, g_m) \le 3\epsilon + \sum_{j=0}^{M-1}\sum_{i=LN+j}^{KN-(M-j)}\sum_{a^{KN}\in G^{KN}} \int_{u\in T^{-i}(c(a^{KN})\cap F\cap T^{-j}\Lambda)} d\mu(u)\,\nu_{\bar{f}(u)}\bigl(y : U_0(u) \ne \hat{U}_0(y)\bigr).$$

 

Analogous to (12.43) (except that here $i = j + kN + r$ and $u = T^{LN+r}u'$),

$$\int_{u'\in T^{-i}(c(a^{KN})\cap F\cap T^{-j}\Lambda)} d\mu(u')\,\nu_{\bar{f}(u')}\bigl(y' : U_0(u') \ne g_m(Y^m_{-LN}(y'))\bigr)$$

$$= \int_{T^{-(j+(k-L)N)}(c(a^{KN})\cap F\cap T^{-j}\Lambda)} d\mu(u)\,\nu_{\bar{f}(T^{-(LN+r)}u)}\bigl(y : u^N_{LN} \ne \psi_N(y^N_{LN})\ \text{or}\ s(y^{LN}) \ne r\bigr).$$

 

 

 

 

 

 

Thus, since $u \in T^{-(j+(k-L)N)}(c(a^{KN})\cap F\cap T^{-j}\Lambda)$ implies $u^m = a^m_{j+(k-L)N}$, analogous to (12.44) we have for $i = j + kN + r$ that

$$\sum_{a^{KN}\in G^{KN}} \int_{T^{-i}(c(a^{KN})\cap F\cap T^{-j}\Lambda)} d\mu(u)\,\nu_{\bar{f}(u)}\bigl(y : U_0(u) \ne g_m(Y^m_{-LN}(y))\bigr)$$

$$\le \epsilon \sum_{a^{KN}:\,a^m_{j+(k-L)N}\in\varphi} \mu\bigl(T^{-(j+(k-L)N)}(c(a^{KN})\cap F\cap T^{-j}\Lambda)\bigr) + \sum_{a^{KN}:\,a^m_{j+(k-L)N}\notin\varphi} \mu\bigl(T^{-(j+(k-L)N)}(c(a^{KN})\cap F\cap T^{-j}\Lambda)\bigr)$$

$$= \epsilon\,\mu\bigl(T^{-(j+(k-L)N)}c(\varphi)\cap F\cap T^{-j}\Lambda\bigr) + \mu\bigl(T^{-(j+(k-L)N)}c(\varphi)^c\cap F\cap T^{-j}\Lambda\bigr).$$

From Lemma 9.5.2 (the Rohlin-Kakutani theorem), this is bounded above by

$$\epsilon\,\frac{\mu\bigl(T^{-(j+(k-L)N)}c(\varphi)\cap T^{-j}\Lambda\bigr)}{KN} + \frac{\mu\bigl(T^{-(j+(k-L)N)}c(\varphi)^c\cap T^{-j}\Lambda\bigr)}{KN}$$

$$= \epsilon\,\frac{\mu\bigl(T^{-(j+(k-L)N)}c(\varphi) \mid T^{-j}\Lambda\bigr)\mu(\Lambda)}{KN} + \frac{\mu\bigl(T^{-(j+(k-L)N)}c(\varphi)^c \mid T^{-j}\Lambda\bigr)\mu(\Lambda)}{KN}$$

$$= \epsilon\,\frac{\mu(c(\varphi) \mid \Lambda)}{M\,KN} + \frac{\mu(c(\varphi)^c \mid \Lambda)}{M\,KN} \le \frac{2\epsilon}{M\,KN}.$$

 

 

With (12.48)–(12.49) this yields

$$P_e(\mu, \nu, f, g_m) \le 3\epsilon + M\,KN\,\frac{2\epsilon}{M\,KN} \le 5\epsilon, \qquad (12.50)$$

which completes the result for an infinite sliding block code.

The proof is completed by applying Corollary 10.5.1, which shows that by choosing a finite length sliding block code $f_0$ from Lemma 4.2.4 so that $\Pr(f \ne f_0)$ is sufficiently small, the resulting $P_e$ is close to that for the infinite length sliding block code. □
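The flavor of this final approximation can be seen in a small experiment: truncate a long-memory sliding-block rule to a finite window and estimate how often the two maps disagree. The rule used below is invented purely for illustration; it is not the encoder of the theorem.

```python
import random

# Sketch of the approximation step: a long-memory sliding-block rule f
# is replaced by a finite-window rule f0, and Pr(f != f0) is estimated.

def f(u, t):
    """Effectively unbounded memory: scan back to the most recent 0."""
    back = 0
    while t - back >= 0 and u[t - back] != 0:
        back += 1
    return back % 2

def f0(u, t, w=16):
    """Finite-window truncation of f: give up after w steps."""
    for back in range(w):
        if t - back < 0 or u[t - back] == 0:
            return back % 2
    return 0    # default output when the window sees no 0

n = 100_000
u = [random.randint(0, 1) for _ in range(n)]
mismatch = sum(f(u, t) != f0(u, t) for t in range(n)) / n
print(f"estimated Pr(f != f0) = {mismatch:.6f}")   # ~2^-16 for w = 16
```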

In closing we note that the theorem can be combined with the sliding block source coding theorem to prove a joint source and channel coding theorem similar to Theorem 12.7.1; that is, one can show that given a source with distortion rate function $D(R)$ and a channel with capacity $C$, sliding block codes exist with average distortion approximately $D(C)$.

Bibliography

[1]N. M. Abramson. Information Theory and Coding. McGraw-Hill, New York, 1963.

[2]R. Adler. Ergodic and mixing properties of infinite memory channels. Proc. Amer. Math. Soc., 12:924–930, 1961.

[3]R. L. Adler, D. Coppersmith, and M. Hassner. Algorithms for sliding-block codes: an application of symbolic dynamics to information theory. IEEE Trans. Inform. Theory, IT-29:5–22, 1983.

[4]R. Ahlswede and P. Gács. Two contributions to information theory. In Topics in Information Theory, pages 17–40, Keszthely, Hungary, 1975.

[5]R. Ahlswede and J. Wolfowitz. Channels without synchronization. Adv. in Appl. Probab., 3:383–403, 1971.

[6]P. Algoet. Log-Optimal Investment. PhD thesis, Stanford University, 1985.

[7]P. Algoet and T. Cover. A sandwich proof of the Shannon-McMillan-Breiman theorem. Ann. Probab., 16:899–909, 1988.

[8]E. Ayanoğlu and R. M. Gray. The design of joint source and channel trellis waveform coders. IEEE Trans. Inform. Theory, IT-33:855–865, November 1987.

[9]A. R. Barron. The strong ergodic theorem for densities: generalized Shannon-McMillan-Breiman theorem. Ann. Probab., 13:1292–1303, 1985.

[10]T. Berger. Rate distortion theory for sources with abstract alphabets and memory. Inform. and Control, 13:254–273, 1968.

[11]T. Berger. Rate Distortion Theory. Prentice-Hall Inc., Englewood Cliffs, New Jersey, 1971.

[12]T. Berger. Multiterminal source coding. In G. Longo, editor, The Information Theory Approach to Communications, volume 229 of CISM Courses and Lectures, pages 171–231. Springer-Verlag, Vienna and New York, 1978.


[13]E. Berlekamp. Algebraic Coding Theory. McGraw-Hill, New York, 1968.

[14]E. Berlekamp, editor. Key Papers in the Development of Coding Theory. IEEE Press, New York, 1974.

[15]P. Billingsley. Ergodic Theory and Information. Wiley, New York, 1965.

[16]G. D. Birkhoff. Proof of the ergodic theorem. Proc. Nat. Acad. Sci., 17:656–660, 1931.

[17]R. E. Blahut. Computation of channel capacity and rate-distortion functions. IEEE Trans. Inform. Theory, IT-18:460–473, 1972.

[18]R. E. Blahut. Theory and Practice of Error Control Codes. Addison Wesley, Reading, Mass., 1987.

[19]L. Breiman. The individual ergodic theorem of information theory. Ann. of Math. Statist., 28:809–811, 1957.

[20]L. Breiman. A correction to 'The individual ergodic theorem of information theory'. Ann. of Math. Statist., 31:809–810, 1960.

[21]J. R. Brown. Ergodic Theory and Topological Dynamics. Academic Press, New York, 1976.

[22]J. A. Bucklew. A large deviation theory proof of the abstract alphabet source coding theorem. IEEE Trans. Inform. Theory, IT-34:1081–1083, 1988.

[23]T. M. Cover, P. Gács, and R. M. Gray. Kolmogorov's contributions to information theory and algorithmic complexity. Ann. Probab., 17:840–865, 1989.

[24]I. Csiszár. Information-type measures of difference of probability distributions and indirect observations. Studia Scientiarum Mathematicarum Hungarica, 2:299–318, 1967.

[25]I. Csiszár. I-divergence geometry of probability distributions and minimization problems. Ann. Probab., 3(1):146–158, 1975.

[26]I. Csiszár and J. Körner. Coding Theorems of Information Theory. Academic Press/Hungarian Academy of Sciences, Budapest, 1981.

[27]L. D. Davisson and R. M. Gray. A simplified proof of the sliding-block source coding theorem and its universal extension. In Conf. Record 1978 Int'l. Conf. on Comm. 2, pages 34.4.1–34.4.5, Toronto, 1978.

[28]L. D. Davisson, R. J. McEliece, M. B. Pursley, and M. S. Wallace. Efficient universal noiseless source codes. IEEE Trans. Inform. Theory, IT-27:269–279, 1981.


[29]L. D. Davisson and M. B. Pursley. An alternate proof of the coding theorem for stationary ergodic sources. In Proceedings of the Eighth Annual Princeton Conference on Information Sciences and Systems, 1974.

[30]M. Denker, C. Grillenberger, and K. Sigmund. Ergodic Theory on Compact Spaces, volume 527 of Lecture Notes in Mathematics. Springer-Verlag, New York, 1976.

[31]J.-D. Deuschel and D. W. Stroock. Large Deviations, volume 137 of Pure and Applied Mathematics. Academic Press, Boston, 1989.

[32]R. L. Dobrushin. A general formulation of the fundamental Shannon theorem in information theory. Uspehi Mat. Akad. Nauk. SSSR, 14:3–104, 1959. Translation in Transactions Amer. Math. Soc., series 2, vol. 33, 323–438.

[33]R. L. Dobrushin. Shannon's theorems for channels with synchronization errors. Problemy Peredaci Informatsii, 3:18–36, 1967. Translated in Problems of Information Transmission, vol. 3, 11–36 (1967), Plenum Publishing Corporation.

[34]M. D. Donsker and S. R. S. Varadhan. Asymptotic evaluation of certain Markov process expectations for large time. J. Comm. Pure Appl. Math., 28:1–47, 1975.

[35]J. G. Dunham. A note on the abstract alphabet block source coding with a fidelity criterion theorem. IEEE Trans. Inform. Theory, IT-24:760, November 1978.

[36]P. Elias. Two famous papers. IRE Transactions on Information Theory, page 99, 1958.

[37]R. M. Fano. Transmission of Information. Wiley, New York, 1961.

[38]A. Feinstein. A new basic theorem of information theory. IRE Transactions on Information Theory, pages 2–20, 1954.

[39]A. Feinstein. Foundations of Information Theory. McGraw-Hill, New York, 1958.

[40]A. Feinstein. On the coding theorem and its converse for finite-memory channels. Inform. and Control, 2:25–44, 1959.

[41]G. D. Forney, Jr. The Viterbi algorithm. Proc. IEEE, 61:268–278, March 1973.

[42]N. A. Friedman. Introduction to Ergodic Theory. Van Nostrand Reinhold Company, New York, 1970.

[43]R. G. Gallager. Information Theory and Reliable Communication. John Wiley & Sons, New York, 1968.


[44]A. El Gamal and T. Cover. Multiple user information theory. Proc. IEEE, 68:1466–1483, 1980.

[45]I. M. Gelfand, A. N. Kolmogorov, and A. M. Yaglom. On the general definitions of the quantity of information. Dokl. Akad. Nauk, 111:745–748, 1956. (In Russian.)

[46]A. Gersho and V. Cuperman. Vector quantization: A pattern-matching technique for speech coding. IEEE Communications Magazine, 21:15–21, December 1983.

[47]A. Gersho and R. M. Gray. Vector Quantization and Signal Compression. Kluwer Academic Publishers, Boston, 1992.

[48]R. M. Gray. Tree-searched block source codes. In Proceedings of the 1980 Allerton Conference, Allerton, IL, October 1980.

[49]R. M. Gray. Vector quantization. IEEE ASSP Magazine, 1, No. 2:4–29, April 1984.

[50]R. M. Gray. Probability, Random Processes, and Ergodic Properties. Springer-Verlag, New York, 1988.

[51]R. M. Gray. Spectral analysis of quantization noise in a single-loop sigma-delta modulator with dc input. IEEE Trans. Comm., COM-37:588–599, 1989.

[52]R. M. Gray. Source Coding Theory. Kluwer Academic Press, Boston, 1990.

[53]R. M. Gray and L. D. Davisson. Source coding without the ergodic assumption. IEEE Trans. Inform. Theory, IT-20:502–516, 1974.

[54]R. M. Gray and J. C. Kieffer. Asymptotically mean stationary measures. Ann. Probab., 8:962–973, 1980.

[55]R. M. Gray, D. L. Neuhoff, and J. K. Omura. Process definitions of distortion rate functions and source coding theorems. IEEE Trans. Inform. Theory, IT-21:524–532, 1975.

[56]R. M. Gray, D. L. Neuhoff, and D. Ornstein. Nonblock source coding with a fidelity criterion. Ann. Probab., 3:478–491, 1975.

[57]R. M. Gray, D. L. Neuhoff, and P. C. Shields. A generalization of Ornstein's d-bar distance with applications to information theory. Ann. Probab., 3:315–328, April 1975.

[58]R. M. Gray and D. S. Ornstein. Sliding-block joint source/noisy-channel coding theorems. IEEE Trans. Inform. Theory, IT-22:682–690, 1976.