

The complexity of soft algorithms is the reason why an alternative, hard decoding, is often used instead. This decoding mode includes two steps. At the first of them, decisions are made on all individual code symbols, so that the observation is demodulated into a vector consisting of symbols belonging to the code alphabet (in the case considered, binary). Some symbols of the binary observation thus obtained may be erroneous, and then the whole demodulated vector is likely to differ from all codewords. The second step finds among all allowed codewords the one having maximal likelihood, i.e. the greatest probability of being transformed by the channel into the current demodulated binary observation. Such a decoding procedure, typically ending in declaring some specific code vector true, is called error correction. Alternatively, the goal of decoding may be limited to checking whether the current binary observation is a true codeword or whether some errors have occurred due to the corrupting effects of the channel. Then, if the binary observation enters the set of allowed codewords, it is mapped back onto the corresponding data bit sequence. Otherwise the fact of unsuccessful transmission is registered and the receiver either requests the transmitter to repeat the message (as in ARQ systems) or tries to restore it by interpolating between the previous and subsequent ones. This sort of decoding, called error detection, is characteristic of the application of block codes in modern commercial wireless spread spectrum systems, and the following section focuses on coding formats complying with the error detection task.

9.2 Error-detecting block codes

9.2.1 Binary block codes and detection capability

Suppose that $b_0, b_1, \ldots, b_{k-1}$ are $k$ source bits to be encoded in a binary codeword $u = (u_0, u_1, \ldots, u_{n-1})$ of length $n > k$. All $2^k$ combinations of $k$ source bits are assumed possible, meaning that there are $M = 2^k$ codewords altogether. In every codeword $n - k$ binary symbols are redundant in the sense that only $k$ symbols are necessary for one-to-one mapping of $M$ source messages onto binary vectors. These redundant symbols make codewords more discernible from each other, securing better noise resistance. The set of all $M = 2^k$ words of length $n$ is called an $(n, k)$ block code. Hard decoding, i.e. demodulation of a continuous observation into a binary one, corresponds to the model of the binary symmetric channel (BSC), transforming an input symbol $u = 0, 1$ into the opposite output binary symbol $y \neq u$ with the crossover (or symbol error) probability $p$. The attribute 'symmetric' emphasizes the equal probabilities of transitions of '0' into '1' and vice versa (see Figure 9.1).

[Figure 9.1: Binary symmetric channel model. Input $u \in \{0, 1\}$ maps to output $y$; the transitions $0 \to 0$ and $1 \to 1$ carry probability $1 - p$, while the crossovers $0 \to 1$ and $1 \to 0$ carry probability $p$.]
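For intuition, the BSC model is trivial to simulate. Below is a minimal Python sketch (the helper name bsc is ours, purely illustrative) that passes a binary word through a binary symmetric channel with crossover probability $p$:

    import random

    def bsc(word, p, rng=random):
        """Pass a binary tuple through a binary symmetric channel:
        each symbol is flipped independently with probability p."""
        return tuple(b ^ (rng.random() < p) for b in word)

    random.seed(1)
    print(bsc((0, 1, 0, 1, 1, 0, 1, 0), p=0.2))  # one symbol flipped for this seed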


Suppose that $y = (y_0, y_1, \ldots, y_{n-1})$ is the binary observation at the BSC output. If $y$ does not coincide with one of the $M$ code vectors $u$, the receiver sees that erroneous symbols are present (error detection); otherwise $k$ data bits are released corresponding to the estimated codeword $\hat{u} = y$. Clearly, if the transmitted word is $u'$ but the binary observation coincides with another codeword, i.e. $\hat{u} = y \neq u'$, an undetected error occurs and the released bits are not the true transmitted ones.

Let us introduce some more definitions. The Hamming distance $d_H(f, g)$ between two vectors $f = (f_0, f_1, \ldots, f_{n-1})$ and $g = (g_0, g_1, \ldots, g_{n-1})$ of the same length $n$ is the number of positions where the vectors have different elements $f_i \neq g_i$. The Hamming weight $w_H(f)$ of a vector $f$ is the number of its non-zero components. If, for example, $f = (01011)$, $g = (11000)$, then $d_H(f, g) = 3$, $w_H(f) = 3$ and $w_H(g) = 2$. It is straightforward to make sure that $d_H(f, g) = w_H(f - g)$ and $w_H(f) = d_H(f, 0)$, with $0$ being the zero vector.
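These definitions translate directly into code. Here is a minimal Python sketch (helper names of our choosing, not from the book) that reproduces the example above:

    def hamming_distance(f, g):
        """Number of positions in which vectors f and g differ."""
        assert len(f) == len(g)
        return sum(fi != gi for fi, gi in zip(f, g))

    def hamming_weight(f):
        """Number of non-zero components of f."""
        return sum(fi != 0 for fi in f)

    f = (0, 1, 0, 1, 1)
    g = (1, 1, 0, 0, 0)
    print(hamming_distance(f, g))                # 3
    print(hamming_weight(f), hamming_weight(g))  # 3 2
    # d_H(f, g) = w_H(f - g), where f - g in GF(2) is symbol-wise XOR:
    print(hamming_weight(tuple(a ^ b for a, b in zip(f, g))))  # 3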

Let us assume that a codeword $u$ is transmitted. The BSC will transform it into a different fixed codeword $v$ (thereby causing an undetected error and releasing untrue data bits corresponding to $v$) if symbol errors occur in all $d_H(u, v)$ positions where $u$ and $v$ differ, with no corruption of the $n - d_H(u, v)$ symbols coinciding in $u$ and $v$. For a memoryless BSC, i.e. one where all symbol errors are independent, we may then put the probability $P(y = v \mid u)$ of the event above as:

$$P(y = v \mid u) = p^{d_H(u,v)}\,(1-p)^{\,n - d_H(u,v)} \qquad (9.1)$$
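As a quick numerical check of (9.1), the following sketch reuses hamming_distance from the previous listing (again, illustrative code rather than anything prescribed by the text):

    def undetected_confusion_prob(u, v, p):
        """Probability (9.1) that a memoryless BSC with crossover
        probability p converts transmitted codeword u exactly into v."""
        d = hamming_distance(u, v)
        return p**d * (1 - p)**(len(u) - d)

    # n = 5, d_H(u, v) = 3, p = 0.01:
    print(undetected_confusion_prob((0, 1, 0, 1, 1), (1, 1, 0, 0, 0), 0.01))
    # 0.01**3 * 0.99**2 ≈ 9.8e-7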

Since $p < 1/2$ (otherwise we just interchange the designations of output '0' and '1'), to reduce the probability of confusing codewords $u$ and $v$ the Hamming distance between them should be as large as possible. Consider now the Hamming distances $d_H(u, v)$ between all different code vectors of a code $U$. Denote the least among them $d_H$ and call it the (minimum) code distance of the code $U$:

$$d_H = \min_{\substack{u, v \in U \\ u \neq v}} d_H(u, v) \qquad (9.2)$$

Any codeword of $U$ may appear as the transmitted one, and to minimize the risk of confusing the two closest code vectors the distance between them, i.e. the code distance $d_H$, has to be maximal. We arrive thereby at the following assertion.

Proposition 9.2.1. Code $U$ is capable of detecting any $t_d$ or fewer symbol errors (up to $t_d$ errors) if and only if its code distance $d_H \geq t_d + 1$.

Indeed, take a code with $d_H \leq t_d$ and pick out a pair of its closest code vectors $u$, $v$. If the $d_H$ symbols of $u$ differing from those of $v$ are corrupted, then $u$ becomes $v$, meaning that there exists a pattern of no more than $t_d$ errors which is not detectable. Conversely, if the distance between any two code vectors exceeds $t_d$, no pattern of $t_d$ or fewer errors can transform one codeword into another.

One may prove the following statement in the same way (Problem 9.4).

Proposition 9.2.2. Code $U$ is capable of correcting any $t_c$ or fewer symbol errors (up to $t_c$ errors) if and only if its code distance $d_H \geq 2t_c + 1$.
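To make Propositions 9.2.1 and 9.2.2 concrete, the brute-force sketch below (illustrative, reusing hamming_distance from above) computes the code distance of a code given as an explicit word list and derives its detection and correction capabilities:

    from itertools import combinations

    def code_distance(codewords):
        """Minimum Hamming distance over all pairs of distinct codewords, per (9.2)."""
        return min(hamming_distance(u, v) for u, v in combinations(codewords, 2))

    # The (3, 1) repetition code {000, 111} has d_H = 3, hence by
    # Propositions 9.2.1 and 9.2.2 it detects up to t_d = d_H - 1 = 2
    # errors and corrects up to t_c = (d_H - 1) // 2 = 1 error.
    rep3 = [(0, 0, 0), (1, 1, 1)]
    d = code_distance(rep3)
    print(d, d - 1, (d - 1) // 2)   # 3 2 1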


9.2.2 Linear codes and their polynomial representation

Let us treat binary code symbols $\{0, 1\}$ as elements of the binary finite field GF(2) (see Section 6.6) and consider symbol-wise linear operations over codewords of a code $U$ obeying the arithmetic of GF(2). Clearly, there is only one non-trivial operation of this sort, namely symbol-wise GF(2) addition:

$$u = (u_0, u_1, \ldots, u_{n-1}),\; v = (v_0, v_1, \ldots, v_{n-1}) \;\Rightarrow\; u + v = (u_0 + v_0, u_1 + v_1, \ldots, u_{n-1} + v_{n-1})$$

For example, if $u = (100111)$ and $v = (010110)$, then $u + v = (110001)$. Symbol-wise subtraction has no independent role and just repeats addition, since in GF(2) the negative of an element is the element itself. In the same way, symbol-wise multiplication of any codeword by a scalar from GF(2) (i.e. by 0 or 1) either makes it the zero vector or does not change it at all.

A binary code $U$ is called linear if the sum of any two of its code vectors is again a code vector belonging to $U$. The name stems from the fact that such a code is a vector (linear) space over the field GF(2) [31,33,91], although this concept is not seriously involved in our further discussion. Any linear code $U$ of length $n$ contains the zero vector (i.e. the one with $n$ zero components) as a code vector, since the sum of an arbitrary code vector of $U$ with itself produces exactly the zero vector: $u + u = 0$. The following statement explains one of the reasons why linear codes are of special interest.

Proposition 9.2.3. The code distance of a linear code $U$ is equal to the minimum Hamming weight over all non-zero codewords:

$$d_H = \min_{\substack{u \in U \\ u \neq 0}} w_H(u) \qquad (9.3)$$

To prove this, just make the substitution $d_H(u, v) = w_H(u - v)$ in (9.2) and note that $u - v = u'$ is again a codeword of $U$.

As (9.3) shows, there is no need to test all $M(M-1)/2$ different vector pairs to find the code distance of a linear code containing $M$ words. It is enough to 'weigh' the $M - 1$ non-zero code vectors, i.e. to perform about $M/2$ times fewer tests, which, considering the typically enormous number of words $M$, offers a highly significant gain.
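The following sketch illustrates Proposition 9.2.3 on a toy linear code built as all GF(2) combinations of two arbitrarily chosen basis words (the construction and names are ours, for illustration only):

    def gf2_add(u, v):
        """Symbol-wise GF(2) addition (XOR) of two binary tuples."""
        return tuple(a ^ b for a, b in zip(u, v))

    # Toy linear code: all GF(2) linear combinations of two basis words.
    basis = [(1, 0, 0, 1, 1), (0, 1, 0, 1, 0)]
    code = {(0, 0, 0, 0, 0)}
    for b in basis:
        code |= {gf2_add(c, b) for c in code}

    # Code distance via (9.3): weigh only the M - 1 non-zero words
    # (hamming_weight as defined earlier).
    print(min(hamming_weight(u) for u in code if any(u)))   # 2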

To get the idea of how the error-detecting codes of the 2G and 3G wireless standards are designed, a polynomial description of linear codes is a good aid.

Let us associate with a codeword $u = (u_0, u_1, \ldots, u_{n-1})$ the code polynomial $u(z)$ of a dummy variable $z$, arranged as:

$$u(z) = u_{n-1}z^{n-1} + u_{n-2}z^{n-2} + \cdots + u_0$$

where by way of agreement we put $z^0 = 1$. This polynomial representation, which is widely used in coding theory, is just a form of the z-transform underlying discrete linear system analysis, discrete signal processing, digital filtering etc. [2,7]. One-to-one correspondence between the sets of codewords and code polynomials means that the sum of two code polynomials $u(z)$, $v(z)$ of a linear code $U$ is again a code polynomial of the same code. Specifically, if $u(z) = \sum_{i=0}^{n-1} u_i z^i$ and $v(z) = \sum_{i=0}^{n-1} v_i z^i$ are code polynomials of words $u$, $v$ of a linear code $U$, then $u(z) + v(z) = \sum_{i=0}^{n-1} (u_i + v_i) z^i$ is a code polynomial of the word $u + v \in U$, where addition of coefficients follows the rules of the field they belong to, i.e. in our case GF(2).

Polynomial arithmetic used in coding analysis and design includes two more operations: multiplication and division with remainder. The rules of these operations are universal regardless of the field of the polynomial coefficients, but dealing now with binary codes we use the terms of binary arithmetic. Consider an arbitrary (not necessarily code) binary polynomial $a(z)$, i.e. one with coefficients from GF(2). The highest power of $z$ in this polynomial holding a non-zero coefficient is called the degree of $a(z)$, denoted $\deg a(z)$. Let $a(z)$, $b(z)$ be two binary polynomials with $\deg a(z) = m$, $\deg b(z) = n$. Then their product $a(z)b(z)$ is a polynomial of degree $m + n$ obtained by extending the commutativity ($z^i a = a z^i$) and distributivity laws to operations involving the formal variable $z$ and gathering together the coefficients of equal powers of $z$:

$$\begin{aligned}
a(z)b(z) &= (a_m z^m + a_{m-1} z^{m-1} + \cdots + a_0)(b_n z^n + b_{n-1} z^{n-1} + \cdots + b_0) \\
&= a_m b_n z^{m+n} + (a_m b_{n-1} + a_{m-1} b_n) z^{m+n-1} + \cdots + (a_1 b_0 + a_0 b_1) z + a_0 b_0 \\
&= \sum_{k=0}^{m+n} \left( \sum_{i=0}^{k} a_i b_{k-i} \right) z^k
\end{aligned}$$

Certainly, we put $z^m z^n = z^{m+n}$, all operations over $a_i$, $b_i$ are performed in GF(2), and in the last internal sum the coefficients $a_i$, $b_i$ whose indexes become negative or exceed the polynomial degree should be set equal to zero. Take for example the binary polynomials $a(z) = z^4 + z^3 + 1$, $b(z) = z^2 + z + 1$; then their product is $a(z)b(z) = z^6 + z^3 + z^2 + z + 1$.
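Binary polynomials map naturally onto machine integers, bit $i$ holding the coefficient of $z^i$, so that coefficient addition becomes carry-free XOR. A minimal multiplication sketch under this (purely illustrative) encoding:

    def gf2_poly_mul(a, b):
        """Product of binary polynomials a(z) and b(z), each encoded as an
        integer whose bit i is the coefficient of z^i; coefficient
        addition is carry-free XOR, i.e. GF(2) arithmetic."""
        result = 0
        while b:
            if b & 1:
                result ^= a      # add shifted copy of a(z): GF(2) addition = XOR
            a <<= 1              # multiply a(z) by z
            b >>= 1
        return result

    a = 0b11001   # z^4 + z^3 + 1
    b = 0b111     # z^2 + z + 1
    print(bin(gf2_poly_mul(a, b)))   # 0b1001111, i.e. z^6 + z^3 + z^2 + z + 1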

The algorithm of dividing a dividend $a(z)$ by a divisor $b(z)$ with remainder looks as follows:

$$a(z) = q(z)b(z) + r(z) \qquad (9.4)$$

where $q(z)$ is the quotient and $r(z)$ the remainder. The uniqueness of $q(z)$, $r(z)$ is secured by the relations $\deg a(z) = \deg q(z) + \deg b(z)$ and $\deg r(z) < \deg b(z)$. The algorithm (9.4) is similar to the 'school' rule of dividing integers with remainder, the polynomial degree replacing the number magnitude. One of the techniques realizing this operation is 'long division', i.e. successive computation of a remainder and division of it by the divisor until the remainder degree becomes smaller than the degree of the divisor. At the first iteration $b(z)$ is multiplied by $z$ raised to the power equalizing the degree of the product with that of the dividend $a(z)$. Subtraction (equivalent to addition in GF(2)) of the product obtained from $a(z)$ results in the first remainder, which at the second iteration plays the role of the dividend, and so forth. Let us illustrate this with an example.


Example 9.2.1. Suppose $a(z) = z^4 + z^3 + 1$, $b(z) = z^2 + z + 1$, and apply the long division algorithm. At the first iteration $b(z)$ is multiplied by $z^2$ and the product is subtracted (added in GF(2)) from $a(z)$; at the second, $b(z)$ itself is subtracted from the intermediate remainder:

$$\begin{aligned}
(z^4 + z^3 + 1) + z^2(z^2 + z + 1) &= z^2 + 1,\\
(z^2 + 1) + 1\cdot(z^2 + z + 1) &= z.
\end{aligned}$$

After two iterations we have $q(z) = z^2 + 1$, $r(z) = z$, so that division with remainder results in $z^4 + z^3 + 1 = (z^2 + 1)(z^2 + z + 1) + z$.

Similar to integers, we say that $a(z)$ is divisible by $b(z)$ (or $b(z)$ divides $a(z)$) when the remainder is zero, i.e. $a(z) = q(z)b(z)$.
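Long division takes only a few lines in the same integer encoding used above. The sketch below (illustrative, not from the book) reproduces Example 9.2.1:

    def gf2_poly_divmod(a, b):
        """Divide binary polynomial a(z) by b(z) (integer encoding as
        above) and return the pair (quotient, remainder) of (9.4)."""
        assert b != 0
        q, deg_b = 0, b.bit_length() - 1
        while a and a.bit_length() - 1 >= deg_b:
            shift = (a.bit_length() - 1) - deg_b
            q ^= 1 << shift          # add z^shift to the quotient
            a ^= b << shift          # subtract (= add) z^shift * b(z)
        return q, a

    # Example 9.2.1: a(z) = z^4 + z^3 + 1, b(z) = z^2 + z + 1
    q, r = gf2_poly_divmod(0b11001, 0b111)
    print(bin(q), bin(r))   # 0b101 0b10, i.e. q(z) = z^2 + 1, r(z) = z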

Consider now a linear code $U$ of length $n$ with all code polynomials divisible by a fixed polynomial $g(z)$ of degree $r \geq 1$. Any code polynomial of $U$ then has the form $u(z) = b(z)g(z)$, and since there are $2^{n-r}$ different factors $b(z)$ securing a product degree no greater than $n - 1$, such a code can include $2^{n-r}$ codewords at most. In fact, it is always in one's power to go the reverse way and build up a code with this maximal number of words, i.e. transmitting $k = n - r$ information bits, a polynomial $g(z)$ of fixed degree $r$ given. For that it is enough to use $k = n - r$ data bits $b_0, b_1, \ldots, b_{k-1}$ as coefficients of a data polynomial $b(z) = b_{k-1}z^{k-1} + b_{k-2}z^{k-2} + \cdots + b_0$ and construct the corresponding code polynomial as the product $u(z) = b(z)g(z)$. Then the $2^k$ different data $k$-bit blocks are in one-to-one correspondence with the same number of code polynomials of degree no greater than $n - 1$. The linearity of this code can be checked readily (Problem 9.11). The polynomial $g(z)$ in such a construction is called the generator polynomial of $U$. Note that the number of redundant or check symbols of such a code always equals $r$, i.e. the degree of the generator polynomial.

When a data polynomial $b(z)$ is multiplied by the generator polynomial $g(z)$ directly, the codewords corresponding to the polynomials $u(z) = b(z)g(z)$ are non-systematic, i.e. the data bits are not explicitly seen in them. To come to a systematic code, in which, e.g., the last $k$ binary symbols are the information bits themselves and the $r = n - k$ first ones are check symbols, some rearrangement of the codewords may be done. The complete set of codewords in so doing remains the same and only the mapping of data bits onto codewords alters. Let us multiply a data polynomial $b(z)$ by $z^r = z^{n-k}$, coming to a polynomial $z^{n-k}b(z)$ of degree no greater than $n - 1$. If the remainder $r(z)$ of its division by $g(z)$ is discarded (just added to $z^{n-k}b(z)$ in the binary case), it becomes divisible by the generator polynomial $g(z)$, i.e. becomes a code polynomial. The latter:

$$u(z) = z^{n-k}b(z) + r(z)$$
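This construction is essentially the mechanism of systematic cyclic (CRC-type) encoding. A sketch reusing gf2_poly_divmod from above, with an arbitrarily chosen small generator polynomial for the demo (names and parameters are illustrative):

    def systematic_encode(b, g):
        """Systematic codeword u(z) = z^(n-k) b(z) + r(z): shift the data
        polynomial into the k highest positions, then fill the r = deg g(z)
        check positions with the remainder of division by g(z)."""
        r_deg = g.bit_length() - 1            # r = n - k check symbols
        shifted = b << r_deg                  # z^(n-k) * b(z)
        _, rem = gf2_poly_divmod(shifted, g)
        return shifted ^ rem                  # divisible by g(z) by construction

    # Demo with data bits 1011 and generator g(z) = z^2 + z + 1:
    u = systematic_encode(0b1011, 0b111)
    print(bin(u))                             # 0b101101: data bits on top, checks below
    print(gf2_poly_divmod(u, 0b111)[1])       # 0, i.e. u(z) is divisible by g(z)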