
286 Spread Spectrum and CDMA

Example 9.2.4. Several CRCs are used in 2G and 3G specifications [18, 69, 92]. The exemplary one entering all three standards (cdmaOne, UMTS and cdma2000) has the generator polynomial

g(z) = z^16 + z^12 + z^5 + 1 = (z + 1)g_1(z),

where g_1(z) = z^15 + z^14 + z^13 + z^12 + z^4 + z^3 + z^2 + z + 1 is primitive. Similarly, the generator polynomials of the other CRCs of those standards (of degrees 30, 24, etc.) factor into the binomial z + 1 and a primitive polynomial.

9.3 Convolutional codes

Convolutional codes are in common use in modern telecommunications as an effective tool for securing reliable data transmission over noisy channels. Belonging to the more general class of tree codes, they are distinguished within it by the linearity of the encoding algorithm. The boundary between convolutional and block codes is rather fuzzy: any convolutional code may be treated as a block code of suitably large length. The peculiarity of convolutional codes, and the reason for their extreme popularity, is better sought in their recurrent nature, which allows a much more feasible error-correction decoding procedure (the Viterbi algorithm) than is available for other codes.

9.3.1 Convolutional encoder

The idea of convolutional encoding may in general terms be described as follows. Take a block (vector) of ν_c consecutive source bits and convert it linearly into n > 1 output binary code symbols occupying the time space of one source bit. Linearity as applied to vectors with components from GF(2) means just modulo-2 summation of selected components. After this, update the block of source bits, inserting one new bit and discarding the oldest. We again have a block of ν_c source bits, this time lagging the initial one by one bit (and containing ν_c − 1 former bits and one new one), which is encoded into the next n code symbols. These steps are executed continuously one by one, each time involving a new bit and dropping the oldest. Figure 9.2, where ν_c = 3, n = 3, illustrates the procedure: the encoder watches the source bit stream through a sliding window of width ν_c and encodes all bits currently seen into n code symbols. After every step the window moves by one source bit and the next step is accomplished. The number of source bits ν_c determining the

Figure 9.2 Convolutional encoding (a sliding window of ν_c bits moves along the bit stream by one bit duration per step; each window position produces n code symbols of the codestream)

Channel coding in spread spectrum systems 287

Figure 9.3 Convolutional encoder (source bits b_0, b_1, ... enter a shift register of ν_c − 1 cells; the register outputs and the encoder input feed n modulo-2 adders numbered 1, 2, ..., n, whose outputs u_i^1, u_i^2, ..., u_i^n are multiplexed into the code-symbol stream u_0^1, u_0^2, ..., u_0^n, u_1^1, ...)

code symbols at one step is called the constraint length. The principle described can be implemented as shown in Figure 9.3, where the shift register containing ν_c − 1 flip-flops stores the ν_c − 1 previous source bits. Along with the incoming bit, they are fed into the linear logic circuit consisting of n modulo-2 adders. The output of every flip-flop and the encoder input may or may not be connected to each adder (that is why the links are marked as dashed lines), the scheme of connections determining the concrete dependence of output code symbols on ν_c source bits, i.e. the encoding rule. With a current source bit b_i arriving at the input, n code symbols u_i^1, u_i^2, ..., u_i^n appear in parallel at the adders' outputs. After every clocking, the bit pattern in the register shifts to the right by one cell, preparing the circuit to generate the next n code symbols. The output switch, running through n positions during one bit space, converts the code symbols from a parallel to a serial pattern, creating the output codestream u_0^1, u_0^2, ..., u_0^n, u_1^1, u_1^2, ..., u_1^n, .... It is seen that the encoder of Figure 9.3 in the steady state responds to every new source bit with n code symbols (see also Figure 9.2), so that its code rate R_c measured in bits per code symbol is 1/n. The example below helps to explain the principle of convolutional encoding.
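The sliding-window procedure above can be sketched in a few lines of Python. This is our own minimal illustration, not the book's: the connection of each adder to the window positions is given as a 0/1 mask (playing the role of the g_t^l coefficients), and bits are kept as plain lists.

```python
# Sketch of a rate-1/n convolutional encoder in the spirit of Figure 9.3.
# masks[l] is the connection mask of the (l+1)-th adder: entry t is 1 if the
# adder is connected to window position t (t = 0 is the encoder input,
# t > 0 the t-th flip-flop of the shift register).

def conv_encode(bits, masks):
    """Encode a list of source bits; returns the serial code-symbol stream."""
    nu_c = len(masks[0])          # constraint length
    reg = [0] * (nu_c - 1)        # nu_c - 1 flip-flops, initially all zero
    out = []
    for b in bits:
        window = [b] + reg        # the nu_c source bits currently seen
        for g in masks:           # one modulo-2 adder per output symbol
            out.append(sum(w & c for w, c in zip(window, g)) % 2)
        reg = [b] + reg[:-1]      # shift the register right by one cell
    return out

# Encoder of Figure 9.4: g1(z) = 1 + z + z^2, g2(z) = 1 + z^2
code = conv_encode([1, 0, 1, 0, 0, 1, 0, 0], [[1, 1, 1], [1, 0, 1]])
# -> 1110001011111011, the codestream of Example 9.3.1
```

Feeding the bit stream 10100100 of Example 9.3.1 through this sketch reproduces the multiplexed codestream 1110001011111011 quoted there.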

Example 9.3.1. Figure 9.4 illustrates the implementation of a convolutional encoder with constraint length ν_c = 3 and rate R_c = 1/2. Input bits b_0, b_1, ... produce two streams of code

Figure 9.4 Convolutional encoder of rate 1/2 (source bits b_0, b_1, ... enter a two-cell shift register; the upper adder forms u_i^1, the lower adder forms u_i^2, and the two streams are multiplexed into u_0^1, u_0^2, u_1^1, u_1^2, ...)


symbols u_0^1, u_1^1, ... and u_0^2, u_1^2, ..., which are then multiplexed so that u_i^1 and u_i^2 occupy, respectively, even and odd positions in a common codestream. For instance, the bit stream {b_i} = 10100100... produces the sequences {u_i^1} = 11011111... and {u_i^2} = 10001101..., which are then multiplexed into the codestream {(u_i^1, u_i^2)} = 1110001011111011....

The following question sounds natural: are rates R_c other than 1/n, i.e. equal to k/n with 1 < k < n, possible with convolutional encoding? There are two classical ways of solving this task. The first just generalizes the principle above: at every step kν_c bits rather than ν_c are linearly transformed into n code symbols, after which the k oldest bits (instead of one) are replaced by k new ones and the encoder proceeds to the next step. The second way, called puncturing, uses a rate 1/n code as raw material and removes some of its code symbols according to a pre-assigned pattern. Puncturing, when arranged properly, reduces the number of code symbols per data bit, providing rate k/n. For reasons of implementation feasibility puncturing is frequently considered preferable, which accentuates further the dominant interest in codes of rate 1/n. We will follow this line and focus henceforth only on convolutional codes of rate 1/n.
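Puncturing is easy to picture with a small sketch. The function and the particular puncturing pattern below are illustrative, not taken from the book: a periodic 0/1 pattern marks which symbols of the rate-1/2 mother codestream are transmitted.

```python
# Sketch of puncturing: thin a mother codestream by a periodic pattern.
def puncture(symbols, pattern):
    """Keep symbols[i] whenever pattern[i % len(pattern)] == 1."""
    return [s for i, s in enumerate(symbols) if pattern[i % len(pattern)]]

# A rate-1/2 mother code emits 4 symbols per 2 data bits. Keeping 3 of
# every 4 symbols (pattern 1,1,1,0) leaves 3 symbols per 2 data bits,
# i.e. rate 2/3.
mother = [1, 1, 1, 0, 0, 0, 1, 0]          # 8 symbols for 4 data bits
punctured = puncture(mother, [1, 1, 1, 0])  # 6 symbols for 4 data bits
```

The decoder knows the pattern and treats the removed positions as erasures, which is why the same rate-1/n trellis machinery still applies.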

Clearly, a circuit including a shift register and an individual modulo-2 adder with all its connections is nothing but an FIR filter (see Figure 6.20) outputting the convolution of the input bit stream with the filter pulse response, which explains the name of the codes under study. This also underlies one of the convenient ways to describe a convolutional encoder formally. The convolution relating a code symbol u_i^l (i.e. the one appearing at the lth adder output when bit b_i arrives) to the input bit stream is:

 

u_i^l = Σ_{t=0}^{ν_c−1} b_{i−t} g_t^l,  i = 0, 1, ...;  l = 1, 2, ..., n    (9.6)

 

where g_t^l = 1 if the lth adder is connected to the tth flip-flop (t = 0 corresponding to the encoder input) and g_t^l = 0 otherwise; b_i = 0 whenever i < 0. An appropriate frequency-domain instrument for discrete systems is the z-transform, which was mentioned in the previous section. A convolution corresponds in the z-domain to a product of z-transforms, so that (9.6) takes the equivalent form:

u^l(z) = Σ_{i=0}^{∞} u_i^l z^i = b(z) g^l(z),  l = 1, 2, ..., n    (9.7)

 

 

where b(z) = Σ_{i=0}^{∞} b_i z^i is the z-transform of the input bit stream and:

 

g^l(z) = g_0^l + g_1^l z + ... + g_{ν_c−1}^l z^{ν_c−1},  l = 1, 2, ..., n    (9.8)

is the transfer function of the lth FIR filter (i.e. the one forming the lth code symbol), also called the lth generator polynomial of the convolutional code. The set of n generator polynomials determines a convolutional code exhaustively, since their non-zero coefficients specify the connections of the adders to the shift register.


Example 9.3.2. The encoder shown in Figure 9.4 has generator polynomials g^1(z) = 1 + z + z^2 and g^2(z) = 1 + z^2. Make sure that the sequences of code symbols cited in Example 9.3.1 may be obtained from (9.7).
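Relation (9.7) can be checked numerically: multiplying b(z) by each generator polynomial over GF(2) must reproduce the two symbol streams of Example 9.3.1. The helper function below is our own sketch; polynomials are coefficient lists with the z^0 term first.

```python
# Verifying Example 9.3.2 via (9.7): u^l(z) = b(z) g^l(z) over GF(2).
def poly_mul_gf2(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists (z^0 first)."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] ^= ai & bj   # addition in GF(2) is XOR
    return prod

b_z = [1, 0, 1, 0, 0, 1, 0, 0]       # b(z) for the bit stream 10100100
u1 = poly_mul_gf2(b_z, [1, 1, 1])    # g^1(z) = 1 + z + z^2
u2 = poly_mul_gf2(b_z, [1, 0, 1])    # g^2(z) = 1 + z^2
# u1[:8] -> 1,1,0,1,1,1,1,1 and u2[:8] -> 1,0,0,0,1,1,0,1,
# the streams {u_i^1} and {u_i^2} of Example 9.3.1
```

The first eight product coefficients coincide with the streams 11011111 and 10001101, as (9.7) predicts.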

Putting g^1(z) = 1 leads to a systematic convolutional code, in which the source data bits are directly visible at definite positions. The trouble, however, is that within the fixed structure of Figure 9.3 systematic codes are usually not the best as regards error-correction capability (see Problem 9.15). This motivates modifying the shift register of a convolutional encoder into a feedback structure when the systematic property is critical. We will revisit this matter in more detail when familiarizing ourselves with turbo codes in Section 9.4.1.

9.3.2 Trellis diagram, free distance and asymptotic coding gain

The shift register of a convolutional encoder has 2^{ν_c−1} possible states, and there are only two states to which it can pass from a current state after clocking. It is the incoming source bit b_i that selects one of these two paths. When the state of the register on the ith clock interval is (b_{i−1}, b_{i−2}, ..., b_{i−ν_c+1}), the next state will be either (0, b_{i−1}, ..., b_{i−ν_c+2}), if the incoming source bit b_i = 0, or (1, b_{i−1}, ..., b_{i−ν_c+2}), if b_i = 1. Similarly, the register comes to the state (b_i, b_{i−1}, ..., b_{i−ν_c+2}) if the previous state was either (b_{i−1}, b_{i−2}, ..., b_{i−ν_c+2}, 0) or (b_{i−1}, b_{i−2}, ..., b_{i−ν_c+2}, 1). To depict schematically all these details of register behaviour, the trellis is an appropriate tool. It includes two columns of 2^{ν_c−1} nodes, the left column for the current state and the right for the next one. Branches (arrows) go out from each node of the left column to two nodes of the right, solid and dashed branches showing the paths selected by an incoming bit zero and one, respectively. In the same way, two branches enter every node of the right column, both being either solid (zero input bit) or dashed (input bit equal to one). Every branch is labelled by an n-tuple, which is nothing but the group of n code symbols issued by the encoder when an input source bit directs it from one state to another.
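The trellis just described is a finite table: for every (state, input bit) pair it records the next state and the n-tuple labelling the branch. A small sketch of building that table (our own representation, with a state as the tuple of register cells, leftmost cell first):

```python
# Sketch: the trellis transition table of a rate-1/n convolutional encoder.
def trellis(masks):
    """Return {(state, input_bit): (next_state, branch_symbols)}."""
    nu_c = len(masks[0])
    table = {}
    for s in range(2 ** (nu_c - 1)):
        # unpack integer s into the register cells, leftmost cell first
        reg = [(s >> (nu_c - 2 - t)) & 1 for t in range(nu_c - 1)]
        for b in (0, 1):
            window = [b] + reg
            symbols = tuple(sum(w & c for w, c in zip(window, g)) % 2
                            for g in masks)
            next_state = tuple([b] + reg[:-1])
            table[(tuple(reg), b)] = (next_state, symbols)
    return table

# Encoder of Figure 9.4: from state (1,1) a zero input bit leads to (0,1)
# along a branch labelled 01, as in Example 9.3.3.
tab = trellis([[1, 1, 1], [1, 0, 1]])
```

Each state has exactly two outgoing entries (for b = 0 and b = 1), matching the two branches leaving every left-column node.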

The example below clarifies the method of building the trellis for the convolutional encoder of Figure 9.4.

Example 9.3.3. The four states of a two-cell register starting with the left flip-flop are: (00), (10), (11) and (01). Figure 9.5 presents the trellis built as described. For instance, the branch from

Figure 9.5 Trellis of encoder of Figure 9.4 (left and right columns hold the states (00), (10), (01), (11); the branches carry the code-symbol labels 00, 11, 10, 01)


(10) to (01) is solid, while the one to (11) is dashed; code symbols 01 label the branch leading from (11) to (01), since with ones in both cells and a zero input bit the upper adder outputs zero, while the lower outputs one, etc.

During every clock interval the encoder moves along some branch of the trellis, issuing the code symbols labelling that branch. Tracing this process in a diagram presented as in Figure 9.5, one has to jump from the right column back to the same node in the left column at every next step. To escape this, let us just repeat the trellis as many times as is wished, using the right column of the current step as the left one of the following step. Then encoding is identified with a motion along the obtained trellis diagram, the current input bit directing the encoder along a solid or dashed branch depending on whether it is zero or one. To illustrate this, let us turn to Figure 9.6, which presents a trellis diagram for the code of Example 9.3.1. Every candidate sequence of input data bits selects a specific path on the trellis diagram, which we may trace, e.g. for the sequence {b_i} = 10100100.... Its first bit is 1, which directs the encoder from the node (00) to the node (10), issuing output code symbols 11; the second bit 0 transfers the encoder from (10) to (01), generating code symbols 10; the third changes the state from (01) to (10), outputting 00, and so forth. The thick line shows the resulting codeword: 1110001011111011.

It was already noted that one might treat a convolutional code as a block code of suitably long length. The codewords of this block code are just the different paths on the trellis diagram, the minimum Hamming distance over all pairs of them giving the code distance. In its turn, code linearity simplifies the task of finding the code distance: on the strength of Proposition 9.2.3, the minimum distance between paths is the least Hamming weight over all non-zero words. Suppose now that an encoded bit stream is terminated after some large enough (no smaller than ν_c) number of bits and padded with ν_c − 1 tail zeros to set the register to the all-zero state. Realized practically (one example is cdmaOne), this padding does not introduce material overhead whenever the length of the encoded bit stream is many times larger than the constraint length. On the other hand, the padding makes all the paths converge to the all-zero register state, as Figure 9.6 shows for the code of Example 9.3.1. If we take a padded bit stream starting

Figure 9.6 Trellis diagram of encoder of Figure 9.4 (the trellis of Figure 9.5 repeated step by step, with branch labels 00, 11, 10, 01 as before; the thick line marks the codeword of the bit stream 10100100)


with some number n_0 of zeros followed by the bit '1' and move these n_0 initial zeros to the end of the stream, we just shift the corresponding path on the diagram by n_0 steps to the left with no change of its weight. After such a shift the path diverges from the zero path at the very first step and merges with it no later than n_0 steps before the last tail zero. Along this path several returns to zero and further divergences may happen (see Figure 9.7), each of them only increasing the path weight. Since the objective is to find the minimum weight, any deviations from the zero path but the first should be ignored. Summarizing, we conclude that to find the distance of a convolutional code one ought to investigate only the paths deflecting from the zero path at the origin of the trellis diagram and having no deviation from the zero path after the first merging with it. In the theory of convolutional codes this entity is traditionally called the free distance. Denoting it d_f, we see, for example, that among all the paths in Figure 9.6 with only a single deviation from the zero path, the codeword 11101100... encoding the bit stream 100... has minimal weight, so that d_f = 5. Certainly, free distance d_f guarantees correcting any ⌊(d_f − 1)/2⌋ symbol errors (see Proposition 9.2.2); however, typically many patterns with a greater number of errors are corrected, too. There are only a few examples of effective algebraic rules for convolutional encoding. The majority of known convolutional codes with good correction capability have been found by computer search [31, 33, 93].
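The recipe above — diverge from the zero state once, accumulate Hamming weight along the branches, stop at the first return to the zero state — is a shortest-path problem on the trellis and can be sketched directly. This is our own illustration (Dijkstra-style search; state and mask representations as before), not an algorithm given in the book.

```python
# Sketch: free distance as a minimum-weight path that leaves the all-zero
# state and returns to it for the first time.
import heapq

def free_distance(masks):
    nu_c = len(masks[0])
    zero = (0,) * (nu_c - 1)

    def step(state, b):
        """Next state and Hamming weight of the branch for input bit b."""
        window = (b,) + state
        w = sum(sum(x & c for x, c in zip(window, g)) % 2 for g in masks)
        return (b,) + state[:-1], w

    start, w0 = step(zero, 1)          # forced divergence: first input bit 1
    heap = [(w0, start)]
    best = {start: w0}
    while heap:
        w, state = heapq.heappop(heap)
        if state == zero:              # first merging with the zero path
            return w
        for b in (0, 1):
            nxt, dw = step(state, b)
            if w + dw < best.get(nxt, float("inf")):
                best[nxt] = w + dw
                heapq.heappush(heap, (w + dw, nxt))
    return None

# For the code of Figure 9.4 the search returns 5, agreeing with the
# minimum-weight codeword 11101100... found on the trellis diagram.
```

For the encoder of Example 9.3.1 this confirms d_f = 5 without enumerating paths by hand.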

Due to the specificity of the encoding algorithm, finding all possible weights of words (the weight spectrum) of an arbitrary convolutional code proves analytically less difficult than it is for many linear block codes. In particular, directly from the trellis (or, equivalently, the state diagram) a system of linear equations is set up, the solution of which leads to an explicit expression for the weight spectrum [2, 7, 93].

Coding gain, which shows how many times the signal energy per bit or signal power can be reduced as a result of encoding, the error probability being fixed, is a universal measure characterizing the efficiency of one or another code. We discussed this notion in Section 2.6 as applied to orthogonal signalling and noted that the asymptotic coding gain for the case of the AWGN channel is the gain in Euclidean distance. With BPSK transmission every discrepancy in the symbols of two signals adds 4E_s to their squared Euclidean distance, where E_s is the symbol energy. There is a pair of words of a convolutional code having d_f different symbols and no pair with a smaller discrepancy (Hamming distance). Therefore, the minimum squared Euclidean distance between BPSK-mapped convolutional codewords is d_{min,cc}^2 = 4d_f E_s. At the same time (see Section 2.6) the similar quantity for uncoded transmission is d_{min,u}^2 = 4E_b, resulting in the asymptotic coding gain of a convolutional code:

G_a = d_{min,cc}^2 / d_{min,u}^2 = d_f E_s / E_b = d_f R_c

since E_s = R_c E_b.
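As a quick numerical check (our own arithmetic, not from the book), the code of Figure 9.4 with d_f = 5 and R_c = 1/2 gives:

```python
# Asymptotic coding gain G_a = d_f * R_c for the code of Figure 9.4,
# expressed both as a ratio and in decibels.
import math

d_f, R_c = 5, 0.5
G_a = d_f * R_c                   # 2.5
G_a_dB = 10 * math.log10(G_a)     # about 3.98 dB
```

So this simple rate-1/2, ν_c = 3 code buys roughly 4 dB of energy saving asymptotically over uncoded BPSK.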

Figure 9.7 Diversion and merging of paths (labels: zero path, first diversion, first merging, second diversion, second merging)