
9 Channel coding in spread spectrum systems

9.1 Preliminary notes and terminology

In the course of transmission, storage or processing, data need to be represented in some appropriate form. In digital communications the primary message generated by a source may be thought of as a sequence of data bits, or a bit stream. It is the mapping of the bit stream onto a sequence of symbols of some predetermined alphabet that is traditionally called coding. The goals of coding may differ. For example, the terms source coding or data compression mean removal of redundancy from a bit stream to represent the source data in the most economical form. Another case of coding is encryption, which is performed to protect data from unintended interception or forging. The subject of this chapter is channel coding, aimed at making data transmission over the communication channel as immune as possible to the corrupting effects of unavoidable channel interference. The particular cases considered in Sections 2.3 and 2.5–2.7 show how important it is to find a proper signalling format for overcoming the degrading influence of channel noise. Along with modulation, channel coding governs reliable data transmission over a noisy channel.

For over five decades of its existence, channel coding theory has been directed and motivated by Shannon's fundamental capacity theorem mentioned in Chapter 1. According to this theorem, any channel is characterized by a constant C (measured in bits per second) called capacity, which establishes the upper bound on the achievable rate R of information transmission over the channel. Whenever R > C, no signalling mode can secure arbitrarily reliable data transmission. On the other hand, when R < C one can always find a code guaranteeing as small a probability of mistaking one message for another at the receiving end as desired (see Figure 1.1). Shannon's capacity theorem, being a purely mathematical existence assertion, does not point to any concrete coding algorithm achieving the performance it promises. Moreover, its proof, based on averaging the error probability over all possible channel codes, shows that almost all codes of
sufficient length are good from this angle. And yet, finding specific code rules allowing Shannon's limit to be approached remained an impenetrable problem up to the discovery of turbo codes in 1993, although many important and widely utilized results had been obtained in pursuit of this target.
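Although the chapter quotes no formula at this point, the best-known concrete instance of the theorem is the capacity of the AWGN channel, C = W log2(1 + S/N). The short sketch below is our illustration, not part of the original text; the bandwidth (a cdmaOne-like 1.25 MHz) and the SNR are assumed values chosen only to make the bound tangible.

```python
import math

def awgn_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon capacity C = W * log2(1 + SNR) of an AWGN channel, in bit/s."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

W = 1.25e6                          # assumed bandwidth, Hz (cdmaOne-like value)
snr_db = 0.0                        # assumed signal-to-noise ratio, dB
C = awgn_capacity(W, 10 ** (snr_db / 10))
print(f"C = {C / 1e6:.2f} Mbit/s")  # ~1.25 Mbit/s: any R < C is achievable,
                                    # while no code is reliable at R > C
```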

Of course, modern coding theory is too sophisticated to permit compressing even its basics into a brief chapter. Such an attempt looks all the more inappropriate given the key role of coding theory in information technology at large, of which spread spectrum communications is only a particular branch. Still, the importance of channel coding in spread spectrum systems is extremely high, since the majority of them are designed to operate in a very noisy environment and, what is more, many of them create strong intra-system interference themselves (MAI in asynchronous CDMA). The MAI effects, unlike natural (thermal) noise, cannot be overcome by brute force, i.e. increasing signal power, since all users have equal rights, and a gain in SIR obtained in this way for one of them turns into a loss for the others (see Sections 4.5 and 4.6). This leaves the designer with only two resources for withstanding MAI: increasing the spreading factor and employing powerful channel codes. To use the available space reasonably, we limit this chapter to the coding issues related to the commercial 2G and 3G spread spectrum standards cdmaOne, UMTS and cdma2000. Accordingly, the mathematical tools, notation and manner of description below are narrowly adapted to this particular task in the most economical way. We refer readers interested in a broader scope to books on coding theory (e.g. [31,33,91]).

Let us start with a basic classification of channel codes. The first feature distinguishing them is alphabet size, according to which we speak of binary, ternary, etc. codes. Although the range of applications of non-binary (e.g. Reed–Solomon or Ungerboeck) codes is quite wide nowadays, we concentrate only on binary ones, which are used in the specifications mentioned above. Another form of classification is the way in which information data are mapped onto the codewords, or code vectors (i.e. sequences of code symbols carrying the transmitted message). The point is that any channel coding consists of inserting some redundancy into the message, making the transmitted signals more distant from each other and thereby reducing the risk of confusion between them. Depending on the way of adding this redundancy, all channel codes are classified into block or tree (trellis) codes. A characteristic of block codes is segmentation of the source bitstream, which is divided into blocks of k information bits, every block being encoded into n > k binary symbols. In so doing, the redundant n − k symbols serve to protect only the k source bits of their own codeword. Codewords of tree (e.g. convolutional) codes have a different structure: a continuous source bitstream is encoded into an infinite stream of code symbols (a codestream) with no fragmentation (see details in Section 9.3).
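To make the block/tree distinction concrete, the sketch below is our own illustration: the (7,4) Hamming code and the rate-1/2, constraint-length-3 convolutional code with octal generators (7, 5) are classical textbook examples, not the codes specified in cdmaOne, UMTS or cdma2000. The block encoder processes each 4-bit block independently, while the convolutional encoder turns an unsegmented bitstream into a codestream.

```python
G = [  # (7,4) Hamming generator matrix: k = 4 info bits -> n = 7 code symbols
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def block_encode(info4):
    """Encode one 4-bit block; the n - k = 3 redundant symbols protect
    only the k source bits of this codeword."""
    return [sum(b * g for b, g in zip(info4, col)) % 2 for col in zip(*G)]

def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder: the bitstream is encoded continuously,
    with no block boundaries (register termination omitted for brevity)."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0b111       # length-3 shift register
        out += [bin(state & g1).count("1") % 2,  # parity on the taps of g1
                bin(state & g2).count("1") % 2]  # parity on the taps of g2
    return out

print(block_encode([1, 0, 1, 1]))        # one 7-symbol codeword per block
print(conv_encode([1, 0, 1, 1, 0, 0]))   # codestream: 2 symbols per data bit
```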

When it arrives at the receiving end, the encoded word should be mapped back onto the transmitted data bits. This operation is called decoding. Physically, owing to modulation, any codeword travels via the channel as some signal. When transmitted over the AWGN (or another state-continuous) channel, the signal gets corrupted by noise whose instantaneous samples are continuous. The optimal (ML) decision strategy of a receiver in the case of Gaussian noise is equivalent to the minimum Euclidean distance rule (see Section 2.1), which means declaring true the signal closest to the observation obtained. This straightforward procedure yields the decoded data bits directly and bears the name (along with its numerous approximations) of soft decoding. The complexity of
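As a minimal illustration of the minimum Euclidean distance rule just stated (our sketch; the BPSK mapping and the noise level are assumptions, and the toy (7,4) Hamming code from the previous example is reused), the brute-force decoder below compares the soft observation against the BPSK images of all 2^k codewords and outputs the data bits of the closest one.

```python
import itertools
import random

G = [[1, 0, 0, 0, 1, 1, 0], [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1], [0, 0, 0, 1, 1, 1, 1]]

def encode(info):
    """(7,4) Hamming encoder, as in the previous sketch."""
    return tuple(sum(b * g for b, g in zip(info, col)) % 2 for col in zip(*G))

# Codebook: all 2**k = 16 codewords, indexed by their info bits.
codebook = {m: encode(m) for m in itertools.product((0, 1), repeat=4)}

def bpsk(word):                          # map code symbol 0 -> +1, 1 -> -1
    return [1.0 - 2.0 * b for b in word]

def ml_soft_decode(r):
    """Declare true the codeword whose BPSK image is closest (in Euclidean
    distance) to the soft observation r; return its data bits directly."""
    return min(codebook,
               key=lambda m: sum((ri - si) ** 2
                                 for ri, si in zip(r, bpsk(codebook[m]))))

sent = (1, 0, 1, 1)
r = [s + random.gauss(0.0, 0.6) for s in bpsk(codebook[sent])]  # AWGN samples
print(ml_soft_decode(r))                 # usually recovers (1, 0, 1, 1)
```

Note that the exhaustive search over all 2^k codewords is feasible only for short codes; practical soft decoders rely on structured approximations of this rule.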