
2 Classical reception problems and signal design

It is typical of communication theory to start analysing a system from the receiving end. The aim is usually to design an optimal receiver, which retrieves the information contained in the observed waveform with the best possible quality. Knowing optimal reception processing algorithms depending on a specific transmitted signal structure, it is possible afterwards to design an optimal transmitted signal, i.e. to choose the best means of encoding and modulation. In this chapter we investigate how classical reception problems appeal to the spread spectrum, or, in other words, which of the classical reception problems demand (or not) the involvement of spread spectrum signals. We call reception problems ‘classical’ if they are based on the traditional Gaussian channel model.

2.1 Gaussian channel, general reception problem and optimal decision rules

The following abstract model can describe any information system in which data is transmitted from one point in space to another. There is some source that can generate one of M possible messages. This source may be governed or at least created by some human being, but it may also have a human-independent nature. In any case, each of the M competitive messages is carried by a specific signal, so that there is a set S of M possible signals: S = {s_k(t): k = 1, 2, …, M}. There is no limitation in principle on the cardinality of S, i.e. the number of signals M, and, if necessary, the set S may even be assumed uncountable. The source selects some specific signal s_k(t) ∈ S and applies it to the channel input (see Figure 2.1). At the receiving side (channel output) the observation waveform y(t) is received, which is not an accurate copy of the sent signal s_k(t) but, instead, is the result of s_k(t) being corrupted by noise and interference intrinsic to any real channel. For the receiver there are M competitive hypotheses H_k on which one of the M possible signals was actually transmitted and turned by the channel into this specific observation y(t), and only one of these hypotheses should be chosen as true. Denote the

Spread Spectrum and CDMA: Principles and Applications Valery P. Ipatov

2005 John Wiley & Sons, Ltd

[Figure omitted: the signal s_k(t) enters the Channel, which outputs the observation y(t)]

Figure 2.1 General system model

result of this choice, i.e. the decision, as Ĥ_j, read as 'the decision is made in favour of signal number j'. With this the classical reception problem emerges: what is the best strategy to decide which one of the possible messages (or signals) was sent, based on the observation y(t)?

To answer this question it is necessary to know the channel model. The channel is mathematically described by its transition probability p[y(t)|s(t)], which shows how probable it is for the given input signal to be transformed by the channel into one or another output observation y(t). When the transition probability p[y(t)|s(t)] is known for all possible pairs s(t) and y(t), the channel is characterized exhaustively.

When all source messages are equiprobable (which is typically the case in a properly designed system) the optimum observer’s strategy, securing minimum risk of mistaking an actually sent signal for some other, is the maximum likelihood (ML) rule. According to this rule, after the waveform y(t) is observed the decision should be made in favour of the signal which has the greatest (as compared to the rest of the signals) probability of being transformed by the channel into this very observation y(t).

The primary channel model in communication theory is the additive white Gaussian noise (AWGN) channel or, more simply, the Gaussian channel, in which the transition probability drops exponentially with the growth of the squared Euclidean distance between a sent signal and the output observation:

p[y(t)|s(t)] = k exp[−(1/N₀) d²(s, y)]    (2.1)

where k is a constant independent of s(t) and y(t), N₀ is the one-sided power spectral density of the white noise, and the Euclidean distance from s(t) to y(t) is defined as:

d(s, y) = √( ∫₀ᵀ [y(t) − s(t)]² dt )    (2.2)

The particular importance of the Gaussian model is explained by the physical origin of many real noises. According to the central limit theorem of probability theory, the probability distribution of a sum of a great number of elementary random components, none of which is strongly dependent on the others or prevails over them, approaches the Gaussian law as the number of addends goes to infinity. But thermal noise and many other types of noise typical of real channels are produced precisely as the result of summation of a great many elementary random currents or voltages caused by the chaotic motion of charged particles (electrons, ions etc.).
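As a quick numerical illustration of this point (my own sketch, not from the text), the following Python fragment sums many small, independent uniform contributions, mimicking elementary random currents, and checks that the result behaves like a Gaussian variable: about 68% of the samples fall within one standard deviation of the mean.

```python
import numpy as np

rng = np.random.default_rng(1)
n_components, n_samples = 1000, 20000

# Elementary contributions: uniform on [-0.5, 0.5] (zero mean, variance 1/12);
# none dominates the sum, so the central limit theorem applies.
elementary = rng.uniform(-0.5, 0.5, size=(n_samples, n_components))
noise = elementary.sum(axis=1)           # one "thermal noise" sample per row

sigma = np.sqrt(n_components / 12.0)     # theoretical std of the sum
within_1sigma = np.mean(np.abs(noise - noise.mean()) < sigma)
print(round(float(within_1sigma), 2))    # ≈ 0.68, the Gaussian one-sigma mass
```

The specific component distribution is immaterial; any zero-mean law with the same variance would drive the sum toward the same Gaussian limit.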

When talking about the distance between signals or waveforms, we interpret them as vectors, which is universally accepted in all information-related disciplines. If the reader finds it difficult to imagine the association between signals and vectors, a very simple mental trick may be a useful aid. Imagine discretization of a continuous signal in time, i.e. representing s(t) by samples s_i = s(iT_s), i = 0, 1, …, taken with a sampling period T_s. If the total signal energy is concentrated within the bandwidth W and T_s ≤ 1/2W (ignoring that theoretically no signal is finite in both the time and the frequency domains), the samples s_i represent the original continuous-time signal s(t) exhaustively. With signal duration T there are n = T/T_s such samples altogether, and therefore the n-dimensional vector s = (s_0, s_1, …, s_{n−1}) describes the signal entirely. Having done the same with the observation y(t), we come to its n-dimensional vector equivalent y = (y_0, y_1, …, y_{n−1}) and find the Euclidean distance between the vectors s and y by the Pythagorean theorem for the n-dimensional vector space:

d(s, y) = √( Σ_{i=0}^{n−1} (y_i − s_i)² )

One possible way of finishing the game is letting T_s go to zero. Then the vectors s and y, remaining equivalents of the signal and the observation, become infinite-dimensional (in the limit they simply reproduce s(t) and y(t), since there is no longer any discretization). At the same time, the sum above (ignoring the cofactor) turns into the integral on the right-hand side of equality (2.2). The latter, thereby, is the definition of Euclidean distance for continuous-time waveforms.
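This limiting argument is easy to check numerically. The sketch below (an illustration of mine, for the assumed pair s(t) = t, y(t) = 0 on [0, 1], where d(s, y) = √(1/3)) computes the sampled-vector distance restored with the cofactor √T_s and shows it approaching the continuous-time distance (2.2) as the sampling period shrinks:

```python
import numpy as np

T = 1.0
exact = np.sqrt(1.0 / 3.0)   # d(s, y) for s(t) = t, y(t) = 0: sqrt(integral of t^2)

errors = []
for Ts in (1e-1, 1e-2, 1e-3):
    t = np.arange(0.0, T, Ts)
    s = t                     # sampled s(t) = t
    y = np.zeros_like(t)      # sampled y(t) = 0
    # Discrete counterpart of (2.2): sqrt(Ts) times the vector distance
    d = np.sqrt(Ts * np.sum((y - s) ** 2))
    errors.append(abs(d - exact))

print([round(e, 4) for e in errors])  # the error shrinks as Ts decreases
```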

Now let us come back to the ML rule for the Gaussian channel. According to equations (2.1) and (2.2), the signal likelihood (the probability of being transformed by the channel into the observed y(t)) falls with the Euclidean distance between s(t) and y(t). Therefore, the ML decision in the Gaussian channel can be restated as the minimum distance rule:

d(s_j, y) = min_k d(s_k, y)  ⇒  Ĥ_j is taken    (2.3)

i.e. the decision is made in favour of the signal s_j(t) if it is closest (in terms of Euclidean distance) to the observation y(t) among all M competitive signals (Figure 2.2). Another, more direct, notation of (2.3) is:

ŝ = arg min_{s∈S} d(s, y)

where ŝ is the estimate of the received signal (i.e. the signal declared received).
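A minimal sketch of this receiver on sampled waveforms (my own illustration: the signal set of four orthogonal sinusoid harmonics and the noise level are assumptions, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed signal set: M = 4 sampled sinusoid harmonics on [0, 1)
Ts, T = 1e-3, 1.0
t = np.arange(0.0, T, Ts)
S = np.array([np.sin(2 * np.pi * (k + 1) * t) for k in range(4)])

sent = 2                                           # transmitted signal index
y = S[sent] + 0.5 * rng.standard_normal(t.size)    # AWGN-corrupted observation

# Minimum distance rule (2.3): pick the signal closest to y
distances = np.sqrt(np.sum((y - S) ** 2, axis=1))
decision = int(np.argmin(distances))
print(decision)   # recovers the transmitted index at this noise level
```

At this signal-to-noise ratio the distance margin between the sent signal and its competitors is large, so the decision is reliably correct; raising the noise level makes errors appear, exactly as the Gaussian-channel analysis predicts.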

[Figure omitted: the observation y among the signal vectors s_1, s_2, …, s_M, with d(s_j, y) = min_k d(s_k, y)]

Figure 2.2 Illustration of minimum distance rule


Continuing the geometrical interpretation of signals, we can introduce the signal geometric length (norm) ‖s‖ as its distance from the origin. Then from (2.2) it follows that ‖s‖ = d(s, 0) = √E, where:

E = ∫₀ᵀ s²(t) dt    (2.4)

is the signal energy. Another important geometrical characteristic is the inner (scalar) product (u, v) of two signals u(t), v(t):

(u, v) = ∫₀ᵀ u(t)v(t) dt    (2.5)

which again can be thought of as a limit form of an inner product of two n-dimensional vectors. The same entity may also be calculated through the lengths of the vectors and the cosine of the angle between them: (u, v) = ‖u‖‖v‖ cos θ, where θ is the angle between u and v. Thus the inner product describes the closeness or resemblance between signals: the closer the signals are to each other, with lengths (energies) fixed, the closer cos θ is to one and the greater is the inner product. Because of this the inner product is also called the correlation of the signals.
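The resemblance interpretation is easy to see numerically. In this sketch (my own example signals) the inner product (2.5) is computed as a Riemann sum: a signal correlates maximally with itself, giving its energy, while sin and cos over a full period are orthogonal, i.e. have zero correlation.

```python
import numpy as np

Ts, T = 1e-4, 1.0
t = np.arange(0.0, T, Ts)

def inner(u, v):
    # Discrete counterpart of (u, v) = integral of u(t)v(t) over [0, T]
    return Ts * np.sum(u * v)

u = np.sin(2 * np.pi * t)
v = np.cos(2 * np.pi * t)

print(round(inner(u, u), 3))   # 0.5: maximal self-correlation = energy ||u||^2
print(round(inner(u, v), 3))   # 0.0: orthogonal signals, zero resemblance
```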

In order to highlight the special role of this entity, let us now give a slightly different version of the minimum distance rule. Opening the brackets in (2.2) leads to:

d²(s_k, y) = ∫₀ᵀ y²(t) dt − 2 ∫₀ᵀ y(t)s_k(t) dt + ∫₀ᵀ s_k²(t) dt = ‖y‖² − 2z_k + ‖s_k‖²    (2.6)

where z_k stands for the correlation of the observation y(t) with the kth signal s_k(t):

z_k = (y, s_k) = ∫₀ᵀ y(t)s_k(t) dt    (2.7)

The first summand on the right-hand side of equation (2.6) is fixed for a given observation and therefore does not affect the comparison of distances and the decision on which signal is received. The last term is just the kth signal energy E_k. With this in mind, the distance rule (2.3) can be reformulated as the following correlation decision rule:

 

z_j − E_j/2 = max_k (z_k − E_k/2)  ⇒  Ĥ_j is taken    (2.8)

meaning, in particular, that among M competitive signals of equal energies, the one maximally correlated with the observation y(t) is announced as having actually been received. The last case has a clear physical explanation: preference is simply given to the signal bearing a stronger resemblance to y(t) than all the rest, correlation (inner product) being accepted as the criterion of resemblance.
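The algebraic equivalence of the two rules holds even for unequal signal energies, as the derivation through (2.6) shows. The sketch below (illustrative signals of my own choosing, with deliberately different amplitudes and hence energies) verifies that the correlation rule (2.8) and the minimum distance rule (2.3) select the same signal:

```python
import numpy as np

rng = np.random.default_rng(3)

Ts, T = 1e-3, 1.0
t = np.arange(0.0, T, Ts)
# Four harmonics with different amplitudes, hence different energies E_k
S = np.array([(k + 1) * np.sin(2 * np.pi * (k + 1) * t) for k in range(4)])

y = S[1] + rng.standard_normal(t.size)   # noisy observation of signal 1

E = Ts * np.sum(S ** 2, axis=1)          # signal energies E_k, as in (2.4)
z = Ts * (S @ y)                         # correlations z_k, as in (2.7)

min_dist = int(np.argmin(np.sum((y - S) ** 2, axis=1)))   # rule (2.3)
max_corr = int(np.argmax(z - E / 2))                      # rule (2.8)
print(min_dist, max_corr)                # the two rules always agree
```

The agreement is exact and deterministic: by (2.6) the two criteria differ only by the term ‖y‖², which is the same for every hypothesis.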