
Classical reception problems and signal design

Figure 2.3 Observation scattering and signal design problem

It is interesting to note in passing that these deliberations, although very preliminary, already give a rather clear idea of good signal set design. Look at Figure 2.3, where the signal vectors are depicted. Suppose signal s1 is transmitted and corrupted by the AWGN channel, which adds to s1 the noise vector n. A Gaussian vector n has a symmetrical (spherical) probability distribution dropping exponentially with the length of the vector n, which is readily seen from (2.1) after removing the signal from it (substituting s(t) = 0). Hence, the observation vector y = s1 + n proves to be scattered around s1, as shown in the figure, and, according to the minimum distance rule (2.3), as soon as y comes closer to some other signal than to s1 a wrong decision will happen. To minimize the risk of such an error we should have all the other signals as distant from s1 as possible. Because any one of M signals may be transmitted equiprobably, i.e. be in place of s1, it is clear that all distances d(s_k, s_l), 1 ≤ k < l ≤ M, should be as large as possible. When M is large enough it is not a simple task to maximize all the distances simultaneously, since they can conflict with each other: moving one vector away from another may make the first closer to a third one. Due to this, the problem of designing a maximally distant signal set (entering a wide class of so-called packing problems) is in many cases rather complicated and has found no general solution so far.
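The packing criterion just described (maximize the smallest pairwise distance) is easy to evaluate numerically. Below is a minimal sketch, with an illustrative 2-D signal set that is not from the text, computing the minimum pairwise distance of a candidate set:

```python
import math

def distance(a, b):
    # Euclidean distance between two signal vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def min_pairwise_distance(signals):
    # The figure of merit a packing-style design tries to maximize:
    # the smallest d(s_k, s_l) over all pairs k < l
    return min(distance(signals[k], signals[l])
               for k in range(len(signals))
               for l in range(k + 1, len(signals)))

# Illustrative set of M = 4 unit-energy signals in two dimensions
signals = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
print(min_pairwise_distance(signals))  # adjacent vertices: sqrt(2)
```

A signal design procedure would search over candidate sets for the one maximizing this quantity at fixed energies.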

Note that in what preceded all M signals were by default treated as fully deterministic, i.e. all their parameters are assumed known a priori at the receiving end, the observer being unaware only of which of the M competing signals is received. This model is adequate for many situations in baseband or coherent bandpass signal reception. However, the general thread, with some adjustments, also retains its validity in more complicated scenarios, such as noncoherent reception (Section 2.5).

Having refreshed these basic ideas of optimal reception, we are now ready to get down to specific problems, putting particular emphasis on aspects of signal design and analysing the potential advantages of spread spectrum—or their absence—in various classical reception scenarios.

2.2 Binary data transmission (deterministic signals)

To demonstrate the strong dependence of reception quality on the distances between signals, let us start with the simplest but very typical communication problem of binary data transmission, where one of only M = 2 competitive messages is sent over the channel. Practically, this may correspond to the transmission of one data bit in a system


where no channel coding is used, or one symbol of a binary code in a system with error-correcting code and hard decisions, etc. Numbering the messages 0 and 1 and assuming that signals s0(t) and s1(t) (again deterministic!) are used for their transmission, we can represent the minimum distance decision rule (2.3) as:

$$ d(s_0, y) \underset{\hat H_1}{\overset{\hat H_0}{\lessgtr}} d(s_1, y) \qquad (2.9) $$

where placement of the decision symbols points directly to when one or the other of two decisions is made. The same can be rewritten in correlation-based form following from rule (2.8):

 

 

 

 

 

 

$$ z = z_0 - z_1 \underset{\hat H_1}{\overset{\hat H_0}{\gtrless}} \frac{E_0 - E_1}{2} \qquad (2.10) $$

with correlations z_k, k = 0, 1, of each signal with the observation y(t) defined by equation (2.7), and E_k = ||s_k||^2, k = 0, 1, being the kth signal energy given by (2.4). Optimal rules (2.9) and (2.10) for distinguishing between two signals can be explained geometrically in a very clear way. The two signal vectors s0 and s1 always lie in a signal plane SP. The observation vector y does not necessarily fall onto this plane, but its closeness to one or the other signal is determined by the closeness to them of the projection y′ of y onto SP (see Figure 2.4a). Therefore, we can divide SP into two half-planes by the straight-line boundary passing strictly perpendicular to the straight line connecting the signal vectors,

and base decisions Ĥ0, Ĥ1 on y′ hitting the corresponding half-plane (Figure 2.4b). It is also seen from Figure 2.4b that the probability of mistaking one signal for the other (error probability) depends on the distance between the vectors s0 and s1 in comparison with the range of random fluctuations of y′ caused by channel noise. According to (2.10), the actually received signal s0(t) will be erroneously taken for the wrong one s1(t) if and only if the correlation difference is lower than the threshold (E0 − E1)/2. Therefore, the probability p01 of such an error is found as:

 

 

 

 

 

 

 

 

$$ p_{01} = \Pr\!\left\{ z < \frac{E_0 - E_1}{2} \,\middle|\, s_0(t) \right\} = \int_{-\infty}^{(E_0 - E_1)/2} W(z \mid s_0(t))\, dz \qquad (2.11) $$

 

 

 

 

 

 

 

 

 


Figure 2.4 Signal plane and decision half-planes


 

 

where Pr (AjB) stands for the conditional probability of event A given that event B occurred, and W(zjs0(t)) is the conditional probability density function (PDF) of correlation difference z in (2.10), given that the signal s0(t) is actually received. One of the remarkable features of the Gaussian process is that any linear transform of it again produces a Gaussian process. Therefore z, as a result of a linear transform of the Gaussian observation y(t) (see (2.7) and (2.10)), has the Gaussian PDF:

$$ W(z \mid s_0(t)) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left[ -\frac{(z - \bar z)^2}{2\sigma^2} \right] $$

integration of which according to (2.11) results in:

$$ p_{01} = Q\!\left( \frac{2\bar z - E_0 + E_1}{2\sigma} \right) \qquad (2.12) $$

where

 

$$ Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} \exp\!\left( -\frac{t^2}{2} \right) dt $$

is the complementary error function.
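Numerically, Q(x) is conveniently obtained from the complementary error function available in standard libraries, since Q(x) = erfc(x/√2)/2. A small sketch:

```python
import math

def Q(x):
    # Q(x) = (1/sqrt(2*pi)) * integral of exp(-t^2/2) from x to infinity,
    # expressed through math.erfc: Q(x) = erfc(x / sqrt(2)) / 2
    return 0.5 * math.erfc(x / math.sqrt(2.0))

print(Q(0.0))  # 0.5: half of the Gaussian mass lies above the mean
print(Q(3.0))  # about 1.35e-3: the tail decays very fast
```

This helper is reused in the error-probability formulas derived below.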

The expectation z̄ of z conditioned on the received signal (the bar above a symbol will be used from now on to denote expectation) and the variance σ² = var{z} of z can be found directly from equations (2.7) and (2.10). When the signal s0(t) is assumed true, i.e.

 

 

 

 

 

 

 

 

 

 

 

 

y(t) = s0(t), the expectation of z is:

$$ \bar z = \int_0^T y(t)\,[s_0(t) - s_1(t)]\, dt = E_0 - \rho_{01}\sqrt{E_0 E_1} \qquad (2.13) $$

where:

 

 

 

 

 

 

 

 

 

 

$$ \rho_{kl} = \frac{(\mathbf{s}_k, \mathbf{s}_l)}{\|\mathbf{s}_k\|\,\|\mathbf{s}_l\|} = \frac{1}{\sqrt{E_k E_l}} \int_0^T s_k(t)\, s_l(t)\, dt \qquad (2.14) $$

is called the correlation coefficient of the signals sk(t), sl(t), with Ek, El being their energies. As is seen, geometrically ρ01 is simply the cosine of the angle between the signals s0(t), s1(t) (or the signal vectors s0, s1), and hence characterizes the closeness or resemblance of the signals.
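The correlation coefficient (2.14) is straightforward to compute for sampled signals; the sketch below, with illustrative vectors, confirms that an antipodal pair gives ρ = −1 and an orthogonal pair ρ = 0:

```python
import math

def rho(sk, sl):
    # Discrete analogue of (2.14): normalized inner product,
    # i.e. the cosine of the angle between the signal vectors
    inner = sum(a * b for a, b in zip(sk, sl))
    Ek = sum(a * a for a in sk)  # energy of sk
    El = sum(b * b for b in sl)  # energy of sl
    return inner / math.sqrt(Ek * El)

s0 = [1.0, 1.0, 1.0, 1.0]               # illustrative sampled signal
print(rho(s0, [-x for x in s0]))        # antipodal pair: -1.0
print(rho(s0, [1.0, -1.0, 1.0, -1.0]))  # orthogonal pair: 0.0
```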

To find σ² = var{z} we rely on the fact that it is not affected by a deterministic component of the observation y(t), i.e. in the situation in question the signal s0(t), since the noise is additive. Therefore we can virtually remove the signal from y(t), putting


 

 

y(t) = n(t), where n(t) is white noise with two-sided power spectral density N0/2.

After this let us calculate the variance of correlation (2.7) of y(t) and some arbitrary signal s(t):

 

 

 

 

 

 

 

 

 

 

 

 

 

$$ \sigma^2 = \operatorname{var}\{z\} = \overline{\left[\int_0^T n(t)\,s(t)\,dt\right]^2} = \int_0^T\!\!\int_0^T \overline{n(t)\,n(t')}\;s(t)\,s(t')\,dt\,dt' $$

where the squared integral is presented as a double integral with separable variables, the order of integration and averaging is exchanged (the expectation of a sum is the sum of expectations!) and, finally, averaging is applied to the only random cofactor in the integrand.

Recall now that, due to the uniformity of the spectrum of white noise over the entire frequency range, its autocorrelation function (statistical average of the product of samples at two time moments) is the Dirac delta function: $\overline{n(t)n(t')} = (N_0/2)\,\delta(t - t')$. In other words, any two samples of white noise, no matter how close in time, are uncorrelated. Using this result in the integral above along with the sifting property of the delta function:

$$ \int_0^T s(t')\,\delta(t' - t)\,dt' = s(t) $$

leads to:

 

$$ \sigma^2 = \frac{N_0}{2}\int_0^T s^2(t)\,dt = \frac{N_0 E}{2} \qquad (2.15) $$

where E is the energy of the signal s(t).
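The result σ² = N0E/2 can be checked by a small Monte Carlo experiment with a discrete-time surrogate of white noise (sample variance N0/(2Δt), so the correlation integral has the right statistics); all parameters below are illustrative:

```python
import math, random

random.seed(1)
N0 = 2.0   # two-sided PSD of the noise is N0/2 = 1
dt = 0.01  # sampling step of the discrete approximation
s = [1.0] * 100                 # unit-amplitude signal on [0, T], T = 1
E = dt * sum(x * x for x in s)  # signal energy, here E = 1

def correlation_sample():
    # z = integral of n(t)s(t) dt with white-noise samples of variance N0/(2*dt)
    sigma_n = math.sqrt(N0 / (2 * dt))
    return dt * sum(random.gauss(0.0, sigma_n) * x for x in s)

trials = 20000
zs = [correlation_sample() for _ in range(trials)]
mean = sum(zs) / trials
var_z = sum((z - mean) ** 2 for z in zs) / trials
print(var_z, N0 * E / 2)  # empirical variance approaches N0*E/2 = 1
```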

In the case under consideration, as (2.10) and (2.7) show, the substitution s(t) = s0(t) − s1(t) should be made in (2.15), i.e. E is the energy Ed of the signal difference s0(t) − s1(t). Deriving it gives:

$$ E_d = \int_0^T [s_0(t) - s_1(t)]^2\,dt = d^2(\mathbf{s}_0, \mathbf{s}_1) = E_0 + E_1 - 2\rho_{01}\sqrt{E_0 E_1} \qquad (2.16) $$

 

Taking into account the geometrical content of the correlation coefficient and energy, this is just the cosine theorem from ‘school’ mathematics.
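A quick numeric check of (2.16) with arbitrary illustrative vectors confirms that the directly computed energy of the difference coincides with the cosine-theorem expression:

```python
import math

# Two arbitrary illustrative sampled signals
s0 = [2.0, 1.0, -1.0]
s1 = [0.5, -1.0, 2.0]

E0 = sum(x * x for x in s0)
E1 = sum(x * x for x in s1)
rho01 = sum(a * b for a, b in zip(s0, s1)) / math.sqrt(E0 * E1)

Ed_direct = sum((a - b) ** 2 for a, b in zip(s0, s1))  # integral of (s0 - s1)^2
Ed_cosine = E0 + E1 - 2 * rho01 * math.sqrt(E0 * E1)   # right side of (2.16)
print(Ed_direct, Ed_cosine)  # both 15.25
```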

Now, substitute (2.13), (2.15) and (2.16) into (2.12), arriving at:

$$ p_{01} = Q\!\left( \sqrt{\frac{d^2(\mathbf{s}_0, \mathbf{s}_1)}{2 N_0}} \right) \qquad (2.17) $$

Since the problem is absolutely symmetrical, the same result will be obtained for the probability of mistaking s1(t) for s0(t). With this in mind, the complete (unconditional)


 

 

error probability Pe does not depend on a priori probability w of sending the signal s0(t) and is expressed by the equation:

$$ P_e = w\,p_{01} + (1 - w)\,p_{10} = Q\!\left( \sqrt{\frac{d^2(\mathbf{s}_0, \mathbf{s}_1)}{2 N_0}} \right) \qquad (2.18) $$

 

It is quite obvious now that the only way to achieve high data transmission fidelity is to make the distance between the two signals d(s0, s1) large enough. Certainly, d(s0, s1) can be increased at the cost of large signal energies or vector lengths, as is seen from (2.16). But what is the optimal signal pair when recourse to this 'brute force' approach is limited, i.e. signal energies are fixed beforehand? Consider first the very typical case of signals of equal energies: E0 = E1 = E, meaning that signal intensity is not utilized as a message indicator. Decision rule (2.10) is now reduced to just a comparison between z0 and z1 or, equivalently, to testing the polarity of their difference:

$$ z = z_0 - z_1 \underset{\hat H_1}{\overset{\hat H_0}{\gtrless}} 0 $$
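For sampled signals this equal-energy rule is just a sign test on the correlation difference; a minimal sketch with illustrative vectors (not from the text):

```python
def correlate(y, s):
    # Discrete surrogate for the correlation z_k = integral of y(t)s_k(t) dt
    return sum(a * b for a, b in zip(y, s))

def decide(y, s0, s1):
    # Equal-energy rule: choose H0 iff z = z0 - z1 > 0
    return 0 if correlate(y, s0) - correlate(y, s1) > 0 else 1

s0 = [1.0, 1.0, 1.0, 1.0]    # equal-energy illustrative pair
s1 = [1.0, -1.0, 1.0, -1.0]  # orthogonal to s0

print(decide([0.9, 1.2, 0.8, 1.1], s0, s1))    # noisy s0: decision 0
print(decide([1.1, -0.7, 0.9, -1.2], s0, s1))  # noisy s1: decision 1
```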

Of course, to maximize the distance between two vectors of fixed lengths, one should make them antipodal, as shown in Figure 2.5a. Then the angle θ between s0 and s1 equals π, cos θ = ρ01 = −1 and d(s0, s1) = 2√E, which turns (2.18) into the following:

 

$$ P_{e,a} = Q\!\left( \sqrt{\frac{2E}{N_0}} \right) \qquad (2.19) $$

representing the minimal achievable error probability in binary data transmission, the signal energy E being fixed. The correlator and matched filter, which will often be referred to in this book, are devices typically used to physically calculate the correlation z, and the parameter q = √(2E/N0) is nothing but the signal-to-noise ratio (SNR) at the correlator or matched filter output.
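The error probability (2.19) is easy to verify by simulation: projecting the problem onto the line connecting the antipodal signals, an error occurs when Gaussian noise of variance N0/2 pushes the observation past the midpoint. A sketch with illustrative parameters:

```python
import math, random

def Q(x):
    # Gaussian tail probability via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

random.seed(7)
E, N0 = 1.0, 1.0  # illustrative energy and noise level, SNR^2 = 2E/N0 = 2
trials = 200000
# One-dimensional model: transmit +sqrt(E); error when the noise component
# along the signal direction (variance N0/2) drags the observation below 0
errors = sum(1 for _ in range(trials)
             if math.sqrt(E) + random.gauss(0.0, math.sqrt(N0 / 2)) < 0)
print(errors / trials, Q(math.sqrt(2 * E / N0)))  # both close to 0.079
```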

 

 


Figure 2.5 Signal pairs in various binary transmission modes


 

 

The optimal signal pair is therefore the antipodal pair s1(t) = −s0(t). Binary phase shift keying (BPSK) is the implementation that is widespread in digital data transmission systems, in which 0 is transmitted by some bandpass signal with phase zero while the same signal but with phase π serves to transmit message 1.

To find out how critical an adequate choice of signals may prove to be, compare BPSK with another popular binary transmission mode. Although BPSK is the best possible method of binary signalling, it is based on a phase discrepancy of the two signals carrying data, and thus requires accurate knowledge of the current phase of the carrier at the receiving end. To realize it, a special frequency recovery loop should be used in the receiver, which is sometimes regarded as an undesirable complication. One way to avoid it is to employ frequency shift keying (FSK), in which messages 0 and 1 are carried by signals of different frequencies. Typically the frequencies are chosen so that the signals are orthogonal: cos θ = ρ01 = 0, d(s0, s1) = √(2E) (see Figure 2.5b). Substitution of this into (2.18) gives:

 

$$ P_{e,o} = Q\!\left( \sqrt{\frac{E}{N_0}} \right) \qquad (2.20) $$

Comparing this result with (2.19), one can see that for an orthogonal pair (FSK) twice the signal energy is needed to provide the same reception fidelity as is secured by the antipodal signals (BPSK). To put it another way, the orthogonal signals rank 3 dB worse than the antipodal ones in necessary energy.
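The 3 dB statement can be made concrete: an orthogonal pair operated at doubled energy reproduces exactly the antipodal error probability. A quick check with illustrative numbers:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

N0, E = 1.0, 2.0  # illustrative noise level and energy
Pe_antipodal = Q(math.sqrt(2 * E / N0))     # BPSK at energy E, eq. (2.19)
Pe_orthogonal = Q(math.sqrt((2 * E) / N0))  # FSK at energy 2E, eq. (2.20)
print(Pe_antipodal, Pe_orthogonal)          # identical values
```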

There is one more very old mode of binary transmission still in use: amplitude shift keying (ASK), in which the bit '1' is transmitted by the signal s1(t) = s(t) of energy E1 = E, and '0' is transmitted by a pause: s0(t) = 0, E0 = 0. In this case (see Figure 2.5c) d(s0, s1) = √E and the error probability (2.18) becomes:

 

 

 

$$ P_{e,as} = Q\!\left( \sqrt{\frac{E}{2 N_0}} \right) \qquad (2.21) $$

Comparing the last result with (2.19), one may conclude that ASK requires four times (6 dB) higher energy than BPSK for the same reception quality. This is true when peak energy is the limiting issue. Usually, however, the limitation on average energy is more practical. Since in ASK no energy is spent at all when '0' is transmitted, for equiprobable messages 0 and 1 the average energy is (E0 + E1)/2 = E/2. Thus, an average energy only two times higher than that of BPSK gives the same error probability with the ASK mode, and the loss of ASK to BPSK is the same as in the case of FSK, i.e. 3 dB.
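The average-energy equivalence of ASK and FSK can be checked directly: with peak energy 2E_avg on the '1' bit, formula (2.21) collapses to the orthogonal-pair result (2.20) at energy E_avg. A quick check with illustrative numbers:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

N0, E_avg = 1.0, 1.0
# ASK with equiprobable bits: peak energy 2*E_avg on '1', zero on '0'
Pe_ask = Q(math.sqrt((2 * E_avg) / (2 * N0)))  # eq. (2.21) with E = 2*E_avg
Pe_fsk = Q(math.sqrt(E_avg / N0))              # orthogonal pair, eq. (2.20)
print(Pe_ask, Pe_fsk)  # equal on an average-energy basis
```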

Now we have to make a final remark on the potential impact of the binary data transmission problem upon signal pair design. We should conclude that there is no hint of any special benefits of spread spectrum in this case, since widening signal bandwidth versus its minimum 1/T promises no improvements in error probability. Indeed, to provide the necessary reception quality it is enough to have two maximally distant signals, which automatically involves two antipodal signals with no additional demand as to their shape or modulation. If for some reason an antipodal pair is rejected, orthogonal (say, frequency shifted) or ASK pairs can be used and, again, this in no way calls for the use of spread spectrum.