
- Contents
- Preface
- 1 Spread spectrum signals and systems
- 1.1 Basic definition
- 1.2 Historical sketch
- 2 Classical reception problems and signal design
- 2.1 Gaussian channel, general reception problem and optimal decision rules
- 2.2 Binary data transmission (deterministic signals)
- 2.3 M-ary data transmission: deterministic signals
- 2.4 Complex envelope of a bandpass signal
- 2.5 M-ary data transmission: noncoherent signals
- 2.6 Trade-off between orthogonal-coding gain and bandwidth
- 2.7 Examples of orthogonal signal sets
- 2.7.1 Time-shift coding
- 2.7.2 Frequency-shift coding
- 2.7.3 Spread spectrum orthogonal coding
- 2.8 Signal parameter estimation
- 2.8.1 Problem statement and estimation rule
- 2.8.2 Estimation accuracy
- 2.9 Amplitude estimation
- 2.10 Phase estimation
- 2.11 Autocorrelation function and matched filter response
- 2.12 Estimation of the bandpass signal time delay
- 2.12.1 Estimation algorithm
- 2.12.2 Estimation accuracy
- 2.13 Estimation of carrier frequency
- 2.14 Simultaneous estimation of time delay and frequency
- 2.15 Signal resolution
- 2.16 Summary
- Problems
- Matlab-based problems
- 3 Merits of spread spectrum
- 3.1 Jamming immunity
- 3.1.1 Narrowband jammer
- 3.1.2 Barrage jammer
- 3.2 Low probability of detection
- 3.3 Signal structure secrecy
- 3.4 Electromagnetic compatibility
- 3.5 Propagation effects in wireless systems
- 3.5.1 Free-space propagation
- 3.5.2 Shadowing
- 3.5.3 Multipath fading
- 3.5.4 Performance analysis
- 3.6 Diversity
- 3.6.1 Combining modes
- 3.6.2 Arranging diversity branches
- 3.7 Multipath diversity and RAKE receiver
- Problems
- Matlab-based problems
- 4 Multiuser environment: code division multiple access
- 4.1 Multiuser systems and the multiple access problem
- 4.2 Frequency division multiple access
- 4.3 Time division multiple access
- 4.4 Synchronous code division multiple access
- 4.5 Asynchronous CDMA
- 4.6 Asynchronous CDMA in the cellular networks
- 4.6.1 The resource reuse problem and cellular systems
- 4.6.2 Number of users per cell in asynchronous CDMA
- Problems
- Matlab-based problems
- 5 Discrete spread spectrum signals
- 5.1 Spread spectrum modulation
- 5.2 General model and categorization of discrete signals
- 5.3 Correlation functions of APSK signals
- 5.4 Calculating correlation functions of code sequences
- 5.5 Correlation functions of FSK signals
- 5.6 Processing gain of discrete signals
- Problems
- Matlab-based problems
- 6 Spread spectrum signals for time measurement, synchronization and time-resolution
- 6.1 Demands on ACF: revisited
- 6.2 Signals with continuous frequency modulation
- 6.3 Criterion of good aperiodic ACF of APSK signals
- 6.4 Optimization of aperiodic PSK signals
- 6.5 Perfect periodic ACF: minimax binary sequences
- 6.6 Initial knowledge on finite fields and linear sequences
- 6.6.1 Definition of a finite field
- 6.6.2 Linear sequences over finite fields
- 6.6.3 m-sequences
- 6.7 Periodic ACF of m-sequences
- 6.8 More about finite fields
- 6.9 Legendre sequences
- 6.10 Binary codes with good aperiodic ACF: revisited
- 6.11 Sequences with perfect periodic ACF
- 6.11.1 Binary non-antipodal sequences
- 6.11.2 Polyphase codes
- 6.11.3 Ternary sequences
- 6.12 Suppression of sidelobes along the delay axis
- 6.12.1 Sidelobe suppression filter
- 6.12.2 SNR loss calculation
- 6.13 FSK signals with optimal aperiodic ACF
- Problems
- Matlab-based problems
- 7 Spread spectrum signature ensembles for CDMA applications
- 7.1 Data transmission via spread spectrum
- 7.1.1 Direct sequence spreading: BPSK data modulation and binary signatures
- 7.1.2 DS spreading: general case
- 7.1.3 Frequency hopping spreading
- 7.2 Designing signature ensembles for synchronous DS CDMA
- 7.2.1 Problem formulation
- 7.2.2 Optimizing signature sets in minimum distance
- 7.2.3 Welch-bound sequences
- 7.3 Approaches to designing signature ensembles for asynchronous DS CDMA
- 7.4 Time-offset signatures for asynchronous CDMA
- 7.5 Examples of minimax signature ensembles
- 7.5.1 Frequency-offset binary m-sequences
- 7.5.2 Gold sets
- 7.5.3 Kasami sets and their extensions
- 7.5.4 Kamaletdinov ensembles
- Problems
- Matlab-based problems
- 8 DS spread spectrum signal acquisition and tracking
- 8.1 Acquisition and tracking procedures
- 8.2 Serial search
- 8.2.1 Algorithm model
- 8.2.2 Probability of correct acquisition and average number of steps
- 8.2.3 Minimizing average acquisition time
- 8.3 Acquisition acceleration techniques
- 8.3.1 Problem statement
- 8.3.2 Sequential cell examining
- 8.3.3 Serial-parallel search
- 8.3.4 Rapid acquisition sequences
- 8.4 Code tracking
- 8.4.1 Delay estimation by tracking
- 8.4.2 Early–late DLL discriminators
- 8.4.3 DLL noise performance
- Problems
- Matlab-based problems
- 9 Channel coding in spread spectrum systems
- 9.1 Preliminary notes and terminology
- 9.2 Error-detecting block codes
- 9.2.1 Binary block codes and detection capability
- 9.2.2 Linear codes and their polynomial representation
- 9.2.3 Syndrome calculation and error detection
- 9.2.4 Choice of generator polynomials for CRC
- 9.3 Convolutional codes
- 9.3.1 Convolutional encoder
- 9.3.2 Trellis diagram, free distance and asymptotic coding gain
- 9.3.3 The Viterbi decoding algorithm
- 9.3.4 Applications
- 9.4 Turbo codes
- 9.4.1 Turbo encoders
- 9.4.2 Iterative decoding
- 9.4.3 Performance
- 9.4.4 Applications
- 9.5 Channel interleaving
- Problems
- Matlab-based problems
- 10 Some advancements in spread spectrum systems development
- 10.1 Multiuser reception and suppressing MAI
- 10.1.1 Optimal (ML) multiuser rule for synchronous CDMA
- 10.1.2 Decorrelating algorithm
- 10.1.3 Minimum mean-square error detection
- 10.1.4 Blind MMSE detector
- 10.1.5 Interference cancellation
- 10.1.6 Asynchronous multiuser detectors
- 10.2 Multicarrier modulation and OFDM
- 10.2.1 Multicarrier DS CDMA
- 10.2.2 Conventional MC transmission and OFDM
- 10.2.3 Multicarrier CDMA
- 10.2.4 Applications
- 10.3 Transmit diversity and space–time coding in CDMA systems
- 10.3.1 Transmit diversity and the space–time coding problem
- 10.3.2 Efficiency of transmit diversity
- 10.3.3 Time-switched space–time code
- 10.3.4 Alamouti space–time code
- 10.3.5 Transmit diversity in spread spectrum applications
- Problems
- Matlab-based problems
- 11 Examples of operational wireless spread spectrum systems
- 11.1 Preliminary remarks
- 11.2 Global positioning system
- 11.2.1 General system principles and architecture
- 11.2.2 GPS ranging signals
- 11.2.3 Signal processing
- 11.2.4 Accuracy
- 11.2.5 GLONASS and GNSS
- 11.2.6 Applications
- 11.3 Air interfaces cdmaOne (IS-95) and cdma2000
- 11.3.1 Introductory remarks
- 11.3.2 Spreading codes of IS-95
- 11.3.3 Forward link channels of IS-95
- 11.3.3.1 Pilot channel
- 11.3.3.2 Synchronization channel
- 11.3.3.3 Paging channels
- 11.3.3.4 Traffic channels
- 11.3.3.5 Forward link modulation
- 11.3.3.6 MS processing of forward link signal
- 11.3.4 Reverse link of IS-95
- 11.3.4.1 Reverse link traffic channel
- 11.3.4.2 Access channel
- 11.3.4.3 Reverse link modulation
- 11.3.5 Evolution of air interface cdmaOne to cdma2000
- 11.4 Air interface UMTS
- 11.4.1 Preliminaries
- 11.4.2 Types of UMTS channels
- 11.4.3 Dedicated physical uplink channels
- 11.4.4 Common physical uplink channels
- 11.4.5 Uplink channelization codes
- 11.4.6 Uplink scrambling
- 11.4.7 Mapping downlink transport channels to physical channels
- 11.4.8 Downlink physical channels format
- 11.4.9 Downlink channelization codes
- 11.4.10 Downlink scrambling codes
- 11.4.11 Synchronization channel
- 11.4.11.1 General structure
- 11.4.11.2 Primary synchronization code
- 11.4.11.3 Secondary synchronization code
- References
- Index

2
Classical reception problems and signal design

It is typical of communication theory to start analysing a system from the receiving end. The aim is usually to design an optimal receiver, one that retrieves the information contained in the observed waveform with the best possible quality. Knowing the optimal reception algorithms for a specific transmitted signal structure, it is then possible to design an optimal transmitted signal, i.e. to choose the best means of encoding and modulation. In this chapter we investigate how classical reception problems appeal to spread spectrum or, in other words, which of the classical reception problems demand (or do not demand) the involvement of spread spectrum signals. We call reception problems 'classical' if they are based on the traditional Gaussian channel model.
2.1 Gaussian channel, general reception problem and optimal decision rules
The following abstract model can describe any information system in which data is transmitted from one point in space to another. There is some source that can generate one of M possible messages. This source may be governed or at least created by a human being, but it may also have a human-independent nature. In any case, each of the M competitive messages is carried by a specific signal, so that there is a set S of M possible signals: S = {s_k(t): k = 1, 2, ..., M}. There is no limitation in principle on the cardinality of S, i.e. the number of signals M, and, if necessary, the set S may even be assumed uncountable. The source selects some specific signal s_k(t) ∈ S and applies it to the channel input (see Figure 2.1). At the receiving side (channel output) the observation waveform y(t) is received, which is not an accurate copy of the sent signal s_k(t) but, instead, is the result of s_k(t) being corrupted by noise and interference intrinsic to any real channel. For the receiver there are M competitive hypotheses H_k as to which one of the M possible signals was actually transmitted and turned by the channel into this specific observation y(t), and only one of these hypotheses should be chosen as true. Denote the
Spread Spectrum and CDMA: Principles and Applications Valery P. Ipatov
2005 John Wiley & Sons, Ltd

[Figure 2.1 General system model: the sent signal s_k(t) enters the channel, which outputs the observation y(t)]
result of this choice, i.e. the decision, as Ĥ_j, read as 'the decision is made in favour of signal number j'. With this the classical reception problem emerges: what is the best strategy to decide which one of the possible messages (or signals) was sent, based on the observation y(t)?
To answer this question it is necessary to know the channel model. The channel is mathematically described by its transition probability p[y(t)|s(t)], which shows how probable it is for the given input signal s(t) to be transformed by the channel into one or another output observation y(t). When the transition probability p[y(t)|s(t)] is known for all possible pairs s(t) and y(t), the channel is characterized exhaustively.
When all source messages are equiprobable (which is typically the case in a properly designed system) the optimum observer’s strategy, securing minimum risk of mistaking an actually sent signal for some other, is the maximum likelihood (ML) rule. According to this rule, after the waveform y(t) is observed the decision should be made in favour of the signal which has the greatest (as compared to the rest of the signals) probability of being transformed by the channel into this very observation y(t).
The primary channel model in communication theory is the additive white Gaussian noise (AWGN) or, more simply, the Gaussian channel in which the transition probability drops exponentially with the growth of the squared Euclidean distance between a sent signal and output observation:
p[y(t)|s(t)] = k exp[−d²(s, y)/N_0]   (2.1)
where k is a constant independent of s(t) and y(t), N_0 is the one-sided power spectral density of the white noise, and the Euclidean distance from s(t) to y(t) is defined as:
d(s, y) = √( ∫_0^T [y(t) − s(t)]² dt )   (2.2)
The particular importance of the Gaussian model is explained by the physical origin of many real noises. According to the central limit theorem of probability theory, the probability distribution of a sum of a great number of elementary random components, none of which is strongly dependent on the others or prevails over them, approaches the Gaussian law as the number of addends goes to infinity. But thermal noise and many other types of noise typical of real channels are produced precisely as the result of summation of a great many elementary random currents or voltages caused by the chaotic motion of charged particles (electrons, ions etc.).
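This central limit effect is easy to observe numerically. The sketch below (an illustrative experiment, not from the text; the component count and trial count are arbitrary choices) sums many small independent uniform components, mimicking elementary random currents, and checks two Gaussian signatures of the result: skewness near 0 and kurtosis near 3.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sum many small, independent, zero-mean uniform "elementary" components,
# mimicking elementary random currents adding up to thermal noise.
n_components, n_trials = 200, 20_000
samples = rng.uniform(-0.5, 0.5, size=(n_trials, n_components)).sum(axis=1)

# Normalize to zero mean and unit variance, then inspect Gaussian signatures.
z = (samples - samples.mean()) / samples.std()
skewness = float(np.mean(z**3))   # close to 0 for a Gaussian law
kurtosis = float(np.mean(z**4))   # close to 3 for a Gaussian law
print(round(skewness, 1), round(kurtosis, 1))
```

A uniform component is far from Gaussian on its own, yet 200 of them summed are already statistically indistinguishable from Gaussian noise by these moment tests.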
When talking about the distance between signals or waveforms, we interpret them as vectors, which is universally accepted in all information-related disciplines. If the reader finds it difficult to imagine the association between signals and vectors, a very simple mental trick may be a useful aid. Imagine discretization of a continuous signal in time, i.e. representing s(t) by samples s_i = s(iT_s), i = 0, 1, ..., taken with a sampling period T_s. If the total signal energy is concentrated within the bandwidth W and T_s ≤ 1/2W (ignoring that theoretically no signal is finite in both the time and the frequency domains), the samples s_i represent the original continuous-time signal s(t) exhaustively. With signal duration T there are n = T/T_s such samples altogether, and therefore the n-dimensional vector s = (s_0, s_1, ..., s_{n−1}) describes the signal entirely. Having done the same with the observation y(t), we come to its n-dimensional vector equivalent y = (y_0, y_1, ..., y_{n−1}) and find the Euclidean distance between the vectors s and y by the Pythagorean theorem for n-dimensional vector space:
d(s, y) = √( Σ_{i=0}^{n−1} (y_i − s_i)² )
One possible way of finishing the game is letting T_s go to zero. Then the vectors s, y, remaining signal and observation equivalents, become of infinite dimension (actually they repeat s(t), y(t), since in the limit there is no longer any discretization). At the same time, the sum above (ignoring the cofactor) turns into the integral on the right-hand side of equality (2.2). The latter, thereby, is the definition of Euclidean distance for continuous-time waveforms.
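The passage from the sampled sum to the integral can be checked numerically. In the sketch below (the signal choices are illustrative assumptions, not from the text) the sampled distance, scaled by the sampling period T_s, is compared with the analytic value of the integral in (2.2) for s(t) = sin(2πt) and y(t) = 0 on [0, 1], where ∫[y − s]² dt = 1/2:

```python
import numpy as np

T = 1.0                                   # observation interval [0, T]
def s(t): return np.sin(2 * np.pi * t)    # example "sent" signal
def y(t): return np.zeros_like(t)         # example observation (silence)

for n in (10, 100, 10_000):               # finer and finer sampling
    Ts = T / n
    t = np.arange(n) * Ts
    # Discrete (vector) distance squared, scaled by the sampling period,
    # approximating the integral in (2.2):
    d2 = Ts * np.sum((y(t) - s(t)) ** 2)
    print(n, round(float(np.sqrt(d2)), 4))

# Analytically, the integral of sin^2 over one period is T/2 = 0.5,
# so the distance tends to sqrt(0.5) ≈ 0.7071.
```

Even the coarse 10-sample grid already matches the continuous-time distance here, because the sampling condition T_s ≤ 1/2W is satisfied for this bandlimited example.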
Now come back to the ML rule for the Gaussian channel. According to equations (2.1) and (2.2), signal likelihood (the probability of being transformed by the channel into the observed y(t)) falls with Euclidean distance between s(t) and y(t). Therefore, the ML decision in the Gaussian channel can be restated as the minimum distance rule:
d(s_j, y) = min_k d(s_k, y) ⇒ Ĥ_j is taken   (2.3)
i.e. the decision is made in favour of signal sj(t) if it is closest (in terms of Euclidean distance) to observation y(t) among all M competitive signals (Figure 2.2). Another, more direct, notation of (2.3) is:
ŝ = arg min_{s∈S} d(s, y)
where ŝ is the estimate of the received signal (i.e. the signal declared received).
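Rule (2.3) is straightforward to implement for sampled signals. The sketch below (the signal set, noise level and random seed are arbitrary illustrative choices) builds a toy set of M = 4 signals, passes one of them through a Gaussian channel and decides by minimum Euclidean distance:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64                                   # samples per signal
t = np.arange(n) / n

# A toy set S of M = 4 deterministic signals (arbitrary example: harmonics)
S = np.stack([np.sin(2 * np.pi * (k + 1) * t) for k in range(4)])

sent = 2                                 # index of the actually sent signal
y = S[sent] + 0.3 * rng.standard_normal(n)   # Gaussian channel output

# Minimum distance rule (2.3): decide for the signal closest to y
distances = np.linalg.norm(S - y, axis=1)
decision = int(np.argmin(distances))
print(decision)
```

With this noise level the signal points are far apart compared to the typical noise displacement, so the receiver recovers the sent index reliably; raising the noise standard deviation makes wrong decisions appear.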
[Figure 2.2 Illustration of minimum distance rule: the observation y and the signal points s_1, s_2, ..., s_M in signal space, the decision going to the signal s_j for which d(s_j, y) = min_k d(s_k, y)]

Continuing the geometrical interpretation of signals, we can introduce the signal geometric length (norm) ‖s‖ as its distance from the origin. Then from (2.2) it follows that ‖s‖ = d(s, 0) = √E, where:

E = ∫_0^T s²(t) dt   (2.4)
is signal energy. Another important geometrical characteristic is the inner (scalar) product (u, v) of two signals u(t), v(t):
(u, v) = ∫_0^T u(t)v(t) dt   (2.5)
which again can be thought of as a limit form of an inner product of two n-dimensional vectors. The same entity may also be calculated through the lengths of the vectors and the cosine of the angle θ between them: (u, v) = ‖u‖‖v‖ cos θ. Thus the inner product describes the closeness or resemblance between signals: the closer the signals are to each other, with their lengths (energies) fixed, the closer cos θ is to one and the greater the inner product is. Because of this the inner product is also called the correlation of signals.
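The geometric reading of the inner product can be verified directly: for sampled signals the normalized correlation (u, v)/(‖u‖‖v‖) equals cos θ. A sketch under assumed example signals (two equal-frequency tones with a 60° phase offset, so the expected answer is cos 60° = 0.5):

```python
import numpy as np

Ts = 1e-3                                  # sampling period (example value)
t = np.arange(1000) * Ts                   # one second of samples

u = np.sin(2 * np.pi * 5 * t)              # example signal
v = np.sin(2 * np.pi * 5 * t + np.pi / 3)  # same tone, 60 degrees later

# Inner product (2.5) and norms approximated by Riemann sums
inner = Ts * np.sum(u * v)
norm_u = np.sqrt(Ts * np.sum(u * u))
norm_v = np.sqrt(Ts * np.sum(v * v))

cos_theta = inner / (norm_u * norm_v)      # should equal cos(pi/3) = 0.5
print(round(float(cos_theta), 3))
```

Identical tones would give cos θ = 1 (maximal resemblance), while a 90° shift would give cos θ = 0, i.e. orthogonal signals with zero correlation.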
In order to outline the special role of this entity, let us now give a slightly different version of the minimum distance rule. Opening the brackets in (2.2) leads to:
d²(s_k, y) = ∫_0^T y²(t) dt − 2∫_0^T y(t)s_k(t) dt + ∫_0^T s_k²(t) dt = ‖y‖² − 2z_k + ‖s_k‖²   (2.6)
where z_k stands for the correlation of the observation y(t) with the kth signal s_k(t):

z_k = (y, s_k) = ∫_0^T y(t)s_k(t) dt   (2.7)
The first summand in the right-hand side of equation (2.6) is fixed for a given observation, and therefore does not affect comparing distances and the decision on which signal is received. The last term is just the kth signal energy Ek. With this in mind, distance rule (2.3) can be reformulated as the following correlation decision rule:
z_j − E_j/2 = max_k (z_k − E_k/2) ⇒ Ĥ_j is taken   (2.8)
meaning, in particular, that when all M competitive signals have equal energies, the signal announced as having actually been received is simply the one maximally correlated with the observation y(t). This last case has a very clear physical explanation: preference is given to the signal which resembles y(t) more strongly than all the rest, correlation (inner product) being accepted as the criterion of resemblance.
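The equivalence of the minimum distance rule (2.3) and the correlation rule (2.8) follows from (2.6) and can be confirmed numerically. In the sketch below (toy signals with deliberately unequal energies, all choices illustrative) both rules are applied to the same noisy observation; since d² differs from −2(z_k − E_k/2) only by the fixed term ‖y‖², their decisions always coincide:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 128
t = np.arange(n) / n

# Toy signal set with deliberately UNEQUAL energies (amplitudes differ)
amps = [1.0, 0.5, 2.0, 1.5]
S = np.stack([a * np.cos(2 * np.pi * (k + 1) * t)
              for k, a in enumerate(amps)])

y = S[1] + 0.2 * rng.standard_normal(n)      # noisy observation of signal 1

# Minimum distance rule (2.3)
dist_decision = int(np.argmin(np.linalg.norm(S - y, axis=1)))

# Correlation rule (2.8): maximize z_k - E_k / 2
z = S @ y                                    # correlations z_k = (y, s_k)
E = np.sum(S**2, axis=1)                     # signal energies E_k
corr_decision = int(np.argmax(z - E / 2))

print(dist_decision == corr_decision)        # the two rules always agree
```

Note that dropping the energy correction E_k/2 and simply maximizing z_k would bias the decision toward the high-energy signals; the correction matters precisely when, as here, the energies are unequal.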