
- Contents
- Preface
- 1 Spread spectrum signals and systems
- 1.1 Basic definition
- 1.2 Historical sketch
- 2 Classical reception problems and signal design
- 2.1 Gaussian channel, general reception problem and optimal decision rules
- 2.2 Binary data transmission (deterministic signals)
- 2.3 M-ary data transmission: deterministic signals
- 2.4 Complex envelope of a bandpass signal
- 2.5 M-ary data transmission: noncoherent signals
- 2.6 Trade-off between orthogonal-coding gain and bandwidth
- 2.7 Examples of orthogonal signal sets
- 2.7.1 Time-shift coding
- 2.7.2 Frequency-shift coding
- 2.7.3 Spread spectrum orthogonal coding
- 2.8 Signal parameter estimation
- 2.8.1 Problem statement and estimation rule
- 2.8.2 Estimation accuracy
- 2.9 Amplitude estimation
- 2.10 Phase estimation
- 2.11 Autocorrelation function and matched filter response
- 2.12 Estimation of the bandpass signal time delay
- 2.12.1 Estimation algorithm
- 2.12.2 Estimation accuracy
- 2.13 Estimation of carrier frequency
- 2.14 Simultaneous estimation of time delay and frequency
- 2.15 Signal resolution
- 2.16 Summary
- Problems
- Matlab-based problems
- 3 Merits of spread spectrum
- 3.1 Jamming immunity
- 3.1.1 Narrowband jammer
- 3.1.2 Barrage jammer
- 3.2 Low probability of detection
- 3.3 Signal structure secrecy
- 3.4 Electromagnetic compatibility
- 3.5 Propagation effects in wireless systems
- 3.5.1 Free-space propagation
- 3.5.2 Shadowing
- 3.5.3 Multipath fading
- 3.5.4 Performance analysis
- 3.6 Diversity
- 3.6.1 Combining modes
- 3.6.2 Arranging diversity branches
- 3.7 Multipath diversity and RAKE receiver
- Problems
- Matlab-based problems
- 4 Multiuser environment: code division multiple access
- 4.1 Multiuser systems and the multiple access problem
- 4.2 Frequency division multiple access
- 4.3 Time division multiple access
- 4.4 Synchronous code division multiple access
- 4.5 Asynchronous CDMA
- 4.6 Asynchronous CDMA in the cellular networks
- 4.6.1 The resource reuse problem and cellular systems
- 4.6.2 Number of users per cell in asynchronous CDMA
- Problems
- Matlab-based problems
- 5 Discrete spread spectrum signals
- 5.1 Spread spectrum modulation
- 5.2 General model and categorization of discrete signals
- 5.3 Correlation functions of APSK signals
- 5.4 Calculating correlation functions of code sequences
- 5.5 Correlation functions of FSK signals
- 5.6 Processing gain of discrete signals
- Problems
- Matlab-based problems
- 6 Spread spectrum signals for time measurement, synchronization and time-resolution
- 6.1 Demands on ACF: revisited
- 6.2 Signals with continuous frequency modulation
- 6.3 Criterion of good aperiodic ACF of APSK signals
- 6.4 Optimization of aperiodic PSK signals
- 6.5 Perfect periodic ACF: minimax binary sequences
- 6.6 Initial knowledge on finite fields and linear sequences
- 6.6.1 Definition of a finite field
- 6.6.2 Linear sequences over finite fields
- 6.6.3 m-sequences
- 6.7 Periodic ACF of m-sequences
- 6.8 More about finite fields
- 6.9 Legendre sequences
- 6.10 Binary codes with good aperiodic ACF: revisited
- 6.11 Sequences with perfect periodic ACF
- 6.11.1 Binary non-antipodal sequences
- 6.11.2 Polyphase codes
- 6.11.3 Ternary sequences
- 6.12 Suppression of sidelobes along the delay axis
- 6.12.1 Sidelobe suppression filter
- 6.12.2 SNR loss calculation
- 6.13 FSK signals with optimal aperiodic ACF
- Problems
- Matlab-based problems
- 7 Spread spectrum signature ensembles for CDMA applications
- 7.1 Data transmission via spread spectrum
- 7.1.1 Direct sequence spreading: BPSK data modulation and binary signatures
- 7.1.2 DS spreading: general case
- 7.1.3 Frequency hopping spreading
- 7.2 Designing signature ensembles for synchronous DS CDMA
- 7.2.1 Problem formulation
- 7.2.2 Optimizing signature sets in minimum distance
- 7.2.3 Welch-bound sequences
- 7.3 Approaches to designing signature ensembles for asynchronous DS CDMA
- 7.4 Time-offset signatures for asynchronous CDMA
- 7.5 Examples of minimax signature ensembles
- 7.5.1 Frequency-offset binary m-sequences
- 7.5.2 Gold sets
- 7.5.3 Kasami sets and their extensions
- 7.5.4 Kamaletdinov ensembles
- Problems
- Matlab-based problems
- 8 DS spread spectrum signal acquisition and tracking
- 8.1 Acquisition and tracking procedures
- 8.2 Serial search
- 8.2.1 Algorithm model
- 8.2.2 Probability of correct acquisition and average number of steps
- 8.2.3 Minimizing average acquisition time
- 8.3 Acquisition acceleration techniques
- 8.3.1 Problem statement
- 8.3.2 Sequential cell examining
- 8.3.3 Serial-parallel search
- 8.3.4 Rapid acquisition sequences
- 8.4 Code tracking
- 8.4.1 Delay estimation by tracking
- 8.4.2 Early–late DLL discriminators
- 8.4.3 DLL noise performance
- Problems
- Matlab-based problems
- 9 Channel coding in spread spectrum systems
- 9.1 Preliminary notes and terminology
- 9.2 Error-detecting block codes
- 9.2.1 Binary block codes and detection capability
- 9.2.2 Linear codes and their polynomial representation
- 9.2.3 Syndrome calculation and error detection
- 9.2.4 Choice of generator polynomials for CRC
- 9.3 Convolutional codes
- 9.3.1 Convolutional encoder
- 9.3.2 Trellis diagram, free distance and asymptotic coding gain
- 9.3.3 The Viterbi decoding algorithm
- 9.3.4 Applications
- 9.4 Turbo codes
- 9.4.1 Turbo encoders
- 9.4.2 Iterative decoding
- 9.4.3 Performance
- 9.4.4 Applications
- 9.5 Channel interleaving
- Problems
- Matlab-based problems
- 10 Some advancements in spread spectrum systems development
- 10.1 Multiuser reception and suppressing MAI
- 10.1.1 Optimal (ML) multiuser rule for synchronous CDMA
- 10.1.2 Decorrelating algorithm
- 10.1.3 Minimum mean-square error detection
- 10.1.4 Blind MMSE detector
- 10.1.5 Interference cancellation
- 10.1.6 Asynchronous multiuser detectors
- 10.2 Multicarrier modulation and OFDM
- 10.2.1 Multicarrier DS CDMA
- 10.2.2 Conventional MC transmission and OFDM
- 10.2.3 Multicarrier CDMA
- 10.2.4 Applications
- 10.3 Transmit diversity and space–time coding in CDMA systems
- 10.3.1 Transmit diversity and the space–time coding problem
- 10.3.2 Efficiency of transmit diversity
- 10.3.3 Time-switched space–time code
- 10.3.4 Alamouti space–time code
- 10.3.5 Transmit diversity in spread spectrum applications
- Problems
- Matlab-based problems
- 11 Examples of operational wireless spread spectrum systems
- 11.1 Preliminary remarks
- 11.2 Global positioning system
- 11.2.1 General system principles and architecture
- 11.2.2 GPS ranging signals
- 11.2.3 Signal processing
- 11.2.4 Accuracy
- 11.2.5 GLONASS and GNSS
- 11.2.6 Applications
- 11.3 Air interfaces cdmaOne (IS-95) and cdma2000
- 11.3.1 Introductory remarks
- 11.3.2 Spreading codes of IS-95
- 11.3.3 Forward link channels of IS-95
- 11.3.3.1 Pilot channel
- 11.3.3.2 Synchronization channel
- 11.3.3.3 Paging channels
- 11.3.3.4 Traffic channels
- 11.3.3.5 Forward link modulation
- 11.3.3.6 MS processing of forward link signal
- 11.3.4 Reverse link of IS-95
- 11.3.4.1 Reverse link traffic channel
- 11.3.4.2 Access channel
- 11.3.4.3 Reverse link modulation
- 11.3.5 Evolution of air interface cdmaOne to cdma2000
- 11.4 Air interface UMTS
- 11.4.1 Preliminaries
- 11.4.2 Types of UMTS channels
- 11.4.3 Dedicated physical uplink channels
- 11.4.4 Common physical uplink channels
- 11.4.5 Uplink channelization codes
- 11.4.6 Uplink scrambling
- 11.4.7 Mapping downlink transport channels to physical channels
- 11.4.8 Downlink physical channels format
- 11.4.9 Downlink channelization codes
- 11.4.10 Downlink scrambling codes
- 11.4.11 Synchronization channel
- 11.4.11.1 General structure
- 11.4.11.2 Primary synchronization code
- 11.4.11.3 Secondary synchronization code
- References
- Index

Channel coding in spread spectrum systems
soft algorithms is the reason why an alternative, hard decoding, is often used instead. This decoding mode includes two steps. In the first, decisions are made on all individual code symbols, so that the observation is demodulated into a vector of symbols belonging to the code alphabet (binary, in the case considered). Some symbols of the binary observation obtained this way may be erroneous, in which case the whole demodulated vector is likely to differ from all codewords. The second step finds, among all allowed codewords, the one having maximal likelihood, meaning the greatest probability of being transformed by the channel into the current demodulated binary observation. Such a decoding procedure, which typically ends by declaring some specific code vector to be the true one, is called error correction. Alternatively, the goal of decoding may be limited to only checking whether the current binary observation is a valid codeword or whether some errors occurred due to the corrupting effects of the channel. Then, if the binary observation belongs to the set of allowed codewords, it is mapped back onto the corresponding data bit sequence. Otherwise the unsuccessful transmission is registered, and the receiver either requests the transmitter to repeat the message (as in ARQ systems) or tries to restore it by interpolating between the previous and subsequent ones. This sort of decoding, called error detection, is characteristic of the application of block codes in modern commercial wireless spread spectrum systems, and the following section focuses on coding formats complying with the error detection task.
9.2 Error-detecting block codes
9.2.1 Binary block codes and detection capability
Suppose that b_0, b_1, ..., b_{k-1} are k source bits to be encoded in a binary codeword u = (u_0, u_1, ..., u_{n-1}) of length n > k. All 2^k combinations of k source bits are assumed possible, meaning that there are M = 2^k codewords altogether. In every codeword n - k binary symbols are redundant in the sense that only k symbols are necessary for one-to-one mapping of M source messages onto binary vectors. These redundant symbols make codewords more discernible from each other, securing better noise resistance. The set of all M = 2^k words of length n is called an (n, k) block code. Hard decoding, i.e. demodulation of a continuous observation into a binary one, corresponds to the model of the binary symmetric channel (BSC), transforming an input symbol u = 0, 1 into the opposite output binary symbol y ≠ u with the crossover (or symbol error) probability p. The attribute 'symmetric' emphasizes the equal probabilities of transitions of '0' into '1' and vice versa (see Figure 9.1).
[Figure: BSC transition diagram; input u = 0 passes to output y = 0 and input u = 1 to output y = 1 with probability 1 - p, while the crossovers 0 → 1 and 1 → 0 occur with probability p.]

Figure 9.1 Binary symmetric channel model
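The BSC of Figure 9.1 is simple to simulate. Below is a minimal sketch (the function name and the fixed seed are our choices, made only so the run is reproducible): each symbol is flipped independently with probability p.

```python
import random

def bsc(bits, p, seed=1):
    """Pass a bit sequence through a memoryless binary symmetric channel:
    every symbol is flipped independently with crossover probability p."""
    rng = random.Random(seed)
    return [b ^ (rng.random() < p) for b in bits]

# The empirical crossover rate approaches p for a long input.
n = 100_000
out = bsc([0] * n, p=0.1)
print(sum(out) / n)  # close to 0.1
```

With p = 0 the channel is transparent, in agreement with the model: no transitions of '0' into '1' or vice versa occur.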

Suppose that y = (y_0, y_1, ..., y_{n-1}) is the binary observation at the BSC output. If y does not coincide with one of the M code vectors u, the receiver sees that erroneous symbols are present (error detection); otherwise k data bits are released corresponding to the estimated codeword û = y. Clearly, if the transmitted word is u′ but the binary observation coincides with another codeword, i.e. û = y ≠ u′, an undetected error occurs and the released bits are not the true transmitted ones.
Let us introduce some more definitions. The Hamming distance d_H(f, g) between two vectors f = (f_0, f_1, ..., f_{n-1}) and g = (g_0, g_1, ..., g_{n-1}) of the same length n is the number of positions where the vectors have different elements f_i ≠ g_i. The Hamming weight w_H(f) of a vector f is the number of its non-zero components. If, for example, f = (01011) and g = (11000), then d_H(f, g) = 3, w_H(f) = 3 and w_H(g) = 2. It is straightforward to make sure that d_H(f, g) = w_H(f - g) and w_H(f) = d_H(f, 0), with 0 being the zero vector.
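The definitions and the numerical example above can be reproduced in a few lines of Python (the helper names d_H and w_H simply mirror the notation of the text):

```python
def d_H(f, g):
    """Hamming distance: number of positions where f and g differ."""
    assert len(f) == len(g)
    return sum(a != b for a, b in zip(f, g))

def w_H(f):
    """Hamming weight: number of non-zero components of f."""
    return sum(x != 0 for x in f)

f = (0, 1, 0, 1, 1)
g = (1, 1, 0, 0, 0)
print(d_H(f, g), w_H(f), w_H(g))  # 3 3 2

# d_H(f, g) = w_H(f - g): over GF(2), symbol-wise subtraction is XOR.
print(w_H(tuple(a ^ b for a, b in zip(f, g))))  # 3
```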
Let us assume that codeword u is transmitted. The BSC will transform it into a different fixed codeword v (thereby causing an undetected error and releasing untrue data bits corresponding to v) if symbol errors occur in all d_H(u, v) positions where u and v differ, with no corruption of the n - d_H(u, v) symbols coinciding in u and v. For a memoryless BSC, i.e. one where all symbol errors are independent, we may then put the probability P(y = v|u) of the event above as:

P(y = v|u) = p^{d_H(u,v)} (1 - p)^{n - d_H(u,v)}    (9.1)
Since p < 1/2 (otherwise we just interchange the designations of output '0' and '1'), to reduce the probability of confusion of codewords u and v the Hamming distance between them should be as large as possible. Consider now the Hamming distances d_H(u, v) between all different code vectors of a code U. Denote the least among them d_H and call it the (minimum) code distance of the code U:

d_H = min_{u,v ∈ U, u ≠ v} d_H(u, v)    (9.2)
Any codeword of U may appear as the transmitted one, and to minimize the risk of confusing the two closest code vectors the distance between them, i.e. the code distance d_H, has to be maximal. We thereby arrive at the following assertion.
Proposition 9.2.1. Code U is capable of detecting any t_d or fewer symbol errors (up to t_d errors) if and only if its code distance d_H ≥ t_d + 1.
Indeed, take a code with d_H ≤ t_d and pick out a pair of its closest code vectors u, v. If the d_H symbols of u differing from those of v are corrupted, then u becomes v, meaning that there exists a pattern of no more than t_d errors which is not detectable. Conversely, if the distance between any code vectors exceeds t_d, no pattern of t_d or fewer errors can transform one codeword into another.
One may prove the following statement in the same way (Problem 9.4).
Proposition 9.2.2. Code U is capable of correcting any t_c or fewer symbol errors (up to t_c errors) if and only if its code distance d_H ≥ 2t_c + 1.
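Proposition 9.2.1 is easy to check exhaustively on a small code. The sketch below uses the (4, 3) even-parity code (our choice of example, not taken from the text): its code distance is 2, so it detects t_d = 1 error but corrects none.

```python
from itertools import combinations, product

# The (4, 3) even-parity code: three data bits plus one parity check bit.
code = [bits + (sum(bits) % 2,) for bits in product((0, 1), repeat=3)]

def d_H(u, v):
    return sum(a != b for a, b in zip(u, v))

d_min = min(d_H(u, v) for u, v in combinations(code, 2))
print(d_min)  # 2, so t_d = d_min - 1 = 1

# Every single-symbol error is detectable: flipping one symbol of any
# codeword never produces another codeword.
detectable = all(
    tuple(s ^ (j == i) for j, s in enumerate(u)) not in code
    for u in code for i in range(4)
)
print(detectable)  # True
```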

9.2.2 Linear codes and their polynomial representation
Let us treat binary code symbols {0, 1} as elements of the binary finite field GF(2) (see Section 6.6) and consider symbol-wise linear operations over codewords of a code U obeying the arithmetic of GF(2). Clearly, there is only one non-trivial operation of this sort, namely symbol-wise GF(2) addition:

u = (u_0, u_1, ..., u_{n-1}), v = (v_0, v_1, ..., v_{n-1}) ⇒ u + v = (u_0 + v_0, u_1 + v_1, ..., u_{n-1} + v_{n-1})

For example, if u = (100111) and v = (010110), then u + v = (110001). Symbol-wise subtraction has no independent role and just repeats addition, since in GF(2) the negative of an element is the element itself. In the same way, symbol-wise multiplication of any codeword by a scalar from GF(2) (i.e. by 0 or 1) either makes it the zero vector or does not change it at all.
A binary code U is called linear if the sum of any of its code vectors is again some code vector belonging to U. The name stems from the fact that such a code is a vector (linear) space over the field GF(2) [31,33,91], although this concept is not seriously involved in our further discussion. Any linear code U of length n contains the zero vector (i.e. the one with n zero components) as a code vector, since the sum of an arbitrary code vector of U with itself produces exactly the zero vector: u + u = 0. The following statement explains one of the reasons why linear codes are of special interest.
Proposition 9.2.3. The code distance of a linear code U is equal to the minimum Hamming weight over all non-zero codewords:

d_H = min_{u ∈ U, u ≠ 0} w_H(u)    (9.3)
To prove this, just substitute d_H(u, v) = w_H(u - v) in (9.2) and note that u - v = u′ is again a codeword of U.
As (9.3) shows, there is no need to test all M(M - 1)/2 different vector pairs to find the code distance of a linear code containing M words. It is enough to 'weigh' the M - 1 non-zero code vectors, i.e. to perform M/2 times fewer tests, which, considering the typically enormous number of words M, offers a highly significant gain.
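Proposition 9.2.3 can be verified directly on a small linear code. The generator rows below are our own illustrative choice; the code is the set of all GF(2) combinations of the rows, and the minimum pairwise distance indeed coincides with the minimum non-zero weight.

```python
from itertools import combinations, product

def d_H(u, v):
    return sum(a != b for a, b in zip(u, v))

def w_H(u):
    return sum(u)

# Generator rows of a small (6, 3) binary linear code (illustrative only).
G = [(1, 0, 0, 1, 1, 0), (0, 1, 0, 1, 0, 1), (0, 0, 1, 0, 1, 1)]
code = []
for coeffs in product((0, 1), repeat=len(G)):
    word = [0] * 6
    for c, row in zip(coeffs, G):
        if c:
            word = [a ^ b for a, b in zip(word, row)]
    code.append(tuple(word))

pairwise = min(d_H(u, v) for u, v in combinations(code, 2))  # M(M-1)/2 tests
by_weight = min(w_H(u) for u in code if any(u))              # M-1 tests
print(pairwise, by_weight, pairwise == by_weight)  # 3 3 True
```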
A polynomial description of linear codes is a good aid in understanding the design of the error-detecting codes used in 2G and 3G wireless standards.
Let us associate with a codeword u = (u_0, u_1, ..., u_{n-1}) the code polynomial u(z) of a dummy variable z arranged as:

u(z) = u_{n-1} z^{n-1} + u_{n-2} z^{n-2} + ... + u_0
where by way of agreement we put z^0 = 1. This polynomial representation, which is widely used in coding theory, is just a form of the z-transform underlying discrete linear system analysis, discrete signal processing, digital filtering etc. [2,7]. One-to-one correspondence between the sets of codewords and code polynomials means that the sum of two code polynomials u(z), v(z) of a linear code U is again a code polynomial of the same code. Specifically, if u(z) = Σ_{i=0}^{n-1} u_i z^i and v(z) = Σ_{i=0}^{n-1} v_i z^i are code polynomials of words u, v of a linear code U, then u(z) + v(z) = Σ_{i=0}^{n-1} (u_i + v_i) z^i is a code polynomial of the word u + v ∈ U, where addition of coefficients follows the rules of the field they belong to, i.e. in our case GF(2).
Polynomial arithmetic used in coding analysis and design includes two more operations: multiplication and division with remainder. The rules of these operations are universal regardless of the field of polynomial coefficients, but, dealing now with binary codes, we use the terms of binary arithmetic. Consider an arbitrary (not necessarily code) binary polynomial a(z), i.e. one with coefficients from GF(2). The highest power of z in this polynomial holding a non-zero coefficient is called the degree of a(z), denoted deg a(z). Let a(z), b(z) be two binary polynomials with deg a(z) = m, deg b(z) = n. Then their product a(z)b(z) is a polynomial of degree m + n obtained by extending the commutativity (z^i a = a z^i) and distributivity laws to operations involving the formal variable z and gathering together coefficients of equal degrees of z:
a(z)b(z) = (a_m z^m + a_{m-1} z^{m-1} + ... + a_0)(b_n z^n + b_{n-1} z^{n-1} + ... + b_0)
         = a_m b_n z^{m+n} + (a_m b_{n-1} + a_{m-1} b_n) z^{m+n-1} + ... + (a_1 b_0 + a_0 b_1) z + a_0 b_0
         = Σ_{k=0}^{m+n} ( Σ_{i=0}^{k} a_i b_{k-i} ) z^k
Certainly, we put z^m z^n = z^{m+n}, all operations over a_i, b_i are performed in GF(2), and in the last internal sum the coefficients a_i, b_i whose indexes become negative or exceed the polynomial degree should be set equal to zero. Take for example the binary polynomials a(z) = z^4 + z^3 + 1, b(z) = z^2 + z + 1; then their product is a(z)b(z) = z^6 + z^3 + z^2 + z + 1.
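Binary polynomial multiplication is conveniently sketched with integers, bit i of which holds the coefficient of z^i; GF(2) addition of partial products then becomes XOR (this is the usual 'carry-less multiplication'; the function name is ours):

```python
def gf2_poly_mul(a, b):
    """Multiply two binary polynomials, each encoded as an integer whose
    bit i is the coefficient of z**i (carry-less multiplication)."""
    result = 0
    shift = 0
    while b:
        if b & 1:
            result ^= a << shift  # add (XOR) the shifted partial product
        b >>= 1
        shift += 1
    return result

a = 0b11001  # z^4 + z^3 + 1
b = 0b00111  # z^2 + z + 1
print(bin(gf2_poly_mul(a, b)))  # 0b1001111, i.e. z^6 + z^3 + z^2 + z + 1
```

The printed result reproduces the product computed in the text.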
The algorithm of dividing a dividend a(z) by the divisor b(z) with remainder looks as follows:

a(z) = q(z)b(z) + r(z)    (9.4)
where q(z) is the quotient and r(z) the remainder. The uniqueness of q(z), r(z) is secured by the relations deg a(z) = deg q(z) + deg b(z) and deg r(z) < deg b(z). The algorithm (9.4) is similar to the 'school' rule of dividing integers with remainder, the polynomial degree replacing the number magnitude. One of the techniques realizing this operation is 'long division', i.e. successive computation of a remainder and division of it by the divisor until the remainder degree becomes smaller than the degree of the divisor. At the first iteration b(z) is multiplied by z raised to the power equalizing the degree of the product with that of the dividend a(z). Subtraction (equivalent to addition in GF(2)) of the product obtained from a(z) results in the first remainder, which at the second iteration plays the role of the dividend, and so forth. Let us illustrate it by an example.

Example 9.2.1. Suppose a(z) = z^4 + z^3 + 1, b(z) = z^2 + z + 1 and apply the long division algorithm:

      z^4 + z^3 + 1        | z^2 + z + 1
    + z^4 + z^3 + z^2      |-------------
    -----------------      | z^2 + 1
            z^2 + 1
          + z^2 + z + 1
          -------------
                  z

After two iterations we have q(z) = z^2 + 1, r(z) = z, so that division with remainder results in z^4 + z^3 + 1 = (z^2 + 1)(z^2 + z + 1) + z.
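In the same integer encoding used earlier (bit i holding the coefficient of z^i), long division over GF(2) is a loop of shift-and-XOR steps; each XOR is exactly one subtraction row of Example 9.2.1:

```python
def gf2_poly_divmod(a, b):
    """Divide binary polynomial a by b over GF(2) by long division; both
    are integers with bit i holding the coefficient of z**i.
    Returns (quotient, remainder)."""
    assert b != 0
    q = 0
    while a.bit_length() >= b.bit_length():
        shift = a.bit_length() - b.bit_length()  # equalize the degrees
        q ^= 1 << shift
        a ^= b << shift  # subtraction = addition = XOR over GF(2)
    return q, a

# Example 9.2.1: a(z) = z^4 + z^3 + 1, b(z) = z^2 + z + 1.
q, r = gf2_poly_divmod(0b11001, 0b111)
print(bin(q), bin(r))  # 0b101 0b10, i.e. q(z) = z^2 + 1 and r(z) = z
```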
Similarly to integers, we say that a(z) is divisible by b(z) (or b(z) divides a(z)) when the remainder is zero, i.e. a(z) = q(z)b(z).
Consider now a linear code U of length n with all code polynomials divisible by a fixed polynomial g(z) of degree r ≥ 1. Any code polynomial of U then has the form u(z) = b(z)g(z), and since there are 2^{n-r} different factors b(z) securing a product degree no greater than n - 1, such a code can include 2^{n-r} codewords at most. In fact, it is always in one's power to go the reverse way and build a code with this maximal number of words, i.e. transmitting k = n - r information bits, the polynomial g(z) of a fixed degree r being given. For that it is enough to use the k = n - r data bits b_0, b_1, ..., b_{k-1} as coefficients of a data polynomial b(z) = b_{k-1} z^{k-1} + b_{k-2} z^{k-2} + ... + b_0 and construct the corresponding code polynomial as the product u(z) = b(z)g(z). Then the 2^k different data k-bit blocks are in one-to-one correspondence with the same number of code polynomials of degree no greater than n - 1. The linearity of this code can be checked readily (Problem 9.11). The polynomial g(z) in such a construction is called the generator polynomial of U. Note that the number of redundant or check symbols of such a code always equals r, i.e. the degree of the generator polynomial.
When a data polynomial b(z) is multiplied by the generator polynomial g(z) directly, the codewords corresponding to the polynomials u(z) = b(z)g(z) are non-systematic, i.e. the data bits are not explicitly seen in them. To come to a systematic code, in which, e.g., the last k binary symbols are the information bits themselves and the r = n - k first ones are check symbols, some rearrangement of codewords may be done. The complete set of codewords in so doing remains the same; only the mapping of data bits onto codewords alters. Let us multiply a data polynomial b(z) by z^r = z^{n-k}, coming to a polynomial z^{n-k} b(z) of degree no greater than n - 1. If the remainder r(z) of its division by g(z) is discarded (just added to z^{n-k} b(z) in the binary case), it becomes divisible by the generator polynomial g(z), i.e. becomes a code polynomial. The latter is:

u(z) = z^{n-k} b(z) + r(z)