
TELEVISION TRANSMISSION

 

Table 14.1a Television Standards, NTSC/US

Channel width (see Figure 14.6):
    Transmission                    6 MHz
    Video                           4.2 MHz
    Aural                           ±25 kHz
Picture carrier location            1.25 MHz above lower boundary of channel
Modulation                          AM composite picture and synchronizing signal on visual carrier together with FM audio signal on audio carrier
Scanning lines                      525 per frame, interlaced 2:1
Scanning sequence                   Horizontally from left to right, vertically from top to bottom
Horizontal scanning frequency       15,750 Hz for monochrome, or 2/455 × chrominance subcarrier ≅ 15,734.264 ± 0.044 Hz for NTSC color transmission
Vertical scanning frequency         60 Hz for monochrome, or 2/525 × horizontal scanning frequency ≅ 59.94 Hz for color
Blanking level                      Transmitted at 75 ± 2.5% of peak carrier level
Reference black level               Black level is separated from blanking level by 7.5 ± 2.5% of the video range from blanking level to reference white level
Reference white level               Luminance signal of reference white is 12.5 ± 2.5% of peak carrier
Peak-to-peak variation              Total permissible peak-to-peak variation in one frame due to all causes is less than 5%
Polarity of transmission            Negative; a decrease in initial light intensity causes an increase in radiated power
Transmitter brightness response     For monochrome TV, RF output varies in an inverse logarithmic relation to brightness of scene
Aural transmitter power             Maximum radiated power is 20% (minimum 10%) of peak visual transmitter power

Source: Refs. 7, 8.

2. SECAM—Sequential color and memory (Europe); and
3. PAL—Phase alternation line (Europe).

The systems are similar in that they separate the luminance and chrominance information and transmit the chrominance information in the form of two color difference signals, which modulate a color subcarrier transmitted within the video band of the luminance signal. The systems vary in the processing of chrominance information.

In the NTSC system, the color difference signals I and Q amplitude-modulate subcarriers that are displaced in phase by π/2, giving a suppressed-carrier output. A burst of the subcarrier frequency is transmitted during the horizontal back porch to synchronize the color demodulator.

In the PAL system, the phase of the subcarrier is changed from line to line, which requires the transmission of a switching signal as well as a color burst.

14.5 VIDEO TRANSMISSION STANDARDS (CRITERIA FOR BROADCASTERS)


Table 14.1b Basic European TV Standard

Channel width:
    Transmission                    7 or 8 MHz
    Video                           5, 5.5, and 6 MHz
    Aural                           FM, ±50 kHz
Picture carrier location            1.25 MHz above lower boundary of channel
                                    Note: VSB transmission is used, similar to North American practice.
Modulation                          AM composite picture and synchronizing signal on visual carrier together with FM audio signal on audio carrier
Scanning lines                      625 per frame, interlaced 2:1
Scanning sequence                   Horizontally from left to right, vertically from top to bottom
Horizontal scanning frequency       15,625 Hz ± 0.1%
Vertical scanning frequency         50 Hz
Blanking level                      Transmitted at 75 ± 2.5% of peak carrier level
Reference black level               Black level is separated from blanking by 3–6.5% of peak carrier
Peak white level as a percentage of peak carrier    10–12.5%
Polarity of transmission            Negative; a decrease in initial light intensity causes an increase in radiated power
Aural transmitter power             Maximum radiated power is 20% of peak visual power

Source: Refs. 7, 8.

 

In the SECAM system, the color subcarrier is frequency modulated alternately by the color difference signals. This is accomplished by an electronic line-to-line switch. The switching information is transmitted as a line-switching signal.

14.5.2 Standardized Transmission Parameters (Point-to-Point TV)

These parameters are provided in Table 14.2a for the basic electrical interface at video frequencies (baseband) and in Table 14.2b for the interface at IF for radio systems. The signal-to-weighted-noise ratio at the output of the final receiver is 53 dB.

Table 14.2a Interconnection at Video Frequencies (Baseband)

Impedance                                   75 Ω (unbalanced) or 124 Ω (resistive)
Return loss                                 No less than 30 dB
Nominal signal amplitude (monochrome)       1 V peak-to-peak
Nominal signal amplitude (composite color)  1.25 V peak-to-peak, maximum
Polarity                                    Black-to-white transitions, positive going

 

 


Table 14.2b Interconnection at Intermediate Frequency (IF)

Impedance                                   75 Ω unbalanced
Input level                                 0.3 V rms
Output level                                0.5 V rms
IF up to 1 GHz (transmitter frequency)      35 MHz
IF above 1 GHz                              70 MHz

 

 

 

14.6 METHODS OF PROGRAM CHANNEL TRANSMISSION

A program channel carries the accompanying audio. If feeding a stereo system there will, of course, be two audio channels. A program channel may also imply what are called cue and coordination channels. A cue channel is used by a program director or producer to manage her/his people at the distant end; it is just one more audio channel. A coordination channel is a service channel among TV technicians.

Composite transmission normally is used on TV broadcast and community antenna television (CATV or cable TV) systems. Video and audio carriers are “combined” before being fed to the radiating antenna for broadcast. The audio subcarrier is illustrated in Figure 14.6 in the composite mode.

For point-to-point transmission on coaxial cable, radiolink, and earth-station systems, the audio program channel is generally transmitted separately from its companion video, providing the following advantages:

• Allows for individual channel level control;
• Provides greater control over crosstalk;
• Increases guardband between video and audio;
• Saves separation at the broadcast transmitter;
• Leaves the TV studio on a separate channel; and
• Permits individual program channel preemphasis.

14.7 TRANSMISSION OF VIDEO OVER LOS MICROWAVE

Video/ TV transmission over line-of-sight (LOS) microwave has two basic applications:

1. For studio-to-transmitter link, connecting a TV studio to its broadcast transmitter; and

2. To extend CATV systems to increase local programming content.

As covered earlier in the chapter, video transmission requires special consideration. The following paragraphs summarize the special considerations a planner must take

into account for video transmission over radiolinks.

Raw video baseband modulates the radiolink transmitter. The aural channel is transmitted on a subcarrier well above the video portion, and the subcarriers are themselves frequency modulated. Recommended subcarrier frequencies may be found in CCIR Rec. 402-2 (Ref. 9) and Rep. 289-4 (Ref. 10).


14.7.1 Bandwidth of the Baseband and Baseband Response

One of the most important specifications in any radiolink system transmitting video is frequency response. A system with cascaded hops should have essentially a flat bandpass in each hop. For example, if a single hop is 3 dB down at 6 MHz in the resulting baseband, a system of five such hops would be 15 dB down. A good single hop should be ±0.25 dB or less out to 8 MHz. The most critical area in the baseband for video frequency response is in the low-frequency area of 15 kHz and below. Cascaded radiolink systems used in transmitting video must consider response down to 10 Hz.
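The dB arithmetic above is worth making explicit: per-hop rolloff at a given baseband frequency adds directly across identical cascaded hops. A minimal sketch, using the hop figures from the text:

```python
# Cascaded-hop baseband response: losses in dB at one frequency add
# directly across a chain of identical hops.
def cascaded_loss_db(per_hop_loss_db: float, hops: int) -> float:
    """Total rolloff of a chain of identical hops at one frequency (dB)."""
    return per_hop_loss_db * hops

# A hop that is 3 dB down at 6 MHz, cascaded five times -> 15 dB down:
print(cascaded_loss_db(3.0, 5))
# A good hop at +/-0.25 dB cascades to only 1.25 dB over five hops:
print(cascaded_loss_db(0.25, 5))
```

This is why the per-hop flatness requirement is so much tighter than what a single link would seem to need.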

Modern radiolink equipment used to transport video operates in the 2-GHz band and above. The 525-line video requires a baseband in excess of 4.2 MHz plus available baseband above the video for the aural channel. Desirable characteristics for 525-line video then would be a baseband at least 6 MHz wide. For 625-line TV, 8 MHz would be required, assuming that the aural channel would follow the channelization recommended by CCIR Rec. 402-2 (Ref. 9).

14.7.2 Preemphasis

Preemphasis is commonly used on wideband radio systems that employ frequency modulation. After demodulation in an FM receiver, thermal noise has a ramplike characteristic, with increasing noise per unit bandwidth toward the band edges and decreasing noise toward band center. Preemphasis and its companion, deemphasis, make thermal noise more uniform across the demodulated baseband. Preemphasis–deemphasis characteristics for television are described in CCIR Rec. 405-1 (Ref. 11).
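The ramplike characteristic can be quantified: post-detection FM noise voltage density rises linearly with baseband frequency, so noise power density goes as f², which a complementary deemphasis network flattens. A sketch of the resulting dB spread (the frequencies are illustrative; real TV networks follow the CCIR Rec. 405-1 curves):

```python
import math

# Post-detection FM noise power density rises as f^2 ("triangular" noise),
# so its level in dB relative to a reference frequency is 20*log10(f/f_ref).
def fm_noise_density_db(f_hz: float, f_ref_hz: float) -> float:
    """Noise power density at f relative to the density at f_ref (dB)."""
    return 20.0 * math.log10(f_hz / f_ref_hz)

# Without deemphasis, noise density at 6 MHz is ~21.6 dB above that at
# 0.5 MHz; a matching deemphasis slope removes this tilt.
print(round(fm_noise_density_db(6e6, 0.5e6), 1))
```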

14.7.3 Differential Gain

Differential gain is the difference in gain of the radio relay system as measured by a low-amplitude, high-frequency (chrominance) signal at any two levels of a low-frequency (luminance) signal on which it is superimposed. It is expressed as a percentage of maximum gain. Differential gain shall not exceed the amounts indicated below at any value of APL (average picture level) between 10% and 90%:

• Short haul      2%
• Medium haul     5%
• Satellite       4%
• Long haul       8%
• End-to-end      10%

Based on ANSI/EIA/TIA-250C (Ref. 6). Also see CCIR Rec. 567-3 (Ref. 12).

14.7.4 Differential Phase

Differential phase is the difference in phase shift through the radio relay system exhibited by a low-amplitude, high-frequency (chrominance) signal at any two levels of a low-frequency (luminance) signal on which it is superimposed. Differential phase is expressed as the maximum phase change between any two levels. Differential phase, expressed in degrees of the high-frequency sine wave, shall not exceed the amounts indicated below at any value of APL (average picture level) between 10% and 90%:


 

 

 

• Short haul      0.5°
• Medium haul     1.3°
• Satellite       1.5°
• Long haul       2.5°
• End-to-end      3.0°

Based on ANSI/EIA/TIA-250C (Ref. 6).
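The two definitions above reduce to simple arithmetic: compare the chrominance subcarrier's gain (or phase shift) measured at two luminance levels. A sketch with invented measurement values:

```python
# Differential gain/phase per the definitions above. The measured gain and
# phase numbers in the example are invented for illustration.
def differential_gain_pct(gain_a: float, gain_b: float) -> float:
    """Gain difference between two APL levels, as % of the maximum gain."""
    return abs(gain_a - gain_b) / max(gain_a, gain_b) * 100.0

def differential_phase_deg(phase_a_deg: float, phase_b_deg: float) -> float:
    """Maximum phase change of the subcarrier between two APL levels."""
    return abs(phase_a_deg - phase_b_deg)

# Subcarrier gain 1.00 at 10% APL but 0.96 at 90% APL -> 4% differential
# gain: within the satellite limit (4%) but over the short-haul limit (2%).
print(round(differential_gain_pct(1.00, 0.96), 1))
# Subcarrier phase 10.0 deg vs. 12.5 deg -> 2.5 deg differential phase.
print(differential_phase_deg(10.0, 12.5))
```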

14.7.5 Signal-to-Noise Ratio (10 kHz to 5 MHz)

The video signal-to-noise ratio is the ratio of the total luminance signal level (100 IRE units) to the weighted rms noise level. The noise referred to is predominantly thermal noise in the 10-kHz to 5.0-MHz range. Synchronizing signals are not included in the measurement. The EIA states that there is a difference of less than 1 dB between 525-line systems and 625-line systems.

As stated in the ANSI/ EIA/ TIA 250C standard (Ref. 6), the signal-to-noise ratio shall not be less than the following:

• Short haul      67 dB
• Medium haul     60 dB
• Satellite       56 dB
• Long haul       54 dB
• End-to-end      54 dB

and, for the low-frequency range (0–10 kHz), the signal-to-noise ratio shall not be less than the following:

• Short haul      53 dB
• Medium haul     48 dB
• Satellite       50 dB
• Long haul       44 dB
• End-to-end      43 dB
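Checking a measured value against these floors is a one-line comparison; the table below simply transcribes the 10 kHz–5 MHz limits quoted above:

```python
# ANSI/EIA/TIA-250C weighted S/N floors (10 kHz-5 MHz band), as quoted in
# the text, keyed by circuit category.
SNR_FLOOR_DB = {
    "short haul": 67, "medium haul": 60,
    "satellite": 56, "long haul": 54, "end-to-end": 54,
}

def meets_snr(circuit: str, measured_db: float) -> bool:
    """True if a measured weighted S/N satisfies the floor for the circuit."""
    return measured_db >= SNR_FLOOR_DB[circuit]

print(meets_snr("satellite", 57.2))   # a 57.2-dB measurement passes (>= 56)
print(meets_snr("short haul", 60.0))  # fails: short haul requires >= 67 dB
```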

14.7.6 Continuity Pilot

A continuity pilot tone is inserted above the TV baseband at 8.5 MHz. This constant-amplitude signal actuates an automatic gain control circuit in the far-end FM receiver to keep the signal level fairly uniform. It may also actuate an alarm when the pilot is lost. The normal TV baseband signal cannot be used for this purpose because of its widely varying amplitude.
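Because the pilot is sent at constant amplitude, the far-end receiver can derive both a gain correction and a loss-of-pilot alarm from it. A sketch of that logic; the nominal level and alarm threshold here are invented for illustration:

```python
import math

# Continuity-pilot AGC sketch. PILOT_NOMINAL_V and the alarm threshold are
# assumed values, not figures from the standard.
PILOT_NOMINAL_V = 1.0

def agc_correction_db(measured_pilot_v: float) -> float:
    """Gain correction needed to restore the pilot to its nominal level."""
    return 20.0 * math.log10(PILOT_NOMINAL_V / measured_pilot_v)

def pilot_lost(measured_pilot_v: float, alarm_threshold_v: float = 0.1) -> bool:
    """Raise an alarm when the pilot disappears."""
    return measured_pilot_v < alarm_threshold_v

print(round(agc_correction_db(0.5), 2))  # pilot 6.02 dB low -> apply +6.02 dB
print(pilot_lost(0.02))                  # True: pilot lost, raise the alarm
```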

14.8 TV TRANSMISSION BY SATELLITE RELAY

TV satellite relay is widely used throughout the world. In fact, it is estimated that more than three-quarters of the transponder bandwidth over North America is dedicated to television relay. These satellite facilities serve CATV headends and hotel/motel TV service, and provide TV broadcasters with national, continental, and worldwide network coverage, TV news coverage, and so on.6

6Headend is where CATV program derives, its point of origin (see Section 15.2).

 

 


Table 14.3 Satellite Relay TV Performance

Parameters                                      Space Segment   Terrestrial Link(a)   End-to-End Values
Nominal impedance                               75 Ω
Return loss                                     30 dB
Nonuseful dc component                          0.5 V
Nominal signal amplitude                        1 V
Insertion gain                                  0 ± 0.25 dB     0 ± 0.3 dB      0 ± 0.5 dB
Insertion gain variation (1 s)                  ± 0.1 dB        ± 0.2 dB        ± 0.3 dB
Insertion gain variation (1 h)                  ± 0.25 dB       ± 0.3 dB        ± 0.5 dB
Signal-to-continuous random noise               53 dB(b)        58 dB           51 dB
Signal-to-periodic noise (0–1 kHz)              50 dB           45 dB           39 dB
Signal-to-periodic noise (1 kHz–6 MHz)          55 dB           60 dB           53 dB
Signal-to-impulse noise                         25 dB           25 dB           25 dB(c)
Crosstalk between channels (undistorted)        58 dB           64 dB           56 dB
Crosstalk between channels (undifferentiated)   50 dB           56 dB           48 dB
Luminance nonlinear distortion                  10%             2%              12%
Chrominance nonlinear distortion (amplitude)    3.5%            2%              5%
Chrominance nonlinear distortion (phase)        4°              2°              6°
Differential gain (x or y)                      10%             5%              13%
Differential phase (x or y)                     3°              2°              5°
Chrominance–luminance intermodulation           ± 4.5%          ± 2%            ± 5%
Gain–frequency characteristic (0.15–6 MHz)      ± 0.5 dB        ± 0.5 dB(d)     ± 1.0 dB(d)
Delay–frequency characteristic (0.15–6 MHz)     ± 50 ns         ± 50 ns(d)      ± 105 ns(d)

(a) Connecting earth station to national technical control center.
(b) In cases where the receive earth station is colocated with the broadcaster's premises, a relaxation of up to 3 dB in video signal-to-weighted-noise ratio may be permissible. In this context, the term colocated is intended to represent the situation where the noise contribution of the local connection is negligible.
(c) Law of addition not specified in CCIR Rec. 567.
(d) Highest frequency: 5 MHz.

Source: CCIR Rep. 965-1, Ref. 13.

As discussed in Section 9.3, a satellite transponder is nothing more than an RF repeater. This means that the TV signal degrades very little in the transmission medium. With LOS microwave, by contrast, the signal passes through many repeaters, each degrading it somewhat. With the satellite there is one repeater, possibly two for worldwide coverage. However, care must be taken with the satellite delay problem if programming is interactive and two satellites in tandem relay a TV signal. Table 14.3 summarizes TV performance through satellite relay.
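The tandem-delay concern is easy to quantify from geometry: each geostationary hop adds an up-leg and a down-leg of propagation delay. A rough estimate, assuming a typical 38,000-km slant range per leg (the exact figure depends on earth-station elevation angles):

```python
# One-way propagation delay through geostationary relay(s). The slant
# range is an assumed round figure; it varies with station geometry.
C_KM_S = 299_792.458        # speed of light, km/s
SLANT_RANGE_KM = 38_000     # assumed up- and downlink slant range per leg

def relay_delay_ms(satellites_in_tandem: int = 1) -> float:
    """Up + down legs per satellite hop, in milliseconds."""
    return 2 * SLANT_RANGE_KM * satellites_in_tandem / C_KM_S * 1000.0

print(round(relay_delay_ms(1)))  # roughly a quarter second for one hop
print(round(relay_delay_ms(2)))  # two satellites in tandem: about double
```

For interactive programming, two tandem hops push the one-way delay past half a second, which is why the text flags the case.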

14.9 DIGITAL TELEVISION

14.9.1 Introduction

Up to this point in the chapter we have covered the baseband TV signal and radio relaying of that signal via LOS microwave and satellite, both analog systems using


frequency modulation. The PSTN is evolving into a fully digital network. It is estimated that by the year 2005 the world’s entire PSTN will be digital.

Now consider converting a TV signal to a digital format. Let's assume it is PCM (Chapter 6), with the three steps required to develop a digital PCM signal from an analog counterpart. We will remember those steps are sampling, quantization, and coding. The bit rate of such a digital signal will be on the order of 80–160 Mbps. If we assume one bit per hertz of bandwidth, the signal would require in the range of 80 MHz to 160 MHz for just one TV channel! This is a viable alternative on extremely wideband fiber-optic systems. However, for narrower-bandwidth systems, such as radio and coaxial cable, compression techniques are necessary.

Television, by its very nature, is highly redundant. What we mean here is that we are repeating much of the same information over and over again. Thus a TV signal is a natural candidate for digital signal processing to remove much of the redundancy, with the potential to greatly reduce the required bit rate after compression.

In this section we will discuss several digitizing techniques for TV video and provide an overview of some bit rate reduction methods, including a brief introduction to a derivative of MPEG-2.7 The last section reviews several approaches to the development of a conference television signal.

14.9.2 Basic Digital Television

14.9.2.1 Two Coding Schemes. There are two distinct digital coding methods for color television: component and composite coding. For our discussion here, there are four components that make up a color video signal. These are R for red, G for green, B for blue, and Y for luminance. The output signals of a TV camera are converted by a linear matrix into luminance (Y) and two color difference signals R-Y and B-Y.

With the component method of transmission, these signals are individually digitized by an analog-to-digital (A/ D) converter. The resulting digital bit streams are then combined with overhead and timing by means of a multiplexer for transmission over a single medium such as specially conditioned wirepair or coaxial cable.

Composite coding, as the term implies, directly codes the entire video baseband. The derived bit stream has a notably lower bit rate than that for component coding.

CCIR Rep. 646-4 (Ref. 14) compares the two coding techniques. The advantages of separate-component coding are the following:

• The input to the circuit is provided in separate component form by the signal sources (in the studio).

• Component coding is adopted generally for studios, and the inherent advantages of component signals for studios must be preserved over the transmission link in order to allow downstream processing at a receiving studio.

• The country receiving the signals via an international circuit uses a color system different from that used in the source country.

• The transmission path is entirely digital, which fits in with the trend toward all-digital systems that is expected to continue.

The advantages of transmitting in the composite form are the following:

7MPEG stands for Motion Picture Experts Group.


• The input to the circuit is provided in composite form by the signal sources (at the studio).

• The color system used by the receiving country, in the case of an international circuit, is the same as that used by the source country.

• The transmission path consists of mixed analog and digital sections.

14.9.2.2 Development of a PCM Signal from an Analog TV Signal. As we discussed in Chapter 6, there are three stages in the development of a PCM signal format from its analog counterpart. These are sampling, quantization, and coding. These same three stages are used in the development of a PCM signal from its equivalent analog video signal. There is one difference, though—no companding is used; quantization is linear.

The calculation of the sampling rate is based on the color subcarrier frequency, called f_sc. For NTSC television the color subcarrier is at 3.58 MHz. In some cases the sampling rate is three times this frequency (3f_sc), in other cases four times the color subcarrier frequency (4f_sc). For PAL television, the color subcarrier is 4.43 MHz. Based on 8-bit PCM words, the bit rates are 3 × 3.58 × 10^6 × 8 = 85.92 Mbps and 4 × 3.58 × 10^6 × 8 = 114.56 Mbps. In the case of a PAL transmission system using 4f_sc, the uncompressed bit rate for the video is 4 × 4.43 × 10^6 × 8 = 141.76 Mbps. These values are for composite coding.
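The composite bit-rate arithmetic above can be reproduced directly:

```python
# Uncompressed composite-coded bit rate: sampling at an integer multiple
# of the color subcarrier f_sc, 8 bits per sample, no companding.
def composite_bit_rate_mbps(f_sc_mhz: float, multiple: int, bits: int = 8) -> float:
    """Bit rate in Mbps for a given subcarrier and sampling multiple."""
    return f_sc_mhz * multiple * bits

print(round(composite_bit_rate_mbps(3.58, 3), 2))  # NTSC, 3*f_sc -> 85.92
print(round(composite_bit_rate_mbps(3.58, 4), 2))  # NTSC, 4*f_sc -> 114.56
print(round(composite_bit_rate_mbps(4.43, 4), 2))  # PAL,  4*f_sc -> 141.76
```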

In the case of component coding there are two separate digital channels. The luminance channel is 108 Mbps and the color-difference channel is 54 Mbps (for both NTSC and PAL systems).

In any case, linear quantization is employed, and an S/D (signal-to-distortion ratio) of better than 48 dB can be achieved with 8-bit coding. The coding is pure binary or two's-complement encoding.

14.9.3 Bit Rate Reduction and Compression Techniques

In Section 14.9.2, raw video transmitted in a digital format required from 82 Mbps to 162 Mbps per video channel (no audio). Leaving aside studio-to-transmitter links (STLs) and CATV supertrunks, it is incumbent on the transmission engineer to reduce these bit rates, without sacrificing picture quality, wherever long-distance requirements are involved.

CCIR Rep. 646-4 (Ref. 14) covers three basic bit rate reduction methods:

1. Removal of horizontal and vertical blanking intervals;

2. Reduction of sampling frequency; and

3. Reduction of the number of bits per sample.

We will only cover this last method.

14.9.3.1 Reduction of the Number of Bits per Sample. There are three methods that may be employed for bit rate reduction of digital television by reducing the number of bits per sample. These may be used singly or in combination:

Predictive coding, sometimes called differential PCM;

Entropy coding; and

Transform coding.


Differential PCM, according to CCIR, has so far emerged as the most popular method. The prediction process required can be classified into two groups. The first one is called intraframe or intrafield, and is based only on the reduction of spatial redundancy. The second group is called interframe or interfield, and is based on the reduction of temporal redundancy as well as spatial redundancy.
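A toy intrafield differential PCM loop illustrates the idea: predict each sample from its already-reconstructed left neighbor and transmit only the coarsely quantized prediction error. The quantizer step and the sample values are invented; real codecs use far more elaborate predictors and quantizers.

```python
# Minimal 1-D DPCM sketch: previous-sample prediction plus a coarse
# linear quantizer. The encoder tracks the decoder's reconstruction so
# that quantization error does not accumulate.
def dpcm_encode(samples: list[int], step: int = 4) -> list[int]:
    prev, errors = 0, []
    for s in samples:
        e = (s - prev) // step      # quantized prediction error
        errors.append(e)
        prev += e * step            # mirror the decoder's reconstruction
    return errors

def dpcm_decode(errors: list[int], step: int = 4) -> list[int]:
    prev, out = 0, []
    for e in errors:
        prev += e * step
        out.append(prev)
    return out

line = [100, 102, 104, 104, 180, 182]     # a toy scan-line segment
print(dpcm_decode(dpcm_encode(line)))     # reconstruction within one step
```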

14.9.3.2 Specific Bit Rate Reduction Techniques. The following specific bit rate reduction techniques are based on Refs. 14 and 15.

Intraframe coding. Intraframe coding techniques provide compression by removing redundant information within each video frame. These techniques rely on the fact that images typically contain a great deal of similar information; for example, a one-color background wall may occupy a large part of each frame. By taking advantage of this redundancy, the amount of data necessary to accurately reproduce each frame may be reduced.

Interframe coding. Interframe coding is a technique that adds the dimension of time to compression by taking advantage of the similarity between adjacent frames. Only those portions of the picture that have changed since the previous picture frame are communicated. Interframe coding systems do not transmit detailed information if it has not changed from one frame to the next. The result is a significant increase in transmission efficiency.

Intraframe and interframe coding used in combination. Intraframe and interframe coding used together provide a powerful compression technique. This is achieved by applying intraframe coding techniques to the image changes that occur from frame to frame. That is, by subtracting image elements between adjacent frames, a new image remains that contains only the differences between the frames. Intraframe coding, which removes similar information within a frame, is applied to this image to provide further reduction in redundancy.
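The combined scheme can be sketched in a few lines: subtract the previous frame (interframe), then run-length encode the mostly-zero difference image (a stand-in for intraframe redundancy removal). The pixel values are invented; real systems operate on 2-D blocks with transform coding rather than run lengths.

```python
# Interframe differencing followed by a simple intraframe redundancy
# remover (run-length coding) on a toy 1-D "frame".
def frame_difference(prev: list[int], cur: list[int]) -> list[int]:
    return [c - p for p, c in zip(prev, cur)]

def run_length_encode(vals: list[int]) -> list[tuple[int, int]]:
    out: list[tuple[int, int]] = []
    for v in vals:
        if out and out[-1][0] == v:
            out[-1] = (v, out[-1][1] + 1)   # extend the current run
        else:
            out.append((v, 1))              # start a new run
    return out

prev = [50] * 8
cur = [50, 50, 50, 90, 90, 50, 50, 50]      # a small object appeared
print(run_length_encode(frame_difference(prev, cur)))
# [(0, 3), (40, 2), (0, 3)] - only the changed region costs bits
```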

Motion compensation coding. To improve image quality at low transmission rates, a specific type of interframe coding, motion compensation, is commonly used. Motion compensation exploits the fact that most changes between frames of a video sequence occur because objects move. By focusing on the motion that has occurred between frames, motion compensation coding significantly reduces the amount of data that must be transmitted.

Motion compensation coding compares each frame with the preceding frame to determine the motion that has occurred between the two frames. It compensates for this motion by describing the magnitude and direction of an object’s movement (e.g., a head moving right). Rather than completely regenerating any object that moves, motion compensation coding simply commands that the existing object be moved to a new location.

Once the motion compensation techniques estimate and compensate for the motion that takes place from frame-to-frame, the differences between frames are smaller. As a result, there is less image information to be transmitted. Intraframe coding techniques are applied to this remaining image information.

14.9.4 Overview of the MPEG-2 Compression Technique

This section is based on the ATSC (Advanced Television Systems Committee) version of MPEG-2, which is used primarily for terrestrial broadcasting and cable TV. The objective of the ATSC standard (Refs. 16, 17) is to specify a system for the transmission


of high-quality video, audio, and ancillary services over a single 6-MHz channel.8 The ATSC system delivers 19 Mbps of throughput on a 6-MHz broadcasting channel and 38 Mbps on a 6-MHz CATV channel. The video source to be encoded can have a resolution as much as five times better than conventional NTSC television. This means that a bit rate reduction factor of 50 or higher is required. To achieve this, the system must use the channel capacity efficiently by exploiting complex video and audio bit rate reduction technology. The objective is to represent the video, audio, and data sources with as few bits as possible while preserving the level of quality required for a given application.

Figure 14.7 Block diagram of the digital terrestrial television broadcasting model. (Based on the ITU-R Task Group 11.3 model. From Ref. 18, Figure 4.1. Reprinted with permission.)
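A rough check on that factor of 50: compare an uncompressed HDTV source against the 19-Mbps broadcast payload. The source parameters below (1920 × 1080 pixels, ~30 frames/s, ~16 bits/pixel for sampled luminance plus chrominance) are illustrative assumptions, not figures from the standard:

```python
# Required bit rate reduction factor: uncompressed source rate divided by
# the channel payload. Source-format numbers are assumed for illustration.
def compression_factor(source_mbps: float, channel_mbps: float = 19.0) -> float:
    return source_mbps / channel_mbps

raw_hdtv_mbps = 1920 * 1080 * 30 * 16 / 1e6   # ~995 Mbps uncompressed
print(round(compression_factor(raw_hdtv_mbps)))  # factor of ~52
```

This lands right around the "50 or higher" figure the text quotes.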

A block diagram of the ATSC system is shown in Figure 14.7. This system model consists of three subsystems:

1. Source coding and compression;

2. Service multiplex and transport; and

3. RF/ transmission subsystem.

Of course, source coding and compression refers to bit rate reduction methods (data compression), which are appropriate for the video, audio, and ancillary digital data bit streams. The ancillary data include control data, conditional access control data, and data associated with the program audio and video services, such as closed captioning. Ancillary data can also refer to independent program services. The digital television system uses MPEG-2 video stream syntax for coding of the video and Digital Audio Compression Standard, called AC-3, for the coding of the audio.

Service multiplex and transport refers to dividing the bit stream into packets of infor-

8The reader will recall that the conventional 525-line NTSC television signal, when radiated, is assigned a 6-MHz channel (see Figure 14.6).
