
A perverse encoder could use an intra quantizer matrix that isn't perceptually weighted.

A perverse encoder could use a nonintra quantizer matrix that isn’t flat. Separate nonintra quantizer matrices can be provided for luma and chroma.

Under certain circumstances, concealment motion vectors (CMVs) are allowed: If a macroblock is lost owing to transmission error, CMVs allow a decoder to use its prediction facilities to synthesize picture information to conceal the damaged macroblock. A CMV would be useless if it were contained in its own macroblock! So, a CMV is associated with the macroblock immediately below.

Coding of a block

Each macroblock is accompanied by a small amount of prediction mode information; zero, one, or more motion vectors (MVs); and DCT-coded residuals.

Each block of an intra macroblock is coded similarly to a block in JPEG. Transform coefficients are quantized with a quantizer matrix that is (ordinarily) perceptually weighted. Provision is made for 8-, 9-, and 10-bit DC coefficients. (In the 422 profile [422P], 11-bit DC coefficients are permitted.) DC coefficients are differentially coded within a slice (to be described on page 534).
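To make the quantization step concrete, here is a minimal sketch in Python (NumPy assumed; the function name is my own). The matrix is the default intra quantizer matrix of MPEG-1 and MPEG-2; the arithmetic only approximates the exact rules of ISO/IEC 13818-2, and the DC coefficient is handled separately, as described above.

import numpy as np

# Default MPEG intra quantizer matrix. Entries grow toward the
# high-frequency (lower-right) corner, so high frequencies are
# quantized more coarsely: the perceptual weighting.
INTRA_Q = np.array([
    [ 8, 16, 19, 22, 26, 27, 29, 34],
    [16, 16, 22, 24, 27, 29, 34, 37],
    [19, 22, 26, 27, 29, 34, 34, 38],
    [22, 22, 26, 27, 29, 34, 37, 40],
    [22, 26, 27, 29, 32, 35, 40, 48],
    [26, 27, 29, 32, 35, 40, 48, 58],
    [26, 27, 29, 34, 38, 46, 56, 69],
    [27, 29, 35, 38, 46, 56, 69, 83]])

def quantize_intra_ac(coef, qscale):
    # Approximate intra AC quantization: each DCT coefficient is
    # divided by its matrix entry scaled by the quantizer scale
    # factor. (DC is coded separately; exact rounding is omitted.)
    return np.rint(16.0 * coef / (INTRA_Q * qscale)).astype(int)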

In an I-picture, DC terms of the DCT are differentially coded: The DC term for each luma block is used as a predictor for the corresponding DC term of the following macroblock. DC terms for CB and CR blocks are similarly predicted.
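A minimal sketch of that differential coding, assuming (for illustration only) that the predictor is simply the previous DC value and that it resets to a mid-range value at the start of each slice; the standard's exact reset value and per-block bookkeeping are simplified away:

def code_dc_terms(dc_values, reset=1024):
    # Differentially code DC terms: each value is coded as the
    # difference from its predictor (the previous DC term). The
    # predictor resets at the start of a slice; 'reset' here is
    # illustrative, not the standard's exact rule.
    pred, diffs = reset, []
    for dc in dc_values:
        diffs.append(dc - pred)
        pred = dc
    return diffs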

In principle, residuals in a nonintra macroblock could be encoded directly. In MPEG, they are coded using DCT, for two reasons. First, DCT coding exploits any spatial coherence that may be present in the residual. Second, DCT coding allows use of the same rate control (based upon quantization) and VLE encoding that are already in place for intra macroblocks. The residuals for a nonintra block are dequantized, then added to the motion-compensated values from the reference frame. Because the dequantized transform coefficients are not directly viewed, it is not appropriate to use a perceptually weighted quantizer matrix. By default, the quantizer matrix for nonintra blocks is flat – that is, it contains the same value in all entries.
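The decode path for a nonintra block then looks roughly like this sketch (Python with NumPy and SciPy assumed; the scale factors in the dequantization are illustrative, not the standard's exact arithmetic):

import numpy as np
from scipy.fft import idctn  # 2-D inverse DCT

FLAT_Q = np.full((8, 8), 16)  # default nonintra matrix: flat

def reconstruct_nonintra(quantized, qscale, prediction):
    # Dequantize residual coefficients with the flat matrix (all
    # frequencies treated alike, since the residual is not viewed
    # directly), inverse-transform, then add the motion-compensated
    # prediction taken from the reference frame.
    coef = quantized * FLAT_Q * qscale / 16.0
    residual = idctn(coef, norm='ortho')
    return np.clip(prediction + residual, 0, 255)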

Frame and field DCT types

Luma in a macroblock is partitioned into four blocks according to one of two schemes, frame DCT coding or field DCT coding. I will describe three cases where frame DCT coding is appropriate, and then introduce field DCT coding.

Figure 47.4 The frame DCT type involves straightforward partitioning of luma samples of each 16×16 macroblock into four 8×8 blocks. This is most efficient for macroblocks of field pictures, native progressive frame pictures, and frame-structured pictures having little interfield motion.

At first glance it is a paradox that field-structured pictures must use frame DCT coding!

In a frame-structured picture that originated from a native-progressive source, every macroblock is best predicted by a spatially contiguous 16×16 region of a reference frame. This is frame DCT coding: Luma samples of a macroblock are partitioned into 8×8 luma blocks as depicted in Figure 47.4 above.

In a field-structured picture, alternate image rows of each source frame have been unwoven by the encoder into two fields, each of which is free from interlace effects. Every macroblock in such a picture is best predicted from a spatially contiguous 16×16 region of a reference field (or, if you prefer to think of it this way, from alternate lines of a 16×32 region of a reference frame). This is also frame DCT coding.

In a frame-structured picture from an interlaced source, a macroblock that contains no scene element in motion is ordinarily best predicted by frame DCT coding.

An alternate approach is necessary in a frame-structured picture from an interlaced source where a macroblock contains a scene element in motion. Such a scene element will take different positions in the first and second fields: A spatially contiguous 16×16 region of a reference picture will form a poor predictor. MPEG-2 provides a way to efficiently code such a macroblock. The scheme involves an alternate partitioning of luma into 8×8 blocks: Luma blocks are formed by collecting alternate rows of the macroblock. The scheme is called field DCT coding; it is depicted in Figure 47.5 at the top of the facing page.


Figure 47.5 The field DCT type creates four 8×8 luma blocks by collecting alternate image rows. This allows efficient coding of a frame-structured picture from an interlaced source, where there is significant interfield motion. (Comparable unweaving is already implicit in field-structured pictures.)
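The two partitionings can be expressed compactly. This sketch (Python/NumPy; the function name is my own) produces the four 8×8 luma blocks for either DCT type:

import numpy as np

def partition_luma(mb, dct_type):
    # mb is a 16x16 luma macroblock. Frame type: plain quadrants
    # (Figure 47.4). Field type: first gather the even (top-field)
    # rows, then the odd (bottom-field) rows, so each 8x8 block
    # contains samples from just one field (Figure 47.5).
    if dct_type == 'field':
        mb = np.concatenate([mb[0::2], mb[1::2]])
    return [mb[y:y+8, x:x+8] for y in (0, 8) for x in (0, 8)]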

In MPEG terminology, the absolute value of an AC coefficient is its level. I prefer to call it amplitude. Sign is coded separately.

You might think it a good idea to handle chroma samples in interlaced frame pictures the same way that luma is handled. However, with 4:2:0 subsampling, that would force having either 8×4 chroma blocks or 16×32 macroblocks. Neither of these options is desirable; so, in a frame-structured picture with interfield motion, chroma blocks are generally poorly predicted. Owing to the absence of vertical subsampling in the 4:2:2 chroma format, 4:2:2 sequences are inherently free from such poor chroma prediction.

Zigzag and VLE

Once DCT coefficients are quantized, an encoder scans them in zigzag order. I sketched zigzag scanning in JPEG in Figure 45.8, on page 499. This scan order, depicted in Figure 47.6 overleaf, is also used in MPEG-1.

In addition to the JPEG/MPEG-1 scan order, MPEG-2 provides an alternate scan order optimized for frame-structured pictures from interlaced sources. The alternate scan, sketched in Figure 47.7 overleaf, can be chosen by an encoder on a picture-by-picture basis.
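Scan[0] need not be stored as a table; it can be generated by walking the anti-diagonals and alternating direction. Here is a sketch (the alternate scan[1] is an irregular fixed table in the standard and is not reproduced here):

def zigzag_order(n=8):
    # Classic zigzag (scan[0]): visit coefficients anti-diagonal by
    # anti-diagonal, snaking so the path runs from DC toward the
    # highest horizontal and vertical frequencies.
    key = lambda rc: (rc[0] + rc[1],
                      rc[0] if (rc[0] + rc[1]) % 2 else rc[1])
    return sorted(((r, c) for r in range(n) for c in range(n)), key=key)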

After zigzag scanning, zero-valued AC coefficients are identified, then {run-length, level} pairs are formed and variable-length encoded. For intra macroblocks, MPEG-2 allows an encoder to choose between two VLE schemes: the scheme first standardized in MPEG-1, and an alternate scheme more suitable for frame-structured pictures with interfield motion.
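Pair formation is simple; in this sketch the signed coefficient stands in for the {level, sign} pair noted in the margin:

def run_level_pairs(scanned_ac):
    # Collapse zigzag-scanned AC coefficients into {run, level}
    # pairs: each nonzero coefficient is coded with the count of
    # zeros that precede it. Trailing zeros are not paired; they
    # are absorbed by the end-of-block (EOB) code.
    pairs, run = [], 0
    for coef in scanned_ac:
        if coef == 0:
            run += 1
        else:
            pairs.append((run, coef))
            run = 0
    return pairs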

Block diagrams of an MPEG-2 encoder and decoder system are sketched in Figure 47.8 overleaf.


Distributed refresh does not guarantee a deterministic time to complete refresh. See Lookabaugh, cited at the end of this chapter.

Figure 47.6 Zigzag scan[0] denotes the scan order used in JPEG and MPEG-1, and available in MPEG-2.

Figure 47.7 Zigzag scan[1] may be chosen by an MPEG-2 encoder on a picture-by-picture basis.

Refresh

Occasional insertion of I-macroblocks is necessary for three main reasons: to establish a reference picture upon channel acquisition; to limit the duration of artifacts introduced by uncorrectable transmission errors; and to limit drift (that is, divergence of encoder and decoder predictors due to mistracking between the encoder’s IDCT and the decoder’s IDCT). MPEG-2 mandates that every macroblock in the frame be refreshed by an intra macroblock before the 132nd P-macroblock. Encoders usually meet this requirement by periodically or intermittently inserting I-pictures. However, I-pictures are not a strict requirement of MPEG-2, and distributed refresh – where I-macroblocks are used for refresh, instead of I-pictures – is occasionally used, especially for direct broadcast from satellite (DBS).
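As an illustration of distributed refresh, one naive policy intra-codes a single column of macroblocks per picture, cycling across the frame; for 720-sample-wide MP@ML video (45 macroblock columns), every macroblock is then refreshed every 45 pictures, comfortably inside the 132-prediction limit. This is a sketch of that hypothetical policy, not a description of any particular encoder:

def refresh_column(picture_index, mb_cols=45):
    # Distributed refresh, round-robin by column: the returned
    # column of macroblocks is forced to intra coding in this
    # picture, so a full refresh completes every mb_cols pictures.
    return picture_index % mb_cols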

A sophisticated encoder examines the source video to detect scene cuts, and adapts its sequence of picture types according to picture content.

Motion estimation

A motion vector must do more than cover motion from one frame to the next: With B-pictures, a motion vector must describe motion from one reference frame to the next – that is, from an I-picture or P-picture to the following I-picture or P-picture. As the number of interposed B-pictures increases – as page 155's m value increases – motion vector range must increase.

[Figure 47.8 appears here, rotated on the page; only its block labels survive extraction. Encoder: VIDEO IN → ME against REFERENCE FRAMESTORES → DCT → Q (with QUANT MATRIX and RATE CONTROL) → VLE (with TABLES) → FIFO → MPEG BITSTREAM, with a Q⁻¹/DCT⁻¹/MCI reconstruction path feeding the framestores; MOTION VECTORS, QUANTIZER MATRICES, and the QUANTIZER SCALE FACTOR are carried in the bitstream. Decoder: BUFFER → VLE⁻¹ (TABLES) → Q⁻¹ (MATRIX) → DCT⁻¹, summed with the MCI prediction from REFERENCE FRAMESTORES → VIDEO OUT.]

Figure 47.8 MPEG encoder and decoder block diagrams are sketched here. The encoder includes a motion estimator (ME); this involves huge computational complexity. Motion vectors (MVs) are incorporated into the bitstream and thereby conveyed to the decoder; the decoder does not need to estimate motion. The encoder effectively contains a copy of the decoder; the encoder’s picture difference calculations are based upon reconstructed picture information that will be available at the decoder.
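The key point of Figure 47.8 – that the encoder reconstructs exactly what the decoder will – can be sketched as follows (Python with NumPy/SciPy; a toy flat quantizer stands in for the quantizer matrix and rate control, and the function name is my own):

import numpy as np
from scipy.fft import dctn, idctn

def encode_nonintra_block(source, prediction, qscale):
    # Transform and quantize the residual (this is what gets VLE
    # coded), then reconstruct it exactly as a decoder would, so
    # that the reference framestore holds the decoder's picture,
    # not the pristine source, preventing encoder/decoder drift.
    coef = dctn(source - prediction, norm='ortho')
    q = np.rint(coef / qscale)
    recon = prediction + idctn(q * qscale, norm='ortho')
    return q, recon  # recon is written to the reference framestore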

Whether an encoder actually searches this extent is not standardized!

The cost and complexity of motion estimation increase dramatically as search range increases.

The burden of motion estimation (ME) falls on the encoder. Motion estimation is very complex and computationally intensive. MPEG-2 allows a huge motion vector range: For MP@ML frame-structured pictures, the 16×16 prediction region can potentially lie anywhere within [-1024…+1023½] luma samples horizontally and [-128…+127½] luma samples vertically from the macroblock being decoded. Elements in the picture header (f_code) specify the motion vector range used in each picture; this limits the number of bits that need to be allocated to motion vectors for that picture.

The purpose of motion estimation in MPEG is not exactly to estimate motion in regions of the picture – rather, it is to access a prediction region that minimizes the amount of prediction error (residual) information that needs to be coded. Usually this goal will be achieved by using the best estimate of average motion in the 16×16 macroblock, but not always.

I make this distinction because some video processing algorithms need accurate motion vectors, where the estimated motion is a good match to motion as perceived by a human observer. In many video processing algorithms, such as in temporal resampling used in standards converters, or in deinterlacing, a motion vector is needed for every luma sample, or every few samples. In MPEG, only one or two vectors are needed to predict a macroblock from a 16×16 region in one or two reference pictures.

If the fraction bit of a motion vector is set, then predictions are formed by averaging sample values from neighboring pixels (at integer coordinates). This is straightforward for a decoder. However, for an encoder to produce ½-luma-sample motion vectors in both horizontal and vertical axes requires quadruple the computational effort of producing full-sample vectors.
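Half-sample prediction amounts to this sketch (coordinates given in half-sample units; MPEG-2's exact integer rounding of the average is simplified to a true mean here, and the function name is my own):

def half_sample_predict(ref, y2, x2):
    # Form one prediction sample at position (y2/2, x2/2): average
    # the 1, 2, or 4 integer-coordinate samples that surround it.
    ys = {y2 // 2, (y2 + 1) // 2}
    xs = {x2 // 2, (x2 + 1) // 2}
    samples = [ref[y][x] for y in ys for x in xs]
    return sum(samples) / len(samples)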

There are three major methods of motion estimation:

Block matching, also called full search, involves an exhaustive search for the best match of the target macroblock through some two-dimensional extent of the reference picture.

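Here is a sketch of full search over a hypothetical ±16-sample range, scoring candidates by sum of absolute differences (SAD) – consistent with the point above that the goal is minimum residual, not necessarily true motion:

import numpy as np

def full_search(target, ref, ty, tx, rng=16):
    # target: 16x16 macroblock whose top-left corner is at (ty, tx)
    # in the current picture. Exhaustively test every integer offset
    # within +/- rng and keep the one that minimizes the SAD of the
    # prediction residual against the reference picture.
    best_sad, best_mv = None, (0, 0)
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            y, x = ty + dy, tx + dx
            if (0 <= y <= ref.shape[0] - 16 and
                    0 <= x <= ref.shape[1] - 16):
                sad = np.abs(target.astype(int)
                             - ref[y:y+16, x:x+16].astype(int)).sum()
                if best_sad is None or sad < best_sad:
                    best_sad, best_mv = sad, (dy, dx)
    return best_mv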