
image are included at the head of the JPEG bitstream, and thereby conveyed to the decoder.

JPEG decoding

Decompression is achieved by performing the inverse of the encoder operations, in reverse order. Figure 45.11 shows the matrix of differences between original sample values and reconstructed sample values for this example – the reconstruction error. As is typical of JPEG, the original sample values are not perfectly reconstructed. However, discarding information according to the spatial frequency response of human vision ensures that the errors introduced during compression will not be too perceptible.

JPEG performance is loosely characterized by the error between the original image data and the reconstructed data. Metrics such as mean-squared error (MSE) are used to objectify this measure; however, MSE (and other engineering and mathematical measures) don’t necessarily correlate well with subjective performance. In practice, we take care to choose quantizer matrices according to the properties of perception. Imperfect recovery of the original image data after JPEG decompression effectively adds noise to the image. Imperfect reconstruction of the DC term can lead to JPEG’s 8×8 blocks becoming visible – the JPEG blocking artifact.
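As an illustration, MSE (and the related peak signal-to-noise ratio, PSNR) can be computed directly from the reconstruction-error matrix of Figure 45.11. This Python/NumPy sketch is purely illustrative; neither function is part of any JPEG standard:

```python
import numpy as np

def mse(original, reconstructed):
    """Mean-squared error between two equally sized sample arrays."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return np.mean(diff * diff)

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB, derived from MSE."""
    m = mse(original, reconstructed)
    return float("inf") if m == 0 else 10.0 * np.log10(peak * peak / m)

# The 8x8 reconstruction-error matrix of Figure 45.11:
error = np.array([
    [-5, -2,  0,  1,  1, -1, -1, -1],
    [-4,  1,  1,  2,  3,  0,  0,  0],
    [-5, -1,  3,  5,  0, -1,  0,  1],
    [-1,  0,  1, -2, -1,  0,  2,  4],
    [-4, -3, -3, -1,  0, -5, -3, -1],
    [-2, -2, -3, -3, -2, -3, -1,  0],
    [ 2,  1, -1,  1,  0, -4, -2, -1],
    [ 4,  3,  0,  0,  1, -3, -1,  0],
])

# The error matrix is itself the difference, so its MSE is the mean square:
print(mse(error, np.zeros_like(error)))  # 5.125
```

Note that two reconstructions with the same MSE can look quite different; that is why the quantizer matrices are chosen perceptually rather than by minimizing MSE alone.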

The lossiness of JPEG, and its compression, come almost entirely from the quantization step. The DCT itself may introduce a small amount of roundoff error, as may the inverse DCT. The variable-length encoding and decoding processes are perfectly lossless.
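The quantizer's roundoff can be demonstrated with a minimal sketch of uniform scalar quantization. The step size and coefficient value below are hypothetical, chosen only to show that the reconstruction error of a single coefficient is bounded by half the step size:

```python
# Uniform quantization of one DCT coefficient: divide by the step size and
# round; dequantization multiplies back. The roundoff is the sole loss here.
def quantize(coeff, step):
    return int(round(coeff / step))

def dequantize(level, step):
    return level * step

step = 16    # hypothetical quantizer-matrix entry for this coefficient
coeff = 83   # hypothetical DCT coefficient

level = quantize(coeff, step)            # 5
reconstructed = dequantize(level, step)  # 80
print(coeff - reconstructed)             # error is 3, at most step/2
```

Every nonzero entry of the error matrix in Figure 45.11 traces back to roundoffs of this kind, accumulated through the inverse DCT.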

Figure 45.11 Reconstruction error is shown in this matrix of differences between original sample values and reconstructed sample values. Original sample values are not perfectly reconstructed, but discarding information according to the spatial frequency response of human vision ensures that the errors will not be too perceptible.

ε =
  -5  -2   0   1   1  -1  -1  -1
  -4   1   1   2   3   0   0   0
  -5  -1   3   5   0  -1   0   1
  -1   0   1  -2  -1   0   2   4
  -4  -3  -3  -1   0  -5  -3  -1
  -2  -2  -3  -3  -2  -3  -1   0
   2   1  -1   1   0  -4  -2  -1
   4   3   0   0   1  -3  -1   0

 

 

 

 

 

 

 

 

 

 

 

 

DIGITAL VIDEO AND HD ALGORITHMS AND INTERFACES

Figure 45.12 Compression ratio control in JPEG is effected by altering the quantizer matrix: The larger the entries in the quantizer matrix, the higher the compression ratio. The higher the compression ratio, the higher the reconstruction error. At some point, compression artifacts will become visible.

[Block diagram: encoder DCT → Q → VLE, with a manual control on Q; decoder VLE⁻¹ → Q⁻¹ → DCT⁻¹.]

In ISO JPEG, the quantizer matrix is directly conveyed in the bitstream. In the DV adaptation of JPEG, several quantizer matrices are defined in the standard; the bitstream indicates which one to use.

Compression ratio control

The larger the entries in the quantizer matrix, the higher the compression ratio. Compression ratio control in JPEG can be achieved by altering the quantizer matrix, as suggested by the manual control sketched in Figure 45.12. Larger step sizes give higher compression ratios, but image quality is liable to suffer if the step sizes get too big. Smaller step sizes give better quality, at the expense of poorer compression ratio. There is no easy way to predict, in advance of actually performing the compression, how many bytes of compressed data will result from a particular image.

The quantizer matrix could, in principle, be chosen adaptively to maximize the performance for a particular image. However, this isn’t practical. JPEG encoders for still images generally offer a choice of several compression settings, each associated with a fixed quantizer that is chosen by the system designer.
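One widely used scheme for deriving such fixed settings is the quality scaling implemented in the IJG reference JPEG encoder: a base matrix is scaled by a factor derived from a quality number in the range 1 to 100. The sketch below follows that scheme; the base values are the first row of the example luminance matrix from Annex K of the JPEG standard, and the code is illustrative rather than normative:

```python
def scale_quant_matrix(base, quality):
    """Scale a base quantizer matrix for a quality setting in 1..100,
    following the scheme of the IJG reference JPEG encoder."""
    if not 1 <= quality <= 100:
        raise ValueError("quality must be in 1..100")
    scale = 5000 // quality if quality < 50 else 200 - 2 * quality
    # Each entry is scaled, then clamped to the legal range 1..255:
    return [[max(1, min(255, (q * scale + 50) // 100)) for q in row]
            for row in base]

# First row of the example luminance matrix from Annex K:
base = [[16, 11, 10, 16, 24, 40, 51, 61]]
print(scale_quant_matrix(base, 50)[0])  # quality 50 keeps the base values
print(scale_quant_matrix(base, 25)[0])  # lower quality: larger step sizes
```

A system designer would precompute a handful of such matrices, one per offered compression setting, rather than scale them on the fly.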

Because different quantizer matrices may be associated with different images, the quantizer matrix must be conveyed to the decoder, as sketched in Figure 45.13, either as part of the file or through a side channel. In colour images, separate quantizers are typically used for the luma and chroma components. In stillframe applications, the overhead of this operation is small. In a realtime system, the overhead of conveying quantizer matrices with every frame, or even within a frame, is a burden.

CHAPTER 45 JPEG AND MOTION-JPEG (M-JPEG) COMPRESSION

Figure 45.13 Because the quantizer is adjustable, the quantizer matrix must be conveyed through a side channel to the decoder. In colour images, separate quantizers are used for the luma and chroma components. [Block diagram: encoder DCT → Q → VLE and decoder VLE⁻¹ → Q⁻¹ → DCT⁻¹, each with its quantizer TABLE conveyed alongside.]

A modified approach to compression ratio control is adopted in many forms of M-JPEG (and also, as you will see in Chapter 47, in MPEG): Reference luma and chroma quantizer matrices are established, and all of their entries are scaled up and down by a single numerical parameter, the quantizer scale factor (QSF, sometimes denoted mquant). QSF can be varied to accomplish rate control.

The notation mquant is found in the ITU-T H.261 standard for teleconferencing; mquant (or MQUANT) is not found in JPEG or MPEG documents, but is used informally.

Hamilton, Eric (1992), JPEG File Interchange Format, Version 1.02 (Milpitas, Calif.: C-Cube Microsystems). This informal document was endorsed by ECMA and was published in June 2009 as ECMA TR/98, having the same title.
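The quantizer-scale-factor idea can be sketched in a few lines. The reference-matrix fragment and the clamping detail below are illustrative, not taken from any standard:

```python
def apply_qsf(reference, qsf):
    """Scale every entry of a reference quantizer matrix by a single
    quantizer scale factor, keeping each step size at least 1."""
    return [[max(1, round(q * qsf)) for q in row] for row in reference]

# Fragment of a hypothetical reference luma matrix:
reference = [[16, 11, 10, 16],
             [12, 12, 14, 19]]

coarse = apply_qsf(reference, 2.0)   # larger steps: higher compression ratio
fine = apply_qsf(reference, 0.75)    # smaller steps: better quality, more bits
print(coarse[0])
print(fine[0])
```

Because only the single scale factor need be conveyed per frame (or per region within a frame), the overhead of transmitting whole matrices is avoided, which is what makes this approach attractive for realtime rate control.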

As mentioned earlier, JPEG ordinarily uses luma/chroma coding with 4:2:0 chroma subsampling. However, the JPEG standard accommodates R’G’B’ image data without subsampling, and also accommodates four-channel image data (such as CMYK, used in print) without subsampling. These schemes are inapplicable to video.

JPEG/JFIF

The ISO/IEC standard for JPEG defines a bitstream, consistent with the original expectation that JPEG would be used across communication links. To apply the JPEG technique to computer files, a small amount of supplementary information is required; in addition, it is necessary to encode the ISO/IEC bitstream into a bytestream. The de facto standard for single-image JPEG files is the JPEG File Interchange Format (JFIF), adopted by an industry group led by C-Cube. A JFIF file encapsulates a JPEG bitstream, along with a small amount of supplementary data. If you are presented with an image data file described as JPEG, it is almost certainly an ISO/IEC JPEG bitstream in a JFIF wrapper.
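A JFIF wrapper is easy to recognize: the bytestream begins with the SOI marker (0xFF 0xD8), immediately followed by an APP0 segment whose identifier string is "JFIF". A minimal heuristic check, assuming only the byte layout of the JFIF specification:

```python
def looks_like_jfif(data: bytes) -> bool:
    """Heuristic check for a JPEG/JFIF bytestream: an SOI marker
    followed by an APP0 segment identified as 'JFIF'."""
    return (len(data) >= 11
            and data[0:2] == b"\xff\xd8"     # SOI: start of image
            and data[2:4] == b"\xff\xe0"     # APP0 marker
            and data[6:11] == b"JFIF\x00")   # APP0 identifier string

# A minimal synthetic header (not a decodable image):
header = b"\xff\xd8\xff\xe0\x00\x10JFIF\x00\x01\x02"
print(looks_like_jfif(header))                    # True
print(looks_like_jfif(b"\xff\xd8\xff\xdb\x00C"))  # bare ISO bitstream: False
```

A bare ISO/IEC bitstream passes only the SOI test; it is the APP0 segment that marks the JFIF encapsulation.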

The JPEG standard itself implies that JPEG could be applied to linear-light RGB data. However, JPEG has poor visual performance unless applied to perceptually coded image data, that is, to gamma-corrected R’G’B’.


The ISO/IEC JPEG standard itself seems to suggest that the technique can be applied to arbitrary RGB data. The standard fails to mention primary chromaticities, white point, transfer function, or gamma correction. If accurate colour is to be achieved, then means outside the standard must be employed to convey these parameters.

Prior to Mac OS X 10.6, there were two classes of JPEG/JFIF files, PC and Macintosh. Files that were created on classic Macintosh conformed to the default Macintosh coding, where R’G’B’ codes were expected to be raised to the 1.52 power to produce display tristimulus. There was no reliable way to distinguish the two classes of files. Files created on PCs, and modern Macs, are interpreted in sRGB coding.

Motion-JPEG (M-JPEG)

Motion-JPEG (M-JPEG) refers to the use of a JPEG-like algorithm to compress every picture in a sequence of video fields or frames. I say “JPEG-like”: The algorithms used have all of the general features of the algorithm standardized by JPEG, including DCT, quantization, zigzag scanning, and variable-length encoding. However, ISO JPEG bitstreams are not typically produced, and some systems add algorithmic features outside of the JPEG standard. Various M-JPEG systems are widely used in desktop video editing; however, there are no well established standards, and compressed video files typically cannot be interchanged between M-JPEG systems.
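The zigzag scan mentioned above orders a block's coefficients along anti-diagonals, alternating direction so that low-frequency coefficients come first. A small generator, shown here as an illustrative sketch (its output for an 8×8 block matches the JPEG zigzag order):

```python
def zigzag_order(n=8):
    """Generate (row, col) coordinates of an n-by-n block in zigzag
    order: anti-diagonals traversed in alternating directions."""
    coords = []
    for d in range(2 * n - 1):
        # Cells on anti-diagonal d, listed top-right to bottom-left:
        diag = [(i, d - i) for i in range(n) if 0 <= d - i < n]
        # Odd diagonals keep that direction; even ones are reversed.
        coords.extend(diag if d % 2 == 1 else reversed(diag))
    return coords

order = zigzag_order()
print(order[:6])  # DC coefficient first, then the lowest frequencies
```

Running the quantized coefficients out in this order concentrates the trailing zeros at the end of each block, which is what makes the subsequent variable-length (run-length) encoding effective.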

In studio applications, file interchange is a practical necessity, and two approaches have emerged. Both are functionally equivalent to M-JPEG, but have firm standards.

The first approach is DV compression, developed for consumer digital recording on videotape, but also used in desktop video editing. DV compression is described in the following chapter.

The second approach is MPEG-2 video compression, described in Chapter 47, on page 513. MPEG-2 was developed to exploit interframe coherence to achieve much higher compression ratios than M-JPEG, and is intended mainly for video distribution. However, the I-picture-only variant of MPEG-2 (sometimes called I-frame-only) is functionally equivalent to M-JPEG, and is being used for studio editing.

