- •Contents
- •Figures
- •Tables
- •Preface
- •Acknowledgments
- •1. Raster images
- •Aspect ratio
- •Geometry
- •Image capture
- •Digitization
- •Perceptual uniformity
- •Colour
- •Luma and colour difference components
- •Digital image representation
- •Square sampling
- •Comparison of aspect ratios
- •Aspect ratio
- •Frame rates
- •Image state
- •EOCF standards
- •Entertainment programming
- •Acquisition
- •Consumer origination
- •Consumer electronics (CE) display
- •Contrast
- •Contrast ratio
- •Perceptual uniformity
- •The “code 100” problem and nonlinear image coding
- •Linear and nonlinear
- •4. Quantization
- •Linearity
- •Decibels
- •Noise, signal, sensitivity
- •Quantization error
- •Full-swing
- •Studio-swing (footroom and headroom)
- •Interface offset
- •Processing coding
- •Two’s complement wrap-around
- •Perceptual attributes
- •History of display signal processing
- •Digital driving levels
- •Relationship between signal and lightness
- •Algorithm
- •Black level setting
- •Effect of contrast and brightness on contrast and brightness
- •An alternate interpretation
- •Brightness and contrast controls in LCDs
- •Brightness and contrast controls in PDPs
- •Brightness and contrast controls in desktop graphics
- •Symbolic image description
- •Raster images
- •Conversion among types
- •Image files
- •“Resolution” in computer graphics
- •7. Image structure
- •Image reconstruction
- •Sampling aperture
- •Spot profile
- •Box distribution
- •Gaussian distribution
- •8. Raster scanning
- •Flicker, refresh rate, and frame rate
- •Introduction to scanning
- •Scanning parameters
- •Interlaced format
- •Interlace and progressive
- •Scanning notation
- •Motion portrayal
- •Segmented-frame (24PsF)
- •Video system taxonomy
- •Conversion among systems
- •9. Resolution
- •Magnitude frequency response and bandwidth
- •Visual acuity
- •Viewing distance and angle
- •Kell effect
- •Resolution
- •Resolution in video
- •Viewing distance
- •Interlace revisited
- •10. Constant luminance
- •The principle of constant luminance
- •Compensating for the CRT
- •Departure from constant luminance
- •Luma
- •“Leakage” of luminance into chroma
- •11. Picture rendering
- •Surround effect
- •Tone scale alteration
- •Incorporation of rendering
- •Rendering in desktop computing
- •Luma
- •Sloppy use of the term luminance
- •Colour difference coding (chroma)
- •Chroma subsampling
- •Chroma subsampling notation
- •Chroma subsampling filters
- •Chroma in composite NTSC and PAL
- •Scanning standards
- •Widescreen (16:9) SD
- •Square and nonsquare sampling
- •Resampling
- •NTSC and PAL encoding
- •NTSC and PAL decoding
- •S-video interface
- •Frequency interleaving
- •Composite analog SD
- •15. Introduction to HD
- •HD scanning
- •Colour coding for BT.709 HD
- •Data compression
- •Image compression
- •Lossy compression
- •JPEG
- •Motion-JPEG
- •JPEG 2000
- •Mezzanine compression
- •MPEG
- •Picture coding types (I, P, B)
- •Reordering
- •MPEG-1
- •MPEG-2
- •Other MPEGs
- •MPEG IMX
- •MPEG-4
- •AVC-Intra
- •WM9, WM10, VC-1 codecs
- •Compression for CE acquisition
- •AVCHD
- •Compression for IP transport to consumers
- •VP8 (“WebM”) codec
- •Dirac (basic)
- •17. Streams and files
- •Historical overview
- •Physical layer
- •Stream interfaces
- •IEEE 1394 (FireWire, i.LINK)
- •HTTP live streaming (HLS)
- •18. Metadata
- •Metadata Example 1: CD-DA
- •Metadata Example 2: .yuv files
- •Metadata Example 3: RFF
- •Metadata Example 4: JPEG/JFIF
- •Metadata Example 5: Sequence display extension
- •Conclusions
- •19. Stereoscopic (“3-D”) video
- •Acquisition
- •S3D display
- •Anaglyph
- •Temporal multiplexing
- •Polarization
- •Wavelength multiplexing (Infitec/Dolby)
- •Autostereoscopic displays
- •Parallax barrier display
- •Lenticular display
- •Recording and compression
- •Consumer interface and display
- •Ghosting
- •Vergence and accommodation
- •20. Filtering and sampling
- •Sampling theorem
- •Sampling at exactly 0.5fS
- •Magnitude frequency response
- •Magnitude frequency response of a boxcar
- •The sinc weighting function
- •Frequency response of point sampling
- •Fourier transform pairs
- •Analog filters
- •Digital filters
- •Impulse response
- •Finite impulse response (FIR) filters
- •Physical realizability of a filter
- •Phase response (group delay)
- •Infinite impulse response (IIR) filters
- •Lowpass filter
- •Digital filter design
- •Reconstruction
- •Reconstruction close to 0.5fS
- •“(sin x)/x” correction
- •Further reading
- •2:1 downsampling
- •Oversampling
- •Interpolation
- •Lagrange interpolation
- •Lagrange interpolation as filtering
- •Polyphase interpolators
- •Polyphase taps and phases
- •Implementing polyphase interpolators
- •Decimation
- •Lowpass filtering in decimation
- •Spatial frequency domain
- •Comb filtering
- •Spatial filtering
- •Image presampling filters
- •Image reconstruction filters
- •Spatial (2-D) oversampling
- •Retina
- •Adaptation
- •Contrast sensitivity
- •Contrast sensitivity function (CSF)
- •24. Luminance and lightness
- •Radiance, intensity
- •Luminance
- •Relative luminance
- •Luminance from red, green, and blue
- •Lightness (CIE L*)
- •Fundamentals of vision
- •Definitions
- •Spectral power distribution (SPD) and tristimulus
- •Spectral constraints
- •CIE XYZ tristimulus
- •CIE [x, y] chromaticity
- •Blackbody radiation
- •Colour temperature
- •White
- •Chromatic adaptation
- •Perceptually uniform colour spaces
- •CIE L*a*b* (CIELAB)
- •CIE L*u*v* and CIE L*a*b* summary
- •Colour specification and colour image coding
- •Further reading
- •Additive reproduction (RGB)
- •Characterization of RGB primaries
- •BT.709 primaries
- •Legacy SD primaries
- •sRGB system
- •SMPTE Free Scale (FS) primaries
- •AMPAS ACES primaries
- •SMPTE/DCI P3 primaries
- •CMFs and SPDs
- •Normalization and scaling
- •Luminance coefficients
- •Transformations between RGB and CIE XYZ
- •Noise due to matrixing
- •Transforms among RGB systems
- •Camera white reference
- •Display white reference
- •Gamut
- •Wide-gamut reproduction
- •Free Scale Gamut, Free Scale Log (FS-Gamut, FS-Log)
- •Further reading
- •27. Gamma
- •Gamma in CRT physics
- •The amazing coincidence!
- •Gamma in video
- •Opto-electronic conversion functions (OECFs)
- •BT.709 OECF
- •SMPTE 240M OECF
- •sRGB transfer function
- •Transfer functions in SD
- •Bit depth requirements
- •Gamma in modern display devices
- •Estimating gamma
- •Gamma in video, CGI, and Macintosh
- •Gamma in computer graphics
- •Gamma in pseudocolour
- •Limitations of 8-bit linear coding
- •Linear and nonlinear coding in CGI
- •Colour acuity
- •RGB and R’G’B’ colour cubes
- •Conventional luma/colour difference coding
- •Luminance and luma notation
- •Nonlinear red, green, blue (R’G’B’)
- •BT.601 luma
- •BT.709 luma
- •Chroma subsampling, revisited
- •Luma/colour difference summary
- •SD and HD luma chaos
- •Luma/colour difference component sets
- •B’-Y’, R’-Y’ components for SD
- •PBPR components for SD
- •CBCR components for SD
- •Y’CBCR from studio RGB
- •Y’CBCR from computer RGB
- •“Full-swing” Y’CBCR
- •Y’UV, Y’IQ confusion
- •B’-Y’, R’-Y’ components for BT.709 HD
- •PBPR components for BT.709 HD
- •CBCR components for BT.709 HD
- •CBCR components for xvYCC
- •Y’CBCR from studio RGB
- •Y’CBCR from computer RGB
- •Conversions between HD and SD
- •Colour coding standards
- •31. Video signal processing
- •Edge treatment
- •Transition samples
- •Picture lines
- •Choice of SAL and SPW parameters
- •Video levels
- •Setup (pedestal)
- •BT.601 to computing
- •Enhancement
- •Median filtering
- •Coring
- •Chroma transition improvement (CTI)
- •Mixing and keying
- •Field rate
- •Line rate
- •Sound subcarrier
- •Addition of composite colour
- •NTSC colour subcarrier
- •576i PAL colour subcarrier
- •4fSC sampling
- •Common sampling rate
- •Numerology of HD scanning
- •Audio rates
- •33. Timecode
- •Introduction
- •Dropframe timecode
- •Editing
- •Linear timecode (LTC)
- •Vertical interval timecode (VITC)
- •Timecode structure
- •Further reading
- •34. 2-3 pulldown
- •2-3-3-2 pulldown
- •Conversion of film to different frame rates
- •Native 24 Hz coding
- •Conversion to other rates
- •Spatial domain
- •Vertical-temporal domain
- •Motion adaptivity
- •Further reading
- •36. Colourbars
- •SD colourbars
- •SD colourbar notation
- •Pluge element
- •Composite decoder adjustment using colourbars
- •-I, +Q, and Pluge elements in SD colourbars
- •HD colourbars
- •References
- •38. SDI and HD-SDI interfaces
- •Component digital SD interface (BT.601)
- •Serial digital interface (SDI)
- •Component digital HD-SDI
- •SDI and HD-SDI sync, TRS, and ancillary data
- •Analog sync and digital/analog timing relationships
- •Ancillary data
- •SDI coding
- •HD-SDI coding
- •Interfaces for compressed video
- •SDTI
- •Switching and mixing
- •Timing in digital facilities
- •Summary of digital interfaces
- •39. 480i component video
- •Frame rate
- •Interlace
- •Line sync
- •Field/frame sync
- •R’G’B’ EOCF and primaries
- •Luma (Y’)
- •Picture center, aspect ratio, and blanking
- •Halfline blanking
- •Component digital 4:2:2 interface
- •Component analog R’G’B’ interface
- •Component analog Y’PBPR interface, EBU N10
- •Component analog Y’PBPR interface, industry standard
- •40. 576i component video
- •Frame rate
- •Interlace
- •Line sync
- •Analog field/frame sync
- •R’G’B’ EOCF and primaries
- •Luma (Y’)
- •Picture center, aspect ratio, and blanking
- •Component digital 4:2:2 interface
- •Component analog 576i interface
- •Scanning
- •Analog sync
- •Picture center, aspect ratio, and blanking
- •R’G’B’ EOCF and primaries
- •Luma (Y’)
- •Component digital 4:2:2 interface
- •Scanning
- •Analog sync
- •Picture center, aspect ratio, and blanking
- •R’G’B’ EOCF and primaries
- •Luma (Y’)
- •Component digital 4:2:2 interface
- •43. HD videotape
- •HDCAM (D-11)
- •DVCPRO HD (D-12)
- •HDCAM SR (D-16)
- •JPEG blocks and MCUs
- •JPEG block diagram
- •Level shifting
- •Discrete cosine transform (DCT)
- •JPEG encoding example
- •JPEG decoding
- •Compression ratio control
- •JPEG/JFIF
- •Motion-JPEG (M-JPEG)
- •Further reading
- •46. DV compression
- •DV chroma subsampling
- •DV frame/field modes
- •Picture-in-shuttle in DV
- •DV overflow scheme
- •DV quantization
- •DV digital interface (DIF)
- •Consumer DV recording
- •Professional DV variants
- •47. MPEG-2 video compression
- •MPEG-2 profiles and levels
- •Picture structure
- •Frame rate and 2-3 pulldown in MPEG
- •Luma and chroma sampling structures
- •Macroblocks
- •Picture coding types – I, P, B
- •Prediction
- •Motion vectors (MVs)
- •Coding of a block
- •Frame and field DCT types
- •Zigzag and VLE
- •Refresh
- •Motion estimation
- •Rate control and buffer management
- •Bitstream syntax
- •Transport
- •Further reading
- •48. H.264 video compression
- •Algorithmic features, profiles, and levels
- •Baseline and extended profiles
- •High profiles
- •Hierarchy
- •Multiple reference pictures
- •Slices
- •Spatial intra prediction
- •Flexible motion compensation
- •Quarter-pel motion-compensated interpolation
- •Weighting and offsetting of MC prediction
- •16-bit integer transform
- •Quantizer
- •Variable-length coding
- •Context adaptivity
- •CABAC
- •Deblocking filter
- •Buffer control
- •Scalable video coding (SVC)
- •Multiview video coding (MVC)
- •AVC-Intra
- •Further reading
- •49. VP8 compression
- •Algorithmic features
- •Further reading
- •Elementary stream (ES)
- •Packetized elementary stream (PES)
- •MPEG-2 program stream
- •MPEG-2 transport stream
- •System clock
- •Further reading
- •Japan
- •United States
- •ATSC modulation
- •Europe
- •Further reading
- •Appendices
- •Cement vs. concrete
- •True CIE luminance
- •The misinterpretation of luminance
- •The enshrining of luma
- •Colour difference scale factors
- •Conclusion: A plea
- •Radiometry
- •Photometry
- •Light level examples
- •Image science
- •Units
- •Further reading
- •Glossary
- •Index
- •About the author
Figure 12.4 An interstitial chroma filter for JPEG/JFIF (weights [1⁄4 1⁄4; 1⁄4 1⁄4]) averages samples over a 2×2 block. Shading represents the spatial extent of luma samples. The black dot indicates the effective subsampled chroma position, equidistant from the four luma samples. The outline represents the spatial extent of the result.
Figure 12.5 A cosited chroma filter for BT.601 4:2:2 (weights [1⁄4, 1⁄2, 1⁄4]) causes each filtered chroma sample to be positioned coincident – cosited – with an even-numbered luma sample.
Figure 12.6 A cosited chroma filter for MPEG-2 4:2:0 (weights [1⁄8 1⁄4 1⁄8; 1⁄8 1⁄4 1⁄8]) produces a filtered result sample that is cosited horizontally, but sited interstitially in the vertical dimension.
Chroma subsampling filters
In chroma subsampling, the encoder discards selected colour difference samples after filtering. A decoder approximates the missing samples by interpolation. To perform 4:2:0 subsampling with minimum computation, some systems simply average CB over a 2×2 block and average CR over the same 2×2 block, as sketched in Figure 12.4 in the margin. To interpolate the missing chroma samples prior to conversion back to R’G’B’, low-end systems simply replicate the subsampled CB and CR values throughout the 2×2 quad. This technique is ubiquitous in JPEG/JFIF stillframes in computing, and is used in M-JPEG, H.261, and MPEG-1. This simple averaging process causes subsampled chroma to take an effective horizontal position halfway between two luma samples – what I call interstitial siting – not the cosited position standardized for studio video.
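The 2×2 averaging and nearest-neighbour replication just described can be sketched as follows. This is a minimal illustration, not code from any standard; the function names are mine, and even plane dimensions are assumed:

```python
import numpy as np

def subsample_420_interstitial(cb):
    """Average each 2x2 block of a chroma plane (JPEG/JFIF-style 4:2:0).
    The result is sited interstitially: each subsampled value lies
    halfway between the four contributing sample positions."""
    h, w = cb.shape
    blocks = cb.reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

def upsample_420_replicate(cb_sub):
    """Low-end reconstruction: replicate each subsampled chroma value
    throughout its 2x2 quad."""
    return cb_sub.repeat(2, axis=0).repeat(2, axis=1)

cb = np.array([[10.0, 20.0],
               [30.0, 40.0]])
cb_sub = subsample_420_interstitial(cb)   # one 2x2 block -> its mean, 25.0
cb_up = upsample_420_replicate(cb_sub)    # 25.0 replicated over the quad
```

The same two functions would be applied independently to the CR plane.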
A simple way to perform 4:2:2 subsampling with horizontal cositing as required by BT.601 is to use weights of [1⁄4, 1⁄2, 1⁄4], as sketched in Figure 12.5.
4:2:2 subsampling has the advantage of no interaction with interlaced scanning.
A cosited horizontal filter can be combined with [1⁄2, 1⁄2] vertical averaging, as sketched in Figure 12.6, to implement 4:2:0 as used in MPEG-2.
Simple averaging filters like those of Figures 12.4, 12.5, and 12.6 have acceptable performance for stillframes, where any alias components that are generated remain stationary, or for desktop-quality video. However, in a moving image, an alias component introduced by poor filtering is liable to move at a rate different from the associated scene elements, and thereby produce a highly objectionable artifact. High-end digital video equipment uses sophisticated subsampling filters, where the subsampled CB and CR of a 2×1 pair in 4:2:2 (or of a 2×2 quad in 4:2:0) take contributions from several surrounding samples. The relationship of filter weights, frequency response, and filter performance will be detailed in Filtering and sampling, on page 191. These coefficients implement a high-quality FIR filter suitable for 4:2:2 subsampling:
[-1, 3, -6, 12, -24, 80, 128, 80, -24, 12, -6, 3, -1]/256.
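Applying these coefficients might look like the following sketch. The edge handling (sample replication) is my assumption – the text does not specify it – and the function name is mine:

```python
import numpy as np

# The 13-tap FIR weights quoted in the text, normalized by 256.
# Note that the taps sum to 256, so the filter has unity gain at DC.
TAPS = np.array([-1, 3, -6, 12, -24, 80, 128, 80, -24, 12, -6, 3, -1]) / 256.0

def subsample_422(chroma_row):
    """Filter one row of CB or CR with the 13-tap FIR, then keep every
    second sample so the result is cosited with even-numbered luma
    samples.  Edges are handled by replication (an assumption)."""
    padded = np.pad(chroma_row, 6, mode="edge")   # 6 = (13 - 1) / 2
    filtered = np.convolve(padded, TAPS, mode="valid")
    return filtered[0::2]

row = np.full(16, 100.0)      # a flat field
out = subsample_422(row)      # unity DC gain: a flat field is unchanged
```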
CHAPTER 12 | INTRODUCTION TO LUMA AND CHROMA | 127
The video literature often calls these quantities chrominance. That term has a specific meaning in colour science, so in video I prefer the term modulated chroma.
See Introduction to composite NTSC and PAL, on page 135. Concerning SECAM, see SECAM, on page 126 of Composite NTSC and PAL: Legacy Video Systems.
Chroma in composite NTSC and PAL
I introduced the colour difference components PBPR and CBCR, often called chroma components. They accompany luma in a component video system. I also introduced UV and IQ components; these are intermediate quantities in the formation of modulated chroma.
Historically, insufficient channel capacity was available to transmit three colour components separately. The NTSC technique was devised to combine the three colour components into a single composite signal; the PAL technique is both a refinement of NTSC and an adaptation of NTSC to 576i scanning. (In SECAM, the three colour components are also combined into one signal. SECAM is a form of composite video, but the technique has little in common with NTSC and PAL, and it is of little commercial importance today.)
NTSC and PAL encoders traditionally started with R’G’B’ components. At the culmination of composite video, digital encoders started with Y’CBCR components. NTSC or PAL encoding involves these steps:
•Component signals are matrixed and conditioned to form colour difference signals U and V (or I and Q).
•U and V (or I and Q) are lowpass-filtered, then quadrature modulation imposes the two colour difference signals onto an unmodulated colour subcarrier, to produce a modulated chroma signal, C.
•Luma and chroma are summed. In studio video, summation exploits the frequency-interleaving principle.
Composite NTSC and PAL signals were historically analog. During the 1990s, digital composite (4fSC) systems were used; the 4fSC scheme is now obsolete. As I mentioned in Video system taxonomy, on page 94, composite video has been supplanted by component video in consumers’ premises and in industrial applications. For further information, see Introduction to composite NTSC and PAL, on page 135.
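The quadrature-modulation step in the encoding sequence above can be sketched as follows. This is a deliberately simplified illustration, not a working encoder: real NTSC places I and Q on axes 33° from U and V, PAL alternates the V phase line by line, and neither the burst nor lowpass filtering is modelled; the sample rate and names are my choices:

```python
import numpy as np

FSC = 3_579_545.0   # NTSC colour subcarrier, Hz (approximately)
FS = 4 * FSC        # sample at 4x the subcarrier for a clean demonstration

def modulate_chroma(u, v):
    """Quadrature modulation: impose two colour difference signals onto
    one subcarrier, C[n] = U[n] sin(wt) + V[n] cos(wt)."""
    n = np.arange(len(u))
    wt = 2 * np.pi * FSC * n / FS
    return u * np.sin(wt) + v * np.cos(wt)

N = 64                      # a whole number of subcarrier cycles
u = np.full(N, 0.3)         # constant U over the run
v = np.full(N, -0.2)        # constant V over the run
c = modulate_chroma(u, v)   # modulated chroma, C
composite = 0.5 + c         # luma (here a flat 0.5) summed with chroma
```

A decoder recovers U and V by synchronous demodulation: multiplying C by sin(wt) or cos(wt) and lowpass filtering (here, averaging) yields half of U or V respectively.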
128 | DIGITAL VIDEO AND HD ALGORITHMS AND INTERFACES
The notation CCIR is often wrongly used to denote 576i25 scanning. The former CCIR (now ITU-R) standardized many scanning systems, not just 576i25.
13. Introduction to component SD
In Raster scanning, on page 83, I introduced the concepts of raster scanning; in Introduction to luma and chroma, on page 121, I introduced the concepts of colour coding in video. This chapter combines the concepts of raster scanning and colour coding to form the basic technical parameters of the 480i and 576i SD systems. This chapter concerns modern systems that use component colour – digital Y’CBCR (BT.601), or analog Y’PBPR. In Introduction to composite NTSC and PAL, on page 135, I will describe NTSC and PAL composite video encoding.
Scanning standards
Two scanning standards are in use for conventional analog television broadcasting in different parts of the world. The 480i29.97 system is used primarily in North America and Japan, and today accounts for roughly 1⁄4 of all television receivers. The 576i25 system is used primarily in Europe, Asia, Australia, and Central America, and accounts for roughly 3⁄4 of all television receivers. 480i29.97 (or 525/59.94/2:1) is colloquially referred to as NTSC, and 576i25 (or 625/50/2:1) as PAL; however, the terms NTSC and PAL properly apply to colour encoding and not to scanning standards. It is obvious from the scanning nomenclature that the line counts and field rates differ between the two systems:
In 480i29.97 video, the field rate is exactly 60⁄1.001 Hz; in 576i25, the field rate is exactly 50 Hz.
Several different standards for 480i29.97 and 576i25 digital video are sketched in Figure 13.1 overleaf.
[Figure 13.1 diagrams. 480i29.97 (525 total lines): square sampling, STL 780, 640×480 active; component BT.601, STL 858, 704/708/720×480 active; composite 4fSC NTSC, STL 910, 768×483 active. 576i25 (625 total lines): square sampling, STL 944, 768×576 active; component BT.601, STL 864, 720×576 active; composite 4fSC PAL, STL 1135 4⁄625, 948×576 active.]
Figure 13.1 SD digital video rasters for 4:3 aspect ratio. 480i29.97 scanning is at the left, 576i25 at the right. The top row shows square sampling (“square pixels”). The middle row shows sampling at the BT.601 standard sampling frequency of 13.5 MHz. The bottom row shows sampling at four times the colour subcarrier frequency (4fSC). Above each diagram is its count of samples per total line (STL); ratios among STL values are written vertically in bold numerals.
See PAL-M, PAL-N on page 125, and
SECAM on page 126 of Composite NTSC and PAL: Legacy Video Systems. Consumer frustration with a diversity of functionally equivalent standards led to proliferation of multistandard TVs and VCRs in countries using these standards.
Analog broadcast of 480i usually uses NTSC colour coding with a colour subcarrier of about 3.58 MHz; analog broadcast of 576i usually uses PAL colour coding with a colour subcarrier of about 4.43 MHz. It is important to use a notation that distinguishes scanning from colour, because other combinations of scanning and colour coding are in use in large and important regions of the world. Brazil uses PAL-M, which has 480i scanning and PAL colour coding. Argentina uses PAL-N, which has 576i scanning and a 3.58 MHz colour subcarrier nearly identical to NTSC’s subcarrier. In France, Russia, and other countries, SECAM is used. Production equipment is no longer manufactured for any of these obscure standards: Production in these countries is done using 480i or 576i studio equipment, either in the component domain or in 480i NTSC or 576i PAL. These studio signals are then transcoded prior to broadcast: The colour encoding is altered – for example, from PAL to SECAM – without altering scanning.
[Figure 13.2 diagrams. 480i29.97: square sampling, STL 780, R’G’B’, 12 3⁄11 MHz (≈12.272727); component 4:2:2 BT.601, STL 858, Y’CBCR, 13.5 MHz; composite 4fSC NTSC, STL 910, Y’IQ, 14 7⁄22 MHz (≈14.31818). 576i25: square sampling, STL 944, R’G’B’, 14.75 MHz; component 4:2:2 BT.601, STL 864, Y’CBCR, 13.5 MHz; composite 4fSC PAL, STL 1135 4⁄625, Y’UV, 17.734475 MHz.]
Figure 13.2 SD sample rates are shown for six different 4:3 standards, along with the usual colour coding for each standard. There is no realtime studio interface standard for square-sampled SD.
ITU-R Rec. BT.601-5, Studio encoding parameters of digital television for standard 4:3 and widescreen 16:9 aspect ratios.
Figure 13.1 indicates STL and SAL for each standard. The SAL values are the result of some complicated issues to be discussed in Choice of SAL and SPW parameters on page 380. For details concerning my reference to 483 active lines (LA) in 480i systems, see Picture lines, on page 379.
Figure 13.2 above shows the standard 480i29.97 and 576i25 digital video sampling rates, and the colour coding usually associated with each of these standards. The 4:2:2, Y’CBCR system for SD is standardized in Recommendation BT.601 of the ITU Radiocommunication Sector (formerly CCIR). I call it BT.601.
With one exception, all of the sampling systems in Figure 13.2 have a whole number of samples per total line; these systems are line-locked. The exception is composite 4fSC PAL sampling, which has a noninteger number (1135 4⁄625) of samples per total line; this creates a huge nuisance for the system designer.
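The line-locked property can be checked with exact rational arithmetic. This sketch (my own, using Python's fractions module) derives the STL values quoted above from the scanning parameters:

```python
from fractions import Fraction

# Line rates follow from the scanning parameters:
# 480i29.97: 525 total lines at a 30/1.001 Hz frame rate;
# 576i25:    625 total lines at exactly a 25 Hz frame rate.
LINE_RATE_480I = 525 * Fraction(30_000, 1001)   # ≈ 15734.266 Hz
LINE_RATE_576I = 625 * Fraction(25)             # = 15625 Hz

def samples_per_total_line(fs_hz, line_rate):
    """STL = sampling frequency / line rate.
    The system is line-locked iff this is a whole number."""
    return Fraction(fs_hz) / line_rate

stl_601_480 = samples_per_total_line(13_500_000, LINE_RATE_480I)   # 858
stl_601_576 = samples_per_total_line(13_500_000, LINE_RATE_576I)   # 864
stl_pal_4fsc = samples_per_total_line(17_734_475, LINE_RATE_576I)  # 1135 4/625
print(stl_pal_4fsc.denominator == 1)   # False: 4xfsc PAL is not line-locked
```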
480i and 576i have gratuitous differences in many technical parameters, summarized in Table 13.1 overleaf.
†The EBU N10 component
analog interface for Y’PBPR, occasionally used for 480i, has
7:3 picture-to-sync ratio.
‡480i video in Japan, and the EBU N10 component analog interface, have zero setup. See page 381.
System                      480i29.97            576i25
Picture:sync ratio          10:4†                7:3
Setup, percent              7.5‡                 0
Count of equalization,
  broad pulses              6                    5
Line number 1, and 0V,      First equalization   First broad pulse
  defined at:               pulse of field       of frame
Bottom picture line in:     First field          Second field

Table 13.1 Gratuitous differences between 480i and 576i
Different treatment of interlace between 480i and 576i imposes different structure onto the picture data. The differences cause headaches in systems such as MPEG that are designed to accommodate both 480i and 576i images. In Figures 13.3 and 13.4 below, I show how field order, interlace nomenclature, and image structure are related. Figure 13.5 at the bottom of this page shows how MPEG-2 identifies each field as either top or bottom. In 480i video, the bottom field is the first field of the frame; in 576i, the top field is first. Figures 13.3, 13.4, and 13.5 depict just the image array (i.e., the active samples), without vertical blanking lines; MPEG makes no provision for halflines.
Figure 13.3 Interlacing in 480i. The first field (historically called odd, here denoted 1) starts with a full picture line, and ends with a left-hand halfline containing the bottom of the picture. The second field (here dashed, historically called even), transmitted about 1⁄60 s later, starts with a right-hand halfline containing the top of the picture; it ends with a full picture line.
Figure 13.4 Interlacing in 576i. The first field includes a right-hand halfline containing the top line of the picture, and ends with a full picture line. The second field, transmitted 1⁄50 s later, starts with a full line, and ends with a left-hand halfline that contains the bottom of the picture. (In 576i terminology, the terms odd and even are rarely used, and are best avoided.)
Figure 13.5 Interlacing in MPEG-2 identifies a picture according to whether it contains the top or bottom picture line of the frame. Top and bottom fields are displayed in the order that they are coded in an MPEG-2 data stream. For frame-coded pictures, display order is determined by a one-bit flag top field first, typically asserted for 576i and negated for 480i.
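The field-order convention just described can be captured in a trivial sketch. The names here are my own, not MPEG-2 syntax elements:

```python
def top_field_first(system):
    """Typical MPEG-2 top_field_first flag for frame-coded pictures:
    480i has the bottom field first; 576i has the top field first."""
    if system == "480i":
        return False
    if system == "576i":
        return True
    raise ValueError(f"unknown system: {system}")

def display_order(top, bottom, system):
    """Return a frame's two fields in display order."""
    if top_field_first(system):
        return (top, bottom)
    return (bottom, top)
```

For example, a 480i frame's fields display bottom-then-top, while a 576i frame's fields display top-then-bottom.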