- •Contents
- •Figures
- •Tables
- •Preface
- •Acknowledgments
- •1. Raster images
- •Aspect ratio
- •Geometry
- •Image capture
- •Digitization
- •Perceptual uniformity
- •Colour
- •Luma and colour difference components
- •Digital image representation
- •Square sampling
- •Comparison of aspect ratios
- •Aspect ratio
- •Frame rates
- •Image state
- •EOCF standards
- •Entertainment programming
- •Acquisition
- •Consumer origination
- •Consumer electronics (CE) display
- •Contrast
- •Contrast ratio
- •Perceptual uniformity
- •The “code 100” problem and nonlinear image coding
- •Linear and nonlinear
- •4. Quantization
- •Linearity
- •Decibels
- •Noise, signal, sensitivity
- •Quantization error
- •Full-swing
- •Studio-swing (footroom and headroom)
- •Interface offset
- •Processing coding
- •Two’s complement wrap-around
- •Perceptual attributes
- •History of display signal processing
- •Digital driving levels
- •Relationship between signal and lightness
- •Algorithm
- •Black level setting
- •Effect of contrast and brightness on contrast and brightness
- •An alternate interpretation
- •Brightness and contrast controls in LCDs
- •Brightness and contrast controls in PDPs
- •Brightness and contrast controls in desktop graphics
- •Symbolic image description
- •Raster images
- •Conversion among types
- •Image files
- •“Resolution” in computer graphics
- •7. Image structure
- •Image reconstruction
- •Sampling aperture
- •Spot profile
- •Box distribution
- •Gaussian distribution
- •8. Raster scanning
- •Flicker, refresh rate, and frame rate
- •Introduction to scanning
- •Scanning parameters
- •Interlaced format
- •Interlace and progressive
- •Scanning notation
- •Motion portrayal
- •Segmented-frame (24PsF)
- •Video system taxonomy
- •Conversion among systems
- •9. Resolution
- •Magnitude frequency response and bandwidth
- •Visual acuity
- •Viewing distance and angle
- •Kell effect
- •Resolution
- •Resolution in video
- •Viewing distance
- •Interlace revisited
- •10. Constant luminance
- •The principle of constant luminance
- •Compensating for the CRT
- •Departure from constant luminance
- •Luma
- •“Leakage” of luminance into chroma
- •11. Picture rendering
- •Surround effect
- •Tone scale alteration
- •Incorporation of rendering
- •Rendering in desktop computing
- •Luma
- •Sloppy use of the term luminance
- •Colour difference coding (chroma)
- •Chroma subsampling
- •Chroma subsampling notation
- •Chroma subsampling filters
- •Chroma in composite NTSC and PAL
- •Scanning standards
- •Widescreen (16:9) SD
- •Square and nonsquare sampling
- •Resampling
- •NTSC and PAL encoding
- •NTSC and PAL decoding
- •S-video interface
- •Frequency interleaving
- •Composite analog SD
- •15. Introduction to HD
- •HD scanning
- •Colour coding for BT.709 HD
- •Data compression
- •Image compression
- •Lossy compression
- •JPEG
- •Motion-JPEG
- •JPEG 2000
- •Mezzanine compression
- •MPEG
- •Picture coding types (I, P, B)
- •Reordering
- •MPEG-1
- •MPEG-2
- •Other MPEGs
- •MPEG IMX
- •MPEG-4
- •AVC-Intra
- •WM9, WM10, VC-1 codecs
- •Compression for CE acquisition
- •AVCHD
- •Compression for IP transport to consumers
- •VP8 (“WebM”) codec
- •Dirac (basic)
- •17. Streams and files
- •Historical overview
- •Physical layer
- •Stream interfaces
- •IEEE 1394 (FireWire, i.LINK)
- •HTTP live streaming (HLS)
- •18. Metadata
- •Metadata Example 1: CD-DA
- •Metadata Example 2: .yuv files
- •Metadata Example 3: RFF
- •Metadata Example 4: JPEG/JFIF
- •Metadata Example 5: Sequence display extension
- •Conclusions
- •19. Stereoscopic (“3-D”) video
- •Acquisition
- •S3D display
- •Anaglyph
- •Temporal multiplexing
- •Polarization
- •Wavelength multiplexing (Infitec/Dolby)
- •Autostereoscopic displays
- •Parallax barrier display
- •Lenticular display
- •Recording and compression
- •Consumer interface and display
- •Ghosting
- •Vergence and accommodation
- •20. Filtering and sampling
- •Sampling theorem
- •Sampling at exactly 0.5fS
- •Magnitude frequency response
- •Magnitude frequency response of a boxcar
- •The sinc weighting function
- •Frequency response of point sampling
- •Fourier transform pairs
- •Analog filters
- •Digital filters
- •Impulse response
- •Finite impulse response (FIR) filters
- •Physical realizability of a filter
- •Phase response (group delay)
- •Infinite impulse response (IIR) filters
- •Lowpass filter
- •Digital filter design
- •Reconstruction
- •Reconstruction close to 0.5fS
- •“(sin x)/x” correction
- •Further reading
- •2:1 downsampling
- •Oversampling
- •Interpolation
- •Lagrange interpolation
- •Lagrange interpolation as filtering
- •Polyphase interpolators
- •Polyphase taps and phases
- •Implementing polyphase interpolators
- •Decimation
- •Lowpass filtering in decimation
- •Spatial frequency domain
- •Comb filtering
- •Spatial filtering
- •Image presampling filters
- •Image reconstruction filters
- •Spatial (2-D) oversampling
- •Retina
- •Adaptation
- •Contrast sensitivity
- •Contrast sensitivity function (CSF)
- •24. Luminance and lightness
- •Radiance, intensity
- •Luminance
- •Relative luminance
- •Luminance from red, green, and blue
- •Lightness (CIE L*)
- •Fundamentals of vision
- •Definitions
- •Spectral power distribution (SPD) and tristimulus
- •Spectral constraints
- •CIE XYZ tristimulus
- •CIE [x, y] chromaticity
- •Blackbody radiation
- •Colour temperature
- •White
- •Chromatic adaptation
- •Perceptually uniform colour spaces
- •CIE L*a*b* (CIELAB)
- •CIE L*u*v* and CIE L*a*b* summary
- •Colour specification and colour image coding
- •Further reading
- •Additive reproduction (RGB)
- •Characterization of RGB primaries
- •BT.709 primaries
- •Legacy SD primaries
- •sRGB system
- •SMPTE Free Scale (FS) primaries
- •AMPAS ACES primaries
- •SMPTE/DCI P3 primaries
- •CMFs and SPDs
- •Normalization and scaling
- •Luminance coefficients
- •Transformations between RGB and CIE XYZ
- •Noise due to matrixing
- •Transforms among RGB systems
- •Camera white reference
- •Display white reference
- •Gamut
- •Wide-gamut reproduction
- •Free Scale Gamut, Free Scale Log (FS-Gamut, FS-Log)
- •Further reading
- •27. Gamma
- •Gamma in CRT physics
- •The amazing coincidence!
- •Gamma in video
- •Opto-electronic conversion functions (OECFs)
- •BT.709 OECF
- •SMPTE 240M OECF
- •sRGB transfer function
- •Transfer functions in SD
- •Bit depth requirements
- •Gamma in modern display devices
- •Estimating gamma
- •Gamma in video, CGI, and Macintosh
- •Gamma in computer graphics
- •Gamma in pseudocolour
- •Limitations of 8-bit linear coding
- •Linear and nonlinear coding in CGI
- •Colour acuity
- •RGB and R’G’B’ colour cubes
- •Conventional luma/colour difference coding
- •Luminance and luma notation
- •Nonlinear red, green, blue (R’G’B’)
- •BT.601 luma
- •BT.709 luma
- •Chroma subsampling, revisited
- •Luma/colour difference summary
- •SD and HD luma chaos
- •Luma/colour difference component sets
- •B’-Y’, R’-Y’ components for SD
- •PBPR components for SD
- •CBCR components for SD
- •Y’CBCR from studio RGB
- •Y’CBCR from computer RGB
- •“Full-swing” Y’CBCR
- •Y’UV, Y’IQ confusion
- •B’-Y’, R’-Y’ components for BT.709 HD
- •PBPR components for BT.709 HD
- •CBCR components for BT.709 HD
- •CBCR components for xvYCC
- •Y’CBCR from studio RGB
- •Y’CBCR from computer RGB
- •Conversions between HD and SD
- •Colour coding standards
- •31. Video signal processing
- •Edge treatment
- •Transition samples
- •Picture lines
- •Choice of SAL and SPW parameters
- •Video levels
- •Setup (pedestal)
- •BT.601 to computing
- •Enhancement
- •Median filtering
- •Coring
- •Chroma transition improvement (CTI)
- •Mixing and keying
- •Field rate
- •Line rate
- •Sound subcarrier
- •Addition of composite colour
- •NTSC colour subcarrier
- •576i PAL colour subcarrier
- •4fSC sampling
- •Common sampling rate
- •Numerology of HD scanning
- •Audio rates
- •33. Timecode
- •Introduction
- •Dropframe timecode
- •Editing
- •Linear timecode (LTC)
- •Vertical interval timecode (VITC)
- •Timecode structure
- •Further reading
- •34. 2-3 pulldown
- •2-3-3-2 pulldown
- •Conversion of film to different frame rates
- •Native 24 Hz coding
- •Conversion to other rates
- •Spatial domain
- •Vertical-temporal domain
- •Motion adaptivity
- •Further reading
- •36. Colourbars
- •SD colourbars
- •SD colourbar notation
- •Pluge element
- •Composite decoder adjustment using colourbars
- •-I, +Q, and Pluge elements in SD colourbars
- •HD colourbars
- •References
- •38. SDI and HD-SDI interfaces
- •Component digital SD interface (BT.601)
- •Serial digital interface (SDI)
- •Component digital HD-SDI
- •SDI and HD-SDI sync, TRS, and ancillary data
- •Analog sync and digital/analog timing relationships
- •Ancillary data
- •SDI coding
- •HD-SDI coding
- •Interfaces for compressed video
- •SDTI
- •Switching and mixing
- •Timing in digital facilities
- •Summary of digital interfaces
- •39. 480i component video
- •Frame rate
- •Interlace
- •Line sync
- •Field/frame sync
- •R’G’B’ EOCF and primaries
- •Luma (Y’)
- •Picture center, aspect ratio, and blanking
- •Halfline blanking
- •Component digital 4:2:2 interface
- •Component analog R’G’B’ interface
- •Component analog Y’PBPR interface, EBU N10
- •Component analog Y’PBPR interface, industry standard
- •40. 576i component video
- •Frame rate
- •Interlace
- •Line sync
- •Analog field/frame sync
- •R’G’B’ EOCF and primaries
- •Luma (Y’)
- •Picture center, aspect ratio, and blanking
- •Component digital 4:2:2 interface
- •Component analog 576i interface
- •Scanning
- •Analog sync
- •Picture center, aspect ratio, and blanking
- •R’G’B’ EOCF and primaries
- •Luma (Y’)
- •Component digital 4:2:2 interface
- •Scanning
- •Analog sync
- •Picture center, aspect ratio, and blanking
- •R’G’B’ EOCF and primaries
- •Luma (Y’)
- •Component digital 4:2:2 interface
- •43. HD videotape
- •HDCAM (D-11)
- •DVCPRO HD (D-12)
- •HDCAM SR (D-16)
- •JPEG blocks and MCUs
- •JPEG block diagram
- •Level shifting
- •Discrete cosine transform (DCT)
- •JPEG encoding example
- •JPEG decoding
- •Compression ratio control
- •JPEG/JFIF
- •Motion-JPEG (M-JPEG)
- •Further reading
- •46. DV compression
- •DV chroma subsampling
- •DV frame/field modes
- •Picture-in-shuttle in DV
- •DV overflow scheme
- •DV quantization
- •DV digital interface (DIF)
- •Consumer DV recording
- •Professional DV variants
- •47. MPEG-2 video compression
- •MPEG-2 profiles and levels
- •Picture structure
- •Frame rate and 2-3 pulldown in MPEG
- •Luma and chroma sampling structures
- •Macroblocks
- •Picture coding types – I, P, B
- •Prediction
- •Motion vectors (MVs)
- •Coding of a block
- •Frame and field DCT types
- •Zigzag and VLE
- •Refresh
- •Motion estimation
- •Rate control and buffer management
- •Bitstream syntax
- •Transport
- •Further reading
- •48. H.264 video compression
- •Algorithmic features, profiles, and levels
- •Baseline and extended profiles
- •High profiles
- •Hierarchy
- •Multiple reference pictures
- •Slices
- •Spatial intra prediction
- •Flexible motion compensation
- •Quarter-pel motion-compensated interpolation
- •Weighting and offsetting of MC prediction
- •16-bit integer transform
- •Quantizer
- •Variable-length coding
- •Context adaptivity
- •CABAC
- •Deblocking filter
- •Buffer control
- •Scalable video coding (SVC)
- •Multiview video coding (MVC)
- •AVC-Intra
- •Further reading
- •49. VP8 compression
- •Algorithmic features
- •Further reading
- •Elementary stream (ES)
- •Packetized elementary stream (PES)
- •MPEG-2 program stream
- •MPEG-2 transport stream
- •System clock
- •Further reading
- •Japan
- •United States
- •ATSC modulation
- •Europe
- •Further reading
- •Appendices
- •Cement vs. concrete
- •True CIE luminance
- •The misinterpretation of luminance
- •The enshrining of luma
- •Colour difference scale factors
- •Conclusion: A plea
- •Radiometry
- •Photometry
- •Light level examples
- •Image science
- •Units
- •Further reading
- •Glossary
- •Index
- •About the author
The term luminance is widely misused in video. See Relative luminance, on page 258, and Appendix A, YUV and luminance considered harmful, on page 567.
10 Constant luminance
Video systems convey colour image data using one component that approximates lightness, and two other components that represent colour, absent lightness. In Colour science for video, on page 287, I will detail how luminance can be formed as a weighted sum of linear RGB values each of which is proportional to optical power. A colour scientist uses the term constant luminance to refer to this sum being constant. Transmitting a single component from which relative luminance can be reconstructed is the principle of constant luminance. Preferably a nonlinear transfer function acts on that component to impose perceptually uniform coding.
Standard video systems do not strictly adhere to that principle; instead, they implement an engineering approximation. The colour scientist’s weighted sum of linear RGB is not computed. Instead, a nonlinear transfer function is applied to each linear-light RGB component individually, then a weighted sum of the nonlinear gamma-corrected R’G’B’ components forms what I call luma. (Many video engineers carelessly call this quantity luminance.) In standard video systems, luma is encoded using the theoretical RGB weighting coefficients of colour science, but in a block diagram different from the one a colour scientist would expect: In video, gamma correction is applied before the matrix, instead of the colour scientist’s preference, after.
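The difference between the two orderings can be made concrete in a few lines. The following is an illustrative sketch, not any standard's exact code: it uses the BT.709 luma weights and a pure 0.42-power function as a simple stand-in for the actual transfer functions.

```python
# Colour scientist's ordering: weight *linear* RGB to form relative
# luminance, then (optionally) apply a transfer function afterward.
def relative_luminance(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b   # BT.709 weights

# Video's ordering: gamma-correct each component first, then weight
# the nonlinear R'G'B' values -- the result is luma, not luminance.
def luma(r, g, b, gamma=0.42):
    return (0.2126 * r ** gamma +
            0.7152 * g ** gamma +
            0.0722 * b ** gamma)

# For greys (R = G = B) the two orderings agree once the transfer
# function is accounted for...
grey_a = relative_luminance(0.5, 0.5, 0.5) ** 0.42
grey_b = luma(0.5, 0.5, 0.5)
assert abs(grey_a - grey_b) < 1e-9

# ...but for saturated colours they do not: for pure blue, luma
# (0.0722) is far smaller than gamma-corrected relative luminance
# (roughly 0.33), because the power function was applied before the
# weighted sum rather than after.
assert luma(0.0, 0.0, 1.0) < relative_luminance(0.0, 0.0, 1.0) ** 0.42
```

The disagreement for saturated colours is precisely the departure from constant luminance that this chapter goes on to examine.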
Historically, transmission of a single component representative of greyscale enabled compatibility with “black-and-white” television. Human vision has poor acuity for colour compared to luminance. Placing “black-and-white” information into one component enables chroma subsampling to take advantage of vision’s low acuity for chroma in order to reduce data rate (historically, bandwidth) in the two other components. In colour imaging, it is sensible to code a “black-and-white” component even if “black-and-white” compatibility isn’t required (for example, in JPEG).
The term “monochrome” is sometimes used instead of “greyscale.” However, in classic computer graphics terminology monochrome refers to bilevel (1-bit) images or display systems, so I avoid that term.
Applebaum, Sidney (1952), “Gamma correction in constant luminance color television systems,” in Proc. IRE, 40 (11): 1185–1195 (Oct.).
I’ve been placing “black-and-white” in quotes. At the invention of television, the transmitted signal represented greyscale, not just black and white: Then, and now, greyscale would be a better term.
Historical video literature refers to the “signal representing luminance” or the “luminance signal” or the “luminance component.” All of these terms were once justified; however, they are now dangerous: To use the term “luminance” suggests that relative luminance (Y) can be decoded from that component. However, without strict adherence to the principle of constant luminance, luminance cannot be decoded from the greyscale component alone: Two other components (typically CB and CR) are necessary.
In this chapter, I will explain why and how all current video systems depart from the principle of constant luminance. If you are willing to accept this departure from theory as a fact, then you may safely skip this chapter, and proceed to Introduction to luma and chroma, on page 121, where I will introduce how the luma and colour difference signals are formed and subsampled.
The principle of constant luminance
Ideally, the so-called monochrome component in colour video would mimic a greyscale system: Relative luminance would be computed as a properly weighted sum of (linear-light) R, G, and B tristimulus values, according to the principles of colour science that are explained in Transformations between RGB and CIE XYZ, on page 307. At the decoder, the inverse matrix would reconstruct linear R, G, and B tristimulus values:
Figure 10.1 Formation of relative luminance. [Diagram: linear R, G, and B pass through the matrix [P] to form Y, carried at 11 bits; at the decoder, the inverse matrix [P⁻¹] reconstructs R, G, and B.]
DIGITAL VIDEO AND HD: ALGORITHMS AND INTERFACES
Two colour difference (chroma) components would be computed, to enable chroma subsampling; these would be conveyed to the decoder through separate channels:
Figure 10.2 Hypothetical chroma components (linear-light). [Diagram: R, G, and B pass through [P] to form Y and two linear-light colour difference components, each conveyed through a separate channel; [P⁻¹] reconstructs R, G, and B.]
Set aside the chroma components for now: No matter how they are handled, in a true constant luminance system all of the relative luminance is recoverable from the greyscale component alone.
If relative luminance were conveyed directly, 11 bits or more would be necessary. Eight bits barely suffice if we use nonlinear image coding, introduced on page 31, to impose perceptual uniformity: We could subject relative luminance to a nonlinear transfer function that mimics vision’s lightness sensitivity. Lightness can be approximated as CIE L* (to be detailed on page 259); L* is roughly the 0.42-power of relative luminance.
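The bit-depth argument can be illustrated numerically. This sketch treats L* as exactly the 0.42-power of relative luminance, scaled 0 to 100, and takes a difference of about one L* unit as the threshold of visibility; both are approximations used for illustration only.

```python
def lightness(y):
    """Approximate CIE lightness: roughly the 0.42-power of relative
    luminance y (0..1), scaled to the 0..100 L* range."""
    return 100.0 * y ** 0.42

# 8-bit *linear* coding of relative luminance: near black, adjacent
# codes are far apart in lightness -- visible banding.  Codes 5 and 6
# differ by about 1.5 L* units, above the ~1-unit threshold.
dark_linear_step = lightness(6 / 255) - lightness(5 / 255)
assert dark_linear_step > 1.0

# 8-bit *0.42-power* coding spends its codes uniformly in lightness:
# every step is 100/255, about 0.39 L* units, below threshold at every
# level.  This is why 8 bits barely suffice with perceptually uniform
# coding, while direct (linear) coding of luminance needs 11 or more.
uniform_step = 100.0 / 255
assert uniform_step < 1.0
```

In other words, linear coding wastes codes in the highlights while starving the shadows; the power function redistributes the codes to match lightness sensitivity.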
Figure 10.3 Encoding nonlinearly coded relative luminance. [Diagram: [P] forms Y from linear R, G, and B; a γE = 0.42 power function (approximating L*) codes Y into 8 bits.]
The decoder would apply the inverse transfer function:
Figure 10.4 Decoding nonlinearly coded relative luminance. [Diagram: a γD = 2.4 power function inverts the L*-like coding to recover Y; [P⁻¹] reconstructs linear R, G, and B.]
If a video system were to operate in this manner, it would conform to the principle of constant luminance: All of the relative luminance would be present in, and recoverable from, the greyscale component.
Compensating for the CRT
Unfortunately for the theoretical block diagram – but fortunately for video, as you will see in a moment – the electron gun of a historical CRT display introduces a power function having an exponent of about 2.4:
Figure 10.5 The CRT transfer function. [Diagram: the decoder of Figure 10.4 followed by the CRT, whose electron gun imposes a 2.4-power function on each of R, G, and B.]
In a constant luminance system, the decoder would have to invert the display’s power function. This would require insertion of a compensating transfer function – roughly a 1⁄2.4-power function – in front of the CRT:
Figure 10.6 Compensating the CRT transfer function. [Diagram: a 1⁄2.4-power function inserted between the decoder matrix and the CRT compensates the CRT’s 2.4-power characteristic; this is the idealized, true constant luminance system.]
The decoder would now include two power functions: An inverse L* function with an exponent close to 2.4 to invert the perceptually uniform coding, and a power function with an exponent of 1⁄2.4 – that is, about 0.42 – to compensate for the CRT’s nonlinearity. Figure 10.6 represents the block diagram of an idealized, true constant luminance video system.
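The compensation of Figure 10.6 is easy to verify numerically. This sketch assumes idealized pure power functions for both the correction and the CRT; a real electron gun only approximates a 2.4-power characteristic.

```python
# A 1/2.4-power "gamma correction" inserted ahead of the CRT:
def gamma_correct(v):
    return v ** (1.0 / 2.4)   # compensating transfer function

# The CRT electron gun's power-function characteristic:
def crt(v):
    return v ** 2.4

# End to end, the cascade is linear: the display reproduces the
# tristimulus value presented to the compensating function.
for v in (0.0, 0.1, 0.5, 1.0):
    assert abs(crt(gamma_correct(v)) - v) < 1e-9
```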
Departure from constant luminance
Having two nonlinear transfer functions at every decoder was historically expensive and impractical. Notice that the exponents of the power functions are 2.4 and 1⁄2.4 – the functions are inverses! To avoid the complexity of incorporating two power functions into a decoder’s electronics, we begin by rearranging the block diagram, to interchange the “order of operations” of the matrix and the CRT compensation:
Figure 10.7 Rearranged decoder. [Diagram: the matrix [P⁻¹] and the CRT-compensating 1⁄2.4-power function are interchanged, so that the 2.4-power inverse-L* function and the 1⁄2.4-power function become adjacent.]
Upon rearrangement, the two power functions are adjacent. Since the functions are effectively inverses,