- •Contents
- •Figures
- •Tables
- •Preface
- •Acknowledgments
- •1. Raster images
- •Aspect ratio
- •Geometry
- •Image capture
- •Digitization
- •Perceptual uniformity
- •Colour
- •Luma and colour difference components
- •Digital image representation
- •Square sampling
- •Comparison of aspect ratios
- •Aspect ratio
- •Frame rates
- •Image state
- •EOCF standards
- •Entertainment programming
- •Acquisition
- •Consumer origination
- •Consumer electronics (CE) display
- •Contrast
- •Contrast ratio
- •Perceptual uniformity
- •The “code 100” problem and nonlinear image coding
- •Linear and nonlinear
- •4. Quantization
- •Linearity
- •Decibels
- •Noise, signal, sensitivity
- •Quantization error
- •Full-swing
- •Studio-swing (footroom and headroom)
- •Interface offset
- •Processing coding
- •Two’s complement wrap-around
- •Perceptual attributes
- •History of display signal processing
- •Digital driving levels
- •Relationship between signal and lightness
- •Algorithm
- •Black level setting
- •Effect of contrast and brightness on contrast and brightness
- •An alternate interpretation
- •Brightness and contrast controls in LCDs
- •Brightness and contrast controls in PDPs
- •Brightness and contrast controls in desktop graphics
- •Symbolic image description
- •Raster images
- •Conversion among types
- •Image files
- •“Resolution” in computer graphics
- •7. Image structure
- •Image reconstruction
- •Sampling aperture
- •Spot profile
- •Box distribution
- •Gaussian distribution
- •8. Raster scanning
- •Flicker, refresh rate, and frame rate
- •Introduction to scanning
- •Scanning parameters
- •Interlaced format
- •Interlace and progressive
- •Scanning notation
- •Motion portrayal
- •Segmented-frame (24PsF)
- •Video system taxonomy
- •Conversion among systems
- •9. Resolution
- •Magnitude frequency response and bandwidth
- •Visual acuity
- •Viewing distance and angle
- •Kell effect
- •Resolution
- •Resolution in video
- •Viewing distance
- •Interlace revisited
- •10. Constant luminance
- •The principle of constant luminance
- •Compensating for the CRT
- •Departure from constant luminance
- •Luma
- •“Leakage” of luminance into chroma
- •11. Picture rendering
- •Surround effect
- •Tone scale alteration
- •Incorporation of rendering
- •Rendering in desktop computing
- •Luma
- •Sloppy use of the term luminance
- •Colour difference coding (chroma)
- •Chroma subsampling
- •Chroma subsampling notation
- •Chroma subsampling filters
- •Chroma in composite NTSC and PAL
- •Scanning standards
- •Widescreen (16:9) SD
- •Square and nonsquare sampling
- •Resampling
- •NTSC and PAL encoding
- •NTSC and PAL decoding
- •S-video interface
- •Frequency interleaving
- •Composite analog SD
- •15. Introduction to HD
- •HD scanning
- •Colour coding for BT.709 HD
- •Data compression
- •Image compression
- •Lossy compression
- •JPEG
- •Motion-JPEG
- •JPEG 2000
- •Mezzanine compression
- •MPEG
- •Picture coding types (I, P, B)
- •Reordering
- •MPEG-1
- •MPEG-2
- •Other MPEGs
- •MPEG IMX
- •MPEG-4
- •AVC-Intra
- •WM9, WM10, VC-1 codecs
- •Compression for CE acquisition
- •AVCHD
- •Compression for IP transport to consumers
- •VP8 (“WebM”) codec
- •Dirac (basic)
- •17. Streams and files
- •Historical overview
- •Physical layer
- •Stream interfaces
- •IEEE 1394 (FireWire, i.LINK)
- •HTTP live streaming (HLS)
- •18. Metadata
- •Metadata Example 1: CD-DA
- •Metadata Example 2: .yuv files
- •Metadata Example 3: RFF
- •Metadata Example 4: JPEG/JFIF
- •Metadata Example 5: Sequence display extension
- •Conclusions
- •19. Stereoscopic (“3-D”) video
- •Acquisition
- •S3D display
- •Anaglyph
- •Temporal multiplexing
- •Polarization
- •Wavelength multiplexing (Infitec/Dolby)
- •Autostereoscopic displays
- •Parallax barrier display
- •Lenticular display
- •Recording and compression
- •Consumer interface and display
- •Ghosting
- •Vergence and accommodation
- •20. Filtering and sampling
- •Sampling theorem
- •Sampling at exactly 0.5fS
- •Magnitude frequency response
- •Magnitude frequency response of a boxcar
- •The sinc weighting function
- •Frequency response of point sampling
- •Fourier transform pairs
- •Analog filters
- •Digital filters
- •Impulse response
- •Finite impulse response (FIR) filters
- •Physical realizability of a filter
- •Phase response (group delay)
- •Infinite impulse response (IIR) filters
- •Lowpass filter
- •Digital filter design
- •Reconstruction
- •Reconstruction close to 0.5fS
- •“(sin x)/x” correction
- •Further reading
- •2:1 downsampling
- •Oversampling
- •Interpolation
- •Lagrange interpolation
- •Lagrange interpolation as filtering
- •Polyphase interpolators
- •Polyphase taps and phases
- •Implementing polyphase interpolators
- •Decimation
- •Lowpass filtering in decimation
- •Spatial frequency domain
- •Comb filtering
- •Spatial filtering
- •Image presampling filters
- •Image reconstruction filters
- •Spatial (2-D) oversampling
- •Retina
- •Adaptation
- •Contrast sensitivity
- •Contrast sensitivity function (CSF)
- •24. Luminance and lightness
- •Radiance, intensity
- •Luminance
- •Relative luminance
- •Luminance from red, green, and blue
- •Lightness (CIE L*)
- •Fundamentals of vision
- •Definitions
- •Spectral power distribution (SPD) and tristimulus
- •Spectral constraints
- •CIE XYZ tristimulus
- •CIE [x, y] chromaticity
- •Blackbody radiation
- •Colour temperature
- •White
- •Chromatic adaptation
- •Perceptually uniform colour spaces
- •CIE L*a*b* (CIELAB)
- •CIE L*u*v* and CIE L*a*b* summary
- •Colour specification and colour image coding
- •Further reading
- •Additive reproduction (RGB)
- •Characterization of RGB primaries
- •BT.709 primaries
- •Legacy SD primaries
- •sRGB system
- •SMPTE Free Scale (FS) primaries
- •AMPAS ACES primaries
- •SMPTE/DCI P3 primaries
- •CMFs and SPDs
- •Normalization and scaling
- •Luminance coefficients
- •Transformations between RGB and CIE XYZ
- •Noise due to matrixing
- •Transforms among RGB systems
- •Camera white reference
- •Display white reference
- •Gamut
- •Wide-gamut reproduction
- •Free Scale Gamut, Free Scale Log (FS-Gamut, FS-Log)
- •Further reading
- •27. Gamma
- •Gamma in CRT physics
- •The amazing coincidence!
- •Gamma in video
- •Opto-electronic conversion functions (OECFs)
- •BT.709 OECF
- •SMPTE 240M OECF
- •sRGB transfer function
- •Transfer functions in SD
- •Bit depth requirements
- •Gamma in modern display devices
- •Estimating gamma
- •Gamma in video, CGI, and Macintosh
- •Gamma in computer graphics
- •Gamma in pseudocolour
- •Limitations of 8-bit linear coding
- •Linear and nonlinear coding in CGI
- •Colour acuity
- •RGB and R’G’B’ colour cubes
- •Conventional luma/colour difference coding
- •Luminance and luma notation
- •Nonlinear red, green, blue (R’G’B’)
- •BT.601 luma
- •BT.709 luma
- •Chroma subsampling, revisited
- •Luma/colour difference summary
- •SD and HD luma chaos
- •Luma/colour difference component sets
- •B’-Y’, R’-Y’ components for SD
- •PBPR components for SD
- •CBCR components for SD
- •Y’CBCR from studio RGB
- •Y’CBCR from computer RGB
- •“Full-swing” Y’CBCR
- •Y’UV, Y’IQ confusion
- •B’-Y’, R’-Y’ components for BT.709 HD
- •PBPR components for BT.709 HD
- •CBCR components for BT.709 HD
- •CBCR components for xvYCC
- •Y’CBCR from studio RGB
- •Y’CBCR from computer RGB
- •Conversions between HD and SD
- •Colour coding standards
- •31. Video signal processing
- •Edge treatment
- •Transition samples
- •Picture lines
- •Choice of SAL and SPW parameters
- •Video levels
- •Setup (pedestal)
- •BT.601 to computing
- •Enhancement
- •Median filtering
- •Coring
- •Chroma transition improvement (CTI)
- •Mixing and keying
- •Field rate
- •Line rate
- •Sound subcarrier
- •Addition of composite colour
- •NTSC colour subcarrier
- •576i PAL colour subcarrier
- •4fSC sampling
- •Common sampling rate
- •Numerology of HD scanning
- •Audio rates
- •33. Timecode
- •Introduction
- •Dropframe timecode
- •Editing
- •Linear timecode (LTC)
- •Vertical interval timecode (VITC)
- •Timecode structure
- •Further reading
- •34. 2-3 pulldown
- •2-3-3-2 pulldown
- •Conversion of film to different frame rates
- •Native 24 Hz coding
- •Conversion to other rates
- •Spatial domain
- •Vertical-temporal domain
- •Motion adaptivity
- •Further reading
- •36. Colourbars
- •SD colourbars
- •SD colourbar notation
- •Pluge element
- •Composite decoder adjustment using colourbars
- •-I, +Q, and Pluge elements in SD colourbars
- •HD colourbars
- •References
- •38. SDI and HD-SDI interfaces
- •Component digital SD interface (BT.601)
- •Serial digital interface (SDI)
- •Component digital HD-SDI
- •SDI and HD-SDI sync, TRS, and ancillary data
- •Analog sync and digital/analog timing relationships
- •Ancillary data
- •SDI coding
- •HD-SDI coding
- •Interfaces for compressed video
- •SDTI
- •Switching and mixing
- •Timing in digital facilities
- •Summary of digital interfaces
- •39. 480i component video
- •Frame rate
- •Interlace
- •Line sync
- •Field/frame sync
- •R’G’B’ EOCF and primaries
- •Luma (Y’)
- •Picture center, aspect ratio, and blanking
- •Halfline blanking
- •Component digital 4:2:2 interface
- •Component analog R’G’B’ interface
- •Component analog Y’PBPR interface, EBU N10
- •Component analog Y’PBPR interface, industry standard
- •40. 576i component video
- •Frame rate
- •Interlace
- •Line sync
- •Analog field/frame sync
- •R’G’B’ EOCF and primaries
- •Luma (Y’)
- •Picture center, aspect ratio, and blanking
- •Component digital 4:2:2 interface
- •Component analog 576i interface
- •Scanning
- •Analog sync
- •Picture center, aspect ratio, and blanking
- •R’G’B’ EOCF and primaries
- •Luma (Y’)
- •Component digital 4:2:2 interface
- •Scanning
- •Analog sync
- •Picture center, aspect ratio, and blanking
- •R’G’B’ EOCF and primaries
- •Luma (Y’)
- •Component digital 4:2:2 interface
- •43. HD videotape
- •HDCAM (D-11)
- •DVCPRO HD (D-12)
- •HDCAM SR (D-16)
- •JPEG blocks and MCUs
- •JPEG block diagram
- •Level shifting
- •Discrete cosine transform (DCT)
- •JPEG encoding example
- •JPEG decoding
- •Compression ratio control
- •JPEG/JFIF
- •Motion-JPEG (M-JPEG)
- •Further reading
- •46. DV compression
- •DV chroma subsampling
- •DV frame/field modes
- •Picture-in-shuttle in DV
- •DV overflow scheme
- •DV quantization
- •DV digital interface (DIF)
- •Consumer DV recording
- •Professional DV variants
- •47. MPEG-2 video compression
- •MPEG-2 profiles and levels
- •Picture structure
- •Frame rate and 2-3 pulldown in MPEG
- •Luma and chroma sampling structures
- •Macroblocks
- •Picture coding types – I, P, B
- •Prediction
- •Motion vectors (MVs)
- •Coding of a block
- •Frame and field DCT types
- •Zigzag and VLE
- •Refresh
- •Motion estimation
- •Rate control and buffer management
- •Bitstream syntax
- •Transport
- •Further reading
- •48. H.264 video compression
- •Algorithmic features, profiles, and levels
- •Baseline and extended profiles
- •High profiles
- •Hierarchy
- •Multiple reference pictures
- •Slices
- •Spatial intra prediction
- •Flexible motion compensation
- •Quarter-pel motion-compensated interpolation
- •Weighting and offsetting of MC prediction
- •16-bit integer transform
- •Quantizer
- •Variable-length coding
- •Context adaptivity
- •CABAC
- •Deblocking filter
- •Buffer control
- •Scalable video coding (SVC)
- •Multiview video coding (MVC)
- •AVC-Intra
- •Further reading
- •49. VP8 compression
- •Algorithmic features
- •Further reading
- •Elementary stream (ES)
- •Packetized elementary stream (PES)
- •MPEG-2 program stream
- •MPEG-2 transport stream
- •System clock
- •Further reading
- •Japan
- •United States
- •ATSC modulation
- •Europe
- •Further reading
- •Appendices
- •Cement vs. concrete
- •True CIE luminance
- •The misinterpretation of luminance
- •The enshrining of luma
- •Colour difference scale factors
- •Conclusion: A plea
- •Radiometry
- •Photometry
- •Light level examples
- •Image science
- •Units
- •Further reading
- •Glossary
- •Index
- •About the author
My notation is outlined in Figure 28.6, on page 343. The coefficients are derived in Colour science for video, on page 287.
To compute luminance using (R+G+B)/3 is at odds with the characteristics of vision.
If the luminance of a scene element is to be sensed by a scanner or camera having a single spectral filter, then the spectral response of the scanner’s filter must – in theory, at least – correspond to the luminous efficiency function of Figure 24.1. However, luminance can also be computed as a weighted sum of suitably chosen red, green, and blue tristimulus components. The coefficients are functions of vision, of the white reference, and of the particular red, green, and blue spectral weighting functions employed. For realistic choices of white point and primaries, the green coefficient is quite large, the blue coefficient is the smallest of the three, and the red coefficient has an intermediate value.
The primaries of contemporary video displays are standardized in BT.709. Weights computed from these primaries are appropriate to compute relative luminance from red, green, and blue tristimulus values for computer graphics, and for modern video cameras and modern displays in both SD and HD:
709Y = 0.2126 R + 0.7152 G + 0.0722 B    (Eq 24.1)
For BT.709 primaries, luminance comprises roughly 21% power from the red (longwave) region of the spectrum, 72% from green (mediumwave), and 7% from blue (shortwave).
Blue has a small contribution to luminance. However, vision has excellent colour discrimination among blue hues. Equation 24.1 does not give you licence to assign fewer bits to blue than to red or green – in fact, it tells you nothing whatsoever about how many bits to assign to each channel.
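As a minimal sketch, Equation 24.1 amounts to a single weighted sum. (The function name here is mine; the inputs are assumed to be linear-light tristimulus values on a 0-to-1 scale, not the gamma-corrected R'G'B' signals of video interfaces.)

```python
def rel_luminance_709(r, g, b):
    """BT.709 relative luminance from linear-light RGB (Eq 24.1).

    Inputs must be tristimulus (linear-light) values, not
    gamma-corrected R'G'B'; see the chapter on gamma.
    """
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Reference white (unity R, G, B) yields unity luminance;
# blue alone contributes only about 7%.
```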
Lightness (CIE L*)
Lightness is defined by the CIE as the brightness of an area judged relative to the brightness of a similarly illuminated area that appears to be white or highly transmitting. Lightness is most succinctly described as apparent reflectance. Vision is attuned to estimating surface reflectance factors; lightness relates to that aspect of vision. The CIE’s phrase “similarly illuminated area that appears white” involves the absolute luminance by which relative luminance is normalized. In digital imaging, the reference white luminance is ordinarily
CHAPTER 24   LUMINANCE AND LIGHTNESS   259
[Figure 24.2 plots lightness or value (relative, 0 to 100) against relative luminance (0 to 1.0) for several models: Priest; Newhall (Munsell Value, “renotation”); CIE L*; Richter/DIN; Foss.]
Figure 24.2 Luminance and lightness. The dependence of lightness (L*) or value (V) upon relative luminance (Y) has been modeled by polynomials, power functions, and logarithms. In all of these systems, 18% “mid-grey” has lightness about halfway up the perceptual scale. This graph is adapted from Fig. 2 (6.3) in Wyszecki and Stiles, Color Science (cited on page 286).
The L* symbol is pronounced EL-star.
closely related to the luminance of a perfectly diffusing reflector (PDR) in the scene, or the luminance at which such a scene element is presented (or will ultimately be presented) at a display.
In Contrast sensitivity, on page 249, I explained that vision has a nonlinear perceptual response to luminance. Vision scientists have proposed many functions that relate relative luminance to perceived lightness; several of these functions are graphed in Figure 24.2.
The computational version of lightness, denoted L*, is defined by the CIE as a certain nonlinear function of relative luminance. In 1976, the CIE standardized lightness, L* as an approximation of the lightness response of human vision. Other functions – such as Munsell value – specify alternate lightness scales, but the CIE L* function is widely used and internationally standardized.
The L* function has two segments: a linear segment near black, and a scaled and offset cube root (1/3-power) function everywhere else.
260   DIGITAL VIDEO AND HD ALGORITHMS AND INTERFACES
To compute L* from optical density D in the range 0 to 2, use this relation (since Y = 10^(−D), substituting into the power-function segment of L* gives it directly):

L* = 116 · 10^(−D/3) − 16
My best-fit pure power function estimate is based upon numerical (Nelder-Mead) minimization of least-squares error on L* values 0 through 100 in steps of 10. The same result is obtained by fitting 100 samples in linear-light space.
The 1976 version of the CIE standard expresses this definition of L* (Eq 24.2):

    L*(Y) = 903.3 (Y/Yn);               Y/Yn ≤ 0.008856
    L*(Y) = 116 (Y/Yn)^(1/3) − 16;      0.008856 < Y/Yn ≤ 1
In the 2004 version of the standard, the decimals were replaced by exact rational fractions. Today’s definition is equivalent to this (Eq 24.3):

    L*(Y) = (116/12)^3 (Y/Yn);          Y/Yn ≤ (24/116)^3
    L*(Y) = 116 (Y/Yn)^(1/3) − 16;      (24/116)^3 < Y/Yn ≤ 1
The argument Y is relative luminance, proportional to intensity. This quantity is already relative to some absolute white reference, typically the absolute luminance associated with a perfect (or imperfect, say 90%) diffuse reflector. The argument Y is tacitly assumed to lie on
a scale whose maximum value (Yn) is related to the viewer’s adaptation state. The division by Yn does not form relative luminance; rather, the normalization accommodates the tradition dating back to 1931 and earlier that tristimulus values lie on a 0 to 100 scale. For tristimulus reference range of 0 to 1, as I prefer, the division by Yn can be omitted.
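A sketch of Eq 24.3 in code form (the function name is mine; Yn defaults to 1, reflecting the 0-to-1 tristimulus range preferred here):

```python
def cie_lstar(y, yn=1.0):
    """CIE 1976 lightness L* from relative luminance Y.

    Uses the 2004 rational-fraction form: a linear segment below
    (24/116)**3 of reference white, and a scaled, offset cube root
    above it.  The two segments meet at L* = 8 exactly.
    """
    t = y / yn
    if t <= (24 / 116) ** 3:
        return (116 / 12) ** 3 * t
    return 116 * t ** (1 / 3) - 16
```

The exact mappings discussed in this chapter hold: relative luminance 1/64 gives L* of 13, 1/8 gives 42, and 18% mid-grey gives about 49.5, roughly halfway up the perceptual scale.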
The linear segment of L* is convenient for mathematical reasons, but is not justified by visual perception: The utility of L* is limited to a luminance ratio of about 100:1, and L* values below 8 don’t represent meaningful visual stimuli. (In graphic arts, luminance ratios up to about 300:1 are used with L*.)
The exponent of the power-function segment of L* is 1/3, but the scale factor of 116 and the offset of −16 modify the pure power function such that the best-fit pure power function has an exponent of 0.42, not 1/3! L* is based upon a cube root, but it is not best approximated by a cube root! The best pure-power approximation to lightness is 100 times the 0.42-power of relative luminance.
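The 0.42 claim can be checked numerically. This sketch replaces the Nelder-Mead minimization described in the margin note with a simple grid search over candidate exponents, fitting 100·Y^p to L* at the sample points L* = 0 through 100 in steps of 10 (names are mine; the recovered exponent lands near 0.42–0.43, the exact value depending on the sampling and error metric):

```python
def best_fit_exponent():
    """Grid-search the exponent p minimizing squared error between
    100 * Y**p and CIE L*, sampled at L* = 0, 10, ..., 100."""
    def inv_lstar(lstar):
        # Invert L* (2004 rational form) back to relative luminance.
        if lstar <= 8:
            return lstar / (116 / 12) ** 3
        return ((lstar + 16) / 116) ** 3

    samples = [(inv_lstar(l), l) for l in range(0, 101, 10)]
    return min(
        (i / 1000 for i in range(300, 501)),
        key=lambda p: sum((100 * y ** p - l) ** 2 for y, l in samples),
    )
```

A pure cube root (p = 1/3) fits these points far worse than an exponent near 0.42, which is the point of the margin note.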
In a display system having contrast ratio of 100:1, L* takes values between 9 and 100.
∆L* is pronounced delta EL-star.
For television viewing, we typically set Yn to reference white at the display. In television viewing, the viewer’s adaptation is controlled both by the image itself and by elements in the field of view that are outside the image. In cinema, the viewer’s adaptation is controlled mainly by the image itself. In cinema, setting Yn to reference white is not necessarily appropriate; it may be more appropriate to set Yn to the luminance of the representation of a perfect diffuse reflector in the displayed scene.
Relative luminance of 0.01 maps to L* of almost exactly 9. You may find it convenient to keep in mind two exact mappings of L*: Relative luminance of 1/64 (0.015625) corresponds to L* of exactly 13, and relative luminance of 1/8 (0.125) corresponds to L* of exactly 42 (which, as Douglas Adams would tell you, is the answer to Life, the Universe, and Everything).
The difference between two L* values, denoted ∆L*, is a measure of perceptual “distance.” In graphic arts, a difference of less than unity between two L* values is generally considered to be imperceptible – that is, ∆L* of unity is taken to lie at the threshold of discrimination. L* is meaningless beyond about 200 – that is, beyond about 6.5 Y/Yn.
In Contrast sensitivity, on page 249, I gave the example of logarithmic coding with a Weber contrast of 1.01. For reconstructing images for human viewing, it is never necessary to quantize relative luminance more finely than that. However, L* suggests that a ratio of 1.01 is unnecessarily fine. The inverse L* of 100 is unity; dividing that by the inverse L* of 99 yields a Weber contrast of 1.025. The luminance ratio between adjacent L* values increases as L* falls, reaching 1.13 at L* of 8 (at relative luminance of about 1%, corresponding to a contrast ratio of 100:1). L* was standardized based upon estimation of lightness of diffusely reflecting surfaces; the linear segment below L* of 8 was inserted for mathematical convenience. I consider estimating the visibility of lightness differences at luminance values less than 1% of white to be a research topic, and I recommend against using delta-L* at such low luminances.
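The adjacent-step ratios quoted above are easy to reproduce by inverting the power-function segment of L* (helper names are mine; the numbers come out to within rounding of those in the text):

```python
def y_from_lstar(lstar):
    """Invert CIE L* through the power-function segment (valid for L* >= 8)."""
    return ((lstar + 16) / 116) ** 3

def step_ratio(lstar):
    """Luminance ratio between adjacent integer L* values."""
    return y_from_lstar(lstar + 1) / y_from_lstar(lstar)

# The ratio widens as L* falls: about 1.026 between L* of 99 and 100,
# about 1.13 between L* of 8 and 9.
```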
L* provides one component of a uniform colour space; it can be described as perceptually uniform. Since we cannot directly measure the quantity in question, we
cannot assign to it any strong properties of mathematical linearity; as far as I’m concerned, the term perceptually linear is not appropriate.
In Chapter 10, Constant luminance, I described how video systems use a luma signal (Y’) that is an engineering approximation to lightness. The luma signal is only indirectly related to the relative luminance (Y) or the lightness (L*) of colour science.
Figure 25.1 Example coordinate system
25   The CIE system of colorimetry
The Commission Internationale de l’Éclairage (CIE) has defined a system that maps a spectral power distribution (SPD) of physics into a triple of numerical values – CIE XYZ tristimulus values – that form the mathematical coordinates of colour space. In this chapter, I describe the CIE system. In the following chapter, Colour science for video, I will explain how these XYZ tristimulus values are related to linear-light RGB values.
Colour coordinates are analogous to coordinates on a map (see Figure 25.1). Cartographers have different map projections for different functions: Some projections preserve areas, others show latitudes and longitudes as straight lines. No single map projection fills all the needs of all map users. Analogously, there are many “colour spaces,” and as in maps, no single coordinate system fills all of the needs of users.
In Chapter 24, Luminance and lightness, I introduced the linear-light quantity luminance. To reiterate, I use the term luminance and the symbol Y to refer to CIE luminance. I use the term luma and the symbol Y’ to refer to the video component that conveys an approximation to lightness. Most of the quantities in this chapter, and in the following chapter Colour science for video, involve “linear-light” values that are proportional to intensity. In Chapter 10, Constant luminance, I related the theory of colour science to the practice of video. To approximate perceptual uniformity, video uses quantities such as R’, G’, B’, and Y’ that are not proportional to intensity.