
SMPTE RP 166, Critical Viewing Conditions for Evaluation of Color Television Pictures.

EBU Tech. R23, Procedure for the operational alignment of grade-1 colour picture monitors.

terms are nearly unity, and the off-diagonal terms are nearly zero. In these cases, if the transform is computed in the nonlinear (gamma-corrected) R’G’B’ domain, the resulting errors will be small.
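The claim can be spot-checked numerically. The sketch below is an illustration, not from the book: the near-identity matrix and the exponent 2.4 are invented for the demonstration. It applies the same matrix once in linear light (the correct order of operations) and once in the gamma-corrected domain, then compares.

```python
# Sketch: compare applying a near-identity 3x3 colour matrix in the
# linear-light domain vs. in the gamma-corrected (R'G'B') domain.
import numpy as np

gamma = 2.4  # assumed encoding exponent, for illustration only

# A hypothetical near-identity transform: diagonal terms near unity,
# off-diagonal terms near zero.
M = np.array([[ 1.02, -0.01, -0.01],
              [-0.01,  1.03, -0.02],
              [ 0.00, -0.01,  1.01]])

rgb = np.array([0.6, 0.4, 0.2])        # linear-light component values

correct = (M @ rgb) ** (1 / gamma)     # matrix in linear light, then encode
approx  = M @ (rgb ** (1 / gamma))     # matrix applied to R'G'B' values

err = np.abs(correct - approx).max()
print(err)                             # small error, well under 0.01
```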

Camera white reference

There is an implicit assumption in television that the camera operates as if the scene were illuminated by a source having the chromaticity of CIE D65. In practice, television studios are often lit by tungsten lamps at around 3200 K, and scene illumination is often deficient in the shortwave (blue) region of the spectrum. This situation is compensated by white balancing – that is, by adjusting the gain of the red, green, and blue components at the camera so that a diffuse white object reports the values that would be reported if the scene illumination had the same tristimulus values as CIE D65. In studio cameras, controls for white balance are available. In consumer cameras, activating white balance causes the camera to integrate red, green, and blue over the picture, and to adjust the gains so as to equalize the sums. (This approach to white balancing is sometimes called integrate to grey.)
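The integrate-to-grey approach can be sketched in a few lines. This is an assumption-laden illustration, not any camera vendor's algorithm; the synthetic image and its tungsten-like channel imbalance are made up.

```python
# Sketch of "integrate to grey" white balancing: average each channel
# over the picture, then scale per-channel gains so the averages match.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical linear-light image under blue-deficient (tungsten-like)
# illumination: red is boosted, blue is depressed.
image = rng.random((100, 100, 3)) * np.array([1.2, 1.0, 0.6])

means = image.mean(axis=(0, 1))   # per-channel averages ("integration")
gains = means.mean() / means      # gains that equalize the three sums
balanced = image * gains

print(balanced.mean(axis=(0, 1)))  # the three channel averages are now equal
```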

Display white reference

In additive mixture, the illumination of the reproduced image is generated entirely by the display device. In particular, reproduced white is determined by the characteristics of the display, and is not dependent on the environment in which the display is viewed. In a completely dark viewing environment, such as a cinema theater, this is desirable; a wide range of chromaticities is accepted as “white.” However, in an environment where the viewer’s field of view encompasses objects other than the display, the viewer’s notion of “white” is likely to be influenced or even dominated by what he or she perceives as “white” in the ambient. To avoid subjective mismatches, the chromaticity of white reproduced by the display and the chromaticity of white in the ambient should be reasonably close. SMPTE has standardized the chromaticity of reference white in studio displays. The standard specifies that luminance for reference white be reproduced at 120 cd·m⁻², and surround conditions – basically, neutral grey at 10% of reference white – are outlined. In Europe, reference white luminance is specified in EBU Tech. R23 as 80 cd·m⁻².

DIGITAL VIDEO AND HD ALGORITHMS AND INTERFACES

Modern blue CRT phosphors are more efficient with respect to human vision than red or green phosphors.

Until recently, brightness was valued in computer displays more than colour accuracy. In a quest for a small brightness increment, at the expense of a loss of colour accuracy, computer display manufacturers adopted a white point having a colour temperature of about 9300 K, producing a white having about 1.3 times as much blue as the standard CIE D65 white reference used in television. Consequently, computer displays and computer pictures often look excessively blue. The situation can be corrected by adjusting or calibrating the display to a white reference with a lower colour temperature.

Studio video standards in Asia call for viewing with a 9300 K white reference. This practice apparently originates from a cultural preference regarding the portrayal of skin tones.

Gamut

Analyzing a scene with the CIE analysis functions produces distinct component triples for all colours. But when transformed into components suitable for a set of physical display primaries, some of those colours – those colours whose chromaticity coordinates lie outside the triangle formed by the primaries – will have negative component values. In addition, colours outside the triangle of the primaries may have one or two primary components that exceed unity. These colours cannot be correctly displayed. Display devices typically clip signals that have negative values and saturate signals whose values exceed unity. Visualized on the chromaticity diagram, a colour outside the triangle of the primaries is reproduced at a point on the boundary of the triangle.
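The out-of-gamut behaviour described above can be demonstrated with the well-known XYZ-to-linear-RGB matrix for BT.709 primaries and D65 white. The particular cyan chromaticity chosen below is my example, not the book's; it lies outside the BT.709 primary triangle, so its red component comes out negative, and clipping maps it to a point on the gamut boundary.

```python
# Sketch: a chromaticity outside the BT.709 primary triangle produces a
# negative RGB component; a typical display clips it to zero.
import numpy as np

# XYZ-to-linear-RGB matrix for BT.709 primaries with D65 white
M = np.array([[ 3.2406, -1.5372, -0.4986],
              [-0.9689,  1.8758,  0.0415],
              [ 0.0557, -0.2040,  1.0570]])

# A saturated cyan: chromaticity (x, y) = (0.08, 0.35), luminance Y = 0.2.
# This point lies outside the triangle formed by the BT.709 primaries.
x, y, Y = 0.08, 0.35, 0.2
XYZ = np.array([x * Y / y, Y, (1 - x - y) * Y / y])

rgb = M @ XYZ
print(rgb)                      # the red component is negative: out of gamut

clipped = np.clip(rgb, 0.0, 1.0)
print(clipped)                  # what a clipping display actually reproduces
```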

If a camera is designed to capture all colours, its complexity is necessarily higher and its performance is necessarily worse than that of a camera designed to capture a smaller range of colours. Thankfully, the range of colours encountered in the natural and man-made world is a small fraction of all possible colours. Although it is necessary for an instrument such as a colorimeter to measure all colours, in an imaging system we are generally concerned with colours that occur frequently.

CHAPTER 26

COLOUR SCIENCE FOR VIDEO

Pointer, Michael R. (1980), “The gamut of real surface colours,” in Color Research and Application 5 (3): 143–155 (Fall).

Poynton, Charles (2010), “Wide-gamut image capture,” in Proc. IS&T CGIV, Fourth European Conf. on Colour in Graphics and Imaging: 471–482 (Joensuu, Finland).

Perhaps the first image coding system that accommodated linear-light (tristimulus) values below zero and above unity is described in Levinthal, Adam, and Thomas Porter (1984), “Chap: a SIMD graphics processor,” in Computer Graphics 18 (3): 77–82 (July, Proc. SIGGRAPH ’84).

SMPTE ST 2048-1, 2048×1080 and 4096×2160 Digital Cinematography Production Image Formats FS/709.

M.R. Pointer characterized the distribution of frequently occurring real surface colours. The naturally occurring colours tend to lie in the central portion of the chromaticity diagram, where they can be encompassed by a well-chosen set of physical primaries. An imaging system performs well if it can display all or most of these colours. BT.709 does reasonably well; however, many of the colours of conventional offset printing – particularly in the cyan region – are not encompassed by all-positive BT.709 RGB. To accommodate such colours requires wide-gamut reproduction.

Wide-gamut reproduction

For much of the history of colour television, cameras were designed to incorporate assumptions about the colour reproduction capabilities of colour CRTs. But nowadays, video production equipment is being used to originate images for a much wider range of applications than just television broadcast. The desire to make digital cameras suitable for originating images for this wider range of applications has led to proposals for video standards that accommodate a wider gamut.

The xvYCC (“x.v.Color”) scheme is intended to be the basis for wide-gamut reproduction in future HD systems. The scheme is intended for use with RGB tristimulus values having BT.709 primaries, but with their range extended to span -0.25 to +1.33, well outside the range 0 to 1. The excursions below zero and above unity allow RGB values to represent colours outside the triangle enclosed by the BT.709 primaries. When the extended R’G’B’ values are matrixed, the resulting Y’CBCR values lie within the “valid” range: regions of Y’CBCR space outside the image of the “legal” RGB cube are exploited to convey wide-gamut colours.
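The matrixing claim can be checked with the BT.709 luma coefficients. The wide-gamut input value below is my example, not from the xvYCC specification: its R’ is below zero and its B’ is above unity, yet the resulting Y’CBCR triple falls inside the conventional valid ranges.

```python
# Sketch: extended-range R'G'B' values (outside 0..1) matrix into Y'CbCr
# values that remain within the conventional valid ranges.
KR, KG, KB = 0.2126, 0.7152, 0.0722   # BT.709 luma coefficients

def ycbcr(r, g, b):
    """Form Y'CbCr from R'G'B' using the BT.709 coefficients."""
    y = KR * r + KG * g + KB * b
    cb = (b - y) / (2 * (1 - KB))
    cr = (r - y) / (2 * (1 - KR))
    return y, cb, cr

# A wide-gamut colour: R' below zero, B' above unity
y, cb, cr = ycbcr(-0.2, 0.5, 1.2)
print(y, cb, cr)    # Y' within 0..1; CB and CR within -0.5..+0.5
```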

Free Scale Gamut, Free Scale Log (FS-Gamut, FS-Log)

A recent SMPTE standard endorses wide-gamut imagery in production. “FS” stands for “Free Scale”; image data having arbitrary chromaticity can be conveyed. The standard uses the notation R’FSG’FSB’FS for wide-gamut colour components. The “709” component in the standard’s title reflects the option to convey image data having BT.709 colorimetry. The default values for the FS primaries reflect the Sony “wide gamut” delivered by the F23, F35, and F65 cameras (see page 294). The standard provides no default values for the quasilog OECF.

ST 2048 contains many occurrences of “tristimulus value” where “chromaticity coordinate” is meant. Expect raised eyebrows among colour and image scientists.

Color VANC is pronounced colour-VEE-ants. The companion standard ST 2048-2 suggests placing Color VANC in the early portion of the active interval of line 18 in 1125-line interfaces.

The colour space is defined by the chromaticity coordinates of the primaries and white, and by a parametrically defined quasilog OECF. Apart from toe and shoulder regions that are typically nonlinear, no provision is made for footroom or headroom. The standard does not specify how image data values are to be carried, but presumably more than 10 bits per component will be used (despite the quasilog).

The quasilog OECF is described by a set of four numerical parameters and a (fifth) “exposure” value kEXT: 0 ≤ kEXT < 1 indicates underexposure, kEXT = 1 indicates correct exposure (the default), and kEXT > 1 indicates overexposure.

The standard defines Color VANC, an ancillary data (ANC) packet carrying colour metadata – namely, the chromaticities of the primaries and reference white, the four parameters of the quasilog OECF function, kEXT, and 12 numerical parameters concerned with the toe and knee (or shoulder) of the OECF. Presumably, DI ingest is expected to use the parameters carried by the Color VANC to construct a colour transform.
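As an aid to visualizing the metadata payload just described, here is a hypothetical container for it. Every field name and default below is my invention; ST 2048-2 defines the actual packet layout, not this sketch.

```python
# Hypothetical container for the colour metadata the text says a Color VANC
# packet carries. Field names are invented; they are NOT from ST 2048-2.
from dataclasses import dataclass, field
from typing import List, Tuple

Chromaticity = Tuple[float, float]   # CIE (x, y) chromaticity coordinates

@dataclass
class ColorVancMetadata:
    red: Chromaticity                # primary chromaticities...
    green: Chromaticity
    blue: Chromaticity
    white: Chromaticity              # ...and reference white
    oecf_params: List[float]         # the four quasilog OECF parameters
    k_ext: float = 1.0               # exposure value; 1.0 = correct exposure
    # twelve parameters for the toe and knee (shoulder) of the OECF:
    toe_knee_params: List[float] = field(default_factory=lambda: [0.0] * 12)

meta = ColorVancMetadata(
    red=(0.64, 0.33), green=(0.30, 0.60), blue=(0.15, 0.06),  # BT.709 values
    white=(0.3127, 0.3290),                                   # D65
    oecf_params=[0.0, 0.0, 0.0, 0.0],
)
print(len(meta.toe_knee_params))   # 12 toe/knee parameters
```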

Further reading

For a highly readable short introduction to colour image coding, consult DeMarsh and Giorgianni. For a terse, complete technical treatment, read Schreiber.

For details of many aspects of colour imaging technology, consult either Kang (now somewhat dated) or Sharma. For a discussion of nonlinear RGB in computer graphics, read Lindbloom’s SIGGRAPH paper.

In a computer graphics system, once light is on its way to the eye, any tristimulus-based system can accurately represent colour. However, the interaction of light and objects involves spectra, not tristimulus values. In computer-generated imagery (CGI), the calculations actually involve sampled SPDs, even if only three samples (in this context, colour components) are used. Roy Hall discusses these issues.

DeMarsh, LeRoy E., and Edward J. Giorgianni (1989), “Color science for imaging systems,” in Physics Today: 44–52 (Sep.).

Hall, Roy (1989), Illumination and Color in Computer Generated Imagery (New York: Springer).

Kang, Henry R. (1997), Color Technology for Electronic Imaging Devices (Bellingham, Wash.: SPIE).

Lindbloom, Bruce (1989), “Accurate color reproduction for computer graphics applications,” in Computer Graphics, 23 (3): 117–126 (July).

Reinhard, Erik et al. (2008), Color Imaging: Fundamentals and Applications (Wellesley, Mass.: A K Peters).

Schreiber, William F. (1993), Fundamentals of Electronic Imaging Systems, Third Edition (Berlin: Springer-Verlag).

Sharma, Gaurav (2002), Digital Color Imaging Handbook (Boca Raton, Fla.: CRC).

