
Compression Ratios

To fit more data onto a tape and to better facilitate digital postproduction, most digital video formats use some type of data compression. This compression process can greatly affect the quality of your image. Uncompressed video has a compression ratio of 1:1, while compressed video can range anywhere from 1.6:1 to 10:1. Video compressed at a 10:1 ratio retains only 10 percent of its original data. Although throwing out 90 percent of your image data might sound a little scary, rest assured that modern compression schemes can deliver excellent results, even at very high compression ratios. In fact, video DVDs use a fairly extreme level of MPEG-2 compression, showing that heavily compressed video can still be commercially viable.
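To make the arithmetic concrete, here is a minimal Python sketch (the 100MB starting size and the ratios are illustrative numbers only, not tied to any particular format) showing how much data survives a given compression ratio:

def compressed_size(original_bytes, ratio):
    """Approximate size after applying an N:1 compression ratio."""
    return original_bytes / ratio

original = 100_000_000  # 100MB of hypothetical uncompressed image data

for ratio in (1.6, 5, 10):
    size_mb = compressed_size(original, ratio) / 1_000_000
    kept_percent = 100 / ratio
    print(f"{ratio:>4}:1 ratio -> {size_mb:6.1f} MB stored "
          f"({kept_percent:.0f}% of the original data kept)")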

As explained earlier, a video format with a 4:2:2 color sampling ratio compresses the video signal by discarding half of the color information. Because the discarded information is not visible to the human eye, this compression is considered lossless.

When the color sampling rate dips to 4:1:1 or 4:2:0, the information that has been discarded is visible to the eye, so this compression is considered lossy. However, the image quality of 4:1:1 and 4:2:0 video, such as HDV, is still considered excellent.
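As a rough illustration of what those sampling ratios mean for data, here is a small Python sketch (assuming 8 bits per sample and ignoring real-world packing details) that compares the average bits per pixel for each scheme:

# Back-of-the-envelope comparison of color sampling schemes,
# assuming 8 bits per sample.

samples_per_pixel = {
    "4:4:4": 3.0,  # Y, Cb, and Cr stored for every pixel
    "4:2:2": 2.0,  # Cb/Cr kept for every other pixel horizontally
    "4:1:1": 1.5,  # Cb/Cr kept for every fourth pixel horizontally
    "4:2:0": 1.5,  # Cb/Cr kept for every other pixel on every other line
}

BITS_PER_SAMPLE = 8

for scheme, samples in samples_per_pixel.items():
    bits_per_pixel = samples * BITS_PER_SAMPLE
    relative = samples / samples_per_pixel["4:4:4"]
    print(f"{scheme}: {bits_per_pixel:4.0f} bits per pixel "
          f"({relative:.0%} of the full 4:4:4 data)")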

Data Rate

When a digital video camera records an image, the amount of information that is stored for each second of video is determined by the video format’s data rate, or bit rate. For example, HDV has a data rate of 25 megabits per second (Mbps), which means that 25 megabits of information are stored for each second of video. (If you factor in audio, timecode information, and the other “housekeeping” data that needs to be stored, HDV actually takes up about 36Mbps.) DVCPRO HD, on the other hand, stores anywhere from 40 to 100 megabits of information for each second of video. As you would expect, more information means a better picture, and the data rate is one of the reasons that DVCPRO HD delivers higher quality than HDV. Uncompressed 4:4:4 HD video has a data rate of over 800Mbps, while video that is compressed for streaming over the Internet usually has a data rate of 10Mbps or less.
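If you want a feel for what those data rates mean in terms of storage, the short Python sketch below converts megabits per second into megabytes per minute of footage, using the approximate figures mentioned above:

# Convert a video data rate (in megabits per second) into storage per minute.
# The data rates below are the approximate figures mentioned in the text.

MBITS_TO_MBYTES = 1 / 8  # 8 bits per byte

data_rates_mbps = {
    "HDV (video only)": 25,
    "HDV (with audio and housekeeping)": 36,
    "DVCPRO HD (top end)": 100,
    "Uncompressed 4:4:4 HD": 800,
    "Typical Internet streaming": 10,
}

for name, mbps in data_rates_mbps.items():
    mb_per_minute = mbps * MBITS_TO_MBYTES * 60
    print(f"{name:<34} {mbps:>4} Mbps  ~ {mb_per_minute:7.1f} MB per minute")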

Understanding Digital Media Files

As digital video cameras move toward file-based acquisition, the clarity of the various digital video formats gets murkier. It used to be that you simply looked at the type of videotape itself to identify the format: a Beta SP tape meant the format was Betacam, a VHS tape was VHS. The physical size and structure of the tape itself determined the quality of the images and audio that it could record.

Nowadays, file-based acquisition means that the format of the recording media itself matters less and less because tapeless media, like SD cards or flash drives, are simply neutral storage devices that can store any type of file that fits.

The good news is that digital video now works in a way that is intuitive to anyone who uses a computer: files are stored on discs or drives. The not-so-good news is that it’s pretty hard to look at a list of files and figure out what the quality or format of that file is.


There’s another big change in regard to media and file formats. With a traditional videotape-based workflow, you would pick a format and stick with it throughout the shooting and editing process. If you did decide to introduce another videotape format into your workflow, it would typically be at the end, when you mastered to a higher-quality format.

With tapeless formats, it doesn’t work that way anymore. In fact, you will likely use at least three different file formats as you shoot, edit, and finish your film. You’ll start with an acquisition format, which is the format, usually proprietary, that your camera uses to capture video. Then you’ll transcode your media to an intermediate format, often also proprietary, during the editing process. And finally, you’ll use several different delivery formats to finish your project and distribute it across various media.

We’ll go over formats a little later in this chapter, but first, let’s talk about how exactly those tapeless digital video and audio files work.

Digital Video Container Files

When you pull digital video media off a piece of videotape or disk and onto your computer, the media is stored in a file on your hard drive. These files are called container files. QuickTime movies (.MOV), MP4 (.MP4), Material Exchange Format (.MXF), Flash video (.F4V), Windows Media Video (.WMV), and Audio Video Interleaved (.AVI) are some popular digital media container file types for video.

Each of these container files can hold a video stream, two or more audio streams, and some sort of data stream. The data stream can hold additional information, such as subtitles, chapter information, tags, and digital rights management (DRM) data. The type of container file doesn’t necessarily affect the image or sound quality. Instead, the container file supports a selection of different codecs that it works with to determine exactly how the media is stored (see Figure 3.9). That means that the image quality can range from 4:4:4 uncompressed to highly compressed HDV, but the file type (.MOV, .AVI, and so on) remains the same. The video stream will use one codec, and the audio streams will use special audio codecs. Because they hold all these different streams in one package, container files are also known as wrappers.
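If you’re curious about what streams and codecs a given container file actually holds, a tool such as ffprobe (part of the FFmpeg project) will report them. The following Python sketch is one way to call it; it assumes ffprobe is installed on your system and uses a placeholder filename, so treat it as an illustration rather than a required workflow step:

# Inspect the streams inside a container file with ffprobe (part of FFmpeg).
# Assumes ffprobe is installed and on your PATH; "clip.mov" is a placeholder.
import json
import subprocess

def describe_container(path):
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    info = json.loads(result.stdout)
    print("Container:", info["format"].get("format_long_name"))
    for stream in info["streams"]:
        # Each stream (video, audio, or data) reports its own codec.
        print(f"  {stream['codec_type']:>5}: {stream.get('codec_long_name')}")

describe_container("clip.mov")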

Codecs

Earlier, we mentioned that video and audio data must be compressed before they can be stored and played by your computer (unless you have special hardware for playing uncompressed video). The software or hardware that handles this task is called a codec. In a digital video camera or video card, codecs are built into the hardware; in your computer, they are usually software-based and managed either by the video architecture of your operating system, by the digital video editing application that you are using, or by your Web browser.

If you have ever created a Web video on your computer, you have probably been presented with a choice of different codecs. Likewise, if you have made the choice between viewing HD or SD footage using a streaming video on-demand service like Zune, you have been presented with a choice of codecs. Sometimes, decisions about codecs are disguised in “user friendly” language (see Figure 3.10), and sometimes, they are displayed in their full, complicated glory (see Figure 3.11). But regardless of how they are presented, codecs are always a part of interacting with digital video.


Figure 3.9 When you export video from the QuickTime Player, it will display information about the codec used in that container file.

Figure 3.10 Some applications dumb it down for you, which isn’t always a bad thing.


Figure 3.11 Adobe Premiere lets you choose from many different codecs when you export video.

Codecs can be either lossy or lossless; that is, they either degrade the image quality or leave it unaffected. Most compression is lossy, but that doesn’t mean the loss is necessarily visible to the human eye; in fact, most digital video codecs are lossy, and yet the image quality is fantastic. (See the earlier explanation of color sampling.) The more complicated a shot, the more likely you are to see compression artifacts. A wide shot with lots of small detail, such as the leaves in the upper shot in Figure 3.12, is much harder to compress than a simple close-up of a person, like the lower shot in Figure 3.12. Also, many video codecs are a little biased toward human facial features and skin tones, so they do a better job of compressing shots of faces.

Digital video codecs fall into two categories: intraframe and interframe compression. Intraframe compression treats each frame of video as a still image and compresses it separately. Interframe compression uses keyframes to compress a video clip section by section. The MJPEG and DV codecs use intraframe compression, whereas the various MPEG-based codecs use interframe compression. Interframe compression is more sophisticated than intraframe compression, and it is therefore better able to deliver highly compressed files with less visible loss of image quality. However, codecs that use interframe compression are not well suited to editing, because you can only make edits at the keyframes. Interframe codecs, such as H.264, are great for acquisition in the camera, but it’s generally recommended that you transcode these camera-original files to another codec, such as Apple ProRes or Avid DNxHD, for postproduction.
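To see why interframe compression saves so much space, and why it makes editing awkward, here is a toy Python sketch; it is a conceptual illustration only, not how any real codec such as H.264 or MPEG-2 is implemented. It stores an occasional full keyframe and, for every other frame, only the pixels that changed; rebuilding a frame in the middle of a group means going back to the nearest keyframe and replaying the changes:

# Toy illustration of interframe compression: store a full keyframe, then
# only the pixels that changed in each following frame.

def encode(frames, keyframe_interval=5):
    encoded = []
    for i, frame in enumerate(frames):
        if i % keyframe_interval == 0:
            encoded.append(("key", list(frame)))     # intraframe: full picture
        else:
            prev = frames[i - 1]
            diff = {j: v for j, (p, v) in enumerate(zip(prev, frame)) if p != v}
            encoded.append(("delta", diff))          # interframe: changes only
    return encoded

def decode_frame(encoded, index):
    # To rebuild frame `index`, start at the nearest earlier keyframe and
    # replay every delta up to it, which is why cutting mid-group is awkward.
    start = max(i for i in range(index + 1) if encoded[i][0] == "key")
    frame = list(encoded[start][1])
    for _, diff in encoded[start + 1:index + 1]:
        for j, v in diff.items():
            frame[j] = v
    return frame

# Tiny "video": each frame is a row of pixel values, mostly unchanged.
frames = [[0, 0, 0, 0], [0, 1, 0, 0], [0, 1, 2, 0], [0, 1, 2, 3], [9, 1, 2, 3]]
encoded = encode(frames, keyframe_interval=3)
print(decode_frame(encoded, 4) == frames[4])  # True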

Figure 3.12 Complicated images such as wide shots and moving camera shots (top) are more prone to compression artifacts than simple shots such as close-ups (bottom).


Many codecs take longer to compress than they do to decompress. (These are called asymmetrical.) Most of the codecs that you’ll eventually use to output a finished movie are asymmetrical. For example, although it might take hours to compress a movie using the Sorenson or MPEG codecs, the computer can decompress it and play it back in real time.

Uncompressed digital video still involves a codec, even though it’s technically not compressed. That’s because all digital video requires quantization to take an analog image and render it into digital 0s and 1s; however, since this process is invisible to the human eye, or “lossless,” the result is considered “uncompressed.” Uncompressed video has a color sampling ratio of 4:4:4 and is primarily used for visual effects, 3D, and blue/green screen shooting, because the additional color information that isn’t visible to the human eye is helpful when doing refined computer-generated imagery. As high-quality digital cinema formats grow in popularity, the use of 4:4:4 codecs, such as the R3D codec, becomes more common, but the large file sizes do tend to require extra computer hardware.

Different codecs are used for different purposes. You’ll use high-compression/lower-quality codecs for Web or mobile delivery (.mp4, .3gp, and so on) and low-compression/higher-quality codecs for higher-quality digital television or digital cinema delivery (H.264, .f4v, .wmv, .xmf, and so on).

There are many different video codecs out there, but they tend to fall into a few broad categories:

■ MPEG-4 based codecs were originally developed for highly compressed Web video, such as MP4, Sorenson, and DivX (all part of a subset called MPEG-4 Part 2), but the family has expanded to include very high-quality codecs, such as H.264, AVCHD, AVC-Intra, and others used by many higher-end HD cameras, such as Sony’s HDCAM SR, Panasonic’s AVCCAM line, and most DSLRs (a subset known as MPEG-4 Part 10). MPEG-4 uses interframe compression and is also used for certain Blu-ray Discs and DVDs. It is easily the most commonly used family of codecs at the time of this writing.

■ MPEG-2 based codecs were originally developed for full-frame broadcast-quality video, and they use interframe compression. MPEG-2 is used for some Blu-ray Discs, DVDs, and digital video formats, such as XDCAM, MPEG-IMX, and HDV.

■ DCT-based compression is used in some high-end digital video cameras, such as Sony’s HDCAM, and in high-quality intermediate codecs, such as Apple ProRes.

■ VC-3 based compression is software-only and allows for 8- or 10-bit 4:2:2 1080 and 720 HD video. Avid’s high-quality DNxHD intermediate codecs use VC-3 based compression.

■ MJPEG (Motion-JPEG) is an older codec family, based on the JPEG still-image compression algorithm, that uses intraframe compression. Older editing hardware, such as the Avid Meridien board, uses MJPEG.

■ JPEG 2000 is a newer codec, developed in 2000 and intended to replace the older MJPEG codec. It is known for maintaining very high image quality at high bit rates and is primarily used for file-based digital cinema projection.

■ MPEG-1 is an older delivery codec originally developed to compress video to fit onto CDs. It uses interframe compression, is not used in editing applications, and no cameras shoot MPEG-1. The popular MP3 audio format is derived from the MPEG-1 standard.

■ DV is an SD codec that was the high-end consumer video format of choice until a few years ago. It uses intraframe compression and is limited to a 4:3 aspect ratio and an image resolution of 720 × 480 pixels.