Term Glossary

1080i: Refers to an interlaced HDTV signal with 1080 horizontal lines and an Aspect ratio of 16:9 (1.78:1). All major HDTV broadcasting standards include a 1080i format, which has a Resolution of 1920×1080 pixels.

1080p: Refers to a progressive (non-interlaced) HDTV signal with 1080 horizontal lines and an Aspect Ratio of 16:9 (1.78:1) and a resolution of 1920×1080 pixels. 1080p is a common acquisition standard for non-broadcast HD content creation. It can be acquired at almost any frame rate.

16:9: Refers to the standard Advanced Television (HDTV) aspect ratio where height is 9/16 the width. It’s also commonly referred to as 1.78:1 or simply 1.78, in reference to the width being approximately 1.78 times the height.

2160p: Refers to a progressive Ultra High Definition video signal or display. It can be either 4096 x 2160 (1.90 aspect ratio) or 3840 x 2160 (1.78 aspect ratio). 4096 x 2160 is commonly referred to as “4K,” while 3840 x 2160 is commonly referred to as “QuadHD.” QuadHD is the proposed 4K broadcast standard.

24P: Refers to 24 progressive frames. This is a common digital acquisition frame rate as it mimics the film look. In practical application, 24P is actually acquired at 23.98 fps.

3:2 Pulldown: Refers to the process of matching the film frame rate (24 frames per second) to the frame rate of NTSC video (30 frames per second). In 3:2 pulldown, one frame of film is converted to three fields (1 1/2 frames) of video, and the next frame of film is converted to two fields (1 frame) of video. This cadence is repeated (3 fields, 2 fields, 3 fields, 2 fields, etc.). Over the course of one second of play time, an additional 12 video fields are created (the equivalent of six frames), closing the gap between 24 fps film and 30 fps video.
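
The cadence arithmetic can be sketched in a few lines of Python (the function name is illustrative):

```python
# A minimal sketch of the 3:2 cadence: film frames alternately become
# 3 fields and 2 fields of video.
def pulldown_fields(film_frames):
    return sum(3 if i % 2 == 0 else 2 for i in range(film_frames))

# 24 film frames become 60 fields (30 interlaced video frames):
assert pulldown_fields(24) == 60
# 12 fields (6 frames) more than a straight 2-fields-per-frame transfer:
assert pulldown_fields(24) - 2 * 24 == 12
```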

480i: Refers to interlaced video encoded with two 240-line fields, forming a final frame of 480 lines of vertical resolution. This is also commonly referred to as “NTSC” television.

4:2:2 Sampling: Refers to a method of chroma sub-sampling whereby less color-difference information (Cb and Cr, derived from the Blue and Red channels) is encoded in the video signal than luminance information. Since the human eye has lower acuity for color differences than for luminance detail, the luminance channel (Y) receives the highest level of sampling to maintain image detail in the decoded signal. The numerals 4:2:2 simply refer to sampling ratios: in this case, two samples each of Cb and Cr are taken for every four samples of luminance.
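
As a sketch, the widely used J:a:b reading of the notation (a region J pixels wide and two rows tall; a and b count the chroma samples in the first and second rows, per chroma channel) can be checked numerically:

```python
# Samples stored per pixel under J:a:b chroma sub-sampling.
def samples_per_pixel(j, a, b):
    luma = 2 * j             # full luma sampling across both rows
    chroma = 2 * (a + b)     # Cb + Cr samples across both rows
    return (luma + chroma) / (2 * j)

assert samples_per_pixel(4, 4, 4) == 3.0   # 4:4:4 -- full sampling
assert samples_per_pixel(4, 2, 2) == 2.0   # 4:2:2 -- one third saved
```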

4:4:4 Sampling: Refers to equal and full color sampling for Green, Red, and Blue. 4:4:4 provides the highest level color detail, and is used at the high-end of digital acquisition and mastering. 4:4:4 is commonly referred to as RGB video.

4:3: Refers to an aspect ratio with a height that’s 3/4 the width. It’s also commonly referred to as 1.33:1 or simply 1.33. The standard aspect ratio of Standard Definition TV, it is also the standard aspect ratio of 35mm film. In this application it is also referred to as “full aperture” 1.33.

5.1 Sound: A technical reference to the number of channels in a Discrete Surround signal (with .1 indicating that the sixth channel doesn’t include the full range of audio frequencies). The term is also commonly used to refer to equipment capable of playing back a 5.1 Surround signal. This audio encoding scheme was developed by Dolby Labs, and is commonly used worldwide in both professional and consumer applications.

720p: Refers to a progressive television signal with 720 horizontal lines and an Aspect Ratio of 16:9 (1.78:1). All major HDTV broadcasting standards include a 720p format, which has a resolution of 1280×720.

AAF: Refers to the Advanced Authoring Format, a cross-platform file format developed by an industry consortium (the AAF Association, now the Advanced Media Workflow Association) that enables the interchange of data between various multimedia tools.

AC-3: Refers to Dolby Laboratory’s third generation audio coding algorithm. It was developed to allow the use of lower data rates with a minimum of perceived degradation to the sound quality.

Academy Ratio: Refers to the standard 35mm 4:3 Academy Aperture, which calculates to an actual aspect ratio of 1.37:1.

ACES: Acronym for the Academy Color Encoding Specification (See IIF/ACES).

ALE: Refers to Avid Log Exchange—A file format (text file) for the interchange of video and audio metadata between platforms.

Aliasing: An undesirable distortion component that can arise in any digitally encoded information (picture or sound). Aliasing may appear as fine jagged edges on lateral lines in a video image, or high frequency oscillations in video images containing high frequency detail.

Aperture: Essentially refers to the amount of light allowed in to expose either the film or the image sensor in a film or video camera. This is controlled by a part of the camera called the diaphragm. Like the iris of the eye, the diaphragm controls the amount of light that reaches the film or imager. The size of the opening in the diaphragm is called the f-stop.

ARRIRAW: Refers to the uncompressed recording of all image data gathered by the Alexa digital cinema camera imager, prior to deBayer. Native resolution is 2880 x 1620 (16:9 Mode) or 2880 x 2160 (4:3 Mode). The RAW data stream from the camera can be recorded via T-Link to certified recorders such as the Codex Digital or Convergent Design 4:4:4.

Artifact: An artifact is a visible or audible result of a digital processing error. An example would be the blocking and noise commonly seen with the MPEG video compression used in broadcasting and DVD encoding.

ASA: The American Standards Association defined a method to determine the film speeds of black-and-white negative films in 1943. The ASA system was eventually superseded by the ISO film speed system. The arithmetic ASA speed scale continued to live on as the linear speed value of the ISO system.

ASC-CDL: Refers to a method and metadata scheme for the exchange of basic primary color grading information between platforms. The ASC CDL allows color corrections made with one device at one location to be applied or modified by other devices elsewhere using metadata. The format defines each function using numbers for the red, green, and blue color channels. A total of nine numbers comprise a single color decision. A tenth number, Saturation, applies to the R, G, and B color channels in combination. This specification was devised and submitted by the American Society of Cinematographers (ASC) Technology Committee, thus ASC-CDL.

Aspect Ratio: Refers to the physical dimensions of a film or video frame, or display. For example, a 1.33:1 (4×3) aspect ratio assigns a constant of 1 to the height (top to bottom); the width is then 1.33 times the height of the picture. This formula applies to all aspect ratios. Examples: 1.78:1 (16×9), 1.85:1 (theatrical), 2.35:1 (scope), etc.

Avid: Refers to Avid Technology, Inc., an American company specializing in video and audio post production technologies. They are best known for their non-linear audio and video editing and processing systems.

Bandwidth: In computer networks, bandwidth is often used as a synonym for data transfer rate—the amount of data that can be carried from one point to another in a given time period (usually a second). This kind of bandwidth is usually expressed in bits (of data) per second (bps). 
In broadcast communications, bandwidth refers to the width of the range (or band) of frequencies that an electronic signal uses on a given transmission medium. In this usage, bandwidth is expressed in terms of the difference between the highest-frequency signal component and the lowest-frequency signal component and is normally measured in megahertz (MHz).

Bayer Filter: A Bayer filter is a color filter array (CFA) for arranging RGB color filters on a square grid of an electronic imager’s photo-sites (photo-sensors). Bayer filters are commonly used in single-chip (CMOS APS) digital cameras, camcorders, and digital and file-based cinema cameras to create color images. The filter grid is typically 50% green, 25% red, and 25% blue.
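
The tile proportions can be verified with a trivial sketch (RGGB ordering is one common arrangement):

```python
# The repeating 2x2 Bayer tile: green occupies half of the photo-sites,
# red and blue a quarter each, tiled across the whole sensor.
tile = [["R", "G"],
        ["G", "B"]]
flat = [c for row in tile for c in row]
assert flat.count("G") / len(flat) == 0.50
assert flat.count("R") / len(flat) == 0.25
assert flat.count("B") / len(flat) == 0.25
```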

Bit: The shortened form of “binary digit” (0 or 1). A bit is the smallest unit of information.

Bit Depth: Refers to the number of bits used per pixel or per color channel. Bit depth determines the number of shades of gray or variations of color that can be displayed by a computer monitor. For example, a monitor with a bit depth of 1 can display only black and white. Most monitoring systems are 10 bit, which allows up to 1024 different variations per channel in color and grayscale. Were a monitor capable of 16 bit display, as in the ACES format, 65,536 different variations would be possible.
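
The levels-per-bit-depth relationship is simply a power of two:

```python
# Distinct levels representable per channel at a given bit depth.
def levels(bit_depth):
    return 2 ** bit_depth

assert levels(1) == 2        # black and white only
assert levels(10) == 1024    # common professional monitoring
assert levels(16) == 65536   # e.g. 16-bit encodings such as ACES
```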

Bitrate: Refers to the volume of data that can be serially streamed in one second.

Bitstream: Refers to a series of bits or continuous serial transmission of digital data.

Blu-ray: Refers to an optical disc standard which uses a blue-violet laser instead of a red laser. It is most commonly used for Blu-ray video disc mastering and playback. The characteristics of the blue-violet laser allow for greater data volume to be inscribed to the disc surface.

BPS: See Bitrate.

Broadcast Wave File (BWF): Refers to an extension of the popular Microsoft WAVE audio format and is the recording format of most file-based non-linear digital recorders used for motion picture, radio and television production. Broadcast Wave files specify the format of metadata, allowing audio processing elements to identify themselves, document their activities, and permit synchronization with other recordings. This metadata is stored as extension chunks in a standard digital audio WAV file (See Waveform Audio File Format).

Byte: A collection of 8 bits that comprises a data word.

CCD: Refers to Charge-Coupled Device imagers. Highly efficient and widely used in video camera manufacturing for decades, CCD imaging has largely been replaced by CMOS imagers, which are cheaper to manufacture and typically provide greater photosite (light sensor) counts (see CMOS).

CDL: Color Decision List (See ASC-CDL).

Checksum: A checksum is a simple error-detection method in which each file is accompanied by a numerical value based on the number of bits in the message and the value of those bits. The receiving station re-calculates that numerical value from the bits it has received and compares the two, checking for any errors that may have been introduced during a file ingest, file copy, or any other transmission or storage operation, including LTO archive. If the numerical values match, no error has occurred.
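
In practice this is often done with a cryptographic digest rather than a simple bit count; the sketch below uses MD5, a common on-set choice (the helper name is hypothetical):

```python
import hashlib

# Hash a file in fixed-size chunks; two files with matching digests
# are, for all practical purposes, bit-for-bit identical.
def md5sum(path, chunk_size=1 << 20):
    digest = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()
```

A copy would then be verified by comparing `md5sum(source)` against `md5sum(copy)` after the transfer.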

Chroma: Refers to the purity of a color, separate from its luminance value; the intensity of a distinctive hue and the saturation of a color.

Clone: Refers to the duplication of a digital tape or file where a bit for bit reproduction is created as opposed to the re-encoding of a rendered or decoded playback.

CMOS: Refers to an image sensor consisting of an integrated circuit containing an array of pixel sensors, each pixel containing a photodetector and an active amplifier. There are many types of active pixel sensors including the CMOS APS used most commonly in DSLRs and digital video cameras. Such an image sensor is produced by a CMOS process (and is hence also known as a CMOS sensor), and has emerged as an alternative to charge-coupled device (CCD) image sensors.

Codec: Stands for Coder/Decoder. A codec is a device or software program capable of encoding or decoding (compressing/decompressing) a digital data stream or signal. A codec encodes a data stream or signal for transmission, storage or encryption, or decodes it for playback or editing.

Color Space: Refers to how color is encoded in a video signal. The most common color spaces are RGB and YUV, with variants of each considered separate color spaces (See RGB, and YUV). A defined color space will also include a color Gamut which refers to a subset of colors reproducible within a given color space.

Color Temperature: Refers to a characteristic of visible light as it applies to lighting. Color temperature is conventionally stated in units of absolute temperature, the “Kelvin”, having the unit symbol “K.” Color temperatures over 5,000 Kelvin are called cool colors (bluish-white), while lower color temperatures (2,700–3,000 K) are referred to as warm colors (yellowish-white through red).

Composite Video: Refers to the lowest quality format for Standard Definition (SD) analog video recording and transmission.

Compression: Compressing digital data means removing redundancies so that it takes up less bandwidth or record space. There are two main forms of compression, lossy and lossless. Lossless compression removes only redundant data, so the signal can be returned to its original complete state. This method typically results in higher data rates. Lossy compression, however, sacrifices more data to produce lower data rates for recording and transmission purposes. MPEG-4 is an example of a highly efficient lossy compression standard that can maintain high quality images while tremendously reducing the size of a video signal.

Compression Artifacts: Refers to errors introduced by the compacting of a digital signal, particularly when a high compression ratio is used. Small errors may appear when the signal is decompressed. These errors are known as artifacts, or unwanted defects. The artifacts may resemble noise (or edge busyness) or may cause parts of the picture, particularly fast moving portions, to display blocking or other distortions, which may render the image incomplete.

Conform: During conform, the picture from an edited sequence is re-assembled at full or ‘online’ resolution. This final assembly is checked against a video copy of the offline edit to verify that the edits are correct and frame-accurate. A conform can also be termed “online.”

Container: Refers to a file “wrapper” that allows different types of codecs to be handled by a standardized playback device such as the QuickTime player or the Windows Media Player.

Database: Refers to a collection of data and/or metadata, which is organized for rapid search and retrieval, by a computer. Databases are structured to facilitate storage, retrieval, modification, and deletion of data. A database consists of a file or set of files that can be broken down into records, each of which consists of one or more fields. Fields are the basic units of data storage.

DCP encoding: A Digital Cinema Package (DCP) is the standard convention accepted worldwide for distributing and projecting movies digitally. (Used for film festivals or distributing your film to theaters nationwide.)

De Bayering: Refers to the mathematical algorithm and process for creating finished picture pixels from the Green, Red, and Blue sub-samples created by the photo-sites in a single chip imager using a Bayer Filter.

Digital Rights Management (DRM): Refers to a class of access control technologies (primarily software) that are used by content creators and distributors as well as hardware manufacturers, publishers, copyright holders, and individuals with the intent to limit access or use of copyrighted content to prescribed limitations.

Digital Video Interface (DVI): The DVI specification refers to a uniform connector that can accommodate both digital and analog video signals. DVI has three subsets: DVI-A, for analog signals; DVI-D, for digital signals; and DVI-I (integrated), for both analog and digital signals.

Digitizing: Refers to the process of creating digital files from physical elements such as photographs or digital and analog videotape sources.

DIT: An acronym for Digital Imaging Technician. The DIT is typically responsible for: Set-up and prep of digital cameras and on-set monitoring, on-set color management assistance to the Director of Photography, and on-set media management assuming no data manager is assigned to the set.

DPX File: Refers to a standard uncompressed file format designed for high quality “Digital Picture Exchange” (DPX). This file format is most commonly used when scanning film for restoration and archiving as digital files. It is also commonly used for rendering RAW files from some digital motion picture cameras (e.g. RED, Arri, F65) for final conform and finishing.

Dual Link Interface: Refers to the use of two HD-SDI output/input links (link A / link B) to transfer full bandwidth 4:4:4 HD signals from a 4:4:4 imaging or playback source to a 4:4:4 record or display device.

DVCPRO: Refers to Panasonic’s proprietary DV Codec. This codec has been released in 25Mbps, 50Mbps, and 100Mbps in 4:1:1 and 4:2:2 sampling.

EDL: An Edit Decision List (EDL) is used to carry over editorial decisions from the offline editorial platform (i.e. Avid or Final Cut Pro) to the online conform process. It may also be used to identify and pull specific shots from a network or archival tape for VFX or Marketing materials.

Encode: The process of converting an image or audio source to a specific digital file format for the purpose of internet streaming or record to physical media.

Encryption: The process of encoding media files or communications in such a way as to prevent unauthorized access to the content. In an encryption scheme, the message or information is encrypted using an encryption algorithm, which renders the content useless without the appropriate decryption key.

Ethernet: The most widely used data link protocol in the world. Traditional Ethernet connections could operate at speeds up to 10 Mbps. Later standards increased the speed to 100 Mbps (Fast Ethernet) and 1,000 Mbps (Gigabit Ethernet).

F-stop (Focal Stop): Refers to the international standard sequence of numbers that express relative camera aperture. F-stop is the “lens focal length” divided by the “effective aperture diameter.” The smaller the F-stop number, the greater the amount of light that passes through the lens. Each change of F-stop halves or doubles the image brightness as you step up or down.
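
The formula can be sketched directly (the 50mm/25mm figures are hypothetical):

```python
import math

# F-stop = focal length / effective aperture diameter.
def f_stop(focal_length_mm, aperture_diameter_mm):
    return focal_length_mm / aperture_diameter_mm

assert f_stop(50, 25) == 2.0   # a 50mm lens with a 25mm pupil is f/2
# One full stop narrows the diameter by sqrt(2), halving the light:
assert round(f_stop(50, 25 / math.sqrt(2)), 1) == 2.8
```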

Field: One half of an “interlaced” video frame. Each video frame is composed of two video fields. Each field contains either the odd or the even video line information. When displayed at sixty fields per second (30fps), the two fields of each frame combine into complete video images. See also Frame.

File Size: The amount of storage space a file takes up. Measured in bits or bytes: kilo-, mega-, giga-, tera-, etc.

FireWire: A widely used data interface in computers, camcorders, and portable disc drives, with typical transfer speeds of up to 800Mbps using FireWire 800.

Flash Video File Format (.flv): A video file format used to deliver video over the Internet.

Frame: May refer to a film or video frame: 24 individual film frames equate to one second of run time. 24 individual “progressive” video frames equate to one second of run time. 30 individual interlaced video frames equate to one second of run time.

Framerate: Defines how many pictures (frames) one second of film, video, or audio contains. The common abbreviation for framerate is fps—frames per second.

Gamma: Refers to non-linear luminance encoding in a video signal. Display devices do not reproduce intensity in linear relation to their input voltage, so an inverse gamma curve is applied to video images at encoding; the display’s own response then cancels the encoding curve, yielding correct luminance.
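
A simplified power-law sketch of the idea (real transfer functions such as Rec. 709 add a linear segment near black, and the 2.4 exponent here is an assumption):

```python
# Pure power-law gamma model: encoding applies the inverse of the
# display's natural response, so the round trip is linear again.
GAMMA = 2.4

def encode(linear):                  # applied at the camera/encoder
    return linear ** (1 / GAMMA)

def decode(signal):                  # the display's natural response
    return signal ** GAMMA

mid_grey = 0.18                      # mid-grey in linear light
assert abs(decode(encode(mid_grey)) - mid_grey) < 1e-12
```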

Gigabit: The Gigabit is 10^9 bits, or 1,000,000,000 bits, or 1000 megabits. A Gigabit is equal to 125 Megabytes.
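
The decimal unit relations used throughout this glossary reduce to a divide-by-eight:

```python
# Decimal (SI) prefixes: kilo = 10**3, mega = 10**6, giga = 10**9,
# with 8 bits to the byte.
BITS_PER_BYTE = 8

def bits_to_bytes(bits):
    return bits // BITS_PER_BYTE

assert bits_to_bytes(10 ** 9) == 125 * 10 ** 6   # 1 gigabit = 125 megabytes
assert bits_to_bytes(10 ** 6) == 125 * 10 ** 3   # 1 megabit = 125 kilobytes
assert bits_to_bytes(10 ** 3) == 125             # 1 kilobit = 125 bytes
```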

H.264: Refers to a form of MPEG-4 video compression encoding, pioneered for the purpose of providing good quality video at roughly half the bit rate of earlier standards such as MPEG-2. H.264 is currently one of the most common compression formats used for recording and distribution of high definition video. Because of its compression efficiency, it is widely used for video streaming and WiFi applications, but is best known for its use in Blu-ray disc encoding.

HDCAM SR: Refers to a Sony tape recording format that utilizes MPEG 4 Simple Studio Profile (SStP) encoding for 4:2:2 and 4:4:4 sampled video recording. Recordings can be made in High Quality (HQ) or Standard Quality (SQ), which are recorded at either 880Mbps or 440Mbps, respectively. HQ recordings are considered mathematically lossless recording, while SQ recordings are considered visually lossless recording.

High-Definition Multimedia Interface (HDMI): An interface used primarily in consumer electronics for the transmission of uncompressed high definition video, up to 8 channels of audio, and control signals, over a single cable. HDMI is the de facto standard for HDTV displays, Blu-ray Disc players, and other HDTV electronics.

HD-SDI: Refers to High Definition Serial Digital Interface. This is a 10 bit interface with a capacity of 1.5Gbps transfer speed. It is the standard interface for all 4:2:2 HD video outputs/inputs in digital camera monitoring outs, HD video record devices, and HD displays.

Interlaced: Refers to the frame structure of 30fps video. See also Field, Frame.

IIF/ACES: Refers to the Motion Picture Academy’s “Image Interchange Format” and “Academy Color Encoding Specification.” Employing Open EXR file format and 16 bit color sampling, its intention is to create a “future proof” standard for file interchange and color encoding that would be applied to acquisition, post production, asset archive, and the manufacture of professional acquisition and post production tools, going forward.

ISO: ISO also refers to a number system used to express the light sensitivity of film stocks and digital camera imagers. In traditional (film) photography ISO (or ASA) was the indication of how sensitive a film was to light. It was measured in numbers (e.g. 100, 200, 400, 800 etc.) The lower the number the lower the light sensitivity of the film and the finer the film grain. In Digital Photography, ISO measures the sensitivity of the image sensor and the same principles apply; the lower the number the less sensitive the camera is to light. Higher ISO settings are generally used in darker situations to get faster shutter speeds.

ISO: Also refers to a CD or DVD image (not picture) file with an extension of “.iso”. The extension comes from the full name of the CD-ROM and DVD-ROM file system specification, ISO 9660.

JPEG: JPEG stands for “Joint Photographic Experts Group.” JPEG images are compressed image files. Unlike GIF, they have no color restrictions, but they do not support animation. And unlike PNG files, they are lossy, supporting a wide range of compression ratios.

Kelvin: Refers to a measure of color temperature. The Kelvin is often used to measure the color temperature of light sources. Color temperature is based upon the principle that a black body radiator emits light whose color depends on the temperature of the radiator. Black bodies with temperatures below 4000 K appear reddish whereas those above about 7500 K appear bluish.

Keyframe: A position on a video timeline at which an event occurs.

Kilobit (Kb): A Kb is equal to 10^3 bits, or 1000 bits. There are 125 Bytes in one Kilobit.

Kilobyte (KB): A KB is equal 10^3, or 1,000 Bytes.

Linear Video: Is commonly understood to refer to non-logarithmic video signals that have been processed for viewing on standard linear display devices. The most common encoding for this purpose is ITU-R Rec. 709, which defines image resolution, frame rate, gamma, luminance, and color space for standardized monitoring.

Log Video: Refers to the process of logarithmic color space encoding in video signals. Logarithmic color encoding is intended to more closely mimic the natural light response of film emulsions in digital devices. Images encoded in this way are referred to as “non-linear” or “Log,” and are typically used with certain film-specific, or digital acquisition file formats and digital recordings. In logarithmic color space the relationship between a pixel’s digital value and its visual brightness does not remain constant (linear) across the full gamut of black to white.

LTO: Stands for Linear Tape Open. LTO is a magnetic tape data storage format designed for long-term storage (15 to 30 years).

Look-Up Table (LUT): Refers to a matrix of color data that is searched in order to change a source set of color coordinates to a selected destination set of color coordinates. For example, LUTs are commonly used to change color space when playing back Log encoded video on standard display devices.
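
A minimal 1D sketch of the lookup idea (real grading LUTs are usually 3D, indexed by R, G, and B together, and interpolate between entries):

```python
# Pre-compute the transform once, then map each input code value to
# its nearest table entry. The gamma curve here is illustrative.
def build_gamma_lut(size=1024, gamma=2.4):
    return [(i / (size - 1)) ** (1 / gamma) for i in range(size)]

def apply_lut(value, lut):          # value normalized to [0.0, 1.0]
    return lut[round(value * (len(lut) - 1))]

lut = build_gamma_lut()
assert apply_lut(0.0, lut) == 0.0 and apply_lut(1.0, lut) == 1.0
assert lut == sorted(lut)           # the mapping is monotonic
```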

Luminance: Refers to linear (non-gamma corrected) luminosity information, although it’s sometimes used to refer to both linear (analog) and gamma corrected (digital) luminosity.

Luminosity: Refers to the brightness of a color. Most of the luminosity perceived by the human eye is in the color green, and conversely most of what we perceive in the color green is luminosity.

Megabit: A Megabit is 10^6 bits, or 1,000,000 bits or 1000 kilobits. A Megabit is equal to 125 Kilobytes.

Megabyte: A Megabyte is 10^6, or 1,000,000 Bytes (8 bits to a Byte).

Megapixel: A Megapixel is the equivalent of one million pixels, and is normally used to refer to the resolution capabilities in digital cameras and display devices.

.MOV: The .mov file is a movie file format sometimes referred to as the Apple QuickTime Movie Format. .mov files are most closely associated with the QuickTime player. The .mov format is also the basis for the MPEG-4 format.

MPEG: An acronym for the Moving Picture Experts Group, the MPEG compression format is widely used in DVD encoding, broadcast, and web applications. There are three basic formats of MPEG: MPEG-1, the first generation audio and video compression format, resulted in spinoff formats such as MP3; MPEG-2 is widely used in DVD encoding and broadcast applications; MPEG-4 was designed specifically for low-bandwidth (less than 1.5 Mbit/sec bitrate) video/audio encoding purposes. This is widely used in web applications and streaming. There are also a number of other MPEG-4 versions which utilize bitrates of up to 880 Mbps. Two of these higher formats are H.264, and Simple Studio Profile (SStP).

Metadata: Data that describes or gives information about other data. In the realm of post-production metadata files refer to video assets. Metadata files often include (but are not limited to) information like file name, clip name, timecode, keycode, frame rate, resolution, creation date, file history, etc. Metadata can also include color and editorial decisions as represented by CDL, EDL, and ALE files.

Mezzanine File: Refers to a lightly compressed archival file (finished product) that is suitable for use as a source file in distribution. Mezzanine files are commonly referred to as “working archival files,” which service cross-platform distribution requirements.

NAS: Refers to Network Attached Storage. NAS systems provide file-level computer data storage and connect to multiple networked computers or platforms for data access. The benefits of NAS systems, compared to file servers, include faster data access, easier administration, and simple configuration.

Non-Linear Editing (NLE): Non-Linear Editing is a technique used in digital systems where a digital source (such as digitized film, video or audio) is used to create an edited version, not by rearranging the source file or materials, but by creating a detailed list of edit points (ins, outs, fades, etc.). Editing software reads the sequence and creates the Edit Decision List. Film-based shows may also require a cut list or the sequence itself.

Offline Editing: Refers to the film and television editorial process where proxy elements (tape or film) are used in place of the original acquisition materials. Most offline editorial processes are accomplished in AVID, Final Cut Pro or Premiere editing systems, using low resolution proxy files. Upon completion of the offline editorial process, editorial decisions are applied to the original acquisition files, tapes, or film elements, in the final conform/online process.

Online: Refers to the compilation of select original digital acquisition elements as dictated by an Edit Decision List (EDL). This final conform of select elements is checked against a video copy of the offline edit to verify that the edits are correct and frame-accurate. An online can also be termed a “conform.”

Open EXR: Refers to a high dynamic-range image file format developed by Industrial Light & Magic for use in computer imaging applications. It is commonly used for VFX work and is the preferred file format for Academy ACES.

Oversampling: Commonly refers to sampling image data at twice the required output resolution. This results in what is known as a Nyquist benefit. The Nyquist-Shannon Sampling Theorem essentially says: when sampling a signal (e.g., converting from an analog signal to digital), the sampling frequency must be greater than twice the bandwidth of the input signal in order to be able to reconstruct the original perfectly from the sampled version.
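
The theorem's consequence can be demonstrated with a short aliasing sketch:

```python
import math

# Aliasing: a 9 Hz sine sampled at only 10 Hz (the theorem would demand
# more than 18 Hz) yields the very same samples as a 1 Hz sine with
# inverted phase -- the high frequency masquerades as a low one.
fs = 10                                        # sampling rate, Hz
t = [n / fs for n in range(20)]                # two seconds of samples
hi = [math.sin(2 * math.pi * 9 * x) for x in t]
lo = [math.sin(2 * math.pi * 1 * x) for x in t]
assert all(abs(a + b) < 1e-9 for a, b in zip(hi, lo))
```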

P2 Card: A type of memory used almost exclusively in Panasonic camcorder and video record products.

Petabit (Pb): A Petabit is 10^15 bits, or 1,000,000,000,000,000 bits or 1000 Terabits. A Petabit is equal to 125 Terabytes.

Petabyte (PB): The PB is 10^15, or 1,000,000,000,000,000 Bytes.

Pixel: Refers to a “Picture Element”. A pixel is a physical, addressable, point in a video image. It is the smallest addressable element in a display device. Pixels in a display typically refer to “finished picture pixels.” A finished picture represents an average of red, green, and blue color picture samples taken from the original capture device. The intensity of each pixel is variable. Individual photo-sites (photo sensitive receptors) in a video camera imager are often referred to as pixels, even though they are only a contributing element to a finished display picture pixel.

Progressive: Refers to progressive images, which contain and display all scan lines of a video image for the duration of one frame cycle. See also Frame, Interlaced.

Prosumer: Refers to a class of video, audio, and other multimedia tools that have features required by professionals, but are still in the price range of hobbyists.

Proxy File: Refers to a file which has been converted (transcoded) from a larger original file format to an inferior (smaller file size “Proxy”) format for portability and platform interchange. Proxy files are typically used in editorial platforms, streaming media, and distribution media. To be useful, a proxy file must be exactly the same as the source it is representing, in frame rate and run time.

Quad HD: Refers to a 2160p progressive Ultra High Definition video scheme. Quad HD pixel resolution is 3840 x 2160. Its aspect ratio is 16:9 (1.78:1). It is called QuadHD because it has exactly four times as many pixels as standard HDTV (1920×1080) with the same aspect ratio.
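
The "four times" claim is easy to verify:

```python
# Quad HD arithmetic: exactly four full HD rasters, same 16:9 shape.
quad_hd = (3840, 2160)
full_hd = (1920, 1080)
assert quad_hd[0] * quad_hd[1] == 4 * full_hd[0] * full_hd[1]
assert quad_hd[0] / quad_hd[1] == full_hd[0] / full_hd[1] == 16 / 9
```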

QuickTime: Refers to a multi-media file format that supports most encoding formats, including Cinepak, JPEG, and MPEG. QuickTime is competing with a number of other standards, including AVI and ActiveMovie. In February 1998, the ISO standards body adopted QuickTime as the basis for the MPEG-4 standard.

R3D: Refers to REDCODE RAW, a proprietary audio/video file format created by RED Digital Cinema Camera Company. It is used as the native recording format of RED digital cameras.

RAID: Refers to “Redundant Array of Independent Disks.” RAID storage combines multiple disk drive components into a logical storage unit. There are different RAID levels ranging from RAID “0” to RAID “6.” RAID 6 is considered the most reliable, providing the highest level of practical redundancy.

Rec. 709: Refers to ITU-R Recommendation BT.709, which is an international specification governing and standardizing all aspects of the high-definition television format.

REDCINE-X: Refers to a RED Digital Cinema Camera Co. software application for viewing R3D files natively on a Mac or Windows system. The application also provides the ability to transcode R3D files to common self-contained formats (e.g. QuickTime, Avid MXF, and DPX), and adjust or manipulate RAW metadata.

RED ROCKET-X Card: A graphics accelerator card developed by RED Digital Cinema Camera Co. for accelerated rendering and transcoding of R3D files at various resolutions. Multiple RED ROCKET-X cards may be used in parallel to boost system performance.

RGB: RGB is an acronym for Red, Green, and Blue. It also refers to an RGB color space. RGB color space is defined by the three chromaticities of the Red, Green, and Blue additive primaries, and can produce any chromaticity that is in the color space triangle defined by those primary colors. The complete specification of an RGB color space also requires a white point chromaticity and a gamma correction curve.

Resolution: Refers to the density of lines or total pixels that make up an image. Resolution determines the detail and quality in the image. It is also a measure of a camera or video system’s ability to reproduce detail, or the amount of detail that can be seen in an image. It is often expressed as a simple numeric. For example, HDTV resolution is 1920×1080 (1920 pixels per horizontal scan line x 1080 scan lines). Higher resolution scanned images, such as from film, might be identified as 2K or 4K, which essentially identifies a rounded approximation of the horizontal pixel count, times the number of scan lines in the display or image file. Examples: Digital cinema resolutions for 2K and 4K equal 2048×1080 and 4096×2160, respectively.

RMD: Refers to RED Metadata. The file format is used to save specific user-created looks, such as color, crop, etc., to R3D files.

Sample-Rate: Refers to the Sampling Frequency of an Analog to Digital conversion, that is, how frequently the analog source is sampled. Sample-rate is measured in samples per second, where 1,000 samples per second is equivalent to a sampling frequency of 1 kilohertz (1 kHz).
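As a quick illustration, a minimal Python sketch using 48 kHz (a common professional audio sample rate; the 440 Hz tone is just an example signal):

```python
import math

sample_rate = 48_000   # 48 kHz: 48,000 samples per second
duration = 2.0         # seconds of audio
num_samples = int(sample_rate * duration)
print(num_samples)     # → 96000 samples in two seconds

# Each sample is the analog signal's amplitude at one instant, e.g. a
# 440 Hz sine tone measured 48,000 times per second:
tone = [math.sin(2 * math.pi * 440 * n / sample_rate)
        for n in range(num_samples)]
```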

Transcode: The process of converting one form of encoded data file to another. Transcoding is commonly used in the creation of proxy media for editorial and distribution purposes. Virtually all original digital acquisition files are transcoded at some point in the process of content creation and distribution.

Upload: Pushing a digital file across a network from a local computer to a remote computer.

USB 3.0: A computer interface with extremely fast transmission speeds of up to 5 Gbps. The previous generation (USB 2.0) had a transmission speed of only 480 Mbps.
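The practical difference between the two generations is large; a rough back-of-the-envelope comparison using the theoretical maximum rates (real-world throughput is lower due to protocol overhead):

```python
file_size_bits = 10 * 8 * 10**9   # a 10 GB file, expressed in bits

usb2_seconds = file_size_bits / (480 * 10**6)   # USB 2.0: 480 Mbps
usb3_seconds = file_size_bits / (5 * 10**9)     # USB 3.0: 5 Gbps

print(round(usb2_seconds))  # → 167 (nearly three minutes)
print(round(usb3_seconds))  # → 16
```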

Visual Aspect Ratio: See Aspect Ratio.

Waveform Audio File Format (WAV): A digital audio recording file format that is compatible with Windows, Macintosh, and Linux operating systems. WAV files are commonly used as source audio for MP3 encoding.

White Point: Refers to a set of red, green, and blue values or chromaticity coordinates that serve to define the color “white” in image capture, encoding, or reproduction. Depending on the application, different definitions of white are needed to give acceptable results. For example, photographs taken indoors may be lit by incandescent lights, which are relatively orange compared to daylight. Defining “white” as daylight will give unacceptable results when attempting to color-correct a photograph taken with incandescent lighting.
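One common correction technique scales each channel so that the captured reference white maps to neutral. A minimal sketch (the RGB values are made up for illustration, not measured data):

```python
# Hypothetical RGB of a white card shot under orange incandescent light,
# on a 0.0-1.0 scale: red is strong, blue is weak.
captured_white = (1.00, 0.80, 0.55)

def white_balance(pixel, white):
    """Scale each channel so that `white` maps to neutral (1, 1, 1)."""
    return tuple(c / w for c, w in zip(pixel, white))

# The reference white itself becomes neutral after correction.
print(white_balance(captured_white, captured_white))  # → (1.0, 1.0, 1.0)
```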

Wrapper: A wrapper is data that precedes or frames the main data, setting it up so that another program can process it successfully. With video and audio data, wrappers allow different types of codecs to be handled by a standardized playback device such as the QuickTime player or the Windows Media Player. A wrapper can also be termed a “container.”

Windows Media Video (.wmv): An audio and video file encoded for use with Windows Media Player.

YUV: Refers to a Color Space typically used as part of a color image pipeline. It encodes a color image or video with human perception in mind, allowing reduced bandwidth for the chrominance components; transmission errors and compression artifacts are thus masked more effectively by human perception than with a “direct” RGB representation.
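The luma (brightness) component at the heart of such encodings is a weighted sum of the red, green, and blue values. With the Rec. 709 coefficients, the weighting can be sketched as:

```python
def rgb_to_luma_709(r, g, b):
    """Rec. 709 luma (Y') from gamma-corrected R'G'B' on a 0.0-1.0 scale.
    The weights reflect the eye's much greater sensitivity to green."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Pure white yields full luma; a saturated blue of equal code value
# contributes only a small fraction of it.
print(rgb_to_luma_709(1.0, 1.0, 1.0))
print(rgb_to_luma_709(0.0, 0.0, 1.0))
```

The chrominance components (U and V) carry the color-difference signals, which can then be subsampled (see 4:2:2 Sampling) with little visible loss.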