In filmmaking, video production, animation, and related fields, a frame is one of the many still images which compose the complete moving picture. The term is derived from the fact that, from the beginning of modern filmmaking toward the end of the 20th century, and in many places still up to the present, the single images have been recorded on a strip of photographic film; each image on such a strip looks rather like a framed picture when examined individually.
The term may also be used more generally as a noun or verb to refer to the edges of the image as seen in a camera viewfinder or projected on a screen. Thus, the camera operator can be said to keep a car in frame by panning with it as it speeds past.
When the moving picture is displayed, each frame is flashed on a screen for a short time (nowadays, usually 1/24, 1/25 or 1/30 of a second) and then immediately replaced by the next one. Persistence of vision blends the frames together, producing the illusion of a moving image.
The frame is also sometimes used as a unit of time, so that a momentary event might be said to last six frames, the actual duration of which depends on the frame rate of the system, which varies according to the video or film standard in use. In North America and Japan, 30 frames per second (fps) is the broadcast standard, with 24 frames/s now common in production for high-definition video shot to look like film. In much of the rest of the world, 25 frames/s is standard.
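The conversion from a frame count to real time depends only on the frame rate, as a minimal sketch illustrates (the rate names and the dictionary are illustrative, not part of any standard):

```python
# Common nominal frame rates; the "NTSC" entry uses the exact rational
# broadcast rate 30000/1001 rather than the rounded 29.97.
FRAME_RATES = {"film": 24.0, "PAL": 25.0, "NTSC": 30000 / 1001}

def frames_to_seconds(frames, fps):
    """Duration in seconds of a given number of frames at a given rate."""
    return frames / fps

# A six-frame event lasts a different real-time duration in each system:
for name, fps in FRAME_RATES.items():
    print(f"{name}: 6 frames = {frames_to_seconds(6, fps):.4f} s")
```

At 24 fps the six-frame event lasts exactly a quarter of a second; at the exact NTSC rate it lasts 0.2002 s.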
In systems historically based on NTSC standards, for reasons originally related to the color subcarrier of NTSC color TV systems, the exact frame rate is actually (3579545 / 227.5) / 525 = 29.97002616 fps. This leads to many synchronization problems which are unknown outside the NTSC world, and also brings about hacks such as drop-frame timecode.
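The derivation in the formula above can be reproduced directly: the color subcarrier frequency divided by the number of subcarrier cycles per scan line gives the horizontal line rate, and dividing that by the number of lines per frame gives the frame rate. A sketch, using the rounded subcarrier value from the text:

```python
# NTSC color subcarrier, rounded to the nearest hertz as in the text above
# (the exact value is 315/88 MHz = 3579545.4545... Hz).
SUBCARRIER_HZ = 3_579_545
CYCLES_PER_LINE = 227.5   # subcarrier cycles per scan line
LINES_PER_FRAME = 525

line_rate = SUBCARRIER_HZ / CYCLES_PER_LINE   # horizontal scan rate, ~15734.26 Hz
frame_rate = line_rate / LINES_PER_FRAME      # ~29.97002616 fps
print(f"{frame_rate:.8f}")
```

The result is slightly below the nominal 30000/1001 ≈ 29.97002997 fps only because the subcarrier value is rounded; with the exact 315/88 MHz subcarrier the two agree.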
In film projection, 24 fps is the norm, except in some special venue systems, such as IMAX, Showscan and Iwerks 70, where 30, 48 or even 60 frame/s have been used. Silent films and 8 mm amateur movies used 16 or 18 frame/s.
Historically, video frames were represented as analog waveforms in which varying voltages represented the intensity of light in an analog raster scan across the screen. Analog blanking intervals separated video frames in the same way that frame lines did in film. For historical reasons, most systems used an interlaced scan system in which the frame typically consisted of two video fields sampled over two slightly different periods of time. This meant that a single video frame was usually not a good still picture of the scene, unless the scene being shot was completely still.
With the dominance of digital technology, modern video systems now represent the video frame as a rectangular raster of pixels, either in an RGB color space or a color space such as YCbCr, and the analog waveform is typically found nowhere other than in legacy I/O devices.
Standards for the digital video frame raster include Rec. 601 for standard-definition television and Rec. 709 for high-definition television.
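The relationship between an RGB pixel and its YCbCr representation can be sketched with the Rec. 601 luma coefficients (Kr = 0.299, Kb = 0.114); the studio-range scaling constants below follow the standard, but the function itself is only an illustration, not a reference implementation:

```python
def rgb_to_ycbcr_601(r, g, b):
    """Convert normalized RGB components (each 0.0-1.0) to 8-bit
    studio-range YCbCr using Rec. 601 coefficients: Y spans 16-235,
    Cb and Cr span 16-240 and are centered on 128."""
    y  =  16 +  65.481 * r + 128.553 * g +  24.966 * b
    cb = 128 -  37.797 * r -  74.203 * g + 112.0   * b
    cr = 128 + 112.0   * r -  93.786 * g -  18.214 * b
    return round(y), round(cb), round(cr)

# Reference white maps to peak luma with neutral chroma:
print(rgb_to_ycbcr_601(1.0, 1.0, 1.0))  # (235, 128, 128)
```

Rec. 709 uses the same structure with different luma coefficients (Kr = 0.2126, Kb = 0.0722), which is why standard-definition and high-definition material must not be mixed without color-space conversion.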
Video frames are typically identified using SMPTE time code.
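A SMPTE time code labels each frame with hours, minutes, seconds, and a frame number within the second. A minimal sketch of the non-drop-frame case, assuming an integer frame rate (drop-frame timecode, needed at the exact NTSC rate, periodically skips label numbers and is not shown):

```python
def frame_to_timecode(frame, fps=25):
    """Non-drop-frame SMPTE-style timecode string (HH:MM:SS:FF) for a
    zero-based frame count at an integer frame rate."""
    ff = frame % fps                 # frame number within the second
    total_seconds = frame // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = total_seconds // 3600
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

print(frame_to_timecode(90000, fps=25))  # 01:00:00:00
```

At 25 fps, frame 90000 is exactly one hour of material, which is why one hour of 25 fps video contains 90000 frames.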