Bullet time (also known as frozen moment, dead time, flow motion, or time slice) is a visual effect or visual impression of detaching the time and space of a camera (or viewer) from those of its visible subject. It is a depth-enhanced simulation of variable-speed action and performance found in films, broadcast advertisements, and real-time graphics within video games and other special media. It is characterized both by its extreme transformation of time (slow enough to show normally imperceptible and unfilmable events, such as flying bullets) and of space (the camera angle, the audience's point of view, can move around the scene at normal speed while events are slowed). This is almost impossible with conventional slow motion, as the physical camera would have to move implausibly fast; the concept implies that only a "virtual camera", often operating within a computer-generated environment such as a virtual world or virtual reality, would be capable of "filming" bullet-time moments. Technical and historical variations of the effect have been referred to as time slicing, view morphing, temps mort (French: "dead time"), and virtual cinematography.
The term "bullet time" was first used in the original script of the 1999 film The Matrix, and later in reference to the slow-motion effects in the 2001 video game Max Payne. Since its introduction via the Matrix films, the term has become a common expression in popular culture.
The bullet time effect was originally achieved photographically with a set of still cameras surrounding the subject. The cameras are fired sequentially, or all at the same time, depending on the desired effect. Single frames from each camera are then arranged and displayed consecutively to produce an orbiting viewpoint of an action frozen in time or unfolding in hyper-slow motion. This technique suggests the limitless perspectives and variable frame rates possible with a virtual camera; however, when performed with a physical camera array, the viewpoint is limited to the cameras' pre-assigned paths.
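The two firing modes described above can be sketched as a simple timing calculation. This is an illustrative sketch only; the camera count, inter-camera interval, and playback frame rate below are assumed values, not figures from any actual rig.

```python
# Sketch of trigger scheduling for a still-camera bullet-time array.
# frozen=True fires every camera at once (frozen-moment orbit);
# frozen=False staggers the cameras (hyper-slow-motion orbit).

def trigger_times(num_cameras: int, interval_s: float, frozen: bool) -> list:
    """Return the firing time in seconds for each camera in the array."""
    if frozen:
        return [0.0] * num_cameras          # simultaneous exposure
    return [i * interval_s for i in range(num_cameras)]

# Hypothetical 120-camera rig fired 1/600 s apart: the whole capture
# spans under 0.2 s of real time, yet the 120 resulting frames play
# back as 5 s of footage at 24 fps.
times = trigger_times(120, 1 / 600, frozen=False)
print(times[-1])        # real-time span up to the final exposure (~0.198 s)
print(len(times) / 24)  # playback duration at 24 fps (5.0 s)
```

The contrast between the two printed numbers is the essence of the effect: a fraction of a second of real action stretched into several seconds of screen time, with each successive frame seen from a different position on the rig's path.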
In The Matrix, the camera path was pre-designed using computer-generated visualizations as a guide. The cameras were arranged behind a green or blue screen on a track and aligned with a laser targeting system, forming a complex curve through space. They were then triggered at extremely close intervals, so the action continued to unfold in extreme slow motion while the viewpoint moved. The individual frames were then scanned for computer processing: using sophisticated interpolation software, extra frames could be inserted to slow the action further and smooth the motion, and frames could also be dropped to speed up the action. This approach provides greater flexibility than a purely photographic one. The same effect can also be simulated using pure CGI, motion capture, and other approaches.
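The interpolation step can be illustrated with a minimal sketch, assuming frames are grayscale images stored as nested lists of pixel values. The simple linear blend below is a stand-in for demonstration; production systems use far more sophisticated, motion-aware (e.g. optical-flow-based) interpolation rather than a plain cross-fade.

```python
# Minimal sketch of frame interpolation for retiming footage.
# Each frame is a list of rows of grayscale pixel values.

def blend(frame_a, frame_b, t):
    """Linearly interpolate two equally sized frames at fraction t in [0, 1]."""
    return [
        [(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

def retime(frames, factor):
    """Insert (factor - 1) blended frames between each captured pair,
    slowing the action by roughly `factor`."""
    out = []
    for a, b in zip(frames, frames[1:]):
        for k in range(factor):
            out.append(blend(a, b, k / factor))
    out.append(frames[-1])  # keep the final captured frame
    return out

# Two tiny 1x2 frames retimed 4x: 2 captured frames become 5 output frames.
frames = [[[0.0, 0.0]], [[4.0, 8.0]]]
slow = retime(frames, 4)
print(len(slow))   # 5
print(slow[2])     # [[2.0, 4.0]] -- the halfway blend
```

Dropping frames to speed up the action is the inverse operation: keeping only every nth frame of the captured sequence.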
Bullet time evolved further through The Matrix series (1999–2003) with the introduction of high-definition computer-generated approaches such as virtual cinematography and universal capture. Universal capture, a machine-vision-guided system, was the first motion-picture deployment of an array of high-definition cameras focused on a common human subject (the actor playing Neo) to create volumetric photography. As with bullet time, the subject could be viewed from any angle; at the same time, the depth-based media could be recomposed and spatially integrated within computer-generated constructs. It moved past the visual concept of a virtual camera to become an actual virtual camera. Virtual elements within the Matrix trilogy utilized state-of-the-art image-based rendering techniques pioneered in Paul Debevec's 1997 short film The Campanile Movie and adapted for The Matrix by George Borshukov, an early collaborator of Debevec. The virtual camera methodologies pioneered within the Matrix trilogy have often been credited as fundamentally contributing to the capture approaches required for emerging virtual reality and other immersive experience platforms.
For many years, it has been possible to use computer vision techniques to capture scenes and render images from novel viewpoints, sufficient for bullet-time-style effects. More recently, these techniques have been formalized into what is becoming known as free-viewpoint television (FTV). At the time of The Matrix, FTV was not a fully mature technology. FTV is effectively the live-action version of bullet time, without the slow motion.