A light field camera, also known as a plenoptic camera, captures information about the light field emanating from a scene; that is, the intensity of light in a scene and also the direction in which the light rays are traveling. This contrasts with a conventional camera, which records only light intensity.
One type of light field camera uses an array of micro-lenses placed in front of an otherwise conventional image sensor to sense intensity, color, and directional information. Multi-camera arrays are another type of light field camera. Holograms are a type of film-based light field image.
The first light field camera was proposed by Gabriel Lippmann in 1908. He called his concept "integral photography". Lippmann's experimental results included crude integral photographs made by using a plastic sheet embossed with a regular array of microlenses, or by partially embedding very small glass beads, closely packed in a random pattern, into the surface of the photographic emulsion.
In 1992, Adelson and Wang proposed the design of a plenoptic camera that can be used to significantly reduce the correspondence problem in stereo matching. To achieve this, an array of microlenses is placed at the focal plane of the camera's main lens. The image sensor is positioned slightly behind the microlenses. Using such images, the displacement of image parts that are not in focus can be analyzed, and depth information can be extracted.
The "standard plenoptic camera" is a standardized mathematical model used by researchers to compare different types of plenoptic (or light field) cameras. By definition, the "standard plenoptic camera" has its microlenses placed one focal length away from the image plane of the sensor. Research has shown that its maximum baseline is confined to the main lens's entrance pupil size, which is small compared to stereoscopic setups. This implies that the "standard plenoptic camera" is suited to close-range applications, as it exhibits increased depth resolution at very close distances that can be metrically predicted from the camera's parameters.
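The baseline limit can be illustrated with ordinary two-view triangulation arithmetic. The sketch below uses the classic relation disparity = f·B/Z and purely illustrative numbers (a hypothetical 50 mm f/2 main lens, whose entrance pupil diameter is f/N = 25 mm, compared with an assumed 200 mm stereo rig); none of the figures come from any particular camera.

```python
def disparity_mm(f_mm, baseline_mm, depth_mm):
    # Classic two-view triangulation: disparity = f * B / Z.
    return f_mm * baseline_mm / depth_mm

# A 50 mm f/2 lens has a 25 mm entrance pupil (f / N) -- the largest
# baseline available to a "standard plenoptic camera" with that lens.
plenoptic_baseline = 50.0 / 2.0
stereo_baseline = 200.0  # an assumed stereoscopic rig, for comparison

for depth in (500.0, 5000.0):  # 0.5 m (close range) vs 5 m
    d_plen = disparity_mm(50.0, plenoptic_baseline, depth)
    d_stereo = disparity_mm(50.0, stereo_baseline, depth)
    print(f"depth {depth / 1000:.1f} m: "
          f"plenoptic {d_plen:.3f} mm, stereo {d_stereo:.3f} mm")
```

Because disparity falls off as 1/Z while the plenoptic baseline stays pinned to the pupil size, usable depth resolution survives only at very close distances, consistent with the close-range suitability described above.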
In 2004, a team at Stanford University Computer Graphics Laboratory used a 16-megapixel camera with a 90,000-microlens array (meaning that each microlens covers about 175 pixels, and the final resolution is 90 kilopixels) to demonstrate that pictures can be refocused after they are taken.
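Such post-capture refocusing is commonly done by a shift-and-add scheme: each sub-aperture view (the image formed by taking the same pixel under every microlens) is shifted in proportion to its angular offset and the views are summed. The sketch below is a minimal illustration of that idea, not the Stanford team's actual pipeline; the `alpha` parameter, the integer-pixel `np.roll` shift, and the `(u, v, y, x)` array layout are all simplifying assumptions.

```python
import numpy as np

def refocus(subapertures, alpha):
    """Shift-and-add synthetic refocusing (illustrative sketch).

    subapertures: 4-D array indexed (u, v, y, x), one 2-D view per
    angular sample extracted from the raw plenoptic image.
    alpha: relative position of the synthetic focal plane; alpha = 1.0
    leaves every view unshifted (original focus).
    """
    nu, nv, h, w = subapertures.shape
    out = np.zeros((h, w))
    for u in range(nu):
        for v in range(nv):
            # Shift each view in proportion to its angular offset from
            # the array centre; np.roll gives a crude integer shift.
            du = int(round((u - nu // 2) * (1.0 - 1.0 / alpha)))
            dv = int(round((v - nv // 2) * (1.0 - 1.0 / alpha)))
            out += np.roll(subapertures[u, v], (du, dv), axis=(0, 1))
    return out / (nu * nv)
```

With `alpha = 1.0` the shifts vanish and the result is simply the average of all views (a fully open synthetic aperture at the original focus); other values of `alpha` bring different depths into focus.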
Lumsdaine and Georgiev described the design of a type of plenoptic camera in which the microlens array can be positioned before or behind the focal plane of the main lens. This modification samples the light field in a way that trades angular resolution for higher spatial resolution. With this design, images can be refocused after capture at a much higher spatial resolution than images from the standard plenoptic camera. However, the lower angular resolution can introduce unwanted aliasing artifacts.
A type of plenoptic camera using a low-cost printed film mask instead of a microlens array was proposed by researchers at MERL in 2007. This design overcomes several limitations of microlens arrays in terms of chromatic aberrations and loss of boundary pixels and allows higher-spatial-resolution photos to be captured. However, the mask-based design reduces the amount of light that reaches the image sensor compared to cameras based on microlens arrays.
Plenoptic cameras are well suited to imaging fast-moving objects, where autofocus may not work well, and to applications such as security cameras, where autofocus is not affordable or usable. A recording from a security camera based on plenoptic technology could be used to produce an accurate 3D model of a subject.