A conventional camera focuses all light rays emitted from a point on an object to the same point in the image. In contrast, a lightfield image contains both the location and angle of incident light rays. Much has been written about light field imaging, but to introduce this project, it suffices to explain that it is conventionally accomplished by capturing many images from different perspectives. Typically, this is done with a microlens array, as in the Lytro camera, or with a larger array of separate cameras, as in the older Stanford Multi-Camera Array.
However, this compound image can be captured in other ways. For example, consider a camera recording video from a moving vehicle, as below:
This video is essentially a series of images captured with displacement along a single axis, and can therefore be interpreted as a kind of lightfield array! (Ignoring the effects of time, bends in the road, and the suspension of the bus, of course.) However, because our "array" is one-dimensional, this yields a 3D lightfield, rather than the more useful 4D variety. Still, we can use our lightfield to synthesize an image of the entire road simultaneously, taken from any angle within the field of view of the camera.
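To make the slicing concrete, here is a minimal sketch (not the project's actual code) of how pulling a fixed pixel column from every frame assembles a pushbroom-style view of the whole road. The function name and the array layout are my own assumptions:

```python
import numpy as np

def synthesize_view(frames, column):
    """Pull one pixel column from every frame and stack the columns
    side by side, yielding an image of the entire road as seen from
    the viewing angle that column corresponds to.

    frames: array of shape (num_frames, height, width), grayscale.
    """
    # Each frame contributes a single (height,) column; stacking them
    # along axis 1 produces a (height, num_frames) composite image.
    return np.stack([f[:, column] for f in frames], axis=1)
```

Choosing a different column selects a different ray angle, which is what changes the apparent perspective in the synthesized image.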
Notice especially the way the perspective changes on the road sign!
From this lightfield, we can also synthesize an aperture one mile wide (in one dimension, at least). This ultra-wide aperture enables a synthetic depth of field that is extremely shallow.
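One way to sketch that synthetic aperture (a hedged approximation, not the project's code): shift each frame in proportion to its position along the road, then average. Objects at the chosen depth align across frames and stay sharp, while everything else smears. The `shift_per_frame` parameter and the simple horizontal-shift alignment model are assumptions:

```python
import numpy as np

def synthetic_aperture(frames, shift_per_frame):
    """Average all frames after shifting each one horizontally.

    Objects whose parallax matches shift_per_frame line up across
    frames and remain sharp; everything else blurs -- an extremely
    shallow synthetic depth of field.

    frames: array of shape (num_frames, height, width).
    """
    num_frames = len(frames)
    acc = np.zeros(frames[0].shape, dtype=float)
    for i, frame in enumerate(frames):
        # np.roll wraps around at the image edges; real code would
        # crop or pad instead, but this keeps the sketch short.
        acc += np.roll(frame, int(round(i * shift_per_frame)), axis=1)
    return acc / num_frames
```

Sweeping `shift_per_frame` sweeps the focal plane through the scene, which is what a refocusable lightfield image amounts to.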
By finding the image that is most "in focus" at each (x, y) coordinate, it is easy to create a click-to-focus prototype. To identify the sharpest regions, a simple high-pass filter followed by a rectifier measures the high-frequency energy in each region. A 1-D filter is sufficient, since sharpness varies only along a single axis.
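A sketch of that sharpness measure, assuming a first-difference high-pass filter and an absolute-value rectifier (the project may use different kernels, and the helper names are mine):

```python
import numpy as np

def sharpness(region):
    """High-frequency energy of a region: 1-D high-pass filter
    (first difference along x) followed by rectification (abs)."""
    high_pass = np.diff(region.astype(float), axis=1)
    return np.abs(high_pass).sum()

def best_focus(refocused_stack, y, x, window=8):
    """Index of the refocused image that is sharpest in a small
    window around (x, y) -- the core of a click-to-focus tool."""
    scores = [sharpness(img[y - window:y + window, x - window:x + window])
              for img in refocused_stack]
    return int(np.argmax(scores))
```

Clicking a pixel then simply displays the refocused image whose index `best_focus` returns.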
You can play with the result here!
If you'd like to take a deeper dive into the (extremely simple) computation that makes this possible, the source code for this project can be found on my GitHub.