Light Field Camera

Fergal Hennessy




In this pre-canned project, our goal is to simulate camera effects, such as refocusing and aperture adjustment, using light field data.
What I learned most from this project was how to demonstrate the problems that early cameras faced.
For example, pinhole cameras face a trade-off: a larger aperture admits more light, but it also averages more perspectives, which blurs the image. Similarly, controlling depth of field is something that even modern cameras struggle with.
Being able to simulate these effects from images that do not exhibit them was very interesting.


Depth Focusing with Image Averaging


The first part of this project was depth refocusing. The images come from the Stanford Light Field Archive,
which provides rectified coordinates for pictures taken on a rectangular grid.
To simulate focusing at a chosen depth, we compute a linear shift for each image proportional to its offset from the grid center, apply that shift with an affine warp, and average all of the warped images.
Depending on the amount of shift, the average comes into focus at a different depth, as you can see in the GIFs above; a sketch of the procedure follows.
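
Below is a minimal sketch of this shift-and-average step in Python. The names images, coords, and alpha are assumptions about how the data might be loaded (a list of (H, W, 3) float arrays and their rectified (u, v) grid positions); the axis and sign conventions depend on the dataset, and the translation-only warp here is done with scipy.ndimage.shift.

# Minimal refocusing sketch (hypothetical data layout, not the archive's exact format).
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(images, coords, alpha):
    """Shift every view toward the grid center and average.
    alpha scales the shift and therefore selects the simulated focus depth."""
    coords = np.asarray(coords, dtype=float)   # shape (N, 2): (u, v) per view
    center = coords.mean(axis=0)               # center of the camera grid
    acc = np.zeros_like(images[0], dtype=float)
    for img, (u, v) in zip(images, coords):
        du = alpha * (center[0] - u)           # horizontal shift toward center
        dv = alpha * (center[1] - v)           # vertical shift toward center
        # A pure translation, i.e. a translation-only affine warp.
        acc += nd_shift(img, shift=(dv, du, 0), order=1, mode='nearest')
    return acc / len(images)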



Aperture Selection


Chess set with large aperture (including images with Euclidean distance up to 10000) and depth 70

Chess set with medium aperture (including images with Euclidean distance up to 100) and depth 70

Chess set with small aperture (including images with Euclidean distance up to 20) and depth 70

Flower with large aperture (including images with Euclidean distance up to 10000) and depth 70

Flower with medium aperture (including images with Euclidean distance up to 100) and depth 70

Flower with small aperture (including images with Euclidean distance up to 20) and depth 70

In the real world, a larger aperture allows more perspectives to be incorporated into the final image.
Using our grid of images, we can simulate a larger or smaller aperture by including or excluding images based on how far they are from our desired point of view. The results are shown above, and a sketch of the selection step follows.
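
The sketch below reuses the hypothetical refocus helper from earlier: only views whose grid position lies within radius (a Euclidean distance in the grid's coordinate units) of the grid center are averaged, so a small radius mimics a small aperture and a large radius a large one. The radius thresholds quoted in the captions above would be passed in here.

# Minimal aperture-selection sketch (assumes the refocus helper defined above).
import numpy as np

def simulate_aperture(images, coords, alpha, radius):
    coords = np.asarray(coords, dtype=float)
    center = coords.mean(axis=0)
    dists = np.linalg.norm(coords - center, axis=1)  # distance of each view from center
    keep = dists <= radius                           # small radius -> small aperture
    subset = [img for img, k in zip(images, keep) if k]
    return refocus(subset, coords[keep], alpha)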