Light Field and Microlenses
Secondary research into plenoptic and simple depth sensors - #1
Light field photography is a rather complex branch of optics, with capabilities so striking that it is often considered the next major evolution of photography. A light field camera, also known as a plenoptic camera, captures both the intensity and the direction of the light rays crossing a plane. This lets these cameras see in three dimensions and, to a degree, around objects. So far, though, the main uses of this technology have been refocusing an image after it was taken and segmenting the picture by depth. In essence, it lets you pick any spot in the image, bring it into focus, and get a fairly accurate distance from the camera to that point.
There are numerous ways of doing this, but the most interesting use microlenses or coded masks. Here we will deal with microlenses first.
Microlenses
Gabriel Lippmann proposed integral imaging, his 'photographie intégrale', in 1908, the same year he won the Nobel Prize in Physics (awarded for his interference-based color photography, not for integral imaging). Integral imaging is the idea of recreating a bug-eye or compound-eye view of the world, where each eyelet receives a slightly different view, together creating a multiscopic three-dimensional image.
There are other optically interesting mechanisms like barrier grids and lenticular lenses, a personal favorite, as lenticular lenses are cheap and readily available. In one of my projects, a lenticular lens was placed in front of a screen showing a specially prepared image; just by moving your head you could see a different image. This technique is just a small sidetrack and will be discussed more in other posts. If you are impatient, check out the prior post about Illusions and Optical Effects for a little more information.
The most basic camera has a lens whose focus the user controls manually or through autofocus (an interesting technology in its own right, but out of scope for this post). To focus on an object that is farther away or nearer, you move the lens away from or toward the camera sensor. This is fairly intuitive, and we understand it from working with current cameras. The light field camera is really an extension of a normal camera: we have a main lens and a sensor just like in a normal camera, plus a microlens array, a grid of tiny lenses sitting just in front of the sensor.
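Behind that lens movement sits the thin lens equation, 1/f = 1/s_o + 1/s_i: for a lens of focal length f, an object at distance s_o comes into focus when the sensor sits at distance s_i behind the lens. A quick sketch (all focal lengths and distances here are made-up example values):

```python
# Thin lens equation: 1/f = 1/s_o + 1/s_i.
# Solve for the lens-to-sensor distance s_i that brings an object
# at distance s_o into focus. All numbers are illustrative.

def image_distance(f_mm, object_distance_mm):
    """Lens-to-sensor distance that focuses an object at the given distance."""
    return 1.0 / (1.0 / f_mm - 1.0 / object_distance_mm)

f = 50.0  # a 50 mm lens
for s_o in (500.0, 1000.0, 5000.0, 1e9):  # 0.5 m, 1 m, 5 m, "infinity"
    print(f"object at {s_o / 1000:g} m -> lens {image_distance(f, s_o):.2f} mm from sensor")
```

The closer the object, the farther the lens has to sit from the sensor, which is exactly what the focusing ring does mechanically.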
The focusing of a plenoptic camera, by contrast, is done using mathematics; there is nothing mechanical in its operation. Images taken with this kind of camera are unusable without post-processing.
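The classic algorithm for this is "shift-and-add" refocusing, described in Ren Ng's dissertation linked under Further readings: each sub-aperture view (one image per position within the main lens aperture; more on where those come from below) is shifted in proportion to its offset from the aperture center, and all views are averaged. A minimal numpy sketch, assuming the light field has already been decoded into a 4D array:

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-add refocusing of a 4D light field.

    lf    -- array of shape (U, V, H, W): one sub-aperture image per
             aperture position (u, v)
    alpha -- refocus parameter; 0 keeps the original focal plane,
             other values move the synthetic focal plane
    """
    U, V, H, W = lf.shape
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the aperture center.
            dy = int(round(alpha * (u - U // 2)))
            dx = int(round(alpha * (v - V // 2)))
            out += np.roll(lf[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

Real decoders interpolate fractional shifts instead of rounding to whole pixels, but sweeping alpha through a range of values is all it takes to rack focus through the scene after the fact.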
There is open-source software called PlenoptiCam, available on GitHub and on the software page plenoptic.info, that can decode light field images when a white calibration image is provided. It can create a depth map and a point cloud from a single raw capture. It was made by Christopher Hahne, who also provides a Google Colab notebook to play around with. Check it out.
Microlens arrays come in many variations, from microscopic grids to arrays so large they can hardly be called "micro" anymore. They are all functionally the same, with all the parameters of a normal lens: focal length, depth of field, and aperture.
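One design constraint worth knowing, also from Ng's dissertation, is f-number matching: the image each microlens casts on the sensor should exactly tile the sensor, which happens when the microlens f-number matches the main lens f-number. If the microlens images are too big they overlap; too small and sensor area is wasted. A quick sketch with made-up numbers:

```python
# f-number matching for a plenoptic camera. The microlens f-number
# (focal length / lenslet pitch) should equal the main lens f-number
# so the microlens images tile the sensor without overlap or gaps.
# All values below are illustrative.

main_focal_mm = 50.0
main_aperture_mm = 25.0  # an f/2 main lens
main_f_number = main_focal_mm / main_aperture_mm

microlens_pitch_um = 20.0  # lenslet diameter
micro_focal_um = microlens_pitch_um * main_f_number  # required microlens focal length

print(f"main lens: f/{main_f_number:g}")
print(f"microlens focal length: {micro_focal_um:g} um for a {microlens_pitch_um:g} um pitch")
```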
All these microlenses act like tiny cameras taking pictures of the back of the main lens. It is just like having an array of cameras, each with a slightly different viewpoint, achieved with mathematics instead of an expensive array of physical cameras.
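Because each microlens image is a tiny picture of the main lens aperture, a "sub-aperture" view of the scene can be built by taking the same pixel from under every microlens. A minimal sketch, assuming an idealized raw capture where the lenslet grid is perfectly axis-aligned with an integer pixel pitch (real decoders like PlenoptiCam have to calibrate for rotation and fractional pitch, which is what the white calibration image is for):

```python
import numpy as np

def subaperture_views(raw, pitch):
    """Rearrange a raw lenslet image into sub-aperture views.

    raw   -- 2D raw sensor image (grayscale for simplicity)
    pitch -- pixels per microlens along each axis (assumed integer
             and perfectly aligned with the pixel grid)
    Returns an array of shape (pitch, pitch, H // pitch, W // pitch):
    views[u, v] is the whole scene as seen through aperture position (u, v).
    """
    H, W = raw.shape
    H, W = H - H % pitch, W - W % pitch  # crop to a whole number of lenslets
    lenslets = raw[:H, :W].reshape(H // pitch, pitch, W // pitch, pitch)
    return lenslets.transpose(1, 3, 0, 2)  # -> (u, v, lenslet_row, lenslet_col)
```

The output has exactly the (U, V, H, W) layout the refocusing sketch above expects, which is the whole trick: one exposure yields a grid of slightly different viewpoints.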
The light field camera is impressive; however, it has a decent number of issues. First, because the camera gathers so much data about the incoming light, the resolution of the final image is severely reduced: in the case of the Lytro camera, a 40-megapixel sensor's light field data yields a final image of roughly 4 megapixels. Lytro no longer exists as a camera company; it was acquired by Google in 2018. Lytro was way ahead of its time and as such struggled to find its place in the mid-2010s. Its cameras were expensive and cumbersome to use, required a lot of processing power to post-process the images, and produced images that could not be compressed with standard image compression algorithms, not to mention the reduction in picture resolution at a time when all the marketing buzz was about how many more megapixels a camera had. Their last hurrah was a revolutionary VR camera that let a user move around a real space using a VR headset.
The biggest issue with microlens-based cameras is manufacturing tolerances combined with low production volume, both of which raise prices. Current prices for a microlens array range from $300 to thousands of dollars, and that is just for the array, without the camera itself.
With further innovations in lens production, like 3D-printed lenses, this technology could see a mainstream comeback, but until then it will remain a curiosity relegated to specialized industrial tasks.
However, there is a way around this manufacturing issue: use multiple cameras. This was the plan at Pelican Imaging, which, unlike other array-based plenoptic cameras, used a small 4×4 array of traditional imaging sensors. The low-resolution images captured by each eyelet (sixteen 0.75 MP images) are processed and reassembled into an 8 MP final JPEG, complete with an embedded depth map.
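Pelican's actual pipeline is proprietary, but the core idea behind getting a depth map out of a camera array is stereo disparity: a scene point appears shifted between neighboring cameras by an amount inversely proportional to its depth. A toy block-matching sketch for a single rectified horizontal camera pair (window size and disparity range are arbitrary; a real pipeline would fuse all sixteen views and super-resolve the final image):

```python
import numpy as np

def disparity_map(left, right, max_disp=16, win=4):
    """Brute-force block matching between two rectified grayscale views.

    For each pixel in the left image, find the horizontal shift into the
    right image that minimizes the sum of absolute differences over a
    small window. Larger disparity means a closer object; metric depth
    is then baseline * focal_length / disparity.
    """
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    H, W = left.shape
    disp = np.zeros((H, W), dtype=np.int32)
    for y in range(win, H - win):
        for x in range(win + max_disp, W - win):
            patch = left[y - win:y + win + 1, x - win:x + win + 1]
            costs = [
                np.abs(patch - right[y - win:y + win + 1,
                                     x - d - win:x - d + win + 1]).sum()
                for d in range(max_disp)
            ]
            disp[y, x] = int(np.argmin(costs))
    return disp
```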
Sadly, Pelican Imaging was acquired in 2016 by Tessera Holding Corp, a "provider of semiconductor packaging and interconnect solutions and intellectual property products to original equipment manufacturers." In other words, this particular sensor module is locked behind patents and not accessible to the public. The same goes for other manufacturers of camera array modules, like LinX, which was acquired by Apple in 2015.
It looks like the only way to DIY a cheap light field camera is to use a bunch of cheap camera modules and build an array out of them. Not great. Next up: Coded Aperture technology.
Further readings:
Matlab Code for computing 4D Light Field from 2D Image
Digital Light Field Photography, Ren Ng's 2006 dissertation, Stanford University
EE367/CS448I: Computational Imaging and Display class, Winter 2021