Lytro: the technology that brings bad photos into focus.

People who know me know that I’m a photo geek.  I love great photos, and I like to flatter myself that occasionally I might even take one or two.

Every photographer has encountered the problem that Lytro (unveiled yesterday) solves – an incorrectly focused image. Lytro’s innovation is to place an array of “micro-lenses” in front of the main sensor in the camera, capturing the “light field” entering the camera rather than just a flat image. Practically, that means:

  • The focal point and depth of field of the image can be changed after the picture is taken. For photographers, it’s a mind-blowing piece of technology, as it does away with the entire process of choosing aperture and focal point at the time of shooting. Now one can simply capture the image and choose the focal point later (a rough sketch of the idea follows this list). This has applications in portrait, action and macro photography.
  • Better images can be captured in low light, without resorting to the use of a flash. This is because the camera can shoot with a wide aperture, gathering more light and reducing the image noise common in low-light settings. Landscape, nighttime, and indoor photography can all benefit.
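
To make the “shoot now, focus later” idea concrete, here is a minimal sketch of the kind of shift-and-add refocusing described in Ng’s dissertation: each microlens records which direction incoming light arrived from, so the camera effectively captures a grid of slightly different views of the scene, and those views can be shifted and averaged after the fact to simulate any focal plane. The array layout, the alpha parameter and the use of NumPy here are illustrative assumptions on my part, not Lytro’s actual processing pipeline.

    import numpy as np

    def refocus(light_field, alpha):
        """Synthetic refocusing by shift-and-add (a simplified sketch).

        light_field: array of shape (U, V, H, W) -- one H x W sub-aperture view
                     for each (u, v) position on the main lens aperture.
        alpha:       relative depth of the synthetic focal plane; 1.0 keeps the
                     plane the camera was focused on, other values refocus.
        """
        U, V, H, W = light_field.shape
        out = np.zeros((H, W), dtype=np.float64)
        for u in range(U):
            for v in range(V):
                # Shift each view in proportion to its position on the aperture,
                # then average. Points on the chosen focal plane line up across
                # all views and come out sharp; everything else blurs.
                dy = int(round((u - (U - 1) / 2.0) * (1.0 - 1.0 / alpha)))
                dx = int(round((v - (V - 1) / 2.0) * (1.0 - 1.0 / alpha)))
                out += np.roll(np.roll(light_field[u, v], dy, axis=0), dx, axis=1)
        return out / (U * V)

Rendering the same light field with a range of alpha values is, roughly speaking, what the interactive viewer below does when you click on different parts of the scene.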

Check it out. In the image below, click each of the scuba tanks, and then the diver in the background to see the focal point of the photograph change.


[Interactive light field image: Lytro.com / Jason Bradley]

Founder Ren Ng’s PhD dissertation won the 2007 ACM Doctoral Dissertation Award, as well as Stanford University’s Arthur Samuel Award for Best PhD Dissertation. The math and the science behind this technology make for a fascinating read.

Ng’s thesis also highlights the one inherent limitation of the micro-lens approach: micro-lens photography consumes enormously more sensor resolution than conventional photography to produce an image of the same size. In Ng’s dissertation, the prototype camera could produce images just 296 x 296 pixels in size. He writes:

the ideal photosensor for light field photography is one with a very large number (e.g. 100 mp) of small pixels, in order to match the spatial resolution of conventional cameras while providing extra directional resolution.
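
To put rough numbers on that trade-off, the final image resolution is approximately the sensor’s pixel count divided by the number of directional samples recorded under each microlens. The figures below are assumptions chosen to make the arithmetic easy, not published specifications of Ng’s prototype or Lytro’s camera.

    # Back-of-the-envelope arithmetic for the spatial-vs-directional trade-off.
    # Both numbers are illustrative assumptions, not published specifications.
    sensor_pixels = 16_000_000        # e.g. a 16 MP sensor behind the microlens array
    samples_per_microlens = 14 * 14   # directional samples under each microlens

    output_pixels = sensor_pixels / samples_per_microlens
    side = output_pixels ** 0.5
    print(f"~{output_pixels:,.0f} output pixels, roughly {side:.0f} x {side:.0f}")
    # -> on the order of 286 x 286, the same ballpark as the prototype's
    #    296 x 296 images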

You can see this in the image above, and in the other sample images Lytro has published on its site. Use the control at the bottom right of the image to zoom to full screen, then pick a background object as the focal point. The images have a soft quality about them, due to the limited pixel resolution.

Lytro is bringing this technology to market later this year, in a consumer point-and-shoot format. Their concept is that ordinary people will be able to simply shoot a photo and correct it later, producing results akin to those of a high-end SLR in the hands of a professional. And because most consumer photographs are shared online, the resolution limitations of the technology shouldn’t matter as much.

I’m a skeptic, I’m afraid. I think Lytro’s market choice is a pragmatic attempt to fit an early-stage technology to a market. Consumers, however, mostly don’t care if their photographs aren’t perfect. Most consumers don’t edit, color-correct, or balance light and contrast, even though plenty of good, inexpensive photo-editing software is available. Consumers point, shoot and upload.

I think the biggest market for Lytro’s technology will be professionals – photojournalists, commercial photographers, artists, the scientific community and others for whom correctly composed and focused images are of paramount importance. Professionals arrive at a shoot with an array of lenses and camera bodies to manage the problems that Lytro eliminates, then shoot hundreds of photographs knowing that 90% of them will be unusable. Lytro could save these folks thousands of dollars in equipment costs and time, and allow them to take many more usable photographs in a single session. However, until 100MP and 200MP sensors are available at affordable price points, these applications will have to wait.

If Moore’s Law is to be believed, we should only have to wait another 3 to 5 years for those sensors to become available.
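
For what it’s worth, the back-of-the-envelope version of that estimate is below. The starting resolution and the doubling period are assumptions, so treat the result as a rough sanity check rather than a forecast.

    import math

    # Assumed figures: consumer sensors of roughly 12 MP today, with pixel counts
    # doubling every 18 to 24 months in Moore's-Law fashion.
    current_mp, target_mp = 12, 100
    doublings = math.log2(target_mp / current_mp)        # about 3.1 doublings
    for months_per_doubling in (18, 24):
        years = doublings * months_per_doubling / 12
        print(f"One doubling every {months_per_doubling} months: ~{years:.1f} years")
    # -> roughly 4.5 to 6 years, consistent with the estimate above only if
    #    sensor density improves at the faster end of that range.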
