Researchers Focus on Computational Photography Solutions

Google and MIT scientists have collaborated to explore how computational photography, which uses machine learning and specialized algorithms, can retouch photos in real time. The team trained neural networks on 5,000 images from a dataset created by Adobe and MIT; five different photographers retouched each image, teaching the software the various ways a photo could be improved. Elsewhere, Nvidia and UC Santa Barbara developed a “computational zoom” technique that lets photographers change a photo’s perspective in post-production.
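As a rough illustration of that training setup, the sketch below samples raw/retouched training pairs the way such a dataset suggests: each raw photo has five expert versions, so the model sees several valid “answers” for the same input. The file layout and names here are assumptions for illustration, not the actual dataset structure.

```python
import random

# Illustrative sketch: each raw image in the dataset has five expert
# retouchings, so a training pair is (raw photo, one expert's version).
# The paths and layout below are hypothetical, not the real dataset.

NUM_IMAGES = 5000   # raw photos in the dataset
NUM_EXPERTS = 5     # photographers who retouched each one

def sample_training_pair():
    """Pick a raw photo and one of its five expert retouchings as the target."""
    image_id = random.randrange(NUM_IMAGES)
    expert = random.randrange(NUM_EXPERTS)
    raw_path = f"raw/{image_id:04d}.tif"                 # hypothetical path
    target_path = f"expert_{expert}/{image_id:04d}.tif"  # hypothetical path
    return raw_path, target_path

raw, target = sample_training_pair()
print(raw, "->", target)
```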

The Verge notes that “using machine learning to improve photos has been done before, but the real advance with this research is slimming down the algorithms so that they are small and efficient enough to run on a user’s device without any lag.”


According to an MIT blog, “the software itself is no bigger than a single digital image” and can “process images in a range of styles.” It also notes that “smartphones and cameras already process imaging data in real time, but these new techniques are more subtle and reactive, responding to the needs of individual images, rather than applying general rules.”

The techniques used to “slim down the algorithms” include “turning the changes made to each photo into formulae and using grid-like coordinates to map out the pictures,” meaning the data can “be expressed mathematically, rather than as full-scale photos.”
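To make that idea concrete, here is a minimal NumPy sketch of the general approach: a coarse grid of per-cell affine color transforms stands in for a full-resolution retouched photo, and each pixel is enhanced by the formula in its grid cell. The grid size, nearest-cell lookup, and toy brightening edit are assumptions for illustration; the published method interpolates its transforms far more carefully.

```python
import numpy as np

# Instead of storing a full-resolution retouched photo, store a coarse grid
# of affine color transforms (a "formula" per grid cell) and apply them to
# the original pixels. Shapes and names are illustrative assumptions.

GRID_H, GRID_W = 16, 16  # coarse spatial grid over the image
# Each cell holds a 3x4 affine transform: new_rgb = A @ [r, g, b, 1]
grid = np.tile(np.eye(3, 4), (GRID_H, GRID_W, 1, 1))  # identity transforms
grid[..., 3] += 0.05  # toy "edit": brighten every cell slightly

def apply_grid(image, grid):
    """Apply the nearest grid cell's affine transform to each pixel."""
    h, w, _ = image.shape
    ys = np.arange(h) * grid.shape[0] // h  # pixel row -> grid row
    xs = np.arange(w) * grid.shape[1] // w  # pixel col -> grid col
    out = np.empty_like(image)
    for i in range(h):
        for j in range(w):
            A = grid[ys[i], xs[j]]               # 3x4 affine for this cell
            rgb1 = np.append(image[i, j], 1.0)   # homogeneous RGB
            out[i, j] = A @ rgb1
    return np.clip(out, 0.0, 1.0)

photo = np.random.rand(64, 64, 3).astype(np.float32)  # stand-in image
enhanced = apply_grid(photo, grid)
```

The point of the representation is the size: a 16×16 grid of 3×4 matrices is a few kilobytes of data, regardless of how many megapixels the photo has.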

“This technology has the potential to be very useful for real-time image enhancement on mobile platforms,” said Google researcher Jon Barron. “Using machine learning for computational photography is an exciting prospect but is limited by the severe computational and power constraints of mobile phones.”

He added that the research results “may provide us with a way to sidestep these issues and produce new, compelling, real-time photographic experiences without draining your battery or giving you a laggy viewfinder experience.”

Digital Photography Review reports that Nvidia and researchers from the University of California, Santa Barbara have “detailed a new type of technology called ‘computational zoom’ that can be used to adjust the focal length and perspective of an image after it has been taken.” The technology can, “at times, allow for the creation of ‘novel image compositions’ that can’t be captured using a physical camera,” such as “multi-perspective images featuring elements from photos taken using a telephoto lens and a wide-angle lens.”

To use the technology, photographers have to take a so-called stack of images, “where each image is taken slightly closer to the subject while the focal length remains unchanged.” Computational zoom “determines the camera’s orientation and position … followed by the creation of a 3D rendition of the scene with multiple views.” Then the data is “used to synthesize multi-perspective images which have novel compositions through a user interface.”
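The workflow can be pictured as three stages: recover camera poses from the stack, build a multi-view 3D representation, and synthesize a new composition from it. The Python sketch below stubs out those stages with a crude column-wise compositor standing in for the user-driven synthesis; every function name here is a hypothetical stand-in for illustration, since the Nvidia/UCSB system is a research prototype, not released code.

```python
import numpy as np

# Hypothetical outline of the computational-zoom pipeline described above.
# All names and internals are invented for illustration.

def estimate_poses(stack):
    """Stage 1: recover camera position/orientation for each shot in the
    stack (structure-from-motion in the real system; stubbed here)."""
    return [{"shot": i, "distance": len(stack) - i} for i in range(len(stack))]

def build_scene(stack, poses):
    """Stage 2: build a multi-view 3D rendition of the scene.
    Stubbed as a dict pairing images with their recovered poses."""
    return {"images": stack, "poses": poses}

def synthesize(scene, blend):
    """Stage 3: composite a multi-perspective image. `blend` picks, per
    column, which shot (nearer or farther from the subject) to sample --
    a crude stand-in for the paper's user-controlled composition."""
    images = scene["images"]
    h, w, _ = images[0].shape
    out = np.empty_like(images[0])
    for x in range(w):
        k = min(int(blend(x / w) * len(images)), len(images) - 1)
        out[:, x] = images[k][:, x]
    return out

stack = [np.random.rand(48, 64, 3) for _ in range(4)]  # stand-in image stack
poses = estimate_poses(stack)
scene = build_scene(stack, poses)
composite = synthesize(scene, blend=lambda t: t)  # left: first shot, right: last
```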

UCSB reports that “the team hopes to make [the technology] available to photographers in the form of software plug-ins.”
