January 6, 2014
Researchers have developed a method to create ultrasharp images of barely illuminated objects. This is done by mathematically stitching together information from individual particles of light. The development will likely support studies of fragile biological materials, such as the human eye, which could be damaged or destroyed by intense illumination. It could also be used for military surveillance in low-light locations.
According to Nature, electrical engineer Ahmed Kirmani and his colleagues at the Massachusetts Institute of Technology have created an algorithm that takes into account both the correlations between neighbouring parts of an illuminated object and the physics of low-light measurements.
In this process, low-intensity pulses of visible laser light are targeted at a spot on the object of interest, and the laser fires pulses at this location until a photon is recorded by a detector. Each illuminated spot corresponds to a pixel in the final image.
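The scanning loop described above can be sketched in a few lines. This is a minimal toy simulation, not the researchers' actual method: the detector model, the `detect_scale` parameter, and the per-pulse detection probability are illustrative assumptions.

```python
import random

def scan_first_photon(reflectivity, detect_scale=0.01, max_pulses=10_000, seed=0):
    """Raster-scan a grid of reflectivities (toy model).

    At each spot, fire low-intensity pulses until the first photon is
    detected, and record how many pulses that took. Fewer pulses to the
    first photon suggests a brighter (more reflective) spot.
    """
    rng = random.Random(seed)
    counts = []
    for row in reflectivity:
        out = []
        for r in row:
            p = r * detect_scale  # assumed per-pulse detection probability
            n = 1
            while n < max_pulses and rng.random() >= p:
                n += 1
            out.append(n)  # pulses fired before the first photon arrived
        counts.append(out)
    return counts

# Toy 2x2 scene: brighter patches on the left, darker on the right
scene = [[0.9, 0.1],
         [0.5, 0.05]]
counts = scan_first_photon(scene)
```

In the real system, these per-pixel photon-arrival counts are then fed to the reconstruction algorithm, which exploits correlations between neighbouring pixels to denoise the image.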
This strategy is similar to the way existing light detection and ranging (LIDAR) techniques build up a three-dimensional structure, but the algorithm developed by Kirmani and his colleagues provides this information using one-hundredth the number of photons.
Since the laser emits light of a single wavelength, the pictures created with this method are monochromatic, but they can still distinguish different materials by their varying shades of black, white, and grey.
“To simulate real-world conditions, the researchers used an incandescent lamp that created a level of stray background photons roughly equal in number to those reflected from the laser,” explains the article. “To eliminate the noise, the team used various algorithms, which enabled them to produce high-resolution, 3D images using a total of about one million photons. By comparison, an image of similar quality taken with a mobile-phone camera under office lighting conditions would require a few hundred trillion photons, Kirmani calculates.”
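The quoted figures imply a striking efficiency gap. A rough back-of-the-envelope comparison, taking "a few hundred trillion" as roughly 3 × 10^14 (an illustrative midpoint, not a figure from the article):

```python
# Back-of-the-envelope comparison using the figures quoted above.
first_photon_budget = 1e6    # ~one million photons for the 3D image
phone_camera_budget = 3e14   # "a few hundred trillion" (assumed midpoint)

ratio = phone_camera_budget / first_photon_budget
print(f"Roughly {ratio:.0e}x fewer photons needed")
```

On these assumed numbers, the new technique needs on the order of a hundred million times fewer photons than a conventional camera image of comparable quality.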