DeepRay Uses AI to Reconstruct Distorted Video in Real Time

Cambridge Consultants has trained an artificial intelligence that, in real time, reconstructs its best guess of the original source image from a highly distorted image or video feed. The unique approach of their DeepRay product involves recognizing and correcting for a wide universe of distortion patterns. Cambridge Consultants’ first clients are self-driving car companies concerned with accurate object and risk detection, but DeepRay could also have a number of entertainment industry technical and story applications.

Cambridge Consultants used generative adversarial networks (GANs) to train DeepRay on two separate problem domains. First, they trained the AI to recognize and correct for a huge body of image distortions: images seen through broken glass, washboard glass, and glass covered in dirt and raindrops; images reflected off crumpled metal; images obscured by fog and smoke; and so on.

Second, they trained the AI on real-world images so that it learned to recognize real-world image characteristics, such as sharp or soft edges, high or low contrast, and motion blur.
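Cambridge Consultants has not published DeepRay's training pipeline, but the first training domain implies paired data: clean images alongside versions of themselves run through simulated distortions. A minimal numpy sketch of that idea, with all function names and specific degradations hypothetical stand-ins:

```python
import numpy as np

def simulate_distortion(clean, rng):
    """Apply a random stack of simulated degradations to a clean image.

    Stands in for the broken-glass / dirt / fog distortions described
    above; the specific operations here are illustrative only.
    """
    img = clean.astype(np.float64)
    # Fog/smoke: blend the image toward a flat grey veil.
    fog = rng.uniform(0.0, 0.5)
    img = (1 - fog) * img + fog * 0.5
    # Dirt/raindrops: black out a few small random patches (occlusion).
    h, w = img.shape
    for _ in range(rng.integers(1, 4)):
        y, x = rng.integers(0, h - 4), rng.integers(0, w - 4)
        img[y:y + 4, x:x + 4] = 0.0
    # Sensor noise.
    img += rng.normal(0.0, 0.02, img.shape)
    return np.clip(img, 0.0, 1.0)

def make_training_pairs(clean_images, seed=0):
    """Return (distorted, clean) pairs for training a corrector."""
    rng = np.random.default_rng(seed)
    return [(simulate_distortion(c, rng), c) for c in clean_images]

# Toy "clean" images: smooth gradients standing in for real photos.
clean = [np.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
         for _ in range(4)]
pairs = make_training_pairs(clean)
```

In a real GAN setup, the distorted half of each pair would feed a generator network while a discriminator judged its output against the clean half; this sketch only covers the data side.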

When a distorted still image or video stream is fed in, the two trainings are combined to produce a new image in real time. DeepRay is not recognizing shapes and pulling objects from a database, and it is not shifting pixels around to sharpen the original distorted image.

Instead, DeepRay is creating an entirely new image based on the AI’s interpretation of what the pre-distortion image would look like. So, for example, if the distorted image looks like a capture of a sharp and bright scene, the AI will construct its best guess of the objects in the image and make them appropriately sharp and bright. But if there are indications that the source was a foggy image, the AI will construct a soft, foggy image.
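DeepRay's architecture is not public, but the key distinction here, that every output pixel is synthesized by a generator network's forward pass rather than copied or shifted from the input, can be sketched with a toy network. The weights below are random stand-ins for what GAN training would actually learn:

```python
import numpy as np

def generator_forward(distorted, w1, w2):
    """Toy generator: synthesize an output image from a distorted input.

    Every output pixel is produced by the network, not moved or filtered
    from the input -- mirroring the idea of constructing a new image
    rather than correcting the old one.
    """
    x = distorted.ravel()
    hidden = np.maximum(0.0, w1 @ x)             # ReLU hidden layer
    out = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid -> [0, 1] pixels
    return out.reshape(distorted.shape)

rng = np.random.default_rng(0)
side = 16
w1 = rng.normal(0, 0.05, (64, side * side))   # untrained stand-in weights
w2 = rng.normal(0, 0.05, (side * side, 64))
distorted = rng.uniform(0, 1, (side, side))   # placeholder distorted frame
reconstruction = generator_forward(distorted, w1, w2)
```

With trained weights, the generator would be conditioned on the distorted frame yet free to invent plausible detail, which is why the output can be sharp where the input was shattered, or deliberately soft where the scene was genuinely foggy.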

This technology could be used to detect and correct for lens distortions during production. This would be especially useful for live action 3D shoots, where any difference in lens distortion between the right eye and left eye could cause physical discomfort among viewers later on. DeepRay could also be used to clean rain, smoke, dirt and debris out of an image or video in real time on the set.

On the creative side, the technology could be a plot device. It could capture the reflection of activity around a corner from a distorted non-mirror surface like polished granite or a shiny car, and then clean it up to reveal useful information (as in a 21st century version of the film “Blow-Up”).

Cambridge Consultants will be demonstrating its neural network trained on Nvidia’s DGX-1 platform and running on a standard gaming laptop in the Eureka Park area of CES (Sands 44337).

You can learn more about DeepRay in this recent VentureBeat article or by visiting the Cambridge Consultants site.