December 5, 2017
On November 29 at the AWS re:Invent conference, Amazon Web Services introduced AWS DeepLens, a video camera whose main purpose is to teach developers how to program AI functions. The camera comes loaded with AI frameworks and AWS infrastructure such as AWS Greengrass Core and a version of MXNet; developers can also add their own frameworks, such as TensorFlow. The 4-megapixel camera shoots 1080p HD video, offers a 2D microphone array for recording sound, and has the form factor of an action camera mounted on top of an external hard drive.
Digital Trends reports that, “the camera system uses an Intel Atom processor fast enough to run deep learning algorithms on 10 frames in one second,” and “the 8 GB of memory houses both the pre-stored code along with custom algorithms.” The system could also potentially use cloud computing, via Wi-Fi, for “algorithms too large to run on the internal hardware.”
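Ten frames per second works out to a 100 ms processing budget per frame, which is the constraint that decides whether a model can run on the device or must fall back to the cloud. A quick sketch of that check (the model latency figure below is illustrative, not a DeepLens benchmark):

```python
FPS = 10                      # frames per second, per the Digital Trends report
frame_budget_ms = 1000 / FPS  # time available per frame: 100.0 ms

# Hypothetical per-frame inference latency for some model (not measured):
model_latency_ms = 85.0

# If the model fits the budget it can run on the internal hardware;
# otherwise it would need cloud offload over Wi-Fi, as the article notes.
fits_on_device = model_latency_ms <= frame_budget_ms
print(frame_budget_ms, fits_on_device)  # 100.0 True
```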
The AWS DeepLens software allows the user to “choose from project templates for a more guided learning experience or choose to design their own software from scratch.” The user also gets hands-on experience via the templates or a sample project, which “walks the developers” through the steps.
The camera, which will ship in April for $250, is available for pre-order.
In its press release, Intel reveals that, “AWS and Intel collaborated on the DeepLens camera to provide builders of all skill levels with the optimal tools needed to design and create artificial intelligence (AI) and machine learning products.” It also notes that the collaboration “follows the recent introduction of the Intel Speech Enabling Developer Kit, which provides a complete audio front-end solution for far-field voice control and makes it easier for third-party developers to accelerate the design of consumer products integrating Alexa Voice Service.”
DeepLens is powered by “an Intel Atom X5 processor with embedded graphics that support object detection and recognition,” and also “uses Intel-optimized deep learning software tools and libraries (including the Intel Compute Library for Deep Neural Networks, Intel clDNN) to run real-time computer vision models directly on the device.”
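The on-device pipeline described here — capture a frame, run a vision model locally, emit a detection — can be sketched as a simple loop. The detector below is a stand-in, not the actual DeepLens SDK or clDNN API; every name in this example is hypothetical:

```python
class StubDetector:
    """Stand-in for an on-device object-recognition model.

    A real DeepLens project would run an Intel-optimized neural network
    on the frame pixels; this stub just maps a frame id to a label so the
    loop's shape is clear.
    """
    LABELS = ["person", "dog", "chair"]

    def infer(self, frame_id):
        return self.LABELS[frame_id % len(self.LABELS)]


def run_camera_loop(detector, frame_ids):
    """Process each captured frame locally, as DeepLens does on-device."""
    return [detector.infer(f) for f in frame_ids]


detections = run_camera_loop(StubDetector(), frame_ids=range(4))
print(detections)  # ['person', 'dog', 'chair', 'person']
```

The design point the sketch illustrates: inference happens per-frame on the device itself, so no frame data needs to leave the camera for the common case.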
“We are seeing a new wave of innovation throughout the smart home, triggered by advancements in artificial intelligence and machine learning,” said Intel smart home group general manager Miles Kingston. “DeepLens brings together the full range of Intel’s hardware and software expertise to give developers a powerful tool to create new experiences, providing limitless potential for smart home integrations.”
Intel also reports that, “Apache MXNet is supported today, and TensorFlow and Caffe2 will be supported in 2018’s first quarter.”