AWS Tool Aims to Simplify the Creation of AI-Powered Apps

Amazon has introduced AWS Deep Learning Containers, a collection of Docker images preinstalled with popular deep learning frameworks, with the aim of making it easier to get AI-enabled apps running on Amazon Web Services. Dr. Matt Wood, AWS general manager of deep learning, noted that the company has “done all the hard work of building, compiling, and generating, configuring, optimizing all of these frameworks,” taking that burden off of app developers. The container images are all “preconfigured and validated by Amazon.”

VentureBeat reports that the container images “support Google’s TensorFlow machine learning framework and Apache MXNet, with Facebook’s PyTorch and other deep learning frameworks to come … [and] work on the full range of AWS services including Amazon ECS, Amazon Elastic Container Service for Kubernetes, and Amazon Elastic Compute Cloud (EC2), and with Kubernetes on Amazon EC2.” Additionally, “microservices can be added to apps deployed on Kubernetes using Deep Learning Containers.”

According to Wood, “Deep Learning Containers include a number of AWS-specific optimizations and improvements,” which will result in “the highest performance for training and inference in the cloud.” Amazon has stated that TensorFlow optimizations “in particular allow certain AI models to train up to twice as fast through ‘significantly’ improved GPU scaling — up to 90 percent scaling efficiency for 256 GPUs.”

The company wrote in a blog post that, “AWS Deep Learning Containers are tightly integrated with Amazon EKS and Amazon ECS, giving you choice and flexibility to build custom machine learning workflows for training, validation, and deployment.”

“Through this integration, Amazon EKS and Amazon ECS handle all the container orchestration required to deploy and scale the AWS Deep Learning Containers on clusters of virtual machines,” the company added. The new AWS Deep Learning Containers are free and can be found in AWS Marketplace and Elastic Container Registry.
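To illustrate the Amazon EKS integration described above, a Deep Learning Container image can be referenced like any other container image in a standard Kubernetes pod spec. This is a minimal sketch, not an official AWS example; the account ID, region, image tag, and training script path are hypothetical placeholders.

```yaml
# Minimal sketch: running a TensorFlow training job on Amazon EKS
# using a Deep Learning Container image pulled from Amazon Elastic
# Container Registry (ECR). All identifiers below are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: tf-training
spec:
  restartPolicy: Never
  containers:
    - name: tensorflow
      # Hypothetical Deep Learning Container image URI in ECR.
      image: <account-id>.dkr.ecr.<region>.amazonaws.com/tensorflow-training:latest
      command: ["python", "/opt/train.py"]  # placeholder training script
      resources:
        limits:
          nvidia.com/gpu: 1  # request one GPU for training
```

Once such a manifest is applied with `kubectl apply`, EKS handles the container orchestration needed to schedule and scale the workload across the cluster.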

Months ago, Amazon introduced Inferentia, a “high-throughput, low-latency processor custom-built for cloud inference.” Inferentia “supports INT8, FP16, and mixed precision, and multiple machine learning frameworks including TensorFlow, Caffe2, and ONNX,” and is slated to be available this year “in AWS products including EC2 and Amazon’s SageMaker.”

Amazon also debuted Elastic Inference, which is fully compatible with TensorFlow, Apache MXNet, and ONNX, and provides a service that “allows customers to attach GPU-powered inference acceleration to any Amazon EC2 or Amazon SageMaker instance.” Amazon stated it can “reduce deep learning costs by up to 75 percent.”

More information on AWS Deep Learning Containers can be found here.