DeepMind and Academics Advance General Purpose Robots

“Robots are great specialists, but poor generalists,” according to Google DeepMind, which says models are typically trained for individual tasks, and changing a single variable can mean starting again from scratch. Now the London-based Alphabet subsidiary thinks it has come up with a way to combine knowledge across robotics for a general-purpose machine helper. In conjunction with 33 academic labs, Google DeepMind has pooled data from 22 different robot types to create the Open X-Embodiment dataset. Alongside the dataset, the group has released the RT-1-X robotics transformer (RT) model, derived from RT-1.

IBM Project CodeNet Employs AI Tools to Program Software

IBM’s AI research unit debuted Project CodeNet, a dataset to develop machine learning models for software programming. The name is a take-off on ImageNet, the influential dataset of photos that pushed the development of computer vision and deep learning. Creating “AI for code” systems has been challenging since software developers are constantly discovering new problems and exploring different solutions. IBM researchers have taken that into consideration in developing a multi-purpose dataset for Project CodeNet.

IBM CodeNet Enables AI Translation of Computer Languages

During its Think conference this week, IBM debuted Project CodeNet, an open-source dataset for benchmarking AI-for-code systems. Project CodeNet consists of 14 million code examples, which makes it about 10 times larger than the most similar dataset, which has 52,000 examples. Project CodeNet also spans 500 million lines of code across 55 programming languages including C++, Java, Python, Go, COBOL, Pascal and Fortran, making it a Rosetta Stone for AI systems to automatically translate code into other programming languages.
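The "Rosetta Stone" framing works because many submissions in such a dataset solve the same problem in different languages, so solutions can be paired across languages to form parallel training data for a translation model. Here is a minimal sketch of that pairing step; the record layout, problem IDs and code snippets are invented for illustration and are not CodeNet's actual schema.

```python
from collections import defaultdict

# Hypothetical records: (problem_id, language, source). Grouping accepted
# solutions by the problem they solve, then pairing across languages,
# yields parallel data for a code-translation model.
records = [
    ("p0001", "Python", "print(sum(map(int, input().split())))"),
    ("p0001", "Java",   "System.out.println(a + b);"),
    ("p0002", "Python", "print(int(input()) ** 2)"),
    ("p0002", "C++",    "std::cout << n * n;"),
]

def translation_pairs(records, src_lang, tgt_lang):
    """Group submissions by problem, then emit (source, target) pairs
    for every problem solved in both languages."""
    by_problem = defaultdict(dict)
    for pid, lang, code in records:
        by_problem[pid][lang] = code
    return [
        (langs[src_lang], langs[tgt_lang])
        for langs in by_problem.values()
        if src_lang in langs and tgt_lang in langs
    ]

pairs = translation_pairs(records, "Python", "Java")
```

Only problems solved in both the source and target language contribute a pair, which is why a very large dataset matters: coverage across 55 languages thins out quickly once you demand matched solutions.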

Facebook Counters AI Bias with a Data Set Featuring Actors

Facebook released an open-source AI data set of 45,186 videos featuring 3,011 U.S. actors who were paid to participate. The data set is dubbed Casual Conversations because the diverse group was recorded giving unscripted answers to questions about age and gender. Skin tone and lighting conditions were also annotated by humans. Biases have been a problem in AI-enabled technologies such as facial recognition. Facebook is encouraging teams to use the new data set. Most AI data sets are built from footage of people who are unaware they are being recorded.

Consortium Releases New Measurement Benchmarks for AI

MLPerf, a consortium of 40 technology companies including Google and Facebook, just released benchmarks for evaluating artificial intelligence-enabled tools, including image recognition, object detection and voice translation. MLPerf general chair Peter Mattson, a Google engineer, said, “for CIOs, metrics make for better products and services they can then incorporate into their organization.” Thus far, organizations have been slow to adopt AI technologies, in part due to the plethora of tools and services available.

Google GPipe Library Speeds Deep Neural Network Training

Google has unveiled GPipe, an open-source library that makes training deep neural networks more efficient, released under Lingvo, a TensorFlow framework for sequence modeling. According to Google AI software engineer Yanping Huang, “in GPipe … we demonstrate the use of pipeline parallelism to scale up DNN training,” noting that larger DNN models “lead to better task performance.” Huang and his colleagues published a paper on “Efficient Training of Giant Neural Networks Using Pipeline Parallelism.”
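The scheduling idea behind pipeline parallelism can be shown in miniature. This is an illustration of the concept, not Google's implementation: a model partitioned into S sequential stages processes a mini-batch split into M micro-batches, and because the stages can work on different micro-batches at the same time, the whole batch takes S + M − 1 time steps instead of the S × M steps a fully sequential run would need.

```python
# Toy pipeline: three "stages" (standing in for model partitions on
# three devices) and a mini-batch split into four micro-batches.
stages = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
micro_batches = [1, 2, 3, 4]

def pipeline_steps(num_stages, num_micro):
    """Time steps when stages overlap on different micro-batches:
    the classic pipeline fill-plus-drain count."""
    return num_stages + num_micro - 1

def run_pipeline(stages, micro_batches):
    """Functionally, the result is the same as running each micro-batch
    through every stage in order; only the timing differs."""
    outputs = []
    for x in micro_batches:
        for stage in stages:
            x = stage(x)
        outputs.append(x)
    return outputs

sequential_steps = len(stages) * len(micro_batches)              # 3 * 4 = 12
pipelined_steps = pipeline_steps(len(stages), len(micro_batches))  # 3 + 4 - 1 = 6
outputs = run_pipeline(stages, micro_batches)
```

The saving grows with the number of micro-batches, which is why GPipe's approach pays off for very large models and batches; the trade-off is "bubble" time while the pipeline fills and drains.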

Nvidia’s New AI Method Can Reconstruct an Image in Seconds

Nvidia debuted a deep learning method that can edit or reconstruct an image that is missing pixels or has holes via a process called “image inpainting.” The model can handle holes of “any shape, size, location or distance from image borders,” and could be integrated in photo editing software to remove undesirable imagery and replace it with a realistic digital image – instantly and with great accuracy. Previous AI-based approaches focused on rectangular regions in the image’s center and required post-processing.
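Nvidia's published technique is built on partial convolutions, which condition each convolution only on valid (non-hole) pixels and renormalize by how many were valid. The sketch below is a simplified, pure-Python version of that masking idea with a uniform averaging kernel, not Nvidia's trained network; the image and mask values are invented for illustration.

```python
def partial_conv(image, mask, k=3):
    """One partial-convolution pass: average only over valid (mask == 1)
    pixels in each k x k window, renormalizing by the number of valid
    pixels, and mark any window that saw at least one valid pixel as
    filled in the updated mask."""
    h, w = len(image), len(image[0])
    pad = k // 2
    out = [[0.0] * w for _ in range(h)]
    new_mask = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            total, valid = 0.0, 0
            for di in range(-pad, pad + 1):
                for dj in range(-pad, pad + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w and mask[ii][jj]:
                        total += image[ii][jj]
                        valid += 1
            if valid:
                out[i][j] = total / valid   # renormalize by valid count
                new_mask[i][j] = 1          # this pixel is now "known"
    return out, new_mask

image = [[2.0, 2.0, 2.0],
         [2.0, 0.0, 2.0],   # center value is missing
         [2.0, 2.0, 2.0]]
mask = [[1, 1, 1],
        [1, 0, 1],          # 0 marks the hole
        [1, 1, 1]]
filled, new_mask = partial_conv(image, mask)
```

Because the renormalization ignores hole pixels entirely, the hole's shape and position do not matter, which is the property behind handling holes of "any shape, size, location or distance from image borders"; the real network stacks many such layers with learned weights, shrinking the hole at each pass.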

Google Intends to Advance Machine Learning With its AutoML

In May, Google Brain, the company’s AI research project, debuted its AutoML artificial intelligence system that can generate its own AIs. Now, Google has unveiled an AutoML project to automate the design of machine learning models using so-called reinforcement learning. In this system, AutoML is a controller neural network that develops a “child” AI network for a specific task. The near-term goal is for AutoML to create child networks that outperform human-designed versions. Down the line, AutoML could improve vision for autonomous vehicles and AI robots.
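The controller/child loop can be illustrated with a deliberately tiny stand-in. In Google's system the controller is a recurrent network trained with policy gradients; the sketch below replaces it with a simple epsilon-greedy search over a made-up space of child configurations, and `child_reward` is a stand-in for actually training a child network and measuring its validation accuracy. All names and numbers here are invented for illustration.

```python
import random

# Hypothetical search space of child-network configurations.
search_space = ["2-layer", "4-layer", "8-layer"]

def child_reward(arch):
    """Stand-in for training the proposed child network and returning
    its validation accuracy (fixed values for illustration)."""
    return {"2-layer": 0.50, "4-layer": 0.90, "8-layer": 0.70}[arch]

def controller_search(space, steps=100, epsilon=0.2, seed=0):
    """Propose a child, observe its reward, and shift future proposals
    toward configurations that scored well."""
    rng = random.Random(seed)
    estimates = {arch: child_reward(arch) for arch in space}  # try each once
    for _ in range(steps):
        if rng.random() < epsilon:
            arch = rng.choice(space)                    # explore a random child
        else:
            arch = max(estimates, key=estimates.get)    # exploit best so far
        estimates[arch] = child_reward(arch)            # record observed reward
    return max(estimates, key=estimates.get)

best = controller_search(search_space)
```

The expensive part in practice is `child_reward`: each proposal means training a network from scratch, which is why so much AutoML research goes into making that evaluation cheaper.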

Startup Develops Technology to Identify Counterfeit Products

One year ago, New York startup Entrupy introduced a technology based on a handheld microscope camera and mobile app to spot counterfeit fashion accessories. Since then, the company says it has improved accuracy to 98 percent for handbags from 11 luxury brands, including Chanel, Gucci and Louis Vuitton. The tech has also been tested on consumer electronics (CE) products. Fashion brands have thus far used holographic tags, microprinting and, more recently, radio beacons woven into fabric to protect their products against counterfeiting. Internet shopping and second-hand retailers have made anti-counterfeiting more challenging.

OpenAI Rolls Out Virtual World, Google Opens DeepMind Lab

OpenAI, the Elon Musk-supported artificial intelligence lab, just debuted Universe, a virtual world that is a software training ground for everything from games to Web browsers. Universe begins with approximately 1,000 software titles, with games from Valve and Microsoft. OpenAI is also in discussions with Microsoft to add the Project Malmo platform, based on the game “Minecraft,” and hopes to add Google AI lab’s DeepMind Lab environment, which was just made public. The goal is that Universe will help machines develop flexible brainpower.

Image Recognition Tech Paving the Way for Future Advances

Image recognition, or computer vision, is the foundation of new opportunities in everything from automotive to advertising. A sign of its growing importance: the upcoming LDV Vision Summit, an annual conference on visual technology, is now in its third year. Computer vision has expanded through trends that have benefited other forms of AI, including open source software, deep learning technology, easier programming tools and faster, cheaper computing, opening up opportunities for a wide range of businesses.

Clarifai’s Artificial Intelligence Can Recognize Video Content

Startup Clarifai has developed artificial intelligence technology based on deep learning that can identify what is in a video. This ability could be significant for search engines, which currently have to rely on textual clues around a video to guess what might be in it. Clarifai’s AI can identify objects and pinpoint exactly when those objects appear in a video. This technology could be used to help advertisers and other companies analyze their videos.

Carnegie Mellon Computer Can Teach Itself Common Sense

The Never Ending Image Learner (NEIL), a computer program at Carnegie Mellon, searches the Web for images and tries to understand them in order to grow a visual database and gather common sense. This program is part of recent advances in computer vision where computer programs are able to identify and label objects in images, as well as recognize attributes such as color and lighting. This data will help computers comprehend the visual world.