Facebook Pushes Core Principles at Developer Conference

With an emphasis on privacy, Facebook made a series of compelling announcements at its annual F8 developer conference this week. Chief executive Mark Zuckerberg detailed six core principles that will be embedded across the company’s services: private interactions, improved data encryption, interoperability, general safety, reducing permanence and secure data storage. The principles arrive following a difficult period for the social giant, as it continues to face criticism regarding privacy-related scandals while contending with increased scrutiny from global regulators.

AWS Tool Aims to Simplify the Creation of AI-Powered Apps

Amazon introduced AWS Deep Learning Containers, a collection of Docker images preinstalled with popular deep learning frameworks, with the aim of making it easier to get AI-enabled apps running on Amazon Web Services. AWS general manager of deep learning Dr. Matt Wood noted that the company has “done all the hard work of building, compiling, and generating, configuring, optimizing all of these frameworks,” taking that burden off app developers. The container images are all “preconfigured and validated by Amazon.”

Study’s Fantasy Text-Based Game Tests AI Agents’ Abilities

Facebook AI Research, the Lorraine Research Laboratory in Computer Science and its Applications (LORIA), and University College London recently conducted a study to determine whether AI agents can navigate a fantasy text-based game, dubbed “LIGHT.” To examine the agents’ comprehension of the virtual world, the study investigated grounding in dialogue: the mutual knowledge, beliefs and assumptions that enable communication between two people. The large-scale, crowdsourced “LIGHT” environment allows AI agents and humans to interact.

Intel Describes Tool to Train AI Models With Encrypted Data

Intel revealed that it has made progress on an anonymized, encrypted method of model training. Industries such as healthcare, which need a way to use AI tools on sensitive, personally identifiable information, have been waiting for just such a capability. At the NeurIPS 2018 conference in Montreal, Intel showed off its open-source HE-Transformer, which works as a backend to its nGraph neural network compiler and allows AI models to operate on encrypted data. HE-Transformer is also based on a Microsoft Research encryption library.
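The core idea behind tools like HE-Transformer is homomorphic encryption: arithmetic performed on ciphertexts corresponds to arithmetic on the underlying plaintexts, so a model can compute on data it never sees in the clear. The sketch below is not Intel's API or the scheme HE-Transformer uses; it is a toy Paillier cryptosystem (additively homomorphic, tiny demo primes) meant only to illustrate the principle that ciphertext operations mirror plaintext ones.

```python
import math
import random

def paillier_keygen(p: int, q: int):
    """Generate a Paillier keypair from two primes (tiny primes: demo only)."""
    n = p * q
    n2 = n * n
    g = n + 1                      # standard choice of generator
    lam = math.lcm(p - 1, q - 1)   # Carmichael function of n
    mu = pow(lam, -1, n)           # valid because g = n + 1
    return (n, n2, g), (lam, mu)

def encrypt(pub, m: int) -> int:
    n, n2, g = pub
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:     # r must be invertible mod n
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c: int) -> int:
    n, n2, _ = pub
    lam, mu = priv
    l = (pow(c, lam, n2) - 1) // n  # the L function: L(x) = (x - 1) / n
    return (l * mu) % n

pub, priv = paillier_keygen(467, 479)

# Homomorphic addition: multiplying ciphertexts adds the plaintexts.
c_sum = encrypt(pub, 3) * encrypt(pub, 4) % pub[1]
print(decrypt(pub, priv, c_sum))   # 7

# Scalar multiplication: raising a ciphertext to k multiplies the plaintext
# by k -- enough to evaluate a weighted sum (a linear layer) on encrypted inputs.
c_scaled = pow(encrypt(pub, 3), 5, pub[1])
print(decrypt(pub, priv, c_scaled))  # 15
```

In this toy scheme the server holding only `pub` can add encrypted values and scale them by known constants, which is the building block for evaluating linear layers of a network on data it cannot read; production systems use far more capable lattice-based schemes.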

Facebook Adds 24 Languages to Rosetta Translation Feature

Facebook’s Rosetta is a machine learning system that extracts text in many languages from over one billion images in real time. Facebook built its own optical character recognition system capable of processing that huge volume of content, day in and day out. In a recent blog post, Facebook explained how Rosetta works, using a convolutional neural network to recognize and transcribe text, including non-Latin alphabets and non-English words. The system was trained on a mix of human- and machine-annotated public images.

Nvidia’s New AI Method Can Reconstruct an Image in Seconds

Nvidia debuted a deep learning method that can edit or reconstruct an image that is missing pixels or has holes via a process called “image inpainting.” The model can handle holes of “any shape, size, location or distance from image borders,” and could be integrated into photo editing software to remove undesirable imagery and replace it with a realistic digital image, instantly and with great accuracy. Previous AI-based approaches focused on rectangular regions in the center of the image and required post-processing.
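The key mechanism that lets such a model handle irregular holes is a masked, or “partial,” convolution: each convolution window is computed only over valid (non-hole) pixels, re-weighted by how many valid pixels it saw, and the hole mask shrinks layer by layer. The NumPy sketch below is a minimal single-channel illustration of that idea under stated assumptions (stride 1, valid padding, no bias), not Nvidia's actual implementation.

```python
import numpy as np

def partial_conv2d(x, mask, w):
    """Single-channel partial convolution (valid padding, stride 1).

    x    : (H, W) image with holes
    mask : (H, W) binary mask, 1 = valid pixel, 0 = hole
    w    : (k, k) convolution kernel
    Returns (out, new_mask): the output, normalized by the fraction of
    valid pixels under each window, and the updated mask (a location
    becomes valid once any input pixel under its window is valid).
    """
    k = w.shape[0]
    H, W = x.shape
    out = np.zeros((H - k + 1, W - k + 1))
    new_mask = np.zeros_like(out)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            m = mask[i:i + k, j:j + k]
            valid = m.sum()
            if valid > 0:
                patch = x[i:i + k, j:j + k] * m
                # Re-weight by k*k / (valid count) so partially masked
                # windows are not systematically dimmer than full ones.
                out[i, j] = (patch * w).sum() * (k * k / valid)
                new_mask[i, j] = 1.0
    return out, new_mask

# A flat image of ones with a one-pixel hole: after one partial convolution
# with an averaging kernel, the hole is filled in from its valid neighbors.
x = np.ones((5, 5))
mask = np.ones((5, 5))
mask[2, 2] = 0.0
x = x * mask                       # zero out the hole
out, new_mask = partial_conv2d(x, mask, np.ones((3, 3)) / 9.0)
print(out)                         # every entry is 1.0
print(new_mask)                    # the hole has been "closed"
```

Stacking such layers lets valid information propagate inward until the mask is entirely filled, which is what frees this family of methods from the rectangular, center-only holes of earlier approaches.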