By Phil Lelyveld, January 15, 2021
Two CES 2021 panels addressed the current state of and anticipated advances in quantum computing, which is already being applied to problems in business, academia and government. However, the hardware is not yet as stable and robust as people would like, and the algorithms are not yet up to the task of solving the problems that many researchers envision for them. That has not stopped entrepreneurs, major corporations and governments from dedicating significant resources to R&D and implementations, nor has it stopped VCs and sovereign funds from making major bets on who the winners will be. Continue reading CES: Sessions Examine the Potential of Quantum Computing
By Debra Kaufman, January 7, 2021
OpenAI unveiled DALL-E, which generates images from text using two multimodal AI systems that leverage computer vision and NLP. The name is a reference to surrealist artist Salvador Dalí and Pixar’s animated robot WALL-E. DALL-E relies on a 12-billion parameter version of GPT-3. OpenAI demonstrated that DALL-E can manipulate and rearrange objects in generated imagery and also create images from scratch based on text prompts. It has stated that it plans to “analyze how models like DALL·E relate to societal issues.” Continue reading OpenAI Unveils AI-Powered DALL-E Text-to-Image Generator
By ETCentric, November 9, 2020
To fully examine the inner workings and potential impact of deep learning language model GPT-3 on media, ETC’s project on AI & Neuroscience in Media is hosting a virtual event on November 10 from 11:00 am to 12:15 pm. RSVP here to join moderator Yves Bergquist of ETC@USC and presenter Dr. Mark Riedl of Georgia Tech as they present, “Machines That Can Write: A Deep Look at GPT-3 and its Implications for the Industry.” The launch last June of OpenAI’s GPT-3, a language model that uses deep learning to generate human-like text, has raised many questions in the creative community and the world at large. Continue reading Virtual Event: GPT-3 and Its Implications for the M&E Industry
By Debra Kaufman, September 24, 2020
Microsoft struck a deal with AI startup OpenAI to be the exclusive licensee of language comprehension model GPT-3. According to Microsoft EVP Kevin Scott, the deal is an “incredible opportunity to expand our Azure-powered AI platform in a way that democratizes AI technology.” Among potential uses are “aiding human creativity and ingenuity in areas like writing and composition, describing and summarizing large blocks of long-form data (including code), converting natural language to another language.” Continue reading Microsoft Inks Deal With OpenAI for Exclusive GPT-3 License
By Debra Kaufman, August 25, 2020
In the not-so-distant future there will likely be services that let the user choose plots, characters and locations, feed them into an AI-powered transformer, and receive a fully customized movie. The idea of using generative artificial intelligence to create content goes back to the 2015 computer vision program DeepDream, created by Google engineer Alexander Mordvintsev. Bringing that fantasy closer to reality is the AI system GPT-3, which creates convincingly coherent and interactive writing, often fooling the experts. Continue reading AI-Powered Movies in Progress, Writing Makes Major Strides
By Debra Kaufman, July 22, 2020
OpenAI’s Generative Pre-trained Transformer (GPT), a general-purpose language algorithm for using machine learning to answer questions, translate text and predictively write it, is currently in its third version. GPT-3, first described in a research paper published in May, is now in a private beta with a select group of developers. The goal is to eventually launch it as a commercial cloud-based subscription service. Its predecessor, GPT-2, released last year, was able to create convincing text in several styles. Continue reading Beta Testers Give Thumbs Up to New OpenAI Text Generator
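Since the beta exposes the model through a cloud API, a text-completion call boils down to sending a prompt plus a few sampling parameters. As an illustrative sketch only — the function, parameter names and key handling below are assumptions for illustration, not the service's documented interface — such a request might be assembled like this:

```python
# Hypothetical sketch of building a completion request for a
# cloud-hosted language model API. Endpoint, parameter names and
# credential handling are placeholders, not a documented interface.

def build_completion_request(prompt, max_tokens=64, temperature=0.7):
    """Assemble the JSON payload and headers for a completion call."""
    payload = {
        "prompt": prompt,            # text the model should continue
        "max_tokens": max_tokens,    # cap on the number of generated tokens
        "temperature": temperature,  # sampling randomness (lower = more deterministic)
    }
    headers = {
        "Authorization": "Bearer YOUR_API_KEY",  # placeholder credential
        "Content-Type": "application/json",
    }
    return payload, headers

payload, headers = build_completion_request("Translate to French: Hello, world.")
# The payload could then be POSTed to the service with any HTTP client.
```

The point of the subscription model is exactly this shape: the developer never touches model weights, only a prompt-in, text-out request.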
By Debra Kaufman, July 2, 2020
The National Research Cloud, which has bipartisan support in Congress, gained the approval of several universities, including Stanford, Carnegie Mellon and Ohio State, and the participation of Big Tech companies Amazon, Google and IBM. The project would give academics access to tech companies’ cloud data centers and public data sets, encouraging growth in AI research. Although the Trump administration has cut funding to other kinds of research, it has proposed doubling its spending on AI by 2022. Continue reading National Research Cloud Gains Big Tech, Legislator Support
By Debra Kaufman, June 16, 2020
In the wake of protests over police brutality, senators Cory Booker (D-New Jersey) and Kamala Harris (D-California) and representatives Karen Bass (D-California) and Jerrold Nadler (D-New York) introduced a police reform bill in Congress that includes limits on the use of facial recognition software. But not everyone is pleased. ACLU senior legislative counsel Neema Guliani, for example, pointed to the fact that facial recognition algorithms are typically less accurate for people with darker skin tones. Continue reading Facial Recognition Paused While Congress Considers Reform
By Debra Kaufman, June 15, 2020
Artificial intelligence research institute OpenAI, after collecting trillions of words, debuted its first commercial product, the API. Its goal is to create the “most flexible general-purpose AI language system” in existence. Currently, the API’s skills include translating between languages, writing news stories, and answering everyday questions. The API is in limited testing and, said chief executive Sam Altman, will eventually be released broadly for use in a range of tasks, such as customer support, education and games. Continue reading OpenAI Tests Commercial Version of Its AI Language System
By Debra Kaufman, May 21, 2020
At Microsoft’s Build 2020 developer conference, the company debuted a supercomputer built in collaboration with, and exclusively for, OpenAI on Azure. It’s the result of an agreement whereby Microsoft would invest $1 billion in OpenAI to develop new technologies for Microsoft Azure and extend AI capabilities. OpenAI agreed to license some of its IP to Microsoft, which could then sell it to partners as well as train and run AI models on Azure. Microsoft stated that the supercomputer is the fifth most powerful in the world. Continue reading Microsoft Announces Azure-Hosted OpenAI Supercomputer
By Debra Kaufman, March 18, 2020
The Federal Trade Commission is investigating the purchases of hundreds of small startups by Big Tech companies Amazon, Apple, Facebook, Google and Microsoft to determine if those companies have become too powerful. In 2019, a record-breaking 231 artificial intelligence startups were snapped up, which in many cases ended public availability of their products. According to CB Insights, that number compares to 42 AI startups acquired in 2014. Apple has been the No. 1 buyer of these startups since 2010. Continue reading Big Tech Companies Acquire Significant Number of AI Startups
By Debra Kaufman, February 4, 2020
Meet Meena, Google’s new chatbot powered by a neural network. According to the tech giant, Meena was trained on 341 gigabytes of public social-media chatter (8.5 times as much data as OpenAI’s GPT-2) and can talk about anything and even make jokes. With Meena, Google hopes to have made a chatbot that feels more human, always a challenge for AI-enabled media, whether it’s a chatbot or a character in a video game. To do so, Google created the Sensibleness and Specificity Average (SSA) as a metric for natural conversations. Continue reading Google Debuts Chatbot With Natural Conversational Ability
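The SSA metric is simply the average of two human-rated percentages: the share of a bot's responses judged sensible in context, and the share judged specific to that context. A minimal sketch of the computation (the function and variable names are my own, and the example ratings are made up):

```python
def ssa(sensibleness_pct, specificity_pct):
    """Sensibleness and Specificity Average: the mean of the two
    human-rated percentages -- the share of responses judged
    sensible in context and the share judged specific to it."""
    return (sensibleness_pct + specificity_pct) / 2

# Example with made-up ratings: 87% of responses sensible, 71% specific.
print(ssa(87, 71))  # -> 79.0
```

Averaging the two rewards chatbots that are both coherent and substantive; a bot that only ever replies "I don't know" can score high on sensibleness but is dragged down by its low specificity.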
By Yves Bergquist, December 5, 2019
We’re not going to lie: the annual “heads up CES” piece on artificial intelligence is a major exercise in hit or miss. This is because technology rarely evolves on an annual time scale, and certainly not advanced technology like AI. Yet, here we are once again. Sure, 2019 was as fruitful as it gets in the AI research community. The raw debate between Neural Networks Extremists (those pushing for an “all neural nets all the time” approach to intelligence) and the Fanatical Symbolists (those advocating a hybrid approach combining knowledge bases, expert systems and neural nets) took an ugly “Mean Girls” turn, with two of the titans of the field (Gary Marcus and Yann LeCun) trading real insults on Twitter just a few days ago. Continue reading The Human Interface: What We Expect From AI at CES 2020
By Debra Kaufman, October 2, 2019
Both IBM and Google recently advanced development of Text-to-Speech (TTS) systems to create high-quality digital speech. OpenAI found that, since 2012, the compute power needed to train the largest AI models has increased more than 300,000-fold. IBM created a much less compute-intensive model for speech synthesis, stating that it is able to synthesize speech in real time and adapt to new speaking styles with little data. Google and Imperial College London created a generative adversarial network (GAN) to create high-quality synthetic speech. Continue reading Google and IBM Create Advanced Text-to-Speech Systems
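For context on that 300,000× figure, the implied doubling time can be back-calculated. The six-year window below (2012 onward, roughly to the time of OpenAI's analysis) is an assumption for illustration, not a figure from the article:

```python
import math

growth = 300_000   # reported compute growth factor since 2012
years = 6          # assumed window for illustration (roughly 2012-2018)

doublings = math.log2(growth)             # number of times compute doubled
doubling_months = years * 12 / doublings  # average months per doubling

print(f"{doublings:.1f} doublings, one every {doubling_months:.1f} months")
```

That works out to a doubling roughly every four months, far faster than the two-year cadence of Moore's law, which is why the trend is driven mostly by spending and parallelism rather than chip improvements alone.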
By Debra Kaufman, August 26, 2019
Los Altos, CA-based startup Cerebras, dedicated to advancing deep learning, has created a computer chip almost nine inches (22 centimeters) on each side — huge by the standards of today’s chips, which are typically the size of postage stamps or smaller. The company plans to offer this chip to tech companies to help them improve artificial intelligence at a faster clip. The Cerebras Wafer-Scale Engine (WSE), which took three years to develop, has impressive stats: 1.2 trillion transistors, 46,225 square millimeters, 18 gigabytes of on-chip memory and 400,000 processing cores. Continue reading Cerebras Builds Enormous Chip to Advance Deep Learning
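Some back-of-the-envelope figures follow directly from the stats quoted above (the arithmetic and rounding are mine, using the article's numbers):

```python
transistors = 1.2e12   # 1.2 trillion transistors
area_mm2 = 46_225      # die area in square millimeters
memory_gb = 18         # on-chip memory, gigabytes
cores = 400_000        # processing cores

density = transistors / area_mm2                    # transistors per mm^2
mem_per_core_kb = memory_gb * 1024 * 1024 / cores   # KB of on-chip memory per core

print(f"~{density / 1e6:.0f}M transistors per mm^2")
print(f"~{mem_per_core_kb:.0f} KB of on-chip memory per core")
```

Roughly 26 million transistors per square millimeter and under 50 KB of memory per core: the design trades per-core memory for the bandwidth of keeping everything on one wafer.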