OpenAI Debuts Tool to Translate Natural Language into Code

OpenAI’s Codex, an AI system that translates natural language into code, was released via an API in private beta. Trained on billions of lines of public code, Codex can turn plain-English commands into working code in more than a dozen programming languages. It also powers Copilot, the GitHub service that suggests whole lines of code within Microsoft Visual Studio and other development environments. OpenAI explained that Codex will be offered for free during an “initial period,” and invites “businesses and developers to build on top of it through the API.”

OpenAI and Microsoft Introduce $100 Million AI Startup Fund

OpenAI unveiled a $100 million OpenAI Startup Fund to back early-stage companies pursuing ways that AI can have a “transformative” impact on healthcare, education, climate change and other fields. OpenAI chief executive Sam Altman said the fund will make “big, early bets” on no more than 10 such companies. OpenAI, with funding from Microsoft and others, will manage the fund. Selected companies will get “early access” to future OpenAI systems, support from OpenAI’s team and credits for Microsoft Azure.

OpenAI and EleutherAI Foster Open-Source Text Generators

OpenAI’s GPT-3, the much-noted AI text generator, is now being used in 300+ apps by “tens of thousands” of developers and is generating 4.5 billion words per day. Meanwhile, EleutherAI, a collective of researchers, is building transformer-based language models with plans to offer an open-source, GPT-3-sized model to the public for free. The non-profit OpenAI has an exclusivity deal with Microsoft that gives the tech giant unique access to GPT-3’s underlying code, but OpenAI has made its general API available to all comers, who can then build services on top of it.

GPT-3: New Applications Developed for OpenAI’s NLP Model

OpenAI’s natural language processing (NLP) model GPT-3 offers 175 billion parameters, compared with the mere 1.5 billion parameters of its predecessor, GPT-2. GPT-3’s immense size enables it to generate human-like text based on only a few examples of a task. Now that many users have gained access to the API, the result has been some interesting use cases and applications. But the ecosystem is still nascent, and how it matures, or whether it’s superseded by another NLP model, remains to be seen.

CES: Sessions Examine the Potential of Quantum Computing

Two CES 2021 panels addressed the current state of, and anticipated advances in, quantum computing, which is already being applied to problems in business, academia and government. However, the hardware is not as stable and robust as people would like, and the algorithms are not yet up to the task of solving the problems that many researchers envision for them. This has not stopped entrepreneurs, major corporations and governments from dedicating significant resources to R&D and implementations, nor VCs and sovereign funds from making major bets on who the winners will be.

OpenAI Unveils AI-Powered DALL-E Text-to-Image Generator

OpenAI unveiled DALL-E, which generates images from text using two multimodal AI systems that leverage computer vision and NLP. The name is a reference to surrealist artist Salvador Dalí and Pixar’s animated robot WALL-E. DALL-E relies on a 12-billion-parameter version of GPT-3. OpenAI demonstrated that DALL-E can manipulate and rearrange objects in generated imagery and can also create images from scratch based on text prompts. The company has stated that it plans to “analyze how models like DALL·E relate to societal issues.”

Virtual Event: GPT-3 and Its Implications for the M&E Industry

To fully examine the inner workings and potential impact of the deep learning language model GPT-3 on media, ETC’s project on AI & Neuroscience in Media is hosting a virtual event on November 10 from 11:00 am to 12:15 pm. RSVP here to join moderator Yves Bergquist of ETC@USC and presenter Dr. Mark Riedl of Georgia Tech as they present “Machines That Can Write: A Deep Look at GPT-3 and Its Implications for the Industry.” The launch last June of OpenAI’s GPT-3, a language model that uses deep learning to generate human-like text, has raised many questions in the creative community and the world at large.

Microsoft Inks Deal With OpenAI for Exclusive GPT-3 License

Microsoft struck a deal with AI startup OpenAI to become the exclusive licensee of the GPT-3 language model. According to Microsoft EVP Kevin Scott, the deal is an “incredible opportunity to expand our Azure-powered AI platform in a way that democratizes AI technology.” Among the potential uses are “aiding human creativity and ingenuity in areas like writing and composition, describing and summarizing large blocks of long-form data (including code), converting natural language to another language.”

AI-Powered Movies in Progress, Writing Makes Major Strides

In the not-so-distant future there will likely be services that let users choose plots, characters and locations, feed those choices into an AI-powered transformer, and receive a fully customized movie in return. The idea of using generative artificial intelligence to create content goes back to DeepDream, the computer vision program created in 2015 by Google engineer Alexander Mordvintsev. Bringing that fantasy closer to reality is the AI system GPT-3, which creates convincingly coherent and interactive writing, often fooling the experts.

Beta Testers Give Thumbs Up to New OpenAI Text Generator

OpenAI’s Generative Pre-trained Transformer (GPT), a general-purpose language algorithm that uses machine learning to answer questions, translate text and write predictively, is currently in its third version. GPT-3, first described in a research paper published in May, is now in a private beta with a select group of developers. The goal is to eventually launch it as a commercial cloud-based subscription service. Its predecessor, GPT-2, released last year, was able to create convincing text in several styles.

National Research Cloud Gains Big Tech, Legislator Support

The National Research Cloud, which has bipartisan support in Congress, has gained the approval of several universities, including Stanford, Carnegie Mellon and Ohio State, and the participation of Big Tech companies Amazon, Google and IBM. The project would give academics access to tech companies’ cloud data centers and public data sets, encouraging growth in AI research. Although the Trump administration has cut funding to other kinds of research, it has proposed doubling its spending on AI by 2022.

Facial Recognition Paused While Congress Considers Reform

In the wake of protests over police brutality, senators Cory Booker (D-New Jersey) and Kamala Harris (D-California) and representatives Karen Bass (D-California) and Jerrold Nadler (D-New York) introduced a police reform bill in the House of Representatives that includes limits on the use of facial recognition software. But not everyone is pleased. ACLU senior legislative counsel Neema Guliani, for example, noted that facial recognition algorithms are typically less accurate on darker skin tones.

OpenAI Tests Commercial Version of Its AI Language System

Artificial intelligence research institute OpenAI, after collecting trillions of words, debuted its first commercial product, the API. Its goal is to create the “most flexible general-purpose AI language system” in existence. Currently, the API’s skills include translating between languages, writing news stories and answering everyday questions. The API is in limited testing and, said chief executive Sam Altman, will later be released broadly for use in a range of tasks, such as customer support, education and games.
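For illustration, a developer-side call to such an API might look like the sketch below. The endpoint path, engine name (“davinci”), parameter names and the `OPENAI_API_KEY` environment variable are assumptions drawn from publicly documented early-beta conventions, not details from the article.

```python
# Minimal sketch of building (and optionally sending) a text-completion request
# to OpenAI's API. Endpoint and parameters are illustrative assumptions.
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/engines/davinci/completions"  # assumed path

def build_completion_request(prompt, max_tokens=64):
    """Build the JSON payload and headers for a completion call."""
    payload = {"prompt": prompt, "max_tokens": max_tokens, "temperature": 0.7}
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
    }
    return payload, headers

payload, headers = build_completion_request("Translate to French: Hello, world.")
print(json.dumps(payload))

# Only contact the network if a key is actually configured.
if os.environ.get("OPENAI_API_KEY"):
    req = urllib.request.Request(
        API_URL, data=json.dumps(payload).encode(), headers=headers
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["text"])
```

The pattern is the point here: a plain HTTPS POST with a prompt and a token budget, which is what lets third parties layer customer-support, education or game features on top of a hosted model.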

Microsoft Announces Azure-Hosted OpenAI Supercomputer

At Microsoft’s Build 2020 developer conference, the company debuted a supercomputer built on Azure in collaboration with, and exclusively for, OpenAI. It is the result of an agreement under which Microsoft would invest $1 billion in OpenAI to develop new technologies for Microsoft Azure and extend AI capabilities. OpenAI agreed to license some of its IP to Microsoft, which could then sell it to partners as well as train and run AI models on Azure. Microsoft stated that the supercomputer is the fifth most powerful in the world.

Big Tech Companies Acquire Significant Number of AI Startups

The Federal Trade Commission is investigating the purchases of hundreds of small startups by Big Tech companies Amazon, Apple, Facebook, Google and Microsoft to determine whether those companies have become too powerful. In 2019, a record-breaking 231 artificial intelligence startups were snapped up, which in many cases ended public availability of their products. According to CB Insights, that number compares to 42 AI startups acquired in 2014. Apple has been the No. 1 buyer of these startups since 2010.