GPT-3: New Applications Developed for OpenAI’s NLP Model

OpenAI’s natural language processing (NLP) model GPT-3 offers 175 billion parameters, compared with its predecessor GPT-2’s mere 1.5 billion. That immense size enables GPT-3 to generate human-like text from only a few examples of a task. Now that many users have gained access to the API, some interesting use cases and applications have emerged. But the ecosystem is still nascent, and how it matures, or whether it is superseded by another NLP model, remains to be seen.
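To make the few-shot idea concrete, here is a minimal sketch of the kind of prompt developers send to GPT-3: a short task description, a handful of worked examples, and an unfinished item the model completes in the same pattern. The task, reviews and labels below are illustrative placeholders, not anything prescribed by OpenAI.

```python
# Hypothetical few-shot prompt: two worked examples followed by an
# unfinished one that the model is expected to complete in the same style.
task_examples = [
    ("The service was quick and the staff were friendly.", "Positive"),
    ("My order arrived late and the food was cold.", "Negative"),
]

query = "The pasta was decent but the room was far too noisy."

# Build a plain-text prompt: the model sees only this string, with no
# task-specific training; the pattern in the examples implies the task.
prompt_lines = ["Classify the sentiment of each review."]
for review, label in task_examples:
    prompt_lines.append(f"Review: {review}\nSentiment: {label}")
prompt_lines.append(f"Review: {query}\nSentiment:")

prompt = "\n\n".join(prompt_lines)
print(prompt)
```

Because the examples live entirely in the prompt, the same model can be pointed at translation, summarization or question answering simply by changing the text, which is what makes the API attractive for the varied applications described here.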

AI-Powered Movies in Progress, Writing Makes Major Strides

In the not-so-distant future there will likely be services that let users choose plots, characters and locations, which are then fed into an AI-powered transformer to produce a fully customized movie. The idea of using generative artificial intelligence to create content goes back to DeepDream, the 2015 computer vision program created by Google engineer Alexander Mordvintsev. Bringing that fantasy closer to reality is the AI system GPT-3, which creates convincingly coherent and interactive writing that often fools the experts.

Beta Testers Give Thumbs Up to New OpenAI Text Generator

OpenAI’s Generative Pre-trained Transformer (GPT), a general-purpose language model that uses machine learning to answer questions, translate text and write predictively, is currently in its third version. GPT-3, first described in a research paper published in May, is now in a private beta with a select group of developers, with the goal of eventually launching it as a commercial cloud-based subscription service. Its predecessor, GPT-2, released last year, was able to create convincing text in several styles.
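For beta participants, using the service amounts to sending a prompt to a hosted endpoint and reading back the completion. The following is a rough sketch using the beta-era openai Python client; the engine name, sampling parameters and translation prompt are assumptions made for illustration rather than details drawn from the article.

```python
import openai  # the openai Python package used during the GPT-3 beta

openai.api_key = "YOUR_API_KEY"  # placeholder; keys were issued to beta participants

# A simple translation prompt in the few-shot style GPT-3 responds to.
prompt = (
    "Translate English to French.\n\n"
    "English: Where is the nearest train station?\n"
    "French: Où est la gare la plus proche ?\n\n"
    "English: The weather is lovely today.\n"
    "French:"
)

# Completion.create was the beta-era endpoint and "davinci" the largest
# GPT-3 engine; both are assumptions about the then-current client.
response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=32,
    temperature=0.3,
    stop="\n",
)

print(response["choices"][0]["text"].strip())
```

The cloud-based subscription model OpenAI is aiming for would wrap exactly this kind of call: developers pay for completions rather than hosting the 175-billion-parameter model themselves.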