AI-Powered Movies in Progress, Writing Makes Major Strides

In the not-so-distant future there will likely be services that let users choose plots, characters and locations, feed those choices into an AI-powered transformer and receive a fully customized movie. The idea of using generative artificial intelligence to create content goes back to DeepDream, the computer vision program built in 2015 by Google engineer Alexander Mordvintsev. Bringing that fantasy closer to reality is the AI system GPT-3, which produces convincingly coherent and interactive writing, often fooling experts.

OneZero reports that Duke University AI scientist David Carlson thinks “someone will eventually try to do this … [but] that’s … years at the minimum.” Reaching that point is more complicated than “making a robot watch every movie” and “very different from a single GAN, trained on so much movie data that it can just spit out entire movies at the push of a button.”

In fact, says OneZero, “there’s a pretty daunting gap between what cutting-edge AI can do right now, and what seems feasible based on contemporary science fiction and deceptive headlines.”

Carlson suggested a “stepwise procedure.” “Given the current technology, we could probably actually go from a screenplay to an audio recording that might be convincing in some way,” he said. When it comes to video, he added, the computer still has a hard time with continuity.

“Unless you specifically tell it that there has to be logical consistency between scenes, it’s very conceivable that you have your first scene where you have a set of people, and then you just switch angles and it’s a completely different set of people talking about the same thing,” said Carlson, who noted that what’s needed is a way to “represent the internal consistencies in math.”
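Carlson has not published such a system, so the idea can only be illustrated. Purely as a toy example of what “representing the internal consistencies in math” might look like at its simplest, the Python sketch below represents each generated scene by its set of characters and flags transitions where the cast silently changes. The scene data, function names and threshold are all hypothetical.

```python
# A minimal, hypothetical sketch of a scene-to-scene consistency check:
# each generated scene is reduced to the set of characters it contains,
# and transitions where the cast changes too much are flagged.

from typing import List, Set


def consistency_score(prev_cast: Set[str], next_cast: Set[str]) -> float:
    """Jaccard overlap between the casts of two consecutive scenes (1.0 = identical)."""
    if not prev_cast and not next_cast:
        return 1.0
    return len(prev_cast & next_cast) / len(prev_cast | next_cast)


def flag_discontinuities(scenes: List[Set[str]], threshold: float = 0.5) -> List[int]:
    """Return indices of scene transitions whose cast overlap falls below the threshold."""
    return [
        i for i in range(1, len(scenes))
        if consistency_score(scenes[i - 1], scenes[i]) < threshold
    ]


if __name__ == "__main__":
    # Hypothetical output of a scene generator.
    generated_scenes = [
        {"Ava", "Ben"},           # scene 1
        {"Ava", "Ben"},           # scene 2: same conversation, new camera angle
        {"Clara", "Dev", "Eli"},  # scene 3: cast silently replaced mid-conversation
    ]
    print(flag_discontinuities(generated_scenes))  # -> [2]
```

In a real pipeline the constraint would presumably cover far more than character names (locations, props, lighting, dialogue state), but the principle is the same: the continuity a human editor enforces by eye has to be expressed as something the system can score.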

Oscar Sharp, director of “Sunspring,” a 2016 short film written by an AI, said that asking “what wouldn’t someone do?” is “a good shortcut to something that’s quite creative.”

Sharp next tried to make the short film “Zone Out” using a stepwise procedure much like the one Carlson suggests, assigning different jobs to multiple AI systems, including a convolutional neural network to find visuals matching the AI-generated script and another system to cast actors.
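Sharp has not released the “Zone Out” pipeline, so the retrieval step can only be sketched. The toy Python example below assumes a convolutional network has already tagged each stock clip offline with label scores; it then picks the clip whose labels best cover a generated scene description. The clip names, labels and scene text are all invented for illustration.

```python
# A toy sketch of the clip-retrieval step in a "Zone Out"-style pipeline.
# Assume an offline CNN tagging pass has produced label confidences per clip;
# here we pick the clip whose labels best overlap a generated scene description.

from typing import Dict

# Hypothetical output of the offline CNN pass: clip id -> {label: confidence}.
CLIP_LABELS: Dict[str, Dict[str, float]] = {
    "clip_001.mp4": {"office": 0.91, "desk": 0.74, "person": 0.88},
    "clip_002.mp4": {"forest": 0.95, "river": 0.63},
    "clip_003.mp4": {"city": 0.82, "night": 0.79, "car": 0.70},
}


def match_clip(scene_description: str, clip_labels: Dict[str, Dict[str, float]]) -> str:
    """Return the clip whose CNN labels best overlap the words of the scene description."""
    words = set(scene_description.lower().split())

    def score(labels: Dict[str, float]) -> float:
        return sum(conf for label, conf in labels.items() if label in words)

    return max(clip_labels, key=lambda clip: score(clip_labels[clip]))


if __name__ == "__main__":
    scene = "Two people argue across a desk in a cramped office"
    print(match_clip(scene, CLIP_LABELS))  # -> clip_001.mp4
```

Even this crude word-overlap matching hints at why the result “fell down on most fronts”: nothing in the retrieval step knows anything about tone, pacing or continuity between the clips it selects.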

The result, he admitted, “fell down on most fronts.” “Machine learning is a very useful metaphor for human thinking,” he said. “When you shut your eyes, the model is still there and it’s still predicting.”

The Wall Street Journal reports that GPT-3’s “ability to interact in English and generate coherent writing has been startling hardened experts.” OpenAI, which released the autoregressive language model, revealed that “GPT-3 can work out analogy questions from the old SAT with better results than the average college applicant … [and] can generate news articles that readers may have trouble distinguishing from human-written ones.”

Early adopters have discovered that GPT-3 can also “complete a half-written investment memo, produce stories and letters written in the style of famous people, generate business ideas and even write certain kinds of software code based on a plain-English description of the desired software.” GPT-3 has 175 billion parameters, a “key measure of an AI model’s capacity,” more than “10 times that of its nearest rival, Microsoft’s Turing-NLG.”
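For readers curious what “completing a half-written investment memo” looks like in practice, here is a minimal sketch of the prompt-and-complete pattern using OpenAI’s beta-era Python client. The engine name, sampling settings and prompt are illustrative assumptions rather than details from the article, and access requires an API key from the gated beta.

```python
# A minimal sketch of the prompt-and-complete pattern described above,
# using OpenAI's beta-era Python client (pre-1.0 "openai" package).
# Engine name, sampling settings and prompt text are illustrative assumptions.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # access is gated during the beta

prompt = (
    "Below is the first half of an investment memo. Continue it in the same tone.\n\n"
    "Memo: We propose allocating 10% of the fund to early-stage robotics..."
)

response = openai.Completion.create(
    engine="davinci",   # assumed GPT-3 base engine name in the beta API
    prompt=prompt,
    max_tokens=200,     # length of the generated continuation
    temperature=0.7,    # moderate randomness in sampling
)

print(response["choices"][0]["text"])
```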

OpenAI will release GPT-3 as a commercial product when the beta test is finished.

Related:
AI Magic Makes Century-Old Films Look New, Wired, 8/12/20
