During the Moogfest music and technology festival in North Carolina, Google Brain researcher Douglas Eck outlined Magenta, a new artificial intelligence research project at Google. The group, expected to launch publicly next month, plans to use the company’s machine learning engine, TensorFlow, to explore how computers and AI systems could be trained to create original art and media such as music and video. The initiative should prove challenging; so far, even the most advanced AI systems have struggled to replicate the styles of existing artists.
However, the Magenta team hopes to develop new tools that would make it easier for Google and others to explore the creative possibilities of computers and AI systems.
“Much in the same way that Google opened up TensorFlow, Eck said Magenta will make available its tools to the public,” explains Quartz. “The first thing it will be launching is a simple program that will help researchers import music data from MIDI music files into TensorFlow, which will allow their systems to get trained on musical knowledge.”
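Magenta’s actual MIDI import tool was not detailed at the talk, but the idea of turning MIDI note data into something a TensorFlow model can train on might look like the following sketch. Everything here is an illustrative assumption: the `piano_roll` function and the note-tuple format are hypothetical, and a real pipeline would first extract those tuples from a MIDI file with a parser before handing the resulting array to TensorFlow.

```python
import numpy as np

def piano_roll(notes, fs=4):
    """Convert (pitch, start_sec, end_sec) note tuples into a binary
    piano-roll matrix of shape (128, time_steps), sampled at fs steps
    per second. A real pipeline would first pull these tuples out of a
    MIDI file before feeding the array to a TensorFlow model."""
    total = max(end for _, _, end in notes)
    steps = int(np.ceil(total * fs))
    roll = np.zeros((128, steps), dtype=np.float32)
    for pitch, start, end in notes:
        # Mark every time step during which this pitch sounds.
        roll[pitch, int(start * fs):int(np.ceil(end * fs))] = 1.0
    return roll

# A two-note example: middle C (60) then E (64), half a second each.
melody = [(60, 0.0, 0.5), (64, 0.5, 1.0)]
roll = piano_roll(melody)
print(roll.shape)  # (128, 4)
```

The piano-roll layout is just one common encoding for music data; sequence-of-events encodings are another option, and which one Magenta’s tool will produce was not specified.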
According to team member Adam Roberts, resources and software related to Magenta’s efforts would be made available via GitHub and a blog starting June 1. Roberts also demonstrated a digital synthesizer program that used AI to complete a melody from the notes played by a musician.
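Roberts’s demo presumably used a trained neural model, which was not described in detail. As a far simpler illustration of the same idea of continuing a musician’s melody, here is a toy first-order Markov chain over MIDI pitches; all names here are hypothetical and this is not Magenta’s approach:

```python
import random

def train_transitions(melody):
    """Build a first-order Markov table: for each pitch in the training
    melody, record which pitches follow it."""
    table = {}
    for a, b in zip(melody, melody[1:]):
        table.setdefault(a, []).append(b)
    return table

def continue_melody(seed, table, length=8, rng=None):
    """Extend the seed notes by repeatedly sampling a next pitch from
    the transition table, stopping early at a dead end."""
    rng = rng or random.Random(0)
    out = list(seed)
    while len(out) < length:
        choices = table.get(out[-1])
        if not choices:  # no observed continuation for this pitch
            break
        out.append(rng.choice(choices))
    return out

riff = [60, 62, 64, 62, 60, 62, 64, 65]  # training melody as MIDI pitches
table = train_transitions(riff)
print(continue_melody([60, 62], table, length=8))
```

A neural sequence model like the one Roberts likely demoed learns much longer-range structure than this one-step lookup, but the interface is the same: notes in, a plausible continuation out.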
“Eck said the inspiration for Magenta had come from other Google Brain projects,” notes Quartz, “like Google DeepDream, where AI systems were trained on image databases to ‘fill in the gaps’ in pictures, trying to find structures in images that weren’t necessarily present in the images themselves. The result was the psychedelic images that the system could create, where ordinary images were infused with skyscrapers, eyeballs, or household items.”
Eck and his team want to explore ways to train computers to create engaging music, then move on to new and interesting ways to generate images and video.