November 27, 2019
Facebook, the University of Lorraine and University College London published a paper detailing their research into using artificial intelligence to create complex game worlds. Building video game environments has always been time-consuming, but this group’s approach — using content from the fantasy text-based multiplayer adventure “LIGHT” — demonstrated how machine learning algorithms can learn to “assemble different elements, arranging locations and populating them with characters and objects.”
VentureBeat reports that the paper’s authors also “demonstrate that these … tools can aid humans interactively in designing new game environments.” “LIGHT,” a “research environment in the form of a text-based game within which AI and humans interact as player characters,” offers “crowdsourced natural language descriptions of 663 locations based on a set of regions and biomes, along with 3,462 objects and 1,755 characters.”
The team “built a model to generate game worlds” and considered two ranking models, “one where models had access to the location name only and a second where they had access to the location description information.”
The models “predicted the neighboring locations of each existing location”; each location could connect to up to four neighboring locations and could not appear multiple times in a single map. Separate models “produced objects, or items with which characters could interact.”
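The constraints described above — at most four neighbors per location, and no location repeated within a map — can be sketched as a filter over a model’s ranked candidates. This is a minimal illustration, not the paper’s code; the function and variable names are assumptions.

```python
# Hypothetical sketch of the neighbor-selection constraints: each
# location connects to at most four neighbors, and no location may
# appear twice in a single map. Names are illustrative only.

MAX_NEIGHBORS = 4

def select_neighbors(location, ranked_candidates, placed):
    """Pick top-ranked candidate locations subject to the constraints."""
    neighbors = []
    for candidate in ranked_candidates:  # best-scored first
        if len(neighbors) == MAX_NEIGHBORS:
            break
        if candidate in placed:          # no duplicates in one map
            continue
        neighbors.append(candidate)
        placed.add(candidate)
    return neighbors

placed = {"abandoned tower"}
ranked = ["dungeon", "abandoned tower", "forest", "swamp", "crypt", "cave"]
print(select_neighbors("abandoned tower", ranked, placed))
# ['dungeon', 'forest', 'swamp', 'crypt']
```

In a real system the ranked candidate list would come from one of the two ranking models described above; here it is a hard-coded stand-in.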
Based on “characters and objects associated with locations from ‘LIGHT,’ the researchers created data sets to train algorithms that placed both objects and characters in locations, as well as objects within objects.” Another family of models, built on a Transformer architecture pre-trained on two billion Reddit comments and fed the corpora from the world construction task, created new game elements.
So that the elements would all “work in concert,” an empty map grid “was initialized to represent the number of possible locations, with a portion of grid positions marked inaccessible to make exploration more interesting.”
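The grid setup described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper’s code; the function name, the blocked-cell fraction, and the use of a random mask are all assumptions.

```python
import random

# Illustrative sketch of initializing an empty map grid with a portion
# of positions marked inaccessible, as described above. Parameter names
# and the 25% blocked fraction are assumptions, not from the paper.

def init_grid(size, blocked_fraction=0.25, seed=0):
    """Return a size x size grid; None = empty, "X" = inaccessible."""
    rng = random.Random(seed)
    grid = [["X" if rng.random() < blocked_fraction else None
             for _ in range(size)]
            for _ in range(size)]
    # keep the center open for the first placed location
    grid[size // 2][size // 2] = None
    return grid

grid = init_grid(7)
```

The inaccessible cells act as walls, so the later fill step routes around them and produces less uniform, more interesting maps.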
The system populated the central location and “iteratively fill[ed] in neighboring locations until the entire grid was populated … then, for each placed location, a model predicted which characters and objects should populate that location before another model predicted if objects should be placed inside existing objects.”
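The generation loop described above — seed the center, iteratively fill neighbors until the grid is full, then populate each location — can be sketched as follows. This is a hedged reconstruction under stated assumptions: the placeholder functions stand in for the paper’s learned models, and every name here is illustrative, not from the paper.

```python
from collections import deque

# Hedged sketch of the generation loop: the placeholder functions
# below stand in for the paper's learned models.

def predict_contents(location):
    """Placeholder for the model that picks characters and objects."""
    return ["a wandering knight"], ["a rusty chest", "a key"]

def predict_containment(objects):
    """Placeholder for the model that nests objects inside objects."""
    return {"a key": "a rusty chest"}

def build_world(grid):
    size = len(grid)
    r0 = c0 = size // 2
    grid[r0][c0] = "central location"
    frontier = deque([(r0, c0)])
    n = 0
    while frontier:  # iteratively fill neighboring cells outward
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < size and 0 <= nc < size and grid[nr][nc] is None:
                grid[nr][nc] = f"location-{n}"  # stand-in for model output
                n += 1
                frontier.append((nr, nc))
    # then, for each placed location, predict its characters and
    # objects, and whether objects sit inside other objects
    world = {}
    for row in grid:
        for loc in row:
            if loc is not None and loc != "X":
                characters, objects = predict_contents(loc)
                world[loc] = {"characters": characters,
                              "objects": objects,
                              "inside": predict_containment(objects)}
    return world

world = build_world([[None] * 3 for _ in range(3)])
```

Cells marked "X" (inaccessible) are simply skipped, which is how the blocked portion of the grid shapes the final map.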
The research team also suggested “a human-aided design paradigm, where the models could provide suggestions for which elements to place.” In their experiments, “the team used their framework to generate 5,000 worlds with a maximum size of 50 arranged locations,” with “around 65 percent and 60 percent of characters and objects in the data set, respectively … generated after the full 5,000 maps.”
“These steps show a path to creating cohesive game worlds from crowdsourced content, both with model-assisted human creation tooling and fully automated generation,” they stated in their research paper.
More details are available via the Facebook Artificial Intelligence blog.