Facebook Makes New Natural Language Model Open Source

Facebook and AI startup Hugging Face open-sourced their new natural language processing model, Retrieval Augmented Generation (RAG), which finds and interprets contextual information on the fly. RAG is now available as a component of the Hugging Face transformer library, integrated with the new Datasets library to offer the indexed knowledge source RAG relies on. According to Facebook, RAG can alter or add to its internal knowledge, letting researchers control the model without needing to retrain it.
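
Because RAG ships as a component of the transformers library, it can be tried in a few lines. The sketch below follows the pattern in the Hugging Face documentation, using Facebook’s published facebook/rag-sequence-nq checkpoint; the use_dummy_dataset flag loads a small sample index in place of the full Wikipedia index, which is far too large for a quick demo:

```python
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

# Load the pretrained RAG checkpoint Facebook published on the Hugging Face hub.
tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")

# use_dummy_dataset=True swaps in a small sample index so the full
# Wikipedia index does not need to be downloaded for a demo.
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
)
model = RagSequenceForGeneration.from_pretrained(
    "facebook/rag-sequence-nq", retriever=retriever
)

# Ask an open-domain question; retrieval happens inside generate().
inputs = tokenizer("when did mammals first appear on earth", return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```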

VentureBeat reports that most natural language models thus far have been used for tasks “where a human could produce the solution without background knowledge”; “by contrast, RAG uses input data to retrieve a relevant set of documents from a database like Wikipedia.”

For example, given a query about the origins of mammals on Earth, RAG “might surface documents for ‘Mammal,’ ‘History of Earth,’ and ‘Evolution of Mammals’ … [which] are concatenated as context with the input and then fed into the model to produce the output text.” RAG leverages so-called late fusion “to integrate knowledge from retrieved documents, meaning it makes answer predictions for document-question pairs before aggregating the final prediction scores.”
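
To make the late-fusion step concrete, here is a minimal illustrative sketch (function and variable names are hypothetical, not the library’s internals): an answer score is computed per retrieved document first, and only then are the scores aggregated, weighted by how relevant each document is to the question:

```python
import torch

# Hypothetical sketch of late fusion as described above; not RAG's actual code.
def late_fusion_score(doc_scores: torch.Tensor,
                      answer_logprobs: torch.Tensor) -> torch.Tensor:
    """Combine per-document answer predictions into one final score.

    doc_scores:      (n_docs,) retrieval scores for each retrieved document
    answer_logprobs: (n_docs,) log-probability of the candidate answer given
                     the question paired with each individual document
    """
    # Normalize retrieval scores into p(doc | question).
    doc_logprobs = torch.log_softmax(doc_scores, dim=0)
    # Marginalize: p(answer | q) = sum over docs of
    #              p(doc | q) * p(answer | q, doc)
    return torch.logsumexp(doc_logprobs + answer_logprobs, dim=0)

# Example: three retrieved documents scored against one candidate answer.
final = late_fusion_score(torch.tensor([2.0, 0.5, -1.0]),
                          torch.tensor([-0.3, -2.0, -5.0]))
print(final)  # a single aggregated log-probability
```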

RAG’s performance improves when it has “access to documents containing clues to the answer but where the answer isn’t stated verbatim … [and it] even generates answers in certain situations where the answer is not contained in any of the retrieved documents.” Facebook said it benchmarked RAG on “open-domain datasets like NaturalQuestions, which contains questions from Google Search users,” and found that it generated correct answers even when none of the retrieved documents stated the answer outright.

RAG was also tested with questions inspired by “Jeopardy!,” highlighting its strength on knowledge-intensive natural language tasks. In fact, RAG’s responses were “more specific, diverse, and factual than those from comparable models, perhaps owing to RAG’s ability to synthesize responses using disparate pieces of information drawn from multiple sources.”

Facebook research manager Sebastian Riedel stated that RAG is not being used in production at the social media company as “the team behind it is actively iterating to mitigate potential bias” and has “restricted documents in the training dataset to Wikipedia.” The team is also “exploring a version of RAG that minimizes remaining risks so they can get to a point where the outputs are consistently safe … and they’re looking into how they can scale RAG, make it multimodal, and have it operate using multiple knowledge sources at once.”

Facebook revealed that RAG, whose “true strength lies in its flexibility,” has achieved strong results on NaturalQuestions, CuratedTrec, and WebQuestions, “demonstrating that state-of-the-art machine reading performance can be achieved with a generative, rather than extractive, reader.”
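
That flexibility follows from the design: the retriever’s index lives outside the model, so the knowledge source can be swapped without retraining. Below is a hedged sketch in the spirit of the Hugging Face RAG example scripts, building a toy two-passage index; the passage titles and texts are illustrative placeholders, not data from the article:

```python
import torch
from datasets import Dataset
from transformers import (DPRContextEncoder, DPRContextEncoderTokenizerFast,
                          RagRetriever)

# Toy knowledge source; titles and texts are illustrative placeholders.
passages = {
    "title": ["Mammal", "Evolution of mammals"],
    "text": ["Mammals are a group of vertebrate animals ...",
             "Mammals evolved from synapsid ancestors ..."],
}

# Embed each passage with the DPR context encoder that RAG's retriever expects.
ctx_encoder = DPRContextEncoder.from_pretrained(
    "facebook/dpr-ctx_encoder-single-nq-base")
ctx_tokenizer = DPRContextEncoderTokenizerFast.from_pretrained(
    "facebook/dpr-ctx_encoder-single-nq-base")

def embed(batch):
    inputs = ctx_tokenizer(batch["title"], batch["text"], truncation=True,
                           padding="longest", return_tensors="pt")
    with torch.no_grad():
        return {"embeddings": ctx_encoder(**inputs).pooler_output.numpy()}

dataset = Dataset.from_dict(passages).map(embed, batched=True)
dataset.add_faiss_index(column="embeddings")

# Point the pretrained retriever at the new index; no retraining involved.
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="custom", indexed_dataset=dataset)
```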

Related:
Facebook Is Opening Its Experimental Predictions App to All Users, Engadget, 10/1/20
