Advances by OpenAI and DeepMind Boost AI Language Skills

Advances in language comprehension for artificial intelligence are coming from San Francisco's OpenAI and London-based DeepMind. OpenAI, which has been working on large language models, says it now lets customers fine-tune its GPT-3 models using their own custom data, while the Alphabet-owned DeepMind is talking up Gopher, a 280-billion-parameter deep-learning language model that has scored impressively on tests. Sophisticated language models have the ability to comprehend natural language, as well as predict and generate text — requirements for creating advanced AI systems that can dispense information and advice or that are required to follow instructions.

“According to Gartner, 80 percent of technology products and services will be built by those who are not technology professionals by 2024. This trend is fueled by the accelerated AI adoption in the business community, which sometimes requires specifically tailored AI workloads,” VentureBeat wrote, citing information provided by OpenAI.

GPT-3 was developed with accessibility in mind. “How this manifests is that you can customize a GPT-3 model using one command line invocation,” OpenAI technical staffer Rachel Lim told VentureBeat. With just a few examples, GPT-3 can perform a multitude of natural language tasks, a concept known as few-shot learning or prompt design.
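The few-shot "prompt design" idea can be illustrated with a short sketch: the prompt itself carries a handful of labeled examples, and the model is asked to continue the pattern. The Python below builds such a prompt; the task, example data, and helper name are illustrative assumptions, not drawn from OpenAI's documentation.

```python
# Few-shot prompt design: prime a language model by embedding a handful of
# labeled examples directly in the prompt, then appending the new input with
# its label left blank for the model to complete. All names and example
# data here are illustrative.

def build_few_shot_prompt(examples, new_input):
    """Assemble a few-shot classification prompt from (text, label) pairs."""
    lines = []
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # The final entry has no label, cueing the model to supply one.
    lines.append(f"Review: {new_input}\nSentiment:")
    return "\n\n".join(lines)

examples = [
    ("The battery lasts all day and the screen is gorgeous.", "Positive"),
    ("It stopped working after a week.", "Negative"),
]

prompt = build_few_shot_prompt(examples, "Setup was quick and painless.")
print(prompt)
```

The resulting string would be sent to the model as-is; the two labeled examples are what make this "few-shot" rather than zero-shot.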

On its blog, OpenAI says one API customer increased correct outputs from 83 percent to 95 percent through custom fine-tuning, and says another reduced error by 50 percent by inputting new data each week. GPT-3, commercially available since 2020, was as of Q1 “being used in more than 300 different apps by ‘tens of thousands’ of developers and producing 4.5 billion words per day,” VentureBeat reports.
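The custom fine-tuning OpenAI describes works from a file of training examples. At the time, the documented format was JSON Lines, one prompt/completion pair per line. The sketch below prepares such a file in Python; the example texts, separator, and filename are illustrative, not taken from any customer's actual data.

```python
import json

# GPT-3 fine-tuning data was supplied as JSON Lines: one
# {"prompt": ..., "completion": ...} object per line. The examples,
# "###" separator, and filename below are illustrative assumptions.
training_examples = [
    {"prompt": "Summarize: The meeting moved to 3pm.\n\n###\n\n",
     "completion": " Meeting rescheduled to 3pm."},
    {"prompt": "Summarize: Invoice #118 is overdue.\n\n###\n\n",
     "completion": " Invoice overdue."},
]

with open("train.jsonl", "w") as f:
    for example in training_examples:
        f.write(json.dumps(example) + "\n")

# Lim's "one command line invocation" then referred to the OpenAI CLI,
# roughly of the form: openai api fine_tunes.create -t train.jsonl
```

Each weekly data refresh, as in the blog's second example, would amount to regenerating this file and launching a new fine-tune from it.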

DeepMind claims its Gopher beat GPT-3 in tests in subjects including humanities, math, medicine and general knowledge. Although human experts beat all the AI algorithms in every subject, in some areas, like high school reading comprehension, Gopher “approaches human-level performance,” according to Fortune, which rated its overall language performance as “better than any existing similar software.”

In a blog post, DeepMind writes that Gopher exceeds existing language models “for a number of key tasks,” including the Massive Multitask Language Understanding (MMLU) benchmark, “where Gopher demonstrates a significant advancement towards human expert performance over prior work.”

The test results come “despite the fact that Gopher is smaller than some ultra-large language software,” Fortune writes, noting that at 280 billion parameters, Gopher is larger than OpenAI’s GPT-3, with 175 billion, but smaller than the 530 billion of the Microsoft DeepSpeed-Nvidia Megatron collaboration. It is also smaller than other AI systems developed by Google, which go as high as 1.6 trillion parameters, Fortune reports, noting Alibaba utilizes AI with up to 10 trillion parameters.
