Nvidia Turbo Charges NeMo Megatron Large Training Model

Nvidia has issued a software update for its formidable NeMo Megatron large language model framework, increasing efficiency and speed. Barely a year since Nvidia unveiled NeMo Megatron, this latest improvement further leverages the transformer architecture that has become central to deep learning since Google introduced the concept in 2017. New features result in what Nvidia says is a 5x reduction in memory requirements and up to a 30 percent gain in training speed for models as large as 1 trillion parameters, making NeMo Megatron better at handling transformer workloads across the entire stack.

Advances by OpenAI and DeepMind Boost AI Language Skills

Advances in language comprehension for artificial intelligence are coming from San Francisco's OpenAI and London-based DeepMind. OpenAI, which has been working on large language models, says it now lets customers fine-tune its GPT-3 models using their own custom data, while the Alphabet-owned DeepMind is touting Gopher, a 280-billion parameter deep-learning language model that has scored impressively on tests. Sophisticated language models can comprehend natural language and predict and generate text, capabilities required for building advanced AI systems that can dispense information and advice or follow instructions.
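OpenAI's fine-tuning workflow of this era accepted customer training data as JSONL files of prompt/completion pairs. A minimal sketch of preparing such a file (the file name, example records, and CLI invocation in the closing comment are illustrative assumptions, not taken from the article):

```python
import json

# Illustrative prompt/completion pairs; real fine-tuning data would be
# domain-specific examples supplied by the customer.
examples = [
    {"prompt": "Classify the sentiment: 'Great service!' ->",
     "completion": " positive"},
    {"prompt": "Classify the sentiment: 'Very slow shipping.' ->",
     "completion": " negative"},
]

# The fine-tuning endpoint of this period expected one JSON object per
# line (JSONL), each with "prompt" and "completion" fields.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# The file would then be uploaded and a fine-tune job started, e.g. via
# OpenAI's CLI of the time: openai api fine_tunes.create -t train.jsonl
```

The resulting file has one complete JSON record per line, which is what made it easy to stream and validate large custom datasets before submitting a job.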