Google’s Imagen AI Model Makes Advances in Text-to-Image

Google has released a research paper on a new text-to-image generator called Imagen, which combines the power of large transformer language models for text understanding with the capabilities of diffusion models for high-fidelity image generation. “Our key discovery is that generic large language models (e.g. T5), pretrained on text-only corpora, are surprisingly effective at encoding text for image synthesis,” the company said. Simultaneously, Google is introducing DrawBench, a benchmark for text-to-image models that it says was used to compare Imagen with other recent approaches, including VQGAN+CLIP, latent diffusion models, and OpenAI’s DALL-E 2.
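At its core, the approach pairs a frozen text-only encoder with a text-conditioned diffusion sampler. The sketch below illustrates that split, assuming Hugging Face’s T5EncoderModel and the “t5-large” checkpoint for the encoding step; the denoiser argument and the 64x64 base resolution loop are hypothetical stand-ins for Imagen’s actual diffusion and super-resolution networks, not Google’s code.

```python
# Minimal sketch of Imagen's key idea: a frozen, text-only T5 encoder supplies the
# conditioning signal for a diffusion image generator. Only the T5 encoding step
# uses a real API; the denoiser is a hypothetical placeholder.
import torch
from transformers import T5Tokenizer, T5EncoderModel

tokenizer = T5Tokenizer.from_pretrained("t5-large")
text_encoder = T5EncoderModel.from_pretrained("t5-large").eval()  # frozen: no fine-tuning

@torch.no_grad()
def encode_prompt(prompt: str) -> torch.Tensor:
    tokens = tokenizer(prompt, return_tensors="pt")
    return text_encoder(**tokens).last_hidden_state  # (1, seq_len, d_model) text embeddings

def generate_image(prompt: str, denoiser, steps: int = 50) -> torch.Tensor:
    """Generic text-conditioned diffusion loop with a user-supplied denoising network."""
    cond = encode_prompt(prompt)
    img = torch.randn(1, 3, 64, 64)        # start from Gaussian noise at the base resolution
    for t in reversed(range(steps)):        # iteratively denoise, conditioned on the text
        img = denoiser(img, t, cond)        # hypothetical denoising model
    return img                              # Imagen then upsamples with super-resolution stages
```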

Nvidia Touts New H100 GPU and Grace CPU Superchip for AI

Nvidia has begun previewing its latest H100 Tensor Core GPU, promising “an order-of-magnitude performance leap for large-scale AI and HPC” over previous iterations, according to the company. Nvidia founder and CEO Jensen Huang announced the Hopper architecture earlier this year, and IT professionals’ website ServeTheHome recently had a chance to see an H100 SXM5 module demonstrated. The module, which packs 80 billion transistors, 8448 FP64 cores, 16896 FP32 cores, and 528 Tensor cores, consumes up to 700W to deliver 60 teraflops of FP64 Tensor performance and is described as “monstrous” in the best way.
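Those core counts follow from Hopper’s published per-SM layout (128 FP32 cores, 64 FP64 cores, and 4 Tensor cores per streaming multiprocessor), as the quick arithmetic check below shows; the per-SM figures come from Nvidia’s architecture documentation, not from the ServeTheHome demo.

```python
# Consistency check on the figures quoted above, assuming Hopper's per-SM layout
# of 128 FP32 cores, 64 FP64 cores, and 4 Tensor cores.
FP32_PER_SM, FP64_PER_SM, TENSOR_PER_SM = 128, 64, 4

sms = 16896 // FP32_PER_SM            # 132 streaming multiprocessors enabled on the SXM5 part
assert sms * FP64_PER_SM == 8448      # matches the quoted FP64 core count
print(sms * TENSOR_PER_SM)            # 528 Tensor cores
```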

Advances by OpenAI and DeepMind Boost AI Language Skills

Advances in language comprehension for artificial intelligence are coming from San Francisco’s OpenAI and London-based DeepMind. OpenAI, which has been working on large language models, says it now lets customers fine-tune its GPT-3 models using their own custom data, while the Alphabet-owned DeepMind is talking up Gopher, a 280-billion-parameter deep-learning language model that has scored impressively on tests. Sophisticated language models can comprehend natural language and predict and generate text, requirements for building advanced AI systems that dispense information and advice or follow instructions.
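For context on the fine-tuning piece, the snippet below sketches the workflow OpenAI documented at the time using its legacy Python package (v0.x): upload a JSONL file of prompt/completion pairs, then start a fine-tune job against a base GPT-3 model. The file name, API key, and base model choice are placeholders.

```python
# Minimal sketch of GPT-3 fine-tuning with the legacy openai Python package (v0.x);
# the API key, file name, and base model are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"

# Training data is a JSONL file of {"prompt": ..., "completion": ...} pairs.
upload = openai.File.create(file=open("custom_data.jsonl", "rb"), purpose="fine-tune")

# Kick off a fine-tune job against a base GPT-3 model.
job = openai.FineTune.create(training_file=upload["id"], model="davinci")
print(job["id"])  # poll this job ID until the fine-tuned model name becomes available
```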

Microsoft and Nvidia Debut World’s Largest Language Model

Microsoft and Nvidia have trained what they describe as the most powerful AI-driven language model to date, the Megatron-Turing Natural Language Generation model (MT-NLG), which has “set the new standard for large-scale language models in both model scale and quality,” the firms say. As the successor to the companies’ Turing NLG 17B and Megatron-LM, the new MT-NLG has 530 billion parameters, or “3x the number of parameters compared to the existing largest model of this type,” and demonstrates unmatched accuracy in a broad set of natural language tasks.
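The scale claims are easy to sanity-check: 530 billion parameters is roughly three times GPT-3’s 175 billion, and in half precision the weights alone occupy on the order of a terabyte, as the back-of-envelope figures below show.

```python
# Back-of-envelope numbers behind the "3x" claim: MT-NLG's 530B parameters versus
# GPT-3's 175B, plus the raw half-precision memory the weights alone would require.
mt_nlg_params = 530e9
gpt3_params = 175e9          # the largest comparable model at the time

print(f"{mt_nlg_params / gpt3_params:.1f}x the parameters")        # ~3.0x
print(f"~{mt_nlg_params * 2 / 1e12:.2f} TB of weights in FP16")    # 2 bytes per parameter
```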