Cerebras Chip Tech to Advance Neural Networks, AI Models

Deep learning typically relies on neural networks running on clusters of computers wired together in data centers, where cross-chip communication consumes substantial energy and slows processing. Cerebras has a different approach. Instead of printing dozens of chips onto a large silicon wafer and then cutting them apart and wiring them back together, it is making the largest computer chip in the world, the size of a dinner plate. Texas Instruments tried this wafer-scale approach in the 1960s but ran into problems.

Positive Reviews for Latest Intel Modular Mini Gaming PC Kit

Intel’s new NUC 11 Extreme kit, code-named “Beast Canyon,” is the company’s fourth attempt at building a more compact yet powerful gaming PC. Featuring (ironically) Intel’s biggest chassis yet, Beast Canyon relies on Compute Element cartridges containing a miniaturized motherboard, CPU, memory, storage and ports. Like last year’s Ghost Canyon, the form factor lets gamers upgrade the entire system as easily as they would a graphics card. The 8-liter Beast has room for a full-length, 12-inch GPU and includes a 650-watt 80+ Gold power supply.

Nvidia and NERSC Unveil a New Supercomputer for AI Tasks

Nvidia and the National Energy Research Scientific Computing Center (NERSC) debuted Perlmutter, an AI supercomputer that features 6,144 Nvidia A100 Tensor Core GPUs. Named for astrophysicist Saul Perlmutter, the supercomputer has been dubbed by Nvidia “the fastest on the planet” at processing the 16-bit and 32-bit mixed-precision math used in AI applications, said the company’s global HPC and AI product marketing lead Dion Harris. Its first job will be to create the largest-ever 3D map of the visible universe.
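
The mixed-precision idea that machines like Perlmutter exploit — doing the bulk arithmetic in 16-bit while accumulating results in 32-bit — can be sketched in plain NumPy. This is an illustration of the numerical principle only, not of how tensor-core hardware actually works:

```python
import numpy as np

# Mixed-precision sketch: store values in float16 (half the memory and
# bandwidth of float32), but accumulate the dot product in float32
# to limit rounding error.
rng = np.random.default_rng(0)
a = rng.standard_normal(1024).astype(np.float16)
b = rng.standard_normal(1024).astype(np.float16)

# Naive approach: accumulate entirely in float16
sum16 = np.float16(0)
for x, y in zip(a, b):
    sum16 = np.float16(sum16 + x * y)

# Mixed precision: same float16 inputs, float32 accumulator
sum32 = np.float32(0)
for x, y in zip(a, b):
    sum32 += np.float32(x) * np.float32(y)

# High-precision reference
ref = float(np.dot(a.astype(np.float64), b.astype(np.float64)))
print("float16 accumulation error:", abs(float(sum16) - ref))
print("float32 accumulation error:", abs(float(sum32) - ref))
```

The float32 accumulator keeps the result close to the full-precision reference while the inputs stay in compact half precision, which is why mixed precision is the workhorse format for AI training.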

Facebook Using Self-Supervised Models to Build AI Systems

Facebook debuted Learning from Videos, a project designed to learn from the audio, images and text of publicly available Facebook videos to improve its core AI systems. By culling data from hundreds of languages and countries, said Facebook, the project will also help to enable “entirely new experiences.” Learning from Videos, which began in 2020, has also helped to improve recommendations in Instagram Reels. Facebook, Google and others are focused on self-supervised techniques rather than labeled datasets to improve AI.
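
Self-supervision means the training targets are derived from the data itself rather than from human annotation. A minimal sketch of one such pretext task, a hypothetical masked-token objective (not Facebook's actual method), shows how labels come for free:

```python
# Self-supervised pretext task sketch: hide one token from raw, unlabeled
# text and treat the hidden token as the prediction target. No human
# labeling is needed; the data supervises itself.
def masked_examples(tokens):
    """Yield (context_with_blank, target) pairs derived from raw tokens."""
    for i, tok in enumerate(tokens):
        context = tokens[:i] + ["<mask>"] + tokens[i + 1:]
        yield context, tok

sentence = "videos teach models without labels".split()
pairs = list(masked_examples(sentence))
print(pairs[0])
```

A model trained to fill in the blanks this way learns useful representations from unlabeled data, which can then be fine-tuned for downstream tasks like content recommendation.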

Nvidia Cuts Video-Conferencing Bandwidth by Factor of Ten

Last month Nvidia launched Maxine, a software development kit containing technology the company claims will cut the bandwidth requirements of video-conferencing software by a factor of ten. A neural network creates a compressed version of a person’s face which, when sent across the network, is decompressed by a second neural network. The software can also make helpful corrections to the image, such as rotating a face to look straight forward or replacing it with a digital avatar. Nvidia is now waiting for software developers to productize the technology.

Nvidia Debuts New Version of A100 GPU for Supercomputers

At the beginning of its SC20 supercomputing conference, Nvidia unveiled an 80GB version of the A100 GPU (graphics processing unit) based on its Ampere graphics architecture and aimed at AI and graphics for supercomputing. The chip is intended to enable faster real-time data analysis for business and government applications. The new version doubles the memory of its predecessor, which debuted six months ago. Nvidia executive Paresh Kharya noted that 90 percent of the world’s data was created in the last two years.
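
The 90 percent statistic implies roughly tenfold growth in cumulative data every two years. A quick sanity check of that implication:

```python
import math

# If 90% of all data was created in the past two years, only 10% of the
# current total existed two years ago, so the cumulative total grows
# about 10x every two years.
older_fraction = 0.10
growth_per_two_years = 1 / older_fraction   # ~10x per two years

# The annualized rate implied by that two-year factor
annual_growth = math.sqrt(growth_per_two_years)
print(growth_per_two_years, annual_growth)
```

That works out to more than tripling every year, which is the kind of curve that motivates doubling GPU memory between product generations.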

AMD Acquires Xilinx: Opens Door for 5G, Data Center Chips

Advanced Micro Devices (AMD) agreed to pay $35 billion in stock to acquire Xilinx, which will enable it to diversify into chips for 5G wireless communications and automotive electronics. The company, which has some of the strongest sales in its 51-year history, has traditionally been Intel’s rival for computer chips. With Xilinx, AMD could also provide components for data centers and compete with Nvidia in that space. The all-stock deal is still topped by Nvidia’s plan to purchase UK chipmaker Arm for $40 billion.

With Arm Purchase, Nvidia May Dominate AI Edge Computing

Moore’s Law — Intel co-founder Gordon Moore’s prediction that the number of transistors on a chip doubles about every two years — has been the foundation of the semiconductor industry. But as the industry nears the physical limits of electronic circuitry, that law is giving way to another: the silicon chips powering AI more than double in performance every two years, thanks to advances in both hardware and software. As Moore’s Law was the foundation for improvements in computers, this new law will power the Internet of Things. With its $40 billion acquisition of Arm Holdings, Nvidia could be positioned for a new type of evolution.
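
The compounding difference between the two doubling rates is easy to quantify. A short sketch (the one-year doubling period below is a hypothetical stand-in for "more than double every two years"):

```python
def growth(years, doubling_period=2.0):
    """Capability multiplier after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Moore's Law pace: doubling every 2 years gives 32x over a decade.
moore_decade = growth(10, doubling_period=2.0)

# A faster, hypothetical AI-chip pace: doubling every year gives 1024x.
ai_decade = growth(10, doubling_period=1.0)

print(moore_decade, ai_decade)
```

Over a single decade the faster curve pulls ahead by a factor of 32, which is why a shift in the doubling rate matters so much for who dominates the next era.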

Nvidia Acquisition of SoftBank’s Arm Brings Rewards, Risks

Nvidia agreed to pay $40 billion — $21.5 billion in stock, $12 billion in cash — for SoftBank’s Arm division, a chip designer based in the United Kingdom. Nvidia will pay $2 billion on signing, and SoftBank will also receive $5 billion in cash or stock should Arm’s performance meet specific standards. Arm employees will receive $1.5 billion in Nvidia stock. This will be the biggest semiconductor industry deal since SoftBank paid $31.4+ billion to purchase Arm in 2016. The deal will also increase competition between Nvidia and Intel.

Microsoft Confirms the Debut of $299 Xbox Series S Console

Following a series of leaks, Microsoft has now confirmed the existence of another new game console, the Xbox Series S, which it dubs the smallest Xbox ever (60 percent smaller than the Series X). The company said the “slim, white console” will debut “soon,” provide “next-gen performance” and be priced at $299. Reports suggest that Series S and Series X will be available for purchase on November 10 and hint that many Series S features are similar to those of Series X, but with no disc drive and a less powerful CPU and GPU. Microsoft stated that the Xbox Series X “will be four times more powerful than its predecessor, the Xbox One X.”

Nvidia Purchase of Arm Signals Inflection Point in Computing

If Nvidia acquires Arm Ltd. in the next few weeks, as many experts predict will happen, the company may be positioned to dominate the next computing ecosystem. Jefferies semiconductor analyst Mark Lipacis notes that the computer industry goes through a “strategic inflection point” every 15 years, with research showing that the dominant players in each era account for 80 percent of the profits. Each new ecosystem is the result of a “multi-pronged” strategy by the companies that come out on top.

Nvidia Debuts GeForce RTX Chip Series with Lower Latency

Nvidia debuted its 28-billion-transistor, Ampere-based 30 Series graphics chips for PC gamers, arriving ahead of the next-generation consoles Microsoft and Sony are expected to unveil by the holidays. The 30 Series GeForce RTX chips (available September 17) comprise the RTX 3070 ($500), 3080 ($700) and 3090 ($1,500), with second-generation RTX (real-time ray tracing graphics). According to chief executive Jensen Huang, there are “hundreds of RTX games” in development, joining “Minecraft,” “Control” and “Wolfenstein: Youngblood,” which already feature RTX.

Sony Reveals Details on PlayStation 5 Consoles, New Games

Sony debuted two versions of its PlayStation 5 game console as well as new games, in advance of the holiday season. The PS5 Digital Edition, the second version, omits the Blu-ray Disc drive, and its download-only feature could eventually impact Amazon, GameStop, Walmart and other retailers. The Digital Edition also sports a sleeker design and, potentially, a lower price. New games include the latest “Spider-Man” and “Gran Turismo” titles and an enhanced version of Take-Two Interactive Software’s “Grand Theft Auto V.”

Nvidia A100: Powerful New Chipset Created for Advancing AI

Nvidia unveiled its A100 artificial intelligence chip, which houses 54 billion transistors and can execute 5 petaflops of performance, about 20 times more than the company’s previous Volta chip. Chief executive Jensen Huang, who revealed it during his Nvidia GTC keynote address, dubbed it “the ultimate instrument for advancing AI.” The original March 24 introduction was postponed due to the COVID-19 pandemic. Nvidia also unveiled the DGX A100 system, the third generation of Nvidia’s AI DGX platform, which uses the new chips. The DGX A100 is now shipping.

Intel to Unveil Experimental Neuromorphic Computing System

Intel will debut Pohoiki Springs, an experimental research system for neuromorphic computing, which mimics the way the human brain works in order to compute faster and with less energy. It will first be made available, via the cloud, to the Intel Neuromorphic Research Community, which includes about a dozen companies (such as Accenture and Airbus), academic researchers and government labs. Intel and Cornell University jointly published a paper on the Loihi chip’s ability to learn and recognize 10 hazardous materials by smell.