The Cerebras CS-1 Chip Is 10,000 Times Faster Than a GPU

Cerebras Systems and its partner, the Department of Energy’s National Energy Technology Laboratory (NETL), revealed that the CS-1 system, built around a single massive chip of innovative design, is more than 10,000 times faster than a graphics processing unit (GPU). The CS-1, powered by Cerebras’ Wafer-Scale Engine (WSE) and its 400,000 AI cores, was first announced in November 2019. The partnership between the Energy Department and Cerebras includes deployments at Argonne National Laboratory and Lawrence Livermore National Laboratory.

Cerebras Builds Enormous Chip to Advance Deep Learning

Los Altos, CA-based startup Cerebras, dedicated to advancing deep learning, has created a computer chip almost nine inches (22 centimeters) on each side — huge by the standards of today’s chips, which are typically the size of postage stamps or smaller. The company plans to offer the chip to tech companies to help them improve artificial intelligence at a faster clip. The Cerebras Wafer-Scale Engine (WSE), which took three years to develop, has impressive specs: 1.2 trillion transistors, an area of 46,225 square millimeters, 18 gigabytes of on-chip memory and 400,000 processing cores.
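To put those figures in perspective, a back-of-envelope calculation using only the numbers quoted above (assuming decimal gigabytes; the per-core memory split is an illustration, not an official Cerebras spec) looks like this:

```python
# Scale figures derived from the specs quoted in the article.
transistors = 1.2e12      # 1.2 trillion transistors
area_mm2 = 46_225         # 46,225 square millimeters
memory_bytes = 18e9       # 18 GB on-chip memory (decimal GB assumed)
cores = 400_000           # 400,000 processing cores

density = transistors / area_mm2           # transistors per mm^2
mem_per_core_kb = memory_bytes / cores / 1e3

print(f"{density / 1e6:.1f} M transistors per mm^2")  # ~26.0
print(f"{mem_per_core_kb:.0f} KB of memory per core")  # 45
```

That is roughly 26 million transistors packed into every square millimeter, with local memory sitting right next to each core rather than off-chip.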

Intel Team Focuses on Low Voltage Transistor to Power AI

Neuroscientist Amir Khosrowshahi, Intel’s chief technology officer of AI, revealed that he is staying at Intel with a team of researchers building an innovative integrated circuit (IC). The IC under development will feature transistors that, the researchers hope, will function at voltages as low as 100 millivolts, a step toward matching the voltage at which neurons in the brain communicate. Such an IC could unleash power-hungry AI applications targeting climate change, waste management and other global problems.

IBM Aims to Power IoT, AI, VR With New 5-Nanometer Chip

IBM Research, GlobalFoundries and Samsung partnered to create transistors for a 5-nanometer semiconductor process, expected to enable chips with 30 billion transistors. Researchers say the achievement should help the $330 billion chip industry keep pace with Moore’s Law, the 1965 observation by Gordon Moore, later Intel’s chairman emeritus, that the number of transistors per square inch on integrated circuits would double about every two years. Three years ago, IBM vowed to invest $3 billion over five years in chip R&D.
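The exponential growth Moore described is easy to project. A small helper sketches it (the function name and the 10-year horizon are illustrative assumptions, not figures from IBM):

```python
def moores_law(initial_transistors, years, doubling_period=2):
    """Project a transistor count assuming a doubling every `doubling_period` years."""
    return initial_transistors * 2 ** (years / doubling_period)

# Starting from a 30-billion-transistor chip, a decade of doublings
# every two years yields 2**5 = 32x growth:
print(f"{moores_law(30e9, 10):,.0f}")  # 960,000,000,000
```

The point of the exercise is the compounding: five doublings turn 30 billion transistors into nearly a trillion, which is why even small slips in the doubling period matter so much to the industry.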

Microsoft Imagines a Practical Future for Quantum Computers

Microsoft is going full bore into quantum computing, moving from pure research to an effort to build a prototype in what has been a primarily experimental field. If and when they come to fruition, quantum computers could have an impact on drug design, artificial intelligence and even our understanding of physics. For that reason, IBM and Google are also investing in quantum computing, although Microsoft has taken a singular approach based on “braiding” exotic particles known as anyons.

Nvidia’s Tesla P100 Chip Enables Speedy Machine Learning

Nvidia has entered the field of artificial intelligence with the debut of its Tesla P100 chip, which contains 15 billion transistors — about twice as many as the company’s previous high-end graphics processor and, says Nvidia chief executive Jen-Hsun Huang, more than any chip ever made. Nvidia is also building the DGX-1, a computer with eight Tesla P100 chips and AI software; third-party computers integrating the chip are expected on the market by next year. Huang hinted that its first use is likely to be in cloud computing services.