AI hardware revolution: AI’s impact on supercomputing

AI is making hardware more interesting

Lawrence Livermore National Laboratory is one of the largest supercomputing users in the world. The U.S. Department of Energy laboratory has a combined computing power of about 200 petaflops, or 200 quadrillion floating-point operations per second.
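To keep the unit straight: a petaflop is 10^15 floating-point operations per second, so 200 petaflops is 200 quadrillion, not 200 billion, operations per second. A quick sketch in Python using the article's own figure:

```python
# A petaflop is 10**15 floating-point operations per second (FLOPS).
PETAFLOP = 10**15

# LLNL's cited aggregate computing power: 200 petaflops.
llnl_flops = 200 * PETAFLOP

# 200 petaflops = 2 x 10**17 FLOPS, i.e. 200 quadrillion operations per second.
print(f"{llnl_flops:.1e} FLOPS")  # -> 2.0e+17 FLOPS
```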

In the last two years, two newcomers, including Cerebras Systems Inc., have joined this lineup. The two startups have collectively raised more than $1.8 billion in funding and are attempting to disrupt a market dominated by off-the-shelf x86 central processing units and graphics processing units, replacing that hardware with systems custom-built for training artificial intelligence models and running inference on them.

Cerebras claims that its WSE-2 chip, with 2.6 trillion transistors and 850,000 compute cores, can train neural networks with roughly 500 times as many transistors and 100 times as many cores as a high-end GPU. The company says the architecture, which includes 40 gigabytes of on-chip memory and can connect to up to 2.4 petabytes of external memory, can process AI models too large to be feasible on GPU-based computers. Cerebras has raised $720 million at a valuation of $4 billion.
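Taking Cerebras' ratios at face value, the implied figures for the comparison GPU can be back-computed. This is illustrative arithmetic only; the GPU numbers below are derived from the claimed 500x and 100x multiples, not from any vendor's spec sheet:

```python
# WSE-2 figures as claimed by Cerebras.
wse2_transistors = 2.6e12  # 2.6 trillion transistors
wse2_cores = 850_000       # compute cores

# Claimed multiples over a high-end GPU.
transistor_ratio = 500
core_ratio = 100

# Implied GPU figures under those ratios (illustrative only).
gpu_transistors = wse2_transistors / transistor_ratio  # ~5.2 billion transistors
gpu_cores = wse2_cores / core_ratio                    # 8,500 cores
print(gpu_transistors, gpu_cores)
```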