AMD, in partnership with HPE, announced today that its fourth-gen EPYC processors, based on the 'Zen 4' CPU core, will power the Lawrence Livermore National Laboratory (LLNL)'s upcoming exascale supercomputer dubbed 'El Capitan'. The system will also feature next-gen Radeon Instinct GPUs as accelerators. The two will work in tandem through an enhanced version of AMD's open-source, heterogeneous ROCm software environment.
Fourth-gen EPYC processors are codenamed "Genoa", and AMD discussed them briefly back in October last year at the HPC AI Advisory Council's 2019 UK Conference.
As such, the currently listed specifications for El Capitan are as follows:
- Next generation AMD EPYC processors, codenamed “Genoa”. These processors will support next-gen memory (likely DDR5) and I/O subsystems for AI and HPC workloads,
- Next-gen Radeon Instinct GPUs based on a new compute-optimized architecture for workloads including HPC and AI. These GPUs will use next-gen high-bandwidth memory (HBM) and are designed for optimum deep learning performance,
- The 3rd Gen AMD Infinity Architecture, which will provide a high-bandwidth, low-latency connection between the four Radeon Instinct GPUs and one AMD EPYC CPU included in each node of El Capitan. The 3rd Gen AMD Infinity Architecture also includes unified memory across the CPU and GPU, easing programmer access to accelerated computing,
- An "enhanced" version of the open-source ROCm heterogenous programming environment, being developed to tap into the combined performance of AMD CPUs and GPUs, unlocking maximum performance.
All of this combined is expected to surpass two exaflops of double-precision (FP64) performance, making it the world's fastest supercomputer. This will be AMD's second exascale system, the first being Frontier (OLCF-5).
El Capitan is primarily being designed to support the operations of the U.S. National Nuclear Security Administration and ensure the safety and security of the U.S. nuclear reserves and arsenal.
AMD expects El Capitan to be deployed in early 2023, so there's still a long way to go.