Computing Resources
World-Class Computing
The HPC4EI Program draws on world-class computing power across nine U.S. Department of Energy laboratories. Combined, these DOE systems, which include 5 of the top 20 systems as ranked by TOP500 in June 2021, represent some of the most powerful computing resources in the world.
In addition, through the national laboratories, DOE invests in and maintains a comprehensive ecosystem of high-performance computing assets and the industry-leading scientists and engineers capable of leveraging their power.
Each project awarded through the HPC4Manufacturing and HPC4Materials pillars is matched with the supercomputing system that best meets its needs. Through the overarching HPC4EI program, we leverage this family of DOE systems to bring world-class computing power to some of the most challenging problems facing U.S. industry.
Participating DOE Laboratories
The following table shows the individual systems supporting this program, the laboratories that host them, and each system’s key metrics.
| DOE National Laboratory | System | Nodes | Cores | GPUs | Description |
|---|---|---|---|---|---|
| Argonne National Laboratory | Theta | 4,392 | 281,088 | | Cray XC40/Intel Xeon Phi (KNL) |
| Argonne National Laboratory | ThetaGPU | 48 | 3,072 | 192 | Cray XC40/AMD Rome and NVIDIA A100 |
| Argonne National Laboratory | Cooley | 126 | 1,512 | 126 | Cray CS300-AC/Intel Haswell and NVIDIA Tesla K80 |
| Argonne National Laboratory | Bebop | 672 | 24,192 | | Cray CS400/Intel Xeon Broadwell |
| | | 256 | 9,216 | 1,024 | Cray CS400/Intel Xeon Broadwell and NVIDIA A100 |
| Lawrence Berkeley National Laboratory | Perlmutter | 1,792 | 114,668 | | HPE Cray EX235n, AMD EPYC 7763 64C 2.45 GHz, NVIDIA A100 SXM4 40 GB |
| | | 3,072 | 393,216 | 1,792 | HPE Cray EX235n, AMD EPYC 7763 64C 2.45 GHz, NVIDIA A100 SXM4 40 GB |
| Lawrence Livermore National Laboratory | Lassen | 792 | 34,848 | 3,168 | IBM Power System S922LC, 4 NVIDIA Volta accelerators per node |
| Lawrence Livermore National Laboratory | Quartz | 2,634 | 96,768 | | Intel Xeon E5-2695 |
| Los Alamos National Laboratory | Grizzly | 1,490 | 53,640 | | Penguin Computing Tundra ES/Intel Xeon Broadwell |
| Oak Ridge National Laboratory | Summit | 4,608 | 202,752 | 27,648 | IBM Power System S922LC, 6 NVIDIA Volta accelerators per node |
| Oak Ridge National Laboratory | Frontier | 9,408 | 602,112 | 37,632 | HPE Cray EX, AMD EPYC CPU and AMD MI250X GPU |
| | Ridge | 40 | 5,120 | | Cray/HPE CS500, AMD Rome CPUs (64-core, 2 GHz) |
| Pacific Northwest National Laboratory | Constance | 520 | 12,480 | | Dual Intel Haswell E5-2670 |
| National Renewable Energy Laboratory | Kestrel | 2,144 | 222,976 | | CPU nodes: dual-socket Intel Xeon Sapphire Rapids 52-core processors with 256 GB of memory |
| | | 10 | 1,040 | | CPU big-memory nodes: dual-socket Intel Xeon Sapphire Rapids 52-core processors with 2 TB of memory |
| | | 132 | 16,896 | 528 | GPU nodes: dual-socket AMD Genoa 64-core processors with 4 NVIDIA H100 SXM GPUs and a total of 484 GB of memory |
| | | 8 | 832 | 16 | GPU big-memory nodes: dual-socket Intel Xeon Sapphire Rapids 52-core processors with 2 NVIDIA A40 GPUs and 2 TB of memory |
| National Energy Technology Laboratory | Joule 2 | 1,664 | 78,720 | | HPE ProLiant XL/Intel Xeon Gold 6148 and NVIDIA Tesla P100 |
| Sandia National Laboratories | Sky Bridge | 1,848 | 29,568 | | Intel Sandy Bridge E5-2670 |
| Sandia National Laboratories | Chama | 1,232 | 19,712 | | Intel Sandy Bridge E5-2670 |
| Sandia National Laboratories | Ghost | 740 | 26,640 | | Intel Xeon Broadwell E5-2695 |
| Sandia National Laboratories | Eclipse | 1,488 | 53,568 | | Intel Xeon Broadwell E5-2695 |
| Sandia National Laboratories | Attaway | 1,488 | 53,568 | | 2.3 GHz Intel Skylake/Gold 6140 |
| Sandia National Laboratories | Solo | 374 | 13,464 | | Intel Xeon Broadwell E5-2695 |
| Sandia National Laboratories | Uno | 201 | 3,344 | | 2.7 GHz Intel Sandy Bridge |
Virtual Computational Facility Tours
Lawrence Berkeley National Laboratory
Virtual Tour
Register for a docent-led virtual tour of the National Energy Research Scientific Computing Center (NERSC) facility.
Oak Ridge National Laboratory
Virtual Tour
Visit the Oak Ridge Leadership Computing Facility (OLCF) through a self-guided virtual tour.
Sandia National Laboratories
Virtual Tour
Visit the Sandia National Laboratories data center through a self-guided virtual tour.