In this podcast, the Radio Free HPC team looks back on the highlights of SC18 and the newest TOP500 list of the world’s fastest supercomputers.
Buddy Bland shows off Summit, the world’s fastest supercomputer at ORNL.
The latest TOP500 list of the world’s fastest supercomputers is out, a remarkable ranking that shows five Department of Energy supercomputers in the top 10, with the first two spots captured by Summit at Oak Ridge and Sierra at Livermore. With the number one and number two systems on the planet, the “Rebel Alliance” vendors of IBM, Mellanox, and NVIDIA stand tall above the others.
Summit widened its lead as the number one system, improving its High Performance Linpack (HPL) performance from 122.3 to 143.5 petaflops since its debut on the previous list in June 2018. Sierra also added to its HPL result from six months ago, going from 71.6 to 94.6 petaflops, enough to bump it from the number three position to number two. Both are IBM-built supercomputers, powered by Power9 CPUs and NVIDIA V100 GPUs.
Sierra’s ascendance pushed China’s Sunway TaihuLight supercomputer, installed at the National Supercomputing Center in Wuxi, into third place. Prior to last June, it had held the top position on the TOP500 list for two years with its HPL performance of 93.0 petaflops. TaihuLight was developed by China’s National Research Center of Parallel Computer Engineering & Technology (NRCPC).
In this video from ISC 2018, Yan Fisher from Red Hat and Buddy Bland from ORNL discuss Summit, the world’s fastest supercomputer. Red Hat teamed with IBM, Mellanox, and NVIDIA to provide users with a new level of performance for HPC and AI workloads.
Tianhe-2A (Milky Way-2A), deployed at the National Supercomputer Center in Guangzhou, China, is now in the number four position with a Linpack score of 61.4 petaflops. It was upgraded earlier this year by China’s National University of Defense Technology (NUDT), which replaced the older Intel Xeon Phi accelerators with proprietary Matrix-2000 chips.
“Top-500, Green-500, IO-500, HPCG, and now CryptoSuper-500 all point to the growing versatility of supercomputers,” said Shahin Khan from OrionX. “It’s time to more explicitly recognize that. Counting systems which are capable of doing Linpack but in fact are doing something else continues to be an issue. We need additional info about systems so we can tally them correctly and make this less of a game.”
At number five is Piz Daint, a Cray XC50 system installed at the Swiss National Supercomputing Centre (CSCS) in Lugano, Switzerland. At 21.2 petaflops, it maintains its standing as the most powerful system in Europe. It is powered by a combination of Intel Xeon processors and NVIDIA Tesla P100 GPUs.
Trinity, a Cray XC40 system operated by Los Alamos National Laboratory and Sandia National Laboratories, improved its performance to 20.2 petaflops, enough to move it up one position to the number six spot. It uses Intel Xeon Phi processors, the only top ten system to do so.
The AI Bridging Cloud Infrastructure (ABCI) installed in Japan at the National Institute of Advanced Industrial Science and Technology (AIST) is listed at number seven with a Linpack mark of 19.9 petaflops. The Fujitsu-built system is powered by Intel Xeon Gold processors, along with NVIDIA Tesla V100 GPUs.
Germany provided a new top ten entry with SuperMUC-NG, a Lenovo-built supercomputer installed at the Leibniz Supercomputing Centre (Leibniz-Rechenzentrum) in Garching, near Munich. With its 311,040 Intel Xeon cores and an HPL performance of 19.5 petaflops, it captured the number eight position.
Titan, a Cray XK7 installed at the DOE’s Oak Ridge National Laboratory, and previously the most powerful supercomputer in the US, is now the number nine system. It achieved 17.6 petaflops using NVIDIA K20X GPU accelerators.
Sequoia, an IBM BlueGene/Q supercomputer installed at DOE’s Lawrence Livermore National Laboratory, is the 10th-ranked TOP500 system. It was first delivered in 2011, achieving 17.2 petaflops on HPL.
The Student Cluster Competition was developed in 2007 to provide an immersive high performance computing experience to undergraduate and high school students. With sponsorship from hardware and software vendor partners, student teams design and build small clusters, learn designated scientific applications, apply optimization techniques for their chosen architectures, and compete in a non-stop, 48-hour challenge at the SC conference to complete a real-world scientific workload, showing off their HPC knowledge for conference attendees and judges. Teams are composed of six students, at least one advisor, and vendor partners. The advisor provides guidance and recommendations, the vendor provides the resources (hardware and software), and the students provide the skill and enthusiasm. Students work with their advisors to craft a proposal that describes the team, the suggested hardware, and their approach to the competition. The SCC committee reviews each proposal and provides comments for all submissions received before the deadline.