In this video, Mike Bernhardt from the Exascale Computing Project catches up with ORNL's David Bernholdt at SC18. They discuss the conference, his career, the evolution and significance of the Message Passing Interface (MPI) in parallel computing, and how ECP has influenced his team's efforts.
NOAA is out with their 2018 Arctic Report Card and the news is not good, folks. Issued annually since 2006, the Arctic Report Card is a timely and peer-reviewed source for clear, reliable and concise environmental information on the current state of different components of the Arctic environmental system relative to historical records. "The Report Card is intended for a wide audience, including scientists, teachers, students, decision-makers and the general public interested in the Arctic environment and science."
In this video from SC18 in Dallas, Yan Fisher and Dan McGuan from Red Hat describe the company's powerful software solutions for HPC and AI workloads. "All supercomputers on the coveted Top500 list run on Linux, a scalable operating system that has matured over the years to run some of the most critical workloads and in many cases has displaced proprietary operating systems in the process. For the past two decades, Red Hat Enterprise Linux has served as the foundation for building software stacks for many supercomputers. We are looking to continue this trend with the next generation of systems that seek to break the exascale threshold."
An IaaS platform can help keep HPC cloud cluster users out of the cluster management business. A new white paper from XTREME-D, "Point and Click HPC: The XTREME-Stargate IaaS Platform", explores how the Stargate platform, which provides a web portal to cluster resources, can increase user efficiency, eliminate cluster administration costs, and act as a "pay-as-you-go" cloud model, simplifying HPC cloud clusters and making them more accessible.
In this video, Torsten Hoefler from ETH Zurich presents: Scientific Benchmarking of Parallel Computing Systems. "Measuring and reporting performance of parallel computers constitutes the basis for scientific advancement of high-performance computing. Most scientific reports show performance improvements of new techniques and are thus obliged to ensure reproducibility or at least interpretability. Our investigation of a stratified sample of 120 papers across three top conferences in the field shows that the state of the practice is not sufficient."
With a new upgrade, the University of Birmingham is set to benefit from the largest IBM POWER9 machine learning cluster in the UK, delivering unprecedented performance for AI workloads. Working with HPC integrator OCF, the University will add a total of 11 IBM POWER9-based IBM Power Systems servers to its existing HPC infrastructure. "With our early deployment of the two IBM POWER9 servers we have seen what is possible. By scaling up, we can keep pace with the escalating demand and offer the computational capacity and capability to attract leading researchers to the University."
In this video from SC18 in Dallas, Ziv Kalminovich from VMware describes how the company's powerful virtualization capabilities bring flexibility and performance to HPC workloads. "With VMware, you can capture the benefits of virtualization for HPC workloads while delivering performance that is comparable to bare-metal. Our approach to virtualizing HPC adds a level of flexibility, operational efficiency, agility and security that cannot be achieved in bare-metal environments—enabling faster time to insights and discovery."
Intel has a long history of making important announcements at the annual Supercomputing conference, and this year was no exception. This guest post from Intel covers what new technology was front and center from Intel at SC18, including its Cascade Lake advanced performance processors, Intel Optane Persistent Memory and more. Learn more about these new technologies designed to accelerate the convergence of high-performance computing and AI.
In this video from SC18, Jack Wells from ORNL describes how OpenACC enables scientists to port their codes to GPUs and other HPC platforms. "OpenACC, a directive-based high-level parallel programming model, has gained rapid momentum among scientific application users - the key drivers of the specification. The user-friendly programming model has facilitated acceleration of over 130 applications including CAM, ANSYS Fluent, Gaussian, VASP, Synopsys on multiple platforms and is also seen as an entry-level programming model for the top supercomputers (Top500 list) such as Summit, Sunway Taihulight, and Piz Daint. As in previous years, this BoF invites scientists, programmers, and researchers to discuss their experiences in adopting OpenACC for scientific applications, learn about the roadmaps from implementers and the latest developments in the specification."
Today Equus Compute Solutions rolled out its new G2660 2U 2xGPU server, ideal for artificial intelligence and deep learning environments. This GPU platform offers higher performance, reduced rack space requirements, and lower power consumption compared with traditional CPU-centric server platforms. "Our customers have been asking for the flexibility to source GPUs in different ways on high performance servers," said Lee Abrahamson, CTO of Equus Compute Solutions. "Our GPU servers, such as the G2660 server, are the ideal cost-optimized solutions for a wide range of applications and workloads. At the same time, these innovative platforms provide benefits of scale and volume, component standardization, ease of service logistics, and the means to avoid vendor lock-in."