COLUMBUS, Ohio, April 22, 2019 — While the impressive breakthroughs from Ohio’s leading research universities stem from the brilliant minds of students and faculty, the discoveries are greatly accelerated through the use of high performance computing. Thursday, many of these great minds gathered at the Ohio Supercomputer Center (OSC) for the Statewide Users Group (SUG) spring conference to collaborate and share ideas with peers and OSC staff.
SUG encompasses all OSC clients and receives direction from the SUG executive committee, a volunteer group of Ohio university faculty who provide OSC’s leadership with program and policy advice and direction to ensure a productive environment for research.
Thursday’s conference featured a keynote address from Santosh Rao, senior technical director at NetApp, a hybrid cloud data services and data management company. Rao is responsible for NetApp’s Artificial Intelligence (AI) and data engineering products and solutions business. He works closely with the AI solution ecosystem across GPU, compute, software and channel partners, as well as customers worldwide.
“As NetApp is working across the industry, we are seeing the need for marketable skills that combine data science, algorithms, and the ability to take a fundamental approach around mission-critical skills and building out the ethics, the compliance, and the placeability aspects of the technology,” Rao said. “It is a pleasure and privilege to be here today because we can hopefully provide some parts that allow us to influence the program as we go forward, knowing the next five years are going to be fundamental to how the technology rolls. The ability to produce graduates that have those marketable skills is going to be critical for the industry.”
Breakout sessions provided the opportunity for OSC’s hardware and software committees to meet and share ideas. Other sessions included updates on code optimization, containers, the client portal, and the OSC Campus Champions program.
OSC Executive Director Dave Hudak addressed attendees and provided updates on the center’s direction. All the members of the OSC management team presented on client impact, service offerings, system updates and business details. Attendees also had a chance to provide constructive feedback and ask questions of the experts in the room.
“The Statewide Users Group meeting is always an exciting opportunity to engage students and faculty,” Hudak said. “We learn about the work they do and how OSC services help support their studies. Bringing the community together demonstrates the breadth of work on a range of areas, everything from analyzing hurricane storm surges to designing new materials.”
SUG’s popular poster and flash talk competitions took place in the afternoon. Participants presented nine flash talks and 17 posters, competing for a first-place prize of 5,000 resource units of time on OSC systems and a second-place prize of 2,500 resource units. All flash talk and poster competitors received 1,000 resource units.
The flash talks provided SUG students and investigators with a chance to highlight current research that OSC’s clients are doing at their home universities. For students, this is an opportunity to practice speaking in front of an audience, as they are judged by their peers and other conference attendees. The Ohio State University’s Arif Hossain won the flash talk competition for his presentation titled “Sweeping Jet Film Cooling and Impingement Cooling for Gas Turbine Heat Transfer Application.”
Taking the runner-up position, also from Ohio State, was Alexandria Volkening with “Forecasting Elections with Mathematical Models of Contagion Spread.”
The poster competitors display their research and results on paper, while attendees have an opportunity to engage them in person about their research. Zhiping Zhong from The Ohio State University won first place with his poster, titled “Viruses Potentially Enhance Hosts Cold- and Salt-Tolerance.” Second place went to The Ohio State University’s Harper McMinn-Sauder for a poster titled “Measuring Honey Bee Utilization of Conservation Reserve Program (CRP) Pollinator Plantings Using DNA Metabarcoding.”
The next SUG conference will take place Thursday, Oct. 17, 2019.
Flash talk participants:
Andrew Chen, The Ohio State University, Predictive Model for Selective Aryl C-H Chlorination
Youssef Golestani, Kent State University, Modeling Liquid Crystal Elastomer Coatings Containing Defects
Arif Hossain, The Ohio State University, Sweeping Jet Film Cooling and Impingement Cooling for Gas Turbine Heat Transfer Application
Younghun Kang, The Ohio State University, Comparison of Two Parametric Wind Models in Storm Surge Simulation
Sean Marguet, The Ohio State University, Mechanistic Investigations of Hydrogen Evolution by Nickel-Substituted Rubredoxin: Examining the Importance of Secondary Sphere
Rajesh Ranjan, The Ohio State University, Stability Dynamics of Three-Dimensional Cavity Flows Using an Optimally Parallel Analysis Tool
Sharon Scott, The Ohio State University, Use of Computational Simulations to Determine Sorption Mechanisms of Organic Cationic Contaminants in Organic Matter
Vilas Shinde, The Ohio State University, Control of Shock Wave Boundary Layer Interaction Using Surface Morphing
Alexandria Volkening, The Ohio State University, Forecasting Elections with Mathematical Models of Contagion Spread
Poster participants:
Russell Bonneville, The Ohio State University, Characterization of Clonal Evolution in Microsatellite Unstable Metastatic Cancers through Multi-Regional Tumor Sequencing
Andrew Chen, The Ohio State University, Predictive Model for Selective Aryl C-H Chlorination
Masood Delfarah, The Ohio State University, Deep Learning for Reverberant Speaker Separation: An Empirical Study
Julio de Lima Nicolini, The Ohio State University, Model Order Reduction of Electromagnetic Particle-in-Cell Kinetic Plasma Simulations
Omar El-Khoury, The Ohio State University, Storm Surge Simulation due to Hurricane Harvey
Chung Hyun Lee, The Ohio State University, Computational Analysis of Electromagnetic Interactions of Multiple Antennas on the Extremely Large and Multi-scale Objects
Arif Hossain, The Ohio State University, Sweeping Jet Film Cooling and Impingement Cooling for Gas Turbine Heat Transfer Application
Sean Marguet, The Ohio State University, Mechanistic Investigations of Hydrogen Evolution by Nickel-Substituted Rubredoxin: Examining the Importance of Secondary Sphere
Harper McMinn-Sauder, The Ohio State University, Measuring Honey Bee Utilization of Conservation Reserve Program (CRP) Pollinator Plantings Using DNA Metabarcoding
Ola Nosseir, The Ohio State University, Phase Field Modeling of Transformation Pathway in HEA
Sharon Scott, The Ohio State University, Use of Computational Simulations to Determine Sorption Mechanisms of Organic Cationic Contaminants in Organic Matter
Carrie Salmon, The University of Akron, Linear Chlorophosphazenes: a Computational Study
Chanté Vines, The Ohio State University, Evaluating Fugitive Methane Emissions from Hydraulic Fracturing Using an Artificial Neural Network
Dylan Wood, The Ohio State University, A Modeling Framework for Assessing and Communicating Environmental Risks Due to Hurricanes
Yuan Xue, The University of Akron, DFT Calculations on Heterocyclic Substituted Phosphazene Oligomers as Metal Chelators
Zhiping Zhong, The Ohio State University, Viruses Potentially Enhance Hosts Cold- and Salt-Tolerance
Theresa Yazbeck, The Ohio State University, The Effects of Canopy Density and Spacing in Modulating Pollution Deposition Rate
Three thousand watts. That’s how much power the competitors in the 2019 ASC Student Supercomputer Challenge here in Dalian, China, have to work with. Everybody would like more juice to run compute-intensive HPC simulations and AI models, which means that whoever can get the most work done within the power constraints will take home the crown.
Twenty student teams from around the world have gathered here at the Dalian University of Technology to pit their computing abilities against one another in a no-holds-barred computational extravaganza. The event, which is hosted by the Asia Supercomputer Community (ASC) and sponsored by Inspur and Intel, started on Sunday and runs through Thursday, when the winners will be announced.
Just getting to this point is a victory of sorts for the undergraduates, but everybody would like to take home an award just the same. The ASC19 competition opened for registration in November with 300 student teams from 200 universities. The preliminary round kicked off in January and continued until early March, when the teams submitted their final proposals. A jury of HPC and AI experts with the ASC organization judged the proposals, and selected the final teams that are here this week.
The first two days of ASC (Sunday and Monday) are all about building the cluster, completing test runs, and dialing in the cluster for the actual competition, which runs mainly Tuesday and Wednesday. The configuration phase is a critical period for the teams, as they must determine which configuration will deliver the best performance against the various applications.
The sound of whirring fans fills the spacious indoor stadium here whenever one of the teams starts a test and the clusters ramp up. The teams check the results to see how the cluster ran, make some hardware or software tweaks, and repeat as necessary until they’re happy (or at least somewhat satisfied) with the setup.
Twenty student teams from around the world compete in ASC19 this week at Dalian University of Technology’s indoor stadium
The big challenge is to pick a cluster configuration that gives good results across all of the applications, or at least a cross-section of them. Dialing in that configuration involves balancing numerous variables, some of which are known and some of which are not. Once competition begins, the teams are not allowed to re-boot their clusters, although they can take actions such as idling spare CPU or GPU nodes, which gives them some flexibility to adapt to different workloads while staying under 3,000 watts.
Each team is allowed to use as many servers as it wants. Boxes of Inspur servers are lined up against the bleacher seats, waiting to be racked and put into action. A maximum of 384GB of RAM is allowed, which most teams will probably use, and teams must supply their own hard drives (SSDs are a must).
The teams are allowed to use any internal networks they like. Most have chosen Mellanox interconnects, but one team has selected Intel OmniPath. Will it provide an advantage? Only time will tell.
ASC19 teams are also allowed to bring as many accelerators as they want to the competition. One team is rumored to have a large number of Nvidia V100 GPUs at the ready — effectively “loaded for bear.” But because of that pesky 3,000 watt limit, it’s unclear if the ASC19 winner will be able to bring all that firepower to bear.
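Some back-of-envelope arithmetic shows why the power cap forces these tradeoffs. The per-component wattages below are illustrative assumptions, not measured figures from any team’s hardware, but they capture the basic budgeting exercise every team must work through:

```python
# Back-of-envelope power budgeting under the ASC 3,000 W cap.
# All per-component wattages are illustrative assumptions, not
# measured figures from any competing team's hardware.

POWER_CAP_W = 3000

CPU_NODE_W = 400   # assumed draw of one dual-socket server under load
V100_GPU_W = 300   # Nvidia V100 board power is rated at 250-300 W

def max_gpus(n_cpu_nodes, cap=POWER_CAP_W):
    """How many V100s fit in the budget left after the CPU nodes?"""
    remaining = cap - n_cpu_nodes * CPU_NODE_W
    return max(remaining // V100_GPU_W, 0)

if __name__ == "__main__":
    for nodes in (1, 2, 3):
        print(f"{nodes} CPU node(s): room for {max_gpus(nodes)} V100s")
```

Under these assumed numbers, even a single-node cluster tops out at around eight V100s at full load, which is why a team “loaded for bear” may still have to idle accelerators during power-hungry runs.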
Some of the applications will clearly benefit from GPU computational power, while others will favor a lighter-weight approach. It all comes down to finding a strategic, or balanced, approach.
The power board shows how close the 20 teams competing in ASC19 are coming to the 3,000-watt max
For example, one of the ASC19 applications is the single-image super-resolution (SISR) competition, which requires the teams to use the PyTorch deep learning framework to basically “up-convert” a series of blurry images into images with greater detail. Having a lot of GPUs here could be a boon.
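The actual SISR entries use trained PyTorch models, but as a toy illustration of what “up-converting” means, here is a plain-Python nearest-neighbor upscale — the naive baseline that deep super-resolution networks are designed to beat by filling in plausible high-frequency detail rather than just copying pixels:

```python
# Naive nearest-neighbor 2x upscaling of a grayscale "image"
# (a list of pixel rows). This shows the SISR problem setup only;
# it is not the competition code, which uses PyTorch models.

def upscale_nearest(image, factor=2):
    """Return the image enlarged by an integer factor on both axes."""
    out = []
    for row in image:
        wide = [px for px in row for _ in range(factor)]  # stretch columns
        out.extend([wide] * factor)                       # stretch rows
    return out

small = [[0, 255],
         [255, 0]]
big = upscale_nearest(small)  # 4x4 image of 2x2 blocks, still "blocky"
```

The blocky output is exactly the artifact a learned SISR model avoids, which is where GPU horsepower for training and inference pays off.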
Another prominent application that’s sure to stress the ASC19 teams is the Community Earth System Model (CESM), which is one of the main models used by the United Nations Intergovernmental Panel on Climate Change (IPCC). CESM is more of a “classic” HPC application, which may or may not benefit from GPUs.
The teams are eagerly awaiting the release of the actual data sets on Tuesday, which will go far in determining whether they made the right decisions in configuring their setups. Of course, by then it will be too late to make any major changes to their configuration.
Besides the CESM and the SISR tests, the teams must also prepare for other applications, including the WTDBG sequence assembly application and the high performance benchmarks HPL (the classic LINPACK benchmark) and HPCG. There is also one mystery application that will test the teams’ ability to adapt to the unexpected.
Except for the mystery application, the ASC19 competitors have known what applications, algorithms, frameworks, and codes they’re going to be working with here. That allows them to prepare, including by writing CUDA programs for AI and HPC applications. That’s why the release of the datasets early Tuesday morning is such a critical event. The data sets could be significantly bigger than what they’re used to dealing with, which could throw an interesting wildcard into the mix.
What’s At Stake
The computational tests take place Tuesday and Wednesday, and on Thursday morning the teams will make their presentations and mount a verbal defense before the jury. Later that day, the winners will be announced and a banquet will be held in celebration.
The prizes at play include the title of Champion, the Silver Prize, the Highest Linpack, the e-Prize, and the Group Competition, among other awards. There’s a total bonus of over $36,000 at stake.
As the students battle it out on the stadium floor for AI and HPC supremacy, there will be other events taking place, including the 21st HPC Connection Workshop and Artificial Intelligence & Supercomputing Innovation Symposium.
Speaking at the workshop will be Jack Dongarra, the chairman of the ASC Advisory Committee and a distinguished professor at the University of Tennessee, as well as University of California Berkeley expert Leon Chua.
The symposium, meanwhile, will feature Huchuan Lu, a professor at Dalian University of Technology, and Guohui Li, a researcher at the Dalian Institute of Chemical Physics in the Chinese Academy of Sciences.
The main event, though, involves the competition among the 300-plus students from around the world. The teams hail from 15 universities in China and five universities in other countries, including Germany (two), Poland, South Korea, and Colombia. Universities represented include:
Beihang University, China
Southern University of Science and Technology, China
Fuzhou University, China
Jinan University, China
Shanghai Jiao Tong University, China
Sun Yat-Sen University, China
Taiwan Tsing Hua University, China
Peking University, China
Huazhong University of Science and Technology, China
Shanxi University, China
Taiyuan University of Technology, China
University of Electronic Science and Technology of China, China
The Chinese University of Hong Kong, China
University of Warsaw / Warsaw University of Technology, Poland
The finals of the ASC19 Student Supercomputer Challenge kicked off on April 21, 2019 at Dalian University of Technology, China. The top 20 teams from around the world designed and built clusters within a 3,000 W power budget, tackling tough tasks including the AI single-image super-resolution (SR) application, the Community Earth System Model (CESM), the sequence assembly software WTDBG, the high performance benchmarks HPL & HPCG, and a mystery application. They are fighting for the Champion, the Silver Prize, the Highest LINPACK, the e-Prize, the Group Competition and other awards, with a total bonus of over US$36,000.
The 21st HPC Connection Workshop and the Artificial Intelligence & Supercomputing Innovation Symposium will also be held during the finals. Speakers include Jack Dongarra, ASC Advisory Committee Chair and Distinguished Professor at the University of Tennessee; Leon Chua, Professor at the University of California, Berkeley; and a number of world-renowned experts.
The ASC Student Supercomputer Challenge is the world’s largest student supercomputer competition, sponsored and organized by China and supported by Asian, European, and American experts and institutions. The main objectives of ASC are to encourage the exchange and training of young supercomputer talents from different countries, to improve supercomputer applications and R&D efforts, to boost the development of supercomputing technologies, and to promote technical and industrial innovations. The annual ASC Student Supercomputer Challenge was first held in 2012 and has since attracted over 7,000 undergraduates from all over the world. ASC19 is jointly organized by Asia Supercomputer Community, Inspur Group and Dalian University of Technology.
The 20 ASC19 finalists are:
Southern University of Science and Technology
Shanghai Jiao Tong University
Sun Yat-Sen University
Taiwan Tsing Hua University
Huazhong University of Science and Technology
Taiyuan University of Technology
University of Electronic Science and Technology of China
The Chinese University of Hong Kong
University of Warsaw / Warsaw University of Technology
Dalian University of Technology
University of Tartu
ASC19 Student Supercomputer Challenge Finals
The 21st HPC Connection Workshop
Time: 14:00-17:30 April 24
Venue: Dalian University of Technology
Topic: The impact and challenge of AI and Supercomputing
Artificial Intelligence and Supercomputer Innovation Symposium
Time: 09:00-11:20 April 25
Venue: Dalian University of Technology
SAN JOSE, Calif., April 22, 2019 — Cadence Design Systems, Inc. today announced that it has collaborated with TSMC to enable customers’ production delivery of next-generation system-on-chip (SoC) designs for mobile, high-performance computing (HPC), 5G and artificial intelligence (AI) applications on TSMC’s 5nm FinFET process technology. As part of the collaboration, the Cadence digital, signoff and custom/analog tools have been certified for Design Rule Manual (DRM) and SPICE v1.0, and Cadence IP has been enabled for the TSMC 5nm process. The corresponding process design kits (PDKs) featuring integrated tools, flows and methodologies are now available for traditional and cloud-based environments. Additionally, mutual customers have already completed several tapeouts using Cadence tools, flows and IP for full production development on the TSMC 5nm process technology.
Cadence delivered a fully integrated digital implementation and signoff tool flow, which has been certified on TSMC’s industry-leading 5nm process that has the benefits of process simplification provided by extreme ultraviolet (EUV) lithography. The Cadence full-flow includes the Innovus Implementation System, Liberate Characterization Portfolio, Quantus Extraction Solution, Tempus Timing Signoff Solution, Voltus IC Power Integrity Solution and Pegasus Verification System.
The Cadence digital and signoff tools that have been optimized for TSMC’s 5nm process technology provide EUV support at key layers and associated new design rules, which enable mutual customers to reduce iterations and achieve power, performance and area (PPA) improvements. Some of the newest enhancements for the 5nm process include predictive via-pillar-aware synthesis structuring with the Genus Synthesis Solution as well as a pin-access control routing method for cell electromigration (EM) handling in the Innovus Implementation System and Tempus ECO, and also statistical EM budgeting analysis support in the Voltus IC Power Integrity Solution. The newly certified Pegasus Verification System supports 5nm rule decks for all TSMC physical verification flows including DRC, LVS and metal fill.
5nm Custom/Analog Tool Certification
The Cadence custom/analog tools certified on TSMC’s industry-leading 5nm process technology include the Spectre® Accelerated Parallel Simulator (APS), Spectre eXtensive Partitioning Simulator (XPS), Spectre RF Option, Spectre Circuit Simulator, Voltus-Fi Custom Power Integrity Solution, Pegasus Verification System as well as the Virtuoso® custom IC design platform, which includes the Virtuoso Layout Suite EXL, Virtuoso Schematic Editor and Virtuoso ADE Product Suite.
The Virtuoso R&D team has an ongoing and rich collaboration with the Cadence IP Group, developing 5nm mixed-signal IP using a state-of-the-art custom design methodology built on the latest Virtuoso design platform. By continually enhancing the design methodologies and capabilities included with the Virtuoso Advanced-Node and Methodology Platform for TSMC’s advanced-node processes, including the 5nm process, customers can achieve better custom physical design throughput versus traditional non-structured design methodologies.
The new Virtuoso Advanced-Node and Methodology Platform (ICADVM 18.1) consists of features and functionality required for creating 5nm designs, which include an accelerated, row-based custom placement and routing methodology that enables users to improve productivity and better manage complex design rules. Cadence introduced several new features that support the 5nm process including stacked gate support, universal poly grid snapping, area-based rule support, asymmetric coloring and voltage-dependent rule support, analog cell support and support for various new devices and design constraints that are part of TSMC’s 5nm technology offering.
5nm IP Enablement
Cadence is developing a differentiated advanced-node IP portfolio to support TSMC’s 5nm process, which includes a high-performance memory subsystem, very high-speed SerDes and high-performance analog to meet the demands of HPC, machine learning (ML) and 5G base stations. With the release of TSMC’s 5nm design infrastructure, Cadence and TSMC are actively engaged with customers and enabling next-generation SoC development by addressing the latest IP requirements for evolving application areas.
“TSMC’s 5nm technology offers our customers the industry’s most advanced technology to address the growing demand for computing power driven by AI and 5G,” said Suk Lee, TSMC senior director, Design Infrastructure Management Division. “By collaborating closely with Cadence, we’re enabling customers to effectively differentiate themselves and deliver designs to market faster using our latest technologies.”
“We’re continuing to broaden our collaboration with TSMC to facilitate 5nm FinFET adoption, giving customers access to the latest tools and IP for advanced process design creation,” said Dr. Chin-Chi Teng, senior vice president and general manager of the Digital & Signoff Group at Cadence. “Our R&D team has focused heavily on developing new features and performance improvements so that our digital and signoff and custom/analog tools and IP can be used with complete confidence, enabling customers to achieve first-pass silicon success and deliver end products within aggressive time-to-market schedules.”
Cadence enables electronic systems and semiconductor companies to create the innovative end products that are transforming the way people live, work and play. Cadence software, hardware and semiconductor IP are used by customers to deliver products to market faster. The company’s System Design Enablement strategy helps customers develop differentiated products—from chips to boards to systems—in mobile, consumer, cloud datacenter, automotive, aerospace, IoT, industrial and other market segments. Cadence is listed as one of Fortune Magazine’s 100 Best Companies to Work For.
April 19 — Physicists at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) have discovered valuable information about how electrically charged gas known as “plasma” flows at the edge inside doughnut-shaped fusion devices called “tokamaks.” The findings mark an encouraging sign for the development of machines to produce fusion energy for generating electricity without creating long-term hazardous waste.
The result partially corroborates past PPPL findings that the width of the heat exhaust produced by fusion reactions could be six times greater than had been thought, making the exhaust less narrow, concentrated, and damaging. “These findings are good news for ITER,” said PPPL physicist C.S. Chang, lead author of a description of the research in Physics of Plasmas, referring to the international fusion experiment under construction in France. “The findings show that the heat exhaust in ITER will have a smaller chance of harming the machine,” Chang said.
Fusion, the power that drives the sun and stars, is the fusing of light elements in the form of plasma — the hot, charged state of matter composed of free electrons and atomic nuclei — that produces energy. Scientists around the world are seeking to replicate fusion on Earth for a virtually inexhaustible supply of power to generate electricity.
The superhot plasma within tokamaks, which can reach hundreds of millions of degrees, is confined by magnetic fields that keep the plasma from the walls of the machines. However, particles and heat can escape from the confinement fields at the “magnetic separatrix” — the boundary between the magnetically confined and unconfined plasmas. At this boundary, the field lines cross at the so-called X-point, the spot where the waste heat and particles escape and strike a target called the “divertor plate.”
The new findings reveal the surprising effect of the X-point on the exhaust by showing that a hill-like bump of electric charge occurs at the X-point. This electrical hill makes the plasma circulate around it, preventing plasma particles from traveling between the upstream and downstream areas of the field lines in a straight path. Instead, like cars maneuvering around a construction site, the charged plasma particles take a detour around the hill.
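The circulation described here is the classic E×B drift: a charged particle in crossed electric and magnetic fields moves along the equipotential contours of the hill with velocity v = (E×B)/B², independent of its charge. A minimal sketch, with field values invented purely for illustration (they are not tokamak parameters from the paper):

```python
# E x B drift velocity v = (E x B) / |B|^2 -- the mechanism by which
# plasma circulates around an electric-potential "hill" instead of
# flowing straight along the field lines. Field magnitudes below are
# invented for illustration only.

def cross(a, b):
    """Cross product of two 3-vectors given as [x, y, z] lists."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def exb_drift(E, B):
    """Drift velocity in m/s for E in V/m and B in tesla."""
    b2 = sum(c*c for c in B)
    return [c / b2 for c in cross(E, B)]

# E pointing in +x (down the slope of the hill), B in +z:
v = exb_drift([1000.0, 0.0, 0.0], [0.0, 0.0, 2.0])
# The drift is along -y: perpendicular to both E and B, so the
# particle skirts around the hill rather than climbing over it.
```

Because the drift is perpendicular to both fields, particles detour around the charged bump exactly as the article’s construction-site analogy suggests.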
The researchers produced these findings with XGC, an advanced computer code developed with external collaborators at PPPL that models the plasma as a collection of individual particles rather than as a single fluid. The model, which showed that the connection between the upstream plasma located above the X-point and the downstream plasma below the X-point formed in a way not predicted by simpler codes, could lead to more accurate predictions about the exhaust and make future large-scale facilities less vulnerable to internal damage.
“This result shows that the previous model of the field lines involving flux tubes is incomplete,” said Chang — referring to the tubular areas surrounding regions of magnetic flux — “and that the current understanding of the interaction between the upstream and downstream plasmas is not correct. Our next step is to figure out a more accurate relationship between the upstream and downstream plasmas using a code like ours. That knowledge will help us develop more accurate equations and improved reduced models, which in fact are already in progress.”
Support for this research comes from the DOE’s Office of Science. Co-authors of the Physics of Plasmas paper are PPPL physicists Seung-Hoe Ku and Randy Michael Churchill. Computations were performed on leadership-class supercomputers at the Oak Ridge Leadership Computing Facility, the Argonne Leadership Computing Facility, and the National Energy Research Scientific Computing Center (NERSC), all DOE Office of Science user facilities.
PPPL, on Princeton University’s Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. The Laboratory is managed by the University for the U.S. Department of Energy’s Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, visit science.energy.gov.
SCHIPHOL-RIJK, The Netherlands, April 19 — On April 5, 2019, Taurus Group BV announced the acquisition of Dutch HPC specialist ClusterVision. The group has purchased all of ClusterVision’s assets and intellectual property. A core team from the former ClusterVision entity will form the basis of a new company. Taurus will continue ClusterVision’s activities in HPC, including the development of TrinityX — an open source high performance computing cluster management ecosystem — and its support services. The new company will run under the leadership of original ClusterVision founder Mr Alex Ninaber. “After the recent troubles, we are confident in getting business back on track. All existing TrinityX customers of ClusterVision will be in good hands”, says Mr Ninaber. “I am happy that we can continue as part of Taurus Group. Our strengths will certainly complement each other.”
High performance computing (HPC) is a niche market involving sophisticated computing hardware configured for heavy calculations and data modelling. HPC requires the fine-tuning of storage, networking and compute to achieve maximum performance from the hardware. With explosive growth in data, there could be a much wider requirement for HPC installations in the future. There is a clear opportunity for ClusterVision to re-establish its long-standing reputation as a trusted supplier of HPC solutions with the financial strength of the Taurus Group structure.
Since 2016 Taurus Group has acquired several companies. “It is part of an ongoing transformation within Taurus to offer a full range of IT services ranging from hardware components to fully integrated enterprise solutions to our global customer base. The opportunity to bring ClusterVision into the portfolio is very important to our HPC strategy and future growth,” says Taurus Group Managing Director Arun Garg. “We have a long history in the distribution of storage, networking and compute. We believe that integrating closely connected business verticals will eventually bring significant scale, synergy and a thriving circular economy to the entire group.”
ClusterVision is Europe’s dedicated specialist for high-performance computing (HPC) solutions. By combining cutting-edge hardware and software components with a range of customised professional services, it creates and maintains top-quality, efficient, and reliable HPC solutions. Based in Amsterdam, ClusterVision grew into a reputed international operation, having designed, built, and managed some of the fastest and most complex computational and database clusters in Europe, some of which became TOP500 systems, including the largest supercomputer in Scandinavia.
About Taurus Group
Taurus was founded in 2005 and became a conglomerate of value-added IT distribution and system integration companies with offices in The Netherlands (HQ), Belgium (2BY2 NV), and Germany (Taurus Europe GmbH). It has built a strong global distribution infrastructure with large on-hand inventories of components, particularly enterprise storage. In addition, Taurus Group has its own enterprise integration division (ClusTaur Solutions BV) that offers software defined datacentre solutions such as ultra-low latency Software Defined Storage (SDS). To provide even more value to the market, Taurus Group is continuing to bring additional expertise into its ecosystem. Aside from High Performance Computing, Taurus is finalizing the acquisition of Dutch company Circle B BV adding Open Compute Project solutions and Software Defined Networking (SDN) to its offering.
April 19 — On 10 April 2019 the first-ever image of a black hole was unveiled. This once-in-a-lifetime achievement would not have happened without the support of a global team of dedicated scientists, and of European supercomputers.
During a live-streamed press conference in Brussels, spectators were treated to a short video zooming in on galaxy M87, ending with a view of the supermassive black hole at its centre, which is 6.5 billion times more massive than the Sun. Scientists from BlackHoleCam (https://blackholecam.org/), part of the Event Horizon Telescope (EHT) Consortium (https://eventhorizontelescope.org), contributed to obtaining the long-sought image, which actually shows the “shadow” of the black hole surrounded by a bright ring. The black hole itself remains hidden, as light bends in the intense gravity and cannot escape to show us what lies beyond the event horizon (the point of no return).
Prof. Dr. Luciano Rezzolla, Chair of Theoretical Astrophysics at the Goethe University, Frankfurt, Germany, and one of the principal investigators (PIs) of BlackHoleCam (https://astro.uni-frankfurt.de/rezzolla/), is part of the global network of more than 200 scientists who needed one year and eight telescopes to achieve this feat.
In addition to the telescopes, the team used supercomputing resources to explore theoretically the various aspects of this discovery and understand the properties of the black hole. The German Tier-0 systems SuperMUC, hosted by LRZ in Garching, and HazelHen, hosted by HLRS in Stuttgart, as well as the LOEWE cluster hosted by CSC in Frankfurt, were directly employed in these calculations. SuperMUC and HazelHen are part of the PRACE Research Infrastructure that supports excellent science and engineering in Europe (www.prace-ri.eu/prace-resources/).
“Two aspects of our work would have been impossible without supercomputers: the direct simulations that first produce the plasma dynamics near a black hole and subsequently the appearance of the ‘shadow’ of a black hole, and the data reduction needed to turn the huge amount of interferometric data from the radio telescopes into an image,” explains Rezzolla.
When using world-class HPC systems, the need for efficient algorithms and skilled scientific programmers increases significantly. Katie Bouman, an assistant professor in the California Institute of Technology’s computing and mathematical sciences department, was one of the talented scientists on the EHT team. A photograph capturing her excitement at the first viewing of the image has gone viral, and has given a young and female face to the science of the universe.
“In international projects with diverse teams of highly skilled and motivated people, the best scientific results are achieved. This project, which combines the most advanced technology with the finest theoretical supercomputer simulations, is a good example of that,” Rezzolla added.
PRACE is proud to have Luciano Rezzolla as a Member of its Access Committee, the body that gives advice to the Board of Directors concerning the allocation of resources of the RI (www.prace-ri.eu/organisation/).
The image opens new windows onto the study of black holes, their event horizons, and gravity, and PRACE will continue to support excellent research in this field via its Call for Proposals for Project Access. To date PRACE has supported 17 such projects, providing them with more than 780 million core hours combined. The topic is also featured in the Scientific Case for Computing in Europe 2018-2026 (www.prace-ri.eu/third-scientific-case/), prepared by the PRACE Scientific Steering Committee, the body that provides advice and guidance on all matters of a scientific and technical nature that may influence the scientific work carried out using the resources provided by PRACE.
BlackHoleCam is an EU-funded project to finally image, measure and understand astrophysical black holes. Its research will test fundamental predictions of Einstein’s theory of General Relativity (GR). The BlackHoleCam team members are active partners of the global Event Horizon Telescope Consortium. BlackHoleCam is funded through a “Synergy Grant” awarded by the European Research Council (ERC) to a team of European astrophysicists, in partnership with the Event Horizon Telescope project and other international partners.
April 19, 2019 — PRACE has named Dr. Debora Sijacki, Reader in Astrophysics and Cosmology at the Institute of Astronomy and Kavli Institute for Cosmology, University of Cambridge, United Kingdom, the winner of the 2019 PRACE Ada Lovelace Award for HPC for her outstanding contributions to and impact on HPC in Europe. As a computational cosmologist, she has achieved numerous high-impact results in astrophysics based on numerical simulations on state-of-the-art supercomputers.
Debora Sijacki’s work focuses on computational astrophysics, especially galaxy formation, supermassive black holes, and hydrodynamical feedback processes. She has developed several new numerical models and implemented them in massively parallel simulations. A notable example of her impact in computational science is the highly successful Illustris galaxy formation model, of which she was one of the key developers during her postdoctoral Hubble Fellowship at Harvard University. The main result of Illustris concerns the fundamental role supermassive black holes play in shaping galaxy properties, based on seminal ideas she developed during her PhD studies at the Max Planck Institute for Astrophysics in Garching, Germany.
Sijacki uses large supercomputing facilities for her calculations, and her work directly showcases the importance of high performance computing for fundamental research in cosmology and astrophysics. Her impact in HPC is widely recognised: as principal investigator (PI) or co-PI, she has been involved in several projects that were awarded, in total, more than 180 million core hours through PRACE, XSEDE and DiRAC. Her expertise and high standing in this area are also underlined by the fact that she has chaired the UK National HPC Project Management Board of DiRAC since 2016.
The research areas of astrophysics and computational science still suffer from an unbalanced gender ratio. Nevertheless, Sijacki’s work has found international acceptance in both domains. Her research achievements were recognised with the Otto Hahn Medal of the Max Planck Society, and she was awarded an ERC Starting Grant (2015-2020) during her lectureship at the Institute of Astronomy in Cambridge. She has excellent communication skills, conveying her enthusiasm for her work not only at scientific conferences but also through media interviews and public outreach events, inspiring the next generation of young female scientists in HPC.
The Partnership for Advanced Computing in Europe (PRACE) is an international non-profit association with its seat in Brussels. The PRACE Research Infrastructure provides a persistent world-class High Performance Computing service for scientists and researchers from academia and industry in Europe. The computer systems and their operations accessible through PRACE are provided by five PRACE members (BSC representing Spain, CINECA representing Italy, ETH Zurich/CSCS representing Switzerland, GCS representing Germany and GENCI representing France). The Implementation Phase of PRACE receives funding from the EU’s Horizon 2020 Research and Innovation Programme (2014-2020) under grant agreement 730913. For more information, see www.prace-ri.eu.
* Novel and innovative interconnect architectures
* Multi-core processor interconnects
* System-on-Chip Interconnects
* Advanced chip-to-chip communication technologies
* Optical interconnects
* Protocols and interfaces for inter-processor communication
* Survivability and fault-tolerance of interconnects
* High-speed packet processing engines and network processors
* System and storage area network architectures and protocols
* High-performance host-network interface architectures
* High-bandwidth and low-latency I/O
* Pb/s switching and routing technologies
* Innovative architectures for supporting collective communication
* Novel communication architectures to support cloud computing
* Centralized and distributed cloud interconnects
* Requirements driving high-performance interconnects
* Traffic characterization for HPC systems and commercial data centers
* Software for Network/Fabric Bring-up, Configuration and Performance Management, e.g., OpenFlow or OpenSM
* Data Center networking
AUTHOR INFORMATION AND FORMAT
Presentations at HOT Interconnects are in the form of 30-minute talks in PowerPoint or PDF. Presentation slides will be mounted on the website www.hoti.org, accessible to attendees during and after the conference. A select group of presenters will be encouraged to submit a full-length paper for publication in a special issue of IEEE Micro. A $500 award will be given for the best student presentation, and support will be offered for student travel.
Regular Presentations consist of a title, an extended abstract (two to four pages) and the presenter’s contact information (name, affiliation, job title, address, phone(s), fax, and email). Please indicate whether you have submitted, intend to submit, or have already presented or published a similar or overlapping submission to another conference or journal. Also indicate if you would like the submission to be held confidential; if so indicated, submissions remain confidential until the first day of the conference.
Submissions are evaluated by the Program Committee on the basis of performance of the device(s), degree of innovation, use of advanced technology, potential market significance, and anticipated interest to the audience. Research and software contributions will be evaluated with similar criteria. To the extent that you are describing a product, indicate its status: design, development, tape-out, silicon, shipping, etc.
Authors will be notified of acceptance decisions by May 31, 2019. Send questions relating to the program to the program chairs, Khaled Hamidouche and Ryan Grant, at firstname.lastname@example.org, and questions relating to conference operation to the general chairs, Eitan Zahavi and Don Draper, at email@example.com.
Sponsored by the Technical Committee on Microprocessors and Microcomputers of the IEEE Computer Society.
Check the HOT INTERCONNECTS 26 web page for updates: www.hoti.org
HSINCHU, Taiwan, R.O.C., April 18, 2019 — TSMC today announced consolidated revenue of NT$218.70 billion (US $7.09 billion), net income of NT$61.39 billion (US $1.99 billion), and diluted earnings per share of NT$2.37 (US$0.38 per ADR unit) for the first quarter ended March 31, 2019.
Year-over-year, first quarter revenue decreased 11.8% while net income and diluted EPS both decreased 31.6%. Compared to fourth quarter 2018, first quarter results represented a 24.5% decrease in revenue and a 38.6% decrease in net income. All figures were prepared in accordance with TIFRS on a consolidated basis.
In US dollars, first quarter revenue was $7.10 billion, which decreased 16.1% year-over-year and decreased 24.5% from the previous quarter. Gross margin for the quarter was 41.3%, operating margin was 29.4%, and net profit margin was 28.1%.
In the first quarter, shipments of 7-nanometer accounted for 22% of total wafer revenue and 10-nanometer process technology contributed 4% while 16-nanometer accounted for 16%. Advanced technologies, defined as 16-nanometer and more advanced technologies, accounted for 42% of total wafer revenue.
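The advanced-technology figure quoted above is simply the sum of the individual node shares. A quick sketch confirming the arithmetic (all percentages are taken from the paragraph above; the node labels are shorthand for illustration):

```python
# Q1 2019 wafer revenue share by process node (percent of total wafer revenue)
node_share = {"7nm": 22, "10nm": 4, "16nm": 16}

# "Advanced technologies" are defined as 16nm and more advanced nodes,
# so the advanced share is the sum of these three contributions.
advanced_share = sum(node_share.values())
print(advanced_share)  # 42 (percent), matching the 42% stated in the report
```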
“In the first quarter, our business was impacted by the overall global economic condition which dampened the end market demand; customer inventory management to digest excess inventory in the semiconductor supply chain; and high-end mobile product seasonality. Meanwhile, the net effect from the photoresist defect material incident also impacted our first quarter revenue by about 3.5 percent,” said Lora Ho, SVP and Chief Financial Officer of TSMC. “While the economic factor and mobile product seasonality are still lingering as we move into second quarter, we believe we may have passed the bottom of the cycle of our business as we are seeing demand stabilizing. Based on our current business outlook, management expects the overall performance for second quarter 2019 to be as follows”:
Based on the exchange rate assumption of 1 US dollar to 30.85 NT dollars, revenue is expected to be between US$7.55 billion and US$7.65 billion;
Gross profit margin is expected to be between 43% and 45%;
Operating profit margin is expected to be between 31% and 33%.
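The guidance above implies concrete dollar ranges for operating profit. A minimal sketch of that arithmetic; note that pairing the low revenue figure with the low margin (and high with high) is an illustrative assumption, not TSMC’s own calculation:

```python
# Q2 2019 guidance from the release (USD billions and margin fractions)
revenue_low, revenue_high = 7.55, 7.65
op_margin_low, op_margin_high = 0.31, 0.33

# Implied operating profit range, pairing low with low and high with high
op_profit_low = revenue_low * op_margin_low     # ~2.34 billion USD
op_profit_high = revenue_high * op_margin_high  # ~2.52 billion USD
print(round(op_profit_low, 2), round(op_profit_high, 2))  # 2.34 2.52
```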
TSMC pioneered the pure-play foundry business model when it was founded in 1987, and has been the world’s largest dedicated semiconductor foundry ever since. The company supports a thriving ecosystem of global customers and partners with the industry’s leading process technology and portfolio of design enablement solutions to unleash innovation for the global semiconductor industry. TSMC serves its customers with annual capacity of about 12 million 12-inch equivalent wafers in 2019 from fabs in Taiwan, the United States, and China, and provides the broadest range of technologies, from 0.5 micron all the way to the foundry industry’s most advanced process, which is 7-nanometer today. TSMC is the first foundry to provide 7-nanometer production capabilities, and is headquartered in Hsinchu, Taiwan. For more information about TSMC, visit http://www.tsmc.com.