Amazon Web Services » HPC Blog
A High-Performance Computing (HPC) blog by Amazon Web Services. Subscribe to get the latest articles and updates from this blog.
1d ago
Run simulations using multiple containers in a single AWS Batch job
Matthew Hansen, Principal Solutions Architect, AWS Advanced Computing & Simulation
Recently, AWS Batch launched a new feature that makes it possible to run multiple containers within a single job. This enables new scenarios customers have asked about, like simulations for autonomous vehicles, multi-robot collaboration, and other advanced simulations.
For autonomous system (AS) developers, this means you can keep your simulation and test scenario code in separate containers from the autonomy and sensor pipelines you want to…
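The multi-container setup described above might look like the following as a job-definition payload. This is a sketch, not the post's actual code: the field names follow the AWS Batch `ecsProperties` structure as documented for multi-container jobs, but the job name, images, and resource values are illustrative assumptions — check the current `RegisterJobDefinition` API reference before using.

```python
# Sketch of a multi-container AWS Batch job definition payload.
# Field names follow the Batch `ecsProperties` structure; the job
# name, image URIs, and resource values are illustrative only.

def build_multi_container_job_definition(sim_image: str, autonomy_image: str) -> dict:
    """Build a register_job_definition payload with two containers:
    one for the simulator/scenario code, one for the autonomy stack."""
    return {
        "jobDefinitionName": "av-simulation",  # hypothetical name
        "type": "container",
        "ecsProperties": {
            "taskProperties": [
                {
                    "containers": [
                        {
                            "name": "simulator",
                            "image": sim_image,
                            "essential": True,
                            "resourceRequirements": [
                                {"type": "VCPU", "value": "4"},
                                {"type": "MEMORY", "value": "8192"},
                            ],
                        },
                        {
                            "name": "autonomy-stack",
                            "image": autonomy_image,
                            "essential": True,
                            "resourceRequirements": [
                                {"type": "VCPU", "value": "4"},
                                {"type": "MEMORY", "value": "8192"},
                            ],
                        },
                    ]
                }
            ]
        },
    }

job_def = build_multi_container_job_definition(
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/sim:latest",
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/autonomy:latest",
)
containers = job_def["ecsProperties"]["taskProperties"][0]["containers"]
print([c["name"] for c in containers])
```

A real registration would pass this dict to `boto3.client("batch").register_job_definition(**job_def)`; keeping the simulator and autonomy stack as separate containers lets each team ship its own image independently.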
6d ago
Nextflow is a popular domain-specific language (DSL) and runtime used to define workflows that string together multiple processing steps into a pipeline. This makes it possible to express quite complex genomics and other scientific analyses, including machine-learning pipelines.
Workflows defined in Nextflow code can leverage container orchestration technologies to deploy containerized workloads across clusters, clouds, or HPC environments. Because Nextflow is an interpreted language, errors in a script are only revealed at runtime. This increases the time and cost of developing and debugging a workflow, which could be redu…
1w ago
This post was contributed by Olivia Choudhury, PhD and Aniket Deshpande from AWS; Sanchit Misra, PhD, Vasimuddin Md., PhD, Narendra Chaudhary, PhD, Saurabh Kalikar, PhD, and Manasi Tiwari, PhD, Research Scientists at Intel Labs India; Ashish Kumar Patel, contingent worker with Intel Technology India Pvt. Ltd.
We are living in exciting times for the rapidly growing field of omics, which spans genomics, proteomics, transcriptomics, and metabolomics. Our ability to measure omics data is increasing at a dramatic pace, and new data science (AI and data management) pipelines are being deve…
3w ago
This post was contributed by Ross Pivovar, Solution Architect, and Adam Rasheed, Head of Emerging Workloads & Technologies, AWS and Orang Vahid, Director of Engineering Services and Kayla Rossi, Application Engineer, Maplesoft
In our previous posts we discussed how to set up a Level 4 Digital Twin (DT) that adapts to changing environments, and how to use the L4 DT to perform forecasting, scenario analysis, and risk assessment based on incoming measurement data. In this post, we'll discuss methods to find the optimal number, type, and placement of sensors to maximize the accuracy of you…
1M ago
Research organizations around the world run large-scale simulations, analyses, models, and other distributed, compute-intensive workloads on AWS every day. These jobs depend on an orchestration layer to coordinate tasks across the compute fleet.
As a researcher or systems administrator providing services for researchers, it can be difficult to choose which AWS service or solution to use because there are various options for different kinds of workloads.
In this post, we’ll describe some typical research use cases and explain which AWS tool we think best fits that workload.
Understanding your…
1M ago
As many readers will know, AWS Batch provides functionality that enables you to run batch workloads on the managed container orchestration services in AWS: Amazon ECS and Amazon EKS. One of the core concepts of Batch is that it provides a job queue you can submit your work to. Batch is designed to transition your jobs from SUBMITTED to RUNNABLE if they pass preliminary checks, and from RUNNING to either FAILED or SUCCEEDED after the job is placed on a compute resource and completes. Batch also sends an event to Amazon CloudWatch Events for each corresponding job state update.
Sometim…
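The job-state events mentioned above can be consumed with an event rule and a small handler. The sketch below assumes the documented "Batch Job State Change" event shape (source `aws.batch`, with `jobName`, `jobId`, `status`, and `statusReason` in the detail); the rule pattern, job names, and handler logic are illustrative, not the post's actual solution.

```python
# Sketch: react to AWS Batch job state changes delivered as events.
# The pattern and detail fields assume the documented
# "Batch Job State Change" event format; the handler is illustrative.

# Event rule pattern matching Batch jobs that reach FAILED.
EVENT_PATTERN = {
    "source": ["aws.batch"],
    "detail-type": ["Batch Job State Change"],
    "detail": {"status": ["FAILED"]},
}

def handle_job_state_change(event: dict) -> str:
    """Minimal Lambda-style handler: summarize a failed Batch job."""
    detail = event["detail"]
    reason = detail.get("statusReason", "no reason given")
    return (f"Job {detail['jobName']} ({detail['jobId']}) "
            f"entered {detail['status']}: {reason}")

# Local smoke test with a hand-written sample event.
sample_event = {
    "source": "aws.batch",
    "detail-type": "Batch Job State Change",
    "detail": {
        "jobName": "nightly-sim",       # hypothetical job
        "jobId": "b1e2c3d4",            # placeholder id
        "status": "FAILED",
        "statusReason": "Essential container exited with code 1",
    },
}
print(handle_job_state_change(sample_event))
```

In a deployment, `EVENT_PATTERN` would go on an event rule targeting a Lambda function whose entry point wraps `handle_job_state_change`, so failures trigger alerts without polling the queue.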
2M ago
HPC workloads like genome sequencing and protein folding involve processing huge amounts of input data. Genome sequencing aims to determine an organism’s complete DNA sequence by analyzing extensive genome databases containing gene and genome reference sequences from thousands of species. Protein folding uses molecular dynamics simulations to model the physical movements of atoms and molecules in a protein.
These workloads require analyzing massive input datasets. To support these kinds of applications that need high bandwidth, low latency, and parallel access to lots of data, AWS offers…
2M ago
This post was contributed by J. Doyne Farmer, Institute of New Economic Thinking, Oxford University, Jagoda Kaszowska-Mojsa, Oxford University & Institute of Economics, Polish Academy of Sciences, Sam Bydlon, Senior Solutions Architect and Ilan Gleiser, Principal Machine Learning Specialist, WWSO Emerging Technologies, AWS
Economists and policy makers have maintained a sustained interest in understanding the effects of macroprudential economic policies. Recently, a novel approach using agent-based models has emerged, which provides insights into the complexity of these…
2M ago
You might associate the phrase ‘Jupyter Notebooks in production’ with a scrappy startup short on engineers or a hobbyist tinkering in their free time. However, this story unfolds at Amazon, where a team transitioned from requiring software engineers to replicate scientists’ work for production, to enabling scientists to seamlessly deploy Jupyter Notebooks into production.
Amazon’s Renewable Energy Optimization team produces software to maximize the effectiveness of our portfolio of wind and solar farms. The team develops and runs machine learning models that forecast the state of the electrici…
2M ago
Cloud computing provides an experience similar to uncapped or unlimited resources for HPC workloads, helping organizations accelerate research and development. When using the cloud, the business owner typically allocates a fixed annual budget for HPC resources. That budget then needs to be split among multiple groups, across departments, business units, or projects.
But while budgets are fixed, HPC workload needs fluctuate throughout the year for nearly everyone. That’s challenging, and can often make the shift to cloud too much of a puzzle for some.
In this post, we’ll describe a solution for mana…
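The allocation problem described above — one fixed budget, several competing groups — can be sketched in a few lines. This is purely illustrative arithmetic, not the solution the post describes: the group names and weights are made up.

```python
# Illustrative only: proportionally split a fixed annual HPC budget
# across groups. Group names and weights are hypothetical.

def split_budget(total: float, weights: dict[str, float]) -> dict[str, float]:
    """Allocate `total` across groups in proportion to their weights."""
    denom = sum(weights.values())
    return {group: total * w / denom for group, w in weights.items()}

allocations = split_budget(1_000_000, {"cfd-team": 3, "genomics": 2, "ml-research": 5})
print(allocations)
# → {'cfd-team': 300000.0, 'genomics': 200000.0, 'ml-research': 500000.0}
```

A static split like this is exactly what fluctuating demand breaks, which is why a managed enforcement mechanism (rather than a one-time calculation) is needed.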