October 20, 2021


How High-Performance Computing and the Hybrid Cloud Support Scientific Research

Moving HPC workloads out of the data center and into the cloud extends valuable resources.

Ken Cameron

The high-performance computing that drives much of our scientific and medical research may lack the high profile of consumer and enterprise technologies. Behind the scenes, however, researchers use HPC to pursue essential knowledge about people, our planet and the universe itself.  

In fact, there are probably more than 30,000 use cases for HPC, which relies on graphics processing units designed to perform calculations on an application or data suite across a very large computing cluster. (Learn more about the basics of HPC.) For research interests ranging from bioinformatics to black holes, the scientific community uses these powerful platforms to perform quantitative analyses with complex algorithms. Increasingly, researchers are adopting hybrid cloud environments to speed up, scale and optimize this work.

Cloud-Based Computations Speed Up Essential Medical Research

Medical research is one of the most active areas for HPC use. It’s also incredibly computationally intensive. In cancer research, for example, a DNA sequencer may take roughly 40 to 50 hours to analyze a single blood sample. Moving the same sequencing activity to the cloud can reduce that time by approximately 75 percent, delivering results faster to drive scientific advances.

For the institutions that perform this work, moving to a hybrid cloud delivers all the benefits that other organizations derive from the cloud. These include reduced costs from shrinking the on-premises data center and greater agility to meet intense spikes in demand for resources without having to pay for unused equipment.

Astronomical research, such as efforts to identify black holes, is another major HPC application. Scientists use powerful telescopes located all over the world to take pictures of objects in space and analyze them to determine where black holes may be located. That information helps to improve our understanding of the wider universe.

On-Demand Resources Align with HPC’s Variable Workloads

Data analytics and machine learning have already transformed HPC. The cloud is poised to be similarly transformative. With institutions often reliant on funding from the National Science Foundation and other organizations, efficient use of IT resources is essential. Automated functions in the cloud can reduce the human effort that research requires while increasing its pace.

Moving to a hybrid cloud environment can also extend research funds. Limiting the amount that institutions must spend on on-premises equipment gives them more freedom to invest in the agility of the cloud. The ability to quickly scale compute and storage resources up and down is especially useful for researchers, who often have inconsistent workloads. A medical research project might require 20 sequencers for 20 researchers for a short period, for instance, but that team might dwindle to two researchers for months.

Many institutions have just begun their move to an HPC hybrid cloud environment. They find that the resources and agility of the cloud align extremely well with the variable nature of research work, making it possible to get the most bang for the buck from research funding.

Story by Ken Cameron, a Data Center Architect for CDW.
