
October 12, 2021

What is High Performance Computing (HPC)?

High performance computing allows organizations to handle and analyze massive amounts of data and answer some of the world’s biggest questions.

What is HPC?

With the right IT infrastructure in place, high performance computing can help you analyze massive amounts of data across an HPC cluster – a connected network of supercomputers and HPC servers. Each computer in the cluster contributes the power of its graphics processing units (GPUs), so the cluster as a whole can perform complex data sequencing and calculations that no single machine could handle.

What is an HPC cluster?

HPC clusters are necessary because some problems are too complex for one computer to solve on its own. DNA sequencing, for example, can demand trillions of calculations per second, while the average computer can perform only a few billion. Building an HPC cluster lets organizations dramatically cut the time it takes to sequence this data.
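The divide-and-conquer idea behind a cluster can be sketched in miniature with worker processes on a single machine. This is a simplified illustration using Python's standard library – a real sequencing pipeline would distribute work across cluster nodes rather than local processes, and the GC-counting task here is just a stand-in for heavier per-chunk computation:

```python
from multiprocessing import Pool

def count_gc(chunk: str) -> int:
    # Each worker handles one fragment of the sequence,
    # the way each cluster node handles one slice of a problem.
    return sum(1 for base in chunk if base in "GC")

def parallel_gc(sequence: str, n_workers: int = 4) -> int:
    # Scatter: split the sequence into roughly equal chunks.
    size = -(-len(sequence) // n_workers)  # ceiling division, so no bases are dropped
    chunks = [sequence[i:i + size] for i in range(0, len(sequence), size)]
    with Pool(len(chunks)) as pool:
        partial = pool.map(count_gc, chunks)
    # Gather: combine the partial results into one answer.
    return sum(partial)

if __name__ == "__main__":
    sequence = "ATGCGCGTATGCCGTA" * 1_000
    print(parallel_gc(sequence))  # → 9000 (9 G/C bases per 16-base repeat)
```

On a real cluster, this same scatter/gather pattern is typically expressed through a job scheduler such as Slurm or a message-passing library such as MPI, but the principle – split the data, compute in parallel, combine the results – is the same.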

Who uses high performance computing?

High performance computing has been used in academic research for years. There are over 30,000 unique use cases for HPC solutions, with more being discovered every day. High performance computing is enabling groundbreaking scientific and medical studies and powering new technologies.

Just a few examples of how HPC is used in different industries include:

  • Bioinformatics
  • Quantitative analysis for medical research
  • Modeling of weather events or natural disasters
  • Space research, such as finding the location of black holes
  • AI research, such as machine learning or neural networks
  • Graphical rendering in media
  • Detecting credit card fraud
  • Analytics and creating complex algorithms

Different organizations use data sets in different ways. One of the great things about HPC is that it allows researchers to coordinate with each other, process data from multiple angles, and solve problems together. As a result, many HPC applications are built as open source programs so that multiple organizations can learn from each other.

How does HPC work in a data center?

To perform HPC, an on-premises data center needs computers and servers capable of handling enormous volumes of data, along with a networking solution between them that is fast and practically fail-safe. An HPC cluster can run quadrillions of calculations per second, so the data center infrastructure needs to be able to sequence and analyze data almost instantaneously.

Once the data is sequenced, it needs specialized HPC storage to capture the output. The transfer from server to storage must happen quickly to keep pace with the rate at which the HPC cluster processes data. An IT infrastructure that can handle HPC is expensive, but costs can be mitigated by moving data workloads to the cloud. For the problems HPC can help the world solve, though, the costs are well worth it.

How is cloud computing used in HPC?

The on-premises equipment needed to run HPC can be cost-prohibitive for most organizations. Often there is not enough physical space for a data center of the necessary size, and few staff who can keep the data center up and running – and even fewer who know how to report on the data it generates.

Fortunately, there are ways to limit your data center footprint if you want to use HPC. A hybrid cloud infrastructure can help mitigate some of these costs. Moving computational workloads to the cloud reduces the amount of HPC equipment you need on-site for calculations. You'll still have a core set of HPC servers on-prem, but a cloud infrastructure lets you decide how much data to process at once. That flexibility also extends to how frequently you sequence data – valuable for organizations whose research or project funding changes often.

Most cloud platforms have automated functions that can generate data reports for you. AWS HPC and Azure HPC both have extensive offerings in the space, for example. But when an out-of-the-box cloud solution won’t work for your project, cloud HPC also allows for the use of custom APIs that can help you sequence data however you need it, automatically.

In short, cloud HPC is often the most cost-effective way to perform HPC while getting the most value out of the data that is generated.

What are HPC services?

High performance computing is a large undertaking. An HPC service partner can help you get up and running, or help you optimize your HPC cluster for reduced computational time. Other HPC services might include:

  • Professional Cloud Services: From establishing a hybrid infrastructure to assisting with moving your computational analysis workloads to the cloud, HPC cloud service providers help you reduce your data center footprint.
  • Custom APIs: Your data and how you want to use it is unique. Experienced developers can write custom API code that allows you to sequence and analyze data and customize reporting to suit the specifications of your research or project.
  • Containerization: Deep learning workloads increasingly require containerization to stay manageable. Development services can help you customize your containers for Kubernetes, breaking your HPC workloads into smaller pieces and working with APIs to determine which workloads move to the cloud at any given moment.
  • Support Services: Whether your HPC equipment requires maintenance or you need help keeping your hybrid infrastructure up and running, HPC support services minimize downtime and mitigate costly data transfer errors.

CDW and HPC

HPC is laying the groundwork for the future of technology and advancing nearly all areas of study. If your organization is looking to take advantage of the power of high performance computing, CDW can help you prepare your data center or build a custom HPC infrastructure from the ground up – or up to the cloud.