NVIDIA DGX A100 STATION 40 GB

Mfg.Part: DGXS-2040D+P2CMI00 | CDW Part: 6338635
Availability: 8-10 Days
Product Details

Main Features
  • Server
  • Rack-mountable, 6U
  • 2 x AMD EPYC 7742 / 2.25 GHz
  • 256 GB RAM
  • 7.68 TB NVMe SSD
  • 1.92 TB SSD
  • 4 x NVIDIA A100 Tensor Core GPU
  • Gigabit Ethernet
  • 10 Gigabit Ethernet
  • 25 Gigabit Ethernet
  • 50 Gigabit Ethernet
  • 100 Gigabit Ethernet
  • 200 Gigabit Ethernet
  • 200 Gigabit InfiniBand
  • Ubuntu
  • Monitor: none
  • 10000 TFLOPS
Every business needs to transform using artificial intelligence (AI), not only to survive but to thrive in challenging times. The enterprise, however, requires an AI infrastructure platform that improves on traditional approaches, which historically relied on slow compute architectures siloed by analytics, training, and inference workloads. NVIDIA DGX A100 is the universal system for all AI workloads, from analytics to training to inference. DGX A100 sets a new bar, packing 5 petaFLOPS of AI performance into a 6U form factor and replacing legacy compute infrastructure with a single, unified system.

DGX A100 also delivers fine-grained allocation of computing power through the Multi-Instance GPU (MIG) capability of the NVIDIA A100 Tensor Core GPU, which lets administrators assign right-sized resources to specific workloads. This ensures support for the largest and most complex jobs alongside the simplest and smallest.

Running the DGX software stack with optimized software from NGC, the combination of dense compute power and complete workload flexibility makes DGX A100 an ideal choice for single-node deployments as well as large-scale Slurm and Kubernetes clusters deployed with NVIDIA DeepOps.
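As a rough sketch of how the MIG partitioning described above works in practice, the standard nvidia-smi MIG commands can carve an A100 into right-sized GPU instances. These commands must run on the DGX itself with administrative rights, and the profile ID used here (19 for the 1g.5gb profile) is an assumption that should be confirmed against the profile listing on the installed driver:

```shell
# Enable MIG mode on GPU 0 (admin required; a GPU reset may be needed
# before the change takes effect)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this A100 supports, with their IDs
nvidia-smi mig -lgip

# Create two GPU instances from the 1g.5gb profile (ID 19 on many
# A100 40 GB parts -- confirm with the listing above) and matching
# compute instances in one step via -C
sudo nvidia-smi mig -i 0 -cgi 19,19 -C

# Verify the resulting GPU instances
nvidia-smi mig -lgi
```

Each resulting instance appears to CUDA workloads as an isolated GPU with its own memory and compute slice, which is how administrators match capacity to analytics, training, or inference jobs.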