NVIDIA Quadro GV100 - graphics card - Quadro GV100 - 32 GB

Mfg.Part: VCQGV100-PB | CDW Part: 5043331 | UNSPSC: 43201401
  • Graphics card
  • Quadro GV100
  • 32 GB HBM2
  • PCIe 3.0 x16
  • 4 x DisplayPort

This item was discontinued on May 14, 2018


Product Overview

AI, photorealistic rendering, simulation, and VR are transforming professional workflows. Engineers can now create groundbreaking products faster. Architects can design buildings that could only have existed in their imaginations. And artists can render complex photorealistic scenes in seconds instead of hours. As applications continue to be enhanced with these technologies, professional computing tools need to keep pace.

The NVIDIA Quadro GV100 is reinventing the workstation to meet the demands of these next-generation workflows. It's powered by NVIDIA Volta, delivering the extreme memory capacity, scalability, and performance that designers, architects, and scientists need to create, build, and solve the impossible.

Based on a state-of-the-art 12nm FFN high-performance manufacturing process customized for NVIDIA to incorporate 5120 CUDA cores, the Quadro GV100 GPU is the most powerful computing platform for HPC, AI, VR, and graphics workloads on professional desktops. Able to deliver more than 7.4 TFLOPS of double-precision (FP64), 14.8 TFLOPS of single-precision (FP32), 29.6 TFLOPS of half-precision (FP16), 59.3 TOPS of integer-precision (INT8), and 118.5 TFLOPS of tensor operation capability, it supports a wide range of compute-intensive workloads.
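The quoted FP64/FP32/FP16/INT8 figures all follow from the CUDA core count and the GPU clock. A minimal sketch of that arithmetic, assuming a boost clock of roughly 1.45 GHz (the clock rate is not stated in this listing) and the standard GV100 rate ratios (FP64 at half the FP32 rate, FP16 at twice, INT8 at four times):

```python
# Derive the quoted peak-throughput figures from the core count.
# The boost clock below is an ASSUMPTION; this listing does not state it.
CUDA_CORES = 5120
BOOST_CLOCK_GHZ = 1.447  # assumed boost clock

# Each CUDA core can retire one fused multiply-add (2 FLOPs) per clock in FP32.
fp32_tflops = CUDA_CORES * 2 * BOOST_CLOCK_GHZ / 1e3
fp64_tflops = fp32_tflops / 2   # FP64 runs at half the FP32 rate
fp16_tflops = fp32_tflops * 2   # FP16 runs at twice the FP32 rate
int8_tops = fp32_tflops * 4     # INT8 runs at four times the FP32 rate

print(f"FP32: {fp32_tflops:.1f} TFLOPS")  # ≈ 14.8
print(f"FP64: {fp64_tflops:.1f} TFLOPS")  # ≈ 7.4
print(f"FP16: {fp16_tflops:.1f} TFLOPS")  # ≈ 29.6
print(f"INT8: {int8_tops:.1f} TOPS")      # ≈ 59.3
```

With these assumptions the four non-tensor figures in the paragraph above reproduce to one decimal place.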

New mixed-precision Tensor Cores, purpose-built for deep learning matrix arithmetic, deliver an 8x boost in TFLOPS performance for training compared to the previous generation. Quadro GV100 utilizes 640 Tensor Cores; each Tensor Core performs 64 floating point fused multiply-add (FMA) operations per clock, and each SM performs a total of 1024 individual floating point operations per clock.
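The 118.5 TFLOPS tensor figure follows directly from the per-core FMA rate quoted above. A short sketch, again assuming a boost clock of roughly 1.45 GHz (not stated in this listing):

```python
# Derive the tensor-operation figure from the per-core FMA rate.
# The boost clock is an ASSUMPTION; this listing does not state it.
TENSOR_CORES = 640
FMA_PER_CORE_PER_CLOCK = 64  # from the paragraph above
FLOPS_PER_FMA = 2            # a fused multiply-add counts as 2 FLOPs
BOOST_CLOCK_GHZ = 1.447      # assumed boost clock

tensor_tflops = (TENSOR_CORES * FMA_PER_CORE_PER_CLOCK
                 * FLOPS_PER_FMA * BOOST_CLOCK_GHZ / 1e3)
print(f"Tensor: {tensor_tflops:.1f} TFLOPS")  # ≈ 118.5

# The per-SM figure is consistent too: with 8 Tensor Cores per SM,
# 8 * 64 FMA * 2 FLOPs = 1024 floating point operations per SM per clock.
per_sm_flops = 8 * FMA_PER_CORE_PER_CLOCK * FLOPS_PER_FMA
print(f"Per-SM: {per_sm_flops} FLOPs/clock")  # 1024
```

This is a back-of-the-envelope consistency check under the stated assumptions, not a benchmark.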