
NVIDIA Tesla K40 - GPU computing processor - Tesla K40

Mfg # UCSC-GPU-K40= CDW # 3859610

Quick tech specs

  • GPU computing processor
  • for UCS C240 M4
  • VDI C240 M4
  • Tesla K40
  • Smart Play 8 C240

Know your gear

Equipped with plenty of memory, the Tesla K40 GPU accelerator is ideal for demanding HPC and big-data problem sets. It outperforms CPUs and provides Tesla GPU Boost, a feature that converts power headroom into a user-controlled performance boost.
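On the Tesla K40, GPU Boost is exposed through user-settable application clocks rather than automatic boosting, and is typically controlled with nvidia-smi. A minimal sketch (the 3004,875 MHz pair is the K40's commonly documented top clock combination; verify against the SUPPORTED_CLOCKS query on your own card before applying, and note that setting clocks generally requires root):

```shell
# List the clock combinations this board supports
nvidia-smi -q -d SUPPORTED_CLOCKS

# Show the application clocks currently in effect
nvidia-smi -q -d CLOCK

# Raise application clocks (memory,graphics in MHz) to use the power headroom
nvidia-smi -ac 3004,875

# Restore the default application clocks
nvidia-smi -rac
```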

This item was discontinued on October 06, 2022


NVIDIA Tesla K40 - GPU computing processor - Tesla K40 is rated 4.50 out of 5 by 2 reviewers.
Rated 5 out of 5: Simplifies our processes and helps us handle complex computations effectively

What is our primary use case?
We depend heavily on NVIDIA Tesla GPUs; they are a vital part of our daily operations. We use them for various important tasks at my organization, which is both a research and an education institute. These GPUs are essential for our work in AI for health, genomics, and bioinformatics. From analyzing genomic data to driving progress in AI and machine learning, Tesla GPUs play a key role in our research and education efforts.

How has it helped my organization?
Working with NVIDIA has been incredibly beneficial for us. The biggest benefits are its powerful GPU performance, reliable hardware, and excellent software ecosystem. Having everything from one source makes our work smoother. The NGC support with pre-built containers saves us time and effort, allowing quick adoption without extensive testing. Unlike other GPU vendors, NVIDIA's solutions work seamlessly out of the box.

What is most valuable?
The most valuable aspects of Tesla are its CUDA software framework, which boosts our computing capabilities, and NVIDIA's NGC cloud support. The pre-built containers they offer, especially for tasks like potential flow simulations, are a big time-saver. These features make Tesla GPUs essential for our work in AI, genomics, and bioinformatics, simplifying our processes and helping us handle complex computations effectively.

What needs improvement?
While the current Tesla setup meets our needs well, it would be beneficial to see broader application support and compatibility with different workloads. The existing configuration handles our current use cases well, but expanding its capabilities to accommodate a wider range of applications would be a great improvement.

For how long have I used the solution?
I have been working with NVIDIA Tesla for five years.

What do I think about the stability of the solution?
We haven't experienced any stability issues with Tesla. It has been very stable for us.

What do I think about the scalability of the solution?
The solution is scalable, especially with products like the NVIDIA DGX, which is designed for scalability. In our organization, we have over a thousand HPC users, with around 300 to 400 specifically using Tesla for their high-performance computing needs.

How are customer service and support?
The tech support for NVIDIA is excellent. They are very responsive and reliable. I would rate the support at an eight out of ten because we haven't encountered many issues and didn't have to reach out to them many times.

How would you rate customer service and support?
Positive

How was the initial setup?
The initial setup process for Tesla was straightforward for me since I had prior experience with the product. Setting it up requires two people. The deployment process involves a two-step approach: hardware deployment and software deployment. After that, we use Ansible for automatic software installation. This includes getting the operating system in place using Foreman and adding necessary components like the NVIDIA CUDA drivers. The deployment time varies with the number of servers, but for around ten servers it typically takes about two hours; we deploy them in parallel to streamline the process. Maintaining Tesla involves routine tasks like updating drivers and addressing security issues. We handle this by taking about 10% of our servers offline at a time, using a slow scheduler to ensure a controlled process.

What's my experience with pricing, setup cost, and licensing?
The majority of our Tesla GPUs operate on bare-metal servers without additional licensing costs.

Which other solutions did I evaluate?
We considered other options before going with NVIDIA. Our focus is on what our users are comfortable with, and currently NVIDIA is widely preferred by them. While we might explore other options like Xilinx FPGA cards or AMD GPUs in the future, our decision is mainly driven by meeting our users' current preferences.

What other advice do I have?
My advice for those considering Tesla is that if you can afford it, go for it. The ecosystem is robust and it is a worthwhile investment. Overall, I would rate NVIDIA Tesla a nine out of ten.

Which deployment model are you using for this solution?
On-premises

Disclaimer: I am a real user, and this review is based on my own experience and opinions.
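The driver-rollout step the reviewer automates with Ansible can be sketched as the shell commands such a playbook would run on each node. This is purely illustrative, not the reviewer's actual tooling: it assumes a RHEL 7-family image, and the repository URL and package names follow NVIDIA's published CUDA network-repo layout, which should be checked against current NVIDIA documentation:

```shell
# Add NVIDIA's CUDA network repository (RHEL 7 layout shown; adjust for your distro)
sudo yum-config-manager --add-repo \
    https://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/cuda-rhel7.repo

# Install the NVIDIA driver (DKMS variant) and the CUDA toolkit
sudo yum install -y nvidia-driver-latest-dkms cuda

# Verify the driver can enumerate the Tesla card
nvidia-smi -L
```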
Date published: 2023-11-21T00:00:00-05:00
Rated 5 out of 5: A robust choice for demanding applications in artificial intelligence, scientific computing, and data-intensive tasks, thanks to high performance and large memory capacity

What is our primary use case?
Our team actively engages in computer vision, data science, and image recognition. Our primary focus lies in harnessing artificial intelligence, particularly in applied mathematics. On one front, our efforts are dedicated to producing AI-driven outcomes in the field of sound. At the same time, we have experts conducting biological experiments and research with AI applications.

How has it helped my organization?
The ease of use is a significant advantage. With a wealth of internet articles and readily available knowledge, there are extensive resources on how to seamlessly integrate various AI and machine learning processes. This accessibility allows our students to quickly and effectively utilize these tools.

What is most valuable?
The cost advantage is paramount. Specifically, the expenses associated with on-premises infrastructure are about five times lower than those incurred in the cloud.

What needs improvement?
I believe there should be an effort to lower costs, especially considering the higher price of the latest update. The focus should be on fostering interactive learning experiences by offering internet or YouTube workshops and providing educational materials that would simplify the learning curve for students.

For how long have I used the solution?
I have been working with it for almost five years.

What do I think about the stability of the solution?
When comparing NVIDIA and HPE, it is evident that NVIDIA is more stable. HPE, in particular, experiences numerous stability issues with its hub.

What do I think about the scalability of the solution?
Scalability remains consistent because both utilize a common approach; whether scaling out NVIDIA or HPE, the process is the same. We have around two hundred monitors.

How are customer service and support?
NVIDIA support outperforms HPE, although it comes at a higher cost.

Which solution did I use previously and why did I switch?
In terms of price, NVIDIA is approximately twice as expensive as HPE. However, HPE's drawbacks include issues with technical support and a less mature ecosystem. NVIDIA, on the other hand, has invested significant effort in building a robust AI and machine learning ecosystem and community.

How was the initial setup?
The initial setup was highly complex, prompting our reliance on partners, hardware vendors, and integrators to ensure a well-designed and properly deployed system. It is not a straightforward process; considerable energy and effort are required to establish a fully functional AI infrastructure.

What about the implementation team?
The deployment process begins with a thorough understanding of the business case and the requirements set by our scientists. Translating these business needs into system requirements involves careful selection of interconnects, storage providers, and server providers. Choosing partners to integrate these components effectively is crucial to assembling a functional whole. We then define acceptance criteria, including specific test cases and stress tests, to ensure the seamless operation of all components. Real-life user cases are then introduced to evaluate performance, comparing results with other systems to validate the efficacy of our deployment. For a single lab deployment, this process typically takes around a month; for larger institutional or departmental deployments, involving about eight to ten team members, the timeframe extends to approximately six months. Maintaining such infrastructure is a complex task that demands skilled professionals with significant expertise in AI infrastructure and machine learning.

What was our ROI?
Regarding ROI, our focus is not solely on monetary gains, as we operate as an academic institution. Instead, we gauge the success of our investments by the highly qualified academic articles we publish in prestigious journals such as Nature, Science, and Life Science. This serves as our metric of success.

What's my experience with pricing, setup cost, and licensing?
Generally, the price is affordable, but the most recent update comes with a notable increase in cost.

What other advice do I have?
I advise those entering the AI field to be cautious and to seek guidance from experienced professionals. It is crucial to approach such decisions with a thorough understanding of business goals and organizational objectives. Overall, I would rate it eight out of ten.

Which deployment model are you using for this solution?
On-premises

Disclaimer: My company has a business relationship with this vendor other than being a customer: Integrator
Date published: 2023-11-22T00:00:00-05:00