HPE Apollo 4510 Gen9 - rack-mountable - no CPU - 0 GB - 0 GB

Mfg.Part: 799581-B21 | CDW Part: 3817084 | UNSPSC: 43211501
Availability: Call for availability
$9,157.99 Advertised Price
Lease Option ($275.29/month)
Note: Leasing is available to businesses only, not to individuals.

Product Overview

Main Features
  • Server
  • rack-mountable
  • 4U
  • 2-way
  • RAM 0 GB
  • SAS
  • hot-swap 2.5"
  • 3.5"
  • no HDD
  • Matrox G200
  • GigE
  • 10 GigE
  • monitor: none
  • CTO
Is your business prepared to store and analyze business-critical data at scale? With the HPE Apollo 4500 Systems, HPE challenges the notion that one size fits all for Big Data infrastructure by creating purpose-built systems that specifically address storage and analytics workloads. For object storage, the ultra-dense HPE Apollo 4510 includes one server and up to 68 LFF drives in a 4U chassis, for a maximum of 544 TB per system. For clustered storage environments, the HPE Apollo 4520 offers two servers with built-in failover capability. For Hadoop and other Big Data solutions, the HPE Apollo 4530 uniquely offers three servers per chassis, ideal for housing three copies of data in a single system. The HPE Apollo 4500 series allows you to realize all the value of your data at the right cost and in the least amount of space. And with HPE software tools to help you deploy, operate, and optimize your valuable data center resources, you can grow your data with confidence at any scale.
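
As a quick sanity check on the 544 TB maximum quoted above, the back-of-envelope math is shown below (an editor's sketch: the 8 TB per-drive figure is an assumption inferred from 544 TB spread across 68 bays, not a statement of which drive SKUs are supported).

```python
# Editor's sketch: capacity math behind the "544 TB per system" claim.
# The 8 TB per-drive capacity is an assumption inferred from the quoted
# maximum (544 TB / 68 bays); check HPE's QuickSpecs for supported drives.
DRIVE_BAYS = 68      # LFF drive bays in the 4U Apollo 4510 chassis
DRIVE_TB = 8         # assumed capacity per LFF drive, in TB

max_capacity_tb = DRIVE_BAYS * DRIVE_TB
print(f"Max raw capacity: {max_capacity_tb} TB per system")              # 544 TB
print(f"Storage density:  {max_capacity_tb / 4:.0f} TB per rack unit")   # 136 TB/U
```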

HPE Apollo 4510 Gen9 - rack-mountable - no CPU - 0 GB - 0 GB is rated 4.0 out of 5 by 2 reviewers.
Rated 4 out of 5: Enables us to do the world's leading superhuman AI research.
Valuable Features:
It's very hard for a professor to amass supercomputing resources, so I've been very fortunate to have that level of supercomputing at our disposal, and that has really enabled us to do the world's leading superhuman AI research. That is what we did: we actually beat the best heads-up no-limit Texas Hold'em human players in the world this January. So we're at a superhuman level in strategic reasoning.
Improvements to My Organization:
We have been working with the Pittsburgh Supercomputing Center for around ten years. They pick the hardware, and they had picked this hybrid system; it has several different kinds of components. We had worked with them for a long time and knew that they were picking state-of-the-art equipment, so that's why we selected this solution.
Room for Improvement:
One thing that we are looking for is better stability of the Lustre file system; it could be improved. I have heard that they are coming out with better memory bandwidth, so that's good; maybe it's already there in System 10. Beyond that, of course, there is always a need for more CPUs, more storage, and all of that.
Use of Solution:
I've been working with the Pittsburgh Supercomputing Center for about ten years on their various supercomputers, and with Bridges, their newest system, built by HPE, from the very outset. We were one of the beta customers, i.e., testing it as they were building it, and we've been using it as a regular customer after that. I don't remember exactly, but I would say that we have been using it for about two and a half years.
Stability Issues:
It has been fairly reliable. In the beginning, of course, it was not, but we were a beta customer, so at the start there was literally nothing in the racks. We've been with it from the beginning, and of course it was less stable then; however, it became more stable over time. If there's anything that hasn't been that stable, it is the Lustre file system. They have made some improvements with that, but this is not just a problem with Bridges. We have computed at other supercomputing centers, like the San Diego Supercomputer Center, in the past as well, and Lustre seems to be just a little bit unstable overall.
Scalability Issues:
It's going to meet our needs moving forward; it is scalable. Having said that, our algorithms are very compute-hungry and storage-hungry, so more is more, and there's no limit to how much our algorithms can use. The more compute and storage they have, the better they will perform.
Technical Support:
I would praise the Pittsburgh Supercomputing Center (PSC) support; their support has been awesome. We don't directly contact HPE; they contact HPE if needed.
Initial Setup:
The PSC installed everything, i.e., both hardware and software, so we didn't do any of that. From our perspective, it has been easy to use.
Other Advice:
When looking for a vendor, we do not look at the brand name at all; what we look for is just reliability and raw horsepower. It has been great. The Pittsburgh Supercomputing Center folks have supported us very quickly, sometimes even at night or on weekends. I've been very fortunate as a professor to get this level of supercomputing, so we've been able to do the world's leading research in this area. The only things that I would improve are the ones I mentioned before, i.e., the Lustre file system and maybe the memory access from the CPU.
Disclaimer: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Date published: 2017-07-13
Rated 4 out of 5: It allows us to use as few nodes as possible for storing log-file data so that we have as much disk space capacity as possible.
Valuable Features:
Apollo's most valuable features for us are its density and storage capabilities.
Improvements to My Organization:
We're trying to keep all log files in our Hadoop cluster, which amounts to several terabytes a day of log data that we need to analyze. Apollo allows us to use as few nodes as possible for this, so that we have as much disk space capacity as possible. It gives us much more space per server.
Room for Improvement:
It's a very good system when you need a lot of disk capacity. But it's unclear whether the IO performance will be sufficient when you calculate the theoretical amount of time needed to read all the disk space. If the workload is not purely sequential, IO performance is less than optimal, because the system is optimized for streaming processing.
Deployment Issues:
We have had no issues with deployment.
Stability Issues:
We installed it about a week ago, and it's been running without problems.
Scalability Issues:
We probably have some 6,000 or 7,000 physical cores already and are planning more.
Technical Support:
We have technical account managers who work with us. It's pretty much a direct line to HP without having to dial the general support number.
Previous Solutions:
We previously used DL380s. Compared to those, Apollo has roughly four times the amount of space per server, which means we can really do a lot. We technically could have used four DL380s, but the licensing cost would have been significantly higher.
Initial Setup:
The initial setup was straightforward, and we've been happy with it.
Disclaimer: I am a real user, and this review is based on my own experience and opinions.
Date published: 2016-01-11
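
The second review raises the question of how long it would theoretically take to read a fully populated chassis. A minimal sketch of that calculation follows (an editor's illustration: the 200 MB/s per-drive sequential throughput and the fully populated 544 TB configuration are assumptions, not measured figures).

```python
# Editor's illustration of the reviewer's IO concern: even at full
# sequential throughput, reading an entire dense chassis takes hours,
# and non-sequential workloads would be far slower. All figures are
# assumptions chosen for the sketch, not measurements.
DRIVES = 68                # LFF bays, fully populated
CAPACITY_TB = 544          # maximum raw capacity quoted in the overview
SEQ_MBPS_PER_DRIVE = 200   # assumed sequential throughput per HDD, MB/s

total_mbps = DRIVES * SEQ_MBPS_PER_DRIVE   # aggregate throughput, MB/s
total_mb = CAPACITY_TB * 1_000_000         # TB -> MB (decimal units)
hours = total_mb / total_mbps / 3600

print(f"Aggregate sequential throughput: {total_mbps / 1000:.1f} GB/s")
print(f"Time to read all {CAPACITY_TB} TB once: {hours:.1f} hours")
# Roughly 11 hours purely sequential; random IO patterns would multiply this.
```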