HPE Apollo r2600 - rack-mountable - 2U - up to 4 blades
Hot-plug 800-watt power supply
The HPE Apollo r2600 System features 24 hot-swappable bays. It comes in a rack-mountable form factor.
HPE Apollo r2600 - rack-mountable - 2U - up to 4 blades is rated 4.0 out of 5 by 2.
Rated 4 out of 5 by Dr Tuomas Sandholm from Enables us to do the world's leading superhuman AI research.

Valuable Features: It's very hard for a professor to amass supercomputing resources, so I've been very fortunate to have that level of supercomputing at our disposal, and that has really enabled us to do the world's leading superhuman AI research. This January we actually beat the best heads-up no-limit Texas hold'em human players in the world, so we're at a superhuman level in strategic reasoning.

Improvements to My Organization: We have been working with the Pittsburgh Supercomputing Center for around ten years. They pick the hardware, and they picked this hybrid system, which has several different kinds of components. We had worked with them for a long time and knew that they were picking state-of-the-art equipment, so that's why we selected this solution.

Room for Improvement: One thing we are looking for is better stability of the Lustre file system; it could be improved. I have heard that they are coming out with better memory bandwidth, so that's good, or maybe it's already there in System 10. Beyond that, of course, there is a need for more CPUs, more storage, and so on.

Use of Solution: I've been working with the Pittsburgh Supercomputing Center for about ten years on their various supercomputers, and with Bridges, their newest supercomputer, built by HPE, from the very outset. We were one of the beta customers, testing it as they were building it, and we've continued using it as regular customers after that. I don't remember exactly, but I would say we have been using it for about two and a half years.

Stability Issues: It has been fairly reliable. In the beginning, of course, it was not, but we were a beta customer, so at first there was literally nothing in the racks. We've been with it from the beginning, and of course it was less stable early on; it became more stable over time. If anything hasn't been that stable, it's the Lustre file system. They have made some improvements there, but this is not just a problem with Bridges. We have computed at other supercomputing centers, such as the San Diego Supercomputer Center, in the past as well, and Lustre seems to be a little bit unstable overall.

Scalability Issues: It is scalable and is going to meet our needs moving forward. Having said that, our algorithms are very compute-hungry and storage-hungry, so more is more; there's no limit to how much our algorithms can use. The more compute and storage they have, the better they perform.

Technical Support: I would praise the Pittsburgh Supercomputing Center (PSC) support; they gave us the support, and their support has been awesome. We don't contact HPE directly; they contact HPE if needed.

Initial Setup: The PSC installed everything, both hardware and software, so we didn't do any of that. From our perspective, it has been easy to use.

Other Advice: When looking for a vendor, we do not look at the brand name at all; what we look for is just reliability and raw horsepower. It has been great. The Pittsburgh Supercomputing Center people have been great in supporting us very quickly, sometimes even at night or on weekends. I've been very fortunate as a professor to get this level of supercomputing, so we've been able to do the world's leading research in this area. The only things I would improve are the ones I mentioned before: the Lustre file system and maybe the memory access from the CPU.

Disclaimer: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Date published: 2017-07-13
Rated 4 out of 5 by Michael Ehrig from It allows us to use as few nodes as possible for storing log-file data so that we have as much disk space capacity as possible.

Valuable Features: Apollo's most valuable features for us are its density and storage capabilities.

Improvements to My Organization: We're trying to keep all log files in our Hadoop cluster, which amounts to several terabytes a day of log data that we need to analyze. Apollo allows us to use as few nodes as possible for this, so that we have as much disk space capacity as possible. It gives us much more space per server.

Room for Improvement: It's a very good system when you need a lot of disk capacity, but it's unclear whether the I/O performance will be sufficient when you calculate the theoretical amount of time needed to read all the disk space. If the workload is not purely sequential, I/O performance is less than optimal, because the system is optimized for streaming processing.

Deployment Issues: We have had no issues with deployment.

Stability Issues: We put it in place about a week ago, and it's been running without problems.

Scalability Issues: We probably have some 6,000 or 7,000 physical cores already and are planning more.

Technical Support: We have technical account managers who work with us. It's pretty much a direct line to HP without having to dial the general support number.

Previous Solutions: We previously used DL380s. Compared to those, Apollo has roughly four times the amount of space per server, which means we can really do a lot. We technically could have used four DL380s instead, but the licensing cost would have been significantly more.

Initial Setup: The initial setup was straightforward, and we've been happy with it.

Disclaimer: I am a real user, and this review is based on my own experience and opinions.