
HPE Apollo r2800 with Expander - storage enclosure

Mfg. Part: 798154-B21 | CDW Part: 3835761 | UNSPSC: 43201616
$1,949.99 Advertised Price
Lease Option: $57.52/month

Have leasing questions? Let us know how we can help.

Note: Leasing is available to businesses only. Leasing is not available to individuals.
800.800.4239
Mon-Fri 7am-7:30pm CT
Availability: 3-5 days
Orders placed today will ship within 5 days


Product Overview
Main Features
  • Storage enclosure
  • 24 bays (SATA-300 / SAS)
  • CTO
The Apollo 2000 System is the enterprise bridge to scale-out architecture for traditional data centers, delivering the space and cost savings of density-optimized infrastructure in a non-disruptive manner. It is a dense, multi-server platform that packs substantial performance and workload capability into a small amount of data center space while delivering the efficiencies of a shared infrastructure.

The Apollo 2000 System offers the configuration flexibility to support a variety of workloads, from remote-site systems to large HPC clusters and everything in between, and it can be deployed cost-effectively, starting with a single 2U shared-infrastructure chassis, to meet the configuration needs of a wide variety of scale-out workloads.

The Apollo 2000 System is a density-optimized, 2U shared-infrastructure chassis for up to four independent, hot-plug ProLiant Gen9 servers, with all the traditional data center attributes: standard racks and cabling and rear-aisle serviceability access. A 42U rack fits up to 20 Apollo r2000 series chassis, accommodating up to 80 servers per rack.
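For illustration only, here is a quick back-of-the-envelope sketch (in Python; not vendor-supplied) of the density arithmetic quoted above. The 2U chassis height, four nodes per chassis, and 20 chassis per 42U rack come from this overview; the assumption that the remaining 2U is left for switching or PDUs is ours:

  # Rack-density check using the figures from the product overview.
  CHASSIS_HEIGHT_U = 2     # each Apollo r2000 chassis is 2U
  NODES_PER_CHASSIS = 4    # up to 4 ProLiant Gen9 servers per chassis
  RACK_HEIGHT_U = 42
  CHASSIS_PER_RACK = 20    # per the overview; leftover 2U assumed for switching

  rack_u_used = CHASSIS_PER_RACK * CHASSIS_HEIGHT_U        # 40U of 42U
  servers_per_rack = CHASSIS_PER_RACK * NODES_PER_CHASSIS  # 80 servers

  print(f"{rack_u_used}U used, {RACK_HEIGHT_U - rack_u_used}U spare")
  print(f"Up to {servers_per_rack} servers per rack")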

Apollo 2000 System servers offer the flexibility to tailor the system to the precise needs of each workload, with compute and flexible I/O and storage options. Servers can be mixed and matched within a single chassis to support different applications, and the chassis can even be deployed with a single server, leaving room to scale as customers' needs grow. The Apollo 2000 chassis comes with four single-rotor fans; an additional four fans can be added for redundancy.

Technical Specifications
Specifications are provided by the manufacturer. Refer to the manufacturer for an explanation of these ratings.
Bays Provided
Form Factor: 2.5" SFF
Free Qty: 24
Total Qty: 24

Chassis
Supported Device Modules Qty: 24
Supported Interface: Serial ATA-300 / SAS

Dimensions & Weight
Depth: 32.4 in
Height: 3.4 in
Weight: 22.05 lbs
Width: 17.6 in

Environmental Parameters
Humidity Range Operating: 10 - 90% (non-condensing)
Max Operating Temperature: 95 °F
Min Operating Temperature: 50 °F
Sound Emission: 54 dBA
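The operating limits above are listed in Fahrenheit only; as a convenience (not part of the manufacturer's spec sheet), a minimal Python conversion to Celsius:

  # Convert the quoted operating-temperature limits from °F to °C.
  def f_to_c(f: float) -> float:
      return (f - 32) * 5 / 9

  for label, temp_f in [("Min operating", 50), ("Max operating", 95)]:
      print(f"{label}: {temp_f} °F = {f_to_c(temp_f):.0f} °C")
  # Min operating: 50 °F = 10 °C
  # Max operating: 95 °F = 35 °C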

Expansion Bays
Form Factor (metric): 6.4 cm SFF
Type: Hot-swap

Hard Drive
Type: No HDD

Header
Brand: HPE
Compatibility: PC
Manufacturer: HP CTO Server Avnet
Model: R2800 with Expander
Packaged Quantity: 1
Product Line: HPE Apollo

Miscellaneous
Pricing Type: CTO

Optical Storage
Type: None

Processor
Type: None

Storage
Type: Storage enclosure

Storage Controller
Type: None

Storage Controller (2nd)
Type: None

Product Reviews
HPE Apollo r2800 with Expander - storage enclosure is rated 4.0 out of 5 by 2 reviewers.
Rated 4 out of 5: Enables us to do the world's leading superhuman AI research.

Valuable Features: It's very hard for a professor to amass supercomputing resources, so I've been very fortunate to have that level of supercomputing at our disposal, and it has really enabled us to do the world's leading superhuman AI research. This January we actually beat the best heads-up no-limit Texas hold'em human players in the world, so we're at a superhuman level in strategic reasoning.

Improvements to My Organization: We have been working with the Pittsburgh Supercomputing Center for around ten years. They pick the hardware, and they had picked this hybrid system, which has several different kinds of components. We had worked with them for a long time and knew they were picking state-of-the-art equipment, so that's why we selected this solution.

Room for Improvement: One thing we are looking for is better stability of the Lustre file system; it could be improved. I have heard they are coming out with better memory bandwidth, so that's good, or maybe it's already there in System 10. Beyond that, of course, there is always a need for more CPUs, more storage, and so on.

Use of Solution: I've been working with the Pittsburgh Supercomputing Center for about ten years on various supercomputers, and with the Bridges supercomputer, their newest, built by HPE, from the very outset. We were one of the beta customers, testing it as they were building it, and we've continued using it as a regular customer after that. I don't remember exactly, but I would say we have been using it for about two and a half years.

Stability Issues: It has been fairly reliable. Not in the beginning, of course, but we were a beta customer, so at the start there was literally nothing in the racks. We've been with it from the beginning, and while it was less stable at first, it became more stable over time. If there's anything that hasn't been that stable, it is the Lustre file system. They have made some improvements there, but this is not just a problem with Bridges; we have computed at other supercomputing centers, such as the San Diego Supercomputing Center, in the past as well, and Lustre seems to be a little bit unstable overall.

Scalability Issues: It is scalable and is going to meet our needs moving forward. Having said that, our algorithms are very compute-hungry and storage-hungry, so more is more; there's no limit to how much our algorithms can use. The more compute and storage they have, the better they will perform.

Technical Support: I would praise the Pittsburgh Supercomputing Center (PSC) support; their support has been awesome. We don't contact HPE directly; they contact HPE if needed.

Initial Setup: The PSC installed everything, both hardware and software, so we didn't do any of that; from our perspective, it has been easy to use.

Other Advice: When looking for a vendor, we do not look at the brand name at all; what we look for is reliability and raw horsepower. It has been great. The Pittsburgh Supercomputing Center team has been great in supporting us very quickly, sometimes even at night or on weekends. I've been very fortunate as a professor to get this level of supercomputing, so we've been able to do the world's leading research in this area. The only things I would improve are the ones I mentioned before: the Lustre file system and maybe the memory access from the CPU.

Disclaimer: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Date published: 2017-07-13
Rated 4 out of 5: It allows us to use as few nodes as possible for storing log-file data so that we have as much direct space capacity as possible.

Valuable Features: Apollo's most valuable features for us are its density and storage capabilities.

Improvements to My Organization: We're trying to keep all log files in our Hadoop system, which amounts to several terabytes a day of log data that we need to analyze. Apollo allows us to use as few nodes as possible for this, so that we have as much direct space capacity as possible. It gives us much more space per gigabyte.

Room for Improvement: It's a very good system when you need a lot of disk capacity, but it's unclear whether the I/O performance will be sufficient when you calculate the theoretical amount of time needed to read all the disk space. If the workload is not purely sequential, I/O performance is less than optimal, because the system is optimized for streaming processing.

Deployment Issues: We have no issues with deployment.

Stability Issues: We installed it about a week ago, and it's been running without problems.

Scalability Issues: We probably have some 6,000 or 7,000 physical cores already and are planning more.

Technical Support: We have technical account managers who work with us; it's pretty much a direct line to HP without having to dial the general support number.

Previous Solutions: We previously used DL380s. Compared to those, Apollo has roughly four times the amount of space per server, which means we can really do a lot. We technically could have used four DL380s, but the licensing cost would have been significantly higher.

Initial Setup: The initial setup was straightforward, and we've been happy with it.

Disclaimer: I am a real user, and this review is based on my own experience and opinions.
Date published: 2016-01-11