
November 27, 2017


Understanding Hyperconvergence with SimpliVity

Learn the ins and outs of this technology, and take a close look at one of its market leaders.


It’s June 12, 2014, and I’m in beautiful Vancouver, B.C., working a booth at VMware’s VMUG UserCon. I’ve now been a solutions architect at SimpliVity for two months covering the Pacific Northwest and I’m excited to be spreading the word about this new technology called hyperconvergence.

As people approached the booth to investigate this new startup with the curious pop-up banner, I’d often start the conversation by asking, “Have you heard of hyperconvergence?” Eighty percent of the attendees looked puzzled and said, “No.” Only 20 percent said, “Yes,” and even then with a little uncertainty. Of that 20 percent, I’d say fewer than half really understood the basic concept of hyperconvergence. But that was 2014. In the past 3 1/2 years, the understanding and adoption of hyperconvergence have grown tremendously.

The Whats and Whys of Hyperconvergence

So, allow me to level-set with some basics: Hyperconvergence is a combination of compute, hypervisor and storage. Some people will include networking, but in reality the appliance still needs a network switch to communicate with the rest of the world, so I don’t include it.

Why has hyperconvergence taken off so quickly? Here are my top three reasons:

  1. Deployment and scalability: It’s so easy to deploy! Gone are the days when it took weeks to plan for the data migration. The data migration process for a SimpliVity deployment is simply a Storage vMotion of the VMs from one node to another. And when more resources are required, it takes only a few clicks to add another node into the environment, which increases CPU, memory and storage capacity and performance linearly.
  2. Reduced costs: You buy what you need when you need it. You no longer need to purchase a large monolithic storage controller and add disks to it over time, an approach that is very cost-inefficient.
  3. Simplified administration and manageability: There’s no longer a need to create LUNs, aggregates or volumes; instead, an NFS file system is created that spans multiple nodes. You can monitor the health and performance of the global compute and storage environment, as well as all backups and replication, from a single GUI, no matter where on the planet a node might reside. And all of this is done from the vSphere Web Client, so there’s no new interface to learn.

Hyperconvergence was created (and is being widely adopted) because it simplifies acquisition and administration of the data center for the directors and vice presidents of IT, the data center architects and the system admins, allowing IT departments to do the fun stuff — innovating and building systems to make the business money. Everyone at every level benefits from this technology.

SimpliVity’s DVP

Looking more closely at SimpliVity, how is it different? It has a few features and innovations that are unique in the market. Let’s start with the Data Virtualization Platform (DVP), which consists of the OmniStack Accelerator and SimpliVity’s Data Architecture.

OmniStack Accelerator

The OmniStack Accelerator is a storage controller that has been purpose-built for inline deduplication, compression and optimization. It also uses supercapacitors rather than batteries to protect cached data during a power outage.

With inline deduplication, as data is being written to storage, writes are analyzed in 8KB chunks. If a block is unique, it’s coalesced in memory, marked in the File System’s metadata, compressed and written to disk in a full write stripe. If the block is a duplicate that is already stored in the system’s Object Store, then a pointer is created in metadata and no further processing of that block is required.
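
To make that flow concrete, here’s a minimal Python sketch of the inline dedupe decision, assuming a simple hash-based fingerprint. The 8KB chunk size matches the description above, but the hash function, object store and metadata structures are my own simplifications for illustration, not SimpliVity’s actual DVP implementation.

    import hashlib
    import zlib

    CHUNK_SIZE = 8 * 1024   # writes are analyzed in 8KB chunks

    object_store = {}       # fingerprint -> compressed block (unique blocks only)
    file_metadata = []      # ordered pointers that reconstruct the written data

    def inline_dedupe_write(data: bytes) -> None:
        """Illustrative only: dedupe each 8KB chunk as it is written."""
        for offset in range(0, len(data), CHUNK_SIZE):
            chunk = data[offset:offset + CHUNK_SIZE]
            fingerprint = hashlib.sha256(chunk).hexdigest()
            if fingerprint in object_store:
                # Duplicate: record a metadata pointer; no further processing
                file_metadata.append(fingerprint)
            else:
                # Unique: compress and store the block, then record the pointer
                # (the real DVP coalesces unique blocks into full write stripes)
                object_store[fingerprint] = zlib.compress(chunk)
                file_metadata.append(fingerprint)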

SimpliVity Data Architecture

If the OmniStack Accelerator is the brawn, then SimpliVity’s Data Architecture is the brains. The Data Architecture is broken into the Data Management Layer and the Presentation Layer. The Data Management Layer is also divided into two pieces: the Object Store and the File System. Earlier I discussed how each block of data is deduped inline before being written to disk. This means that each block of data contained within the Object Store of each node is unique at the time of inception.

In my mind, the File System is where the magic happens. The File System is where containers reside. A container is a logical representation of a VM and consists of pointers to all the blocks within the Object Store that are needed to create that specific VM. Keep in mind that multiple containers can all point to the same block of data (e.g., a Windows system file that is being accessed by multiple VMs).
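
Here’s a hypothetical Python model of that relationship. The block names and contents are invented for illustration; the point is that two containers can hold pointers to the same deduplicated block, so shared data such as a common Windows system file is stored only once.

    # Hypothetical model: a container is an ordered list of pointers (block
    # fingerprints) into the node's Object Store.
    object_store = {
        "os_block":  b"<shared Windows system file block>",
        "vm1_block": b"<VM1 application data>",
        "vm2_block": b"<VM2 application data>",
    }

    containers = {
        "VM1": ["os_block", "vm1_block"],  # both containers point at the
        "VM2": ["os_block", "vm2_block"],  # same shared OS block
    }

    def materialize(vm: str) -> bytes:
        """Reassemble a VM's disk contents by following its pointers."""
        return b"".join(object_store[ptr] for ptr in containers[vm])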

I’ve often joked that the Presentation Layer is boring because it’s really not doing much. The Presentation Layer presents the containers to ESXi as if they were standard virtual machine disks (VMDKs).

Data Protection

SimpliVity’s Local Data Protection is another valuable feature. The drives in each node are protected by RAID 5 or RAID 6, depending on the quantity of drives. The different configurations break down like this (with a quick usable-capacity calculation after the list):

  • Large nodes: 12 x 1.92TB SSD in a RAID 6 config
  • Medium nodes: 9 x 1.92TB SSD in a RAID 6 config
  • Small nodes: 5 x 1.92TB SSD in a RAID 5 config
  • X-Small nodes: 5 x 960GB SSD in a RAID 5 config
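
For a rough sense of raw usable capacity, RAID 6 sacrifices two drives to parity and RAID 5 sacrifices one. The quick calculation below is back-of-the-envelope only: it ignores deduplication, compression, spares and system overhead.

    def usable_tb(drives: int, drive_tb: float, parity_drives: int) -> float:
        """Raw capacity after parity; a back-of-the-envelope illustration."""
        return (drives - parity_drives) * drive_tb

    print(usable_tb(12, 1.92, 2))  # Large, RAID 6   -> 19.2 TB raw usable
    print(usable_tb(9, 1.92, 2))   # Medium, RAID 6  -> 13.44 TB
    print(usable_tb(5, 1.92, 1))   # Small, RAID 5   -> 7.68 TB
    print(usable_tb(5, 0.96, 1))   # X-Small, RAID 5 -> 3.84 TB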

Because the mean time to failure (MTTF) of SSDs is drastically longer than that of HDDs, it’s highly unlikely (but not impossible) that two of the five SSDs in a small or x-small node will fail at the same time.

Backups, Replication and Disaster Recovery (DR)

Backups are an integral part of the SimpliVity product and are included at no additional charge. A backup policy is first created with the typical parameters of day/time, frequency, retention period, application consistency, etc. One or more VMs are then assigned to a specific backup policy. Application-consistent backups employ an agentless implementation that integrates with Microsoft Volume Shadow Copy Service (VSS).
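
As a sketch of what such a policy might capture, here’s a hypothetical Python representation. The field names are my own shorthand, not SimpliVity’s actual schema.

    from dataclasses import dataclass, field

    @dataclass
    class BackupPolicy:
        """Hypothetical stand-in for a SimpliVity-style backup policy."""
        name: str
        frequency_minutes: int        # how often a backup is taken
        retention_days: int           # how long each backup is kept
        app_consistent: bool = False  # quiesce via VSS before the backup
        vms: list = field(default_factory=list)

    # Assign each VM to the policy that best matches its SLA
    gold = BackupPolicy("gold", frequency_minutes=10, retention_days=30,
                        app_consistent=True)
    gold.vms.append("ProdSQL10")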

When a VM needs to be replicated to another location for DR purposes or for the application’s development cycle, the SimpliVity source node first transfers the VM’s metadata to the SimpliVity target node. The metadata describes all the blocks that make up the VM. The SimpliVity node on the destination side compares the VM’s metadata with the metadata of its own existing blocks and then requests only the missing blocks of data that would be required to reassemble the VM during a restoration.
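
A minimal sketch of that comparison, assuming each side can enumerate block fingerprints as a set (the real protocol is certainly more involved):

    def blocks_to_replicate(vm_block_map: set, target_blocks: set) -> set:
        """Hypothetical: the target requests only blocks it doesn't hold."""
        return vm_block_map - target_blocks

    # The source sends the VM's metadata (its block map); the target diffs
    # it against its own Object Store and asks for just the missing blocks.
    vm_block_map = {"a1f3", "9c2e", "4b7d"}
    target_blocks = {"a1f3", "9c2e"}   # present from earlier replications
    print(blocks_to_replicate(vm_block_map, target_blocks))  # {'4b7d'}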

RapidDR (which is sold separately) is the “Easy” button for getting the business up and running when an outage occurs. It’s a runbook that integrates with SimpliVity’s backups and replication. It’s easy to configure and will perform most of the steps needed in a DR situation, like the sequence below (sketched in code after the list):

  • Power on the last backup of your AD server first and give it a new IP address, because you need authentication
  • Wait 10 minutes and then power up your SQL servers with a new IP
  • Wait 10 more minutes and then power up your web servers that have hooks into SQL
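
RapidDR itself is configured through a GUI, but conceptually the runbook is an ordered plan like the hypothetical Python sketch below. The step fields, delays and stub functions are my own illustration, not RapidDR’s actual format.

    import time

    # Hypothetical runbook: ordered recovery steps with new IPs and delays
    runbook = [
        {"vm": "AD01",  "new_ip": "192.168.10.5",  "wait_after_min": 10},
        {"vm": "SQL01", "new_ip": "192.168.10.21", "wait_after_min": 10},
        {"vm": "WEB01", "new_ip": "192.168.10.40", "wait_after_min": 0},
    ]

    def restore_and_power_on(vm: str) -> None:
        print(f"Powering on the last backup of {vm}")

    def reassign_ip(vm: str, ip: str) -> None:
        print(f"Re-IPing {vm} -> {ip}")

    def execute(plan: list) -> None:
        for step in plan:
            restore_and_power_on(step["vm"])
            reassign_ip(step["vm"], step["new_ip"])
            time.sleep(step["wait_after_min"] * 60)  # let services settle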

Data Virtualization Platform Value

How and why do businesses benefit from all these features and innovations? It starts with the DVP. The Data Virtualization Platform is both software-defined and hardware-accelerated, taking advantage of each, which is really the best of both worlds.

OmniStack Accelerator Advantage

Moving on to the OmniStack Accelerator: deduplication is a CPU-intensive process, but performing these operations on the OmniStack Accelerator leaves the node’s available CPU cycles free for more application-centric work. I compare the need for and benefits of performing inline dedupe, compression and optimization on the OmniStack Accelerator to the benefits of a GPU performing video acceleration. Supercapacitors are important because they charge much faster than batteries, which reduces the chance of data loss during a double power bump, where power is lost, re-established and then lost again before a battery could be fully recharged. And inline dedupe reduces the write IOPS to disk, which has a large positive impact on the node’s performance, especially considering that the process is completed on the OmniStack Accelerator rather than stealing cycles from the node’s CPU.

Data Architecture Gains

Most people understand the benefits of virtualizing hardware with hypervisors like ESXi and Hyper-V. But there’s also a benefit in virtualizing the data, and that’s what the SimpliVity Data Architecture does with its Object Store and File System. Because each 8KB block of data within the Object Store is as unique as a snowflake, the Object Store can deliver both live data and data protection at very high speeds. And it’s the File System that allows the physical blocks to be abstracted from the hypervisor.

Protection Benefits

Looking at RAID vs. RAIN plus erasure coding, there are pros and cons to each, and for SimpliVity’s Data Virtualization Platform I think RAID is the best decision because the typical drawbacks of RAID are minimized. The heavy write penalty associated with RAID 6 is significantly reduced by the OmniStack Accelerator’s ability to do inline dedupe. And while rebuild times for a failed spinning drive could once take a very long time, with today’s all-flash systems, rebuilding a failed SSD takes a fraction of the time. The use of RAID also allows SimpliVity to deploy two-node systems with high availability (HA) where other manufacturers require three or four nodes. And SimpliVity can deploy a single node in remote sites or areas of the business running noncritical workloads.
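
That write penalty is worth quantifying. A small random write on RAID 6 typically costs six disk I/Os (read the data and both parity blocks, then write all three back), so eliminating duplicate writes before they reach the array pays off sixfold. The numbers below are hypothetical, just to show the shape of the math.

    RAID6_WRITE_PENALTY = 6    # 3 reads + 3 writes per small random write

    incoming_write_iops = 10_000   # hypothetical front-end write load
    duplicate_fraction = 0.40      # assume 40% of writes dedupe away

    unique_writes = incoming_write_iops * (1 - duplicate_fraction)
    backend_iops = unique_writes * RAID6_WRITE_PENALTY
    print(backend_iops)   # 36000.0, versus 60000 with no inline dedupe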

Making the Most of Backups, Replication and DR

Since each VM is assigned to the backup policy that best meets its own SLA, the admin has a lot of flexibility to protect the data exactly as needed. The minimum RPO is 10 minutes, which is shorter than what some competitors offer. The agentless, application-consistent backups have no dependency on VMware snapshots, which is great. File-level restores are also included. And each backup is its own synthetic full: unlike traditional snapshots, there are no dependencies on prior backups.
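
One way to picture “each backup is its own synthetic full”: every backup is a complete, independent block map into the Object Store, so restoring or deleting one never requires walking a chain of prior deltas. This is a hypothetical model consistent with the container sketch earlier, not SimpliVity’s internal format.

    # Hypothetical: each backup holds a full pointer map, so no backup
    # depends on any other; only changed blocks add new Object Store data.
    backups = {
        "ProdSQL10@09:00": ["a1f3", "9c2e", "4b7d"],
        "ProdSQL10@09:10": ["a1f3", "9c2e", "77aa"],  # one changed block
    }

    def restore(backup_id: str) -> list:
        """Fetch every block directly; there is no snapshot chain to walk."""
        return [fetch_block(fp) for fp in backups[backup_id]]

    def fetch_block(fp: str) -> str:
        return f"<block {fp}>"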

SimpliVity is superefficient and fast when replicating data. Think of it this way: you just created a new production Windows 2012 R2 server running SQL 2016 and named it “ProdSQL10.” Now you want to replicate ProdSQL10 to your DR site. The first automated step in the replication process is the comparison of ProdSQL10’s metadata, or block map, with the metadata of every block in the DR node. Since you’ve been replicating ProdSQL09 (a Windows 2012 server with SQL 2016) for the past seven months, the Windows 2012 operating system and the SQL 2016 application don’t have to be replicated for ProdSQL10, because those blocks were written when ProdSQL09 was last replicated. Only the unique blocks for ProdSQL10’s database need to be replicated to the remote site. This is the advantage of global dedupe; it’s superefficient and very fast. Of course, replication times will vary depending on the size of the VM, rate of change, frequency of the replication and bandwidth.

Most of us know what to do when a disaster happens, but RapidDR cuts down on the mistakes that are made when we’re in a hurry, stressed, and have just re-IPed a server as 192.168.1.21 rather than 192.168.10.21 and are left wondering why it can’t connect to the network. RapidDR is the “easy button” runbook, but it’s an à la carte feature/license. All the other SimpliVity features, software and licenses are included with annual support.

The Wrap-Up

It’s worth noting that both the Gartner Magic Quadrant and the Forrester Wave place SimpliVity in their Leaders category. Hyperconvergence has come a long way since I first evangelized it, and it’s fun for me to look back on the HCI journey. There are several very good products available to our customers, and if you’re curious about hyperconvergence, you’ve got to take a close look at SimpliVity when doing your due diligence.

Learn more about how CDW and SimpliVity can improve your infrastructure.

This blog post brought to you by:

Jon Mark

CDW Expert
Jon Mark Sano is a highly experienced and trusted CDW expert.