How to Navigate the Increasingly Complex Cloud Landscape
As organizations deploy more workloads to the cloud, they find that inefficient strategies can become costly.
The use of cloud computing has exploded over the past decade. According to IDG, 89 percent of companies use Software as a Service (SaaS) somewhere in their environments, while 73 percent utilize Infrastructure as a Service (IaaS) and 61 percent use Platform as a Service (PaaS).
Cloud adoption is not only becoming wider, but also more complex, with hybrid and multicloud architectures becoming commonplace. As their cloud environments grow larger and more complex, IT teams must account for a wider variety of interdependencies. Typical areas of concern include networking, security and interactions between applications.
One of the key benefits of the public cloud is quick and easy scalability. Many situations call for it: expanding IT resources to develop or test a new application, accommodating rapid growth, or meeting peak demand periods. These resources can be spun up in a public cloud vendor’s environment — without delay or upfront capital costs.
There’s a downside, however, to this ease of expansion: As an enterprise grows its consumption of cloud resources, small inefficiencies also scale up, until problems that were once only minor issues grow into areas of critical concern. To deal with these consumption issues, organizations need a cloud strategy that helps them to manage their environments more efficiently. Such a strategy should include plans to control costs and optimize application performance, as well as to detail who is responsible for which aspects of cloud security.
Where Clouds Can Go Wrong
These inefficiencies can pop up in different ways. Some organizations fail to rightsize their cloud environments, opting for simple “lift and shift” migrations instead of optimizing their designs. In such a scenario, an organization assumes that its current environment is already designed efficiently, then replicates the architecture and resources running on-premises in the public cloud. It’s easy to see how this can quickly lead to massive overspending.
For instance, if an organization replicates an environment designed for peak (rather than routine) resource demands — and then pays to run those resources around the clock in the public cloud, even when they’re not being used — the resulting expenses will quickly add up. This was an especially common mistake in the early days of the cloud, when many people still assumed that the public cloud was an automatic money saver. Today, thankfully, many organizations have woken up to the reality that inefficient cloud investments can result in cost overruns, rather than cost savings.
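The math behind this mistake is straightforward. The sketch below, using entirely hypothetical instance counts and a made-up hourly rate (no real provider’s pricing is quoted), compares running a peak-sized footprint around the clock against rightsizing for routine load with extra capacity only during peak hours:

```python
# Hypothetical illustration: lift-and-shift at peak size vs. rightsizing.
# The instance counts, hours and hourly rate below are invented for the
# example and are not any cloud provider's actual pricing.

HOURS_PER_MONTH = 730           # average hours in a month
PRICE_PER_INSTANCE_HOUR = 0.10  # hypothetical on-demand rate, in dollars

def monthly_cost(instances: int, hours: float = HOURS_PER_MONTH) -> float:
    """Cost of keeping `instances` servers running for `hours`."""
    return instances * hours * PRICE_PER_INSTANCE_HOUR

# Lift-and-shift: replicate the 20-instance peak footprint 24/7.
lift_and_shift = monthly_cost(20)

# Rightsized: 6 instances for routine load, plus 14 extra instances
# spun up only for a 40-hour monthly peak window.
rightsized = monthly_cost(6) + monthly_cost(14, hours=40)

print(f"Lift-and-shift: ${lift_and_shift:,.2f}/month")  # $1,460.00/month
print(f"Rightsized:     ${rightsized:,.2f}/month")      # $494.00/month
```

Even with these toy numbers, paying only for capacity that is actually in use cuts the bill by roughly two-thirds — and the gap widens as the environment scales.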
Failing to decommission unused resources is another obvious — but often overlooked — source of wasteful spending. The cause of this problem is somewhat different from when organizations fail to rightsize their environments from the beginning, but the end result is the same. When users spin up public cloud resources for temporary projects, these resources are often left running long after the projects are complete. And, since many IT teams lack visibility into their cloud environments, they often don’t even realize that their organizations have unused resources spun up in the public cloud, with the meter running around the clock.
This source of inefficiency is especially common in organizations where users have permission to spin up public cloud resources across multiple departments with little in the way of oversight — particularly enterprises that operate in multiple public cloud environments without integrating those environments in a way that enables centralized visibility and control.
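The fix for forgotten resources is a recurring audit that flags anything idle beyond a set window. The sketch below shows the core logic with a hard-coded, hypothetical inventory; in practice the resource list and last-used timestamps would come from a cloud provider’s API or a cloud management platform:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical "unused resource" audit. The inventory below is invented
# for illustration; a real audit would pull this data from the cloud
# provider's API or a cloud management tool.

NOW = datetime(2024, 1, 31, tzinfo=timezone.utc)
IDLE_THRESHOLD = timedelta(days=30)

inventory = [
    {"id": "vm-demo-01",  "last_used": datetime(2023, 9, 1, tzinfo=timezone.utc)},
    {"id": "vm-prod-api", "last_used": datetime(2024, 1, 30, tzinfo=timezone.utc)},
    {"id": "vm-test-db",  "last_used": datetime(2023, 12, 15, tzinfo=timezone.utc)},
]

def flag_idle(resources, now=NOW, threshold=IDLE_THRESHOLD):
    """Return IDs of resources not used within the threshold window."""
    return [r["id"] for r in resources if now - r["last_used"] > threshold]

# Candidates for decommissioning review, not automatic deletion.
idle_resources = flag_idle(inventory)
print(idle_resources)
```

A report like this is only a starting point: flagged resources should be routed to their owners for review before anything is shut down, which is exactly where centralized visibility and governance come in.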
The Need for Cloud Optimization
Even if an organization is using all of the resources it pays for, it is critically important to match workloads with the appropriate cloud infrastructure. Storage is a prime example of an area where things can go wrong, as organizations often fail to distinguish between “hot” and “cold” data — which require different levels of availability (and which, in turn, can be had at very different price points). For instance, a hospital might be required to keep certain types of records for a period of 18 to 25 years but may never (or only rarely) access this data. By placing the data in a cool storage environment that is optimized for cost rather than performance, an organization can substantially lower its monthly cloud storage costs.
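The savings from tiering are easy to estimate once hot and cool per-gigabyte rates are known. The numbers below are hypothetical placeholders chosen only to show the shape of the calculation; actual tier pricing varies by provider, region and access pattern (cool tiers typically charge more per retrieval, which matters for data that is read often):

```python
# Hypothetical per-GB monthly prices, for illustration only.
# Real hot/cool tier pricing varies by provider and region.
HOT_PRICE_PER_GB = 0.020
COOL_PRICE_PER_GB = 0.010

def monthly_storage_cost(gb: float, price_per_gb: float) -> float:
    """Monthly cost of storing `gb` gigabytes at a flat per-GB rate."""
    return gb * price_per_gb

archive_gb = 50_000  # e.g., decades of rarely accessed records

hot_cost = monthly_storage_cost(archive_gb, HOT_PRICE_PER_GB)
cool_cost = monthly_storage_cost(archive_gb, COOL_PRICE_PER_GB)
savings = hot_cost - cool_cost

print(f"Hot tier:  ${hot_cost:,.2f}/month")
print(f"Cool tier: ${cool_cost:,.2f}/month")
print(f"Savings:   ${savings:,.2f}/month")
```

With these assumed rates, moving 50 TB of rarely touched records to a cool tier halves the storage line item — a recurring saving that compounds every month the data must be retained.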
Finally, many inefficiencies are introduced into cloud environments due to a simple lack of good governance. Too often, the policies and procedures that will govern the growth of an organization’s cloud environment are an afterthought. This, in turn, means that IT and business leaders are constantly in a reactive position. Rather than being able to proactively monitor the growth of their environment and steer away from problems, they are forced to put out one fire after another, with little ability to predict what new problems might be headed their way. When organizations lack good governance rules, this often results in too many individuals having the “keys to the kingdom,” meaning they have the ability to spin up whatever resources they want — with no one to tell them “no.”
Cost overruns are perhaps the most obvious outcome of an inefficient cloud environment, but they’re far from the only negative result. A lack of governance, for example, can introduce shadow IT into an organization — which, in turn, can create security vulnerabilities, compliance issues and a lack of application visibility. To prevent these problems, organizations must develop an effective cloud management strategy and adopt the right mix of tools, policies and partnerships to implement it.
To learn more about how you can better manage your organization’s cloud consumption, read the CDW white paper “Managing Cloud Consumption for Optimal Results.”