
3 Reasons WHY Cloud Costs Go Haywire

Surya Challa - Founder Cloud Accel

May 17, 2021

Overview

Not long after moving an application or two into the public cloud, most organizations find their cloud costs running much higher than their original estimates. What causes these overruns?

This is the second article in a three-part series:

  1. 3 Reasons WHY Cloud Optimization Matters
  2. 3 Reasons WHY Cloud Costs Go Haywire
  3. 3 Strategies Absolutely Essential for Cutting Cloud Costs

In 2020, $59B was spent on public cloud Infrastructure-as-a-Service (IaaS), and annual spend is estimated to exceed $100B by 2022[1]. For most organizations, the cloud is becoming the primary IT infrastructure provider. A significant challenge of this transformation is the loss of control over cloud costs. Here are three reasons why costs can spiral when organizations move to the cloud.

1. Elimination of “fear of commitment”


Investing in on-premises infrastructure requires a multi-year commitment to justify high upfront costs. Once on-prem investments are made, significant changes become prohibitively expensive. Public clouds solve this problem by providing the ability to provision and deprovision on demand. With the fear of long-term financial commitment out of the way, organizations often allow individual Dev and IT members to make ad hoc procurement decisions. While this flexibility boosts developer productivity, it can also create inefficiencies. Without consistent procurement policies, and with a far larger variety of resources, each with its own usage lifecycle, IT ends up with resources that:

  1. are no longer used but still lying around.
  2. are used only a fraction of the time.
  3. have more capacity than necessary.
  4. were procured at on-demand list prices, without reserved or committed-use discounts.

Each of these scenarios contributes to higher cloud costs.
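
To make the first category concrete, here is a minimal sketch, assuming an AWS account with boto3 credentials already configured, that lists EBS volumes sitting in the "available" state: provisioned storage that is no longer attached to anything but is still billed every month. Other providers expose equivalent APIs, and a real review would cover compute, databases, and networking resources as well.

```python
import boto3

# Sketch: find EBS volumes that are provisioned but not attached to any
# instance (status "available"), i.e. resources that are no longer used
# but still incur storage charges. Assumes boto3 credentials and a region
# are already configured for the account being reviewed.
ec2 = boto3.client("ec2")

paginator = ec2.get_paginator("describe_volumes")
unattached = []
for page in paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}]):
    unattached.extend(page["Volumes"])

total_gib = sum(v["Size"] for v in unattached)
print(f"{len(unattached)} unattached volumes, {total_gib} GiB of idle storage")
for v in unattached:
    print(v["VolumeId"], v["Size"], "GiB, created", v["CreateTime"].date())
```

The same pattern, list a resource type and filter for an idle state, applies to unattached Elastic IPs, idle load balancers, and stale snapshots.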

2. Workloads are constantly changing


With new features and enhancements, organizations' cloud capacity requirements are constantly changing. Some resources are no longer required, while others need to be scaled up to handle increased loads. Typically, the areas with inadequate capacity receive immediate attention to avoid inconveniencing end users. In contrast, the areas with a capacity surplus are set aside for future action because they are difficult to identify, isolate, and act upon. Meanwhile, the surplus capacity keeps draining money.
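
Surplus capacity is hard to see precisely because nothing breaks when it sits idle. One rough way to surface it, sketched below under the assumption of an AWS account and a CPU-only heuristic, is to pull utilization metrics for running instances and flag those averaging under 5% CPU over the last two weeks. The threshold, the window, and the metric are all illustrative choices; a real right-sizing exercise would also consider memory, network, and storage.

```python
import boto3
from datetime import datetime, timedelta, timezone

# Sketch: flag running EC2 instances whose average CPU over the last 14 days
# is below 5%, a crude heuristic for surplus capacity.
ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=start,
            EndTime=end,
            Period=86400,           # one datapoint per day
            Statistics=["Average"],
        )
        datapoints = stats["Datapoints"]
        if not datapoints:
            continue
        avg_cpu = sum(p["Average"] for p in datapoints) / len(datapoints)
        if avg_cpu < 5.0:
            print(f"{instance_id} ({instance['InstanceType']}): "
                  f"avg CPU {avg_cpu:.1f}% over 14 days")
```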

3. Deprovisioning is an intricate task


Cloud resources, including storage, compute, databases, and cache servers, work together to deliver services. These interdependencies turn deprovisioning even a subset of resources into an intricate task that requires in-depth impact analysis across multiple teams (dev, ops, network, security, etc.). Wrong decisions carry the risk of bringing down business-critical systems, so teams err on the side of caution and opt for the status quo even when there are clear signs of surplus or unused capacity.

Conclusion

Most cloud admins can relate to the problems described above. A comprehensive review of resource utilization can help address these issues, but with the constant pressure to deliver more features in less time, such review efforts receive a lower priority.

So, is there hope of gaining control over cloud costs, or do we just surrender? No need to give up; it turns out there are effective ways to tackle them. The next article in the series will take a sneak peek at an approach that has been delivering rich dividends, with savings of up to 65% in cloud costs.

I am Surya, Founder-CEO@CloudAccel. Our solution CloudOptimize is now in public beta. CloudOptimize has reduced our customers’ cloud costs by 30–65%. Want to learn more? Take a trial run here:

References

