3 Areas of Focus for a CFO to Reduce Cloud Computing Costs

Many CIOs approve public cloud usage for all or a portion of company infrastructure, often choosing one of the big three: AWS, Google Cloud Platform, or Microsoft Azure. Many believe they are saving money over the cost of maintaining on-premises hardware, software, and staff. They also have big plans to move their most pressing digital transformation projects to the public cloud, capitalizing on easy access to seemingly infinite computing resources.

According to IDG’s 2018 Cloud Computing Survey, nine out of ten companies will have some part of their applications or infrastructure in the cloud by 2019, and the rest expect to follow by 2021.

After a few months or years, however, the cost of public cloud computing often becomes unmanageable, leaving the CFO with a real problem on their hands. Gartner predicts that, through next year, organizations that lack cost optimization processes will average 40% overspend in public cloud. The company’s public cloud footprint is large, unmanaged, and unruly, and it becomes overwhelming to figure out how to bring it under any kind of administrative and cost control.

Many CFOs are caught between the proverbial rock and a hard place. The operational expense of the cloud is starting to far exceed the capital expense of on-premises infrastructure, and in many cases the original cost-saving justification for moving to the cloud no longer holds. In fact, as Data Center Knowledge notes in its coverage of the RightScale 2019 State of the Cloud Report, 43% of organizations do not have automated or manual policies to use the lowest-cost cloud service. Another 38% have not developed policies to use their cloud providers’ lowest-cost regions. About 29% do not have automated or manual policies to shut down workloads after hours, 27% do not have policies to eliminate inactive storage, and 20% do not have policies to right-size their instances.

The CFO has tried on multiple occasions to get actionable information from the CIO and IT teams, but gets only vague and evasive answers as to why cloud costs are so high.

Here, we look at three of the most common areas where cloud costs spin out of control and equip the CFO with enough understanding of the problem, and the solution, to have a productive conversation with IT staff.


Identity and Access Management

Many companies greenlight usage of the public cloud without clearly thinking through identity and access management (IAM). In One Identity’s survey “Assessment of Identity and Access Management in 2018”, more than 1,000 IT security professionals revealed a widespread lack of confidence in access control and privileged account management programs. Roughly one in three (31%) organizations rely on alarmingly antiquated processes, including manual methods and spreadsheets, to manage privileged accounts.

Without a good understanding of users and their needs for access to resources, companies often grant liberal privileges to employees, which leads to overprovisioned servers. For example, a developer allocates a 64GB compute server when they only need 16GB, or spins up 12 nodes for a dev project when 4 would do.

All of the major public cloud platforms, including AWS, Azure, and Google Cloud Platform, have robust IAM tools that let administrators grant the right level of access for the creation and usage of resources based on a given role in the company. For example, if a developer is building small prototype applications, then they should only be allowed to create small compute and storage instances. By implementing robust IAM, the CFO can bring the cost of provisioned infrastructure in line with the role and task of the employee using it.
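
As an illustration, here is a minimal sketch using AWS’s boto3 SDK to create an IAM policy that blocks launches of anything but small EC2 instance types. The policy name and the whitelist of instance types are illustrative placeholders, not a recommendation:

    # Sketch: create an IAM policy denying launches of EC2 instances
    # outside a small, inexpensive whitelist. Assumes AWS with boto3
    # installed and credentials configured; names are hypothetical.
    import json
    import boto3

    ALLOWED_TYPES = ["t3.micro", "t3.small", "t3.medium"]  # illustrative whitelist

    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyLargeInstances",
                "Effect": "Deny",
                "Action": "ec2:RunInstances",
                "Resource": "arn:aws:ec2:*:*:instance/*",
                # Deny any launch whose instance type is not on the whitelist
                "Condition": {
                    "StringNotEquals": {"ec2:InstanceType": ALLOWED_TYPES}
                },
            }
        ],
    }

    iam = boto3.client("iam")
    response = iam.create_policy(
        PolicyName="DevSmallInstancesOnly",  # hypothetical name
        PolicyDocument=json.dumps(policy_document),
        Description="Limit developers to small EC2 instance types",
    )
    print(response["Policy"]["Arn"])

The resulting policy can then be attached to a developer group with iam.attach_group_policy, so the guardrail follows the role rather than the individual.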


Application Workload Utilization

If companies do not have good IAM, then developers may well have overprovisioned hardware, and the production applications running on those servers may not be fully utilizing the cloud resources. Alternatively, the application may be written so inefficiently that it requires multiple additional compute and storage instances to compensate for its inability to handle the workload it must perform.

By performing a thorough application analysis, implementing modern DevOps-based CI/CD principles, and understanding the full range of services public clouds offer, administrators can refactor and optimize applications to leverage the features of cloud computing. By measuring and improving workload utilization, the enterprise can right-size its cloud footprint and, in almost all cases, reduce the amount of public cloud infrastructure required.
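
To make that measurement concrete, here is a minimal sketch, assuming AWS, boto3, and standard CloudWatch metrics, that flags running EC2 instances averaging under 10% CPU over two weeks as right-sizing candidates. The threshold and window are illustrative:

    # Sketch: flag EC2 instances whose average CPU utilization over the
    # last 14 days is below a threshold, as right-sizing candidates.
    from datetime import datetime, timedelta, timezone
    import boto3

    ec2 = boto3.client("ec2")
    cloudwatch = boto3.client("cloudwatch")

    end = datetime.now(timezone.utc)
    start = end - timedelta(days=14)

    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]

    for reservation in reservations:
        for instance in reservation["Instances"]:
            stats = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": instance["InstanceId"]}],
                StartTime=start,
                EndTime=end,
                Period=86400,           # one datapoint per day
                Statistics=["Average"],
            )["Datapoints"]
            if stats:
                avg_cpu = sum(point["Average"] for point in stats) / len(stats)
                if avg_cpu < 10.0:  # illustrative right-sizing threshold
                    print(f"{instance['InstanceId']} ({instance['InstanceType']}): "
                          f"avg CPU {avg_cpu:.1f}% over 14 days")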


Application and Data Lifecycle Management

Companies often have no application lifecycle management in the cloud. Because cloud is relatively cheap, fast to provision, and effectively unlimited, many companies start multiple projects and spin up multiple sets of resources. Often, for the sake of speed, they leave several versions of infrastructure running for development and QA, and many organizations never document what is on those servers or why it is important.

Over time, as applications and organizations grow in size and complexity, it becomes riskier and riskier to deprovision servers of questionable importance. It is far easier to keep them available and make sure “nothing breaks”, but that comes at the cost of paying to keep them around.

As for data, companies end up dealing with “data sprawl”, where multiple copies of the same dataset exist in multiple places. This is especially common in QA and testing, where developers copy the same set of data repeatedly but do not necessarily classify or tag it, so it becomes unclear whether the data can be deleted. This forces administrators to purchase and run expensive data deduplication applications in hopes of reclaiming storage space.
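
As a hedged illustration of where to start, the sketch below (assuming AWS and boto3) reports two common sources of storage sprawl: EBS volumes attached to nothing, and snapshots carrying no tags at all. In practice these would be reviewed or tagged before anything is deleted:

    # Sketch: report unattached EBS volumes and untagged snapshots,
    # the usual suspects behind storage sprawl.
    import boto3

    ec2 = boto3.client("ec2")

    # Volumes in the "available" state are not attached to any instance.
    orphans = ec2.describe_volumes(
        Filters=[{"Name": "status", "Values": ["available"]}]
    )["Volumes"]
    for volume in orphans:
        print(f"Unattached volume {volume['VolumeId']}: {volume['Size']} GiB")

    # Snapshots owned by this account with no tags are candidates
    # for classification or cleanup.
    snapshots = ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]
    for snapshot in snapshots:
        if not snapshot.get("Tags"):
            print(f"Untagged snapshot {snapshot['SnapshotId']} "
                  f"from {snapshot['StartTime']:%Y-%m-%d}")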

AWS, Azure, and Google Cloud Platform all include application lifecycle tools. In addition, hybrid cloud platforms like Red Hat CloudForms offer robust lifecycle management that enables administrators to provision, track, and deprovision servers on a regimented, predictable timeline across on-premises and public cloud implementations. By implementing application and data lifecycle tools, companies can reclaim and deprovision vast amounts of compute and storage resources, in some cases drastically reducing monthly cloud consumption costs.
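
A lightweight version of such a lifecycle policy can even be scripted directly. The sketch below assumes a hypothetical “expires-on” tag (YYYY-MM-DD) applied at provisioning time; the tag name, date format, and the decision to stop rather than terminate are all illustrative policy choices:

    # Sketch: a simple tag-driven lifecycle sweep on AWS with boto3.
    # Anything past its hypothetical "expires-on" date is stopped.
    from datetime import date
    import boto3

    ec2 = boto3.client("ec2")
    today = date.today()

    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]

    expired = []
    for reservation in reservations:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if "expires-on" in tags and date.fromisoformat(tags["expires-on"]) < today:
                expired.append(instance["InstanceId"])

    if expired:
        # Stopping preserves the EBS volumes, so nothing is lost if a
        # server turns out to matter; terminate only after review.
        ec2.stop_instances(InstanceIds=expired)
        print("Stopped expired instances:", expired)

Stopping rather than terminating is a deliberately conservative choice: it halts compute charges while leaving a review window before anything is destroyed.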

Conclusion

These are only three of the many areas a CFO can focus on to reduce cloud costs. AWS, Google Cloud Platform, and Azure all offer cloud cost optimization tools, and these tools are much more effective when the business has solid IAM, workload optimization, and application lifecycle policies in place. In addition to optimization tools, administrators can implement robust alerting and monitoring at both the cloud platform and application level, allowing them to track cloud usage and react quickly before infrastructure spend blows through cost thresholds.
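
On AWS, for example, a spend alert can be created in a few lines. The sketch below raises a CloudWatch alarm on the EstimatedCharges billing metric; it assumes billing alerts are enabled (the metric lives only in us-east-1), and the SNS topic ARN and $10,000 threshold are placeholders:

    # Sketch: alert an SNS topic when month-to-date AWS spend
    # crosses a threshold.
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    cloudwatch.put_metric_alarm(
        AlarmName="monthly-cloud-spend-threshold",   # hypothetical name
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,                # evaluate every six hours
        EvaluationPeriods=1,
        Threshold=10000.0,           # illustrative monthly threshold in USD
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:finance-alerts"],
    )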

Controlling public cloud costs is not a mystery. There are many tools and documented best practices that enable an enterprise to keep costs tightly in check. With some research, patience, and planning, a CIO can implement a very cost-effective cloud strategy, one that lets the CFO quickly leave both the rock and the hard place behind when reviewing the monthly cloud spend.

Next Steps

Stone Door Group is a modern cloud and DevOps solutions integrator with a bench of certified AWS, Azure, and Google Cloud Platform consultants. Its Cloud Cost Optimization Accelerator helps CFOs reduce the cost of their cloud consumption through the creation of a Cost Reduction Plan (CRP) and a Cost Reduction Goal (CRG). Through executive coaching and technical implementation, Stone Door Group systematically reduces monthly public cloud costs for enterprises of any size.


About the Author

Mike McDonough is a Senior Red Hat Architect at Stone Door Group, responsible for designing and blueprinting OpenShift and Ansible architectures for Fortune 500 customers across a broad array of Red Hat technologies. With a 25+ year career in enterprise IT, Mike has deep experience in cloud architecture, automation, performance tuning, and information security. He is also the architect of the Cloud Cost Optimization Accelerator℠ offering. To learn more about how Stone Door Group can help control your cloud costs, email us at letsdothis@stonedoorgroup.com or download a data sheet.