By: Caetano Notari

Cloud computing is no longer a bet on the future; it has become part of the reality of companies in Brazil. The consulting firm Frost & Sullivan estimated the IaaS market in the country at BRL 2.7 billion in 2017, more than half of what is invested in the entire Latin American region. The expectation is for growth above 30% per year until 2022.

The adoption of cloud computing has direct impacts on organizations: greater flexibility and agility in provisioning infrastructure, lower hardware management costs, improved security, among others. This first wave is easy to quantify and measure, but after a few months the question arises: what do we do now to keep reducing costs?

Cloud waste

According to RightScale’s “2017 State of the Cloud Report”, IT professionals estimate that around 30% of cloud investment is wasted. RightScale itself measured an even higher figure, between 30% and 45%, which explains why 64% of respondents at more mature cloud companies are focused on cost optimization.

The biggest cost offenders in the cloud

The reasons are many, but we highlight some of the most common in companies already operating in the cloud:

  • Cloud infrastructure sizing using on-premise server metrics and assumptions: many experienced IT professionals have developed hardware-sizing methodologies based on long-term forecasts. What made sense when acquiring equipment was complex, expensive, and time-consuming no longer makes sense when servers can be resized online in an instant;
  • IT disaggregation: as there is no purchase of servers, it is common to see areas of companies hiring servers and paying with credit cards, without centralized coordination;
  • Inefficient use of instances, which spend long periods with low CPU load or low database usage;
  • Extensive use of On-Demand Instances at full list prices, when Reserved Instances offer discounts of up to 75%.

To mitigate the difficulties described above, we recommend following the steps below:

Cloud usage mapping

The first recommendation is to create an inventory of all accounts and all services being used in the cloud. In companies with multiple departments this is not simple, but it is fundamental. Experiments with Big Data, customer segmentation, and Business Intelligence analyses can now be performed by data scientists who are not part of the IT department but are capable of setting up entire environments for their simulations.

This activity must be coordinated by the IT area, collecting information on the services used, instance types, contracted databases, and additional services. We also recommend publishing company-wide guidelines for contracting cloud services.
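A simple way to consolidate such an inventory is to group resources by an ownership tag and flag anything untagged. The sketch below is purely illustrative: the tag name, resource records, and IDs are hypothetical, not from any real account.

```python
# Hypothetical sketch: consolidate a cloud inventory by department using
# resource tags. Records mimic what an account scan might produce.
from collections import defaultdict

resources = [
    {"id": "i-0a1", "service": "EC2", "type": "m5.large", "tags": {"Dept": "Data Science"}},
    {"id": "db-01", "service": "RDS", "type": "db.r5.xlarge", "tags": {"Dept": "Finance"}},
    {"id": "i-0b2", "service": "EC2", "type": "t3.micro", "tags": {}},  # untagged: no owner
]

def inventory_by_department(resources):
    """Group resources by the 'Dept' tag; untagged ones surface as UNASSIGNED."""
    groups = defaultdict(list)
    for r in resources:
        dept = r["tags"].get("Dept", "UNASSIGNED")
        groups[dept].append((r["service"], r["id"], r["type"]))
    return dict(groups)

print(inventory_by_department(resources))
```

Resources landing in the "UNASSIGNED" bucket are exactly the decentralized, credit-card-purchased servers that escape coordination.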

Broad data collection

Based on the defined inventory, it's time to gather information about each of the identified items.

On AWS, data can be collected via CloudWatch from all contracted services, whether EC2 instances, EBS or EFS volumes, RDS databases, NoSQL databases such as Amazon DynamoDB, among others. For those who want to go deeper into the concepts, limits, and resources, we recommend the AWS documentation.

As only 14 days of data are available by default, these analyses do not capture seasonal volumes, but they are a starting point. For broader analysis, we recommend additional tools such as New Relic, Nagios, or PagerDuty, which retain and extract much more data.
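Once the metric datapoints are collected, summarizing them is straightforward. The sketch below assumes CPU-utilization datapoints have already been fetched (e.g., from CloudWatch); the values here are hard-coded samples for illustration.

```python
# Sketch: summarize CPU utilization for one instance. The datapoints
# mimic percent-CPU averages that a metrics query would return over the
# collection window; they are illustrative sample values.
datapoints = [3.2, 4.1, 2.8, 55.0, 3.5, 2.9, 4.4]

avg_cpu = sum(datapoints) / len(datapoints)   # overall average utilization
peak_cpu = max(datapoints)                    # highest observed spike

print(f"avg={avg_cpu:.1f}% peak={peak_cpu:.1f}%")
```

Comparing the average against the peak already hints at right-sizing decisions: an instance averaging near 10% CPU with only occasional spikes is a candidate for a smaller type.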

Quick wins

In this analysis, it is easy to identify instances that are underutilized in both CPU and memory. We can also look at disk and network I/O usage to check whether the configurations were done correctly.

Since instances can be provisioned instantly, or have their configurations changed quickly, it makes no sense to keep idle capacity. We recommend turning such instances off, or reducing their size, and starting to save right away.

The use of S3 or EFS can also be optimized: it is common to find unnecessary backups or obsolete files that can be removed without problems.
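The underutilization screen described above can be expressed as a simple filter. The threshold and the utilization figures below are hypothetical and should be tuned per workload.

```python
# Sketch: flag instances whose average CPU stays under a threshold,
# making them candidates for downsizing or shutdown. Figures are
# illustrative, not from a real environment.
AVG_CPU_THRESHOLD = 10.0  # percent; tune per workload

utilization = {
    "i-web-01": 4.2,      # mostly idle web server
    "i-batch-02": 62.5,   # busy batch worker, leave alone
    "i-test-03": 1.1,     # forgotten test machine
}

candidates = [inst for inst, cpu in utilization.items() if cpu < AVG_CPU_THRESHOLD]
print("Downsize/shutdown candidates:", candidates)
```

Running a report like this periodically turns the quick-win hunt into a routine rather than a one-off cleanup.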

Structured Reduction

One of the advantages of the cloud is being able to create dedicated infrastructure for each application, streamlining its operation and allowing accurate control of the processing capacity allocated to each one.

Our recommendation is to combine the use of flexible infrastructure with cheaper instances.

Instance Scaling

Using resources only when needed is possible with autoscaling and load balancers. Creating applications that do not store data in sessions is essential to allow this scalability. With this, it is possible to provision, in real time, only the processing capacity that is needed. Sky.Saver manages autoscaling with flexibility, allowing real-time adjustment of the number of running instances.
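The core of any autoscaling policy is a rule that maps current load to a desired instance count. The sketch below shows one minimal such rule; the capacity-per-instance figure and the bounds are hypothetical assumptions, not Sky.Saver's actual algorithm.

```python
# Sketch of an autoscaling decision rule: keep just enough instances to
# serve the current request rate, within safety bounds. All parameters
# are illustrative.
import math

def desired_instances(current_rps, rps_per_instance=500, min_n=2, max_n=20):
    """Return the instance count needed for the current request rate."""
    needed = math.ceil(current_rps / rps_per_instance)
    # Clamp between a floor (for availability) and a ceiling (for cost).
    return max(min_n, min(max_n, needed))

print(desired_instances(1800))  # 1800 req/s at 500 req/s per instance
```

Because the applications are stateless, any instance added or removed by this rule can immediately serve traffic behind the load balancer.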

Cheaper instances

AWS offers two purchasing options with deep discounts: Reserved Instances and Spot Instances.

Reserved Instances require a long commitment period, starting at one year and reaching up to three, with significant discounts of up to 75%. For applications with long-term predictability of demand, this option should be considered.
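The impact of that discount is easy to put in numbers. The hourly rate below is a hypothetical figure chosen only to illustrate the arithmetic at the 75% upper bound cited above.

```python
# Sketch: annual cost of an always-on instance, on-demand vs. reserved
# at a 75% discount. The hourly rate is illustrative.
on_demand_hourly = 0.10   # USD/hour, hypothetical
ri_discount = 0.75        # upper-bound Reserved Instance discount

hours_per_year = 24 * 365
on_demand_yearly = on_demand_hourly * hours_per_year
reserved_yearly = on_demand_yearly * (1 - ri_discount)

print(f"on-demand: ${on_demand_yearly:.2f}/yr, reserved: ${reserved_yearly:.2f}/yr")
```

For a server that runs 24/7 with predictable demand, the reservation pays for its inflexibility many times over.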

In addition, there are Spot Instances, which can cost as little as 10% of the on-demand price. They are perfect for workloads such as TensorFlow for Artificial Intelligence and Machine Learning, or Hadoop for Big Data, which require instances with high processing power.

The main difference between On-Demand and Spot Instances is that Spot Instances can be stopped by EC2 with two minutes' notice when EC2 needs the capacity back. As this happens regularly, they cannot be used as-is for production applications.
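What makes Spot viable for batch workloads is checkpointing: the job saves its progress so a replacement instance can resume where the interrupted one stopped. The sketch below simulates that pattern; on AWS the interruption signal would come from the instance metadata endpoint, which is not shown here.

```python
# Sketch: a batch job that checkpoints progress so it survives a Spot
# interruption (EC2 gives ~2 minutes' notice). The interruption is
# simulated via a parameter for illustration.
def run_batch(items, checkpoint, interrupted_at=None):
    """Process items from a saved checkpoint; return the new checkpoint."""
    done = checkpoint
    for i in range(checkpoint, len(items)):
        if interrupted_at is not None and i == interrupted_at:
            return done  # persist progress and exit before termination
        # ... real work on items[i] would go here ...
        done = i + 1
    return done

items = list(range(10))
cp = run_batch(items, checkpoint=0, interrupted_at=6)  # Spot instance reclaimed
cp = run_batch(items, checkpoint=cp)                   # new instance resumes
print("processed:", cp, "items")
```

With this pattern, an interruption costs only the work since the last checkpoint, which is why frameworks like Hadoop and TensorFlow (with model checkpoints) pair so well with Spot pricing.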

Sky.Saver, a service developed by the Sky.One group, manages all of this transparently. In this way, you benefit from reduced costs while keeping EC2 capacity available.

Learn more by talking to an expert!

If you have questions about this topic, or need help lowering your costs, talk to our AWS experts.

Written by

Sky.One Team

This content was produced by Sky.One's team of cloud and digital transformation experts.