If we ask two people looking at the same cloud in the sky what shape it has, we will probably receive two very different answers. The same is true when we ask two CIOs what their perception of the public cloud is. Among unicorns and monsters, few know how to distinguish the stormy cumulonimbus from the light, low-lying stratocumulus.
In this rapidly evolving environment, still searching for its identity, several myths about the public cloud have taken hold, usually fueled by misinformation and by the interests of those who feel threatened by the paradigm shift. I think it is worthwhile to share the perspective of companies that have already migrated to the cloud and can now testify to what it means to run most of their systems in this invisible environment.
The first of these myths concerns the performance of the cloud. The complaint is that the cloud often does not meet users' expectations and leaves much to be desired in terms of availability. The fact is that most of these complaints come from companies that have placed their systems in smaller data centers with various technological restrictions, and that generally offer hosting services where the hardware is owned by the customer. This is not the reality of the public cloud. In the clouds of major providers such as AWS, Google, Microsoft, and Oracle, resources are virtually unlimited for most customers, and the quality of service depends entirely on the features and settings you choose for your environment.
The sky is the limit in terms of performance. If something is not working properly, it is just a matter of adjusting resource parameters to obtain more network bandwidth, CPU, memory, or storage. In addition, several tools help rework the solution architecture to make it more easily scalable, delivering better performance in an optimized and inexpensive way.
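The idea of scaling a cluster to match demand can be illustrated with a toy autoscaling rule. This is a minimal sketch; the thresholds and limits are illustrative assumptions, not any provider's actual algorithm or defaults:

```python
# Toy horizontal-scaling decision rule. Thresholds, minimum, and
# maximum are illustrative assumptions, not any provider's defaults.

def desired_servers(cpu_utilization: float, current: int,
                    scale_up_at: float = 0.75, scale_down_at: float = 0.25,
                    minimum: int = 1, maximum: int = 10) -> int:
    """Return the new server count for a cluster given its average CPU use."""
    if cpu_utilization > scale_up_at and current < maximum:
        return current + 1   # add one server under heavy load
    if cpu_utilization < scale_down_at and current > minimum:
        return current - 1   # remove one server when nearly idle
    return current           # otherwise leave the cluster alone

# A busy cluster of 2 grows to 3; a nearly idle cluster of 3 shrinks to 2.
print(desired_servers(0.90, 2))  # → 3
print(desired_servers(0.10, 3))  # → 2
```

Real autoscalers in the large public clouds apply the same principle with far richer signals (request rates, queue depth, schedules), but the core loop is this: measure, compare against a target, and add or remove capacity.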
The second myth is about security. There is a perception that moving data and systems to the cloud increases exposure and vulnerability to security threats. Here, once again, we are victims of poor implementations by hosting providers that claim to be cloud but do not offer the basic security features available in large public clouds. The lack of knowledge about the security mechanisms natively available for all environments also contributes to this perception. Among them we find complete services for system isolation, encryption, attack prevention and mitigation, monitoring, and auditing, in addition to an extremely sophisticated and granular identity and access control system covering all cloud resources.
It is extremely difficult to find companies that keep this set of tools permanently updated, with an automated SOC (Security Operations Center) and physical-access controls as strict as those of the large public cloud providers. Thus, it becomes a matter of education and knowledge to understand that, in most cases, migration to the cloud improves the overall security of a company's systems.
The third myth is about the costs of the cloud. This topic can become quite complex because there are several models for purchasing the same resources. However, the myth that the public cloud is more expensive comes from poor comparisons with plain hosting services, where resources are statically allocated (purchased) with margins for growth already included.
The fundamental issue is that in the public cloud we can request expansions immediately, so it makes no sense to pre-allocate resources that are not being used. Since we pay by the hour of use, any time a server is down already represents savings. The goal is to size workloads much closer to actual consumption (rightsizing) and to use the ability to scale horizontally by adding more servers in the form of machine clusters. Thus, we can create and terminate servers throughout the day, adjusting service capacity to what is actually used; perhaps the simplest example is scheduling the startup and shutdown of servers that operate only during business hours. In other words, through monitoring, automation, and tools that allow the dynamic use of resources, we can realize the savings of the cloud.
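The savings from the business-hours example above can be estimated with simple arithmetic. The hourly price below is a made-up figure for illustration, not a quote from any provider:

```python
# Back-of-the-envelope comparison of an always-on server versus one
# scheduled to run only during business hours. The hourly price is an
# assumed illustrative figure, not any provider's actual rate.

HOURLY_PRICE = 0.20    # assumed price per server-hour (USD)
HOURS_PER_MONTH = 730  # average number of hours in a month

def monthly_cost(hours_on: float, price: float = HOURLY_PRICE) -> float:
    """Cost of a pay-per-hour server that runs `hours_on` hours per month."""
    return hours_on * price

always_on = monthly_cost(HOURS_PER_MONTH)
# Business hours: roughly 12 hours a day on about 22 weekdays a month.
business_hours = monthly_cost(12 * 22)

print(f"Always on:      ${always_on:.2f}")       # → $146.00
print(f"Business hours: ${business_hours:.2f}")  # → $52.80
print(f"Monthly saving: ${always_on - business_hours:.2f}")  # → $93.20
```

Even in this crude model the scheduled server costs roughly a third of the always-on one, before any rightsizing or horizontal scaling is applied.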
The last myth is related to the construction of hybrid clouds (public/private) as a strategy to deal with the supposed inability of the public cloud to accommodate certain applications. Of course, there will always be very specific applications that are extremely difficult to move to the cloud, but they are much rarer than the ones I see used as a justification for keeping private clouds. To illustrate, we can mention the services that allow you to take ERPs to the cloud safely and efficiently. These are vital applications for companies, often written in legacy technologies and interconnected with several other services, yet they have been moved entirely to the cloud with great success.
For the vast majority of companies, the local infrastructure will be restricted to users’ computers and internet access that interconnects them to the cloud. The private part of the hybrid cloud appears only as a temporary solution to accommodate an inability to bring everything to the cloud at once.
Of course, there is no single way to view the public cloud. Depending on the reality and needs of each company, the answer may change; however, the gains in scale, sophistication, and convenience of the public cloud establish a new infrastructure paradigm in which it is increasingly difficult to justify buying and maintaining one's own equipment. We need to get used to the fact that the public cloud is something much more concrete, complete, and sophisticated than the infrastructure services that preceded it. With that, I imagine the cloud with the face of a mythological monster will become just a beautiful cluster of water droplets floating in the sky like cotton.