Guest author Sharon Wagner is founder and CEO of Cloudyn, a provider of cloud analytics and optimization services.
When we look at the future of cloud, we have to consider the current business trajectory. It’s clear to me that the three most consumed cloud resources are compute, database and storage—they account for about 80% of the average business customer’s monthly cloud bill.
Given that these three components are the foundation of any cloud application, it should come as no surprise that most cloud vendors expect to keep investing in these domains in 2014.
Here’s how the major players are poised to perform next year: Amazon Web Services (AWS) reports that its fastest-growing service in 2013 was its DynamoDB database service. Google continues to invest in additional Compute Engine compute and storage configurations. Among the multicloud customers we do business with, Google Compute Engine is the second most-used cloud after AWS, with Microsoft Azure in third place.
Let’s be realistic: How long will it take for Google to leverage its worldwide data-center presence and match AWS’s cloud-computing capabilities? Probably not long, given the amount of time and effort Google is putting into its Google Partners program.
Five 9s Won’t Matter
Looking back at the Amazon Web Services, Google Compute Engine and Microsoft Azure outages of 2013, we saw glitches here and there, but overall quality in terms of availability and response time remained quite high. From the enterprise’s perspective, that means 99.99% vs. 99.999% availability will not be the deciding factor.
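For context, the absolute difference is smaller than it sounds. Here is a quick back-of-the-envelope sketch (not tied to any particular provider’s SLA terms) of what four nines versus five nines means in downtime per year.

```python
# Rough downtime-per-year arithmetic for "four nines" vs. "five nines" availability.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for availability in (0.9999, 0.99999):
    downtime_minutes = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} availability -> ~{downtime_minutes:.1f} minutes of downtime per year")

# 99.990% availability -> ~52.6 minutes of downtime per year
# 99.999% availability -> ~5.3 minutes of downtime per year
```

Roughly 53 minutes versus 5 minutes a year; for most enterprise workloads, neither figure will drive the purchasing decision.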
So what will the decision criteria be in 2014? All three giants (and others like Hewlett-Packard and IBM) have high market credibility for performance and availability. Assuming the major cloud providers all catch up and offer comprehensive computing, database and storage capabilities, differentiating between vendors will be tough. At the end of the day, choosing the right cloud provider will probably come down to cost.
Amazon reduces its prices frequently. Google announced Storage and Compute price reductions of 60% in December. We will continue to see a tough and bloody war between the three vendors next year.
While Amazon Web Services has built a sophisticated (yet complex) reserved-instance capacity model, which accounts for 30% of its running capacity, Google still charges on demand. I suspect this will change very quickly so Google can stay competitive on pricing. Google’s by-the-minute pricing is an advantage for specific use cases that require instances to be launched frequently for short periods of usage. This puts companies such as ProfitBricks and CloudSigma, which introduced granular billing as a differentiator, in an interesting dilemma. Microsoft, meanwhile, gives customers discounted credit points they can use across all of its cloud products.
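To make the by-the-minute point concrete, here is a minimal sketch with made-up numbers (the rates and job profile are assumptions, not any vendor’s published pricing) showing how hourly and per-minute billing diverge for short, frequent runs.

```python
import math

# Hypothetical figures for illustration only.
HOURLY_RATE = 0.10       # assumed on-demand price per instance-hour
JOB_MINUTES = 10         # a short-lived job
LAUNCHES_PER_DAY = 50    # launched frequently throughout the day

# Hourly billing typically rounds each run up to a full hour.
hourly_billed = LAUNCHES_PER_DAY * math.ceil(JOB_MINUTES / 60) * HOURLY_RATE
# Per-minute billing charges only for the minutes actually consumed.
per_minute_billed = LAUNCHES_PER_DAY * (JOB_MINUTES / 60) * HOURLY_RATE

print(f"Hourly billing:     ${hourly_billed:.2f} per day")
print(f"Per-minute billing: ${per_minute_billed:.2f} per day")
# Hourly billing:     $5.00 per day
# Per-minute billing: $0.83 per day
```

For long-running instances the two models converge, which is why the advantage only shows up in bursty, short-lived workloads.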
Therefore, the vendor with the most aggressive price plans (which shorten the break-even point) will be able to lock in the most customers for long-term engagements. It’s going to be rough, but it will increase customers’ return on investment, and that is good for business. One fallout is that local infrastructure-as-a-service providers will lose more business and will need to find the right way to work with the big giants.
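The break-even point itself is simple arithmetic: the upfront commitment divided by the hourly savings it buys. A hedged sketch with invented prices (not any vendor’s actual rate card):

```python
# Hypothetical reserved-capacity break-even calculation; all numbers are made up.
UPFRONT_FEE = 300.0        # assumed one-time reservation fee
RESERVED_HOURLY = 0.06     # assumed discounted hourly rate with the reservation
ON_DEMAND_HOURLY = 0.12    # assumed regular on-demand hourly rate

# Hours of usage at which the reservation starts paying for itself.
break_even_hours = UPFRONT_FEE / (ON_DEMAND_HOURLY - RESERVED_HOURLY)
print(f"Break-even after ~{break_even_hours:.0f} hours "
      f"(~{break_even_hours / 730:.1f} months of continuous use)")
# Break-even after ~5000 hours (~6.8 months of continuous use)
```

The lower the upfront fee or the deeper the discount, the sooner the commitment pays off, which is exactly the lever the vendors will be pulling.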
2014: The Year Of The Cloud Brokers
There has been a lot of talk about cloud brokers this year, the intermediaries between providers of cloud services and the companies that buy those services. I believe cloud brokers that combine technology, consulting and financial buying power represent a new and exciting business model in the cloud.
This shift may accelerate in 2014 following Dell’s acquisition of EnStratus and CSC’s acquisition of ServiceMesh. These and other brokers will give cloud consumers the freedom to choose which services to buy, from whom and when, based on their preferences and the variety of supported services. In addition, customers will use broker management platforms to get clearer insight into their cloud usage and to orchestrate and provision workloads faster and smarter.
Long Live Cloud Service Management
In the old days of data centers, IT managers and CIOs looked for maximum control over the infrastructure. In the era of cloud computing, CMOs and other departments are increasingly taking control and creating their own IT budgets. This raises an interesting question: How can IT keep control of the cloud (consumption, cost, security, service-level agreements) without disrupting the “new way of work” adopted by developer/operators and non-technical service consumers?
I believe this calls for new tool sets to replace the old, traditional ITIL (Information Technology Infrastructure Library) solutions. New tools will manage everything from self-service request management and cost estimates to provisioning, governance, cloud monitoring and access control. An essential component of cloud service management is the service catalog, which lists all the cloud services the organization can use. James Staten of Forrester also hits on this point in his 2014 predictions.
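To make the idea of a service catalog more concrete, here is a minimal, hypothetical sketch of what a catalog entry might capture; the fields and entries are my own assumptions, not any specific product’s schema.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    name: str                      # the service as exposed to internal consumers
    provider: str                  # underlying cloud vendor
    resource_type: str             # compute, database, storage, ...
    estimated_monthly_cost: float  # shown at request time for budgeting
    approved_roles: list = field(default_factory=list)  # who may self-provision it
    sla: str = "99.9%"             # availability commitment surfaced to the requester

catalog = [
    CatalogEntry("small-web-server", "AWS", "compute", 35.0, ["dev", "ops"]),
    CatalogEntry("managed-nosql-db", "AWS", "database", 120.0, ["ops"], sla="99.99%"),
    CatalogEntry("archive-bucket", "Google", "storage", 10.0, ["dev", "ops", "marketing"]),
]

# A self-service workflow would filter the catalog by the requester's role,
# show the cost estimate up front, and hand approved requests to provisioning.
for entry in catalog:
    print(f"{entry.name}: {entry.provider} {entry.resource_type}, "
          f"~${entry.estimated_monthly_cost:.0f}/month, SLA {entry.sla}")
```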
The Endgame In 2014
The current enterprise footprint in the cloud is not significant: most enterprises start with storage services, not computing services. In other words, they use cloud storage as a “garbage collector” for data that can’t or shouldn’t sit in their private data centers.
Regardless of the vendor, I believe enterprises will take a big step toward adopting new technologies and will significantly leverage open-source platforms, mainly OpenStack. The conversation will no longer be about whether to use cloud computing; the question will be when to use it, for which applications, and how to get the most out of it.
Photo courtesy of Flickr user John Mueller