There are two camps in the cloud world. There are the cloud washers, who put a cloud sticker on everything. They say, “Oh, yes, it’s a private cloud for big data,” then throw in a few dozen other buzzwords to give their legacy technology a new look. The other camp, the cloud services providers, enable customers to innovate in less time than it would take if they had to rely on static, traditional systems.
The cloud washers will lose in this new fast world. Their camp will be abandoned and replaced by a next generation of service providers that offer ways to build apps, host them anywhere and do it all in a fluid fashion. It’s happening now and it’s happening faster than anyone thought it would. Customers want on-demand services, not a reinvention of what they already have.
“They do not want something that is called cloud,” said George Reese, who earlier this year sold his company, Enstratius, to Dell. “They want it on-demand. You can’t make up on-demand self-service.”
On-demand services such as Amazon Web Services (AWS) have existed for several years and are designed to collect, process, distribute and analyze small pieces of data, such as tweets, Facebook updates and pictures, that arrive at tremendous speed and volume.
But with speed comes another dimension to the way data behaves, RedMonk analyst Donnie Berkholz told me this week. Customers need new ways to accelerate their production as app development becomes more industrial in nature. Apps can be built far more easily than ever before, as increasingly there are discrete services that can be pieced together. It’s a different kind of factory from the hierarchy that comes with a conveyor-belt model.
In this new reality, customers will build what Warner Music Group Executive Vice President and CTO Jonathan Murray calls the composable enterprise. Customers now operate static IT environments that exist in silos. They will increasingly have to consider the fluid way apps and data run in this mesh-like environment. Data has to be orchestrated and managed alongside the machines, databases, compute, networking and storage. That means the underlying systems and services have to keep data flowing without encumbrance.
Flame On
This combined state of speed and acceleration means diminished returns for companies that have used cloud washing to push their legacy technologies as new cloud services. In particular, I am thinking of companies calling for more hardware and more virtualization in enterprise environments. That may be suitable for savings and efficiency, but traditional IT management technology dressed in a cloud tutu does not put companies on a trajectory that allows them to innovate and provide their own differentiated services.
“It is this notion of validity,” said Apcera’s Derek Collison.
Collison is a former Google engineer who, after leaving the Googleplex, moved on to VMware and led development for Cloud Foundry, the popular platform-as-a-service (PaaS). He left VMware in 2012 and started Apcera, which he described on Twitter last week as a continuum of trusted autonomous computing. In fact, Apcera now has the trademark rights to the word “Continuum,” and will use it as a foundation for marketing its technology.
Apcera is an autonomous system that understands who wants to talk to what, and how. This is not meant literally but rather in the sense that, for example, the technology knows the semantics of a database and can associate it with a policy. It knows a resource’s physical location, where it sits in the system and, semantically, what it is. All of these descriptions are enforced and auditable, bound by a universal policy built into the service. So an app can be deployed with its governance and regulatory requirements attached, and those can be edited or changed when needed.
Collison describes the technology as the DNA of building blocks designed to deliver velocity through the pipeline. In essence, it integrates infrastructure as a service (IaaS), platform as a service and software as a service (SaaS) into one platform. He said it is configurable, fluid and adaptable, designed to run scheduled jobs that are autonomous and have a semantic understanding of the infrastructure and communications environment. Apcera works as a service that customers access through REST APIs over an HTTP interface, the kind of interface customers have grown accustomed to as the primary means of delivering services from complex backend systems.
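To make that concrete, here is a minimal sketch of what deploying an app through such a REST interface might look like. The host, paths and JSON fields are hypothetical illustrations, not Apcera’s actual API:

```python
# Hypothetical sketch of deploying an app over a REST/HTTP interface
# like the one described above. The endpoint and fields are assumptions,
# not Apcera's real API.
import json
import urllib.request

payload = {
    "app": "billing-service",        # hypothetical app name
    "package": "billing-1.0.tgz",    # hypothetical build artifact
    # The policy travels with the deployment, so governance and
    # regulatory rules are enforced wherever the app ends up running.
    "policy": {"region": "us-only", "audit": True},
}

req = urllib.request.Request(
    "https://continuum.example.com/v1/apps",  # hypothetical endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```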
In Collison’s view, customers should not be thinking that they have to go from 35 percent virtualization to 50 percent. Virtualization will not provide the speed and on-demand self-service that is needed to innovate.
VMware uses software-defined networking (SDN) as the framework for its marketing and strategy. The company acquired Nicira for $1.26 billion and has used it as a springboard to launch its networking hypervisor. The intent is for VMware to become the provider of virtualization technology for the entire data center: it has virtualized the compute, and now it is seeking to virtualize the storage and the networking, too.
Collison is convinced that SDN is not the answer. He describes the data center as a barnyard that VMware is trying to virtualize in its entirety. But that leaves no room for the sophisticated, communicating platform Collison is pushing with Apcera: an autonomous continuum that makes everything in the infrastructure autonomous.
Docker represents this concept of acceleration and the way apps are increasingly everywhere. Docker automates the deployment of apps as lightweight Linux containers that can be built and tested on a laptop and synced to run anywhere: on virtual machines, bare-metal servers, OpenStack clusters, public instances or any combination of on-premises and cloud offerings.
Apps Run Everywhere
Docker does not port the virtual machine or the operating system, which makes sense when considering that the infrastructure itself is becoming the operating system. The compute, storage and networking are already in place on a cloud service; the application just goes there to run.
The service avoids the issue that comes with moving virtual machines, which are not designed to move between clouds. So instead of moving the VM, Docker moves the code between the VMs, and most of the security is managed by the Linux kernel. Docker founder Solomon Hykes said in an interview this summer that developers particularly like the ability to continually test and integrate app containers, which makes for simpler and faster methods of building applications that can run anywhere. With Docker, platforms can be built that leverage the services of different providers to create lightweight environments for building and delivering apps.
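For a feel of that workflow, here is a minimal sketch using the Docker SDK for Python (the `docker` package); the image tag and build directory are placeholders:

```python
# Minimal sketch using the Docker SDK for Python ("pip install docker").
# The tag and build path are placeholders for illustration.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Build an image from the Dockerfile in the current directory;
# this is the container a developer builds and tests on a laptop.
image, _ = client.images.build(path=".", tag="myapp:latest")

# Run it locally. The same image can be pushed to a registry and run
# unchanged on a VM, a bare-metal server or a public cloud instance:
# the code moves between hosts, not the virtual machine.
container = client.containers.run("myapp:latest", detach=True)
print(container.logs().decode())
```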
Docker is a natural accompaniment to CoreOS, the new Linux-based operating system started by Alex Polvi, the founder of Cloudkick, which sold to Rackspace. Docker actually comes packaged with CoreOS, so applications can be moved between different services.
With CoreOS, applications are deployable units that can run anywhere. The OS, based on Google principles, updates automatically, much like the Chrome browser does. That is different from Ubuntu or Red Hat, which may be running different versions of the OS in different environments, and that can mean extra work to keep apps compatible.
“You can take the app and run it on AWS or Rackspace without modifications,” Polvi said. “CoreOS focuses on the apps, not machines.”
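As a sketch of what “without modifications” means from the operator’s side, the same image can be pulled and started on Docker daemons running at different providers. The host addresses and image name here are hypothetical:

```python
# The same container image runs unchanged on different providers' hosts.
# Host addresses and the image name are hypothetical; each host runs a
# Docker daemon (on CoreOS, Docker ships with the operating system).
import docker

hosts = [
    "tcp://aws-node.example.com:2375",  # an instance on AWS
    "tcp://rax-node.example.com:2375",  # an instance on Rackspace
]

for url in hosts:
    client = docker.DockerClient(base_url=url)
    client.images.pull("registry.example.com/myapp:latest")
    client.containers.run("registry.example.com/myapp:latest", detach=True)
```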
The Ultimate Disrupter?
Amazon Web Services took an early lead, embracing on-demand technologies with services that allow high-velocity startups to serve up pictures, video and updates. They bought tens of thousands of servers and started building a network of data centers around the world.
But much of what they built used technology designed for small clusters of two to four servers. To scale these operations has meant adding a wizard’s mix of software to connect hundreds and often thousands of servers. Apps need to run anywhere, all the time. But for the most part, these servers were designed for another age.
And it’s in this hardware realm where there is the potential for perhaps the most significant disruption and the cloud washers’ most serious threat: Open Compute, the effort to open-source the data center that Facebook initiated in 2011.
Facebook faced a problem when it started designing its Prineville, Ore., data center. The off-the-shelf servers had too much waste, so the company built its own and open-sourced the entire data-center operation. Facebook spearheaded the creation of the Open Compute Foundation, which has emerged as a force in open-sourcing not just the servers but, most recently, the network, too.
With open-sourced hardware, companies like Digital Ocean can build hardware specifically for their needs. That would be almost impossible with the current hardware vendors.
“We are looking forward to using Open Compute servers,” said Digital Ocean CEO and Co-Founder Ben Uretsky. “Since we are running Linux only, we can optimize the entire process. It provides a much more seamless experience. We will cut down on hardware. It will provide better cost efficiency. We are replacing the 30-year-old BIOS with 2013 standards. Open-source BIOS, that is a standard we are trying to push. It would decrease boot time from minutes to seconds.”
Look out, cloud washers, it’s just going to get worse. This shake-up is happening faster than anyone realized.