
Cutting the Fat Out of the Cloud Development Stack

“The wide availability of cloud computing offers an unprecedented opportunity to rethink how we construct applications,” opens the paper Turning Down the LAMP: Software Specialisation for the Cloud (PDF) – an “unashamedly academic” exploration of building custom kernels for the cloud. The paper’s authors – Anil Madhavapeddy, Richard Mortier, Ripduman Sohan, Thomas Gazagnaire, Steven Hand, Tim Deegan, Derek McAuley and Jon Crowcroft – built a prototype called Mirage to test their ideas. Essentially, Mirage is an extended version of Objective Caml running as a guest operating system on the Xen hypervisor. The authors claim this implementation exhibits “significant performance speedups for I/O and memory handling versus the same code running under Linux/Xen.” But what are the trade-offs?

Mirage stack
A conventional software stack (left) and the statically-linked Mirage approach (right).

The problem, according to the paper’s authors, is that traditional development stacks like LAMP are too thick, due in part to extensive support for legacy code. Since web apps are meant to be consumed in the browser, there’s no need for the guest OS to include things like graphics drivers and print spoolers.

“The key principle behind Mirage is to treat cloud virtual hardware as a compiler target, and convert high-level language source code directly into kernels that run on it,” the paper says. “Our prototype compiler uses the OCaml language to further remove dynamic typing overheads and introduce more safety at compile time.” In some ways this resembles an embedded operating system like QNX, but goes a step further. The result is an efficient, secure and simple “operating system.”
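The compile-time safety the authors lean on is a property of OCaml itself. Here is a toy sketch of the idea – the types and function names are hypothetical, not Mirage’s actual interface: handlers are ordinary statically typed functions, so an ill-typed handler or a non-exhaustive route is flagged by the compiler before any kernel image is built, with no dynamic type checks left at runtime.

```ocaml
(* Illustrative sketch only; these record types are not Mirage's API. *)
type request  = { path : string; body : string }
type response = { status : int; content : string }

(* A handler is a plain typed function; there is no dynamic dispatch,
   so the compiler verifies the types once, at build time. *)
let hello (req : request) : response =
  { status = 200; content = "Hello, " ^ req.body }

(* Routing is an ordinary pattern match; the compiler warns if a case
   is missing, rather than failing at runtime. *)
let route (req : request) : response =
  match req.path with
  | "/hello" -> hello req
  | _        -> { status = 404; content = "not found" }

let () =
  let r = route { path = "/hello"; body = "cloud" } in
  Printf.printf "%d %s\n" r.status r.content
```

Passing `{ status = "200"; ... }` here would be a compile error, which is the kind of overhead-free safety the quoted passage is describing.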

Because cloud hosts like Amazon AWS charge customers only for the resources they use, “software efficiency now brings direct financial rewards in cloud environments,” the authors note.

What are the downsides? I learned about Mirage at BarCamp Portland 2010. Participants in the session pointed out that while you gain efficiency with this approach, you also take on maintenance overhead. If you create a custom kernel, you’re responsible for maintaining it. It may be cheaper and easier to simply live with the extra resource costs of a full stack than to create these ultra-thin OSes.

However, there are some circumstances in which maximum optimization may be preferable – such as real-time analytics. But in these cases, does virtualization make sense?
