Taking Advantage of the Cloud Infrastructure: Database Servers
So how will the server-as-software nature of the cloud change infrastructure design? The biggest advantages should be realized in workloads that write critical data (as opposed to read-only workloads or those that write only temporary data), because those are the hardest to change and scale.
Take, for example, the database server. In many ways, the place and design of a database server in system infrastructure is much the same as it was at the beginning of the personal computer revolution: a beefy machine that handles all data-related writing and retrieval, perhaps with a backup waiting in the wings. An Oracle database server circa 1995 in a LAN client/server architecture looks very similar to a cloud-hosted MySQL instance serving a web application frontend: humans are required to launch it or fail it over, physical resources are limiting (prohibitively expensive, dependent on human intervention, and, in the cloud, constrained by slow I/O), and dependent clients require the database server to have 100% uptime.
And the one software-like aspect a database server might seem to have, a single function, usually turns out to be untrue in practice: database servers often handle many different functions (e.g. authentication, storage and retrieval of user data, storage and retrieval of global data, exact record retrieval, full-text searching, and BI-style aggregation and number-crunching).
However, by embracing the software-like aspects of the cloud, database servers can be redesigned to be better, cheaper, faster, and more secure. One step in redesigning the database server is to split its various functions out onto different servers: a cache server for frequently read items (e.g. Memcached), an OLAP database server for business intelligence (e.g. Mondrian with a local MySQL database), a full-text index database for text-based searches (e.g. Apache Solr), and different relational databases for other purposes (e.g. PostGIS for geospatial data and MySQL for typical structured transactional data).
By breaking out these functions, the infrastructure designer can allocate exactly the right amount of resources to each one, run each on a platform designed for its workload, and lock each server down so that only clients authorized for that particular function are admitted.
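To make the frequently-read-items piece concrete, here is a minimal cache-aside sketch in Python, assuming a Memcached node and a MySQL database reachable at hypothetical internal hostnames; the python-memcached and PyMySQL client libraries, the table schema, and the key format are all illustrative choices, not anything prescribed above:

```python
import memcache   # python-memcached client; hostnames, keys, and schema are placeholders
import pymysql

cache = memcache.Client(["cache1.internal:11211"])
db = pymysql.connect(host="mysql1.internal", user="app", password="secret", database="app")

def get_user(user_id):
    """Cache-aside read: try the cache server first, fall back to the relational database."""
    key = "user:%d" % user_id
    user = cache.get(key)
    if user is None:
        with db.cursor(pymysql.cursors.DictCursor) as cur:
            cur.execute("SELECT id, name, email FROM users WHERE id = %s", (user_id,))
            user = cur.fetchone()
        cache.set(key, user, time=300)   # keep hot reads off the database for five minutes
    return user
```

The cache server can be sized (and secured) purely for read throughput, while the relational database only sees the misses.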
However, the functional breakout above still does not harness the cloud’s essentially unlimited resources and ephemeral nature. To get the full cloud benefit, one must abandon traditional databases altogether and move to a database built for the cloud, such as HBase. HBase, modeled on Google’s Bigtable, is probably the best-known open source database that takes advantage of the unlimited and ephemeral nature of the cloud.
HBase is meant to hold lots of data and run across many servers, the underlying data can be replicated to clusters in other data centers, and failover is easily automated. HBase can also dynamically scale up or down to more or fewer servers. In other words, if HBase’s single function can serve your database needs, then HBase takes advantage of all four of the server-as-software characteristics of the cloud.
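For a sense of what working against HBase looks like, here is a minimal sketch using the happybase Python client, which talks to HBase through its Thrift gateway; the hostname, table name, column family, and row-key scheme are assumptions made purely for illustration:

```python
import happybase  # Thrift-based HBase client for Python

connection = happybase.Connection("hbase-thrift.internal")  # hypothetical Thrift gateway host
table = connection.table("user_events")                     # hypothetical table and schema

# Writes land on whichever region server currently owns this row's region;
# HBase splits and redistributes regions itself as the cluster grows or shrinks.
table.put(b"user42|2010-06-01", {b"event:type": b"login", b"event:ip": b"10.0.0.8"})

row = table.row(b"user42|2010-06-01")
print(row[b"event:type"])
```

The client never addresses an individual database server, which is exactly what lets the cluster add or lose nodes without the application noticing.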
Taking Advantage of the Cloud Infrastructure: Other Servers
Although the most dramatic cloud-driven infrastructure changes may be with respect to database servers, other types of servers may also benefit from the advantages of the cloud.
For example, application servers in the cloud may not be as “single function” as they appear on traditional servers.
An application server might have both customer-facing and administrative functions, where the administrative functions (such as running complex reports or extracting and compressing massive amounts of data for download) are both rarely used and a drain on resources for customers. In a properly designed cloud architecture, the administrative functions could run on a separate, more powerful server that is launched on demand and terminated automatically after some period of idleness.
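A minimal sketch of that pattern, assuming AWS and the boto3 SDK; the AMI ID, the instance type, and the dispatch_report_job helper are hypothetical placeholders:

```python
import boto3  # AWS SDK for Python

ec2 = boto3.client("ec2")

def run_admin_report():
    # Launch a larger, short-lived server just for the heavy administrative work.
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical image with the admin tooling baked in
        InstanceType="m5.2xlarge",
        MinCount=1, MaxCount=1,
    )
    instance_id = resp["Instances"][0]["InstanceId"]
    try:
        dispatch_report_job(instance_id)   # hypothetical: hand the report job to the new server
    finally:
        # Terminate as soon as the work is done so customer-facing servers never pay for it.
        ec2.terminate_instances(InstanceIds=[instance_id])
```

The customer-facing application servers never see the load spike, and the big instance exists only for the minutes it is actually working.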
An application server might also serve a website and an API, where website traffic is relatively consistent, cacheable, and must return within a second, whereas API responses cannot be cached and the API must absorb huge influxes of requests. In a traditional server environment, these functions would likely be combined onto the same web application servers, but in the cloud they should be divided onto separate servers with different hardware profiles and auto-scaling plans.
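On AWS, for example, that division might look roughly like the following boto3 sketch with two independently sized auto-scaling groups; the group names, launch configurations, sizes, and availability zones are illustrative assumptions:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Steady, cacheable website traffic: a small group of modest instances.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-frontend",
    LaunchConfigurationName="web-frontend-lc",   # assumed to exist already
    MinSize=2, MaxSize=4,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)

# Bursty, uncacheable API traffic: allowed to scale out much further under load.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="api-backend",
    LaunchConfigurationName="api-backend-lc",    # assumed to exist already
    MinSize=2, MaxSize=40,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)
```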
Servers used for internal data processing can also benefit from the advantages of the cloud. For example, restoring a large compressed database from online storage can be very taxing on server resources in the cloud; simply un-gzipping a few multi-gigabyte files can bring regular operations to a halt. This is the perfect time to spin up a separate cloud server to handle the decompression and restoration tasks. Or, another internal example: database backups can also be resource-intensive and slow down database operations; instead, launch a replication slave, sync up the database, and then lock tables and back up from the slave.
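A rough sketch of the backup-from-the-slave approach, assuming MySQL replication is already configured, using the PyMySQL client plus the stock mysqldump tool; hostnames, credentials, and file paths are placeholders:

```python
import subprocess
import pymysql

# Connect to the replication slave (hostname and credentials are placeholders).
replica = pymysql.connect(host="db-replica.internal", user="backup",
                          password="secret", database="app")

with replica.cursor() as cur:
    cur.execute("STOP SLAVE SQL_THREAD")         # pause applying changes so the snapshot is consistent
    cur.execute("FLUSH TABLES WITH READ LOCK")   # hold the read lock for the life of this connection

# Dump from the slave; the primary database keeps serving traffic untouched.
subprocess.run(["mysqldump", "--host=db-replica.internal", "--user=backup",
                "--all-databases", "--result-file=/backups/nightly.sql"], check=True)

with replica.cursor() as cur:
    cur.execute("UNLOCK TABLES")
    cur.execute("START SLAVE SQL_THREAD")        # let the slave catch back up to the primary
replica.close()
```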
Final Thoughts on Cloud Architecture
In general, architectures that take advantage of the cloud should break down work into jobs that can be run separately on servers that are designed to terminate when not needed. This architectural move from traditional servers to the cloud may be seen as roughly analogous to the move from functional programming to event-driven programming in software design: react specifically to only what is needed, and do not design around waiting or idle time.
If you can achieve this in your cloud-based architecture, then you will truly have a faster, cheaper, more fault tolerant, and more secure deployment.