Editor’s note: Zev Laderman is the co-founder and CEO of Newvem, a service that helps optimize AWS cloud infrastructure.
Amazon Web Services (AWS) provides an excellent cloud infrastructure solution for both early-stage startups and enterprises. The good news is that AWS is a pay-per-use service, provides universal access to state-of-the-art computing resources, and scales with the growing needs of a business. The bad news is that AWS can be very hard for early-stage companies to onboard, while enterprises often spend too much time on ‘busy work’ trying to optimize AWS and keep costs under control.
We launched a private beta of ‘KnowYourCloud Analytics’, a tool that helps AWS users get to the bottom of their AWS cloud. By gathering data streams from multiple compute resources and crunching them with its state-of-the-art analytics engine, Newvem enables AWS users to discover potential cost savings, identify security vulnerabilities, and gain more control over availability.
Since our private beta’s launch, we’ve watched over 100,000 AWS instances and have seen users make repeated mistakes in their cloud operations. Some are simple, but they can result in massive security, availability, and cost issues for an organization.
Here are the ten most common mistakes you should avoid in order to make the most out of your AWS cloud footprint.
- Picking oversized instances. AWS offers a wide variety of instance types and sizes. Despite that flexibility, we found that many users pick instances far more powerful than they actually need, which leads to unnecessary costs.
- Provisioning too many instances. In addition to size, AWS gives users flexibility in the number of instances they run. As a result, they may run too many instances in clusters or behind load balancers. AWS is an on-demand service, so you don’t need to spin up all of the cluster nodes required for peak load in advance. Users can add nodes as needed, or automate provisioning with AWS’s Auto Scaling feature (a minimal setup is sketched after this list).
- Failing to make the right trade-offs when selecting instance types. AWS has a wide variety of instance types that differ by intended use, such as general-purpose servers, CPU- or memory-intensive workloads, I/O performance, and size. Without proper application benchmarking, it’s very challenging to pick the most suitable instance type. As a result, users may choose instance types that are too big for their needs and far more expensive. Tracking resource utilization and regularly revisiting these trade-offs helps optimize both utilization and cost.
- Leaving instances running idle. One amazing advantage of AWS is the ability to choose and provision instances based on the operational needs of your business; adding a new server is simply a matter of running through a wizard. As a by-product of this flexibility, however, users easily lose track of their instances and forget to turn them off, like leaving a room with the lights on. The result is confusion, wasted time figuring out which instances are still needed, and spiraling costs. A simple utilization check (see the first sketch after this list) can flag instances that are idle or oversized.
- Forgetting to clean up stale resources. Stale resources can become a management nightmare in cloud environments. AWS’s pay-per-use model is great in theory; in practice, a misunderstanding of what ‘use’ actually means leads to a series of problems. EBS volumes, for example, are charged by provisioned storage, whether or not they are attached to anything. Ideally, keep only the volumes you will actually need; volumes that are no longer planned for, or simply forgotten, can easily lead to unexpectedly high bills and a management headache (a cleanup sketch appears after this list).
- Taking too few or no EBS snapshots. One of AWS’s coolest features is the ability to create virtual copies of EBS volumes at specific points in time. These snapshots are an excellent way to back up data that has changed since the last snapshot. The problem: when snapshots are taken too rarely, recently changed data is at risk in the case of a crash or other data-loss event.
- Taking too many EBS volume snapshots. EBS snapshots also need to be taken in moderation. We found that a sizable share of our users take too many snapshots, which adds unnecessary complexity when managing backups. Even though you are only charged for differential data, snapshot sprawl can still increase storage costs on an aggregated basis. A scripted snapshot-and-prune routine (sketched after this list) keeps the count in check.
- Forgetting to release allocated Elastic IPs. Users tend to forget that AWS charges for Elastic IPs when they’re not in use. At a few cents per hour, the cost may not seem like much at first, but we’ve found users with hundreds of idle Elastic IPs literally lying around, which can easily add a few thousand dollars a year to the AWS bill (a quick audit is sketched after this list).
- Failing to properly configure security groups. Many AWS users misconfigure their cloud infrastructure with inherent security flaws and vulnerabilities. Many of these flaws open a loophole into the network that can easily be exploited by a variety of security threats; a common example is leaving administrative ports open to the entire internet (see the audit sketch after this list).
- Not taking advantage of multiple Availability Zones. AWS Availability Zones are distinct data centers within a region across which you can distribute a workload. Spreading servers, and the load balancing in front of them, across multiple zones is a very effective way to lower the risk that a single outage takes down your application. Unfortunately, most users don’t think about this until they experience an outage (a quick distribution check is sketched after this list).
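Several of these checks are easy to script against the AWS APIs. The sketches below are minimal illustrations using the AWS SDK for Python (boto3), not part of Newvem’s product; they assume credentials and a default region are already configured, and every ID, name, and threshold in them is a placeholder assumption. First, a rough way to flag running instances whose average CPU utilization over the past two weeks suggests they are idle or oversized:

```python
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

IDLE_THRESHOLD = 5.0  # percent average CPU; an assumed cut-off, tune per workload

now = datetime.now(timezone.utc)
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=now - timedelta(days=14),
            EndTime=now,
            Period=3600,          # one datapoint per hour
            Statistics=["Average"],
        )
        datapoints = stats["Datapoints"]
        if not datapoints:
            continue  # no metrics yet, e.g. a very new instance
        avg_cpu = sum(p["Average"] for p in datapoints) / len(datapoints)
        if avg_cpu < IDLE_THRESHOLD:
            print(f"{instance_id} ({instance['InstanceType']}): "
                  f"avg CPU {avg_cpu:.1f}% -- idle or oversized?")
```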
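For clusters behind a load balancer, a minimal Auto Scaling setup lets AWS add and remove nodes for you instead of provisioning for peak load up front. The AMI ID, names, zones, and sizing below are placeholder assumptions, not recommendations:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Launch configuration: what each node looks like (AMI ID is hypothetical).
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc",
    ImageId="ami-12345678",
    InstanceType="m1.small",
)

# The group: start small and let scaling policies grow it toward peak load.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc",
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    AvailabilityZones=["us-east-1a", "us-east-1b"],  # assumed zones
)

# Track average CPU at roughly 50%: nodes are added or removed automatically.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="keep-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```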
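Unattached (‘available’) EBS volumes keep accruing charges whether or not anyone remembers them. A sketch that lists likely-stale volumes so a human can decide what to delete; the age cut-off is an assumption:

```python
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")

STALE_AFTER = timedelta(days=30)   # assumed: a month old and unattached means "probably forgotten"
now = datetime.now(timezone.utc)

volumes = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]   # not attached to any instance
)["Volumes"]

for volume in volumes:
    # CreateTime is a rough proxy: it says how old the volume is, not when it was detached.
    age = now - volume["CreateTime"]
    if age > STALE_AFTER:
        print(f"{volume['VolumeId']}: {volume['Size']} GiB, "
              f"unattached, created {age.days} days ago")
        # Once confirmed safe (snapshot first if in doubt):
        # ec2.delete_volume(VolumeId=volume["VolumeId"])
```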
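Snapshot frequency and retention are easy to get wrong in both directions. A sketch that snapshots one volume and prunes snapshots older than a retention window; the volume ID and retention period are assumptions:

```python
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")

VOLUME_ID = "vol-12345678"          # placeholder volume
RETENTION = timedelta(days=14)      # assumed retention window
now = datetime.now(timezone.utc)

# Take today's snapshot (snapshots are incremental: only changed blocks are stored).
ec2.create_snapshot(
    VolumeId=VOLUME_ID,
    Description=f"daily backup {now:%Y-%m-%d}",
)

# Prune snapshots of this volume that have aged out of the retention window.
snapshots = ec2.describe_snapshots(
    OwnerIds=["self"],
    Filters=[{"Name": "volume-id", "Values": [VOLUME_ID]}],
)["Snapshots"]

for snapshot in snapshots:
    if now - snapshot["StartTime"] > RETENTION:
        print(f"deleting {snapshot['SnapshotId']} from {snapshot['StartTime']:%Y-%m-%d}")
        ec2.delete_snapshot(SnapshotId=snapshot["SnapshotId"])
```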
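Allocated but unassociated Elastic IPs are pure waste. A sketch that finds them; the release call is commented out so nothing is removed by accident:

```python
import boto3

ec2 = boto3.client("ec2")

for address in ec2.describe_addresses()["Addresses"]:
    # An address not associated with anything is being billed for doing nothing.
    if "AssociationId" not in address and "InstanceId" not in address:
        print(f"idle Elastic IP: {address['PublicIp']}")
        # For VPC addresses, release by allocation ID once confirmed unused:
        # ec2.release_address(AllocationId=address["AllocationId"])
```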
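A common security-group misconfiguration is opening administrative ports to the whole internet. A sketch that flags inbound rules open to 0.0.0.0/0 on ports you would normally restrict; the port list is an assumption:

```python
import boto3

ec2 = boto3.client("ec2")

SENSITIVE_PORTS = {22, 3389, 3306, 5432}   # assumed: SSH, RDP, MySQL, PostgreSQL

for group in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in group["IpPermissions"]:
        open_to_world = any(
            ip_range.get("CidrIp") == "0.0.0.0/0"
            for ip_range in rule.get("IpRanges", [])
        )
        if not open_to_world:
            continue
        from_port = rule.get("FromPort")           # absent for "all traffic" rules
        if from_port is None or from_port in SENSITIVE_PORTS:
            port = "ALL" if from_port is None else from_port
            print(f"{group['GroupId']} ({group['GroupName']}): "
                  f"port {port} open to 0.0.0.0/0")
```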
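Finally, a quick way to see whether your running instances are actually spread across Availability Zones or all piled into one:

```python
from collections import Counter

import boto3

ec2 = boto3.client("ec2")

zones = Counter()
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        zones[instance["Placement"]["AvailabilityZone"]] += 1

for zone, count in zones.most_common():
    print(f"{zone}: {count} running instance(s)")

if len(zones) < 2:
    print("Everything is in a single Availability Zone; one outage takes it all down.")
```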