Amazon Web Services (AWS) is the most comprehensive cloud platform in the world, which makes it perhaps the most powerful tool out there for any tech company. But with that power comes a considerable burden of knowledge. AWS has an extraordinary number of moving parts, so it can be difficult to find and sift through all the necessary information, and all too easy for something important to slip through the cracks. Whether you are new to the cloud or you're transitioning your software from another cloud platform or an on-prem server, you have a lot of important decisions to make. So we thought it would be helpful to pool together our experience with the platform here at Promenade and devise a short list of tips and important considerations that'll hopefully save you some time and money.
There are plenty of situations where you might find yourself on multiple cloud platforms. Perhaps some services are cheaper elsewhere, or maybe you have a client with platform-specific needs. In any case, you don't want vendor lock-in to impede your work.
This means containerization is your friend. We can avoid worrying about the deployment environment by packaging it along with the application, and Amazon has a handful of services we can leverage that support containers out of the box. For monolithic applications or independent services, Elastic Beanstalk is a great option with highly managed architecture and deployment. For microservice-based applications, AWS offers their Elastic Container Service (ECS) for managed container orchestration, or their Elastic Kubernetes Service (EKS) if you prefer to use Kubernetes for orchestration.
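As a sketch of how a containerized app typically reaches ECS or EKS: you build the image locally and push it to Amazon's Elastic Container Registry (ECR), then point your task definition at the image URI. The account ID, region, and repository name below are placeholders, and the commands assume Docker, the AWS CLI, and valid credentials are already set up.

```shell
# Authenticate Docker to your private ECR registry
# (123456789012 and us-east-1 are placeholders).
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Build the image and tag it for the ECR repository.
docker build -t my-app .
docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest

# Push it; ECS or EKS task definitions can then reference this image URI.
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
```

Because the image carries its whole runtime environment, the same URI works whether the orchestrator is ECS, EKS, or a Kubernetes cluster outside AWS entirely.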
Regardless of the services you use or where you deploy, I would always highly recommend practicing infrastructure-as-code. If you go the multi-cloud route, Terraform is a fantastic solution for cloud-agnostic infrastructure, with a large community and a wide selection of plugins. They also offer paid plans with technical support, as well as graphical and cloud-based tools to make deployment even simpler. Otherwise, if you are confident you'll be sticking with AWS, then CloudFormation is a great platform-specific alternative.
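In either case the day-to-day loop is the same: describe your infrastructure in files, preview the diff, then apply it. A minimal sketch of both workflows, assuming a directory of Terraform `*.tf` files or a CloudFormation template named `template.yml` (both hypothetical):

```shell
# Terraform: run from the directory containing your *.tf files.
terraform init      # download providers, set up the state backend
terraform plan      # preview changes against the real infrastructure
terraform apply     # create/update resources to match the config

# CloudFormation: deploy a template as a named stack.
aws cloudformation deploy \
  --template-file template.yml \
  --stack-name my-stack
```

Either way, the configuration files can live in version control alongside your application code, so infrastructure changes get the same review process as everything else.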
As you may already know, AWS splits its platform up into several regions, but how functionality depends on those regions is not always straightforward. For example, you might land in a different region than you expect depending on how you log in. Check the subdomain of the login URL to see which region you will be logging into.
Some services are only available in certain regions and navigating to those services may automatically change your selected region. Others have no region at all and may replace your region selection with “Global.”
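The CLI makes it easy to check which region you are actually operating in, since it has its own region setting independent of whatever the console shows. A quick sketch, assuming a configured AWS CLI:

```shell
# Show the default region for the active CLI profile.
aws configure get region

# List the regions enabled for your account; names are stable
# identifiers like us-east-1 or eu-west-2.
aws ec2 describe-regions --query "Regions[].RegionName" --output text

# Any individual command can be pinned to a region explicitly.
aws ec2 describe-instances --region us-west-2
```

Pinning `--region` explicitly in scripts is a cheap way to avoid surprises when a profile's default region isn't what you assumed.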
Regions also break down further into availability zones, which are not all created equal. Availability zones grow over time and eventually reach their capacity, at which point AWS may prevent you from launching new instances in that zone, or the zone may not show up on new accounts at all. So two accounts looking at the same region may see different lists of availability zones.
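You can see exactly which zones your account has access to in a given region with one command (region name is a placeholder):

```shell
# List the availability zones this account can use in a region.
# Another account may see a different set for the same region.
aws ec2 describe-availability-zones \
  --region us-east-1 \
  --query "AvailabilityZones[].[ZoneName,State]" \
  --output table
```

Note that zone names like `us-east-1a` are mapped per account, so "us-east-1a" in one account is not necessarily the same physical zone as "us-east-1a" in another.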
AWS GovCloud (US) is a highly restricted variant of the platform for government-affiliated organizations and service providers. Using it can cut down considerably on the requirements you are personally responsible for in any federal compliance process, but it comes at a cost. There are some important differences between these flavors of the platform, and not all of them are clearly documented. As such, consider running your system architecture by AWS Support before making the move.
GovCloud only has two regions, each with only three availability zones, and some services are only available in one region or the other. Certain services have limited functionality in GovCloud, and others aren't available on the platform at all. You may find yourself leaving some services behind on your standard account if they are critical to the functionality of your system, such as Route 53, a popular DNS solution on AWS.
AWS has very granular pricing, which is especially nice for small, simple systems, but as your system grows it will become increasingly difficult to keep tabs on every metric you are being charged for. This means it's good to plan ahead not just for scalability, but for shareability. Traffic can be tightly controlled within a single Virtual Private Cloud (VPC), so share one VPC, or as few as possible, for as long as possible, until either space or data segmentation becomes a concern.
Reduce the number and tier of services and resources you are being charged for whenever possible. The obvious reduction here is Spot Instances for cheaper, albeit less reliable, compute. For storage in Amazon S3, consider moving lightly-touched or long-term data to the S3 Glacier storage classes, which are cheaper in exchange for slower retrieval times. And a less common but very impactful one is sharing the Network Firewall, which can be done for multiple VPCs through the use of AWS Transit Gateway.
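The idiomatic way to move aging S3 data into Glacier is a lifecycle rule, so the transition happens automatically rather than by hand. A sketch, assuming a bucket named `my-bucket` and an `archive/` prefix (both hypothetical):

```shell
# lifecycle.json: move objects under "archive/" to the Glacier
# storage class once they are 90 days old.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-old-data",
      "Status": "Enabled",
      "Filter": { "Prefix": "archive/" },
      "Transitions": [
        { "Days": 90, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
EOF

# Attach the lifecycle configuration to the bucket.
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-bucket \
  --lifecycle-configuration file://lifecycle.json
```

After that, S3 transitions matching objects on its own, and you only pay Glacier rates for data you rarely touch.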
Using the AWS command line interface (CLI) requires generating an access key and secret key in Identity and Access Management (IAM). These can be used to acquire an authentication token which provides your session with the same privileges assigned to your account for the extent of the token's life. There are a couple of ways the CLI will allow you to configure and manage these credentials, but you can simplify authentication considerably through the use of a third-party tool called AWS Vault. It's free, open source, supports a variety of vaulting backends for securely storing your credentials, and it will quickly and easily cycle out your keys for you when it comes time to retire any old ones. Basic two-factor authentication is supported out of the box, and physical two-factor devices like the YubiKey are supported with a little extra setup.

Executing your AWS command from inside a call to AWS Vault will automatically load your credentials into the environment for the duration of the task, or you can choose to run a local EC2 instance metadata server that will automatically refresh your token for long-running processes. And if you want to get really fancy, you can use Bash or Zsh aliases to automatically wrap calls to the AWS CLI, that way you don't need to think about authentication at all.
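A quick sketch of that workflow with aws-vault, where `work` is a hypothetical profile name:

```shell
# Store credentials once in your OS keychain (prompts for the
# access key and secret key).
aws-vault add work

# Run a single command with short-lived session credentials
# injected into the environment.
aws-vault exec work -- aws s3 ls

# Optional: wrap the CLI so every call authenticates transparently.
# (Add this line to ~/.bashrc or ~/.zshrc.)
alias aws='aws-vault exec work -- aws'
```

With the alias in place, `aws s3 ls` behaves as usual, but your long-lived keys never sit in plaintext in `~/.aws/credentials`.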
Like Mom always said: dress in layers. In the world of cloud security, that means protection at every level: network firewalls, application firewalls, traffic routing and filtering on your load balancers, and any service-specific security features you can get your hands on. Add another layer on top of that with tools like Amazon Detective, Amazon Inspector, and Amazon GuardDuty, which can all help you in different ways to continually monitor your system and identify security threats or suspicious activity.
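Turning on one of those monitoring layers is a one-liner. For instance, GuardDuty threat detection can be enabled per region like so:

```shell
# Enable GuardDuty in the current region; returns the detector ID.
aws guardduty create-detector --enable

# Confirm the detector exists.
aws guardduty list-detectors
```

GuardDuty is regional, so this needs to be repeated (or automated via infrastructure-as-code) for every region you operate in.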
Don’t assume your architecture is perfectly stable, either. Make sure you deploy into multiple availability zones, mirror any databases, and have a look at AWS Backup as a means to quickly restore service in the event of an outage.
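As a sketch of the AWS Backup side, you can define a backup plan from the CLI; the plan name, rule, and schedule below are illustrative, and the JSON follows the shape of the CreateBackupPlan API:

```shell
# plan.json: back up assigned resources daily at 05:00 UTC and
# keep recovery points for 35 days.
cat > plan.json <<'EOF'
{
  "BackupPlanName": "daily-backups",
  "Rules": [
    {
      "RuleName": "daily",
      "TargetBackupVaultName": "Default",
      "ScheduleExpression": "cron(0 5 * * ? *)",
      "Lifecycle": { "DeleteAfterDays": 35 }
    }
  ]
}
EOF

aws backup create-backup-plan --backup-plan file://plan.json
```

You would then assign resources to the plan (by tag or ARN), after which restores become a matter of picking a recovery point rather than rebuilding from scratch.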
You already have so much to tackle in this transition to AWS, it's best not to reinvent the wheel in the process. AWS currently has over 200 products and services, so they could very well already have a service for the feature you want. Since most of Amazon's web services are billed by resource consumption, you'll be hard-pressed to find one that is less cost-effective than a home-grown solution.
Use higher-level and managed services whenever possible. I mentioned a few already such as ECS and Beanstalk, but another good one to know is AWS Systems Manager, which greatly simplifies the management, grouping, and visualization of your network resources.
It is best to automate things when you can, and set alarms when you can’t. Use Amazon CloudWatch alarms to let the problems come to you. You can get alerts for specific errors, track important metrics, and even connect to other services for an automated mitigation response.
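As one common sketch, here is a CloudWatch alarm that fires when an EC2 instance runs hot; the instance ID and SNS topic ARN are placeholders:

```shell
# Alarm when the instance averages over 80% CPU for two
# consecutive 5-minute periods, and notify an SNS topic.
aws cloudwatch put-metric-alarm \
  --alarm-name high-cpu \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts
```

The `--alarm-actions` target can just as easily be an Auto Scaling policy or a Lambda-backed workflow, which is where the automated mitigation comes in.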
For maximum insight into your system, it is best to record absolutely everything. Use access and service logs wherever possible, and have your applications send their logs to CloudWatch so they’re backed up and available in the same place. You can track all actions, who performed them, and on what resources using AWS CloudTrail. And regardless of what resources you are using or how related they are, it is best to put tags on all of them. This is important for quickly and correctly identifying certain resources or for mapping your cloud infrastructure.
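Tagging pays off most when you query by it later. A sketch, with placeholder resource IDs and tag values:

```shell
# Attach tags to an existing resource (here, an EC2 instance).
aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=Project,Value=storefront Key=Environment,Value=prod

# Later, find every tagged resource across services by tag.
aws resourcegroupstaggingapi get-resources \
  --tag-filters Key=Project,Values=storefront
```

Consistent keys like `Project`, `Environment`, and `Owner` also let you slice your bill by tag in Cost Explorer, which closes the loop with the cost advice above.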
Getting started with AWS isn’t exactly easy. What is easy is creating plenty of technical debt in the process. But we hope that, by providing this list of tips and tricks, we have removed some of the confusion from the process and perhaps saved you some time and money along the way. From all of us here at Promenade, we wish you luck in your cloud computing journey. Please reach out if Promenade can be of service to you.