
Deploying a high availability web app in AWS using DevOps, a high level view

Deploying web apps is core to a lot of what cloud and DevOps consultants do. Let’s face it, everything these days is a web app, and if it isn’t, then it’s a mobile app powered by a web app. Whether the web app is a simple set of PHP scripts with a PostgreSQL database, or a complex system of microservices, the deployment process is typically fairly similar. Here I’ll describe the high level overview of how I deploy such a web app to the cloud so it’s highly available, secure and the whole process can be replicated easily.

Overall design

Before writing any deployment code, I need to decide what the design will look like. This will heavily depend on the business needs of the project, but typically it will look like this:

  • One VPC with at least 3 subnets: 2 private in different availability zones, and 1 public.
  • One load balancer to distribute traffic to the web apps, in a public subnet.
  • Two or more web servers, in different availability zones and private subnets, serving the web app.
  • Two or more backend databases, again in different availability zones and private subnets, to hold the user data.
  • A bastion host, in the public subnet, for logging, monitoring and admin access.

On top of that, you may also have a need for S3 buckets if you have a lot of static content, a CDN, Lambda functions, and many more pieces and parts, depending on your needs. But with the above, you can fill the vast majority of traditional use cases.

Infrastructure as code

These days everything is code, and that includes the infrastructure that you’re deploying. My choice of code for AWS deployments is CDK and Python. Using CDK, I can write a few lines of Python code, run cdk synth, and a set of ready-to-deploy CloudFormation templates is created with all the resources I need. Then it’s a simple matter of deploying it and watching the magic happen. Similarly, for post-deployment configuration, I typically use Ansible if there’s more to do than fits in a userdata tag.
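The day-to-day workflow is just a couple of CLI commands (sketched here for a hypothetical MyProjectStack; this assumes the cdk CLI is installed and your account is bootstrapped):

```shell
# Generate the CloudFormation templates from the Python code
cdk synth

# Show what would change compared to what is currently deployed
cdk diff

# Deploy the stack to your AWS account
cdk deploy MyProjectStack
```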

Here is some sample CDK code you could use to set up the design mentioned above (this assumes the usual aws_cdk.aws_ec2, aws_cdk.aws_elasticloadbalancingv2 and aws_cdk.aws_autoscaling imports as ec2, elbv2 and autoscaling, inside a Stack’s constructor):

# Creating the VPC
vpc = ec2.Vpc(
    self, "MyProject_Vpc",
    max_azs = 2,
    subnet_configuration = [
        ec2.SubnetConfiguration(name = "MyProject_PrivateSubnet1", subnet_type = ec2.SubnetType.PRIVATE),
        ec2.SubnetConfiguration(name = "MyProject_PrivateSubnet2", subnet_type = ec2.SubnetType.PRIVATE),
        ec2.SubnetConfiguration(name = "MyProject_PublicSubnet", subnet_type = ec2.SubnetType.PUBLIC)
    ]
)

# Creating the AMI
ami = ec2.MachineImage.latest_amazon_linux(
    generation = ec2.AmazonLinuxGeneration.AMAZON_LINUX_2,
    edition = ec2.AmazonLinuxEdition.STANDARD,
    virtualization = ec2.AmazonLinuxVirt.HVM,
    storage = ec2.AmazonLinuxStorage.GENERAL_PURPOSE
)

# Creating the user data that installs and starts Apache and PHP
userdata = ec2.UserData.for_linux(shebang = "#!/bin/bash -xe")
userdata.add_commands(
    "yum -y install nano httpd php mariadb php-mysql",
    "systemctl enable httpd",
    "systemctl start httpd",
    "chown ec2-user /var/www/html"
)

# Creating the load balancer in the public subnet
lb = elbv2.ApplicationLoadBalancer(
    self, "MyProject_LB",
    vpc = vpc,
    internet_facing = True
)

# Allowing connections on port 80
lb.connections.allow_from_any_ipv4(
    ec2.Port.tcp(80),
    "Internet access on port 80"
)
listener = lb.add_listener(
    "MyProject_Listener",
    port = 80,
    open = True
)

# Creating the instances
asg = autoscaling.AutoScalingGroup(
    self, "MyProject_ASG",
    vpc = vpc,
    vpc_subnets = ec2.SubnetSelection(subnet_type = ec2.SubnetType.PRIVATE),
    instance_type = ec2.InstanceType("t3.nano"),
    machine_image = ami,
    key_name = "ssh_key",
    user_data = userdata,
    desired_capacity = 2,
    min_capacity = 2,
    max_capacity = 2
)

# Allowing connections on ports 80 and 22
asg.connections.allow_from(lb, ec2.Port.tcp(80), "LB access 80 port of EC2 in Autoscaling Group")
asg.connections.allow_from(ec2.Peer.ipv4(vpc.vpc_cidr_block), ec2.Port.tcp(22), "Allow SSH from any internal IP")

# Adding the instances to the load balancer
listener.add_targets(
    "MyProject_Targets",
    port = 80,
    targets = [asg]
)

The database

While it’s possible to just install MySQL, SQL Server, or whichever kind of database you need on an EC2 instance, I typically recommend against that. By using RDS, Amazon’s managed database offering, you’ll end up paying a bit more but you get a lot of benefits. They handle all the database administration tasks, and you can easily set up backups, multiple nodes, high availability, scaling, and so on. Doing all of that manually is possible, but not recommended if you aren’t a DBA. So for the design above, I would create a simple RDS instance with Multi-AZ replication enabled.
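Continuing in the same stack, a Multi-AZ MySQL instance could be sketched like this (the construct ID, instance size and aws_cdk.aws_rds import as rds are my assumptions, not values from the original setup):

```python
# Creating a MySQL RDS instance, with a standby replica in the second AZ
db = rds.DatabaseInstance(
    self, "MyProject_DB",
    engine = rds.DatabaseInstanceEngine.mysql(
        version = rds.MysqlEngineVersion.VER_8_0
    ),
    vpc = vpc,
    vpc_subnets = ec2.SubnetSelection(subnet_type = ec2.SubnetType.PRIVATE),
    instance_type = ec2.InstanceType("t3.micro"),
    multi_az = True,
    allocated_storage = 20,
    credentials = rds.Credentials.from_generated_secret("admin")  # password kept in Secrets Manager
)

# Only the web servers may talk to the database
db.connections.allow_from(asg, ec2.Port.tcp(3306), "Web tier access to MySQL")
```

Letting CDK generate the credentials keeps the database password out of your code and your templates entirely.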

Additional steps

Once the infrastructure is deployed, and your web app is configured, there are a few more things to do before your setup is ready to deliver to the client:

  • Register your DNS so you have a proper hostname to access your web app.
  • Set up SSL so all your web traffic is encrypted. AWS offers free certificates when used on a load balancer, or you can use something like Let’s Encrypt.
  • Deploy a bastion host so that all of your private instances are behind a single point of access. Make sure the bastion host is heavily secured, with logging and monitoring in place, along with an alerting system for when your web app goes down. You can use a simple script that sends you an SMS using Amazon SNS if the site goes down, or something much more involved based around Nagios and ELK.
  • Set up a CI/CD pipeline so your developers can update the web app whenever they need. This typically involves an existing tool like Jenkins or AWS CodePipeline, a git repository, and a simple script that copies the files to the various hosts and reloads any relevant service.
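That last deployment script can be as simple as a few lines of shell; the hostnames, paths and bastion address below are placeholders for illustration, not values from the original setup:

```shell
#!/bin/bash -e
# Hypothetical hosts and paths; adjust to your own environment
HOSTS="web1.internal web2.internal"
SRC="./build/"
DEST="/var/www/html/"

for host in $HOSTS; do
    # Copy the new version of the web app to each web server, jumping through the bastion
    rsync -az --delete -e "ssh -J bastion.example.com" "$SRC" "ec2-user@$host:$DEST"
    # Reload Apache so the new code is picked up
    ssh -J bastion.example.com "ec2-user@$host" "sudo systemctl reload httpd"
done
```

A CI tool like Jenkins or CodePipeline would run this on every merge, after the tests pass.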

All of this can be done within a single day by an experienced consultant, and it’s work I’ve done many times. A small or medium sized web app can be migrated to the cloud, made highly available, secure, and delivered to the client. I hope this post gave you a good idea of the type of work involved in deploying a modern web app.