
Posts

Showing posts from July, 2020

Is your cloud deployment much more expensive than it should be?

If you take any of the cloud platforms, you can spend days looking through all of the various features. AWS alone has over 212 core services. In recent years, many of those new features have been created to make it easier to deploy projects into the cloud, and there's nothing wrong with that. However, you have to realize that by using these easier-to-use services, you will end up paying much more than you need to. The problem with these one-click deployment systems is that they have to assume a lot of things. They deploy infrastructure on your behalf, including Windows or Linux instances, load balancers, DNS configuration, networking, and so on. While you can manually go in and tweak these resources, if you've delegated the deployment to Amazon, you might not want to then go in and start tweaking the result. Yet there are many ways that deployments can be improved, both in efficiency and cost, if you have somebody with deep knowledge of the offerings.
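To make that kind of audit concrete, here is a minimal Boto3 sketch (my illustration, not taken from the post) that breaks a month's spend down by service using the Cost Explorer API, often the first step in spotting an overpriced deployment. The dates are placeholders, and Cost Explorer must be enabled on the account:

import boto3

# Ask Cost Explorer for one month of unblended cost, grouped by service.
ce = boto3.client('ce')
response = ce.get_cost_and_usage(
    TimePeriod={'Start': '2020-06-01', 'End': '2020-07-01'},
    Granularity='MONTHLY',
    Metrics=['UnblendedCost'],
    GroupBy=[{'Type': 'DIMENSION', 'Key': 'SERVICE'}],
)

# Print each service next to what it cost; the expensive ones stand out quickly.
for group in response['ResultsByTime'][0]['Groups']:
    print(group['Keys'][0], group['Metrics']['UnblendedCost']['Amount'])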

Working remotely needs to be the new default

The conversation with a company or recruiter typically goes something like this: the work will be 40 hours a week, the office is in a nice building downtown... oh, and we even offer remote work one or two days a week. This is the wrong way to approach work in a post-pandemic world. In tech, we're lucky enough to be able to work in front of a computer from anywhere. Whether you spend most of your day in Microsoft Office, a code editor, a shell terminal, an email client, or pretty much any other application, chances are it runs remotely, or at least accesses remote resources. We're well past the point where home broadband connections were a luxury and computer software needed extensive expertise to operate. Instead, the ability to work from anywhere should be the default, and the exception should be for things that can't, or shouldn't, be done remotely. Perhaps there's value in meeting some clients face to face. Maybe you need to go to the factory floor to handle physical

Deploying a high availability web app in AWS using DevOps, a high level view

Deploying web apps is core to a lot of what cloud and DevOps consultants do. Let’s face it, everything these days is a web app, and if it isn’t, then it’s a mobile app powered by a web app. Whether the web app is a simple set of PHP scripts with a PostgreSQL database, or a complex system of microservices, the deployment process is typically fairly similar. Here I’ll describe the high level overview of how I deploy such a web app to the cloud so it’s highly available, secure, and the whole process can be replicated easily. Overall design: Before writing any deployment code, I need to decide what the design will look like. This will heavily depend on the business needs of the project, but typically it will look like this: one VPC with at least 3 subnets: 2 private in different availability zones, and 1 public. One load balancer to distribute traffic to the web apps, in a public subnet. Two or more web servers, in different availability zones and private subnets, serving t
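The excerpt cuts off here, but to illustrate the networking portion of that design, a rough Boto3 sketch could look like the following. The CIDR blocks, region, and availability zone names are assumptions for the example, not values from the post:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# One VPC to hold the whole deployment.
vpc = ec2.create_vpc(CidrBlock='10.0.0.0/16')
vpc_id = vpc['Vpc']['VpcId']

# Two private subnets in different availability zones for the web servers.
private_a = ec2.create_subnet(VpcId=vpc_id, CidrBlock='10.0.1.0/24',
                              AvailabilityZone='us-east-1a')
private_b = ec2.create_subnet(VpcId=vpc_id, CidrBlock='10.0.2.0/24',
                              AvailabilityZone='us-east-1b')

# One public subnet for the load balancer.
public = ec2.create_subnet(VpcId=vpc_id, CidrBlock='10.0.3.0/24',
                           AvailabilityZone='us-east-1a')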

Building a status screen with a Raspberry Pi

A few years back, I built my first status screen using a Raspberry Pi 2B and an old version of Raspbian. Now, I repeated the task using a Pi 3B and the latest version of the OS. Things have changed enough that I decided it would be worth writing another quick post about it. The result includes the date, time, current temperature outside, world and local news, and looks like this: The first thing to do is to connect a Pi to an old TV, which is what I did. Installing Raspbian should be simple enough and is described quite clearly on the web site. Once done, there are basically three things we have to do. Creating a status website: To make the actual web site, I used GitHub Pages. You can see the actual site I use here. The only file I had to create manually was the index page, which is a simple page that provides a black background, the needed styling, and some JavaScript functions to show the clock and refresh various parts of the page. Once the rep
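The excerpt ends before the remaining steps, but a common way to turn a Pi into a dedicated display for a page like this (an assumption on my part, not necessarily what the post goes on to describe) is to disable screen blanking and autostart Chromium in kiosk mode, for example from ~/.config/lxsession/LXDE-pi/autostart. The URL is a placeholder for your own GitHub Pages site:

# Keep the screen from blanking, then launch the browser full screen.
@xset s off
@xset -dpms
@chromium-browser --kiosk --noerrdialogs https://example.github.io/status/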

Coming full circle with AWS CDK

When I first started working with Amazon Web Services (AWS), it didn’t take long before I started to look for a way to automate the creation of cloud resources. This was years ago, when DevOps was only just starting to become a popular concept. Back then, the newly released Boto3 Python library gave me the result I needed. Using Python scripts, I could make API calls to AWS and interact with resources. For example, creating an S3 bucket can be done with these lines of code:

import boto3
s3 = boto3.resource('s3')
bucket = s3.create_bucket(Bucket='mybucket')

This had the advantage of being fast and providing an automated way to get a consistent result. In fact, I still use Boto3 to this day. However, this isn’t true DevOps because it doesn’t give you a way to track changes or destroy the resources linked to the script. Enter CloudFormation. When looking for a proper DevOps tool, I stumbled upon CloudFormation. This is Amazon’s native equivalent of Terraform, a way to
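For contrast with the Boto3 snippet above, here is roughly what the same bucket looks like in CDK’s Python flavor. This is a sketch based on CDK v1 (current as of this writing), and the stack and construct names are illustrative:

from aws_cdk import core
from aws_cdk import aws_s3 as s3

class StorageStack(core.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        # CDK synthesizes this into a CloudFormation template, so the
        # bucket can be tracked, updated, and destroyed with the stack.
        s3.Bucket(self, 'MyBucket')

app = core.App()
StorageStack(app, 'storage')
app.synth()

Running cdk deploy then creates the bucket through CloudFormation, which provides exactly the change tracking that raw Boto3 calls lack.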

Deploying an AWS Lambda function using Ansible

Recently I wrote a post on how to create a Lambda function to check the health of your ELB targets. Today, we’ll see how to deploy this same function automatically using Ansible, instead of going into the AWS console and doing it manually. While a single function is not that big of a deal to do manually, the goal of DevOps is to automate everything, because once you have a dozen functions or more, the last thing you want is to manage all that code manually. With a deployment system like Ansible, you can make sure all your code is in one centralized location and gets deployed all at the same time. Creating support files: The first thing to do is make sure you have Ansible installed along with the AWS libraries:

yum install ansible
pip install boto botocore boto3

Then, let’s create a folder for your playbook, and save the Lambda function from my previous post under templates/elb_check.py in that folder. Next, let’s create two policy documents: templates/trust_policy.json
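The excerpt stops at the trust policy, but for reference, an IAM trust policy that lets Lambda assume a role is standard boilerplate and generally looks like this (it may not match the post’s exact file):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}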