In my last blog post, I went through the steps required to configure an existing Rails application to work with Docker. By the end of that post, our Rails app was running in multiple Docker containers and mirrored our local development environment, but now how do we deploy the code to production?

This blog post will go through the steps required to deploy a dockerized Rails app to an AWS EC2 instance. This multi-container app will include the web app, Nginx, Postgres, Sidekiq, and Redis, each in its own container. We will script the deployment steps so that deployment is automated, update Nginx on the fly so that we have zero downtime, and in the end our deployment commands will be written in a familiar rake task syntax that feels a lot like Heroku.

Note: Although this example uses a Rails application, the bash deployment scripts should work, with a few minor modifications, with any web application running Nginx in front.

If you would like to code along, clone this repo, which has a working example of a dockerized Rails app. Or perhaps you want to look directly at the individual deployment script files or a completed dockerized Rails app with all associated deployment scripts.

Setting Up Your AWS EC2 Instance

If you are familiar with setting up an EC2 instance, you can skip this section.

In this section we will go through the steps of launching an EC2 instance. The first step is to create an account at https://aws.amazon.com/.

Once you have created your account, log in and you should be redirected to the AWS dashboard. From there:

  1. Under Compute, click on “EC2”, then click on “Launch Instance”.
  2. Step 1 asks which Amazon Machine Image (AMI) you would like to use. I like to use Ubuntu, but any Linux image that allows Docker to be installed should work. Select Ubuntu Server 14.04 LTS.
  3. Next you will need to choose your instance type. Choose the default (General Purpose, t2.micro) that is free tier eligible and click “Next: Configure Instance Details”.
  4. We do not need to configure the instance for this example, so click “Next: Add Storage”. We also do not need to add any storage, so click “Next: Tag Instance”. We do not need to tag this instance either, so click “Next: Configure Security Group”.
  5. We will need to create a new security group. Select the radio button that says “Create a new security group” and choose a name for this group; I am going to call mine ssh-and-http. You can also update the description, although it is not required.
  6. SSH is already added for us, so click on the “Add Rule” button and select HTTP in the dropdown. It should look like this:

[Screenshot: AWS security group with SSH and HTTP rules]

These settings will allow incoming SSH and HTTP requests. Next, click the “Review and Launch” button. We can ignore the security warning for this example and click “Launch”. This will bring up a dialog box that asks you to choose an existing key pair or create a new one. We will create a new key pair, which will allow us to SSH into our EC2 instance. Select “Create a new key pair” from the dropdown and choose a name; I am going to call mine EC2-docker-example. Then click “Download Key Pair”. This will download a file with whatever name you chose and a .pem file extension. This is the only copy of this file you will get, so save it somewhere secure. Finally, click “Launch Instances” and then “View Instances”. You should see the instance you just created, and when you select it you should see a lot of useful information like its Public DNS, Public IP, and other settings.

Your EC2 instance is now up and running. Next we need to change the permission settings of the .pem file you downloaded. To do so, use the following command:

chmod 400 <PATH_TO_.PEM_FILE>

If you do not change these permissions, you will get an unprotected private key warning and you will not be allowed to SSH into your EC2 instance. Finally, run the following command to SSH into your EC2 instance:

# -i specifies that you are referencing an identity file
# ubuntu is the default user for the ubuntu AMI
ssh -i <PATH_TO_.PEM_FILE> ubuntu@<YOUR_EC2_INSTANCE_PUBLIC_IP>

If everything worked, you should see something like this:

[Screenshot: successful SSH login to the EC2 instance]

Installing Docker and Docker Compose

Now that we can SSH into our EC2 instance, we will need to install both Docker and Docker Compose. To install Docker, copy and paste the following commands into your terminal:

Click here to view the full Ubuntu Docker install instructions in the Docker docs

sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
sudo touch /etc/apt/sources.list.d/docker.list

In the file that you just created with the last command (you will need sudo to edit it), add the following:

deb https://apt.dockerproject.org/repo ubuntu-trusty main

Save and close the file. Then run the following commands:

sudo apt-get update
sudo apt-get purge lxc-docker
apt-cache policy docker-engine
sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
sudo apt-get install docker-engine

Finally, to verify that Docker installed correctly, run:

sudo docker run hello-world

You should see the following if the installation worked:

[Screenshot: output of the hello-world container]

Lastly, let’s add the ubuntu user to the docker group so that you can use docker commands without sudo. To do so, run the following command:

sudo usermod -a -G docker ubuntu

Log out and log back in. You should be able to run the following command without any errors:

docker ps

Docker is now installed on our EC2 instance. Next we need to install Docker Compose. Copy and paste the following commands:

Click here to view the full Ubuntu Docker Compose install instructions in the Docker docs

sudo -i
curl -L https://github.com/docker/compose/releases/download/1.8.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

Exit out of root:

exit

To test that the installation was successful, run the following command:

docker-compose --version

Docker Compose and Docker are now installed and running on our EC2 instance. It is now time to deploy our code.

Deploying Our Application Code for the First Time

Our EC2 instance is running, Docker and Docker Compose are installed, and we can now deploy our code. We could SSH into the instance, clone the repo, and run our Docker Compose commands manually to get our containers up and running. However, this is time-consuming and error prone; not to mention we do not want to deploy the code manually every time we make a change. Luckily, as developers, we know we can do better than that and simply automate this process with a few scripts. We will start off with a few scripts that we will use for our first deploy.

First-Time Deployment Scripts

The first step is to create a directory called deploy inside of your config directory. This is where we will put all of our scripts related to deploying our code. Once you have created your deploy directory, create a new file inside of it called deploy.sh. Copy the following code into this file:

#!/bin/bash

# precompile assets locally prior to pushing to production.
echo "precompiling assets.."
rails assets:precompile

# use rsync to transfer your files from your local machine to your EC2 instance
echo "transferring files..."
rsync -a -e "ssh -i $EC2_IDENTITY_FILE" ../docker-rails-school-complete $EC2_HOST:~ --exclude=.git --exclude=docker-compose.override.yml
echo "complete!"

# ssh into your EC2 instance and change into the root directory of your rails app.
# then run the docker_build_and_start.sh script
ssh -t -i $EC2_IDENTITY_FILE $EC2_HOST "cd $(basename $(pwd)) && config/deploy/docker_build_and_start.sh"

# delete the precompiled assets off your local machine since we only need them in production
echo "cleaning up precompiled assets..."
rm -rf public/assets

This script handles transferring all the files from your local machine to your running EC2 instance, and then calls the docker_build_and_start script that we have not created yet.

A few things to note about the above script:

  1. I am referencing a few environment variables in my script. The $EC2_IDENTITY_FILE is simply set as the path to the EC2-docker-example.pem file that I downloaded when setting up my EC2 instance. The $EC2_HOST environment variable is set to ubuntu@<MY_EC2_IP_ADDRESS>. I would suggest exporting these environment variables inside of your .bash_profile (as shown after this list) so that they are available for use and you do not have to hard code these values in.
  2. In our rsync command we are excluding all .git files and our docker-compose.override.yml file. We do not need git in production, and we only want to use our docker-compose.override.yml file in development. The docker-compose.override.yml file is already listed in our .dockerignore file, so if we were to tag our images, push them to Docker Hub, and reference those images in our docker-compose file, we could remove the --exclude=docker-compose.override.yml flag; we will save those scripts for the next post.
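
As a minimal sketch, the exports in your ~/.bash_profile might look like this (the key path and IP address are placeholders for your own values):

# ~/.bash_profile

# path to the key pair downloaded when creating the EC2 instance
export EC2_IDENTITY_FILE=~/.ssh/EC2-docker-example.pem

# the default ubuntu user at your instance's public IP
export EC2_HOST=ubuntu@<YOUR_EC2_INSTANCE_PUBLIC_IP>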

Next, we will need to create the docker_build_and_start.sh script that we are calling from our deploy.sh script. Create a file called docker_build_and_start.sh and place it in the deploy directory. Add the following code:

#!/bin/bash

echo "building images..."

docker-compose build

echo "starting containers..."

docker-compose up -d

These commands should look familiar to you. Here we are simply building the images with the docker-compose build command and then starting all the containers in detached mode with the docker-compose up -d command.
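
If you want to confirm everything came up, you can list the state of the services from the project directory on the instance; this is just a sanity check and not part of the deploy scripts:

# show the state of every service defined in docker-compose.yml
docker-compose ps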

To run these scripts, you could call the deploy.sh file with the following command from the root folder of your Rails app:

./config/deploy/deploy.sh

That works perfectly fine but is kind of hard to remember. Let’s instead create a rake task so that we can use familiar Rails syntax to deploy our code. Create a file called upload_files.rake inside of lib/tasks. Add this code to the file you just created:

namespace :docker do
  desc "upload files, build images, and start containers for first deployment"
  task :first_deploy do
    sh "config/deploy/deploy.sh"
  end
end

In this rake task, we are creating the namespace docker to be used with our deployment commands. We are then titling the task first_deploy and calling the deploy.sh script. To run this rake task you can type the following command in your terminal:

rails docker:first_deploy

You should see the deployment taking place in your terminal after running that command. This process could take a few minutes, as all of the Ruby gems need to be downloaded and installed.

Once all of the images have been built and the containers are up and running, copy your public IP address and paste it into your browser. You should see something along the lines of “An unhandled lowlevel error occurred. The application logs may have details”. This is expected since we have not set our production secret key. To do this, run the following command in your terminal:

rake secret

Copy the string that is generated, SSH into your EC2 instance, and create an environment variable called SECRET_KEY_BASE set to the string that you just copied from the rake secret command. The commands to accomplish this are as follows:

# ssh into EC2 instance
ssh -i $EC2_IDENTITY_FILE $EC2_HOST

# open up the bashrc file with vi to add the environment variable
vi .bashrc

# use the down arrow key to scroll all the way to the bottom
# hit the i key to go into insert mode
# add this to the file
export SECRET_KEY_BASE=<STRING_FROM_RAKE_SECRET>

# press escape to exit insert mode
# type the following command and press enter to save your changes
:x

# refresh environment variables
source ~/.bashrc

# exit out of the EC2 instance
exit

Then add the following to your .env file:

SECRET_KEY_BASE=${SECRET_KEY_BASE}

This pulls the production secret key into the container. Note that environment variables are read when a container is created, so if your containers are already running you may need to run docker-compose up -d again on the instance for the new value to be picked up. After that, you should be able to refresh your browser and see the app landing page.
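
To verify the variable actually made it into the container, you can check from within the project directory on the EC2 instance; this assumes your web app service is named app in docker-compose.yml:

# print the environment variable from inside the running app container
docker-compose exec app env | grep SECRET_KEY_BASE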

Note that if you are following along in the example Rails app, email will not actually send since we have not configured Action Mailer in our production.rb file; we will not cover that in this blog post, but it is straightforward to get it set up.

At this point your app should be running in your EC2 instance. The only problem is that if we make a change and redeploy, you will notice that there is some downtime in between stopping the old containers and starting the new ones. We can add a few more scripts to address this issue.

Zero Downtime Deployment of App Updates

You only need to run the rails docker:first_deploy command once, to get all the containers up and running. After that, if we want to make some changes to our app, there is no reason to restart all of the containers. Instead we should just update the containers running our web app and then update the Nginx upstream directive once the new updated containers are up and running. To do this, let’s create a few more scripts. First, create a file called push_update.sh and place it in the config/deploy directory. Add the following code to the push_update.sh file:

#!/bin/bash

# precompile assets locally
echo "precompiling assets.."
rails assets:precompile

# transfer files from local machine to EC2 instance
echo "transferring files..."
rsync -a -e "ssh -i $EC2_IDENTITY_FILE" ../docker-rails-school-complete $EC2_HOST:~ --exclude=.git --exclude=docker-compose.override.yml
echo "complete!"

# ssh into the EC2 instance and run the associated commands
ssh -t -i $EC2_IDENTITY_FILE $EC2_HOST "export NGINX_SERVICE=$1 && export APP_SERVICE=$2 && cd $(basename $(pwd)) && config/deploy/update_containers.sh"

# delete precompiled assets from local machine
echo "cleaning up precompiled assets..."
rm -rf public/assets

This file should look very similar to our deploy.sh file. The only difference is the commands that are being supplied to the ssh command. We will go through each command individually:

The first command in the ssh command is:

export NGINX_SERVICE=$1

This sets an environment variable called NGINX_SERVICE to the first argument that is input when the push_update.sh script is being called. This is nice because you can call your Nginx service anything you would like in the docker-compose.yml file and then just pass it the name when calling the script. We will see how this works a little later in this post when we write our rake task to call this script.

The second command is:

export APP_SERVICE=$2

This command is doing the exact same thing as the first command except this time it is setting the name of the service that is running our web app.

Finally, the last two commands move you into the root directory of our Rails app and then call the update_containers.sh script, which we have not yet created:

cd $(basename $(pwd))
config/deploy/update_containers.sh
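
Putting the arguments together: if your docker-compose.yml names the services nginx and app, calling the script directly would look like this (the rake task we write later will do this for us):

./config/deploy/push_update.sh nginx app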

Next we will need to create our update_containers.sh script. Create a new file called update_containers.sh and place it inside of the config/deploy directory. Add the following code to the script:

#!/bin/bash

# split all app container names that are already running into an array
old_containers=($(docker-compose ps $APP_SERVICE | awk '{ print $1 }' | tail -n +3))

# fetch container name for nginx service
nginx_container=$(docker-compose ps $NGINX_SERVICE | awk '{ print $1 }' | tail -n +3)

# get number of running webapp containers to update
number_of_containers_to_start=${#old_containers[*]}

# get bridge network name from first container in array
export PROJECT_NETWORK=$(docker inspect -f "{{ .NetworkSettings.Networks }}" ${old_containers[0]} | grep -o -P '(?<=\[).*(?=:)')

# build app with new files
docker-compose build $APP_SERVICE

# create the new containers with latest code changes
docker-compose scale $APP_SERVICE=$(($number_of_containers_to_start*2))

# build an array containing only the new containers (all running containers minus the old ones)
all_containers=($(docker-compose ps $APP_SERVICE | awk '{ print $1 }' | tail -n +3))

new_containers=($(echo ${old_containers[*]} ${all_containers[*]} | tr ' ' '\n' | sort | uniq -u))

# loop through containers and get IP addresses separated by / so they can be exported below
IP_ADDRESSES=""
for element in ${new_containers[*]}; do
  IP_ADDRESSES+="$(docker inspect -f {{.NetworkSettings.Networks.$PROJECT_NETWORK.IPAddress}} $element)/"
done

# run script to update the nginx_upstream.conf file
docker exec -i $nginx_container bash -c "export CONTAINERS=$IP_ADDRESSES && /opt/update_upstream_directive.sh"

echo "please wait 30 seconds for containers to be ready..."
sleep 30

# send nginx hangup signal to update the nginx_upstream.conf file
echo "gracefully restarting nginx..."
docker kill -s HUP $nginx_container

echo "stopping and cleaning up old containers..."
# stops the old containers
for container in ${old_containers[*]}; do
  docker kill $container
done

# deletes the old containers
docker rm $(docker ps -aqf status=exited)

# rename new webapp containers to start at 1 and go up sequentially
newly_started_containers=($(docker-compose ps $APP_SERVICE | awk '{ print $1 }' | tail -n +3))
counter=0
for container in ${newly_started_containers[*]}; do
  new_name="${container%?}$(($counter+1))"
  docker rename $container $new_name
  let counter=counter+1
done
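
One line worth highlighting is the one that computes new_containers: concatenating both lists and piping them through sort and uniq -u keeps only the names that appear exactly once, i.e. the containers that are in all_containers but not in old_containers. A quick way to convince yourself, runnable anywhere (the names here are just examples):

# app_1 and app_2 appear in both lists, so only app_3 and app_4 survive
echo "app_1 app_2 app_1 app_2 app_3 app_4" | tr ' ' '\n' | sort | uniq -u
# => app_3
# => app_4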

This script seems complex but in a nutshell all it is doing is starting the new app containers, grabbing their IP addresses, updating the Nginx upstream directive, and then cleaning up. I have included comments above each action to explain each step. You will notice that most of the commands are simply parsing different Docker commands and assigning the important values to variables to be used to update the Nginx upstream directive. The following command in the script

docker exec -i $nginx_container bash -c "export CONTAINERS=$IP_ADDRESSES && /opt/update_upstream_directive.sh"

passes the IP_ADDRESSES variable that we created to the nginx container and then calls the update_upstream_directive.sh script that we have not yet created. Let’s create it now. Create a file called update_upstream_directive.sh and place it in the deploy directory. Add the following code to the file:

#!/bin/bash

echo "updating nginx upstream directive..."

# change our string of IP addresses separated by / into an array
IFS="/" read -r -a addresses <<< "$CONTAINERS"

# rewrite the upstream directive contained within the nginx_upstream.conf file
echo "upstream rails_app {" > /etc/nginx/nginx_upstream.conf

for container in ${addresses[*]}; do
  echo "  server $container:3000;" >> /etc/nginx/nginx_upstream.conf
done

echo "}" >> /etc/nginx/nginx_upstream.conf
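
For illustration, if two new app containers came up, the generated /etc/nginx/nginx_upstream.conf would end up looking something like this (the IP addresses here are made-up examples):

upstream rails_app {
  server 172.18.0.5:3000;
  server 172.18.0.6:3000;
}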

This script rewrites the upstream directive with the IP addresses of the new app containers, and it handles as many app containers as you have running. However, in order to make this work we will need to change our nginx.conf file a bit. Our original nginx.conf file looked like this:

# config/nginx.conf
# for detailed nginx info reference the nginx docs at https://nginx.org/en/docs/

daemon off;
worker_processes 1;
pid /var/run/nginx.pid;

events {
  worker_connections 1024;
}

http {
  include mime.types;
  default_type application/octet-stream;

  access_log /var/log/nginx/access.log;
  error_log /var/log/nginx/error.log;

  sendfile on;

  tcp_nopush on;
  tcp_nodelay off;

  gzip on;
  gzip_http_version 1.0;
  gzip_proxied any;
  gzip_min_length 500;
  gzip_disable "MSIE [1-6]\.";
  gzip_types text/plain text/xml text/css
             text/comma-separated-values
             text/javascript application/x-javascript
             application/atom+xml;

  upstream rails_app {
    server app:3000;
  }

  root /rails_app/public;

  server {
    server_name rails_school;

    try_files $uri/index.html $uri.html $uri @rails;

    location @rails {
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

      proxy_set_header Host $http_host;

      proxy_redirect off;

      proxy_pass http://rails_app;
    }
  }
}

We will pull the upstream directive out so that it is easier to update. Change this part of the file

upstream rails_app {
  server app:3000;
}

to this

include /etc/nginx/nginx_upstream.conf;

This tells Nginx to insert whatever is in the nginx_upstream.conf file where that include statement is. Now we need to create our nginx_upstream.conf file. Create a file called nginx_upstream.conf inside of your config directory. Add the code that you just removed to it:

upstream rails_app {
  server app:3000;
}

Finally, we will need to update our nginx-Dockerfile to include these files in the build. Our previous nginx-Dockerfile looked like this:

# config/nginx-Dockerfile

# use the nginx image
FROM nginx

# set environment variable
ENV RAILS_ROOT /rails_app

# set work directory
WORKDIR $RAILS_ROOT

# create a log directory where we will place the nginx log files
RUN mkdir log

# copy over public static files as nginx can serve this more quickly than our app
COPY public public/

# copy over the nginx configuration file to our container
COPY config/nginx.conf /etc/nginx/nginx.conf

# start nginx
CMD [ "/usr/sbin/nginx" ]

And our new one with the added scripts and files should look like this:

# config/nginx-Dockerfile

# use the nginx image
FROM nginx

# set environment variable
ENV RAILS_ROOT /rails_app

# set work directory
WORKDIR $RAILS_ROOT

# create a log directory where we will place the nginx log files
RUN mkdir log

# copy over public static files as nginx can serve this more quickly than our app
COPY public public/

# copy over the nginx configuration file to our container
COPY config/nginx.conf /etc/nginx/nginx.conf

# copy over bash script to update upstream directive
COPY config/deploy/update_upstream_directive.sh /opt/update_upstream_directive.sh

# copy upstream configuration file
COPY config/nginx_upstream.conf /etc/nginx/nginx_upstream.conf

# start nginx
CMD [ "/usr/sbin/nginx" ]

As you can see, we are simply adding our update_upstream_directive.sh and nginx_upstream.conf files to our container.
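
Since these files are baked into the image at build time, remember that the nginx image needs to be rebuilt once for the changes to take effect; assuming your Nginx service is named nginx in docker-compose.yml, that is just:

docker-compose build nginx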

Lastly we just need to create a rake task to call all of these scripts. Add the following code within the docker namespace to lib/tasks/upload_files.rake:

desc "task that will handle continuous integration with zero downtime"

# pass service names for the nginx and app containers as arguments to the rake command
task :deploy, [:nginx_service_name, :webapp_service_name] do |t, args|

  # if either one is missing, print an error and an example of the correct syntax
  if args.nginx_service_name == nil || args.webapp_service_name == nil
    puts "Please enter the service name of both nginx and the webapp when calling the rake task"
    puts "Syntax: rails docker:deploy[nginx_service_name,webapp_service_name]"
  else
    # if called correctly, call the push_update script and pass along the arguments supplied
    sh "config/deploy/push_update.sh #{args.nginx_service_name} #{args.webapp_service_name}"
  end
end

This rake task can be called with the following syntax:

rails docker:deploy[nginx,app]

Where nginx is the name of my nginx service and app is the name of my web app service. This should also clarify where the arguments are coming from in the push_update.sh file.
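
One small gotcha worth noting: if your shell is zsh, the square brackets will be interpreted as a glob pattern, so you will need to quote the task name:

rails "docker:deploy[nginx,app]"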

Our final upload_files.rake file will look like this:

namespace :docker do
  desc "upload files, build images, and start containers for first deployment"
  task :first_deploy do
    sh "config/deploy/deploy.sh"
  end

  desc "task that will handle continuous integration with zero downtime"
  task :deploy, [:nginx_service_name, :webapp_service_name] do |t, args|
    if args.nginx_service_name == nil || args.webapp_service_name == nil
      puts "Please enter the service name of both nginx and the webapp when calling the rake task"
      puts "Syntax: rails docker:deploy[nginx_service_name,webapp_service_name]"
    else
      sh "config/deploy/push_update.sh #{args.nginx_service_name} #{args.webapp_service_name}"
    end
  end
end

At this point, you should be able to make a change to your app, run the deploy command, and see the changes deployed with no downtime. The key takeaway here is that it is important to build and have the new containers up and running before updating Nginx to point at them.
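
One rough edge in update_containers.sh is the fixed sleep 30 before reloading Nginx. A sturdier variation, sketched below under the assumption that your app answers HTTP on port 3000 at / (adjust the path if you have a dedicated health-check endpoint), polls each new container until it responds instead of waiting a fixed interval:

# poll each new container until it answers, instead of sleeping a fixed 30 seconds
for element in ${new_containers[*]}; do
  ip=$(docker inspect -f {{.NetworkSettings.Networks.$PROJECT_NETWORK.IPAddress}} $element)
  until curl --silent --fail --max-time 2 "http://$ip:3000/" > /dev/null; do
    echo "waiting for $element to become ready..."
    sleep 2
  done
done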

Next Steps

You now have an app that can easily be deployed to a running EC2 instance. It is good practice to tag your images, push them to Docker Hub, and then reference those images in your docker-compose.yml file instead of building from a local Dockerfile in production. This ensures that changes in any referenced images during your build will not make their way into your production site without your knowledge. In the next post we will add this behavior.