As I’ve covered in Jack and the Elastic Beanstalk, Elastic Beanstalk is a great PaaS offering from AWS that allows developers to deploy and run their applications on EC2 instances. I’ve been tinkering with a few different ways to speed up the eb deploy command from my local machine and was able to speed it up somewhat, though honestly I was hoping for better results. I will detail the results to show what I learned.

Note, the project and all its files are available on GitHub at tongueroo/hi under the docker-cache branch.

Understanding How EB Handles Deployments

For EB I use Docker because it standardizes the deployment unit. There are a few ways to deploy your code to EB when using Docker. You can have either a Dockerfile or a Dockerrun.aws.json in the project. EB looks for these files to build a Docker image when a deployment happens. The script that actually handles the docker build is at /opt/elasticbeanstalk/hooks/appdeploy/pre/03build.sh.
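
If you are curious, you can inspect that hook yourself by sshing into one of the environment’s instances (the environment name here is just an example):

$ eb ssh hi-web-stag
$ cat /opt/elasticbeanstalk/hooks/appdeploy/pre/03build.sh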

So doing as little as possible in the Dockerfile will yield the fastest EB deployment. In fact, if you look at the 03build.sh script you can see that when only a Dockerrun.aws.json exists, EB will generate a barebones Dockerfile with a FROM and EXPOSE statement as defined in the Dockerrun.aws.json. This makes a lot of sense because then there are no dependencies to pull in and the deployment is less likely to break. All you effectively need is for the Docker image to exist and be pulled down successfully. So the ideal deployment is that a Docker image gets built, the Dockerrun.aws.json file gets updated with the image name, and it gets shipped to EB. I normally only make use of the Dockerrun.aws.json file to deploy.
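
For reference, a minimal single-container Dockerrun.aws.json looks something like this (a sketch; the image name and port match this example project, but treat the exact contents as an assumption rather than the file in the repo):

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "tongueroo/hi",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "3000"
    }
  ]
}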

While only using a Dockerrun.aws.json is ideal for production deployments, from the development perspective it is a pain. Having to build and push a Docker image each time you want to make a slight change, and then deploying it to an environment over and over, is slow. It is especially painful with a rather large Ruby image on a slow connection. The ideal approach is to build and push the Docker image on a build server rather than on my Mac OS X laptop, but when I’m developing I prefer to work on my own machine when possible.

Developers Prefer eb deploy Most of the Time

I remember a developer mentioning to me many times that he preferred the eb deploy command to handle deployment because it was simply faster most of the time for him. Basically, after the initial eb deploy, as long as the servers were not freshly launched instances, eb deploys would be very fast because the Docker cache layers would already exist. When you are trying to rapidly develop and are constantly deploying a change to staging instances over and over, using the eb deploy flow makes complete sense. It just does not make sense in a production environment, because you do not want instances building a lot of dependencies whenever AutoScaling kicks in.

Building the Docker image live on the instance felt really dirty to me. I would imagine all the dependencies being downloaded, installed and built whenever code was deployed to an EB environment. What happens if apt-get or RubyGems is down? It feels like there are just too many things that could go wrong, especially if the Dockerfile must install a long list of things. So I have normally stuck with pre-building the Docker image and deploying it as a unit with a Dockerrun.aws.json instead of using eb deploy.

Cache Docker Image Concept Compromise

I then saw the StackOverflow post How to speed up CI build times when using docker? and it reminded me of a concept that I have tried before. The idea is simple: you create a cache baseline Docker image that has the app dependencies and packages already installed and push that image to the Docker registry. You then use that Docker image as the starting baseline image, and eb deploy will build from that starting point. This way, eb deploy only has to deploy incremental code changes. As long as the incremental code changes do not require a ton of new dependencies to be installed, which is often the case, the deploy will be fast.
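
Done by hand, the idea looks roughly like this (a sketch; the image name is just an example):

$ docker build -t tongueroo/hi:base .   # a Dockerfile that bakes in the app dependencies
$ docker push tongueroo/hi:base

Then you point the app’s Dockerfile FROM statement at tongueroo/hi:base and use eb deploy as usual.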

It is a compromise that results in the best of both worlds. You get the speed of eb deploy whenever the Docker cache layers exist. You also get the reliability of having the packages cached in the baseline Docker image. The dependencies are baked in and pulled down with the initial Docker image. EB is no longer installing packages on the fly when AutoScaling occurs. It does not matter then if apt-get or RubyGems is down.

The big caveat with this approach is that you have to remember to update the cache Docker image from time to time, and this kind of manual task sucks for human beings to do. If you get lazy and forget to update the baseline image, the slowness and all the unreliable awfulness I mentioned creep back as the code strays further from the baseline.

Creating the Cache Docker Image Easily

So I set out to simplify the manual task of creating this baseline image and updating the starting point of the FROM statement. The structure that I ended up setting up was to create two Dockerfiles: a standard Dockerfile and a Dockerfile.base file.

Here’s what the two files initially look like. First the Dockerfile.base:

FROM ruby:2.3.3

RUN apt-get update && \
  apt-get install -y build-essential nodejs && \
  rm -rf /var/lib/apt/lists/* && apt-get clean && apt-get purge

WORKDIR /app
ADD Gemfile /app/Gemfile
ADD Gemfile.lock /app/Gemfile.lock
RUN bundle install --system

The important thing here is that I’m installing the RubyGems as part of the baseline Dockerfile.base. For the Dockerfile:

FROM tongueroo/hi

WORKDIR /app
ADD Gemfile /app/Gemfile
ADD Gemfile.lock /app/Gemfile.lock
RUN bundle install --system

ADD . /app
RUN bundle install --system

RUN chmod a+x bin/*

EXPOSE 3000

CMD ["bin/web"]

The FROM statement for the Dockerfile initially has tongueroo/hi, but that gets automatically changed by the ufo tool. The ufo tool is covered thoroughly in: Ufo — Easily Build Docker Images and Ship Containers to AWS ECS. The ufo tool has a handy command, ufo docker base, to build the base image and automatically update the current Dockerfile FROM statement. You can see that the FROM statement is updated in the example below.

$ ufo docker base
Pushed tongueroo/hi:base-2016-12-01T14-49-53-ca5cbd3 docker image. Took 2s.
The Dockerfile FROM statement has been updated with the latest base image:
  tongueroo/hi:base-2016-12-01T14-49-53-ca5cbd3
$ grep FROM Dockerfile
FROM tongueroo/hi:base-2016-12-01T14-49-53-ca5cbd3
$

All you have to do is initially set “tongueroo/hi” in the FROM statement, and then ufo docker base will add a timestamped tag to the end of “tongueroo/hi” as part of this process (a rough shell sketch follows the list):

  1. Build and push Docker image using Dockerfile.base and name it with a timestamp cached name: tongueroo/hi:timestamp.

  2. Update the Dockerfile FROM statement with the timestamp cached Docker image name that just got pushed.
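
Roughly, those two steps amount to something like this (only an approximation of what ufo does, not its actual implementation; the tag format matches the output above):

$ TAG=tongueroo/hi:base-$(date +%Y-%m-%dT%H-%M-%S)-$(git rev-parse --short HEAD)
$ docker build -t $TAG -f Dockerfile.base .
$ docker push $TAG
$ sed -i.bak "s|^FROM .*|FROM $TAG|" Dockerfile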

Using ufo docker base you can easily update the cache image with a single command and commit the new Dockerfile. It would be good to put this in a scheduled job somewhere so the cached image always gets updated automatically.
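
For example, a nightly cron entry along these lines would keep the cache image fresh (a sketch only; the repository path and schedule are assumptions):

# Rebuild the cached base image nightly and commit the updated Dockerfile
0 3 * * * cd /path/to/hi && ufo docker base && git commit -am "Update cached base image" && git push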

The project and all its files are available on GitHub at tongueroo/hi under the docker-cache branch.

Speed Tests

Let’s do some rudimentary speed tests to see how much of a difference this cache strategy makes. For each benchmark case, I’m going to start with a fresh instance without any of the Docker cache layers. Then I’m going to run eb deploy twice for each case.

The first case is when I’m using only one Dockerfile and building everything from the ruby:2.3.3 image.
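
The single Dockerfile for this case is essentially the two files from earlier merged into one; a sketch of it:

FROM ruby:2.3.3

RUN apt-get update && \
  apt-get install -y build-essential nodejs && \
  rm -rf /var/lib/apt/lists/* && apt-get clean && apt-get purge

WORKDIR /app
ADD Gemfile /app/Gemfile
ADD Gemfile.lock /app/Gemfile.lock
RUN bundle install --system

ADD . /app

RUN chmod a+x bin/*

EXPOSE 3000

CMD ["bin/web"]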

$ time eb deploy hi-web-stag # 1st deploy: no Docker layers
real  4m58.785s
$ time eb deploy hi-web-stag # 2nd deploy: Docker cache layers exist
real  0m30.050s
$

The second case is when I’m using a Dockerfile and a Dockerfile.base.

$ time eb deploy hi-web-stag # 1st deploy: no Docker layers
real  3m34.994s
$ time eb deploy hi-web-stag # 2nd deploy: Docker cache layers exist
real  0m30.287s
$

You can see that for eb deploy when there are no Docker cache layers, there is about a 1 minute and 24 second improvement in deployment speed. When the Docker cache layers exist, on the second deploy, there is no real difference in the deploy times between the two methods. This makes complete sense.

For a simple project like the tongueroo/hi example there is a gain of about a minute and a half. For real projects it makes a larger difference. I copied a Gemfile from one of the real applications I’m working on and tried the same test. Without a cached Docker image:

$ time eb deploy hi-web-stag
real  7m5.983s
$ time eb deploy hi-web-stag
real  0m36.247s
$

With a cached Docker image:

$ time eb deploy hi-web-stag
real  4m47.825s
$ time eb deploy hi-web-stag
real  0m30.378s
$

So the difference is 2 minutes and 18 seconds in a case with a real project.

Summary

Any eb deploy after the initial deploy is very fast, regardless of the strategy we take. This is because after the first deploy, the Docker layers already exist on the EC2 instance, so when EB builds the image the second time around it does not have to rebuild those layers. This speedy eb deploy of about 30 seconds is a nice benefit.

Using this extra baseline Docker image as a cache is one strategy to speed up the eb deploy command. However, the results on fresh instances were not as great as I had hoped, though it still shaves a few minutes off.

One side note is that this strategy will also help speed up CI build times if you are on non-dedicated hardware and using one of the cloud-based CI services like CircleCI. It speeds things up because, once again, dependencies are not being installed every time. If you are on dedicated hardware, this strategy adds less value because the server will already have the Docker cache layers. Running a dedicated instance to take advantage of Docker layers will always be the fastest approach, and whether or not it is worth managing that server is up to you. Another technique, which I’ll cover in another post, is setting up a remote Docker daemon so that the build and push happen on an EC2 instance where the network is blazing fast.