The common advice for speeding up your Docker development workflow focuses on reducing your Docker image size. On the surface, the logic makes a lot of sense: with smaller images, there are fewer bytes to pull from and push to the registry, which saves you a boatload of time. Smaller images look like the solution. What that advice does not take into account is a precious resource: the developer’s time.

Coming from a Good Place

To be clear, this advice comes from a good place. Smaller Docker images are great, and we should all strive to keep image sizes as small as we can. But we should also account for the ROI of doing so.

I’ve tried a few approaches to making Docker images smaller and have had mixed results. I kept at it for a while because I kept reading about the amazing results others were getting, like Docker image sizes going from 600MB down to 60MB, an enormous 10X improvement.

Usually, I would end up with a Docker image that did not quite work right, or did not work at all, and, to pour salt on the wound, the image was still rather large. One of the reasons is that I work mainly with scripting languages, so I cannot take advantage of compiling a binary artifact, keeping that artifact, and trashing everything else. Nope, the interpreter and its dependencies are required. Ultimately, I was often left disappointed and feeling it was not the most effective use of my time.

Let’s Get On with It

Okay, so on to what I’ve concluded is currently the best way to speed up your Docker development workflow. This technique wins by a Grand Canyon-sized margin. The solution is simple: use a remote Docker daemon.
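
Before going any further, here is roughly what using a remote daemon looks like from the client side. This is a minimal sketch using Docker’s SSH-based contexts; the hostname, user, and image names are placeholders, not a real setup.

    # Point the local Docker CLI at a daemon running on a remote host (over SSH).
    docker context create remote-builder --docker "host=ssh://ubuntu@build-host.example.com"
    docker context use remote-builder

    # From here on, builds and pushes run on the remote machine; only the
    # build context travels over your own connection.
    docker build -t myapp:latest .
    docker push registry.example.com/myapp:latest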

There are two main ways to improve the speed of a Docker pull or push:

  1. Reduce the Docker image size
  2. Increase the speed of your internet

Pushing a large image to a Docker registry over a slow internet connection takes an eternity. But pushing it from within an EC2 instance over AWS’s network is fast: even a 1GB image only takes a minute or two. Subsequent pushes are faster still, taking only seconds, since only the new layers get pushed.
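
As a rough illustration of why the later pushes are so cheap, here is what that flow can look like when the remote daemon is an EC2 instance pushing to ECR in the same region. The account ID, region, and repository name below are placeholders.

    # Authenticate Docker against ECR (AWS CLI v2 syntax).
    aws ecr get-login-password --region us-east-1 \
      | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

    # Tag and push; the first push uploads every layer.
    docker tag myapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
    docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest

    # On later pushes, unchanged layers are reported as "Layer already exists",
    # so only the layers you actually changed cross the wire.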

Instead of spending hours, and sometimes days, optimizing and squeezing every last byte out of the Docker image for each application you’re building, that investment is better spent setting up a remote Docker daemon once. It does take some time and money to run the remote daemon server, but you quickly make up for it.
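
The one-time setup itself is not much work. Here is one sketch of what it can look like with an Ubuntu EC2 instance as the remote host; the hostname and user are placeholders, and Docker’s convenience install script is just one of several ways to install the engine.

    # Install Docker on the remote host and let the default user run it.
    ssh ubuntu@build-host.example.com 'curl -fsSL https://get.docker.com | sh'
    ssh ubuntu@build-host.example.com 'sudo usermod -aG docker ubuntu'

    # Back on your laptop, point the CLI at that host, either per shell...
    export DOCKER_HOST=ssh://ubuntu@build-host.example.com
    # ...or via a named context as shown earlier.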

Using a remote Docker daemon alone has made my Docker development workflow dramatically more enjoyable.