Best Way to Speed Up Docker Development Workflow
The common advice for speeding up your Docker development workflow focuses on reducing your Docker image size. On the surface, the logic makes a lot of sense: with smaller images, there are fewer bytes to pull from and push to the registry, saving you a boatload of time. Smaller images = solution. What the advice does not take into account is a precious resource: the developer's time.
Coming from a Good Place
To be clear, this advice comes from a good place. Smaller Docker images are great, and we should all strive to keep image sizes as small as we can. But we should also account for the ROI of doing so.
I’ve tried a few approaches to making Docker images smaller and have had mixed results. I kept at it for a while because I would read about the amazing results others were getting. I would hear of Docker image sizes going down from 600MB to 60MB, an enormous 10X improvement. The methods I’ve used:
- Writing very long one-line Docker instructions
- Squashing Docker layers
- Using Alpine Linux
Usually, I would end up with a Docker image that would not quite work right, or not work at all, and, to pour salt on my wounds, the image size was still rather large. One reason is that I work mainly with scripting languages, so I cannot take advantage of compiling a binary artifact, keeping that artifact, and trashing everything else. Nope, the interpreter and its dependencies are required. Ultimately, I was often left feeling disappointed, and it was not the most effective use of my time.
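For concreteness, here is a minimal sketch of what the techniques in the list above look like in practice. The file name, Alpine version, and package are illustrative placeholders, not from any particular project:

```shell
# Write out a small Dockerfile demonstrating the shrinking techniques:
# an Alpine base plus one long chained RUN instruction.
cat > Dockerfile.small <<'EOF'
# Alpine Linux keeps the starting point tiny (~5MB)
FROM alpine:3.19
# One chained RUN instruction = one layer; do the cleanup in the
# same instruction so the junk never lands in an earlier layer
RUN apk add --no-cache python3 \
    && rm -rf /var/cache/apk/*
EOF

# Layer squashing collapses things further (requires the daemon's
# experimental mode):
#   docker build --squash -t myapp:small -f Dockerfile.small .
```

Even with all three in play, an interpreted-language image still has to carry the interpreter and its dependencies, which is why the savings often disappoint.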
Let’s Get On with It
Okay, so on to what I’ve currently concluded is the best way to speed up your Docker development workflow. This technique wins by a Grand-Canyon-sized margin. The solution is simple: use a remote Docker daemon.
There are two main ways to improve the speed of a Docker pull or push:
- Reduce the Docker image size
- Increase the speed of your internet connection
Pushing a large image from a slow internet connection to a Docker registry is going to take an eternity. But pushing it from within an EC2 instance over AWS’ network is fast: even a 1GB image only takes 1-2 minutes. Subsequent pushes are faster still and take only seconds, since only the changed layers get pushed.
Instead of spending hours, and sometimes days, optimizing and squeezing every last byte out of the Docker image for each application you’re building, the investment is better spent setting up a remote Docker daemon once. It does take some time and costs money to run a remote daemon server, but it quickly pays for itself.
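As a sketch of the setup, assuming the Docker CLI is installed locally and you have SSH access to the remote instance (the user and address below are placeholders):

```shell
# Point the local Docker CLI at a daemon on a remote EC2 instance
# over SSH. The user/address is a placeholder; substitute your own.
export DOCKER_HOST="ssh://ec2-user@203.0.113.10"

# From here on, builds and pushes run on the remote machine, so
# image layers travel over its fast pipe instead of your local one:
#   docker build -t myapp .
#   docker push registry.example.com/myapp:latest
```

Docker also supports saving this as a named context (`docker context create`), so you can switch between your local and remote daemons without juggling environment variables.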
Using a remote Docker daemon alone has made my Docker development workflow dramatically more enjoyable.
Thanks for reading this far. If you found this article useful, I'd really appreciate it if you shared it so others can find it too! Thanks 😁 Also, follow me on Twitter.
Got questions? Check out BoltOps.
You might also like
Kubes: Kubernetes Deployment Tool
Kubes is a Kubernetes deployment tool. It builds the Docker image, creates the Kubernetes YAML, and runs kubectl apply. It automates the deployment process and saves you precious finger-typing energy.
Jets: The Ruby Serverless Framework
Ruby on Jets allows you to create and deploy serverless services with ease, and to seamlessly glue AWS services together with the most beautiful dynamic language: Ruby. It includes everything you need to build an API and deploy it to AWS Lambda. Jets leverages the power of Ruby to make serverless joyful for everyone.
Lono: The CloudFormation Framework
Building infrastructure-as-code is challenging. Lono makes it much easier and fun. It includes everything you need to manage and deploy infrastructure-as-code.