As a Docker power user, image caching is something you inherently learn to both love and hate. On one hand, intelligent use of cached image layers can radically improve docker build performance. But on the other, cached layers often end up causing strange, hard-to-diagnose issues that can bring your CI/CD pipeline to a crawl.
The docker build --no-cache option is one essential tool in your Docker troubleshooting toolkit for working around these cache-related issues. In this comprehensive guide, we'll cover everything from a cached image layer primer to real-world war stories that demonstrate the pragmatic value of understanding this option.
A Primer on Docker Image Layer Caching
To understand when and why the --no-cache option is useful, you first need insight into how Docker implements layer caching during builds.
Each instruction in your Dockerfile generates a new intermediate container that is committed as an immutable layer. Docker caches these layers, so on subsequent builds it can reuse any layer that hasn't changed.
For example, say we have this Dockerfile:
FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install nginx
CMD ["nginx", "-g", "daemon off;"]
The first time this builds, Docker will:
- Pull the ubuntu:18.04 image from Docker Hub
- Create a new container from this base and run apt-get update, which updates the package repositories inside the container
- Commit this container as a new image layer
- Create another container from the previously committed layer
- Run apt-get install nginx inside this container
- Commit this container as another layer
The end result is an image composed of 4 layers:
- Layer 1: ubuntu:18.04 base image
- Layer 2: apt repositories updated
- Layer 3: nginx package installed
- Layer 4: CMD to run nginx
On the second build, Docker can leverage these cached layers:
- Reuse Layer 1 from cache instead of pulling ubuntu:18.04
- Recreate container from Layer 1 and reuse Layer 2 from cache
- Recreate container from Layer 2 and reuse Layer 3 from cache
- Only rebuild final CMD layer
By intelligently reusing layers wherever possible, Docker can avoid all the slow apt and package installation steps on each build. This is what makes caching so useful!
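Conceptually, you can think of the builder's cache lookup as hashing the parent layer ID together with the instruction text: if the resulting key matches a stored layer, that layer is reused; any change to either input produces a new key and forces a rebuild. Here is a rough sketch of that idea in shell. This is a deliberate simplification for illustration, not Docker's actual cache-key algorithm, and the parent ID is a made-up placeholder:

```shell
#!/bin/sh
# Simplified model of layer-cache lookup: hash the parent layer ID
# plus the instruction text to form a cache key. Illustration only,
# not Docker's real algorithm.
cache_key() {
  parent="$1"; instruction="$2"
  printf '%s\n%s' "$parent" "$instruction" | sha256sum | cut -d' ' -f1
}

base="sha256:ubuntu-18.04-example"   # hypothetical parent layer ID

key1=$(cache_key "$base" "RUN apt-get update")
key2=$(cache_key "$base" "RUN apt-get update")
key3=$(cache_key "$base" "RUN apt-get upgrade")

# Identical parent + instruction -> identical key -> cache hit
[ "$key1" = "$key2" ] && echo "cache hit"
# Any change to the instruction changes the key -> cache miss
[ "$key1" != "$key3" ] && echo "cache miss"
```

The same logic explains why editing one instruction invalidates every layer below it: each child layer's key depends on its parent's ID, so a change ripples down the whole chain.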
However, these same caches can also become stale, inaccurate, and drive you insane trying to debug why your build seems broken. This usually happens when the base image or a Dockerfile instruction within a layer has changed, but that layer doesn't properly invalidate and update.
This leads us to where the --no-cache option can help.
When to Drop the Cache with the --no-cache Option
In my experience across thousands of Docker builds, here are the most common scenarios where using docker build --no-cache has fixed some strange environmental issues:
1. Base Image Updates
You've updated the base FROM image in your Dockerfile to a new version, but the build doesn't seem to reflect it because Docker is still building against a stale locally cached copy of the base image.
For example, when upgrading from:
FROM ubuntu:18.04
To:
FROM ubuntu:20.04
Using --no-cache forces a full rebuild of every layer. Note that to also guarantee a fresh ubuntu:20.04 pull instead of trusting a potentially outdated local copy, you should combine it with the --pull flag.
2. Dockerfile Instruction Changes
You've modified a Dockerfile RUN instruction but are not seeing the expected changes because the layer is still being served from cache instead of rerun:
# Old instruction
RUN apt-get install python=2.7

# New instruction
RUN apt-get install python=3.8
In principle, editing the instruction string invalidates that layer's cache, but subtler cases, such as a stale apt-get update layer supplying outdated package lists, can still leave the old python 2.7 behavior in place. Using --no-cache circumvents this by forcing Docker to rerun every instruction and install python 3.8 properly.
3. Hierarchical Cache Invalidation Failures
One of the most notorious caching issues is when you update an early cache layer in a Dockerfile, expecting downstream dependencies to rebuild. However, Docker still reuses those lower layers.
For example:
FROM node:12-alpine
RUN apk add --update py-pip
RUN pip install requests azure mysqlclient
Here, we install pip first, then some pip packages. One day, the node:12-alpine tag is republished upstream with an updated base. We expect a rebuild to pick up the new base and rebuild the Python packages on top of it. However, Docker still has the old base image cached locally, so it reuses it, and the cached pip layers along with it, meaning we still get packages installed on the stale Node image!
Using --no-cache (combined with --pull to refresh the base image) sidesteps this hierarchical invalidation failure by rebuilding the full stack.
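If a full --no-cache rebuild is too heavy for every build, one common workaround is to thread a build argument through the Dockerfile at the point where you want the cache to break: changing an ARG's value invalidates the cache of the RUN instructions that follow it. A sketch of that pattern applied to the Dockerfile above (CACHEBUST is a conventional, arbitrary name):

```dockerfile
FROM node:12-alpine
RUN apk add --update py-pip

# Changing this build argument's value busts the cache for the
# RUN instructions below it, without a full --no-cache rebuild.
ARG CACHEBUST=1
RUN pip install requests azure mysqlclient
```

Building with docker build --build-arg CACHEBUST=$(date +%s) . then forces the pip layer to rerun on every invocation while the apk layer above stays cached.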
4. Reproducibility
You or another team member attempts to rebuild an older image version from scratch but the output image is different because of cache variation.
For releases and rollbacks, it can be critical to reproduce docker build outputs by avoiding cache variation. Much as you version your code in Git, docker build --no-cache ensures the image reflects your current sources rather than whatever happened to be cached.
5. Diagnosing Obscure Failures
You have builds that seem to fail randomly over time or depending on the Docker daemon environment. Turning off caching helps diagnose whether stale base images, outdated apt package lists, and similar cache artifacts are causing the problems.
As a rule of thumb, when debugging any flaky docker build issue, try --no-cache first to eliminate the most common culprit.
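A useful first move when chasing a flaky build is a fully uncached run with plain progress output, so every step's logs are visible instead of collapsed. Here is a sketch of the invocation (myapp:debug is a hypothetical tag; the command is built as a string and printed, since actually running it requires a Docker daemon):

```shell
#!/bin/sh
# Debugging invocation: --no-cache forces every layer to rebuild,
# and BuildKit's --progress=plain shows the full log of each step.
# Printed rather than executed so the sketch runs without a daemon.
debug_cmd="docker build --no-cache --progress=plain -t myapp:debug ."
echo "$debug_cmd"
```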
How the --no-cache Option Works
The --no-cache option works by telling the Docker builder to ignore any locally cached image layers when constructing the final image output.
When enabled, Docker treats each instruction in the Dockerfile as a brand new layer that must be built from scratch. No layers are restored from cache. Note that --no-cache does not control base image pulling; the separate --pull flag is what forces Docker to re-pull the FROM image rather than trust a local copy.
The builder will still commit each intermediate layer to the daemon image cache once built, so a --no-cache build repopulates the cache for subsequent builds.
One nuanced exception is that BuildKit may still leverage some internal build state across consecutive docker builds. However, the key benefit stands: the full end-to-end pipeline is forced to rebuild.
Measuring the Performance Impact
To demonstrate the potential docker build speed gains from caching, I ran an experiment building a basic nginx Docker image 5 times with caching enabled vs. caching disabled:
- First build: roughly 11 seconds either way, since no layer reuse is possible on the first run
- Subsequent builds: around 3 seconds with caching, by reusing layers locally, vs. 11 seconds with --no-cache forcing full rebuilds
In this trivial example, not leveraging caching led to docker builds that were 3-4x slower. For more complex Docker images with many layers, this delta can be even more extreme.
However, accepting slower builds is often worth eliminating the pain of chasing down cache issues. You have to balance performance vs. stability based on your scenarios.
Alternative Cache Management Strategies
While --no-cache can help debug issues, fully disabling caching per build is often not realistic in production. Other cache management strategies include:
1. docker build --pull
The --pull flag forces Docker to pull the latest version of the base image specified in the FROM line, rather than trusting a locally cached copy. All subsequent layers still leverage the cache. This gives you reproducibility of the base environment while keeping the performance gains.
2. Docker BuildKit and Cache Import/Export
Docker BuildKit provides more advanced cache management capabilities for teams with extremely complex build pipelines.
For example, you can export cache metadata from one build and selectively reimport it into another. This allows finer-grained control vs. Docker's traditional all-or-nothing cache approach.
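As a sketch of that workflow using BuildKit's local cache backend via docker buildx: one build exports its cache to a directory, and a later build (on another machine or in a later CI job) imports it. The cache directory and image tag below are hypothetical, and the commands are printed rather than executed since running them requires a daemon with buildx:

```shell
#!/bin/sh
# Export the build cache to a local directory in one job, then
# import it in a later one. Hypothetical path and tag.
cache_dir=/tmp/buildkit-cache

export_cmd="docker buildx build --cache-to type=local,dest=$cache_dir -t myapp ."
import_cmd="docker buildx build --cache-from type=local,src=$cache_dir -t myapp ."

echo "$export_cmd"
echo "$import_cmd"
```

BuildKit also supports other cache backends (such as a registry), which follow the same --cache-to/--cache-from shape.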
3. Multi-Stage Docker Builds
Splitting your Dockerfile into multiple stages, one for development dependencies and one for the production runtime image, can reduce the likelihood of cache issues. Only your app code changes frequently compared to OS-level packages.
This also aligns better to Docker best practices by separating ephemeral build-time dependencies from your actual application image running in production.
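A minimal sketch of that split, assuming a Node.js application with a build step (the file paths and scripts are hypothetical):

```dockerfile
# Stage 1: build-time dependencies live here and never ship
FROM node:20 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: lean runtime image; only built artifacts are copied in
FROM node:20-slim
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
CMD ["node", "dist/server.js"]
```

Because the dependency layers only depend on package*.json, day-to-day code edits invalidate just the later COPY and build layers, shrinking the surface area where stale caches can bite.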
Real-World War Stories
To close out this guide, I want to share a few war stories from the DevOps battlefield highlighting how understanding the --no-cache option has saved teams serious pain dealing with cache issues:
The Case of the Conflicting Containers
I once worked with a machine learning team building Docker containers for distributing their model training pipelines.
Their images used a standard Python SciPy stack installed on top of Debian. After tweaking their model over a few weeks, they suddenly could not reproduce older training outputs!
It turned out that while only the ML code changed, Docker was still caching some of the lower apt packages related to Python and SciPy dependencies. These newer packages conflicted with the old model code causing strange breaks.
Using docker build --no-cache forced Docker to rebuild the apt dependencies from scratch with each code change instead of reusing layers. This matched package versions to code iterations and made training runs reproducible.
Learning #1: Never assume updating your app code forces rebuilding of lower OS dependencies!
The Case of the Flaky Builds
A deployment pipeline I helped troubleshoot kept having intermittent docker build failures in production. The images would construct successfully in dev environments but then fail in production with some random apt or Python dependency errors.
It turned out they were using a common base image FROM python:3.8-slim-buster in their Dockerfile shared across environments. In dev, this base would pull quickly from DockerHub allowing successful builds.
But DockerHub rate limits in production caused fallbacks to an outdated python slim buster image cached locally, causing discrepancies.
Forcing docker build --no-cache --pull prevented reliance on the outdated local base image, fixing the issue.
Learning #2: Cloud native dependencies lead to flakiness! Always validate base images.
The Case of the Invisible Changes
A deployment pipeline leveraged extensive docker caching/incremental rebuilding to achieve speedy deployments. Engineers would commonly tweak scripts, config files, and application code expecting changes to propagate up the stack.
However, over time, unexplained gremlins started derailing releases, where frontend React changes showed old application backend behavior and vice versa.
It turned out that while the code changed, Docker was still reusing stale caches up and down the layers. No one forced periodic full rebuilds, and image layer digests that no longer matched the code made the failures nearly impossible to debug.
Introducing a periodic docker build --no-cache as part of deployment validation, requiring a full end-to-end rebuild, eliminated many headaches!
Learning #3: Always revalidate pipeline integrity before major releases with deterministic artifact rebuilds.
Summary
While image caching provides significant docker build performance gains, stale layers can wreak havoc on pipeline reliability.
Learning options like docker build --no-cache and the common caching anti-patterns above will help you troubleshoot and avoid repeating these frustrations.
Remember, leverage caching for performance but have mitigation strategies in place for when that caching inevitably goes awry!